diff mbox series

[v8,1/8] xarray: add xas_try_split() to split a multi-index entry

Message ID 20250218235012.1542225-2-ziy@nvidia.com
State New
Headers show
Series Buddy allocator like (or non-uniform) folio split | expand

Commit Message

Zi Yan Feb. 18, 2025, 11:50 p.m. UTC
A preparation patch for non-uniform folio split, which always splits a
folio in half iteratively, and for minimal xarray entry split.

Currently, xas_split_alloc() and xas_split() always split all slots of a
multi-index entry.  They require as many xa_nodes as there are to-be-split
slots.  For example, to split an order-9 entry, which takes 2^(9-6)=8
slots, assuming XA_CHUNK_SHIFT is 6 (!CONFIG_BASE_SMALL), 8 xa_nodes are
needed.  Instead, xas_try_split() is intended to be used iteratively to
split the order-9 entry into 2 order-8 entries, then split one order-8
entry, based on the given index, into 2 order-7 entries, ..., and finally
split one order-1 entry into 2 order-0 entries.  When splitting the
order-6 entry, where a new xa_node is needed, xas_try_split() will try to
allocate one if possible.  As a result, xas_try_split() only needs one
xa_node instead of 8.

When a new xa_node is needed during the split, xas_try_split() will try to
allocate one, but no more than one.  -ENOMEM will be returned if a node
cannot be allocated.  -EINVAL will be returned if a sibling entry would
need to be split or a cascade split would happen, since either requires
two or more new nodes, which xas_try_split() does not support.

xas_split_alloc() and xas_split() split an order-9 entry to order-0:

         ---------------------------------
         |   |   |   |   |   |   |   |   |
         | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
         |   |   |   |   |   |   |   |   |
         ---------------------------------
           |   |                   |   |
     -------   ---               ---   -------
     |           |     ...       |           |
     V           V               V           V
----------- -----------     ----------- -----------
| xa_node | | xa_node | ... | xa_node | | xa_node |
----------- -----------     ----------- -----------

xas_try_split() splits an order-9 entry to order-0:
   ---------------------------------
   |   |   |   |   |   |   |   |   |
   | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
   |   |   |   |   |   |   |   |   |
   ---------------------------------
     |
     |
     V
-----------
| xa_node |
-----------
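
As an illustrative sketch (not part of this patch; the helper name and
calling context are made up), the iterative use described above could look
roughly like this, splitting the entry at the given index down to order 0
one order at a time, so that each step needs at most one new xa_node:

static int example_split_to_order_0(struct xarray *xa, unsigned long index,
				    unsigned int order, void *entry)
{
	while (order > 0) {
		/* Target the next lower order at this index. */
		XA_STATE_ORDER(xas, xa, index, order - 1);

		/* GFP_NOWAIT, since the split runs under the xa_lock. */
		xas_lock(&xas);
		xas_try_split(&xas, entry, order, GFP_NOWAIT);
		xas_unlock(&xas);
		if (xas_error(&xas))
			return xas_error(&xas);
		order--;
	}
	return 0;
}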

Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yang Shi <yang@os.amperecomputing.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 Documentation/core-api/xarray.rst |  14 ++-
 include/linux/xarray.h            |   7 ++
 lib/test_xarray.c                 |  47 ++++++++++
 lib/xarray.c                      | 138 ++++++++++++++++++++++++++----
 tools/testing/radix-tree/Makefile |   1 +
 5 files changed, 190 insertions(+), 17 deletions(-)

Comments

Zi Yan Feb. 26, 2025, 3 p.m. UTC | #1
On 26 Feb 2025, at 2:11, Baolin Wang wrote:

> Hi Zi,
>
> On 2025/2/19 07:50, Zi Yan wrote:
>> A preparation patch for non-uniform folio split, which always split a
>> folio into half iteratively, and minimal xarray entry split.
>>
>> Currently, xas_split_alloc() and xas_split() always split all slots from a
>> multi-index entry.  They cost the same number of xa_node as the
>> to-be-split slots.  For example, to split an order-9 entry, which takes
>> 2^(9-6)=8 slots, assuming XA_CHUNK_SHIFT is 6 (!CONFIG_BASE_SMALL), 8
>> xa_node are needed.  Instead xas_try_split() is intended to be used
>> iteratively to split the order-9 entry into 2 order-8 entries, then split
>> one order-8 entry, based on the given index, to 2 order-7 entries, ...,
>> and split one order-1 entry to 2 order-0 entries.  When splitting the
>> order-6 entry and a new xa_node is needed, xas_try_split() will try to
>> allocate one if possible.  As a result, xas_try_split() would only need
>> one xa_node instead of 8.
>>
>> When a new xa_node is needed during the split, xas_try_split() can try to
>> allocate one but no more.  -ENOMEM will be return if a node cannot be
>> allocated.  -EINVAL will be return if a sibling node is split or cascade
>> split happens, where two or more new nodes are needed, and these are not
>> supported by xas_try_split().
>>
>> xas_split_alloc() and xas_split() split an order-9 to order-0:
>>
>>           ---------------------------------
>>           |   |   |   |   |   |   |   |   |
>>           | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
>>           |   |   |   |   |   |   |   |   |
>>           ---------------------------------
>>             |   |                   |   |
>>       -------   ---               ---   -------
>>       |           |     ...       |           |
>>       V           V               V           V
>> ----------- -----------     ----------- -----------
>> | xa_node | | xa_node | ... | xa_node | | xa_node |
>> ----------- -----------     ----------- -----------
>>
>> xas_try_split() splits an order-9 to order-0:
>>     ---------------------------------
>>     |   |   |   |   |   |   |   |   |
>>     | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
>>     |   |   |   |   |   |   |   |   |
>>     ---------------------------------
>>       |
>>       |
>>       V
>> -----------
>> | xa_node |
>> -----------
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: John Hubbard <jhubbard@nvidia.com>
>> Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Cc: Kirill A. Shuemov <kirill.shutemov@linux.intel.com>
>> Cc: Miaohe Lin <linmiaohe@huawei.com>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Yang Shi <yang@os.amperecomputing.com>
>> Cc: Yu Zhao <yuzhao@google.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> ---
>>   Documentation/core-api/xarray.rst |  14 ++-
>>   include/linux/xarray.h            |   7 ++
>>   lib/test_xarray.c                 |  47 ++++++++++
>>   lib/xarray.c                      | 138 ++++++++++++++++++++++++++----
>>   tools/testing/radix-tree/Makefile |   1 +
>>   5 files changed, 190 insertions(+), 17 deletions(-)
>>
>> diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
>> index f6a3eef4fe7f..c6c91cbd0c3c 100644
>> --- a/Documentation/core-api/xarray.rst
>> +++ b/Documentation/core-api/xarray.rst
>> @@ -489,7 +489,19 @@ Storing ``NULL`` into any index of a multi-index entry will set the
>>   entry at every index to ``NULL`` and dissolve the tie.  A multi-index
>>   entry can be split into entries occupying smaller ranges by calling
>>   xas_split_alloc() without the xa_lock held, followed by taking the lock
>> -and calling xas_split().
>> +and calling xas_split() or calling xas_try_split() with xa_lock. The
>> +difference between xas_split_alloc()+xas_split() and xas_try_alloc() is
>> +that xas_split_alloc() + xas_split() split the entry from the original
>> +order to the new order in one shot uniformly, whereas xas_try_split()
>> +iteratively splits the entry containing the index non-uniformly.
>> +For example, to split an order-9 entry, which takes 2^(9-6)=8 slots,
>> +assuming ``XA_CHUNK_SHIFT`` is 6, xas_split_alloc() + xas_split() need
>> +8 xa_node. xas_try_split() splits the order-9 entry into
>> +2 order-8 entries, then split one order-8 entry, based on the given index,
>> +to 2 order-7 entries, ..., and split one order-1 entry to 2 order-0 entries.
>> +When splitting the order-6 entry and a new xa_node is needed, xas_try_split()
>> +will try to allocate one if possible. As a result, xas_try_split() would only
>> +need 1 xa_node instead of 8.
>>    Functions and structures
>>   ========================
>> diff --git a/include/linux/xarray.h b/include/linux/xarray.h
>> index 0b618ec04115..9eb8c7425090 100644
>> --- a/include/linux/xarray.h
>> +++ b/include/linux/xarray.h
>> @@ -1555,6 +1555,8 @@ int xa_get_order(struct xarray *, unsigned long index);
>>   int xas_get_order(struct xa_state *xas);
>>   void xas_split(struct xa_state *, void *entry, unsigned int order);
>>   void xas_split_alloc(struct xa_state *, void *entry, unsigned int order, gfp_t);
>> +void xas_try_split(struct xa_state *xas, void *entry, unsigned int order,
>> +		gfp_t gfp);
>>   #else
>>   static inline int xa_get_order(struct xarray *xa, unsigned long index)
>>   {
>> @@ -1576,6 +1578,11 @@ static inline void xas_split_alloc(struct xa_state *xas, void *entry,
>>   		unsigned int order, gfp_t gfp)
>>   {
>>   }
>> +
>> +static inline void xas_try_split(struct xa_state *xas, void *entry,
>> +		unsigned int order, gfp_t gfp)
>> +{
>> +}
>>   #endif
>>    /**
>
> [snip]
>
>> diff --git a/lib/xarray.c b/lib/xarray.c
>> index 116e9286c64e..b9a63d7fbd58 100644
>> --- a/lib/xarray.c
>> +++ b/lib/xarray.c
>> @@ -1007,6 +1007,31 @@ static void node_set_marks(struct xa_node *node, unsigned int offset,
>>   	}
>>   }
>>  +static struct xa_node *__xas_alloc_node_for_split(struct xa_state *xas,
>> +		void *entry, gfp_t gfp)
>> +{
>> +	unsigned int i;
>> +	void *sibling = NULL;
>> +	struct xa_node *node;
>> +	unsigned int mask = xas->xa_sibs;
>> +
>> +	node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp);
>> +	if (!node)
>> +		return NULL;
>> +	node->array = xas->xa;
>> +	for (i = 0; i < XA_CHUNK_SIZE; i++) {
>> +		if ((i & mask) == 0) {
>> +			RCU_INIT_POINTER(node->slots[i], entry);
>> +			sibling = xa_mk_sibling(i);
>> +		} else {
>> +			RCU_INIT_POINTER(node->slots[i], sibling);
>> +		}
>> +	}
>> +	RCU_INIT_POINTER(node->parent, xas->xa_alloc);
>> +
>> +	return node;
>> +}
>> +
>>   /**
>>    * xas_split_alloc() - Allocate memory for splitting an entry.
>>    * @xas: XArray operation state.
>> @@ -1025,7 +1050,6 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
>>   		gfp_t gfp)
>>   {
>>   	unsigned int sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
>> -	unsigned int mask = xas->xa_sibs;
>>    	/* XXX: no support for splitting really large entries yet */
>>   	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT <= order))
>> @@ -1034,23 +1058,9 @@ void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
>>   		return;
>>    	do {
>> -		unsigned int i;
>> -		void *sibling = NULL;
>> -		struct xa_node *node;
>> -
>> -		node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp);
>> +		struct xa_node *node = __xas_alloc_node_for_split(xas, entry, gfp);
>>   		if (!node)
>>   			goto nomem;
>> -		node->array = xas->xa;
>> -		for (i = 0; i < XA_CHUNK_SIZE; i++) {
>> -			if ((i & mask) == 0) {
>> -				RCU_INIT_POINTER(node->slots[i], entry);
>> -				sibling = xa_mk_sibling(i);
>> -			} else {
>> -				RCU_INIT_POINTER(node->slots[i], sibling);
>> -			}
>> -		}
>> -		RCU_INIT_POINTER(node->parent, xas->xa_alloc);
>>   		xas->xa_alloc = node;
>>   	} while (sibs-- > 0);
>>  @@ -1122,6 +1132,102 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
>>   	xas_update(xas, node);
>>   }
>>   EXPORT_SYMBOL_GPL(xas_split);
>> +
>> +/**
>> + * xas_try_split() - Try to split a multi-index entry.
>> + * @xas: XArray operation state.
>> + * @entry: New entry to store in the array.
>> + * @order: Current entry order.
>> + * @gfp: Memory allocation flags.
>> + *
>> + * The size of the new entries is set in @xas.  The value in @entry is
>> + * copied to all the replacement entries. If and only if one xa_node needs to
>> + * be allocated, the function will use @gfp to get one. If more xa_node are
>> + * needed, the function gives EINVAL error.
>> + *
>> + * Context: Any context.  The caller should hold the xa_lock.
>> + */
>> +void xas_try_split(struct xa_state *xas, void *entry, unsigned int order,
>> +		gfp_t gfp)
>
> The xas_try_split() may sleep if ‘gfp’ flags permit while holding the xa_lock, which can cause issues. So can we add a check for the ‘gfp’ or only use GFP_NOWAIT?

You mean only allow gfp to be GFP_NOWAIT or GFP_ATOMIC?

Best Regards,
Yan, Zi
Zi Yan Feb. 26, 2025, 8:58 p.m. UTC | #2
On 26 Feb 2025, at 10:07, Baolin Wang wrote:

> On 2025/2/26 23:00, Zi Yan wrote:
>> On 26 Feb 2025, at 2:11, Baolin Wang wrote:
>>
>>> Hi Zi,
>>>
>>> On 2025/2/19 07:50, Zi Yan wrote:
>>>> [snip]
>>> The xas_try_split() may sleep if ‘gfp’ flags permit while holding the xa_lock, which can cause issues. So can we add a check for the ‘gfp’ or only use GFP_NOWAIT?
>>
>> You mean only allow gfp to be GFP_NOWAIT or GFP_ATOMIC?
>
> Yes.

After discussing this with Matthew, I think it is better to use GFP_NOWAIT
in xas_try_split(), and the user can use xas_nomem() if xas_try_split()
fails to allocate an xa_node.  So I will remove gfp from the parameters.
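
Roughly, the calling pattern would then be something like the sketch below
(illustrative only, still written against the v8 signature where gfp is a
parameter; the helper is hypothetical and assumes the xas_nomem()
initialization issue mentioned below is fixed first):

static int example_try_split(struct xarray *xa, unsigned long index,
			     unsigned int old_order, unsigned int new_order,
			     void *entry)
{
	XA_STATE_ORDER(xas, xa, index, new_order);

	do {
		xas_lock(&xas);
		/* Non-sleeping attempt while holding the xa_lock... */
		xas_try_split(&xas, entry, old_order, GFP_NOWAIT);
		xas_unlock(&xas);
		/* ...and on -ENOMEM, preallocate outside the lock and retry. */
	} while (xas_nomem(&xas, GFP_KERNEL));

	return xas_error(&xas);
}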

During my refactoring, I also discovered a bug in xas_try_split() when an
xa_node is allocated from xas_nomem().  Basically, the xa_node from
xas_nomem() is not initialized for a split, namely node->slots is not set
up correctly, so using that node in xas_try_split() corrupts the xarray.
This bug does not affect this series, but it does affect the "Minimize
xa_node allocation during xarray split" series.

I will send out new versions of both series.


Best Regards,
Yan, Zi
diff mbox series

Patch

diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index f6a3eef4fe7f..c6c91cbd0c3c 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -489,7 +489,19 @@  Storing ``NULL`` into any index of a multi-index entry will set the
 entry at every index to ``NULL`` and dissolve the tie.  A multi-index
 entry can be split into entries occupying smaller ranges by calling
 xas_split_alloc() without the xa_lock held, followed by taking the lock
-and calling xas_split().
+and calling xas_split() or calling xas_try_split() with xa_lock. The
+difference between xas_split_alloc()+xas_split() and xas_try_split() is
+that xas_split_alloc() + xas_split() split the entry from the original
+order to the new order in one shot uniformly, whereas xas_try_split()
+iteratively splits the entry containing the index non-uniformly.
+For example, to split an order-9 entry, which takes 2^(9-6)=8 slots,
+assuming ``XA_CHUNK_SHIFT`` is 6, xas_split_alloc() + xas_split() need
+8 xa_node. xas_try_split() splits the order-9 entry into
+2 order-8 entries, then split one order-8 entry, based on the given index,
+to 2 order-7 entries, ..., and split one order-1 entry to 2 order-0 entries.
+When splitting the order-6 entry and a new xa_node is needed, xas_try_split()
+will try to allocate one if possible. As a result, xas_try_split() would only
+need 1 xa_node instead of 8.
 
 Functions and structures
 ========================
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 0b618ec04115..9eb8c7425090 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -1555,6 +1555,8 @@  int xa_get_order(struct xarray *, unsigned long index);
 int xas_get_order(struct xa_state *xas);
 void xas_split(struct xa_state *, void *entry, unsigned int order);
 void xas_split_alloc(struct xa_state *, void *entry, unsigned int order, gfp_t);
+void xas_try_split(struct xa_state *xas, void *entry, unsigned int order,
+		gfp_t gfp);
 #else
 static inline int xa_get_order(struct xarray *xa, unsigned long index)
 {
@@ -1576,6 +1578,11 @@  static inline void xas_split_alloc(struct xa_state *xas, void *entry,
 		unsigned int order, gfp_t gfp)
 {
 }
+
+static inline void xas_try_split(struct xa_state *xas, void *entry,
+		unsigned int order, gfp_t gfp)
+{
+}
 #endif
 
 /**
diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index 0e865bab4a10..b76d9809f5c1 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -1858,6 +1858,49 @@  static void check_split_1(struct xarray *xa, unsigned long index,
 	xa_destroy(xa);
 }
 
+static void check_split_2(struct xarray *xa, unsigned long index,
+				unsigned int order, unsigned int new_order)
+{
+	XA_STATE_ORDER(xas, xa, index, new_order);
+	unsigned int i, found;
+	void *entry;
+
+	xa_store_order(xa, index, order, xa, GFP_KERNEL);
+	xa_set_mark(xa, index, XA_MARK_1);
+
+	xas_lock(&xas);
+	xas_try_split(&xas, xa, order, GFP_KERNEL);
+	if (((new_order / XA_CHUNK_SHIFT) < (order / XA_CHUNK_SHIFT)) &&
+	    new_order < order - 1) {
+		XA_BUG_ON(xa, !xas_error(&xas) || xas_error(&xas) != -EINVAL);
+		xas_unlock(&xas);
+		goto out;
+	}
+	for (i = 0; i < (1 << order); i += (1 << new_order))
+		__xa_store(xa, index + i, xa_mk_index(index + i), 0);
+	xas_unlock(&xas);
+
+	for (i = 0; i < (1 << order); i++) {
+		unsigned int val = index + (i & ~((1 << new_order) - 1));
+		XA_BUG_ON(xa, xa_load(xa, index + i) != xa_mk_index(val));
+	}
+
+	xa_set_mark(xa, index, XA_MARK_0);
+	XA_BUG_ON(xa, !xa_get_mark(xa, index, XA_MARK_0));
+
+	xas_set_order(&xas, index, 0);
+	found = 0;
+	rcu_read_lock();
+	xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_1) {
+		found++;
+		XA_BUG_ON(xa, xa_is_internal(entry));
+	}
+	rcu_read_unlock();
+	XA_BUG_ON(xa, found != 1 << (order - new_order));
+out:
+	xa_destroy(xa);
+}
+
 static noinline void check_split(struct xarray *xa)
 {
 	unsigned int order, new_order;
@@ -1869,6 +1912,10 @@  static noinline void check_split(struct xarray *xa)
 			check_split_1(xa, 0, order, new_order);
 			check_split_1(xa, 1UL << order, order, new_order);
 			check_split_1(xa, 3UL << order, order, new_order);
+
+			check_split_2(xa, 0, order, new_order);
+			check_split_2(xa, 1UL << order, order, new_order);
+			check_split_2(xa, 3UL << order, order, new_order);
 		}
 	}
 }
diff --git a/lib/xarray.c b/lib/xarray.c
index 116e9286c64e..b9a63d7fbd58 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1007,6 +1007,31 @@  static void node_set_marks(struct xa_node *node, unsigned int offset,
 	}
 }
 
+static struct xa_node *__xas_alloc_node_for_split(struct xa_state *xas,
+		void *entry, gfp_t gfp)
+{
+	unsigned int i;
+	void *sibling = NULL;
+	struct xa_node *node;
+	unsigned int mask = xas->xa_sibs;
+
+	node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp);
+	if (!node)
+		return NULL;
+	node->array = xas->xa;
+	for (i = 0; i < XA_CHUNK_SIZE; i++) {
+		if ((i & mask) == 0) {
+			RCU_INIT_POINTER(node->slots[i], entry);
+			sibling = xa_mk_sibling(i);
+		} else {
+			RCU_INIT_POINTER(node->slots[i], sibling);
+		}
+	}
+	RCU_INIT_POINTER(node->parent, xas->xa_alloc);
+
+	return node;
+}
+
 /**
  * xas_split_alloc() - Allocate memory for splitting an entry.
  * @xas: XArray operation state.
@@ -1025,7 +1050,6 @@  void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
 		gfp_t gfp)
 {
 	unsigned int sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
-	unsigned int mask = xas->xa_sibs;
 
 	/* XXX: no support for splitting really large entries yet */
 	if (WARN_ON(xas->xa_shift + 2 * XA_CHUNK_SHIFT <= order))
@@ -1034,23 +1058,9 @@  void xas_split_alloc(struct xa_state *xas, void *entry, unsigned int order,
 		return;
 
 	do {
-		unsigned int i;
-		void *sibling = NULL;
-		struct xa_node *node;
-
-		node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp);
+		struct xa_node *node = __xas_alloc_node_for_split(xas, entry, gfp);
 		if (!node)
 			goto nomem;
-		node->array = xas->xa;
-		for (i = 0; i < XA_CHUNK_SIZE; i++) {
-			if ((i & mask) == 0) {
-				RCU_INIT_POINTER(node->slots[i], entry);
-				sibling = xa_mk_sibling(i);
-			} else {
-				RCU_INIT_POINTER(node->slots[i], sibling);
-			}
-		}
-		RCU_INIT_POINTER(node->parent, xas->xa_alloc);
 		xas->xa_alloc = node;
 	} while (sibs-- > 0);
 
@@ -1122,6 +1132,102 @@  void xas_split(struct xa_state *xas, void *entry, unsigned int order)
 	xas_update(xas, node);
 }
 EXPORT_SYMBOL_GPL(xas_split);
+
+/**
+ * xas_try_split() - Try to split a multi-index entry.
+ * @xas: XArray operation state.
+ * @entry: New entry to store in the array.
+ * @order: Current entry order.
+ * @gfp: Memory allocation flags.
+ *
+ * The size of the new entries is set in @xas.  The value in @entry is
+ * copied to all the replacement entries. If one (and no more than one)
+ * xa_node needs to be allocated, the function will use @gfp to get it. If
+ * more than one xa_node is needed, the function sets an -EINVAL error on @xas.
+ *
+ * Context: Any context.  The caller should hold the xa_lock.
+ */
+void xas_try_split(struct xa_state *xas, void *entry, unsigned int order,
+		gfp_t gfp)
+{
+	unsigned int sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
+	unsigned int offset, marks;
+	struct xa_node *node;
+	void *curr = xas_load(xas);
+	int values = 0;
+
+	node = xas->xa_node;
+	if (xas_top(node))
+		return;
+
+	if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT)
+		gfp |= __GFP_ACCOUNT;
+
+	marks = node_get_marks(node, xas->xa_offset);
+
+	offset = xas->xa_offset + sibs;
+
+	if (xas->xa_shift < node->shift) {
+		struct xa_node *child = xas->xa_alloc;
+		unsigned int expected_sibs =
+			(1 << ((order - 1) % XA_CHUNK_SHIFT)) - 1;
+
+		/*
+		 * No support for splitting sibling entries
+		 * (horizontally) or doing a cascade split (vertically);
+		 * either requires two or more new xa_nodes, and if one
+		 * allocation failed it would be hard to free the prior
+		 * allocations.
+		 */
+		if (sibs || xas->xa_sibs != expected_sibs) {
+			xas_destroy(xas);
+			xas_set_err(xas, -EINVAL);
+			return;
+		}
+
+		if (!child) {
+			child = __xas_alloc_node_for_split(xas, entry,
+					gfp);
+			if (!child) {
+				xas_destroy(xas);
+				xas_set_err(xas, -ENOMEM);
+				return;
+			}
+		}
+
+		xas->xa_alloc = rcu_dereference_raw(child->parent);
+		child->shift = node->shift - XA_CHUNK_SHIFT;
+		child->offset = offset;
+		child->count = XA_CHUNK_SIZE;
+		child->nr_values = xa_is_value(entry) ?
+				XA_CHUNK_SIZE : 0;
+		RCU_INIT_POINTER(child->parent, node);
+		node_set_marks(node, offset, child, xas->xa_sibs,
+				marks);
+		rcu_assign_pointer(node->slots[offset],
+				xa_mk_node(child));
+		if (xa_is_value(curr))
+			values--;
+		xas_update(xas, child);
+
+	} else {
+		do {
+			unsigned int canon = offset - xas->xa_sibs;
+
+			node_set_marks(node, canon, NULL, 0, marks);
+			rcu_assign_pointer(node->slots[canon], entry);
+			while (offset > canon)
+				rcu_assign_pointer(node->slots[offset--],
+						xa_mk_sibling(canon));
+			values += (xa_is_value(entry) - xa_is_value(curr)) *
+					(xas->xa_sibs + 1);
+		} while (offset-- > xas->xa_offset);
+	}
+
+	node->nr_values += values;
+	xas_update(xas, node);
+}
+EXPORT_SYMBOL_GPL(xas_try_split);
 #endif
 
 /**
diff --git a/tools/testing/radix-tree/Makefile b/tools/testing/radix-tree/Makefile
index 8b3591a51e1f..b2a6660bbd92 100644
--- a/tools/testing/radix-tree/Makefile
+++ b/tools/testing/radix-tree/Makefile
@@ -14,6 +14,7 @@  include ../shared/shared.mk
 
 main:	$(OFILES)
 
+xarray.o: ../../../lib/test_xarray.c
 idr-test.o: ../../../lib/test_ida.c
 idr-test: idr-test.o $(CORE_OFILES)