2019-12-31 12:25:06

by Uladzislau Rezki

Subject: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

The kfree_rcu() logic can be improved further by using the kfree_bulk()
interface along with the "basic batching support" introduced earlier.

There are at least two advantages of using the "bulk" interface:
- in the case of a large number of kfree_rcu() requests, kfree_bulk()
reduces the per-object overhead caused by calling kfree()
once per object;

- it reduces the number of cache misses due to "pointer chasing"
between objects, which can be spread far apart from each other.

This approach defines a new kfree_rcu_bulk_data structure that
stores pointers in an array of a fixed size. The number of entries
in that array depends on PAGE_SIZE, so that the kfree_rcu_bulk_data
structure occupies exactly one page.
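
For reference, the sizing works out as follows on a 64-bit system with
4 KiB pages (a worked example only; page size and pointer size are
assumptions about the target configuration):

  KFREE_BULK_MAX_ENTR = PAGE_SIZE / sizeof(void *) - 2
                      = 4096 / 8 - 2 = 510 entries

  sizeof(struct kfree_rcu_bulk_data)
      = 8            /* nr_records (unsigned long)   */
      + 510 * 8      /* records[KFREE_BULK_MAX_ENTR] */
      + 8            /* next pointer                 */
      = 4096 bytes, i.e. exactly one page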

Since this is a "block-chain" technique, an extra dynamic allocation
is needed whenever a new block is required. Memory is allocated with
the GFP_NOWAIT | __GFP_NOWARN flags, which skip direct reclaim under
low-memory conditions to prevent stalling and fail silently under
high memory pressure.

The "emergency path" gets maintained when a system is run out
of memory. In that case objects are linked into regular list
and that is it.

In order to evaluate it, "rcuperf" was run to analyze how
much memory is consumed and what the kfree_bulk() throughput is.

Testing on the HiKey-960 (arm64, 8 CPUs) with the parameters below:

CONFIG_SLAB=y
kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1

102898760401 ns, loops: 200000, batches: 5822, memory footprint: 158MB
89947009882 ns, loops: 200000, batches: 6715, memory footprint: 115MB

rcuperf shows approximately 12% better throughput (Total time drops
from 102898760401 ns to 89947009882 ns) when the "bulk" interface is
used. The "drain logic", i.e. its RCU callback, does the work faster,
which leads to better throughput.

Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>
---
kernel/rcu/tree.c | 154 ++++++++++++++++++++++++++++++++++++++--------
1 file changed, 130 insertions(+), 24 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 48fba2257748..4ee5c737558b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2754,22 +2754,45 @@ EXPORT_SYMBOL_GPL(call_rcu);
#define KFREE_DRAIN_JIFFIES (HZ / 50)
#define KFREE_N_BATCHES 2

+/*
+ * This macro defines how many entries the "records" array
+ * will contain. It is based on the fact that the size of
+ * kfree_rcu_bulk_data structure becomes exactly one page.
+ */
+#define KFREE_BULK_MAX_ENTR ((PAGE_SIZE / sizeof(void *)) - 2)
+
+/**
+ * struct kfree_rcu_bulk_data - single block to store kfree_rcu() pointers
+ * @nr_records: Number of active pointers in the array
+ * @records: Array of the kfree_rcu() pointers
+ * @next: Next bulk object in the block chain
+ */
+struct kfree_rcu_bulk_data {
+ unsigned long nr_records;
+ void *records[KFREE_BULK_MAX_ENTR];
+ struct kfree_rcu_bulk_data *next;
+};
+
/**
* struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
* @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
* @head_free: List of kfree_rcu() objects waiting for a grace period
+ * @bhead_free: Bulk-List of kfree_rcu() objects waiting for a grace period
* @krcp: Pointer to @kfree_rcu_cpu structure
*/

struct kfree_rcu_cpu_work {
struct rcu_work rcu_work;
struct rcu_head *head_free;
+ struct kfree_rcu_bulk_data *bhead_free;
struct kfree_rcu_cpu *krcp;
};

/**
* struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
* @head: List of kfree_rcu() objects not yet waiting for a grace period
+ * @bhead: Bulk-List of kfree_rcu() objects not yet waiting for a grace period
+ * @bcached: Keeps at most one object for later reuse when build chain blocks
* @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period
* @lock: Synchronize access to this structure
* @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES
@@ -2783,6 +2806,8 @@ struct kfree_rcu_cpu_work {
*/
struct kfree_rcu_cpu {
struct rcu_head *head;
+ struct kfree_rcu_bulk_data *bhead;
+ struct kfree_rcu_bulk_data *bcached;
struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
spinlock_t lock;
struct delayed_work monitor_work;
@@ -2800,6 +2825,7 @@ static void kfree_rcu_work(struct work_struct *work)
{
unsigned long flags;
struct rcu_head *head, *next;
+ struct kfree_rcu_bulk_data *bhead, *bnext;
struct kfree_rcu_cpu *krcp;
struct kfree_rcu_cpu_work *krwp;

@@ -2809,22 +2835,39 @@ static void kfree_rcu_work(struct work_struct *work)
spin_lock_irqsave(&krcp->lock, flags);
head = krwp->head_free;
krwp->head_free = NULL;
+ bhead = krwp->bhead_free;
+ krwp->bhead_free = NULL;
spin_unlock_irqrestore(&krcp->lock, flags);

- // List "head" is now private, so traverse locklessly.
+ /* List "bhead" is now private, so traverse locklessly. */
+ for (; bhead; bhead = bnext) {
+ bnext = bhead->next;
+
+ rcu_lock_acquire(&rcu_callback_map);
+ kfree_bulk(bhead->nr_records, bhead->records);
+ rcu_lock_release(&rcu_callback_map);
+
+ if (cmpxchg(&krcp->bcached, NULL, bhead))
+ free_page((unsigned long) bhead);
+
+ cond_resched_tasks_rcu_qs();
+ }
+
+ /*
+ * Emergency case only. It can happen under low memory
+ * condition when an allocation gets failed, so the "bulk"
+ * path can not be temporary maintained.
+ */
for (; head; head = next) {
unsigned long offset = (unsigned long)head->func;

next = head->next;
- // Potentially optimize with kfree_bulk in future.
debug_rcu_head_unqueue(head);
rcu_lock_acquire(&rcu_callback_map);
trace_rcu_invoke_kfree_callback(rcu_state.name, head, offset);

- if (!WARN_ON_ONCE(!__is_kfree_rcu_offset(offset))) {
- /* Could be optimized with kfree_bulk() in future. */
+ if (!WARN_ON_ONCE(!__is_kfree_rcu_offset(offset)))
kfree((void *)head - offset);
- }

rcu_lock_release(&rcu_callback_map);
cond_resched_tasks_rcu_qs();
@@ -2839,26 +2882,45 @@ static void kfree_rcu_work(struct work_struct *work)
*/
static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
{
+ struct kfree_rcu_cpu_work *krwp;
+ bool queued = false;
int i;
- struct kfree_rcu_cpu_work *krwp = NULL;

lockdep_assert_held(&krcp->lock);
- for (i = 0; i < KFREE_N_BATCHES; i++)
- if (!krcp->krw_arr[i].head_free) {
- krwp = &(krcp->krw_arr[i]);
- break;
- }

- // If a previous RCU batch is in progress, we cannot immediately
- // queue another one, so return false to tell caller to retry.
- if (!krwp)
- return false;
+ for (i = 0; i < KFREE_N_BATCHES; i++) {
+ krwp = &(krcp->krw_arr[i]);

- krwp->head_free = krcp->head;
- krcp->head = NULL;
- INIT_RCU_WORK(&krwp->rcu_work, kfree_rcu_work);
- queue_rcu_work(system_wq, &krwp->rcu_work);
- return true;
+ /*
+ * Try to detach bhead or head and attach it over any
+ * available corresponding free channel. It can be that
+ * a previous RCU batch is in progress, it means that
+ * immediately to queue another one is not possible so
+ * return false to tell caller to retry.
+ */
+ if ((krcp->bhead && !krwp->bhead_free) ||
+ (krcp->head && !krwp->head_free)) {
+ if (!krwp->bhead_free) {
+ krwp->bhead_free = krcp->bhead;
+ krcp->bhead = NULL;
+ }
+
+ if (!krwp->head_free) {
+ krwp->head_free = krcp->head;
+ krcp->head = NULL;
+ }
+
+ /*
+ * The work can already be queued. If so, it means that
+ * within a short time, second, either head or bhead has
+ * been detached as well.
+ */
+ queue_rcu_work(system_wq, &krwp->rcu_work);
+ queued = true;
+ }
+ }
+
+ return queued;
}

static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
@@ -2895,6 +2957,39 @@ static void kfree_rcu_monitor(struct work_struct *work)
spin_unlock_irqrestore(&krcp->lock, flags);
}

+static inline bool
+kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
+{
+ struct kfree_rcu_bulk_data *bnode;
+
+ if (unlikely(!krcp->initialized))
+ return false;
+
+ lockdep_assert_held(&krcp->lock);
+
+ /* Check if a new block is required. */
+ if (!krcp->bhead ||
+ krcp->bhead->nr_records == KFREE_BULK_MAX_ENTR) {
+ bnode = xchg(&krcp->bcached, NULL);
+ if (!bnode)
+ bnode = (struct kfree_rcu_bulk_data *)
+ __get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+
+ /* No cache or an allocation got failed. */
+ if (unlikely(!bnode))
+ return false;
+
+ /* Initialize the new block. */
+ bnode->nr_records = 0;
+ bnode->next = krcp->bhead;
+ krcp->bhead = bnode;
+ }
+
+ /* Finally insert. */
+ krcp->bhead->records[krcp->bhead->nr_records++] = ptr;
+ return true;
+}
+
/*
* Queue a request for lazy invocation of kfree() after a grace period.
*
@@ -2926,9 +3021,17 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
__func__, head);
goto unlock_return;
}
- head->func = func;
- head->next = krcp->head;
- krcp->head = head;
+
+ /*
+ * Under high memory pressure GFP_NOWAIT can fail,
+ * in that case the emergency path is maintained.
+ */
+ if (unlikely(!kfree_call_rcu_add_ptr_to_bulk(krcp,
+ (void *) head - (unsigned long) func))) {
+ head->func = func;
+ head->next = krcp->head;
+ krcp->head = head;
+ }

// Set timer to drain after KFREE_DRAIN_JIFFIES.
if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
@@ -3834,8 +3937,11 @@ static void __init kfree_rcu_batch_init(void)
struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

spin_lock_init(&krcp->lock);
- for (i = 0; i < KFREE_N_BATCHES; i++)
+ for (i = 0; i < KFREE_N_BATCHES; i++) {
+ INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
krcp->krw_arr[i].krcp = krcp;
+ }
+
INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
krcp->initialized = true;
}
--
2.20.1


2020-01-13 19:04:42

by Paul E. McKenney

Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Tue, Dec 31, 2019 at 01:22:41PM +0100, Uladzislau Rezki (Sony) wrote:
> kfree_rcu() logic can be improved further by using kfree_bulk()
> interface along with "basic batching support" introduced earlier.
>
> The are at least two advantages of using "bulk" interface:
> - in case of large number of kfree_rcu() requests kfree_bulk()
> reduces the per-object overhead caused by calling kfree()
> per-object.
>
> - reduces the number of cache-misses due to "pointer chasing"
> between objects which can be far spread between each other.
>
> This approach defines a new kfree_rcu_bulk_data structure that
> stores pointers in an array with a specific size. Number of entries
> in that array depends on PAGE_SIZE making kfree_rcu_bulk_data
> structure to be exactly one page.
>
> Since it deals with "block-chain" technique there is an extra
> need in dynamic allocation when a new block is required. Memory
> is allocated with GFP_NOWAIT | __GFP_NOWARN flags, i.e. that
> allows to skip direct reclaim under low memory condition to
> prevent stalling and fails silently under high memory pressure.
>
> The "emergency path" gets maintained when a system is run out
> of memory. In that case objects are linked into regular list
> and that is it.
>
> In order to evaluate it, the "rcuperf" was run to analyze how
> much memory is consumed and what is kfree_bulk() throughput.
>
> Testing on the HiKey-960, arm64, 8xCPUs with below parameters:
>
> CONFIG_SLAB=y
> kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1
>
> 102898760401 ns, loops: 200000, batches: 5822, memory footprint: 158MB
> 89947009882 ns, loops: 200000, batches: 6715, memory footprint: 115MB
>
> rcuperf shows approximately ~12% better throughput(Total time)
> in case of using "bulk" interface. The "drain logic" or its RCU
> callback does the work faster that leads to better throughput.

Nice improvement!

But rcuperf uses a single block size, which turns into kfree_bulk() using
a single slab, which results in good locality of reference. So I have to
ask... Is this performance result representative of production workloads?

Thanx, Paul

> Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>
> ---
> kernel/rcu/tree.c | 154 ++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 130 insertions(+), 24 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 48fba2257748..4ee5c737558b 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2754,22 +2754,45 @@ EXPORT_SYMBOL_GPL(call_rcu);
> #define KFREE_DRAIN_JIFFIES (HZ / 50)
> #define KFREE_N_BATCHES 2
>
> +/*
> + * This macro defines how many entries the "records" array
> + * will contain. It is based on the fact that the size of
> + * kfree_rcu_bulk_data structure becomes exactly one page.
> + */
> +#define KFREE_BULK_MAX_ENTR ((PAGE_SIZE / sizeof(void *)) - 2)
> +
> +/**
> + * struct kfree_rcu_bulk_data - single block to store kfree_rcu() pointers
> + * @nr_records: Number of active pointers in the array
> + * @records: Array of the kfree_rcu() pointers
> + * @next: Next bulk object in the block chain
> + */
> +struct kfree_rcu_bulk_data {
> + unsigned long nr_records;
> + void *records[KFREE_BULK_MAX_ENTR];
> + struct kfree_rcu_bulk_data *next;
> +};
> +
> /**
> * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
> * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
> * @head_free: List of kfree_rcu() objects waiting for a grace period
> + * @bhead_free: Bulk-List of kfree_rcu() objects waiting for a grace period
> * @krcp: Pointer to @kfree_rcu_cpu structure
> */
>
> struct kfree_rcu_cpu_work {
> struct rcu_work rcu_work;
> struct rcu_head *head_free;
> + struct kfree_rcu_bulk_data *bhead_free;
> struct kfree_rcu_cpu *krcp;
> };
>
> /**
> * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
> * @head: List of kfree_rcu() objects not yet waiting for a grace period
> + * @bhead: Bulk-List of kfree_rcu() objects not yet waiting for a grace period
> + * @bcached: Keeps at most one object for later reuse when build chain blocks
> * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period
> * @lock: Synchronize access to this structure
> * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES
> @@ -2783,6 +2806,8 @@ struct kfree_rcu_cpu_work {
> */
> struct kfree_rcu_cpu {
> struct rcu_head *head;
> + struct kfree_rcu_bulk_data *bhead;
> + struct kfree_rcu_bulk_data *bcached;
> struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
> spinlock_t lock;
> struct delayed_work monitor_work;
> @@ -2800,6 +2825,7 @@ static void kfree_rcu_work(struct work_struct *work)
> {
> unsigned long flags;
> struct rcu_head *head, *next;
> + struct kfree_rcu_bulk_data *bhead, *bnext;
> struct kfree_rcu_cpu *krcp;
> struct kfree_rcu_cpu_work *krwp;
>
> @@ -2809,22 +2835,39 @@ static void kfree_rcu_work(struct work_struct *work)
> spin_lock_irqsave(&krcp->lock, flags);
> head = krwp->head_free;
> krwp->head_free = NULL;
> + bhead = krwp->bhead_free;
> + krwp->bhead_free = NULL;
> spin_unlock_irqrestore(&krcp->lock, flags);
>
> - // List "head" is now private, so traverse locklessly.
> + /* List "bhead" is now private, so traverse locklessly. */
> + for (; bhead; bhead = bnext) {
> + bnext = bhead->next;
> +
> + rcu_lock_acquire(&rcu_callback_map);
> + kfree_bulk(bhead->nr_records, bhead->records);
> + rcu_lock_release(&rcu_callback_map);
> +
> + if (cmpxchg(&krcp->bcached, NULL, bhead))
> + free_page((unsigned long) bhead);
> +
> + cond_resched_tasks_rcu_qs();
> + }
> +
> + /*
> + * Emergency case only. It can happen under low memory
> + * condition when an allocation gets failed, so the "bulk"
> + * path can not be temporary maintained.
> + */
> for (; head; head = next) {
> unsigned long offset = (unsigned long)head->func;
>
> next = head->next;
> - // Potentially optimize with kfree_bulk in future.
> debug_rcu_head_unqueue(head);
> rcu_lock_acquire(&rcu_callback_map);
> trace_rcu_invoke_kfree_callback(rcu_state.name, head, offset);
>
> - if (!WARN_ON_ONCE(!__is_kfree_rcu_offset(offset))) {
> - /* Could be optimized with kfree_bulk() in future. */
> + if (!WARN_ON_ONCE(!__is_kfree_rcu_offset(offset)))
> kfree((void *)head - offset);
> - }
>
> rcu_lock_release(&rcu_callback_map);
> cond_resched_tasks_rcu_qs();
> @@ -2839,26 +2882,45 @@ static void kfree_rcu_work(struct work_struct *work)
> */
> static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
> {
> + struct kfree_rcu_cpu_work *krwp;
> + bool queued = false;
> int i;
> - struct kfree_rcu_cpu_work *krwp = NULL;
>
> lockdep_assert_held(&krcp->lock);
> - for (i = 0; i < KFREE_N_BATCHES; i++)
> - if (!krcp->krw_arr[i].head_free) {
> - krwp = &(krcp->krw_arr[i]);
> - break;
> - }
>
> - // If a previous RCU batch is in progress, we cannot immediately
> - // queue another one, so return false to tell caller to retry.
> - if (!krwp)
> - return false;
> + for (i = 0; i < KFREE_N_BATCHES; i++) {
> + krwp = &(krcp->krw_arr[i]);
>
> - krwp->head_free = krcp->head;
> - krcp->head = NULL;
> - INIT_RCU_WORK(&krwp->rcu_work, kfree_rcu_work);
> - queue_rcu_work(system_wq, &krwp->rcu_work);
> - return true;
> + /*
> + * Try to detach bhead or head and attach it over any
> + * available corresponding free channel. It can be that
> + * a previous RCU batch is in progress, it means that
> + * immediately to queue another one is not possible so
> + * return false to tell caller to retry.
> + */
> + if ((krcp->bhead && !krwp->bhead_free) ||
> + (krcp->head && !krwp->head_free)) {
> + if (!krwp->bhead_free) {
> + krwp->bhead_free = krcp->bhead;
> + krcp->bhead = NULL;
> + }
> +
> + if (!krwp->head_free) {
> + krwp->head_free = krcp->head;
> + krcp->head = NULL;
> + }
> +
> + /*
> + * The work can already be queued. If so, it means that
> + * within a short time, second, either head or bhead has
> + * been detached as well.
> + */
> + queue_rcu_work(system_wq, &krwp->rcu_work);
> + queued = true;
> + }
> + }
> +
> + return queued;
> }
>
> static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
> @@ -2895,6 +2957,39 @@ static void kfree_rcu_monitor(struct work_struct *work)
> spin_unlock_irqrestore(&krcp->lock, flags);
> }
>
> +static inline bool
> +kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
> +{
> + struct kfree_rcu_bulk_data *bnode;
> +
> + if (unlikely(!krcp->initialized))
> + return false;
> +
> + lockdep_assert_held(&krcp->lock);
> +
> + /* Check if a new block is required. */
> + if (!krcp->bhead ||
> + krcp->bhead->nr_records == KFREE_BULK_MAX_ENTR) {
> + bnode = xchg(&krcp->bcached, NULL);
> + if (!bnode)
> + bnode = (struct kfree_rcu_bulk_data *)
> + __get_free_page(GFP_NOWAIT | __GFP_NOWARN);
> +
> + /* No cache or an allocation got failed. */
> + if (unlikely(!bnode))
> + return false;
> +
> + /* Initialize the new block. */
> + bnode->nr_records = 0;
> + bnode->next = krcp->bhead;
> + krcp->bhead = bnode;
> + }
> +
> + /* Finally insert. */
> + krcp->bhead->records[krcp->bhead->nr_records++] = ptr;
> + return true;
> +}
> +
> /*
> * Queue a request for lazy invocation of kfree() after a grace period.
> *
> @@ -2926,9 +3021,17 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
> __func__, head);
> goto unlock_return;
> }
> - head->func = func;
> - head->next = krcp->head;
> - krcp->head = head;
> +
> + /*
> + * Under high memory pressure GFP_NOWAIT can fail,
> + * in that case the emergency path is maintained.
> + */
> + if (unlikely(!kfree_call_rcu_add_ptr_to_bulk(krcp,
> + (void *) head - (unsigned long) func))) {
> + head->func = func;
> + head->next = krcp->head;
> + krcp->head = head;
> + }
>
> // Set timer to drain after KFREE_DRAIN_JIFFIES.
> if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
> @@ -3834,8 +3937,11 @@ static void __init kfree_rcu_batch_init(void)
> struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
>
> spin_lock_init(&krcp->lock);
> - for (i = 0; i < KFREE_N_BATCHES; i++)
> + for (i = 0; i < KFREE_N_BATCHES; i++) {
> + INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
> krcp->krw_arr[i].krcp = krcp;
> + }
> +
> INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
> krcp->initialized = true;
> }
> --
> 2.20.1
>

2020-01-14 16:50:42

by Joel Fernandes

Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

Hi Paul,

On Mon, Jan 13, 2020 at 11:03:15AM -0800, Paul E. McKenney wrote:
> On Tue, Dec 31, 2019 at 01:22:41PM +0100, Uladzislau Rezki (Sony) wrote:
> > kfree_rcu() logic can be improved further by using kfree_bulk()
> > interface along with "basic batching support" introduced earlier.
> >
> > The are at least two advantages of using "bulk" interface:
> > - in case of large number of kfree_rcu() requests kfree_bulk()
> > reduces the per-object overhead caused by calling kfree()
> > per-object.
> >
> > - reduces the number of cache-misses due to "pointer chasing"
> > between objects which can be far spread between each other.
> >
> > This approach defines a new kfree_rcu_bulk_data structure that
> > stores pointers in an array with a specific size. Number of entries
> > in that array depends on PAGE_SIZE making kfree_rcu_bulk_data
> > structure to be exactly one page.
> >
> > Since it deals with "block-chain" technique there is an extra
> > need in dynamic allocation when a new block is required. Memory
> > is allocated with GFP_NOWAIT | __GFP_NOWARN flags, i.e. that
> > allows to skip direct reclaim under low memory condition to
> > prevent stalling and fails silently under high memory pressure.
> >
> > The "emergency path" gets maintained when a system is run out
> > of memory. In that case objects are linked into regular list
> > and that is it.
> >
> > In order to evaluate it, the "rcuperf" was run to analyze how
> > much memory is consumed and what is kfree_bulk() throughput.
> >
> > Testing on the HiKey-960, arm64, 8xCPUs with below parameters:
> >
> > CONFIG_SLAB=y
> > kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1
> >
> > 102898760401 ns, loops: 200000, batches: 5822, memory footprint: 158MB
> > 89947009882 ns, loops: 200000, batches: 6715, memory footprint: 115MB
> >
> > rcuperf shows approximately ~12% better throughput(Total time)
> > in case of using "bulk" interface. The "drain logic" or its RCU
> > callback does the work faster that leads to better throughput.
>
> Nice improvement!
>
> But rcuperf uses a single block size, which turns into kfree_bulk() using
> a single slab, which results in good locality of reference. So I have to

You meant a "single cache" category when you say "single slab"? Just to
mention, the number of slabs (in a single cache) when a large number of
objects are allocated is more than 1 (not single). With current rcuperf, I
see 100s of slabs (each slab being one page) in the kmalloc-32 cache. Each
slab contains around 128 objects of type kfree_rcu (24 byte object aligned to
32-byte slab object).
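
To spell out the arithmetic behind those numbers (assuming 4 KiB pages
and a 16-byte struct rcu_head on a 64-bit kernel):

  sizeof(struct kfree_obj) = 8 (char kfree_obj[8]) + 16 (struct rcu_head)
                           = 24 bytes
  -> served from the kmalloc-32 cache (rounded up to 32-byte objects)
  objects per slab page    = 4096 / 32 = 128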

> ask... Is this performance result representative of production workloads?

I added more variation in allocation sizes to rcuperf (patch below) to
distribute allocations across 4 kmalloc caches (32, 64, 96 and 128) and I
see a significant improvement with Ulad's patch under SLAB in terms of the
completion time of the test. Below are the results. With SLUB I see a
slightly higher memory footprint; I have never used SLUB and am not sure
who is using it, so I am not too concerned, since the degradation in memory
footprint is only slight while SLAB shows the significant improvement.

with SLAB:

with Ulad's patch:
[ 19.096052] Total time taken by all kfree'ers: 17519684419 ns, loops: 10000, batches: 3378, memory footprint: 319MB
[ 18.980837] Total time taken by all kfree'ers: 17460918969 ns, loops: 10000, batches: 3399, memory footprint: 312MB
[ 18.671535] Total time taken by all kfree'ers: 17116640301 ns, loops: 10000, batches: 3331, memory footprint: 268MB
[ 18.737601] Total time taken by all kfree'ers: 17227635828 ns, loops: 10000, batches: 3311, memory footprint: 329MB

without Ulad's patch:
[ 22.679112] Total time taken by all kfree'ers: 21174999896 ns, loops: 10000, batches: 2722, memory footprint: 314MB
[ 22.099168] Total time taken by all kfree'ers: 20528110989 ns, loops: 10000, batches: 2611, memory footprint: 240MB
[ 22.477571] Total time taken by all kfree'ers: 20975674614 ns, loops: 10000, batches: 2763, memory footprint: 341MB
[ 22.772915] Total time taken by all kfree'ers: 21207270347 ns, loops: 10000, batches: 2765, memory footprint: 329MB

with SLUB:

without Ulad's patch:
[ 10.714471] Total time taken by all kfree'ers: 9216968353 ns, loops: 10000, batches: 1099, memory footprint: 393MB
[ 11.188174] Total time taken by all kfree'ers: 9613032449 ns, loops: 10000, batches: 1147, memory footprint: 387MB
[ 11.077431] Total time taken by all kfree'ers: 9547675890 ns, loops: 10000, batches: 1292, memory footprint: 296MB
[ 11.212767] Total time taken by all kfree'ers: 9712869591 ns, loops: 10000, batches: 1155, memory footprint: 387MB


with Ulad's patch
[ 11.241949] Total time taken by all kfree'ers: 9681912225 ns, loops: 10000, batches: 1087, memory footprint: 417MB
[ 11.651831] Total time taken by all kfree'ers: 10154268745 ns, loops: 10000, batches: 1184, memory footprint: 416MB
[ 11.342659] Total time taken by all kfree'ers: 9844937317 ns, loops: 10000, batches: 1137, memory footprint: 477MB
[ 11.718769] Total time taken by all kfree'ers: 10138649532 ns, loops: 10000, batches: 1159, memory footprint: 395MB

Test patch for rcuperf is below. The memory footprint measurement for rcuperf
is still under discussion in another thread, but I tested based on that anyway:

---8<-----------------------

From d44e4c6112c388d39f7c2241e061dd77cca28d9e Mon Sep 17 00:00:00 2001
From: Joel Fernandes <[email protected]>
Date: Tue, 14 Jan 2020 09:59:23 -0500
Subject: [PATCH] rcuperf: Add support to vary the slab object sizes

Signed-off-by: Joel Fernandes <[email protected]>
---
kernel/rcu/rcuperf.c | 43 ++++++++++++++++++++++++++++++++++++-------
1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
index a4a8d097d84d..216d7c072ca2 100644
--- a/kernel/rcu/rcuperf.c
+++ b/kernel/rcu/rcuperf.c
@@ -600,17 +600,29 @@ static int kfree_nrealthreads;
static atomic_t n_kfree_perf_thread_started;
static atomic_t n_kfree_perf_thread_ended;

-struct kfree_obj {
- char kfree_obj[8];
- struct rcu_head rh;
-};
+/*
+ * Define a kfree_obj with size as the @size parameter + the size of rcu_head
+ * (rcu_head is 16 bytes on 64-bit arch).
+ */
+#define DEFINE_KFREE_OBJ(size) \
+struct kfree_obj_ ## size { \
+ char kfree_obj[size]; \
+ struct rcu_head rh; \
+}
+
+/* This should goto the right sized slabs on both 32-bit and 64-bit arch */
+DEFINE_KFREE_OBJ(16); // goes on kmalloc-32 slab
+DEFINE_KFREE_OBJ(32); // goes on kmalloc-64 slab
+DEFINE_KFREE_OBJ(64); // goes on kmalloc-96 slab
+DEFINE_KFREE_OBJ(96); // goes on kmalloc-128 slab

static int
kfree_perf_thread(void *arg)
{
int i, loop = 0;
long me = (long)arg;
- struct kfree_obj *alloc_ptr;
+ void *alloc_ptr;
+
u64 start_time, end_time;
long long mem_begin, mem_during = 0;

@@ -635,11 +647,28 @@ kfree_perf_thread(void *arg)
}

for (i = 0; i < kfree_alloc_num; i++) {
- alloc_ptr = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
+ int kfree_type = i % 4;
+
+ if (kfree_type == 0)
+ alloc_ptr = kmalloc(sizeof(struct kfree_obj_16), GFP_KERNEL);
+ else if (kfree_type == 1)
+ alloc_ptr = kmalloc(sizeof(struct kfree_obj_32), GFP_KERNEL);
+ else if (kfree_type == 2)
+ alloc_ptr = kmalloc(sizeof(struct kfree_obj_64), GFP_KERNEL);
+ else
+ alloc_ptr = kmalloc(sizeof(struct kfree_obj_96), GFP_KERNEL);
+
if (!alloc_ptr)
return -ENOMEM;

- kfree_rcu(alloc_ptr, rh);
+ if (kfree_type == 0)
+ kfree_rcu((struct kfree_obj_16 *)alloc_ptr, rh);
+ else if (kfree_type == 1)
+ kfree_rcu((struct kfree_obj_32 *)alloc_ptr, rh);
+ else if (kfree_type == 2)
+ kfree_rcu((struct kfree_obj_64 *)alloc_ptr, rh);
+ else
+ kfree_rcu((struct kfree_obj_96 *)alloc_ptr, rh);
}

cond_resched();
--
2.25.0.rc1.283.g88dfdc4193-goog

2020-01-15 13:17:28

by Uladzislau Rezki

Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

Hello, Joel, Paul.

Thank you for comments and testing!

> >
> > Nice improvement!
> >
> > But rcuperf uses a single block size, which turns into kfree_bulk() using
> > a single slab, which results in good locality of reference. So I have to
>
> You meant a "single cache" category when you say "single slab"? Just to
> mention, the number of slabs (in a single cache) when a large number of
> objects are allocated is more than 1 (not single). With current rcuperf, I
> see 100s of slabs (each slab being one page) in the kmalloc-32 cache. Each
> slab contains around 128 objects of type kfree_rcu (24 byte object aligned to
> 32-byte slab object).
>
I think that is about using different slab caches to break locality. It
makes sense, IMHO, because usually the system makes use of different slabs
because of different object sizes. On the other hand, I guess there are
test cases where only one slab gets used.

> > ask... Is this performance result representative of production workloads?
>
> I added more variation to allocation sizes to rcuperf (patch below) to distribute
> allocations across 4 kmalloc slabs (32,64,96 and 128) and I see a signficant
> improvement with Ulad's patch in SLAB in terms of completion time of the
> test. Below are the results. With SLUB I see slightly higher memory
> footprint, I have never used SLUB and not sure who is using it so I am not
> too concerned since the degradation in memory footprint is only slight with
> SLAB having the signifcant improvement.
>
Nice patch! I think it would be useful to have it in the "rcuperf" tool with
an extra parameter like "different_obj_sizes".
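
Just to illustrate what I mean, such a knob could look roughly like the
sketch below (the parameter name is made up for illustration and is not
part of any posted patch):

  /* Hypothetical rcuperf module parameter -- illustration only. */
  torture_param(bool, kfree_vary_obj_size, false,
		"Spread kfree_rcu() allocations across several kmalloc caches");

  /*
   * When set, kfree_perf_thread() would rotate through the different
   * DEFINE_KFREE_OBJ() sizes from Joel's patch instead of allocating
   * the single fixed-size kfree_obj on every iteration.
   */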

> with SLAB:
>
> with Ulad's patch:
> [ 19.096052] Total time taken by all kfree'ers: 17519684419 ns, loops: 10000, batches: 3378, memory footprint: 319MB
> [ 18.980837] Total time taken by all kfree'ers: 17460918969 ns, loops: 10000, batches: 3399, memory footprint: 312MB
> [ 18.671535] Total time taken by all kfree'ers: 17116640301 ns, loops: 10000, batches: 3331, memory footprint: 268MB
> [ 18.737601] Total time taken by all kfree'ers: 17227635828 ns, loops: 10000, batches: 3311, memory footprint: 329MB
>
> without Ulad's patch:
> [ 22.679112] Total time taken by all kfree'ers: 21174999896 ns, loops: 10000, batches: 2722, memory footprint: 314MB
> [ 22.099168] Total time taken by all kfree'ers: 20528110989 ns, loops: 10000, batches: 2611, memory footprint: 240MB
> [ 22.477571] Total time taken by all kfree'ers: 20975674614 ns, loops: 10000, batches: 2763, memory footprint: 341MB
> [ 22.772915] Total time taken by all kfree'ers: 21207270347 ns, loops: 10000, batches: 2765, memory footprint: 329MB
>
> with SLUB:
>
> without Ulad's patch:
> [ 10.714471] Total time taken by all kfree'ers: 9216968353 ns, loops: 10000, batches: 1099, memory footprint: 393MB
> [ 11.188174] Total time taken by all kfree'ers: 9613032449 ns, loops: 10000, batches: 1147, memory footprint: 387MB
> [ 11.077431] Total time taken by all kfree'ers: 9547675890 ns, loops: 10000, batches: 1292, memory footprint: 296MB
> [ 11.212767] Total time taken by all kfree'ers: 9712869591 ns, loops: 10000, batches: 1155, memory footprint: 387MB
>
>
> with Ulad's patch
> [ 11.241949] Total time taken by all kfree'ers: 9681912225 ns, loops: 10000, batches: 1087, memory footprint: 417MB
> [ 11.651831] Total time taken by all kfree'ers: 10154268745 ns, loops: 10000, batches: 1184, memory footprint: 416MB
> [ 11.342659] Total time taken by all kfree'ers: 9844937317 ns, loops: 10000, batches: 1137, memory footprint: 477MB
> [ 11.718769] Total time taken by all kfree'ers: 10138649532 ns, loops: 10000, batches: 1159, memory footprint: 395MB
>
> Test patch for rcuperf is below. The memory footprint measurement for rcuperf
> is still under discussion in another thread, but I tested based on that anyway:
>
> ---8<-----------------------
>
> From d44e4c6112c388d39f7c2241e061dd77cca28d9e Mon Sep 17 00:00:00 2001
> From: Joel Fernandes <[email protected]>
> Date: Tue, 14 Jan 2020 09:59:23 -0500
> Subject: [PATCH] rcuperf: Add support to vary the slab object sizes
>
> Signed-off-by: Joel Fernandes <[email protected]>
> ---
> kernel/rcu/rcuperf.c | 43 ++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 36 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
> index a4a8d097d84d..216d7c072ca2 100644
> --- a/kernel/rcu/rcuperf.c
> +++ b/kernel/rcu/rcuperf.c
> @@ -600,17 +600,29 @@ static int kfree_nrealthreads;
> static atomic_t n_kfree_perf_thread_started;
> static atomic_t n_kfree_perf_thread_ended;
>
> -struct kfree_obj {
> - char kfree_obj[8];
> - struct rcu_head rh;
> -};
> +/*
> + * Define a kfree_obj with size as the @size parameter + the size of rcu_head
> + * (rcu_head is 16 bytes on 64-bit arch).
> + */
> +#define DEFINE_KFREE_OBJ(size) \
> +struct kfree_obj_ ## size { \
> + char kfree_obj[size]; \
> + struct rcu_head rh; \
> +}
> +
> +/* This should goto the right sized slabs on both 32-bit and 64-bit arch */
> +DEFINE_KFREE_OBJ(16); // goes on kmalloc-32 slab
> +DEFINE_KFREE_OBJ(32); // goes on kmalloc-64 slab
> +DEFINE_KFREE_OBJ(64); // goes on kmalloc-96 slab
> +DEFINE_KFREE_OBJ(96); // goes on kmalloc-128 slab
>
> static int
> kfree_perf_thread(void *arg)
> {
> int i, loop = 0;
> long me = (long)arg;
> - struct kfree_obj *alloc_ptr;
> + void *alloc_ptr;
> +
> u64 start_time, end_time;
> long long mem_begin, mem_during = 0;
>
> @@ -635,11 +647,28 @@ kfree_perf_thread(void *arg)
> }
>
> for (i = 0; i < kfree_alloc_num; i++) {
> - alloc_ptr = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
> + int kfree_type = i % 4;
> +
> + if (kfree_type == 0)
> + alloc_ptr = kmalloc(sizeof(struct kfree_obj_16), GFP_KERNEL);
> + else if (kfree_type == 1)
> + alloc_ptr = kmalloc(sizeof(struct kfree_obj_32), GFP_KERNEL);
> + else if (kfree_type == 2)
> + alloc_ptr = kmalloc(sizeof(struct kfree_obj_64), GFP_KERNEL);
> + else
> + alloc_ptr = kmalloc(sizeof(struct kfree_obj_96), GFP_KERNEL);
> +
> if (!alloc_ptr)
> return -ENOMEM;
>
> - kfree_rcu(alloc_ptr, rh);
> + if (kfree_type == 0)
> + kfree_rcu((struct kfree_obj_16 *)alloc_ptr, rh);
> + else if (kfree_type == 1)
> + kfree_rcu((struct kfree_obj_32 *)alloc_ptr, rh);
> + else if (kfree_type == 2)
> + kfree_rcu((struct kfree_obj_64 *)alloc_ptr, rh);
> + else
> + kfree_rcu((struct kfree_obj_96 *)alloc_ptr, rh);
> }
>
> cond_resched();
> --
> 2.25.0.rc1.283.g88dfdc4193-goog

I have also done some tests with your patch on my Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz, 12-CPU
machine to simulate different slab usage:

dev.2020.01.10a branch

# Default, CONFIG_SLAB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
[ 83.762963] Total time taken by all kfree'ers: 53607352517 ns, loops: 200000, batches: 1885, memory footprint: 1248MB
[ 80.108401] Total time taken by all kfree'ers: 53529637912 ns, loops: 200000, batches: 1921, memory footprint: 1193MB
[ 76.622252] Total time taken by all kfree'ers: 53570175705 ns, loops: 200000, batches: 1929, memory footprint: 1250MB

# With the patch, CONFIG_SLAB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
[ 48.265008] Total time taken by all kfree'ers: 23981587315 ns, loops: 200000, batches: 810, memory footprint: 1219MB
[ 53.263943] Total time taken by all kfree'ers: 23879375281 ns, loops: 200000, batches: 822, memory footprint: 1190MB
[ 50.366440] Total time taken by all kfree'ers: 24086841707 ns, loops: 200000, batches: 794, memory footprint: 1380MB

# Default, CONFIG_SLUB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
[ 81.818576] Total time taken by all kfree'ers: 51291025022 ns, loops: 200000, batches: 1713, memory footprint: 741MB
[ 77.854866] Total time taken by all kfree'ers: 51278911477 ns, loops: 200000, batches: 1671, memory footprint: 719MB
[ 76.329577] Total time taken by all kfree'ers: 51256183045 ns, loops: 200000, batches: 1719, memory footprint: 647MB

# With the patch, CONFIG_SLUB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
[ 76.254485] Total time taken by all kfree'ers: 50709919132 ns, loops: 200000, batches: 1618, memory footprint: 456MB
[ 75.891521] Total time taken by all kfree'ers: 50736297452 ns, loops: 200000, batches: 1633, memory footprint: 507MB
[ 76.172573] Total time taken by all kfree'ers: 50660403893 ns, loops: 200000, batches: 1628, memory footprint: 429MB

In the CONFIG_SLAB case there is roughly a twofold increase in performance but slightly higher memory usage.
As for CONFIG_SLUB, I still see higher performance figures plus lower memory usage with the patch.

Apart from that, I have got a report from the "kernel test robot":

<snip>
[ 13.957168] ------------[ cut here ]------------
[ 13.958256] ODEBUG: free active (active state 1) object type: rcu_head hint: 0x0
[ 13.962148] WARNING: CPU: 0 PID: 212 at lib/debugobjects.c:484 debug_print_object+0x95/0xd0
[ 13.964298] Modules linked in:
[ 13.964960] CPU: 0 PID: 212 Comm: kworker/0:2 Not tainted 5.5.0-rc1-00136-g883a2cefc0684 #1
[ 13.966712] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 13.968528] Workqueue: events kfree_rcu_work
[ 13.969466] RIP: 0010:debug_print_object+0x95/0xd0
[ 13.970480] Code: d2 e8 2f 06 d6 ff 8b 43 10 4d 89 f1 4c 89 e6 8b 4b 14 48 c7 c7 88 73 be 82 4d 8b 45 00 48 8b 14 c5 a0 5f 6d 82 e8 7b 65 c6 ff <0f> 0b b9 01 00 00 00 31 d2 be 01 00 00 00 48 c7 c7 98 b8 0c 83 e8
[ 13.974435] RSP: 0000:ffff888231677bf8 EFLAGS: 00010282
[ 13.975531] RAX: 0000000000000000 RBX: ffff88822d4200e0 RCX: 0000000000000000
[ 13.976730] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8306e028
[ 13.977568] RBP: ffff888231677c18 R08: 0000000000000000 R09: ffff888231670790
[ 13.978412] R10: ffff888231670000 R11: 0000000000000003 R12: ffffffff82bc5299
[ 13.979250] R13: ffffffff82e77360 R14: 0000000000000000 R15: dead000000000100
[ 13.980089] FS: 0000000000000000(0000) GS:ffffffff82e4f000(0000) knlGS:0000000000000000
[ 13.981069] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 13.981746] CR2: 00007f1e913fc77c CR3: 0000000225ce9000 CR4: 00000000000006f0
[ 13.982587] Call Trace:
[ 13.982911] __debug_check_no_obj_freed+0x19a/0x200
[ 13.983494] debug_check_no_obj_freed+0x14/0x20
[ 13.984036] free_pcp_prepare+0xee/0x1d0
[ 13.984541] free_unref_page+0x1b/0x80
[ 13.984994] __free_pages+0x19/0x20
[ 13.985503] __free_pages+0x13/0x20
[ 13.985924] slob_free_pages+0x7d/0x90
[ 13.986373] slob_free+0x34f/0x530
[ 13.986784] kfree+0x154/0x210
[ 13.987155] __kmem_cache_free_bulk+0x44/0x60
[ 13.987673] kmem_cache_free_bulk+0xe/0x10
[ 13.988163] kfree_rcu_work+0x95/0x310
[ 13.989010] ? kfree_rcu_work+0x64/0x310
[ 13.989884] process_one_work+0x378/0x7c0
[ 13.990770] worker_thread+0x40/0x600
[ 13.991587] kthread+0x14e/0x170
[ 13.992344] ? process_one_work+0x7c0/0x7c0
[ 13.993256] ? kthread_create_on_node+0x70/0x70
[ 13.994246] ret_from_fork+0x3a/0x50
[ 13.995039] ---[ end trace cdf242638b0e32a0 ]---
[child0:632] trace_fd was -1
<snip>

The trace happens when the kernel is built with CONFIG_DEBUG_OBJECTS_FREE
and CONFIG_DEBUG_OBJECTS_RCU_HEAD. It is not really a problem with the patch
itself, nor is there an actual bug here. The bulk path just does not pair the
debug_rcu_head_queue(head) call with a matching unqueue in the kfree_rcu_work()
function, which is why the kernel thinks it is freeing an active object that
is not actually active.
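
For reference, the pairing that the debug-objects machinery expects looks
roughly like this (a conceptual sketch of the calls already present in the
patch, not the actual v2 fix):

  /* kfree_call_rcu(): the rcu_head gets tracked when it is queued. */
  debug_rcu_head_queue(head);

  /* kfree_rcu_work(), emergency path: tracking is removed per object. */
  debug_rcu_head_unqueue(head);
  kfree((void *)head - offset);

  /* kfree_rcu_work(), bulk path in this v1: the pointers go straight to
   * kfree_bulk() with no matching unqueue, hence the report above. */
  kfree_bulk(bhead->nr_records, bhead->records);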

I will upload a V2 to fix that.

--
Vlad Rezki

2020-01-16 01:26:05

by Joel Fernandes

Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Wed, Jan 15, 2020 at 02:14:46PM +0100, Uladzislau Rezki wrote:
> Hello, Joel, Paul.
>
> Thank you for comments and testing!
>
> > >
> > > Nice improvement!
> > >
> > > But rcuperf uses a single block size, which turns into kfree_bulk() using
> > > a single slab, which results in good locality of reference. So I have to
> >
> > You meant a "single cache" category when you say "single slab"? Just to
> > mention, the number of slabs (in a single cache) when a large number of
> > objects are allocated is more than 1 (not single). With current rcuperf, I
> > see 100s of slabs (each slab being one page) in the kmalloc-32 cache. Each
> > slab contains around 128 objects of type kfree_rcu (24 byte object aligned to
> > 32-byte slab object).
> >
> I think that is about using different slab caches to break locality. It
> makes sense, IMHO, because usually the system make use of different slabs,
> because of different object sizes. From the other hand i guess there are
> test cases when only one slab gets used.

I was wondering about "locality". A cache can be split into many slabs. Only
the data within a page is local (contiguous). If there are a large number of
objects, allocation moves on to a new slab (in the same cache). At least for
the kmalloc caches, there is only one slab per page. So for example, if the
kmalloc-32 cache holds more than 128 objects, they end up on a different
slab / page. So how is there still locality?

Further, SLAB (not sure about SLUB) doesn't seem to do anything at the
moment to take advantage of locality within a slab.

That said, I am fully supportive of your patch and see the same
improvements as well, for the reasons you mentioned in the changelog.

> > > ask... Is this performance result representative of production workloads?
> >
> > I added more variation to allocation sizes to rcuperf (patch below) to distribute
> > allocations across 4 kmalloc slabs (32,64,96 and 128) and I see a signficant
> > improvement with Ulad's patch in SLAB in terms of completion time of the
> > test. Below are the results. With SLUB I see slightly higher memory
> > footprint, I have never used SLUB and not sure who is using it so I am not
> > too concerned since the degradation in memory footprint is only slight with
> > SLAB having the signifcant improvement.
> >
> Nice patch! I think, it would be useful to have it in "rcuperf" tool with
> extra parameter like "different_obj_sizes".

Cool, I posted something like this.

> > 2.25.0.rc1.283.g88dfdc4193-goog
> I also have done some tests with your patch on my Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz, 12xCPUs
> machine to simulate different slab usage:
>
> dev.2020.01.10a branch
>
> # Default, CONFIG_SLAB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
> [ 83.762963] Total time taken by all kfree'ers: 53607352517 ns, loops: 200000, batches: 1885, memory footprint: 1248MB
> [ 80.108401] Total time taken by all kfree'ers: 53529637912 ns, loops: 200000, batches: 1921, memory footprint: 1193MB
> [ 76.622252] Total time taken by all kfree'ers: 53570175705 ns, loops: 200000, batches: 1929, memory footprint: 1250MB
>
> # With the patch, CONFIG_SLAB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
> [ 48.265008] Total time taken by all kfree'ers: 23981587315 ns, loops: 200000, batches: 810, memory footprint: 1219MB
> [ 53.263943] Total time taken by all kfree'ers: 23879375281 ns, loops: 200000, batches: 822, memory footprint: 1190MB
> [ 50.366440] Total time taken by all kfree'ers: 24086841707 ns, loops: 200000, batches: 794, memory footprint: 1380MB
>
> # Default, CONFIG_SLUB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
> [ 81.818576] Total time taken by all kfree'ers: 51291025022 ns, loops: 200000, batches: 1713, memory footprint: 741MB
> [ 77.854866] Total time taken by all kfree'ers: 51278911477 ns, loops: 200000, batches: 1671, memory footprint: 719MB
> [ 76.329577] Total time taken by all kfree'ers: 51256183045 ns, loops: 200000, batches: 1719, memory footprint: 647MB
>
> # With the patch, CONFIG_SLUB, kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1, 16, 32, 64, 96 obj sizes
> [ 76.254485] Total time taken by all kfree'ers: 50709919132 ns, loops: 200000, batches: 1618, memory footprint: 456MB
> [ 75.891521] Total time taken by all kfree'ers: 50736297452 ns, loops: 200000, batches: 1633, memory footprint: 507MB
> [ 76.172573] Total time taken by all kfree'ers: 50660403893 ns, loops: 200000, batches: 1628, memory footprint: 429MB
>
> in case of CONFIG_SLAB there is double increase in performance but slightly higher memory usage.
> As for CONFIG_SLUB, i still see higher performance figures + lower memory usage with the patch.

Ok, testing today, our results are quite similar.

>
> Apart of that, I have got the report from the "kernel test robot":
> <snip>
> [ 13.957168] ------------[ cut here ]------------
> [ 13.958256] ODEBUG: free active (active state 1) object type: rcu_head hint: 0x0
> [ 13.962148] WARNING: CPU: 0 PID: 212 at lib/debugobjects.c:484 debug_print_object+0x95/0xd0
> [ 13.964298] Modules linked in:
> [ 13.964960] CPU: 0 PID: 212 Comm: kworker/0:2 Not tainted 5.5.0-rc1-00136-g883a2cefc0684 #1
> [ 13.966712] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
> [ 13.968528] Workqueue: events kfree_rcu_work
> [ 13.969466] RIP: 0010:debug_print_object+0x95/0xd0
> [ 13.970480] Code: d2 e8 2f 06 d6 ff 8b 43 10 4d 89 f1 4c 89 e6 8b 4b 14 48 c7 c7 88 73 be 82 4d 8b 45 00 48 8b 14 c5 a0 5f 6d 82 e8 7b 65 c6 ff <0f> 0b b9 01 00 00 00 31 d2 be 01 00 00 00 48 c7 c7 98 b8 0c 83 e8
> [ 13.974435] RSP: 0000:ffff888231677bf8 EFLAGS: 00010282
> [ 13.975531] RAX: 0000000000000000 RBX: ffff88822d4200e0 RCX: 0000000000000000
> [ 13.976730] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8306e028
> [ 13.977568] RBP: ffff888231677c18 R08: 0000000000000000 R09: ffff888231670790
> [ 13.978412] R10: ffff888231670000 R11: 0000000000000003 R12: ffffffff82bc5299
> [ 13.979250] R13: ffffffff82e77360 R14: 0000000000000000 R15: dead000000000100
> [ 13.980089] FS: 0000000000000000(0000) GS:ffffffff82e4f000(0000) knlGS:0000000000000000
> [ 13.981069] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 13.981746] CR2: 00007f1e913fc77c CR3: 0000000225ce9000 CR4: 00000000000006f0
> [ 13.982587] Call Trace:
> [ 13.982911] __debug_check_no_obj_freed+0x19a/0x200
> [ 13.983494] debug_check_no_obj_freed+0x14/0x20
> [ 13.984036] free_pcp_prepare+0xee/0x1d0
> [ 13.984541] free_unref_page+0x1b/0x80
> [ 13.984994] __free_pages+0x19/0x20
> [ 13.985503] __free_pages+0x13/0x20
> [ 13.985924] slob_free_pages+0x7d/0x90
> [ 13.986373] slob_free+0x34f/0x530
> [ 13.986784] kfree+0x154/0x210
> [ 13.987155] __kmem_cache_free_bulk+0x44/0x60
> [ 13.987673] kmem_cache_free_bulk+0xe/0x10
> [ 13.988163] kfree_rcu_work+0x95/0x310
> [ 13.989010] ? kfree_rcu_work+0x64/0x310
> [ 13.989884] process_one_work+0x378/0x7c0
> [ 13.990770] worker_thread+0x40/0x600
> [ 13.991587] kthread+0x14e/0x170
> [ 13.992344] ? process_one_work+0x7c0/0x7c0
> [ 13.993256] ? kthread_create_on_node+0x70/0x70
> [ 13.994246] ret_from_fork+0x3a/0x50
> [ 13.995039] ---[ end trace cdf242638b0e32a0 ]---
> [child0:632] trace_fd was -1
> <snip>
>
> the trace happens when the kernel is built with CONFIG_DEBUG_OBJECTS_FREE
> and CONFIG_DEBUG_OBJECTS_RCU_HEAD. Basically it is not a problem of the patch
> itself or there is any bug there. It just does not pair with debug_rcu_head_queue(head)
> in the kfree_rcu_work() function, that is why the kernel thinks about freeing
> an active object that is not active in reality.
>
> I will upload a V2 to fix that.

Oh good point. Thanks for fixing that.

thanks,

- Joel

2020-01-16 03:37:51

by Joel Fernandes

Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Tue, Dec 31, 2019 at 01:22:41PM +0100, Uladzislau Rezki (Sony) wrote:
> kfree_rcu() logic can be improved further by using kfree_bulk()
> interface along with "basic batching support" introduced earlier.
>
> The are at least two advantages of using "bulk" interface:
> - in case of large number of kfree_rcu() requests kfree_bulk()
> reduces the per-object overhead caused by calling kfree()
> per-object.
>
> - reduces the number of cache-misses due to "pointer chasing"
> between objects which can be far spread between each other.
>
> This approach defines a new kfree_rcu_bulk_data structure that
> stores pointers in an array with a specific size. Number of entries
> in that array depends on PAGE_SIZE making kfree_rcu_bulk_data
> structure to be exactly one page.
>
> Since it deals with "block-chain" technique there is an extra
> need in dynamic allocation when a new block is required. Memory
> is allocated with GFP_NOWAIT | __GFP_NOWARN flags, i.e. that
> allows to skip direct reclaim under low memory condition to
> prevent stalling and fails silently under high memory pressure.
>
> The "emergency path" gets maintained when a system is run out
> of memory. In that case objects are linked into regular list
> and that is it.
>
> In order to evaluate it, the "rcuperf" was run to analyze how
> much memory is consumed and what is kfree_bulk() throughput.
>
> Testing on the HiKey-960, arm64, 8xCPUs with below parameters:
>
> CONFIG_SLAB=y
> kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1
>
> 102898760401 ns, loops: 200000, batches: 5822, memory footprint: 158MB
> 89947009882 ns, loops: 200000, batches: 6715, memory footprint: 115MB
>
> rcuperf shows approximately ~12% better throughput(Total time)
> in case of using "bulk" interface. The "drain logic" or its RCU
> callback does the work faster that leads to better throughput.

Tested-by: Joel Fernandes (Google) <[email protected]>

(Vlad is going to post a v2 which fixes a debugobjects bug but that should
not have any impact on testing).

thanks,

- Joel



>
> Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>
> ---
> kernel/rcu/tree.c | 154 ++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 130 insertions(+), 24 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 48fba2257748..4ee5c737558b 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2754,22 +2754,45 @@ EXPORT_SYMBOL_GPL(call_rcu);
> #define KFREE_DRAIN_JIFFIES (HZ / 50)
> #define KFREE_N_BATCHES 2
>
> +/*
> + * This macro defines how many entries the "records" array
> + * will contain. It is based on the fact that the size of
> + * kfree_rcu_bulk_data structure becomes exactly one page.
> + */
> +#define KFREE_BULK_MAX_ENTR ((PAGE_SIZE / sizeof(void *)) - 2)
> +
> +/**
> + * struct kfree_rcu_bulk_data - single block to store kfree_rcu() pointers
> + * @nr_records: Number of active pointers in the array
> + * @records: Array of the kfree_rcu() pointers
> + * @next: Next bulk object in the block chain
> + */
> +struct kfree_rcu_bulk_data {
> + unsigned long nr_records;
> + void *records[KFREE_BULK_MAX_ENTR];
> + struct kfree_rcu_bulk_data *next;
> +};
> +
> /**
> * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
> * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
> * @head_free: List of kfree_rcu() objects waiting for a grace period
> + * @bhead_free: Bulk-List of kfree_rcu() objects waiting for a grace period
> * @krcp: Pointer to @kfree_rcu_cpu structure
> */
>
> struct kfree_rcu_cpu_work {
> struct rcu_work rcu_work;
> struct rcu_head *head_free;
> + struct kfree_rcu_bulk_data *bhead_free;
> struct kfree_rcu_cpu *krcp;
> };
>
> /**
> * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
> * @head: List of kfree_rcu() objects not yet waiting for a grace period
> + * @bhead: Bulk-List of kfree_rcu() objects not yet waiting for a grace period
> + * @bcached: Keeps at most one object for later reuse when build chain blocks
> * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period
> * @lock: Synchronize access to this structure
> * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES
> @@ -2783,6 +2806,8 @@ struct kfree_rcu_cpu_work {
> */
> struct kfree_rcu_cpu {
> struct rcu_head *head;
> + struct kfree_rcu_bulk_data *bhead;
> + struct kfree_rcu_bulk_data *bcached;
> struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
> spinlock_t lock;
> struct delayed_work monitor_work;
> @@ -2800,6 +2825,7 @@ static void kfree_rcu_work(struct work_struct *work)
> {
> unsigned long flags;
> struct rcu_head *head, *next;
> + struct kfree_rcu_bulk_data *bhead, *bnext;
> struct kfree_rcu_cpu *krcp;
> struct kfree_rcu_cpu_work *krwp;
>
> @@ -2809,22 +2835,39 @@ static void kfree_rcu_work(struct work_struct *work)
> spin_lock_irqsave(&krcp->lock, flags);
> head = krwp->head_free;
> krwp->head_free = NULL;
> + bhead = krwp->bhead_free;
> + krwp->bhead_free = NULL;
> spin_unlock_irqrestore(&krcp->lock, flags);
>
> - // List "head" is now private, so traverse locklessly.
> + /* List "bhead" is now private, so traverse locklessly. */
> + for (; bhead; bhead = bnext) {
> + bnext = bhead->next;
> +
> + rcu_lock_acquire(&rcu_callback_map);
> + kfree_bulk(bhead->nr_records, bhead->records);
> + rcu_lock_release(&rcu_callback_map);
> +
> + if (cmpxchg(&krcp->bcached, NULL, bhead))
> + free_page((unsigned long) bhead);
> +
> + cond_resched_tasks_rcu_qs();
> + }
> +
> + /*
> + * Emergency case only. It can happen under low memory
> + * condition when an allocation gets failed, so the "bulk"
> + * path can not be temporary maintained.
> + */
> for (; head; head = next) {
> unsigned long offset = (unsigned long)head->func;
>
> next = head->next;
> - // Potentially optimize with kfree_bulk in future.
> debug_rcu_head_unqueue(head);
> rcu_lock_acquire(&rcu_callback_map);
> trace_rcu_invoke_kfree_callback(rcu_state.name, head, offset);
>
> - if (!WARN_ON_ONCE(!__is_kfree_rcu_offset(offset))) {
> - /* Could be optimized with kfree_bulk() in future. */
> + if (!WARN_ON_ONCE(!__is_kfree_rcu_offset(offset)))
> kfree((void *)head - offset);
> - }
>
> rcu_lock_release(&rcu_callback_map);
> cond_resched_tasks_rcu_qs();
> @@ -2839,26 +2882,45 @@ static void kfree_rcu_work(struct work_struct *work)
> */
> static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
> {
> + struct kfree_rcu_cpu_work *krwp;
> + bool queued = false;
> int i;
> - struct kfree_rcu_cpu_work *krwp = NULL;
>
> lockdep_assert_held(&krcp->lock);
> - for (i = 0; i < KFREE_N_BATCHES; i++)
> - if (!krcp->krw_arr[i].head_free) {
> - krwp = &(krcp->krw_arr[i]);
> - break;
> - }
>
> - // If a previous RCU batch is in progress, we cannot immediately
> - // queue another one, so return false to tell caller to retry.
> - if (!krwp)
> - return false;
> + for (i = 0; i < KFREE_N_BATCHES; i++) {
> + krwp = &(krcp->krw_arr[i]);
>
> - krwp->head_free = krcp->head;
> - krcp->head = NULL;
> - INIT_RCU_WORK(&krwp->rcu_work, kfree_rcu_work);
> - queue_rcu_work(system_wq, &krwp->rcu_work);
> - return true;
> + /*
> + * Try to detach bhead or head and attach it over any
> + * available corresponding free channel. It can be that
> + * a previous RCU batch is in progress, it means that
> + * immediately to queue another one is not possible so
> + * return false to tell caller to retry.
> + */
> + if ((krcp->bhead && !krwp->bhead_free) ||
> + (krcp->head && !krwp->head_free)) {
> + if (!krwp->bhead_free) {
> + krwp->bhead_free = krcp->bhead;
> + krcp->bhead = NULL;
> + }
> +
> + if (!krwp->head_free) {
> + krwp->head_free = krcp->head;
> + krcp->head = NULL;
> + }
> +
> + /*
> + * The work can already be queued. If so, it means that
> + * within a short time, second, either head or bhead has
> + * been detached as well.
> + */
> + queue_rcu_work(system_wq, &krwp->rcu_work);
> + queued = true;
> + }
> + }
> +
> + return queued;
> }
>
> static inline void kfree_rcu_drain_unlock(struct kfree_rcu_cpu *krcp,
> @@ -2895,6 +2957,39 @@ static void kfree_rcu_monitor(struct work_struct *work)
> spin_unlock_irqrestore(&krcp->lock, flags);
> }
>
> +static inline bool
> +kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
> +{
> + struct kfree_rcu_bulk_data *bnode;
> +
> + if (unlikely(!krcp->initialized))
> + return false;
> +
> + lockdep_assert_held(&krcp->lock);
> +
> + /* Check if a new block is required. */
> + if (!krcp->bhead ||
> + krcp->bhead->nr_records == KFREE_BULK_MAX_ENTR) {
> + bnode = xchg(&krcp->bcached, NULL);
> + if (!bnode)
> + bnode = (struct kfree_rcu_bulk_data *)
> + __get_free_page(GFP_NOWAIT | __GFP_NOWARN);
> +
> + /* No cache or an allocation got failed. */
> + if (unlikely(!bnode))
> + return false;
> +
> + /* Initialize the new block. */
> + bnode->nr_records = 0;
> + bnode->next = krcp->bhead;
> + krcp->bhead = bnode;
> + }
> +
> + /* Finally insert. */
> + krcp->bhead->records[krcp->bhead->nr_records++] = ptr;
> + return true;
> +}
> +
> /*
> * Queue a request for lazy invocation of kfree() after a grace period.
> *
> @@ -2926,9 +3021,17 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
> __func__, head);
> goto unlock_return;
> }
> - head->func = func;
> - head->next = krcp->head;
> - krcp->head = head;
> +
> + /*
> + * Under high memory pressure GFP_NOWAIT can fail,
> + * in that case the emergency path is maintained.
> + */
> + if (unlikely(!kfree_call_rcu_add_ptr_to_bulk(krcp,
> + (void *) head - (unsigned long) func))) {
> + head->func = func;
> + head->next = krcp->head;
> + krcp->head = head;
> + }
>
> // Set timer to drain after KFREE_DRAIN_JIFFIES.
> if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
> @@ -3834,8 +3937,11 @@ static void __init kfree_rcu_batch_init(void)
> struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
>
> spin_lock_init(&krcp->lock);
> - for (i = 0; i < KFREE_N_BATCHES; i++)
> + for (i = 0; i < KFREE_N_BATCHES; i++) {
> + INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
> krcp->krw_arr[i].krcp = krcp;
> + }
> +
> INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
> krcp->initialized = true;
> }
> --
> 2.20.1
>

2020-01-16 04:04:12

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Wed, Jan 15, 2020 at 08:14:10PM -0500, Joel Fernandes wrote:
> On Tue, Dec 31, 2019 at 01:22:41PM +0100, Uladzislau Rezki (Sony) wrote:
>
> Tested-by: Joel Fernandes (Google) <[email protected]>
>
> (Vlad is going to post a v2 which fixes a debugobjects bug but that should
> not have any impact on testing).

Very good! Uladzislau, could you please add Joel's Tested-by in
your next posting?

Thanx, Paul

> thanks,
>
> - Joel

2020-01-16 17:47:18

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Thu, Jan 16, 2020 at 06:27:53PM +0100, Uladzislau Rezki wrote:
> On Wed, Jan 15, 2020 at 06:41:26PM -0800, Paul E. McKenney wrote:
> > On Wed, Jan 15, 2020 at 08:14:10PM -0500, Joel Fernandes wrote:
> > > On Tue, Dec 31, 2019 at 01:22:41PM +0100, Uladzislau Rezki (Sony) wrote:
> > >
> > > Tested-by: Joel Fernandes (Google) <[email protected]>
> > >
> > > (Vlad is going to post a v2 which fixes a debugobjects bug but that should
> > > not have any impact on testing).
> >
> > Very good! Uladzislau, could you please add Joel's Tested-by in
> > your next posting?
> >
> I will add it for sure, with the V2 version. Also, I will update the
> commit message by adding the results related to different slab cache
> usage, I mean with Joel's recent patch.

Sounds good, looking forward to it!

Thanx, Paul

2020-01-16 18:22:48

by Uladzislau Rezki

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Wed, Jan 15, 2020 at 06:41:26PM -0800, Paul E. McKenney wrote:
> On Wed, Jan 15, 2020 at 08:14:10PM -0500, Joel Fernandes wrote:
> > On Tue, Dec 31, 2019 at 01:22:41PM +0100, Uladzislau Rezki (Sony) wrote:
> >
> > Tested-by: Joel Fernandes (Google) <[email protected]>
> >
> > (Vlad is going to post a v2 which fixes a debugobjects bug but that should
> > not have any impact on testing).
>
> Very good! Uladzislau, could you please add Joel's Tested-by in
> your next posting?
>
I will add it for sure, with the V2 version. Also, I will update the
commit message by adding the results related to different slab cache
usage, I mean with Joel's recent patch.

Thank you.

--
Vlad Rezki

2020-01-16 23:10:12

by Uladzislau Rezki

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

>
> Tested-by: Joel Fernandes (Google) <[email protected]>
>
Thank you, I appreciate your help, Joel.

>
> (Vlad is going to post a v2 which fixes a debugobjects bug but that should
> not have any impact on testing).
>
I will do that :)

--
Vlad Rezki

2020-01-17 17:53:45

by Uladzislau Rezki

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

> > > > But rcuperf uses a single block size, which turns into kfree_bulk() using
> > > > a single slab, which results in good locality of reference. So I have to
> > >
> > > You meant a "single cache" category when you say "single slab"? Just to
> > > mention, the number of slabs (in a single cache) when a large number of
> > > objects are allocated is more than 1 (not single). With current rcuperf, I
> > > see 100s of slabs (each slab being one page) in the kmalloc-32 cache. Each
> > > slab contains around 128 objects of type kfree_rcu (24 byte object aligned to
> > > 32-byte slab object).
> > >
> > I think that is about using different slab caches to break locality. It
> > makes sense, IMHO, because usually the system make use of different slabs,
> > because of different object sizes. From the other hand i guess there are
> > test cases when only one slab gets used.
>
> I was wondering about "locality". A cache can be split into many slabs. Only
> the data on a page is local (contiguous). If there are a large number of
> objects, then it goes to a new slab (on the same cache). At least on the
> kmalloc slabs, there is only 1 slab per page. So for example, if on
> kmalloc-32 slab, there are more than 128 objects, then it goes to a different
> slab / page. So how is there still locality?
>
Hmm.. On a high level:

one slab cache manages a specific object size, i.e. the slab memory consists of
contiguous pages of memory (4096 bytes or so; probably not contiguous once the
slab grows) divided into equal-sized objects. For example, when kmalloc() gets
called, the appropriate cache (the slab that serves only that specific size) is
selected and an object assigned from it is returned.

But that is theory and I have not deeply analyzed how the SLAB works internally,
so I can be wrong :)

You mentioned 128 objects per slab in the kmalloc-32 slab cache. But all of
them follow each other, I mean it is sequential, like a regular array. In
that sense freeing can be beneficial, because when an access is done to any object
the whole CPU cache line is fetched (if it was not already), usually it is 64K.

That is what I meant by "locality". In order to "break" it I meant to allocate
from different slabs to see how kfree_slub() behaves in that sense, which is a
more realistic scenario and workload, I think.
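
Just to illustrate what I mean by walking a plain array versus chasing
pointers, here is a userspace-only sketch (the names are made up and it is
not the kernel code, only the idea):

#include <stdlib.h>

/* Stand-in for an rcu_head style chain; purely illustrative. */
struct obj {
        struct obj *next;
};

/*
 * "Bulk" flavour: the pointers to free are packed into one contiguous
 * array, so finding the next pointer is a sequential array read and
 * does not depend on the object that was just freed.
 */
void free_array(void **records, unsigned long nr)
{
        unsigned long i;

        for (i = 0; i < nr; i++)
                free(records[i]);
}

/*
 * "List" flavour: finding the next victim means dereferencing ->next
 * inside the current object, so the objects themselves are pulled into
 * the cache one by one, in whatever order they happen to be linked.
 */
void free_list(struct obj *head)
{
        struct obj *next;

        for (; head; head = next) {
                next = head->next;
                free(head);
        }
}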

--
Vlad Rezki

2020-01-17 18:58:46

by Joel Fernandes

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Fri, Jan 17, 2020 at 06:52:17PM +0100, Uladzislau Rezki wrote:
> Hmm.. On a high level:
>
> one slab cache manages a specific object size, i.e. the slab memory consists of
> contiguous pages of memory (4096 bytes or so; probably not contiguous once the
> slab grows) divided into equal-sized objects. For example, when kmalloc() gets
> called, the appropriate cache (the slab that serves only that specific size) is
> selected and an object assigned from it is returned.
>
> But that is theory and I have not deeply analyzed how the SLAB works internally,
> so I can be wrong :)
>
> You mentioned 128 objects per slab in the kmalloc-32 slab cache. But all of
> them follow each other, I mean it is sequential, like a regular array. In

Yes, for these 128 objects it is sequential. But the next 128 could be on
some other page, is what I was saying. And we are allocating 10s of 1000s of
objects in this test. (I believe pages are sequential only per slab and not
for a different slab within the same cache.)
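
(Rough arithmetic, just for illustration: with 4096-byte pages and 32-byte
slab objects that is roughly 4096 / 32 = 128 objects per slab page, so e.g.
50000 queued objects already span on the order of 400 different pages.)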

> that sense freeing can be beneficial, because when an access is done to any object
> the whole CPU cache line is fetched (if it was not already), usually it is 64K.

You mean the size of the whole L1 cache, right? Cache lines are on the order of bytes.

> That is what I meant by "locality". In order to "break" it I meant to allocate
> from different slabs to see how kfree_slub() behaves in that sense, which is a
> more realistic scenario and workload, I think.

Ok, agreed.
(BTW I do agree your patch is beneficial, just wanted to get the slab
discussion right).

thanks,

- Joel

2020-01-17 21:39:21

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Fri, Jan 17, 2020 at 01:57:32PM -0500, Joel Fernandes wrote:
> Ok, agreed.
> (BTW I do agree your patch is beneficial, just wanted to get the slab
> discussion right).

Thank you both!

Then I should be looking for an updated version of the patch with an upgraded
commit log? Or is there more investigation/testing/review in process?

Thanx, Paul

2020-01-17 22:01:40

by Joel Fernandes

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

On Fri, Jan 17, 2020 at 01:37:21PM -0800, Paul E. McKenney wrote:
>
> Thank you both!
>
> Then I should be looking for an updated version of the patch with an upgraded
> commit log? Or is there more investigation/testing/review in process?
>

From my side the review is complete. I believe he will repost with the
debugobjects fix and we should be good.

2020-01-19 13:06:33

by Uladzislau Rezki

[permalink] [raw]
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()

Hello, Paul, Joel.

> >
> > Thank you both!
> >
> > Then I should be looking for an updated version of the patch with an upgraded
> > commit log? Or is there more investigation/testing/review in process?
> >
>
> From my side the review is complete. I believe he will repost with the
> debugobjects fix and we should be good.
>
I have put the V2 under test over the weekend, so I will post it next week.
Yes, V2 will contain the debugobjects fix. Also I need to add a tracing probe,
something like:

..
trace_rcu_invoke_kfree_bulk_callback();
kfree_bulk(bhead->nr_records, bhead->records);
..

Probably it can be done as a separate patch.
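
Presumably the probe would at least want to know how many objects each bulk
invocation frees, so the call site could end up looking something like this
(just a sketch at this point, the final tracepoint signature may differ):

..
trace_rcu_invoke_kfree_bulk_callback(rcu_state.name,
	bhead->nr_records, bhead->records);
kfree_bulk(bhead->nr_records, bhead->records);
..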

Thank you.

--
Vlad Rezki