2024-05-16 13:51:26

by Carlos Llamas

Subject: [PATCH v3] binder: use bitmap for faster descriptor lookup

When creating new binder references, the driver assigns a descriptor id
that is shared with userspace. Regrettably, the driver needs to keep the
descriptors small enough to accommodate userspace potentially using them
as Vector indexes. Currently, the driver performs a linear search on the
rb-tree of references to find the smallest available descriptor id. This
approach, however, scales poorly as the number of references grows.

This patch introduces the usage of bitmaps to boost the performance of
descriptor assignments. This optimization results in notable performance
gains, particularly in processes with a large number of references. The
following benchmark with 100,000 references showcases the difference in
latency between the dbitmap implementation and the legacy approach:

[ 587.145098] get_ref_desc_olocked: 15us (dbitmap on)
[ 602.788623] get_ref_desc_olocked: 47343us (dbitmap off)

Note the bitmap size is dynamically adjusted in line with the number of
references, ensuring efficient memory usage. In cases where growing the
bitmap is not possible, the driver falls back to the slow legacy method.

A previous attempt to solve this issue was proposed in [1]. However,
such method involved adding new ioctls which isn't great, plus older
userspace code would not have benefited from the optimizations either.

Link: https://lore.kernel.org/all/[email protected]/ [1]
Cc: Tim Murray <[email protected]>
Cc: Arve Hjønnevåg <[email protected]>
Cc: Alice Ryhl <[email protected]>
Cc: Martijn Coenen <[email protected]>
Cc: Todd Kjos <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Steven Moreland <[email protected]>
Suggested-by: Nick Chen <[email protected]>
Signed-off-by: Carlos Llamas <[email protected]>
---

Notes:
v1: first solution using a new "flags" ioctl:
https://lore.kernel.org/all/[email protected]/

v2: switched to different implementation using bitmaps.

v3: dropped __GFP_ZERO when using zalloc().
renamed ...get_first_zero_bit() to ...acquire_first_zero_bit()

drivers/android/binder.c | 113 +++++++++++++++++++++---
drivers/android/binder_internal.h | 5 +-
drivers/android/dbitmap.h | 139 ++++++++++++++++++++++++++++++
3 files changed, 243 insertions(+), 14 deletions(-)
create mode 100644 drivers/android/dbitmap.h

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index dd6923d37931..39e3d840fb96 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -1045,6 +1045,67 @@ static struct binder_ref *binder_get_ref_olocked(struct binder_proc *proc,
return NULL;
}

+/* find first non-zero available descriptor the "slow way" */
+static u32 slow_desc_lookup_olocked(struct binder_proc *proc)
+{
+ struct binder_ref *ref;
+ struct rb_node *n;
+ u32 desc;
+
+ desc = 1;
+ for (n = rb_first(&proc->refs_by_desc); n; n = rb_next(n)) {
+ ref = rb_entry(n, struct binder_ref, rb_node_desc);
+ if (ref->data.desc > desc)
+ break;
+ desc = ref->data.desc + 1;
+ }
+
+ return desc;
+}
+
+/*
+ * Find an available reference descriptor. Note that the proc->outer_lock
+ * might be released in the process. In such case, -EAGAIN is returned and
+ * the provided descriptor is invalid.
+ */
+static int get_ref_desc_olocked(struct binder_proc *proc,
+ struct binder_node *node,
+ u32 *desc)
+{
+ struct dbitmap *dmap = &proc->dmap;
+ unsigned long *new, bit;
+ unsigned int nbits;
+
+ /* 0 is reserved for the context manager */
+ if (node == proc->context->binder_context_mgr_node) {
+ *desc = 0;
+ return 0;
+ }
+
+ if (unlikely(!dbitmap_enabled(dmap))) {
+ *desc = slow_desc_lookup_olocked(proc);
+ return 0;
+ }
+
+ if (dbitmap_acquire_first_zero_bit(dmap, &bit) == 0) {
+ *desc = bit;
+ return 0;
+ }
+
+ /*
+ * The descriptors bitmap is full and needs to be expanded.
+ * The proc->outer_lock is briefly released to allocate the
+ * new bitmap safely.
+ */
+ nbits = dbitmap_expand_nbits(dmap);
+ binder_proc_unlock(proc);
+ new = bitmap_zalloc(nbits, GFP_KERNEL);
+ binder_proc_lock(proc);
+ dbitmap_expand(dmap, new, nbits);
+
+ return -EAGAIN;
+}
+
/**
* binder_get_ref_for_node_olocked() - get the ref associated with given node
* @proc: binder_proc that owns the ref
@@ -1068,12 +1129,14 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
struct binder_node *node,
struct binder_ref *new_ref)
{
- struct binder_context *context = proc->context;
- struct rb_node **p = &proc->refs_by_node.rb_node;
- struct rb_node *parent = NULL;
struct binder_ref *ref;
- struct rb_node *n;
+ struct rb_node *parent;
+ struct rb_node **p;
+ u32 desc;

+retry:
+ p = &proc->refs_by_node.rb_node;
+ parent = NULL;
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_node);
@@ -1088,6 +1151,10 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
if (!new_ref)
return NULL;

+ /* might release the proc->outer_lock */
+ if (get_ref_desc_olocked(proc, node, &desc) == -EAGAIN)
+ goto retry;
+
binder_stats_created(BINDER_STAT_REF);
new_ref->data.debug_id = atomic_inc_return(&binder_last_id);
new_ref->proc = proc;
@@ -1095,14 +1162,7 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
rb_link_node(&new_ref->rb_node_node, parent, p);
rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);

- new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
- for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
- ref = rb_entry(n, struct binder_ref, rb_node_desc);
- if (ref->data.desc > new_ref->data.desc)
- break;
- new_ref->data.desc = ref->data.desc + 1;
- }
-
+ new_ref->data.desc = desc;
p = &proc->refs_by_desc.rb_node;
while (*p) {
parent = *p;
@@ -1131,6 +1191,7 @@ static struct binder_ref *binder_get_ref_for_node_olocked(

static void binder_cleanup_ref_olocked(struct binder_ref *ref)
{
+ struct dbitmap *dmap = &ref->proc->dmap;
bool delete_node = false;

binder_debug(BINDER_DEBUG_INTERNAL_REFS,
@@ -1138,6 +1199,8 @@ static void binder_cleanup_ref_olocked(struct binder_ref *ref)
ref->proc->pid, ref->data.debug_id, ref->data.desc,
ref->node->debug_id);

+ if (dbitmap_enabled(dmap))
+ dbitmap_clear_bit(dmap, ref->data.desc);
rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);

@@ -1298,6 +1361,25 @@ static void binder_free_ref(struct binder_ref *ref)
kfree(ref);
}

+/* shrink descriptor bitmap if needed */
+static void try_shrink_dmap(struct binder_proc *proc)
+{
+ unsigned long *new;
+ int nbits;
+
+ binder_proc_lock(proc);
+ nbits = dbitmap_shrink_nbits(&proc->dmap);
+ binder_proc_unlock(proc);
+
+ if (!nbits)
+ return;
+
+ new = bitmap_zalloc(nbits, GFP_KERNEL);
+ binder_proc_lock(proc);
+ dbitmap_shrink(&proc->dmap, new, nbits);
+ binder_proc_unlock(proc);
+}
+
/**
* binder_update_ref_for_handle() - inc/dec the ref for given handle
* @proc: proc containing the ref
@@ -1334,8 +1416,10 @@ static int binder_update_ref_for_handle(struct binder_proc *proc,
*rdata = ref->data;
binder_proc_unlock(proc);

- if (delete_ref)
+ if (delete_ref) {
binder_free_ref(ref);
+ try_shrink_dmap(proc);
+ }
return ret;

err_no_ref:
@@ -4931,6 +5015,7 @@ static void binder_free_proc(struct binder_proc *proc)
put_task_struct(proc->tsk);
put_cred(proc->cred);
binder_stats_deleted(BINDER_STAT_PROC);
+ dbitmap_free(&proc->dmap);
kfree(proc);
}

@@ -5634,6 +5719,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
return -ENOMEM;
+
+ dbitmap_init(&proc->dmap);
spin_lock_init(&proc->inner_lock);
spin_lock_init(&proc->outer_lock);
get_task_struct(current->group_leader);
diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
index 7270d4d22207..256bb75d9d8c 100644
--- a/drivers/android/binder_internal.h
+++ b/drivers/android/binder_internal.h
@@ -14,6 +14,7 @@
#include <linux/uidgid.h>
#include <uapi/linux/android/binderfs.h>
#include "binder_alloc.h"
+#include "dbitmap.h"

struct binder_context {
struct binder_node *binder_context_mgr_node;
@@ -368,6 +369,8 @@ struct binder_ref {
* @freeze_wait: waitqueue of processes waiting for all outstanding
* transactions to be processed
* (protected by @inner_lock)
+ * @dmap bitmap to manage available reference descriptors
+ * (protected by @outer_lock)
* @todo: list of work for this process
* (protected by @inner_lock)
* @stats: per-process binder statistics
@@ -417,7 +420,7 @@ struct binder_proc {
bool sync_recv;
bool async_recv;
wait_queue_head_t freeze_wait;
-
+ struct dbitmap dmap;
struct list_head todo;
struct binder_stats stats;
struct list_head delivered_death;
diff --git a/drivers/android/dbitmap.h b/drivers/android/dbitmap.h
new file mode 100644
index 000000000000..496f3f904b6c
--- /dev/null
+++ b/drivers/android/dbitmap.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _LINUX_DBITMAP_H
+#define _LINUX_DBITMAP_H
+#include <linux/bitmap.h>
+
+#define NBITS_MIN BITS_PER_TYPE(unsigned long)
+
+struct dbitmap {
+ unsigned int nbits;
+ unsigned long *map;
+};
+
+static inline int dbitmap_enabled(struct dbitmap *dmap)
+{
+ return dmap->map != NULL;
+}
+
+static inline void dbitmap_free(struct dbitmap *dmap)
+{
+ dmap->nbits = 0;
+ kfree(dmap->map);
+ dmap->map = NULL;
+}
+
+static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
+{
+ unsigned int bit;
+
+ if (dmap->nbits <= NBITS_MIN)
+ return 0;
+
+ bit = find_last_bit(dmap->map, dmap->nbits);
+ if (unlikely(bit == dmap->nbits))
+ return NBITS_MIN;
+
+ if (unlikely(bit <= (dmap->nbits >> 2)))
+ return dmap->nbits >> 1;
+
+ return 0;
+}
+
+static inline void
+dbitmap_replace(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ bitmap_copy(new, dmap->map, min(dmap->nbits, nbits));
+ kfree(dmap->map);
+ dmap->map = new;
+ dmap->nbits = nbits;
+}
+
+static inline void
+dbitmap_shrink(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ if (unlikely(!new))
+ return;
+
+ /*
+ * Make sure we can still shrink to the requested nbits as
+ * this call might have raced with another shrink or more
+ * bits have been assigned. In such case, release the @new
+ * bitmap and move on.
+ */
+ if (unlikely(!dbitmap_enabled(dmap) ||
+ dbitmap_shrink_nbits(dmap) != nbits)) {
+ kfree(new);
+ return;
+ }
+
+ dbitmap_replace(dmap, new, nbits);
+}
+
+static inline unsigned int
+dbitmap_expand_nbits(struct dbitmap *dmap)
+{
+ return dmap->nbits << 1;
+}
+
+static inline void
+dbitmap_expand(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ /*
+ * Determine if the expand is still valid as it might have
+ * raced with another expand or free. In such case, release
+ * the @new bitmap and move on.
+ */
+ if (unlikely(!dbitmap_enabled(dmap) || nbits <= dmap->nbits)) {
+ kfree(new);
+ return;
+ }
+
+ /*
+ * ENOMEM is checked here as we can now discard a potential
+ * race with another successful expand. In such case, disable
+ * the dbitmap and fallback to slow_desc_lookup_olocked().
+ */
+ if (unlikely(!new)) {
+ dbitmap_free(dmap);
+ return;
+ }
+
+ dbitmap_replace(dmap, new, nbits);
+}
+
+static inline int
+dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit)
+{
+ unsigned long n;
+
+ n = find_first_zero_bit(dmap->map, dmap->nbits);
+ if (unlikely(n == dmap->nbits))
+ return -ENOSPC;
+
+ *bit = n;
+ set_bit(n, dmap->map);
+
+ return 0;
+}
+
+static inline void
+dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit)
+{
+ clear_bit(bit, dmap->map);
+}
+
+static inline int dbitmap_init(struct dbitmap *dmap)
+{
+ dmap->map = bitmap_zalloc(NBITS_MIN, GFP_KERNEL);
+ if (!dmap->map) {
+ dmap->nbits = 0;
+ return -ENOMEM;
+ }
+
+ dmap->nbits = NBITS_MIN;
+ /* 0 is reserved for the context manager */
+ set_bit(0, dmap->map);
+
+ return 0;
+}
+#endif
--
2.45.0.rc1.225.g2a3ae87e7f-goog



2024-05-16 14:11:17

by Alice Ryhl

Subject: Re: [PATCH v3] binder: use bitmap for faster descriptor lookup

On Thu, May 16, 2024 at 3:39 PM Carlos Llamas <[email protected]> wrote:
>
> When creating new binder references, the driver assigns a descriptor id
> that is shared with userspace. Regrettably, the driver needs to keep the
> descriptors small enough to accommodate userspace potentially using them
> as Vector indexes. Currently, the driver performs a linear search on the
> rb-tree of references to find the smallest available descriptor id. This
> approach, however, scales poorly as the number of references grows.
>
> This patch introduces the usage of bitmaps to boost the performance of
> descriptor assignments. This optimization results in notable performance
> gains, particularly in processes with a large number of references. The
> following benchmark with 100,000 references showcases the difference in
> latency between the dbitmap implementation and the legacy approach:
>
> [ 587.145098] get_ref_desc_olocked: 15us (dbitmap on)
> [ 602.788623] get_ref_desc_olocked: 47343us (dbitmap off)
>
> Note the bitmap size is dynamically adjusted in line with the number of
> references, ensuring efficient memory usage. In cases where growing the
> bitmap is not possible, the driver falls back to the slow legacy method.
>
> A previous attempt to solve this issue was proposed in [1]. However,
> such method involved adding new ioctls which isn't great, plus older
> userspace code would not have benefited from the optimizations either.
>
> Link: https://lore.kernel.org/all/[email protected]/ [1]
> Cc: Tim Murray <[email protected]>
> Cc: Arve Hjønnevåg <[email protected]>
> Cc: Alice Ryhl <[email protected]>
> Cc: Martijn Coenen <[email protected]>
> Cc: Todd Kjos <[email protected]>
> Cc: John Stultz <[email protected]>
> Cc: Steven Moreland <[email protected]>
> Suggested-by: Nick Chen <[email protected]>
> Signed-off-by: Carlos Llamas <[email protected]>

LGTM. One nit below, but it's not a correctness issue.

Reviewed-by: Alice Ryhl <[email protected]>

> +static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
> +{
> + unsigned int bit;
> +
> + if (dmap->nbits <= NBITS_MIN)
> + return 0;
> +
> + bit = find_last_bit(dmap->map, dmap->nbits);
> + if (unlikely(bit == dmap->nbits))
> + return NBITS_MIN;
> +
> + if (unlikely(bit <= (dmap->nbits >> 2)))
> + return dmap->nbits >> 1;

I think this is intended to say that we only shrink if only the lower
fourth of the bits have any bits set, but for the condition to
actually be that, you need `bit < (map->nbits >> 2)` here instead of
`<=`.
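
For illustration (taking nbits == 256 purely as an example), the first
quarter of the bitmap covers bit indices 0..63:

	bit == 63: inside the first quarter, so shrink to nbits >> 1 == 128
	bit == 64: first bit of the second quarter; with "<=" this would
	           still shrink to 128 (harmless, since bit 64 fits in 128
	           bits, but not the intended "lower fourth" condition)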

Alice

2024-05-17 03:24:27

by Carlos Llamas

Subject: Re: [PATCH v3] binder: use bitmap for faster descriptor lookup

On Thu, May 16, 2024 at 04:10:40PM +0200, Alice Ryhl wrote:
> On Thu, May 16, 2024 at 3:39 PM Carlos Llamas <[email protected]> wrote:
> >
> > When creating new binder references, the driver assigns a descriptor id
> > that is shared with userspace. Regrettably, the driver needs to keep the
> > descriptors small enough to accommodate userspace potentially using them
> > as Vector indexes. Currently, the driver performs a linear search on the
> > rb-tree of references to find the smallest available descriptor id. This
> > approach, however, scales poorly as the number of references grows.
> >
> > This patch introduces the usage of bitmaps to boost the performance of
> > descriptor assignments. This optimization results in notable performance
> > gains, particularly in processes with a large number of references. The
> > following benchmark with 100,000 references showcases the difference in
> > latency between the dbitmap implementation and the legacy approach:
> >
> > [ 587.145098] get_ref_desc_olocked: 15us (dbitmap on)
> > [ 602.788623] get_ref_desc_olocked: 47343us (dbitmap off)
> >
> > Note the bitmap size is dynamically adjusted in line with the number of
> > references, ensuring efficient memory usage. In cases where growing the
> > bitmap is not possible, the driver falls back to the slow legacy method.
> >
> > A previous attempt to solve this issue was proposed in [1]. However,
> > such method involved adding new ioctls which isn't great, plus older
> > userspace code would not have benefited from the optimizations either.
> >
> > Link: https://lore.kernel.org/all/[email protected]/ [1]
> > Cc: Tim Murray <[email protected]>
> > Cc: Arve Hjønnevåg <[email protected]>
> > Cc: Alice Ryhl <[email protected]>
> > Cc: Martijn Coenen <[email protected]>
> > Cc: Todd Kjos <[email protected]>
> > Cc: John Stultz <[email protected]>
> > Cc: Steven Moreland <[email protected]>
> > Suggested-by: Nick Chen <[email protected]>
> > Signed-off-by: Carlos Llamas <[email protected]>
>
> LGTM. One nit below, but it's not a correctness issue.
>
> Reviewed-by: Alice Ryhl <[email protected]>
>
> > +static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
> > +{
> > + unsigned int bit;
> > +
> > + if (dmap->nbits <= NBITS_MIN)
> > + return 0;
> > +
> > + bit = find_last_bit(dmap->map, dmap->nbits);
> > + if (unlikely(bit == dmap->nbits))
> > + return NBITS_MIN;
> > +
> > + if (unlikely(bit <= (dmap->nbits >> 2)))
> > + return dmap->nbits >> 1;
>
> I think this is intended to say that we only shrink if only the lower
> fourth of the bits have any bits set, but for the condition to
> actually be that, you need `bit < (map->nbits >> 2)` here instead of
> `<=`.

True, the range goes [0...n-1] so it should be "bit < n". Let me fix
that. Thanks.

>
> Alice

2024-05-17 03:29:18

by Carlos Llamas

Subject: [PATCH v4] binder: use bitmap for faster descriptor lookup

When creating new binder references, the driver assigns a descriptor id
that is shared with userspace. Regrettably, the driver needs to keep the
descriptors small enough to accommodate userspace potentially using them
as Vector indexes. Currently, the driver performs a linear search on the
rb-tree of references to find the smallest available descriptor id. This
approach, however, scales poorly as the number of references grows.

This patch introduces the usage of bitmaps to boost the performance of
descriptor assignments. This optimization results in notable performance
gains, particularly in processes with a large number of references. The
following benchmark with 100,000 references showcases the difference in
latency between the dbitmap implementation and the legacy approach:

[ 587.145098] get_ref_desc_olocked: 15us (dbitmap on)
[ 602.788623] get_ref_desc_olocked: 47343us (dbitmap off)

Note the bitmap size is dynamically adjusted in line with the number of
references, ensuring efficient memory usage. In cases where growing the
bitmap is not possible, the driver falls back to the slow legacy method.

A previous attempt to solve this issue was proposed in [1]. However,
such method involved adding new ioctls which isn't great, plus older
userspace code would not have benefited from the optimizations either.

Link: https://lore.kernel.org/all/[email protected]/ [1]
Cc: Tim Murray <[email protected]>
Cc: Arve Hjønnevåg <[email protected]>
Cc: Alice Ryhl <[email protected]>
Cc: Martijn Coenen <[email protected]>
Cc: Todd Kjos <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Steven Moreland <[email protected]>
Suggested-by: Nick Chen <[email protected]>
Reviewed-by: Alice Ryhl <[email protected]>
Signed-off-by: Carlos Llamas <[email protected]>
---

Notes:
v1: first solution using a new "flags" ioctl:
https://lore.kernel.org/all/[email protected]/

v2: switched to different implementation using bitmaps.

v3: dropped __GFP_ZERO when using zalloc().
renamed ...get_first_zero_bit() to ...acquire_first_zero_bit()

v4: fix bit range check for 25% bitmap capacity.

drivers/android/binder.c | 113 +++++++++++++++++++++---
drivers/android/binder_internal.h | 5 +-
drivers/android/dbitmap.h | 139 ++++++++++++++++++++++++++++++
3 files changed, 243 insertions(+), 14 deletions(-)
create mode 100644 drivers/android/dbitmap.h

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index dd6923d37931..39e3d840fb96 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -1045,6 +1045,67 @@ static struct binder_ref *binder_get_ref_olocked(struct binder_proc *proc,
return NULL;
}

+/* find first non-zero available descriptor the "slow way" */
+static u32 slow_desc_lookup_olocked(struct binder_proc *proc)
+{
+ struct binder_ref *ref;
+ struct rb_node *n;
+ u32 desc;
+
+ desc = 1;
+ for (n = rb_first(&proc->refs_by_desc); n; n = rb_next(n)) {
+ ref = rb_entry(n, struct binder_ref, rb_node_desc);
+ if (ref->data.desc > desc)
+ break;
+ desc = ref->data.desc + 1;
+ }
+
+ return desc;
+}
+
+/*
+ * Find an available reference descriptor. Note that the proc->outer_lock
+ * might be released in the process. In such case, -EAGAIN is returned and
+ * the provided descriptor is invalid.
+ */
+static int get_ref_desc_olocked(struct binder_proc *proc,
+ struct binder_node *node,
+ u32 *desc)
+{
+ struct dbitmap *dmap = &proc->dmap;
+ unsigned long *new, bit;
+ unsigned int nbits;
+
+ /* 0 is reserved for the context manager */
+ if (node == proc->context->binder_context_mgr_node) {
+ *desc = 0;
+ return 0;
+ }
+
+ if (unlikely(!dbitmap_enabled(dmap))) {
+ *desc = slow_desc_lookup_olocked(proc);
+ return 0;
+ }
+
+ if (dbitmap_acquire_first_zero_bit(dmap, &bit) == 0) {
+ *desc = bit;
+ return 0;
+ }
+
+ /*
+ * The descriptors bitmap is full and needs to be expanded.
+ * The proc->outer_lock is briefly released to allocate the
+ * new bitmap safely.
+ */
+ nbits = dbitmap_expand_nbits(dmap);
+ binder_proc_unlock(proc);
+ new = bitmap_zalloc(nbits, GFP_KERNEL);
+ binder_proc_lock(proc);
+ dbitmap_expand(dmap, new, nbits);
+
+ return -EAGAIN;
+}
+
/**
* binder_get_ref_for_node_olocked() - get the ref associated with given node
* @proc: binder_proc that owns the ref
@@ -1068,12 +1129,14 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
struct binder_node *node,
struct binder_ref *new_ref)
{
- struct binder_context *context = proc->context;
- struct rb_node **p = &proc->refs_by_node.rb_node;
- struct rb_node *parent = NULL;
struct binder_ref *ref;
- struct rb_node *n;
+ struct rb_node *parent;
+ struct rb_node **p;
+ u32 desc;

+retry:
+ p = &proc->refs_by_node.rb_node;
+ parent = NULL;
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_node);
@@ -1088,6 +1151,10 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
if (!new_ref)
return NULL;

+ /* might release the proc->outer_lock */
+ if (get_ref_desc_olocked(proc, node, &desc) == -EAGAIN)
+ goto retry;
+
binder_stats_created(BINDER_STAT_REF);
new_ref->data.debug_id = atomic_inc_return(&binder_last_id);
new_ref->proc = proc;
@@ -1095,14 +1162,7 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
rb_link_node(&new_ref->rb_node_node, parent, p);
rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);

- new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
- for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
- ref = rb_entry(n, struct binder_ref, rb_node_desc);
- if (ref->data.desc > new_ref->data.desc)
- break;
- new_ref->data.desc = ref->data.desc + 1;
- }
-
+ new_ref->data.desc = desc;
p = &proc->refs_by_desc.rb_node;
while (*p) {
parent = *p;
@@ -1131,6 +1191,7 @@ static struct binder_ref *binder_get_ref_for_node_olocked(

static void binder_cleanup_ref_olocked(struct binder_ref *ref)
{
+ struct dbitmap *dmap = &ref->proc->dmap;
bool delete_node = false;

binder_debug(BINDER_DEBUG_INTERNAL_REFS,
@@ -1138,6 +1199,8 @@ static void binder_cleanup_ref_olocked(struct binder_ref *ref)
ref->proc->pid, ref->data.debug_id, ref->data.desc,
ref->node->debug_id);

+ if (dbitmap_enabled(dmap))
+ dbitmap_clear_bit(dmap, ref->data.desc);
rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);

@@ -1298,6 +1361,25 @@ static void binder_free_ref(struct binder_ref *ref)
kfree(ref);
}

+/* shrink descriptor bitmap if needed */
+static void try_shrink_dmap(struct binder_proc *proc)
+{
+ unsigned long *new;
+ int nbits;
+
+ binder_proc_lock(proc);
+ nbits = dbitmap_shrink_nbits(&proc->dmap);
+ binder_proc_unlock(proc);
+
+ if (!nbits)
+ return;
+
+ new = bitmap_zalloc(nbits, GFP_KERNEL);
+ binder_proc_lock(proc);
+ dbitmap_shrink(&proc->dmap, new, nbits);
+ binder_proc_unlock(proc);
+}
+
/**
* binder_update_ref_for_handle() - inc/dec the ref for given handle
* @proc: proc containing the ref
@@ -1334,8 +1416,10 @@ static int binder_update_ref_for_handle(struct binder_proc *proc,
*rdata = ref->data;
binder_proc_unlock(proc);

- if (delete_ref)
+ if (delete_ref) {
binder_free_ref(ref);
+ try_shrink_dmap(proc);
+ }
return ret;

err_no_ref:
@@ -4931,6 +5015,7 @@ static void binder_free_proc(struct binder_proc *proc)
put_task_struct(proc->tsk);
put_cred(proc->cred);
binder_stats_deleted(BINDER_STAT_PROC);
+ dbitmap_free(&proc->dmap);
kfree(proc);
}

@@ -5634,6 +5719,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
return -ENOMEM;
+
+ dbitmap_init(&proc->dmap);
spin_lock_init(&proc->inner_lock);
spin_lock_init(&proc->outer_lock);
get_task_struct(current->group_leader);
diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
index 7270d4d22207..256bb75d9d8c 100644
--- a/drivers/android/binder_internal.h
+++ b/drivers/android/binder_internal.h
@@ -14,6 +14,7 @@
#include <linux/uidgid.h>
#include <uapi/linux/android/binderfs.h>
#include "binder_alloc.h"
+#include "dbitmap.h"

struct binder_context {
struct binder_node *binder_context_mgr_node;
@@ -368,6 +369,8 @@ struct binder_ref {
* @freeze_wait: waitqueue of processes waiting for all outstanding
* transactions to be processed
* (protected by @inner_lock)
+ * @dmap bitmap to manage available reference descriptors
+ * (protected by @outer_lock)
* @todo: list of work for this process
* (protected by @inner_lock)
* @stats: per-process binder statistics
@@ -417,7 +420,7 @@ struct binder_proc {
bool sync_recv;
bool async_recv;
wait_queue_head_t freeze_wait;
-
+ struct dbitmap dmap;
struct list_head todo;
struct binder_stats stats;
struct list_head delivered_death;
diff --git a/drivers/android/dbitmap.h b/drivers/android/dbitmap.h
new file mode 100644
index 000000000000..2cf470702bbb
--- /dev/null
+++ b/drivers/android/dbitmap.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _LINUX_DBITMAP_H
+#define _LINUX_DBITMAP_H
+#include <linux/bitmap.h>
+
+#define NBITS_MIN BITS_PER_TYPE(unsigned long)
+
+struct dbitmap {
+ unsigned int nbits;
+ unsigned long *map;
+};
+
+static inline int dbitmap_enabled(struct dbitmap *dmap)
+{
+ return dmap->map != NULL;
+}
+
+static inline void dbitmap_free(struct dbitmap *dmap)
+{
+ dmap->nbits = 0;
+ kfree(dmap->map);
+ dmap->map = NULL;
+}
+
+static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
+{
+ unsigned int bit;
+
+ if (dmap->nbits <= NBITS_MIN)
+ return 0;
+
+ bit = find_last_bit(dmap->map, dmap->nbits);
+ if (unlikely(bit == dmap->nbits))
+ return NBITS_MIN;
+
+ if (unlikely(bit < (dmap->nbits >> 2)))
+ return dmap->nbits >> 1;
+
+ return 0;
+}
+
+static inline void
+dbitmap_replace(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ bitmap_copy(new, dmap->map, min(dmap->nbits, nbits));
+ kfree(dmap->map);
+ dmap->map = new;
+ dmap->nbits = nbits;
+}
+
+static inline void
+dbitmap_shrink(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ if (unlikely(!new))
+ return;
+
+ /*
+ * Make sure we can still shrink to the requested nbits as
+ * this call might have raced with another shrink or more
+ * bits have been assigned. In such case, release the @new
+ * bitmap and move on.
+ */
+ if (unlikely(!dbitmap_enabled(dmap) ||
+ dbitmap_shrink_nbits(dmap) != nbits)) {
+ kfree(new);
+ return;
+ }
+
+ dbitmap_replace(dmap, new, nbits);
+}
+
+static inline unsigned int
+dbitmap_expand_nbits(struct dbitmap *dmap)
+{
+ return dmap->nbits << 1;
+}
+
+static inline void
+dbitmap_expand(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ /*
+ * Determine if the expand is still valid as it might have
+ * raced with another expand or free. In such case, release
+ * the @new bitmap and move on.
+ */
+ if (unlikely(!dbitmap_enabled(dmap) || nbits <= dmap->nbits)) {
+ kfree(new);
+ return;
+ }
+
+ /*
+ * ENOMEM is checked here as we can now discard a potential
+ * race with another successful expand. In such case, disable
+ * the dbitmap and fallback to slow_desc_lookup_olocked().
+ */
+ if (unlikely(!new)) {
+ dbitmap_free(dmap);
+ return;
+ }
+
+ dbitmap_replace(dmap, new, nbits);
+}
+
+static inline int
+dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit)
+{
+ unsigned long n;
+
+ n = find_first_zero_bit(dmap->map, dmap->nbits);
+ if (unlikely(n == dmap->nbits))
+ return -ENOSPC;
+
+ *bit = n;
+ set_bit(n, dmap->map);
+
+ return 0;
+}
+
+static inline void
+dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit)
+{
+ clear_bit(bit, dmap->map);
+}
+
+static inline int dbitmap_init(struct dbitmap *dmap)
+{
+ dmap->map = bitmap_zalloc(NBITS_MIN, GFP_KERNEL);
+ if (!dmap->map) {
+ dmap->nbits = 0;
+ return -ENOMEM;
+ }
+
+ dmap->nbits = NBITS_MIN;
+ /* 0 is reserved for the context manager */
+ set_bit(0, dmap->map);
+
+ return 0;
+}
+#endif
--
2.45.0.rc1.225.g2a3ae87e7f-goog


2024-06-04 14:24:01

by Greg Kroah-Hartman

Subject: Re: [PATCH v4] binder: use bitmap for faster descriptor lookup

On Fri, May 17, 2024 at 03:28:27AM +0000, Carlos Llamas wrote:
> diff --git a/drivers/android/dbitmap.h b/drivers/android/dbitmap.h
> new file mode 100644
> index 000000000000..2cf470702bbb
> --- /dev/null
> +++ b/drivers/android/dbitmap.h
> @@ -0,0 +1,139 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef _LINUX_DBITMAP_H
> +#define _LINUX_DBITMAP_H
> +#include <linux/bitmap.h>

No copyright line for a new file? Somehow I doubt that's what your
corporate policy is :(


> +
> +#define NBITS_MIN BITS_PER_TYPE(unsigned long)
> +
> +struct dbitmap {
> + unsigned int nbits;
> + unsigned long *map;
> +};

Some documentation about how this all works would be nice so we can
verify / validate it is doing what it should be doing.

And maybe a test?

> +
> +static inline int dbitmap_enabled(struct dbitmap *dmap)
> +{
> + return dmap->map != NULL;
> +}
> +
> +static inline void dbitmap_free(struct dbitmap *dmap)
> +{
> + dmap->nbits = 0;
> + kfree(dmap->map);
> + dmap->map = NULL;

Why are you setting this to NULL after freeing it? What does that
signal?

> +}
> +
> +static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
> +{
> + unsigned int bit;
> +
> + if (dmap->nbits <= NBITS_MIN)
> + return 0;
> +
> + bit = find_last_bit(dmap->map, dmap->nbits);
> + if (unlikely(bit == dmap->nbits))
> + return NBITS_MIN;
> +
> + if (unlikely(bit < (dmap->nbits >> 2)))
> + return dmap->nbits >> 1;

And these unlikely() markings actually work better than not having them?
Please document that if so.


> +
> + return 0;
> +}
> +
> +static inline void
> +dbitmap_replace(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> +{
> + bitmap_copy(new, dmap->map, min(dmap->nbits, nbits));
> + kfree(dmap->map);
> + dmap->map = new;
> + dmap->nbits = nbits;
> +}
> +
> +static inline void
> +dbitmap_shrink(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> +{
> + if (unlikely(!new))
> + return;

All unlikely/likely needs to be "proven" to be needed, otherwise the
compiler and cpu almost always does a better job over time.

> +
> + /*
> + * Make sure we can still shrink to the requested nbits as
> + * this call might have raced with another shrink or more
> + * bits have been assigned. In such case, release the @new
> + * bitmap and move on.
> + */
> + if (unlikely(!dbitmap_enabled(dmap) ||
> + dbitmap_shrink_nbits(dmap) != nbits)) {
> + kfree(new);
> + return;
> + }
> +
> + dbitmap_replace(dmap, new, nbits);
> +}
> +
> +static inline unsigned int
> +dbitmap_expand_nbits(struct dbitmap *dmap)
> +{
> + return dmap->nbits << 1;
> +}
> +
> +static inline void
> +dbitmap_expand(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> +{
> + /*
> + * Determine if the expand is still valid as it might have
> + * raced with another expand or free. In such case, release
> + * the @new bitmap and move on.

Shouldn't locks protect any race? otherwise what happens if it changes
right after you check for this?


> + */
> + if (unlikely(!dbitmap_enabled(dmap) || nbits <= dmap->nbits)) {
> + kfree(new);
> + return;
> + }
> +
> + /*
> + * ENOMEM is checked here as we can now discard a potential
> + * race with another successful expand. In such case, disable
> + * the dbitmap and fallback to slow_desc_lookup_olocked().
> + */
> + if (unlikely(!new)) {

As you control the callers, how can this happen?

> + dbitmap_free(dmap);
> + return;
> + }
> +
> + dbitmap_replace(dmap, new, nbits);
> +}
> +
> +static inline int
> +dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit)
> +{
> + unsigned long n;
> +
> + n = find_first_zero_bit(dmap->map, dmap->nbits);
> + if (unlikely(n == dmap->nbits))
> + return -ENOSPC;
> +
> + *bit = n;
> + set_bit(n, dmap->map);
> +
> + return 0;
> +}
> +
> +static inline void
> +dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit)
> +{
> + clear_bit(bit, dmap->map);
> +}
> +
> +static inline int dbitmap_init(struct dbitmap *dmap)
> +{
> + dmap->map = bitmap_zalloc(NBITS_MIN, GFP_KERNEL);
> + if (!dmap->map) {
> + dmap->nbits = 0;
> + return -ENOMEM;
> + }
> +
> + dmap->nbits = NBITS_MIN;
> + /* 0 is reserved for the context manager */
> + set_bit(0, dmap->map);

Yeah, this all needs to be documented somewhere please :)

thanks,

greg k-h

2024-06-06 18:50:30

by Carlos Llamas

Subject: Re: [PATCH v4] binder: use bitmap for faster descriptor lookup

On Tue, Jun 04, 2024 at 04:06:16PM +0200, Greg Kroah-Hartman wrote:
> On Fri, May 17, 2024 at 03:28:27AM +0000, Carlos Llamas wrote:
> > diff --git a/drivers/android/dbitmap.h b/drivers/android/dbitmap.h
> > new file mode 100644
> > index 000000000000..2cf470702bbb
> > --- /dev/null
> > +++ b/drivers/android/dbitmap.h
> > @@ -0,0 +1,139 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +#ifndef _LINUX_DBITMAP_H
> > +#define _LINUX_DBITMAP_H
> > +#include <linux/bitmap.h>
>
> No copyright line for a new file? Somehow I doubt that's what your
> corporate policy is :(

It is not, sorry I'll fix it.

>
>
> > +
> > +#define NBITS_MIN BITS_PER_TYPE(unsigned long)
> > +
> > +struct dbitmap {
> > + unsigned int nbits;
> > + unsigned long *map;
> > +};
>
> Some documentation about how this all works would be nice so we can
> verify / validate it is doing what it should be doing.
>
> And maybe a test?
>
> > +
> > +static inline int dbitmap_enabled(struct dbitmap *dmap)
> > +{
> > + return dmap->map != NULL;
> > +}
> > +
> > +static inline void dbitmap_free(struct dbitmap *dmap)
> > +{
> > + dmap->nbits = 0;
> > + kfree(dmap->map);
> > + dmap->map = NULL;
>
> Why are you setting this to NULL after freeing it? What does that
> signal?

The signal is on dbitmap_enabled(). I'll either add a comment to this or
create a dbitmap_disable() to pair with it in the next version.
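
A minimal sketch of that pairing (hypothetical, not part of any posted
version; v5 below instead keys dbitmap_enabled() off dmap->nbits):

	/* hypothetical helper to make the NULL assignment self-documenting */
	static inline void dbitmap_disable(struct dbitmap *dmap)
	{
		kfree(dmap->map);
		dmap->map = NULL;	/* dbitmap_enabled() now returns false */
	}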

>
> > +}
> > +
> > +static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
> > +{
> > + unsigned int bit;
> > +
> > + if (dmap->nbits <= NBITS_MIN)
> > + return 0;
> > +
> > + bit = find_last_bit(dmap->map, dmap->nbits);
> > + if (unlikely(bit == dmap->nbits))
> > + return NBITS_MIN;
> > +
> > + if (unlikely(bit < (dmap->nbits >> 2)))
> > + return dmap->nbits >> 1;
>
> And these unlikely() markings actually work better than not having them?
> Please document that if so.
>
>
> > +
> > + return 0;
> > +}
> > +
> > +static inline void
> > +dbitmap_replace(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> > +{
> > + bitmap_copy(new, dmap->map, min(dmap->nbits, nbits));
> > + kfree(dmap->map);
> > + dmap->map = new;
> > + dmap->nbits = nbits;
> > +}
> > +
> > +static inline void
> > +dbitmap_shrink(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> > +{
> > + if (unlikely(!new))
> > + return;
>
> All unlikely/likely needs to be "proven" to be needed, otherwise the
> compiler and cpu almost always does a better job over time.

Ok, I'll drop all these unlikely/likely and let the compiler and cpu do
their thing.

>
> > +
> > + /*
> > + * Make sure we can still shrink to the requested nbits as
> > + * this call might have raced with another shrink or more
> > + * bits have been assigned. In such case, release the @new
> > + * bitmap and move on.
> > + */
> > + if (unlikely(!dbitmap_enabled(dmap) ||
> > + dbitmap_shrink_nbits(dmap) != nbits)) {
> > + kfree(new);
> > + return;
> > + }
> > +
> > + dbitmap_replace(dmap, new, nbits);
> > +}
> > +
> > +static inline unsigned int
> > +dbitmap_expand_nbits(struct dbitmap *dmap)
> > +{
> > + return dmap->nbits << 1;
> > +}
> > +
> > +static inline void
> > +dbitmap_expand(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
> > +{
> > + /*
> > + * Determine if the expand is still valid as it might have
> > + * raced with another expand or free. In such case, release
> > + * the @new bitmap and move on.
>
> Shouldn't locks protect any race? otherwise what happens if it changes
> right after you check for this?

Yes, there are locks protecting this call. However, these are spinlocks
so the bitmap gets alloc'ed outside of them. e.g.

nbits = dbitmap_expand_nbits(dmap);
spin_unlock(&proc->lock);
new = bitmap_zalloc(nbits, GFP_KERNEL);
spin_lock(&proc->lock);
dbitmap_expand(dmap, new, nbits);

Thus why we need to check if another task beat us to the expand.
Sorry, I now realize this needs to be documented.

>
>
> > + */
> > + if (unlikely(!dbitmap_enabled(dmap) || nbits <= dmap->nbits)) {
> > + kfree(new);
> > + return;
> > + }
> > +
> > + /*
> > + * ENOMEM is checked here as we can now discard a potential
> > + * race with another successful expand. In such case, disable
> > + * the dbitmap and fallback to slow_desc_lookup_olocked().
> > + */
> > + if (unlikely(!new)) {
>
> As you control the callers, how can this happen?

Delaying this check has the advantage that we can rely on another
successful expand call. So in this case the "race" is good. The comment
above the check has these details.
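
A concrete sequence (illustrative only) where delaying the check pays off:

	task A: unlocks, bitmap_zalloc() fails          -> new == NULL
	task B: unlocks, bitmap_zalloc() succeeds, relocks, grows the dbitmap
	task A: relocks, calls dbitmap_expand(dmap, NULL, nbits)

Since B already grew the map, A hits the "nbits <= dmap->nbits" check first;
kfree(NULL) is a no-op and the failed allocation is simply discarded. Only
when no other expand succeeded does a NULL @new disable the dbitmap.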

>
> > + dbitmap_free(dmap);
> > + return;
> > + }
> > +
> > + dbitmap_replace(dmap, new, nbits);
> > +}
> > +
> > +static inline int
> > +dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit)
> > +{
> > + unsigned long n;
> > +
> > + n = find_first_zero_bit(dmap->map, dmap->nbits);
> > + if (unlikely(n == dmap->nbits))
> > + return -ENOSPC;
> > +
> > + *bit = n;
> > + set_bit(n, dmap->map);
> > +
> > + return 0;
> > +}
> > +
> > +static inline void
> > +dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit)
> > +{
> > + clear_bit(bit, dmap->map);
> > +}
> > +
> > +static inline int dbitmap_init(struct dbitmap *dmap)
> > +{
> > + dmap->map = bitmap_zalloc(NBITS_MIN, GFP_KERNEL);
> > + if (!dmap->map) {
> > + dmap->nbits = 0;
> > + return -ENOMEM;
> > + }
> > +
> > + dmap->nbits = NBITS_MIN;
> > + /* 0 is reserved for the context manager */
> > + set_bit(0, dmap->map);
>
> Yeah, this all needs to be documented somewhere please :)

Yeah ok, I'll add the documentation.

>
> thanks,
>
> greg k-h

Thanks,
Carlos Llamas

2024-06-12 04:25:56

by Carlos Llamas

Subject: [PATCH v5] binder: use bitmap for faster descriptor lookup

When creating new binder references, the driver assigns a descriptor id
that is shared with userspace. Regrettably, the driver needs to keep the
descriptors small enough to accommodate userspace potentially using them
as Vector indexes. Currently, the driver performs a linear search on the
rb-tree of references to find the smallest available descriptor id. This
approach, however, scales poorly as the number of references grows.

This patch introduces the usage of bitmaps to boost the performance of
descriptor assignments. This optimization results in notable performance
gains, particularly in processes with a large number of references. The
following benchmark with 100,000 references showcases the difference in
latency between the dbitmap implementation and the legacy approach:

[ 587.145098] get_ref_desc_olocked: 15us (dbitmap on)
[ 602.788623] get_ref_desc_olocked: 47343us (dbitmap off)

Note the bitmap size is dynamically adjusted in line with the number of
references, ensuring efficient memory usage. In cases where growing the
bitmap is not possible, the driver falls back to the slow legacy method.

A previous attempt to solve this issue was proposed in [1]. However,
such method involved adding new ioctls which isn't great, plus older
userspace code would not have benefited from the optimizations either.

Link: https://lore.kernel.org/all/[email protected]/ [1]
Cc: Tim Murray <[email protected]>
Cc: Arve Hjønnevåg <[email protected]>
Cc: Alice Ryhl <[email protected]>
Cc: Martijn Coenen <[email protected]>
Cc: Todd Kjos <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Steven Moreland <[email protected]>
Suggested-by: Nick Chen <[email protected]>
Reviewed-by: Alice Ryhl <[email protected]>
Signed-off-by: Carlos Llamas <[email protected]>
---

Notes:
v1: first solution using a new "flags" ioctl:
https://lore.kernel.org/all/[email protected]/

v2: switched to different implementation using bitmaps.

v3: dropped __GFP_ZERO when using zalloc().
renamed ...get_first_zero_bit() to ...acquire_first_zero_bit()

v4: fix bit range check for 25% bitmap capacity.

v5: Per gregkh's feedback:
- fix new file copyright line according to Google's policy
- add plenty of comments documenting dbitmap's behavior
- drop (un)likely() and let compiler/cpu do a better job
- use dmap->nbits to determine if dbitmap is disabled
Also:
- make sure dbitmap can't clear BIT(0)
- rename "_expand()" to "_grow()"

drivers/android/binder.c | 112 ++++++++++++++++---
drivers/android/binder_internal.h | 5 +-
drivers/android/dbitmap.h | 176 ++++++++++++++++++++++++++++++
3 files changed, 279 insertions(+), 14 deletions(-)
create mode 100644 drivers/android/dbitmap.h

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index b21a7b246a0d..0c2161b1f057 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -1045,6 +1045,66 @@ static struct binder_ref *binder_get_ref_olocked(struct binder_proc *proc,
return NULL;
}

+/* Find the smallest unused descriptor the "slow way" */
+static u32 slow_desc_lookup_olocked(struct binder_proc *proc)
+{
+ struct binder_ref *ref;
+ struct rb_node *n;
+ u32 desc;
+
+ desc = 1;
+ for (n = rb_first(&proc->refs_by_desc); n; n = rb_next(n)) {
+ ref = rb_entry(n, struct binder_ref, rb_node_desc);
+ if (ref->data.desc > desc)
+ break;
+ desc = ref->data.desc + 1;
+ }
+
+ return desc;
+}
+
+/*
+ * Find an available reference descriptor ID. The proc->outer_lock might
+ * be released in the process, in which case -EAGAIN is returned and the
+ * @desc should be considered invalid.
+ */
+static int get_ref_desc_olocked(struct binder_proc *proc,
+ struct binder_node *node,
+ u32 *desc)
+{
+ struct dbitmap *dmap = &proc->dmap;
+ unsigned long *new, bit;
+ unsigned int nbits;
+
+ /* 0 is reserved for the context manager */
+ if (node == proc->context->binder_context_mgr_node) {
+ *desc = 0;
+ return 0;
+ }
+
+ if (!dbitmap_enabled(dmap)) {
+ *desc = slow_desc_lookup_olocked(proc);
+ return 0;
+ }
+
+ if (dbitmap_acquire_first_zero_bit(dmap, &bit) == 0) {
+ *desc = bit;
+ return 0;
+ }
+
+ /*
+ * The dbitmap is full and needs to grow. The proc->outer_lock
+ * is briefly released to allocate the new bitmap safely.
+ */
+ nbits = dbitmap_grow_nbits(dmap);
+ binder_proc_unlock(proc);
+ new = bitmap_zalloc(nbits, GFP_KERNEL);
+ binder_proc_lock(proc);
+ dbitmap_grow(dmap, new, nbits);
+
+ return -EAGAIN;
+}
+
/**
* binder_get_ref_for_node_olocked() - get the ref associated with given node
* @proc: binder_proc that owns the ref
@@ -1068,12 +1128,14 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
struct binder_node *node,
struct binder_ref *new_ref)
{
- struct binder_context *context = proc->context;
- struct rb_node **p = &proc->refs_by_node.rb_node;
- struct rb_node *parent = NULL;
struct binder_ref *ref;
- struct rb_node *n;
+ struct rb_node *parent;
+ struct rb_node **p;
+ u32 desc;

+retry:
+ p = &proc->refs_by_node.rb_node;
+ parent = NULL;
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_node);
@@ -1088,6 +1150,10 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
if (!new_ref)
return NULL;

+ /* might release the proc->outer_lock */
+ if (get_ref_desc_olocked(proc, node, &desc) == -EAGAIN)
+ goto retry;
+
binder_stats_created(BINDER_STAT_REF);
new_ref->data.debug_id = atomic_inc_return(&binder_last_id);
new_ref->proc = proc;
@@ -1095,14 +1161,7 @@ static struct binder_ref *binder_get_ref_for_node_olocked(
rb_link_node(&new_ref->rb_node_node, parent, p);
rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);

- new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
- for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
- ref = rb_entry(n, struct binder_ref, rb_node_desc);
- if (ref->data.desc > new_ref->data.desc)
- break;
- new_ref->data.desc = ref->data.desc + 1;
- }
-
+ new_ref->data.desc = desc;
p = &proc->refs_by_desc.rb_node;
while (*p) {
parent = *p;
@@ -1131,6 +1190,7 @@ static struct binder_ref *binder_get_ref_for_node_olocked(

static void binder_cleanup_ref_olocked(struct binder_ref *ref)
{
+ struct dbitmap *dmap = &ref->proc->dmap;
bool delete_node = false;

binder_debug(BINDER_DEBUG_INTERNAL_REFS,
@@ -1138,6 +1198,8 @@ static void binder_cleanup_ref_olocked(struct binder_ref *ref)
ref->proc->pid, ref->data.debug_id, ref->data.desc,
ref->node->debug_id);

+ if (dbitmap_enabled(dmap))
+ dbitmap_clear_bit(dmap, ref->data.desc);
rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);

@@ -1298,6 +1360,25 @@ static void binder_free_ref(struct binder_ref *ref)
kfree(ref);
}

+/* shrink descriptor bitmap if needed */
+static void try_shrink_dmap(struct binder_proc *proc)
+{
+ unsigned long *new;
+ int nbits;
+
+ binder_proc_lock(proc);
+ nbits = dbitmap_shrink_nbits(&proc->dmap);
+ binder_proc_unlock(proc);
+
+ if (!nbits)
+ return;
+
+ new = bitmap_zalloc(nbits, GFP_KERNEL);
+ binder_proc_lock(proc);
+ dbitmap_shrink(&proc->dmap, new, nbits);
+ binder_proc_unlock(proc);
+}
+
/**
* binder_update_ref_for_handle() - inc/dec the ref for given handle
* @proc: proc containing the ref
@@ -1334,8 +1415,10 @@ static int binder_update_ref_for_handle(struct binder_proc *proc,
*rdata = ref->data;
binder_proc_unlock(proc);

- if (delete_ref)
+ if (delete_ref) {
binder_free_ref(ref);
+ try_shrink_dmap(proc);
+ }
return ret;

err_no_ref:
@@ -4931,6 +5014,7 @@ static void binder_free_proc(struct binder_proc *proc)
put_task_struct(proc->tsk);
put_cred(proc->cred);
binder_stats_deleted(BINDER_STAT_PROC);
+ dbitmap_free(&proc->dmap);
kfree(proc);
}

@@ -5634,6 +5718,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
return -ENOMEM;
+
+ dbitmap_init(&proc->dmap);
spin_lock_init(&proc->inner_lock);
spin_lock_init(&proc->outer_lock);
get_task_struct(current->group_leader);
diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
index 5b7c80b99ae8..7d4fc53f7a73 100644
--- a/drivers/android/binder_internal.h
+++ b/drivers/android/binder_internal.h
@@ -14,6 +14,7 @@
#include <linux/uidgid.h>
#include <uapi/linux/android/binderfs.h>
#include "binder_alloc.h"
+#include "dbitmap.h"

struct binder_context {
struct binder_node *binder_context_mgr_node;
@@ -368,6 +369,8 @@ struct binder_ref {
* @freeze_wait: waitqueue of processes waiting for all outstanding
* transactions to be processed
* (protected by @inner_lock)
+ * @dmap dbitmap to manage available reference descriptors
+ * (protected by @outer_lock)
* @todo: list of work for this process
* (protected by @inner_lock)
* @stats: per-process binder statistics
@@ -417,7 +420,7 @@ struct binder_proc {
bool sync_recv;
bool async_recv;
wait_queue_head_t freeze_wait;
-
+ struct dbitmap dmap;
struct list_head todo;
struct binder_stats stats;
struct list_head delivered_death;
diff --git a/drivers/android/dbitmap.h b/drivers/android/dbitmap.h
new file mode 100644
index 000000000000..b8ac7b4764fd
--- /dev/null
+++ b/drivers/android/dbitmap.h
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright 2024 Google LLC
+ *
+ * dbitmap - dynamically sized bitmap library.
+ *
+ * Used by the binder driver to optimize the allocation of the smallest
+ * available descriptor ID. Each bit in the bitmap represents the state
+ * of an ID, with the exception of BIT(0) which is used exclusively to
+ * reference binder's context manager.
+ *
+ * A dbitmap can grow or shrink as needed. This part has been designed
+ * considering that users might need to briefly release their locks in
+ * order to allocate memory for the new bitmap. These operations are
+ * then verified to determine if the grow or shrink is still valid.
+ *
+ * This library does not provide protection against concurrent access
+ * by itself. Binder uses the proc->outer_lock for this purpose.
+ */
+
+#ifndef _LINUX_DBITMAP_H
+#define _LINUX_DBITMAP_H
+#include <linux/bitmap.h>
+
+#define NBITS_MIN BITS_PER_TYPE(unsigned long)
+
+struct dbitmap {
+ unsigned int nbits;
+ unsigned long *map;
+};
+
+static inline int dbitmap_enabled(struct dbitmap *dmap)
+{
+ return !!dmap->nbits;
+}
+
+static inline void dbitmap_free(struct dbitmap *dmap)
+{
+ dmap->nbits = 0;
+ kfree(dmap->map);
+}
+
+/* Returns the nbits that a dbitmap can shrink to, 0 if not possible. */
+static inline unsigned int dbitmap_shrink_nbits(struct dbitmap *dmap)
+{
+ unsigned int bit;
+
+ if (dmap->nbits <= NBITS_MIN)
+ return 0;
+
+ /*
+ * Determine if the bitmap can shrink based on the position of
+ * its last set bit. If the bit is within the first quarter of
+ * the bitmap then shrinking is possible. In this case, the
+ * bitmap should shrink to half its current size.
+ */
+ bit = find_last_bit(dmap->map, dmap->nbits);
+ if (bit < (dmap->nbits >> 2))
+ return dmap->nbits >> 1;
+
+ /*
+ * Note that find_last_bit() returns dmap->nbits when no bits
+ * are set. While this is technically not possible here since
+ * BIT(0) is always set, this check is left for extra safety.
+ */
+ if (bit == dmap->nbits)
+ return NBITS_MIN;
+
+ return 0;
+}
+
+/* Replace the internal bitmap with a new one of different size */
+static inline void
+dbitmap_replace(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ bitmap_copy(new, dmap->map, min(dmap->nbits, nbits));
+ kfree(dmap->map);
+ dmap->map = new;
+ dmap->nbits = nbits;
+}
+
+static inline void
+dbitmap_shrink(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ if (!new)
+ return;
+
+ /*
+ * Verify that shrinking to @nbits is still possible. The @new
+ * bitmap might have been allocated without locks, so this call
+ * could now be outdated. In this case, free @new and move on.
+ */
+ if (!dbitmap_enabled(dmap) || dbitmap_shrink_nbits(dmap) != nbits) {
+ kfree(new);
+ return;
+ }
+
+ dbitmap_replace(dmap, new, nbits);
+}
+
+/* Returns the nbits that a dbitmap can grow to. */
+static inline unsigned int dbitmap_grow_nbits(struct dbitmap *dmap)
+{
+ return dmap->nbits << 1;
+}
+
+static inline void
+dbitmap_grow(struct dbitmap *dmap, unsigned long *new, unsigned int nbits)
+{
+ /*
+ * Verify that growing to @nbits is still possible. The @new
+ * bitmap might have been allocated without locks, so this call
+ * could now be outdated. In this case, free @new and move on.
+ */
+ if (!dbitmap_enabled(dmap) || nbits <= dmap->nbits) {
+ kfree(new);
+ return;
+ }
+
+ /*
+ * Check for ENOMEM after confirming the grow operation is still
+ * required. This ensures we only disable the dbitmap when it's
+ * necessary. Once the dbitmap is disabled, binder will fall back
+ * to slow_desc_lookup_olocked().
+ */
+ if (!new) {
+ dbitmap_free(dmap);
+ return;
+ }
+
+ dbitmap_replace(dmap, new, nbits);
+}
+
+/*
+ * Finds and sets the first zero bit in the bitmap. Upon success @bit
+ * is populated with the index and 0 is returned. Otherwise, -ENOSPC
+ * is returned to indicate that a dbitmap_grow() is needed.
+ */
+static inline int
+dbitmap_acquire_first_zero_bit(struct dbitmap *dmap, unsigned long *bit)
+{
+ unsigned long n;
+
+ n = find_first_zero_bit(dmap->map, dmap->nbits);
+ if (n == dmap->nbits)
+ return -ENOSPC;
+
+ *bit = n;
+ set_bit(n, dmap->map);
+
+ return 0;
+}
+
+static inline void
+dbitmap_clear_bit(struct dbitmap *dmap, unsigned long bit)
+{
+ /* BIT(0) should always be set for the context manager */
+ if (bit)
+ clear_bit(bit, dmap->map);
+}
+
+static inline int dbitmap_init(struct dbitmap *dmap)
+{
+ dmap->map = bitmap_zalloc(NBITS_MIN, GFP_KERNEL);
+ if (!dmap->map) {
+ dmap->nbits = 0;
+ return -ENOMEM;
+ }
+
+ dmap->nbits = NBITS_MIN;
+ /* BIT(0) is reserved for the context manager */
+ set_bit(0, dmap->map);
+
+ return 0;
+}
+#endif
--
2.45.2.505.gda0bf45e8d-goog


2024-06-13 16:51:51

by Markus Elfring

Subject: Re: [PATCH v5] binder: use bitmap for faster descriptor lookup


> +++ b/drivers/android/binder.c

> +static void try_shrink_dmap(struct binder_proc *proc)
> +{

> + binder_proc_lock(proc);
> + nbits = dbitmap_shrink_nbits(&proc->dmap);
> + binder_proc_unlock(proc);

> + new = bitmap_zalloc(nbits, GFP_KERNEL);
> + binder_proc_lock(proc);
> + dbitmap_shrink(&proc->dmap, new, nbits);
> + binder_proc_unlock(proc);
> +}


Would you become interested to apply lock guards?
https://elixir.bootlin.com/linux/v6.10-rc3/source/include/linux/cleanup.h#L124

Regards,
Markus

2024-06-14 09:16:54

by Greg Kroah-Hartman

Subject: Re: [PATCH v5] binder: use bitmap for faster descriptor lookup

On Thu, Jun 13, 2024 at 06:50:53PM +0200, Markus Elfring wrote:
> …
> > +++ b/drivers/android/binder.c
> …
> > +static void try_shrink_dmap(struct binder_proc *proc)
> > +{
> …
> > + binder_proc_lock(proc);
> > + nbits = dbitmap_shrink_nbits(&proc->dmap);
> > + binder_proc_unlock(proc);
> …
> > + new = bitmap_zalloc(nbits, GFP_KERNEL);
> > + binder_proc_lock(proc);
> > + dbitmap_shrink(&proc->dmap, new, nbits);
> > + binder_proc_unlock(proc);
> > +}
> …
>
> Would you become interested to apply lock guards?
> https://elixir.bootlin.com/linux/v6.10-rc3/source/include/linux/cleanup.h#L124


Hi,

This is the semi-friendly patch-bot of Greg Kroah-Hartman.

Markus, you seem to have sent a nonsensical or otherwise pointless
review comment to a patch submission on a Linux kernel developer mailing
list. I strongly suggest that you not do this anymore. Please do not
bother developers who are actively working to produce patches and
features with comments that, in the end, are a waste of time.

Patch submitter, please ignore Markus's suggestion; you do not need to
follow it at all. The person/bot/AI that sent it is being ignored by
almost all Linux kernel maintainers for having a persistent pattern of
behavior of producing distracting and pointless commentary, and
inability to adapt to feedback. Please feel free to also ignore emails
from them.

thanks,

greg k-h's patch email bot