2013-03-27 13:18:19

by Jeff Layton

Subject: [PATCH v1 0/6] idr: add idr_alloc_cyclic and convert existing cyclic users

(Andrew, I think this is probably best routed via -mm since it touches
several different places)

As Tejun points out, there are several users of the IDR facility that
attempt to use it in a cyclic fashion. However, these users are likely
to see -ENOSPC errors after the counter wraps one or more times.

This patchset adds a new idr_alloc_cyclic routine and converts several
of these users to it. Many of these users are in obscure parts of the
kernel, and I don't have a good way to test some of them. The change is
pretty straightforward though, so hopefully it won't be an issue.
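
To give a feel for the shape of these conversions, a converted call site
ends up looking roughly like the sketch below (illustrative only --
"example_idr" and "example_lock" are made-up names, and the real call
sites are in the individual patches):

static int example_alloc_id(struct idr *example_idr, spinlock_t *example_lock,
                            void *ptr)
{
        int id;

        idr_preload(GFP_KERNEL);
        spin_lock(example_lock);
        /* the idr now tracks the "last allocated" cursor internally */
        id = idr_alloc_cyclic(example_idr, ptr, 0, 0, GFP_NOWAIT);
        spin_unlock(example_lock);
        idr_preload_end();
        return id;
}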

There is one other cyclic user of idr_alloc that I didn't touch, in
ipc/util.c. That one is doing some things I didn't quite understand,
but it looks like it should probably be converted later as well.

Jeff Layton (6):
idr: introduce idr_alloc_cyclic
amso1100: convert to using idr_alloc_cyclic
mlx4: convert to using idr_alloc_cyclic
nfsd: convert nfs4_alloc_stid to use idr_alloc_cyclic
inotify: convert inotify_add_to_idr to use idr_alloc_cyclic
sctp: convert sctp_assoc_set_id to use idr_alloc_cyclic

drivers/infiniband/hw/amso1100/c2.h | 1 -
drivers/infiniband/hw/amso1100/c2_qp.c | 5 ++--
drivers/infiniband/hw/mlx4/cm.c | 6 ++---
fs/nfsd/nfs4state.c | 9 ++-----
fs/notify/inotify/inotify_user.c | 10 +++-----
include/linux/fsnotify_backend.h | 1 -
include/linux/idr.h | 10 +++++++-
lib/idr.c | 47 +++++++++++++++++++++++++++++++---
net/sctp/associola.c | 15 +----------
net/sctp/protocol.c | 2 +-
10 files changed, 63 insertions(+), 43 deletions(-)

--
1.7.11.7


2013-03-27 13:18:28

by Jeff Layton

Subject: [PATCH v1 6/6] sctp: convert sctp_assoc_set_id to use idr_alloc_cyclic

(Note: compile-tested only)

Signed-off-by: Jeff Layton <[email protected]>
Cc: Vlad Yasevich <[email protected]>
Cc: Sridhar Samudrala <[email protected]>
Cc: Neil Horman <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
net/sctp/associola.c | 15 +--------------
net/sctp/protocol.c | 2 +-
2 files changed, 2 insertions(+), 15 deletions(-)

diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index d2709e2..8b21563 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -66,13 +66,6 @@ static void sctp_assoc_bh_rcv(struct work_struct *work);
static void sctp_assoc_free_asconf_acks(struct sctp_association *asoc);
static void sctp_assoc_free_asconf_queue(struct sctp_association *asoc);

-/* Keep track of the new idr low so that we don't re-use association id
- * numbers too fast. It is protected by they idr spin lock is in the
- * range of 1 - INT_MAX.
- */
-static u32 idr_low = 1;
-
-
/* 1st Level Abstractions. */

/* Initialize a new association from provided memory. */
@@ -1601,13 +1594,7 @@ int sctp_assoc_set_id(struct sctp_association *asoc, gfp_t gfp)
if (preload)
idr_preload(gfp);
spin_lock_bh(&sctp_assocs_id_lock);
- /* 0 is not a valid id, idr_low is always >= 1 */
- ret = idr_alloc(&sctp_assocs_id, asoc, idr_low, 0, GFP_NOWAIT);
- if (ret >= 0) {
- idr_low = ret + 1;
- if (idr_low == INT_MAX)
- idr_low = 1;
- }
+ ret = idr_alloc_cyclic(&sctp_assocs_id, asoc, 1, 0, GFP_NOWAIT);
spin_unlock_bh(&sctp_assocs_id_lock);
if (preload)
idr_preload_end();
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index 1c2e46c..bc7b069 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -1352,7 +1352,7 @@ SCTP_STATIC __init int sctp_init(void)
sctp_max_outstreams = SCTP_DEFAULT_OUTSTREAMS;

/* Initialize handle used for association ids. */
- idr_init(&sctp_assocs_id);
+ idr_init_cyclic(&sctp_assocs_id, 1);

limit = nr_free_buffer_pages() / 8;
limit = max(limit, 128UL);
--
1.7.11.7

2013-03-27 13:18:25

by Jeff Layton

Subject: [PATCH v1 4/6] nfsd: convert nfs4_alloc_stid to use idr_alloc_cyclic

Signed-off-by: Jeff Layton <[email protected]>
Cc: "J. Bruce Fields" <[email protected]>
Cc: [email protected]
---
fs/nfsd/nfs4state.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 2e27430..2efb034 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -234,7 +234,6 @@ static struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl, struct
kmem_cache *slab)
{
struct idr *stateids = &cl->cl_stateids;
- static int min_stateid = 0;
struct nfs4_stid *stid;
int new_id;

@@ -242,7 +241,7 @@ kmem_cache *slab)
if (!stid)
return NULL;

- new_id = idr_alloc(stateids, stid, min_stateid, 0, GFP_KERNEL);
+ new_id = idr_alloc_cyclic(stateids, stid, 0, 0, GFP_KERNEL);
if (new_id < 0)
goto out_free;
stid->sc_client = cl;
@@ -261,10 +260,6 @@ kmem_cache *slab)
* amount of time until an id is reused, by ensuring they always
* "increase" (mod INT_MAX):
*/
-
- min_stateid = new_id+1;
- if (min_stateid == INT_MAX)
- min_stateid = 0;
return stid;
out_free:
kfree(stid);
@@ -1287,7 +1282,7 @@ static struct nfs4_client *create_client(struct xdr_netobj name,
spin_unlock(&nn->client_lock);
return NULL;
}
- idr_init(&clp->cl_stateids);
+ idr_init_cyclic(&clp->cl_stateids, 0);
atomic_set(&clp->cl_refcount, 0);
clp->cl_cb_state = NFSD4_CB_UNKNOWN;
INIT_LIST_HEAD(&clp->cl_idhash);
--
1.7.11.7

2013-03-27 13:18:22

by Jeff Layton

Subject: [PATCH v1 2/6] amso1100: convert to using idr_alloc_cyclic

(Note: compile-tested only)

Signed-off-by: Jeff Layton <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Steve Wise <[email protected]>
Cc: Tom Tucker <[email protected]>
Cc: [email protected]
---
drivers/infiniband/hw/amso1100/c2.h | 1 -
drivers/infiniband/hw/amso1100/c2_qp.c | 5 ++---
2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/amso1100/c2.h b/drivers/infiniband/hw/amso1100/c2.h
index ba7a1208..d619d73 100644
--- a/drivers/infiniband/hw/amso1100/c2.h
+++ b/drivers/infiniband/hw/amso1100/c2.h
@@ -265,7 +265,6 @@ struct c2_pd_table {
struct c2_qp_table {
struct idr idr;
spinlock_t lock;
- int last;
};

struct c2_element {
diff --git a/drivers/infiniband/hw/amso1100/c2_qp.c b/drivers/infiniband/hw/amso1100/c2_qp.c
index 0ab826b..703b33f 100644
--- a/drivers/infiniband/hw/amso1100/c2_qp.c
+++ b/drivers/infiniband/hw/amso1100/c2_qp.c
@@ -385,8 +385,7 @@ static int c2_alloc_qpn(struct c2_dev *c2dev, struct c2_qp *qp)
idr_preload(GFP_KERNEL);
spin_lock_irq(&c2dev->qp_table.lock);

- ret = idr_alloc(&c2dev->qp_table.idr, qp, c2dev->qp_table.last++, 0,
- GFP_NOWAIT);
+ ret = idr_alloc_cyclic(&c2dev->qp_table.idr, qp, 0, 0, GFP_NOWAIT);
if (ret >= 0)
qp->qpn = ret;

@@ -1016,7 +1015,7 @@ out:
void c2_init_qp_table(struct c2_dev *c2dev)
{
spin_lock_init(&c2dev->qp_table.lock);
- idr_init(&c2dev->qp_table.idr);
+ idr_init_cyclic(&c2dev->qp_table.idr, 0);
}

void c2_cleanup_qp_table(struct c2_dev *c2dev)
--
1.7.11.7

2013-03-27 13:18:57

by Jeff Layton

Subject: [PATCH v1 5/6] inotify: convert inotify_add_to_idr to use idr_alloc_cyclic

Signed-off-by: Jeff Layton <[email protected]>
Cc: John McCutchan <[email protected]>
Cc: Robert Love <[email protected]>
Cc: Eric Paris <[email protected]>
---
fs/notify/inotify/inotify_user.c | 10 +++-------
include/linux/fsnotify_backend.h | 1 -
2 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
index e0f7c12..6940434 100644
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -359,7 +359,6 @@ static int inotify_find_inode(const char __user *dirname, struct path *path, uns
}

static int inotify_add_to_idr(struct idr *idr, spinlock_t *idr_lock,
- int *last_wd,
struct inotify_inode_mark *i_mark)
{
int ret;
@@ -367,11 +366,10 @@ static int inotify_add_to_idr(struct idr *idr, spinlock_t *idr_lock,
idr_preload(GFP_KERNEL);
spin_lock(idr_lock);

- ret = idr_alloc(idr, i_mark, *last_wd + 1, 0, GFP_NOWAIT);
+ ret = idr_alloc_cyclic(idr, i_mark, 0, 0, GFP_NOWAIT);
if (ret >= 0) {
/* we added the mark to the idr, take a reference */
i_mark->wd = ret;
- *last_wd = i_mark->wd;
fsnotify_get_mark(&i_mark->fsn_mark);
}

@@ -638,8 +636,7 @@ static int inotify_new_watch(struct fsnotify_group *group,
if (atomic_read(&group->inotify_data.user->inotify_watches) >= inotify_max_user_watches)
goto out_err;

- ret = inotify_add_to_idr(idr, idr_lock, &group->inotify_data.last_wd,
- tmp_i_mark);
+ ret = inotify_add_to_idr(idr, idr_lock, tmp_i_mark);
if (ret)
goto out_err;

@@ -696,8 +693,7 @@ static struct fsnotify_group *inotify_new_group(unsigned int max_events)
group->max_events = max_events;

spin_lock_init(&group->inotify_data.idr_lock);
- idr_init(&group->inotify_data.idr);
- group->inotify_data.last_wd = 0;
+ idr_init_cyclic(&group->inotify_data.idr, 0);
group->inotify_data.user = get_current_user();

if (atomic_inc_return(&group->inotify_data.user->inotify_devs) >
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index d5b0910..4b2ee8d 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -157,7 +157,6 @@ struct fsnotify_group {
struct inotify_group_private_data {
spinlock_t idr_lock;
struct idr idr;
- u32 last_wd;
struct user_struct *user;
} inotify_data;
#endif
--
1.7.11.7

2013-03-27 13:19:23

by Jeff Layton

Subject: [PATCH v1 3/6] mlx4: convert to using idr_alloc_cyclic

(Note: compile-tested only)

Signed-off-by: Jeff Layton <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jack Morgenstein <[email protected]>
Cc: Or Gerlitz <[email protected]>
Cc: Roland Dreier <[email protected]>
Cc: [email protected]
---
drivers/infiniband/hw/mlx4/cm.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
index add98d0..524e168 100644
--- a/drivers/infiniband/hw/mlx4/cm.c
+++ b/drivers/infiniband/hw/mlx4/cm.c
@@ -204,7 +204,6 @@ static struct id_map_entry *
id_map_alloc(struct ib_device *ibdev, int slave_id, u32 sl_cm_id)
{
int ret;
- static int next_id;
struct id_map_entry *ent;
struct mlx4_ib_sriov *sriov = &to_mdev(ibdev)->sriov;

@@ -223,9 +222,8 @@ id_map_alloc(struct ib_device *ibdev, int slave_id, u32 sl_cm_id)
idr_preload(GFP_KERNEL);
spin_lock(&to_mdev(ibdev)->sriov.id_map_lock);

- ret = idr_alloc(&sriov->pv_id_table, ent, next_id, 0, GFP_NOWAIT);
+ ret = idr_alloc_cyclic(&sriov->pv_id_table, ent, 0, 0, GFP_NOWAIT);
if (ret >= 0) {
- next_id = max(ret + 1, 0);
ent->pv_cm_id = (u32)ret;
sl_id_map_add(ibdev, ent);
list_add_tail(&ent->list, &sriov->cm_list);
@@ -361,7 +359,7 @@ void mlx4_ib_cm_paravirt_init(struct mlx4_ib_dev *dev)
spin_lock_init(&dev->sriov.id_map_lock);
INIT_LIST_HEAD(&dev->sriov.cm_list);
dev->sriov.sl_id_map = RB_ROOT;
- idr_init(&dev->sriov.pv_id_table);
+ idr_init_cyclic(&dev->sriov.pv_id_table, 0);
}

/* slave = -1 ==> all slaves */
--
1.7.11.7

2013-03-27 13:19:55

by Jeff Layton

Subject: [PATCH v1 1/6] idr: introduce idr_alloc_cyclic

Thus spake Tejun Heo:

Ooh, BTW, the cyclic allocation is broken. It's prone to -ENOSPC
after the first wraparound. There are several cyclic users in the
kernel and I think it probably would be best to implement cyclic
support in idr.

This patch does that by adding a new idr_alloc_cyclic function that such
users in the kernel can use. With this, there's no need for a caller to
keep track of the last value used, as that's now tracked internally.
This should prevent the -ENOSPC problems that can occur when the "last
allocated" counter exceeds INT_MAX.

Later patches will convert existing cyclic users to the new interface.
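
For context, the pattern being replaced in those callers is roughly the
following (a paraphrase with made-up names, not a verbatim call site):

static int example_next_id;     /* caller-private cursor; only ever grows */

static int example_old_alloc(struct idr *idr, void *ptr)
{
        int id;

        id = idr_alloc(idr, ptr, example_next_id, 0, GFP_NOWAIT);
        if (id >= 0)
                example_next_id = id + 1;
        return id;
}

Since nothing ever moves example_next_id back toward the bottom of the
range, such callers are prone to -ENOSPC once the cursor approaches the
top of the id space, even though lower ids freed long ago may be
available again.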

Cc: Tejun Heo <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
---
include/linux/idr.h | 10 +++++++++-
lib/idr.c | 47 +++++++++++++++++++++++++++++++++++++++++++----
2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/include/linux/idr.h b/include/linux/idr.h
index 2640c7e..01752b1 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -42,6 +42,7 @@ struct idr {
struct idr_layer *id_free;
int layers; /* only valid w/o concurrent changes */
int id_free_cnt;
+ int cur; /* current pos for cyclic allocation */
spinlock_t lock;
};

@@ -75,6 +76,7 @@ struct idr {
void *idr_find_slowpath(struct idr *idp, int id);
void idr_preload(gfp_t gfp_mask);
int idr_alloc(struct idr *idp, void *ptr, int start, int end, gfp_t gfp_mask);
+int idr_alloc_cyclic(struct idr *idr, void *ptr, int start, int end, gfp_t gfp_mask);
int idr_for_each(struct idr *idp,
int (*fn)(int id, void *p, void *data), void *data);
void *idr_get_next(struct idr *idp, int *nextid);
@@ -82,7 +84,13 @@ void *idr_replace(struct idr *idp, void *ptr, int id);
void idr_remove(struct idr *idp, int id);
void idr_free(struct idr *idp, int id);
void idr_destroy(struct idr *idp);
-void idr_init(struct idr *idp);
+void idr_init_cyclic(struct idr *idp, int start);
+
+static inline void
+idr_init(struct idr *idp)
+{
+ idr_init_cyclic(idp, 0);
+}

/**
* idr_preload_end - end preload section started with idr_preload()
diff --git a/lib/idr.c b/lib/idr.c
index 322e281..992f53f 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -495,6 +495,44 @@ int idr_alloc(struct idr *idr, void *ptr, int start, int end, gfp_t gfp_mask)
}
EXPORT_SYMBOL_GPL(idr_alloc);

+/**
+ * idr_alloc_cyclic - allocate new idr entry in a cyclical fashion
+ * @idr: the (initialized) idr
+ * @ptr: pointer to be associated with the new id
+ * @start: the minimum id (inclusive)
+ * @end: the maximum id (exclusive, <= 0 for max)
+ * @cur: ptr to current position in the range (typically, last allocated + 1)
+ * @gfp_mask: memory allocation flags
+ *
+ * Essentially the same as idr_alloc, but prefers to allocate progressively
+ * higher ids if it can. If the "cur" counter wraps, then it will start again
+ * at the "start" end of the range and allocate one that has already been used.
+ *
+ * Note that people using cyclic allocation to avoid premature reuse of an
+ * already-used ID may be in for a nasty surprise after idr->cur wraps. The
+ * IDR code is designed to avoid unnecessary allocations. If there is space
+ * in an existing layer that holds high IDs then it will return one of those
+ * instead of allocating a new layer at the bottom of the range.
+ */
+int idr_alloc_cyclic(struct idr *idr, void *ptr, int start, int end,
+ gfp_t gfp_mask)
+{
+ int id;
+ int cur = idr->cur;
+
+ if (unlikely(start > cur))
+ cur = start;
+
+ id = idr_alloc(idr, ptr, cur, end, gfp_mask);
+ if (id == -ENOSPC)
+ id = idr_alloc(idr, ptr, start, end, gfp_mask);
+
+ if (likely(id >= 0))
+ idr->cur = id + 1;
+ return id;
+}
+EXPORT_SYMBOL(idr_alloc_cyclic);
+
static void idr_remove_warning(int id)
{
printk(KERN_WARNING
@@ -831,19 +869,20 @@ void __init idr_init_cache(void)
}

/**
- * idr_init - initialize idr handle
+ * idr_init_cyclic - initialize idr handle
* @idp: idr handle
+ * @start: starting id value for cyclic users
*
* This function is use to set up the handle (@idp) that you will pass
* to the rest of the functions.
*/
-void idr_init(struct idr *idp)
+void idr_init_cyclic(struct idr *idp, int start)
{
memset(idp, 0, sizeof(struct idr));
spin_lock_init(&idp->lock);
+ idp->cur = start;
}
-EXPORT_SYMBOL(idr_init);
-
+EXPORT_SYMBOL(idr_init_cyclic);

/**
* DOC: IDA description
--
1.7.11.7

2013-03-27 16:26:01

by Tejun Heo

Subject: Re: [PATCH v1 1/6] idr: introduce idr_alloc_cyclic

Hello, Jeff.

On Wed, Mar 27, 2013 at 09:18:03AM -0400, Jeff Layton wrote:
> + * Note that people using cyclic allocation to avoid premature reuse of an
> + * already-used ID may be in for a nasty surprise after idr->cur wraps. The
> + * IDR code is designed to avoid unnecessary allocations. If there is space
> + * in an existing layer that holds high IDs then it will return one of those
> + * instead of allocating a new layer at the bottom of the range.

Ooh, does it? Where?

> +int idr_alloc_cyclic(struct idr *idr, void *ptr, int start, int end,
> + gfp_t gfp_mask)
> +{
> + int id;
> + int cur = idr->cur;
> +
> + if (unlikely(start > cur))
> + cur = start;
> +
> + id = idr_alloc(idr, ptr, cur, end, gfp_mask);

Would max(id->cur, start) be easier to follow?

> + if (id == -ENOSPC)
> + id = idr_alloc(idr, ptr, start, end, gfp_mask);
> +
> + if (likely(id >= 0))
> + idr->cur = id + 1;

If @id is INT_MAX, idr->cur will be -1 which is okay as start > cur
test above will correct it on the next iteration but maybe we can do
idr->cur = max(id + 1, 0); for clarity?

Both my points are cosmetic and the patch looks good to me.

Reviewed-by: Tejun Heo <[email protected]>

Thanks!

--
tejun

2013-03-27 16:28:14

by Tejun Heo

Subject: Re: [PATCH v1 2/6] amso1100: convert to using idr_alloc_cyclic

On Wed, Mar 27, 2013 at 09:18:04AM -0400, Jeff Layton wrote:
> void c2_init_qp_table(struct c2_dev *c2dev)
> {
> spin_lock_init(&c2dev->qp_table.lock);
> - idr_init(&c2dev->qp_table.idr);
> + idr_init_cyclic(&c2dev->qp_table.idr, 0);
> }

Why is this necessary? In general, why is idr_init_cyclic()
necessary?

Thanks.

--
tejun

2013-03-27 16:48:13

by Jeff Layton

Subject: Re: [PATCH v1 1/6] idr: introduce idr_alloc_cyclic

On Wed, 27 Mar 2013 09:25:53 -0700
Tejun Heo <[email protected]> wrote:

> Hello, Jeff.
>
> On Wed, Mar 27, 2013 at 09:18:03AM -0400, Jeff Layton wrote:
> > + * Note that people using cyclic allocation to avoid premature reuse of an
> > + * already-used ID may be in for a nasty surprise after idr->cur wraps. The
> > + * IDR code is designed to avoid unnecessary allocations. If there is space
> > + * in an existing layer that holds high IDs then it will return one of those
> > + * instead of allocating a new layer at the bottom of the range.
>
> Ooh, does it? Where?
>

That's what I gathered from looking at idr_get_empty_slot. I could be
wrong here, so please correct me if I am. The IDR internals are really
hard to follow...

In any case, it looks like it only tries to allocate a new layer if:

idr->top is empty

...or...

while (id > idr_max(layers)) {
...
}

After the wrap, idr->top won't be empty if we have at least one layer
still in use. We start with id = starting_id, which after wrap will be
much lower than idr_max() at that point (right?).

So we'll skip the while loop and fall right into sub_alloc and will
quite possibly end up allocating a slot out of the current layer, which
is almost certainly not near the bottom of the range.

Again, I'm far from sure of my understanding of the internals here, so
please do correct me if that's not right...

> > +int idr_alloc_cyclic(struct idr *idr, void *ptr, int start, int end,
> > + gfp_t gfp_mask)
> > +{
> > + int id;
> > + int cur = idr->cur;
> > +
> > + if (unlikely(start > cur))
> > + cur = start;
> > +
> > + id = idr_alloc(idr, ptr, cur, end, gfp_mask);
>
> Would max(id->cur, start) be easier to follow?
>

Sure. I also noticed that the kerneldoc has an extra parm in it, so I
need to fix that too.

> > + if (id == -ENOSPC)
> > + id = idr_alloc(idr, ptr, start, end, gfp_mask);
> > +
> > + if (likely(id >= 0))
> > + idr->cur = id + 1;
>
> If @id is INT_MAX, idr->cur will be -1 which is okay as start > cur
> test above will correct it on the next iteration but maybe we can do
> idr->cur = max(id + 1, 0); for clarity?
>

We could, but that means we'll have to evaluate that max() on every
call into here. I think it's more efficient overall to just do the
retry when we hit INT_MAX since that should be pretty rare.
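
For reference, the clamping variant discussed above would amount to
something like this (a sketch of the alternative only, not what the v1
patch does):

        if (likely(id >= 0))
                idr->cur = max(id + 1, 0);      /* INT_MAX wrap: clamp cursor back to 0 */

That's an extra max() evaluation on every successful allocation, versus
letting "cur" briefly go out of range and relying on the "start > cur"
check at the top of the next call to pull it back.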

--
Jeff Layton <[email protected]>

2013-03-27 16:50:34

by Jeff Layton

Subject: Re: [PATCH v1 2/6] amso1100: convert to using idr_alloc_cyclic

On Wed, 27 Mar 2013 09:27:55 -0700
Tejun Heo <[email protected]> wrote:

> On Wed, Mar 27, 2013 at 09:18:04AM -0400, Jeff Layton wrote:
> > void c2_init_qp_table(struct c2_dev *c2dev)
> > {
> > spin_lock_init(&c2dev->qp_table.lock);
> > - idr_init(&c2dev->qp_table.idr);
> > + idr_init_cyclic(&c2dev->qp_table.idr, 0);
> > }
>
> Why is this necessary? In general, why is idr_init_cyclic()
> necessary?
>
> Thanks.
>

My thinking was that you might want to initialize the "cur" value to an
arbitrary value. All the current users though initialize it to the same
as the "start" value passed into idr_alloc_cyclic. Starting with it at
0 should be fine in all of the existing users.

I'll remove that in v2...

Thanks!
--
Jeff Layton <[email protected]>

2013-03-27 17:01:50

by Tejun Heo

Subject: Re: [PATCH v1 1/6] idr: introduce idr_alloc_cyclic

Hello,

On Wed, Mar 27, 2013 at 12:48:04PM -0400, Jeff Layton wrote:
> > > + * Note that people using cyclic allocation to avoid premature reuse of an
> > > + * already-used ID may be in for a nasty surprise after idr->cur wraps. The
> > > + * IDR code is designed to avoid unnecessary allocations. If there is space
> > > + * in an existing layer that holds high IDs then it will return one of those
> > > + * instead of allocating a new layer at the bottom of the range.
> >
> > Ooh, does it? Where?
> >
>
> That's what I gathered from looking at idr_get_empty_slot. I could be
> wrong here, so please correct me if I am. The IDR internals are really
> hard to follow...

Amen, it's horrible.

> In any case, it looks like it only tries to allocate a new layer if:
>
> idr->top is empty
>
> ...or...
>
> while (id > idr_max(layers)) {
> ...
> }
>
> After the wrap, idr->top won't be empty if we have at least one layer
> still in use. We start with id = starting_id, which after wrap will be
> much lower than idr_max() at that point (right?).
>
> So we'll skip the while loop and fall right into sub_alloc and will
> quite possibly end up allocating a slot out of the current layer, which
> is almost certainly not near the bottom of the range.
>
> Again, I'm far from sure of my understanding of the internals here, so
> please do correct me if that's not right...

So, there are two paths which do layer allocation.
idr_get_empty_slot() does bottom -> top expansion. ie. it grows the
tree if the current position can't be covered by the current tree.
Note that the tree can always point to zero. That is, the first slot
of the top layer always includes zero.

The other part - building tree top -> bottom - happens in sub_alloc()
which traverses the tree downwards for the current position and
creates new idr_layer if it doesn't exist.

So, after wrap, the tree is already tall enough so
idr_get_empty_slot() will just call into sub_alloc() which will build
the tree downwards. AFAICS, it does guarantee lowest-packing.

Thanks.

--
tejun

2013-03-27 17:21:46

by Jeff Layton

Subject: Re: [PATCH v1 1/6] idr: introduce idr_alloc_cyclic

On Wed, 27 Mar 2013 10:01:35 -0700
Tejun Heo <[email protected]> wrote:

> Hello,
>
> On Wed, Mar 27, 2013 at 12:48:04PM -0400, Jeff Layton wrote:
> > > > + * Note that people using cyclic allocation to avoid premature reuse of an
> > > > + * already-used ID may be in for a nasty surprise after idr->cur wraps. The
> > > > + * IDR code is designed to avoid unnecessary allocations. If there is space
> > > > + * in an existing layer that holds high IDs then it will return one of those
> > > > + * instead of allocating a new layer at the bottom of the range.
> > >
> > > Ooh, does it? Where?
> > >
> >
> > That's what I gathered from looking at idr_get_empty_slot. I could be
> > wrong here, so please correct me if I am. The IDR internals are really
> > hard to follow...
>
> Amen, it's horrible.
>
> > In any case, it looks like it only tries to allocate a new layer if:
> >
> > idr->top is empty
> >
> > ...or...
> >
> > while (id > idr_max(layers)) {
> > ...
> > }
> >
> > After the wrap, idr->top won't be empty if we have at least one layer
> > still in use. We start with id = starting_id, which after wrap will be
> > much lower than idr_max() at that point (right?).
> >
> > So we'll skip the while loop and fall right into sub_alloc and will
> > quite possibly end up allocating a slot out of the current layer, which
> > is almost certainly not near the bottom of the range.
> >
> > Again, I'm far from sure of my understanding of the internals here, so
> > please do correct me if that's not right...
>
> So, there are two paths which do layer allocation.
> idr_get_empty_slot() does bottom -> top expansion. ie. it grows the
> tree if the current position can't be covered by the current tree.
> Note that the tree can always point to zero. That is, the first slot
> of the top layer always includes zero.
>
> The other part - building tree top -> bottom - happens in sub_alloc()
> which traverses the tree downwards for the current position and
> creates new idr_layer if it doesn't exist.
>
> So, after wrap, the tree is already tall enough so
> idr_get_empty_slot() will just call into sub_alloc() which will build
> the tree downwards. AFAICS, it does guarantee lowest-packing.
>

Ok, that's good to know, and I'll remove the comment for the v2 patch.
I'm glad to be wrong in this case :)

--
Jeff Layton <[email protected]>

2013-04-03 19:24:06

by J. Bruce Fields

Subject: Re: [PATCH v1 4/6] nfsd: convert nfs4_alloc_stid to use idr_alloc_cyclic

ACK.

--b.

On Wed, Mar 27, 2013 at 09:18:06AM -0400, Jeff Layton wrote:
> Signed-off-by: Jeff Layton <[email protected]>
> Cc: "J. Bruce Fields" <[email protected]>
> Cc: [email protected]
> ---
> fs/nfsd/nfs4state.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 2e27430..2efb034 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -234,7 +234,6 @@ static struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl, struct
> kmem_cache *slab)
> {
> struct idr *stateids = &cl->cl_stateids;
> - static int min_stateid = 0;
> struct nfs4_stid *stid;
> int new_id;
>
> @@ -242,7 +241,7 @@ kmem_cache *slab)
> if (!stid)
> return NULL;
>
> - new_id = idr_alloc(stateids, stid, min_stateid, 0, GFP_KERNEL);
> + new_id = idr_alloc_cyclic(stateids, stid, 0, 0, GFP_KERNEL);
> if (new_id < 0)
> goto out_free;
> stid->sc_client = cl;
> @@ -261,10 +260,6 @@ kmem_cache *slab)
> * amount of time until an id is reused, by ensuring they always
> * "increase" (mod INT_MAX):
> */
> -
> - min_stateid = new_id+1;
> - if (min_stateid == INT_MAX)
> - min_stateid = 0;
> return stid;
> out_free:
> kfree(stid);
> @@ -1287,7 +1282,7 @@ static struct nfs4_client *create_client(struct xdr_netobj name,
> spin_unlock(&nn->client_lock);
> return NULL;
> }
> - idr_init(&clp->cl_stateids);
> + idr_init_cyclic(&clp->cl_stateids, 0);
> atomic_set(&clp->cl_refcount, 0);
> clp->cl_cb_state = NFSD4_CB_UNKNOWN;
> INIT_LIST_HEAD(&clp->cl_idhash);
> --
> 1.7.11.7
>