2009-04-29 06:54:15

by Nikanth Karthikesan

Subject: [PATCH][RFC] Handle improbable possibility of io_context->refcount overflow

Hi Jens

Currently io_context uses an atomic_t (int) as its refcount. With cfq, a
reference to the io_context is taken for each device a task does I/O
against. In addition, when multiple processes share an io_context
(CLONE_IO), each of them holds a reference to the same io_context.
Theoretically, the maximum possible number of processes sharing the same
io_context plus the number of disks/cfq_data referring to the same
io_context can overflow the 32-bit counter on a very high-end machine.
Even though this is an improbable case, let us make it difficult by
changing the refcount to atomic64_t (long).

Signed-off-by: Nikanth Karthikesan <[email protected]>

---

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 012f065..4f5ff79 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -35,9 +35,9 @@ int put_io_context(struct io_context *ioc)
if (ioc == NULL)
return 1;

- BUG_ON(atomic_read(&ioc->refcount) == 0);
+ BUG_ON(atomic64_read(&ioc->refcount) == 0);

- if (atomic_dec_and_test(&ioc->refcount)) {
+ if (atomic64_dec_and_test(&ioc->refcount)) {
rcu_read_lock();
if (ioc->aic && ioc->aic->dtor)
ioc->aic->dtor(ioc->aic);
@@ -151,7 +151,7 @@ struct io_context *get_io_context(gfp_t gfp_flags, int node)
ret = current_io_context(gfp_flags, node);
if (unlikely(!ret))
break;
- } while (!atomic_inc_not_zero(&ret->refcount));
+ } while (!atomic64_inc_not_zero(&ret->refcount));

return ret;
}
@@ -163,8 +163,8 @@ void copy_io_context(struct io_context **pdst, struct io_context **psrc)
struct io_context *dst = *pdst;

if (src) {
- BUG_ON(atomic_read(&src->refcount) == 0);
- atomic_inc(&src->refcount);
+ BUG_ON(atomic64_read(&src->refcount) == 0);
+ atomic64_inc(&src->refcount);
put_io_context(dst);
*pdst = src;
}
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..f829bd2 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1282,7 +1282,7 @@ static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)
if (!cfqd->active_cic) {
struct cfq_io_context *cic = RQ_CIC(rq);

- atomic_inc(&cic->ioc->refcount);
+ atomic64_inc(&cic->ioc->refcount);
cfqd->active_cic = cic;
}
}
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 08b987b..f1a57f4 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -64,7 +64,7 @@ struct cfq_io_context {
* and kmalloc'ed. These could be shared between processes.
*/
struct io_context {
- atomic_t refcount;
+ atomic64_t refcount;
atomic_t nr_tasks;

/* all the fields below are protected by this lock */
@@ -91,7 +91,7 @@ static inline struct io_context *ioc_task_link(struct io_context *ioc)
* if ref count is zero, don't allow sharing (ioc is going away, it's
* a race).
*/
- if (ioc && atomic_inc_not_zero(&ioc->refcount)) {
+ if (ioc && atomic64_inc_not_zero(&ioc->refcount)) {
atomic_inc(&ioc->nr_tasks);
return ioc;
}
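For concreteness, a minimal userspace sketch of the CLONE_IO sharing
described above (illustrative only, not part of the patch). Every child
created this way takes one more reference on the parent's io_context:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef CLONE_IO
#define CLONE_IO 0x80000000	/* from <linux/sched.h>, if libc lacks it */
#endif

static int worker(void *arg)
{
	/* this child shares the parent's io_context */
	return 0;
}

int main(void)
{
	const size_t stack_sz = 64 * 1024;
	char *stack = malloc(stack_sz);

	if (!stack)
		return 1;
	/* CLONE_IO: share the parent's I/O context instead of copying it */
	if (clone(worker, stack + stack_sz, CLONE_IO | SIGCHLD, NULL) < 0)
		perror("clone");
	return 0;
}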


2009-04-29 08:01:32

by Andrew Morton

Subject: Re: [PATCH][RFC] Handle improbable possibility of io_context->refcount overflow

On Wed, 29 Apr 2009 12:21:39 +0530 Nikanth Karthikesan <[email protected]> wrote:

> Hi Jens
>
> Currently io_context uses an atomic_t (int) as its refcount. With cfq, a
> reference to the io_context is taken for each device a task does I/O
> against. In addition, when multiple processes share an io_context
> (CLONE_IO), each of them holds a reference to the same io_context.
> Theoretically, the maximum possible number of processes sharing the same
> io_context plus the number of disks/cfq_data referring to the same
> io_context can overflow the 32-bit counter on a very high-end machine.
> Even though this is an improbable case, let us make it difficult by
> changing the refcount to atomic64_t (long).
>

Sorry, atomic64_t isn't implemented on 32 bit architectures.

Perhaps it should be, but I expect it'd be pretty slow.
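To see why it'd be slow: a 32-bit arch without a native 64-bit atomic op
has to emulate atomic64_t by wrapping a plain 64-bit counter in a lock,
so every inc/dec/read pays for a lock round-trip. A rough sketch of such
an emulation (illustrative, not the kernel's actual implementation; the
lock would need spin_lock_init() before first use):

#include <linux/spinlock.h>

typedef struct {
	long long counter;
	spinlock_t lock;
} emulated_atomic64_t;

static inline void emulated_atomic64_inc(emulated_atomic64_t *v)
{
	spin_lock(&v->lock);		/* every operation takes the lock */
	v->counter++;
	spin_unlock(&v->lock);
}

static inline long long emulated_atomic64_read(emulated_atomic64_t *v)
{
	long long ret;

	spin_lock(&v->lock);		/* a 64-bit load isn't atomic here */
	ret = v->counter;
	spin_unlock(&v->lock);
	return ret;
}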

2009-04-29 10:05:41

by Nikanth Karthikesan

Subject: Re: [PATCH][RFC] Handle improbable possibility of io_context->refcount overflow

On Wednesday 29 April 2009 13:29:30 Andrew Morton wrote:
> On Wed, 29 Apr 2009 12:21:39 +0530 Nikanth Karthikesan <[email protected]> wrote:
> > Hi Jens
> >
> > Currently io_context uses an atomic_t (int) as its refcount. With cfq,
> > a reference to the io_context is taken for each device a task does I/O
> > against. In addition, when multiple processes share an io_context
> > (CLONE_IO), each of them holds a reference to the same io_context.
> > Theoretically, the maximum possible number of processes sharing the
> > same io_context plus the number of disks/cfq_data referring to the
> > same io_context can overflow the 32-bit counter on a very high-end
> > machine. Even though this is an improbable case, let us make it
> > difficult by changing the refcount to atomic64_t (long).
>
> Sorry, atomic64_t isn't implemented on 32 bit architectures.
>
> Perhaps it should be, but I expect it'd be pretty slow.

Oh! Sorry, I didn't notice the #ifdef earlier. I guess that's why there is
only a single in-tree user of atomic64_t!

In this case, could we make it atomic64_t only on 64-bit architectures and
keep it as atomic_t on 32-bit machines? Something like the attached patch.

I wonder whether we should also add BUG_ON()s whenever the refcount is about
to wrap? Or try to handle it gracefully. Another approach would be to impose
an artificial limit on the number of tasks that could share an io_context.
Or resort to lock protection. The problem is not very serious/common.
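
A hedged sketch of the BUG_ON idea, with a hypothetical helper name; note
that the read and the inc are not a single atomic step, so this narrows
the overflow window rather than closing it:

/* hypothetical: refuse to take a reference that would overflow */
static inline void ioc_get(struct io_context *ioc)
{
	BUG_ON(atomic_read(&ioc->refcount) == INT_MAX);
	atomic_inc(&ioc->refcount);
}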

Thanks
Nikanth

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 012f065..5be4585 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -35,9 +35,9 @@ int put_io_context(struct io_context *ioc)
if (ioc == NULL)
return 1;

- BUG_ON(atomic_read(&ioc->refcount) == 0);
+ BUG_ON(atomic_read_ioc_refcount(ioc) == 0);

- if (atomic_dec_and_test(&ioc->refcount)) {
+ if (atomic_dec_and_test_ioc_refcount(ioc)) {
rcu_read_lock();
if (ioc->aic && ioc->aic->dtor)
ioc->aic->dtor(ioc->aic);
@@ -151,7 +151,7 @@ struct io_context *get_io_context(gfp_t gfp_flags, int node)
ret = current_io_context(gfp_flags, node);
if (unlikely(!ret))
break;
- } while (!atomic_inc_not_zero(&ret->refcount));
+ } while (!atomic_inc_not_zero_ioc_refcount(ret));

return ret;
}
@@ -163,8 +163,8 @@ void copy_io_context(struct io_context **pdst, struct io_context **psrc)
struct io_context *dst = *pdst;

if (src) {
- BUG_ON(atomic_read(&src->refcount) == 0);
- atomic_inc(&src->refcount);
+ BUG_ON(atomic_read_ioc_refcount(src) == 0);
+ atomic_inc_ioc_refcount(src);
put_io_context(dst);
*pdst = src;
}
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..42d5018 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1282,7 +1282,7 @@ static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)
if (!cfqd->active_cic) {
struct cfq_io_context *cic = RQ_CIC(rq);

- atomic_inc(&cic->ioc->refcount);
+ atomic_inc_ioc_refcount(cic->ioc);
cfqd->active_cic = cic;
}
}
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 08b987b..bdc7156 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -64,7 +64,11 @@ struct cfq_io_context {
* and kmalloc'ed. These could be shared between processes.
*/
struct io_context {
+#ifdef CONFIG_64BIT
+ atomic64_t refcount;
+#else
atomic_t refcount;
+#endif
atomic_t nr_tasks;

/* all the fields below are protected by this lock */
@@ -85,14 +89,30 @@ struct io_context {
void *ioc_data;
};

+#ifdef CONFIG_64BIT
+
+#define atomic_read_ioc_refcount(ioc) atomic64_read(&ioc->refcount)
+#define atomic_inc_ioc_refcount(ioc) atomic64_inc(&ioc->refcount)
+#define atomic_dec_and_test_ioc_refcount(ioc) atomic64_dec_and_test(&ioc->refcount)
+#define atomic_inc_not_zero_ioc_refcount(ioc) atomic64_inc_not_zero(&ioc->refcount)
+
+#else
+
+#define atomic_read_ioc_refcount(ioc) atomic_read(&ioc->refcount)
+#define atomic_inc_ioc_refcount(ioc) atomic_inc(&ioc->refcount)
+#define atomic_dec_and_test_ioc_refcount(ioc) atomic_dec_and_test(&ioc->refcount)
+#define atomic_inc_not_zero_ioc_refcount(ioc) atomic_inc_not_zero(&ioc->refcount)
+
+#endif
+
static inline struct io_context *ioc_task_link(struct io_context *ioc)
{
/*
* if ref count is zero, don't allow sharing (ioc is going away, it's
* a race).
*/
- if (ioc && atomic_inc_not_zero(&ioc->refcount)) {
- atomic_inc(&ioc->nr_tasks);
+ if (ioc && atomic_inc_not_zero_ioc_refcount(ioc)) {
+ atomic_inc_ioc_refcount(ioc);
return ioc;
}

2009-04-29 15:17:59

by Andrew Morton

Subject: Re: [PATCH][RFC] Handle improbable possibility of io_context->refcount overflow

On Wed, 29 Apr 2009 15:33:06 +0530 Nikanth Karthikesan <[email protected]> wrote:

> On Wednesday 29 April 2009 13:29:30 Andrew Morton wrote:
> > On Wed, 29 Apr 2009 12:21:39 +0530 Nikanth Karthikesan <[email protected]> wrote:
> > > Hi Jens
> > >
> > > Currently io_context uses an atomic_t (int) as its refcount. With
> > > cfq, a reference to the io_context is taken for each device a task
> > > does I/O against. In addition, when multiple processes share an
> > > io_context (CLONE_IO), each of them holds a reference to the same
> > > io_context. Theoretically, the maximum possible number of processes
> > > sharing the same io_context plus the number of disks/cfq_data
> > > referring to the same io_context can overflow the 32-bit counter on
> > > a very high-end machine. Even though this is an improbable case,
> > > let us make it difficult by changing the refcount to atomic64_t (long).
> >
> > Sorry, atomic64_t isn't implemented on 32 bit architectures.
> >
> > Perhaps it should be, but I expect it'd be pretty slow.
>
> Oh! Sorry, I didn't notice the #ifdef earlier. I guess thats why there is only
> a single in-tree user for atomic64_t!

Yes, it's a bit irritating.

> In this case, could we make it atomic64_t only on 64-bit architectures and
> keep it as atomic_t on 32-bit machines?

Sure.

> Something like the attached patch.

Check out atomic_long_t ;)
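
atomic_long_t already does this per-word-size switch once, in
<asm-generic/atomic.h>. A condensed sketch of the pattern (the real
header covers many more operations and uses inline functions):

#if BITS_PER_LONG == 64

typedef atomic64_t atomic_long_t;
#define atomic_long_read(l)	atomic64_read((atomic64_t *)(l))
#define atomic_long_inc(l)	atomic64_inc((atomic64_t *)(l))

#else /* BITS_PER_LONG == 32 */

typedef atomic_t atomic_long_t;
#define atomic_long_read(l)	atomic_read((atomic_t *)(l))
#define atomic_long_inc(l)	atomic_inc((atomic_t *)(l))

#endif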

> I wonder whether we should also add BUG_ON()s whenever the refcount is
> about to wrap? Or try to handle it gracefully. Another approach would be
> to impose an artificial limit on the number of tasks that could share an
> io_context. Or resort to lock protection. The problem is not very
> serious/common.
>

For a long time there was a debug patch in -mm which would warn if
atomic_dec() ever took any atomic_t from zero to -1. I don't think it
ever triggered false positives and it did find a couple of bugs.

I forget what happened to the patch - probably it died when the atomic
code got altered.

It could well be that a similar kernel-wide check for atomic_inc()
overflows would be similarly useful.
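
A minimal reconstruction of what such a check could look like (assumed,
since the original -mm patch isn't quoted here):

/* warn when a decrement takes an atomic_t from 0 to -1 */
static inline void atomic_dec_checked(atomic_t *v)
{
	WARN_ON(atomic_dec_return(v) == -1);
}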

2009-04-30 07:30:52

by Nikanth Karthikesan

Subject: Re: [PATCH][RFC] Handle improbable possibility of io_context->refcount overflow

On Wednesday 29 April 2009 20:45:58 Andrew Morton wrote:
> On Wed, 29 Apr 2009 15:33:06 +0530 Nikanth Karthikesan <[email protected]> wrote:
> > On Wednesday 29 April 2009 13:29:30 Andrew Morton wrote:
> > > On Wed, 29 Apr 2009 12:21:39 +0530 Nikanth Karthikesan <[email protected]> wrote:
> > > > Hi Jens
> > > >
> > > > Currently io_context uses an atomic_t (int) as its refcount. With
> > > > cfq, a reference to the io_context is taken for each device a task
> > > > does I/O against. In addition, when multiple processes share an
> > > > io_context (CLONE_IO), each of them holds a reference to the same
> > > > io_context. Theoretically, the maximum possible number of processes
> > > > sharing the same io_context plus the number of disks/cfq_data
> > > > referring to the same io_context can overflow the 32-bit counter
> > > > on a very high-end machine. Even though this is an improbable
> > > > case, let us make it difficult by changing the refcount to
> > > > atomic64_t (long).
> > >
> > > Sorry, atomic64_t isn't implemented on 32 bit architectures.
> > >
> > > Perhaps it should be, but I expect it'd be pretty slow.
> >
> > Oh! Sorry, I didn't notice the #ifdef earlier. I guess that's why there
> > is only a single in-tree user of atomic64_t!
>
> Yes, it's a bit irritating.
>
> > In this case, could we make it atomic64_t only on 64-bit architectures
> > and keep it as atomic_t on 32-bit machines?
>
> Sure.
>
> > Something like the attached patch.
>
> Check out atomic_long_t ;)
>

Oh, thanks! I was about to re-invent it. :) Sending a patch using that in a
separate mail.

> > I wonder whether we should also add BUG_ON()s whenever the refcount is
> > about to wrap? Or try to handle it gracefully. Another approach would
> > be to impose an artificial limit on the number of tasks that could
> > share an io_context. Or resort to lock protection. The problem is not
> > very serious/common.
>
> For a long time there was a debug patch in -mm which would warn if
> atomic_dec() ever took any atomic_t from zero to -1. I don't think it
> ever triggered false positives and it did find a couple of bugs.
>
> I forget what happened to the patch - probably it died when the atomic
> code got altered.
>
> It could well be that a similar kernel-wide check for atomic_inc()
> overflows would be similarly useful.

Sending a patch for this as well, in a separate mail.

Thanks
Nikanth

2009-04-30 07:31:15

by Nikanth Karthikesan

Subject: [PATCH v2] Handle improbable possibility of io_context->refcount overflow

Currently io_context uses an atomic_t (32-bit) as its refcount. With cfq, a
reference to the io_context is taken for each device a task does I/O
against. In addition, when multiple processes share an io_context
(CLONE_IO), each of them holds a reference to the same io_context.
Theoretically, the maximum possible number of processes sharing the same
io_context plus the number of disks/cfq_data referring to the same
io_context can overflow the 32-bit counter on a very high-end machine.
Even though this is an improbable case, let us make it difficult, at least
on 64-bit architectures, by changing the refcount to atomic_long_t.

Signed-off-by: Nikanth Karthikesan <[email protected]>

---

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 012f065..d4ed600 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -35,9 +35,9 @@ int put_io_context(struct io_context *ioc)
if (ioc == NULL)
return 1;

- BUG_ON(atomic_read(&ioc->refcount) == 0);
+ BUG_ON(atomic_long_read(&ioc->refcount) == 0);

- if (atomic_dec_and_test(&ioc->refcount)) {
+ if (atomic_long_dec_and_test(&ioc->refcount)) {
rcu_read_lock();
if (ioc->aic && ioc->aic->dtor)
ioc->aic->dtor(ioc->aic);
@@ -90,7 +90,7 @@ struct io_context *alloc_io_context(gfp_t gfp_flags, int node)

ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
if (ret) {
- atomic_set(&ret->refcount, 1);
+ atomic_long_set(&ret->refcount, 1);
atomic_set(&ret->nr_tasks, 1);
spin_lock_init(&ret->lock);
ret->ioprio_changed = 0;
@@ -151,7 +151,7 @@ struct io_context *get_io_context(gfp_t gfp_flags, int node)
ret = current_io_context(gfp_flags, node);
if (unlikely(!ret))
break;
- } while (!atomic_inc_not_zero(&ret->refcount));
+ } while (!atomic_long_inc_not_zero(&ret->refcount));

return ret;
}
@@ -163,8 +163,8 @@ void copy_io_context(struct io_context **pdst, struct io_context **psrc)
struct io_context *dst = *pdst;

if (src) {
- BUG_ON(atomic_read(&src->refcount) == 0);
- atomic_inc(&src->refcount);
+ BUG_ON(atomic_long_read(&src->refcount) == 0);
+ atomic_long_inc(&src->refcount);
put_io_context(dst);
*pdst = src;
}
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index a55a9bd..6a79c5b 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1282,7 +1282,7 @@ static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)
if (!cfqd->active_cic) {
struct cfq_io_context *cic = RQ_CIC(rq);

- atomic_inc(&cic->ioc->refcount);
+ atomic_long_inc(&cic->ioc->refcount);
cfqd->active_cic = cic;
}
}
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 08b987b..dd05434 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -64,7 +64,7 @@ struct cfq_io_context {
* and kmalloc'ed. These could be shared between processes.
*/
struct io_context {
- atomic_t refcount;
+ atomic_long_t refcount;
atomic_t nr_tasks;

/* all the fields below are protected by this lock */
@@ -91,7 +91,7 @@ static inline struct io_context *ioc_task_link(struct io_context *ioc)
* if ref count is zero, don't allow sharing (ioc is going away, it's
* a race).
*/
- if (ioc && atomic_inc_not_zero(&ioc->refcount)) {
+ if (ioc && atomic_long_inc_not_zero(&ioc->refcount)) {
atomic_inc(&ioc->nr_tasks);
return ioc;
}

2009-04-30 07:31:39

by Nikanth Karthikesan

Subject: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

Add a debug option to detect and warn when the 32-bit atomic_t wraps around
during atomic_inc and atomic_dec.

Signed-off-by: Nikanth Karthikesan <[email protected]>

---

diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 85b46fb..92b898f 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -3,8 +3,10 @@

#include <linux/compiler.h>
#include <linux/types.h>
+#include <linux/kernel.h>
#include <asm/processor.h>
#include <asm/cmpxchg.h>
+#include <asm/bug.h>

/*
* Atomic operations that C can't guarantee us. Useful for
@@ -77,6 +79,8 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
return c;
}

+static inline int atomic_add_unless(atomic_t *v, int a, int u);
+
/**
* atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
@@ -85,8 +89,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
*/
static inline void atomic_inc(atomic_t *v)
{
+#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
+ WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
+#else
asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter));
+#endif
}

/**
@@ -97,8 +105,12 @@ static inline void atomic_inc(atomic_t *v)
*/
static inline void atomic_dec(atomic_t *v)
{
+#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
+ WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
+#else
asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter));
+#endif
}

/**
diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
index 8c21731..c34a6fa 100644
--- a/arch/x86/include/asm/atomic_64.h
+++ b/arch/x86/include/asm/atomic_64.h
@@ -2,8 +2,10 @@
#define _ASM_X86_ATOMIC_64_H

#include <linux/types.h>
+#include <linux/kernel.h>
#include <asm/alternative.h>
#include <asm/cmpxchg.h>
+#include <asm/bug.h>

/*
* Atomic operations that C can't guarantee us. Useful for
@@ -76,6 +78,8 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
return c;
}

+static inline int atomic_add_unless(atomic_t *v, int a, int u);
+
/**
* atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
@@ -84,9 +88,13 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
*/
static inline void atomic_inc(atomic_t *v)
{
+#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
+ WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
+#else
asm volatile(LOCK_PREFIX "incl %0"
: "=m" (v->counter)
: "m" (v->counter));
+#endif
}

/**
@@ -97,9 +105,13 @@ static inline void atomic_inc(atomic_t *v)
*/
static inline void atomic_dec(atomic_t *v)
{
+#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
+ WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
+#else
asm volatile(LOCK_PREFIX "decl %0"
: "=m" (v->counter)
: "m" (v->counter));
+#endif
}

/**
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 812c282..a446a98 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -17,6 +17,13 @@ config ENABLE_WARN_DEPRECATED
Disable this to suppress the "warning: 'foo' is deprecated
(declared at kernel/power/somefile.c:1234)" messages.

+config ENABLE_WARN_ATOMIC_INC_WRAP
+ bool "Enable warning on atomic_inc()/atomic_dec() wrap"
+ default y
+ help
+ Enable printing a warning when atomic_inc() or atomic_dec()
+ operation wraps around the 32-bit value.
+
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y

2009-04-30 08:24:31

by Ingo Molnar

Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around


* Nikanth Karthikesan <[email protected]> wrote:

> Add a debug option to detect and warn when the 32-bit atomic_t
> wraps around during atomic_inc and atomic_dec.
>
> Signed-off-by: Nikanth Karthikesan <[email protected]>

hm, what's the motivation?

As a generic debug helper this is not appropriate i think - counts
can easily have a meaning when going negative as well. (we have no
signed-atomic primitives)

> static inline void atomic_inc(atomic_t *v)
> {
> +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> +#else
> asm volatile(LOCK_PREFIX "incl %0"
> : "+m" (v->counter));
> +#endif
> }

also looks a bit ugly - this ugly #ifdef would spread into every
architecture.

If we want to restrict atomic_t value ranges like that then the
clean solution would be to add generic wrappers doing the debug
(once, in generic code), and renaming the arch primitives to
raw_atomic_inc() (etc), doing the lowlevel bits cleanly.

Do we really want this?

Ingo

2009-04-30 10:13:42

by Nikanth Karthikesan

Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

On Thursday 30 April 2009 13:53:50 Ingo Molnar wrote:
> * Nikanth Karthikesan <[email protected]> wrote:
> > Add a debug option to detect and warn when the 32-bit atomic_t
> > wraps around during atomic_inc and atomic_dec.
> >
> > Signed-off-by: Nikanth Karthikesan <[email protected]>
>
> hm, what's the motivation?
>

See http://lkml.org/lkml/2009/4/29/424
Andrew said that a generic atomic_t overflow checker might be useful.

> As a generic debug helper this is not appropriate i think - counts
> can easily have a meaning when going negative as well. (we have no
> signed-atomic primitives)
>

This doesn't warn when it becomes negative/positive from zero, but only
when it wraps around^W^Woverflows, trying to add past INT_MAX or
subtract from INT_MIN.
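
The check leans on atomic_add_unless(v, a, u), which adds a to v unless v
equals u and returns non-zero iff the add happened; so
atomic_add_unless(v, 1, INT_MAX) == 0 exactly when v already sits at
INT_MAX and the increment was refused. For reference, the classic cmpxchg
loop behind it (condensed from the x86 header of this era; illustrative):

static inline int atomic_add_unless(atomic_t *v, int a, int u)
{
	int c, old;

	c = atomic_read(v);
	for (;;) {
		if (unlikely(c == u))
			break;		/* at the forbidden value: refuse */
		old = atomic_cmpxchg(v, c, c + a);
		if (likely(old == c))
			break;		/* our cmpxchg won */
		c = old;		/* lost a race: retry */
	}
	return c != u;
}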

> > static inline void atomic_inc(atomic_t *v)
> > {
> > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> > + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> > +#else
> > asm volatile(LOCK_PREFIX "incl %0"
> >
> > : "+m" (v->counter));
> >
> > +#endif
> > }
>
> also looks a bit ugly - this ugly #ifdef would spread into every
> architecture.
>
> If we want to restrict atomic_t value ranges like that then the
> clean solution would be to add generic wrappers doing the debug
> (once, in generic code), and renaming the arch primitives to
> raw_atomic_inc() (etc), doing the lowlevel bits cleanly.
>

Here is a patch which does it this way.

Thanks
Nikanth

Detect and warn on atomic_inc/atomic_dec overflow.

Add a debug option to detect and warn when the 32-bit atomic_t overflows
during atomic_inc and atomic_dec.

diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 85b46fb..c6a17bb 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter));
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter));
diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
index 8c21731..1183b85 100644
--- a/arch/x86/include/asm/atomic_64.h
+++ b/arch/x86/include/asm/atomic_64.h
@@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "=m" (v->counter)
@@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "=m" (v->counter)
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 7abdaa9..6eda22b 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -4,15 +4,52 @@
* Copyright (C) 2005 Silicon Graphics, Inc.
* Christoph Lameter
*
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
*/

+#include <linux/kernel.h>
#include <asm/types.h>
+#include <asm/bug.h>
+
+#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ * Prints a warning if it wraps around.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+ WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ * Prints a warning if it wraps around.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+ WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
+}
+
+#else
+
+#define atomic_inc(v) raw_atomic_inc(v)
+#define atomic_dec(v) raw_atomic_dec(v)
+
+#endif
+

/*
* Suppport for atomic_long_t
*
+ * Allows to provide arch independent atomic definitions without the need to
+ * edit all arch specific atomic.h files.
+ *
* Casts for parameters are avoided for existing atomic functions in order to
* avoid issues with cast-as-lval under gcc 4.x and other limitations that the
* macros of a platform may have.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 812c282..a446a98 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -17,6 +17,13 @@ config ENABLE_WARN_DEPRECATED
Disable this to suppress the "warning: 'foo' is deprecated
(declared at kernel/power/somefile.c:1234)" messages.

+config ENABLE_WARN_ATOMIC_INC_WRAP
+ bool "Enable warning on atomic_inc()/atomic_dec() wrap"
+ default y
+ help
+ Enable printing a warning when atomic_inc() or atomic_dec()
+ operation wraps around the 32-bit value.
+
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y

2009-04-30 10:47:51

by Ingo Molnar

Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around


* Nikanth Karthikesan <[email protected]> wrote:

> > As a generic debug helper this is not appropriate i think -
> > counts can easily have a meaning when going negative as well.
> > (we have no signed-atomic primitives)
>
> This doesn't warn when it becomes negative/positive from zero, but
> only when it wraps around^W^Woverflows, trying to add past INT_MAX
> or subtract from INT_MIN.

ah! The small difference between INT_MAX and UINT_MAX.

Sure, this debug check makes a lot of sense. -mm even had such
stuff, many years ago? Never went upstream.

> > > static inline void atomic_inc(atomic_t *v)
> > > {
> > > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> > > + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> > > +#else
> > > asm volatile(LOCK_PREFIX "incl %0"
> > >
> > > : "+m" (v->counter));
> > >
> > > +#endif
> > > }
> >
> > also looks a bit ugly - this ugly #ifdef would spread into every
> > architecture.
> >
> > If we want to restrict atomic_t value ranges like that then the
> > clean solution would be to add generic wrappers doing the debug
> > (once, in generic code), and renaming the arch primitives to
> > raw_atomic_inc() (etc), doing the lowlevel bits cleanly.
> >
>
> Here is a patch which does it this way.
>
> Thanks
> Nikanth
>
> Detect and warn on atomic_inc/atomic_dec overflow.
>
> Add a debug option to detect and warn when the 32-bit atomic_t overflows
> during atomic_inc and atomic_dec.
>
> diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
> index 85b46fb..c6a17bb 100644
> --- a/arch/x86/include/asm/atomic_32.h
> +++ b/arch/x86/include/asm/atomic_32.h
> @@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
> }
>
> /**
> - * atomic_inc - increment atomic variable
> + * raw_atomic_inc - increment atomic variable
> * @v: pointer of type atomic_t
> *
> * Atomically increments @v by 1.
> */
> -static inline void atomic_inc(atomic_t *v)
> +static inline void raw_atomic_inc(atomic_t *v)
> {
> asm volatile(LOCK_PREFIX "incl %0"
> : "+m" (v->counter));
> }
>
> /**
> - * atomic_dec - decrement atomic variable
> + * raw_atomic_dec - decrement atomic variable
> * @v: pointer of type atomic_t
> *
> * Atomically decrements @v by 1.
> */
> -static inline void atomic_dec(atomic_t *v)
> +static inline void raw_atomic_dec(atomic_t *v)
> {
> asm volatile(LOCK_PREFIX "decl %0"
> : "+m" (v->counter));
> diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
> index 8c21731..1183b85 100644
> --- a/arch/x86/include/asm/atomic_64.h
> +++ b/arch/x86/include/asm/atomic_64.h
> @@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
> }
>
> /**
> - * atomic_inc - increment atomic variable
> + * raw_atomic_inc - increment atomic variable
> * @v: pointer of type atomic_t
> *
> * Atomically increments @v by 1.
> */
> -static inline void atomic_inc(atomic_t *v)
> +static inline void raw_atomic_inc(atomic_t *v)
> {
> asm volatile(LOCK_PREFIX "incl %0"
> : "=m" (v->counter)
> @@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
> }
>
> /**
> - * atomic_dec - decrement atomic variable
> + * raw_atomic_dec - decrement atomic variable
> * @v: pointer of type atomic_t
> *
> * Atomically decrements @v by 1.
> */
> -static inline void atomic_dec(atomic_t *v)
> +static inline void raw_atomic_dec(atomic_t *v)
> {
> asm volatile(LOCK_PREFIX "decl %0"
> : "=m" (v->counter)
> diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
> index 7abdaa9..6eda22b 100644
> --- a/include/asm-generic/atomic.h
> +++ b/include/asm-generic/atomic.h
> @@ -4,15 +4,52 @@
> * Copyright (C) 2005 Silicon Graphics, Inc.
> * Christoph Lameter
> *
> - * Allows to provide arch independent atomic definitions without the need to
> - * edit all arch specific atomic.h files.
> */
>
> +#include <linux/kernel.h>
> #include <asm/types.h>
> +#include <asm/bug.h>
> +
> +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)

#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP

> +
> +/**
> + * atomic_inc - increment atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically increments @v by 1.
> + * Prints a warning if it wraps around.
> + */
> +static inline void atomic_inc(atomic_t *v)
> +{
> + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> +}
> +
> +/**
> + * atomic_dec - decrement atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically decrements @v by 1.
> + * Prints a warning if it wraps around.
> + */
> +static inline void atomic_dec(atomic_t *v)
> +{
> + WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
> +}
> +
> +#else
> +
> +#define atomic_inc(v) raw_atomic_inc(v)
> +#define atomic_dec(v) raw_atomic_dec(v)

Please turn these into proper, type aware inline functions.

> +config ENABLE_WARN_ATOMIC_INC_WRAP
> + bool "Enable warning on atomic_inc()/atomic_dec() wrap"
> + default y
> + help
> + Enable printing a warning when atomic_inc() or atomic_dec()
> + operation wraps around the 32-bit value.
> +

this needs HAVE_ARCH_DEBUG_ATOMIC added to arch/x86/Kconfig, and a
depends on HAVE_ARCH_DEBUG_ATOMIC. Otherwise this won't build very
well on non-x86 when enabled, right?

With those small details fixed:

Acked-by: Ingo Molnar <[email protected]>

Ingo

2009-04-30 12:11:26

by Nikanth Karthikesan

Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

On Thursday 30 April 2009 16:17:29 Ingo Molnar wrote:
> * Nikanth Karthikesan <[email protected]> wrote:
> > > As a generic debug helper this is not appropriate i think -
> > > counts can easily have a meaning when going negative as well.
> > > (we have no signed-atomic primitives)
> >
> > This doesn't warn when it becomes negative/positive from zero, but
> > only when it wraps around^W^Woverflows, trying to add past INT_MAX
> > or subtract from INT_MIN.
>
> ah! The small difference between INT_MAX and UINT_MAX.
>
> Sure, this debug check makes a lot of sense. -mm even had such
> stuff, many years ago? Never went upstream.
>
> > > > static inline void atomic_inc(atomic_t *v)
> > > > {
> > > > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> > > > + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> > > > +#else
> > > > asm volatile(LOCK_PREFIX "incl %0"
> > > >
> > > > : "+m" (v->counter));
> > > >
> > > > +#endif
> > > > }
> > >
> > > also looks a bit ugly - this ugly #ifdef would spread into every
> > > architecture.
> > >
> > > If we want to restrict atomic_t value ranges like that then the
> > > clean solution would be to add generic wrappers doing the debug
> > > (once, in generic code), and renaming the arch primitives to
> > > raw_atomic_inc() (etc), doing the lowlevel bits cleanly.
> >
> > Here is a patch which does it this way.
> >
> > Thanks
> > Nikanth
> >
> > Detect and warn on atomic_inc/atomic_dec overflow.
> >
> > Add a debug option to detect and warn when the 32-bit atomic_t overflows
> > during atomic_inc and atomic_dec.
> >
> > diff --git a/arch/x86/include/asm/atomic_32.h
> > b/arch/x86/include/asm/atomic_32.h index 85b46fb..c6a17bb 100644
> > --- a/arch/x86/include/asm/atomic_32.h
> > +++ b/arch/x86/include/asm/atomic_32.h
> > @@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t
> > *v) }
> >
> > /**
> > - * atomic_inc - increment atomic variable
> > + * raw_atomic_inc - increment atomic variable
> > * @v: pointer of type atomic_t
> > *
> > * Atomically increments @v by 1.
> > */
> > -static inline void atomic_inc(atomic_t *v)
> > +static inline void raw_atomic_inc(atomic_t *v)
> > {
> > asm volatile(LOCK_PREFIX "incl %0"
> >
> > : "+m" (v->counter));
> >
> > }
> >
> > /**
> > - * atomic_dec - decrement atomic variable
> > + * raw_atomic_dec - decrement atomic variable
> > * @v: pointer of type atomic_t
> > *
> > * Atomically decrements @v by 1.
> > */
> > -static inline void atomic_dec(atomic_t *v)
> > +static inline void raw_atomic_dec(atomic_t *v)
> > {
> > asm volatile(LOCK_PREFIX "decl %0"
> >
> > : "+m" (v->counter));
> >
> > diff --git a/arch/x86/include/asm/atomic_64.h
> > b/arch/x86/include/asm/atomic_64.h index 8c21731..1183b85 100644
> > --- a/arch/x86/include/asm/atomic_64.h
> > +++ b/arch/x86/include/asm/atomic_64.h
> > @@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t
> > *v) }
> >
> > /**
> > - * atomic_inc - increment atomic variable
> > + * raw_atomic_inc - increment atomic variable
> > * @v: pointer of type atomic_t
> > *
> > * Atomically increments @v by 1.
> > */
> > -static inline void atomic_inc(atomic_t *v)
> > +static inline void raw_atomic_inc(atomic_t *v)
> > {
> > asm volatile(LOCK_PREFIX "incl %0"
> >
> > : "=m" (v->counter)
> >
> > @@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
> > }
> >
> > /**
> > - * atomic_dec - decrement atomic variable
> > + * raw_atomic_dec - decrement atomic variable
> > * @v: pointer of type atomic_t
> > *
> > * Atomically decrements @v by 1.
> > */
> > -static inline void atomic_dec(atomic_t *v)
> > +static inline void raw_atomic_dec(atomic_t *v)
> > {
> > asm volatile(LOCK_PREFIX "decl %0"
> >
> > : "=m" (v->counter)
> >
> > diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
> > index 7abdaa9..6eda22b 100644
> > --- a/include/asm-generic/atomic.h
> > +++ b/include/asm-generic/atomic.h
> > @@ -4,15 +4,52 @@
> > * Copyright (C) 2005 Silicon Graphics, Inc.
> > * Christoph Lameter
> > *
> > - * Allows to provide arch independent atomic definitions without the
> > need to - * edit all arch specific atomic.h files.
> > */
> >
> > +#include <linux/kernel.h>
> > #include <asm/types.h>
> > +#include <asm/bug.h>
> > +
> > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
>
> #ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
>
> > +
> > +/**
> > + * atomic_inc - increment atomic variable
> > + * @v: pointer of type atomic_t
> > + *
> > + * Atomically increments @v by 1.
> > + * Prints a warning if it wraps around.
> > + */
> > +static inline void atomic_inc(atomic_t *v)
> > +{
> > + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> > +}
> > +
> > +/**
> > + * atomic_dec - decrement atomic variable
> > + * @v: pointer of type atomic_t
> > + *
> > + * Atomically decrements @v by 1.
> > + * Prints a warning if it wraps around.
> > + */
> > +static inline void atomic_dec(atomic_t *v)
> > +{
> > + WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
> > +}
> > +
> > +#else
> > +
> > +#define atomic_inc(v) raw_atomic_inc(v)
> > +#define atomic_dec(v) raw_atomic_dec(v)
>
> Please turn these into proper, type aware inline functions.
>

Ok, done.

> > +config ENABLE_WARN_ATOMIC_INC_WRAP
> > + bool "Enable warning on atomic_inc()/atomic_dec() wrap"
> > + default y
> > + help
> > + Enable printing a warning when atomic_inc() or atomic_dec()
> > + operation wraps around the 32-bit value.
> > +
>
> this needs HAVE_ARCH_DEBUG_ATOMIC added to arch/x86/Kconfig, and a
> depends on HAVE_ARCH_DEBUG_ATOMIC. Otherwise this wont build very
> well on non-x86 when enabled, right?
>

Ok, done.

> With those small details fixed:
>
> Acked-by: Ingo Molnar <[email protected]>
>
> Ingo

Here is the patch with all those changes incorporated.

Thanks
Nikanth

Detect and warn on atomic_inc/atomic_dec overflow.

Add a debug option to detect and warn when the 32-bit atomic_t overflows
during atomic_inc and atomic_dec.

Signed-off-by: Nikanth Karthikesan <[email protected]>
Acked-by: Ingo Molnar <[email protected]>

---

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index df9e885..dd49be3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -46,6 +46,7 @@ config X86
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
+ select HAVE_ARCH_DEBUG_ATOMIC

config ARCH_DEFCONFIG
string
diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 85b46fb..c6a17bb 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter));
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter));
diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
index 8c21731..1183b85 100644
--- a/arch/x86/include/asm/atomic_64.h
+++ b/arch/x86/include/asm/atomic_64.h
@@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "=m" (v->counter)
@@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "=m" (v->counter)
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 7abdaa9..9f476d7 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -4,15 +4,71 @@
* Copyright (C) 2005 Silicon Graphics, Inc.
* Christoph Lameter
*
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
*/

+#include <linux/kernel.h>
#include <asm/types.h>
+#include <asm/bug.h>
+
+#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ * Prints a warning if it wraps around.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+ WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ * Prints a warning if it wraps around.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+ WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
+}
+
+#else
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+ raw_atomic_inc(v);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+ raw_atomic_dec(v);
+}
+
+#endif
+

/*
* Suppport for atomic_long_t
*
+ * Allows to provide arch independent atomic definitions without the need to
+ * edit all arch specific atomic.h files.
+ *
* Casts for parameters are avoided for existing atomic functions in order to
* avoid issues with cast-as-lval under gcc 4.x and other limitations that the
* macros of a platform may have.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 812c282..773c1a4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -17,6 +17,17 @@ config ENABLE_WARN_DEPRECATED
Disable this to suppress the "warning: 'foo' is deprecated
(declared at kernel/power/somefile.c:1234)" messages.

+config HAVE_ARCH_DEBUG_ATOMIC
+ bool
+
+config ENABLE_WARN_ATOMIC_INC_WRAP
+ bool "Enable warning on atomic_inc()/atomic_dec() wrap"
+ depends on HAVE_ARCH_DEBUG_ATOMIC
+ default y
+ help
+ Enable printing a warning when atomic_inc() or atomic_dec()
+ operation wraps around the 32-bit value.
+
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y

2009-04-30 12:22:21

by Ingo Molnar

Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around


* Nikanth Karthikesan <[email protected]> wrote:

> On Thursday 30 April 2009 16:17:29 Ingo Molnar wrote:
> > * Nikanth Karthikesan <[email protected]> wrote:
> > > > As a generic debug helper this is not appropriate i think -
> > > > counts can easily have a meaning when going negative as well.
> > > > (we have no signed-atomic primitives)
> > >
> > > This doesn't warn when it becomes negative/positive from zero, but
> > > only when it wraps around^W^Woverflows, trying to add past INT_MAX
> > > or subtract from INT_MIN.
> >
> > ah! The small difference between INT_MAX and UINT_MAX.
> >
> > Sure, this debug check makes a lot of sense. -mm even had such
> > stuff, many years ago? Never went upstream.
> >
> > > > > static inline void atomic_inc(atomic_t *v)
> > > > > {
> > > > > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> > > > > + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> > > > > +#else
> > > > > asm volatile(LOCK_PREFIX "incl %0"
> > > > >
> > > > > : "+m" (v->counter));
> > > > >
> > > > > +#endif
> > > > > }
> > > >
> > > > also looks a bit ugly - this ugly #ifdef would spread into every
> > > > architecture.
> > > >
> > > > If we want to restrict atomic_t value ranges like that then the
> > > > clean solution would be to add generic wrappers doing the debug
> > > > (once, in generic code), and renaming the arch primitives to
> > > > raw_atomic_inc() (etc), doing the lowlevel bits cleanly.
> > >
> > > Here is a patch which does it this way.
> > >
> > > Thanks
> > > Nikanth
> > >
> > > Detect and warn on atomic_inc/atomic_dec overflow.
> > >
> > > Add a debug option to detect and warn when the 32-bit atomic_t overflows
> > > during atomic_inc and atomic_dec.
> > >
> > > diff --git a/arch/x86/include/asm/atomic_32.h
> > > b/arch/x86/include/asm/atomic_32.h index 85b46fb..c6a17bb 100644
> > > --- a/arch/x86/include/asm/atomic_32.h
> > > +++ b/arch/x86/include/asm/atomic_32.h
> > > @@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t
> > > *v) }
> > >
> > > /**
> > > - * atomic_inc - increment atomic variable
> > > + * raw_atomic_inc - increment atomic variable
> > > * @v: pointer of type atomic_t
> > > *
> > > * Atomically increments @v by 1.
> > > */
> > > -static inline void atomic_inc(atomic_t *v)
> > > +static inline void raw_atomic_inc(atomic_t *v)
> > > {
> > > asm volatile(LOCK_PREFIX "incl %0"
> > >
> > > : "+m" (v->counter));
> > >
> > > }
> > >
> > > /**
> > > - * atomic_dec - decrement atomic variable
> > > + * raw_atomic_dec - decrement atomic variable
> > > * @v: pointer of type atomic_t
> > > *
> > > * Atomically decrements @v by 1.
> > > */
> > > -static inline void atomic_dec(atomic_t *v)
> > > +static inline void raw_atomic_dec(atomic_t *v)
> > > {
> > > asm volatile(LOCK_PREFIX "decl %0"
> > >
> > > : "+m" (v->counter));
> > >
> > > diff --git a/arch/x86/include/asm/atomic_64.h
> > > b/arch/x86/include/asm/atomic_64.h index 8c21731..1183b85 100644
> > > --- a/arch/x86/include/asm/atomic_64.h
> > > +++ b/arch/x86/include/asm/atomic_64.h
> > > @@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t
> > > *v) }
> > >
> > > /**
> > > - * atomic_inc - increment atomic variable
> > > + * raw_atomic_inc - increment atomic variable
> > > * @v: pointer of type atomic_t
> > > *
> > > * Atomically increments @v by 1.
> > > */
> > > -static inline void atomic_inc(atomic_t *v)
> > > +static inline void raw_atomic_inc(atomic_t *v)
> > > {
> > > asm volatile(LOCK_PREFIX "incl %0"
> > >
> > > : "=m" (v->counter)
> > >
> > > @@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
> > > }
> > >
> > > /**
> > > - * atomic_dec - decrement atomic variable
> > > + * raw_atomic_dec - decrement atomic variable
> > > * @v: pointer of type atomic_t
> > > *
> > > * Atomically decrements @v by 1.
> > > */
> > > -static inline void atomic_dec(atomic_t *v)
> > > +static inline void raw_atomic_dec(atomic_t *v)
> > > {
> > > asm volatile(LOCK_PREFIX "decl %0"
> > >
> > > : "=m" (v->counter)
> > >
> > > diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
> > > index 7abdaa9..6eda22b 100644
> > > --- a/include/asm-generic/atomic.h
> > > +++ b/include/asm-generic/atomic.h
> > > @@ -4,15 +4,52 @@
> > > * Copyright (C) 2005 Silicon Graphics, Inc.
> > > * Christoph Lameter
> > > *
> > > - * Allows to provide arch independent atomic definitions without the
> > > need to - * edit all arch specific atomic.h files.
> > > */
> > >
> > > +#include <linux/kernel.h>
> > > #include <asm/types.h>
> > > +#include <asm/bug.h>
> > > +
> > > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> >
> > #ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP

you missed this feedback :-)

Ingo

2009-04-30 12:29:20

by Nikanth Karthikesan

Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

On Thursday 30 April 2009 17:51:59 Ingo Molnar wrote:
> * Nikanth Karthikesan <[email protected]> wrote:
> > On Thursday 30 April 2009 16:17:29 Ingo Molnar wrote:
> > > * Nikanth Karthikesan <[email protected]> wrote:
> > > > > As a generic debug helper this is not appropriate i think -
> > > > > counts can easily have a meaning when going negative as well.
> > > > > (we have no signed-atomic primitives)
> > > >
> > > > This doesn't warn when it becomes negative/positive from zero, but
> > > > only when it wraps around^W^Woverflows, trying to add past INT_MAX
> > > > or subtract from INT_MIN.
> > >
> > > ah! The small difference between INT_MAX and UINT_MAX.
> > >
> > > Sure, this debug check makes a lot of sense. -mm even had such
> > > stuff, many years ago? Never went upstream.
> > >
> > > > > > static inline void atomic_inc(atomic_t *v)
> > > > > > {
> > > > > > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> > > > > > + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> > > > > > +#else
> > > > > > asm volatile(LOCK_PREFIX "incl %0"
> > > > > >
> > > > > > : "+m" (v->counter));
> > > > > >
> > > > > > +#endif
> > > > > > }
> > > > >
> > > > > also looks a bit ugly - this ugly #ifdef would spread into every
> > > > > architecture.
> > > > >
> > > > > If we want to restrict atomic_t value ranges like that then the
> > > > > clean solution would be to add generic wrappers doing the debug
> > > > > (once, in generic code), and renaming the arch primitives to
> > > > > raw_atomic_inc() (etc), doing the lowlevel bits cleanly.
> > > >
> > > > Here is a patch which does it this way.
> > > >
> > > > Thanks
> > > > Nikanth
> > > >
> > > > Detect and warn on atomic_inc/atomic_dec overflow.
> > > >
> > > > Add a debug option to detect and warn when the 32-bit atomic_t
> > > > overflows during atomic_inc and atomic_dec.
> > > >
> > > > diff --git a/arch/x86/include/asm/atomic_32.h
> > > > b/arch/x86/include/asm/atomic_32.h index 85b46fb..c6a17bb 100644
> > > > --- a/arch/x86/include/asm/atomic_32.h
> > > > +++ b/arch/x86/include/asm/atomic_32.h
> > > > @@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i,
> > > > atomic_t *v) }
> > > >
> > > > /**
> > > > - * atomic_inc - increment atomic variable
> > > > + * raw_atomic_inc - increment atomic variable
> > > > * @v: pointer of type atomic_t
> > > > *
> > > > * Atomically increments @v by 1.
> > > > */
> > > > -static inline void atomic_inc(atomic_t *v)
> > > > +static inline void raw_atomic_inc(atomic_t *v)
> > > > {
> > > > asm volatile(LOCK_PREFIX "incl %0"
> > > >
> > > > : "+m" (v->counter));
> > > >
> > > > }
> > > >
> > > > /**
> > > > - * atomic_dec - decrement atomic variable
> > > > + * raw_atomic_dec - decrement atomic variable
> > > > * @v: pointer of type atomic_t
> > > > *
> > > > * Atomically decrements @v by 1.
> > > > */
> > > > -static inline void atomic_dec(atomic_t *v)
> > > > +static inline void raw_atomic_dec(atomic_t *v)
> > > > {
> > > > asm volatile(LOCK_PREFIX "decl %0"
> > > >
> > > > : "+m" (v->counter));
> > > >
> > > > diff --git a/arch/x86/include/asm/atomic_64.h
> > > > b/arch/x86/include/asm/atomic_64.h index 8c21731..1183b85 100644
> > > > --- a/arch/x86/include/asm/atomic_64.h
> > > > +++ b/arch/x86/include/asm/atomic_64.h
> > > > @@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i,
> > > > atomic_t *v) }
> > > >
> > > > /**
> > > > - * atomic_inc - increment atomic variable
> > > > + * raw_atomic_inc - increment atomic variable
> > > > * @v: pointer of type atomic_t
> > > > *
> > > > * Atomically increments @v by 1.
> > > > */
> > > > -static inline void atomic_inc(atomic_t *v)
> > > > +static inline void raw_atomic_inc(atomic_t *v)
> > > > {
> > > > asm volatile(LOCK_PREFIX "incl %0"
> > > >
> > > > : "=m" (v->counter)
> > > >
> > > > @@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
> > > > }
> > > >
> > > > /**
> > > > - * atomic_dec - decrement atomic variable
> > > > + * raw_atomic_dec - decrement atomic variable
> > > > * @v: pointer of type atomic_t
> > > > *
> > > > * Atomically decrements @v by 1.
> > > > */
> > > > -static inline void atomic_dec(atomic_t *v)
> > > > +static inline void raw_atomic_dec(atomic_t *v)
> > > > {
> > > > asm volatile(LOCK_PREFIX "decl %0"
> > > >
> > > > : "=m" (v->counter)
> > > >
> > > > diff --git a/include/asm-generic/atomic.h
> > > > b/include/asm-generic/atomic.h index 7abdaa9..6eda22b 100644
> > > > --- a/include/asm-generic/atomic.h
> > > > +++ b/include/asm-generic/atomic.h
> > > > @@ -4,15 +4,52 @@
> > > > * Copyright (C) 2005 Silicon Graphics, Inc.
> > > > * Christoph Lameter
> > > > *
> > > > - * Allows to provide arch independent atomic definitions without the
> > > > need to - * edit all arch specific atomic.h files.
> > > > */
> > > >
> > > > +#include <linux/kernel.h>
> > > > #include <asm/types.h>
> > > > +#include <asm/bug.h>
> > > > +
> > > > +#if defined(CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP)
> > >
> > > #ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
>
> you missed this feedback :-)
>

Yes, sorry. Fixed now. Thanks for the review. :-)

Thanks
Nikanth

Detect and warn on atomic_inc/atomic_dec overflow.

Add a debug option to detect and warn when the 32-bit atomic_t overflows
during atomic_inc and atomic_dec.

Signed-off-by: Nikanth Karthikesan <[email protected]>
Acked-by: Ingo Molnar <[email protected]>

---

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index df9e885..dd49be3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -46,6 +46,7 @@ config X86
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
+ select HAVE_ARCH_DEBUG_ATOMIC

config ARCH_DEFCONFIG
string
diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 85b46fb..c6a17bb 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter));
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter));
diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
index 8c21731..1183b85 100644
--- a/arch/x86/include/asm/atomic_64.h
+++ b/arch/x86/include/asm/atomic_64.h
@@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "=m" (v->counter)
@@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "=m" (v->counter)
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 7abdaa9..edf5619 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -4,15 +4,71 @@
* Copyright (C) 2005 Silicon Graphics, Inc.
* Christoph Lameter
*
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
*/

+#include <linux/kernel.h>
#include <asm/types.h>
+#include <asm/bug.h>
+
+#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ * Prints a warning if it wraps around.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+ WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ * Prints a warning if it wraps around.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+ WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
+}
+
+#else
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+ raw_atomic_inc(v);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+ raw_atomic_dec(v);
+}
+
+#endif
+

/*
* Suppport for atomic_long_t
*
+ * Allows to provide arch independent atomic definitions without the need to
+ * edit all arch specific atomic.h files.
+ *
* Casts for parameters are avoided for existing atomic functions in order to
* avoid issues with cast-as-lval under gcc 4.x and other limitations that the
* macros of a platform may have.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 812c282..773c1a4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -17,6 +17,17 @@ config ENABLE_WARN_DEPRECATED
Disable this to suppress the "warning: 'foo' is deprecated
(declared at kernel/power/somefile.c:1234)" messages.

+config HAVE_ARCH_DEBUG_ATOMIC
+ bool
+
+config ENABLE_WARN_ATOMIC_INC_WRAP
+ bool "Enable warning on atomic_inc()/atomic_dec() wrap"
+ depends on HAVE_ARCH_DEBUG_ATOMIC
+ default y
+ help
+ Enable printing a warning when atomic_inc() or atomic_dec()
+ operation wraps around the 32-bit value.
+
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y
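
A note on the detection mechanism used above: atomic_add_unless(v, a, u)
adds a to the counter unless the counter currently equals u, and returns
non-zero only when the add actually happened. A simplified, deliberately
non-atomic sketch of those semantics (the real helper is a single
cmpxchg() loop):

static inline int atomic_add_unless_sketch(atomic_t *v, int a, int u)
{
	if (v->counter == u)
		return 0;	/* saturated at the bound: refuse the add */
	v->counter += a;
	return 1;
}

So WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0) fires exactly when the
counter has already reached INT_MAX, one step before it would wrap to
INT_MIN; the counter also stops there instead of wrapping.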

2009-04-30 12:50:27

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around


* Nikanth Karthikesan <[email protected]> wrote:

> diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
> index 7abdaa9..edf5619 100644
> --- a/include/asm-generic/atomic.h
> +++ b/include/asm-generic/atomic.h
> @@ -4,15 +4,71 @@
> * Copyright (C) 2005 Silicon Graphics, Inc.
> * Christoph Lameter
> *
> - * Allows to provide arch independent atomic definitions without the need to
> - * edit all arch specific atomic.h files.
> */
>
> +#include <linux/kernel.h>
> #include <asm/types.h>
> +#include <asm/bug.h>
> +
> +#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
> +
> +/**
> + * atomic_inc - increment atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically increments @v by 1.
> + * Prints a warning if it wraps around.
> + */
> +static inline void atomic_inc(atomic_t *v)
> +{
> + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> +}
> +
> +/**
> + * atomic_dec - decrement atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically decrements @v by 1.
> + * Prints a warning if it wraps around.
> + */
> +static inline void atomic_dec(atomic_t *v)
> +{
> + WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
> +}
> +
> +#else
> +
> +/**
> + * atomic_inc - increment atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically increments @v by 1.
> + */
> +static inline void atomic_inc(atomic_t *v)
> +{
> + raw_atomic_inc(v);
> +}
> +
> +/**
> + * atomic_dec - decrement atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically decrements @v by 1.
> + */
> +static inline void atomic_dec(atomic_t *v)
> +{
> + raw_atomic_dec(v);
> +}
> +
> +#endif

Here i think it makes sense to have a single definition:

static inline void atomic_inc(atomic_t *v)
{
#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
#else
raw_atomic_inc(v);
#endif
}

Or, better yet, i'd suggest to define an 'invalid value' range for
atomic integer types - not restricted to the single value of
INT_MAX, but in the range of: [ UINT_MAX/4 ... UINT_MAX/4*3 ].

Then there could be a single, straightforward value check:

static inline void atomic_inc(atomic_t *v)
{
debug_atomic_check_value(v);
raw_atomic_inc(v);
}

Where debug_atomic_check_value() is just an atomic_read():

static inline void debug_atomic_check_value(atomic_t *v)
{
WARN_ONCE(in_range(atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
KERN_ERR "atomic counter check failure!");
}
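
(in_range() is not spelled out anywhere in this thread; presumably a
trivial helper along these lines is meant:

	#define in_range(x, lo, hi)	((x) >= (lo) && (x) <= (hi))

i.e. one read and two compares, nothing atomic.)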

It's a constant check.

If we are overflowing at such a massive rate, it doesn't matter how
early or late we check the value.

Agreed?

Ingo

2009-04-30 13:31:56

by Nikanth Karthikesan

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

On Thursday 30 April 2009 18:20:04 Ingo Molnar wrote:
> * Nikanth Karthikesan <[email protected]> wrote:
> > diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
> > index 7abdaa9..edf5619 100644
> > --- a/include/asm-generic/atomic.h
> > +++ b/include/asm-generic/atomic.h
> > @@ -4,15 +4,71 @@
> > * Copyright (C) 2005 Silicon Graphics, Inc.
> > * Christoph Lameter
> > *
> > - * Allows to provide arch independent atomic definitions without the need to
> > - * edit all arch specific atomic.h files.
> > */
> >
> > +#include <linux/kernel.h>
> > #include <asm/types.h>
> > +#include <asm/bug.h>
> > +
> > +#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
> > +
> > +/**
> > + * atomic_inc - increment atomic variable
> > + * @v: pointer of type atomic_t
> > + *
> > + * Atomically increments @v by 1.
> > + * Prints a warning if it wraps around.
> > + */
> > +static inline void atomic_inc(atomic_t *v)
> > +{
> > + WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> > +}
> > +
> > +/**
> > + * atomic_dec - decrement atomic variable
> > + * @v: pointer of type atomic_t
> > + *
> > + * Atomically decrements @v by 1.
> > + * Prints a warning if it wraps around.
> > + */
> > +static inline void atomic_dec(atomic_t *v)
> > +{
> > + WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
> > +}
> > +
> > +#else
> > +
> > +/**
> > + * atomic_inc - increment atomic variable
> > + * @v: pointer of type atomic_t
> > + *
> > + * Atomically increments @v by 1.
> > + */
> > +static inline void atomic_inc(atomic_t *v)
> > +{
> > + raw_atomic_inc(v);
> > +}
> > +
> > +/**
> > + * atomic_dec - decrement atomic variable
> > + * @v: pointer of type atomic_t
> > + *
> > + * Atomically decrements @v by 1.
> > + */
> > +static inline void atomic_dec(atomic_t *v)
> > +{
> > + raw_atomic_dec(v);
> > +}
> > +
> > +#endif
>
> Here i think it makes sense to have a single definition:
>
> static inline void atomic_inc(atomic_t *v)
> {
> #ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
> WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
> #else
> raw_atomic_inc(v);
> #endif
> }
>

Ok, done.

> Or, better yet, i'd suggest to define an 'invalid value' range for
> atomic integer types - not restricted to the single value of
> INT_MAX, but in the range of: [ UINT_MAX/4 ... UINT_MAX/4*3 ].
>

atomic_t is defined as an int, so why use UINT_MAX?

> Then there could be a single, straightforward value check:
>
> static inline void atomic_inc(atomic_t *v)
> {
> debug_atomic_check_value(v);
> raw_atomic_inc(v);
> }
>
> Where debug_atomic_check_value() is just an atomic_read():
>
> static inline void debug_atomic_check_value(atomic_t *v)
> {
> WARN_ONCE(in_range(atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> KERN_ERR "atomic counter check failure!");
> }
>

I do not understand, why UINT_MAX/4 to UINT_MAX/4*3?
Roughly,
UINT_MAX/4 = INT_MAX/2
UINT_MAX/4*3 = INT_MAX/2*3 which we will never reach with an int.

> It's a constant check.
>
> If we are overflowing at such a massive rate, it doesn't matter how
> early or late we check the value.
>

UINT_MAX/4 early, might be too early. And if it doesn't matter how early or
late, why try to be over-cautious and produce false warnings. ;-)

> Agreed?
>

Hmm.. partly yes, but not the range. Could the range be made as INT_MAX/4*3 to
INT_MAX? But even then, this might trigger false-positives, when atomic_t is
used in places where it was meant to exercise the full int range with the
assumption it would never go above INT_MAX.
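
As a hypothetical illustration of that worry, with made-up names: a pure
statistics counter such as

	static atomic_t rx_packets = ATOMIC_INIT(0);

	atomic_inc(&rx_packets);	/* event count, never a refcount */

can legitimately climb past INT_MAX/2 on a long-running system, and the
range check would then warn even though nothing has leaked.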

If you still think it would be better to catch it early, I will send
you another patch with the range checking incorporated.

Thanks
Nikanth

Detect and warn on atomic_inc/atomic_dec overflow.

Add a debug option to detect and warn when the 32-bit atomic_t overflows
during atomic_inc and atomic_dec.

Signed-off-by: Nikanth Karthikesan <[email protected]>
Acked-by: Ingo Molnar <[email protected]>

---

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index df9e885..dd49be3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -46,6 +46,7 @@ config X86
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
+ select HAVE_ARCH_DEBUG_ATOMIC

config ARCH_DEFCONFIG
string
diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 85b46fb..c6a17bb 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter));
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter));
diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
index 8c21731..1183b85 100644
--- a/arch/x86/include/asm/atomic_64.h
+++ b/arch/x86/include/asm/atomic_64.h
@@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "=m" (v->counter)
@@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "=m" (v->counter)
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 7abdaa9..ca47932 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -4,15 +4,52 @@
* Copyright (C) 2005 Silicon Graphics, Inc.
* Christoph Lameter
*
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
*/

+#include <linux/kernel.h>
#include <asm/types.h>
+#include <asm/bug.h>
+
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
+ WARN_ON(atomic_add_unless(v, 1, INT_MAX) == 0);
+#else
+ raw_atomic_inc(v);
+#endif
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ * Prints a warning if it wraps around.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
+ WARN_ON(atomic_add_unless(v, -1, INT_MIN) == 0);
+#else
+ raw_atomic_dec(v);
+#endif
+
+}
+

/*
* Suppport for atomic_long_t
*
+ * Allows to provide arch independent atomic definitions without the need to
+ * edit all arch specific atomic.h files.
+ *
* Casts for parameters are avoided for existing atomic functions in order to
* avoid issues with cast-as-lval under gcc 4.x and other limitations that the
* macros of a platform may have.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 812c282..773c1a4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -17,6 +17,17 @@ config ENABLE_WARN_DEPRECATED
Disable this to suppress the "warning: 'foo' is deprecated
(declared at kernel/power/somefile.c:1234)" messages.

+config HAVE_ARCH_DEBUG_ATOMIC
+ bool
+
+config ENABLE_WARN_ATOMIC_INC_WRAP
+ bool "Enable warning on atomic_inc()/atomic_dec() wrap"
+ depends on HAVE_ARCH_DEBUG_ATOMIC
+ default y
+ help
+ Enable printing a warning when atomic_inc() or atomic_dec()
+ operation wraps around the 32-bit value.
+
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y

2009-04-30 13:38:23

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around


* Nikanth Karthikesan <[email protected]> wrote:

> > Then there could be a single, straightforward value check:
> >
> > static inline void atomic_inc(atomic_t *v)
> > {
> > debug_atomic_check_value(v);
> > raw_atomic_inc(v);
> > }
> >
> > Where debug_atomic_check_value() is just an atomic_read():
> >
> > static inline void debug_atomic_check_value(atomic_t *v)
> > {
> > WARN_ONCE(in_range(atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> > KERN_ERR "atomic counter check failure!");
> > }
> >
>
> I do not understand, why UINT_MAX/4 to UINT_MAX/4*3?
> Roughly,
> UINT_MAX/4 = INT_MAX/2
> UINT_MAX/4*3 = INT_MAX/2*3 which we will never reach with an int.

i mean:

WARN_ONCE(in_range((u32)atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
KERN_ERR "atomic counter check failure!");

that's a single range check on a u32, selecting 'too large' and
'too small' s32 values.
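
Spelling the window out, assuming 32-bit int and unsigned int:

	UINT_MAX/4   == 0x3fffffff	/* as s32: +1073741823, ~ INT_MAX/2 */
	UINT_MAX/4*3 == 0xbffffffd	/* as s32: -1073741827, ~ INT_MIN/2 */

so the one unsigned window [0x3fffffff .. 0xbffffffd] covers s32 values
above roughly +1 billion and below roughly -1 billion in a single
comparison pair, while everything near zero passes.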

> > It's a constant check.
> >
> > If we are overflowing at such a massive rate, it doesn't matter how
> > early or late we check the value.
>
> UINT_MAX/4 early, might be too early. And if it doesn't matter how
> early or late, why try to be over-cautious and produce false
> warnings. ;-)

UINT_MAX/4 is ~1 billion. If we reach a value of 1 billion we are
leaking. Your check basically is a sharp test for the specific case
of overflowing the boundary - but it makes the code slower (it uses
more complex atomic ops) and uglifies it via #ifdefs as well.

It doesnt matter whether we wrap over at around +2 billion into -2
billion, or treat the whole above-1-billion and
below-minus-1-billion range as invalid. (other than we'll catch bugs
sooner via this method, and have faster and cleaner code)

Ingo

2009-04-30 13:54:28

by Nikanth Karthikesan

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

On Thursday 30 April 2009 19:07:57 Ingo Molnar wrote:
> * Nikanth Karthikesan <[email protected]> wrote:
> > > Then there could be a single, straightforward value check:
> > >
> > > static inline void atomic_inc(atomic_t *v)
> > > {
> > > debug_atomic_check_value(v);
> > > raw_atomic_inc(v);
> > > }
> > >
> > > Where debug_atomic_check_value() is just an atomic_read():
> > >
> > > static inline void debug_atomic_check_value(atomic_t *v)
> > > {
> > > WARN_ONCE(in_range(atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> > > KERN_ERR "atomic counter check failure!");
> > > }
> >
> > I do not understand, why UINT_MAX/4 to UINT_MAX/4*3?
> > Roughly,
> > UINT_MAX/4 = INT_MAX/2
> > UINT_MAX/4*3 = INT_MAX/2*3 which we will never reach with an int.
>
> i mean:
>
> WARN_ONCE(in_range((u32)atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> KERN_ERR "atomic counter check failure!");
>
> that's a single range check on a u32, selecting 'too large' and
> 'too small' s32 values.
>
> > > It's a constant check.
> > >
> > > If we are overflowing at such a massive rate, it doesn't matter how
> > > early or late we check the value.
> >
> > UINT_MAX/4 early, might be too early. And if it doesn't matter how
> > early or late, why try to be over-cautious and produce false
> > warnings. ;-)
>
> UINT_MAX/4 is ~1 billion. If we reach a value of 1 billion we are
> leaking. Your check basically is a sharp test for the specific case
> of overflowing the boundary - but it makes the code slower (it uses
> more complex atomic ops) and uglifies it via #ifdefs as well.
>
> It doesnt matter whether we wrap over at around +2 billion into -2
> billion, or treat the whole above-1-billion and
> below-minus-1-billion range as invalid. (other than we'll catch bugs
> sooner via this method, and have faster and cleaner code)
>

Ah.. got it. But, range checking is not required as we are just verifying it
during increment and decrement, not atomic_add, atomic_sub etc... Should we
add debug checks to those operations as well? If we want to test those
operations as well, range check would be useful.

Here is a patch, without the overhead of a costly atomic operation which would
warn if it goes out of [INT_MIN/2 .. INT_MAX/2].

Thanks
Nikanth


Detect and warn on atomic_inc/atomic_dec overflow.

Add a debug option to detect and warn when the 32-bit atomic_t overflows
during atomic_inc and atomic_dec.

Signed-off-by: Nikanth Karthikesan <[email protected]>
Acked-by: Ingo Molnar <[email protected]>

---


diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index df9e885..dd49be3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -46,6 +46,7 @@ config X86
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
+ select HAVE_ARCH_DEBUG_ATOMIC

config ARCH_DEFCONFIG
string
diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 85b46fb..c6a17bb 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter));
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter));
diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
index 8c21731..1183b85 100644
--- a/arch/x86/include/asm/atomic_64.h
+++ b/arch/x86/include/asm/atomic_64.h
@@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "=m" (v->counter)
@@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "=m" (v->counter)
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 7abdaa9..007931a 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -4,15 +4,51 @@
* Copyright (C) 2005 Silicon Graphics, Inc.
* Christoph Lameter
*
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
*/

+#include <linux/kernel.h>
#include <asm/types.h>
+#include <asm/bug.h>
+
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
+ WARN_ONCE((atomic_read(v) > (INT_MAX / 2)),
+ KERN_ERR "atomic counter check failure!");
+#endif
+ raw_atomic_inc(v);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
+ WARN_ONCE((atomic_read(v) < (INT_MIN / 2)),
+ KERN_ERR "atomic counter check failure!");
+#endif
+ raw_atomic_dec(v);
+
+}
+

/*
* Suppport for atomic_long_t
*
+ * Allows to provide arch independent atomic definitions without the need to
+ * edit all arch specific atomic.h files.
+ *
* Casts for parameters are avoided for existing atomic functions in order to
* avoid issues with cast-as-lval under gcc 4.x and other limitations that the
* macros of a platform may have.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 812c282..773c1a4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -17,6 +17,17 @@ config ENABLE_WARN_DEPRECATED
Disable this to suppress the "warning: 'foo' is deprecated
(declared at kernel/power/somefile.c:1234)" messages.

+config HAVE_ARCH_DEBUG_ATOMIC
+ bool
+
+config ENABLE_WARN_ATOMIC_INC_WRAP
+ bool "Enable warning on atomic_inc()/atomic_dec() wrap"
+ depends on HAVE_ARCH_DEBUG_ATOMIC
+ default y
+ help
+ Enable printing a warning when atomic_inc() or atomic_dec()
+ operation wraps around the 32-bit value.
+
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y

2009-04-30 14:06:30

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around


* Nikanth Karthikesan <[email protected]> wrote:

> On Thursday 30 April 2009 19:07:57 Ingo Molnar wrote:
> > * Nikanth Karthikesan <[email protected]> wrote:
> > > > Then there could be a single, straightforward value check:
> > > >
> > > > static inline void atomic_inc(atomic_t *v)
> > > > {
> > > > debug_atomic_check_value(v);
> > > > raw_atomic_inc(v);
> > > > }
> > > >
> > > > Where debug_atomic_check_value() is just an atomic_read():
> > > >
> > > > static inline void debug_atomic_check_value(atomic_t *v)
> > > > {
> > > > WARN_ONCE(in_range(atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> > > > KERN_ERR "atomic counter check failure!");
> > > > }
> > >
> > > I do not understand, why UINT_MAX/4 to UINT_MAX/4*3?
> > > Roughly,
> > > UINT_MAX/4 = INT_MAX/2
> > > UINT_MAX/4*3 = INT_MAX/2*3 which we will never reach with an int.
> >
> > i mean:
> >
> > WARN_ONCE(in_range((u32)atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> > KERN_ERR "atomic counter check failure!");
> >
> > that's a single range check on a u32, selecting 'too large' and
> > 'too small' s32 values.
> >
> > > > It's a constant check.
> > > >
> > > > If we are overflowing at such a massive rate, it doesn't matter how
> > > > early or late we check the value.
> > >
> > > UINT_MAX/4 early, might be too early. And if it doesn't matter how
> > > early or late, why try to be over-cautious and produce false
> > > warnings. ;-)
> >
> > UINT_MAX/4 is ~1 billion. If we reach a value of 1 billion we are
> > leaking. Your check basically is a sharp test for the specific case
> > of overflowing the boundary - but it makes the code slower (it uses
> > more complex atomic ops) and uglifies it via #ifdefs as well.
> >
> > It doesnt matter whether we wrap over at around +2 billion into -2
> > billion, or treat the whole above-1-billion and
> > below-minus-1-billion range as invalid. (other than we'll catch bugs
> > sooner via this method, and have faster and cleaner code)
> >
>
> Ah.. got it. But, range checking is not required as we are just
> verifying it during increment and decrement, not atomic_add,
> atomic_sub etc... Should we add debug checks to those operations
> as well? If we want to test those operations as well, range check
> would be useful.

Good point! Indeed the checks can be even simpler that way - a
single test.

> Here is a patch, without the overhead of a costly atomic operation
> which would warn if it goes out of [INT_MIN/2 .. INT_MAX/2].

> +static inline void atomic_inc(atomic_t *v)
> +{
> +#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
> + WARN_ONCE((atomic_read(v) > (INT_MAX / 2)),
> + KERN_ERR "atomic counter check failure!");

here the message can be more specific i think:

KERN_ERR "atomic inc overflow!");

> +#endif
> + raw_atomic_inc(v);
> +}
> +
> +/**
> + * atomic_dec - decrement atomic variable
> + * @v: pointer of type atomic_t
> + *
> + * Atomically decrements @v by 1.
> + */
> +static inline void atomic_dec(atomic_t *v)
> +{
> +#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
> + WARN_ONCE((atomic_read(v) < (INT_MIN / 2)),
> + KERN_ERR "atomic counter check failure!");
> +#endif

and here:

KERN_ERR "atomic inc underflow!");

other than these two small details this is looking really nice now.

Ingo

2009-04-30 14:13:04

by Nikanth Karthikesan

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

On Thursday 30 April 2009 19:35:59 Ingo Molnar wrote:
> * Nikanth Karthikesan <[email protected]> wrote:
> > On Thursday 30 April 2009 19:07:57 Ingo Molnar wrote:
> > > * Nikanth Karthikesan <[email protected]> wrote:
> > > > > Then there could be a single, straightforward value check:
> > > > >
> > > > > static inline void atomic_inc(atomic_t *v)
> > > > > {
> > > > > debug_atomic_check_value(v);
> > > > > raw_atomic_inc(v);
> > > > > }
> > > > >
> > > > > Where debug_atomic_check_value() is just an atomic_read():
> > > > >
> > > > > static inline void debug_atomic_check_value(atomic_t *v)
> > > > > {
> > > > > WARN_ONCE(in_range(atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> > > > > KERN_ERR "atomic counter check failure!");
> > > > > }
> > > >
> > > > I do not understand, why UINT_MAX/4 to UINT_MAX/4*3?
> > > > Roughly,
> > > > UINT_MAX/4 = INT_MAX/2
> > > > UINT_MAX/4*3 = INT_MAX/2*3 which we will never reach with an int.
> > >
> > > i mean:
> > >
> > > WARN_ONCE(in_range((u32)atomic_read(v), UINT_MAX/4, UINT_MAX/4*3),
> > > KERN_ERR "atomic counter check failure!");
> > >
> > > that's a single range check on a u32, selecting 'too large' and
> > > 'too small' s32 values.
> > >
> > > > > It's a constant check.
> > > > >
> > > > > If we are overflowing at such a massive rate, it doesn't matter how
> > > > > early or late we check the value.
> > > >
> > > > UINT_MAX/4 early, might be too early. And if it doesn't matter how
> > > > early or late, why try to be over-cautious and produce false
> > > > warnings. ;-)
> > >
> > > UINT_MAX/4 is ~1 billion. If we reach a value of 1 billion we are
> > > leaking. Your check basically is a sharp test for the specific case
> > > of overflowing the boundary - but it makes the code slower (it uses
> > > more complex atomic ops) and uglifies it via #ifdefs as well.
> > >
> > > It doesnt matter whether we wrap over at around +2 billion into -2
> > > billion, or treat the whole above-1-billion and
> > > below-minus-1-billion range as invalid. (other than we'll catch bugs
> > > sooner via this method, and have faster and cleaner code)
> >
> > Ah.. got it. But, range checking is not required as we are just
> > verifying it during increment and decrement, not atomic_add,
> > atomic_sub etc... Should we add debug checks to those operations
> > as well? If we want to test those operations as well, range check
> > would be useful.
>
> Good point! Indeed the checks can be even simpler that way - a
> single test.
>
> > Here is a patch, without the overhead of a costly atomic operation
> > which would warn if it goes out of [INT_MIN/2 .. INT_MAX/2].
> >
> > +static inline void atomic_inc(atomic_t *v)
> > +{
> > +#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
> > + WARN_ONCE((atomic_read(v) > (INT_MAX / 2)),
> > + KERN_ERR "atomic counter check failure!");
>
> here the message can be more specific i think:
>
> KERN_ERR "atomic inc overflow!");
>
> > +#endif
> > + raw_atomic_inc(v);
> > +}
> > +
> > +/**
> > + * atomic_dec - decrement atomic variable
> > + * @v: pointer of type atomic_t
> > + *
> > + * Atomically decrements @v by 1.
> > + */
> > +static inline void atomic_dec(atomic_t *v)
> > +{
> > +#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
> > + WARN_ONCE((atomic_read(v) < (INT_MIN / 2)),
> > + KERN_ERR "atomic counter check failure!");
> > +#endif
>
> and here:
>
> KERN_ERR "atomic inc underflow!");
>
> other than these two small details this is looking really nice now.
>

Done.

Thanks
Nikanth

Detect and warn on atomic_inc/atomic_dec overflow.

Add a debug option to detect and warn when the 32-bit atomic_t overflows
during atomic_inc and atomic_dec.

Signed-off-by: Nikanth Karthikesan <[email protected]>
Acked-by: Ingo Molnar <[email protected]>

---

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index df9e885..dd49be3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -46,6 +46,7 @@ config X86
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
+ select HAVE_ARCH_DEBUG_ATOMIC

config ARCH_DEFCONFIG
string
diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 85b46fb..c6a17bb 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -78,24 +78,24 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter));
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter));
diff --git a/arch/x86/include/asm/atomic_64.h b/arch/x86/include/asm/atomic_64.h
index 8c21731..1183b85 100644
--- a/arch/x86/include/asm/atomic_64.h
+++ b/arch/x86/include/asm/atomic_64.h
@@ -77,12 +77,12 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
}

/**
- * atomic_inc - increment atomic variable
+ * raw_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1.
*/
-static inline void atomic_inc(atomic_t *v)
+static inline void raw_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
: "=m" (v->counter)
@@ -90,12 +90,12 @@ static inline void atomic_inc(atomic_t *v)
}

/**
- * atomic_dec - decrement atomic variable
+ * raw_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1.
*/
-static inline void atomic_dec(atomic_t *v)
+static inline void raw_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
: "=m" (v->counter)
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 7abdaa9..e0ffa32 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -4,15 +4,51 @@
* Copyright (C) 2005 Silicon Graphics, Inc.
* Christoph Lameter
*
- * Allows to provide arch independent atomic definitions without the need to
- * edit all arch specific atomic.h files.
*/

+#include <linux/kernel.h>
#include <asm/types.h>
+#include <asm/bug.h>
+
+
+/**
+ * atomic_inc - increment atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1.
+ */
+static inline void atomic_inc(atomic_t *v)
+{
+#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
+ WARN_ONCE((atomic_read(v) > (INT_MAX / 2)),
+ KERN_ERR "atomic inc overflow!");
+#endif
+ raw_atomic_inc(v);
+}
+
+/**
+ * atomic_dec - decrement atomic variable
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrements @v by 1.
+ */
+static inline void atomic_dec(atomic_t *v)
+{
+#ifdef CONFIG_ENABLE_WARN_ATOMIC_INC_WRAP
+ WARN_ONCE((atomic_read(v) < (INT_MIN / 2)),
+ KERN_ERR "atomic dec underflow!");
+#endif
+ raw_atomic_dec(v);
+
+}
+

/*
* Suppport for atomic_long_t
*
+ * Allows to provide arch independent atomic definitions without the need to
+ * edit all arch specific atomic.h files.
+ *
* Casts for parameters are avoided for existing atomic functions in order to
* avoid issues with cast-as-lval under gcc 4.x and other limitations that the
* macros of a platform may have.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 812c282..773c1a4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -17,6 +17,17 @@ config ENABLE_WARN_DEPRECATED
Disable this to suppress the "warning: 'foo' is deprecated
(declared at kernel/power/somefile.c:1234)" messages.

+config HAVE_ARCH_DEBUG_ATOMIC
+ bool
+
+config ENABLE_WARN_ATOMIC_INC_WRAP
+ bool "Enable warning on atomic_inc()/atomic_dec() wrap"
+ depends on HAVE_ARCH_DEBUG_ATOMIC
+ default y
+ help
+ Enable printing a warning when atomic_inc() or atomic_dec()
+ operation wraps around the 32-bit value.
+
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y

2009-04-30 14:45:28

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around


* Nikanth Karthikesan <[email protected]> wrote:

> > and here:
> >
> > KERN_ERR "atomic inc underflow!");
> >
> > other than these two small details this is looking really nice now.
> >
>
> Done.

Thanks. Looks unconditionally good now.

Andrew, this looks like it fits -mm's profile best - can you see any
other problems in the patch? (Perhaps CONFIG_VM_DEBUG should
auto-select this option?)
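
(Assuming the option meant is mainline's CONFIG_DEBUG_VM, the auto-select
would be a one-line addition in lib/Kconfig.debug; an untested sketch:

	config DEBUG_VM
		bool "Debug VM"
		depends on DEBUG_KERNEL
		select ENABLE_WARN_ATOMIC_INC_WRAP if HAVE_ARCH_DEBUG_ATOMIC

with the existing prompt and dependency kept as-is.)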

Ingo

2009-04-30 21:48:47

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH] Detect and warn on atomic_inc/atomic_dec wrapping around

On Thu, 30 Apr 2009 19:39:50 +0530
Nikanth Karthikesan <[email protected]> wrote:

>

If I had a dollar for each wordwrapped patch I get sent...

> Detect and warn on atomic_inc/atomic_dec overflow.
>
> Add a debug option to detect and warn when the 32-bit atomic_t overflows
> during atomic_inc and atomic_dec.
>

OK.

I'll beef the changelog up a bit - this one is wimpy.

The question is: do we put this in mainline? I guess we might as well
give it a shot. It may well find bugs and it might also trigger false
positives. We can then fix the bugs and decide whether the false
positives warrant reverting it again, all very easy.

> +#include <asm/bug.h>

checkpatch says

WARNING: Use #include <linux/bug.h> instead of <asm/bug.h>
#215: FILE: include/asm-generic/atomic.h:11:
+#include <asm/bug.h>

Was this an oversight, or did you try using linux/bug.h and discover
some problem?

> index 812c282..773c1a4 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -17,6 +17,17 @@ config ENABLE_WARN_DEPRECATED
> Disable this to suppress the "warning: 'foo' is deprecated
> (declared at kernel/power/somefile.c:1234)" messages.
>
> +config HAVE_ARCH_DEBUG_ATOMIC
> + bool
> +
> +config ENABLE_WARN_ATOMIC_INC_WRAP
> + bool "Enable warning on atomic_inc()/atomic_dec() wrap"
> + depends on HAVE_ARCH_DEBUG_ATOMIC
> + default y
> + help
> + Enable printing a warning when atomic_inc() or atomic_dec()
> + operation wraps around the 32-bit value.
> +

Yes, I agree with `default y' for now. But we might want to turn it
off again later. Adding that WARN to every atomic_inc/atomic_dec site
must do terrible things to the kernel text footprint.
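
(To see why: with the option enabled, every call site inlines the check
plus a WARN_ONCE(), which expands to roughly the following; a sketch, not
the exact macro:

	static int __warned;	/* one static flag per call site */

	if (unlikely(atomic_read(v) > INT_MAX / 2)) {
		if (!__warned) {
			__warned = 1;
			/* printk() of file/line plus a stack dump */
		}
	}
	raw_atomic_inc(v);

an extra load, compare, branch, per-site flag and format string at every
atomic_inc()/atomic_dec() in the tree.)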

Of course, if we make it `default y' for a while and then switch it to
`default n', the `y' state will linger for a very long time in all the
kernel developers' .configs. Good! Very sneaky.