Changes in v2:
* dropped already merged patches
* rebase on top of linux-next/master
* Now, by default, refcount_t = atomic_t (*) and uses all the standard
atomic operations unless CONFIG_REFCOUNT_FULL is enabled.
This is a compromise for subsystems that are performance-critical
(such as net) and cannot accept even a slight slowdown of the
refcounter operations.
This series replaces atomic_t reference counters in core kernel components
with the new refcount_t type and API (see include/linux/refcount.h).
By doing this we prevent intentional or accidental
underflows or overflows that can lead to use-after-free vulnerabilities.
The patches are fully independent and can be cherry-picked separately.
If there are no objections to the patches, please merge them via the
respective trees.
To test with full refcount_t protection, CONFIG_REFCOUNT_FULL must be
enabled.
* The respective change is currently merged into -next as
"locking/refcount: Create unchecked atomic_t implementation".
Elena Reshetova (15):
kernel: convert sighand_struct.count from atomic_t to refcount_t
kernel: convert signal_struct.sigcnt from atomic_t to refcount_t
kernel: convert user_struct.__count from atomic_t to refcount_t
kernel: convert task_struct.usage from atomic_t to refcount_t
kernel: convert task_struct.stack_refcount from atomic_t to refcount_t
kernel: convert perf_event_context.refcount from atomic_t to
refcount_t
kernel: convert ring_buffer.refcount from atomic_t to refcount_t
kernel: convert ring_buffer.aux_refcount from atomic_t to refcount_t
kernel: convert uprobe.ref from atomic_t to refcount_t
kernel: convert nsproxy.count from atomic_t to refcount_t
kernel: convert group_info.usage from atomic_t to refcount_t
kernel: convert cred.usage from atomic_t to refcount_t
kernel: convert numa_group.refcount from atomic_t to refcount_t
kernel: convert futex_pi_state.refcount from atomic_t to refcount_t
kernel: convert kcov.refcount from atomic_t to refcount_t
fs/exec.c | 4 ++--
fs/proc/task_nommu.c | 2 +-
include/linux/cred.h | 13 ++++++------
include/linux/init_task.h | 7 +++---
include/linux/nsproxy.h | 6 +++---
include/linux/perf_event.h | 3 ++-
include/linux/sched.h | 5 +++--
include/linux/sched/signal.h | 5 +++--
include/linux/sched/task.h | 4 ++--
include/linux/sched/task_stack.h | 2 +-
include/linux/sched/user.h | 5 +++--
kernel/cred.c | 46 ++++++++++++++++++++--------------------
kernel/events/core.c | 18 ++++++++--------
kernel/events/internal.h | 5 +++--
kernel/events/ring_buffer.c | 8 +++----
kernel/events/uprobes.c | 8 +++----
kernel/fork.c | 24 ++++++++++-----------
kernel/futex.c | 13 ++++++------
kernel/groups.c | 2 +-
kernel/kcov.c | 9 ++++----
kernel/nsproxy.c | 6 +++---
kernel/sched/fair.c | 8 +++----
kernel/user.c | 8 +++----
23 files changed, 110 insertions(+), 101 deletions(-)
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
fs/exec.c | 4 ++--
fs/proc/task_nommu.c | 2 +-
include/linux/init_task.h | 2 +-
include/linux/sched/signal.h | 3 ++-
kernel/fork.c | 8 ++++----
5 files changed, 10 insertions(+), 9 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 9041990..6d38375 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1199,7 +1199,7 @@ static int de_thread(struct task_struct *tsk)
flush_itimer_signals();
#endif
- if (atomic_read(&oldsighand->count) != 1) {
+ if (refcount_read(&oldsighand->count) != 1) {
struct sighand_struct *newsighand;
/*
* This ->sighand is shared with the CLONE_SIGHAND
@@ -1209,7 +1209,7 @@ static int de_thread(struct task_struct *tsk)
if (!newsighand)
return -ENOMEM;
- atomic_set(&newsighand->count, 1);
+ refcount_set(&newsighand->count, 1);
memcpy(newsighand->action, oldsighand->action,
sizeof(newsighand->action));
diff --git a/fs/proc/task_nommu.c b/fs/proc/task_nommu.c
index 38ca416..eea6b91 100644
--- a/fs/proc/task_nommu.c
+++ b/fs/proc/task_nommu.c
@@ -63,7 +63,7 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
else
bytes += kobjsize(current->files);
- if (current->sighand && atomic_read(&current->sighand->count) > 1)
+ if (current->sighand && refcount_read(&current->sighand->count) > 1)
sbytes += kobjsize(current->sighand);
else
bytes += kobjsize(current->sighand);
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 9fa5aae..369211d 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -85,7 +85,7 @@ extern struct fs_struct init_fs;
extern struct nsproxy init_nsproxy;
#define INIT_SIGHAND(sighand) { \
- .count = ATOMIC_INIT(1), \
+ .count = REFCOUNT_INIT(1), \
.action = { { { .sa_handler = SIG_DFL, } }, }, \
.siglock = __SPIN_LOCK_UNLOCKED(sighand.siglock), \
.signalfd_wqh = __WAIT_QUEUE_HEAD_INITIALIZER(sighand.signalfd_wqh), \
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 2a0dd40..4d5cdf1 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -7,13 +7,14 @@
#include <linux/sched/jobctl.h>
#include <linux/sched/task.h>
#include <linux/cred.h>
+#include <linux/refcount.h>
/*
* Types defining task->signal and task->sighand and APIs using them:
*/
struct sighand_struct {
- atomic_t count;
+ refcount_t count;
struct k_sigaction action[_NSIG];
spinlock_t siglock;
wait_queue_head_t signalfd_wqh;
diff --git a/kernel/fork.c b/kernel/fork.c
index 6dded82..4e28f50 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1317,7 +1317,7 @@ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
struct sighand_struct *sig;
if (clone_flags & CLONE_SIGHAND) {
- atomic_inc(&current->sighand->count);
+ refcount_inc(&current->sighand->count);
return 0;
}
sig = kmem_cache_alloc(sighand_cachep, GFP_KERNEL);
@@ -1325,14 +1325,14 @@ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
if (!sig)
return -ENOMEM;
- atomic_set(&sig->count, 1);
+ refcount_set(&sig->count, 1);
memcpy(sig->action, current->sighand->action, sizeof(sig->action));
return 0;
}
void __cleanup_sighand(struct sighand_struct *sighand)
{
- if (atomic_dec_and_test(&sighand->count)) {
+ if (refcount_dec_and_test(&sighand->count)) {
signalfd_cleanup(sighand);
/*
* sighand_cachep is SLAB_TYPESAFE_BY_RCU so we can free it
@@ -2236,7 +2236,7 @@ static int check_unshare_flags(unsigned long unshare_flags)
return -EINVAL;
}
if (unshare_flags & (CLONE_SIGHAND | CLONE_VM)) {
- if (atomic_read(&current->sighand->count) > 1)
+ if (refcount_read(&current->sighand->count) > 1)
return -EINVAL;
}
if (unshare_flags & CLONE_VM) {
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/sched/signal.h | 2 +-
kernel/fork.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 4d5cdf1..c5f1a67 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -77,7 +77,7 @@ struct thread_group_cputimer {
* the locking of signal_struct.
*/
struct signal_struct {
- atomic_t sigcnt;
+ refcount_t sigcnt;
atomic_t live;
int nr_threads;
struct list_head thread_head;
diff --git a/kernel/fork.c b/kernel/fork.c
index 4e28f50..a9763f6 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -399,7 +399,7 @@ static inline void free_signal_struct(struct signal_struct *sig)
static inline void put_signal_struct(struct signal_struct *sig)
{
- if (atomic_dec_and_test(&sig->sigcnt))
+ if (refcount_dec_and_test(&sig->sigcnt))
free_signal_struct(sig);
}
@@ -1379,7 +1379,7 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
sig->nr_threads = 1;
atomic_set(&sig->live, 1);
- atomic_set(&sig->sigcnt, 1);
+ refcount_set(&sig->sigcnt, 1);
/* list_add(thread_node, thread_head) without INIT_LIST_HEAD() */
sig->thread_head = (struct list_head)LIST_HEAD_INIT(tsk->thread_node);
@@ -1888,7 +1888,7 @@ static __latent_entropy struct task_struct *copy_process(
} else {
current->signal->nr_threads++;
atomic_inc(&current->signal->live);
- atomic_inc(&current->signal->sigcnt);
+ refcount_inc(&current->signal->sigcnt);
list_add_tail_rcu(&p->thread_group,
&p->group_leader->thread_group);
list_add_tail_rcu(&p->thread_node,
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/init_task.h | 2 +-
include/linux/sched.h | 3 ++-
include/linux/sched/task.h | 4 ++--
kernel/fork.c | 4 ++--
4 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 369211d..348466a 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -228,7 +228,7 @@ extern struct cred init_cred;
INIT_TASK_TI(tsk) \
.state = 0, \
.stack = init_stack, \
- .usage = ATOMIC_INIT(2), \
+ .usage = REFCOUNT_INIT(2), \
.flags = PF_KTHREAD, \
.prio = MAX_PRIO-20, \
.static_prio = MAX_PRIO-20, \
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4e933f3..6a01517 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -20,6 +20,7 @@
#include <linux/seccomp.h>
#include <linux/nodemask.h>
#include <linux/rcupdate.h>
+#include <linux/refcount.h>
#include <linux/resource.h>
#include <linux/latencytop.h>
#include <linux/sched/prio.h>
@@ -516,7 +517,7 @@ struct task_struct {
randomized_struct_fields_start
void *stack;
- atomic_t usage;
+ refcount_t usage;
/* Per task flags (PF_*), defined further below: */
unsigned int flags;
unsigned int ptrace;
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index c97e5f0..afd5e58 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -86,13 +86,13 @@ extern void sched_exec(void);
#define sched_exec() {}
#endif
-#define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0)
+#define get_task_struct(tsk) do { refcount_inc(&(tsk)->usage); } while(0)
extern void __put_task_struct(struct task_struct *t);
static inline void put_task_struct(struct task_struct *t)
{
- if (atomic_dec_and_test(&t->usage))
+ if (refcount_dec_and_test(&t->usage))
__put_task_struct(t);
}
diff --git a/kernel/fork.c b/kernel/fork.c
index a9763f6..c549b0b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -406,7 +406,7 @@ static inline void put_signal_struct(struct signal_struct *sig)
void __put_task_struct(struct task_struct *tsk)
{
WARN_ON(!tsk->exit_state);
- WARN_ON(atomic_read(&tsk->usage));
+ WARN_ON(refcount_read(&tsk->usage));
WARN_ON(tsk == current);
cgroup_free(tsk);
@@ -561,7 +561,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
* One for us, one for whoever does the "release_task()" (usually
* parent)
*/
- atomic_set(&tsk->usage, 2);
+ refcount_set(&tsk->usage, 2);
#ifdef CONFIG_BLK_DEV_IO_TRACE
tsk->btrace_seq = 0;
#endif
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/perf_event.h | 3 ++-
kernel/events/core.c | 12 ++++++------
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index a3b873f..f7a9802 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -54,6 +54,7 @@ struct perf_guest_info_callbacks {
#include <linux/perf_regs.h>
#include <linux/workqueue.h>
#include <linux/cgroup.h>
+#include <linux/refcount.h>
#include <asm/local.h>
struct perf_callchain_entry {
@@ -750,7 +751,7 @@ struct perf_event_context {
int nr_stat;
int nr_freq;
int rotate_disable;
- atomic_t refcount;
+ refcount_t refcount;
struct task_struct *task;
/*
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1538df9..11d051f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1109,7 +1109,7 @@ static void perf_event_ctx_deactivate(struct perf_event_context *ctx)
static void get_ctx(struct perf_event_context *ctx)
{
- WARN_ON(!atomic_inc_not_zero(&ctx->refcount));
+ refcount_inc(&ctx->refcount);
}
static void free_ctx(struct rcu_head *head)
@@ -1123,7 +1123,7 @@ static void free_ctx(struct rcu_head *head)
static void put_ctx(struct perf_event_context *ctx)
{
- if (atomic_dec_and_test(&ctx->refcount)) {
+ if (refcount_dec_and_test(&ctx->refcount)) {
if (ctx->parent_ctx)
put_ctx(ctx->parent_ctx);
if (ctx->task && ctx->task != TASK_TOMBSTONE)
@@ -1201,7 +1201,7 @@ perf_event_ctx_lock_nested(struct perf_event *event, int nesting)
again:
rcu_read_lock();
ctx = ACCESS_ONCE(event->ctx);
- if (!atomic_inc_not_zero(&ctx->refcount)) {
+ if (!refcount_inc_not_zero(&ctx->refcount)) {
rcu_read_unlock();
goto again;
}
@@ -1329,7 +1329,7 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
}
if (ctx->task == TASK_TOMBSTONE ||
- !atomic_inc_not_zero(&ctx->refcount)) {
+ !refcount_inc_not_zero(&ctx->refcount)) {
raw_spin_unlock(&ctx->lock);
ctx = NULL;
} else {
@@ -3760,7 +3760,7 @@ static void __perf_event_init_context(struct perf_event_context *ctx)
INIT_LIST_HEAD(&ctx->pinned_groups);
INIT_LIST_HEAD(&ctx->flexible_groups);
INIT_LIST_HEAD(&ctx->event_list);
- atomic_set(&ctx->refcount, 1);
+ refcount_set(&ctx->refcount, 1);
}
static struct perf_event_context *
@@ -9791,7 +9791,7 @@ __perf_event_ctx_lock_double(struct perf_event *group_leader,
again:
rcu_read_lock();
gctx = READ_ONCE(group_leader->ctx);
- if (!atomic_inc_not_zero(&gctx->refcount)) {
+ if (!refcount_inc_not_zero(&gctx->refcount)) {
rcu_read_unlock();
goto again;
}
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
kernel/events/core.c | 4 ++--
kernel/events/internal.h | 3 ++-
kernel/events/ring_buffer.c | 2 +-
3 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 11d051f..890c3a8 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5049,7 +5049,7 @@ struct ring_buffer *ring_buffer_get(struct perf_event *event)
rcu_read_lock();
rb = rcu_dereference(event->rb);
if (rb) {
- if (!atomic_inc_not_zero(&rb->refcount))
+ if (!refcount_inc_not_zero(&rb->refcount))
rb = NULL;
}
rcu_read_unlock();
@@ -5059,7 +5059,7 @@ struct ring_buffer *ring_buffer_get(struct perf_event *event)
void ring_buffer_put(struct ring_buffer *rb)
{
- if (!atomic_dec_and_test(&rb->refcount))
+ if (!refcount_dec_and_test(&rb->refcount))
return;
WARN_ON_ONCE(!list_empty(&rb->event_list));
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 486fd78..b8e6fdf 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -3,13 +3,14 @@
#include <linux/hardirq.h>
#include <linux/uaccess.h>
+#include <linux/refcount.h>
/* Buffer handling */
#define RING_BUFFER_WRITABLE 0x01
struct ring_buffer {
- atomic_t refcount;
+ refcount_t refcount;
struct rcu_head rcu_head;
#ifdef CONFIG_PERF_USE_VMALLOC
struct work_struct work;
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index ee97196..3353572 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -284,7 +284,7 @@ ring_buffer_init(struct ring_buffer *rb, long watermark, int flags)
else
rb->overwrite = 1;
- atomic_set(&rb->refcount, 1);
+ refcount_set(&rb->refcount, 1);
INIT_LIST_HEAD(&rb->event_list);
spin_lock_init(&rb->event_lock);
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
kernel/events/core.c | 2 +-
kernel/events/internal.h | 2 +-
kernel/events/ring_buffer.c | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 890c3a8..aaea75b 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5124,7 +5124,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
/* this has to be the last one */
rb_free_aux(rb);
- WARN_ON_ONCE(atomic_read(&rb->aux_refcount));
+ WARN_ON_ONCE(refcount_read(&rb->aux_refcount));
mutex_unlock(&event->mmap_mutex);
}
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index b8e6fdf..fb55716 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -48,7 +48,7 @@ struct ring_buffer {
atomic_t aux_mmap_count;
unsigned long aux_mmap_locked;
void (*free_aux)(void *);
- atomic_t aux_refcount;
+ refcount_t aux_refcount;
void **aux_pages;
void *aux_priv;
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 3353572..ba12fd2 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -357,7 +357,7 @@ void *perf_aux_output_begin(struct perf_output_handle *handle,
if (!atomic_read(&rb->aux_mmap_count))
goto err;
- if (!atomic_inc_not_zero(&rb->aux_refcount))
+ if (!refcount_inc_not_zero(&rb->aux_refcount))
goto err;
/*
@@ -648,7 +648,7 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
* we keep a refcount here to make sure either of the two can
* reference them safely.
*/
- atomic_set(&rb->aux_refcount, 1);
+ refcount_set(&rb->aux_refcount, 1);
rb->aux_overwrite = overwrite;
rb->aux_watermark = watermark;
@@ -667,7 +667,7 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
void rb_free_aux(struct ring_buffer *rb)
{
- if (atomic_dec_and_test(&rb->aux_refcount))
+ if (refcount_dec_and_test(&rb->aux_refcount))
__rb_free_aux(rb);
}
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/nsproxy.h | 6 +++---
kernel/nsproxy.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/nsproxy.h b/include/linux/nsproxy.h
index ac0d65b..f862ba8 100644
--- a/include/linux/nsproxy.h
+++ b/include/linux/nsproxy.h
@@ -28,7 +28,7 @@ struct fs_struct;
* nsproxy is copied.
*/
struct nsproxy {
- atomic_t count;
+ refcount_t count;
struct uts_namespace *uts_ns;
struct ipc_namespace *ipc_ns;
struct mnt_namespace *mnt_ns;
@@ -74,14 +74,14 @@ int __init nsproxy_cache_init(void);
static inline void put_nsproxy(struct nsproxy *ns)
{
- if (atomic_dec_and_test(&ns->count)) {
+ if (refcount_dec_and_test(&ns->count)) {
free_nsproxy(ns);
}
}
static inline void get_nsproxy(struct nsproxy *ns)
{
- atomic_inc(&ns->count);
+ refcount_inc(&ns->count);
}
#endif
diff --git a/kernel/nsproxy.c b/kernel/nsproxy.c
index f6c5d33..5bfe691 100644
--- a/kernel/nsproxy.c
+++ b/kernel/nsproxy.c
@@ -31,7 +31,7 @@
static struct kmem_cache *nsproxy_cachep;
struct nsproxy init_nsproxy = {
- .count = ATOMIC_INIT(1),
+ .count = REFCOUNT_INIT(1),
.uts_ns = &init_uts_ns,
#if defined(CONFIG_POSIX_MQUEUE) || defined(CONFIG_SYSVIPC)
.ipc_ns = &init_ipc_ns,
@@ -52,7 +52,7 @@ static inline struct nsproxy *create_nsproxy(void)
nsproxy = kmem_cache_alloc(nsproxy_cachep, GFP_KERNEL);
if (nsproxy)
- atomic_set(&nsproxy->count, 1);
+ refcount_set(&nsproxy->count, 1);
return nsproxy;
}
@@ -225,7 +225,7 @@ void switch_task_namespaces(struct task_struct *p, struct nsproxy *new)
p->nsproxy = new;
task_unlock(p);
- if (ns && atomic_dec_and_test(&ns->count))
+ if (ns && refcount_dec_and_test(&ns->count))
free_nsproxy(ns);
}
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/cred.h | 7 ++++---
kernel/cred.c | 2 +-
kernel/groups.c | 2 +-
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/include/linux/cred.h b/include/linux/cred.h
index 099058e..00948dd 100644
--- a/include/linux/cred.h
+++ b/include/linux/cred.h
@@ -17,6 +17,7 @@
#include <linux/key.h>
#include <linux/selinux.h>
#include <linux/atomic.h>
+#include <linux/refcount.h>
#include <linux/uidgid.h>
#include <linux/sched.h>
#include <linux/sched/user.h>
@@ -28,7 +29,7 @@ struct inode;
* COW Supplementary groups list
*/
struct group_info {
- atomic_t usage;
+ refcount_t usage;
int ngroups;
kgid_t gid[0];
} __randomize_layout;
@@ -44,7 +45,7 @@ struct group_info {
*/
static inline struct group_info *get_group_info(struct group_info *gi)
{
- atomic_inc(&gi->usage);
+ refcount_inc(&gi->usage);
return gi;
}
@@ -54,7 +55,7 @@ static inline struct group_info *get_group_info(struct group_info *gi)
*/
#define put_group_info(group_info) \
do { \
- if (atomic_dec_and_test(&(group_info)->usage)) \
+ if (refcount_dec_and_test(&(group_info)->usage)) \
groups_free(group_info); \
} while (0)
diff --git a/kernel/cred.c b/kernel/cred.c
index ecf0365..8122d7c 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -36,7 +36,7 @@ do { \
static struct kmem_cache *cred_jar;
/* init to 2 - one for init_task, one to ensure it is never freed */
-struct group_info init_groups = { .usage = ATOMIC_INIT(2) };
+struct group_info init_groups = { .usage = REFCOUNT_INIT(2) };
/*
* The initial credentials for the initial task
diff --git a/kernel/groups.c b/kernel/groups.c
index 434f666..5fc6e21 100644
--- a/kernel/groups.c
+++ b/kernel/groups.c
@@ -23,7 +23,7 @@ struct group_info *groups_alloc(int gidsetsize)
if (!gi)
return NULL;
- atomic_set(&gi->usage, 1);
+ refcount_set(&gi->usage, 1);
gi->ngroups = gidsetsize;
return gi;
}
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
kernel/sched/fair.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 008c514..6e5b714 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1130,7 +1130,7 @@ static void account_numa_dequeue(struct rq *rq, struct task_struct *p)
}
struct numa_group {
- atomic_t refcount;
+ refcount_t refcount;
spinlock_t lock; /* nr_tasks, tasks */
int nr_tasks;
@@ -2177,12 +2177,12 @@ static void task_numa_placement(struct task_struct *p)
static inline int get_numa_group(struct numa_group *grp)
{
- return atomic_inc_not_zero(&grp->refcount);
+ return refcount_inc_not_zero(&grp->refcount);
}
static inline void put_numa_group(struct numa_group *grp)
{
- if (atomic_dec_and_test(&grp->refcount))
+ if (refcount_dec_and_test(&grp->refcount))
kfree_rcu(grp, rcu);
}
@@ -2203,7 +2203,7 @@ static void task_numa_group(struct task_struct *p, int cpupid, int flags,
if (!grp)
return;
- atomic_set(&grp->refcount, 1);
+ refcount_set(&grp->refcount, 1);
grp->active_nodes = 1;
grp->max_faults_cpu = 0;
spin_lock_init(&grp->lock);
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
kernel/futex.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/kernel/futex.c b/kernel/futex.c
index 16dbe4c..7458dfc 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -67,6 +67,7 @@
#include <linux/freezer.h>
#include <linux/bootmem.h>
#include <linux/fault-inject.h>
+#include <linux/refcount.h>
#include <asm/futex.h>
@@ -209,7 +210,7 @@ struct futex_pi_state {
struct rt_mutex pi_mutex;
struct task_struct *owner;
- atomic_t refcount;
+ refcount_t refcount;
union futex_key key;
} __randomize_layout;
@@ -794,7 +795,7 @@ static int refill_pi_state_cache(void)
INIT_LIST_HEAD(&pi_state->list);
/* pi_mutex gets initialized later */
pi_state->owner = NULL;
- atomic_set(&pi_state->refcount, 1);
+ refcount_set(&pi_state->refcount, 1);
pi_state->key = FUTEX_KEY_INIT;
current->pi_state_cache = pi_state;
@@ -814,7 +815,7 @@ static struct futex_pi_state *alloc_pi_state(void)
static void get_pi_state(struct futex_pi_state *pi_state)
{
- WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
+ WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
}
/*
@@ -828,7 +829,7 @@ static void put_pi_state(struct futex_pi_state *pi_state)
if (!pi_state)
return;
- if (!atomic_dec_and_test(&pi_state->refcount))
+ if (!refcount_dec_and_test(&pi_state->refcount))
return;
/*
@@ -852,7 +853,7 @@ static void put_pi_state(struct futex_pi_state *pi_state)
* refcount is at 0 - put it back to 1.
*/
pi_state->owner = NULL;
- atomic_set(&pi_state->refcount, 1);
+ refcount_set(&pi_state->refcount, 1);
current->pi_state_cache = pi_state;
}
}
@@ -1046,7 +1047,7 @@ static int attach_to_pi_state(u32 __user *uaddr, u32 uval,
* and futex_wait_requeue_pi() as it cannot go to 0 and consequently
* free pi_state before we can take a reference ourselves.
*/
- WARN_ON(!atomic_read(&pi_state->refcount));
+ WARN_ON(!refcount_read(&pi_state->refcount));
/*
* Now that we have a pi_state, we can acquire wait_lock
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
kernel/kcov.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/kcov.c b/kernel/kcov.c
index cd77199..a6c7df3 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -19,6 +19,7 @@
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/kcov.h>
+#include <linux/refcount.h>
#include <asm/setup.h>
/*
@@ -35,7 +36,7 @@ struct kcov {
* - opened file descriptor
* - task with enabled coverage (we can't unwire it from another task)
*/
- atomic_t refcount;
+ refcount_t refcount;
/* The lock protects mode, size, area and t. */
spinlock_t lock;
enum kcov_mode mode;
@@ -94,12 +95,12 @@ EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
static void kcov_get(struct kcov *kcov)
{
- atomic_inc(&kcov->refcount);
+ refcount_inc(&kcov->refcount);
}
static void kcov_put(struct kcov *kcov)
{
- if (atomic_dec_and_test(&kcov->refcount)) {
+ if (refcount_dec_and_test(&kcov->refcount)) {
vfree(kcov->area);
kfree(kcov);
}
@@ -175,7 +176,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
if (!kcov)
return -ENOMEM;
- atomic_set(&kcov->refcount, 1);
+ refcount_set(&kcov->refcount, 1);
spin_lock_init(&kcov->lock);
filep->private_data = kcov;
return nonseekable_open(inode, filep);
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/cred.h | 6 +++---
kernel/cred.c | 44 ++++++++++++++++++++++----------------------
2 files changed, 25 insertions(+), 25 deletions(-)
diff --git a/include/linux/cred.h b/include/linux/cred.h
index 00948dd..a9f217b 100644
--- a/include/linux/cred.h
+++ b/include/linux/cred.h
@@ -109,7 +109,7 @@ extern bool may_setgroups(void);
* same context as task->real_cred.
*/
struct cred {
- atomic_t usage;
+ refcount_t usage;
#ifdef CONFIG_DEBUG_CREDENTIALS
atomic_t subscribers; /* number of processes subscribed */
void *put_addr;
@@ -222,7 +222,7 @@ static inline bool cap_ambient_invariant_ok(const struct cred *cred)
*/
static inline struct cred *get_new_cred(struct cred *cred)
{
- atomic_inc(&cred->usage);
+ refcount_inc(&cred->usage);
return cred;
}
@@ -262,7 +262,7 @@ static inline void put_cred(const struct cred *_cred)
struct cred *cred = (struct cred *) _cred;
validate_creds(cred);
- if (atomic_dec_and_test(&(cred)->usage))
+ if (refcount_dec_and_test(&(cred)->usage))
__put_cred(cred);
}
diff --git a/kernel/cred.c b/kernel/cred.c
index 8122d7c..49664e5 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -42,7 +42,7 @@ struct group_info init_groups = { .usage = REFCOUNT_INIT(2) };
* The initial credentials for the initial task
*/
struct cred init_cred = {
- .usage = ATOMIC_INIT(4),
+ .usage = REFCOUNT_INIT(4),
#ifdef CONFIG_DEBUG_CREDENTIALS
.subscribers = ATOMIC_INIT(2),
.magic = CRED_MAGIC,
@@ -101,17 +101,17 @@ static void put_cred_rcu(struct rcu_head *rcu)
#ifdef CONFIG_DEBUG_CREDENTIALS
if (cred->magic != CRED_MAGIC_DEAD ||
- atomic_read(&cred->usage) != 0 ||
+ refcount_read(&cred->usage) != 0 ||
read_cred_subscribers(cred) != 0)
panic("CRED: put_cred_rcu() sees %p with"
" mag %x, put %p, usage %d, subscr %d\n",
cred, cred->magic, cred->put_addr,
- atomic_read(&cred->usage),
+ refcount_read(&cred->usage),
read_cred_subscribers(cred));
#else
- if (atomic_read(&cred->usage) != 0)
+ if (refcount_read(&cred->usage) != 0)
panic("CRED: put_cred_rcu() sees %p with usage %d\n",
- cred, atomic_read(&cred->usage));
+ cred, refcount_read(&cred->usage));
#endif
security_cred_free(cred);
@@ -135,10 +135,10 @@ static void put_cred_rcu(struct rcu_head *rcu)
void __put_cred(struct cred *cred)
{
kdebug("__put_cred(%p{%d,%d})", cred,
- atomic_read(&cred->usage),
+ refcount_read(&cred->usage),
read_cred_subscribers(cred));
- BUG_ON(atomic_read(&cred->usage) != 0);
+ BUG_ON(refcount_read(&cred->usage) != 0);
#ifdef CONFIG_DEBUG_CREDENTIALS
BUG_ON(read_cred_subscribers(cred) != 0);
cred->magic = CRED_MAGIC_DEAD;
@@ -159,7 +159,7 @@ void exit_creds(struct task_struct *tsk)
struct cred *cred;
kdebug("exit_creds(%u,%p,%p,{%d,%d})", tsk->pid, tsk->real_cred, tsk->cred,
- atomic_read(&tsk->cred->usage),
+ refcount_read(&tsk->cred->usage),
read_cred_subscribers(tsk->cred));
cred = (struct cred *) tsk->real_cred;
@@ -194,7 +194,7 @@ const struct cred *get_task_cred(struct task_struct *task)
do {
cred = __task_cred((task));
BUG_ON(!cred);
- } while (!atomic_inc_not_zero(&((struct cred *)cred)->usage));
+ } while (!refcount_inc_not_zero(&((struct cred *)cred)->usage));
rcu_read_unlock();
return cred;
@@ -212,7 +212,7 @@ struct cred *cred_alloc_blank(void)
if (!new)
return NULL;
- atomic_set(&new->usage, 1);
+ refcount_set(&new->usage, 1);
#ifdef CONFIG_DEBUG_CREDENTIALS
new->magic = CRED_MAGIC;
#endif
@@ -258,7 +258,7 @@ struct cred *prepare_creds(void)
old = task->cred;
memcpy(new, old, sizeof(struct cred));
- atomic_set(&new->usage, 1);
+ refcount_set(&new->usage, 1);
set_cred_subscribers(new, 0);
get_group_info(new->group_info);
get_uid(new->user);
@@ -335,7 +335,7 @@ int copy_creds(struct task_struct *p, unsigned long clone_flags)
get_cred(p->cred);
alter_cred_subscribers(p->cred, 2);
kdebug("share_creds(%p{%d,%d})",
- p->cred, atomic_read(&p->cred->usage),
+ p->cred, refcount_read(&p->cred->usage),
read_cred_subscribers(p->cred));
atomic_inc(&p->cred->user->processes);
return 0;
@@ -426,7 +426,7 @@ int commit_creds(struct cred *new)
const struct cred *old = task->real_cred;
kdebug("commit_creds(%p{%d,%d})", new,
- atomic_read(&new->usage),
+ refcount_read(&new->usage),
read_cred_subscribers(new));
BUG_ON(task->cred != old);
@@ -435,7 +435,7 @@ int commit_creds(struct cred *new)
validate_creds(old);
validate_creds(new);
#endif
- BUG_ON(atomic_read(&new->usage) < 1);
+ BUG_ON(refcount_read(&new->usage) < 1);
get_cred(new); /* we will require a ref for the subj creds too */
@@ -500,13 +500,13 @@ EXPORT_SYMBOL(commit_creds);
void abort_creds(struct cred *new)
{
kdebug("abort_creds(%p{%d,%d})", new,
- atomic_read(&new->usage),
+ refcount_read(&new->usage),
read_cred_subscribers(new));
#ifdef CONFIG_DEBUG_CREDENTIALS
BUG_ON(read_cred_subscribers(new) != 0);
#endif
- BUG_ON(atomic_read(&new->usage) < 1);
+ BUG_ON(refcount_read(&new->usage) < 1);
put_cred(new);
}
EXPORT_SYMBOL(abort_creds);
@@ -523,7 +523,7 @@ const struct cred *override_creds(const struct cred *new)
const struct cred *old = current->cred;
kdebug("override_creds(%p{%d,%d})", new,
- atomic_read(&new->usage),
+ refcount_read(&new->usage),
read_cred_subscribers(new));
validate_creds(old);
@@ -534,7 +534,7 @@ const struct cred *override_creds(const struct cred *new)
alter_cred_subscribers(old, -1);
kdebug("override_creds() = %p{%d,%d}", old,
- atomic_read(&old->usage),
+ refcount_read(&old->usage),
read_cred_subscribers(old));
return old;
}
@@ -552,7 +552,7 @@ void revert_creds(const struct cred *old)
const struct cred *override = current->cred;
kdebug("revert_creds(%p{%d,%d})", old,
- atomic_read(&old->usage),
+ refcount_read(&old->usage),
read_cred_subscribers(old));
validate_creds(old);
@@ -611,7 +611,7 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
validate_creds(old);
*new = *old;
- atomic_set(&new->usage, 1);
+ refcount_set(&new->usage, 1);
set_cred_subscribers(new, 0);
get_uid(new->user);
get_user_ns(new->user_ns);
@@ -735,7 +735,7 @@ static void dump_invalid_creds(const struct cred *cred, const char *label,
printk(KERN_ERR "CRED: ->magic=%x, put_addr=%p\n",
cred->magic, cred->put_addr);
printk(KERN_ERR "CRED: ->usage=%d, subscr=%d\n",
- atomic_read(&cred->usage),
+ refcount_read(&cred->usage),
read_cred_subscribers(cred));
printk(KERN_ERR "CRED: ->*uid = { %d,%d,%d,%d }\n",
from_kuid_munged(&init_user_ns, cred->uid),
@@ -809,7 +809,7 @@ void validate_creds_for_do_exit(struct task_struct *tsk)
{
kdebug("validate_creds_for_do_exit(%p,%p{%d,%d})",
tsk->real_cred, tsk->cred,
- atomic_read(&tsk->cred->usage),
+ refcount_read(&tsk->cred->usage),
read_cred_subscribers(tsk->cred));
__validate_process_creds(tsk, __FILE__, __LINE__);
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
kernel/events/uprobes.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 456596d..ca093957 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -66,7 +66,7 @@ static struct percpu_rw_semaphore dup_mmap_sem;
struct uprobe {
struct rb_node rb_node; /* node in the rb tree */
- atomic_t ref;
+ refcount_t ref;
struct rw_semaphore register_rwsem;
struct rw_semaphore consumer_rwsem;
struct list_head pending_list;
@@ -371,13 +371,13 @@ set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long v
static struct uprobe *get_uprobe(struct uprobe *uprobe)
{
- atomic_inc(&uprobe->ref);
+ refcount_inc(&uprobe->ref);
return uprobe;
}
static void put_uprobe(struct uprobe *uprobe)
{
- if (atomic_dec_and_test(&uprobe->ref))
+ if (refcount_dec_and_test(&uprobe->ref))
kfree(uprobe);
}
@@ -459,7 +459,7 @@ static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
rb_link_node(&uprobe->rb_node, parent, p);
rb_insert_color(&uprobe->rb_node, &uprobes_tree);
/* get access + creation ref */
- atomic_set(&uprobe->ref, 2);
+ refcount_set(&uprobe->ref, 2);
return u;
}
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/sched/user.h | 5 +++--
kernel/user.c | 8 ++++----
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/include/linux/sched/user.h b/include/linux/sched/user.h
index 5d5415e..0a9953d 100644
--- a/include/linux/sched/user.h
+++ b/include/linux/sched/user.h
@@ -3,6 +3,7 @@
#include <linux/uidgid.h>
#include <linux/atomic.h>
+#include <linux/refcount.h>
struct key;
@@ -10,7 +11,7 @@ struct key;
* Some day this will be a full-fledged user tracking system..
*/
struct user_struct {
- atomic_t __count; /* reference count */
+ refcount_t __count; /* reference count */
atomic_t processes; /* How many processes does this user have? */
atomic_t sigpending; /* How many pending signals does this user have? */
#ifdef CONFIG_FANOTIFY
@@ -53,7 +54,7 @@ extern struct user_struct root_user;
extern struct user_struct * alloc_uid(kuid_t);
static inline struct user_struct *get_uid(struct user_struct *u)
{
- atomic_inc(&u->__count);
+ refcount_inc(&u->__count);
return u;
}
extern void free_uid(struct user_struct *);
diff --git a/kernel/user.c b/kernel/user.c
index 00281ad..c072348 100644
--- a/kernel/user.c
+++ b/kernel/user.c
@@ -90,7 +90,7 @@ static DEFINE_SPINLOCK(uidhash_lock);
/* root_user.__count is 1, for init task cred */
struct user_struct root_user = {
- .__count = ATOMIC_INIT(1),
+ .__count = REFCOUNT_INIT(1),
.processes = ATOMIC_INIT(1),
.sigpending = ATOMIC_INIT(0),
.locked_shm = 0,
@@ -116,7 +116,7 @@ static struct user_struct *uid_hash_find(kuid_t uid, struct hlist_head *hashent)
hlist_for_each_entry(user, hashent, uidhash_node) {
if (uid_eq(user->uid, uid)) {
- atomic_inc(&user->__count);
+ refcount_inc(&user->__count);
return user;
}
}
@@ -163,7 +163,7 @@ void free_uid(struct user_struct *up)
return;
local_irq_save(flags);
- if (atomic_dec_and_lock(&up->__count, &uidhash_lock))
+ if (refcount_dec_and_lock(&up->__count, &uidhash_lock))
free_user(up, flags);
else
local_irq_restore(flags);
@@ -184,7 +184,7 @@ struct user_struct *alloc_uid(kuid_t uid)
goto out_unlock;
new->uid = uid;
- atomic_set(&new->__count, 1);
+ refcount_set(&new->__count, 1);
/*
* Before adding this, check whether we raced
--
2.7.4
The refcount_t type and corresponding API should be used instead
of atomic_t when the variable is used as a reference counter. This
helps avoid accidental refcounter overflows that might lead to
use-after-free situations.
Signed-off-by: Elena Reshetova <[email protected]>
Signed-off-by: Hans Liljestrand <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: David Windsor <[email protected]>
---
include/linux/init_task.h | 3 ++-
include/linux/sched.h | 2 +-
include/linux/sched/task_stack.h | 2 +-
kernel/fork.c | 6 +++---
4 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 348466a..96802198 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -12,6 +12,7 @@
#include <linux/securebits.h>
#include <linux/seqlock.h>
#include <linux/rbtree.h>
+#include <linux/refcount.h>
#include <linux/sched/autogroup.h>
#include <net/net_namespace.h>
#include <linux/sched/rt.h>
@@ -208,7 +209,7 @@ extern struct cred init_cred;
#ifdef CONFIG_THREAD_INFO_IN_TASK
# define INIT_TASK_TI(tsk) \
.thread_info = INIT_THREAD_INFO(tsk), \
- .stack_refcount = ATOMIC_INIT(1),
+ .stack_refcount = REFCOUNT_INIT(1),
#else
# define INIT_TASK_TI(tsk)
#endif
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6a01517..73be022 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1070,7 +1070,7 @@ struct task_struct {
#endif
#ifdef CONFIG_THREAD_INFO_IN_TASK
/* A live task holds one reference: */
- atomic_t stack_refcount;
+ refcount_t stack_refcount;
#endif
#ifdef CONFIG_LIVEPATCH
int patch_state;
diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
index df6ea66..aab3809 100644
--- a/include/linux/sched/task_stack.h
+++ b/include/linux/sched/task_stack.h
@@ -60,7 +60,7 @@ static inline unsigned long *end_of_stack(struct task_struct *p)
#ifdef CONFIG_THREAD_INFO_IN_TASK
static inline void *try_get_task_stack(struct task_struct *tsk)
{
- return atomic_inc_not_zero(&tsk->stack_refcount) ?
+ return refcount_inc_not_zero(&tsk->stack_refcount) ?
task_stack_page(tsk) : NULL;
}
diff --git a/kernel/fork.c b/kernel/fork.c
index c549b0b..1e0e14e 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -354,7 +354,7 @@ static void release_task_stack(struct task_struct *tsk)
#ifdef CONFIG_THREAD_INFO_IN_TASK
void put_task_stack(struct task_struct *tsk)
{
- if (atomic_dec_and_test(&tsk->stack_refcount))
+ if (refcount_dec_and_test(&tsk->stack_refcount))
release_task_stack(tsk);
}
#endif
@@ -372,7 +372,7 @@ void free_task(struct task_struct *tsk)
* If the task had a separate stack allocation, it should be gone
* by now.
*/
- WARN_ON_ONCE(atomic_read(&tsk->stack_refcount) != 0);
+ WARN_ON_ONCE(refcount_read(&tsk->stack_refcount) != 0);
#endif
rt_mutex_debug_task_free(tsk);
ftrace_graph_exit_task(tsk);
@@ -532,7 +532,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
tsk->stack_vm_area = stack_vm_area;
#endif
#ifdef CONFIG_THREAD_INFO_IN_TASK
- atomic_set(&tsk->stack_refcount, 1);
+ refcount_set(&tsk->stack_refcount, 1);
#endif
if (err)
--
2.7.4
On Fri, Jul 07, 2017 at 12:04:28PM +0300, Elena Reshetova wrote:
> refcount_t type and corresponding API should be
> used instead of atomic_t when the variable is used as
> a reference counter. This allows to avoid accidental
> refcounter overflows that might lead to use-after-free
> situations.
>
> Signed-off-by: Elena Reshetova <[email protected]>
> Signed-off-by: Hans Liljestrand <[email protected]>
> Signed-off-by: Kees Cook <[email protected]>
> Signed-off-by: David Windsor <[email protected]>
I'll let tglx comment on the SoB chain, I know he likes those :-) You
did Cc him right, seeing how he's the maintainer of this stuff..
*sigh* you didn't :-( After so many patches sent you _really_ should
know to Cc the right people.
> ---
> kernel/futex.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
> @@ -814,7 +815,7 @@ static struct futex_pi_state *alloc_pi_state(void)
>
> static void get_pi_state(struct futex_pi_state *pi_state)
> {
> - WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
> + WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
> }
I think we have refcount_inc() for just that case, no?
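A minimal sketch of the simplification Peter is pointing at, assuming the
CONFIG_REFCOUNT_FULL semantics where refcount_inc() itself already WARNs
(and saturates) when the counter is zero, which makes the explicit
WARN_ON_ONCE() around refcount_inc_not_zero() redundant:

static void get_pi_state(struct futex_pi_state *pi_state)
{
	/*
	 * Sketch only, not the patch as posted: under CONFIG_REFCOUNT_FULL,
	 * refcount_inc() warns and saturates on an increment from 0, so no
	 * separate WARN_ON_ONCE(!refcount_inc_not_zero(...)) is needed.
	 */
	refcount_inc(&pi_state->refcount);
}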
On Fri, Jul 07, 2017 at 12:04:20PM +0300, Elena Reshetova wrote:
> refcount_t type and corresponding API should be
> used instead of atomic_t when the variable is used as
> a reference counter. This allows to avoid accidental
> refcounter overflows that might lead to use-after-free
> situations.
>
> Signed-off-by: Elena Reshetova <[email protected]>
> Signed-off-by: Hans Liljestrand <[email protected]>
> Signed-off-by: Kees Cook <[email protected]>
> Signed-off-by: David Windsor <[email protected]>
That's an invalid SoB chain.. can't do anything with these patches.
On Fri, Jul 07, 2017 at 12:04:27PM +0300, Elena Reshetova wrote:
> refcount_t type and corresponding API should be
> used instead of atomic_t when the variable is used as
> a reference counter. This allows to avoid accidental
> refcounter overflows that might lead to use-after-free
> situations.
>
> Signed-off-by: Elena Reshetova <[email protected]>
> Signed-off-by: Hans Liljestrand <[email protected]>
> Signed-off-by: Kees Cook <[email protected]>
> Signed-off-by: David Windsor <[email protected]>
Invalid SoB..
> Subject: [PATCH 13/15] kernel: convert numa_group.refcount from atomic_t to refcount_t
> ---
> kernel/sched/fair.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
Invalid Subject prefix too, scheduler patches get to have "sched:"
On Fri, 7 Jul 2017, Peter Zijlstra wrote:
> On Fri, Jul 07, 2017 at 12:04:28PM +0300, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> > Signed-off-by: Elena Reshetova <[email protected]>
> > Signed-off-by: Hans Liljestrand <[email protected]>
> > Signed-off-by: Kees Cook <[email protected]>
> > Signed-off-by: David Windsor <[email protected]>
>
> I'll let tglx comment on the SoB chain, I know he likes those :-) You
> did Cc him right, seeing how he's the maintainer of this stuff..
Right, that SOB chain is crap. It suggests that the patch was written by
Elena and then carried on by Hans, handed over to Kees and from there to
David. And now it's resent by Elena. Circular dependencies or what?
Thanks,
tglx
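For reference, the kernel's submitting-patches documentation treats the
Signed-off-by chain as the path the patch travelled: the first SoB is the
author, each subsequent SoB is someone who handled or forwarded the patch,
and the last SoB is whoever actually sends it. A purely hypothetical tag
block that would express the same credits without implying a hand-off chain,
assuming Hans, Kees and David contributed as reviewers rather than as people
who relayed the patch (the actual roles are for the authors to state):

Signed-off-by: Elena Reshetova <[email protected]>
Reviewed-by: Hans Liljestrand <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Reviewed-by: David Windsor <[email protected]>

(hypothetical example, shown only to illustrate the chain semantics tglx
describes)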
Subject: Re: [PATCH 06/15] kernel: convert perf_event_context.refcount from atomic_t to refcount_t
>
> On Fri, Jul 07, 2017 at 12:04:20PM +0300, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> > Signed-off-by: Elena Reshetova <[email protected]>
> > Signed-off-by: Hans Liljestrand <[email protected]>
> > Signed-off-by: Kees Cook <[email protected]>
> > Signed-off-by: David Windsor <[email protected]>
>
> That's an invalid SoB chain.. can't do anything with these patches.
OK, will fix.
> On Fri, Jul 07, 2017 at 12:04:27PM +0300, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> > Signed-off-by: Elena Reshetova <[email protected]>
> > Signed-off-by: Hans Liljestrand <[email protected]>
> > Signed-off-by: Kees Cook <[email protected]>
> > Signed-off-by: David Windsor <[email protected]>
>
> Invalid SoB..
>
> > Subject: [PATCH 13/15] kernel: convert numa_group.refcount from atomic_t to refcount_t
>
> > ---
> > kernel/sched/fair.c | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
>
> Invalid Subject prefix too, scheduler patches get to have "sched:"
OK, will fix.
Best Regards,
Elena.
> On Fri, Jul 07, 2017 at 12:04:28PM +0300, Elena Reshetova wrote:
> > refcount_t type and corresponding API should be
> > used instead of atomic_t when the variable is used as
> > a reference counter. This allows to avoid accidental
> > refcounter overflows that might lead to use-after-free
> > situations.
> >
> > Signed-off-by: Elena Reshetova <[email protected]>
> > Signed-off-by: Hans Liljestrand <[email protected]>
> > Signed-off-by: Kees Cook <[email protected]>
> > Signed-off-by: David Windsor <[email protected]>
>
> I'll let tglx comment on the SoB chain, I know he likes those :-) You
> did Cc him right, seeing how he's the maintainer of this stuff..
>
> *sigh* you didn't :-( After so many patches send you _really_ should
> know to Cc the right people.
It is not as trivial as you might think. Unless the right person shows up as
maintainer/supporter when I run the get_maintainer script, it is hard to figure
out who the right CC person is. The number of patches sent doesn't help either,
because if a person reacts to patches and asks to change/fix stuff, it doesn't
mean he is the right person; he might just be reading the mailing list and have
time to do reviews :(
That said, I will try to improve the CC list.
>
> > ---
> > kernel/futex.c | 13 +++++++------
> > 1 file changed, 7 insertions(+), 6 deletions(-)
>
> > @@ -814,7 +815,7 @@ static struct futex_pi_state *alloc_pi_state(void)
> >
> > static void get_pi_state(struct futex_pi_state *pi_state)
> > {
> > - WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
> > + WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
> > }
>
> I think we have refcount_inc() for just that case, no?
>
Yes, this slipped through. Will fix so it looks shorter. Thank you for catching it!
Best Regards,
Elena.
> On Fri, 7 Jul 2017, Peter Zijlstra wrote:
>
> > On Fri, Jul 07, 2017 at 12:04:28PM +0300, Elena Reshetova wrote:
> > > refcount_t type and corresponding API should be
> > > used instead of atomic_t when the variable is used as
> > > a reference counter. This allows to avoid accidental
> > > refcounter overflows that might lead to use-after-free
> > > situations.
> > >
> > > Signed-off-by: Elena Reshetova <[email protected]>
> > > Signed-off-by: Hans Liljestrand <[email protected]>
> > > Signed-off-by: Kees Cook <[email protected]>
> > > Signed-off-by: David Windsor <[email protected]>
> >
> > I'll let tglx comment on the SoB chain, I know he likes those :-) You
> > did Cc him right, seeing how he's the maintainer of this stuff..
>
> Right, that SOB chain is crap. It suggests that the patch was written by
> Elena and then carried on by Hans, handed over to Kees and from there to
> David. And now it's resent by Elena. Circular dependencies or what?
I will fix SOB on all patches and resend.
Best Regards,
Elena
* Reshetova, Elena <[email protected]> wrote:
> > On Fri, 7 Jul 2017, Peter Zijlstra wrote:
> >
> > > On Fri, Jul 07, 2017 at 12:04:28PM +0300, Elena Reshetova wrote:
> > > > refcount_t type and corresponding API should be
> > > > used instead of atomic_t when the variable is used as
> > > > a reference counter. This allows to avoid accidental
> > > > refcounter overflows that might lead to use-after-free
> > > > situations.
> > > >
> > > > Signed-off-by: Elena Reshetova <[email protected]>
> > > > Signed-off-by: Hans Liljestrand <[email protected]>
> > > > Signed-off-by: Kees Cook <[email protected]>
> > > > Signed-off-by: David Windsor <[email protected]>
> > >
> > > I'll let tglx comment on the SoB chain, I know he likes those :-) You
> > > did Cc him right, seeing how he's the maintainer of this stuff..
> >
> > Right, that SOB chain is crap. It suggests that the patch was written by
> > Elena and then carried on by Hans, handed over to Kees and from there to
> > David. And now it's resent by Elena. Circular dependencies or what?
>
> I will fix SOB on all patches and resend.
Please don't resend any of these until the merge window is over! This is probably
the worst possible moment to seek review feedback and merging...
Thanks,
Ingo
On Fri, Jul 07, 2017 at 10:24:20AM +0000, Reshetova, Elena wrote:
> It is not so trivial as you might think. Unless right person shows up
> as maintainer/supporter when I run get_maintainer script,
In this case though it should:
FUTEX SUBSYSTEM
M: Thomas Gleixner <[email protected]>
M: Ingo Molnar <[email protected]>
R: Peter Zijlstra <[email protected]>
R: Darren Hart <[email protected]>
L: [email protected]
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking/core
S: Maintained
F: kernel/futex.c
F: kernel/futex_compat.c
F: include/asm-generic/futex.h
F: include/linux/futex.h
F: include/uapi/linux/futex.h
F: tools/testing/selftests/futex/
F: tools/perf/bench/futex*
F: Documentation/*futex*
On Fri, Jul 07, 2017 at 12:35:16PM +0200, Ingo Molnar wrote:
>
> * Reshetova, Elena <[email protected]> wrote:
>
> > > On Fri, 7 Jul 2017, Peter Zijlstra wrote:
> > >
> > > > On Fri, Jul 07, 2017 at 12:04:28PM +0300, Elena Reshetova wrote:
> > > > > refcount_t type and corresponding API should be
> > > > > used instead of atomic_t when the variable is used as
> > > > > a reference counter. This allows to avoid accidental
> > > > > refcounter overflows that might lead to use-after-free
> > > > > situations.
> > > > >
> > > > > Signed-off-by: Elena Reshetova <[email protected]>
> > > > > Signed-off-by: Hans Liljestrand <[email protected]>
> > > > > Signed-off-by: Kees Cook <[email protected]>
> > > > > Signed-off-by: David Windsor <[email protected]>
> > > >
> > > > I'll let tglx comment on the SoB chain, I know he likes those :-) You
> > > > did Cc him right, seeing how he's the maintainer of this stuff..
> > >
> > > Right, that SOB chain is crap. It suggests that the patch was written by
> > > Elena and then carried on by Hans, handed over to Kees and from there to
> > > David. And now it's resent by Elena. Circular dependencies or what?
> >
> > I will fix SOB on all patches and resend.
>
> Please don't resend any of these until the merge window is over! This is probably
> the worst possible moment to seek review feedback and merging...
>
> Thanks,
>
> Ingo
Ah, you need an "I'm ignoring your patches for two weeks" email bot;
feel free to steal the text of mine below :)
----------------------
Hi,
This is the friendly semi-automated patch-bot of Greg Kroah-Hartman.
You have sent him a patch that has triggered this response.
Right now, the development tree you have sent a patch for is "closed"
due to the timing of the merge window. Don't worry, the patch(es) you
have sent are not lost, and will be looked at after the merge window is
over (after the -rc1 kernel is released by Linus).
So thank you for your patience and your patches will be reviewed at this
later time, you do not have to do anything further, this is just a short
note to let you know the patch status and so you don't worry they didn't
make it through.
thanks,
greg k-h's patch email bot