From: Elena Reshetova <elena.reshetova@intel.com>
To: linux-kernel@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, peterz@infradead.org, gregkh@linuxfoundation.org, viro@zeniv.linux.org.uk, tj@kernel.org, mingo@redhat.com, hannes@cmpxchg.org, lizefan@huawei.com, acme@kernel.org, alexander.shishkin@linux.intel.com, eparis@redhat.com, akpm@linux-foundation.org, arnd@arndb.de, luto@kernel.org, keescook@chromium.org, tglx@linutronix.de, dvhart@infradead.org, ebiederm@xmission.com, Elena Reshetova <elena.reshetova@intel.com>
Subject: [PATCH 05/15] sched/task_struct: convert task_struct.usage to refcount_t
Date: Tue, 18 Jul 2017 13:29:58 +0300
Message-Id: <1500373808-8089-6-git-send-email-elena.reshetova@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1500373808-8089-1-git-send-email-elena.reshetova@intel.com>
References: <1500373808-8089-1-git-send-email-elena.reshetova@intel.com>

atomic_t variables are currently used to implement reference
counters with the following properties:
 - the counter is initialized to 1 using atomic_set()
 - the resource is freed when the counter reaches zero
 - once the counter reaches zero, further increments aren't allowed
 - the counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to the newly provided
refcount_t type and API, which prevents accidental counter overflows
and underflows. This is important since overflows and underflows can
lead to use-after-free situations and be exploitable.

The variable task_struct.usage is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.
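
As background for reviewers, the sketch below shows how the atomic_t
pattern listed above maps onto the <linux/refcount.h> API. It is an
illustration only: struct my_obj and its helpers are made up for this
example and are not part of the patch.

#include <linux/refcount.h>
#include <linux/slab.h>

/* Hypothetical object, used only to illustrate the conversion pattern. */
struct my_obj {
	refcount_t usage;			/* was: atomic_t usage; */
};

static struct my_obj *my_obj_alloc(void)
{
	struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		refcount_set(&obj->usage, 1);	/* was: atomic_set() */
	return obj;
}

static void my_obj_get(struct my_obj *obj)
{
	/* Saturates and WARNs on overflow instead of wrapping around. */
	refcount_inc(&obj->usage);		/* was: atomic_inc() */
}

static void my_obj_put(struct my_obj *obj)
{
	/* Frees the object once the last reference is dropped. */
	if (refcount_dec_and_test(&obj->usage))	/* was: atomic_dec_and_test() */
		kfree(obj);
}

Note that refcount_inc() also WARNs instead of incrementing a counter
that has already dropped to zero, which matches the "no further
increments after zero" rule above.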

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor
Reviewed-by: Hans Liljestrand
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
---
 include/linux/init_task.h  | 2 +-
 include/linux/sched.h      | 3 ++-
 include/linux/sched/task.h | 4 ++--
 kernel/fork.c              | 4 ++--
 4 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 61777d6..7ba3fca 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -234,7 +234,7 @@ extern struct cred init_cred;
 	INIT_TASK_TI(tsk)					\
 	.state		= 0,					\
 	.stack		= init_stack,				\
-	.usage		= ATOMIC_INIT(2),			\
+	.usage		= REFCOUNT_INIT(2),			\
 	.flags		= PF_KTHREAD,				\
 	.prio		= MAX_PRIO-20,				\
 	.static_prio	= MAX_PRIO-20,				\
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8337e2d..9380e44 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -20,6 +20,7 @@
 #include <linux/seccomp.h>
 #include <linux/nodemask.h>
 #include <linux/rcupdate.h>
+#include <linux/refcount.h>
 #include <linux/resource.h>
 #include <linux/latencytop.h>
 #include <linux/sched/prio.h>
@@ -534,7 +535,7 @@ struct task_struct {
 	randomized_struct_fields_start
 
 	void				*stack;
-	atomic_t			usage;
+	refcount_t			usage;
 	/* Per task flags (PF_*), defined further below: */
 	unsigned int			flags;
 	unsigned int			ptrace;
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index c97e5f0..afd5e58 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -86,13 +86,13 @@ extern void sched_exec(void);
 #define sched_exec()   {}
 #endif
 
-#define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0)
+#define get_task_struct(tsk) do { refcount_inc(&(tsk)->usage); } while(0)
 
 extern void __put_task_struct(struct task_struct *t);
 
 static inline void put_task_struct(struct task_struct *t)
 {
-	if (atomic_dec_and_test(&t->usage))
+	if (refcount_dec_and_test(&t->usage))
 		__put_task_struct(t);
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 337904a..cfd8a42 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -406,7 +406,7 @@ static inline void put_signal_struct(struct signal_struct *sig)
 void __put_task_struct(struct task_struct *tsk)
 {
 	WARN_ON(!tsk->exit_state);
-	WARN_ON(atomic_read(&tsk->usage));
+	WARN_ON(refcount_read(&tsk->usage));
 	WARN_ON(tsk == current);
 
 	cgroup_free(tsk);
@@ -561,7 +561,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	 * One for us, one for whoever does the "release_task()" (usually
 	 * parent)
 	 */
-	atomic_set(&tsk->usage, 2);
+	refcount_set(&tsk->usage, 2);
 #ifdef CONFIG_BLK_DEV_IO_TRACE
 	tsk->btrace_seq = 0;
 #endif
-- 
2.7.4