Date: Mon, 19 Nov 2012 22:14:08 -0800
From: Kees Cook
To: Serge Hallyn
Cc: linux-kernel@vger.kernel.org, James Morris, John Johansen,
	"Serge E. Hallyn", Eric Paris, linux-security-module@vger.kernel.org
Subject: Re: [PATCH] Yama: remove locking from delete path
In-Reply-To: <20121120034515.GA5212@sergelap>
References: <20121119233459.GA9524@www.outflux.net> <20121120034515.GA5212@sergelap>

On Mon, Nov 19, 2012 at 7:45 PM, Serge Hallyn wrote:
> Quoting Kees Cook (keescook@chromium.org):
>> Instead of locking the list during a delete, mark entries as invalid
>> and trigger a workqueue to clean them up. This lets us easily handle
>> task_free from interrupt context.
>>
>> Cc: Sasha Levin
>> Signed-off-by: Kees Cook
>> ---
>>  security/yama/yama_lsm.c | 43 ++++++++++++++++++++++++++++++++++++-------
>>  1 file changed, 36 insertions(+), 7 deletions(-)
>>
>> diff --git a/security/yama/yama_lsm.c b/security/yama/yama_lsm.c
>> index 17da6ca..1cba901 100644
>> --- a/security/yama/yama_lsm.c
>> +++ b/security/yama/yama_lsm.c
>> @@ -17,6 +17,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>
>>  #define YAMA_SCOPE_DISABLED	0
>>  #define YAMA_SCOPE_RELATIONAL	1
>> @@ -29,6 +30,7 @@ static int ptrace_scope = YAMA_SCOPE_RELATIONAL;
>>  struct ptrace_relation {
>>  	struct task_struct *tracer;
>>  	struct task_struct *tracee;
>> +	bool invalid;
>>  	struct list_head node;
>>  	struct rcu_head rcu;
>>  };
>> @@ -36,6 +38,27 @@ struct ptrace_relation {
>>  static LIST_HEAD(ptracer_relations);
>>  static DEFINE_SPINLOCK(ptracer_relations_lock);
>>
>> +static void yama_relation_cleanup(struct work_struct *work);
>> +static DECLARE_WORK(yama_relation_work, yama_relation_cleanup);
>> +
>> +/**
>> + * yama_relation_cleanup - remove invalid entries from the relation list
>> + *
>> + */
>> +static void yama_relation_cleanup(struct work_struct *work)
>> +{
>> +	struct ptrace_relation *relation;
>> +
>> +	spin_lock(&ptracer_relations_lock);
>> +	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
>> +		if (relation->invalid) {
>> +			list_del_rcu(&relation->node);
>> +			kfree_rcu(relation, rcu);
>> +		}
>> +	}
>> +	spin_unlock(&ptracer_relations_lock);
>> +}
>> +
>>  /**
>>   * yama_ptracer_add - add/replace an exception for this tracer/tracee pair
>>   * @tracer: the task_struct of the process doing the ptrace
>> @@ -57,9 +80,12 @@ static int yama_ptracer_add(struct task_struct *tracer,
>>
>>  	added->tracee = tracee;
>>  	added->tracer = tracer;
>> +	added->invalid = false;
>>
>> -	spin_lock_bh(&ptracer_relations_lock);
>> +	spin_lock(&ptracer_relations_lock);
>>  	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
>> +		if (relation->invalid)
>> +			continue;
>>  		if (relation->tracee == tracee) {
>>  			list_replace_rcu(&relation->node, &added->node);
>>  			kfree_rcu(relation, rcu);
>> @@ -70,7 +96,7 @@ static int yama_ptracer_add(struct task_struct *tracer,
>>  	list_add_rcu(&added->node, &ptracer_relations);
>>
>>  out:
>> -	spin_unlock_bh(&ptracer_relations_lock);
>> +	spin_unlock(&ptracer_relations_lock);
>>  	return 0;
>>  }
>>
>> @@ -84,15 +110,15 @@ static void yama_ptracer_del(struct task_struct *tracer,
>>  {
>>  	struct ptrace_relation *relation;
>>
>> -	spin_lock_bh(&ptracer_relations_lock);
>
> I don't understand - is there a patch I don't have sitting around
> which puts the calls to yama_ptracer_del() under rcu_read_lock()?
> If not, I don't see how it's safe to walk the list here and risk
> racing against another yama_relation_cleanup() run.
>
> I'm probably missing something really cool about the locking,
> but it doesn't look right to me.  I would think you'd want to
> do the loop under rcu_read_lock(), set a boolean if one is
> changed, and call schedule_work() once at the end if the boolean
> is set.

Unless I'm mistaken and my lockdep tests are wrong,
list_for_each_entry_rcu runs under rcu_read_lock(). I could optimize
it to only run schedule_work() once all the marking is done at the end
of the loop.

-Kees

>
>>  	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
>> +		if (relation->invalid)
>> +			continue;
>>  		if (relation->tracee == tracee ||
>>  		    (tracer && relation->tracer == tracer)) {
>> -			list_del_rcu(&relation->node);
>> -			kfree_rcu(relation, rcu);
>> +			relation->invalid = true;
>> +			schedule_work(&yama_relation_work);
>>  		}
>>  	}
>> -	spin_unlock_bh(&ptracer_relations_lock);
>>  }
>>
>>  /**
>> @@ -219,12 +245,15 @@ static int ptracer_exception_found(struct task_struct *tracer,
>>  	rcu_read_lock();
>>  	if (!thread_group_leader(tracee))
>>  		tracee = rcu_dereference(tracee->group_leader);
>> -	list_for_each_entry_rcu(relation, &ptracer_relations, node)
>> +	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
>> +		if (relation->invalid)
>> +			continue;
>>  		if (relation->tracee == tracee) {
>>  			parent = relation->tracer;
>>  			found = true;
>>  			break;
>>  		}
>> +	}
>>
>>  	if (found && (parent == NULL || task_is_descendant(parent, tracer)))
>>  		rc = 1;
>> --
>> 1.7.9.5
>>
>>
>> --
>> Kees Cook
>> Chrome OS Security

--
Kees Cook
Chrome OS Security
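
For illustration, the variant discussed above (walking the list under an
explicit rcu_read_lock() as Serge suggests, marking matches, and calling
schedule_work() only once after the loop) might look roughly like the sketch
below. This is only a sketch, not the patch as posted; it reuses the names
from the diff (ptracer_relations, yama_relation_work, the invalid flag).

/* Sketch only; not the posted patch. */
static void yama_ptracer_del(struct task_struct *tracer,
			     struct task_struct *tracee)
{
	struct ptrace_relation *relation;
	bool marked = false;

	/*
	 * Walk the list under RCU; actual unlinking and freeing is
	 * deferred to yama_relation_cleanup(), so only mark entries here.
	 */
	rcu_read_lock();
	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
		if (relation->invalid)
			continue;
		if (relation->tracee == tracee ||
		    (tracer && relation->tracer == tracer)) {
			relation->invalid = true;
			marked = true;
		}
	}
	rcu_read_unlock();

	/* Kick the cleanup work once after the walk, not per matching entry. */
	if (marked)
		schedule_work(&yama_relation_work);
}

The boolean keeps multiple matches from queuing the work repeatedly, which is
the optimization mentioned in the reply above.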