Date: Wed, 14 Jun 2017 09:57:56 -0500
From: Josh Poimboeuf
To: Petr Mladek
Cc: Joe Lawrence, live-patching@vger.kernel.org, linux-kernel@vger.kernel.org, Jessica Yu, Jiri Kosina, Miroslav Benes
Subject: Re: [PATCH 3/3] livepatch: add shadow variable sample program
Message-ID: <20170614145756.gom3zf6uv7ua423h@treble>
References: <1496341526-19061-1-git-send-email-joe.lawrence@redhat.com> <1496341526-19061-4-git-send-email-joe.lawrence@redhat.com> <20170614142102.GA2583@pathway.suse.cz>
In-Reply-To: <20170614142102.GA2583@pathway.suse.cz>

On Wed, Jun 14, 2017 at 04:21:02PM +0200, Petr Mladek wrote:
> On Thu 2017-06-01 14:25:26, Joe Lawrence wrote:
> > Modify the sample livepatch to demonstrate the shadow variable API.
> >
> > Signed-off-by: Joe Lawrence
> > ---
> >  samples/livepatch/livepatch-sample.c | 39 +++++++++++++++++++++++++++++++++++-
> >  1 file changed, 38 insertions(+), 1 deletion(-)
> >
> > diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
> > index 84795223f15f..e0236750cefb 100644
> > --- a/samples/livepatch/livepatch-sample.c
> > +++ b/samples/livepatch/livepatch-sample.c
> > @@ -25,26 +25,57 @@
> >
> >  /*
> >   * This (dumb) live patch overrides the function that prints the
> > - * kernel boot cmdline when /proc/cmdline is read.
> > + * kernel boot cmdline when /proc/cmdline is read.  It also
> > + * demonstrates a contrived shadow-variable usage.
> >   *
> >   * Example:
> >   *
> >   * $ cat /proc/cmdline
> >   *
> > + * current= count=
> >   *
> >   * $ insmod livepatch-sample.ko
> >   * $ cat /proc/cmdline
> >   * this has been live patched
> > + * current=ffff8800331c9540 count=1
> > + * $ cat /proc/cmdline
> > + * this has been live patched
> > + * current=ffff8800331c9540 count=2
> >   *
> >   * $ echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled
> >   * $ cat /proc/cmdline
> >   *
> >   */
> >
> > +static LIST_HEAD(shadow_list);
> > +
> > +struct task_ctr {
> > +	struct list_head list;
> > +	int count;
> > +};
> > +
> >  #include
> > +#include
> >  static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
> >  {
> > +	struct task_ctr *nd;
> > +
> > +	nd = klp_shadow_get(current, "task_ctr");
> > +	if (!nd) {
> > +		nd = kzalloc(sizeof(*nd), GFP_KERNEL);
> > +		if (nd) {
> > +			list_add(&nd->list, &shadow_list);
> > +			klp_shadow_attach(current, "task_ctr", GFP_KERNEL, nd);
> > +		}
> > +	}
>
> Hmm, this looks racy:
>
>	CPU0				CPU1
>
>	klp_shadow_get()		klp_shadow_get()
>	# returns NULL			# returns NULL
>	if (!nd) {			if (!nd) {
>	  nd = kzalloc();		  nd = kzalloc();
>	  list_add(&nd->list,		  list_add(&nd->list,
>		   &shadow_list);		   &shadow_list);
>
>	  BANG: This is one race. We add items into the shared shadow_list
>	  in parallel. The question is if we need it.
>	  IMHO, the API could help here; see the comments for
>	  livepatch_exit() below.
>
>	  klp_shadow_attach();		  klp_shadow_attach();
>
>	  WARNING: This is fine because the shadow objects are for
>	  different objects. I mean that "current" must point
>	  to different processes on the two CPUs.
>
>	  But it is racy in general. The question is if the API
>	  could help here. A possibility might be to allow to
>	  define a callback function that would create the shadow
>	  structure when it does not exist. I mean something like
>
> typedef void (*klp_shadow_create_obj_func_t)(void *obj);
>
> void *klp_shadow_get_or_create(void *obj, int key, gfp_t gfp,
>			       klp_shadow_create_obj_func_t *create)
> {
>	struct klp_shadow *shadow;
>
>	shadow = klp_shadow_get(obj, key);
>
>	if (!shadow && create) {
>		void *shadow_obj;
>
>		spin_lock_irqsave(&klp_shadow_lock, flags);
>		shadow = klp_shadow_get(obj, key);
>		if (shadow)
>			goto out;
>
>		shadow_obj = create(obj);
>		shadow = __klp_shadow_attach(obj, key, gfp, shadow_obj);
> out:
>		spin_unlock_irqrestore(&klp_shadow_lock, flags);
>	}
>
>	return shadow;
> }
>
> I do not know. Maybe it is too ugly. Or will it save duplicated code
> in many cases?

I think this sample module is confusing because it uses the API in a
contrived way.  In reality, we use it more like the API documentation
describes: klp_shadow_attach() is called right after the parent struct
is allocated, and klp_shadow_detach() is called right before the parent
struct is freed.  So the above race wouldn't normally exist.

I think Joe implemented it this way in order to keep it simple, so it
wouldn't have to use kallsyms to do manual relocations, etc.  But maybe
a more realistic example would be better, since it would represent how
things should really be done in the absence of out-of-tree tooling like
kpatch-build or klp-convert.
> >  	seq_printf(m, "%s\n", "this has been live patched");
> > +
> > +	if (nd) {
> > +		nd->count++;
> > +		seq_printf(m, "current=%p count=%d\n", current, nd->count);
> > +	}
> > +
> >  	return 0;
> >  }
> >
> > @@ -99,6 +130,12 @@ static int livepatch_init(void)
> >
> >  static void livepatch_exit(void)
> >  {
> > +	struct task_ctr *nd, *tmp;
> > +
> > +	list_for_each_entry_safe(nd, tmp, &shadow_list, list) {
> > +		list_del(&nd->list);
> > +		kfree(nd);
>
> I think that this will be a rather common operation. Do we need
> the extra list? It might be enough to delete all entries
> in the hash table with a given key. Well, we might need
> a callback again:
>
> typedef void (*klp_shadow_free_obj_func_t)(void *shadow_obj);
>
> void klp_shadow_free_all(int key, klp_shadow_free_obj_func_t *shadow_free)
> {
>	int i;
>	struct klp_shadow *shadow;
>
>	spin_lock_irqsave(&klp_shadow_lock, flags);
>
>	hash_for_each(klp_shadow_hash, i, shadow, node) {
>		hash_del_rcu(&shadow->node);
>		/* patch and shadow data are no longer used. */
>		shadow_free(shadow->shadow_obj);
>		/*
>		 * shadow structure might still be traversed
>		 * by someone
>		 */
>		kfree_rcu(shadow, rcu_head);
>	}
>
>	spin_unlock_irqrestore(&klp_shadow_lock, flags);
> }

I often wonder whether it's really a good idea to even allow the
unloading of patch modules at all.  It adds complexity to the livepatch
code.  Is it worth it?  I don't have an answer, but I'd be interested in
other people's opinions.

-- 
Josh