Date: Tue, 18 Jul 2017 16:47:45 +0200
From: Petr Mladek
To: Joe Lawrence
Cc: live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
    Josh Poimboeuf, Jessica Yu, Jiri Kosina, Miroslav Benes
Subject: Re: [PATCH v2 2/2] livepatch: add shadow variable sample programs
Message-ID: <20170718144745.GG32632@pathway.suse.cz>
References: <1498664247-12296-1-git-send-email-joe.lawrence@redhat.com>
 <1498664247-12296-3-git-send-email-joe.lawrence@redhat.com>
In-Reply-To: <1498664247-12296-3-git-send-email-joe.lawrence@redhat.com>

On Wed 2017-06-28 11:37:27, Joe Lawrence wrote:
> diff --git a/samples/livepatch/livepatch-shadow-mod.c b/samples/livepatch/livepatch-shadow-mod.c
> new file mode 100644
> index 000000000000..423f4b7b0adb
> --- /dev/null
> +++ b/samples/livepatch/livepatch-shadow-mod.c
> + * Usage
> + * -----
> + *
> + * Fix the memory leak
> + * -------------------
> + *
> + * Extend functionality
> + * --------------------

On my first quick look, I had trouble understanding that these two
sections were still part of the "Usage" section. Also, the commands for
enabling the modules were hidden deep in the analysis of the expected
output.

I wonder if it would make sense to move most of this information into
the respective module sources. I actually missed some more information
in those modules; the few lines there were hard to spot after the long
copyright/license blob.
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Joe Lawrence ");
> +MODULE_DESCRIPTION("Buggy module for shadow variable demo");
> +
> +#define T1_PERIOD 1			/* allocator thread */
> +#define T2_PERIOD (3 * T1_PERIOD)	/* cleanup thread */
> +
> +LIST_HEAD(dummy_list);
> +DEFINE_MUTEX(dummy_list_mutex);
> +
> +struct dummy {
> +	struct list_head list;
> +	unsigned long jiffies_expire;
> +};
> +
> +noinline struct dummy *dummy_alloc(void)
> +{
> +	struct dummy *d;
> +	void *leak;
> +
> +	d = kzalloc(sizeof(*d), GFP_KERNEL);
> +	if (!d)
> +		return NULL;
> +
> +	/* Dummies live long enough to see a few t2 instances */
> +	d->jiffies_expire = jiffies + 1000 * 4 * T2_PERIOD;
> +
> +	/* Oops, forgot to save leak! */
> +	leak = kzalloc(sizeof(int), GFP_KERNEL);
> +
> +	pr_info("%s: dummy @ %p, expires @ %lx\n",
> +		__func__, d, d->jiffies_expire);
> +
> +	return d;
> +}
> +
> +noinline void dummy_free(struct dummy *d)
> +{
> +	pr_info("%s: dummy @ %p, expired = %lx\n",
> +		__func__, d, d->jiffies_expire);
> +
> +	kfree(d);
> +}
> +
> +noinline bool dummy_check(struct dummy *d, unsigned long jiffies)
> +{
> +	return time_after(jiffies, d->jiffies_expire);
> +}
> +
> +/*
> + * T1: alloc_thread allocates new dummy structures, allocates additional
> + * memory, aptly named "leak", but doesn't keep permanent record of it.
> + */
> +struct workqueue_struct *alloc_wq;

A custom workqueue is needed only for special purposes. IMHO, the
standard system workqueue is perfectly fine in our case. AFAIK, it is
able to spawn/run about 256 worker threads and process that many
work items in parallel.

> +struct delayed_work alloc_dwork;

> +static void alloc_thread(struct work_struct *work)

The suffix "_thread" is misleading. I think that alloc_work_func() is
the most descriptive name.
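FWIW, combining the two suggestions above, the declaration might look
something like this (untested sketch; alloc_work_func() is the proposed
rename, everything else matches the sample's identifiers):

```c
/* Untested sketch: statically declared delayed work on the system
 * workqueue, with alloc_thread() renamed to alloc_work_func().
 * No workqueue_struct and no INIT_DELAYED_WORK() are needed then.
 */
static void alloc_work_func(struct work_struct *work);
static DECLARE_DELAYED_WORK(alloc_dwork, alloc_work_func);

static void alloc_work_func(struct work_struct *work)
{
	/* ... allocate the dummy and add it to dummy_list ... */

	/* Re-arm on the shared system workqueue */
	schedule_delayed_work(&alloc_dwork,
			      msecs_to_jiffies(1000 * T1_PERIOD));
}
```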
> +{
> +	struct dummy *d;
> +
> +	d = dummy_alloc();
> +	if (!d)
> +		return;
> +
> +	mutex_lock(&dummy_list_mutex);
> +	list_add(&d->list, &dummy_list);
> +	mutex_unlock(&dummy_list_mutex);
> +
> +	queue_delayed_work(alloc_wq, &alloc_dwork,
> +			   msecs_to_jiffies(1000 * T1_PERIOD));
> +}
> +
> +static int livepatch_shadow_mod_init(void)
> +{
> +	alloc_wq = create_singlethread_workqueue("klp_demo_alloc_wq");
> +	if (!alloc_wq)
> +		return -1;
> +
> +	cleanup_wq = create_singlethread_workqueue("klp_demo_cleanup_wq");
> +	if (!cleanup_wq)
> +		goto exit_free_alloc;
> +
> +	INIT_DELAYED_WORK(&alloc_dwork, alloc_thread);
> +	queue_delayed_work(alloc_wq, &alloc_dwork, 1000 * T1_PERIOD);
> +
> +	INIT_DELAYED_WORK(&cleanup_dwork, cleanup_thread);
> +	queue_delayed_work(cleanup_wq, &cleanup_dwork,
> +			   msecs_to_jiffies(1000 * T2_PERIOD));

If you use the system workqueue and DECLARE_DELAYED_WORK, you might
reduce this to:

	schedule_delayed_work(&alloc_dwork, 1000 * T1_PERIOD);
	schedule_delayed_work(&cleanup_dwork,
			      msecs_to_jiffies(1000 * T2_PERIOD));

> +	return 0;
> +
> +exit_free_alloc:
> +	destroy_workqueue(alloc_wq);
> +
> +	return -1;
> +}
> +
> +static void livepatch_shadow_mod_exit(void)
> +{
> +	struct dummy *d, *tmp;
> +
> +	/* Cleanup T1 */
> +	if (!cancel_delayed_work(&alloc_dwork))
> +		flush_workqueue(alloc_wq);

You will get the same with

	cancel_delayed_work_sync(&alloc_dwork);

I am sorry, I spent too much time working on the kthread worker API
and was not able to help myself. Also, Tejun would put me to shame
if I did not suggest this ;-)

Otherwise, I like the examples.

Best Regards,
Petr