Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S935893AbcLOM6B (ORCPT ); Thu, 15 Dec 2016 07:58:01 -0500
Received: from mx2.suse.de ([195.135.220.15]:60281 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752673AbcLOM54 (ORCPT ); Thu, 15 Dec 2016 07:57:56 -0500
Date: Thu, 15 Dec 2016 13:57:48 +0100
From: Petr Mladek
To: "Luis R. Rodriguez"
Cc: shuah@kernel.org, jeyu@redhat.com, rusty@rustcorp.com.au,
	ebiederm@xmission.com, dmitry.torokhov@gmail.com, acme@redhat.com,
	corbet@lwn.net, martin.wilck@suse.com, mmarek@suse.com, hare@suse.com,
	rwright@hpe.com, jeffm@suse.com, DSterba@suse.com, fdmanana@suse.com,
	neilb@suse.com, linux@roeck-us.net, rgoldwyn@suse.com,
	subashab@codeaurora.org, xypron.glpk@gmx.de, keescook@chromium.org,
	atomlin@redhat.com, mbenes@suse.cz, paulmck@linux.vnet.ibm.com,
	dan.j.williams@intel.com, jpoimboe@redhat.com, davem@davemloft.net,
	mingo@redhat.com, akpm@linux-foundation.org,
	torvalds@linux-foundation.org, linux-kselftest@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC 06/10] kmod: provide sanity check on kmod_concurrent access
Message-ID: <20161215125747.GB14324@pathway.suse.cz>
References: <20161208184801.1689-1-mcgrof@kernel.org>
	<20161208194850.2627-1-mcgrof@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20161208194850.2627-1-mcgrof@kernel.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2078
Lines: 67

On Thu 2016-12-08 11:48:50, Luis R. Rodriguez wrote:
> Only decrement *iff* we're possitive.
> Warn if we've hit
> a situation where the counter is already 0 after we're done
> with a modprobe call, this would tell us we have an unaccounted
> counter access -- this in theory should not be possible as
> only one routine controls the counter, however preemption is
> one case that could trigger this situation. Avoid that situation
> by disabling preemptiong while we access the counter.
>
> Signed-off-by: Luis R. Rodriguez
> ---
>  kernel/kmod.c | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/kmod.c b/kernel/kmod.c
> index ab38539f7e91..09cf35a2075a 100644
> --- a/kernel/kmod.c
> +++ b/kernel/kmod.c
> @@ -113,16 +113,28 @@ static int call_modprobe(char *module_name, int wait)
>
>  static int kmod_umh_threads_get(void)
>  {
> +	int ret = 0;
> +
> +	preempt_disable();
>  	atomic_inc(&kmod_concurrent);
>  	if (atomic_read(&kmod_concurrent) < max_modprobes)
> -		return 0;
> -	atomic_dec(&kmod_concurrent);
> -	return -EBUSY;
> +		goto out;

I thought more about it and the disabled preemption might make sense
here. It makes sure that we are not rescheduled here and that
kmod_concurrent is not increased by mistake for too long.

Well, it still would make sense to increment the value only when it
is under the limit and to set the incremented value using cmpxchg to
avoid races.

I mean to use a similar trick to the one used by refcount_inc(), see
https://lkml.kernel.org/r/20161114174446.832175072@infradead.org

> +	atomic_dec_if_positive(&kmod_concurrent);
> +	ret = -EBUSY;
> +out:
> +	preempt_enable();
> +	return 0;
>  }
>
>  static void kmod_umh_threads_put(void)
>  {
> -	atomic_dec(&kmod_concurrent);
> +	int ret;
> +
> +	preempt_disable();
> +	ret = atomic_dec_if_positive(&kmod_concurrent);
> +	WARN_ON(ret < 0);
> +	preempt_enable();

The disabled preemption does not make much sense here. We do not need
to tie the atomic operation and the WARN together so tightly.

Best Regards,
Petr