From: Miroslav Benes <mbenes@suse.cz>
To: jpoimboe@redhat.com, jeyu@kernel.org, jikos@kernel.org
Cc: pmladek@suse.com, jbaron@akamai.com, live-patching@vger.kernel.org,
	linux-kernel@vger.kernel.org, Miroslav Benes <mbenes@suse.cz>
Subject: [PATCH v2] livepatch: add locking to force and signal functions
Date: Thu, 21 Dec 2017 14:40:43 +0100
Message-Id: <20171221134043.32543-1-mbenes@suse.cz>
X-Mailer: git-send-email 2.15.1

klp_send_signals() and klp_force_transition() do not acquire klp_mutex,
because it seemed superfluous. A potential race in klp_send_signals()
was harmless and there was nothing in klp_force_transition() which
needed to be synchronized. That changed with the addition of the
klp_forced variable during the review process.

There is now a small window in which klp_complete_transition() does not
see klp_forced set to true even though all tasks have already been
transitioned to the target state. module_put() is then called and the
module can be removed.

Acquire klp_mutex in the sysfs callbacks to prevent this. Do the same
for the signal sending just to be sure. There is no real downside to
that.

Reported-by: Jason Baron <jbaron@akamai.com>
Signed-off-by: Miroslav Benes <mbenes@suse.cz>
---
Changes v1->v2:
- Add (patch != klp_transition_patch) check to the critical sections
  and move the locking to the sysfs callbacks (Petr)
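To illustrate the window for reviewers, below is a minimal userspace
analogue of the race being closed, using POSIX threads in place of
kernel tasks. Every identifier in it (demo_mutex, forcer, completer,
transition_done, module_refcount) is invented for the sketch; only the
locking shape mirrors the patch, nothing here is kernel code.

/*
 * "forcer" stands in for the force_store() path (set klp_forced and
 * finish the transition); "completer" stands in for
 * klp_complete_transition() (drop the module reference unless the
 * transition was forced).
 *
 * If forcer ran without demo_mutex, completer could observe
 * transition_done == true while forced was still false, and would
 * drop the reference even though a forced transition was in flight.
 * Holding one mutex across both critical sections removes that window.
 *
 * Build with: gcc -pthread demo.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t demo_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool forced;		/* models klp_forced */
static bool transition_done;	/* models "all tasks transitioned" */
static int module_refcount = 1;	/* models the patch module's refcount */

static void *forcer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&demo_mutex);
	forced = true;		/* visible before the transition completes */
	transition_done = true;
	pthread_mutex_unlock(&demo_mutex);
	return NULL;
}

static void *completer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&demo_mutex);
	/* Under the lock, forced and transition_done are coherent. */
	if (transition_done && !forced)
		module_refcount--;	/* module_put(): module may go away */
	pthread_mutex_unlock(&demo_mutex);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, forcer, NULL);
	pthread_create(&t2, NULL, completer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* A forced transition keeps the reference pinned: refcount == 1. */
	printf("forced=%d refcount=%d\n", forced, module_refcount);
	return 0;
}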
 kernel/livepatch/core.c | 52 ++++++++++++++++++++++++++++------------------------
 1 file changed, 28 insertions(+), 24 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 1c3c9b27c916..8fd8e8f126da 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -537,22 +537,24 @@ static ssize_t signal_store(struct kobject *kobj, struct kobj_attribute *attr,
 	int ret;
 	bool val;
 
-	patch = container_of(kobj, struct klp_patch, kobj);
-
-	/*
-	 * klp_mutex lock is not grabbed here intentionally. It is not really
-	 * needed. The race window is harmless and grabbing the lock would only
-	 * hold the action back.
-	 */
-	if (patch != klp_transition_patch)
-		return -EINVAL;
-
 	ret = kstrtobool(buf, &val);
 	if (ret)
 		return ret;
 
-	if (val)
-		klp_send_signals();
+	if (!val)
+		return count;
+
+	mutex_lock(&klp_mutex);
+
+	patch = container_of(kobj, struct klp_patch, kobj);
+	if (patch != klp_transition_patch) {
+		mutex_unlock(&klp_mutex);
+		return -EINVAL;
+	}
+
+	klp_send_signals();
+
+	mutex_unlock(&klp_mutex);
 
 	return count;
 }
@@ -564,22 +566,24 @@ static ssize_t force_store(struct kobject *kobj, struct kobj_attribute *attr,
 	int ret;
 	bool val;
 
-	patch = container_of(kobj, struct klp_patch, kobj);
-
-	/*
-	 * klp_mutex lock is not grabbed here intentionally. It is not really
-	 * needed. The race window is harmless and grabbing the lock would only
-	 * hold the action back.
-	 */
-	if (patch != klp_transition_patch)
-		return -EINVAL;
-
 	ret = kstrtobool(buf, &val);
 	if (ret)
 		return ret;
 
-	if (val)
-		klp_force_transition();
+	if (!val)
+		return count;
+
+	mutex_lock(&klp_mutex);
+
+	patch = container_of(kobj, struct klp_patch, kobj);
+	if (patch != klp_transition_patch) {
+		mutex_unlock(&klp_mutex);
+		return -EINVAL;
+	}
+
+	klp_force_transition();
+
+	mutex_unlock(&klp_mutex);
 
 	return count;
 }
-- 
2.15.1