From: Petr Mladek <pmladek@suse.com>
To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin,
	live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
	Petr Mladek
Subject: [PATCH v13 03/12] livepatch: Shuffle klp_enable_patch()/klp_disable_patch() code
Date: Mon, 15 Oct 2018 14:37:04 +0200
Message-Id: <20181015123713.25868-4-pmladek@suse.com>
X-Mailer: git-send-email 2.13.7
In-Reply-To: <20181015123713.25868-1-pmladek@suse.com>
References: <20181015123713.25868-1-pmladek@suse.com>

We are going to simplify the API and code by removing the registration
step. This will require calling the init/free functions from the
enable/disable ones. This patch just moves the code around to avoid the
need for more forward declarations.
It does not change the code itself, except for adding two forward
declarations.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 kernel/livepatch/core.c | 330 ++++++++++++++++++++++++------------------------
 1 file changed, 166 insertions(+), 164 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 577ebeb43024..b3956cce239e 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -278,170 +278,6 @@ static int klp_write_object_relocations(struct module *pmod,
         return ret;
 }
 
-static int __klp_disable_patch(struct klp_patch *patch)
-{
-        struct klp_object *obj;
-
-        if (WARN_ON(!patch->enabled))
-                return -EINVAL;
-
-        if (klp_transition_patch)
-                return -EBUSY;
-
-        /* enforce stacking: only the last enabled patch can be disabled */
-        if (!list_is_last(&patch->list, &klp_patches) &&
-            list_next_entry(patch, list)->enabled)
-                return -EBUSY;
-
-        klp_init_transition(patch, KLP_UNPATCHED);
-
-        klp_for_each_object(patch, obj)
-                if (obj->patched)
-                        klp_pre_unpatch_callback(obj);
-
-        /*
-         * Enforce the order of the func->transition writes in
-         * klp_init_transition() and the TIF_PATCH_PENDING writes in
-         * klp_start_transition(). In the rare case where klp_ftrace_handler()
-         * is called shortly after klp_update_patch_state() switches the task,
-         * this ensures the handler sees that func->transition is set.
-         */
-        smp_wmb();
-
-        klp_start_transition();
-        klp_try_complete_transition();
-        patch->enabled = false;
-
-        return 0;
-}
-
-/**
- * klp_disable_patch() - disables a registered patch
- * @patch: The registered, enabled patch to be disabled
- *
- * Unregisters the patched functions from ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_disable_patch(struct klp_patch *patch)
-{
-        int ret;
-
-        mutex_lock(&klp_mutex);
-
-        if (!klp_is_patch_registered(patch)) {
-                ret = -EINVAL;
-                goto err;
-        }
-
-        if (!patch->enabled) {
-                ret = -EINVAL;
-                goto err;
-        }
-
-        ret = __klp_disable_patch(patch);
-
-err:
-        mutex_unlock(&klp_mutex);
-        return ret;
-}
-EXPORT_SYMBOL_GPL(klp_disable_patch);
-
-static int __klp_enable_patch(struct klp_patch *patch)
-{
-        struct klp_object *obj;
-        int ret;
-
-        if (klp_transition_patch)
-                return -EBUSY;
-
-        if (WARN_ON(patch->enabled))
-                return -EINVAL;
-
-        /* enforce stacking: only the first disabled patch can be enabled */
-        if (patch->list.prev != &klp_patches &&
-            !list_prev_entry(patch, list)->enabled)
-                return -EBUSY;
-
-        /*
-         * A reference is taken on the patch module to prevent it from being
-         * unloaded.
-         */
-        if (!try_module_get(patch->mod))
-                return -ENODEV;
-
-        pr_notice("enabling patch '%s'\n", patch->mod->name);
-
-        klp_init_transition(patch, KLP_PATCHED);
-
-        /*
-         * Enforce the order of the func->transition writes in
-         * klp_init_transition() and the ops->func_stack writes in
-         * klp_patch_object(), so that klp_ftrace_handler() will see the
-         * func->transition updates before the handler is registered and the
-         * new funcs become visible to the handler.
-         */
-        smp_wmb();
-
-        klp_for_each_object(patch, obj) {
-                if (!klp_is_object_loaded(obj))
-                        continue;
-
-                ret = klp_pre_patch_callback(obj);
-                if (ret) {
-                        pr_warn("pre-patch callback failed for object '%s'\n",
-                                klp_is_module(obj) ? obj->name : "vmlinux");
-                        goto err;
-                }
-
-                ret = klp_patch_object(obj);
-                if (ret) {
-                        pr_warn("failed to patch object '%s'\n",
-                                klp_is_module(obj) ? obj->name : "vmlinux");
-                        goto err;
-                }
-        }
-
-        klp_start_transition();
-        klp_try_complete_transition();
-        patch->enabled = true;
-
-        return 0;
-err:
-        pr_warn("failed to enable patch '%s'\n", patch->mod->name);
-
-        klp_cancel_transition();
-        return ret;
-}
-
-/**
- * klp_enable_patch() - enables a registered patch
- * @patch: The registered, disabled patch to be enabled
- *
- * Performs the needed symbol lookups and code relocations,
- * then registers the patched functions with ftrace.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_enable_patch(struct klp_patch *patch)
-{
-        int ret;
-
-        mutex_lock(&klp_mutex);
-
-        if (!klp_is_patch_registered(patch)) {
-                ret = -EINVAL;
-                goto err;
-        }
-
-        ret = __klp_enable_patch(patch);
-
-err:
-        mutex_unlock(&klp_mutex);
-        return ret;
-}
-EXPORT_SYMBOL_GPL(klp_enable_patch);
-
 /*
  * Sysfs Interface
  *
@@ -454,6 +290,8 @@ EXPORT_SYMBOL_GPL(klp_enable_patch);
  * /sys/kernel/livepatch/<patch>/<object>
  * /sys/kernel/livepatch/<patch>/<object>/<function,sympos>
  */
+static int __klp_disable_patch(struct klp_patch *patch);
+static int __klp_enable_patch(struct klp_patch *patch);
 
 static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
                              const char *buf, size_t count)
@@ -904,6 +742,170 @@ int klp_register_patch(struct klp_patch *patch)
 }
 EXPORT_SYMBOL_GPL(klp_register_patch);
 
+static int __klp_disable_patch(struct klp_patch *patch)
+{
+        struct klp_object *obj;
+
+        if (WARN_ON(!patch->enabled))
+                return -EINVAL;
+
+        if (klp_transition_patch)
+                return -EBUSY;
+
+        /* enforce stacking: only the last enabled patch can be disabled */
+        if (!list_is_last(&patch->list, &klp_patches) &&
+            list_next_entry(patch, list)->enabled)
+                return -EBUSY;
+
+        klp_init_transition(patch, KLP_UNPATCHED);
+
+        klp_for_each_object(patch, obj)
+                if (obj->patched)
+                        klp_pre_unpatch_callback(obj);
+
+        /*
+         * Enforce the order of the func->transition writes in
+         * klp_init_transition() and the TIF_PATCH_PENDING writes in
+         * klp_start_transition(). In the rare case where klp_ftrace_handler()
+         * is called shortly after klp_update_patch_state() switches the task,
+         * this ensures the handler sees that func->transition is set.
+         */
+        smp_wmb();
+
+        klp_start_transition();
+        klp_try_complete_transition();
+        patch->enabled = false;
+
+        return 0;
+}
+
+/**
+ * klp_disable_patch() - disables a registered patch
+ * @patch: The registered, enabled patch to be disabled
+ *
+ * Unregisters the patched functions from ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_disable_patch(struct klp_patch *patch)
+{
+        int ret;
+
+        mutex_lock(&klp_mutex);
+
+        if (!klp_is_patch_registered(patch)) {
+                ret = -EINVAL;
+                goto err;
+        }
+
+        if (!patch->enabled) {
+                ret = -EINVAL;
+                goto err;
+        }
+
+        ret = __klp_disable_patch(patch);
+
+err:
+        mutex_unlock(&klp_mutex);
+        return ret;
+}
+EXPORT_SYMBOL_GPL(klp_disable_patch);
+
+static int __klp_enable_patch(struct klp_patch *patch)
+{
+        struct klp_object *obj;
+        int ret;
+
+        if (klp_transition_patch)
+                return -EBUSY;
+
+        if (WARN_ON(patch->enabled))
+                return -EINVAL;
+
+        /* enforce stacking: only the first disabled patch can be enabled */
+        if (patch->list.prev != &klp_patches &&
+            !list_prev_entry(patch, list)->enabled)
+                return -EBUSY;
+
+        /*
+         * A reference is taken on the patch module to prevent it from being
+         * unloaded.
+         */
+        if (!try_module_get(patch->mod))
+                return -ENODEV;
+
+        pr_notice("enabling patch '%s'\n", patch->mod->name);
+
+        klp_init_transition(patch, KLP_PATCHED);
+
+        /*
+         * Enforce the order of the func->transition writes in
+         * klp_init_transition() and the ops->func_stack writes in
+         * klp_patch_object(), so that klp_ftrace_handler() will see the
+         * func->transition updates before the handler is registered and the
+         * new funcs become visible to the handler.
+         */
+        smp_wmb();
+
+        klp_for_each_object(patch, obj) {
+                if (!klp_is_object_loaded(obj))
+                        continue;
+
+                ret = klp_pre_patch_callback(obj);
+                if (ret) {
+                        pr_warn("pre-patch callback failed for object '%s'\n",
+                                klp_is_module(obj) ? obj->name : "vmlinux");
+                        goto err;
+                }
+
+                ret = klp_patch_object(obj);
+                if (ret) {
+                        pr_warn("failed to patch object '%s'\n",
+                                klp_is_module(obj) ? obj->name : "vmlinux");
+                        goto err;
+                }
+        }
+
+        klp_start_transition();
+        klp_try_complete_transition();
+        patch->enabled = true;
+
+        return 0;
+err:
+        pr_warn("failed to enable patch '%s'\n", patch->mod->name);
+
+        klp_cancel_transition();
+        return ret;
+}
+
+/**
+ * klp_enable_patch() - enables a registered patch
+ * @patch: The registered, disabled patch to be enabled
+ *
+ * Performs the needed symbol lookups and code relocations,
+ * then registers the patched functions with ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_enable_patch(struct klp_patch *patch)
+{
+        int ret;
+
+        mutex_lock(&klp_mutex);
+
+        if (!klp_is_patch_registered(patch)) {
+                ret = -EINVAL;
+                goto err;
+        }
+
+        ret = __klp_enable_patch(patch);
+
+err:
+        mutex_unlock(&klp_mutex);
+        return ret;
+}
+EXPORT_SYMBOL_GPL(klp_enable_patch);
+
 /*
  * Remove parts of patches that touch a given kernel module. The list of
  * patches processed might be limited. When limit is NULL, all patches
-- 
2.13.7
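For readers following the series, here is a minimal sketch of how a livepatch
module drives this API at this point in the series, i.e. before the
registration step is removed: register first, then enable. It is loosely
modeled on samples/livepatch/livepatch-sample.c; the patched symbol
cmdline_proc_show and the replacement function are only illustrative.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* Replacement implementation; the patched symbol below is illustrative. */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
        seq_printf(m, "%s\n", "this has been live patched");
        return 0;
}

static struct klp_func funcs[] = {
        {
                .old_name = "cmdline_proc_show",
                .new_func = livepatch_cmdline_proc_show,
        }, { }
};

static struct klp_object objs[] = {
        {
                /* name being NULL means vmlinux */
                .funcs = funcs,
        }, { }
};

static struct klp_patch patch = {
        .mod = THIS_MODULE,
        .objs = objs,
};

static int livepatch_init(void)
{
        int ret;

        /* Two-step API: register, then enable (this series later merges them). */
        ret = klp_register_patch(&patch);
        if (ret)
                return ret;

        ret = klp_enable_patch(&patch);
        if (ret) {
                WARN_ON(klp_unregister_patch(&patch));
                return ret;
        }

        return 0;
}

static void livepatch_exit(void)
{
        WARN_ON(klp_unregister_patch(&patch));
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");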