From: Petr Mladek <pmladek@suse.com>
To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin,
    live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
    Petr Mladek
Subject: [PATCH v14 10/11] livepatch: Remove ordering and refuse loading conflicting patches
Date: Thu, 29 Nov 2018 10:44:30 +0100
Message-Id: <20181129094431.7801-11-pmladek@suse.com>
In-Reply-To: <20181129094431.7801-1-pmladek@suse.com>
References: <20181129094431.7801-1-pmladek@suse.com>

The atomic replace and cumulative patches were introduced as a more
secure way to handle dependent patches. They simplify the logic:

  + Any new cumulative patch is supposed to take over shadow variables
    and changes made by callbacks from previous livepatches.
  + All replaced patches are discarded and the modules can be unloaded.
    As a result, there is only one scenario when a cumulative livepatch
    gets disabled.

The different handling of "normal" and cumulative patches might cause
confusion. It would make sense to keep only one mode. On the other hand,
it would be rude to enforce using cumulative livepatches even for
trivial and independent (hot) fixes.

This patch removes the stack of patches. The list of enabled patches
is still needed, but the ordering is no longer enforced.

Note that it is not possible to catch all possible dependencies. It is
the responsibility of the livepatch authors to decide. Nevertheless,
this patch prevents having two patches for the same function enabled
at the same time after the transition finishes. It might help to catch
obvious mistakes. But more importantly, we do not need to handle the
situation when a patch in the middle of the function stack
(ops->func_stack) is being removed.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 Documentation/livepatch/cumulative-patches.txt | 11 ++----
 Documentation/livepatch/livepatch.txt          | 30 ++++++++-------
 kernel/livepatch/core.c                        | 51 ++++++++++++++++++++++++--
 3 files changed, 68 insertions(+), 24 deletions(-)

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
index a8089f7fe306..ca1fbb4351c8 100644
--- a/Documentation/livepatch/cumulative-patches.txt
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -7,8 +7,8 @@ to do different changes to the same function(s) then we need to define
 an order in which the patches will be installed. And function implementations
 from any newer livepatch must be done on top of the older ones.
 
-This might become a maintenance nightmare. Especially if anyone would want
-to remove a patch that is in the middle of the stack.
+This might become a maintenance nightmare. Especially when more patches
+modify the same function in different ways.
 
 An elegant solution comes with the feature called "Atomic Replace". It
 allows to create so called "Cumulative Patches". They include all wanted
 changes
@@ -26,11 +26,9 @@ for example:
 	.replace = true,
 };
 
-Such a patch is added on top of the livepatch stack when enabled.
-
 All processes are then migrated to use the code only from the new patch.
 Once the transition is finished, all older patches are automatically
-disabled and removed from the stack of patches.
+disabled.
 
 Ftrace handlers are transparently removed from functions that are no
 longer modified by the new cumulative patch.
@@ -57,8 +55,7 @@ The atomic replace allows:
   + Remove eventual performance impact caused by core redirection
     for functions that are no longer patched.
 
-  + Decrease user confusion about stacking order and what code
-    is actually in use.
+  + Decrease user confusion about dependencies between livepatches.
 
 Limitations:
 
diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index ba6e83a08209..3c150ab19b99 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -143,9 +143,9 @@ without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
 the kernel livepatching.
 
 The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
-is in transition. Only a single patch (the topmost patch on the stack)
-can be in transition at a given time. A patch can remain in transition
-indefinitely, if any of the tasks are stuck in the initial patch state.
+is in transition. Only a single patch can be in transition at a given
+time. A patch can remain in transition indefinitely, if any of the tasks
+are stuck in the initial patch state.
 
 A transition can be reversed and effectively canceled by writing the
 opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
@@ -329,9 +329,10 @@ The livepatch gets enabled by calling klp_enable_patch() typically from
 module_init() callback.
 
 The system will start using the new implementation of the patched
 functions at this stage.
 
-First, the addresses of the patched functions are found according to their
-names. The special relocations, mentioned in the section "New functions",
-are applied. The relevant entries are created under
+First, possible conflicts are checked for non-cumulative patches with
+the replace flag disabled. The addresses of the patched functions are
+found according to their names. The special relocations, mentioned in
+the section "New functions", are applied. The relevant entries are
+created under
 /sys/kernel/livepatch/<patch>. The patch is rejected when any above
 operation fails.
 
@@ -345,11 +346,11 @@ this process, see the "Consistency model" section.
 Finally, once all tasks have been patched, the 'transition' value changes
 to '0'.
 
-[*] Note that functions might be patched multiple times. The ftrace handler
-    is registered only once for a given function. Further patches just add
-    an entry to the list (see field `func_stack`) of the struct klp_ops.
-    The right implementation is selected by the ftrace handler, see
-    the "Consistency model" section.
+[*] Note that two patches might modify the same function during the transition
+    to a new cumulative patch. The ftrace handler is registered only once
+    for a given function. The new patch just adds an entry to the list
+    (see field `func_stack`) of the struct klp_ops. The right implementation
+    is selected by the ftrace handler, see the "Consistency model" section.
 
 
 5.3. Replacing
 
@@ -389,8 +390,11 @@ becomes empty.
 
 Third, the sysfs interface is destroyed.
 
-Note that patches must be disabled in exactly the reverse order in which
-they were enabled. It makes the problem and the implementation much easier.
+Note that any patch dependencies have to be handled by the atomic replace
+and cumulative patches, see Documentation/livepatch/cumulative-patches.txt.
+Therefore there is usually only one patch enabled on the system. There is
+still the possibility of having more trivial and independent livepatches
+enabled at the same time. These can be disabled in any order.
 
 
 5.5. Removing
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 0ce752e9e8bb..d8f6f49ac6b3 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -122,6 +122,47 @@ static struct klp_object *klp_find_object(struct klp_patch *patch,
 	return NULL;
 }
 
+static int klp_check_obj_conflict(struct klp_patch *patch,
+				  struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	struct klp_func *func, *old_func;
+
+	obj = klp_find_object(patch, old_obj);
+	if (!obj)
+		return 0;
+
+	klp_for_each_func(old_obj, old_func) {
+		func = klp_find_func(obj, old_func);
+		if (!func)
+			continue;
+
+		pr_err("Function '%s,%lu' in object '%s' has already been livepatched.\n",
+		       func->old_name, func->old_sympos ? func->old_sympos : 1,
+		       obj->name ? obj->name : "vmlinux");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int klp_check_patch_conflict(struct klp_patch *patch)
+{
+	struct klp_patch *old_patch;
+	struct klp_object *old_obj;
+	int ret;
+
+	list_for_each_entry(old_patch, &klp_patches, list) {
+		klp_for_each_object(old_patch, old_obj) {
+			ret = klp_check_obj_conflict(patch, old_obj);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
 struct klp_find_arg {
 	const char *objname;
 	const char *name;
@@ -893,6 +934,12 @@ static int klp_init_patch_before_free(struct klp_patch *patch)
 		}
 	}
 
+	/* The dynamic lists must be already in place. */
+	if (!patch->replace && klp_check_patch_conflict(patch)) {
+		pr_err("Use cumulative livepatches for dependent changes.\n");
+		return -EINVAL;
+	}
+
 	if (!try_module_get(patch->mod))
 		return -ENODEV;
 
@@ -937,10 +984,6 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	if (klp_transition_patch)
 		return -EBUSY;
 
-	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches))
-		return -EBUSY;
-
 	klp_init_transition(patch, KLP_UNPATCHED);
 
 	klp_for_each_object(patch, obj)
-- 
2.13.7
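[Editor's note: the conflict-detection policy in this patch (walk every enabled patch, refuse a new non-cumulative patch that touches an already patched function, always accept a cumulative patch) can be sketched as a standalone userspace program. The structs below are hypothetical simplified stand-ins, not the kernel's klp_* API, and the function names in the sample data are purely illustrative.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct func { const char *old_name; };

struct object {
	const char *name;          /* NULL stands for vmlinux */
	const struct func *funcs;  /* array terminated by old_name == NULL */
};

struct patch {
	const struct object *objs; /* array terminated by funcs == NULL */
	int replace;               /* set for cumulative (atomic replace) patches */
};

/* Return nonzero when obj and old_obj patch the same function of the
 * same object (mirrors klp_check_obj_conflict()). */
static int obj_conflict(const struct object *obj, const struct object *old_obj)
{
	if (!!obj->name != !!old_obj->name)
		return 0;
	if (obj->name && strcmp(obj->name, old_obj->name) != 0)
		return 0;

	for (const struct func *of = old_obj->funcs; of->old_name; of++)
		for (const struct func *f = obj->funcs; f->old_name; f++)
			if (strcmp(f->old_name, of->old_name) == 0) {
				fprintf(stderr,
					"Function '%s' in object '%s' has already been livepatched.\n",
					f->old_name,
					obj->name ? obj->name : "vmlinux");
				return 1;
			}
	return 0;
}

/* Mirrors klp_check_patch_conflict(): a cumulative patch always passes;
 * a normal patch is refused when any enabled patch already covers one of
 * its functions. */
static int check_patch_conflict(const struct patch *new_patch,
				const struct patch **enabled, int nr_enabled)
{
	if (new_patch->replace)
		return 0;

	for (int i = 0; i < nr_enabled; i++)
		for (const struct object *oo = enabled[i]->objs; oo->funcs; oo++)
			for (const struct object *o = new_patch->objs; o->funcs; o++)
				if (obj_conflict(o, oo))
					return -1; /* stands in for -EINVAL */
	return 0;
}
```

Note how the `replace` flag short-circuits the whole walk, which is exactly why dependent changes are expected to ship as cumulative patches: they never conflict with what is already enabled, they take it over.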