From: Petr Mladek <pmladek@suse.com>
To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin,
    live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
    Petr Mladek <pmladek@suse.com>
Subject: [PATCH v15 10/11] livepatch: Remove ordering (stacking) of the livepatches
Date: Wed, 9 Jan 2019 13:43:28 +0100
Message-Id: <20190109124329.21991-11-pmladek@suse.com>
X-Mailer: git-send-email 2.13.7
In-Reply-To: <20190109124329.21991-1-pmladek@suse.com>
References: <20190109124329.21991-1-pmladek@suse.com>

The atomic replace and cumulative patches were introduced as a more
secure way to handle dependent patches. They simplify the logic:

  + Any new cumulative patch is supposed to take over shadow variables
    and changes made by callbacks from previous livepatches.

  + All replaced patches are discarded and the modules can be unloaded.
    As a result, there is only one scenario when a cumulative livepatch
    gets disabled.

The different handling of "normal" and cumulative patches might cause
confusion. It would make sense to keep only one mode. On the other hand,
it would be rude to enforce using the cumulative livepatches even for
trivial and independent (hot) fixes.

However, the stack of patches is not really necessary any longer.
The patch ordering was never clearly visible via the sysfs interface.
Also the "normal" patches need a lot of caution anyway.

Note that the list of enabled patches is still necessary, but the
ordering is no longer enforced.

Otherwise, the code is ready to disable livepatches in a random order.
Namely, klp_check_stack_func() always looks for the function from the
livepatch that is being disabled. klp_func structures are just removed
from the related func_stack. Finally, the ftrace handler is removed
only when the func_stack becomes empty.

Signed-off-by: Petr Mladek <pmladek@suse.com>
---
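Note for readers, not part of the commit: a cumulative patch is an
ordinary livepatch module that sets .replace in its struct klp_patch.
Below is a minimal sketch, modeled on samples/livepatch/livepatch-sample.c
and assuming the simplified API from this series (klp_enable_patch()
called from module init); the patched function is the one from that
sample and is shown here only as an illustration.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
#include <linux/seq_file.h>

/* New implementation that takes over the original cmdline_proc_show(). */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* NULL name means the patched function lives in vmlinux. */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
	/* Atomically replace all previously enabled livepatches. */
	.replace = true,
};

static int livepatch_init(void)
{
	return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");

Once the transition to such a patch finishes, every previously enabled
livepatch is disabled and its module can be removed. Without .replace,
the module would simply be added to the set of enabled patches.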
 Documentation/livepatch/cumulative-patches.txt | 11 ++++-------
 Documentation/livepatch/livepatch.txt          | 13 +++++++------
 kernel/livepatch/core.c                        |  4 ----
 3 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
index e7cf5be69f23..0012808e8d44 100644
--- a/Documentation/livepatch/cumulative-patches.txt
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -7,8 +7,8 @@ to do different changes to the same function(s) then we need to define
 an order in which the patches will be installed. And function implementations
 from any newer livepatch must be done on top of the older ones.
 
-This might become a maintenance nightmare. Especially if anyone would want
-to remove a patch that is in the middle of the stack.
+This might become a maintenance nightmare. Especially when more patches
+modify the same function in different ways.
 
 An elegant solution comes with the feature called "Atomic Replace". It allows
 creation of so called "Cumulative Patches". They include all wanted changes
@@ -26,11 +26,9 @@ for example:
 	.replace = true,
 };
 
-Such a patch is added on top of the livepatch stack when enabled.
-
 All processes are then migrated to use the code only from the new patch.
 Once the transition is finished, all older patches are automatically
-disabled and removed from the stack of patches.
+disabled.
 
 Ftrace handlers are transparently removed from functions that are no
 longer modified by the new cumulative patch.
@@ -57,8 +55,7 @@ The atomic replace allows:
   + Remove eventual performance impact caused by core redirection
     for functions that are no longer patched.
 
-  + Decrease user confusion about stacking order and what code
-    is actually in use.
+  + Decrease user confusion about dependencies between livepatches.
 
 Limitations:
 
diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
index 6f32d6ea2fcb..71d7f286ec4d 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -143,9 +143,9 @@ without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
 the kernel livepatching.
 
 The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
-is in transition. Only a single patch (the topmost patch on the stack)
-can be in transition at a given time. A patch can remain in transition
-indefinitely, if any of the tasks are stuck in the initial patch state.
+is in transition. Only a single patch can be in transition at a given
+time. A patch can remain in transition indefinitely, if any of the tasks
+are stuck in the initial patch state.
 
 A transition can be reversed and effectively canceled by writing the
 opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
@@ -351,6 +351,10 @@ to '0'.
 The right implementation is selected by the ftrace handler, see
 the "Consistency model" section.
 
+That said, it is highly recommended to use cumulative livepatches
+because they help keep the consistency of all changes. In this case,
+functions might be patched twice, and only during the transition period.
+
 5.3. Replacing
 --------------
 
@@ -389,9 +393,6 @@ becomes empty.
 
 Third, the sysfs interface is destroyed.
 
-Note that patches must be disabled in exactly the reverse order in which
-they were enabled. It makes the problem and the implementation much easier.
-
 5.5. Removing
 -------------
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 113645ee86b6..adca5cf07f7e 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -925,10 +925,6 @@ static int __klp_disable_patch(struct klp_patch *patch)
 	if (klp_transition_patch)
 		return -EBUSY;
 
-	/* enforce stacking: only the last enabled patch can be disabled */
-	if (!list_is_last(&patch->list, &klp_patches))
-		return -EBUSY;
-
 	klp_init_transition(patch, KLP_UNPATCHED);
 
 	klp_for_each_object(patch, obj)
-- 
2.13.7