From: Petr Mladek
To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
Cc: Jason Baron, Joe Lawrence, Evgenii Shatokhin, live-patching@vger.kernel.org, linux-kernel@vger.kernel.org, Petr Mladek
Subject: [PATCH v14 09/11] livepatch: Atomic replace and cumulative patches documentation
Date: Thu, 29 Nov 2018 10:44:29 +0100
Message-Id: <20181129094431.7801-10-pmladek@suse.com>
In-Reply-To: <20181129094431.7801-1-pmladek@suse.com>
References: <20181129094431.7801-1-pmladek@suse.com>

User documentation for the atomic replace feature. It makes it easier
to maintain livepatches using so-called cumulative patches.
Signed-off-by: Petr Mladek
---
 Documentation/livepatch/cumulative-patches.txt | 105 +++++++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 Documentation/livepatch/cumulative-patches.txt

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
new file mode 100644
index 000000000000..a8089f7fe306
--- /dev/null
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -0,0 +1,105 @@
+===================================
+Atomic Replace & Cumulative Patches
+===================================
+
+There might be dependencies between livepatches. If multiple patches need
+to make different changes to the same function(s), then we need to define
+the order in which the patches will be installed. The function
+implementations from any newer livepatch must be built on top of the
+older ones.
+
+This might become a maintenance nightmare, especially when someone wants
+to remove a patch that is in the middle of the stack.
+
+An elegant solution comes with the feature called "Atomic Replace". It
+allows the creation of so-called "Cumulative Patches". They include all
+wanted changes from all older livepatches and completely replace them
+in one transition.
+
+Usage
+-----
+
+Atomic replace can be enabled by setting the "replace" flag in
+struct klp_patch, for example:
+
+	static struct klp_patch patch = {
+		.mod = THIS_MODULE,
+		.objs = objs,
+		.replace = true,
+	};
+
+Such a patch is added on top of the livepatch stack when enabled.
+
+All processes are then migrated to use the code only from the new patch.
+Once the transition is finished, all older patches are automatically
+disabled and removed from the stack of patches.
+
+Ftrace handlers are transparently removed from functions that are no
+longer modified by the new cumulative patch.
+
+As a result, livepatch authors might need to maintain sources only for
+one cumulative patch. This helps to keep the patch consistent while
+adding or removing various fixes or features.
+
+Users could keep only the last patch installed on the system after
+the transition has finished. This helps to clearly see what code is
+actually in use. Also, the livepatch might then be seen as a "normal"
+module that modifies the kernel behavior. The only difference is that
+it can be updated at runtime without breaking its functionality.
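+
+For illustration only, a complete cumulative patch module might look
+roughly like the sketch below. It follows the structure of
+samples/livepatch/livepatch-sample.c; the patched function
+(cmdline_proc_show) is just a placeholder, and a real cumulative patch
+would carry the union of all functions changed by the patches that it
+replaces:
+
+	#include <linux/module.h>
+	#include <linux/kernel.h>
+	#include <linux/seq_file.h>
+	#include <linux/livepatch.h>
+
+	/* New implementation of the patched function. */
+	static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
+	{
+		seq_printf(m, "%s\n", "this has been live patched");
+		return 0;
+	}
+
+	static struct klp_func funcs[] = {
+		{
+			.old_name = "cmdline_proc_show",
+			.new_func = livepatch_cmdline_proc_show,
+		}, { }
+	};
+
+	static struct klp_object objs[] = {
+		{
+			/* name being NULL means vmlinux */
+			.funcs = funcs,
+		}, { }
+	};
+
+	static struct klp_patch patch = {
+		.mod = THIS_MODULE,
+		.objs = objs,
+		/* The replace flag makes this a cumulative patch. */
+		.replace = true,
+	};
+
+	static int livepatch_init(void)
+	{
+		return klp_enable_patch(&patch);
+	}
+
+	static void livepatch_exit(void)
+	{
+	}
+
+	module_init(livepatch_init);
+	module_exit(livepatch_exit);
+	MODULE_LICENSE("GPL");
+	MODULE_INFO(livepatch, "Y");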
+
+
+Features
+--------
+
+Atomic replace makes it possible to:
+
+  + Atomically revert some functions in a previous patch while
+    upgrading other functions.
+
+  + Remove any potential performance impact caused by the redirection
+    of calls to functions that are no longer patched.
+
+  + Decrease user confusion about the stacking order and about which
+    code is actually in use.
+
+
+Limitations:
+------------
+
+  + Once the operation finishes, there is no straightforward way
+    to reverse it and restore the replaced patches atomically.
+
+    A good practice is to set the .replace flag in any released livepatch.
+    Then re-adding an older livepatch is equivalent to downgrading
+    to that patch. This is safe as long as the livepatches do _not_ make
+    extra modifications in the (un)patching callbacks or in the
+    module_init() or module_exit() functions, see below.
+
+    Also note that the replaced patch can be removed and loaded again
+    only when the transition was not forced.
+
+  + Only the (un)patching callbacks from the _new_ cumulative livepatch
+    are executed. Any callbacks from the replaced patches are ignored.
+
+    In other words, the cumulative patch is responsible for doing any
+    actions that are necessary to properly replace any older patch.
+
+    As a result, it might be dangerous to replace newer cumulative
+    patches with older ones. The old livepatches might not provide the
+    necessary callbacks.
+
+    This might be seen as a limitation in some scenarios. But it makes
+    life easier in many others. Only the new cumulative livepatch knows
+    what fixes/features are added/removed and what special actions are
+    necessary for a smooth transition.
+
+    In any case, it would be a nightmare to think about the order of
+    the various callbacks and their interactions if the callbacks from
+    all enabled patches were called.
+
+  + There is no special handling of shadow variables. Livepatch authors
+    must define their own rules for how to pass them from one cumulative
+    patch to the next. In particular, they should not blindly remove
+    them in module_exit() functions.
+
+    A good practice might be to remove shadow variables in the
+    post-unpatch callback, which is called only when the livepatch is
+    properly disabled (see the sketch after this list).
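+
+For illustration only, a post-unpatch callback that cleans up shadow
+variables might be sketched as follows. The id (SV_EXAMPLE_ID), the
+callback name, and the assumption that some earlier cumulative patch
+registered a shadow variable under that id are all hypothetical; the
+snippet also shows how callbacks are attached to a klp_object of the
+new cumulative patch:
+
+	#include <linux/livepatch.h>
+
+	#define SV_EXAMPLE_ID	1	/* hypothetical shadow variable id */
+
+	/* Called only after the livepatch has been properly disabled. */
+	static void example_post_unpatch(struct klp_object *obj)
+	{
+		/*
+		 * Free all instances of the shadow variable. A destructor
+		 * callback could be passed instead of NULL if the attached
+		 * data needed extra cleanup.
+		 */
+		klp_shadow_free_all(SV_EXAMPLE_ID, NULL);
+	}
+
+	static struct klp_object objs[] = {
+		{
+			/* name being NULL means vmlinux */
+			.funcs = funcs,	/* as in the example above */
+			.callbacks = {
+				.post_unpatch = example_post_unpatch,
+			},
+		}, { }
+	};
-- 
2.13.7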