From: Petr Mladek
To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin, live-patching@vger.kernel.org, linux-kernel@vger.kernel.org, Petr Mladek
Subject: [PATCH v7 6/7] livepatch: Add atomic replace
Date: Tue, 6 Feb 2018 11:34:23 +0100
Message-Id: <20180206103424.10829-7-pmladek@suse.com>
In-Reply-To: <20180206103424.10829-1-pmladek@suse.com>
References: <20180206103424.10829-1-pmladek@suse.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jason Baron

Sometimes we would like to revert a particular fix. Currently, this is not easy because we want to keep all the other fixes active, and we can only revert the most recently applied patch.

One solution would be to apply a new patch that implements all the reverted functions as they were in the original code.
It would work as expected, but there would be unnecessary redirections. In addition, it would require knowing at build time which functions need to be reverted.

Another problem appears when there are many patches that touch the same functions. There might be dependencies between patches that are not enforced on the kernel side. It might also be pretty hard to actually prepare such a patch and ensure compatibility with the other patches.

A better solution is to create a cumulative patch and declare that it replaces all older ones.

This patch adds a new "replace" flag to struct klp_patch. When it is enabled, a set of 'nop' klp_func entries is dynamically created for all functions that are already being patched but that will no longer be modified by the new patch. They are temporarily used as a new target during the patch transition.

Several simplifications are used:

+ The nop structures are generated already when the patch is registered. All registered patches are taken into account, even the disabled ones. As a result, there might be more nops than are really needed when the patch is enabled, if some disabled patches were removed in the meantime. But this keeps us on the safe side and simplifies the implementation; in particular, the existing init() functions can be reused. Freeing is also easier because the same nops are created and removed exactly once.

  The alternative would be to create the nops when the patch is enabled. But then it would be complicated to reuse the init() functions and error paths. It would increase the risk of errors because of late kobject initialization, and it would require tricky waiting for freed kobjects when finalizing a reverted enable transaction.

+ The replaced patches are removed from the stack and can no longer be enabled directly. Otherwise, we would need to implement more complex logic for handling the stack of patches, and it might be hard to come up with a reasonable semantic.
  A fallback is to remove (rmmod) the replaced patches and register (insmod) them again.

+ Nops are handled like normal function patches. This reduces changes in the existing code. It would be possible to copy internal values when they are allocated and take shortcuts in the init() functions. It would be possible to use the fact that old_func and new_func point to the same function and not initialize new_func and new_size at all. It would be possible to detect a nop func in the ftrace handler and just return. But all of this would only complicate the code and its maintenance.

+ The callbacks from the replaced patches are not called. It would be pretty hard to define a reasonable semantic for them, let alone implement it, and it might even be counter-productive. The new patch is cumulative and is supposed to include most of the changes from the older patches. In most cases, it will not want the pre_unpatch()/post_unpatch() callbacks from the replaced patches to run; they would disable or break things for no good reason. It should also be easier to handle the various scenarios in a single script in the new patch than to reason about the interactions caused by running many scripts from the older patches. Not to mention that the old scripts might not even expect to be called in this situation.
Signed-off-by: Jason Baron
[pmladek@suse.com: Split, reuse existing code, simplified]
Signed-off-by: Petr Mladek
Cc: Josh Poimboeuf
Cc: Jessica Yu
Cc: Jiri Kosina
Cc: Miroslav Benes
Cc: Petr Mladek
---
 include/linux/livepatch.h     |   3 +
 kernel/livepatch/core.c       | 160 +++++++++++++++++++++++++++++++++++++++++-
 kernel/livepatch/core.h       |   4 ++
 kernel/livepatch/transition.c |  39 ++++++++++
 4 files changed, 205 insertions(+), 1 deletion(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 7fb76e7d2693..ed598d849029 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -42,6 +42,7 @@ enum klp_func_type {
 	KLP_FUNC_ANY = -1,	/* Substitute any type */
 	KLP_FUNC_STATIC = 0,	/* Original statically defined structure */
+	KLP_FUNC_NOP,		/* Dynamically allocated NOP function patch */
 };
 
 /**
@@ -153,6 +154,7 @@ struct klp_object {
  * struct klp_patch - patch structure for live patching
  * @mod:	reference to the live patch module
  * @objs:	object entries for kernel objects to be patched
+ * @replace:	replace all already registered patches
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
  * @obj_list:	head of list for struct klp_object
@@ -163,6 +165,7 @@ struct klp_patch {
 	/* external */
 	struct module *mod;
 	struct klp_object *objs;
+	bool replace;
 
 	/* internal */
 	struct list_head list;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 11632fe8716a..d16896fa6a4c 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -142,6 +142,21 @@ static bool klp_initialized(void)
 	return !!klp_root_kobj;
 }
 
+static struct klp_func *klp_find_func(struct klp_object *obj,
+				      struct klp_func *old_func)
+{
+	struct klp_func *func;
+
+	klp_for_each_func(obj, func) {
+		if ((strcmp(old_func->old_name, func->old_name) == 0) &&
+		    (old_func->old_sympos == func->old_sympos)) {
+			return func;
+		}
+	}
+
+	return NULL;
+}
+
 static struct klp_object *klp_find_object(struct klp_patch *patch,
 					  struct klp_object *old_obj)
 {
@@ -342,6 +357,39 @@ static int klp_write_object_relocations(struct module *pmod,
 	return ret;
 }
 
+/*
+ * This function removes replaced patches from both the func_stack
+ * and the klp_patches stack.
+ *
+ * We can be pretty aggressive here. It is called in a situation
+ * when these structures are no longer accessible. All functions
+ * are redirected using the klp_transition_patch. They either use
+ * the new code or are back in the original code because of the
+ * special nop function patches.
+ */
+void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
+				     bool keep_module)
+{
+	struct klp_patch *old_patch, *tmp_patch;
+
+	list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches, list) {
+		if (old_patch == new_patch)
+			return;
+
+		klp_unpatch_objects(old_patch, KLP_FUNC_ANY);
+		old_patch->enabled = false;
+
+		/*
+		 * Replaced patches must not get re-enabled, to keep
+		 * the code sane.
+		 */
+		list_move(&old_patch->list, &klp_replaced_patches);
+
+		if (!keep_module)
+			module_put(old_patch->mod);
+	}
+}
+
 static int __klp_disable_patch(struct klp_patch *patch)
 {
 	struct klp_object *obj;
@@ -537,7 +585,7 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
 	if (!klp_is_patch_usable(patch)) {
 		/*
 		 * Module with the patch could either disappear meanwhile or is
-		 * not properly initialized yet.
+		 * not properly initialized yet or the patch was just replaced.
 		 */
 		ret = -EINVAL;
 		goto err;
@@ -662,8 +710,16 @@ static struct attribute *klp_patch_attrs[] = {
 
 /*
  * Dynamically allocated objects and functions.
  */
+static void klp_free_func_nop(struct klp_func *func)
+{
+	kfree(func->old_name);
+	kfree(func);
+}
+
 static void klp_free_func_dynamic(struct klp_func *func)
 {
+	if (func->ftype == KLP_FUNC_NOP)
+		klp_free_func_nop(func);
 }
 
 static void klp_free_object_dynamic(struct klp_object *obj)
@@ -708,6 +764,102 @@ static struct klp_object *klp_get_or_add_object(struct klp_patch *patch,
 	return obj;
 }
 
+static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,
+					   struct klp_object *obj)
+{
+	struct klp_func *func;
+
+	func = kzalloc(sizeof(*func), GFP_KERNEL);
+	if (!func)
+		return ERR_PTR(-ENOMEM);
+
+	if (old_func->old_name) {
+		func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
+		if (!func->old_name) {
+			kfree(func);
+			return ERR_PTR(-ENOMEM);
+		}
+	}
+	func->old_sympos = old_func->old_sympos;
+	/* NOP func is the same as using the original implementation. */
+	func->new_func = (void *)old_func->old_addr;
+	func->ftype = KLP_FUNC_NOP;
+
+	return func;
+}
+
+static int klp_add_func_nop(struct klp_object *obj,
+			    struct klp_func *old_func)
+{
+	struct klp_func *func;
+
+	func = klp_find_func(obj, old_func);
+
+	if (func)
+		return 0;
+
+	func = klp_alloc_func_nop(old_func, obj);
+	if (IS_ERR(func))
+		return PTR_ERR(func);
+
+	klp_init_func_list(obj, func);
+
+	return 0;
+}
+
+static int klp_add_object_nops(struct klp_patch *patch,
+			       struct klp_object *old_obj)
+{
+	struct klp_object *obj;
+	struct klp_func *old_func;
+	int err = 0;
+
+	obj = klp_get_or_add_object(patch, old_obj);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	klp_for_each_func(old_obj, old_func) {
+		err = klp_add_func_nop(obj, old_func);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+/*
+ * Add 'nop' functions which simply return to the caller to run
+ * the original function. The 'nop' functions are added to a
+ * patch to facilitate the 'replace' mode.
+ *
+ * The nops are generated for all patches on the stack when
+ * the new patch is initialized. It is safe even though some
+ * older patches might get disabled and removed before the
+ * new one is enabled. In the worst case, there might be nops
+ * that are not really needed. But this does no harm and
+ * simplifies the implementation a lot. In particular, the
+ * existing init functions can be used as they are.
+ */
+static int klp_add_nops(struct klp_patch *patch)
+{
+	struct klp_patch *old_patch;
+	struct klp_object *old_obj;
+	int err = 0;
+
+	if (WARN_ON(!patch->replace))
+		return -EINVAL;
+
+	list_for_each_entry(old_patch, &klp_patches, list) {
+		klp_for_each_object(old_patch, old_obj) {
+			err = klp_add_object_nops(patch, old_obj);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
 /*
  * Patch release framework must support the following scenarios:
  *
@@ -952,6 +1104,12 @@ static int klp_init_patch(struct klp_patch *patch)
 		return ret;
 	}
 
+	if (patch->replace) {
+		ret = klp_add_nops(patch);
+		if (ret)
+			goto free;
+	}
+
 	klp_for_each_object(patch, obj) {
 		ret = klp_init_object(patch, obj);
 		if (ret)
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 48a83d4364cf..43184a5318d8 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -6,6 +6,10 @@
 extern struct mutex klp_mutex;
 
+void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
+				     bool keep_module);
+void klp_free_objects(struct klp_patch *patch, enum klp_func_type ftype);
+
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
 	return !obj->name || obj->mod;
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 6917100fbe79..d6af190865d2 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -87,6 +87,36 @@ static void klp_complete_transition(void)
 		 klp_transition_patch->mod->name,
 		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
+	/*
+	 * For replace patches, we disable all previous patches, and replace
+	 * the dynamic no-op functions by removing the ftrace hook.
+	 */
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
+		/*
+		 * Make sure that no ftrace handler accesses any older patch
+		 * on the stack. This might happen when the user forced the
+		 * transition while some running tasks were still falling
+		 * back to the old code. There might even still be ftrace
+		 * handlers that have not seen the last patch on the stack yet.
+		 *
+		 * It probably is not necessary because of the rcu-safe access.
+		 * But better be safe than sorry.
+		 */
+		if (klp_forced)
+			klp_synchronize_transition();
+
+		klp_throw_away_replaced_patches(klp_transition_patch,
+						klp_forced);
+
+		/*
+		 * There is no need to synchronize the transition after removing
+		 * nops. They must be the last on the func_stack. Ftrace
+		 * guarantees that nobody will stay in the trampoline after
+		 * the ftrace handler is unregistered.
+		 */
+		klp_unpatch_objects(klp_transition_patch, KLP_FUNC_NOP);
+	}
+
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
 		 * All tasks have transitioned to KLP_UNPATCHED so we can now
@@ -143,6 +173,15 @@ static void klp_complete_transition(void)
 	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
 		module_put(klp_transition_patch->mod);
 
+	/*
+	 * We do not need to wait until the objects are really freed.
+	 * The patch must be at the bottom of the stack. Therefore it
+	 * will never replace anything else. The only important thing
+	 * is that we wait while the patch is being unregistered.
+	 */
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+		klp_free_objects(klp_transition_patch, KLP_FUNC_NOP);
+
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
}
-- 
2.13.6