From: Petr Mladek
To: Jiri Kosina, Josh Poimboeuf, Miroslav Benes
Cc: Jason Baron, Joe Lawrence, Jessica Yu, Evgenii Shatokhin,
    live-patching@vger.kernel.org, linux-kernel@vger.kernel.org, Petr Mladek
Subject: [PATCH 6/8] livepatch: Remove Nop structures when unused
Date: Fri, 23 Mar 2018 13:00:26 +0100
Message-Id: <20180323120028.31451-7-pmladek@suse.com>
In-Reply-To: <20180323120028.31451-1-pmladek@suse.com>
References: <20180323120028.31451-1-pmladek@suse.com>

Replaced patches are removed from the stack when the transition is
finished. This means that the Nop structures will never be needed again
and can be removed. Why should we care?

+ Nop structures give the false impression that the function is patched
  even though the ftrace handler has no effect.

+ Ftrace handlers are not completely free. They cause a slowdown that
  might be visible in some workloads. The ftrace-related slowdown might
  actually be the reason why the function is no longer patched in the
  new cumulative patch. One would expect that the cumulative patch would
  allow these problems to be solved as well.

+ Cumulative patches are supposed to replace any earlier version of the
  patch. The number of NOPs depends on which version was replaced. This
  multiplies the number of scenarios that might happen. One might say
  that NOPs are harmless. But there are even optimized NOP instructions
  for different processors, see for example arch/x86/kernel/alternative.c.
  And klp_ftrace_handler() is much more complicated.

+ It sounds natural to clean up a mess that is no longer needed. It
  could only get worse if we do not do it.

This patch allows the dynamic structures to be unpatched and freed
independently when the transition finishes.

The freeing part is a bit tricky because the kobject free callbacks are
called asynchronously and cannot easily be waited for. Fortunately, we do
not have to wait: any further access can be avoided by removing the
structures from the dynamic lists.

Finally, the patch becomes the first one on the stack when it is enabled.
The replace functionality will no longer be needed. Let's clear
patch->replace to avoid the special handling when it is eventually
disabled/enabled again.

Signed-off-by: Petr Mladek
---
 include/linux/livepatch.h     |  6 ++++++
 kernel/livepatch/core.c       | 42 +++++++++++++++++++++++++++++++++++-------
 kernel/livepatch/core.h       |  1 +
 kernel/livepatch/patch.c      | 31 ++++++++++++++++++++++++++-----
 kernel/livepatch/patch.h      |  1 +
 kernel/livepatch/transition.c | 26 +++++++++++++++++++++++++-
 6 files changed, 94 insertions(+), 13 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index d6e6d8176995..1635b30bb1ec 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -172,6 +172,9 @@ struct klp_patch {
 #define klp_for_each_object_static(patch, obj) \
 	for (obj = patch->objs; obj->funcs || obj->name; obj++)
 
+#define klp_for_each_object_safe(patch, obj, tmp_obj) \
+	list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
+
 #define klp_for_each_object(patch, obj) \
 	list_for_each_entry(obj, &patch->obj_list, node)
 
@@ -180,6 +183,9 @@ struct klp_patch {
 	     func->old_name || func->new_func || func->old_sympos; \
 	     func++)
 
+#define klp_for_each_func_safe(obj, func, tmp_func) \
+	list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
+
 #define klp_for_each_func(obj, func) \
 	list_for_each_entry(func, &obj->func_list, node)
 
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 35f4bbff395f..0b3be6e14b80 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -852,11 +852,20 @@ static struct kobj_type klp_ktype_func = {
 	.sysfs_ops = &kobj_sysfs_ops,
 };
 
-static void klp_free_funcs(struct klp_object *obj)
+static void __klp_free_funcs(struct klp_object *obj, bool free_all)
 {
-	struct klp_func *func;
+	struct klp_func *func, *tmp_func;
+
+	klp_for_each_func_safe(obj, func, tmp_func) {
+		if (!free_all && !func->nop)
+			continue;
+
+		/*
+		 * Avoid double free. It would be tricky to wait for kobject
+		 * callbacks when only NOPs are handled.
+		 */
+		list_del(&func->node);
 
-	klp_for_each_func(obj, func) {
 		/* Might be called from klp_init_patch() error path. */
 		if (func->kobj.state_initialized)
 			kobject_put(&func->kobj);
@@ -880,12 +889,21 @@ static void klp_free_object_loaded(struct klp_object *obj)
 	}
 }
 
-static void klp_free_objects(struct klp_patch *patch)
+static void __klp_free_objects(struct klp_patch *patch, bool free_all)
 {
-	struct klp_object *obj;
+	struct klp_object *obj, *tmp_obj;
 
-	klp_for_each_object(patch, obj) {
-		klp_free_funcs(obj);
+	klp_for_each_object_safe(patch, obj, tmp_obj) {
+		__klp_free_funcs(obj, free_all);
+
+		if (!free_all && !obj->dynamic)
+			continue;
+
+		/*
+		 * Avoid double free. It would be tricky to wait for kobject
+		 * callbacks when only dynamic objects are handled.
+		 */
+		list_del(&obj->node);
 
 		/* Might be called from klp_init_patch() error path. */
 		if (obj->kobj.state_initialized)
@@ -895,6 +913,16 @@ static void klp_free_objects(struct klp_patch *patch)
 	}
 }
 
+static void klp_free_objects(struct klp_patch *patch)
+{
+	__klp_free_objects(patch, true);
+}
+
+void klp_free_objects_dynamic(struct klp_patch *patch)
+{
+	__klp_free_objects(patch, false);
+}
+
 static void klp_free_patch(struct klp_patch *patch)
 {
 	klp_free_objects(patch);
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 2d2b724d56a4..0837360a7170 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -8,6 +8,7 @@ extern struct mutex klp_mutex;
 
 void klp_discard_replaced_patches(struct klp_patch *new_patch,
 				  bool keep_module);
+void klp_free_objects_dynamic(struct klp_patch *patch);
 
 static inline bool klp_is_object_loaded(struct klp_object *obj)
 {
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index fbf1a3a47fc3..64b9ec3facf7 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -244,15 +244,26 @@ static int klp_patch_func(struct klp_func *func)
 	return ret;
 }
 
-void klp_unpatch_object(struct klp_object *obj)
+static void __klp_unpatch_object(struct klp_object *obj, bool unpatch_all)
 {
 	struct klp_func *func;
 
-	klp_for_each_func(obj, func)
+	klp_for_each_func(obj, func) {
+		if (!unpatch_all && !func->nop)
+			continue;
+
 		if (func->patched)
 			klp_unpatch_func(func);
+	}
 
-	obj->patched = false;
+	if (unpatch_all || obj->dynamic)
+		obj->patched = false;
+}
+
+
+void klp_unpatch_object(struct klp_object *obj)
+{
+	__klp_unpatch_object(obj, true);
 }
 
 int klp_patch_object(struct klp_object *obj)
@@ -275,11 +286,21 @@ int klp_patch_object(struct klp_object *obj)
 	return 0;
 }
 
-void klp_unpatch_objects(struct klp_patch *patch)
+static void __klp_unpatch_objects(struct klp_patch *patch, bool unpatch_all)
 {
 	struct klp_object *obj;
 
 	klp_for_each_object(patch, obj)
 		if (obj->patched)
-			klp_unpatch_object(obj);
+			__klp_unpatch_object(obj, unpatch_all);
+}
+
+void klp_unpatch_objects(struct klp_patch *patch)
+{
+	__klp_unpatch_objects(patch, true);
+}
+
+void klp_unpatch_objects_dynamic(struct klp_patch *patch)
+{
+	__klp_unpatch_objects(patch, false);
 }
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index e72d8250d04b..cd8e1f03b22b 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -30,5 +30,6 @@ struct klp_ops *klp_find_ops(unsigned long old_addr);
 int klp_patch_object(struct klp_object *obj);
 void klp_unpatch_object(struct klp_object *obj);
 void klp_unpatch_objects(struct klp_patch *patch);
+void klp_unpatch_objects_dynamic(struct klp_patch *patch);
 
 #endif /* _LIVEPATCH_PATCH_H */
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 24daed700ee7..05ea2a8e03bd 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -87,9 +87,18 @@ static void klp_complete_transition(void)
 		  klp_transition_patch->mod->name,
 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
 		klp_discard_replaced_patches(klp_transition_patch, klp_forced);
 
+		/*
+		 * There is no need to synchronize the transition after removing
+		 * nops. They must be the last on the func_stack. Ftrace
+		 * guarantees that nobody will stay in the trampoline after
+		 * the ftrace handler is unregistered.
+		 */
+		klp_unpatch_objects_dynamic(klp_transition_patch);
+	}
+
 	if (klp_target_state == KLP_UNPATCHED) {
 		/*
 		 * All tasks have transitioned to KLP_UNPATCHED so we can now
@@ -146,6 +155,21 @@ static void klp_complete_transition(void)
 	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
 		module_put(klp_transition_patch->mod);
 
+	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
+		/*
+		 * We do not need to wait until the objects are really freed.
+		 * We will never need them again because the patch must be on
+		 * the bottom of the stack now.
+		 */
+		klp_free_objects_dynamic(klp_transition_patch);
+		/*
+		 * Replace behavior will no longer be needed. Avoid the related
+		 * code when disabling and enabling again.
+		 */
+		klp_transition_patch->replace = false;
+	}
+
+
 	klp_target_state = KLP_UNDEFINED;
 	klp_transition_patch = NULL;
 }
-- 
2.13.6
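
For illustration only, not part of the patch: below is a minimal,
self-contained userspace C sketch of the safe-iteration pattern that the
new klp_for_each_object_safe() and klp_for_each_func_safe() macros get
from list_for_each_entry_safe(). The struct node type, the
for_each_node*() macros, and free_nodes() are simplified stand-ins
invented for this sketch, not kernel APIs. It demonstrates the property
the patch relies on: the successor is cached before the loop body runs,
so the body may unlink and free the current entry (as __klp_free_funcs()
does with list_del()) without breaking the walk.

/*
 * Standalone userspace sketch (not kernel code). The names are
 * hypothetical; only the iteration pattern mirrors the patch.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct node {
	bool nop;		/* stands in for func->nop */
	struct node *next;
};

/* Plain iteration: 'pos' must stay valid for the whole step. */
#define for_each_node(pos, head) \
	for (pos = (head); pos; pos = pos->next)

/*
 * Safe iteration: the successor is cached in 'tmp' before the body runs,
 * so the body may unlink and free 'pos' without breaking the walk.
 */
#define for_each_node_safe(pos, tmp, head) \
	for (pos = (head), tmp = pos ? pos->next : NULL; pos; \
	     pos = tmp, tmp = pos ? pos->next : NULL)

static struct node *push(struct node *head, bool nop)
{
	struct node *n = malloc(sizeof(*n));

	n->nop = nop;
	n->next = head;
	return n;
}

/*
 * Mirrors the shape of __klp_free_funcs(): when free_all is false,
 * only the nop entries are unlinked and freed.
 */
static struct node *free_nodes(struct node *head, bool free_all)
{
	struct node *pos, *tmp, **link = &head;

	for_each_node_safe(pos, tmp, head) {
		if (!free_all && !pos->nop) {
			link = &pos->next;
			continue;
		}
		*link = pos->next;	/* unlink first, then free */
		free(pos);
	}
	return head;
}

int main(void)
{
	struct node *head = NULL, *pos;

	head = push(head, true);
	head = push(head, false);
	head = push(head, true);

	/* Drop only the nop entries, as klp_free_objects_dynamic() would. */
	head = free_nodes(head, false);

	for_each_node(pos, head)
		printf("remaining node, nop=%d\n", pos->nop);
	return 0;
}

Unlinking an entry before it is freed is also why the asynchronous
kobject release callbacks are harmless here: once an entry is off the
dynamic list, nothing iterates to it again, so there is no need to wait
for the callbacks to finish.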