Subject: Re: [PATCH v6 6/6] livepatch: Add atomic replace
To: Petr Mladek, jpoimboe@redhat.com, jikos@kernel.org, mbenes@suse.cz
Cc: jeyu@kernel.org, Evgenii Shatokhin, linux-kernel@vger.kernel.org,
 live-patching@vger.kernel.org
References: <20180125160203.28959-1-pmladek@suse.com>
 <20180125160203.28959-7-pmladek@suse.com>
From: Jason Baron
Message-ID: <693db35b-3bc8-d5de-207e-6321226d7f58@akamai.com>
Date: Thu, 25 Jan 2018 23:27:57 -0500
In-Reply-To: <20180125160203.28959-7-pmladek@suse.com>

On 01/25/2018 11:02 AM, Petr Mladek wrote:
> From: Jason Baron
>
> Sometimes we would like to revert a particular fix. Currently, this
> is not easy because we want to keep all other fixes active, and we
> can only revert the last applied patch.
>
> One solution would be to apply a new patch that implements all the
> reverted functions as they were in the original code. It would work
> as expected, but there would be unnecessary redirections. In
> addition, it would require knowing at build time which functions
> need to be reverted.
>
> Another problem arises when there are many patches that touch the
> same functions. There might be dependencies between patches that
> are not enforced on the kernel side. Also, it might be pretty hard
> to actually prepare the patch and ensure compatibility with the
> other patches.
>
> A better solution is to create a cumulative patch and declare that
> it replaces all older ones.
>
> This patch adds a new "replace" flag to struct klp_patch. When it is
> enabled, a set of 'nop' klp_func entries will be dynamically created
> for all functions that are already being patched but that will no
> longer be modified by the new patch. They are temporarily used as a
> new target during the patch transition.
>
> Several simplifications are used:
>
> + The nop structures are generated already when the patch is
>   registered. All registered patches are taken into account, even
>   the disabled ones. As a result, there might be more nops than are
>   really needed when the patch is enabled and some disabled patches
>   were removed before. But we are on the safe side and it simplifies
>   the implementation. In particular, we can reuse the existing
>   init() functions. Freeing is also easier because the same nops are
>   created and removed only once.
>
>   An alternative would be to create the nops when the patch is
>   enabled. But then it would be complicated to reuse the init()
>   functions and error paths. It would increase the risk of errors
>   because of late kobject initialization. It would need tricky
>   waiting for freed kobjects when finalizing a reverted enable
>   transaction.
>
> + The replaced patches are removed from the stack and can no longer
>   be enabled directly. Otherwise, we would need to implement more
>   complex logic for handling the stack of patches, and it would be
>   hard to come up with reasonable semantics.
>
>   A fallback is to remove (rmmod) the replaced patches and register
>   (insmod) them again.
>
> + Nops are handled like normal function patches. This reduces the
>   changes in the existing code.
>
>   It would be possible to copy internal values when they are
>   allocated and take shortcuts in the init() functions. It would be
>   possible to use the fact that old_func and new_func point to the
>   same function and not initialize new_func and new_size at all. It
>   would be possible to detect a nop func in the ftrace handler and
>   just leave. But all of this would just complicate the code and
>   maintenance.
>
> + The callbacks from the replaced patches are not called. It would
>   be pretty hard to define reasonable semantics for this and to
>   implement it.
>
>   It might even be counter-productive. The new patch is cumulative.
>   It is supposed to include most of the changes from the older
>   patches. In most cases, it will not want the pre_unpatch() and
>   post_unpatch() callbacks from the replaced patches to be called.
>   That would disable/break things for no good reason. Also, it
>   should be easier to handle various scenarios in a single script in
>   the new patch than to think about interactions caused by running
>   many scripts from older patches. Not to mention that the old
>   scripts would not even expect to be called in this situation.
>
> Signed-off-by: Jason Baron
> [pmladek@suse.com: Split, reuse existing code, simplified]

Hi Petr,

Thanks for cleaning this up - it looks good. I just had one
comment/issue below thus far.

> Signed-off-by: Petr Mladek
> Cc: Josh Poimboeuf
> Cc: Jessica Yu
> Cc: Jiri Kosina
> Cc: Miroslav Benes
> Cc: Petr Mladek
> ---
>  include/linux/livepatch.h     |   3 +
>  kernel/livepatch/core.c       | 162 +++++++++++++++++++++++++++++++++++++++++-
>  kernel/livepatch/core.h       |   4 ++
>  kernel/livepatch/transition.c |  36 ++++++++++
>  4 files changed, 203 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> index 21cad200f949..9d44d105b278 100644
> --- a/include/linux/livepatch.h
> +++ b/include/linux/livepatch.h
> @@ -42,6 +42,7 @@
>  enum klp_func_type {
> 	KLP_FUNC_ANY = -1,	/* Substitute any type */
> 	KLP_FUNC_ORIGINAL = 0,	/* Original statically defined structure */
> +	KLP_FUNC_NOP,		/* Dynamically allocated NOP function patch */
>  };
>
>  /**
> @@ -153,6 +154,7 @@ struct klp_object {
>   * struct klp_patch - patch structure for live patching
>   * @mod:	reference to the live patch module
>   * @objs:	object entries for kernel objects to be patched
> + * @replace:	replace all already registered patches
>   * @list:	list node for global list of registered patches
>   * @kobj:	kobject for sysfs resources
>   * @obj_list:	head of list for struct klp_object
> @@ -163,6 +165,7 @@ struct klp_patch {
> 	/* external */
> 	struct module *mod;
> 	struct klp_object *objs;
> +	bool replace;
>
> 	/* internal */
> 	struct list_head list;
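
(Aside, not part of this series: just for context, a rough sketch of
how a cumulative patch module would opt in to the new flag, modeled on
the livepatch sample module. The patched symbol and the replacement
function below are only placeholders; the only new piece is the
.replace field.)

	#include <linux/module.h>
	#include <linux/kernel.h>
	#include <linux/livepatch.h>
	#include <linux/seq_file.h>

	/* Placeholder replacement body, as in the livepatch sample. */
	static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
	{
		seq_printf(m, "%s\n", "this has been live patched");
		return 0;
	}

	static struct klp_func funcs[] = {
		{
			.old_name = "cmdline_proc_show",
			.new_func = livepatch_cmdline_proc_show,
		}, { }
	};

	static struct klp_object objs[] = {
		{
			/* NULL name means vmlinux */
			.funcs = funcs,
		}, { }
	};

	static struct klp_patch patch = {
		.mod = THIS_MODULE,
		.objs = objs,
		.replace = true,	/* replace all older registered patches */
	};

The module's init would then register and enable the patch as usual;
everything else stays the same as for a non-replace patch.
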
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 6ad3195d995a..c606b291dfcd 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -142,6 +142,21 @@ static bool klp_initialized(void)
> 	return !!klp_root_kobj;
>  }
>
> +static struct klp_func *klp_find_func(struct klp_object *obj,
> +				      struct klp_func *old_func)
> +{
> +	struct klp_func *func;
> +
> +	klp_for_each_func(obj, func) {
> +		if ((strcmp(old_func->old_name, func->old_name) == 0) &&
> +		    (old_func->old_sympos == func->old_sympos)) {
> +			return func;
> +		}
> +	}
> +
> +	return NULL;
> +}
> +
>  static struct klp_object *klp_find_object(struct klp_patch *patch,
> 					  struct klp_object *old_obj)
>  {
> @@ -342,6 +357,39 @@ static int klp_write_object_relocations(struct module *pmod,
> 	return ret;
>  }
>
> +/*
> + * This function removes replaced patches from both func_stack
> + * and klp_patches stack.
> + *
> + * We could be pretty aggressive here. It is called in situation
> + * when these structures are not longer accessible. All functions
> + * are redirected using the klp_transition_patch. They use either
> + * a new code or they in the original code because of the special
> + * nop function patches.
> + */
> +void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
> +				     bool keep_module)
> +{
> +	struct klp_patch *old_patch, *tmp_patch;
> +
> +	list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches, list) {
> +		if (old_patch == new_patch)
> +			return;
> +
> +		klp_unpatch_objects(old_patch, KLP_FUNC_ANY);
> +		old_patch->enabled = false;
> +
> +		/*
> +		 * Replaced patches could not get re-enabled to keep
> +		 * the code sane.
> +		 */
> +		list_del_init(&old_patch->list);

I'm wondering if this should be:

	list_move(&old_patch->list, &klp_replaced_patches);

Which ensures that the only valid transition after a patch has been
'replaced' is an 'unregister'. Otherwise, klp_replaced_patches is not
used anywhere.
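
Something like this, I mean (untested sketch; it assumes a
klp_replaced_patches list head defined next to klp_patches in core.c):

	static LIST_HEAD(klp_replaced_patches);	/* parked, replaced patches */

	/* ... and in klp_throw_away_replaced_patches(): */
	list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches, list) {
		if (old_patch == new_patch)
			return;

		klp_unpatch_objects(old_patch, KLP_FUNC_ANY);
		old_patch->enabled = false;

		/*
		 * Park replaced patches on a separate list so that they
		 * cannot be re-enabled; the only valid next step for them
		 * is unregistering.
		 */
		list_move(&old_patch->list, &klp_replaced_patches);

		if (!keep_module)
			module_put(old_patch->mod);
	}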

Thanks,

-Jason

> +
> +		if (!keep_module)
> +			module_put(old_patch->mod);
> +	}
> +}
> +
>  static int __klp_disable_patch(struct klp_patch *patch)
>  {
> 	struct klp_object *obj;
> @@ -537,7 +585,7 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
> 	if (!klp_is_patch_usable(patch)) {
> 		/*
> 		 * Module with the patch could either disappear meanwhile or is
> -		 * not properly initialized yet.
> +		 * not properly initialized yet or the patch was just replaced.
> 		 */
> 		ret = -EINVAL;
> 		goto err;
> @@ -662,8 +710,16 @@ static struct attribute *klp_patch_attrs[] = {
>  /*
>   * Dynamically allocated objects and functions.
>   */
> +static void klp_free_func_nop(struct klp_func *func)
> +{
> +	kfree(func->old_name);
> +	kfree(func);
> +}
> +
>  static void klp_free_func_dynamic(struct klp_func *func)
>  {
> +	if (func->ftype == KLP_FUNC_NOP)
> +		klp_free_func_nop(func);
>  }
>
>  static void klp_free_object_dynamic(struct klp_object *obj)
> @@ -691,7 +747,7 @@ static struct klp_object *klp_alloc_object_dynamic(const char *name)
> 	return obj;
>  }
>
> -struct klp_object *klp_get_or_add_object(struct klp_patch *patch,
> +static struct klp_object *klp_get_or_add_object(struct klp_patch *patch,
> 					 struct klp_object *old_obj)
>  {
> 	struct klp_object *obj;
> @@ -708,6 +764,102 @@ struct klp_object *klp_get_or_add_object(struct klp_patch *patch,
> 	return obj;
>  }
>
> +static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,
> +					    struct klp_object *obj)
> +{
> +	struct klp_func *func;
> +
> +	func = kzalloc(sizeof(*func), GFP_KERNEL);
> +	if (!func)
> +		return ERR_PTR(-ENOMEM);
> +
> +	if (old_func->old_name) {
> +		func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
> +		if (!func->old_name) {
> +			kfree(func);
> +			return ERR_PTR(-ENOMEM);
> +		}
> +	}
> +	func->old_sympos = old_func->old_sympos;
> +	/* NOP func is the same as using the original implementation. */
> +	func->new_func = (void *)old_func->old_addr;
> +	func->ftype = KLP_FUNC_NOP;
> +
> +	return func;
> +}
> +
> +static int klp_add_func_nop(struct klp_object *obj,
> +			    struct klp_func *old_func)
> +{
> +	struct klp_func *func;
> +
> +	func = klp_find_func(obj, old_func);
> +
> +	if (func)
> +		return 0;
> +
> +	func = klp_alloc_func_nop(old_func, obj);
> +	if (IS_ERR(func))
> +		return PTR_ERR(func);
> +
> +	klp_init_func_list(obj, func);
> +
> +	return 0;
> +}
> +
> +static int klp_add_object_nops(struct klp_patch *patch,
> +			       struct klp_object *old_obj)
> +{
> +	struct klp_object *obj;
> +	struct klp_func *old_func;
> +	int err = 0;
> +
> +	obj = klp_get_or_add_object(patch, old_obj);
> +	if (IS_ERR(obj))
> +		return PTR_ERR(obj);
> +
> +	klp_for_each_func(old_obj, old_func) {
> +		err = klp_add_func_nop(obj, old_func);
> +		if (err)
> +			return err;
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Add 'nop' functions which simply return to the caller to run
> + * the original function. The 'nop' functions are added to a
> + * patch to facilitate a 'replace' mode
> + *
> + * The nops are generated for all patches on the stack when
> + * the new patch is initialized. It is safe even though some
> + * older patches might get disabled and removed before the
> + * new one is enabled. In the worst case, there might be nops
> + * there will not be really needed. But it does not harm and
> + * simplifies the implementation a lot. Especially we could
> + * use the init functions as is.
> + */
> +static int klp_add_nops(struct klp_patch *patch)
> +{
> +	struct klp_patch *old_patch;
> +	struct klp_object *old_obj;
> +	int err = 0;
> +
> +	if (WARN_ON(!patch->replace))
> +		return -EINVAL;
> +
> +	list_for_each_entry(old_patch, &klp_patches, list) {
> +		klp_for_each_object(old_patch, old_obj) {
> +			err = klp_add_object_nops(patch, old_obj);
> +			if (err)
> +				return err;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  /*
>   * Patch release framework must support the following scenarios:
>   *
> @@ -952,6 +1104,12 @@ static int klp_init_patch(struct klp_patch *patch)
> 		return ret;
> 	}
>
> +	if (patch->replace) {
> +		ret = klp_add_nops(patch);
> +		if (ret)
> +			goto free;
> +	}
> +
> 	klp_for_each_object(patch, obj) {
> 		ret = klp_init_object(patch, obj);
> 		if (ret)
> diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
> index 48a83d4364cf..43184a5318d8 100644
> --- a/kernel/livepatch/core.h
> +++ b/kernel/livepatch/core.h
> @@ -6,6 +6,10 @@
>
>  extern struct mutex klp_mutex;
>
> +void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
> +				     bool keep_module);
> +void klp_free_objects(struct klp_patch *patch, enum klp_func_type ftype);
> +
>  static inline bool klp_is_object_loaded(struct klp_object *obj)
>  {
> 	return !obj->name || obj->mod;
> diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
> index 6917100fbe79..3f6cf5b9e048 100644
> --- a/kernel/livepatch/transition.c
> +++ b/kernel/livepatch/transition.c
> @@ -87,6 +87,33 @@ static void klp_complete_transition(void)
> 		  klp_transition_patch->mod->name,
> 		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
>
> +	/*
> +	 * For replace patches, we disable all previous patches, and replace
> +	 * the dynamic no-op functions by removing the ftrace hook.
> +	 */
> +	if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
> +		/*
> +		 * Make sure klp_ftrace_handler() can no longer see functions
> +		 * from the replaced patches. Then all functions will be
> +		 * redirected using klp_transition_patch. They will use
They will use > + * either a new code or they will stay in the original code > + * because of the nop funcs' structures. > + */ > + if (klp_forced) > + klp_synchronize_transition(); > + > + klp_throw_away_replaced_patches(klp_transition_patch, > + klp_forced); > + > + /* > + * There is no need to synchronize the transition after removing > + * nops. They must be the last on the func_stack. Ftrace > + * gurantees that nobody will stay in the trampoline after > + * the ftrace handler is unregistered. > + */ > + klp_unpatch_objects(klp_transition_patch, KLP_FUNC_NOP); > + } > + > if (klp_target_state == KLP_UNPATCHED) { > /* > * All tasks have transitioned to KLP_UNPATCHED so we can now > @@ -143,6 +170,15 @@ static void klp_complete_transition(void) > if (!klp_forced && klp_target_state == KLP_UNPATCHED) > module_put(klp_transition_patch->mod); > > + /* > + * We do not need to wait until the objects are really freed. > + * The patch must be on the bottom of the stack. Therefore it > + * will never replace anything else. The only important thing > + * is that we wait when the patch is being unregistered. > + */ > + if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) > + klp_free_objects(klp_transition_patch, KLP_FUNC_NOP); > + > klp_target_state = KLP_UNDEFINED; > klp_transition_patch = NULL; > } >