Date: Tue, 30 Jan 2018 15:03:03 +0100
From: Petr Mladek
To: Evgenii Shatokhin
Cc: Jason Baron, linux-kernel@vger.kernel.org, live-patching@vger.kernel.org,
    jpoimboe@redhat.com, jeyu@kernel.org, jikos@kernel.org, mbenes@suse.cz,
    joe.lawrence@redhat.com
Subject: Re: [PATCH v5 0/3] livepatch: introduce atomic replace
Message-ID: <20180130140303.6xmjgnbdemovzif5@pathway.suse.cz>
References: <86cac2eb-0de4-bae7-f633-5ad03297880d@akamai.com>
    <20180126102326.u5jscbbgburrzatp@pathway.suse.cz>

On Fri 2018-01-26 14:29:36, Evgenii Shatokhin wrote:
> On 26.01.2018 13:23, Petr Mladek wrote:
> > On Fri 2018-01-19 16:10:42, Jason Baron wrote:
> > >
> > > On 01/19/2018 02:20 PM, Evgenii Shatokhin wrote:
> > > > On 12.01.2018 22:55, Jason Baron wrote:
> > > > There is one more thing that might need attention here. In my
> > > > experiments with this patch series, I saw that unpatch callbacks are not
> > > > called for the older binary patch (the one being replaced).
> > >
> > > So I think the pre_unpatch() can be called for any prior livepatch
> > > modules from __klp_enable_patch(). Perhaps in reverse order of loading
> > > (if there is more than one), and *before* the pre_patch() for the
> > > livepatch module being loaded. Then, if it successfully patches in
> > > klp_complete_transition(), the post_unpatch() can be called for any prior
> > > livepatch modules as well. I think again it makes sense to call the
> > > post_unpatch() for prior modules *before* the post_patch() for the
> > > current livepatch module.
> >
> > So, we are talking about a lot of rather non-trivial code.
> > IMHO, it might be easier to run just the callbacks from
> > the new patch. In reality, the author should always know
> > what it might be replacing and what needs to be done.
> >
> > In other words, it might be much easier to handle all
> > situations in a single script in the new patch. The alternative
> > would be doing crazy hacks to prevent the older scripts from
> > destroying what we would like to keep. We would need to
> > keep in mind interactions between the scripts and
> > the order in which they are called.
> >
> > Or do you plan to use cumulative patches simply to
> > get rid of any other "random" livepatches with something
> > completely different? In that case, it might be much safer
> > to disable the older patches the normal way.
>
> In my experience, it was quite convenient sometimes to just "replace all
> binary patches the user currently has loaded with this single one". No
> matter what these original binary patches did and where they came from.

To be honest, I would feel better if the livepatch framework were safer.
It should prevent breaking the system by a patch that atomically
replaces other random patches that rely on callbacks.

Well, combining random livepatches from random vendors is asking for
trouble. It might easily fail when two patches add different changes to
the same function.

I wonder if we should introduce some tags, keys, or vendors. I mean
an identification or dependencies that would say that some patches are
compatible or incompatible. We could allow two livepatches to patch the
same function only if they are from the same vendor. But then the order
still might be important. Also I am not sure if this condition is safe
enough.

On the other hand, we could handle callbacks like the shadow variables.
Every shadow variable has an ID. We have an API to add/read/update/remove
them. We might allow more callbacks with different IDs. Then we could run
callbacks only for IDs that are newly added or removed. Sigh, this would
be a very complex implementation as well. But it might make these
features easier to use and maintain.

Alternatively, I thought about having two modes. One is a stack of
"random" patches. The second is a cumulative mode. IMHO, the combination
of the two modes makes things very complex. It might be much easier if
we allow loading a patch with the replace flag enabled only on top of
another patch with this flag enabled.
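Just to make the "handle everything in the new patch" idea more concrete,
here is a minimal sketch of a cumulative patch that does the cleanup itself
in its pre_patch() callback. It only uses the callbacks API that is already
in v4.15; the patched function "do_something" is a placeholder, and the
.replace flag is the one proposed by this series:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>

/*
 * Sketch only: the new cumulative patch knows what it replaces, so it
 * undoes the leftover state (shadow variables, sysfs entries, ...)
 * here instead of relying on the unpatch callbacks of the old patches.
 */
static int demo_pre_patch(struct klp_object *obj)
{
	pr_info("cumulative patch: undoing state of replaced patches for %s\n",
		klp_is_module(obj) ? obj->name : "vmlinux");
	/* e.g. free the shadow variables registered by the old patches */
	return 0;
}

/* Replacement for a hypothetical function in vmlinux. */
static void livepatch_do_something(void)
{
}

static struct klp_func funcs[] = {
	{
		.old_name = "do_something",	/* placeholder target */
		.new_func = livepatch_do_something,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* name == NULL means the object is vmlinux */
		.funcs = funcs,
		.callbacks = {
			.pre_patch = demo_pre_patch,
		},
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
	/* .replace = true,	<- flag proposed by this series */
};

static int demo_init(void)
{
	int ret;

	ret = klp_register_patch(&patch);
	if (ret)
		return ret;

	ret = klp_enable_patch(&patch);
	if (ret) {
		WARN_ON(klp_unregister_patch(&patch));
		return ret;
	}

	return 0;
}

static void demo_exit(void)
{
	WARN_ON(klp_unregister_patch(&patch));
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");

The point is that the new patch owns the whole transition, so nothing
from the replaced patches has to run at all.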
> Another problematic situation is when you need to actually downgrade a
> cumulative patch. Should be rare, but...

Good point. Well, especially the callbacks should be rare.

I would like to hear from people who have some experience or plans with
using callbacks and cumulative patches. I wonder how they plan to
technically reuse a feature in the cumulative patches.

IMHO, it should be very rare to add/remove callbacks. Then it might be
safe to downgrade a cumulative patch if the callbacks are exactly the
same.

> Well, I think we will disable the old patches explicitly in these cases,
> before loading of the new one. May be fragile but easier to maintain.

I am afraid that we will not be able to support all use cases and keep
the code sane.

Best Regards,
Petr