Date: Tue, 24 Feb 2015 13:12:17 +0100
From: Vojtech Pavlik
To: Ingo Molnar
Cc: Andrew Morton, Jiri Kosina, Josh Poimboeuf, Peter Zijlstra,
    Seth Jennings, linux-kernel@vger.kernel.org, Linus Torvalds,
    Arjan van de Ven, Thomas Gleixner, Borislav Petkov,
    live-patching@vger.kernel.org
Subject: Re: live kernel upgrades (was: live kernel patching design)

On Tue, Feb 24, 2015 at 10:44:05AM +0100, Ingo Molnar wrote:

> > This is the most common argument that's raised when live
> > patching is discussed: "Why do we need live patching when
> > we have redundancy?"
>
> My argument is that if we start off with a latency of 10
> seconds and improve that gradually, it will be good for
> everyone, with a clear, actionable route even for those who
> cannot take a 10 second delay today.

Sure, we can do it that way. Or we can do it in the other direction:
today we have a tool (livepatch) in the kernel that can apply trivial
single-function fixes without measurable disruption to applications,
and we can improve it gradually to expand the range of fixes it can
apply.

Dependent functions can be done with kGraft's lazy migration. Limited
data structure changes can be handled by shadowing. Major data
structure and/or locking changes require stopping the kernel, and
trapping all tasks at the kernel/userspace boundary is clearly the
cleanest way to do that. It comes at a steep latency cost, though.
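To make the lazy migration idea concrete, here's a toy userspace
sketch, not the real kernel code (all names here are invented for the
example; kGraft itself keys off a per-thread flag consulted via the
ftrace trampoline and flipped when the task leaves the kernel):

/*
 * Toy model of kGraft-style lazy migration: each task keeps running
 * the old implementation until it crosses the kernel/userspace
 * boundary, at which point it is switched to the patched one, so no
 * global stop of the machine is needed.
 */
#include <stdbool.h>
#include <stdio.h>

struct task {
	const char *name;
	bool new_universe;	/* migrated to the patched code yet? */
};

static int old_impl(void) { return 1; }	/* original function */
static int new_impl(void) { return 2; }	/* patched replacement */

/* Per-task dispatch: stands in for the ftrace-based trampoline. */
static int patched_entry(struct task *t)
{
	return t->new_universe ? new_impl() : old_impl();
}

/* A safe point: the task is leaving the kernel, so migrate it. */
static void kernel_exit_boundary(struct task *t)
{
	if (!t->new_universe) {
		t->new_universe = true;
		printf("%s migrated to new universe\n", t->name);
	}
}

int main(void)
{
	struct task a = { "task-a", false };

	printf("before: %d\n", patched_entry(&a));	/* old code */
	kernel_exit_boundary(&a);			/* safe point */
	printf("after:  %d\n", patched_entry(&a));	/* new code */
	return 0;
}

Each task migrates independently at its own safe point, which is why
there is no global latency hit; the cost is that old and new code can
coexist for a while and have to interoperate.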
Full code replacement without change scope consideration requires full
serialization and deserialization of hardware and userspace interface
state, which is something we don't have today and which would require
work on every single driver. Possible, but probably a decade of
effort.

With this approach you have something useful at every point, and every
piece of effort put in gives you a reward.

> Let's see the use cases:
>
> > [...] Examples would be legacy applications which can't
> > run in an active-active cluster and need to be restarted
> > on failover.
>
> Most clusters (say web frontends) can take a stoppage of a
> couple of seconds.

It's easy to find examples of workloads that can be stopped, but that
doesn't rule out the significant set of workloads where stopping them
is very expensive.

> > Another usecase is large HPC clusters, where all nodes
> > have to run carefully synchronized. Once one gets behind
> > in a calculation cycle, others have to wait for the
> > results and the efficiency of the whole cluster goes
> > down. [...]
>
> I think calculation nodes on large HPC clusters qualify as
> the specialized case that I mentioned, where the update
> latency could be brought down into the 1 second range.
>
> But I don't think calculation nodes are patched in the
> typical case: you might want to patch Internet-facing
> frontend systems, while the rest is left as undisturbed as
> possible. So I'm not even sure this is a typical usecase.

They're not patched for security bugs, but stability bugs are an
important issue for multi-month calculations.

> In any case, there's no hard limit on how fast such a
> kernel upgrade can get in principle, and the folks who care
> about that latency will surely help out optimizing it, and
> many HPC projects are well funded.

So far, unless you come up with an effective solution, catching all
tasks at the kernel/userspace boundary (the "Kragle" approach) makes
the service interruption effectively unbounded, because tasks in
uninterruptible (D) state may never reach that boundary.

> > The value of live patching is in near zero disruption.
>
> Latency is a good attribute of a kernel upgrade mechanism,
> but it's by far not the only attribute, and we should
> definitely not design limitations into the approach and
> hurt all the other attributes just to optimize that single
> attribute.

It's an attribute I'm not willing to give up. On the other hand, I
definitely wouldn't argue against having modes of operation where the
latency is higher and the tool is more powerful.

> I.e. don't make it a single-issue project.

There is no need to worry about that.

-- 
Vojtech Pavlik
Director SUSE Labs