From: Jeremy Fitzhardinge
Date: Fri, 30 Apr 2010 11:59:15 -0700
To: Avi Kivity
Cc: Dan Magenheimer, Dave Hansen, Pavel Machek, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, hugh.dickins@tiscali.co.uk, ngupta@vflare.org,
    JBeulich@novell.com, chris.mason@oracle.com, kurt.hackel@oracle.com,
    dave.mccracken@oracle.com, npiggin@suse.de, akpm@linux-foundation.org,
    riel@redhat.com
Subject: Re: Frontswap [PATCH 0/4] (was Transcendent Memory): overview

On 04/30/2010 11:24 AM, Avi Kivity wrote:
>> I'd argue the opposite.  There's no point in having the host do swapping
>> on behalf of guests if guests can do it themselves; it's just a
>> duplication of functionality.
>
> The problem with relying on the guest to swap is that it's voluntary.
> The guest may not be able to do it.  When the hypervisor needs memory
> and guests don't cooperate, it has to swap.

Or fail whatever operation it's trying to do.  You can only use
overcommit to fake unlimited resources for so long before you need a
government bailout.

>> You end up having two IO paths for each guest, and the resulting
>> problems in trying to account for the IO, rate-limit it, etc.  If you
>> can simply say "all guest disk IO happens via this single interface",
>> it's much easier to manage.
>
> With tmem you have to account for that memory, make sure it's
> distributed fairly, claim it back when you need it (requiring guest
> cooperation), live migrate and save/restore it.  It's a much larger
> change than introducing a write-back device for swapping (which has
> the benefit of working with unmodified guests).

Well, with caveats.  To be useful with migration the backing store needs
to be shared like other storage, so you can't use a specific host-local
fast (ssd) swap device.  And because the device is backed by pagecache
with delayed writes, it has much weaker integrity guarantees than a
normal device, so you need to be sure that the guests are only going to
use it for swap.  Sure, these are deployment issues rather than code
ones, but they're still issues.
>> If frontswap has value, it's because it's providing a new facility to
>> guests that doesn't already exist and can't be easily emulated with
>> existing interfaces.
>>
>> It seems to me the great strengths of the synchronous interface are:
>>
>>  * it matches the needs of an existing implementation (tmem in Xen)
>>  * it is simple to understand within the context of the kernel code
>>    it's used in
>>
>> Simplicity is important, because it allows the mm code to be understood
>> and maintained without having to have a deep understanding of
>> virtualization.
>
> If we use the existing paths, things are even simpler, and we match
> more needs (hypervisors with dma engines, the ability to reclaim
> memory without guest cooperation).

Well, you still can't reclaim memory; you can write it out to storage.
It may be cheaper per byte, but it's still a resource dedicated to the
guest.  But that's just a consequence of allowing overcommit, and of how
much overcommit you're happy to allow.

What kind of DMA engine do you have in mind?  Are there practical
memory->memory DMA engines that would be useful in this context?

>>> At this point we're back with the ordinary swap API.  Simply have your
>>> host expose a device which is write cached by host memory, you'll have
>>> all the benefits of frontswap with none of the disadvantages, and with
>>> no changes to guest code.
>>>
>> Yes, that's comfortably within the "guests page themselves" model.
>> Setting up a block device for the domain which is backed by pagecache
>> (something we usually try hard to avoid) is pretty straightforward.  But
>> it doesn't work well for Xen unless the blkback domain is sized so that
>> it has all of Xen's free memory in its pagecache.
>
> Could be easily achieved with ballooning?

It could be achieved with ballooning, but it isn't completely trivial.
It wouldn't work terribly well with a driver domain setup, unless all
the swap devices turned out to be backed by the same domain (which in
turn would need to know how to balloon in response to overall system
demand).  The partitioning of the pagecache among the guests would be
at the mercy of the mm subsystem rather than subject to any specific
QoS or other per-domain policies you might want to put in place (maybe
fiddling around with [fm]advise could get you some control over that).

>> That said, it does concern me that the host/hypervisor is left holding
>> the bag on frontswapped pages.  An evil/uncooperative/lazy guest can
>> just pump a whole lot of pages into the frontswap pool and leave them
>> there.  I guess this is mitigated by the fact that the API is designed
>> such that they can't update or read the data without also allowing the
>> hypervisor to drop the page (updates can fail destructively, and reads
>> are also destructive), so the guest can't use it as a clumsy extension
>> of their normal dedicated memory.
>
> Eventually you'll have to swap frontswap pages, or kill uncooperative
> guests.  At which point all of the simplicity is gone.

Killing guests is pretty simple.  Presumably the oom killer will get
kvm processes like anything else?

    J
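P.S.  To be concrete about what I mean above by "updates can fail
destructively, and reads are also destructive": semantics roughly along
the lines of the toy userspace sketch below.  This is only an
illustration with made-up names and types, not the API from the
frontswap patches themselves.

/*
 * Toy model of the put/get semantics described above (illustrative
 * only).  A put may refuse a page at any time, and doing so also
 * drops any older copy for that slot; a get always removes the copy
 * it returns.  So a guest can never rely on the pool as stable extra
 * memory -- every access gives the "hypervisor" a chance to forget
 * the page.
 */
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE  4096
#define POOL_SLOTS 1024

struct toy_pool {
	void *slot[POOL_SLOTS];		/* NULL == nothing stored */
};

/* Returns 0 on success.  On failure the old copy (if any) is gone too,
 * so the caller must fall back to writing the page to real swap. */
int toy_put(struct toy_pool *p, unsigned long off, const void *page)
{
	if (off >= POOL_SLOTS)
		return -1;

	/* A put always invalidates whatever was there before. */
	free(p->slot[off]);
	p->slot[off] = NULL;

	/* The "hypervisor" may decline to store the page at any time. */
	if (rand() % 4 == 0)
		return -1;

	p->slot[off] = malloc(PAGE_SIZE);
	if (!p->slot[off])
		return -1;
	memcpy(p->slot[off], page, PAGE_SIZE);
	return 0;
}

/* Returns 0 and fills @page on success, after which the stored copy is
 * freed; nonzero means the pool no longer has the page. */
int toy_get(struct toy_pool *p, unsigned long off, void *page)
{
	if (off >= POOL_SLOTS || !p->slot[off])
		return -1;

	memcpy(page, p->slot[off], PAGE_SIZE);
	free(p->slot[off]);		/* reads are destructive */
	p->slot[off] = NULL;
	return 0;
}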