From: "Li, Liang Z" <liang.z.li@intel.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Roman Kagan <rkagan@virtuozzo.com>,
	ehabkost@redhat.com, kvm@vger.kernel.org, mst@redhat.com,
	quintela@redhat.com, linux-kernel@vger.kernel.org,
	qemu-devel@nongnu.org, linux-mm@kvack.org, amit.shah@redhat.com,
	pbonzini@redhat.com, akpm@linux-foundation.org,
	virtualization@lists.linux-foundation.org, rth@twiddle.net
Subject: RE: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
Date: Fri, 4 Mar 2016 09:12:12 +0000

> * Roman Kagan (rkagan@virtuozzo.com) wrote:
> > On Fri, Mar 04, 2016 at 08:23:09AM +0000, Li, Liang Z wrote:
> > > > On Thu, Mar 03, 2016 at 05:46:15PM +0000, Dr. David Alan Gilbert wrote:
> > > > > * Liang Li (liang.z.li@intel.com) wrote:
> > > > > > The current QEMU live migration implementation marks all of the
> > > > > > guest's RAM pages as dirtied in the ram bulk stage; all these
> > > > > > pages will be processed, and that takes quite a lot of CPU cycles.
> > > > > >
> > > > > > From the guest's point of view, it doesn't care about the content
> > > > > > of free pages. We can make use of this fact and skip processing
> > > > > > the free pages in the ram bulk stage; this can save a lot of CPU
> > > > > > cycles and reduce the network traffic significantly, while
> > > > > > noticeably speeding up the live migration process.
> > > > > >
> > > > > > This patch set is the QEMU side implementation.
> > > > > >
> > > > > > The virtio-balloon is extended so that QEMU can get the free
> > > > > > pages information from the guest through virtio.
> > > > > >
> > > > > > After getting the free pages information (a bitmap), QEMU can
> > > > > > use it to filter out the guest's free pages in the ram bulk
> > > > > > stage. This makes the live migration process much more efficient.
> > > > > Hi,
> > > > >   An interesting solution; I know a few different people have
> > > > > been looking at how to speed up ballooned VM migration.
> > > > >
> > > > > I wonder if it would be possible to avoid the kernel changes
> > > > > by parsing /proc/self/pagemap - if that can be used to detect
> > > > > unmapped/zero mapped pages in the guest ram, would it achieve
> > > > > the same result?
> > > >
> > > > Yes, I was about to suggest the same thing: it's simple and makes
> > > > use of the existing infrastructure. And you wouldn't need to care
> > > > if the pages were unmapped by ballooning or anything else
> > > > (alternative balloon implementations, not yet touched by the
> > > > guest, etc.). Besides, you wouldn't need to synchronize with the guest.
> > > >
> > > > Roman.
> > >
> > > The unmapped/zero mapped pages can be detected by parsing
> > > /proc/self/pagemap, but the free pages can't be detected by this.
> > > Imagine an application that allocates a large amount of memory and,
> > > after using it, frees the memory; then live migration happens. All
> > > these free pages will be processed and sent to the destination,
> > > which is not optimal.
> >
> > First, the likelihood of such a situation is marginal; there's no
> > point optimizing for it specifically.
> >
> > And second, even if that happens, you inflate the balloon right before
> > the migration and the free memory will get unmapped very quickly, so
> > this case is covered nicely by the same technique that works for more
> > realistic cases, too.
>
> Although I wonder which is cheaper; that would be fairly expensive for the
> guest, wouldn't it? And you'd somehow have to kick the guest before
> migration to do the ballooning - and how long would you wait for it to
> finish?

About 5 seconds for an 8G guest ballooned down to 1G. Getting the free
pages bitmap takes about 20ms for an 8G idle guest.

Liang

>
> Dave
>
> > Roman.
>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
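
For reference, a minimal sketch of the bulk-stage filtering step the cover
letter above describes: once QEMU has the guest's free-page bitmap, the
migration dirty bitmap can simply be masked against it so that free pages
are never processed. The function name and the one-bit-per-page layout are
assumptions for illustration, not code from the patch set.

    #include <stddef.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /*
     * Clear the "dirty" bit of every page the guest reported as free,
     * so that the ram bulk stage skips those pages entirely.  Both
     * bitmaps cover the same range of guest page frame numbers, one
     * bit per page.  (Illustrative sketch; not the actual patch code.)
     */
    static void filter_out_guest_free_pages(unsigned long *migration_bitmap,
                                            const unsigned long *free_page_bitmap,
                                            size_t nr_pages)
    {
        size_t i, nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

        for (i = 0; i < nr_longs; i++) {
            /* A page stays dirty only if the guest did not report it free. */
            migration_bitmap[i] &= ~free_page_bitmap[i];
        }
    }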
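
And a sketch of the /proc/self/pagemap approach Dave suggests in the thread.
Each 64-bit pagemap entry records, among other things, whether the virtual
page is present in RAM (bit 63) or swapped out (bit 62); a page that is
neither can be treated as unmapped and skipped in the bulk stage. Note this
detects only unmapped/never-touched pages, not the "freed but still mapped"
pages Liang points out. Standalone demo, error handling trimmed:

    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PAGEMAP_PRESENT (1ULL << 63)   /* page present in RAM */
    #define PAGEMAP_SWAPPED (1ULL << 62)   /* page swapped out */

    /* Returns true if the virtual page containing 'addr' is unmapped. */
    static bool page_is_unmapped(int pagemap_fd, uintptr_t addr)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        uint64_t entry = 0;
        off_t offset = (off_t)(addr / page_size) * sizeof(entry);

        if (pread(pagemap_fd, &entry, sizeof(entry), offset) != sizeof(entry))
            return false;   /* be conservative: treat read failure as mapped */

        return !(entry & (PAGEMAP_PRESENT | PAGEMAP_SWAPPED));
    }

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) {
            perror("open /proc/self/pagemap");
            return 1;
        }

        /* A fresh anonymous mapping is not populated until it is touched. */
        char *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        printf("before touch: unmapped=%d\n", page_is_unmapped(fd, (uintptr_t)p));
        p[0] = 1;   /* fault the page in */
        printf("after touch:  unmapped=%d\n", page_is_unmapped(fd, (uintptr_t)p));

        munmap(p, page_size);
        close(fd);
        return 0;
    }

On recent kernels this runs unprivileged: the PFN bits of pagemap entries
are zeroed for non-root readers, but the present/swapped bits used here
remain visible.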