Date: Fri, 4 Mar 2016 12:36:58 +0200
From: "Michael S. Tsirkin"
To: "Li, Liang Z"
Cc: "Dr. David Alan Gilbert", Roman Kagan, "ehabkost@redhat.com",
    "kvm@vger.kernel.org", "quintela@redhat.com",
    "linux-kernel@vger.kernel.org", "qemu-devel@nongnu.org",
    "linux-mm@kvack.org", "amit.shah@redhat.com", "pbonzini@redhat.com",
    "akpm@linux-foundation.org", "virtualization@lists.linux-foundation.org",
    "rth@twiddle.net"
Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

On Fri, Mar 04, 2016 at 10:11:00AM +0000, Li, Liang Z wrote:
> > On Fri, Mar 04, 2016 at 09:12:12AM +0000, Li, Liang Z wrote:
> > > > Although I wonder which is cheaper; that would be fairly expensive
> > > > for the guest, wouldn't it? And you'd somehow have to kick the guest
> > > > before migration to do the ballooning - and how long would you wait
> > > > for it to finish?
> > >
> > > About 5 seconds for an 8G guest ballooned down to 1G. Getting the
> > > free pages bitmap takes about 20ms for an 8G idle guest.
> > >
> > > Liang
> >
> > Where is the time spent though? Allocating within the guest?
> > Or passing the info to the host?
> > If the former, we can use the existing inflate/deflate vqs:
> > have the guest put each free page on the inflate vq, then on the deflate vq.
>
> Maybe I was not clear enough.
>
> I mean that if we inflate the balloon before live migration, it takes
> about 5 seconds for the inflating operation to finish on an 8GB guest.

And these 5 seconds are spent where?

> For the PV solution, there is no need to inflate the balloon before live
> migration. The only cost is traversing the free_list to construct the
> free pages bitmap, which takes about 20ms for an 8GB idle guest (less if
> there are fewer free pages); passing the free pages info to the host
> takes about an extra 3ms.
>
> Liang

So now let's please stop talking about solutions at a high level and
discuss the interface changes you are making in detail.
What makes it faster? A better host/guest interface? No need to go
through the buddy allocator within the guest? Fewer interrupts?
Something else?

> > --
> > MST
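
For context, the guest-side cost Liang quotes (walking the free_list to build
a bitmap) roughly corresponds to a single pass over the buddy allocator's free
lists, similar in spirit to the kernel's mark_free_pages(). Below is a rough
kernel-side sketch of that idea; the function name, bitmap argument and
structure are hypothetical and are not taken from the actual RFC patches:

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/bitops.h>

/*
 * Rough sketch only: walk the buddy free lists and mark every free
 * page's PFN in a bitmap that would later be handed to the host.
 * Names here are illustrative, not the actual patch code.
 */
static void mark_free_pages_in_bitmap(unsigned long *bitmap,
				      unsigned long max_pfn)
{
	struct zone *zone;
	unsigned int order, t;
	struct page *page;
	unsigned long flags, pfn, i;

	for_each_populated_zone(zone) {
		/* Hold the zone lock so the free lists are stable. */
		spin_lock_irqsave(&zone->lock, flags);
		for (order = 0; order < MAX_ORDER; order++) {
			for (t = 0; t < MIGRATE_TYPES; t++) {
				list_for_each_entry(page,
					&zone->free_area[order].free_list[t],
					lru) {
					pfn = page_to_pfn(page);
					/* A buddy of this order covers 1 << order pages. */
					for (i = 0; i < (1UL << order); i++)
						if (pfn + i < max_pfn)
							set_bit(pfn + i, bitmap);
				}
			}
		}
		spin_unlock_irqrestore(&zone->lock, flags);
	}
}

If the 20ms figure comes from a walk like this, it is a read-only traversal
under the zone lock with no page allocation or ballooning involved; the host
would only need to receive the resulting bitmap.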