Date: Wed, 9 Mar 2016 17:30:04 +0200
From: "Michael S. Tsirkin"
To: "Li, Liang Z"
Cc: Roman Kagan, "Dr. David Alan Gilbert", "ehabkost@redhat.com", "kvm@vger.kernel.org", "quintela@redhat.com", "linux-kernel@vger.kernel.org", "qemu-devel@nongnu.org", "linux-mm@kvack.org", "amit.shah@redhat.com", "pbonzini@redhat.com", "akpm@linux-foundation.org", "virtualization@lists.linux-foundation.org", "rth@twiddle.net", "riel@redhat.com"
Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
Message-ID: <20160309172929-mutt-send-email-mst@redhat.com>
References: <20160304102346.GB2479@rkaganb.sw.ru> <20160304163246-mutt-send-email-mst@redhat.com> <20160305214748-mutt-send-email-mst@redhat.com> <20160307110852-mutt-send-email-mst@redhat.com> <20160309142851.GA9715@rkaganb.sw.ru>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Mar 09, 2016 at 03:27:54PM +0000, Li, Liang Z wrote:
> > On Mon, Mar 07, 2016 at 01:40:06PM +0200, Michael S. Tsirkin wrote:
> > > On Mon, Mar 07, 2016 at 06:49:19AM +0000, Li, Liang Z wrote:
> > > > > > No. And it's exactly what I mean. The ballooned memory is still
> > > > > > processed during live migration without skipping. The live
> > > > > > migration code is in migration/ram.c.
> > > > >
> > > > > So if the guest acknowledged VIRTIO_BALLOON_F_MUST_TELL_HOST, we can
> > > > > teach qemu to skip these pages.
> > > > > Want to write a patch to do this?
> > > >
> > > > Yes, we really can teach qemu to skip these pages and it's not hard.
> > > > The problem is the poor performance; this PV solution
> > >
> > > Balloon is always PV. And do not call patches solutions please.
> > >
> > > > is aimed at making it more
> > > > efficient and reducing the performance impact on the guest.
> > >
> > > We need to get a bit beyond this. You are making multiple changes; it
> > > seems to make sense to split it all up and analyse each change
> > > separately.
> >
> > Couldn't agree more.
> >
> > There are three stages in this optimization:
> >
> > 1) choosing which pages to skip
> >
> > 2) communicating them from guest to host
> >
> > 3) skipping the transfer of uninteresting pages to the remote side on migration
> >
> > For (3) there seems to be low-hanging fruit: amending
> > migration/ram.c:is_zero_range() to consult /proc/self/pagemap. This would
> > work for guest RAM that hasn't been touched yet or which has been
> > ballooned out.
> >
> > For (1) I've been trying to make the point that skipping clean pages is much
> > more likely to result in a noticeable benefit than skipping free pages only.
> >
> I am considering dropping the page cache before getting the free pages.
>
> > As for (2), we do seem to have a problem with the existing balloon:
> > according to your measurements it's very slow; besides, I guess it plays badly
>
> I didn't say communicating is slow. Even if it were, my solution uses a bitmap instead of
> PFNs; there is less data traffic, so it's faster than the existing balloon, which uses PFNs.

By how much?

> > with transparent huge pages (as both the guest and the host work with one
> > 4k page at a time). This is a problem for other use cases of balloon (e.g. as a
> > facility for resource management); tackling that appears a more natural
> > application for optimization efforts.
> >
> > Thanks,
> > Roman.