Date: Wed, 9 Mar 2016 17:28:54 +0300
From: Roman Kagan
To: "Michael S. Tsirkin"
Cc: "Li, Liang Z", "Dr. David Alan Gilbert", ehabkost@redhat.com,
	kvm@vger.kernel.org, quintela@redhat.com, linux-kernel@vger.kernel.org,
	qemu-devel@nongnu.org, linux-mm@kvack.org, amit.shah@redhat.com,
	pbonzini@redhat.com, akpm@linux-foundation.org,
	virtualization@lists.linux-foundation.org, rth@twiddle.net,
	riel@redhat.com
Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
Message-ID: <20160309142851.GA9715@rkaganb.sw.ru>
In-Reply-To: <20160307110852-mutt-send-email-mst@redhat.com>

On Mon, Mar 07, 2016 at 01:40:06PM +0200, Michael S. Tsirkin wrote:
> On Mon, Mar 07, 2016 at 06:49:19AM +0000, Li, Liang Z wrote:
> > > > No. And it's exactly what I mean. The ballooned memory is still
> > > > processed during live migration without skipping. The live
> > > > migration code is in migration/ram.c.
> > >
> > > So if guest acknowledged VIRTIO_BALLOON_F_MUST_TELL_HOST, we can
> > > teach qemu to skip these pages.
> > > Want to write a patch to do this?
> > >
> > Yes, we really can teach qemu to skip these pages and it's not hard.
> > The problem is the poor performance, this PV solution
>
> Balloon is always PV.  And do not call patches solutions please.
>
> > is aimed to make it more efficient and reduce the performance impact
> > on guest.
>
> We need to get a bit beyond this.  You are making multiple changes, it
> seems to make sense to split it all up, and analyse each change
> separately.

Couldn't agree more.

There are three stages in this optimization:

1) choosing which pages to skip

2) communicating them from the guest to the host

3) skipping the transfer of uninteresting pages to the remote side
   during migration

For (3) there seems to be low-hanging fruit: amend
migration/ram.c:is_zero_range() to consult /proc/self/pagemap.  This
would work both for guest RAM that hasn't been touched yet and for RAM
that has been ballooned out (a sketch of such a pagemap lookup follows
below).

For (1) I've been trying to make the point that skipping clean pages is
much more likely to result in a noticeable benefit than skipping free
pages only.
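Below is a minimal sketch of what such a pagemap-based check for (3)
might look like; it is not the actual migration/ram.c change, and the
helper name, fd handling and error policy are made up for illustration.
A page whose pagemap entry has neither the "present" nor the "swapped"
bit set reads back as zeroes, so migration could skip it without even
faulting it in:

/*
 * Hedged sketch (not an actual QEMU patch): decide whether a host
 * virtual page backing guest RAM is unpopulated, i.e. never touched
 * or discarded by the balloon via MADV_DONTNEED, by consulting
 * /proc/self/pagemap.
 */
#include <stdint.h>
#include <stdbool.h>
#include <unistd.h>
#include <fcntl.h>

#define PM_PRESENT  (1ULL << 63)        /* page resident in RAM */
#define PM_SWAPPED  (1ULL << 62)        /* page in swap */

static bool page_is_unpopulated(int pagemap_fd, void *hva)
{
    uint64_t entry;
    long page_size = sysconf(_SC_PAGESIZE);
    off_t off = ((uintptr_t)hva / page_size) * sizeof(entry);

    if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry)) {
        return false;   /* on error, err on the side of sending the page */
    }
    return !(entry & (PM_PRESENT | PM_SWAPPED));
}

/* usage: int pagemap_fd = open("/proc/self/pagemap", O_RDONLY); */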
As for (2), we do seem to have a problem with the existing balloon:
according to your measurements it's very slow; besides, I guess it
interacts badly with transparent huge pages, since both the guest and
the host work on one 4k page at a time (see the sketch below).  This is
a problem for the other use cases of the balloon too (e.g. as a
facility for resource management), so tackling it looks like a more
natural target for optimization efforts.

Thanks,
Roman.
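To illustrate the 4k-at-a-time point: the virtio balloon protocol hands
the host individual 4k page frame numbers, and the host discards each
one with a page-sized madvise().  A rough sketch of the host-side
discard, with hypothetical names rather than QEMU's actual code:

#include <sys/mman.h>
#include <stdint.h>

#define BALLOON_PAGE_SHIFT 12           /* protocol-fixed 4k granularity */
#define BALLOON_PAGE_SIZE  (1UL << BALLOON_PAGE_SHIFT)

/*
 * Sketch of discarding one ballooned guest page on the host
 * (illustrative only).  Each MADV_DONTNEED covers a single 4k page, so
 * the kernel has to split any 2M transparent huge page it falls inside;
 * the guest similarly hands pages over in 4k units.
 */
static void discard_balloon_page(void *guest_ram_base, uint64_t pfn)
{
    void *addr = (char *)guest_ram_base + (pfn << BALLOON_PAGE_SHIFT);

    madvise(addr, BALLOON_PAGE_SIZE, MADV_DONTNEED);
}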