Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753859AbcCJM3v (ORCPT ); Thu, 10 Mar 2016 07:29:51 -0500
Received: from mx1.redhat.com ([209.132.183.28]:57674 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751424AbcCJM3m (ORCPT ); Thu, 10 Mar 2016 07:29:42 -0500
Date: Thu, 10 Mar 2016 14:29:34 +0200
From: "Michael S. Tsirkin"
To: "Li, Liang Z"
Cc: Roman Kagan, "Dr. David Alan Gilbert", "ehabkost@redhat.com",
	"kvm@vger.kernel.org", "quintela@redhat.com",
	"linux-kernel@vger.kernel.org", "qemu-devel@nongnu.org",
	"linux-mm@kvack.org", "amit.shah@redhat.com", "pbonzini@redhat.com",
	"akpm@linux-foundation.org", "virtualization@lists.linux-foundation.org",
	"rth@twiddle.net", "riel@redhat.com"
Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
Message-ID: <20160310122934.GB8144@redhat.com>
References: <20160304163246-mutt-send-email-mst@redhat.com>
	<20160305214748-mutt-send-email-mst@redhat.com>
	<20160307110852-mutt-send-email-mst@redhat.com>
	<20160309142851.GA9715@rkaganb.sw.ru>
	<20160309172929-mutt-send-email-mst@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.38]); Thu, 10 Mar 2016 12:29:41 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2703
Lines: 71

On Thu, Mar 10, 2016 at 01:41:16AM +0000, Li, Liang Z wrote:
> > > > > > Yes, we really can teach qemu to skip these pages and it's not hard.
> > > > > > The problem is the poor performance, this PV solution
> > > > >
> > > > > Balloon is always PV. And do not call patches solutions please.
> > > > >
> > > > > > is aimed to make it more
> > > > > > efficient and reduce the performance impact on guest.
> > > > >
> > > > > We need to get a bit beyond this.
> > > > > You are making multiple
> > > > > changes, it seems to make sense to split it all up, and analyse
> > > > > each change separately.
> > > >
> > > > Couldn't agree more.
> > > >
> > > > There are three stages in this optimization:
> > > >
> > > > 1) choosing which pages to skip
> > > >
> > > > 2) communicating them from guest to host
> > > >
> > > > 3) skipping the transfer of uninteresting pages to the remote side
> > > > on migration
> > > >
> > > > For (3) there seems to be a low-hanging fruit: amend
> > > > migration/ram.c:is_zero_range() to consult /proc/self/pagemap. This
> > > > would work for guest RAM that hasn't been touched yet or which has
> > > > been ballooned out.
> > > >
> > > > For (1) I've been trying to make the point that skipping clean pages
> > > > is much more likely to result in a noticeable benefit than skipping
> > > > free pages only.
> > > >
> > >
> > > I am considering dropping the page cache before getting the free pages.
> > >
> > > > As for (2), we do seem to have a problem with the existing balloon:
> > > > according to your measurements it's very slow; besides, I guess it
> > > > plays badly
> > >
> > > I didn't say communicating is slow. Even if it were very slow, my
> > > solution uses a bitmap instead of PFNs, so there is less data traffic
> > > and it's faster than the existing balloon, which uses PFNs.
> >
> > By how much?
> >
>
> Haven't measured yet.
> To identify a page, 1 bit is needed if using a bitmap; 4 bytes (32 bits)
> are needed if using a PFN.
>
> For a guest with 8GB RAM, the corresponding free page bitmap size is 256KB,
> and the corresponding total PFN size is 8192KB. Assuming the inflating size
> is 7GB, the total PFN size is 7168KB.

Yes, but this is not how the balloon works; instead, it reuses a single
4K page multiple times. We can also trade off more memory for speed if
we want to; it's completely up to the guest.

>
> Maybe this is not the point.
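[Editorial note: the bitmap-vs-PFN arithmetic quoted above can be checked with a few lines of Python. This is a back-of-the-envelope sketch assuming 4 KiB pages and 32-bit PFNs, as in the quoted estimate; it is not code from the patch series.]

```python
# Compare the two guest-to-host encodings discussed in the thread:
# a free-page bitmap (1 bit per page) vs a PFN list (4 bytes per page).

PAGE_SIZE = 4 * 1024   # 4 KiB pages, as assumed in the thread
PFN_BYTES = 4          # 32-bit PFN per the quoted estimate
GiB = 1024 ** 3

def bitmap_size(ram_bytes):
    """Bytes needed for a bitmap with one bit per page of RAM."""
    pages = ram_bytes // PAGE_SIZE
    return pages // 8

def pfn_list_size(ram_bytes):
    """Bytes needed to list every page's PFN individually."""
    pages = ram_bytes // PAGE_SIZE
    return pages * PFN_BYTES

print(bitmap_size(8 * GiB) // 1024)    # 256  (KB, the 8GB-guest bitmap)
print(pfn_list_size(8 * GiB) // 1024)  # 8192 (KB, PFNs for all 8GB)
print(pfn_list_size(7 * GiB) // 1024)  # 7168 (KB, PFNs for a 7GB inflate)
```

This reproduces the 256KB / 8192KB / 7168KB figures quoted above; note MST's caveat that the existing balloon does not send all PFNs in one batch but reuses a single 4K page of PFNs repeatedly.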
>
> Liang
>
> > > > with transparent huge pages (as both the guest and the host work
> > > > with one 4k page at a time). This is a problem for other use cases
> > > > of balloon (e.g. as a facility for resource management); tackling
> > > > that appears a more natural application for optimization efforts.
> > > >
> > > > Thanks,
> > > > Roman.
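[Editorial note: Roman's suggestion for stage (3), consulting /proc/self/pagemap, can be sketched as below. This is a minimal Linux-only illustration of the pagemap interface, not QEMU code; the helper name `page_is_present` is hypothetical. Each pagemap entry is a 64-bit word per virtual page, with bit 63 set when the page is present in RAM, per Documentation/admin-guide/mm/pagemap.rst.]

```python
# Check whether the page containing a virtual address is backed by RAM.
# Pages never touched (or ballooned out and zapped) read back as not
# present, which is exactly what would let migration skip them.

import os
import struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def page_is_present(vaddr):
    """Return True if the page containing vaddr is present in RAM."""
    with open("/proc/self/pagemap", "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)     # one 64-bit entry per page
        entry = struct.unpack("<Q", f.read(8))[0]
        return bool(entry & (1 << 63))       # bit 63: page present
```

A freshly mmap'd anonymous page reports not-present until it is first written; after a write it reports present. (Since Linux 4.2 the PFN field of each entry is zeroed for unprivileged readers, but the present bit remains visible.)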