Date: Fri, 4 Mar 2016 11:14:12 +0300
From: Roman Kagan
To: "Li, Liang Z"
CC: "Dr. David Alan Gilbert", "ehabkost@redhat.com", "kvm@vger.kernel.org", "mst@redhat.com", "quintela@redhat.com", "linux-kernel@vger.kernel.org", "qemu-devel@nongnu.org", "linux-mm@kvack.org", "amit.shah@redhat.com", "pbonzini@redhat.com", "akpm@linux-foundation.org", "virtualization@lists.linux-foundation.org", "rth@twiddle.net"
Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
Message-ID: <20160304081411.GD9100@rkaganb.sw.ru>
References: <1457001868-15949-1-git-send-email-liang.z.li@intel.com> <20160303174615.GF2115@work-vm>

On Fri, Mar 04, 2016 at 01:52:53AM +0000, Li, Liang Z wrote:
> > I wonder if it would be possible to avoid the kernel changes by parsing
> > /proc/self/pagemap - if that can be used to detect unmapped/zero mapped
> > pages in the guest ram, would it achieve the same result?
>
> Only detecting the unmapped/zero-mapped pages is not enough. Consider a
> situation like case 2: it can't achieve the same result.

Your case 2 doesn't exist in the real world.  If people could stop their
main memory consumer in the guest prior to migration, they wouldn't need
live migration at all.  I tend to think you can safely assume there's no
free memory in the guest, so there's little point in optimizing for it.

OTOH it makes perfect sense to optimize for the unmapped memory that's
made up, in particular, by the balloon, and to consider inflating the
balloon right before migration, unless you already maintain it at the
optimal size for other reasons (e.g. a global resource manager
optimizing VM density).

Roman.
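
[Editorial aside: a minimal userspace sketch of the /proc/self/pagemap
approach Dave suggests above, for readers unfamiliar with it.  Each
64-bit pagemap entry describes one virtual page of the reading process,
and bit 63 is the "page present" flag, so a process can walk a buffer
and see which of its pages are actually backed by RAM.  This is not the
code from the patch series under discussion; the helper name and the
toy main() are purely illustrative.]

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

/* Return 1 if the page containing 'addr' is present in RAM, 0 if not,
 * -1 on read error.  Entries in /proc/self/pagemap are indexed by
 * virtual page number, 8 bytes each. */
static int page_is_present(int pagemap_fd, void *addr, long page_size)
{
    uint64_t entry;
    off_t offset = ((uintptr_t)addr / page_size) * sizeof(entry);

    if (pread(pagemap_fd, &entry, sizeof(entry), offset) !=
        (ssize_t)sizeof(entry))
        return -1;
    return (entry >> 63) & 1;          /* bit 63: page present */
}

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    size_t len = 16 * page_size;
    char *buf = malloc(len);
    int fd = open("/proc/self/pagemap", O_RDONLY);
    size_t i;

    if (!buf || fd < 0)
        return 1;

    buf[0] = 1;                        /* touch only the first page */

    for (i = 0; i < len / page_size; i++)
        printf("page %zu: present=%d\n", i,
               page_is_present(fd, buf + i * page_size, page_size));

    close(fd);
    free(buf);
    return 0;
}

[Only the page just touched should report present=1; the untouched
pages of the malloc'd buffer remain unmapped, which is exactly the
information a migration pass could use to skip them.]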