From: "Li, Liang Z" <liang.z.li@intel.com>
To: Jitendra Kolhe, amit.shah@redhat.com
Cc: dgilbert@redhat.com, ehabkost@redhat.com, kvm@vger.kernel.org, mst@redhat.com, quintela@redhat.com, linux-kernel@vger.kernel.org, qemu-devel@nongnu.org, linux-mm@kvack.org, pbonzini@redhat.com, akpm@linux-foundation.org, virtualization@lists.linux-foundation.org, rth@twiddle.net, mohan_parthasarathy@hpe.com, simhan@hpe.com
Subject: RE: [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization
Date: Thu, 10 Mar 2016 07:22:38 +0000
In-Reply-To: <1457593292-30686-1-git-send-email-jitendra.kolhe@hpe.com>

> On 3/8/2016 4:44 PM, Amit Shah wrote:
> > On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote:
> >>>>
> >>>> * Liang Li (liang.z.li@intel.com) wrote:
> >>>>> The current QEMU live migration implementation marks all of the
> >>>>> guest's RAM pages as dirty in the ram bulk stage; all of these
> >>>>> pages will be processed, which takes quite a lot of CPU cycles.
> >>>>>
> >>>>> From the guest's point of view, the contents of free pages don't
> >>>>> matter. We can make use of this fact and skip processing the
> >>>>> free pages in the ram bulk stage, which saves a lot of CPU cycles,
> >>>>> significantly reduces network traffic, and noticeably speeds up
> >>>>> the live migration process.
> >>>>>
> >>>>> This patch set is the QEMU side implementation.
> >>>>>
> >>>>> The virtio-balloon is extended so that QEMU can get the free page
> >>>>> information from the guest through virtio.
> >>>>>
> >>>>> After getting the free page information (a bitmap), QEMU can use
> >>>>> it to filter out the guest's free pages in the ram bulk stage.
> >>>>> This makes the live migration process much more efficient.
> >>>>
> >>>> Hi,
> >>>>   An interesting solution; I know a few different people have been
> >>>> looking at how to speed up ballooned VM migration.
> >>>>
> >>>
> >>> Ooh, different solutions for the same purpose, and both based on the
> >>> balloon.
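To make the cover letter's filtering step concrete, here is a minimal
sketch of the idea in plain C. All names here (filter_free_pages,
migration_bitmap, free_page_bitmap, the bit helpers) are invented for
illustration; this is not the actual patch series or QEMU code, just the
shape of the operation:

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static int test_bit(const unsigned long *map, size_t nr)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1UL;
}

static void clear_bit(unsigned long *map, size_t nr)
{
    map[nr / BITS_PER_LONG] &= ~(1UL << (nr % BITS_PER_LONG));
}

/*
 * Drop every page the guest reported as free from the set of pages the
 * ram bulk stage would otherwise send.  Returns the number of pages
 * filtered out, i.e. how much scanning and network traffic is saved.
 * A real implementation would work a whole long at a time; the per-bit
 * loop is kept here for clarity.
 */
static size_t filter_free_pages(unsigned long *migration_bitmap,
                                const unsigned long *free_page_bitmap,
                                size_t nr_pages)
{
    size_t pfn, skipped = 0;

    for (pfn = 0; pfn < nr_pages; pfn++) {
        if (test_bit(free_page_bitmap, pfn) &&
            test_bit(migration_bitmap, pfn)) {
            clear_bit(migration_bitmap, pfn);
            skipped++;
        }
    }
    return skipped;
}

The appeal of this shape is that clearing dirty-bitmap bits costs one
pass over a bitmap, whereas the alternative (scanning each free page's
contents) touches every byte of that part of guest RAM.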
> >>
> >> We were also trying to address a similar problem, without actually
> >> needing to modify the guest driver. Please find the patch details in
> >> the mail with the subject:
> >> migration: skip sending ram pages released by virtio-balloon driver
> >
> > The scope of this patch series seems to be wider: don't send free
> > pages to a dest at all, vs. don't send pages that are ballooned out.
> >
> > Amit
>
> Hi,
>
> Thanks for your response. The scope of this patch series doesn't seem to
> cover ballooned-out pages. To balloon out a guest ram page, the guest
> balloon driver does an alloc_page() and then returns the guest pfn to
> QEMU, so ballooned-out pages will not be seen as free ram pages by the
> guest. Thus we will still end up scanning ballooned-out pages (for zero
> pages) during migration. It would be ideal if we could have both
> solutions.
>

Agreed. For users who care about performance, just skipping the free
pages is enough; for users who have already turned on virtio-balloon,
your solution can take effect as well.

Liang

> Thanks,
> - Jitendra
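For reference, the per-page zero scan Jitendra mentions looks roughly
like the sketch below. The names are invented for illustration (QEMU's
own optimized helper for this is buffer_is_zero()); the point is that a
ballooned-out page, which typically reads back as zeroes, still pays
this full-page scan unless migration is told to skip it up front:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/*
 * Return true iff the page contains only zero bytes.  During migration
 * every candidate page gets a scan like this before being sent; that
 * scan is exactly the CPU cost that skipping known-free (or known
 * ballooned-out) pages avoids entirely.
 */
static bool page_is_zero(const void *page)
{
    const uint64_t *words = page;   /* guest RAM pages are 8-byte aligned */
    size_t i;

    for (i = 0; i < PAGE_SIZE / sizeof(uint64_t); i++) {
        if (words[i] != 0) {
            return false;
        }
    }
    return true;
}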