From: zhukeqian
Date: Thu, 16 Apr 2020 11:27:57 +0800
Subject: Re: [PATCH v2] KVM/arm64: Support enabling dirty log gradually in small chunks
Cc: Marc Zyngier, Paolo Bonzini, James Morse, Julien Thierry, Will Deacon,
    Suzuki K Poulose, Sean Christopherson, Jay Zhou
In-Reply-To: <20200413122023.52583-1-zhukeqian1@huawei.com>
References: <20200413122023.52583-1-zhukeqian1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Marc,

In the RFC patch, I still write-protected huge pages when DIRTY_LOG_INITIALLY_ALL_SET
was enabled by userspace. I have since found that both huge pages and normal pages
can be write-protected during log clear, so this formal patch is pretty simple now.

Thanks,
Keqian

On 2020/4/13 20:20, Keqian Zhu wrote:
> There is already support for enabling dirty log gradually in small chunks
> for x86 in commit 3c9bd4006bfc ("KVM: x86: enable dirty log gradually in
> small chunks"). This adds support for arm64.
>
> x86 still write-protects all huge pages when DIRTY_LOG_INITIALLY_ALL_SET
> is enabled. However, for arm64, both huge pages and normal pages can be
> write-protected gradually by userspace.
>
> On the Huawei Kunpeng 920 2.6GHz platform, I ran some tests on 128G
> Linux VMs with different page sizes. The memory pressure is 127G in each
> case. The time taken by memory_global_dirty_log_start in QEMU is listed
> below:
>
> Page Size      Before      After Optimization
>   4K            650ms            1.8ms
>   2M              4ms            1.8ms
>   1G              2ms            1.8ms
>
> Besides the time reduction, the biggest gain is that we minimize the
> performance side effect on the guest after enabling dirty logging
> (caused by dissolving huge pages and marking memslots dirty).
>
> Signed-off-by: Keqian Zhu
> ---
>  Documentation/virt/kvm/api.rst    |  2 +-
>  arch/arm64/include/asm/kvm_host.h |  3 +++
>  virt/kvm/arm/mmu.c                | 12 ++++++++++--
>  3 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index efbbe570aa9b..0017f63fa44f 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -5777,7 +5777,7 @@ will be initialized to 1 when created. This also improves performance because
>  dirty logging can be enabled gradually in small chunks on the first call
>  to KVM_CLEAR_DIRTY_LOG. KVM_DIRTY_LOG_INITIALLY_SET depends on
>  KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (it is also only available on
> -x86 for now).
> +x86 and arm64 for now).
>
>  KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 was previously available under the name
>  KVM_CAP_MANUAL_DIRTY_LOG_PROTECT, but the implementation had bugs that make
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 32c8a675e5a4..a723f84fab83 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -46,6 +46,9 @@
>  #define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
>  #define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
>
> +#define KVM_DIRTY_LOG_MANUAL_CAPS	(KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
> +					 KVM_DIRTY_LOG_INITIALLY_SET)
> +
>  DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
>
>  extern unsigned int kvm_sve_max_vl;
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index e3b9ee268823..1077f653a611 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -2265,8 +2265,16 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>  	 * allocated dirty_bitmap[], dirty pages will be tracked while the
>  	 * memory slot is write protected.
>  	 */
> -	if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
> -		kvm_mmu_wp_memory_region(kvm, mem->slot);
> +	if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
> +		/*
> +		 * If we're with initial-all-set, we don't need to write
> +		 * protect any pages because they're all reported as dirty.
> +		 * Huge pages and normal pages will be write protected gradually.
> +		 */
> +		if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) {
> +			kvm_mmu_wp_memory_region(kvm, mem->slot);
> +		}
> +	}
>  }
>
>  int kvm_arch_prepare_memory_region(struct kvm *kvm,
>
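
For context, here is a rough userspace sketch (not part of this patch; vm_fd, the
slot id and the helper names are made-up placeholders, while the ioctls, structures
and flag bits are the ones documented for KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 in
Documentation/virt/kvm/api.rst) of how a VMM like QEMU would opt into
initially-all-set dirty logging and then write protect memory chunk by chunk
through KVM_CLEAR_DIRTY_LOG:

/*
 * Illustrative only: shows the ioctl sequence this patch relies on.
 * Assumes headers new enough to define KVM_DIRTY_LOG_INITIALLY_SET.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Opt in: manual clear-to-protect plus "initially all set". */
static int enable_dirty_log_initially_set(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
		.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
			   KVM_DIRTY_LOG_INITIALLY_SET,
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

/*
 * Clear one chunk of a slot's dirty log. With this patch, on arm64 the
 * corresponding stage2 range (huge pages included) is write protected
 * here, instead of all at once when dirty logging is first enabled.
 */
static int clear_dirty_log_chunk(int vm_fd, __u32 slot, __u64 first_page,
				 __u32 num_pages, void *bitmap)
{
	struct kvm_clear_dirty_log clr = {
		.slot = slot,
		.first_page = first_page,	/* must be a multiple of 64 */
		.num_pages = num_pages,
		.dirty_bitmap = bitmap,		/* one bit per page to clear */
	};

	return ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clr);
}

In practice userspace would first query KVM_CHECK_EXTENSION on
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 and only pass the bits it reports as supported.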