Subject: Re: [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler
To: Gavin Shan
From: Keqian Zhu
Message-ID: <30073114-339f-33dd-0168-b4d6bfbe88bc@huawei.com>
In-Reply-To: <20210315041844.64915-5-gshan@redhat.com>
References: <20210315041844.64915-1-gshan@redhat.com> <20210315041844.64915-5-gshan@redhat.com>
Date: Mon, 15 Mar 2021 16:25:19 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Gavin,

On 2021/3/15 12:18, Gavin Shan wrote:
> We needn't retrieve the memory slot again in user_mem_abort() because
> the corresponding memory slot has been passed from the caller. This

I think you are right. Though fault_ipa will be adjusted when we try to
use block mapping, fault_supports_stage2_huge_mapping() makes sure we're
not trying to map anything not covered by the memslot, so the adjusted
fault_ipa still belongs to the memslot.
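To make it concrete, the coverage part of that check is roughly the
following (a simplified sketch written from memory, with a made-up helper
name and the gpa/hva alignment check omitted; please refer to the real
fault_supports_stage2_huge_mapping() in mmu.c for the exact logic):

/*
 * Simplified paraphrase: a block mapping is only allowed when the whole
 * block-aligned range around hva lies inside the memslot's userspace
 * range, so rounding fault_ipa down to the block boundary cannot take
 * us outside the memslot.
 */
static bool block_range_covered_by_memslot(struct kvm_memory_slot *memslot,
					   unsigned long hva,
					   unsigned long map_size)
{
	unsigned long uaddr_start = memslot->userspace_addr;
	unsigned long uaddr_end = uaddr_start + memslot->npages * PAGE_SIZE;

	return (hva & ~(map_size - 1)) >= uaddr_start &&
	       (hva & ~(map_size - 1)) + map_size <= uaddr_end;
}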
> would save some CPU cycles. For example, the time used to write 1GB
> memory, which is backed by 2MB hugetlb pages and write-protected, is
> dropped by 6.8% from 928ms to 864ms.
>
> Signed-off-by: Gavin Shan
> ---
>  arch/arm64/kvm/mmu.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index a5a8ade9fde4..4a4abcccfafb 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -846,7 +846,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 */
>  	smp_rmb();
>
> -	pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);
> +	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
> +				   write_fault, &writable, NULL);

It's better to update the code comments at the same time.

>  	if (pfn == KVM_PFN_ERR_HWPOISON) {
>  		kvm_send_hwpoison_signal(hva, vma_shift);
>  		return 0;
> @@ -912,7 +913,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	/* Mark the page dirty only if the fault is handled successfully */
>  	if (writable && !ret) {
>  		kvm_set_pfn_dirty(pfn);
> -		mark_page_dirty(kvm, gfn);
> +		mark_page_dirty_in_slot(kvm, memslot, gfn);
>  	}
>
>  out_unlock:

Thanks,
Keqian.