Subject: Re: [PATCH 2/2] KVM: arm64: Skip the cache flush when coalescing tables into a block
From: "wangyanan (Y)"
To: Marc Zyngier
CC: Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Date: Tue, 9 Mar 2021 17:02:29 +0800
References: <20210125141044.380156-1-wangyanan55@huawei.com> <20210125141044.380156-3-wangyanan55@huawei.com> <20210308163454.GA26561@willie-the-truck> <8a947c73-16e9-7ca7-c185-d4c951938505@huawei.com> <87y2ewyawn.wl-maz@kernel.org>
In-Reply-To: <87y2ewyawn.wl-maz@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/3/9 16:43, Marc Zyngier wrote:
> On Tue, 09 Mar 2021 08:34:43 +0000,
> "wangyanan (Y)" wrote:
>>
>> On 2021/3/9 0:34, Will Deacon wrote:
>>> On Mon, Jan 25, 2021 at 10:10:44PM +0800, Yanan Wang wrote:
>>>> After dirty-logging is stopped for a VM configured with huge mappings,
>>>> KVM will recover the table mappings back to block mappings. As we only
>>>> replace the existing page tables with a block entry and the cacheability
>>>> has not been changed, the cache maintenance operations can be skipped.
>>>>
>>>> Signed-off-by: Yanan Wang
>>>> ---
>>>>  arch/arm64/kvm/mmu.c | 12 +++++++++---
>>>>  1 file changed, 9 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>>> index 8e8549ea1d70..37b427dcbc4f 100644
>>>> --- a/arch/arm64/kvm/mmu.c
>>>> +++ b/arch/arm64/kvm/mmu.c
>>>> @@ -744,7 +744,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>  {
>>>>  	int ret = 0;
>>>>  	bool write_fault, writable, force_pte = false;
>>>> -	bool exec_fault;
>>>> +	bool exec_fault, adjust_hugepage;
>>>>  	bool device = false;
>>>>  	unsigned long mmu_seq;
>>>>  	struct kvm *kvm = vcpu->kvm;
>>>> @@ -872,12 +872,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>  		mark_page_dirty(kvm, gfn);
>>>>  	}
>>>>
>>>> -	if (fault_status != FSC_PERM && !device)
>>>> +	/*
>>>> +	 * There is no necessity to perform cache maintenance operations if we
>>>> +	 * will only replace the existing table mappings with a block mapping.
>>>> +	 */
>>>> +	adjust_hugepage = fault_granule < vma_pagesize ? true : false;
>>> nit: you don't need the '? true : false' part
>>>
>>> That said, your previous patch checks for 'fault_granule > vma_pagesize',
>>> so I'm not sure the local variable helps all that much here because it
>>> obscures the size checks in my opinion. It would be more straightforward
>>> if we could structure the logic as:
>>>
>>> 	if (fault_granule < vma_pagesize) {
>>>
>>> 	} else if (fault_granule > vma_pagesize) {
>>>
>>> 	} else {
>>>
>>> 	}
>>>
>>> With some comments describing what we can infer about the memcache and cache
>>> maintenance requirements for each case.
>> Thanks for your suggestion here, Will.
>> But I have resent another newer series [1] (KVM: arm64: Improve
>> efficiency of stage2 page table) recently, which has the same theme but
>> different solutions that I think are better.
>> [1] https://lore.kernel.org/lkml/20210208112250.163568-1-wangyanan55@huawei.com/
>>
>> Could you please comment on that series? I think it can be found in
>> your inbox :).
> There were already a bunch of comments on that series, and I stopped
> at the point where the cache maintenance was broken. Please respin
> that series if you want further feedback on it.

Ok, I will send a new version later.

> In the future, if you deprecate a series (which is completely
> understandable), please leave a note on the list with a pointer to the
> new series so that people don't waste time reviewing an obsolete
> series. Or post the new series with a new version number so that it is
> obvious that the original series has been superseded.

I apologize for this, I will be more careful in the future.

Thanks,
Yanan

> Thanks,
>
> M.