Date: Mon, 8 Mar 2021 16:35:16 +0000
From: Will Deacon
To: Yanan Wang
Cc: Marc Zyngier, Catalin Marinas, James Morse, Julien Thierry,
	Suzuki K Poulose, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, wanghaibin.wang@huawei.com,
	yuzenghui@huawei.com
Subject: Re: [PATCH 1/2] KVM: arm64: Distinguish cases of allocating memcache more
	precisely
Message-ID: <20210308163515.GB26561@willie-the-truck>
References: <20210125141044.380156-1-wangyanan55@huawei.com>
	<20210125141044.380156-2-wangyanan55@huawei.com>
In-Reply-To: <20210125141044.380156-2-wangyanan55@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jan 25, 2021 at 10:10:43PM +0800, Yanan Wang wrote:
> With a guest translation fault, we don't really need the memcache pages
> when only installing a new entry to the existing page table or replacing
> the table entry with a block entry. And with a guest permission fault,
> we also don't need the memcache pages for a write_fault in dirty-logging
> time if VMs are not configured with huge mappings.
>
> The cases where allocations from memcache are required can be much more
> precisely distinguished by comparing fault_granule and vma_pagesize.
>
> Signed-off-by: Yanan Wang
> ---
>  arch/arm64/kvm/mmu.c | 25 ++++++++++++-------------
>  1 file changed, 12 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 7d2257cc5438..8e8549ea1d70 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -820,19 +820,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	gfn = fault_ipa >> PAGE_SHIFT;
>  	mmap_read_unlock(current->mm);
>
> -	/*
> -	 * Permission faults just need to update the existing leaf entry,
> -	 * and so normally don't require allocations from the memcache. The
> -	 * only exception to this is when dirty logging is enabled at runtime
> -	 * and a write fault needs to collapse a block entry into a table.
> -	 */
> -	if (fault_status != FSC_PERM || (logging_active && write_fault)) {
> -		ret = kvm_mmu_topup_memory_cache(memcache,
> -						 kvm_mmu_cache_min_pages(kvm));
> -		if (ret)
> -			return ret;
> -	}
> -
>  	mmu_seq = vcpu->kvm->mmu_notifier_seq;
>  	/*
>  	 * Ensure the read of mmu_notifier_seq happens before we call
> @@ -898,6 +885,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
>  		prot |= KVM_PGTABLE_PROT_X;
>
> +	/*
> +	 * Allocations from the memcache are required only when granule of the
> +	 * lookup level where a guest fault happened exceeds the vma_pagesize,
> +	 * which means new page tables will be created in the fault handlers.
> +	 */
> +	if (fault_granule > vma_pagesize) {
> +		ret = kvm_mmu_topup_memory_cache(memcache,
> +						 kvm_mmu_cache_min_pages(kvm));
> +		if (ret)
> +			return ret;
> +	}

This feels like it could bite us in future as the code evolves but people
forget to reconsider this check. Maybe it would be better to extend this
patch so that we handle getting -ENOMEM back and try a second time after
topping up the memcache?

Will