From: Ben Gardon
Date: Tue, 27 Oct 2020 15:17:40 -0700
Subject: Re: [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing hugepage GFN mask
To: Sean Christopherson
Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, LKML
In-Reply-To: <20201027214300.1342-2-sean.j.christopherson@intel.com>
References: <20201027214300.1342-1-sean.j.christopherson@intel.com> <20201027214300.1342-2-sean.j.christopherson@intel.com>

On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson wrote:
>
> Add a helper to compute the GFN mask given a hugepage level, as KVM is
> accumulating quite a few users with the addition of the TDP MMU.
>
> Note, gcc is clever enough to use a single NEG instruction instead of
> SUB+NOT, i.e. use the more common "~(level - 1)" pattern instead of
> round_gfn_for_level()'s direct two's complement trickery.

As far as I can tell, this patch has no functional changes intended.
Please correct me if that's wrong.

>
> Signed-off-by: Sean Christopherson

Reviewed-by: Ben Gardon

> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/mmu/mmu.c          | 2 +-
>  arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
>  arch/x86/kvm/mmu/tdp_iter.c     | 2 +-
>  4 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d44858b69353..6ea046415f29 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -119,6 +119,7 @@
>  #define KVM_HPAGE_SIZE(x)      (1UL << KVM_HPAGE_SHIFT(x))
>  #define KVM_HPAGE_MASK(x)      (~(KVM_HPAGE_SIZE(x) - 1))
>  #define KVM_PAGES_PER_HPAGE(x) (KVM_HPAGE_SIZE(x) / PAGE_SIZE)
> +#define KVM_HPAGE_GFN_MASK(x)  (~(KVM_PAGES_PER_HPAGE(x) - 1))

NIT: I know x follows the convention on adjacent macros, but this
would be clearer to me if x was changed to level.
(Probably for all the macros in this block.)

>
>  static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
>  {
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 17587f496ec7..3bfc7ee44e51 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2886,7 +2886,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>                         disallowed_hugepage_adjust(*it.sptep, gfn, it.level,
>                                                    &pfn, &level);
>
> -               base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
> +               base_gfn = gfn & KVM_HPAGE_GFN_MASK(it.level);
>                 if (it.level == level)
>                         break;
>
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 50e268eb8e1a..76ee36f2afd2 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -698,7 +698,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
>                         disallowed_hugepage_adjust(*it.sptep, gw->gfn, it.level,
>                                                    &pfn, &level);
>
> -               base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
> +               base_gfn = gw->gfn & KVM_HPAGE_GFN_MASK(it.level);
>                 if (it.level == level)
>                         break;
>
> @@ -751,7 +751,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
>                               bool *write_fault_to_shadow_pgtable)
>  {
>         int level;
> -       gfn_t mask = ~(KVM_PAGES_PER_HPAGE(walker->level) - 1);
> +       gfn_t mask = KVM_HPAGE_GFN_MASK(walker->level);
>         bool self_changed = false;
>
>         if (!(walker->pte_access & ACC_WRITE_MASK ||
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index 87b7e16911db..c6e914c96641 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -17,7 +17,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
>
>  static gfn_t round_gfn_for_level(gfn_t gfn, int level)
>  {
> -       return gfn & -KVM_PAGES_PER_HPAGE(level);
> +       return gfn & KVM_HPAGE_GFN_MASK(level);
>  }
>
>  /*
> --
> 2.28.0
>
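For completeness, the equivalence the commit message leans on (for a
power-of-two count n, ~(n - 1) == -n in unsigned two's complement
arithmetic, so both expressions yield the same mask) can be checked with
a small stand-alone user-space program. The sketch below is only an
illustration of that point, not kernel code: PAGE_SHIFT, PG_LEVEL_4K,
KVM_HPAGE_SHIFT(), the sample gfn, and the assumption of a 64-bit
unsigned long are made up here, while KVM_PAGES_PER_HPAGE() and
KVM_HPAGE_GFN_MASK() mirror the definitions quoted above.

/*
 * Stand-alone sketch: verify that the old "& -KVM_PAGES_PER_HPAGE(level)"
 * form from round_gfn_for_level() and the new "& KVM_HPAGE_GFN_MASK(level)"
 * form compute the same base GFN for each hugepage level.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

/* Assumed values/definitions for illustration only. */
#define PAGE_SHIFT              12
#define PG_LEVEL_4K             1
#define KVM_HPAGE_SHIFT(x)      (PAGE_SHIFT + ((x) - PG_LEVEL_4K) * 9)

/* These two mirror the definitions quoted in the patch. */
#define KVM_HPAGE_SIZE(x)       (1UL << KVM_HPAGE_SHIFT(x))
#define KVM_PAGES_PER_HPAGE(x)  (KVM_HPAGE_SIZE(x) / (1UL << PAGE_SHIFT))
#define KVM_HPAGE_GFN_MASK(x)   (~(KVM_PAGES_PER_HPAGE(x) - 1))

int main(void)
{
	gfn_t gfn = 0x12345678abcdULL;	/* arbitrary sample GFN */
	int level;

	for (level = 1; level <= 3; level++) {	/* 4K, 2M, 1G */
		gfn_t old_form = gfn & -KVM_PAGES_PER_HPAGE(level);
		gfn_t new_form = gfn & KVM_HPAGE_GFN_MASK(level);

		assert(old_form == new_form);
		printf("level %d: base_gfn = 0x%llx\n", level,
		       (unsigned long long)new_form);
	}
	return 0;
}

Built with any C99 compiler on a 64-bit host, it prints the same base GFN
for both forms at every level, consistent with the no-functional-change
reading of the patch.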