Date: Wed, 8 May 2024 07:55:02 -0700
Subject: Re: [PATCH] KVM: x86/mmu: Only allocate shadowed translation cache for sp->role.level <= KVM_MAX_HUGEPAGE_LEVEL
From: Sean Christopherson
To: Hou Wenlong
Cc: kvm@vger.kernel.org, Lai Jiangshan, Paolo Bonzini, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="us-ascii"

On Wed, May 08, 2024, Hou Wenlong wrote:
> Only the indirect SP with sp->role.level <= KVM_MAX_HUGEPAGE_LEVEL might
> have leaf gptes, so allocation of shadowed translation cache is needed
> only for it. Additionally, use sp->shadowed_translation to determine
> whether to use the information in shadowed translation cache or not.
>
> Suggested-by: Lai Jiangshan
> Signed-off-by: Hou Wenlong
> ---
>  arch/x86/kvm/mmu/mmu.c | 27 +++++++++++++++++++--------
>  1 file changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index fc3b59b59ee1..8be987d0f05e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -716,12 +716,12 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp);
>
>  static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
>  {
> +	if (sp->shadowed_translation)
> +		return sp->shadowed_translation[index] >> PAGE_SHIFT;
> +
>  	if (sp->role.passthrough)
>  		return sp->gfn;
>
> -	if (!sp->role.direct)
> -		return sp->shadowed_translation[index] >> PAGE_SHIFT;

Why change the order?  Either the order doesn't matter, in which case go for
the smallest diff, or the order does matter, in which case this warrants an
explanation in the changelog (or perhaps even a separate patch, but that's
likely overkill).

> -
>  	return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS));
>  }
>
> @@ -733,7 +733,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
>   */
>  static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)

Can you also extend the WARN in FNAME(sync_spte)()?
Partly to help communicate to future readers that sync_spte() operates on leaf
GPTEs, and also to help make it somewhat obvious that this patch doesn't break
shadow_mmu_get_sp_for_split():

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7a87097cb45b..89b5d73e9e3c 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -911,7 +911,7 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 	gpa_t pte_gpa;
 	gfn_t gfn;
 
-	if (WARN_ON_ONCE(!sp->spt[i]))
+	if (WARN_ON_ONCE(!sp->spt[i] || !sp->shadowed_translation))
 		return 0;
 
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);

>  {
> -	if (sp_has_gptes(sp))
> +	if (sp->shadowed_translation)
>  		return sp->shadowed_translation[index] & ACC_ALL;
>
>  	/*
> @@ -754,7 +754,7 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
>  static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
>  					 gfn_t gfn, unsigned int access)
>  {
> -	if (sp_has_gptes(sp)) {
> +	if (sp->shadowed_translation) {
>  		sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
>  		return;
>  	}
> @@ -1697,7 +1697,7 @@ static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
>  	hlist_del(&sp->hash_link);
>  	list_del(&sp->link);
>  	free_page((unsigned long)sp->spt);
> -	if (!sp->role.direct)
> +	if (sp->shadowed_translation)
>  		free_page((unsigned long)sp->shadowed_translation);

Just drop the manual check, free_page() already handles the NULL/0 case.

>  	kmem_cache_free(mmu_page_header_cache, sp);
>  }
>
> @@ -1850,6 +1850,17 @@ static bool kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
>  static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>  				    struct list_head *invalid_list);
>
> +static bool sp_might_have_leaf_gptes(struct kvm_mmu_page *sp)

Hmm, I think I'd prefer to forgo the helper entirely, as this really is
intended to be used only when allocating the shadow page.
That way we avoid having to try and clarify exactly what "might" means in this
context, as well as potential future goofs, e.g. if someone had the clever idea
to check sp->shadowed_translation.

Oof, yeah, definitely omit the helper.  It took me a minute to fully appreciate
the difference between "leaf gptes" and "gptes" as it relates to
write-protecting gfns.

> +{
> +	if (sp->role.direct)
> +		return false;
> +
> +	if (sp->role.level > KVM_MAX_HUGEPAGE_LEVEL)
> +		return false;
> +
> +	return true;
> +}
> +
>  static bool sp_has_gptes(struct kvm_mmu_page *sp)
>  {
>  	if (sp->role.direct)
> @@ -2199,7 +2210,8 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>
>  	sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
>  	sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
> -	if (!role.direct)
> +	sp->role = role;
> +	if (sp_might_have_leaf_gptes(sp))

And then this becomes:

	if (!role.direct && role.level <= KVM_MAX_HUGEPAGE_LEVEL)

>  		sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
>
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> @@ -2216,7 +2228,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
>  	kvm_account_mmu_page(kvm, sp);
>
>  	sp->gfn = gfn;
> -	sp->role = role;

And this code doesn't need to move.

>  	hlist_add_head(&sp->hash_link, sp_list);
>  	if (sp_has_gptes(sp))
>  		account_shadowed(kvm, sp);
> --
> 2.31.1