Date: Mon, 15 Nov 2021 13:17:04 -0800
Message-Id: <20211115211704.2621644-1-bgardon@google.com>
Subject: [PATCH 1/1] KVM: x86/mmu: Fix TLB flush range when handling disconnected pt
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, David Matlack,
    Mingwei Zhang, Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang,
    Keqian Zhu, David Hildenbrand, Ben Gardon, stable@vger.kernel.org

When recursively clearing out disconnected pts, the range based TLB
flush in handle_removed_tdp_mmu_page uses the wrong starting GFN,
resulting in the flush mostly missing the affected range. Fix this by
using base_gfn for the flush.
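To make the arithmetic concrete, below is a small userspace sketch of the
flush-range calculation (illustrative only, not kernel code). The constants
mirror a level-1 page table, 512 entries each covering one 4KiB page, and the
starting GFN is a made-up value:

/*
 * Illustrative sketch only -- plain userspace C, not kernel code.
 * Constants mirror a level-1 page table: PT64_ENT_PER_PAGE == 512
 * entries, each child covering one 4KiB page, so
 * KVM_PAGES_PER_HPAGE(level) == 1 and KVM_PAGES_PER_HPAGE(level + 1) == 512.
 * The starting GFN is a hypothetical value.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long base_gfn = 0x10000;  /* sp->gfn (hypothetical) */
	const unsigned long entries = 512;       /* PT64_ENT_PER_PAGE */
	const unsigned long pages_per_child = 1; /* KVM_PAGES_PER_HPAGE(level) */
	const unsigned long flush_pages = entries * pages_per_child;
	unsigned long gfn = base_gfn;
	unsigned long i;

	/* Before the fix, gfn was function-scoped and recomputed per child. */
	for (i = 0; i < entries; i++)
		gfn = base_gfn + i * pages_per_child;

	/* After the loop, gfn names the last child, not the first. */
	printf("range unlinked : [%#lx, %#lx)\n", base_gfn, base_gfn + flush_pages);
	printf("flush, pre-fix : [%#lx, %#lx)\n", gfn, gfn + flush_pages);
	printf("flush, post-fix: [%#lx, %#lx)\n", base_gfn, base_gfn + flush_pages);
	return 0;
}

With these numbers the pre-fix flush covers [0x101ff, 0x103ff) while the pages
actually unlinked were [0x10000, 0x10200), so only the final child's page
overlaps; starting the flush at base_gfn covers the whole table's range.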
In response to feedback from David Matlack on the RFC version of this
patch, also move a few definitions into the for loop in the function to
prevent unintended references to them in the future.

Fixes: a066e61f13cf ("KVM: x86/mmu: Factor out handling of removed page tables")
CC: stable@vger.kernel.org
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/tdp_mmu.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7c5dd83e52de..4bd541050d21 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -317,9 +317,6 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
 	int level = sp->role.level;
 	gfn_t base_gfn = sp->gfn;
-	u64 old_child_spte;
-	u64 *sptep;
-	gfn_t gfn;
 	int i;
 
 	trace_kvm_mmu_prepare_zap_page(sp);
@@ -327,8 +324,9 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 	tdp_mmu_unlink_page(kvm, sp, shared);
 
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
-		sptep = rcu_dereference(pt) + i;
-		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
+		u64 *sptep = rcu_dereference(pt) + i;
+		gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
+		u64 old_child_spte;
 
 		if (shared) {
 			/*
@@ -374,7 +372,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 				    shared);
 	}
 
-	kvm_flush_remote_tlbs_with_address(kvm, gfn,
+	kvm_flush_remote_tlbs_with_address(kvm, base_gfn,
 					   KVM_PAGES_PER_HPAGE(level + 1));
 
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
-- 
2.34.0.rc1.387.gb447b232ab-goog