From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, Junaid Shahid
Subject: [PATCH 1/8] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
Date: Tue, 14 Jul 2020 21:27:18 -0700
Message-Id: <20200715042725.10961-2-sean.j.christopherson@intel.com>
In-Reply-To: <20200715042725.10961-1-sean.j.christopherson@intel.com>
References: <20200715042725.10961-1-sean.j.christopherson@intel.com>

Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock() when rescheduling, it's
possible that lpage_disallowed_mmu_pages could be emptied by a different
thread without to_zap reaching zero, despite to_zap being derived from
the number of disallowed lpages.

Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 77810ce66bdb4..48be51027af64 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6340,6 +6340,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 			cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);
-- 
2.26.0
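
Side note for readers less familiar with the prepare/commit zap pattern:
below is a minimal, stand-alone user-space sketch of the race described in
the changelog.  It is not kernel code; names like commit_staged(),
shared_remaining and BUDGET are made up for illustration, and the in-loop
commit condition only loosely mirrors the existing loop (commit when the
budget is exhausted or when yielding the lock), it is not a copy of
kvm_recover_nx_lpages().

/*
 * Stand-alone sketch (hypothetical names, not kernel code) of the pattern
 * the patch fixes: a worker drains a shared list under a fixed budget,
 * staging removed entries on a local list and flushing that list only when
 * the budget runs out or when it yields the lock.  If a second thread
 * empties the shared list while the lock is dropped, the loop can exit
 * with entries still staged, which is why a final flush after the loop is
 * required.
 */
#include <stdio.h>

enum { BUDGET = 6 };

static int shared_remaining = 8; /* plays the role of lpage_disallowed_mmu_pages */
static int staged;               /* plays the role of the local invalid_list */

static void commit_staged(const char *when) /* stands in for kvm_mmu_commit_zap_page() */
{
	if (staged)
		printf("%s: committing %d staged entries\n", when, staged);
	staged = 0;
}

int main(void)
{
	int to_zap = BUDGET;
	int iter = 0;

	while (to_zap && shared_remaining) {
		/* "prepare" one entry: unlink it from the shared list, stage it locally */
		shared_remaining--;
		staged++;

		if (!--to_zap || iter == 1 /* pretend need_resched() fired here */) {
			commit_staged("in-loop");
			if (iter == 1) {
				/*
				 * Lock dropped for the simulated cond_resched_lock():
				 * another thread zaps most of what is still on the
				 * shared list, so it will run dry before to_zap does.
				 */
				shared_remaining = 1;
			}
		}
		iter++;
	}

	/*
	 * The fix: the loop above exited because the shared list ran dry before
	 * the budget did, leaving entries staged since the last in-loop commit.
	 * Without this trailing flush they would never be committed.
	 */
	commit_staged("after-loop");
	return 0;
}

Running the sketch prints one extra entry committed only by the after-loop
flush; that entry corresponds to the page(s) prepared after the last
in-loop commit once another thread has drained the list, i.e. exactly the
case the new kvm_mmu_commit_zap_page() call handles.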