From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, Junaid Shahid
Subject: [PATCH 2/8] KVM: x86/mmu: Refactor the zap loop for recovering NX lpages
Date: Tue, 14 Jul 2020 21:27:19 -0700
Message-Id: <20200715042725.10961-3-sean.j.christopherson@intel.com>
In-Reply-To: <20200715042725.10961-1-sean.j.christopherson@intel.com>
References: <20200715042725.10961-1-sean.j.christopherson@intel.com>

Refactor the zap loop in kvm_recover_nx_lpages() to be a for loop that
iterates on to_zap and drop the !to_zap check that leads to the in-loop
calling of kvm_mmu_commit_zap_page().  The in-loop commit when to_zap
hits zero is superfluous now that there's an unconditional commit after
the loop to handle the case where lpage_disallowed_mmu_pages is emptied.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 48be51027af64..9cd3d2a23f8a5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6321,7 +6321,10 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 
 	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
 	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
-	while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
+	for ( ; to_zap; --to_zap) {
+		if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
+			break;
+
 		/*
 		 * We use a separate list instead of just using active_mmu_pages
 		 * because the number of lpage_disallowed pages is expected to
@@ -6334,10 +6337,9 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 		WARN_ON_ONCE(sp->lpage_disallowed);
 
-		if (!--to_zap || need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
 			kvm_mmu_commit_zap_page(kvm, &invalid_list);
-			if (to_zap)
-				cond_resched_lock(&kvm->mmu_lock);
+			cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
-- 
2.26.0
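
For readers following the control flow, the recovery loop ends up looking
roughly like the snippet below once the patch is applied. This is a sketch
assembled from the two hunks and their context lines, not a verbatim copy of
the file: the list_first_entry() lookup between the hunks is unchanged by the
patch and is reconstructed here from the surrounding code, the "separate list"
block comment is from the original source, and the shorter comments are added
purely for illustration.

	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
	for ( ; to_zap; --to_zap) {
		/* Nothing left to recover; the commit after the loop still runs. */
		if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
			break;

		/*
		 * We use a separate list instead of just using active_mmu_pages
		 * because the number of lpage_disallowed pages is expected to
		 * be relatively small compared to the total.
		 */
		sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
				      struct kvm_mmu_page,
				      lpage_disallowed_link);
		WARN_ON_ONCE(!sp->lpage_disallowed);
		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
		WARN_ON_ONCE(sp->lpage_disallowed);

		/* Commit pending zaps before yielding a contended mmu_lock. */
		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			cond_resched_lock(&kvm->mmu_lock);
		}
	}
	/* Unconditional commit; covers both ways of leaving the loop. */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);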