From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/4] KVM: x86/mmu: Batch zap MMU pages when recycling oldest pages
Date: Tue, 23 Jun 2020 12:35:40 -0700
Message-Id: <20200623193542.7554-3-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200623193542.7554-1-sean.j.christopherson@intel.com>
References: <20200623193542.7554-1-sean.j.christopherson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Collect MMU pages for zapping in a loop when making MMU pages available,
and skip over active roots when doing so as zapping an active root can
never immediately free up a page.  Batching the zapping avoids multiple
remote TLB flushes and remedies the issue where the loop would bail early
if an active root was encountered.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 58 ++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8e7df4ed4b55..8c85a3a178f4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2838,20 +2838,51 @@ static bool prepare_zap_oldest_mmu_page(struct kvm *kvm,
 	return kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
 }
 
+static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
+						  unsigned long nr_to_zap)
+{
+	unsigned long total_zapped = 0;
+	struct kvm_mmu_page *sp, *tmp;
+	LIST_HEAD(invalid_list);
+	bool unstable;
+	int nr_zapped;
+
+	if (list_empty(&kvm->arch.active_mmu_pages))
+		return 0;
+
+restart:
+	list_for_each_entry_safe(sp, tmp, &kvm->arch.active_mmu_pages, link) {
+		/*
+		 * Don't zap active root pages, the page itself can't be freed
+		 * and zapping it will just force vCPUs to realloc and reload.
+		 */
+		if (sp->root_count)
+			continue;
+
+		unstable = __kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list,
+						      &nr_zapped);
+		total_zapped += nr_zapped;
+		if (total_zapped >= nr_to_zap)
+			break;
+
+		if (unstable)
+			goto restart;
+	}
+
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
+
+	kvm->stat.mmu_recycled += total_zapped;
+	return total_zapped;
+}
+
 static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
 {
-	LIST_HEAD(invalid_list);
+	unsigned long avail = kvm_mmu_available_pages(vcpu->kvm);
 
-	if (likely(kvm_mmu_available_pages(vcpu->kvm) >= KVM_MIN_FREE_MMU_PAGES))
+	if (likely(avail >= KVM_MIN_FREE_MMU_PAGES))
 		return 0;
 
-	while (kvm_mmu_available_pages(vcpu->kvm) < KVM_REFILL_PAGES) {
-		if (!prepare_zap_oldest_mmu_page(vcpu->kvm, &invalid_list))
-			break;
-
-		++vcpu->kvm->stat.mmu_recycled;
-	}
-	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+	kvm_mmu_zap_oldest_mmu_pages(vcpu->kvm, KVM_REFILL_PAGES - avail);
 
 	if (!kvm_mmu_available_pages(vcpu->kvm))
 		return -ENOSPC;
@@ -2864,17 +2895,12 @@ static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
  */
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long goal_nr_mmu_pages)
 {
-	LIST_HEAD(invalid_list);
-
 	spin_lock(&kvm->mmu_lock);
 
 	if (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages) {
-		/* Need to free some mmu pages to achieve the goal. */
-		while (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages)
-			if (!prepare_zap_oldest_mmu_page(kvm, &invalid_list))
-				break;
+		kvm_mmu_zap_oldest_mmu_pages(kvm, kvm->arch.n_used_mmu_pages -
+						  goal_nr_mmu_pages);
 
-		kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
 	}
 
-- 
2.26.0
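
[Editor's note, not part of the patch: below is a minimal usermode sketch of
the batching pattern the new kvm_mmu_zap_oldest_mmu_pages() helper follows:
walk the page list oldest-first, skip pages still in use as roots, queue the
rest on a local invalid list, and pay the expensive commit (in KVM, the
remote TLB flush done by kvm_mmu_commit_zap_page()) exactly once.  All types
and helpers here are mocks invented for illustration, not KVM APIs.]

/*
 * Illustrative sketch only -- mock types, not KVM code.  Demonstrates
 * "prepare many, commit once" batching with active roots skipped rather
 * than treated as a reason to bail out of the loop.
 */
#include <stdio.h>
#include <stddef.h>

struct mock_page {
	int id;
	int root_count;		/* nonzero => page backs an active root */
	struct mock_page *next;	/* stand-in for the active_mmu_pages list */
};

/* One expensive "commit"; in KVM this is where the remote TLB flush lives. */
static void commit_zap(struct mock_page **invalid_list, size_t n)
{
	printf("flush TLBs once, freeing %zu page(s):", n);
	for (size_t i = 0; i < n; i++)
		printf(" %d", invalid_list[i]->id);
	printf("\n");
}

static size_t zap_oldest(struct mock_page *head, size_t nr_to_zap)
{
	struct mock_page *invalid_list[16];
	size_t zapped = 0;

	if (nr_to_zap > 16)		/* small fixed batch for this demo */
		nr_to_zap = 16;

	for (struct mock_page *sp = head; sp && zapped < nr_to_zap; sp = sp->next) {
		if (sp->root_count)	/* zapping a root frees nothing now; skip it */
			continue;
		invalid_list[zapped++] = sp;	/* "prepare" the zap, no flush yet */
	}

	if (zapped)
		commit_zap(invalid_list, zapped);	/* single batched commit */
	return zapped;
}

int main(void)
{
	struct mock_page pages[5] = {
		{ .id = 0 }, { .id = 1, .root_count = 1 },
		{ .id = 2 }, { .id = 3 }, { .id = 4 },
	};

	for (int i = 0; i < 4; i++)
		pages[i].next = &pages[i + 1];

	/* Ask for three pages; the active root (id 1) is skipped, not fatal. */
	size_t got = zap_oldest(&pages[0], 3);
	printf("recycled %zu page(s)\n", got);
	return 0;
}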