From: Yang Weijiang <weijiang.yang@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com, sean.j.christopherson@intel.com
Cc: mst@redhat.com, rkrcmar@redhat.com, jmattson@google.com, yu.c.zhang@intel.com, alazar@bitdefender.com, Yang Weijiang <weijiang.yang@intel.com>
Subject: [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup
Date: Wed, 14 Aug 2019 15:04:02 +0800
Message-Id: <20190814070403.6588-9-weijiang.yang@intel.com>
In-Reply-To: <20190814070403.6588-1-weijiang.yang@intel.com>
References: <20190814070403.6588-1-weijiang.yang@intel.com>
If subpage permissions are set while the physical page is not yet
present in the EPT leaf entry, the permission bitmap is first stored
in the SPP access bitmap buffer. SPPT setup is deferred until the
protected page is accessed: the SPPT entries are then set up in the
EPT page fault handler.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/mmu.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 419878301375..f017fe6cd67b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4304,6 +4304,26 @@ check_hugepage_cache_consistency(struct kvm_vcpu *vcpu, gfn_t gfn, int level)
 	return kvm_mtrr_check_gfn_range_consistency(vcpu, gfn, page_num);
 }
 
+static int kvm_enable_spp_protection(struct kvm *kvm, u64 gfn)
+{
+	struct kvm_subpage spp_info = {0};
+	struct kvm_memory_slot *slot;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot)
+		return -EFAULT;
+
+	spp_info.base_gfn = gfn;
+	spp_info.npages = 1;
+
+	if (kvm_mmu_get_subpages(kvm, &spp_info, true) < 0)
+		return -EFAULT;
+
+	if (spp_info.access_map[0] != FULL_SPP_ACCESS)
+		kvm_mmu_set_subpages(kvm, &spp_info, true);
+
+	return 0;
+}
 static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 			  bool prefault)
 {
@@ -4355,6 +4375,10 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	if (likely(!force_pt_level))
 		transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
 	r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
+
+	if (vcpu->kvm->arch.spp_active && level == PT_PAGE_TABLE_LEVEL)
+		kvm_enable_spp_protection(vcpu->kvm, gfn);
+
 	spin_unlock(&vcpu->kvm->mmu_lock);
 
 	return r;
-- 
2.17.2