Reply-To: Sean Christopherson
Date: Wed, 5 May 2021 13:42:21 -0700
Message-Id: <20210505204221.1934471-1-seanjc@google.com>
X-Mailer: git-send-email 2.31.1.527.g47e6f16901-goog
Subject: [PATCH v2] KVM: x86: Prevent KVM SVM from loading on kernels with 5-level paging
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Disallow loading KVM SVM if 5-level paging is supported. In theory, NPT
for L1 should simply work, but there are unknowns with respect to how the
guest's MAXPHYADDR will be handled by hardware.

Nested NPT is more problematic, as running an L1 VMM that is using
2-level page tables requires stacking single-entry PDP and PML4 tables in
KVM's NPT for L2, as there are no equivalent entries in L1's NPT to
shadow. Barring hardware magic, for 5-level paging, KVM would need to
stack another layer to handle PML5.
Opportunistically rename the lm_root pointer, which is used for the
aforementioned stacking when shadowing 2-level L1 NPT, to pml4_root to
call out that it's specifically for PML4.

Suggested-by: Paolo Bonzini
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 20 ++++++++++----------
 arch/x86/kvm/svm/svm.c          |  5 +++++
 3 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3e5fc80a35c8..bf35f369b49e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -407,7 +407,7 @@ struct kvm_mmu {
 	u32 pkru_mask;
 
 	u64 *pae_root;
-	u64 *lm_root;
+	u64 *pml4_root;
 
 	/*
 	 * check zero bits on shadow page table entries, these
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 930ac8a7e7c9..04c869794ab3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3310,12 +3310,12 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
 		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
 
-		if (WARN_ON_ONCE(!mmu->lm_root)) {
+		if (WARN_ON_ONCE(!mmu->pml4_root)) {
 			r = -EIO;
 			goto out_unlock;
 		}
 
-		mmu->lm_root[0] = __pa(mmu->pae_root) | pm_mask;
+		mmu->pml4_root[0] = __pa(mmu->pae_root) | pm_mask;
 	}
 
 	for (i = 0; i < 4; ++i) {
@@ -3335,7 +3335,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	}
 
 	if (mmu->shadow_root_level == PT64_ROOT_4LEVEL)
-		mmu->root_hpa = __pa(mmu->lm_root);
+		mmu->root_hpa = __pa(mmu->pml4_root);
 	else
 		mmu->root_hpa = __pa(mmu->pae_root);
 
@@ -3350,7 +3350,7 @@
 static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
-	u64 *lm_root, *pae_root;
+	u64 *pml4_root, *pae_root;
 
 	/*
 	 * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
@@ -3369,14 +3369,14 @@
 	if (WARN_ON_ONCE(mmu->shadow_root_level != PT64_ROOT_4LEVEL))
 		return -EIO;
 
-	if (mmu->pae_root && mmu->lm_root)
+	if (mmu->pae_root && mmu->pml4_root)
 		return 0;
 
 	/*
 	 * The special roots should always be allocated in concert.  Yell and
 	 * bail if KVM ends up in a state where only one of the roots is valid.
 	 */
-	if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->lm_root))
+	if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->pml4_root))
 		return -EIO;
 
 	/*
@@ -3387,14 +3387,14 @@
 	if (!pae_root)
 		return -ENOMEM;
 
-	lm_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
-	if (!lm_root) {
+	pml4_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+	if (!pml4_root) {
 		free_page((unsigned long)pae_root);
 		return -ENOMEM;
 	}
 
 	mmu->pae_root = pae_root;
-	mmu->lm_root = lm_root;
+	mmu->pml4_root = pml4_root;
 
 	return 0;
 }
@@ -5261,7 +5261,7 @@ static void free_mmu_pages(struct kvm_mmu *mmu)
 	if (!tdp_enabled && mmu->pae_root)
 		set_memory_encrypted((unsigned long)mmu->pae_root, 1);
 	free_page((unsigned long)mmu->pae_root);
-	free_page((unsigned long)mmu->lm_root);
+	free_page((unsigned long)mmu->pml4_root);
 }
 
 static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 14ff7f0963e9..d29dfe4a6503 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -447,6 +447,11 @@ static int has_svm(void)
 		return 0;
 	}
 
+	if (pgtable_l5_enabled()) {
+		pr_info("KVM doesn't yet support 5-level paging on AMD SVM\n");
+		return 0;
+	}
+
 	return 1;
 }
-- 
2.31.1.527.g47e6f16901-goog