From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    eric van tassell, Tom Lendacky
Subject: [RFC PATCH 8/8] KVM: SVM: Pin SEV pages in MMU during sev_launch_update_data()
Date: Fri, 31 Jul 2020 14:23:23 -0700
Message-Id: <20200731212323.21746-9-sean.j.christopherson@intel.com>
In-Reply-To: <20200731212323.21746-1-sean.j.christopherson@intel.com>
References: <20200731212323.21746-1-sean.j.christopherson@intel.com>

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/sev.c | 117 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 112 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index f640b8beb443e..eb95914578497 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -15,6 +15,7 @@
 #include
 #include
 
+#include "mmu.h"
 #include "x86.h"
 #include "svm.h"
 
@@ -415,6 +416,107 @@ static unsigned long get_num_contig_pages(unsigned long idx,
 	return pages;
 }
 
+#define SEV_PFERR (PFERR_WRITE_MASK | PFERR_USER_MASK)
+
+static void *sev_alloc_pages(unsigned long size, unsigned long *npages)
+{
+	/* TODO */
+	*npages = 0;
+	return NULL;
+}
+
+static struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm,
+					      unsigned long hva)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslot;
+
+	kvm_for_each_memslot(memslot, slots) {
+		if (hva >= memslot->userspace_addr &&
+		    hva < memslot->userspace_addr +
+			  (memslot->npages << PAGE_SHIFT))
+			return memslot;
+	}
+
+	return NULL;
+}
+
+static gpa_t hva_to_gpa(struct kvm *kvm, unsigned long hva)
+{
+	struct kvm_memory_slot *memslot;
+	gpa_t gpa_offset;
+
+	memslot = hva_to_memslot(kvm, hva);
+	if (!memslot)
+		return UNMAPPED_GVA;
+
+	gpa_offset = hva - memslot->userspace_addr;
+	return ((memslot->base_gfn << PAGE_SHIFT) + gpa_offset);
+}
+
+static struct page **sev_pin_memory_in_mmu(struct kvm *kvm, unsigned long addr,
+					   unsigned long size,
+					   unsigned long *npages)
+{
+	struct kvm_vcpu *vcpu;
+	struct page **pages;
+	unsigned long i;
+	kvm_pfn_t pfn;
+	int idx, ret;
+	gpa_t gpa;
+
+	pages = sev_alloc_pages(size, npages);
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	vcpu = kvm_get_vcpu(kvm, 0);
+	if (mutex_lock_killable(&vcpu->mutex)) {
+		kvfree(pages);
+		return ERR_PTR(-EINTR);
+	}
+
+	vcpu_load(vcpu);
+	idx = srcu_read_lock(&kvm->srcu);
+
+	kvm_mmu_load(vcpu);
+
+	for (i = 0; i < *npages; i++, addr += PAGE_SIZE) {
+		if (signal_pending(current)) {
+			ret = -ERESTARTSYS;
+			goto err;
+		}
+
+		if (need_resched())
+			cond_resched();
+
+		gpa = hva_to_gpa(kvm, addr);
+		if (gpa == UNMAPPED_GVA) {
+			ret = -EFAULT;
+			goto err;
+		}
+		pfn = kvm_mmu_map_tdp_page(vcpu, gpa, SEV_PFERR, PG_LEVEL_4K);
+		if (is_error_noslot_pfn(pfn)) {
+			ret = -EFAULT;
+			goto err;
+		}
+		pages[i] = pfn_to_page(pfn);
+		get_page(pages[i]);
+	}
+
+	srcu_read_unlock(&kvm->srcu, idx);
+	vcpu_put(vcpu);
+
+	mutex_unlock(&vcpu->mutex);
+	return pages;
+
+err:
+	for ( ; i; --i)
+		put_page(pages[i-1]);
+
+	kvfree(pages);
+	return ERR_PTR(ret);
+}
+
 static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 {
 	unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
@@ -439,9 +541,12 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	vaddr_end = vaddr + size;
 
 	/* Lock the user memory. */
-	inpages = sev_pin_memory(kvm, vaddr, size, &npages, 1);
-	if (!inpages) {
-		ret = -ENOMEM;
+	if (atomic_read(&kvm->online_vcpus))
+		inpages = sev_pin_memory_in_mmu(kvm, vaddr, size, &npages);
+	else
+		inpages = sev_pin_memory(kvm, vaddr, size, &npages, 1);
+	if (IS_ERR(inpages)) {
+		ret = PTR_ERR(inpages);
 		goto e_free;
 	}
 
@@ -449,9 +554,11 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	 * The LAUNCH_UPDATE command will perform in-place encryption of the
 	 * memory content (i.e it will write the same memory region with C=1).
 	 * It's possible that the cache may contain the data with C=0, i.e.,
-	 * unencrypted so invalidate it first.
+	 * unencrypted so invalidate it first.  Flushing is automatically
+	 * handled if the pages can be pinned in the MMU.
 	 */
-	sev_clflush_pages(inpages, npages);
+	if (!atomic_read(&kvm->online_vcpus))
+		sev_clflush_pages(inpages, npages);
 
 	for (i = 0; vaddr < vaddr_end; vaddr = next_vaddr, i += pages) {
 		int offset, len;
-- 
2.28.0