From: David Stevens
To: Sean Christopherson, Paolo Bonzini
Cc: Yu Zhang, Isaku Yamahata, Zhi Wang, Maxim Levitsky,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, David Stevens
Subject: [PATCH v11 3/8] KVM: mmu: Introduce kvm_follow_pfn()
Date: Thu, 29 Feb 2024 11:57:54 +0900
Message-ID: <20240229025759.1187910-4-stevensd@google.com>
In-Reply-To: <20240229025759.1187910-1-stevensd@google.com>
References: <20240229025759.1187910-1-stevensd@google.com>

From: David Stevens

Introduce kvm_follow_pfn(), which will replace __gfn_to_pfn_memslot(). This
initial implementation is just a refactor of the existing API, passing the
arguments in a single structure.

The arguments are further refactored as follows:

- The write_fault and interruptible boolean flags and the in-parameter part
  of async are replaced by setting FOLL_WRITE, FOLL_INTERRUPTIBLE, and
  FOLL_NOWAIT respectively in a new flags argument.
- The out-parameter portion of the async parameter is now a return value.
- The writable in/out parameter is split into a separate try_map_writable
  in parameter and a writable out parameter.
- All other parameters are the same.

Upcoming changes will add the ability to get a pfn without needing to take
a ref to the underlying page.
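As a rough usage sketch (not part of this patch; it simply mirrors the
gfn_to_pfn_prot() conversion in the diff below), a caller that used to pass
write_fault/writable as positional arguments now fills in the struct and
reads the outputs back out of it:

	struct kvm_follow_pfn kfp = {
		.slot = slot,
		.gfn = gfn,
		.flags = write_fault ? FOLL_WRITE : 0,	/* replaces the write_fault bool */
		.try_map_writable = true,		/* caller wants to know writability */
	};
	kvm_pfn_t pfn = kvm_follow_pfn(&kfp);
	/* kfp.writable and kfp.hva now hold what the old out parameters returned */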
Signed-off-by: David Stevens
Reviewed-by: Maxim Levitsky
---
 include/linux/kvm_host.h |  18 ++++
 virt/kvm/kvm_main.c      | 191 +++++++++++++++++++++------------------
 virt/kvm/kvm_mm.h        |   3 +-
 virt/kvm/pfncache.c      |  10 +-
 4 files changed, 131 insertions(+), 91 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7e7fd25b09b3..290db5133c36 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -97,6 +97,7 @@
 #define KVM_PFN_ERR_HWPOISON	(KVM_PFN_ERR_MASK + 1)
 #define KVM_PFN_ERR_RO_FAULT	(KVM_PFN_ERR_MASK + 2)
 #define KVM_PFN_ERR_SIGPENDING	(KVM_PFN_ERR_MASK + 3)
+#define KVM_PFN_ERR_NEEDS_IO	(KVM_PFN_ERR_MASK + 4)
 
 /*
  * error pfns indicate that the gfn is in slot but faild to
@@ -1209,6 +1210,23 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
 void kvm_release_page_clean(struct page *page);
 void kvm_release_page_dirty(struct page *page);
 
+struct kvm_follow_pfn {
+	const struct kvm_memory_slot *slot;
+	gfn_t gfn;
+	/* FOLL_* flags modifying lookup behavior. */
+	unsigned int flags;
+	/* Whether this function can sleep. */
+	bool atomic;
+	/* Try to create a writable mapping even for a read fault. */
+	bool try_map_writable;
+
+	/* Outputs of kvm_follow_pfn */
+	hva_t hva;
+	bool writable;
+};
+
+kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp);
+
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6f37d56fb2fc..575756c9c5b0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2791,8 +2791,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
  * true indicates success, otherwise false is returned. It's also the
  * only part that runs if we can in atomic context.
  */
-static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
-			    bool *writable, kvm_pfn_t *pfn)
+static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	struct page *page[1];
 
@@ -2801,14 +2800,12 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 	 * or the caller allows to map a writable pfn for a read fault
 	 * request.
 	 */
-	if (!(write_fault || writable))
+	if (!((kfp->flags & FOLL_WRITE) || kfp->try_map_writable))
 		return false;
 
-	if (get_user_page_fast_only(addr, FOLL_WRITE, page)) {
+	if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, page)) {
 		*pfn = page_to_pfn(page[0]);
-
-		if (writable)
-			*writable = true;
+		kfp->writable = true;
 		return true;
 	}
 
@@ -2819,8 +2816,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * The slow path to get the pfn of the specified host virtual address,
  * 1 indicates success, -errno is returned if error is detected.
  */
-static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
-			   bool interruptible, bool *writable, kvm_pfn_t *pfn)
+static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 {
 	/*
 	 * When a VCPU accesses a page that is not mapped into the secondary
@@ -2833,32 +2829,24 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 	 * Note that get_user_page_fast_only() and FOLL_WRITE for now
 	 * implicitly honor NUMA hinting faults and don't need this flag.
 	 */
-	unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT;
+	unsigned int flags = FOLL_HWPOISON | FOLL_HONOR_NUMA_FAULT | kfp->flags;
 	struct page *page;
 	int npages;
 
 	might_sleep();
 
-	if (writable)
-		*writable = write_fault;
-
-	if (write_fault)
-		flags |= FOLL_WRITE;
-	if (async)
-		flags |= FOLL_NOWAIT;
-	if (interruptible)
-		flags |= FOLL_INTERRUPTIBLE;
-
-	npages = get_user_pages_unlocked(addr, 1, &page, flags);
+	npages = get_user_pages_unlocked(kfp->hva, 1, &page, flags);
 	if (npages != 1)
 		return npages;
 
-	/* map read fault as writable if possible */
-	if (unlikely(!write_fault) && writable) {
+	if (kfp->flags & FOLL_WRITE) {
+		kfp->writable = true;
+	} else if (kfp->try_map_writable) {
 		struct page *wpage;
 
-		if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) {
-			*writable = true;
+		/* map read fault as writable if possible */
+		if (get_user_page_fast_only(kfp->hva, FOLL_WRITE, &wpage)) {
+			kfp->writable = true;
 			put_page(page);
 			page = wpage;
 		}
@@ -2889,23 +2877,23 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn)
 }
 
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
-			       unsigned long addr, bool write_fault,
-			       bool *writable, kvm_pfn_t *p_pfn)
+			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
 {
 	kvm_pfn_t pfn;
 	pte_t *ptep;
 	pte_t pte;
 	spinlock_t *ptl;
+	bool write_fault = kfp->flags & FOLL_WRITE;
 	int r;
 
-	r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+	r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl);
 	if (r) {
 		/*
 		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
 		 * not call the fault handler, so do it here.
 		 */
 		bool unlocked = false;
-		r = fixup_user_fault(current->mm, addr,
+		r = fixup_user_fault(current->mm, kfp->hva,
 				     (write_fault ? FAULT_FLAG_WRITE : 0),
 				     &unlocked);
 		if (unlocked)
@@ -2913,7 +2901,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		if (r)
 			return r;
 
-		r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+		r = follow_pte(vma->vm_mm, kfp->hva, &ptep, &ptl);
 		if (r)
 			return r;
 	}
@@ -2925,8 +2913,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		goto out;
 	}
 
-	if (writable)
-		*writable = pte_write(pte);
+	kfp->writable = pte_write(pte);
 	pfn = pte_pfn(pte);
 
 	/*
@@ -2957,38 +2944,28 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 }
 
 /*
- * Pin guest page in memory and return its pfn.
- * @addr: host virtual address which maps memory to the guest
- * @atomic: whether this function can sleep
- * @interruptible: whether the process can be interrupted by non-fatal signals
- * @async: whether this function need to wait IO complete if the
- *	   host page is not in the memory
- * @write_fault: whether we should get a writable host page
- * @writable: whether it allows to map a writable host page for !@write_fault
- *
- * The function will map a writable host page for these two cases:
- * 1): @write_fault = true
- * 2): @write_fault = false && @writable, @writable will tell the caller
- *     whether the mapping is writable.
+ * Convert a hva to a pfn.
+ * @kfp: args struct for the conversion
  */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn;
 	int npages, r;
 
-	/* we can do it either atomically or asynchronously, not both */
-	WARN_ON_ONCE(atomic && async);
+	/*
+	 * FOLL_NOWAIT is used for async page faults, which don't make sense
+	 * in an atomic context where the caller can't do async resolution.
+	 */
+	WARN_ON_ONCE(kfp->atomic && (kfp->flags & FOLL_NOWAIT));
 
-	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
+	if (hva_to_pfn_fast(kfp, &pfn))
 		return pfn;
 
-	if (atomic)
+	if (kfp->atomic)
 		return KVM_PFN_ERR_FAULT;
 
-	npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
-				 writable, &pfn);
+	npages = hva_to_pfn_slow(kfp, &pfn);
 	if (npages == 1)
 		return pfn;
 	if (npages == -EINTR)
@@ -2996,83 +2973,123 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
 	mmap_read_lock(current->mm);
 	if (npages == -EHWPOISON ||
-	      (!async && check_user_page_hwpoison(addr))) {
+	    (!(kfp->flags & FOLL_NOWAIT) && check_user_page_hwpoison(kfp->hva))) {
 		pfn = KVM_PFN_ERR_HWPOISON;
 		goto exit;
 	}
 
 retry:
-	vma = vma_lookup(current->mm, addr);
+	vma = vma_lookup(current->mm, kfp->hva);
 
 	if (vma == NULL)
 		pfn = KVM_PFN_ERR_FAULT;
 	else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
-		r = hva_to_pfn_remapped(vma, addr, write_fault, writable, &pfn);
+		r = hva_to_pfn_remapped(vma, kfp, &pfn);
 		if (r == -EAGAIN)
 			goto retry;
 		if (r < 0)
 			pfn = KVM_PFN_ERR_FAULT;
 	} else {
-		if (async && vma_is_valid(vma, write_fault))
-			*async = true;
-		pfn = KVM_PFN_ERR_FAULT;
+		if ((kfp->flags & FOLL_NOWAIT) &&
+		    vma_is_valid(vma, kfp->flags & FOLL_WRITE))
+			pfn = KVM_PFN_ERR_NEEDS_IO;
+		else
+			pfn = KVM_PFN_ERR_FAULT;
 	}
 exit:
 	mmap_read_unlock(current->mm);
 	return pfn;
 }
 
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool interruptible, bool *async,
-			       bool write_fault, bool *writable, hva_t *hva)
+kvm_pfn_t kvm_follow_pfn(struct kvm_follow_pfn *kfp)
 {
-	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
+	kfp->writable = false;
+	kfp->hva = __gfn_to_hva_many(kfp->slot, kfp->gfn, NULL,
+				     kfp->flags & FOLL_WRITE);
 
-	if (hva)
-		*hva = addr;
-
-	if (addr == KVM_HVA_ERR_RO_BAD) {
-		if (writable)
-			*writable = false;
+	if (kfp->hva == KVM_HVA_ERR_RO_BAD)
 		return KVM_PFN_ERR_RO_FAULT;
-	}
 
-	if (kvm_is_error_hva(addr)) {
-		if (writable)
-			*writable = false;
+	if (kvm_is_error_hva(kfp->hva))
 		return KVM_PFN_NOSLOT;
-	}
 
-	/* Do not map writable pfn in the readonly memslot. */
-	if (writable && memslot_is_readonly(slot)) {
-		*writable = false;
-		writable = NULL;
-	}
+	if (memslot_is_readonly(kfp->slot))
+		kfp->try_map_writable = false;
+
+	return hva_to_pfn(kfp);
+}
+EXPORT_SYMBOL_GPL(kvm_follow_pfn);
+
+kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
+			       bool atomic, bool interruptible, bool *async,
+			       bool write_fault, bool *writable, hva_t *hva)
+{
+	kvm_pfn_t pfn;
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = 0,
+		.atomic = atomic,
+		.try_map_writable = !!writable,
+	};
+
+	if (write_fault)
+		kfp.flags |= FOLL_WRITE;
+	if (async)
+		kfp.flags |= FOLL_NOWAIT;
+	if (interruptible)
+		kfp.flags |= FOLL_INTERRUPTIBLE;
 
-	return hva_to_pfn(addr, atomic, interruptible, async, write_fault,
-			  writable);
+	pfn = kvm_follow_pfn(&kfp);
+	if (pfn == KVM_PFN_ERR_NEEDS_IO) {
+		*async = true;
+		pfn = KVM_PFN_ERR_FAULT;
+	}
+	if (hva)
+		*hva = kfp.hva;
+	if (writable)
+		*writable = kfp.writable;
+	return pfn;
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
-				    NULL, write_fault, writable, NULL);
+	kvm_pfn_t pfn;
+	struct kvm_follow_pfn kfp = {
+		.slot = gfn_to_memslot(kvm, gfn),
+		.gfn = gfn,
+		.flags = write_fault ? FOLL_WRITE : 0,
+		.try_map_writable = !!writable,
+	};
+	pfn = kvm_follow_pfn(&kfp);
+	if (writable)
+		*writable = kfp.writable;
+	return pfn;
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true,
-				    NULL, NULL);
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+	};
+	return kvm_follow_pfn(&kfp);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true,
-				    NULL, NULL);
+	struct kvm_follow_pfn kfp = {
+		.slot = slot,
+		.gfn = gfn,
+		.flags = FOLL_WRITE,
+		.atomic = true,
+	};
+	return kvm_follow_pfn(&kfp);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index ecefc7ec51af..9ba61fbb727c 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -20,8 +20,7 @@
 #define KVM_MMU_UNLOCK(kvm)		spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
-		     bool *async, bool write_fault, bool *writable);
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll);
 
 #ifdef CONFIG_HAVE_KVM_PFNCACHE
 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 2d6aba677830..1fb21c2ced5d 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -144,6 +144,12 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
 	void *new_khva = NULL;
 	unsigned long mmu_seq;
+	struct kvm_follow_pfn kfp = {
+		.slot = gpc->memslot,
+		.gfn = gpa_to_gfn(gpc->gpa),
+		.flags = FOLL_WRITE,
+		.hva = gpc->uhva,
+	};
 
 	lockdep_assert_held(&gpc->refresh_lock);
 
@@ -182,8 +188,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 		cond_resched();
 	}
 
-	/* We always request a writeable mapping */
-	new_pfn = hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL);
+	/* We always request a writable mapping */
+	new_pfn = hva_to_pfn(&kfp);
 	if (is_error_noslot_pfn(new_pfn))
 		goto out_error;
 
-- 
2.44.0.rc1.240.g4c46232300-goog