From: David Stevens <stevensd@chromium.org>
To: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras,
	Paolo Bonzini, Nick Piggin
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Zhenyu Wang, Zhi Wang,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org,
	kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, intel-gvt-dev@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	David Stevens
Subject: [PATCH v2 2/5] KVM: mmu: introduce new gfn_to_pfn_page functions
Date: Fri, 25 Jun 2021 16:36:13 +0900
Message-Id: <20210625073616.2184426-3-stevensd@google.com>
In-Reply-To: <20210625073616.2184426-1-stevensd@google.com>
References: <20210625073616.2184426-1-stevensd@google.com>

From: David Stevens <stevensd@chromium.org>

Introduce new gfn_to_pfn_page functions that parallel the existing
gfn_to_pfn functions. The new functions are identical except that they
take an additional out parameter that is used to return the struct page
if the hva was resolved by gup. This allows callers to differentiate the
gup and follow_pte cases, which in turn allows callers to only touch the
page refcount when necessitated by gup.

The old gfn_to_pfn functions are deprecated, and all callers should be
migrated to the new gfn_to_pfn_page functions. In the interim, the
gfn_to_pfn functions are reimplemented as wrappers around the
corresponding gfn_to_pfn_page functions. The wrappers take a reference
to the pfn's page that had previously been taken in hva_to_pfn_remapped.

Signed-off-by: David Stevens <stevensd@chromium.org>
---
(A short usage sketch of the new API follows the patch.)

 include/linux/kvm_host.h |  17 ++++
 virt/kvm/kvm_main.c      | 186 ++++++++++++++++++++++++++++-----------
 2 files changed, 152 insertions(+), 51 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ae7735b490b4..2f828edd7278 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -820,6 +820,19 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
                               bool atomic, bool *async, bool write_fault,
                               bool *writable, hva_t *hva);
 
+kvm_pfn_t gfn_to_pfn_page(struct kvm *kvm, gfn_t gfn, struct page **page);
+kvm_pfn_t gfn_to_pfn_page_prot(struct kvm *kvm, gfn_t gfn,
+                               bool write_fault, bool *writable,
+                               struct page **page);
+kvm_pfn_t gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot,
+                                  gfn_t gfn, struct page **page);
+kvm_pfn_t gfn_to_pfn_page_memslot_atomic(struct kvm_memory_slot *slot,
+                                         gfn_t gfn, struct page **page);
+kvm_pfn_t __gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot,
+                                    gfn_t gfn, bool atomic, bool *async,
+                                    bool write_fault, bool *writable,
+                                    hva_t *hva, struct page **page);
+
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
 void kvm_release_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_dirty(kvm_pfn_t pfn);
@@ -901,6 +914,10 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page_atomic(struct kvm_vcpu *vcpu, gfn_t gfn,
+                                          struct page **page);
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+                                   struct page **page);
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
 int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
                 struct gfn_to_pfn_cache *cache, bool atomic);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f7445c3bcd90..1de8702845ac 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2102,9 +2102,9 @@ static inline int check_user_page_hwpoison(unsigned long addr)
  * only part that runs if we can in atomic context.
  */
 static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
-                            bool *writable, kvm_pfn_t *pfn)
+                            bool *writable, kvm_pfn_t *pfn,
+                            struct page **page)
 {
-        struct page *page[1];
 
         /*
          * Fast pin a writable pfn only if it is a write fault request
@@ -2115,7 +2115,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
                 return false;
 
         if (get_user_page_fast_only(addr, FOLL_WRITE, page)) {
-                *pfn = page_to_pfn(page[0]);
+                *pfn = page_to_pfn(*page);
 
                 if (writable)
                         *writable = true;
@@ -2130,10 +2130,9 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * 1 indicates success, -errno is returned if error is detected.
  */
 static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
-                           bool *writable, kvm_pfn_t *pfn)
+                           bool *writable, kvm_pfn_t *pfn, struct page **page)
 {
         unsigned int flags = FOLL_HWPOISON;
-        struct page *page;
         int npages = 0;
 
         might_sleep();
@@ -2146,7 +2145,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
         if (async)
                 flags |= FOLL_NOWAIT;
 
-        npages = get_user_pages_unlocked(addr, 1, &page, flags);
+        npages = get_user_pages_unlocked(addr, 1, page, flags);
         if (npages != 1)
                 return npages;
 
@@ -2156,11 +2155,11 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 
                 if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) {
                         *writable = true;
-                        put_page(page);
-                        page = wpage;
+                        put_page(*page);
+                        *page = wpage;
                 }
         }
-        *pfn = page_to_pfn(page);
+        *pfn = page_to_pfn(*page);
         return npages;
 }
 
@@ -2175,13 +2174,6 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
         return true;
 }
 
-static int kvm_try_get_pfn(kvm_pfn_t pfn)
-{
-        if (kvm_is_reserved_pfn(pfn))
-                return 1;
-        return get_page_unless_zero(pfn_to_page(pfn));
-}
-
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
                                unsigned long addr, bool *async,
                                bool write_fault, bool *writable,
@@ -2221,26 +2213,6 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
                 *writable = pte_write(*ptep);
         pfn = pte_pfn(*ptep);
 
-        /*
-         * Get a reference here because callers of *hva_to_pfn* and
-         * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the
-         * returned pfn. This is only needed if the VMA has VM_MIXEDMAP
-         * set, but the kvm_get_pfn/kvm_release_pfn_clean pair will
-         * simply do nothing for reserved pfns.
-         *
-         * Whoever called remap_pfn_range is also going to call e.g.
-         * unmap_mapping_range before the underlying pages are freed,
-         * causing a call to our MMU notifier.
-         *
-         * Certain IO or PFNMAP mappings can be backed with valid
-         * struct pages, but be allocated without refcounting e.g.,
-         * tail pages of non-compound higher order allocations, which
-         * would then underflow the refcount when the caller does the
-         * required put_page. Don't allow those pages here.
-         */
-        if (!kvm_try_get_pfn(pfn))
-                r = -EFAULT;
-
 out:
         pte_unmap_unlock(ptep, ptl);
         *p_pfn = pfn;
@@ -2262,8 +2234,9 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
  */
-static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-                            bool write_fault, bool *writable)
+static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic,
+                            bool *async, bool write_fault, bool *writable,
+                            struct page **page)
 {
         struct vm_area_struct *vma;
         kvm_pfn_t pfn = 0;
@@ -2272,13 +2245,14 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
         /* we can do it either atomically or asynchronously, not both */
         BUG_ON(atomic && async);
 
-        if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
+        if (hva_to_pfn_fast(addr, write_fault, writable, &pfn, page))
                 return pfn;
 
         if (atomic)
                 return KVM_PFN_ERR_FAULT;
 
-        npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn);
+        npages = hva_to_pfn_slow(addr, async, write_fault, writable,
+                                 &pfn, page);
         if (npages == 1)
                 return pfn;
 
@@ -2310,12 +2284,14 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
         return pfn;
 }
 
-kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
-                               bool atomic, bool *async, bool write_fault,
-                               bool *writable, hva_t *hva)
+kvm_pfn_t __gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot,
+                                    gfn_t gfn, bool atomic, bool *async,
+                                    bool write_fault, bool *writable,
+                                    hva_t *hva, struct page **page)
 {
         unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
 
+        *page = NULL;
         if (hva)
                 *hva = addr;
 
@@ -2338,45 +2314,153 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
         }
 
         return hva_to_pfn(addr, atomic, async, write_fault,
-                          writable);
+                          writable, page);
+}
+EXPORT_SYMBOL_GPL(__gfn_to_pfn_page_memslot);
+
+kvm_pfn_t gfn_to_pfn_page_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
+                               bool *writable, struct page **page)
+{
+        return __gfn_to_pfn_page_memslot(gfn_to_memslot(kvm, gfn), gfn, false,
+                                         NULL, write_fault, writable, NULL,
+                                         page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page_prot);
+
+kvm_pfn_t gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
+                                  struct page **page)
+{
+        return __gfn_to_pfn_page_memslot(slot, gfn, false, NULL, true,
+                                         NULL, NULL, page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page_memslot);
+
+kvm_pfn_t gfn_to_pfn_page_memslot_atomic(struct kvm_memory_slot *slot,
+                                         gfn_t gfn, struct page **page)
+{
+        return __gfn_to_pfn_page_memslot(slot, gfn, true, NULL, true, NULL,
+                                         NULL, page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page_memslot_atomic);
+
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page_atomic(struct kvm_vcpu *vcpu, gfn_t gfn,
+                                          struct page **page)
+{
+        return gfn_to_pfn_page_memslot_atomic(
+                        kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, page);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_page_atomic);
+
+kvm_pfn_t gfn_to_pfn_page(struct kvm *kvm, gfn_t gfn, struct page **page)
+{
+        return gfn_to_pfn_page_memslot(gfn_to_memslot(kvm, gfn), gfn, page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page);
+
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+                                   struct page **page)
+{
+        return gfn_to_pfn_page_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn),
+                                       gfn, page);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_page);
+
+static kvm_pfn_t ensure_pfn_ref(struct page *page, kvm_pfn_t pfn)
+{
+        if (page || is_error_pfn(pfn) || kvm_is_reserved_pfn(pfn))
+                return pfn;
+
+        /*
+         * Certain IO or PFNMAP mappings can be backed with valid
+         * struct pages, but be allocated without refcounting e.g.,
+         * tail pages of non-compound higher order allocations, which
+         * would then underflow the refcount when the caller does the
+         * required put_page. Don't allow those pages here.
+         */
+        if (get_page_unless_zero(pfn_to_page(pfn)))
+                return pfn;
+
+        return KVM_PFN_ERR_FAULT;
+}
+
+kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
+                               bool atomic, bool *async, bool write_fault,
+                               bool *writable, hva_t *hva)
+{
+        struct page *page;
+        kvm_pfn_t pfn;
+
+        pfn = __gfn_to_pfn_page_memslot(slot, gfn, atomic, async,
+                                        write_fault, writable, hva, &page);
+
+        return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
                           bool *writable)
 {
-        return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
-                                    write_fault, writable, NULL);
+        struct page *page;
+        kvm_pfn_t pfn;
+
+        pfn = gfn_to_pfn_page_prot(kvm, gfn, write_fault, writable, &page);
+
+        return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-        return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
+        struct page *page;
+        kvm_pfn_t pfn;
+
+        pfn = gfn_to_pfn_page_memslot(slot, gfn, &page);
+
+        return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-        return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL);
+        struct page *page;
+        kvm_pfn_t pfn;
+
+        pfn = gfn_to_pfn_page_memslot_atomic(slot, gfn, &page);
+
+        return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
 
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-        return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
+        struct page *page;
+        kvm_pfn_t pfn;
+
+        pfn = kvm_vcpu_gfn_to_pfn_page_atomic(vcpu, gfn, &page);
+
+        return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic);
 
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
 {
-        return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn);
+        struct page *page;
+        kvm_pfn_t pfn;
+
+        pfn = gfn_to_pfn_page(kvm, gfn, &page);
+
+        return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn);
 
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-        return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
+        struct page *page;
+        kvm_pfn_t pfn;
+
+        pfn = kvm_vcpu_gfn_to_pfn_page(vcpu, gfn, &page);
+
+        return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
-- 
2.32.0.93.g670b81a890-goog
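
Usage sketch (not part of the patch): the following is a minimal, illustrative
example of the calling convention the new gfn_to_pfn_page API expects. The
function name example_translate_gfn and its error handling are made up here;
only kvm_vcpu_gfn_to_pfn_page(), is_error_noslot_pfn() and put_page() are real.
The point is that the struct page out parameter is non-NULL only when the hva
was resolved by gup, and only in that case does the caller own a page
reference that must be dropped. Unconverted callers keep the old behaviour
because the legacy wrappers above take that reference via ensure_pfn_ref().

        /* Illustrative only -- not part of this patch. */
        static kvm_pfn_t example_translate_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
        {
                struct page *page;
                kvm_pfn_t pfn;

                pfn = kvm_vcpu_gfn_to_pfn_page(vcpu, gfn, &page);
                if (is_error_noslot_pfn(pfn))
                        return pfn;

                /* ... install the pfn in the guest page tables ... */

                if (page)
                        put_page(page); /* gup path: drop the reference gup took */
                /* page == NULL: follow_pte path, no reference was taken */

                return pfn;
        }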