From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Greg Thelen, "Peter Zijlstra (Intel)"
Subject: [PATCH 4.14 242/251] perf/core: Avoid put_page() when GUP fails
Date: Wed, 24 Nov 2021 12:58:04 +0100
Message-Id: <20211124115718.746109359@linuxfoundation.org>
In-Reply-To: <20211124115710.214900256@linuxfoundation.org>
References: <20211124115710.214900256@linuxfoundation.org>

From: Greg Thelen

commit 4716023a8f6a0f4a28047f14dd7ebdc319606b84 upstream.

PEBS PERF_SAMPLE_PHYS_ADDR events use perf_virt_to_phys() to convert
PMU-sampled virtual addresses to physical addresses using
get_user_page_fast_only() and page_to_phys().

Some get_user_page_fast_only() error cases return false, indicating no
page reference was taken, but still initialize the output page pointer
with an unreferenced page. In these error cases perf_virt_to_phys()
calls put_page() anyway. This causes a page reference count underflow,
which can lead to unintentional page sharing.

Fix perf_virt_to_phys() to only put_page() if get_user_page_fast_only()
returns a referenced page.

Fixes: fc7ce9c74c3ad ("perf/core, x86: Add PERF_SAMPLE_PHYS_ADDR")
Signed-off-by: Greg Thelen
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20211111021814.757086-1-gthelen@google.com
Signed-off-by: Greg Kroah-Hartman
---
 kernel/events/core.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6110,7 +6110,6 @@ void perf_output_sample(struct perf_outp
 static u64 perf_virt_to_phys(u64 virt)
 {
 	u64 phys_addr = 0;
-	struct page *p = NULL;
 
 	if (!virt)
 		return 0;
@@ -6129,14 +6128,15 @@ static u64 perf_virt_to_phys(u64 virt)
 		 * If failed, leave phys_addr as 0.
 		 */
 		if (current->mm != NULL) {
+			struct page *p;
+
 			pagefault_disable();
-			if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
+			if (__get_user_pages_fast(virt, 1, 0, &p) == 1) {
 				phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
+				put_page(p);
+			}
 			pagefault_enable();
 		}
-
-		if (p)
-			put_page(p);
 	}
 
 	return phys_addr;
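
As an illustration of the failure mode described above, here is a minimal
userspace C sketch (not kernel code; struct mock_page, mock_gup_failure()
and mock_put_page() are hypothetical stand-ins for the GUP/put_page()
semantics in the commit message). It shows why unconditionally releasing a
pointer that a failed lookup merely initialized underflows the reference
count, while guarding the release on the lookup's return value does not:

/*
 * Minimal userspace sketch (not kernel code): mock_gup_failure(),
 * mock_put_page() and struct mock_page are hypothetical stand-ins for
 * the __get_user_pages_fast()/put_page() behaviour described above.
 */
#include <stdio.h>

struct mock_page {
	int refcount;			/* models the page's reference count */
};

static struct mock_page shared_page = { .refcount = 1 };	/* ref held by another user */

/*
 * Models the error case: report failure (0 pages pinned) but still
 * write a page pointer to the output argument without taking a ref.
 */
static int mock_gup_failure(struct mock_page **out)
{
	*out = &shared_page;
	return 0;
}

static void mock_put_page(struct mock_page *p)
{
	p->refcount--;			/* models put_page() dropping one reference */
}

int main(void)
{
	struct mock_page *p = NULL;
	struct mock_page *q;

	/* Old pattern: release any non-NULL pointer, even on failure. */
	(void)mock_gup_failure(&p);
	if (p)
		mock_put_page(p);	/* drops a reference this code never took */
	printf("old pattern:   refcount = %d (underflowed)\n", shared_page.refcount);

	/* Fixed pattern: release only when the lookup reported success. */
	shared_page.refcount = 1;
	if (mock_gup_failure(&q) == 1)
		mock_put_page(q);	/* not reached on failure */
	printf("fixed pattern: refcount = %d (unchanged)\n", shared_page.refcount);

	return 0;
}

Built as an ordinary C program, the first line prints a count below its
starting value, mirroring the underflow the patch removes; the second leaves
the count untouched because the release is guarded by the return value, just
as the hunk above only calls put_page() when __get_user_pages_fast() returns 1.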