From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Yang Jihong, "Peter Zijlstra (Intel)", Sasha Levin
Subject: [PATCH 4.19 06/62] perf/core: Fix data race between perf_event_set_output() and perf_mmap_close()
Date: Wed, 27 Jul 2022 18:10:15 +0200
Message-Id: <20220727161004.416583079@linuxfoundation.org>
In-Reply-To: <20220727161004.175638564@linuxfoundation.org>
References: <20220727161004.175638564@linuxfoundation.org>
X-Mailer: git-send-email 2.37.1
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Peter Zijlstra

[ Upstream commit 68e3c69803dada336893640110cb87221bb01dcf ]

Yang Jihong reported a race between perf_event_set_output() and
perf_mmap_close():

	CPU1					CPU2

	perf_mmap_close(e2)
	  if (atomic_dec_and_test(&e2->rb->mmap_count)) // 1 -> 0
	    detach_rest = true

					ioctl(e1, IOC_SET_OUTPUT, e2)
					  perf_event_set_output(e1, e2)

	  ...
	  list_for_each_entry_rcu(e, &e2->rb->event_list, rb_entry)
	    ring_buffer_attach(e, NULL);
	    // e1 isn't yet added and
	    // therefore not detached

					    ring_buffer_attach(e1, e2->rb)
					      list_add_rcu(&e1->rb_entry,
							   &e2->rb->event_list)

After this, e1 is attached to an unmapped rb and a subsequent
perf_mmap() will loop forever more:

	again:
		mutex_lock(&e->mmap_mutex);
		if (event->rb) {
			...
			if (!atomic_inc_not_zero(&e->rb->mmap_count)) {
				...
				mutex_unlock(&e->mmap_mutex);
				goto again;
			}
		}

The loop in perf_mmap_close() holds e2->mmap_mutex, while the attach
in perf_event_set_output() holds e1->mmap_mutex. As such there is no
serialization to avoid this race.
Change perf_event_set_output() to take both e1->mmap_mutex and
e2->mmap_mutex to alleviate that problem. Additionally, have the loop
in perf_mmap() detach the rb directly; this avoids having to wait for
the concurrent perf_mmap_close() to get around to doing it to make
progress.

Fixes: 9bb5d40cd93c ("perf: Fix mmap() accounting hole")
Reported-by: Yang Jihong
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: Yang Jihong
Link: https://lkml.kernel.org/r/YsQ3jm2GR38SW7uD@worktop.programming.kicks-ass.net
Signed-off-by: Sasha Levin
---
 kernel/events/core.c | 45 ++++++++++++++++++++++++++++++--------------
 1 file changed, 31 insertions(+), 14 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 88dd1398ae88..ba66ea3ca705 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5719,10 +5719,10 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 
 		if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
 			/*
-			 * Raced against perf_mmap_close() through
-			 * perf_event_set_output(). Try again, hope for better
-			 * luck.
+			 * Raced against perf_mmap_close(); remove the
+			 * event and try again.
 			 */
+			ring_buffer_attach(event, NULL);
 			mutex_unlock(&event->mmap_mutex);
 			goto again;
 		}
@@ -10396,14 +10396,25 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
 	goto out;
 }
 
+static void mutex_lock_double(struct mutex *a, struct mutex *b)
+{
+	if (b < a)
+		swap(a, b);
+
+	mutex_lock(a);
+	mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
+}
+
 static int
 perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 {
 	struct ring_buffer *rb = NULL;
 	int ret = -EINVAL;
 
-	if (!output_event)
+	if (!output_event) {
+		mutex_lock(&event->mmap_mutex);
 		goto set;
+	}
 
 	/* don't allow circular references */
 	if (event == output_event)
@@ -10441,8 +10452,15 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 	    event->pmu != output_event->pmu)
 		goto out;
 
+	/*
+	 * Hold both mmap_mutex to serialize against perf_mmap_close().  Since
+	 * output_event is already on rb->event_list, and the list iteration
+	 * restarts after every removal, it is guaranteed this new event is
+	 * observed *OR* if output_event is already removed, it's guaranteed we
+	 * observe !rb->mmap_count.
+	 */
+	mutex_lock_double(&event->mmap_mutex, &output_event->mmap_mutex);
 set:
-	mutex_lock(&event->mmap_mutex);
 	/* Can't redirect output if we've got an active mmap() */
 	if (atomic_read(&event->mmap_count))
 		goto unlock;
@@ -10452,6 +10470,12 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 		rb = ring_buffer_get(output_event);
 		if (!rb)
 			goto unlock;
+
+		/* did we race against perf_mmap_close() */
+		if (!atomic_read(&rb->mmap_count)) {
+			ring_buffer_put(rb);
+			goto unlock;
+		}
 	}
 
 	ring_buffer_attach(event, rb);
@@ -10459,20 +10483,13 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 	ret = 0;
unlock:
 	mutex_unlock(&event->mmap_mutex);
+	if (output_event)
+		mutex_unlock(&output_event->mmap_mutex);
out:
 	return ret;
 }
 
-static void mutex_lock_double(struct mutex *a, struct mutex *b)
-{
-	if (b < a)
-		swap(a, b);
-
-	mutex_lock(a);
-	mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
-}
-
 static int perf_event_set_clock(struct perf_event *event, clockid_t clk_id)
 {
 	bool nmi_safe = false;
-- 
2.35.1