Date: Mon, 4 Jul 2022 17:05:09 +0200
In-Reply-To: <20220704150514.48816-1-elver@google.com>
Message-Id: <20220704150514.48816-10-elver@google.com>
Mime-Version: 1.0
References: <20220704150514.48816-1-elver@google.com>
X-Mailer: git-send-email 2.37.0.rc0.161.g10f37bed90-goog
Subject: [PATCH v3 09/14] powerpc/hw_breakpoint: Avoid relying on caller synchronization
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland,
 Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov,
 Michael Ellerman, linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org,
 x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com,
 linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

Internal data structures (cpu_bps, task_bps) of powerpc's hw_breakpoint
implementation have relied on nr_bp_mutex serializing access to them.

Before overhauling synchronization of kernel/events/hw_breakpoint.c,
introduce 2 spinlocks to synchronize cpu_bps and task_bps respectively,
thus avoiding reliance on callers synchronizing powerpc's hw_breakpoint.

Reported-by: Dmitry Vyukov
Signed-off-by: Marco Elver
Acked-by: Dmitry Vyukov
---
v2:
* New patch.
---
 arch/powerpc/kernel/hw_breakpoint.c | 53 ++++++++++++++++++++++-------
 1 file changed, 40 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
index 2669f80b3a49..8db1a15d7acb 100644
--- a/arch/powerpc/kernel/hw_breakpoint.c
+++ b/arch/powerpc/kernel/hw_breakpoint.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include <linux/spinlock.h>
 #include
 #include
@@ -129,7 +130,14 @@ struct breakpoint {
 	bool ptrace_bp;
 };
 
+/*
+ * While kernel/events/hw_breakpoint.c does its own synchronization, we cannot
+ * rely on it safely synchronizing internals here; however, we can rely on it
+ * not requesting more breakpoints than available.
+ */
+static DEFINE_SPINLOCK(cpu_bps_lock);
 static DEFINE_PER_CPU(struct breakpoint *, cpu_bps[HBP_NUM_MAX]);
+static DEFINE_SPINLOCK(task_bps_lock);
 static LIST_HEAD(task_bps);
 
 static struct breakpoint *alloc_breakpoint(struct perf_event *bp)
@@ -174,7 +182,9 @@ static int task_bps_add(struct perf_event *bp)
 	if (IS_ERR(tmp))
 		return PTR_ERR(tmp);
 
+	spin_lock(&task_bps_lock);
 	list_add(&tmp->list, &task_bps);
+	spin_unlock(&task_bps_lock);
 	return 0;
 }
 
@@ -182,6 +192,7 @@ static void task_bps_remove(struct perf_event *bp)
 {
 	struct list_head *pos, *q;
 
+	spin_lock(&task_bps_lock);
 	list_for_each_safe(pos, q, &task_bps) {
 		struct breakpoint *tmp = list_entry(pos, struct breakpoint, list);
 
@@ -191,6 +202,7 @@ static void task_bps_remove(struct perf_event *bp)
 			break;
 		}
 	}
+	spin_unlock(&task_bps_lock);
 }
 
 /*
@@ -200,12 +212,17 @@ static void task_bps_remove(struct perf_event *bp)
 static bool all_task_bps_check(struct perf_event *bp)
 {
 	struct breakpoint *tmp;
+	bool ret = false;
 
+	spin_lock(&task_bps_lock);
 	list_for_each_entry(tmp, &task_bps, list) {
-		if (!can_co_exist(tmp, bp))
-			return true;
+		if (!can_co_exist(tmp, bp)) {
+			ret = true;
+			break;
+		}
 	}
-	return false;
+	spin_unlock(&task_bps_lock);
+	return ret;
 }
 
 /*
@@ -215,13 +232,18 @@ static bool all_task_bps_check(struct perf_event *bp)
 static bool same_task_bps_check(struct perf_event *bp)
 {
 	struct breakpoint *tmp;
+	bool ret = false;
 
+	spin_lock(&task_bps_lock);
 	list_for_each_entry(tmp, &task_bps, list) {
 		if (tmp->bp->hw.target == bp->hw.target &&
-			!can_co_exist(tmp, bp))
-			return true;
+			!can_co_exist(tmp, bp)) {
+			ret = true;
+			break;
+		}
 	}
-	return false;
+	spin_unlock(&task_bps_lock);
+	return ret;
 }
 
 static int cpu_bps_add(struct perf_event *bp)
@@ -234,6 +256,7 @@ static int cpu_bps_add(struct perf_event *bp)
 	if (IS_ERR(tmp))
 		return PTR_ERR(tmp);
 
+	spin_lock(&cpu_bps_lock);
 	cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu);
 	for (i = 0; i < nr_wp_slots(); i++) {
 		if (!cpu_bp[i]) {
@@ -241,6 +264,7 @@ static int cpu_bps_add(struct perf_event *bp)
 			break;
 		}
 	}
+	spin_unlock(&cpu_bps_lock);
 	return 0;
 }
 
@@ -249,6 +273,7 @@ static void cpu_bps_remove(struct perf_event *bp)
 	struct breakpoint **cpu_bp;
 	int i = 0;
 
+	spin_lock(&cpu_bps_lock);
 	cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu);
 	for (i = 0; i < nr_wp_slots(); i++) {
 		if (!cpu_bp[i])
@@ -260,19 +285,25 @@ static void cpu_bps_remove(struct perf_event *bp)
 			break;
 		}
 	}
+	spin_unlock(&cpu_bps_lock);
 }
 
 static bool cpu_bps_check(int cpu, struct perf_event *bp)
 {
 	struct breakpoint **cpu_bp;
+	bool ret = false;
 	int i;
 
+	spin_lock(&cpu_bps_lock);
 	cpu_bp = per_cpu_ptr(cpu_bps, cpu);
 	for (i = 0; i < nr_wp_slots(); i++) {
-		if (cpu_bp[i] && !can_co_exist(cpu_bp[i], bp))
-			return true;
+		if (cpu_bp[i] && !can_co_exist(cpu_bp[i], bp)) {
+			ret = true;
+			break;
+		}
 	}
-	return false;
+	spin_unlock(&cpu_bps_lock);
+	return ret;
 }
 
 static bool all_cpu_bps_check(struct perf_event *bp)
@@ -286,10 +317,6 @@ static bool all_cpu_bps_check(struct perf_event *bp)
 	return false;
 }
 
-/*
- * We don't use any locks to serialize accesses to cpu_bps or task_bps
- * because are already inside nr_bp_mutex.
- */
 int arch_reserve_bp_slot(struct perf_event *bp)
 {
 	int ret;
-- 
2.37.0.rc0.161.g10f37bed90-goog
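
For readers following the series, the pattern this patch applies to task_bps
and cpu_bps can be summarised as: each shared structure now takes its own
internal lock around every access instead of assuming the caller holds
nr_bp_mutex, and loops that used to return from inside the iteration now
record a result and break so the lock is always dropped on a single exit
path. The snippet below is only an illustrative userspace analogue of that
pattern (pthread spinlock; names such as task_list, task_add and task_check
are invented for the example), not kernel code and not part of this patch:

/* Illustrative userspace analogue only; not part of the patch. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	int id;
	struct node *next;
};

/* The list protects itself; callers need no external mutex. */
static pthread_spinlock_t task_list_lock;
static struct node *task_list;

static int task_add(int id)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	n->id = id;

	pthread_spin_lock(&task_list_lock);
	n->next = task_list;
	task_list = n;
	pthread_spin_unlock(&task_list_lock);
	return 0;
}

/* Record the result and break instead of returning with the lock held. */
static bool task_check(int id)
{
	struct node *n;
	bool ret = false;

	pthread_spin_lock(&task_list_lock);
	for (n = task_list; n; n = n->next) {
		if (n->id == id) {
			ret = true;
			break;
		}
	}
	pthread_spin_unlock(&task_list_lock);
	return ret;
}

int main(void)
{
	pthread_spin_init(&task_list_lock, PTHREAD_PROCESS_PRIVATE);
	task_add(1);
	printf("found 1: %s\n", task_check(1) ? "yes" : "no");
	pthread_spin_destroy(&task_list_lock);
	return 0;
}

Recording the result and breaking, rather than returning with the lock held,
is the same structure the patch gives all_task_bps_check(),
same_task_bps_check() and cpu_bps_check().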