From: Dmitry Vyukov
Date: Thu, 9 Jun 2022 15:41:26 +0200
Subject: Re: [PATCH 5/8] perf/hw_breakpoint: Remove useless code related to flexible breakpoints
To: Marco Elver
Cc: Peter Zijlstra, Frederic Weisbecker, Ingo Molnar, Thomas Gleixner,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, linux-perf-users@vger.kernel.org,
	x86@kernel.org, linux-sh@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
References: <20220609113046.780504-1-elver@google.com> <20220609113046.780504-6-elver@google.com>
On Thu, 9 Jun 2022 at 14:04, Dmitry Vyukov wrote:
>
> On Thu, 9 Jun 2022 at 13:31, Marco Elver wrote:
> >
> > Flexible breakpoints have never been implemented, with
> > bp_cpuinfo::flexible always being 0. Unfortunately, they still occupy 4
> > bytes in each bp_cpuinfo and bp_busy_slots, as well as computing the max
> > flexible count in fetch_bp_busy_slots().
> >
> > This again causes suboptimal code generation, when we always know that
> > `!!slots.flexible` will be 0.
> >
> > Just get rid of the flexible "placeholder" and remove all real code
> > related to it. Make a note in the comment related to the constraints
> > algorithm but don't remove them from the algorithm, so that if in future
> > flexible breakpoints need supporting, it should be trivial to revive
> > them (along with reverting this change).
> >
> > Signed-off-by: Marco Elver
>
> Was added in 2009.
>
> Acked-by: Dmitry Vyukov
>
> > ---
> >  kernel/events/hw_breakpoint.c | 12 +++---------
> >  1 file changed, 3 insertions(+), 9 deletions(-)
> >
> > diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
> > index 5f40c8dfa042..afe0a6007e96 100644
> > --- a/kernel/events/hw_breakpoint.c
> > +++ b/kernel/events/hw_breakpoint.c
> > @@ -46,8 +46,6 @@ struct bp_cpuinfo {
> >  #else
> >  	unsigned int *tsk_pinned;
> >  #endif
> > -	/* Number of non-pinned cpu/task breakpoints in a cpu */
> > -	unsigned int flexible; /* XXX: placeholder, see fetch_this_slot() */
> >  };
> >
> >  static DEFINE_PER_CPU(struct bp_cpuinfo, bp_cpuinfo[TYPE_MAX]);
> > @@ -71,7 +69,6 @@ static bool constraints_initialized __ro_after_init;
> >  /* Gather the number of total pinned and un-pinned bp in a cpuset */
> >  struct bp_busy_slots {

Do we also want to remove this struct altogether? Now it becomes just an
int counter.

> >  	unsigned int pinned;
> > -	unsigned int flexible;
> >  };
> >
> >  /* Serialize accesses to the above constraints */
> > @@ -213,10 +210,6 @@ fetch_bp_busy_slots(struct bp_busy_slots *slots, struct perf_event *bp,
> >
> >  		if (nr > slots->pinned)
> >  			slots->pinned = nr;
> > -
> > -		nr = info->flexible;
> > -		if (nr > slots->flexible)
> > -			slots->flexible = nr;
> >  	}
> >  }
> >
> > @@ -299,7 +292,8 @@ __weak void arch_unregister_hw_breakpoint(struct perf_event *bp)
> >  }
> >
> >  /*
> > - * Constraints to check before allowing this new breakpoint counter:
> > + * Constraints to check before allowing this new breakpoint counter. Note that
> > + * flexible breakpoints are currently unsupported -- see fetch_this_slot().
> >   *
> >   * == Non-pinned counter == (Considered as pinned for now)
> >   *
> > @@ -366,7 +360,7 @@ static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type)
> >  	fetch_this_slot(&slots, weight);
> >
> >  	/* Flexible counters need to keep at least one slot */
> > -	if (slots.pinned + (!!slots.flexible) > hw_breakpoint_slots_cached(type))
> > +	if (slots.pinned > hw_breakpoint_slots_cached(type))
> >  		return -ENOSPC;
> >
> >  	ret = arch_reserve_bp_slot(bp);
> > --
> > 2.36.1.255.ge46751e96f-goog
> >
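
Something along these lines, perhaps (rough, untested sketch only; the
elided parts and the fetch_bp_busy_slots() parameter names are guessed
from the hunks quoted above, not taken from an actual patch):

/* bp_busy_slots removed; track only the max pinned count directly. */

static void
fetch_bp_busy_slots(unsigned int *max_pinned, struct perf_event *bp,
		    enum bp_type_idx type)
{
	/* ... walk the relevant CPUs as before, computing nr per CPU ... */
		if (nr > *max_pinned)
			*max_pinned = nr;
	/* ... */
}

static void fetch_this_slot(unsigned int *max_pinned, int weight)
{
	*max_pinned += weight;
}

static int __reserve_bp_slot(struct perf_event *bp, u64 bp_type)
{
	unsigned int max_pinned = 0;	/* was: struct bp_busy_slots slots */
	enum bp_type_idx type;
	int weight;
	int ret;

	/* ... type/weight validation unchanged ... */

	fetch_bp_busy_slots(&max_pinned, bp, type);
	fetch_this_slot(&max_pinned, weight);

	if (max_pinned > hw_breakpoint_slots_cached(type))
		return -ENOSPC;

	ret = arch_reserve_bp_slot(bp);
	/* ... rest unchanged ... */
}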