From: Suzuki K Poulose
Date: Fri, 9 Jun 2023 10:54:33 +0100
Subject: Re: [PATCH V11 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
To: Anshuman Khandual, Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 will@kernel.org, catalin.marinas@arm.com, Mark Brown, James Clark,
 Rob Herring, Marc Zyngier, Peter Zijlstra, Ingo Molnar,
 Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Message-ID: <3f5f69e4-9324-152c-6581-8855a3dbb221@arm.com>
In-Reply-To: <78cb22e2-c46e-d62d-fefc-b7963737499e@arm.com>
References: <20230531040428.501523-1-anshuman.khandual@arm.com>
 <20230531040428.501523-6-anshuman.khandual@arm.com>
 <290b577c-4740-d2e2-d236-c8bbe2f907b9@arm.com>
 <78cb22e2-c46e-d62d-fefc-b7963737499e@arm.com>
List-ID: linux-kernel@vger.kernel.org

On 09/06/2023 05:00, Anshuman Khandual wrote:
>
>
> On 6/8/23 15:43, Suzuki K Poulose wrote:
>> On 06/06/2023 11:34, Anshuman Khandual wrote:
>>>
>>>
>>> On 6/5/23 17:35, Mark Rutland wrote:
>>>> On Wed, May 31, 2023 at 09:34:23AM +0530, Anshuman Khandual wrote:
>>>>> This enables support for branch stack sampling events in the ARMV8 PMU,
>>>>> checking has_branch_stack() on the event inside 'struct arm_pmu'
>>>>> callbacks, although these branch stack helpers armv8pmu_branch_XXXXX()
>>>>> are just dummy functions for now. While here, this also defines
>>>>> arm_pmu's sched_task() callback with armv8pmu_sched_task(), which
>>>>> resets the branch record buffer on a sched_in.
>>>>
>>>> This generally looks good, but I have a few comments below.
>>>>
>>>> [...]
>>>>
>>>>> +static inline bool armv8pmu_branch_valid(struct perf_event *event)
>>>>> +{
>>>>> +    WARN_ON_ONCE(!has_branch_stack(event));
>>>>> +    return false;
>>>>> +}
>>>>
>>>> IIUC this is for validating the attr, so could we please name this
>>>> armv8pmu_branch_attr_valid() ?
>>>
>>> Sure, will change the name and update the call sites.
>>>
>>>>
>>>> [...]
>>>>
>>>>> +static int branch_records_alloc(struct arm_pmu *armpmu)
>>>>> +{
>>>>> +    struct pmu_hw_events *events;
>>>>> +    int cpu;
>>>>> +
>>>>> +    for_each_possible_cpu(cpu) {
>>
>> Shouldn't this be supported_cpus ? i.e.
>>     for_each_cpu(cpu, &armpmu->supported_cpus) {
>>
>>
>>>>> +        events = per_cpu_ptr(armpmu->hw_events, cpu);
>>>>> +        events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
>>>>> +        if (!events->branches)
>>>>> +            return -ENOMEM;
>>
>> Do we need to free the allocated branches already ?
>
> This gets fixed in the next patch via per-cpu allocation. I will
> move and fold the code block in here. The updated function will look
> like the following.
>
> static int branch_records_alloc(struct arm_pmu *armpmu)
> {
>     struct branch_records __percpu *records;
>     int cpu;
>
>     records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
>     if (!records)
>         return -ENOMEM;
>
>     /*
>      * FIXME: Memory allocated via records gets completely
>      * consumed here, never required to be freed up later. Hence
>      * losing access to on stack 'records' is acceptable.
>      * Otherwise this alloc handle has to be saved somewhere.
>      */
>     for_each_possible_cpu(cpu) {
>         struct pmu_hw_events *events_cpu;
>         struct branch_records *records_cpu;
>
>         events_cpu = per_cpu_ptr(armpmu->hw_events, cpu);
>         records_cpu = per_cpu_ptr(records, cpu);
>         events_cpu->branches = records_cpu;
>     }
>     return 0;
> }
>
> Regarding the cpumask argument in for_each_cpu():
>
> - hw_events is a __percpu pointer in struct arm_pmu
>
>       pmu->hw_events = alloc_percpu_gfp(struct pmu_hw_events, GFP_KERNEL)
>
> - 'records' above is being allocated via alloc_percpu_gfp()
>
>       records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL)
>
> If the 'armpmu->supported_cpus' mask gets used instead of the possible
> cpu mask, would not there be some dangling per-cpu branch_records
> allocated areas that remain unassigned ?
> Assigning all of them back into hw_events should be harmless.

That's because you are using alloc_percpu for records ? With the current
proposed code, if there are two different arm_pmus on the system, you
would end up wasting 1 x per_cpu branch_records ? And if there are 3,
2 x per_cpu gets wasted ?

>
>>
>>>>> +    }
>>
>>
>> May be:
>>     int ret = 0;
>>
>>     for_each_cpu(cpu, &armpmu->supported_cpus) {
>>         events = per_cpu_ptr(armpmu->hw_events, cpu);
>>         events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
>>
>>         if (!events->branches) {
>>             ret = -ENOMEM;
>>             break;
>>         }
>>     }
>>
>>     if (!ret)
>>         return 0;
>>
>>     for_each_cpu(cpu, &armpmu->supported_cpus) {
>>         events = per_cpu_ptr(armpmu->hw_events, cpu);
>>         if (!events->branches)
>>             break;
>>         kfree(events->branches);
>>     }
>>     return ret;
>>
>>>>> +    return 0;
>>>>
>>>> This leaks memory if any allocation fails, and the next patch replaces this
>>>> code entirely.
>>>
>>> Okay.
>>>
>>>>
>>>> Please add this once in a working state. Either use the percpu allocation
>>>> trick in the next patch from the start, or have this kzalloc() with a
>>>> corresponding kfree() in an error path.
>>>
>>> I will change branch_records_alloc() as suggested in the next patch's thread
>>> and fold those changes here in this patch.
>>>
>>>>
>>>>>   }
>>>>>
>>>>>   static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>>> @@ -1145,12 +1162,24 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>>>       };
>>>>>       int ret;
>>>>>
>>>>> +    ret = armv8pmu_private_alloc(cpu_pmu);
>>>>> +    if (ret)
>>>>> +        return ret;
>>>>> +
>>>>>       ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>>>>                       __armv8pmu_probe_pmu,
>>>>>                       &probe, 1);
>>>>>       if (ret)
>>>>>           return ret;
>>>>>
>>>>> +    if (arm_pmu_branch_stack_supported(cpu_pmu)) {
>>>>> +        ret = branch_records_alloc(cpu_pmu);
>>>>> +        if (ret)
>>>>> +            return ret;
>>>>> +    } else {
>>>>> +        armv8pmu_private_free(cpu_pmu);
>>>>> +    }
>>>>
>>>> I see from the next patch that "private" is four ints, so please just add that
>>>> to struct arm_pmu under an ifdef CONFIG_ARM64_BRBE. That'll simplify this, and
>>>> if we end up needing more space in future we can consider factoring it out.
>>>
>>> struct arm_pmu {
>>>     ........................................
>>>     /* Implementation specific attributes */
>>>     void            *private;
>>> }
>>>
>>> The private pointer here creates an abstraction for a given pmu implementation
>>> to hide attribute details without making them known to the core arm pmu layer.
>>> Although adding an ifdef CONFIG_ARM64_BRBE solves the problem as mentioned
>>> above, it does break that abstraction. Currently the arm_pmu layer is aware
>>> of 'branch records' but not of BRBE in particular, which the driver adds
>>> later on. I suggest we should not break that abstraction.
>>>
>>> Instead, a global 'static struct brbe_hw_attr' in drivers/perf/arm_brbe.c
>>> can be initialized into arm_pmu->private during armv8pmu_branch_probe(),
>>> which will also solve the allocation-free problem.
>>> Also, similar helpers armv8pmu_task_ctx_alloc()/free() could be defined
>>> to manage the task context cache, i.e. arm_pmu->pmu.task_ctx_cache,
>>> independently.
>>>
>>> But then armv8pmu_task_ctx_alloc() can only be called after the pmu probe
>>> confirms arm_pmu->has_branch_stack.
>>>
>>>>
>>>>> +
>>>>>       return probe.present ? 0 : -ENODEV;
>>>>>   }
>>>>
>>>> It also seems odd to check probe.present *after* checking
>>>> arm_pmu_branch_stack_supported().
>>>
>>> I will reorganize as suggested below.
>>>
>>>>
>>>> With the allocation removed I think this can be written more clearly as:
>>>>
>>>> | static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>> | {
>>>> |         struct armv8pmu_probe_info probe = {
>>>> |                 .pmu = cpu_pmu,
>>>> |                 .present = false,
>>>> |         };
>>>> |         int ret;
>>>> |
>>>> |         ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>>> |                                     __armv8pmu_probe_pmu,
>>>> |                                     &probe, 1);
>>>> |         if (ret)
>>>> |                 return ret;
>>>> |
>>>> |         if (!probe.present)
>>>> |                 return -ENODEV;
>>>> |
>>>> |         if (arm_pmu_branch_stack_supported(cpu_pmu))
>>>> |                 ret = branch_records_alloc(cpu_pmu);
>>>> |
>>>> |         return ret;
>>>> | }
>>
>> Could we not simplify this as below and keep the abstraction, since we
>> already have it ?
>
> No, there is an allocation dependency before the smp call context.

Ok, I wasn't aware of that. Could we not read whatever we need to know
about the brbe in armv8pmu_probe_info and process it at the caller here?
And then do the private_alloc etc. as we need ?

Suzuki