Subject: Re: [PATCH v9 07/10] powerpc/perf: PMU functions for Core IMC and hotplugging
To: Thomas Gleixner
Cc: mpe@ellerman.id.au, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, ego@linux.vnet.ibm.com, bsingharora@gmail.com, anton@samba.org, sukadev@linux.vnet.ibm.com, mikey@neuling.org, stewart@linux.vnet.ibm.com, dja@axtens.net, eranian@google.com, hemant@linux.vnet.ibm.com, maddy@linux.vnet.ibm.com
From: Anju T Sudhakar
Date: Wed, 7 Jun 2017 11:14:49 +0530
Message-Id: <692e037d-5367-208f-ac6f-8fd43e1643ba@linux.vnet.ibm.com>

Hi Thomas,

On Tuesday 06 June 2017 03:39 PM, Thomas Gleixner wrote:
> On Mon, 5 Jun 2017, Anju T Sudhakar wrote:
>> +static void cleanup_all_core_imc_memory(struct imc_pmu *pmu_ptr)
>> +{
>> +        struct imc_mem_info *ptr = pmu_ptr->mem_info;
>> +
>> +        if (!ptr)
>> +                return;
> That's pointless.

No, it is not. We may end up here from imc_mem_init() when the memory
allocation for pmu_ptr->mem_info fails. So in that case we can just
return from here, and kfree won't be called with a NULL pointer.

>> +        for (; ptr; ptr++) {
>
>        for (ptr = pmu_ptr->mem_info; ptr; ptr++) {
>
> will do the right thing.
>
>> +                if (ptr->vbase[0] != 0)
>> +                        free_pages(ptr->vbase[0], 0);
>> +        }
> and kfree can be called with a NULL pointer.
>
>> +        kfree(pmu_ptr->mem_info);
>> +}
>> +
>> +/* Enabling of Core Engine needs a scom operation */
>> +static void core_imc_control_enable(int *cpu_opal_rc)
>> +{
>> +        int rc;
>> +
>> +        rc = opal_imc_counters_start(OPAL_IMC_COUNTERS_CORE);
>> +        if (rc)
>> +                cpu_opal_rc[smp_processor_id()] = 1;
>> +}
>> +
>> +/*
>> + * Disabling of IMC Core Engine needs a scom operation
>> + */
>> +static void core_imc_control_disable(int *cpu_opal_rc)
>> +{
>> +        int rc;
>> +
>> +        rc = opal_imc_counters_stop(OPAL_IMC_COUNTERS_CORE);
>> +        if (rc)
>> +                cpu_opal_rc[smp_processor_id()] = 1;
>> +}
>> +
>> +int core_imc_control(int operation)
>> +{
>> +        int cpu, *cpus_opal_rc;
>> +
>> +        /* Memory for OPAL call return value. */
>> +        cpus_opal_rc = kzalloc((sizeof(int) * nr_cpu_ids), GFP_KERNEL);
>> +        if (!cpus_opal_rc)
>> +                return -ENOMEM;
>> +
>> +        /*
>> +         * Enable or disable the core IMC PMU on each core using the
>> +         * core_imc_cpumask.
>> +         */
>> +        switch (operation) {
>> +
>> +        case IMC_COUNTER_DISABLE:
>> +                on_each_cpu_mask(&core_imc_cpumask,
>> +                                 (smp_call_func_t)core_imc_control_disable,
>> +                                 cpus_opal_rc, 1);
>> +                break;
>> +        case IMC_COUNTER_ENABLE:
>> +                on_each_cpu_mask(&core_imc_cpumask,
>> +                                 (smp_call_func_t)core_imc_control_enable,
>> +                                 cpus_opal_rc, 1);
>> +                break;
>> +        default:
>> +                goto fail;
>> +        }
>> +        /* Check return value array for any OPAL call failure */
>> +        for_each_cpu(cpu, &core_imc_cpumask) {
>> +                if (cpus_opal_rc[cpu])
>> +                        goto fail;
>> +        }
>> +        return 0;
>> +fail:
>> +        if (cpus_opal_rc)
>> +                kfree(cpus_opal_rc);
>> +        return -EINVAL;
>> +}
> Interesting. You copied the other function and then implemented it
> differently.
>
> But, what's worse is that this is just duplicated code for no value. Both
> the nest and core control functions call
>
>        opal_imc_counters_xxxx(WHICH_COUNTER);
>
> So the obvious thing to do is:
>
> static struct cpumask imc_result_mask;
> static DEFINE_MUTEX(imc_control_mutex);
>
> static void opal_imc_XXX(void *which)
> {
>        if (opal_imc_counters_XXX((unsigned long) which))
>                cpumask_set_cpu(smp_processor_id(), &imc_result_mask);
> }
>
> static int imc_control(unsigned long which, bool start)
> {
>        smp_call_func_t *fn;
>        int res;
>
>        mutex_lock(&imc_control_mutex);
>        memset(&imc_result_mask, 0, sizeof(imc_result_mask));
>
>        switch (which) {
>        case OPAL_IMC_COUNTERS_CORE:
>        case OPAL_IMC_COUNTERS_NEST:
>                break;
>        default:
>                res = -EINVAL;
>                goto out;
>        }
>
>        fn = start ? opal_imc_start : opal_imc_stop;
>
>        on_each_cpu_mask(&core_imc_cpumask, fn, (void *) which, 1);
>
>        res = cpumask_empty(&nest_imc_result) ? 0 : -ENODEV;
> out:
>        mutex_unlock(&imc_control_mutex);
>        return res;
>
> That would allow sharing of too much code, right?

Yes. This looks really good. Thanks!
I will rework the code.

>> +/*
>> + * core_imc_mem_init : Initializes memory for the current core.
>> + *
>> + * Uses alloc_pages_exact_nid() and uses the returned address as an argument to
>> + * an opal call to configure the pdbar. The address sent as an argument is
>> + * converted to physical address before the opal call is made. This is the
>> + * base address at which the core imc counters are populated.
>> + */
>> +static int __meminit core_imc_mem_init(int cpu, int size)
>> +{
>> +        int phys_id, rc = 0, core_id = (cpu / threads_per_core);
>> +        struct imc_mem_info *mem_info = NULL;
>
> What's that NULL initialization for?
>
>> +
>> +        phys_id = topology_physical_package_id(cpu);
>> +        /*
>> +         * alloc_pages_exact_nid() will allocate memory for core in the
>> +         * local node only.
>> +         */
>> +        mem_info = &core_imc_pmu->mem_info[core_id];
>> +        mem_info->id = core_id;
>> +        mem_info->vbase[0] = (u64) alloc_pages_exact_nid(phys_id,
>> +                                (size_t)size, GFP_KERNEL | __GFP_ZERO);
>
> That allocation is guaranteed not to fail?

Nice catch. We need a check here. Will fix this.


Thanks and Regards,
Anju
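
For reference, a minimal sketch of how the quoted core_imc_mem_init()
fragment could look with the two review points addressed (the redundant
NULL initialization dropped and the allocation failure checked). Names
follow the quoted patch; the part after the allocation is elided because
it is not quoted here, and the actual rework may of course differ:

static int __meminit core_imc_mem_init(int cpu, int size)
{
        int phys_id, rc = 0, core_id = (cpu / threads_per_core);
        struct imc_mem_info *mem_info;

        phys_id = topology_physical_package_id(cpu);
        /*
         * alloc_pages_exact_nid() allocates the counter memory on the
         * local node only.
         */
        mem_info = &core_imc_pmu->mem_info[core_id];
        mem_info->id = core_id;
        mem_info->vbase[0] = (u64) alloc_pages_exact_nid(phys_id,
                                (size_t)size, GFP_KERNEL | __GFP_ZERO);
        /* Bail out instead of assuming the allocation always succeeds. */
        if (!mem_info->vbase[0])
                return -ENOMEM;

        /* pdbar/OPAL configuration of the base address follows here
         * (not quoted in this mail). */
        return rc;
}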