From: Madhavan Srinivasan
To: Preeti U Murthy, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Peter Zijlstra, rusty@rustcorp.com.au, Stephane Eranian, Paul Mackerras, Anton Blanchard, Sukadev Bhattiprolu, Ingo Molnar, Anshuman Khandual
Subject: Re: [PATCH v2 7/7] powerpc/powernv: nest pmu cpumask and cpu hotplug support
Date: Mon, 22 Jun 2015 14:45:08 +0530
Message-ID: <5587D21C.8080303@linux.vnet.ibm.com>
In-Reply-To: <557FC1F9.5010607@linux.vnet.ibm.com>

On Tuesday 16 June 2015 11:58 AM, Preeti U Murthy wrote:
> On 06/11/2015 10:47 AM, Madhavan Srinivasan wrote:
>> Adds cpumask attribute to be used by each nest pmu since nest
>> units are per-chip. Only one cpu (first online cpu) from each node/chip
>> is designated to read counters.
>>
>> On cpu hotplug, dying cpu is checked to see whether it is one of the
>> designated cpus, if yes, next online cpu from the same node/chip is
>> designated as new cpu to read counters.
>>
>> Cc: Michael Ellerman
>> Cc: Benjamin Herrenschmidt
>> Cc: Paul Mackerras
>> Cc: Anton Blanchard
>> Cc: Sukadev Bhattiprolu
>> Cc: Anshuman Khandual
>> Cc: Stephane Eranian
>> Cc: Preeti U Murthy
>> Cc: Ingo Molnar
>> Cc: Peter Zijlstra
>> Signed-off-by: Madhavan Srinivasan
>> ---
>> +static void nest_change_cpu_context(int old_cpu, int new_cpu)
>> +{
>> +	int i;
>> +
>> +	if (new_cpu >= 0) {
>> +		for (i = 0; per_nest_pmu_arr[i] != NULL; i++)
>> +			perf_pmu_migrate_context(&per_nest_pmu_arr[i]->pmu,
>> +						 old_cpu, new_cpu);
>> +	}
>> +}
>> +
>> +static void nest_exit_cpu(int cpu)
>> +{
>> +	int i, nid, target = -1;
>> +	const struct cpumask *l_cpumask;
>> +	int src_chipid;
>> +
>> +	/*
>> +	 * Check in the designated list for this cpu. Dont bother
>> +	 * if not one of them.
>> +	 */
>> +	if (!cpumask_test_and_clear_cpu(cpu, &cpu_mask_nest_pmu))
>> +		return;
>> +
>> +	/*
>> +	 * Now that this cpu is one of the designated,
>> +	 * find a new cpu a) which is not dying and
>
> This comment is not right. nest_exit_cpu() is called in the hotplug
> path, so another cpu cannot be dying in parallel. Hotplug operations are
> done serially. The comment ought to be "a) which is online" instead.

Ok, will change it.

>> +	 * b) is in same node/chip.
>
> node is not the same as a chip right ? And you are interested in cpus on
> the same chip alone. So shouldn't the above comment be b) in the same chip ?

I was hoping they would be the same, but I will change the comment to say chip.

>> +	 */
>> +	nid = cpu_to_node(cpu);
>> +	src_chipid = topology_physical_package_id(cpu);
>
> What is the relation between a node and a chip ? Can't we have a
> function which returns the cpumask of a chip straight away, since that
> is what you seem to be more interested in ? You can then simply choose
> the next cpu in this cpumask rather than going through the below loop.

Makes sense. I can separate it out.
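Something along these lines, perhaps (untested sketch; it assumes
topology_core_cpumask() gives us the per-chip mask on powerpc, otherwise
I will add a small helper that does):

static void nest_exit_cpu(int cpu)
{
	int i, target = -1;

	/* Don't bother if this cpu is not one of the designated cpus */
	if (!cpumask_test_and_clear_cpu(cpu, &cpu_mask_nest_pmu))
		return;

	/* Pick another online cpu from the same chip */
	for_each_cpu_and(i, topology_core_cpumask(cpu), cpu_online_mask) {
		if (i == cpu)
			continue;
		target = i;
		break;
	}

	/* Hand over the designated-cpu duty and migrate the events */
	if (target >= 0) {
		cpumask_set_cpu(target, &cpu_mask_nest_pmu);
		nest_change_cpu_context(cpu, target);
	}
}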
>> +	l_cpumask = cpumask_of_node(nid);
>> +	for_each_cpu(i, l_cpumask) {
>> +		if (i == cpu)
>> +			continue;
>> +		if (src_chipid == topology_physical_package_id(i)) {
>> +			target = i;
>> +			break;
>> +		}
>> +	}
>> +
>> +	/*
>> +	 * Update the cpumask with the target cpu and
>> +	 * migrate the context if needed
>> +	 */
>> +	if (target >= 0) {
>
> You already check for target >= 0 here. So you don't need to check for
> new_cpu >= 0 in nest_change_cpu_context() above ?

I guess I was way too cautious :) Will change it.

>> +		cpumask_set_cpu(target, &cpu_mask_nest_pmu);
>> +		nest_change_cpu_context (cpu, target);
>> +	}
>> +}
>> +
>> +static void nest_init_cpu(int cpu)
>> +{
>> +	int i, src_chipid;
>> +
>> +	/*
>> +	 * Search for any existing designated thread from
>> +	 * the incoming cpu's node/chip. If found, do nothing.
>> +	 */
>> +	src_chipid = topology_physical_package_id(cpu);
>> +	for_each_cpu(i, &cpu_mask_nest_pmu)
>> +		if (src_chipid == topology_physical_package_id(i))
>> +			return;
>> +
>> +	/*
>> +	 * Make incoming cpu as a designated thread for
>> +	 * this node/chip
>> +	 */
>> +	cpumask_set_cpu(cpu, &cpu_mask_nest_pmu);
>
> Why can't we simply check if cpu is the first one coming online in the
> chip and designate it as the cpu_mask_nest_pmu for that chip ? If it is
> not the first cpu, it means that another cpu in the same chip already
> took over this duty and it needn't bother.

That looks right. Let me try it out.

> And shouldn't we also call nest_init() on this cpu, just like you do in
> cpumask_chip() on all cpu_mask_nest_pmu cpus ?

Yes, I missed that. We should init it there. Nice catch.
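For nest_init_cpu(), I am thinking of something roughly like this (again
an untested sketch, with the same assumption about topology_core_cpumask()
as above; nest_init() is invoked through the same smp_call_func_t cast used
at boot time, since the notifier will not necessarily run on the incoming
cpu itself):

static void nest_init_cpu(int cpu)
{
	int i;

	/*
	 * If any other cpu on this chip is already online, the chip
	 * already has a designated cpu; nothing to do.
	 */
	for_each_cpu_and(i, topology_core_cpumask(cpu), cpu_online_mask)
		if (i != cpu)
			return;

	/* First online cpu on the chip: designate it ... */
	cpumask_set_cpu(cpu, &cpu_mask_nest_pmu);

	/* ... and initialize the nest pmu on it, as done at boot time */
	smp_call_function_single(cpu, (smp_call_func_t)nest_init, NULL, 1);
}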
>> +}
>> +
>> +static int nest_cpu_notifier(struct notifier_block *self,
>> +				unsigned long action, void *hcpu)
>> +{
>> +	long cpu = (long)hcpu;
>> +
>> +	switch (action & ~CPU_TASKS_FROZEN) {
>> +	case CPU_DOWN_FAILED:
>
> Why do we need to handle the DOWN_FAILED case ? In DOWN_PREPARE, you
> have ensured that the function moves on to another cpu. So even if the
> offline failed, its no issue. The duty is safely taken over.
>
>> +	case CPU_STARTING:
>
> I would suggest initializing nest in the CPU_ONLINE stage. This is
> because CPU_STARTING phase can fail. In that case, we will be
> unnecessarily initializing nest pre-maturely. CPU_ONLINE phase assures
> that the cpu is successfully online and you can then initiate nest.

Ok, sure. Will do that.

>> +		nest_init_cpu(cpu);
>> +		break;
>> +	case CPU_DOWN_PREPARE:
>> +		nest_exit_cpu(cpu);
>> +		break;
>> +	default:
>> +		break;
>> +	}
>> +
>> +	return NOTIFY_OK;
>> +}
>> +
>> +static struct notifier_block nest_cpu_nb = {
>> +	.notifier_call = nest_cpu_notifier,
>> +	.priority = CPU_PRI_PERF + 1,
>> +};
>> +
>> +void cpumask_chip(void)
>
> This name ^^ is not apt IMO. You are initiating the cpumask necessary
> for nest pmu. So why not call it nest_pmu_cpumask_init() ?

Ok.

>> +{
>> +	const struct cpumask *l_cpumask;
>> +	int cpu, nid;
>> +
>> +	if (!cpumask_empty(&cpu_mask_nest_pmu))
>
> When can this condition become true ?

My bad. This code carried over from the initial RFC patch, but after
reviewing it again, I don't think the check is needed. Once again, nice catch.

>> +		return;
>> +
>> +	cpu_notifier_register_begin();
>> +
>> +	/*
>> +	 * Nest PMUs are per-chip counters. So designate a cpu
>> +	 * from each node/chip for counter collection.
>> +	 */
>> +	for_each_online_node(nid) {
>> +		l_cpumask = cpumask_of_node(nid);
>> +
>> +		/* designate first online cpu in this node */
>> +		cpu = cpumask_first(l_cpumask);
>> +		cpumask_set_cpu(cpu, &cpu_mask_nest_pmu);
>> +	}
>> +
>> +	/* Initialize Nest PMUs in each node using designated cpus */
>> +	on_each_cpu_mask(&cpu_mask_nest_pmu, (smp_call_func_t)nest_init, NULL, 1);
>> +
>> +	__register_cpu_notifier(&nest_cpu_nb);
>> +
>> +	cpu_notifier_register_done();
>> +}
>
> Regards
> Preeti U Murthy
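Putting your comments together, the hotplug/registration path would then
look roughly like this (again an untested sketch: the function is renamed
to nest_pmu_cpumask_init() as you suggest, the cpumask_empty() check is
dropped, and init moves to the CPU_ONLINE stage; I still need to convince
myself whether DOWN_FAILED needs handling, so it is left out here):

static int nest_cpu_notifier(struct notifier_block *self,
				unsigned long action, void *hcpu)
{
	long cpu = (long)hcpu;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
		/* Designate and init only if this chip has no designated cpu */
		nest_init_cpu(cpu);
		break;
	case CPU_DOWN_PREPARE:
		/* Hand the duty over to another online cpu on the chip */
		nest_exit_cpu(cpu);
		break;
	default:
		break;
	}

	return NOTIFY_OK;
}

void nest_pmu_cpumask_init(void)
{
	const struct cpumask *l_cpumask;
	int cpu, nid;

	cpu_notifier_register_begin();

	/*
	 * Nest PMUs are per-chip counters, so designate the first
	 * online cpu of each node/chip for counter collection.
	 */
	for_each_online_node(nid) {
		l_cpumask = cpumask_of_node(nid);
		cpu = cpumask_first(l_cpumask);
		cpumask_set_cpu(cpu, &cpu_mask_nest_pmu);
	}

	/* Initialize the nest pmus on the designated cpus */
	on_each_cpu_mask(&cpu_mask_nest_pmu, (smp_call_func_t)nest_init, NULL, 1);

	__register_cpu_notifier(&nest_cpu_nb);

	cpu_notifier_register_done();
}

With nest_init_cpu() itself checking whether the chip already has a
designated cpu, the boot-time loop stays as it is and the notifier is
reduced to the two cases above.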