Date: Fri, 31 Jul 2020 15:19:38 +0530
From: Srikar Dronamraju
To: Michael Ellerman
Cc: linuxppc-dev, LKML, Nicholas Piggin, Anton Blanchard, Oliver O'Halloran,
    Nathan Lynch, Michael Neuling, Gautham R Shenoy, Ingo Molnar,
    Peter Zijlstra, Valentin Schneider, Jordan Niethe
Subject: Re: [PATCH v4 08/10] powerpc/smp: Allocate cpumask only after searching thread group
Message-ID: <20200731094938.GA18776@linux.vnet.ibm.com>
Reply-To: Srikar Dronamraju
References: <20200727053230.19753-1-srikar@linux.vnet.ibm.com>
 <20200727053230.19753-9-srikar@linux.vnet.ibm.com>
 <87zh7g3yvk.fsf@mpe.ellerman.id.au>
In-Reply-To: <87zh7g3yvk.fsf@mpe.ellerman.id.au>

* Michael Ellerman [2020-07-31 17:52:15]:

> Srikar Dronamraju writes:
> > If allocated earlier and the search fails, then the cpumask needs to be
> > freed. However, cpu_l1_cache_map can be allocated after we search the
> > thread group.
>
> It's not freed anywhere AFAICS?
>

Yes, it is never freed. In fact, we never even check whether
zalloc_cpumask_var_node() fails. It is not just this cpumask: historically,
none of the existing cpumasks in arch/powerpc/kernel/smp.c are ever freed or
checked.

I did dig into this a bit, and it appears that ... (please do correct me if
I am wrong!)

powerpc uses cpumask_var_t for all of these percpu variables, and it does
not seem to enable CONFIG_CPUMASK_OFFSTACK even in the MAXSMP config. So,
from include/linux/cpumask.h:

	typedef struct cpumask cpumask_var_t[1];

and zalloc_cpumask_var_node() ends up being just cpumask_clear() (sketched
below).

So historically we seem to have assumed we are always
!CONFIG_CPUMASK_OFFSTACK, and hence that we need neither check the return
value nor free the masks.

I would welcome your comments on how we should handle this going forward,
but I would keep the behaviour unchanged for this patchset.

One question I have: if we are almost certainly always
!CONFIG_CPUMASK_OFFSTACK, should we migrate these percpu variables to
cpumask_t? As it stands, each percpu cpumask_var_t already embeds a full
NR_CPUS-sized cpumask rather than just a pointer to one.
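For reference, this is roughly what the !CONFIG_CPUMASK_OFFSTACK variants in
include/linux/cpumask.h reduce to (a paraphrased sketch of the header as I
read it, not a verbatim quote; the comments are mine):

	/*
	 * !CONFIG_CPUMASK_OFFSTACK: cpumask_var_t is an array of one
	 * struct cpumask, so the storage is embedded in the (percpu)
	 * variable itself rather than allocated separately.
	 */
	typedef struct cpumask cpumask_var_t[1];

	/* "Allocation" cannot fail; it only clears the embedded bitmap. */
	static inline bool zalloc_cpumask_var_node(cpumask_var_t *mask,
						   gfp_t flags, int node)
	{
		cpumask_clear(*mask);
		return true;
	}

	/* Freeing is a no-op. */
	static inline void free_cpumask_var(cpumask_var_t mask)
	{
	}

So with OFFSTACK disabled, skipping the return-value check and the free is
harmless today; it would only become a real leak or allocation-failure
hazard if CONFIG_CPUMASK_OFFSTACK were ever enabled on powerpc.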
> And even after this change there's still an error path that doesn't free
> it, isn't there?
>
> cheers
>
> > Cc: linuxppc-dev
> > Cc: LKML
> > Cc: Michael Ellerman
> > Cc: Nicholas Piggin
> > Cc: Anton Blanchard
> > Cc: Oliver O'Halloran
> > Cc: Nathan Lynch
> > Cc: Michael Neuling
> > Cc: Gautham R Shenoy
> > Cc: Ingo Molnar
> > Cc: Peter Zijlstra
> > Cc: Valentin Schneider
> > Cc: Jordan Niethe
> > Reviewed-by: Gautham R. Shenoy
> > Signed-off-by: Srikar Dronamraju
> > ---
> >  arch/powerpc/kernel/smp.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> > index 698000c7f76f..dab96a1203ec 100644
> > --- a/arch/powerpc/kernel/smp.c
> > +++ b/arch/powerpc/kernel/smp.c
> > @@ -797,10 +797,6 @@ static int init_cpu_l1_cache_map(int cpu)
> >  	if (err)
> >  		goto out;
> >  
> > -	zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu),
> > -				GFP_KERNEL,
> > -				cpu_to_node(cpu));
> > -
> >  	cpu_group_start = get_cpu_thread_group_start(cpu, &tg);
> >  
> >  	if (unlikely(cpu_group_start == -1)) {
> > @@ -809,6 +805,9 @@ static int init_cpu_l1_cache_map(int cpu)
> >  		goto out;
> >  	}
> >  
> > +	zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu),
> > +				GFP_KERNEL, cpu_to_node(cpu));
> > +
> >  	for (i = first_thread; i < first_thread + threads_per_core; i++) {
> >  		int i_group_start = get_cpu_thread_group_start(i, &tg);
> >
> > --
> > 2.17.1

-- 
Thanks and Regards
Srikar Dronamraju