From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Pridhiviraj Paidipeddi, Anju T Sudhakar, Balbir Singh, Michael Ellerman
Subject: [PATCH 4.17 047/220] powerpc/perf: Fix memory allocation for core-imc based on num_possible_cpus()
Date: Sun, 1 Jul 2018 18:21:11 +0200
Message-Id: <20180701160910.284235427@linuxfoundation.org>
In-Reply-To: <20180701160908.272447118@linuxfoundation.org>
References: <20180701160908.272447118@linuxfoundation.org>

4.17-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Anju T Sudhakar

commit d2032678e57fc508d7878307badde8f89b632ba3 upstream.
Currently memory for core-imc is allocated based on cpu_present_mask, which has bit 'cpu' set iff that CPU is populated, and (cpu number / threads per core) is used as the array index to access the memory.

Under some circumstances firmware marks a CPU as GUARDed and boots the system anyway; until the errors are cleared, those CPUs remain unavailable on all subsequent boots. GUARDed CPUs are possible but not present from Linux's point of view, so the assumption that the maximum length of the allocation is bounded by the number of present CPUs breaks down: an online CPU's number can lie beyond the highest present CPU number because of the hole, and the (cpu number / threads per core) index then runs past the end of the array, overflowing the allocation.

Call trace observed during a guard test:

  Faulting instruction address: 0xc000000000149f1c
  cpu 0x69: Vector: 380 (Data Access Out of Range) at [c000003fea303420]
      pc: c000000000149f1c: prefetch_freepointer+0x14/0x30
      lr: c00000000014e0f8: __kmalloc+0x1a8/0x1ac
      sp: c000003fea3036a0
     msr: 9000000000009033
     dar: c9c54b2c91dbf6b7
    current = 0xc000003fea2c0000
    paca    = 0xc00000000fddd880   softe: 3   irq_happened: 0x01
      pid   = 1, comm = swapper/104
  Linux version 4.16.7-openpower1 (smc@smc-desktop) (gcc version 6.4.0 (Buildroot 2018.02.1-00006-ga8d1126)) #2 SMP Fri May 4 16:44:54 PDT 2018
  enter ? for help
  call trace:
   __kmalloc+0x1a8/0x1ac (unreliable)
   init_imc_pmu+0x7f4/0xbf0
   opal_imc_counters_probe+0x3fc/0x43c
   platform_drv_probe+0x48/0x80
   driver_probe_device+0x22c/0x308
   __driver_attach+0xa0/0xd8
   bus_for_each_dev+0x88/0xb4
   driver_attach+0x2c/0x40
   bus_add_driver+0x1e8/0x228
   driver_register+0xd0/0x114
   __platform_driver_register+0x50/0x64
   opal_imc_driver_init+0x24/0x38
   do_one_initcall+0x150/0x15c
   kernel_init_freeable+0x250/0x254
   kernel_init+0x1c/0x150
   ret_from_kernel_thread+0x5c/0xc8

Allocating memory for core-imc based on cpu_possible_mask, which has bit 'cpu' set iff that CPU is populatable, fixes this issue.
Reported-by: Pridhiviraj Paidipeddi
Signed-off-by: Anju T Sudhakar
Reviewed-by: Balbir Singh
Tested-by: Pridhiviraj Paidipeddi
Fixes: 39a846db1d57 ("powerpc/perf: Add core IMC PMU support")
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Michael Ellerman
Signed-off-by: Greg Kroah-Hartman
---
 arch/powerpc/perf/imc-pmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -1146,7 +1146,7 @@ static int init_nest_pmu_ref(void)
 
 static void cleanup_all_core_imc_memory(void)
 {
-	int i, nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
+	int i, nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
 	struct imc_mem_info *ptr = core_imc_pmu->mem_info;
 	int size = core_imc_pmu->counter_mem_size;
 
@@ -1264,7 +1264,7 @@ static int imc_mem_init(struct imc_pmu *
 	if (!pmu_ptr->pmu.name)
 		return -ENOMEM;
 
-	nr_cores = DIV_ROUND_UP(num_present_cpus(), threads_per_core);
+	nr_cores = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
 	pmu_ptr->mem_info = kcalloc(nr_cores, sizeof(struct imc_mem_info),
 				    GFP_KERNEL);