From: Anju T Sudhakar <anju@linux.vnet.ibm.com>
To: mpe@ellerman.id.au
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, maddy@linux.vnet.ibm.com, anju@linux.vnet.ibm.com
Subject: [PATCH] powerpc/perf: Fix core-imc hotplug callback failure during imc initialization
Date: Tue, 31 Oct 2017 15:19:58 +0530
X-Mailer: git-send-email 2.7.4
Message-Id: <1509443398-26539-1-git-send-email-anju@linux.vnet.ibm.com>

Call trace observed during boot:

[c000000ff38ffb80] c0000000002ddfac perf_pmu_migrate_context+0xac/0x470
[c000000ff38ffc40] c00000000011385c ppc_core_imc_cpu_offline+0x1ac/0x1e0
[c000000ff38ffc90] c000000000125758 cpuhp_invoke_callback+0x198/0x5d0
[c000000ff38ffd00] c00000000012782c cpuhp_thread_fun+0x8c/0x3d0
[c000000ff38ffd60] c0000000001678d0 smpboot_thread_fn+0x290/0x2a0
[c000000ff38ffdc0] c00000000015ee78 kthread+0x168/0x1b0
[c000000ff38ffe30] c00000000000b368 ret_from_kernel_thread+0x5c/0x74

While registering the cpuhotplug callbacks for core-imc, if we fail in
the cpuhotplug online path for any random core (either because the opal
call to initialize the core-imc counters fails, or because memory
allocation fails for that core), ppc_core_imc_cpu_offline() gets
invoked for the other cpus that returned successfully from the
cpuhotplug online path. But in the ppc_core_imc_cpu_offline() path we
try to migrate the event context while the core-imc counters are not
even initialized, producing the stack dump above.

Add a check in the cpuhotplug offline path to see whether the core-imc
counters are enabled before migrating the context, to handle this
failure scenario.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
---
 arch/powerpc/perf/imc-pmu.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index 8812624..08139f9 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -30,6 +30,7 @@
 static struct imc_pmu *per_nest_pmu_arr[IMC_MAX_PMUS];
 static cpumask_t nest_imc_cpumask;
 struct imc_pmu_ref *nest_imc_refc;
 static int nest_pmus;
+static bool core_imc_enabled;
 
 /* Core IMC data structures and variables */
@@ -607,6 +608,19 @@ static int ppc_core_imc_cpu_offline(unsigned int cpu)
 	if (!cpumask_test_and_clear_cpu(cpu, &core_imc_cpumask))
 		return 0;
 
+	/*
+	 * See if core imc counters are enabled or not.
+	 *
+	 * Suppose we reach here from core_imc_cpumask_init(),
+	 * since we failed at the cpuhotplug online path for any random
+	 * core (either because opal call to initialize the core-imc counters
+	 * failed or because memory allocation failed).
+	 * We need to check whether core imc counters are enabled or not before
+	 * migrating the event context from cpus in the other cores.
+	 */
+	if (!core_imc_enabled)
+		return 0;
+
 	/* Find any online cpu in that core except the current "cpu" */
 	ncpu = cpumask_any_but(cpu_sibling_mask(cpu), cpu);
 
@@ -1299,6 +1313,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
 			return ret;
 		}
 
+		core_imc_enabled = true;
 		break;
 	case IMC_DOMAIN_THREAD:
 		ret = thread_imc_cpu_init();
-- 
2.7.4