From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dmitry Monakhov,
    Peter Zijlstra, Ravi Bangoria, Sasha Levin
Subject: [PATCH 5.4 130/411] perf/amd/ibs: Use interrupt regs ip for stack unwinding
Date: Mon, 13 Jun 2022 12:06:43 +0200
Message-Id: <20220613094932.570497390@linuxfoundation.org>
In-Reply-To: <20220613094928.482772422@linuxfoundation.org>
References: <20220613094928.482772422@linuxfoundation.org>

From: Ravi Bangoria

[ Upstream commit 3d47083b9ff46863e8374ad3bb5edb5e464c75f8 ]

IbsOpRip is recorded when the IBS interrupt is triggered, but there is a
skid between the time the IBS interrupt is triggered and the time it is
presented to the core. The processor keeps executing in the meantime, so
IbsOpRip ends up inconsistent with the rsp and rbp recorded as part of
the interrupt regs. This breaks stack unwinding with the ORC unwinder,
which needs a consistent rip, rsp and rbp.

Fix this by using the rip from the interrupt regs instead of IbsOpRip
for stack unwinding.
Fixes: ee9f8fce99640 ("x86/unwind: Add the ORC unwinder")
Reported-by: Dmitry Monakhov
Suggested-by: Peter Zijlstra
Signed-off-by: Ravi Bangoria
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20220429051441.14251-1-ravi.bangoria@amd.com
Signed-off-by: Sasha Levin
---
 arch/x86/events/amd/ibs.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index b7baaa973317..2e930d8c04d9 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -312,6 +312,16 @@ static int perf_ibs_init(struct perf_event *event)
 	hwc->config_base = perf_ibs->msr;
 	hwc->config = config;
 
+	/*
+	 * rip recorded by IbsOpRip will not be consistent with rsp and rbp
+	 * recorded as part of interrupt regs. Thus we need to use rip from
+	 * interrupt regs while unwinding call stack. Setting _EARLY flag
+	 * makes sure we unwind call-stack before perf sample rip is set to
+	 * IbsOpRip.
+	 */
+	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+		event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
+
 	return 0;
 }
 
@@ -683,6 +693,14 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
 		data.raw = &raw;
 	}
 
+	/*
+	 * rip recorded by IbsOpRip will not be consistent with rsp and rbp
+	 * recorded as part of interrupt regs. Thus we need to use rip from
+	 * interrupt regs while unwinding call stack.
+	 */
+	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+		data.callchain = perf_callchain(event, iregs);
+
 	throttle = perf_event_overflow(event, &data, &regs);
 out:
 	if (throttle) {
-- 
2.35.1
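
[Editor's note, not part of the patch: the hunks above only change how samples
that request a call chain are handled. For readers following along outside the
kernel tree, below is a minimal userspace sketch of how such an event could be
opened via perf_event_open(2), i.e. an IBS op event whose sample_type includes
PERF_SAMPLE_CALLCHAIN. The ibs_op sysfs path is the standard location for
dynamic PMU types; the config value of 0 and the sample period are illustrative
assumptions, not values taken from the patch.]

/*
 * Hypothetical sketch: open an IBS op sampling event that requests both
 * the precise sample IP (IbsOpRip) and a call chain, which is the case
 * the patch above is about.
 */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct perf_event_attr attr;
	FILE *f;
	int type, fd;

	/* Dynamic PMU type number for IBS op. */
	f = fopen("/sys/bus/event_source/devices/ibs_op/type", "r");
	if (!f || fscanf(f, "%d", &type) != 1) {
		perror("ibs_op PMU not available");
		return 1;
	}
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.size           = sizeof(attr);
	attr.type           = type;
	attr.config         = 0;        /* default ibs_op event (illustrative) */
	attr.sample_period  = 100000;   /* illustrative */
	/* Ask for both the precise IP and a call chain. */
	attr.sample_type    = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN;
	attr.disabled       = 1;
	/* IBS does not support privilege filtering, so leave exclude_* at 0. */

	/* Monitor the current task on any CPU. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	/* ... mmap the ring buffer, enable the event, read samples ... */
	close(fd);
	return 0;
}

[With the fix applied, the call chain attached to such samples is unwound from
the interrupt regs, while the sample IP still reports the precise IbsOpRip.]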