From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Konstantin Kharlamov,
	Todd Brandt, "Steven Rostedt (VMware)"
Subject: [PATCH 4.14 120/323] tracing: Restructure trace_clock_global() to never block
Date: Thu, 20 May 2021 11:20:12 +0200
Message-Id: <20210520092124.222353796@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210520092120.115153432@linuxfoundation.org>
References: <20210520092120.115153432@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Steven Rostedt (VMware)

commit aafe104aa9096827a429bc1358f8260ee565b7cc upstream.

It was reported that a fix to the ring buffer recursion detection would
cause a hung machine when performing suspend / resume testing. The
following backtrace was extracted from debugging that case:

Call Trace:
 trace_clock_global+0x91/0xa0
 __rb_reserve_next+0x237/0x460
 ring_buffer_lock_reserve+0x12a/0x3f0
 trace_buffer_lock_reserve+0x10/0x50
 __trace_graph_return+0x1f/0x80
 trace_graph_return+0xb7/0xf0
 ? trace_clock_global+0x91/0xa0
 ftrace_return_to_handler+0x8b/0xf0
 ? pv_hash+0xa0/0xa0
 return_to_handler+0x15/0x30
 ? ftrace_graph_caller+0xa0/0xa0
 ? trace_clock_global+0x91/0xa0
 ? __rb_reserve_next+0x237/0x460
 ? ring_buffer_lock_reserve+0x12a/0x3f0
 ? trace_event_buffer_lock_reserve+0x3c/0x120
 ? trace_event_buffer_reserve+0x6b/0xc0
 ? trace_event_raw_event_device_pm_callback_start+0x125/0x2d0
 ? dpm_run_callback+0x3b/0xc0
 ? pm_ops_is_empty+0x50/0x50
 ? platform_get_irq_byname_optional+0x90/0x90
 ? trace_device_pm_callback_start+0x82/0xd0
 ? dpm_run_callback+0x49/0xc0

With the following RIP:

RIP: 0010:native_queued_spin_lock_slowpath+0x69/0x200

Since the fix to the recursion detection allows a single level of
recursion while tracing, this led to trace_clock_global() taking a spin
lock and then trying to take it again:

ring_buffer_lock_reserve() {
  trace_clock_global() {
    arch_spin_lock() {
      queued_spin_lock_slowpath() {
        /* lock taken */
        (something else gets traced by function graph tracer)
          ring_buffer_lock_reserve() {
            trace_clock_global() {
              arch_spin_lock() {
                queued_spin_lock_slowpath() {
                /* DEAD LOCK! */

Tracing should *never* block, as it can lead to strange lockups like the
above.

Restructure the trace_clock_global() code so that, instead of simply
taking a lock in order to update the recorded "prev_time", it simply uses
it: when two events on two different CPUs call this at the same time, it
really does not matter which one goes first. Use a trylock to grab the
lock for updating prev_time, and if it cannot be taken, simply try again
on the next call -- a failed trylock means something else is already
updating it.

Link: https://lkml.kernel.org/r/20210430121758.650b6e8a@gandalf.local.home

Cc: stable@vger.kernel.org
Tested-by: Konstantin Kharlamov
Tested-by: Todd Brandt
Fixes: b02414c8f045 ("ring-buffer: Fix recursion protection transitions between interrupt context") # started showing the problem
Fixes: 14131f2f98ac3 ("tracing: implement trace_clock_*() APIs") # where the bug happened
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=212761
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Greg Kroah-Hartman
---
 kernel/trace/trace_clock.c |   48 ++++++++++++++++++++++++++++++---------------
 1 file changed, 32 insertions(+), 16 deletions(-)

--- a/kernel/trace/trace_clock.c
+++ b/kernel/trace/trace_clock.c
@@ -94,33 +94,49 @@ u64 notrace trace_clock_global(void)
 {
 	unsigned long flags;
 	int this_cpu;
-	u64 now;
+	u64 now, prev_time;
 
 	local_irq_save(flags);
 
 	this_cpu = raw_smp_processor_id();
-	now = sched_clock_cpu(this_cpu);
+
 	/*
-	 * If in an NMI context then dont risk lockups and return the
-	 * cpu_clock() time:
+	 * The global clock "guarantees" that the events are ordered
+	 * between CPUs. But if two events on two different CPUS call
+	 * trace_clock_global at roughly the same time, it really does
+	 * not matter which one gets the earlier time. Just make sure
+	 * that the same CPU will always show a monotonic clock.
+	 *
+	 * Use a read memory barrier to get the latest written
+	 * time that was recorded.
 	 */
-	if (unlikely(in_nmi()))
-		goto out;
+	smp_rmb();
+	prev_time = READ_ONCE(trace_clock_struct.prev_time);
+	now = sched_clock_cpu(this_cpu);
 
-	arch_spin_lock(&trace_clock_struct.lock);
+	/* Make sure that now is always greater than prev_time */
+	if ((s64)(now - prev_time) < 0)
+		now = prev_time + 1;
 
 	/*
-	 * TODO: if this happens often then maybe we should reset
-	 * my_scd->clock to prev_time+1, to make sure
-	 * we start ticking with the local clock from now on?
+	 * If in an NMI context then dont risk lockups and simply return
+	 * the current time.
 	 */
-	if ((s64)(now - trace_clock_struct.prev_time) < 0)
-		now = trace_clock_struct.prev_time + 1;
-
-	trace_clock_struct.prev_time = now;
-
-	arch_spin_unlock(&trace_clock_struct.lock);
+	if (unlikely(in_nmi()))
+		goto out;
 
+	/* Tracing can cause strange recursion, always use a try lock */
+	if (arch_spin_trylock(&trace_clock_struct.lock)) {
+		/* Reread prev_time in case it was already updated */
+		prev_time = READ_ONCE(trace_clock_struct.prev_time);
+		if ((s64)(now - prev_time) < 0)
+			now = prev_time + 1;
+
+		trace_clock_struct.prev_time = now;
+
+		/* The unlock acts as the wmb for the above rmb */
+		arch_spin_unlock(&trace_clock_struct.lock);
+	}
  out:
 	local_irq_restore(flags);
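
For reference, the pattern the patch relies on -- clamp the locally read
clock against a shared prev_time, and only update that shared value under
a trylock so the read path can never block -- can be sketched outside the
kernel. The snippet below is a minimal userspace illustration, not the
kernel code: C11 atomics stand in for the arch spinlock and smp_rmb(),
and local_clock_ns() is an invented stand-in for sched_clock_cpu().

/*
 * Userspace sketch of the "never block" global clock pattern used by
 * trace_clock_global() above.  NOT the kernel code: C11 atomics replace
 * the arch spinlock and memory barriers, and local_clock_ns() is an
 * illustrative stand-in for sched_clock_cpu().
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static atomic_uint_least64_t prev_time = 0;       /* last time handed out */
static atomic_flag clock_lock = ATOMIC_FLAG_INIT; /* protects updates only */

static uint64_t local_clock_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

static uint64_t global_clock(void)
{
	/* The acquire load pairs with the releasing unlock below, the way
	 * smp_rmb() pairs with the unlock in the patch. */
	uint64_t prev = atomic_load_explicit(&prev_time, memory_order_acquire);
	uint64_t now = local_clock_ns();

	/* Make sure now never goes backwards relative to prev_time. */
	if ((int64_t)(now - prev) < 0)
		now = prev + 1;

	/* Trylock: if someone else is updating prev_time, skip the update
	 * instead of spinning -- this path must never block. */
	if (!atomic_flag_test_and_set_explicit(&clock_lock, memory_order_acquire)) {
		/* Reread prev_time in case it was already updated. */
		prev = atomic_load_explicit(&prev_time, memory_order_relaxed);
		if ((int64_t)(now - prev) < 0)
			now = prev + 1;
		atomic_store_explicit(&prev_time, now, memory_order_relaxed);
		atomic_flag_clear_explicit(&clock_lock, memory_order_release);
	}

	return now;
}

int main(void)
{
	uint64_t a = global_clock();
	uint64_t b = global_clock();

	printf("%llu -> %llu\n", (unsigned long long)a, (unsigned long long)b);
	return b >= a ? 0 : 1;
}

The key property mirrors the patch: the clock read path only ever does a
trylock, so a recursive call from inside the tracer (as in the backtrace
above) can no longer deadlock on the clock's own lock; at worst it skips
one prev_time update and tries again on the next call.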