From: Sasha Levin
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
McKenney" , Sasha Levin Subject: [PATCH AUTOSEL 3.18 55/98] rcu: Clear need_qs flag to prevent splat Date: Thu, 25 Oct 2018 10:18:10 -0400 Message-Id: <20181025141853.214051-55-sashal@kernel.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181025141853.214051-1-sashal@kernel.org> References: <20181025141853.214051-1-sashal@kernel.org> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: "Paul E. McKenney" [ Upstream commit c0135d07b013fa8f7ba9ec91b4369c372e6a28cb ] If the scheduling-clock interrupt sets the current tasks need_qs flag, but if the current CPU passes through a quiescent state in the meantime, then rcu_preempt_qs() will fail to clear the need_qs flag, which can fool RCU into thinking that additional rcu_read_unlock_special() processing is needed. This commit therefore clears the need_qs flag before checking for additional processing. For this problem to occur, we need rcu_preempt_data.passed_quiesce equal to true and current->rcu_read_unlock_special.b.need_qs also equal to true. This condition can occur as follows: 1. CPU 0 is aware of the current preemptible RCU grace period, but has not yet passed through a quiescent state. Among other things, this means that rcu_preempt_data.passed_quiesce is false. 2. Task A running on CPU 0 enters a preemptible RCU read-side critical section. 3. CPU 0 takes a scheduling-clock interrupt, which notices the RCU read-side critical section and the need for a quiescent state, and thus sets current->rcu_read_unlock_special.b.need_qs to true. 4. Task A is preempted, enters the scheduler, eventually invoking rcu_preempt_note_context_switch() which in turn invokes rcu_preempt_qs(). Because rcu_preempt_data.passed_quiesce is false, control enters the body of the "if" statement, which sets rcu_preempt_data.passed_quiesce to true. 5. At this point, CPU 0 takes an interrupt. The interrupt handler contains an RCU read-side critical section, and the rcu_read_unlock() notes that current->rcu_read_unlock_special is nonzero, and thus invokes rcu_read_unlock_special(). 6. Once in rcu_read_unlock_special(), the fact that current->rcu_read_unlock_special.b.need_qs is true becomes apparent, so rcu_read_unlock_special() invokes rcu_preempt_qs(). Recursively, given that we interrupted out of that same function in the preceding step. 7. Because rcu_preempt_data.passed_quiesce is now true, rcu_preempt_qs() does nothing, and simply returns. 8. Upon return to rcu_read_unlock_special(), it is noted that current->rcu_read_unlock_special is still nonzero (because the interrupted rcu_preempt_qs() had not yet gotten around to clearing current->rcu_read_unlock_special.b.need_qs). 9. Execution proceeds to the WARN_ON_ONCE(), which notes that we are in an interrupt handler and thus duly splats. The solution, as noted above, is to make rcu_read_unlock_special() clear out current->rcu_read_unlock_special.b.need_qs after calling rcu_preempt_qs(). The interrupted rcu_preempt_qs() will clear it again, but this is harmless. The worst that happens is that we clobber another attempt to set this field, but this is not a problem because we just got done reporting a quiescent state. Reported-by: Sasha Levin Signed-off-by: Paul E. McKenney [ paulmck: Fix embarrassing build bug noted by Sasha Levin. 
Tested-by: Sasha Levin
Signed-off-by: Sasha Levin
---
 kernel/rcu/tree_plugin.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index c1d7f27bd38f..c038831bfa57 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -328,6 +328,7 @@ void rcu_read_unlock_special(struct task_struct *t)
 	special = t->rcu_read_unlock_special;
 	if (special.b.need_qs) {
 		rcu_preempt_qs();
+		t->rcu_read_unlock_special.b.need_qs = false;
 		if (!t->rcu_read_unlock_special.s) {
 			local_irq_restore(flags);
 			return;
-- 
2.17.1
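
As a quick way to see why that single assignment is enough, here is a
stand-alone user-space C sketch of the nine-step sequence in the commit
message above.  It is a deliberately simplified model, not kernel code:
"passed_quiesce" and "need_qs" mirror rcu_preempt_data.passed_quiesce and
->rcu_read_unlock_special.b.need_qs, but the model_* functions, the
irq_before_clear parameter, and the apply_fix switch are hypothetical,
introduced only for illustration.

/*
 * Simplified user-space model of the race described above (an editorial
 * sketch, not kernel code).  It replays steps 1-9 once without and once
 * with the one-line fix from the patch.
 */
#include <stdbool.h>
#include <stdio.h>

static bool passed_quiesce;	/* models rcu_preempt_data.passed_quiesce */
static bool need_qs;		/* models t->rcu_read_unlock_special.b.need_qs */
static bool apply_fix;		/* replay the scenario with or without the patch */

/* Model of rcu_preempt_qs(): clears need_qs only on the first QS report. */
static void model_rcu_preempt_qs(bool irq_before_clear)
{
	if (!passed_quiesce) {
		passed_quiesce = true;
		if (irq_before_clear)
			return;		/* step 5: interrupt arrives here */
		need_qs = false;
	}
}

/* Model of the need_qs handling in rcu_read_unlock_special(), irq case. */
static void model_rcu_read_unlock_special(void)
{
	if (need_qs) {
		model_rcu_preempt_qs(false);	/* steps 6-7: a no-op by now */
		if (apply_fix)
			need_qs = false;	/* the one-line fix */
	}
	if (need_qs)				/* special work still pending? */
		printf("  WARN_ON_ONCE() would splat in the interrupt handler\n");
	else
		printf("  no splat: quiescent state already reported\n");
}

int main(void)
{
	for (int pass = 0; pass < 2; pass++) {
		apply_fix = pass;
		passed_quiesce = false;
		need_qs = true;			/* step 3: tick set need_qs */
		printf("%s the fix:\n", apply_fix ? "with" : "without");
		model_rcu_preempt_qs(true);	/* steps 4-5 */
		model_rcu_read_unlock_special();/* steps 6-9 */
	}
	return 0;
}

Compiled and run, the model hits the warning branch only on the unfixed
pass; with the extra clear, the pending-special check finds nothing left
to do, matching the commit message's argument that clobbering the flag
after reporting a quiescent state is harmless.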