From: "Ahmed S. Darwish"
To: Peter Zijlstra, Ingo Molnar, Will Deacon
Cc: Thomas Gleixner, "Paul E. McKenney", "Sebastian A. Siewior",
    Steven Rostedt, LKML, "Ahmed S. Darwish"
Subject: [PATCH v1 08/25] seqlock: lockdep assert non-preemptibility on seqcount_t write
Date: Tue, 19 May 2020 23:45:30 +0200
Message-Id: <20200519214547.352050-9-a.darwish@linutronix.de>
In-Reply-To: <20200519214547.352050-1-a.darwish@linutronix.de>
References: <20200519214547.352050-1-a.darwish@linutronix.de>

Preemption must be disabled before entering a sequence count write side
critical section. Otherwise the read side can preempt the write side
section and spin for the entire scheduler tick. If that reader belongs
to a real-time scheduling class, it can spin forever and the kernel
will livelock.

Assert through lockdep that preemption is disabled for seqcount
writers.

Signed-off-by: Ahmed S. Darwish
---
 include/linux/seqlock.h | 30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 0491d963d47e..d35be7709403 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -369,14 +369,32 @@ static inline void raw_write_seqcount_latch(seqcount_t *s)
 
 /*
  * Sequence counter only version assumes that callers are using their
- * own mutexing.
+ * own locking and preemption is disabled.
  */
-static inline void write_seqcount_begin_nested(seqcount_t *s, int subclass)
+
+static inline void __write_seqcount_begin_nested(seqcount_t *s, int subclass)
 {
 	raw_write_seqcount_begin(s);
 	seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
 }
 
+static inline void write_seqcount_begin_nested(seqcount_t *s, int subclass)
+{
+	lockdep_assert_preemption_disabled();
+	__write_seqcount_begin_nested(s, subclass);
+}
+
+/*
+ * write_seqcount_begin() without lockdep non-preemptibility checks.
+ *
+ * Use for internal seqlock.h code where it's known that preemption
+ * is already disabled. For example, seqlock_t write functions.
+ */
+static inline void __write_seqcount_begin(seqcount_t *s)
+{
+	__write_seqcount_begin_nested(s, 0);
+}
+
 static inline void write_seqcount_begin(seqcount_t *s)
 {
 	write_seqcount_begin_nested(s, 0);
@@ -446,7 +464,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
 static inline void write_seqlock(seqlock_t *sl)
 {
 	spin_lock(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock(seqlock_t *sl)
@@ -458,7 +476,7 @@ static inline void write_sequnlock(seqlock_t *sl)
 static inline void write_seqlock_bh(seqlock_t *sl)
 {
 	spin_lock_bh(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock_bh(seqlock_t *sl)
@@ -470,7 +488,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
 static inline void write_seqlock_irq(seqlock_t *sl)
 {
 	spin_lock_irq(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock_irq(seqlock_t *sl)
@@ -484,7 +502,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
 	unsigned long flags;
 
 	spin_lock_irqsave(&sl->lock, flags);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 	return flags;
 }
 
-- 
2.20.1
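
[Editor's note: a minimal, hypothetical sketch of the rule the new
lockdep assert enforces, not part of the patch. A writer on a bare
seqcount_t must disable preemption itself; seqlock_t writers need no
change, since spin_lock() already disables preemption and they now go
through __write_seqcount_begin(). All names below (stats_seq,
stats_value, the helper functions) are made up for illustration.]

	#include <linux/preempt.h>
	#include <linux/seqlock.h>
	#include <linux/types.h>

	static seqcount_t stats_seq = SEQCNT_ZERO(stats_seq);
	static u64 stats_value;

	/*
	 * Correct: preemption is disabled around the write side, so the
	 * new lockdep_assert_preemption_disabled() in
	 * write_seqcount_begin() is satisfied.
	 */
	static void stats_update(u64 v)
	{
		preempt_disable();
		write_seqcount_begin(&stats_seq);
		stats_value = v;
		write_seqcount_end(&stats_seq);
		preempt_enable();
	}

	/*
	 * Buggy: if called from preemptible context, a reader that
	 * preempts this writer spins on the odd sequence count. With
	 * this patch, lockdep warns at write_seqcount_begin().
	 */
	static void stats_update_broken(u64 v)
	{
		write_seqcount_begin(&stats_seq);
		stats_value = v;
		write_seqcount_end(&stats_seq);
	}

	/* Read side: retries until it sees an even, unchanged sequence. */
	static u64 stats_read(void)
	{
		unsigned int seq;
		u64 v;

		do {
			seq = read_seqcount_begin(&stats_seq);
			v = stats_value;
		} while (read_seqcount_retry(&stats_seq, seq));

		return v;
	}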