From: Leonardo Bras
To: Kent Overstreet
Cc: Leonardo Bras, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, tglx@linutronix.de, x86@kernel.org,
 tj@kernel.org, peterz@infradead.org, mathieu.desnoyers@efficios.com,
 paulmck@kernel.org, keescook@chromium.org, dave.hansen@linux.intel.com,
 mingo@redhat.com, will@kernel.org, longman@redhat.com,
 boqun.feng@gmail.com, brauner@kernel.org
Subject: Re: [PATCH 16/50] sched.h: Move (spin|rwlock)_needbreak() to spinlock.h
Date: Mon, 15 Jan 2024 17:31:34 -0300
In-Reply-To: <20231216032651.3553101-6-kent.overstreet@linux.dev>
References: <20231216024834.3510073-1-kent.overstreet@linux.dev>
 <20231216032651.3553101-1-kent.overstreet@linux.dev>
 <20231216032651.3553101-6-kent.overstreet@linux.dev>

On Fri, Dec 15, 2023 at 10:26:15PM -0500, Kent Overstreet wrote:
> This lets us kill the dependency on spinlock.h.
>
> Signed-off-by: Kent Overstreet
> ---
>  include/linux/sched.h    | 31 -------------------------------
>  include/linux/spinlock.h | 31 +++++++++++++++++++++++++++++++
>  2 files changed, 31 insertions(+), 31 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 5a5b7b122682..7501a3451a20 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2227,37 +2227,6 @@ static inline bool preempt_model_preemptible(void)
>  	return preempt_model_full() || preempt_model_rt();
>  }
>
> -/*
> - * Does a critical section need to be broken due to another
> - * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
> - * but a general need for low latency)
> - */
> -static inline int spin_needbreak(spinlock_t *lock)
> -{
> -#ifdef CONFIG_PREEMPTION
> -	return spin_is_contended(lock);
> -#else
> -	return 0;
> -#endif
> -}
> -
> -/*
> - * Check if a rwlock is contended.
> - * Returns non-zero if there is another task waiting on the rwlock.
> - * Returns zero if the lock is not contended or the system / underlying
> - * rwlock implementation does not support contention detection.
> - * Technically does not depend on CONFIG_PREEMPTION, but a general need
> - * for low latency.
> - */
> -static inline int rwlock_needbreak(rwlock_t *lock)
> -{
> -#ifdef CONFIG_PREEMPTION
> -	return rwlock_is_contended(lock);
> -#else
> -	return 0;
> -#endif
> -}
> -
>  static __always_inline bool need_resched(void)
>  {
>  	return unlikely(tif_need_resched());
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 31d3d747a9db..0c71f06454d9 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -449,6 +449,37 @@ static __always_inline int spin_is_contended(spinlock_t *lock)
>  	return raw_spin_is_contended(&lock->rlock);
>  }
>
> +/*
> + * Does a critical section need to be broken due to another
> + * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
> + * but a general need for low latency)
> + */
> +static inline int spin_needbreak(spinlock_t *lock)
> +{
> +#ifdef CONFIG_PREEMPTION
> +	return spin_is_contended(lock);
> +#else
> +	return 0;
> +#endif
> +}
> +
> +/*
> + * Check if a rwlock is contended.
> + * Returns non-zero if there is another task waiting on the rwlock.
> + * Returns zero if the lock is not contended or the system / underlying
> + * rwlock implementation does not support contention detection.
> + * Technically does not depend on CONFIG_PREEMPTION, but a general need
> + * for low latency.
> + */
> +static inline int rwlock_needbreak(rwlock_t *lock)
> +{
> +#ifdef CONFIG_PREEMPTION
> +	return rwlock_is_contended(lock);
> +#else
> +	return 0;
> +#endif
> +}
> +
>  #define assert_spin_locked(lock)	assert_raw_spin_locked(&(lock)->rlock)
>
>  #else /* !CONFIG_PREEMPT_RT */
> --
> 2.43.0

Hello Kent,

This patch breaks PREEMPT_RT builds, but it can be easily fixed.
I sent a patch with the fix, please take a look:
https://lore.kernel.org/all/20240115201935.2326400-1-leobras@redhat.com/

Thanks!
Leo