Date: Tue, 26 Dec 2023 20:49:38 +0000
From: Matthew Wilcox
To: Hillf Danton
Cc: "Eric W. Biederman", Maria Yu, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] kernel: Introduce a write lock/unlock wrapper for tasklist_lock
References: <20231213101745.4526-1-quic_aiquny@quicinc.com> <20231226104652.1491-1-hdanton@sina.com>
In-Reply-To: <20231226104652.1491-1-hdanton@sina.com>

On Tue, Dec 26, 2023 at 06:46:52PM +0800, Hillf Danton wrote:
> On Wed, 13 Dec 2023 12:27:05 -0600 Eric W. Biederman
> > Matthew Wilcox writes:
> > > On Wed, Dec 13, 2023 at 06:17:45PM +0800, Maria Yu wrote:
> > >> +static inline void write_lock_tasklist_lock(void)
> > >> +{
> > >> +	while (1) {
> > >> +		local_irq_disable();
> > >> +		if (write_trylock(&tasklist_lock))
> > >> +			break;
> > >> +		local_irq_enable();
> > >> +		cpu_relax();
> > >
> > > This is a bad implementation though.  You don't set the _QW_WAITING flag
> > > so readers don't know that there's a pending writer.  Also, I've seen
> > > cpu_relax() pessimise CPU behaviour; putting it into a low-power mode
> > > that takes a while to wake up from.
> > >
> > > I think the right way to fix this is to pass a boolean flag to
> > > queued_write_lock_slowpath() to let it know whether it can re-enable
> > > interrupts while checking whether _QW_WAITING is set.
>
> 	lock(&lock->wait_lock)
> 	enable irq
> 	int
> 	lock(&lock->wait_lock)
>
> You are adding chance for recursive locking.

Did you bother to read queued_read_lock_slowpath() before writing this
email?

	if (unlikely(in_interrupt())) {
		/*
		 * Readers in interrupt context will get the lock immediately
		 * if the writer is just waiting (not holding the lock yet),
		 * so spin with ACQUIRE semantics until the lock is available
		 * without waiting in the queue.
		 */
		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
		return;
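
For context, below is a rough sketch of what the boolean-flag approach Willy
describes might look like, loosely based on the mainline
queued_write_lock_slowpath() in kernel/locking/qrwlock.c. The `irq` parameter
and the exact points where interrupts are toggled are assumptions made for
illustration only; this is not an actual kernel API change from the thread.

	/*
	 * Sketch only: a hypothetical queued_write_lock_slowpath() variant
	 * that takes a flag telling it whether the caller disabled interrupts
	 * and whether they may be re-enabled while waiting for readers.
	 * Modelled on kernel/locking/qrwlock.c; not merged code.
	 */
	void queued_write_lock_slowpath(struct qrwlock *lock, bool irq)
	{
		int cnts;

		/* Put the writer into the wait queue. */
		arch_spin_lock(&lock->wait_lock);

		/* Try to acquire the lock directly if no reader is present. */
		if (!(cnts = atomic_read(&lock->cnts)) &&
		    atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED))
			goto unlock;

		/* Set _QW_WAITING so new readers know a writer is pending. */
		atomic_or(_QW_WAITING, &lock->cnts);

		/*
		 * Spin until only _QW_WAITING remains, re-enabling interrupts
		 * during the wait if the caller said that is safe, and
		 * disabling them again before actually taking the lock.
		 */
		do {
			if (irq)
				local_irq_enable();
			cnts = atomic_cond_read_relaxed(&lock->cnts,
							VAL == _QW_WAITING);
			if (irq)
				local_irq_disable();
		} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts,
						     _QW_LOCKED));
	unlock:
		arch_spin_unlock(&lock->wait_lock);
	}

The reason this does not reintroduce the recursion Hillf worries about is the
queued_read_lock_slowpath() path quoted above: readers running in interrupt
context take the unfair path and spin on lock->cnts directly, so they never
touch lock->wait_lock while the writer holds it with interrupts enabled.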