Message-ID: <46d2c16f48f1fd4ad28a85099c59ae95a9997740.camel@kernel.org>
Subject: Re: [locks] 6d390e4b5d: will-it-scale.per_process_ops -96.6% regression
From: Jeff Layton
To: NeilBrown, Linus Torvalds
Cc: yangerkun, kernel test robot, LKML, lkp@lists.01.org, Bruce Fields, Al Viro
Date: Tue, 17 Mar 2020 11:59:24 -0400
In-Reply-To: <87k13jsyum.fsf@notabene.neil.brown.name>

On Tue, 2020-03-17 at 09:45 +1100, NeilBrown wrote:
> > +
> > +	/*
> > +	 * Tell the world we're done with it - see comment at top
> > +	 * of this function
> 
> This comment might be misleading. The world doesn't care.
> Only this thread cares where ->fl_blocker is NULL. We need the release
> semantics when some *other* thread sets fl_blocker to NULL, not when
> this thread does.
> I don't think we need to spell that out and I'm not against using
> store_release here, but locks_delete_block cannot race with itself, so
> referring to the comment at the top of this function is misleading.
> 
> So:
>  Reviewed-by: NeilBrown
> 
> but I'm not totally happy with the comments.

Thanks Neil. We can clean up the comments before merge. How about this
revision to the earlier patch? I took the liberty of poaching your
proposed verbiage:

------------------8<---------------------

From c9fbfae0ab615e20de0bdf1ae7b27591d602f577 Mon Sep 17 00:00:00 2001
From: Jeff Layton
Date: Mon, 16 Mar 2020 18:57:47 -0400
Subject: [PATCH] SQUASH: update with Neil's comments

Signed-off-by: Jeff Layton
---
 fs/locks.c | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index eaf754ecdaa8..e74075b0e8ec 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -741,8 +741,9 @@ static void __locks_wake_up_blocks(struct file_lock *blocker)
 		wake_up(&waiter->fl_wait);
 
 		/*
-		 * Tell the world we're done with it - see comment at
-		 * top of locks_delete_block().
+		 * The setting of fl_blocker to NULL marks the official "done"
+		 * point in deleting a block. Paired with acquire at the top
+		 * of locks_delete_block().
 		 */
 		smp_store_release(&waiter->fl_blocker, NULL);
 	}
@@ -761,11 +762,23 @@ int locks_delete_block(struct file_lock *waiter)
 	/*
 	 * If fl_blocker is NULL, it won't be set again as this thread "owns"
 	 * the lock and is the only one that might try to claim the lock.
-	 * Because fl_blocker is explicitly set last during a delete, it's
-	 * safe to locklessly test to see if it's NULL. If it is, then we know
-	 * that no new locks can be inserted into its fl_blocked_requests list,
-	 * and we can therefore avoid doing anything further as long as that
-	 * list is empty.
+	 *
+	 * We use acquire/release to manage fl_blocker so that we can
+	 * optimize away taking the blocked_lock_lock in many cases.
+	 *
+	 * The smp_load_acquire guarantees two things:
+	 *
+	 * 1/ that fl_blocked_requests can be tested locklessly. If something
+	 * was recently added to that list it must have been in a locked region
+	 * *before* the locked region when fl_blocker was set to NULL.
+	 *
+	 * 2/ that no other thread is accessing 'waiter', so it is safe to free
+	 * it. __locks_wake_up_blocks is careful not to touch waiter after
+	 * fl_blocker is released.
+	 *
+	 * If a lockless check of fl_blocker shows it to be NULL, we know that
+	 * no new locks can be inserted into its fl_blocked_requests list, and
+	 * can avoid doing anything further if the list is empty.
 	 */
 	if (!smp_load_acquire(&waiter->fl_blocker) &&
 	    list_empty(&waiter->fl_blocked_requests))
@@ -778,8 +791,8 @@ int locks_delete_block(struct file_lock *waiter)
 	__locks_delete_block(waiter);
 
 	/*
-	 * Tell the world we're done with it - see comment at top
-	 * of this function
+	 * The setting of fl_blocker to NULL marks the official "done" point in
+	 * deleting a block. Paired with acquire at the top of this function.
 	 */
 	smp_store_release(&waiter->fl_blocker, NULL);
 	spin_unlock(&blocked_lock_lock);
-- 
2.24.1
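
As an aside, for anyone trying to follow the acquire/release reasoning in
the comments above, here is a rough, self-contained userspace sketch of the
same handshake. It is only a model: the struct and helper names (waiter,
blocker, has_blocked_requests, delete_block) are invented for the
illustration and are not the kernel's file_lock API, it uses C11 atomics in
place of smp_load_acquire()/smp_store_release(), a bool in place of the
fl_blocked_requests list, and a pthread mutex in place of blocked_lock_lock.

/*
 * Userspace model of the fl_blocker handshake (illustration only).
 * Build with: cc -pthread model.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct waiter {
	_Atomic(void *) blocker;	/* models waiter->fl_blocker */
	bool has_blocked_requests;	/* models !list_empty(&fl_blocked_requests) */
	pthread_mutex_t blocked_lock;	/* models blocked_lock_lock */
};

/* Waking side: detach the waiter under the lock, then publish "done". */
static void wake_up_blocked(struct waiter *w)
{
	pthread_mutex_lock(&w->blocked_lock);
	w->has_blocked_requests = false;	/* "list" manipulation happens here */
	pthread_mutex_unlock(&w->blocked_lock);

	/*
	 * Release store: everything this thread did to *w above is visible
	 * to any thread that later observes blocker == NULL via an acquire
	 * load. After this store we must not touch *w again.
	 */
	atomic_store_explicit(&w->blocker, NULL, memory_order_release);
}

/* Delete side: lockless fast path, slow path under the lock. */
static int delete_block(struct waiter *w)
{
	/*
	 * Acquire load pairs with the release store above: seeing NULL here
	 * guarantees the "list" update made before that store is visible,
	 * so the lockless has_blocked_requests test is safe.
	 */
	if (atomic_load_explicit(&w->blocker, memory_order_acquire) == NULL &&
	    !w->has_blocked_requests)
		return 0;			/* fast path, no lock taken */

	pthread_mutex_lock(&w->blocked_lock);
	w->has_blocked_requests = false;
	/* the "done" point when this thread is the one clearing blocker */
	atomic_store_explicit(&w->blocker, NULL, memory_order_release);
	pthread_mutex_unlock(&w->blocked_lock);
	return 1;				/* slow path taken */
}

static void *waker_thread(void *arg)
{
	wake_up_blocked(arg);
	return NULL;
}

int main(void)
{
	static int dummy;
	struct waiter w = {
		.has_blocked_requests = true,
		.blocked_lock = PTHREAD_MUTEX_INITIALIZER,
	};
	pthread_t t;

	atomic_store_explicit(&w.blocker, (void *)&dummy, memory_order_relaxed);

	/* Another thread wakes the waiter and publishes "done"... */
	pthread_create(&t, NULL, waker_thread, &w);
	pthread_join(t, NULL);

	/* ...so the acquire fast path sees NULL and skips the lock. */
	printf("slow path taken: %d\n", delete_block(&w));
	return 0;
}

The point the model tries to make is the same one Neil raised: the release
store matters for the case where some *other* thread (the waker) clears
blocker, because the waiter's lockless fast path must not observe the NULL
before it can also observe the emptied list.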