Subject: Re: [locks] 6d390e4b5d: will-it-scale.per_process_ops -96.6% regression
To: Jeff Layton, NeilBrown, Linus Torvalds
CC: kernel test robot, LKML, Bruce Fields, Al Viro
From: yangerkun
Date: Wed, 11 Mar 2020 21:26:49 +0800

On 2020/3/11 20:52, Jeff Layton wrote:
> On Wed, 2020-03-11 at 09:57 +0800, yangerkun wrote:
>
> [snip]
>
>>
>> On 2020/3/11 5:01, NeilBrown wrote:
>>>
>>> I think this patch contains an assumption which is not justified.  It
>>> assumes that if a wait_event completes without error, then the wake_up()
>>> must have happened.  I don't think that is correct.
>>>
>>> In the patch that caused the recent regression, the race described
>>> involved a signal arriving just as __locks_wake_up_blocks() was being
>>> called on another thread.
>>> So the waiting process was woken by a signal *after* ->fl_blocker was set
>>> to NULL, and *before* the wake_up().  If wait_event_interruptible()
>>> finds that the condition is true, it will report success whether there
>>> was a signal or not.
>>
>> Neil and Jeff, hi,
>>
>> But after that, as in flock_lock_inode_wait, we go around the loop and
>> call flock_lock_inode again, and flock_lock_inode can only return
>> -ENOMEM, -ENOENT, -EAGAIN or 0:
>>
>> - 0: for a try-lock we have already called locks_move_blocks, so
>>   fl->fl_blocked_requests is left empty and there is nobody to wake up;
>>   for an unlock, nobody can be waiting on us, so no wakeup is needed
>>   either.
>>
>> - -ENOENT: we are doing an unlock, so nobody is waiting on us and no
>>   wakeup is needed.
>>
>> - -ENOMEM: someone may have queued behind us during the earlier pass
>>   through flock_lock_inode, so for this error we do need to wake them up.
>>
>> - -EAGAIN: this cannot happen here, because we have already been through
>>   flock_lock_inode once and FL_SLEEP is still set.
>>
>> So the assumption may be OK, and for some error cases we need to wake up
>> whoever queued behind us earlier (which is the reason for the patch
>> "cifs: call locks_delete_block for all error case in
>> cifs_posix_lock_set").  If I am wrong, please point it out!
>>
>
> That's the basic dilemma. We need to know whether we'll need to delete
> the block before taking the blocked_lock_lock.
>
> Your most recent patch used the return code from the wait to determine
> this, but that's not 100% reliable (as Neil pointed out). Could we try
> to do this by doing the delete only when we get certain error codes?
> Maybe, but that's a bit fragile-sounding.

I am a little confused; maybe I am wrong.  As Neil says, "If
wait_event_interruptible() finds that the condition is true, it will
report success whether there was a signal or not."  In that scenario
wait_event_interruptible() can return 0, so we go around the loop and
call flock_lock_inode() again.  If we then leave the loop with an error
of 0 (the try-lock case), locks_move_blocks() has already run and
fl_blocked_requests is left empty, hasn't it?  (A sketch of the loop in
question follows below.)

> Neil's most recent patch used presence on the fl_blocked_requests list
> to determine whether to take the lock, but that relied on some very
> subtle memory ordering. We could of course do that, but that's a bit
> brittle too.
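(For reference, the loop in question looks roughly like this -- simplified
from flock_lock_inode_wait() in fs/locks.c as of the v5.5/v5.6 kernels,
quoted from memory, so treat it as a sketch rather than the exact upstream
code:)

static int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl)
{
        int error;

        might_sleep();
        for (;;) {
                error = flock_lock_inode(inode, fl);
                if (error != FILE_LOCK_DEFERRED)
                        break;
                /*
                 * The race Neil describes: a signal can arrive after
                 * ->fl_blocker has been cleared but before the wake_up(),
                 * in which case the condition is already true and this
                 * returns 0 even though we were woken by the signal.
                 */
                error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);
                if (error)
                        break;
        }
        /*
         * The open question in this thread: can this call (and the
         * blocked_lock_lock it may take) be skipped based on 'error'
         * alone, e.g. skipped for 0 and -ENOENT but kept for -ENOMEM,
         * or is that too fragile?
         */
        locks_delete_block(fl);
        return error;
}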
>
> That's the main reason I'm leaning toward the patch Neil sent
> originally and that uses the fl_wait.lock. The existing alternate lock
> managers (nfsd and lockd) don't use fl_wait at all, so I don't think
> doing that will cause any issues.
>
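For what it's worth, my reading of the fl_wait.lock idea is that the waker
clears ->fl_blocker and issues the wakeup under the waitqueue's own
spinlock, so a waiter that takes the same spinlock and still sees a
non-NULL ->fl_blocker knows it must fall back to blocked_lock_lock, while
seeing NULL there means the waker has completely finished with the
request.  This is only a rough sketch of that approach as I understand it,
not the actual patch from this thread:

/* Waker side, e.g. from __locks_wake_up_blocks(), blocked_lock_lock held */
spin_lock(&waiter->fl_wait.lock);
waiter->fl_blocker = NULL;
wake_up_locked(&waiter->fl_wait);
spin_unlock(&waiter->fl_wait.lock);

/* Waiter side, at the top of locks_delete_block() */
spin_lock(&waiter->fl_wait.lock);
if (!waiter->fl_blocker && list_empty(&waiter->fl_blocked_requests)) {
        /* never blocked, or the waker is completely done with us */
        spin_unlock(&waiter->fl_wait.lock);
        return -ENOENT;
}
spin_unlock(&waiter->fl_wait.lock);
/* otherwise take blocked_lock_lock and do the full delete as today */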