Date: Sun, 22 Jul 2018 21:04:23 -0700
From: Davidlohr Bueso
To: Waiman Long
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, linux-kernel@vger.kernel.org, Mark Ray, Joe Mario, Scott Norton
Subject: Re: [PATCH v2] locking/rwsem: Take read lock immediate if queue empty with no writer
Message-ID: <20180723040423.hntq6dzzzf3sagfb@linux-r8p5>
References: <1531506653-5244-1-git-send-email-longman@redhat.com> <20180718161639.GT2494@hirez.programming.kicks-ass.net>

On Wed, 18 Jul 2018, Waiman Long wrote:
>The key here is that we don't want other incoming readers to observe
>that there are waiters in the wait queue and hence have to go into the
>slowpath until the single waiter in the queue is sure that it probably
>will need to go to sleep if there is a writer.
>
>With a constant stream of incoming readers, a major portion of them will
>observe a negative count and be serialized to enter the slowpath.
>There are certainly other readers that do not observe the negative count
>in the window between one reader clearing the count in the unlock
>path and a waiter setting the count negative again. Those readers can go
>ahead and do the read in parallel. But it is the serialized readers that
>cause the performance loss and the observed spinlock contention in
>the perf output.

This makes sense and seems feasible, in that the optimization is done
with the wait_lock held.

>It is the constant stream of incoming readers that sustains the spinlock
>queue and the repeated clearing and negative setting of the count.

This would not affect optimistic spinners that haven't yet arrived at
the waitqueue phase, because the lock is anonymously owned, so they
won't spin in the first place, right?
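For anyone skimming the thread, here is a minimal userspace sketch of
the count protocol described above -- not the kernel's actual rwsem
code; the bias values, type, and function names are made up for
illustration. It only captures the sign convention: a queued waiter
applies a large negative bias, so a reader whose fetch-add still
yields a negative result knows waiters exist and must be serialized
through the slowpath.

#include <stdatomic.h>
#include <stdbool.h>

#define READER_BIAS   1L               /* hypothetical per-reader bias */
#define WAITING_BIAS  (-(1L << 32))    /* hypothetical waiter bias */

struct rwsem_model {
	atomic_long count;             /* >0: readers only; <0: waiter queued */
};

/* Reader fast path: optimistically add the read bias. */
static bool reader_fastpath(struct rwsem_model *sem)
{
	long c = atomic_fetch_add(&sem->count, READER_BIAS) + READER_BIAS;

	/*
	 * A negative result means a waiter already applied WAITING_BIAS:
	 * this reader is one of the serialized readers described above
	 * and has to fall through to the slowpath (take wait_lock,
	 * queue up, possibly sleep).
	 */
	return c > 0;
}

The patch, as I read it, avoids exactly this: the lone waiter refrains
from applying the negative bias when the queue is empty and no writer
is present, so the reader stream above keeps hitting the fast path.

Thanks,
Davidlohr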