Subject: Re: [PATCH 02/10] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
From: Waiman Long
Organization: Red Hat
To: Will Deacon, linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org, mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com, catalin.marinas@arm.com
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com> <1522947547-24081-3-git-send-email-will.deacon@arm.com>
Date: Fri, 6 Apr 2018 16:50:19 -0400
On 04/05/2018 12:58 PM, Will Deacon wrote:
> The qspinlock locking slowpath utilises a "pending" bit as a simple form
> of an embedded test-and-set lock that can avoid the overhead of explicit
> queuing in cases where the lock is held but uncontended. This bit is
> managed using a cmpxchg loop which tries to transition the uncontended
> lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).
>
> Unfortunately, the cmpxchg loop is unbounded and lockers can be starved
> indefinitely if the lock word is seen to oscillate between unlocked
> (0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
> able to take the lock in the cmpxchg loop without queuing and pass it
> around amongst themselves.
>
> This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
> using atomic_fetch_or, and then inspecting the old value to see whether
> we need to spin on the current lock owner, or whether we now effectively
> hold the lock. The tricky scenario is when concurrent lockers end up
> queuing on the lock and the lock becomes available, causing us to see
> a lockword of (n,0,0).
> With pending now set, simply queuing could lead
> to deadlock as the head of the queue may not have observed the pending
> flag being cleared. Conversely, if the head of the queue did observe
> pending being cleared, then it could transition the lock from (n,0,0) ->
> (0,0,1) meaning that any attempt to "undo" our setting of the pending
> bit could race with a concurrent locker trying to set it.
>
> We handle this race by preserving the pending bit when taking the lock
> after reaching the head of the queue and leaving the tail entry intact
> if we saw pending set, because we know that the tail is going to be
> updated shortly.
>
> Cc: Peter Zijlstra
> Cc: Ingo Molnar
> Signed-off-by: Will Deacon
> ---

The pending bit was added to the qspinlock design to counter performance degradation compared with ticket lock for workloads with light spinlock contention. I ran my spinlock stress test on an Intel Skylake server running the vanilla 4.16 kernel vs a patched 4.16 kernel with this patchset. The locking rates with different numbers of locking threads were as follows:

  # of threads    4.16 kernel    patched 4.16 kernel
  ------------    -----------    -------------------
        1         7,417 kop/s       7,408 kop/s
        2         5,755 kop/s       4,486 kop/s
        3         4,214 kop/s       4,169 kop/s
        4         4,396 kop/s       4,383 kop/s

The 2 contending threads case is the one that exercises the pending-bit code path the most, so it is obviously the one most impacted by this patchset. The differences in the other cases are mostly noise, with perhaps a small effect in the 3 contending threads case.

I am not against this patch, but we certainly need to find a way to bring the performance numbers up closer to what they were before applying the patch.

Cheers,
Longman