Date: Fri, 20 Apr 2018 17:50:28 +0200
From: Peter Zijlstra
To: Davidlohr Bueso
Cc: tglx@linutronix.de, mingo@kernel.org, longman@redhat.com, linux-kernel@vger.kernel.org, Davidlohr Bueso
Subject: Re: [PATCH 2/2] rtmutex: Reduce top-waiter blocking on a lock
Message-ID: <20180420155028.GO4064@hirez.programming.kicks-ass.net>
References: <20180410162750.8290-1-dave@stgolabs.net> <20180410162750.8290-2-dave@stgolabs.net>
In-Reply-To: <20180410162750.8290-2-dave@stgolabs.net>

On Tue, Apr 10, 2018 at 09:27:50AM -0700, Davidlohr Bueso wrote:
> By applying well known spin-on-lock-owner techniques, we can avoid the
> blocking overhead while the task is trying to take the rtmutex. The idea
> is that as long as the owner is running, there is a fair chance it'll
> release the lock soon, and thus a task trying to acquire the rtmutex will
> be better off spinning instead of blocking immediately after the fastpath.
> This is similar to what we use for other locks, borrowed from -rt.
> The main difference (due to the obvious realtime constraints) is that
> top-waiter spinning must account for any new higher priority waiter, and
> therefore cannot steal the lock and avoid any pi-dance. As such there
> will be at most one spinning waiter on a contended lock.
>
> Conditions to stop spinning and block are simple:
>
> (1) Upon need_resched()
> (2) Current lock owner blocks
> (3) The top-waiter has changed while spinning.
>
> The unlock side remains unchanged, as wake_up_process() can safely deal
> with calls where the task is not actually blocked (TASK_NORMAL). As such,
> there is only unnecessary overhead dealing with the wake_q, but this
> allows us not to miss any wakeups between the spinning step and the
> unlocking side.
>
> Passes running the pi_stress program with increasing thread-group counts.

Is this similar to what we have in RT (which, IIRC, has an optimistic
spinning implementation as well)?

ISTR there being some contention over the exact semantics of (3) many
years ago. IIRC the question was whether an equal priority task was
allowed to steal, because lock stealing can lead to fairness issues. One
would expect two FIFO-50 tasks to be 'fair' wrt lock acquisition and not
starve one another. Therefore I think we only allowed higher prio tasks
to steal and kept FIFO order for equal priority tasks.
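
[Editorial note: for readers unfamiliar with the technique, the top-waiter
spin described in the changelog boils down to a loop of roughly the shape
below. This is only an illustrative sketch, not the code from the patch:
the helpers rt_mutex_owner(), rt_mutex_top_waiter() and owner_on_cpu() are
assumed to exist with the obvious meaning, and all locking around the
checks is elided.]

	/*
	 * Illustrative sketch (not the actual patch): spin while we remain
	 * the top waiter and the current owner is still running on a CPU.
	 * Returns true if the caller may retry the acquisition, false if it
	 * should block.  The stop conditions match (1)-(3) above.
	 */
	static bool top_waiter_spin(struct rt_mutex *lock,
				    struct rt_mutex_waiter *waiter)
	{
		struct task_struct *owner;

		for (;;) {
			/* (1) stop spinning if we were asked to reschedule */
			if (need_resched())
				return false;

			/* (3) a new top-waiter appeared; go block instead */
			if (rt_mutex_top_waiter(lock) != waiter)
				return false;

			owner = rt_mutex_owner(lock);
			if (!owner)
				return true;	/* lock released; retry acquisition */

			/* (2) the owner itself blocked; spinning no longer pays off */
			if (!owner_on_cpu(owner))
				return false;

			cpu_relax();
		}
	}

[A caller would invoke something like this after enqueueing itself as a
waiter, falling back to the usual schedule()-based blocking when it
returns false.]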