Date: Thu, 14 Dec 2017 19:57:08 +0000 (UTC)
From: Mathieu Desnoyers
To: Peter Zijlstra
Cc: Chris Lameter, "Paul E. McKenney", Boqun Feng, Andy Lutomirski, Dave Watson, linux-kernel, linux-api, Paul Turner, Andrew Morton, Russell King, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Andrew Hunter, Andi Kleen, Ben Maurer, rostedt, Josh Triplett, Linus Torvalds, Catalin Marinas, Will Deacon, Michael Kerrisk, Alexander Viro
Message-ID: <1772818221.34575.1513281428902.JavaMail.zimbra@efficios.com>
In-Reply-To: <20171214194853.GE3326@worktop>
References: <20171214161403.30643-1-mathieu.desnoyers@efficios.com> <20171214161403.30643-3-mathieu.desnoyers@efficios.com> <12046460.34426.1513275177081.JavaMail.zimbra@efficios.com> <20171214194853.GE3326@worktop>
Subject: Re: [RFC PATCH for 4.16 02/21] rseq: Introduce restartable sequences system call (v12)

----- On Dec 14, 2017, at 2:48 PM, Peter Zijlstra peterz@infradead.org wrote:

> On Thu, Dec 14, 2017 at 12:50:13PM -0600, Christopher Lameter wrote:
>> Ultimately I wish fast increments like done by this_cpu_inc() could be
>> implemented in an efficient way on non x86 platforms that do not have
>> cheap instructions like that.
>
> So the problem isn't migration; for that we could wrap the operation in
> preempt_disable() which is not more expensive than rseq would be. And a
> lot more deterministic.
>
> The problem instead is interrupts, which can result in nested load-store
> operations, and that comes apart. This then means having to disable
> interrupts over these things and _that_ is expensive.

Then could we consider checking a per-task-struct rseq_cs pointer when
returning from an interrupt handler? This rseq_cs pointer would track
kernel restartable sequences. This would also work for NMI handlers.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com