Date: Mon, 2 Oct 2023 13:56:43 -0400
From: Steven Rostedt
To: David Laight
Cc: Peter Zijlstra, Mathieu Desnoyers, linux-kernel@vger.kernel.org,
 Thomas Gleixner, Paul E. McKenney, Boqun Feng, H. Peter Anvin,
 Paul Turner, linux-api@vger.kernel.org, Christian Brauner,
 Florian Weimer, carlos@redhat.com, Peter Oskolkov,
 Alexander Mikhalitsyn, Chris Kennelly, Ingo Molnar, Darren Hart,
 Davidlohr Bueso, André Almeida, libc-alpha@sourceware.org,
 Jonathan Corbet, Noah Goldstein, Daniel Colascione,
 longman@redhat.com, Florian Weimer
Subject: Re: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq
Message-ID: <20231002135643.4c8eefbd@gandalf.local.home>
In-Reply-To: <845039ad23d24cc687491efa95be5e0d@AcuMS.aculab.com>
References: <20230529191416.53955-1-mathieu.desnoyers@efficios.com>
 <20230529191416.53955-2-mathieu.desnoyers@efficios.com>
 <20230928103926.GI9829@noisy.programming.kicks-ass.net>
 <20230928104321.490782a7@rorschach.local.home>
 <40b76cbd00d640e49f727abbd0c39693@AcuMS.aculab.com>
 <20231002125109.55c35030@gandalf.local.home>
 <845039ad23d24cc687491efa95be5e0d@AcuMS.aculab.com>

On Mon, 2 Oct 2023 17:22:34 +0000
David Laight wrote:

> > And what heuristic would you use? My experience is that a chosen "time
> > to spin" may work for one workload but cause major regressions in
> > another workload. I came to the conclusion to "hate" heuristics and to
> > NACK them whenever someone suggested adding them to the rt_mutex in
> > the kernel (back before adaptive mutexes were introduced).
> Isn't that exactly what an adaptive mutex does?
> Spin 'for a bit' before sleeping.

But it's not some arbitrary time to spin. Technically, a kernel spin lock
is spinning on the heuristic of ownership. "Spin until the lock is
released" is a heuristic!

> > > > The obvious problem with their implementation is that if the owner is
> > > > sleeping, there's no point in spinning. Worse, the owner may even be
> > > > waiting for the spinner to get off the CPU before it can run again. But
> > > > according to Robert, the gain in the general performance greatly
> > > > outweighed the few times this happened in practice.
> > >
> > > Unless you can use atomics (ok for bits and linked lists) you
> > > always have the problem that userspace can't disable interrupts.
> > > So, unlike the kernel, you can't implement a proper spinlock.
> >
> > Why do you need to disable interrupts? If you know the owner is running on
> > the CPU, you know it's not trying to run on the CPU that is acquiring the
> > lock. Heck, there are normal spin locks outside of PREEMPT_RT that do not
> > disable interrupts. The only time you need to disable interrupts is if the
> > interrupt itself takes the spin lock, and that's just to prevent deadlocks.
>
> You need to disable interrupts in order to bound the time the
> spinlock is held for.
> If all you are doing is a dozen instructions (e.g. to remove an
> item from a list) then you really don't want an interrupt coming in
> while you have the spinlock held.

That's just the noise of normal processing. What's the difference between
it happening while spinning and it happening during normal execution?

> It isn't the cost of the ISR - that has to happen sometime, but that
> the cpu waiting for the spinlock also takes the cost of the ISR.

As opposed to just going into the kernel? So it wastes some of its quota.
It's not stopping anything else from running more than normal.
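To make the distinction concrete, here is a minimal sketch of the "spin for
a bit, then sleep" shape being discussed. This is illustrative only, not
kernel or glibc code: the type and function names are made up, sched_yield()
stands in for a real futex-wait, and the spin budget of 100 is exactly the
kind of arbitrary magic number being objected to above.

```c
#include <sched.h>      /* sched_yield(): stand-in for a real sleep path */
#include <stdatomic.h>

struct spin_then_sleep_lock {
	atomic_int locked;          /* 0 = free, 1 = held */
};

static void sts_lock(struct spin_then_sleep_lock *l)
{
	for (;;) {
		/* Spin "for a bit": 100 is an arbitrary budget, right for
		 * one workload and wrong for another. */
		for (int spins = 0; spins < 100; spins++) {
			int expected = 0;
			if (atomic_compare_exchange_weak(&l->locked,
							 &expected, 1))
				return;     /* got the lock while spinning */
		}
		sched_yield();              /* give up the CPU, try later */
	}
}

static void sts_unlock(struct spin_then_sleep_lock *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

Nothing in that inner loop knows anything about the owner, which is the
whole complaint: the budget is a guess.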
> A network+softint ISR can run for a long time - I'm sure I've
> seen a good fraction of a millisecond.
> You really don't want another (or many other) cpu spinning while
> that is going on.

Why not? The current user-space-only code does that now (and it will even
spin if the owner is preempted). What we are talking about implementing is
a big improvement over what is currently done.

> Which (to my mind) pretty much means that you always want to
> disable interrupts on a spinlock.

The benchmarks say otherwise. Sure, once in a while you may spin longer
because of an interrupt, but that's a very rare occurrence compared to the
normal taking of spin locks. Disabling interrupts is an expensive
operation. The savings you get from "not waiting for a softirq to finish"
will be drowned out by the added overhead of disabling interrupts at every
acquire.

> If the architecture makes masking ISR expensive then I've seen schemes
> that let the hardware interrupt happen, then disable it and rerun later.
>
> > > I've NFI how CONFIG_RT manages to get anything done with all
> > > the spinlocks replaced by sleep locks.
> > > Clearly there are spinlocks that are held for far too long.
> > > But you really do want to spin most of the time.
> >
> > It spins as long as the owner of the lock is running on the CPU. This is
> > what we are looking to get from this patch series for user space.
>
> I think you'd need to detect that the cpu was in-kernel running an ISR.

For the few times that might happen, it's not worth it.

> But the multithreaded audio app I was 'fixing' basically failed
> as soon as it had to sleep on one of the futexes.
> The real problem was an ISR while the mutex was held.
> So deciding to sleep because the lock owner isn't running (in user)
> would already be delaying things too much.

That doesn't sound like the use case we are fixing. If your audio app
failed because it had to sleep, that tells me it would fail regardless.
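The alternative being argued for, spinning only while the lock owner is
actually running on a CPU, can be sketched the same way. Again the names
are hypothetical: the owner_on_cpu flag below is just a stand-in for
however the owner's run state gets published to waiters (the patch series
under discussion does this through a sched_state field in struct rseq; this
is not that ABI).

```c
#include <stdatomic.h>
#include <stdbool.h>

struct adaptive_lock {
	atomic_int locked;          /* 0 = free, 1 = held */
	atomic_int owner_on_cpu;    /* stand-in for the owner's published
				     * run state (set by the scheduler in
				     * the real proposal) */
};

/* Try to take the lock, spinning only while the owner is on a CPU.
 * Returns true on acquisition; false means the owner is off-CPU and
 * the caller should block (futex-wait) instead of burning cycles. */
static bool adaptive_lock_try(struct adaptive_lock *l)
{
	for (;;) {
		int expected = 0;

		if (atomic_compare_exchange_weak(&l->locked, &expected, 1))
			return true;
		if (!atomic_load_explicit(&l->owner_on_cpu,
					  memory_order_relaxed))
			return false;   /* owner preempted or sleeping:
					 * stop spinning */
	}
}
```

There is no spin budget here at all: the spin is bounded by the owner's
time on the CPU, which is the "heuristic of ownership" rather than a
guessed number of iterations.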
> > Back in 2007, we had an issue with scaling on SMP machines. The RT kernel
> > with the sleeping spin locks would start to exponentially slow down with
> > the more CPUs you had. Once we hit more than 16 CPUs, booting RT took
> > tens of minutes when the normal CONFIG_PREEMPT kernel would only take a
> > couple of minutes. The more CPUs you added, the worse it became.
> >
> > Then SUSE submitted a patch to have the rt_mutex spin only if the owner of
> > the mutex was still running on another CPU. This actually mimics a real
> > spin lock (because that's exactly what they do: they spin while the owner
> > is running on a CPU). The difference between a true spin lock and an
> > rt_mutex was that the spinner would stop spinning if the owner was
> > preempted (a true spin lock owner cannot be preempted).
> >
> > After applying the adaptive spinning, we were able to scale PREEMPT_RT to
> > any number of CPUs that the normal kernel could, with just a linear
> > performance hit.
>
> Sounds like it was spinning for far too long at the best of times.
> But analysing these sorts of latencies is hard.

It wasn't spinning at all! The problem was that every rt_mutex would
immediately sleep on any contention. This caused a ripple effect: sleeping
increased the time locks were held, which increased contention, which in
turn increased hold times further. A very bad feedback loop.

This was all very well traced and studied. That analysis was not hard at
all. We know exactly what the cause was and why adaptive mutexes fixed the
situation.

And this is why I'm excited about this current work.

-- Steve