Date: Fri, 14 Jul 2023 14:02:53 +0200
From: Frederic Weisbecker
To: Finn Thain
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Thomas Gleixner,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] sched: Optimize in_task() and in_interrupt() a bit
References: <44ad7a7afa1b8b1383426971402d2901361db1c5.1689326311.git.fthain@linux-m68k.org>
In-Reply-To: <44ad7a7afa1b8b1383426971402d2901361db1c5.1689326311.git.fthain@linux-m68k.org>

On Fri, Jul 14, 2023 at 07:18:31PM +1000, Finn Thain wrote:
> Except on x86, preempt_count is always accessed with READ_ONCE.
> Repeated invocations in macros like irq_count() produce repeated loads.
> These redundant instructions appear in various fast paths.
> In the one shown below, for example, irq_count() is evaluated during
> kernel entry if !tick_nohz_full_cpu(smp_processor_id()).
>
> 0001ed0a :
>    1ed0a:	4e56 0000      	linkw %fp,#0
>    1ed0e:	200f           	movel %sp,%d0
>    1ed10:	0280 ffff e000 	andil #-8192,%d0
>    1ed16:	2040           	moveal %d0,%a0
>    1ed18:	2028 0008      	movel %a0@(8),%d0
>    1ed1c:	0680 0001 0000 	addil #65536,%d0
>    1ed22:	2140 0008      	movel %d0,%a0@(8)
>    1ed26:	082a 0001 000f 	btst #1,%a2@(15)
>    1ed2c:	670c           	beqs 1ed3a
>    1ed2e:	2028 0008      	movel %a0@(8),%d0
>    1ed32:	2028 0008      	movel %a0@(8),%d0
>    1ed36:	2028 0008      	movel %a0@(8),%d0
>    1ed3a:	4e5e           	unlk %fp
>    1ed3c:	4e75           	rts
>
> This patch doesn't prevent the pointless btst and beqs instructions
> above, but it does eliminate 2 of the 3 pointless move instructions
> here and elsewhere.
>
> On x86, preempt_count is per-cpu data and the problem does not arise,
> perhaps because the compiler is free to perform similar optimizations.
>
> Cc: Thomas Gleixner
> Fixes: 15115830c887 ("preempt: Cleanup the macro maze a bit")

Does this optimization really deserve a "Fixes:" tag?

> Signed-off-by: Finn Thain
> ---
> This patch was tested on m68k and x86. I was expecting no changes
> to object code for x86 and mostly that's what I saw. However, there
> were a few places where code generation was perturbed for some reason.
> ---
>  include/linux/preempt.h | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/preempt.h b/include/linux/preempt.h
> index 0df425bf9bd7..953358e40291 100644
> --- a/include/linux/preempt.h
> +++ b/include/linux/preempt.h
> @@ -102,10 +102,11 @@ static __always_inline unsigned char interrupt_context_level(void)
>  #define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
>  #ifdef CONFIG_PREEMPT_RT
>  # define softirq_count()	(current->softirq_disable_cnt & SOFTIRQ_MASK)
> +# define irq_count()		((preempt_count() & (NMI_MASK | HARDIRQ_MASK)) | softirq_count())
>  #else
>  # define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
> +# define irq_count()		(preempt_count() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_MASK))
>  #endif
> -#define irq_count()		(nmi_count() | hardirq_count() | softirq_count())

Perhaps add a comment as to why you're making these two versions (ie:
because that avoids three consecutive reads), otherwise people may be
tempted to roll that back again in the future to make the code shorter.

>
>  /*
>   * Macros to retrieve the current execution context:
> @@ -118,7 +119,11 @@ static __always_inline unsigned char interrupt_context_level(void)
>  #define in_nmi()		(nmi_count())
>  #define in_hardirq()		(hardirq_count())
>  #define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
> -#define in_task()		(!(in_nmi() | in_hardirq() | in_serving_softirq()))
> +#ifdef CONFIG_PREEMPT_RT
> +# define in_task()		(!((preempt_count() & (NMI_MASK | HARDIRQ_MASK)) | in_serving_softirq()))
> +#else
> +# define in_task()		(!(preempt_count() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
> +#endif

Same here, thanks!

>
>  /*
>   * The following macros are deprecated and should not be used in new code:
> --
> 2.39.3
>
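For future readers, a minimal user-space sketch of why folding the masks
into a single preempt_count() read drops the extra loads. This is not
kernel code: READ_ONCE() is reduced here to a bare volatile access, the
mask values merely mirror the non-RT layout of include/linux/preempt.h,
and the names the_preempt_count, irq_count_old() and irq_count_new() are
made up for the illustration.

#include <stdio.h>

/* Simplified stand-in for the kernel's READ_ONCE(): a volatile access,
 * so the compiler must perform one load per invocation. */
#define READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))

/* Mask values mirroring the non-RT layout of include/linux/preempt.h. */
#define SOFTIRQ_MASK	0x0000ff00U
#define HARDIRQ_MASK	0x000f0000U
#define NMI_MASK	0x00f00000U

static unsigned int the_preempt_count;
#define preempt_count()	READ_ONCE(the_preempt_count)

/* Old shape: three preempt_count() invocations, hence three volatile
 * loads that the compiler may not coalesce. */
#define irq_count_old()	((preempt_count() & NMI_MASK) |		\
			 (preempt_count() & HARDIRQ_MASK) |	\
			 (preempt_count() & SOFTIRQ_MASK))

/* New shape: one invocation, one load, one combined mask. */
#define irq_count_new()	(preempt_count() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_MASK))

int main(void)
{
	the_preempt_count = 0x00010100;	/* hardirq and softirq bits set */
	printf("old=%#x new=%#x\n", irq_count_old(), irq_count_new());
	return 0;
}

Both shapes compute the same value; the difference is only in the number
of volatile loads the compiler has to emit, which is what the three
back-to-back movel %a0@(8),%d0 instructions in the m68k listing above
correspond to.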