Date: Thu, 10 Nov 2022 22:27:02 +0200
From: Ville Syrjälä
To: Peter Zijlstra
Cc: linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org, bigeasy@linutronix.de,
 rjw@rjwysocki.net, oleg@redhat.com, rostedt@goodmis.org, mingo@kernel.org,
 mgorman@suse.de, intel-gfx@lists.freedesktop.org, tj@kernel.org, Will Deacon,
 dietmar.eggemann@arm.com, ebiederm@xmission.com
Subject: Re: [Intel-gfx] [PATCH v3 6/6] freezer, sched: Rewrite core freezer logic

On Mon, Nov 07, 2022 at 01:47:23PM +0200, Ville Syrjälä wrote:
> On Wed, Nov 02, 2022 at 11:16:48PM +0100, Peter Zijlstra wrote:
> > On Wed, Nov 02, 2022 at 06:57:51PM +0200, Ville Syrjälä wrote:
> > > On Thu, Oct 27, 2022 at 06:53:23PM +0200, Peter Zijlstra wrote:
> > > > On Thu, Oct 27, 2022 at 04:09:01PM +0300, Ville Syrjälä wrote:
> > > > > On Wed, Oct 26, 2022 at 01:43:00PM +0200, Peter Zijlstra wrote:
> > > > > >
> > > > > > Could you please give the below a spin?
> > > > >
> > > > > Thanks. I've added this to our CI branch. I'll try to keep an eye
> > > > > on it in the coming days and let you know if anything still trips.
> > > > > And I'll report back maybe ~middle of next week if we haven't caught
> > > > > anything by then.
> > > >
> > > > Thanks!
> > >
> > > Looks like we haven't caught anything since I put the patch in.
> > > So the fix seems good.
> >
> > While writing up the Changelog, it occurred to me it might be possible to
> > fix another way, could I bother you to also run the below patch for a bit?
>
> I swapped in the new patch to the CI branch. I'll check back
> after a few days.

CI hasn't had anything new to report AFAICS, so looks like this version
is good as well.

> >
> > ---
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index cb2aa2b54c7a..daff72f00385 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -4200,6 +4200,40 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
> >  	return success;
> >  }
> >  
> > +static bool __task_needs_rq_lock(struct task_struct *p)
> > +{
> > +	unsigned int state = READ_ONCE(p->__state);
> > +
> > +	/*
> > +	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
> > +	 * the task is blocked. Make sure to check @state since ttwu() can drop
> > +	 * locks at the end, see ttwu_queue_wakelist().
> > +	 */
> > +	if (state == TASK_RUNNING || state == TASK_WAKING)
> > +		return true;
> > +
> > +	/*
> > +	 * Ensure we load p->on_rq after p->__state, otherwise it would be
> > +	 * possible to, falsely, observe p->on_rq == 0.
> > +	 *
> > +	 * See try_to_wake_up() for a longer comment.
> > +	 */
> > +	smp_rmb();
> > +	if (p->on_rq)
> > +		return true;
> > +
> > +#ifdef CONFIG_SMP
> > +	/*
> > +	 * Ensure the task has finished __schedule() and will not be referenced
> > +	 * anymore. Again, see try_to_wake_up() for a longer comment.
> > +	 */
> > +	smp_rmb();
> > +	smp_cond_load_acquire(&p->on_cpu, !VAL);
> > +#endif
> > +
> > +	return false;
> > +}
> > +
> >  /**
> >   * task_call_func - Invoke a function on task in fixed state
> >   * @p: Process for which the function is to be invoked, can be @current.
> > @@ -4217,28 +4251,12 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
> >  int task_call_func(struct task_struct *p, task_call_f func, void *arg)
> >  {
> >  	struct rq *rq = NULL;
> > -	unsigned int state;
> >  	struct rq_flags rf;
> >  	int ret;
> >  
> >  	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
> >  
> > -	state = READ_ONCE(p->__state);
> > -
> > -	/*
> > -	 * Ensure we load p->on_rq after p->__state, otherwise it would be
> > -	 * possible to, falsely, observe p->on_rq == 0.
> > -	 *
> > -	 * See try_to_wake_up() for a longer comment.
> > -	 */
> > -	smp_rmb();
> > -
> > -	/*
> > -	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
> > -	 * the task is blocked. Make sure to check @state since ttwu() can drop
> > -	 * locks at the end, see ttwu_queue_wakelist().
> > -	 */
> > -	if (state == TASK_RUNNING || state == TASK_WAKING || p->on_rq)
> > +	if (__task_needs_rq_lock(p))
> >  		rq = __task_rq_lock(p, &rf);
> >  
> >  	/*
>
> --
> Ville Syrjälä
> Intel

--
Ville Syrjälä
Intel
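[For context, a minimal sketch of how the interface refactored above is used, not taken from this thread; the helper names sample_task_state() and get_state_cb() are invented for illustration. task_call_func() takes @p's pi_lock and, per the patch, also the rq lock whenever __task_needs_rq_lock() says the task may still be running, so the callback sees the task in a fixed state.]

#include <linux/sched.h>

struct state_sample {
	unsigned int state;
};

/* Callback: runs with p->pi_lock held (and rq->lock only if it was needed). */
static int get_state_cb(struct task_struct *p, void *arg)
{
	struct state_sample *s = arg;

	s->state = READ_ONCE(p->__state);
	return 0;
}

/* Sample @p's state without racing against try_to_wake_up(). */
static unsigned int sample_task_state(struct task_struct *p)
{
	struct state_sample s = { };

	task_call_func(p, get_state_cb, &s);
	return s.state;
}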