Date: Fri, 15 May 2020 15:12:39 +0200
From: Peter Zijlstra
To: Mel Gorman
Cc: Jirka Hladky, Phil Auld, Ingo Molnar, Vincent Guittot, Juri Lelli,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Valentin Schneider,
    Hillf Danton, LKML, Douglas Shakshober, Waiman Long, Joe Mario,
    Bill Gray
Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6
Message-ID: <20200515131239.GX2957@hirez.programming.kicks-ass.net>
In-Reply-To: <20200515130346.GM3758@techsingularity.net>
References: <20200507155422.GD3758@techsingularity.net>
 <20200508092212.GE3758@techsingularity.net>
 <20200513153023.GF3758@techsingularity.net>
 <20200514153122.GE2978@hirez.programming.kicks-ass.net>
 <20200515084740.GJ3758@techsingularity.net>
 <20200515111732.GS2957@hirez.programming.kicks-ass.net>

On Fri, May 15, 2020 at 02:03:46PM +0100, Mel Gorman wrote:
> On Fri, May 15, 2020 at 01:17:32PM +0200, Peter Zijlstra wrote:
> > On Fri, May 15, 2020 at 09:47:40AM +0100, Mel Gorman wrote:

> > +static bool ttwu_queue_remote(struct task_struct *p, int cpu, int wake_flags)
> > +{
> > +	if (sched_feat(TTWU_QUEUE) && !cpus_share_cache(smp_processor_id(), cpu)) {
> > +		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
> > +		__ttwu_queue_remote(p, cpu, wake_flags);
> > +		return true;
> > +	}
> > +
> > +	return false;
> > +}

> > +	if (READ_ONCE(p->on_cpu) && __ttwu_queue_remote(p, cpu, wake_flags))
> > +		goto unlock;

> I don't see a problem with moving the updating of p->state to the other
> side of the barrier, but I'm relying on the comment that the barrier is
> only related to on_rq and on_cpu.

Yeah, I went with that too; like I said, I didn't think too hard about it.

> However, I'm less sure about what exactly you intended to do.
> __ttwu_queue_remote is void so maybe you meant to use ttwu_queue_remote.

That!

> In that case, we potentially avoid spinning on on_rq for wakeups between
> tasks that do not share CPU but it's not clear why it would be specific to
> remote tasks.

The thinking was that we can avoid spinning on ->on_cpu and let that CPU
get on with things. Rik had a workload where that spinning was
significant, and I understood you had seen the same.

By sticking the task on the wake_list of the CPU that is in charge of
clearing ->on_cpu, we ensure ->on_cpu is 0 by the time we get to doing
the actual enqueue.