Date: Thu, 21 Feb 2019 17:41:46 +0100
From: Peter Zijlstra
To: Valentin Schneider
Cc: mingo@kernel.org, tglx@linutronix.de, pjt@google.com,
    tim.c.chen@linux.intel.com, torvalds@linux-foundation.org,
    linux-kernel@vger.kernel.org, subhra.mazumdar@oracle.com,
    fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com
Subject: Re: [RFC][PATCH 15/16] sched: Trivial forced-newidle balancer
Message-ID: <20190221164146.GV32494@hirez.programming.kicks-ass.net>
References: <20190218165620.383905466@infradead.org>
 <20190218173514.796920915@infradead.org>

On Thu, Feb 21, 2019 at 04:19:46PM +0000, Valentin Schneider wrote:
> Hi,
>
> On 18/02/2019 16:56, Peter Zijlstra wrote:
> [...]
> > +static bool try_steal_cookie(int this, int that)
> > +{
> > +        struct rq *dst = cpu_rq(this), *src = cpu_rq(that);
> > +        struct task_struct *p;
> > +        unsigned long cookie;
> > +        bool success = false;
> > +
> > +        local_irq_disable();
> > +        double_rq_lock(dst, src);
> > +
> > +        cookie = dst->core->core_cookie;
> > +        if (!cookie)
> > +                goto unlock;
> > +
> > +        if (dst->curr != dst->idle)
> > +                goto unlock;
> > +
> > +        p = sched_core_find(src, cookie);
> > +        if (p == src->idle)
> > +                goto unlock;
> > +
> > +        do {
> > +                if (p == src->core_pick || p == src->curr)
> > +                        goto next;
> > +
> > +                if (!cpumask_test_cpu(this, &p->cpus_allowed))
> > +                        goto next;
> > +
> > +                if (p->core_occupation > dst->idle->core_occupation)
> > +                        goto next;
> > +
>
> IIUC, we're trying to find/steal tasks matching the core_cookie from other
> rqs because dst has been cookie-forced-idle.
>
> If the p we find isn't running, what's the meaning of core_occupation?
> I would have expected it to be 0, but we don't seem to be clearing it when
> resetting the state in pick_next_task().

Indeed. We preserve the occupation from the last time around; it's not
perfect, but it's better than nothing.

Consider that there are two groups, and we just happen to run the other
group. Then our occupation, being what it was last time, is still
accurate. When next we run, we'll again get that many siblings
together.

> If it is running, we prevent the stealing if the core it's on is running
> more matching tasks than the core of the pulling rq. It feels to me as if
> that's a balancing tweak to try to cram as many matching tasks as possible
> in a single core, so to me this reads as "don't steal my tasks if I'm
> running more than you are, but I will steal tasks from you if I'm given
> the chance". Is that correct?

Correct; otherwise an SMT4 core with 5 tasks could end up ping-ponging
the one task forever.

Note that a further condition a little up the callchain from here only
does this stealing if the thread was forced-idle -- i.e. it had
something to run anyway. So under the condition where there simply
aren't enough tasks to keep all siblings busy, we'll not compact just
because we can.
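
To make the preserved-occupation point concrete, here is a tiny
userspace C model of the idea. It is illustrative only: struct task,
core_pick() and the cookie values are invented for the sketch and are
not taken from the patch.

#include <stdio.h>

#define NR_SIBLINGS 4

struct task {
        unsigned long cookie;
        int core_occupation;    /* sibling count seen last time it ran */
};

/* One pick per sibling; NULL means that sibling idles this round. */
static void core_pick(struct task *pick[NR_SIBLINGS], unsigned long cookie)
{
        int i, occ = 0;

        for (i = 0; i < NR_SIBLINGS; i++)
                if (pick[i] && pick[i]->cookie == cookie)
                        occ++;

        /*
         * Stamp the count on everything picked; stale values on tasks
         * that were NOT picked are deliberately left in place -- they
         * predict how many siblings the task gets next time its group
         * runs.
         */
        for (i = 0; i < NR_SIBLINGS; i++)
                if (pick[i])
                        pick[i]->core_occupation = occ;
}

int main(void)
{
        struct task a = { .cookie = 1 }, b = { .cookie = 1 };
        struct task *round1[NR_SIBLINGS] = { &a, &b, NULL, NULL };
        struct task *round2[NR_SIBLINGS] = { NULL, NULL, NULL, NULL };

        core_pick(round1, 1);   /* group 1 runs: a and b together */
        core_pick(round2, 2);   /* the other group runs; a, b untouched */

        /* Prints 2: a's last-time occupation survives the other group. */
        printf("a.core_occupation = %d\n", a.core_occupation);
        return 0;
}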
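Likewise for the steal guard: the comparison against
dst->idle->core_occupation can be exercised with the SMT4/5-task
numbers from above. Again a toy, not the kernel code; may_steal() and
the occupation value assumed for core B's idle are guesses made for
the example.

#include <stdio.h>
#include <stdbool.h>

/* Steal only if the source core isn't packing more than we would. */
static bool may_steal(int p_occ, int dst_idle_occ)
{
        /* Inverse of "if (p->core_occupation > dst->idle->core_occupation)
         * goto next;" from the quoted snippet. */
        return p_occ <= dst_idle_occ;
}

int main(void)
{
        /*
         * SMT4, 5 tasks sharing a cookie: core A runs 4 of them, core B
         * runs 1 with forced-idle siblings.  The value stamped on B's
         * idle is assumed to be 1 here (one matching task ran on B last
         * round); the point is only the comparison.
         */
        int p_occ = 4;          /* candidate task sits on packed core A */
        int dst_idle_occ = 1;   /* B's forced-idle sibling */

        /* "no": B must not pull a task off the well-packed core A,
         * which is what would start the ping-pong. */
        printf("B may steal from A: %s\n",
               may_steal(p_occ, dst_idle_occ) ? "yes" : "no");

        /* "yes": the reverse direction compacts -- the fuller core may
         * pull B's lone task over. */
        printf("A may steal from B: %s\n",
               may_steal(1, 4) ? "yes" : "no");
        return 0;
}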