Date: Tue, 6 Aug 2019 19:12:41 +0200
From: Peter Zijlstra
To: Tim Chen
Cc: Aaron Lu, Julien Desfossez, "Li, Aubrey", Aubrey Li, Subhra Mazumdar,
	Vineeth Remanan Pillai, Nishanth Aravamudan, Ingo Molnar,
	Thomas Gleixner, Paul Turner, Linus Torvalds,
	Linux List Kernel Mailing, Frédéric Weisbecker, Kees Cook,
	Greg Kerr, Phil Auld, Valentin Schneider, Mel Gorman,
	Pawan Gupta, Paolo Bonzini
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
Message-ID: <20190806171241.GQ2349@hirez.programming.kicks-ass.net>
References: <20190619183302.GA6775@sinkpad> <20190718100714.GA469@aaronlu>
	<20190725143003.GA992@aaronlu> <20190726152101.GA27884@sinkpad>
	<7dc86e3c-aa3f-905f-3745-01181a3b0dac@linux.intel.com>
	<20190802153715.GA18075@sinkpad> <20190806032418.GA54717@aaronlu>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue, Aug 06, 2019 at 10:03:29AM -0700, Tim Chen wrote:
> On 8/5/19 8:24 PM, Aaron Lu wrote:
>
> > I've been thinking whether we should consider core-wide tenant fairness.
> >
> > Let's say there are 3 tasks on the 2 threads' runqueues of the same
> > core: 2 tasks (e.g. A1, A2) belong to tenant A and the 3rd, B1, belongs
> > to another tenant, B.
> > Assume A1 and B1 are queued on the same thread and A2 on the other
> > thread. When we decide priority between A1 and B1, shall we also
> > consider A2's vruntime? I.e., shall we consider A1 and A2 as a whole,
> > since they belong to the same tenant? I tend to think we should make
> > fairness per core per tenant, instead of per thread (cpu) per task
> > (sched entity). What do you guys think?
> >
> > Implementation of the idea is a mess to me, as I feel I'm duplicating
> > the existing per-cpu, per-sched_entity enqueue/update-vruntime/dequeue
> > logic for the per-core, per-tenant stuff.
>
> I'm wondering if something simpler will work. It is easier to maintain
> fairness between the CPU threads. A simple scheme may be: if the
> force-idle deficit on a CPU thread exceeds a threshold compared to its
> sibling, we bias toward choosing the task on the suppressed CPU thread.
>
> The fairness among the tenants per run queue is balanced out by CFS
> fairness, so things should be fair if we maintain fairness in CPU
> utilization between the two CPU threads.

IIRC pjt once did a simple 5ms flip-flop between siblings.