Date: Tue, 6 Aug 2019 21:49:57 +0800
From: Aaron Lu <aaron.lu@linux.alibaba.com>
To: Vineeth Remanan Pillai
Cc: Aubrey Li, Tim Chen, Julien Desfossez, "Li, Aubrey", Subhra Mazumdar,
	Nishanth Aravamudan, Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
	Paul Turner, Linus Torvalds, Linux List Kernel Mailing,
	Frédéric Weisbecker, Kees Cook, Greg Kerr, Phil Auld,
	Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
Message-ID: <20190806134949.GA46757@aaronlu>
References: <20190725143003.GA992@aaronlu>
	<20190726152101.GA27884@sinkpad>
	<7dc86e3c-aa3f-905f-3745-01181a3b0dac@linux.intel.com>
	<20190802153715.GA18075@sinkpad>
	<20190806032418.GA54717@aaronlu>
	<54fa27ff-69a7-b2ac-6152-6915f78a57f9@linux.alibaba.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Aug 06, 2019 at 08:24:17AM -0400, Vineeth Remanan Pillai wrote:
> > >
> > > I also think a way to make fairness per cookie per core, is this what you
> > > want to propose?
> >
> > Yes, that's what I meant.
>
> I think that would hurt some kinds of workloads badly, especially if one
> tenant has way more tasks than the other. A tenant with more tasks on the
> same core might have more immediate requirements from some threads than
> the other tenant, and we would fail to take that into account. With some
> hierarchical management we can alleviate this, but as Aaron said, it
> would be a bit messy.

I think each tenant would have a per-core weight, similar to a sched
entity's per-CPU weight. The tenant's per-core weight could be derived
from its corresponding task group's per-CPU sched entities' weights
(summing them up, perhaps). A tenant with a higher weight would then have
its core-wide vruntime advance more slowly than a tenant with a lower
weight. Does this address the issue here?

> Peter's rebalance logic actually takes care of most of the runqueue
> imbalance caused by cookie tagging. What we have found from our testing
> is that the fairness issue is mostly caused by a hyperthread going idle
> and not waking up; Aaron's 3rd patch works around that. As Julien
> mentioned, we are working on a per-thread coresched idle thread concept.
> The problem we found was that the idle thread causes accounting and
> wakeup issues, as it was not designed to be used in this context. So if
> we can have a low-priority thread which looks like any other task to the
> scheduler, things become easy for the scheduler and we achieve security
> as well. Please share your thoughts on this idea.

Care to elaborate on the coresched idle thread concept? How does it solve
the hyperthread-going-idle problem, and what are the accounting and
wakeup issues, etc.?

Thanks,
Aaron

> The results are encouraging, but we do not yet have the coresched idle
> thread to stop spinning at 100%. We will post the patch soon, once it is
> a bit more stable for running the tests that we all have done so far.
>
> Thanks,
> Vineeth
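
P.S. To make the per-core tenant weight idea above a bit more concrete,
here is a rough, untested user-space sketch (not kernel code). All names
below (struct tenant, core_weight, core_vruntime, tenant_core_weight(),
tenant_charge()) are made up purely for illustration and do not
correspond to anything in the v3 patch set; the scaling just mirrors the
delta_exec * NICE_0_LOAD / weight arithmetic of calc_delta_fair():

#include <stdio.h>

#define NICE_0_LOAD 1024ULL

struct tenant {
	unsigned long long core_weight;    /* sum of the task group's per-CPU se weights on this core */
	unsigned long long core_vruntime;  /* core-wide vruntime for this tenant */
};

/* Derive the per-core weight by summing the per-CPU sched entity weights. */
static unsigned long long tenant_core_weight(const unsigned long long *se_weight, int nr)
{
	unsigned long long w = 0;
	int i;

	for (i = 0; i < nr; i++)
		w += se_weight[i];
	return w ? w : 1;	/* avoid division by zero for an empty tenant */
}

/*
 * Advance the tenant's core-wide vruntime: delta_exec is scaled down by the
 * tenant's weight, so a heavier tenant's vruntime advances more slowly.
 */
static void tenant_charge(struct tenant *t, unsigned long long delta_exec)
{
	t->core_vruntime += delta_exec * NICE_0_LOAD / t->core_weight;
}

int main(void)
{
	unsigned long long heavy_se[] = { 2048, 2048 };  /* tenant A: two busy sched entities */
	unsigned long long light_se[] = { 1024 };        /* tenant B: one nice-0 task */
	struct tenant a = { tenant_core_weight(heavy_se, 2), 0 };
	struct tenant b = { tenant_core_weight(light_se, 1), 0 };

	/* Charge the same 10ms of core-wide execution to both tenants. */
	tenant_charge(&a, 10000000ULL);
	tenant_charge(&b, 10000000ULL);

	/*
	 * The heavier tenant accumulates less core_vruntime, so when core-wide
	 * vruntimes are compared it gets picked again sooner.
	 */
	printf("A: weight=%llu vruntime=%llu\n", a.core_weight, a.core_vruntime);
	printf("B: weight=%llu vruntime=%llu\n", b.core_weight, b.core_vruntime);
	return 0;
}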