Date: Fri, 6 Mar 2020 10:41:16 +0800
From: Aaron Lu
To: Aubrey Li
Cc: Phil Auld, Vineeth Remanan Pillai, Tim Chen, Julien Desfossez,
    Nishanth Aravamudan, Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
    Paul Turner, Linus Torvalds, Linux List Kernel Mailing,
    Dario Faggioli, Frédéric Weisbecker, Kees Cook, Greg Kerr,
    Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4
Message-ID: <20200306024116.GA16400@ziqianlu-desktop.localdomain>

On Thu, Mar 05, 2020 at 09:45:15PM +0800, Aubrey Li wrote:
> On Fri, Feb 28, 2020 at 10:54 AM Aaron Lu wrote:
> >
> > When the core wide weight is
somewhat balanced, yes I definitely agree.
> > But when the core wide weight mismatches a lot, I'm not so sure,
> > since if these high weight tasks are spread among cores, with the
> > feature of core scheduling, these high weight tasks can get better
> > performance.
>
> It depends.
>
> Say TaskA (cookie 1) and TaskB (cookie 1) have high weight, while
> TaskC (cookie 2) and TaskD (cookie 2) have low weight.
> And we have two cores, 4 CPUs.
>
> If we dispatch
> - TaskA and TaskB on Core0,
> - TaskC and TaskD on Core1,
>
> with coresched enabled, all 4 tasks can run all the time.

Although all tasks get CPU, TaskA and TaskB are competing for hardware
resources and will run slower.

> But if we dispatch
> - TaskA on Core0.CPU0, TaskB on Core1.CPU2,
> - TaskC on Core0.CPU1, TaskD on Core1.CPU3,
>
> with coresched enabled, when TaskC is running, TaskA will be forced
> off CPU and replaced with a forced idle thread.

That is not likely to happen, since TaskA's and TaskB's shares will
normally be a lot higher, which makes sure they get the CPU most of
the time.

> Things get worse if TaskA and TaskB share some data and can get
> benefit from the core level cache.

That's a good point and hard to argue with. I'm mostly considering
colocating redis-server (the main workload) with other compute
intensive workloads. redis-server can be idle most of the time, but it
needs every hardware resource when it runs to meet its latency and
throughput requirements.

Tests on my side show that redis-server's throughput can be about 30%
lower when two redis-servers run on the same core: throughput is about
80000 when a redis-server runs exclusively on a core VS about 56000
when it runs with the sibling thread busy, IIRC.

So my use case here is that I don't really care about the low weight
task's performance when the high weight task demands CPU. I understand
that there will be other use cases that also care about the low weight
task's performance.
So what I have done is to make the two tasks' weight difference as
large as possible, to signal that the low weight task is not
important. Maybe I can also try to tag the low weight tasks as
SCHED_IDLE ones, and then we can happily sacrifice the SCHED_IDLE
tasks' performance?

> > So this appeared to me like a question of: is it desirable to
> > protect/enhance high weight task performance in the presence of
> > core scheduling?
>
> This sounds to me like a policy VS mechanism question. Do you have
> any idea how to spread high weight tasks among the cores with
> coresched enabled?

Yes, I would like to get us on the same page about the expected
behaviour before jumping to the implementation details.

As for how to achieve that: I'm thinking about making the core wide
load balanced, so that high weight tasks shall spread on different
cores. This isn't just about load balance; the initial task placement
will also need to be considered of course, if the high weight task
only runs for a small period.