Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
From: Vineeth Remanan Pillai
Date: Mon, 30 Sep 2019 07:53:30 -0400
To: Aaron Lu
Cc: Tim Chen, Julien Desfossez, Dario Faggioli, "Li, Aubrey", Aubrey Li,
    Subhra Mazumdar, Nishanth Aravamudan, Peter Zijlstra, Ingo Molnar,
    Thomas Gleixner, Paul Turner, Linus Torvalds, Linux List Kernel Mailing,
    Frédéric Weisbecker, Kees Cook, Greg Kerr, Phil Auld,
    Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini
In-Reply-To: <20190912123532.GB16200@aaronlu>
References: <20190725143003.GA992@aaronlu> <20190726152101.GA27884@sinkpad>
    <7dc86e3c-aa3f-905f-3745-01181a3b0dac@linux.intel.com>
    <20190802153715.GA18075@sinkpad>
    <69cd9bca-da28-1d35-3913-1efefe0c1c22@linux.intel.com>
    <20190911140204.GA52872@aaronlu>
    <7b001860-05b4-4308-df0e-8b60037b8000@linux.intel.com>
    <20190912123532.GB16200@aaronlu>

On Thu, Sep 12, 2019 at 8:35 AM Aaron Lu wrote:
>
> > I think comparing parent's runtime also will have issues once
> > the task group has a lot more threads with different running
> > patterns. One example is a task group with a lot of active threads
> > and a thread with fairly less activity. So when this less active
> > thread is competing with a thread in another group, there is a
> > chance that it loses continuously for a while until the other
> > group catches up on its vruntime.
>
> I actually think this is expected behaviour.
>
> Without core scheduling, when deciding which task to run, we will first
> decide which "se" to run from the CPU's root level cfs runqueue and then
> go downwards. Let's call the chosen se on the root level cfs runqueue
> the winner se. Then with core scheduling, we will also need to compare
> the two winner "se"s of each hyperthread and choose the core wide
> winner "se".

Sorry, I misunderstood the fix and did not initially see the core wide
min_vruntime that you try to maintain in rq->core. This approach seems
reasonable. I think we can fix the potential starvation that you mentioned
in the comment by adjusting for the difference in all the children cfs_rq
when we set the min_vruntime in rq->core. Since we take the locks for both
queues, it should be doable, and I am trying to see how we can best do
that.
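Just to make that concrete, here is a rough user-space sketch of the idea;
it is not kernel code and not the actual patch, and the names (task_se,
sibling_rq, pick_core_winner, sync_to_core_min) and numbers are made up
purely for illustration. The point is that the cross-sibling comparison
should use progress relative to each queue's baseline rather than raw
vruntime, and that when a sibling is brought up to the core wide
min_vruntime its queued entities are shifted by the same delta so their
relative order is preserved:

/*
 * Toy user-space model of the above -- not kernel code and not the actual
 * patch.  task_se, sibling_rq and the helpers below are invented names.
 */
#include <stdio.h>

struct task_se {
        const char *comm;
        unsigned long long vruntime;
};

struct sibling_rq {
        unsigned long long min_vruntime;   /* this runqueue's own baseline */
        struct task_se *pick;              /* locally chosen "winner se"   */
};

/*
 * Cross-sibling comparison: raw vruntimes from different runqueues are not
 * directly comparable because each queue has its own min_vruntime baseline,
 * so compare progress relative to each queue's baseline instead.
 */
static struct task_se *pick_core_winner(struct sibling_rq *a,
                                        struct sibling_rq *b)
{
        unsigned long long ka = a->pick->vruntime - a->min_vruntime;
        unsigned long long kb = b->pick->vruntime - b->min_vruntime;

        return ka <= kb ? a->pick : b->pick;
}

/*
 * Starvation fix sketched above: when the core wide min_vruntime (kept on
 * rq->core in the real fix) moves ahead of this sibling's baseline, shift
 * the baseline and the queued entities by the same delta so existing tasks
 * keep their relative order instead of suddenly looking far behind.
 */
static void sync_to_core_min(struct sibling_rq *rq, unsigned long long core_min)
{
        unsigned long long delta;

        if (core_min <= rq->min_vruntime)
                return;

        delta = core_min - rq->min_vruntime;
        rq->min_vruntime += delta;
        rq->pick->vruntime += delta;       /* the real fix walks the whole tree */
}

int main(void)
{
        struct task_se t0 = { "task-on-ht0", 2000500 };
        struct task_se t1 = { "task-on-ht1",  900700 };
        struct sibling_rq ht0 = { .min_vruntime = 2000000, .pick = &t0 };
        struct sibling_rq ht1 = { .min_vruntime =  900000, .pick = &t1 };

        /*
         * A raw vruntime comparison would keep picking task-on-ht1 here
         * (900700 << 2000500) even though it has made more progress relative
         * to its own queue; the relative comparison picks task-on-ht0.
         */
        printf("core wide winner: %s\n", pick_core_winner(&ht0, &ht1)->comm);

        /* Bring ht1 up to the core wide baseline; relative order is kept. */
        sync_to_core_min(&ht1, ht0.min_vruntime);
        printf("winner after sync: %s\n", pick_core_winner(&ht0, &ht1)->comm);

        return 0;
}

In the real tree we would of course have to apply the delta to every child
cfs_rq and queued entity under that rq, not just the one task shown here,
which is the part I am still trying to work out.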
> > As discussed during LPC, probably start thinking along the lines
> > of global vruntime or core wide vruntime to fix the vruntime
> > comparison issue?
>
> core wide vruntime makes sense when there are multiple tasks of
> different cgroups queued on the same core. e.g. when two tasks of
> cgroupA and one task of cgroupB are queued on the same core, assume
> cgroupA's one task is on one hyperthread and its other task is on the
> other hyperthread with cgroupB's task. With my current implementation
> or Tim's, cgroupA will get more time than cgroupB. If we maintain core
> wide vruntime for cgroupA and cgroupB, we should be able to maintain
> fairness between cgroups on this core. Tim proposed to solve this
> problem by doing some kind of load balancing, if I'm not mistaken; I
> haven't taken a look at that yet.

I think your fix is close to maintaining a core wide vruntime, as you now
have a single min_vruntime to compare against across the siblings in the
core. To make the fix complete, we might need to adjust the whole tree's
min_vruntime, and I think that's doable.

Thanks,
Vineeth