From: Vineeth Remanan Pillai
Date: Fri, 11 Oct 2019 07:32:48 -0400
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
To: Aaron Lu
Cc: Tim Chen, Julien Desfossez, Dario Faggioli, "Li, Aubrey", Aubrey Li, Nishanth Aravamudan, Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Paul Turner, Linus Torvalds, Linux List Kernel Mailing, Frédéric Weisbecker, Kees Cook, Greg Kerr, Phil Auld, Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini
In-Reply-To: <20191011073338.GA125778@aaronlu>

> > The reason we need to do this is because new tasks that get created
> > will have a vruntime based on the new min_vruntime, and old tasks
> > will have it based on the old min_vruntime.
>
> I think this is expected behaviour.
>
I don't think this is the expected behavior. If we hadn't changed the
root cfs_rq's min_vruntime for the core rq, it would have been the
expected behaviour. But now we are updating the core rq's root cfs_rq
min_vruntime without propagating that change down the tree.

To explain, consider this example based on your patch. Let cpu 1 and
cpu 2 be siblings, and let rq(cpu1) be the core rq.
Let rq1->cfs->min_vruntime = 1000 and rq2->cfs->min_vruntime = 2000.
In update_core_cfs_rq_min_vruntime, you update rq1->cfs->min_vruntime
to 2000 because that is the max. So a new task enqueued on rq1 starts
with a vruntime of 2000, while the tasks already in that runqueue are
still based on the old min_vruntime (1000). The new task therefore gets
enqueued somewhere to the right of the tree and has to wait until the
existing tasks catch up to a vruntime of 2000. This is what I meant by
starvation, and it happens every time we update the core rq's
cfs->min_vruntime. Hope this clarifies.

> > and it can cause starvation based on how
> > you set the min_vruntime.
>
> Care to elaborate the starvation problem?
>
Explained above.

> Again, what's the point of normalizing sched entities' vruntime in
> sub-cfs_rqs? Their vruntime comparisons only happen inside their own
> cfs_rq, we don't do cross CPU vruntime comparison for them.
>
As I mentioned above, this is to avoid the starvation case. Even though
we are not doing cross-cfs_rq comparisons, the whole tree's vruntime is
based on the root cfs->min_vruntime, and we end up with an imbalance if
we change the root cfs->min_vruntime without updating the vruntimes
down the tree.

Thanks,
Vineeth