Date: Mon, 29 Apr 2019 11:36:22 +0800
From: Aaron Lu <aaron.lu@linux.alibaba.com>
To: Vineeth Remanan Pillai
Cc: Nishanth Aravamudan, Julien Desfossez, Peter Zijlstra, Tim Chen,
    mingo@kernel.org, tglx@linutronix.de, pjt@google.com,
    torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
    subhra.mazumdar@oracle.com, fweisbec@gmail.com, keescook@chromium.org,
    kerrnel@google.com, Phil Auld, Aaron Lu, Aubrey Li,
    Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini
Subject: Re: [RFC PATCH v2 11/17] sched: Basic tracking of matching tasks
Message-ID: <20190429033620.GA128241@aaronlu>
References: <2364f2b65bf50826d881c84d7634b6565dfee527.1556025155.git.vpillai@digitalocean.com>
In-Reply-To: <2364f2b65bf50826d881c84d7634b6565dfee527.1556025155.git.vpillai@digitalocean.com>

On Tue, Apr 23, 2019 at 04:18:16PM +0000, Vineeth Remanan Pillai wrote:
> +/*
> + * l(a,b)
> + * le(a,b) := !l(b,a)
> + * g(a,b)  := l(b,a)
> + * ge(a,b) := !l(a,b)
> + */
> +
> +/* real prio, less is less */
> +static inline bool __prio_less(struct task_struct *a, struct task_struct *b, bool core_cmp)
> +{
> +	u64 vruntime;
> +
> +	int pa = __task_prio(a), pb = __task_prio(b);
> +
> +	if (-pa < -pb)
> +		return true;
> +
> +	if (-pb < -pa)
> +		return false;
> +
> +	if (pa == -1) /* dl_prio() doesn't work because of stop_class above */
> +		return !dl_time_before(a->dl.deadline, b->dl.deadline);
> +
> +	vruntime = b->se.vruntime;
> +	if (core_cmp) {
> +		vruntime -= task_cfs_rq(b)->min_vruntime;
> +		vruntime += task_cfs_rq(a)->min_vruntime;
> +	}
> +	if (pa == MAX_RT_PRIO + MAX_NICE)	/* fair */
> +		return !((s64)(a->se.vruntime - vruntime) <= 0);
> +
> +	return false;
> +}

This unfortunately still doesn't work.

Consider the following task layout on two sibling CPUs (cpu0 and cpu1):

    rq0.cfs_rq    rq1.cfs_rq
        |             |
     se_bash        se_hog

se_hog is the sched_entity of a cpu intensive task and se_bash is the
sched_entity of bash. There are two problems:

1 START_DEBIT

When the user executes some command through bash, say ls, bash will fork.
The newly forked ls' vruntime is set in the future due to START_DEBIT.
This makes 'ls' lose in __prio_less() when compared with hog, whose
vruntime is very likely the same as its cfs_rq's min_vruntime. That by
itself is fine, since we do not want a forked task to starve already
running ones. The problem is that hog keeps running, so its vruntime
stays in sync with its cfs_rq's min_vruntime; OTOH, 'ls' cannot run, so
its cfs_rq's min_vruntime does not advance, and 'ls' always loses to hog.

2 who schedules, who wins

(I disabled START_DEBIT for testing purposes.) When cpu0 schedules, ls
can win, since both sched_entities' vruntime equals their cfs_rq's
min_vruntime. The same holds for hog: when cpu1 schedules, hog can
preempt ls in exactly the same way. The end result is that the
interactive task can lose to the cpu intensive task and ls feels "dead".

I haven't figured out a way to solve this yet. A core wide cfs_rq
min_vruntime can probably solve this. Your suggestions are appreciated.
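
In case it helps the discussion, below is a rough and completely
untested sketch of what I mean by a core wide min_vruntime. All names
in it (core_vruntime, update_core_min_vruntime(), core_fair_before())
are made up for illustration and are not in the posted patches:

/*
 * Rough, untested sketch of the core wide min_vruntime idea.
 *
 * The assumption is that the root cfs_rq of every sibling is somehow
 * kept in sync with one shared per-core minimum that advances as long
 * as any sibling is running.  Task vruntimes then live on the same
 * timeline across siblings and can be compared directly.
 */
struct core_vruntime {
	u64	min_vruntime;	/* shared by all siblings of one core */
};

/* hypothetical hook, called when either sibling updates its min_vruntime */
static inline void update_core_min_vruntime(struct core_vruntime *cv,
					    u64 min_vruntime)
{
	/* only move forward, like cfs_rq->min_vruntime itself */
	if ((s64)(min_vruntime - cv->min_vruntime) > 0)
		cv->min_vruntime = min_vruntime;
}

/* fair-class part of the cross-sibling comparison in __prio_less() */
static inline bool core_fair_before(struct task_struct *a,
				    struct task_struct *b)
{
	/*
	 * With both cfs_rqs following the same core wide min_vruntime,
	 * no per-cfs_rq normalization is needed any more: hog's
	 * vruntime keeps growing while the starved ls' stands still,
	 * so ls eventually wins this compare.
	 */
	return (s64)(a->se.vruntime - b->se.vruntime) < 0;
}

The missing piece is how each sibling's cfs_rq->min_vruntime is
actually kept following the shared value, which is the part I haven't
figured out, so please treat the above only as a description of the
intent, not as a proposed implementation.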