Date: Fri, 19 Jul 2019 13:52:38 +0800
From: Aaron Lu <aaron.lu@linux.alibaba.com>
To: Tim Chen
Cc: Julien Desfossez, Aubrey Li, Subhra Mazumdar, Vineeth Remanan Pillai,
    Nishanth Aravamudan, Peter Zijlstra, Ingo Molnar, Thomas Gleixner,
    Paul Turner, Linus Torvalds, Linux List Kernel Mailing,
    Frédéric Weisbecker, Kees Cook, Greg Kerr, Phil Auld,
    Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
Message-ID: <20190719055238.GA536@aaronlu>
References: <20190531210816.GA24027@sinkpad> <20190606152637.GA5703@sinkpad>
 <20190612163345.GB26997@sinkpad> <635c01b0-d8f3-561b-5396-10c75ed03712@oracle.com>
 <20190613032246.GA17752@sinkpad> <20190619183302.GA6775@sinkpad>
 <20190718100714.GA469@aaronlu> <5f869512-3336-d9f0-6fff-e1150673a924@linux.intel.com>
In-Reply-To: <5f869512-3336-d9f0-6fff-e1150673a924@linux.intel.com>

On Thu, Jul 18, 2019 at 04:27:19PM -0700, Tim Chen wrote:
> On 7/18/19 3:07 AM, Aaron Lu wrote:
> > On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote:
> >
> > With the below patch on top of v3 that makes use of util_avg to decide
> > which task wins, I can do all 8 steps and the final scores of the 2
> > workloads are: 1796191 and 2199586. The score numbers are not close,
> > suggesting some unfairness, but I can finish the test now...
>
> Aaron,
>
> Do you still see high variance in terms of workload throughput that
> was a problem with the previous version?

Any suggestion how to measure this? It's not clear how Aubrey did his
test; I will need to take a look at sysbench.

> > 	}
> > +
> > +bool cfs_prio_less(struct task_struct *a, struct task_struct *b)
> > +{
> > +	struct sched_entity *sea = &a->se;
> > +	struct sched_entity *seb = &b->se;
> > +	bool samecore = task_cpu(a) == task_cpu(b);
>
> Probably "samecpu" instead of "samecore" will be more accurate.
> I think task_cpu(a) and task_cpu(b) can be different, but still
> belong to the same cpu core.

Right, definitely, guess I'm brain damaged.

> > +	struct task_struct *p;
> > +	s64 delta;
> > +
> > +	if (samecore) {
> > +		/* vruntime is per cfs_rq */
> > +		while (!is_same_group(sea, seb)) {
> > +			int sea_depth = sea->depth;
> > +			int seb_depth = seb->depth;
> > +
> > +			if (sea_depth >= seb_depth)
>
> Should this be strictly ">" instead of ">=" ?
Same depth doesn't necessarily mean same group, while the purpose here
is to make sure they are in the same cfs_rq. When they are of the same
depth but in different cfs_rqs, we will continue to go up till we reach
rq->cfs.

> > +				sea = parent_entity(sea);
> > +			if (sea_depth <= seb_depth)
>
> Should use "<" ?

Ditto here. When they are of the same depth but not in the same cfs_rq,
both se will move up.

> > +				seb = parent_entity(seb);
> > +		}
> > +
> > +		delta = (s64)(sea->vruntime - seb->vruntime);
> > +	}
> > +

Thanks.
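For reference, the walk-up logic being discussed can be sketched in userspace with simplified stand-ins. The structures and helpers below (the mock `sched_entity`, `parent_entity()`, `is_same_group()`) are minimal hypothetical mock-ups, not the kernel's real definitions; they exist only to show why `>=`/`<=` rather than `>`/`<` is needed: at equal depth but in different cfs_rqs, both entities must step up, otherwise the loop could spin forever without ever reaching a common cfs_rq.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures, just enough
 * to exercise the walk-up loop from the quoted patch hunk. */
struct sched_entity {
	int depth;                   /* distance from the root cfs_rq */
	long vruntime;
	struct sched_entity *parent; /* what parent_entity() follows */
	void *cfs_rq;                /* same pointer == same group */
};

static struct sched_entity *parent_entity(struct sched_entity *se)
{
	return se->parent;
}

static bool is_same_group(struct sched_entity *a, struct sched_entity *b)
{
	return a->cfs_rq == b->cfs_rq;
}

/* Mirrors the samecpu branch: walk both entities up until they sit
 * in the same cfs_rq, then their vruntimes are directly comparable. */
static long vruntime_delta(struct sched_entity *sea, struct sched_entity *seb)
{
	while (!is_same_group(sea, seb)) {
		int sea_depth = sea->depth;
		int seb_depth = seb->depth;

		/* >= / <= (not > / <): at equal depth but in different
		 * cfs_rqs, BOTH conditions hold and BOTH entities move
		 * up one level, so the walk keeps making progress. */
		if (sea_depth >= seb_depth)
			sea = parent_entity(sea);
		if (sea_depth <= seb_depth)
			seb = parent_entity(seb);
	}
	return (long)(sea->vruntime - seb->vruntime);
}
```

With a two-level hierarchy (two group entities at depth 0 in the root cfs_rq, each with a task entity at depth 1 in its own child cfs_rq), one iteration moves both task entities up to their group entities, where the vruntimes can finally be compared.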