From: Rik van Riel
To: peterz@infradead.org
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, kernel-team@fb.com,
	morten.rasmussen@arm.com, tglx@linutronix.de, dietmar.eggeman@arm.com,
	mgorman@techsingularity.com, vincent.guittot@linaro.org,
	Rik van Riel
Subject: [PATCH 6/8] sched,cfs: fix zero length timeslice calculation
Date: Wed, 12 Jun 2019 15:32:25 -0400
Message-Id: <20190612193227.993-7-riel@surriel.com>
In-Reply-To: <20190612193227.993-1-riel@surriel.com>
References: <20190612193227.993-1-riel@surriel.com>

The way the time slice length is currently calculated, not only do
high priority tasks get longer time slices than low priority tasks,
but due to fixed point math, low priority tasks could end up with a
zero length time slice. This can lead to cache thrashing and other
inefficiencies.

Simplify the logic a little bit, and cap the minimum time slice length
to sysctl_sched_min_granularity.

Tasks that end up getting a time slice length too long for their relative
priority will simply end up having their vruntime advanced much faster
than other tasks, resulting in them receiving time slices less frequently.

Signed-off-by: Rik van Riel
---
 kernel/sched/fair.c | 25 ++++++++-----------------
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c6ede2ecc935..35153a89d5c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -670,22 +670,6 @@ static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
 	return delta;
 }
 
-/*
- * The idea is to set a period in which each task runs once.
- *
- * When there are too many tasks (sched_nr_latency) we have to stretch
- * this period because otherwise the slices get too small.
- *
- * p = (nr <= nl) ? l : l*nr/nl
- */
-static u64 __sched_period(unsigned long nr_running)
-{
-	if (unlikely(nr_running > sched_nr_latency))
-		return nr_running * sysctl_sched_min_granularity;
-	else
-		return sysctl_sched_latency;
-}
-
 /*
  * We calculate the wall-time slice from the period by taking a part
  * proportional to the weight.
@@ -694,7 +678,7 @@ static u64 __sched_period(unsigned long nr_running)
  */
 static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
+	u64 slice = sysctl_sched_latency;
 
 	for_each_sched_entity(se) {
 		struct load_weight *load;
@@ -711,6 +695,13 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		}
 		slice = __calc_delta(slice, se->load.weight, load);
 	}
+
+	/*
+	 * To avoid cache thrashing, run at least sysctl_sched_min_granularity.
+	 * The vruntime of a low priority task advances faster; those tasks
+	 * will simply get time slices less frequently.
+	 */
+	slice = max_t(u64, slice, sysctl_sched_min_granularity);
 
 	return slice;
 }
-- 
2.20.1