Date: Wed, 22 Sep 2021 19:57:24 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Vincent Guittot
Cc: Mike Galbraith, Peter Zijlstra, Ingo Molnar, Valentin Schneider,
 Aubrey Li, Barry Song, Srikar Dronamraju, LKML
Subject: Re: [PATCH 2/2] sched/fair: Scale wakeup granularity relative to
 nr_running
Message-ID: <20210922185724.GD3959@techsingularity.net>
References: <20210920142614.4891-3-mgorman@techsingularity.net>
 <22e7133d674b82853a5ee64d3f5fc6b35a8e18d6.camel@gmx.de>
 <20210921103621.GM3959@techsingularity.net>
 <20210922132002.GX3959@techsingularity.net>
 <20210922150457.GA3959@techsingularity.net>
 <20210922173853.GB3959@techsingularity.net>

On Wed, Sep 22, 2021 at 08:22:43PM +0200, Vincent Guittot wrote:
> > > > > In your case, you want hackbench threads not to preempt each
> > > > > other because they try to use the same resources, so it's
> > > > > probably better to let the current one move forward, but that's
> > > > > not a universal policy.
> > > >
> > > > No, but do you have a better suggestion? hackbench might be
> > > > stupid, but it's an example of a workload that can excessively
> > > > preempt itself. While overloading an entire machine is stupid,
> > > > the same could also happen to applications running within a
> > > > constrained cpumask.
> > >
> > > But this is a property that is specific to each application. Some
> > > have a lot of running threads but few wakeups, which have to
> > > preempt current threads quickly, while others want the opposite.
> > > Because it is an application-specific property, we should define it
> > > that way instead of trying to guess.
> >
> > I'm not seeing an alternative suggestion that could be turned into
> > an implementation. The current value for sched_wakeup_granularity
> > was set 12 years ago, when it was exposed for tuning; that is no
> > longer the case. The intent was to allow some dynamic adjustment
> > between sysctl_sched_wakeup_granularity and sysctl_sched_latency to
> > reduce over-scheduling in the worst case without disabling
> > preemption entirely (which the first version did).
> >
> > Should we just ignore this problem and hope it goes away, or just
> > let people keep poking silly values into debugfs via tuned?
>
> We should certainly not add a bandaid, because people will continue to
> poke silly values in either way. And increasing
> sysctl_sched_wakeup_granularity based on the number of running threads
> is not the right solution. According to the description of your
> problem, that the current task doesn't get enough time to move
> forward, sysctl_sched_min_granularity should be part of the solution.
> Something like below will ensure that current gets a chance to move
> forward.
>

That's a very interesting idea! I've queued it up for further testing
and as a comparison to the bandaid.

-- 
Mel Gorman
SUSE Labs
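[The snippet Vincent refers to with "something like below" is not
preserved in this message. As an illustration only, a minimal sketch of
the kind of check he describes might look like the hunk below, assuming
it is placed in check_preempt_wakeup() in kernel/sched/fair.c of that
era; the placement and exact condition are assumptions, not his actual
patch.]

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 	update_curr(cfs_rq_of(se));
 
+	/*
+	 * Illustrative sketch (not Vincent's actual snippet): refuse
+	 * wakeup preemption until current has run for at least
+	 * sysctl_sched_min_granularity since it was last picked, so it
+	 * is guaranteed some forward progress between preemptions.
+	 */
+	if (se->sum_exec_runtime - se->prev_sum_exec_runtime <
+	    sysctl_sched_min_granularity)
+		return;
+
 	if (wakeup_preempt_entity(se, pse) == 1) {

Unlike scaling sysctl_sched_wakeup_granularity with nr_running, a guard
of this kind bounds over-scheduling directly: no matter how many
threads are runnable, current cannot be wakeup-preempted before it has
made sysctl_sched_min_granularity worth of progress.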