From: Patrick Bellasi
To: linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Quentin Perret,
    Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos,
    Steve Muckle, Suren Baghdasaryan, Ingo Molnar
Subject: [PATCH v2 3/3] sched/fair: add lsub_positive and use it consistently
Date: Mon, 5 Nov 2018 14:54:00 +0000
Message-Id: <20181105145400.935-4-patrick.bellasi@arm.com>
In-Reply-To: <20181105145400.935-1-patrick.bellasi@arm.com>
References: <20181105145400.935-1-patrick.bellasi@arm.com>

The following pattern:

   var -= min_t(typeof(var), var, val);

is used multiple times in fair.c.

The existing sub_positive() already captures that pattern, but it also
adds explicit load-store operations to properly support lockless
observations. In other cases, the pattern above is used to update
local, and/or not concurrently accessed, variables.

Let's add a simpler version of sub_positive(), targeted at local
variable updates, which gives the same readability benefits at the
calling sites without enforcing {READ,WRITE}_ONCE barriers.

Signed-off-by: Patrick Bellasi
Link: https://lore.kernel.org/lkml/20181031184527.GA3178@hirez.programming.kicks-ass.net
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org
---
 kernel/sched/fair.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aeb37fe4dbb1..d50c739127d6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2734,6 +2734,17 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	WRITE_ONCE(*ptr, res);					\
 } while (0)
 
+/*
+ * Remove and clamp on negative, from a local variable.
+ *
+ * A variant of sub_positive() which does not use explicit load-store
+ * and thus is optimized for local variable updates.
+ */
+#define lsub_positive(_ptr, _val) do {				\
+	typeof(_ptr) ptr = (_ptr);				\
+	*ptr -= min_t(typeof(*ptr), *ptr, _val);		\
+} while (0)
+
 #ifdef CONFIG_SMP
 static inline void
 enqueue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
@@ -4639,7 +4650,7 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
 		cfs_b->distribute_running = 0;
 		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
 
-		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+		lsub_positive(&cfs_b->runtime, runtime);
 	}
 
 	/*
@@ -4773,7 +4784,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
 
 	raw_spin_lock(&cfs_b->lock);
 	if (expires == cfs_b->runtime_expires)
-		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+		lsub_positive(&cfs_b->runtime, runtime);
 	cfs_b->distribute_running = 0;
 	raw_spin_unlock(&cfs_b->lock);
 }
@@ -6240,7 +6251,7 @@ static unsigned long cpu_util_without(int cpu, struct task_struct *p)
 	util = READ_ONCE(cfs_rq->avg.util_avg);
 
 	/* Discount task's util from CPU's util */
-	util -= min_t(unsigned int, util, task_util(p));
+	lsub_positive(&util, task_util(p));
 
 	/*
 	 * Covered cases:
@@ -6289,10 +6300,9 @@ static unsigned long cpu_util_without(int cpu, struct task_struct *p)
 		 * properly fix the execl regression and it helps in further
 		 * reducing the chances for the above race.
 		 */
-		if (unlikely(task_on_rq_queued(p) || current == p)) {
-			estimated -= min_t(unsigned int, estimated,
-					   _task_util_est(p));
-		}
+		if (unlikely(task_on_rq_queued(p) || current == p))
+			lsub_positive(&estimated, _task_util_est(p));
+
 		util = max(util, estimated);
 	}
 
-- 
2.18.0
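
For completeness, below is a minimal user-space sketch of the semantics
the new macro provides: the subtraction is clamped at zero, so an
unsigned local variable never wraps around. The min_t() stub and the
main() harness are assumptions added only so the snippet builds with
GCC outside the kernel tree; they are not part of the patch.

/*
 * Standalone sketch (not part of the patch): clamped subtraction on a
 * plain local variable, as lsub_positive() does in fair.c.
 * min_t() is stubbed here only so this builds with GCC in user space.
 */
#include <stdio.h>

#define min_t(type, x, y)	((type)(x) < (type)(y) ? (type)(x) : (type)(y))

/* Same shape as the macro added by the patch: subtract and clamp at zero. */
#define lsub_positive(_ptr, _val) do {				\
	typeof(_ptr) ptr = (_ptr);				\
	*ptr -= min_t(typeof(*ptr), *ptr, _val);		\
} while (0)

int main(void)
{
	unsigned int util = 100;

	lsub_positive(&util, 30);	/* 100 - 30 -> 70 */
	printf("util = %u\n", util);

	lsub_positive(&util, 200);	/* clamps at 0 instead of wrapping */
	printf("util = %u\n", util);

	return 0;
}

Running it prints 70 and then 0, i.e. the clamp-at-zero behaviour that
the call sites in cpu_util_without() and the bandwidth code rely on.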