Date: Thu, 25 Jul 2019 22:31:27 +0800
From: Aaron Lu <aaron.lu@linux.alibaba.com>
To: Aubrey Li
Cc: Julien Desfossez, Subhra Mazumdar, Vineeth Remanan Pillai,
    Nishanth Aravamudan, Peter Zijlstra, Tim Chen, Ingo Molnar,
    Thomas Gleixner, Paul Turner, Linus Torvalds,
    Linux List Kernel Mailing, Frédéric Weisbecker, Kees Cook,
    Greg Kerr, Phil Auld, Valentin Schneider, Mel Gorman,
    Pawan Gupta, Paolo Bonzini
Subject: [RFC PATCH 1/3] wrapper for cfs_rq->min_vruntime
Message-ID: <20190725143127.GB992@aaronlu>
In-Reply-To: <20190725143003.GA992@aaronlu>
References: <20190531210816.GA24027@sinkpad>
 <20190606152637.GA5703@sinkpad>
 <20190612163345.GB26997@sinkpad>
 <635c01b0-d8f3-561b-5396-10c75ed03712@oracle.com>
 <20190613032246.GA17752@sinkpad>
 <20190619183302.GA6775@sinkpad>
 <20190718100714.GA469@aaronlu>
 <20190725143003.GA992@aaronlu>

Add a wrapper function cfs_rq_min_vruntime(cfs_rq) to return
cfs_rq->min_vruntime.

It will be used in the following patch, no functionality change.

Signed-off-by: Aaron Lu <aaron.lu@linux.alibaba.com>
---
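
Note (kept below the "---" so it is not part of the changelog): the point
of the accessor is to have a single indirection point for reading
min_vruntime. A rough sketch of the kind of redirection a later patch
could then make is below; sched_core_enabled() and core_cfs_rq() are
hypothetical names used only for illustration, not something this patch
adds.

	/*
	 * Illustration only, not part of the diff below: once every
	 * reader goes through the accessor, a single future change here
	 * can redirect where min_vruntime is read from.
	 * sched_core_enabled() and core_cfs_rq() are hypothetical
	 * helpers in this sketch.
	 */
	static inline u64 cfs_rq_min_vruntime(struct cfs_rq *cfs_rq)
	{
		if (!sched_core_enabled(rq_of(cfs_rq)))
			return cfs_rq->min_vruntime;

		/* e.g. a core-wide min_vruntime instead of the per-cpu one */
		return core_cfs_rq(cfs_rq)->min_vruntime;
	}
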
 kernel/sched/fair.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 26d29126d6a5..a7b26c96f46b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -431,6 +431,11 @@ find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
+static inline u64 cfs_rq_min_vruntime(struct cfs_rq *cfs_rq)
+{
+	return cfs_rq->min_vruntime;
+}
+
 static __always_inline
 void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec);
 
@@ -467,7 +472,7 @@ static void update_min_vruntime(struct cfs_rq *cfs_rq)
 	struct sched_entity *curr = cfs_rq->curr;
 	struct rb_node *leftmost = rb_first_cached(&cfs_rq->tasks_timeline);
 
-	u64 vruntime = cfs_rq->min_vruntime;
+	u64 vruntime = cfs_rq_min_vruntime(cfs_rq);
 
 	if (curr) {
 		if (curr->on_rq)
@@ -487,7 +492,7 @@ static void update_min_vruntime(struct cfs_rq *cfs_rq)
 	}
 
 	/* ensure we never gain time by being placed backwards. */
-	cfs_rq->min_vruntime = max_vruntime(cfs_rq->min_vruntime, vruntime);
+	cfs_rq->min_vruntime = max_vruntime(cfs_rq_min_vruntime(cfs_rq), vruntime);
 #ifndef CONFIG_64BIT
 	smp_wmb();
 	cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;
@@ -3742,7 +3747,7 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq) {}
 static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 #ifdef CONFIG_SCHED_DEBUG
-	s64 d = se->vruntime - cfs_rq->min_vruntime;
+	s64 d = se->vruntime - cfs_rq_min_vruntime(cfs_rq);
 
 	if (d < 0)
 		d = -d;
@@ -3755,7 +3760,7 @@ static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
-	u64 vruntime = cfs_rq->min_vruntime;
+	u64 vruntime = cfs_rq_min_vruntime(cfs_rq);
 
 	/*
 	 * The 'current' period is already promised to the current tasks,
@@ -3848,7 +3853,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * update_curr().
 	 */
 	if (renorm && curr)
-		se->vruntime += cfs_rq->min_vruntime;
+		se->vruntime += cfs_rq_min_vruntime(cfs_rq);
 
 	update_curr(cfs_rq);
 
@@ -3859,7 +3864,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * fairness detriment of existing tasks.
 	 */
 	if (renorm && !curr)
-		se->vruntime += cfs_rq->min_vruntime;
+		se->vruntime += cfs_rq_min_vruntime(cfs_rq);
 
 	/*
 	 * When enqueuing a sched_entity, we must:
@@ -3972,7 +3977,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * can move min_vruntime forward still more.
 	 */
 	if (!(flags & DEQUEUE_SLEEP))
-		se->vruntime -= cfs_rq->min_vruntime;
+		se->vruntime -= cfs_rq_min_vruntime(cfs_rq);
 
 	/* return excess runtime on last dequeue */
 	return_cfs_rq_runtime(cfs_rq);
@@ -6722,7 +6727,7 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
 			min_vruntime = cfs_rq->min_vruntime;
 		} while (min_vruntime != min_vruntime_copy);
 #else
-		min_vruntime = cfs_rq->min_vruntime;
+		min_vruntime = cfs_rq_min_vruntime(cfs_rq);
 #endif
 
 		se->vruntime -= min_vruntime;
@@ -10215,7 +10220,7 @@ static void task_fork_fair(struct task_struct *p)
 		resched_curr(rq);
 	}
 
-	se->vruntime -= cfs_rq->min_vruntime;
+	se->vruntime -= cfs_rq_min_vruntime(cfs_rq);
 	rq_unlock(rq, &rf);
 }
 
@@ -10335,7 +10340,7 @@ static void detach_task_cfs_rq(struct task_struct *p)
 		 * cause 'unlimited' sleep bonus.
		 */
 		place_entity(cfs_rq, se, 0);
-		se->vruntime -= cfs_rq->min_vruntime;
+		se->vruntime -= cfs_rq_min_vruntime(cfs_rq);
 	}
 
 	detach_entity_cfs_rq(se);
@@ -10349,7 +10354,7 @@ static void attach_task_cfs_rq(struct task_struct *p)
 	attach_entity_cfs_rq(se);
 
 	if (!vruntime_normalized(p))
-		se->vruntime += cfs_rq->min_vruntime;
+		se->vruntime += cfs_rq_min_vruntime(cfs_rq);
 }
 
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
-- 
2.19.1.3.ge56e4f7