Date: Wed, 30 May 2018 10:32:03 +0100
From: Patrick Bellasi
To: Vincent Guittot
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, "Rafael J. Wysocki",
	Juri Lelli, Dietmar Eggemann, Morten Rasmussen, viresh kumar,
	Valentin Schneider, Quentin Perret
Subject: Re: [PATCH v5 02/10] sched/rt: add rt_rq utilization tracking
Message-ID: <20180530093203.GG30654@e110439-lin>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
	<1527253951-22709-3-git-send-email-vincent.guittot@linaro.org>
	<20180525155437.GE30654@e110439-lin>
In-Reply-To:

On 29-May 15:29, Vincent Guittot wrote:
> Hi Patrick,
>
> >> +static inline bool rt_rq_has_blocked(struct rq *rq)
> >> +{
> >> +	if (rq->avg_rt.util_avg)
> >
> > Should use READ_ONCE?
>
> I was expecting that there would be only one read by default, but I can
> add READ_ONCE.

I would say here it's required mainly for "documentation" purposes,
since we can use this function from non rq-locked paths, e.g.

   update_sg_lb_stats()
     update_nohz_stats()
       update_blocked_averages()
         rt_rq_has_blocked()

Thus, AFAIU, we should use READ_ONCE to "flag" that the value can
potentially be updated concurrently.

> >> +		return true;
> >> +
> >> +	return false;
> >
> > What about just:
> >
> >	return READ_ONCE(rq->avg_rt.util_avg);
> >
> > ?
>
> This function is renamed and extended with other tracking in the
> following patches, so we have to test several values in the function.
> That's also why there is the if test: additional if tests are
> going to be added.

Right, makes sense.

[...]
> >> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> >> index ef3c4e6..b4148a9 100644
> >> --- a/kernel/sched/rt.c
> >> +++ b/kernel/sched/rt.c
> >> @@ -5,6 +5,8 @@
> >>   */
> >>  #include "sched.h"
> >>
> >> +#include "pelt.h"
> >> +
> >>  int sched_rr_timeslice = RR_TIMESLICE;
> >>  int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
> >>
> >> @@ -1572,6 +1574,9 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> >>
> >>  	rt_queue_push_tasks(rq);
> >>
> >> +	update_rt_rq_load_avg(rq_clock_task(rq), rq,
> >> +		rq->curr->sched_class == &rt_sched_class);
> >> +
> >>  	return p;
> >>  }
> >>
> >> @@ -1579,6 +1584,8 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
> >>  {
> >>  	update_curr_rt(rq);
> >>
> >> +	update_rt_rq_load_avg(rq_clock_task(rq), rq, 1);
> >> +
> >>  	/*
> >>  	 * The previous task needs to be made eligible for pushing
> >>  	 * if it is still active
> >> @@ -2308,6 +2315,7 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
> >>  	struct sched_rt_entity *rt_se = &p->rt;
> >>
> >>  	update_curr_rt(rq);
> >> +	update_rt_rq_load_avg(rq_clock_task(rq), rq, 1);
> >
> > Mmm... not entirely sure... can't we fold
> > update_rt_rq_load_avg() into update_curr_rt() ?
> >
> > Currently update_curr_rt() is used in:
> >   dequeue_task_rt
> >   pick_next_task_rt
> >   put_prev_task_rt
> >   task_tick_rt
> >
> > while we update_rt_rq_load_avg() only in:
> >   pick_next_task_rt
> >   put_prev_task_rt
> >   task_tick_rt
> > and
> >   update_blocked_averages
> >
> > Why don't we need to update at dequeue_task_rt() time ?
>
> We are tracking the rt rq, not sched entities, so we want to know when
> the rt sched class is (or is not) the running class on the rq. Tracking
> dequeue_task_rt is useless.

What about (push) migrations?

-- 
#include <best/regards.h>

Patrick Bellasi