Subject: Re: [PATCH v5 07/10] sched/irq: add irq utilization tracking
To: Vincent Guittot
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, "Rafael J.
Wysocki" , Juri Lelli , Morten Rasmussen , viresh kumar , Valentin Schneider , Quentin Perret References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org> <1527253951-22709-8-git-send-email-vincent.guittot@linaro.org> <72473e6f-8ade-8e26-3282-276fcae4c4c7@arm.com> From: Dietmar Eggemann Message-ID: Date: Thu, 31 May 2018 18:54:56 +0200 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8 Content-Language: en-US Content-Transfer-Encoding: 7bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 05/30/2018 08:45 PM, Vincent Guittot wrote: > Hi Dietmar, > > On 30 May 2018 at 17:55, Dietmar Eggemann wrote: >> On 05/25/2018 03:12 PM, Vincent Guittot wrote: [...] >>> + */ >>> + ret = ___update_load_sum(rq->clock - running, rq->cpu, >>> &rq->avg_irq, >>> + 0, >>> + 0, >>> + 0); >>> + ret += ___update_load_sum(rq->clock, rq->cpu, &rq->avg_irq, >>> + 1, >>> + 1, >>> + 1); Can you not change the function parameter list to the usual (u64 now, struct rq *rq, int running)? Something like this (only compile and boot tested): -- >8 -- diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 9894bc7af37e..26ffd585cab8 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -177,8 +177,22 @@ static void update_rq_clock_task(struct rq *rq, s64 delta) rq->clock_task += delta; #if defined(CONFIG_IRQ_TIME_ACCOUNTING) || defined(CONFIG_PARAVIRT_TIME_ACCOUNTING) - if ((irq_delta + steal) && sched_feat(NONTASK_CAPACITY)) - update_irq_load_avg(rq, irq_delta + steal); + if ((irq_delta + steal) && sched_feat(NONTASK_CAPACITY)) { + /* + * We know the time that has been used by interrupt since last + * update but we don't when. Let be pessimistic and assume that + * interrupt has happened just before the update. This is not + * so far from reality because interrupt will most probably + * wake up task and trig an update of rq clock during which the + * metric si updated. + * We start to decay with normal context time and then we add + * the interrupt context time. 
+		 * We can safely remove running from rq->clock because
+		 * rq->clock += delta with delta >= running
+		 */
+		update_irq_load_avg(rq_clock(rq) - (irq_delta + steal), rq, 0);
+		update_irq_load_avg(rq_clock(rq), rq, 1);
+	}
 #endif
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1bb3379c4b71..a245f853c271 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7363,7 +7363,7 @@ static void update_blocked_averages(int cpu)
 	}
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
 	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_irq_load_avg(rq, 0);
+	update_irq_load_avg(rq_clock(rq), rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
 	if (others_rqs_have_blocked(rq))
 		done = false;
@@ -7434,7 +7434,7 @@ static inline void update_blocked_averages(int cpu)
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
 	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
 	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_irq_load_avg(rq, 0);
+	update_irq_load_avg(rq_clock(rq), rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
 	if (!cfs_rq_has_blocked(cfs_rq) && !others_rqs_have_blocked(rq))
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index d2e4f2186b13..ae01bb18e28c 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -365,31 +365,15 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
  *
  */
-int update_irq_load_avg(struct rq *rq, u64 running)
+int update_irq_load_avg(u64 now, struct rq *rq, int running)
 {
-	int ret = 0;
-
-	/*
-	 * We know the time that has been used by interrupt since last update
-	 * but we don't when. Let be pessimistic and assume that interrupt has
-	 * happened just before the update. This is not so far from reality
-	 * because interrupt will most probably wake up task and trig an update
-	 * of rq clock during which the metric si updated.
-	 * We start to decay with normal context time and then we add the
-	 * interrupt context time.
-	 * We can safely remove running from rq->clock because
-	 * rq->clock += delta with delta >= running
-	 */
-	ret = ___update_load_sum(rq->clock - running, rq->cpu, &rq->avg_irq,
-				0,
-				0,
-				0);
-	ret += ___update_load_sum(rq->clock, rq->cpu, &rq->avg_irq,
-				1,
-				1,
-				1);
-
-	if (ret)
+	if (___update_load_sum(now, rq->cpu, &rq->avg_irq,
+				running,
+				running,
+				running)) {
 		___update_load_avg(&rq->avg_irq, 1, 1);
+		return 1;
+	}
 
-	return ret;
+	return 0;
 }
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 0ce9a5a5877a..ebc57301a9a8 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -5,7 +5,7 @@ int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_e
 int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq);
 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
 int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
-int update_irq_load_avg(struct rq *rq, u64 running);
+int update_irq_load_avg(u64 now, struct rq *rq, int running);
 
 /*
  * When a task is dequeued, its estimated utilization should not be update if
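
To see the "decay first, then accumulate" step in isolation, here is a
stand-alone user-space toy of the idea. Everything in it (the names, the
fixed-point scale, the decay constant) is invented for illustration; it
is not the PELT code, which lives in kernel/sched/pelt.c and uses
per-period lookup tables with a 32ms half-life:

/*
 * Toy model of the two-step irq PELT update discussed above.
 * Partial periods are simply dropped to keep the toy short, so the
 * output is only qualitative.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PERIOD_NS	(1024 * 1024)	/* ~1ms PELT period */
#define MAX_UTIL	1024		/* fixed point "100% busy" */
#define DECAY_Y		1002		/* ~0.5^(1/32), scaled by 1024 */

struct toy_avg {
	uint64_t last_update;		/* ns timestamp of last update */
	uint64_t util;			/* decayed utilization, 0..1024 */
};

/* Mirror of the proposed (u64 now, ..., int running) shape: decay the
 * signal up to @now; if @running, also accumulate the window as busy. */
static void toy_update(struct toy_avg *sa, uint64_t now, int running)
{
	uint64_t periods = (now - sa->last_update) / PERIOD_NS;

	while (periods--) {
		sa->util = sa->util * DECAY_Y / 1024;
		if (running)
			sa->util += MAX_UTIL * (1024 - DECAY_Y) / 1024;
	}
	sa->last_update = now;
}

int main(void)
{
	struct toy_avg irq_avg = { 0, 0 };
	uint64_t now = 0;

	/* Every 10ms, pretend 2ms were spent in irq context. We don't
	 * know when, so place them pessimistically at the end of the
	 * window, like the two calls in update_rq_clock_task(): decay
	 * over [last, now - irq_delta), then run over the remainder. */
	for (int tick = 1; tick <= 50; tick++) {
		uint64_t irq_delta = 2 * 1000 * 1000;

		now += 10 * 1000 * 1000;
		toy_update(&irq_avg, now - irq_delta, 0);
		toy_update(&irq_avg, now, 1);
		if (tick % 10 == 0)
			printf("tick %2d: irq util ~ %3" PRIu64 "/1024\n",
			       tick, irq_avg.util);
	}
	return 0;
}

The point of the (u64 now, struct rq *rq, int running) shape shows up
here too: the same helper expresses both steps, running == 0 is a pure
decay and running == 1 accumulates the window as busy.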