From: Cheng Jian <cj.chengjian@huawei.com>
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH] sched/core: decrease rq->nr_uninterruptible before set_task_cpu
Date: Tue, 6 Aug 2019 22:12:50 +0800
Message-ID: <1565100770-110193-1-git-send-email-cj.chengjian@huawei.com>

Migration may occur when waking up a process, so we must update
rq->nr_uninterruptible before set_task_cpu(); otherwise we decrement
nr_uninterruptible on the wrong rq. Over time this leaves the count on
some runqueues too large while on others it goes negative.
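A simplified sketch of the misordering (function names as in
kernel/sched/core.c; locking and most intermediate steps elided):

	/* sleep: p goes uninterruptible on CPU0's rq */
	deactivate_task(rq0, p, DEQUEUE_SLEEP);
		rq0->nr_uninterruptible++;		/* old rq charged */

	/* wakeup, possibly issued from another CPU */
	try_to_wake_up(p, ...);
		cpu = select_task_rq(p, ...);		/* may pick CPU1 */
		set_task_cpu(p, cpu);			/* task_rq(p) is now rq1 */
		ttwu_do_activate(rq1, p, ...);
			rq1->nr_uninterruptible--;	/* wrong rq credited */

Moving the decrement before set_task_cpu() makes it hit the same rq
that took the increment.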
Also change the type of rq->nr_uninterruptible to atomic_t, since the
decrement can now run without the rq's lock held (a sketch of the
interleaving follows the diff).

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 kernel/sched/core.c    | 14 +++++++++-----
 kernel/sched/debug.c   |  2 +-
 kernel/sched/loadavg.c |  2 +-
 kernel/sched/sched.h   |  2 +-
 4 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2b037f1..4d3bbc1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1198,7 +1198,7 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
 void activate_task(struct rq *rq, struct task_struct *p, int flags)
 {
 	if (task_contributes_to_load(p))
-		rq->nr_uninterruptible--;
+		atomic_dec(&rq->nr_uninterruptible);
 
 	enqueue_task(rq, p, flags);
 
@@ -1210,7 +1210,7 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
 	p->on_rq = (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING;
 
 	if (task_contributes_to_load(p))
-		rq->nr_uninterruptible++;
+		atomic_inc(&rq->nr_uninterruptible);
 
 	dequeue_task(rq, p, flags);
 }
@@ -2135,9 +2135,6 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
 	lockdep_assert_held(&rq->lock);
 
 #ifdef CONFIG_SMP
-	if (p->sched_contributes_to_load)
-		rq->nr_uninterruptible--;
-
 	if (wake_flags & WF_MIGRATED)
 		en_flags |= ENQUEUE_MIGRATED;
 #endif
@@ -2500,11 +2497,15 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	p->sched_contributes_to_load = !!task_contributes_to_load(p);
 	p->state = TASK_WAKING;
 
+	/* update the rq accounting before set_task_cpu() */
 	if (p->in_iowait) {
 		delayacct_blkio_end(p);
 		atomic_dec(&task_rq(p)->nr_iowait);
 	}
 
+	if (p->sched_contributes_to_load)
+		atomic_dec(&task_rq(p)->nr_uninterruptible);
+
 	cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);
 	if (task_cpu(p) != cpu) {
 		wake_flags |= WF_MIGRATED;
@@ -2519,6 +2520,9 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		atomic_dec(&task_rq(p)->nr_iowait);
 	}
 
+	if (p->sched_contributes_to_load)
+		atomic_dec(&task_rq(p)->nr_uninterruptible);
+
 #endif /* CONFIG_SMP */
 
 	ttwu_queue(p, cpu, wake_flags);
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index f7e4579..fa2c1bc 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -641,7 +641,7 @@ do { \
 	P(nr_running);
 	P(nr_switches);
 	P(nr_load_updates);
-	P(nr_uninterruptible);
+	SEQ_printf(m, "  .%-30s: %d\n", "nr_uninterruptible", atomic_read(&rq->nr_uninterruptible));
 	PN(next_balance);
 	SEQ_printf(m, "  .%-30s: %ld\n", "curr->pid", (long)(task_pid_nr(rq->curr)));
 	PN(clock);
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index 28a5165..cae7643 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -81,7 +81,7 @@ long calc_load_fold_active(struct rq *this_rq, long adjust)
 	long nr_active, delta = 0;
 
 	nr_active = this_rq->nr_running - adjust;
-	nr_active += (long)this_rq->nr_uninterruptible;
+	nr_active += (long)atomic_read(&this_rq->nr_uninterruptible);
 
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 802b1f3..8429281 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -890,7 +890,7 @@ struct rq {
 	 * one CPU and if it got migrated afterwards it may decrease
 	 * it on another CPU. Always updated under the runqueue lock:
 	 */
-	unsigned long		nr_uninterruptible;
+	atomic_t		nr_uninterruptible;
 
 	struct task_struct	*curr;
 	struct task_struct	*idle;
-- 
2.7.4
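
For reference, the interleaving that motivates the atomic_t conversion
(an illustrative scenario, not taken from the original posting): with
the decrement moved before set_task_cpu(), the waker holds p->pi_lock
but not the remote rq->lock, while the rq owner may update the same
counter under rq->lock for another task going to sleep:

	CPU0 (waker, ttwu path)			CPU1 (rq owner, schedule path)
	atomic_dec(&task_rq(p)->nr_uninterruptible);
						atomic_inc(&rq->nr_uninterruptible);

With a plain unsigned long, this concurrent ++/-- pair is a classic
lost-update race; atomic read-modify-write keeps each per-rq counter,
and hence the sum folded by calc_load_fold_active(), consistent.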