From: Yi Wang <wang.yi59@zte.com.cn>
To: mingo@redhat.com
Cc: peterz@infradead.org, linux-kernel@vger.kernel.org, wang.yi59@zte.com.cn, zhong.weidong@zte.com.cn, liu.yi24@zte.com.cn
Subject: [PATCH] sched/numa: fix choosing isolated CPUs when task_numa_migrate()
Date: Mon, 22 Oct 2018 11:05:16 +0800
Message-Id: <1540177516-38613-1-git-send-email-wang.yi59@zte.com.cn>
X-Mailer: git-send-email 1.8.3.1
When trying to migrate a task in task_numa_migrate(), we invoke
task_numa_find_cpu() to choose a destination CPU. That function skips
CPUs that are not in the task's cpus_allowed mask, but it does not
account for isolated CPUs, so the task may end up running on a CPU
listed in isolcpus=.

Fix this by also checking the load_balance_mask.

Signed-off-by: Yi Wang
Reviewed-by: Yi Liu
---
 kernel/sched/fair.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 908c9cd..0fa0cee 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1709,6 +1709,7 @@ static void task_numa_compare(struct task_numa_env *env,
 	rcu_read_unlock();
 }

+static int is_cpu_load_balance(int cpu);
 static void task_numa_find_cpu(struct task_numa_env *env,
 				long taskimp, long groupimp)
 {
@@ -1731,6 +1732,9 @@ static void task_numa_find_cpu(struct task_numa_env *env,
 		if (!cpumask_test_cpu(cpu, &env->p->cpus_allowed))
 			continue;

+		if (!is_cpu_load_balance(cpu))
+			continue;
+
 		env->dst_cpu = cpu;
 		task_numa_compare(env, taskimp, groupimp, maymove);
 	}
@@ -8528,6 +8532,12 @@ static int should_we_balance(struct lb_env *env)
 	return balance_cpu == env->dst_cpu;
 }

+static int is_cpu_load_balance(int cpu)
+{
+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(load_balance_mask);
+
+	return cpumask_test_cpu(cpu, cpus);
+}
+
 /*
  * Check this_cpu to ensure it is balanced within domain. Attempt to move
  * tasks if there is an imbalance.
--
1.8.3.1