From: Wanpeng Li
Date: Mon, 8 Jul 2019 12:05:44 +0800
Subject: Re: [PATCH v4 1/2] sched/isolation: Prefer housekeeping cpu in local node
To: LKML
Cc: Ingo Molnar, Peter Zijlstra, Frederic Weisbecker, Thomas Gleixner, Srikar Dronamraju
In-Reply-To: <1561711901-4755-2-git-send-email-wanpengli@tencent.com>
References: <1561711901-4755-1-git-send-email-wanpengli@tencent.com> <1561711901-4755-2-git-send-email-wanpengli@tencent.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Kindly ping for these two patches. :)

On Fri, 28 Jun 2019 at 16:51, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> In a real product setup there will be housekeeping CPUs on each node; it
> is preferable to do housekeeping on the local node, falling back to the
> global online cpumask if no housekeeping CPU can be found on the local
> node.
>
> Reviewed-by: Frederic Weisbecker
> Reviewed-by: Srikar Dronamraju
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Frederic Weisbecker
> Cc: Thomas Gleixner
> Cc: Srikar Dronamraju
> Signed-off-by: Wanpeng Li
> ---
> v3 -> v4:
>  * have a static function for sched_numa_find_closest
>  * cleanup sched_numa_find_closest comments
> v2 -> v3:
>  * add sched_numa_find_closest comments
> v1 -> v2:
>  * introduce sched_numa_find_closest
>
>  kernel/sched/isolation.c | 12 ++++++++++--
>  kernel/sched/sched.h     |  8 +++++---
>  kernel/sched/topology.c  | 20 ++++++++++++++++++++
>  3 files changed, 35 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 7b9e1e0..191f751 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -16,9 +16,17 @@ static unsigned int housekeeping_flags;
>
>  int housekeeping_any_cpu(enum hk_flags flags)
>  {
> -        if (static_branch_unlikely(&housekeeping_overridden))
> -                if (housekeeping_flags & flags)
> +        int cpu;
> +
> +        if (static_branch_unlikely(&housekeeping_overridden)) {
> +                if (housekeeping_flags & flags) {
> +                        cpu = sched_numa_find_closest(housekeeping_mask, smp_processor_id());
> +                        if (cpu < nr_cpu_ids)
> +                                return cpu;
> +
>                          return cpumask_any_and(housekeeping_mask, cpu_online_mask);
> +                }
> +        }
>          return smp_processor_id();
>  }
>  EXPORT_SYMBOL_GPL(housekeeping_any_cpu);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 802b1f3..ec65d90 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1261,16 +1261,18 @@ enum numa_topology_type {
>  extern enum numa_topology_type sched_numa_topology_type;
>  extern int sched_max_numa_distance;
>  extern bool find_numa_distance(int distance);
> -#endif
> -
> -#ifdef CONFIG_NUMA
>  extern void sched_init_numa(void);
>  extern void sched_domains_numa_masks_set(unsigned int cpu);
>  extern void sched_domains_numa_masks_clear(unsigned int cpu);
> +extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
>  #else
>  static inline void sched_init_numa(void) { }
>  static inline void sched_domains_numa_masks_set(unsigned int cpu) { }
>  static inline void sched_domains_numa_masks_clear(unsigned int cpu) { }
> +static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
> +{
> +        return nr_cpu_ids;
> +}
>  #endif
>
>  #ifdef CONFIG_NUMA_BALANCING
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index f751ce0..4eea2c9 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1724,6 +1724,26 @@ void sched_domains_numa_masks_clear(unsigned int cpu)
>          }
>  }
>
> +/*
> + * sched_numa_find_closest() - given the NUMA topology, find the cpu
> + * closest to @cpu from @cpumask.
> + * cpumask: cpumask to find a cpu from
> + * cpu: cpu to be close to
> + *
> + * returns: cpu, or nr_cpu_ids when nothing found.
> + */
> +int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
> +{
> +        int i, j = cpu_to_node(cpu);
> +
> +        for (i = 0; i < sched_domains_numa_levels; i++) {
> +                cpu = cpumask_any_and(cpus, sched_domains_numa_masks[i][j]);
> +                if (cpu < nr_cpu_ids)
> +                        return cpu;
> +        }
> +        return nr_cpu_ids;
> +}
> +
>  #endif /* CONFIG_NUMA */
>
>  static int __sdt_alloc(const struct cpumask *cpu_map)
> --
> 2.7.4
>
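
For readers skimming the thread, here is a minimal standalone sketch of
the lookup order this patch introduces. It is illustrative only, not the
kernel code itself: pick_housekeeping_cpu() is a made-up name, and it
assumes the helpers used in the diff above (sched_numa_find_closest(),
housekeeping_mask, cpu_online_mask, smp_processor_id()).

    /*
     * Sketch: pick a housekeeping CPU, preferring NUMA locality.
     *
     * 1) Walk the NUMA distance levels outward from the calling CPU's
     *    node and take the first housekeeping CPU found.
     * 2) If none is found (sched_numa_find_closest() returns
     *    nr_cpu_ids), fall back to any online housekeeping CPU, which
     *    is the pre-patch behavior.
     */
    static int pick_housekeeping_cpu(void)
    {
            int cpu = sched_numa_find_closest(housekeeping_mask,
                                              smp_processor_id());

            if (cpu < nr_cpu_ids)
                    return cpu;     /* nearest node that has one */

            return cpumask_any_and(housekeeping_mask, cpu_online_mask);
    }

The key design point is that sched_numa_find_closest() iterates
sched_domains_numa_masks level by level, so nearer nodes are tried
before more distant ones, and the old global cpumask_any_and() lookup
is kept as the final fallback.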