From: Barry Song <song.bao.hua@hisilicon.com>
Cc: Barry Song, Valentin Schneider
Subject: [PATCH v2] sched/topology: remove redundant cpumask_and in init_overlap_sched_group
Date: Thu, 25 Mar 2021 15:31:40 +1300
Message-ID: <20210325023140.23456-1-song.bao.hua@hisilicon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

mask is built in build_balance_mask() by for_each_cpu(i, sg_span), so it
must be a subset of
sched_group_span(sg). Though cpumask_first_and() does not pick a wrong
balance CPU, doing the cpumask_and() again is pointless.

Signed-off-by: Barry Song
Reviewed-by: Valentin Schneider
---
-v2: add Reviewed-by from Valentin, thanks!

 kernel/sched/topology.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f2066d682cd8..d1aec244c027 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -934,7 +934,7 @@ static void init_overlap_sched_group(struct sched_domain *sd,
 	int cpu;
 
 	build_balance_mask(sd, sg, mask);
-	cpu = cpumask_first_and(sched_group_span(sg), mask);
+	cpu = cpumask_first(mask);
 
 	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
 	if (atomic_inc_return(&sg->sgc->ref) == 1)
-- 
2.25.1
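
To make the subset argument concrete, here is a minimal userspace sketch
(plain unsigned-long bitmasks standing in for struct cpumask; it is an
illustration, not kernel code and not part of the patch): when every bit of
mask was set from a bit of span, the lowest set bit of (span & mask) is
necessarily the lowest set bit of mask, so dropping the extra AND cannot
change the chosen balance CPU.

/*
 * Illustration only: ordinary bitmasks stand in for struct cpumask.
 * If mask is a subset of span, then (span & mask) == mask, so the lowest
 * set bit of (span & mask) equals the lowest set bit of mask -- which is
 * why cpumask_first_and(sched_group_span(sg), mask) and cpumask_first(mask)
 * select the same balance CPU in init_overlap_sched_group().
 */
#include <assert.h>
#include <stdio.h>

/* Index of the lowest set bit, mimicking cpumask_first(); x must be non-zero. */
static int first_bit(unsigned long x)
{
	return __builtin_ctzl(x);
}

int main(void)
{
	unsigned long span = 0xf0;	/* CPUs 4-7: stand-in for sched_group_span(sg) */
	unsigned long mask = 0x30;	/* CPUs 4-5: built only from bits of span      */

	assert((span & mask) == mask);				/* subset property     */
	assert(first_bit(span & mask) == first_bit(mask));	/* same balance CPU    */
	printf("balance cpu: %d\n", first_bit(mask));
	return 0;
}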