Date: Wed, 4 Jan 2023 10:59:39 +0000
From: Sudeep Holla
To: Yong-Xuan Wang
Cc: Greg Kroah-Hartman, "Rafael J. Wysocki", linux-kernel@vger.kernel.org,
 Sudeep Holla, Pierre Gondois, Vincent Chen, Greentime Hu
Subject: Re: [PATCH -next v3] drivers: base: cacheinfo: fix shared_cpu_map
Message-ID: <20230104105939.vdiq77xbn45agj22@bogus>
References: <20221228032419.1763-1-yongxuan.wang@sifive.com>
 <20221228032419.1763-2-yongxuan.wang@sifive.com>
In-Reply-To: <20221228032419.1763-2-yongxuan.wang@sifive.com>

On Wed, Dec 28, 2022 at 03:24:19AM +0000, Yong-Xuan Wang wrote:
> The cacheinfo sets up the shared_cpu_map by checking whether the caches
> with the same index are shared between CPUs. However, this will trigger
> a slab-out-of-bounds access if the CPUs do not have the same cache
> hierarchy. Another problem is the mismatched shared_cpu_map when the
> shared cache does not have the same index between CPUs.
>
>          CPU0    I  D  L3
>         index    0  1  2  x
>                  ^  ^  ^  ^
>         index    0  1  2  3
>          CPU1    I  D  L2 L3
>
> This patch checks whether each cache is shared with all the caches on
> the other CPUs.
>

Just curious to know if this is just a Qemu config or a real platform. I had
intentionally not supported this, just to get to know when such h/w appears
in the real world 😄.
> Reviewed-by: Pierre Gondois
> Signed-off-by: Yong-Xuan Wang
> ---
>  drivers/base/cacheinfo.c | 25 +++++++++++++++----------
>  1 file changed, 15 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
> index 950b22cdb5f7..dfa804bcf3cc 100644
> --- a/drivers/base/cacheinfo.c
> +++ b/drivers/base/cacheinfo.c
> @@ -256,7 +256,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
>  {
>  	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
>  	struct cacheinfo *this_leaf, *sib_leaf;
> -	unsigned int index;
> +	unsigned int index, sib_index;
>  	int ret = 0;
>
>  	if (this_cpu_ci->cpu_map_populated)
> @@ -284,11 +284,12 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
>
>  		if (i == cpu || !sib_cpu_ci->info_list)
>  			continue;/* skip if itself or no cacheinfo */
> -
> -		sib_leaf = per_cpu_cacheinfo_idx(i, index);
> -		if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
> -			cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
> -			cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
> +		for (sib_index = 0; sib_index < cache_leaves(i); sib_index++) {
> +			sib_leaf = per_cpu_cacheinfo_idx(i, sib_index);
> +			if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
> +				cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
> +				cpumask_set_cpu(i, &this_leaf->shared_cpu_map);

Does it make sense to break here once we match, as it is unlikely to match
with any other indices?
> +			}
> +		}
>  	}
>  	/* record the maximum cache line size */
> @@ -302,7 +303,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
>  static void cache_shared_cpu_map_remove(unsigned int cpu)
>  {
>  	struct cacheinfo *this_leaf, *sib_leaf;
> -	unsigned int sibling, index;
> +	unsigned int sibling, index, sib_index;
>
>  	for (index = 0; index < cache_leaves(cpu); index++) {
>  		this_leaf = per_cpu_cacheinfo_idx(cpu, index);
> @@ -313,9 +314,13 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
>  		if (sibling == cpu || !sib_cpu_ci->info_list)
>  			continue;/* skip if itself or no cacheinfo */
>
> -		sib_leaf = per_cpu_cacheinfo_idx(sibling, index);
> -		cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
> -		cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
> +		for (sib_index = 0; sib_index < cache_leaves(sibling); sib_index++) {
> +			sib_leaf = per_cpu_cacheinfo_idx(sibling, sib_index);
> +			if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
> +				cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
> +				cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);

Same comment as above.

--
Regards,
Sudeep