Date: Wed, 3 Oct 2018 08:46:30 +0200 (CEST)
From: Thomas Gleixner
To: Reinette Chatre
cc: fenghua.yu@intel.com, tony.luck@intel.com, jithu.joseph@intel.com,
    gavin.hindman@intel.com, dave.hansen@intel.com, mingo@redhat.com,
    hpa@zytor.com, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] x86/intel_rdt: Introduce utility to obtain CDP peer
In-Reply-To: <6e8c2eddf0cb2521fe7018357a0fa6f8dba7a882.1537987801.git.reinette.chatre@intel.com>
References: <6e8c2eddf0cb2521fe7018357a0fa6f8dba7a882.1537987801.git.reinette.chatre@intel.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Wed, 26 Sep 2018, Reinette Chatre wrote:

> + * Return: 0 if a CDP peer was found, <0 on error or if no CDP peer exists.
> + *         If a CDP peer was found, @r_cdp will point to the peer RDT resource
> + *         and @d_cdp will point to the peer RDT domain.
> + */
> +static int __attribute__((unused)) rdt_cdp_peer_get(struct rdt_resource *r,
> +                                                    struct rdt_domain *d,
> +                                                    struct rdt_resource **r_cdp,
> +                                                    struct rdt_domain **d_cdp)
> +{
> +        struct rdt_resource *_r_cdp = NULL;
> +        struct rdt_domain *_d_cdp = NULL;
> +        int ret = 0;
> +
> +        switch (r->rid) {
> +        case RDT_RESOURCE_L3DATA:
> +                _r_cdp = &rdt_resources_all[RDT_RESOURCE_L3CODE];
> +                break;
> +        case RDT_RESOURCE_L3CODE:
> +                _r_cdp = &rdt_resources_all[RDT_RESOURCE_L3DATA];
> +                break;
> +        case RDT_RESOURCE_L2DATA:
> +                _r_cdp = &rdt_resources_all[RDT_RESOURCE_L2CODE];
> +                break;
> +        case RDT_RESOURCE_L2CODE:
> +                _r_cdp = &rdt_resources_all[RDT_RESOURCE_L2DATA];
> +                break;
> +        default:
> +                ret = -ENOENT;
> +                goto out;
> +        }
> +
> +        /*
> +         * When a new CPU comes online and CDP is enabled then the new
> +         * RDT domains (if any) associated with both CDP RDT resources
> +         * are added in the same CPU online routine while the
> +         * rdtgroup_mutex is held. It should thus not happen for one
> +         * RDT domain to exist and be associated with its RDT CDP
> +         * resource but there is no RDT domain associated with the
> +         * peer RDT CDP resource. Hence the WARN.
> +         */
> +        _d_cdp = rdt_find_domain(_r_cdp, d->id, NULL);
> +        if (WARN_ON(!_d_cdp)) {
> +                _r_cdp = NULL;
> +                ret = -ENOENT;

While this should never happen, the return value is ambiguous. I'd rather
use EINVAL or such and propagate it further down at the call site.

Thanks,

	tglx
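For illustration only, a minimal sketch of that suggestion (the call-site
code below is hypothetical and not part of the posted series): the WARN
branch returns -EINVAL so callers can tell a broken domain setup apart
from the expected "no CDP peer for this resource" case, and only the
former is propagated as a hard error.

        /*
         * Sketch inside rdt_cdp_peer_get(): keep -ENOENT for "this
         * resource has no CDP peer" and use -EINVAL for the
         * inconsistent state behind the WARN, as suggested.
         */
        _d_cdp = rdt_find_domain(_r_cdp, d->id, NULL);
        if (WARN_ON(!_d_cdp)) {
                _r_cdp = NULL;
                ret = -EINVAL;
        }

        /*
         * Hypothetical caller: an inconsistent setup is a hard error,
         * while -ENOENT just means there is no peer to look at.
         */
        ret = rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp);
        if (ret == -EINVAL)
                return ret;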