Date: Tue, 1 Feb 2022 11:05:32 -0800
From: Ben Widawsky
To: ira.weiny@intel.com
Cc: Dan Williams, Jonathan Cameron, Bjorn Helgaas, Alison Schofield,
	Vishal Verma, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH V6 10/10] cxl/cdat: Parse out DSMAS data from CDAT table
Message-ID: <20220201190532.ynwr73ninobqx7bm@intel.com>
References: <20220201071952.900068-1-ira.weiny@intel.com>
 <20220201071952.900068-11-ira.weiny@intel.com>
In-Reply-To: <20220201071952.900068-11-ira.weiny@intel.com>

On 22-01-31 23:19:52, ira.weiny@intel.com wrote:
> From: Ira Weiny
> 
> CXL memory devices need the information in the Device Scoped Memory
> Affinity Structure (DSMAS). This information is contained within the
> CDAT table buffer which is already read and cached.
> 
> Parse and cache DSMAS data from the CDAT table. Store this data in
> unmarshaled struct dsmas data structures for ease of use.
> 
> Signed-off-by: Ira Weiny
> 
> ---
> Changes from V5
> 	Fix up sparse warnings
> 	Split out cdat_hdr_valid()
> 	Update cdat_hdr_valid()
> 	Remove revision and cs field parsing
> 		There is no point in these
> 	Add seq check and debug print.
> 	From Jonathan
> 		Add spaces around '+' and '/'
> 		use devm_krealloc() for dmas_ary
> 
> Changes from V4
> 	New patch
> ---
>  drivers/cxl/cdat.h        | 21 ++++++++++++
>  drivers/cxl/core/memdev.c | 70 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 91 insertions(+)
> 
> diff --git a/drivers/cxl/cdat.h b/drivers/cxl/cdat.h
> index a7725d26f2d2..f8c126190d18 100644
> --- a/drivers/cxl/cdat.h
> +++ b/drivers/cxl/cdat.h
> @@ -83,17 +83,38 @@
>  #define CDAT_SSLBIS_ENTRY_PORT_Y(entry, i) (((entry)[4 + (i) * 2] & 0xffff0000) >> 16)
>  #define CDAT_SSLBIS_ENTRY_LAT_OR_BW(entry, i) ((entry)[4 + (i) * 2 + 1] & 0x0000ffff)
> 
> +/**
> + * struct cxl_dsmas - host unmarshaled version of DSMAS data
> + *
> + * As defined in the Coherent Device Attribute Table (CDAT) specification this
> + * represents a single DSMAS entry in that table.
> + *
> + * @dpa_base: The lowest DPA address associated with this DSMAD
> + * @dpa_length: Length in bytes of this DSMAD
> + * @non_volatile: If set, the memory region represents Non-Volatile memory
> + */
> +struct cxl_dsmas {
> +	u64 dpa_base;
> +	u64 dpa_length;
> +	/* Flags */
> +	u8 non_volatile:1;
> +};
> +
>  /**
>   * struct cxl_cdat - CXL CDAT data
>   *
>   * @table: cache of CDAT table
>   * @length: length of cached CDAT table
>   * @seq: Last read Sequence number of the CDAT table
> + * @dsmas_ary: Array of DSMAS entries as parsed from the CDAT table
> + * @nr_dsmas: Number of entries in dsmas_ary
>   */
>  struct cxl_cdat {
>  	void *table;
>  	size_t length;
>  	u32 seq;
> +	struct cxl_dsmas *dsmas_ary;
> +	int nr_dsmas;
>  };
> 
>  #endif /* !__CXL_CDAT_H__ */
> diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
> index 11d721c56f08..32342a15e991 100644
> --- a/drivers/cxl/core/memdev.c
> +++ b/drivers/cxl/core/memdev.c
> @@ -6,6 +6,7 @@
>  #include
>  #include
>  #include
> +#include "cdat.h"
>  #include "core.h"
> 
>  static DECLARE_RWSEM(cxl_memdev_rwsem);
> @@ -386,6 +387,71 @@ static int read_cdat_data(struct cxl_memdev *cxlmd,
>  	return rc;
>  }
> 
> +static int parse_dsmas(struct cxl_memdev *cxlmd)
> +{
> +	struct cxl_dsmas *dsmas_ary = NULL;
> +	u32 *data = cxlmd->cdat.table;
> +	int bytes_left = cxlmd->cdat.length;
> +	int nr_dsmas = 0;
> +
> +	if (!data)
> +		return -ENXIO;
> +
> +	/* Skip header */
> +	data += CDAT_HEADER_LENGTH_DW;
> +	bytes_left -= CDAT_HEADER_LENGTH_BYTES;
> +
> +	while (bytes_left > 0) {
> +		u32 *cur_rec = data;
> +		u8 type = FIELD_GET(CDAT_STRUCTURE_DW0_TYPE, cur_rec[0]);
> +		u16 length = FIELD_GET(CDAT_STRUCTURE_DW0_LENGTH, cur_rec[0]);
> +
> +		if (type == CDAT_STRUCTURE_DW0_TYPE_DSMAS) {
> +			struct cxl_dsmas *new_ary;
> +			u8 flags;
> +
> +			new_ary = devm_krealloc(&cxlmd->dev, dsmas_ary,
> +					sizeof(*dsmas_ary) * (nr_dsmas + 1),
> +					GFP_KERNEL);
> +			if (!new_ary) {
> +				dev_err(&cxlmd->dev,
> +					"Failed to allocate memory for DSMAS data\n");
> +				return -ENOMEM;
> +			}

One thought here - it looks like there are at most 256 DSMAS entries. You
could allocate the full 256 up front, and then realloc *down* to the actual
number (a rough sketch of what I mean is at the end of this mail).

> +			dsmas_ary = new_ary;
> +
> +			flags = FIELD_GET(CDAT_DSMAS_DW1_FLAGS, cur_rec[1]);
> +
> +			dsmas_ary[nr_dsmas].dpa_base = CDAT_DSMAS_DPA_OFFSET(cur_rec);
> +			dsmas_ary[nr_dsmas].dpa_length = CDAT_DSMAS_DPA_LEN(cur_rec);
> +			dsmas_ary[nr_dsmas].non_volatile = CDAT_DSMAS_NON_VOLATILE(flags);
> +
> +			dev_dbg(&cxlmd->dev, "DSMAS %d: %llx:%llx %s\n",
> +				nr_dsmas,
> +				dsmas_ary[nr_dsmas].dpa_base,
> +				dsmas_ary[nr_dsmas].dpa_base +
> +					dsmas_ary[nr_dsmas].dpa_length,
> +				(dsmas_ary[nr_dsmas].non_volatile ?
> + "Persistent" : "Volatile") > + ); > + > + nr_dsmas++; > + } > + > + data += (length / sizeof(u32)); > + bytes_left -= length; > + } > + > + if (nr_dsmas == 0) > + return -ENXIO; Hmm is there documentation that suggests a DSMAS must be implemented? Could this just return 0? I'd put maybe dev_dbg here if it's unexpected but not a failure and return success. > + > + dev_dbg(&cxlmd->dev, "Found %d DSMAS entries\n", nr_dsmas); > + cxlmd->cdat.dsmas_ary = dsmas_ary; > + cxlmd->cdat.nr_dsmas = nr_dsmas; > + > + return 0; > +} > + > struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) > { > struct cxl_memdev *cxlmd; > @@ -407,6 +473,10 @@ struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) > if (rc) > goto err; > > + rc = parse_dsmas(cxlmd); > + if (rc) > + dev_warn(dev, "No DSMAS data found: %d\n", rc); > + > /* > * Activate ioctl operations, no cxl_memdev_rwsem manipulation > * needed as this is ordered with cdev_add() publishing the device. > -- > 2.31.1 >