Since 2010 Intel has included non-volatile memory support on a few
storage-focused platforms with a feature named ADR (Asynchronous DRAM
Refresh). These platforms were mostly targeted at custom applications
and never enjoyed standard discovery mechanisms for platform firmware
to advertise non-volatile memory capabilities. This now changes with
the publication of version 6 of the ACPI specification [1] and its
inclusion of a new table for describing platform memory capabilities.
The NVDIMM Firmware Interface Table (NFIT), along with new EFI and E820
memory types, enumerates persistent memory ranges, memory-mapped-I/O
apertures, physical memory devices (DIMMs), and their associated
properties.
The ND-subsystem wraps a Linux device driver model around the objects
and address boundaries defined in the specification and introduces three
new drivers:
nd_pmem: NFIT enabled version of the existing 'pmem' driver [2]
nd_blk: mmio aperture method for accessing persistent storage
nd_btt: give persistent memory disk semantics (atomic sector update)
See the documentation in patch 2 for more details, and there is
supplemental documentation on pmem.io [4]. Please review, and
patches welcome...
For kicking the tires, this release is accompanied by a userspace
management library 'ndctl' [3] that includes unit tests (make check) for all
of the kernel ABIs. The nfit_test.ko module can be used to explore a
sample NFIT topology.
[1]: http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf
[2]: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/log/?h=x86/pmem
[3]: https://github.com/pmem/ndctl
[4]: http://pmem.io/documents/
--
Dan, for the NFIT driver development team: Andy Rudoff, Matthew Wilcox,
Ross Zwisler, and Vishal Verma
---
Dan Williams (19):
e820, efi: add ACPI 6.0 persistent memory types
ND NFIT-Defined/NVDIMM Subsystem
nd_acpi: initial core implementation and nfit skeleton
nd: create an 'nd_bus' from an 'nfit_desc'
nfit-test: manufactured NFITs for interface development
nd: ndctl class device, and nd bus attributes
nd: dimm devices (nfit "memory-devices")
nd: ndctl.h, the nd ioctl abi
nd_dimm: dimm driver and base nd-bus device-driver infrastructure
nd: regions (block-data-window, persistent memory, volatile memory)
nd_region: support for legacy nvdimms
nd_pmem: add NFIT support to the pmem driver
nd: add interleave-set state-tracking infrastructure
nd: namespace indices: read and validate
nd: pmem label sets and namespace instantiation.
nd: blk labels and namespace instantiation
nd: write pmem label set
nd: write blk label set
nd: infrastructure for btt devices
Ross Zwisler (1):
nd_blk: nfit blk driver
Vishal Verma (1):
nd_btt: atomic sector updates
Documentation/blockdev/btt.txt | 273 ++++++
Documentation/blockdev/nd.txt | 867 +++++++++++++++++++
MAINTAINERS | 34 +
arch/arm64/kernel/efi.c | 1
arch/ia64/kernel/efi.c | 1
arch/x86/boot/compressed/eboot.c | 4
arch/x86/include/uapi/asm/e820.h | 1
arch/x86/kernel/e820.c | 25 -
arch/x86/platform/efi/efi.c | 3
drivers/block/Kconfig | 13
drivers/block/Makefile | 2
drivers/block/nd/Kconfig | 130 +++
drivers/block/nd/Makefile | 39 +
drivers/block/nd/acpi.c | 443 ++++++++++
drivers/block/nd/blk.c | 269 ++++++
drivers/block/nd/btt.c | 1423 +++++++++++++++++++++++++++++++
drivers/block/nd/btt.h | 185 ++++
drivers/block/nd/btt_devs.c | 443 ++++++++++
drivers/block/nd/bus.c | 703 +++++++++++++++
drivers/block/nd/core.c | 963 +++++++++++++++++++++
drivers/block/nd/dimm.c | 126 +++
drivers/block/nd/dimm_devs.c | 701 +++++++++++++++
drivers/block/nd/label.c | 925 ++++++++++++++++++++
drivers/block/nd/label.h | 143 +++
drivers/block/nd/namespace_devs.c | 1697 +++++++++++++++++++++++++++++++++++++
drivers/block/nd/nd-private.h | 203 ++++
drivers/block/nd/nd.h | 310 +++++++
drivers/block/nd/nfit.h | 238 +++++
drivers/block/nd/pmem.c | 122 ++-
drivers/block/nd/region.c | 95 ++
drivers/block/nd/region_devs.c | 1196 ++++++++++++++++++++++++++
drivers/block/nd/test/Makefile | 5
drivers/block/nd/test/iomap.c | 199 ++++
drivers/block/nd/test/nfit.c | 1018 ++++++++++++++++++++++
drivers/block/nd/test/nfit_test.h | 37 +
include/linux/efi.h | 3
include/linux/nd.h | 98 ++
include/uapi/linux/Kbuild | 1
include/uapi/linux/ndctl.h | 199 ++++
39 files changed, 13102 insertions(+), 36 deletions(-)
create mode 100644 Documentation/blockdev/btt.txt
create mode 100644 Documentation/blockdev/nd.txt
create mode 100644 drivers/block/nd/Kconfig
create mode 100644 drivers/block/nd/Makefile
create mode 100644 drivers/block/nd/acpi.c
create mode 100644 drivers/block/nd/blk.c
create mode 100644 drivers/block/nd/btt.c
create mode 100644 drivers/block/nd/btt.h
create mode 100644 drivers/block/nd/btt_devs.c
create mode 100644 drivers/block/nd/bus.c
create mode 100644 drivers/block/nd/core.c
create mode 100644 drivers/block/nd/dimm.c
create mode 100644 drivers/block/nd/dimm_devs.c
create mode 100644 drivers/block/nd/label.c
create mode 100644 drivers/block/nd/label.h
create mode 100644 drivers/block/nd/namespace_devs.c
create mode 100644 drivers/block/nd/nd-private.h
create mode 100644 drivers/block/nd/nd.h
create mode 100644 drivers/block/nd/nfit.h
rename drivers/block/{pmem.c => nd/pmem.c} (68%)
create mode 100644 drivers/block/nd/region.c
create mode 100644 drivers/block/nd/region_devs.c
create mode 100644 drivers/block/nd/test/Makefile
create mode 100644 drivers/block/nd/test/iomap.c
create mode 100644 drivers/block/nd/test/nfit.c
create mode 100644 drivers/block/nd/test/nfit_test.h
create mode 100644 include/linux/nd.h
create mode 100644 include/uapi/linux/ndctl.h
ACPI 6.0 formalizes e820-type-7 and efi-type-14 as persistent memory.
Mark it "reserved" and allow it to be claimed by a persistent memory
device driver.
This definition is in addition to the Linux kernel's existing type-12
definition that was recently added in support of shipping platforms with
NVDIMM support that predate ACPI 6.0 (which now classifies type-12 as
OEM reserved). We may choose to exploit this wealth of definitions for
NVDIMMs to differentiate E820_PRAM (type-12) from E820_PMEM (type-7).
One potential differentiation is that PMEM is not backed by struct page
by default in contrast to PRAM. For now, they are effectively treated
as aliases by the mm.
Note, /proc/iomem can be consulted for differentiating legacy
"Persistent RAM" E820_PRAM vs standard "Persistent I/O Memory"
E820_PMEM.
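For intuition only, here is a hedged sketch of that differentiation; the
resource ranges below are fabricated for illustration (on a live system
the input would be /proc/iomem itself, and real addresses will differ):

```shell
# Fabricated /proc/iomem excerpt for illustration; real addresses differ.
sample='100000000-17fffffff : Persistent RAM
180000000-1ffffffff : Persistent I/O Memory'

# Legacy (type-12) ranges report as "Persistent RAM" ...
printf '%s\n' "$sample" | grep -c 'Persistent RAM$'
# ... while ACPI 6.0 (type-7) ranges report as "Persistent I/O Memory".
printf '%s\n' "$sample" | grep -c 'Persistent I/O Memory$'
```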
Cc: Andy Lutomirski <[email protected]>
Cc: Boaz Harrosh <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Ross Zwisler <[email protected]>
---
arch/arm64/kernel/efi.c | 1 +
arch/ia64/kernel/efi.c | 1 +
arch/x86/boot/compressed/eboot.c | 4 ++++
arch/x86/include/uapi/asm/e820.h | 1 +
arch/x86/kernel/e820.c | 25 +++++++++++++++++++------
arch/x86/platform/efi/efi.c | 3 +++
include/linux/efi.h | 3 ++-
7 files changed, 31 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index ab21e0d58278..9d4aa18f2a82 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -158,6 +158,7 @@ static __init int is_reserve_region(efi_memory_desc_t *md)
case EFI_BOOT_SERVICES_CODE:
case EFI_BOOT_SERVICES_DATA:
case EFI_CONVENTIONAL_MEMORY:
+ case EFI_PERSISTENT_MEMORY:
return 0;
default:
break;
diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
index c52d7540dc05..cd8b7485e396 100644
--- a/arch/ia64/kernel/efi.c
+++ b/arch/ia64/kernel/efi.c
@@ -1227,6 +1227,7 @@ efi_initialize_iomem_resources(struct resource *code_resource,
case EFI_RUNTIME_SERVICES_CODE:
case EFI_RUNTIME_SERVICES_DATA:
case EFI_ACPI_RECLAIM_MEMORY:
+ case EFI_PERSISTENT_MEMORY:
default:
name = "reserved";
break;
diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index ef17683484e9..dde5bf7726f4 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -1222,6 +1222,10 @@ static efi_status_t setup_e820(struct boot_params *params,
e820_type = E820_NVS;
break;
+ case EFI_PERSISTENT_MEMORY:
+ e820_type = E820_PMEM;
+ break;
+
default:
continue;
}
diff --git a/arch/x86/include/uapi/asm/e820.h b/arch/x86/include/uapi/asm/e820.h
index 960a8a9dc4ab..0f457e6eab18 100644
--- a/arch/x86/include/uapi/asm/e820.h
+++ b/arch/x86/include/uapi/asm/e820.h
@@ -32,6 +32,7 @@
#define E820_ACPI 3
#define E820_NVS 4
#define E820_UNUSABLE 5
+#define E820_PMEM 7
/*
* This is a non-standardized way to represent ADR or NVDIMM regions that
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 11cc7d54ec3f..410af501a941 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -137,6 +137,8 @@ static void __init e820_print_type(u32 type)
case E820_RESERVED_KERN:
printk(KERN_CONT "usable");
break;
+ case E820_PMEM:
+ case E820_PRAM:
case E820_RESERVED:
printk(KERN_CONT "reserved");
break;
@@ -149,9 +151,6 @@ static void __init e820_print_type(u32 type)
case E820_UNUSABLE:
printk(KERN_CONT "unusable");
break;
- case E820_PRAM:
- printk(KERN_CONT "persistent (type %u)", type);
- break;
default:
printk(KERN_CONT "type %u", type);
break;
@@ -919,10 +918,26 @@ static inline const char *e820_type_to_string(int e820_type)
case E820_NVS: return "ACPI Non-volatile Storage";
case E820_UNUSABLE: return "Unusable memory";
case E820_PRAM: return "Persistent RAM";
+ case E820_PMEM: return "Persistent I/O Memory";
default: return "reserved";
}
}
+static bool do_mark_busy(u32 type, struct resource *res)
+{
+ if (res->start < (1ULL<<20))
+ return true;
+
+ switch (type) {
+ case E820_RESERVED:
+ case E820_PRAM:
+ case E820_PMEM:
+ return false;
+ default:
+ return true;
+ }
+}
+
/*
* Mark e820 reserved areas as busy for the resource manager.
*/
@@ -952,9 +967,7 @@ void __init e820_reserve_resources(void)
* pci device BAR resource and insert them later in
* pcibios_resource_survey()
*/
- if (((e820.map[i].type != E820_RESERVED) &&
- (e820.map[i].type != E820_PRAM)) ||
- res->start < (1ULL<<20)) {
+ if (do_mark_busy(e820.map[i].type, res)) {
res->flags |= IORESOURCE_BUSY;
insert_resource(&iomem_resource, res);
}
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index dbc8627a5cdf..a116e236ac3f 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -145,6 +145,9 @@ static void __init do_add_efi_memmap(void)
case EFI_UNUSABLE_MEMORY:
e820_type = E820_UNUSABLE;
break;
+ case EFI_PERSISTENT_MEMORY:
+ e820_type = E820_PMEM;
+ break;
default:
/*
* EFI_RESERVED_TYPE EFI_RUNTIME_SERVICES_CODE
diff --git a/include/linux/efi.h b/include/linux/efi.h
index cf7e431cbc73..28868504aa17 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -85,7 +85,8 @@ typedef struct {
#define EFI_MEMORY_MAPPED_IO 11
#define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12
#define EFI_PAL_CODE 13
-#define EFI_MAX_MEMORY_TYPE 14
+#define EFI_PERSISTENT_MEMORY 14
+#define EFI_MAX_MEMORY_TYPE 15
/* Attribute values: */
#define EFI_MEMORY_UC ((u64)0x0000000000000001ULL) /* uncached */
Maintainer information and documentation for drivers/block/nd/
Cc: Andy Lutomirski <[email protected]>
Cc: Boaz Harrosh <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: Greg KH <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
Documentation/blockdev/nd.txt | 867 +++++++++++++++++++++++++++++++++++++++++
MAINTAINERS | 34 +-
2 files changed, 895 insertions(+), 6 deletions(-)
create mode 100644 Documentation/blockdev/nd.txt
diff --git a/Documentation/blockdev/nd.txt b/Documentation/blockdev/nd.txt
new file mode 100644
index 000000000000..bcfdf21063ab
--- /dev/null
+++ b/Documentation/blockdev/nd.txt
@@ -0,0 +1,867 @@
+ The NFIT-Defined/NVDIMM Sub-system (ND)
+
+ nd - kernel abi / device-model & ndctl - userspace helper library
+ [email protected]
+ v9: April 17th, 2015
+
+
+ Glossary
+
+ Overview
+ Supporting Documents
+ Git Trees
+
+ NFIT Terminology and NVDIMM Types
+
+ Why BLK?
+ PMEM vs BLK (SPA vs BDW)
+ BLK-REGIONs, PMEM-REGIONs, Atomic Sectors, and DAX
+
+ Example NFIT Diagram
+
+ ND Device Model/ABI and NDCTL API
+ NDCTL: Context
+ ndctl: instantiate a new library context example
+
+ ND/NDCTL: Bus
+ nd: control class device in /sys/class
+ nd: bus layout
+ ndctl: bus enumeration example
+
+ ND/NDCTL: DIMM (NMEM)
+ nd: DIMM (NMEM) layout
+ ndctl: DIMM enumeration example
+
+ ND/NDCTL: Region
+ nd: region layout
+ ndctl: region enumeration example
+ Why Not Encode the Region Type into the Region Name?
+ How Do I Determine the Major Type of a Region?
+
+ ND/NDCTL: Namespace
+ nd: namespace layout
+ ndctl: namespace enumeration example
+ ndctl: namespace creation example
+ Why the Term “namespace”?
+
+ ND/NDCTL: Block Translation Table “btt”
+ nd: btt layout
+ ndctl: btt creation example
+
+ Summary NDCTL Diagram
+
+
+Glossary
+--------
+
+NFIT: NVDIMM Firmware Interface Table
+
+SPA: System Physical Address; also refers to an NFIT system-physical
+address table entry describing a contiguous persistent memory range.
+
+DPA: DIMM Physical Address, a DIMM-relative offset. With one DIMM in
+the system there would be a 1:1 SPA:DPA association. Once more DIMMs
+are added an interleave-description-table provided by NFIT is needed to
+decode a SPA to a DPA.
+
+DCR: DIMM Control Region Descriptor, an NFIT sub-table entry conveying
+the vendor, format, revision, and geometry of the related
+block-data-windows.
+
+BDW: Block Data Window Region Descriptor, an NFIT sub-table referenced
+by a DCR locating a set of data transfer apertures and control registers
+in system memory.
+
+PMEM: A Linux block device which provides access to a SPA range. A PMEM
+device is capable of DAX (see below).
+
+DAX: File system extensions to bypass the page cache and block layer to
+map persistent memory, from a PMEM block device, directly into a process
+address space.
+
+BLK: A Linux block device which accesses NVDIMM storage through a BDW
+(block-data-window aperture). A BLK device is not amenable to DAX.
+
+DSM: Device Specific Method, refers to a runtime service provided by
+platform firmware to send formatted control/configuration messages to a
+DIMM device. In ACPI this is an _DSM attribute of an object.
+
+BTT: Block Translation Table: Persistent memory is byte addressable.
+Existing software may have an expectation that the power-fail-atomicity
+of writes is at least one sector, 512 bytes. The BTT is an indirection
+table with atomic update semantics to front a PMEM/BLK block device
+driver and present arbitrary atomic sector sizes.
+
+LABEL: Metadata stored on a DIMM device that partitions and identifies
+(persistently names) storage between PMEM and BLK. It also partitions
+BLK storage to host BTTs with different parameters per BLK-partition.
+Note that traditional partition tables, GPT/MBR, are layered on top of a
+BLK or PMEM device.
+
+
+
+
+Overview
+--------
+
+The “NVDIMM Firmware Interface Table” (NFIT) defines a set of tables
+that describe the non-volatile memory resources in a platform. Platform
+firmware provides this table as well as runtime-services for sending
+control and configuration messages to capable NVDIMM devices. NFIT is a
+new top-level table in ACPI 6. The Linux ND subsystem is designed as a
+generic mechanism that can register a binary NFIT from any provider,
+ACPI being just one example of a provider. The unit test infrastructure
+in the kernel exploits this capability to provide multiple sample NFITs
+via custom test-platform-devices.
+
+
+Supporting Documents
+ACPI 6: http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf
+NVDIMM Namespace: http://pmem.io/documents/NVDIMM_Namespace_Spec.pdf
+DSM Interface Example: http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf
+Driver Writer’s Guide: http://pmem.io/documents/NVDIMM_Driver_Writers_Guide.pdf
+
+
+Git Trees
+ND: https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git/log/?h=nd
+NDCTL: https://github.com/pmem/ndctl.git
+PMEM: https://github.com/01org/prd
+
+
+NFIT Terminology and NVDIMM Types
+---------------------------------
+
+Prior to the arrival of the NFIT, non-volatile memory was described to a
+system in various ad-hoc ways. Usually only the bare minimum was
+provided, namely, a single SPA range where writes are expected to be
+durable after a system power loss. Now, the NFIT specification
+standardizes not only the description of SPA ranges, but also DCR/BDW
+(block-aperture access) and DSM entry points for control/configuration.
+
+
+For each NFIT-defined I/O interface (SPA, DCR/BDW), ND provides a block
+device driver:
+
+
+1. PMEM (nd_pmem.ko): Drives an NFIT system-physical address (SPA)
+ range. A SPA range is contiguous in system memory and may be
+ interleaved (hardware memory controller striped) across multiple DIMMs.
+ When a SPA is interleaved the NFIT optionally provides descriptions of
+ which DIMMs are participating in the interleave.
+
+ Note, while ND describes SPAs with backing DIMM information
+ (ND_NAMESPACE_PMEM) with a different device-type than SPAs without such
+ a description (ND_NAMESPACE_IO), to nd_pmem there is no distinction.
+ The different device-types are an implementation detail that userspace
+ can exploit to implement policies like “only interface with SPA ranges
+ from certain DIMMs”.
+
+
+2. BLK (nd_blk.ko): This driver performs I/O using a set of DCR/BDW
+ defined apertures. A set of apertures will all access just one DIMM.
+ Multiple windows allow multiple concurrent accesses, much like
+ tagged-command-queuing, and would likely be used by different threads or
+ different CPUs.
+
+ The NFIT specification defines a standard format for a BDW, but the spec
+ also allows for vendor specific layouts. As of this writing “nd_blk”
+ only supports the example interface detailed in the “DSM Interface
+ Example”. If another BDW format arrives in the future it can be added as
+ a new sub-device-type to nd_blk or as a new ND device type with its own
+ driver.
+
+
+Why BLK?
+--------
+
+While PMEM provides direct byte-addressable CPU-load/store access to
+NVDIMM storage, it does not provide the best system RAS (recovery,
+availability, and serviceability) model. An access to a corrupted SPA
+address causes a CPU exception while an access to a corrupted address
+through a BDW aperture causes that block window to raise an error status
+in a register. The latter is more aligned with the standard error model
+that host-bus-adapter attached disks present. Also, if an administrator
+ever wants to replace failing memory, it is easier to service a system at DIMM
+module boundaries. Compare this to PMEM where data could be interleaved
+in an opaque hardware specific manner across several DIMMs.
+
+
+PMEM vs BLK (SPA vs BDW)
+------------------------
+
+BDWs solve this RAS problem, but their presence is also the major
+contributing factor to the complexity of the ND subsystem. They
+complicate the implementation because PMEM and BLK alias in DPA space.
+Any given DIMM’s DPA-range may contribute to one or more SPA sets of
+interleaved DIMMs, *and* may also be accessed in its entirety through
+its BDW. Accessing a DPA through a SPA while simultaneously accessing
+the same DPA through a BDW has undefined results. For this reason,
+DIMMs with this dual interface configuration include a DSM function to
+store/retrieve a LABEL. The LABEL effectively partitions the DPA-space
+into exclusive SPA and BDW accessible regions. For simplicity a DIMM is
+allowed one PMEM “region” per interleave set in which it is a member.
+The remaining DPA space can be carved into an arbitrary number of BLK
+devices with discontiguous extents.
+
+
+BLK-REGIONs, PMEM-REGIONs, Atomic Sectors, and DAX
+--------------------------------------------------
+
+One of the few reasons to allow multiple BLK namespaces per REGION is so
+that each BLK-namespace can be configured with a BTT with unique atomic
+sector sizes. While a PMEM device can host a BTT, the LABEL
+specification does not provide for a sector size to be specified for a
+PMEM namespace. This is due to the expectation that the primary usage
+model for PMEM is via DAX, and the BTT is incompatible with DAX.
+However, for the cases where an application or filesystem still needs
+atomic sector update guarantees it can register a BTT on a PMEM device
+or partition. See ND/NDCTL: Block Translation Table “btt”.
+
+
+________________
+
+
+Example NFIT Diagram
+
+
+For the remainder of this document the following diagram and device
+names will be referenced for the example sysfs layouts.
+
+
+ (a) (b) DIMM BLK-REGION
+ +-------------------+--------+--------+--------+
++------+ | pm0.0 | blk2.0 | pm1.0 | blk2.1 | 0 region2
+| imc0 +--+- - - region0- - - +--------+ +--------+
++--+---+ | pm0.0 | blk3.0 | pm1.0 | blk3.1 | 1 region3
+ | +-------------------+--------v v--------+
++--+---+ | |
+| cpu0 | region1
++--+---+ | |
+ | +----------------------------^ ^--------+
++--+---+ | blk4.0 | pm1.0 | blk4.0 | 2 region4
+| imc1 +--+----------------------------| +--------+
++------+ | blk5.0 | pm1.0 | blk5.0 | 3 region5
+ +----------------------------+--------+--------+
+
+
+In this platform we have four DIMMs and two memory controllers in one
+socket. Each unique interface (BLK or PMEM) to DPA space is identified
+by a region device with a dynamically assigned id (REGION0 - REGION5).
+
+
+1. The first portion of DIMM0 and DIMM1 are interleaved as REGION0. A
+ single PMEM namespace is created in the REGION0-SPA-range that spans
+ DIMM0 and DIMM1 with a user-specified name of "pm0.0". Some of that
+ interleaved SPA range is reclaimed as BDW accessed space starting at
+ DPA-offset (a) into each DIMM. In that reclaimed space we create two
+ BDW "namespaces" from REGION2 and REGION3 where "blk2.0" and "blk3.0"
+ are just human readable names that could be set to any user-desired name
+ in the LABEL.
+
+
+2. In the last portion of DIMM0 and DIMM1 we have an interleaved SPA
+ range, REGION1, that spans those two DIMMs as well as DIMM2 and DIMM3.
+ Some of REGION1 is allocated to a PMEM namespace named "pm1.0"; the rest
+ is reclaimed in 4 BDW namespaces (one for each DIMM in the interleave set),
+ "blk2.1", "blk3.1", "blk4.0", and "blk5.0".
+
+
+3. The portions of DIMM2 and DIMM3 that do not participate in the REGION1
+   interleaved SPA range (i.e. the DPA ranges below offset (b)) are also
+   included in the "blk4.0" and "blk5.0" namespaces. Note that this
+ example shows that BDW namespaces don't need to be contiguous in
+ DPA-space.
+
+This bus is provided by the kernel under the device
+/sys/devices/platform/nfit_test.0 when CONFIG_NFIT_TEST is enabled and
+the nfit_test.ko module is loaded.
+
+
+ND Device Model/ABI and NDCTL API
+---------------------------------
+
+What follows is a description of the ND sysfs layout and a corresponding
+object hierarchy diagram as viewed through the NDCTL api. The example
+sysfs paths and diagrams are relative to the Example NFIT Diagram which
+is also the NFIT used in the “nd/ndctl” unit test.
+
+
+NDCTL: Context
+Every api call in the NDCTL library requires a context that holds the
+logging parameters and other library instance state. The library is
+based on the libabc template:
+https://git.kernel.org/cgit/linux/kernel/git/kay/libabc.git/
+
+ndctl: instantiate a new library context example
+
+ struct ndctl_ctx *ctx;
+
+ if (ndctl_new(&ctx) == 0)
+ return ctx;
+ else
+ return NULL;
+
+
+ND/NDCTL: Bus
+A bus has a 1:1 relationship with an NFIT. The current expectation for
+ACPI based systems is that there is only ever one platform-global NFIT.
+That said, it is trivial to register multiple NFITs; the specification
+does not preclude it. The infrastructure supports multiple busses and
+we use this capability to test multiple NFIT configurations in the
+unit test.
+
+nd: control class device in /sys/class
+
+This character device accepts DSM messages to be passed to a DIMM
+identified by its NFIT handle.
+
+ /sys/class/nd/ndctl0
+ |-- dev
+ |-- device -> ../../../ndbus0
+ |-- subsystem -> ../../../../../../../class/nd
+
+
+nd: bus layout
+
+ /sys/devices/platform/nfit_test.0/ndbus0
+ |-- btt0
+ |-- btt_seed
+ |-- commands
+ |-- nd
+ |-- nmem0
+ |-- nmem1
+ |-- nmem2
+ |-- nmem3
+ |-- provider
+ |-- region0
+ |-- region1
+ |-- region2
+ |-- region3
+ |-- region4
+ |-- region5
+ |-- revision
+ |-- uevent
+ `-- wait_probe
+
+
+ndctl: bus enumeration example
+
+Find the 'bus' handle that describes the bus from Example NFIT Diagram
+
+
+ static struct ndctl_bus *get_bus_by_provider(struct ndctl_ctx *ctx,
+ const char *provider)
+ {
+ struct ndctl_bus *bus;
+
+
+ ndctl_bus_foreach(ctx, bus)
+ if (strcmp(provider, ndctl_bus_get_provider(bus)) == 0)
+ return bus;
+
+
+ return NULL;
+ }
+
+ bus = get_bus_by_provider(ctx, “nfit_test.0”);
+
+
+ND/NDCTL: DIMM (NMEM)
+
+The DIMM object identifies the NFIT “handle” and a “phys_id” for a given
+memory device. The “handle” is derived from the DIMM’s physical
+location (socket, memory-controller, channel, slot). The “phys_id” is
+used for looking up DIMM details in other platform tables. The handle
+value is also used to send control/configuration messages via ioctl
+through the “ndctl0” device in the given example. The kernel id (“N” in
+“DIMMN”) for the device is dynamically assigned. The “vendor”,
+“device”, “revision” and “format” attributes are optionally available if
+the NFIT publishes a DCR (DIMM-control-region) for the given memory
+device. These latter attributes are only useful in the presence of a
+vendor-specific DIMM.
+
+
+Note that the kernel device name for “DIMMs” is “nmemX”. The NFIT
+describes these devices via “Memory Device to System Physical Address
+Range Mapping Structure”, and there is no requirement that they actually
+be DIMMs, so we use a more generic name.
+
+
+nd: DIMM (NMEM) layout
+
+ /sys/devices/platform/nfit_test.0/ndbus0/
+ |-- nmem0
+ | |-- available_slots
+ | |-- commands
+ | |-- dev
+ | |-- device
+ | |-- devtype
+ | |-- driver -> ../../../../../bus/nd/drivers/nd_dimm
+ | |-- format
+ | |-- handle
+ | |-- modalias
+ | |-- phys_id
+ | |-- revision
+ | |-- serial
+ | |-- state
+ | |-- subsystem -> ../../../../../bus/nd
+ | |-- uevent
+ | `-- vendor
+ |-- nmem1
+ [..]
+
+ndctl: DIMM enumeration example
+
+Note, DIMMs are identified by an “nfit_handle” which is a 32-bit value
+where:
+
+ Bit 3:0 DIMM number within the memory channel
+ Bit 7:4 memory channel number
+ Bit 11:8 memory controller ID
+ Bit 15:12 socket ID
+ Bit 27:16 Node Controller ID
+ Bit 31:28 Reserved
+
+ static struct ndctl_dimm *get_dimm_by_handle(struct ndctl_bus *bus,
+ unsigned int handle)
+ {
+ struct ndctl_dimm *dimm;
+
+
+ ndctl_dimm_foreach(bus, dimm)
+ if (ndctl_dimm_get_handle(dimm) == handle)
+ return dimm;
+
+
+ return NULL;
+ }
+
+ #define DIMM_HANDLE(n, s, i, c, d) \
+ (((n & 0xfff) << 16) | ((s & 0xf) << 12) | ((i & 0xf) << 8) \
+ | ((c & 0xf) << 4) | (d & 0xf))
+
+ dimm = get_dimm_by_handle(bus, DIMM_HANDLE(0, 0, 0, 0, 0));
+
+
+ND/NDCTL: Region
+A generic REGION device is registered for each SPA or DCR/BDW. Per the
+example there are 6 regions: 2 SPAs and 4 BDWs on the “nfit_test.0” bus.
+The primary role of a region is to be a container of “mappings”. A
+mapping is a tuple of <DIMM, DPA-start-offset, length>.
+
+The ND core provides a driver for these REGION devices. This driver is
+responsible for reconciling the aliased mappings across all regions,
+parsing the LABEL, if present, and then emitting “namespace” devices
+with the resolved/exclusive DPA-boundaries for an ND PMEM or BLK device
+driver to consume.
+
+In addition to the generic attributes of “mappings”, “interleave_ways”,
+and “size”, the REGION device also exports some convenience attributes.
+“nstype” indicates the integer type of namespace-device this region
+emits, “devtype” duplicates the DEVTYPE variable stored by udev at the
+‘add’ event, “modalias” duplicates the MODALIAS variable stored by udev
+at the ‘add’ event, and finally, the optional “spa_index” is provided in
+the case where the region is defined by a SPA.
+
+nd: region layout
+
+ |-- region0
+ | |-- available_size
+ | |-- devtype
+ | |-- driver -> ../../../../../bus/nd/drivers/nd_region
+ | |-- init_namespaces
+ | |-- mapping0
+ | |-- mapping1
+ | |-- mappings
+ | |-- modalias
+ | |-- namespace0.0
+ | |-- namespace_seed
+ | |-- nstype
+ | |-- set_cookie
+ | |-- size
+ | |-- spa_index
+ | |-- subsystem -> ../../../../../bus/nd
+ | `-- uevent
+ |-- region1
+ | |-- available_size
+ | |-- devtype
+ | |-- driver -> ../../../../../bus/nd/drivers/nd_region
+ | |-- init_namespaces
+ | |-- mapping0
+ | |-- mapping1
+ | |-- mapping2
+ | |-- mapping3
+ | |-- mappings
+ | |-- modalias
+ | |-- namespace1.0
+ | |-- namespace_seed
+ | |-- nstype
+ | |-- set_cookie
+ | |-- size
+ | |-- spa_index
+ | |-- subsystem -> ../../../../../bus/nd
+ | `-- uevent
+ |-- region2
+ [..]
+
+
+ndctl: region enumeration example
+
+Sample region retrieval routines based on NFIT-unique data like
+“spa_index” (interleave set id) for PMEM and “nfit_handle” (dimm id) for
+BLK.
+
+ static struct ndctl_region *get_pmem_region_by_spa_index(struct ndctl_bus *bus,
+ unsigned int spa_index)
+ {
+ struct ndctl_region *region;
+
+
+ ndctl_region_foreach(bus, region) {
+ if (ndctl_region_get_type(region) != ND_DEVICE_REGION_PMEM)
+ continue;
+ if (ndctl_region_get_spa_index(region) == spa_index)
+ return region;
+ }
+ return NULL;
+ }
+
+
+ static struct ndctl_region *get_blk_region_by_dimm_handle(struct ndctl_bus *bus,
+ unsigned int handle)
+ {
+ struct ndctl_region *region;
+
+
+ ndctl_region_foreach(bus, region) {
+ struct ndctl_mapping *map;
+
+
+ if (ndctl_region_get_type(region) != ND_DEVICE_REGION_BLOCK)
+ continue;
+ ndctl_mapping_foreach(region, map) {
+ struct ndctl_dimm *dimm = ndctl_mapping_get_dimm(map);
+
+
+ if (ndctl_dimm_get_handle(dimm) == handle)
+ return region;
+ }
+ }
+ return NULL;
+ }
+
+
+Why Not Encode the Region Type into the Region Name?
+
+At first glance it seems that, since the NFIT defines just PMEM and BLK
+interface types, we should simply name REGION devices with something derived
+from those type names. However, the ND subsystem explicitly keeps the
+REGION name generic and expects userspace to always consider the
+region-attributes for 4 reasons:
+
+1. There are already more than two REGION and “namespace” types. For
+ PMEM there are two subtypes. As mentioned previously we have PMEM where
+ the constituent DIMM devices are known and anonymous PMEM. For BLK
+ regions the NFIT specification already anticipates vendor specific
+ implementations. The exact distinction of what a region contains is in
+   the region-attributes, not the region-name or the region-devtype.
+
+2. A region with zero child-namespaces is a possible configuration. For
+ example, the NFIT allows for a DCR to be published without a
+ corresponding BDW. This equates to a DIMM that can only accept
+ control/configuration messages, but no i/o through a descendant block
+ device. Again, this “type” is advertised in the attributes (‘mappings’
+ == 0) and the name does not tell you much.
+
+3. What if a third major interface type arises in the future? Outside
+ of vendor specific implementations, it’s not difficult to envision a
+ third class of interface type beyond BLK and PMEM. With a generic name
+ for the REGION level of the device-hierarchy old userspace
+ implementations can still make sense of new kernel advertised
+ region-types. Userspace can always rely on the generic region
+   attributes like “mappings”, “size”, etc. and the expected child devices
+ named “namespace”. This generic format of the device-model hierarchy
+ allows the ND and NDCTL implementations to be more uniform and
+ future-proof.
+
+4. There are more robust mechanisms for determining the major type of a
+ region than a device name. See the next section, How Do I Determine the
+ Major Type of a Region?
+
+
+How Do I Determine the Major Type of a Region?
+
+Outside of the blanket recommendation of “use the ndctl library”, or
+simply looking at the kernel header to decode the “nstype” integer
+attribute, here are some other options.
+
+
+1. module alias lookup:
+ The whole point of region/namespace device type differentiation is to
+ decide which block-device driver will attach to a given ND namespace.
+   One can simply use the modalias to look up the resulting module. It’s
+ important to note that this method is robust in the presence of a
+ vendor-specific driver down the road. If a vendor-specific
+   implementation wants to supplant the standard nd_blk driver, it can do
+   so with minimal impact to the rest of ND.
+
+ In fact, a vendor may also want to have a vendor-specific region-driver
+ (outside of nd_region). For example, if a vendor defined its own LABEL
+ format it would need its own region driver to parse that LABEL and emit
+ the resulting namespaces. The output from module resolution is more
+ accurate than a region-name or region-devtype.
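As a sketch of this option, the following hypothetical helper decodes the "nd:t<N>" modalias format that appears in the udevadm output quoted in this document. The t2/t3 mappings are taken from that example output only; a real tool would resolve the alias against modules.alias (e.g. via modprobe) rather than hard-code a table.

```c
#include <stdio.h>

/*
 * Hypothetical decoder for the "nd:t<N>" modalias strings shown in the
 * udevadm examples in this document.  The t2 (nd_pmem) and t3 (nd_blk)
 * values match that sample output; any other value is reported as
 * "unknown", and anything not matching the "nd:t<N>" shape as "invalid".
 */
static const char *nd_modalias_to_driver(const char *modalias)
{
	unsigned int type;

	if (sscanf(modalias, "nd:t%u", &type) != 1)
		return "invalid";
	switch (type) {
	case 2: return "nd_pmem";
	case 3: return "nd_blk";
	default: return "unknown";
	}
}
```

In practice the lookup would go through the module alias database so that a vendor-specific driver claiming the same alias is found automatically.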
+
+
+2. udev:
+ The kernel “devtype” is registered in the udev database
+ # udevadm info --path=/devices/platform/nfit_test.0/ndbus0/region0
+ P: /devices/platform/nfit_test.0/ndbus0/region0
+ E: DEVPATH=/devices/platform/nfit_test.0/ndbus0/region0
+ E: DEVTYPE=nd_pmem
+ E: MODALIAS=nd:t2
+ E: SUBSYSTEM=nd
+
+
+ # udevadm info --path=/devices/platform/nfit_test.0/ndbus0/region4
+ P: /devices/platform/nfit_test.0/ndbus0/region4
+ E: DEVPATH=/devices/platform/nfit_test.0/ndbus0/region4
+ E: DEVTYPE=nd_blk
+ E: MODALIAS=nd:t3
+ E: SUBSYSTEM=nd
+
+
+ ...and is available as a region attribute, but keep in mind that the
+   “devtype” does not indicate sub-type variations; scripts should
+   really consult the other attributes.
+
+
+3. type specific attributes:
+ As it currently stands a BDW region will never have a “spa_index”
+ attribute. A DCR region with a “mappings” value of 0 is, as mentioned
+ above, a DIMM that does not allow I/O. A PMEM region with a “mappings”
+ value of zero is a simple SPA range.
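These rules can be condensed into a hypothetical classifier, where the inputs stand in for sysfs reads (whether a "spa_index" attribute exists, the "mappings" value, and whether the region is PMEM):

```c
/*
 * Hypothetical helper illustrating the attribute-based rules above.
 * The enum names are illustrative only; the kernel advertises these
 * distinctions via sysfs attributes, not via any such enum.
 */
enum region_kind {
	REGION_BDW,		/* no "spa_index" attribute */
	REGION_DIMM_NO_IO,	/* DCR region with "mappings" == 0 */
	REGION_SIMPLE_SPA,	/* PMEM region with "mappings" == 0 */
	REGION_OTHER,
};

static enum region_kind classify_region(int has_spa_index, int mappings,
		int is_pmem)
{
	if (!has_spa_index)
		return REGION_BDW;	/* a BDW region never has spa_index */
	if (mappings == 0)
		return is_pmem ? REGION_SIMPLE_SPA : REGION_DIMM_NO_IO;
	return REGION_OTHER;
}
```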
+
+
+ND/NDCTL: Namespace
+
+A REGION, after resolving DPA aliasing and LABEL-specified boundaries,
+surfaces one or more “namespace” devices. The arrival of a “namespace”
+device currently triggers either the nd_blk or nd_pmem driver to load
+and register a disk/block device.
+
+
+nd: namespace layout
+Here is a sample layout of the three major types of NAMESPACE, where
+namespace0.0 represents DIMM-info-backed PMEM (note that it has a ‘uuid’
+attribute), namespace2.0 represents a BLK namespace (note that it has a
+‘sector_size’ attribute), and namespace6.0 represents an anonymous
+PMEM namespace (note that it has no ‘uuid’ attribute because it does not
+support a LABEL).
+
+ /sys/devices/platform/nfit_test.0/ndbus0/region0/namespace0.0
+ |-- alt_name
+ |-- devtype
+ |-- dpa_extents
+ |-- modalias
+ |-- resource
+ |-- size
+ |-- subsystem -> ../../../../../../bus/nd
+ |-- type
+ |-- uevent
+ `-- uuid
+ /sys/devices/platform/nfit_test.0/ndbus0/region2/namespace2.0
+ |-- alt_name
+ |-- devtype
+ |-- dpa_extents
+ |-- modalias
+ |-- sector_size
+ |-- size
+ |-- subsystem -> ../../../../../../bus/nd
+ |-- type
+ |-- uevent
+ `-- uuid
+ /sys/devices/platform/nfit_test.1/ndbus1/region6/namespace6.0
+ |-- block
+ | `-- pmem0
+ |-- devtype
+ |-- driver -> ../../../../../../bus/nd/drivers/pmem
+ |-- modalias
+ |-- resource
+ |-- size
+ |-- subsystem -> ../../../../../../bus/nd
+ |-- type
+ `-- uevent
+
+
+ndctl: namespace enumeration example
+Namespaces are indexed relative to their parent region; see the example
+below. These indexes are mostly static from boot to boot, but the
+subsystem makes no guarantees in this regard. For a stable namespace
+identifier use its ‘uuid’ attribute.
+
+ static struct ndctl_namespace *get_namespace_by_id(struct ndctl_region *region,
+ unsigned int id)
+ {
+ struct ndctl_namespace *ndns;
+
+
+ ndctl_namespace_foreach(region, ndns)
+ if (ndctl_namespace_get_id(ndns) == id)
+ return ndns;
+
+
+ return NULL;
+ }
+
+
+ndctl: namespace creation example
+
+Idle namespaces are automatically created by the kernel if a given
+region has enough available capacity to create a new namespace.
+Namespace instantiation involves finding an idle namespace and
+configuring it. For the most part the setting of namespace attributes
+can occur in any order; the only constraint is that ‘uuid’ must be set
+before ‘size’. This enables the kernel to track DPA allocations
+internally with a static identifier.
+
+
+ static int configure_namespace(struct ndctl_region *region,
+ struct ndctl_namespace *ndns,
+ struct namespace_parameters *parameters)
+ {
+ char devname[50];
+
+
+            snprintf(devname, sizeof(devname), "namespace%d.%d",
+                            ndctl_region_get_id(region), parameters->id);
+
+
+            ndctl_namespace_set_alt_name(ndns, devname);
+            /* ‘uuid’ must be set prior to setting size! */
+            ndctl_namespace_set_uuid(ndns, parameters->uuid);
+            ndctl_namespace_set_size(ndns, parameters->size);
+            /* unlike pmem namespaces, blk namespaces have a sector size */
+            if (parameters->lbasize)
+                    ndctl_namespace_set_sector_size(ndns, parameters->lbasize);
+            return ndctl_namespace_enable(ndns);
+    }
+
+Why the Term “namespace”?
+1. Why not “volume”, for instance? “volume” ran the risk of ND being
+   mistaken for a volume manager like device-mapper.
+
+
+2. The term originated to describe the sub-devices that can be created
+   within an NVMe controller (see the NVMe specification:
+   http://www.nvmexpress.org/specifications/), and NFIT namespaces are
+   meant to parallel the capabilities and configurability of
+   NVMe namespaces.
+
+
+ND/NDCTL: Block Translation Table “btt”
+A BTT (design document: http://pmem.io/2014/09/23/btt.html) is a stacked
+block device driver that fronts either the whole block device or a
+partition of a block device emitted by either a PMEM or BLK NAMESPACE.
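The atomic-sector-update guarantee can be illustrated with a toy sketch of the BTT's indirection idea: keep one spare internal block, write new data to the spare, then switch the LBA-to-block map entry in a single store. This is a drastically simplified model (no arenas, no flog, no flushing), not the on-media format from the design document.

```c
#include <stdint.h>
#include <string.h>

/*
 * Toy model of BTT-style atomic sector update.  All sizes and names are
 * illustrative; the real BTT maintains per-arena maps, a free list, and
 * a log on persistent media.
 */
#define NSECTORS 4
#define NBLOCKS  5	/* one spare block beyond the visible LBAs */
#define SECTOR   8

static uint32_t map[NSECTORS] = { 0, 1, 2, 3 };	/* LBA -> internal block */
static uint32_t free_block = 4;			/* the current spare */
static uint8_t media[NBLOCKS][SECTOR];

static void btt_write(uint32_t lba, const uint8_t *data)
{
	uint32_t new_block = free_block;

	memcpy(media[new_block], data, SECTOR);	/* data lands first */
	free_block = map[lba];			/* old block becomes spare */
	map[lba] = new_block;			/* single-store "commit" */
}

static const uint8_t *btt_read(uint32_t lba)
{
	return media[map[lba]];
}
```

A torn write never surfaces: until the final map update, readers still see the old block in its entirety.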
+
+
+nd: btt layout
+Every bus will start out with at least one BTT device, the “seed”
+device. To activate it set the “backing_dev”, “uuid”, and “sector_size”
+attributes and then bind the device to the nd_btt driver.
+
+ /sys/devices/platform/nfit_test.1/ndbus0/btt0/
+ ├── backing_dev
+ ├── delete
+ ├── devtype
+ ├── modalias
+ ├── sector_size
+ ├── subsystem -> ../../../../../bus/nd
+ ├── uevent
+ └── uuid
+
+ndctl: btt creation example
+
+Similar to namespaces, an idle BTT device is automatically created per
+bus. Each time this “seed” btt device is configured and enabled a new
+seed is created. Creating a BTT configuration involves two steps:
+finding an idle BTT, and assigning it to front a PMEM or BLK namespace.
+
+
+ static struct ndctl_btt *get_idle_btt(struct ndctl_bus *bus)
+ {
+ struct ndctl_btt *btt;
+
+
+ ndctl_btt_foreach(bus, btt)
+ if (!ndctl_btt_is_enabled(btt) && !ndctl_btt_is_configured(btt))
+ return btt;
+
+
+ return NULL;
+ }
+
+    static int configure_btt(struct ndctl_bus *bus, struct btt_parameters *parameters)
+    {
+            struct ndctl_btt *btt = get_idle_btt(bus);
+            char bdevpath[50];
+
+            if (!btt)
+                    return -ENXIO;
+            snprintf(bdevpath, sizeof(bdevpath), "/dev/%s",
+                    ndctl_namespace_get_block_device(parameters->ndns));
+            ndctl_btt_set_uuid(btt, parameters->uuid);
+            ndctl_btt_set_sector_size(btt, parameters->sector_size);
+            ndctl_btt_set_backing_dev(btt, bdevpath);
+            return ndctl_btt_enable(btt);
+    }
+
+
+Once instantiated, an “nd_btt” link will be created under the
+“backing_dev” (pmem0) block device:
+
+ /sys/block/pmem0/
+ ├── alignment_offset
+ ├── bdi -> ../../../../../../../virtual/bdi/259:0
+ ├── capability
+ ├── dev
+ ├── device -> ../../../namespace0.0
+ ├── discard_alignment
+ ├── ext_range
+ ├── holders
+ ├── inflight
+ └── nd_btt -> ../../../../btt0
+
+
+...and a new inactive seed device will appear on the bus.
+
+
+Once a “backing_dev” is disabled, its associated BTT will be
+automatically deleted. This deletion is only at the device-model level.
+To destroy a BTT permanently, its “info block” needs to be destroyed.
+
+
+Summary NDCTL Diagram
+---------------------
+
+For the given example above, here is the view of the objects as seen by
+the NDCTL api:
+ +---+
+ |CTX| +---------+ +--------------+ +---------------+
+ +-+-+ +-> REGION0 +---> NAMESPACE0.0 +--> PMEM8 "pm0.0" |
+ | | +---------+ +--------------+ +---------------+
++-------+ | | +---------+ +--------------+ +---------------+
+| DIMM0 <-+ | +-> REGION1 +---> NAMESPACE1.0 +--> PMEM6 "pm1.0" |
++-------+ | | | +---------+ +--------------+ +---------------+
+| DIMM1 <-+ +-v--+ | +---------+ +--------------+ +---------------+
++-------+ +-+BUS0+---> REGION2 +-+-> NAMESPACE2.0 +--> ND6 "blk2.0" |
+| DIMM2 <-+ +----+ | +---------+ | +--------------+ +----------------------+
++-------+ | | +-> NAMESPACE2.1 +--> ND5 "blk2.1" | BTT2 |
+| DIMM3 <-+ | +--------------+ +----------------------+
++-------+ | +---------+ +--------------+ +---------------+
+ +-> REGION3 +-+-> NAMESPACE3.0 +--> ND4 "blk3.0" |
+ | +---------+ | +--------------+ +----------------------+
+ | +-> NAMESPACE3.1 +--> ND3 "blk3.1" | BTT1 |
+ | +--------------+ +----------------------+
+ | +---------+ +--------------+ +---------------+
+ +-> REGION4 +---> NAMESPACE4.0 +--> ND2 "blk4.0" |
+ | +---------+ +--------------+ +---------------+
+ | +---------+ +--------------+ +----------------------+
+ +-> REGION5 +---> NAMESPACE5.0 +--> ND1 "blk5.0" | BTT0 |
+ +---------+ +--------------+ +---------------+------+
diff --git a/MAINTAINERS b/MAINTAINERS
index 4517613dc638..6bc0af450544 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6666,6 +6666,34 @@ S: Maintained
F: Documentation/hwmon/nct6775
F: drivers/hwmon/nct6775.c
+ND (NFIT-DEFINED/NVDIMM SUBSYSTEM)
+M: Dan Williams <[email protected]>
+L: [email protected]
+Q: https://patchwork.kernel.org/project/linux-nvdimm/list/
+S: Supported
+F: drivers/block/nd/*
+F: include/linux/nd.h
+F: include/uapi/linux/ndctl.h
+
+ND BLOCK APERTURE DRIVER
+M: Ross Zwisler <[email protected]>
+L: [email protected]
+S: Supported
+F: drivers/block/nd/blk.c
+F: drivers/block/nd/region_devs.c
+
+ND BLOCK TRANSLATION TABLE
+M: Vishal Verma <[email protected]>
+L: [email protected]
+S: Supported
+F: drivers/block/nd/btt*
+
+ND PERSISTENT MEMORY DRIVER
+M: Ross Zwisler <[email protected]>
+L: [email protected]
+S: Supported
+F: drivers/block/nd/pmem.c
+
NETEFFECT IWARP RNIC DRIVER (IW_NES)
M: Faisal Latif <[email protected]>
L: [email protected]
@@ -8071,12 +8099,6 @@ S: Maintained
F: Documentation/blockdev/ramdisk.txt
F: drivers/block/brd.c
-PERSISTENT MEMORY DRIVER
-M: Ross Zwisler <[email protected]>
-L: [email protected]
-S: Supported
-F: drivers/block/pmem.c
-
RANDOM NUMBER DRIVER
M: "Theodore Ts'o" <[email protected]>
S: Maintained
1/ Autodetect an NFIT table for the ACPI namespace device with _HID of
"ACPI0012"
2/ Skeleton implementation to register an NFIT bus.
The NFIT provided by ACPI is the primary method by which platforms will
discover NVDIMM resources. However, the intent of the
nfit_bus_descriptor abstraction is to contain "provider" specific
details, leaving the nd core to be NFIT-provider agnostic. This
flexibility is exploited in later patches to implement special purpose
providers of test and custom-defined NFITs.
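The provider abstraction can be sketched with a reduced descriptor. The names below are hypothetical stand-ins; the real struct nfit_bus_descriptor also carries nfit_base, nfit_size, dsm_mask, and an add_dimm callback.

```c
/*
 * Reduced, hypothetical stand-in for struct nfit_bus_descriptor: the nd
 * core only ever sees the descriptor, never the provider (ACPI, test,
 * or custom) behind it.
 */
struct bus_desc {
	const char *provider_name;
	int (*ctl)(struct bus_desc *desc, unsigned int cmd);
};

/* Provider-side implementation; returns -1 here, standing in for the
 * -ENOTTY that nd_acpi_ctl() returns while no commands are wired up. */
static int acpi_ctl(struct bus_desc *desc, unsigned int cmd)
{
	(void) desc;
	(void) cmd;
	return -1;
}

/* The provider-agnostic core just forwards through the descriptor. */
static int core_ctl(struct bus_desc *desc, unsigned int cmd)
{
	return desc->ctl(desc, cmd);
}
```

A test or vendor provider plugs in simply by registering a descriptor with its own callbacks.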
Cc: <[email protected]>
Cc: Robert Moore <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/Kconfig | 2
drivers/block/Makefile | 1
drivers/block/nd/Kconfig | 44 ++++++++++
drivers/block/nd/Makefile | 6 +
drivers/block/nd/acpi.c | 112 +++++++++++++++++++++++++
drivers/block/nd/core.c | 48 +++++++++++
drivers/block/nd/nfit.h | 201 +++++++++++++++++++++++++++++++++++++++++++++
7 files changed, 414 insertions(+)
create mode 100644 drivers/block/nd/Kconfig
create mode 100644 drivers/block/nd/Makefile
create mode 100644 drivers/block/nd/acpi.c
create mode 100644 drivers/block/nd/core.c
create mode 100644 drivers/block/nd/nfit.h
diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index eb1fed5bd516..dfe40e5ca9bd 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -321,6 +321,8 @@ config BLK_DEV_NVME
To compile this driver as a module, choose M here: the
module will be called nvme.
+source "drivers/block/nd/Kconfig"
+
config BLK_DEV_SKD
tristate "STEC S1120 Block Driver"
depends on PCI
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 9cc6c18a1c7e..18b27bb9cd2d 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -24,6 +24,7 @@ obj-$(CONFIG_CDROM_PKTCDVD) += pktcdvd.o
obj-$(CONFIG_MG_DISK) += mg_disk.o
obj-$(CONFIG_SUNVDC) += sunvdc.o
obj-$(CONFIG_BLK_DEV_NVME) += nvme.o
+obj-$(CONFIG_NFIT_DEVICES) += nd/
obj-$(CONFIG_BLK_DEV_SKD) += skd.o
obj-$(CONFIG_BLK_DEV_OSD) += osdblk.o
diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
new file mode 100644
index 000000000000..5fa74f124b3e
--- /dev/null
+++ b/drivers/block/nd/Kconfig
@@ -0,0 +1,44 @@
+config ND_ARCH_HAS_IOREMAP_CACHE
+ depends on (X86 || IA64 || ARM || ARM64 || SH || XTENSA)
+ def_bool y
+
+menuconfig NFIT_DEVICES
+ bool "NVDIMM (NFIT) Support"
+ depends on ND_ARCH_HAS_IOREMAP_CACHE
+ depends on PHYS_ADDR_T_64BIT
+ help
+ Support for non-volatile memory devices defined by the NVDIMM
+	  advertise PM (persistent memory) namespaces (/dev/pmemX) and
+	  BLOCK (sliding block data window) namespaces (/dev/ndX).  A PM
+	  namespace refers to a system-physical-address-range that may
+ NFIT, via ACPI, or other means, a "nd_bus" is registered to
+ advertise PM (persistent memory) namespaces (/dev/pmemX) and
+ BLOCK (sliding block data window) namespaces (/dev/ndX). A PM
+ namespace refers to a system-physical-address-range than may
+ span multiple DIMMs and support DAX (see CONFIG_DAX). A BLOCK
+	  namespace refers to an NVDIMM control region which exposes a
+ register-based windowed access mode to non-volatile memory.
+ See the NVDIMM Firmware Interface Table specification for more
+ details.
+
+if NFIT_DEVICES
+
+config ND_CORE
+ tristate "Core: Generic 'nd' Device Model"
+ help
+ Platform agnostic device model for an NFIT-defined bus.
+	  Publishes resources for an NFIT-persistent-memory driver and/or
+	  NFIT-block-data-window driver to attach.  Exposes a device
+	  topology under an "ndX" bus device and a "/dev/ndctl<N>"
+ dimm-ioctl message passing interface per registered NFIT
+ instance. A userspace library "ndctl" provides an API to
+ enumerate/manage this subsystem.
+
+config NFIT_ACPI
+ tristate "NFIT ACPI: Discover ACPI-Namespace NFIT Devices"
+ select ND_CORE
+ depends on ACPI
+ help
+ Infrastructure to probe the ACPI namespace for NVDIMMs and
+ register the platform-global NFIT blob with the core. Also
+ enables the core to craft ACPI._DSM messages for platform/dimm
+ configuration.
+endif
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
new file mode 100644
index 000000000000..22701ab7dcae
--- /dev/null
+++ b/drivers/block/nd/Makefile
@@ -0,0 +1,6 @@
+obj-$(CONFIG_ND_CORE) += nd.o
+obj-$(CONFIG_NFIT_ACPI) += nd_acpi.o
+
+nd_acpi-y := acpi.o
+
+nd-y := core.o
diff --git a/drivers/block/nd/acpi.c b/drivers/block/nd/acpi.c
new file mode 100644
index 000000000000..48db723d7a90
--- /dev/null
+++ b/drivers/block/nd/acpi.c
@@ -0,0 +1,112 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/list.h>
+#include <linux/acpi.h>
+#include <linux/mutex.h>
+#include <linux/module.h>
+#include "nfit.h"
+
+enum {
+ NFIT_ACPI_NOTIFY_TABLE = 0x80,
+};
+
+struct acpi_nfit {
+ struct nfit_bus_descriptor nfit_desc;
+ struct acpi_device *dev;
+ struct nd_bus *nd_bus;
+};
+
+static int nd_acpi_ctl(struct nfit_bus_descriptor *nfit_desc,
+ struct nd_dimm *nd_dimm, unsigned int cmd, void *buf,
+ unsigned int buf_len)
+{
+ return -ENOTTY;
+}
+
+static int nd_acpi_add(struct acpi_device *dev)
+{
+ struct nfit_bus_descriptor *nfit_desc;
+ struct acpi_table_header *tbl;
+ acpi_status status = AE_OK;
+ struct acpi_nfit *nfit;
+ acpi_size sz;
+
+ status = acpi_get_table_with_size("NFIT", 0, &tbl, &sz);
+ if (ACPI_FAILURE(status)) {
+ dev_err(&dev->dev, "failed to find NFIT\n");
+ return -ENXIO;
+ }
+
+ nfit = devm_kzalloc(&dev->dev, sizeof(*nfit), GFP_KERNEL);
+ if (!nfit)
+ return -ENOMEM;
+ nfit->dev = dev;
+ nfit_desc = &nfit->nfit_desc;
+ nfit_desc->nfit_base = (void __iomem *) tbl;
+ nfit_desc->nfit_size = sz;
+ nfit_desc->provider_name = "ACPI.NFIT";
+ nfit_desc->nfit_ctl = nd_acpi_ctl;
+
+ nfit->nd_bus = nfit_bus_register(&dev->dev, nfit_desc);
+ if (!nfit->nd_bus)
+ return -ENXIO;
+
+ dev_set_drvdata(&dev->dev, nfit);
+ return 0;
+}
+
+static int nd_acpi_remove(struct acpi_device *dev)
+{
+ struct acpi_nfit *nfit = dev_get_drvdata(&dev->dev);
+
+ nfit_bus_unregister(nfit->nd_bus);
+ return 0;
+}
+
+static void nd_acpi_notify(struct acpi_device *dev, u32 event)
+{
+ /* TODO: handle ACPI_NOTIFY_BUS_CHECK notification */
+ dev_dbg(&dev->dev, "%s: event: %d\n", __func__, event);
+}
+
+static const struct acpi_device_id nd_acpi_ids[] = {
+ { "ACPI0012", 0 },
+ { "", 0 },
+};
+MODULE_DEVICE_TABLE(acpi, nd_acpi_ids);
+
+static struct acpi_driver nd_acpi_driver = {
+ .name = KBUILD_MODNAME,
+ .ids = nd_acpi_ids,
+ .flags = ACPI_DRIVER_ALL_NOTIFY_EVENTS,
+ .ops = {
+ .add = nd_acpi_add,
+ .remove = nd_acpi_remove,
+ .notify = nd_acpi_notify
+ },
+};
+
+static __init int nd_acpi_init(void)
+{
+ return acpi_bus_register_driver(&nd_acpi_driver);
+}
+
+static __exit void nd_acpi_exit(void)
+{
+ acpi_bus_unregister_driver(&nd_acpi_driver);
+}
+
+module_init(nd_acpi_init);
+module_exit(nd_acpi_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Intel Corporation");
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
new file mode 100644
index 000000000000..8df8d315b726
--- /dev/null
+++ b/drivers/block/nd/core.c
@@ -0,0 +1,48 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/export.h>
+#include <linux/module.h>
+#include "nfit.h"
+
+struct nd_bus *nfit_bus_register(struct device *parent,
+ struct nfit_bus_descriptor *nfit_desc)
+{
+ return NULL;
+}
+EXPORT_SYMBOL(nfit_bus_register);
+
+void nfit_bus_unregister(struct nd_bus *nd_bus)
+{
+}
+EXPORT_SYMBOL(nfit_bus_unregister);
+
+static __init int nd_core_init(void)
+{
+ BUILD_BUG_ON(sizeof(struct nfit) != 40);
+ BUILD_BUG_ON(sizeof(struct nfit_spa) != 56);
+ BUILD_BUG_ON(sizeof(struct nfit_mem) != 48);
+ BUILD_BUG_ON(sizeof(struct nfit_idt) != 16);
+ BUILD_BUG_ON(sizeof(struct nfit_smbios) != 8);
+ BUILD_BUG_ON(sizeof(struct nfit_dcr) != 80);
+ BUILD_BUG_ON(sizeof(struct nfit_bdw) != 40);
+
+ return 0;
+}
+
+static __exit void nd_core_exit(void)
+{
+}
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Intel Corporation");
+module_init(nd_core_init);
+module_exit(nd_core_exit);
diff --git a/drivers/block/nd/nfit.h b/drivers/block/nd/nfit.h
new file mode 100644
index 000000000000..56a3b2dad124
--- /dev/null
+++ b/drivers/block/nd/nfit.h
@@ -0,0 +1,201 @@
+/*
+ * NVDIMM Firmware Interface Table - NFIT
+ *
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __NFIT_H__
+#define __NFIT_H__
+
+#include <linux/types.h>
+
+enum {
+ NFIT_TABLE_SPA = 0,
+ NFIT_TABLE_MEM = 1,
+ NFIT_TABLE_IDT = 2,
+ NFIT_TABLE_SMBIOS = 3,
+ NFIT_TABLE_DCR = 4,
+ NFIT_TABLE_BDW = 5,
+ NFIT_TABLE_FLUSH = 6,
+ NFIT_SPA_VOLATILE = 0,
+ NFIT_SPA_PM = 1,
+ NFIT_SPA_DCR = 2,
+ NFIT_SPA_BDW = 3,
+ NFIT_SPA_VDISK = 4,
+ NFIT_SPA_VCD = 5,
+ NFIT_SPA_PDISK = 6,
+ NFIT_SPA_PCD = 7,
+ NFIT_SPAF_DCR_HOT_ADD = 1 << 0,
+ NFIT_SPAF_PDVALID = 1 << 1,
+ NFIT_MEMF_SAVE_FAIL = 1 << 0,
+ NFIT_MEMF_RESTORE_FAIL = 1 << 1,
+ NFIT_MEMF_FLUSH_FAIL = 1 << 2,
+ NFIT_MEMF_UNARMED = 1 << 3,
+ NFIT_MEMF_NOTIFY_SMART = 1 << 4,
+ NFIT_MEMF_SMART_READY = 1 << 5,
+ NFIT_DCRF_BUFFERED = 1 << 0,
+};
+
+/**
+ * struct nfit - Nvdimm Firmware Interface Table
+ * @signature: "NFIT"
+ * @length: sum of size of this table plus all appended subtables
+ */
+struct nfit {
+ __u8 signature[4];
+ __le32 length;
+ __u8 revision;
+ __u8 checksum;
+ __u8 oemid[6];
+ __le64 oem_tbl_id;
+ __le32 oem_revision;
+ __le32 creator_id;
+ __le32 creator_revision;
+ __le32 reserved;
+} __packed;
+
+/**
+ * struct nfit_spa - System Physical Address Range Descriptor Table
+ */
+struct nfit_spa {
+ __le16 type;
+ __le16 length;
+ __le16 spa_index;
+ __le16 flags;
+ __le32 reserved;
+ __le32 proximity_domain;
+ __u8 type_uuid[16];
+ __le64 spa_base;
+ __le64 spa_length;
+ __le64 mem_attr;
+} __packed;
+
+/**
+ * struct nfit_mem - Memory Device to SPA Mapping Table
+ */
+struct nfit_mem {
+ __le16 type;
+ __le16 length;
+ __le32 nfit_handle;
+ __le16 phys_id;
+ __le16 region_id;
+ __le16 spa_index;
+ __le16 dcr_index;
+ __le64 region_len;
+ __le64 region_spa_offset;
+ __le64 region_dpa;
+ __le16 idt_index;
+ __le16 interleave_ways;
+ __le16 flags;
+ __le16 reserved;
+} __packed;
+
+/**
+ * struct nfit_idt - Interleave Description Table
+ */
+struct nfit_idt {
+ __le16 type;
+ __le16 length;
+ __le16 idt_index;
+ __le16 reserved;
+ __le32 num_lines;
+ __le32 line_size;
+ __le32 line_offset[0];
+} __packed;
+
+/**
+ * struct nfit_smbios - SMBIOS Management Information Table
+ */
+struct nfit_smbios {
+ __le16 type;
+ __le16 length;
+ __le32 reserved;
+ __u8 data[0];
+} __packed;
+
+/**
+ * struct nfit_dcr - NVDIMM Control Region Table
+ * @fic: Format Interface Code
+ * @cmd_offset: command registers relative to block control window
+ * @status_offset: status registers relative to block control window
+ */
+struct nfit_dcr {
+ __le16 type;
+ __le16 length;
+ __le16 dcr_index;
+ __le16 vendor_id;
+ __le16 device_id;
+ __le16 revision_id;
+ __le16 sub_vendor_id;
+ __le16 sub_device_id;
+ __le16 sub_revision_id;
+ __u8 reserved[6];
+ __le32 serial_number;
+ __le16 fic;
+ __le16 num_bcw;
+ __le64 bcw_size;
+ __le64 cmd_offset;
+ __le64 cmd_size;
+ __le64 status_offset;
+ __le64 status_size;
+ __le16 flags;
+ __u8 reserved2[6];
+} __packed;
+
+/**
+ * struct nfit_bdw - NVDIMM Block Data Window Region Table
+ */
+struct nfit_bdw {
+ __le16 type;
+ __le16 length;
+ __le16 dcr_index;
+ __le16 num_bdw;
+ __le64 bdw_offset;
+ __le64 bdw_size;
+ __le64 blk_capacity;
+ __le64 blk_offset;
+} __packed;
+
+/**
+ * struct nfit_flush - Flush Hint Address Structure
+ */
+struct nfit_flush {
+ __le16 type;
+ __le16 length;
+ __le32 nfit_handle;
+ __le16 num_hints;
+ __u8 reserved[6];
+ __le64 hint_addr[0];
+} __packed;
+
+struct nd_dimm;
+struct nfit_bus_descriptor;
+typedef int (*nfit_ctl_fn)(struct nfit_bus_descriptor *nfit_desc,
+ struct nd_dimm *nd_dimm, unsigned int cmd, void *buf,
+ unsigned int buf_len);
+
+typedef int (*nfit_add_dimm_fn)(struct nfit_bus_descriptor *nfit_desc,
+ struct nd_dimm *nd_dimm);
+
+struct nfit_bus_descriptor {
+ unsigned long dsm_mask;
+ void __iomem *nfit_base;
+ size_t nfit_size;
+ char *provider_name;
+ nfit_ctl_fn nfit_ctl;
+ nfit_add_dimm_fn add_dimm;
+};
+
+struct nd_bus;
+struct nd_bus *nfit_bus_register(struct device *parent,
+ struct nfit_bus_descriptor *nfit_desc);
+void nfit_bus_unregister(struct nd_bus *nd_bus);
+#endif /* __NFIT_H__ */
Basic allocation and parsing of an nfit table. This is infrastructure
for walking the list of "System Physical Address (SPA) Range Tables",
and "Memory device to SPA" to create "region" devices representing
persistent-memory (PMEM) or a dimm block data window set (BLK).
Note, BLK windows may be interleaved. The nd_mem object tracks all the
tables needed for carrying out BLK I/O operations. For the interleaved
case there may be multiple nd_mem instances per dimm-control-region
(DCR).
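The subtable walk mirrored by add_table() can be sketched in plain userspace C. This assumes a little-endian host and a simplified 4-byte header (the kernel code instead uses readw() accessors on the ioremapped table); the function name and layout are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

/* Each NFIT subtable starts with a 16-bit type and 16-bit length, and
 * subtables are packed back to back after the NFIT header. */
struct nfit_table_header {
	uint16_t type;
	uint16_t length;
};

/*
 * Count subtables of a given type in a packed buffer.  Returns -1 if a
 * table claims a length shorter than its header or running past the
 * buffer, which is how a walk like this detects a malformed NFIT.
 */
static int count_tables(const uint8_t *buf, size_t size, uint16_t type)
{
	size_t off = 0;
	int count = 0;

	while (off + sizeof(struct nfit_table_header) <= size) {
		const struct nfit_table_header *hdr =
			(const struct nfit_table_header *) (buf + off);

		if (hdr->length < sizeof(*hdr) || off + hdr->length > size)
			return -1;
		if (hdr->type == type)
			count++;
		off += hdr->length;	/* advance by the table's own length */
	}
	return count;
}
```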
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/core.c | 438 +++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/nd-private.h | 61 ++++++
drivers/block/nd/nfit.h | 25 ++
3 files changed, 523 insertions(+), 1 deletion(-)
create mode 100644 drivers/block/nd/nd-private.h
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 8df8d315b726..d126799e7ff7 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -10,19 +10,455 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
+#include <linux/list_sort.h>
#include <linux/export.h>
#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/uuid.h>
+#include <linux/io.h>
+#include "nd-private.h"
#include "nfit.h"
-struct nd_bus *nfit_bus_register(struct device *parent,
+static DEFINE_IDA(nd_ida);
+
+static bool warn_checksum;
+module_param(warn_checksum, bool, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(warn_checksum, "Turn checksum errors into warnings");
+
+static void nd_bus_release(struct device *dev)
+{
+ struct nd_bus *nd_bus = container_of(dev, struct nd_bus, dev);
+ struct nd_memdev *nd_memdev, *_memdev;
+ struct nd_spa *nd_spa, *_spa;
+ struct nd_mem *nd_mem, *_mem;
+ struct nd_dcr *nd_dcr, *_dcr;
+ struct nd_bdw *nd_bdw, *_bdw;
+
+ list_for_each_entry_safe(nd_spa, _spa, &nd_bus->spas, list) {
+ list_del_init(&nd_spa->list);
+ kfree(nd_spa);
+ }
+ list_for_each_entry_safe(nd_dcr, _dcr, &nd_bus->dcrs, list) {
+ list_del_init(&nd_dcr->list);
+ kfree(nd_dcr);
+ }
+ list_for_each_entry_safe(nd_bdw, _bdw, &nd_bus->bdws, list) {
+ list_del_init(&nd_bdw->list);
+ kfree(nd_bdw);
+ }
+ list_for_each_entry_safe(nd_memdev, _memdev, &nd_bus->memdevs, list) {
+ list_del_init(&nd_memdev->list);
+ kfree(nd_memdev);
+ }
+ list_for_each_entry_safe(nd_mem, _mem, &nd_bus->dimms, list) {
+ list_del_init(&nd_mem->list);
+ kfree(nd_mem);
+ }
+
+ ida_simple_remove(&nd_ida, nd_bus->id);
+ kfree(nd_bus);
+}
+
+struct nd_bus *to_nd_bus(struct device *dev)
+{
+ struct nd_bus *nd_bus = container_of(dev, struct nd_bus, dev);
+
+ WARN_ON(nd_bus->dev.release != nd_bus_release);
+ return nd_bus;
+}
+
+static void *nd_bus_new(struct device *parent,
struct nfit_bus_descriptor *nfit_desc)
{
+ struct nd_bus *nd_bus = kzalloc(sizeof(*nd_bus), GFP_KERNEL);
+ int rc;
+
+ if (!nd_bus)
+ return NULL;
+ INIT_LIST_HEAD(&nd_bus->spas);
+ INIT_LIST_HEAD(&nd_bus->dcrs);
+ INIT_LIST_HEAD(&nd_bus->bdws);
+ INIT_LIST_HEAD(&nd_bus->memdevs);
+ INIT_LIST_HEAD(&nd_bus->dimms);
+ nd_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL);
+ if (nd_bus->id < 0) {
+ kfree(nd_bus);
+ return NULL;
+ }
+ nd_bus->nfit_desc = nfit_desc;
+ nd_bus->dev.parent = parent;
+ nd_bus->dev.release = nd_bus_release;
+ dev_set_name(&nd_bus->dev, "ndbus%d", nd_bus->id);
+ rc = device_register(&nd_bus->dev);
+ if (rc) {
+ dev_dbg(&nd_bus->dev, "device registration failed: %d\n", rc);
+ put_device(&nd_bus->dev);
+ return NULL;
+ }
+ return nd_bus;
+}
+
+struct nfit_table_header {
+ __le16 type;
+ __le16 length;
+};
+
+static const char *spa_type_name(u16 type)
+{
+ switch (type) {
+ case NFIT_SPA_VOLATILE: return "volatile";
+ case NFIT_SPA_PM: return "pmem";
+ case NFIT_SPA_DCR: return "dimm-control-region";
+ case NFIT_SPA_BDW: return "block-data-window";
+ default: return "unknown";
+ }
+}
+
+static int nfit_spa_type(struct nfit_spa __iomem *nfit_spa)
+{
+ __u8 uuid[16];
+
+ memcpy_fromio(uuid, &nfit_spa->type_uuid, sizeof(uuid));
+
+ if (memcmp(&nfit_spa_uuid_volatile, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_VOLATILE;
+
+ if (memcmp(&nfit_spa_uuid_pm, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_PM;
+
+ if (memcmp(&nfit_spa_uuid_dcr, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_DCR;
+
+ if (memcmp(&nfit_spa_uuid_bdw, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_BDW;
+
+ if (memcmp(&nfit_spa_uuid_vdisk, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_VDISK;
+
+ if (memcmp(&nfit_spa_uuid_vcd, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_VCD;
+
+ if (memcmp(&nfit_spa_uuid_pdisk, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_PDISK;
+
+ if (memcmp(&nfit_spa_uuid_pcd, uuid, sizeof(uuid)) == 0)
+ return NFIT_SPA_PCD;
+
+ return -1;
+}
+
+static void __iomem *add_table(struct nd_bus *nd_bus, void __iomem *table,
+ const void __iomem *end)
+{
+ struct nfit_table_header __iomem *hdr;
+ void *ret = NULL;
+
+ if (table >= end)
+ goto err;
+
+ ret = ERR_PTR(-ENOMEM);
+ hdr = (struct nfit_table_header __iomem *) table;
+ switch (readw(&hdr->type)) {
+ case NFIT_TABLE_SPA: {
+ struct nd_spa *nd_spa = kzalloc(sizeof(*nd_spa), GFP_KERNEL);
+ struct nfit_spa __iomem *nfit_spa = table;
+
+ if (!nd_spa)
+ goto err;
+ INIT_LIST_HEAD(&nd_spa->list);
+ nd_spa->nfit_spa = nfit_spa;
+ list_add_tail(&nd_spa->list, &nd_bus->spas);
+ dev_dbg(&nd_bus->dev, "%s: spa index: %d type: %s\n", __func__,
+ readw(&nfit_spa->spa_index),
+ spa_type_name(nfit_spa_type(nfit_spa)));
+ break;
+ }
+ case NFIT_TABLE_MEM: {
+ struct nd_memdev *nd_memdev = kzalloc(sizeof(*nd_memdev),
+ GFP_KERNEL);
+ struct nfit_mem __iomem *nfit_mem = table;
+
+ if (!nd_memdev)
+ goto err;
+ INIT_LIST_HEAD(&nd_memdev->list);
+ nd_memdev->nfit_mem = nfit_mem;
+ list_add_tail(&nd_memdev->list, &nd_bus->memdevs);
+ dev_dbg(&nd_bus->dev, "%s: memdev handle: %#x spa: %d dcr: %d\n",
+ __func__, readl(&nfit_mem->nfit_handle),
+ readw(&nfit_mem->spa_index),
+ readw(&nfit_mem->dcr_index));
+ break;
+ }
+ case NFIT_TABLE_DCR: {
+ struct nd_dcr *nd_dcr = kzalloc(sizeof(*nd_dcr), GFP_KERNEL);
+ struct nfit_dcr __iomem *nfit_dcr = table;
+
+ if (!nd_dcr)
+ goto err;
+ INIT_LIST_HEAD(&nd_dcr->list);
+ nd_dcr->nfit_dcr = nfit_dcr;
+ list_add_tail(&nd_dcr->list, &nd_bus->dcrs);
+ dev_dbg(&nd_bus->dev, "%s: dcr index: %d num_bcw: %d\n",
+ __func__, readw(&nfit_dcr->dcr_index),
+ readw(&nfit_dcr->num_bcw));
+ break;
+ }
+ case NFIT_TABLE_BDW: {
+ struct nd_bdw *nd_bdw = kzalloc(sizeof(*nd_bdw), GFP_KERNEL);
+ struct nfit_bdw __iomem *nfit_bdw = table;
+
+ if (!nd_bdw)
+ goto err;
+ INIT_LIST_HEAD(&nd_bdw->list);
+ nd_bdw->nfit_bdw = nfit_bdw;
+ list_add_tail(&nd_bdw->list, &nd_bus->bdws);
+ dev_dbg(&nd_bus->dev, "%s: bdw dcr: %d num_bdw: %d\n", __func__,
+ readw(&nfit_bdw->dcr_index),
+ readw(&nfit_bdw->num_bdw));
+ break;
+ }
+ /* TODO */
+ case NFIT_TABLE_IDT:
+ dev_dbg(&nd_bus->dev, "%s: idt\n", __func__);
+ break;
+ case NFIT_TABLE_FLUSH:
+ dev_dbg(&nd_bus->dev, "%s: flush\n", __func__);
+ break;
+ case NFIT_TABLE_SMBIOS:
+ dev_dbg(&nd_bus->dev, "%s: smbios\n", __func__);
+ break;
+ default:
+ dev_err(&nd_bus->dev, "unknown table '%d' parsing nfit\n",
+ readw(&hdr->type));
+ ret = ERR_PTR(-EINVAL);
+ goto err;
+ }
+
+ return table + readw(&hdr->length);
+ err:
+ return (void __iomem *) ret;
+}
+
+void nd_mem_find_spa_bdw(struct nd_bus *nd_bus, struct nd_mem *nd_mem)
+{
+ u32 nfit_handle = readl(&nd_mem->nfit_mem_dcr->nfit_handle);
+ u16 dcr_index = readw(&nd_mem->nfit_dcr->dcr_index);
+ struct nd_spa *nd_spa;
+
+ list_for_each_entry(nd_spa, &nd_bus->spas, list) {
+ u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+ int type = nfit_spa_type(nd_spa->nfit_spa);
+ struct nd_memdev *nd_memdev;
+
+ if (type != NFIT_SPA_BDW)
+ continue;
+
+ list_for_each_entry(nd_memdev, &nd_bus->memdevs, list) {
+ if (readw(&nd_memdev->nfit_mem->spa_index) != spa_index)
+ continue;
+ if (readl(&nd_memdev->nfit_mem->nfit_handle) != nfit_handle)
+ continue;
+ if (readw(&nd_memdev->nfit_mem->dcr_index) != dcr_index)
+ continue;
+
+ nd_mem->nfit_spa_bdw = nd_spa->nfit_spa;
+ return;
+ }
+ }
+
+ dev_dbg(&nd_bus->dev, "SPA-BDW not found for SPA-DCR %d\n",
+ readw(&nd_mem->nfit_spa_dcr->spa_index));
+ nd_mem->nfit_bdw = NULL;
+}
+
+static void nd_mem_add(struct nd_bus *nd_bus, struct nd_mem *nd_mem)
+{
+ u16 dcr_index = readw(&nd_mem->nfit_mem_dcr->dcr_index);
+ u16 spa_index = readw(&nd_mem->nfit_spa_dcr->spa_index);
+ struct nd_dcr *nd_dcr;
+ struct nd_bdw *nd_bdw;
+
+ list_for_each_entry(nd_dcr, &nd_bus->dcrs, list) {
+ if (readw(&nd_dcr->nfit_dcr->dcr_index) != dcr_index)
+ continue;
+ nd_mem->nfit_dcr = nd_dcr->nfit_dcr;
+ break;
+ }
+
+ if (!nd_mem->nfit_dcr) {
+ dev_dbg(&nd_bus->dev, "SPA-DCR %d missing:%s%s\n",
+ spa_index, nd_mem->nfit_mem_dcr ? "" : " MEMDEV",
+ nd_mem->nfit_dcr ? "" : " DCR");
+ kfree(nd_mem);
+ return;
+ }
+
+ /*
+ * We've found enough to create an nd_dimm, optionally
+ * find an associated BDW
+ */
+ list_add(&nd_mem->list, &nd_bus->dimms);
+
+ list_for_each_entry(nd_bdw, &nd_bus->bdws, list) {
+ if (readw(&nd_bdw->nfit_bdw->dcr_index) != dcr_index)
+ continue;
+ nd_mem->nfit_bdw = nd_bdw->nfit_bdw;
+ break;
+ }
+
+ if (!nd_mem->nfit_bdw)
+ return;
+
+ nd_mem_find_spa_bdw(nd_bus, nd_mem);
+}
+
+static int nd_mem_cmp(void *priv, struct list_head *__a, struct list_head *__b)
+{
+ struct nd_mem *a = container_of(__a, typeof(*a), list);
+ struct nd_mem *b = container_of(__b, typeof(*b), list);
+ u32 handleA, handleB;
+
+ handleA = readl(&a->nfit_mem_dcr->nfit_handle);
+ handleB = readl(&b->nfit_mem_dcr->nfit_handle);
+ if (handleA < handleB)
+ return -1;
+ else if (handleA > handleB)
+ return 1;
+ return 0;
+}
+
+static int nd_mem_init(struct nd_bus *nd_bus)
+{
+ struct nd_spa *nd_spa;
+
+ /*
+ * For each SPA-DCR address range find its corresponding
+ * MEMDEV(s). From each MEMDEV find the corresponding DCR.
+ * Then, try to find a SPA-BDW and a corresponding BDW that
+ * references the DCR. Throw it all into an nd_mem object.
+	 * Note that BDWs are optional.
+ */
+ list_for_each_entry(nd_spa, &nd_bus->spas, list) {
+ u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+ int type = nfit_spa_type(nd_spa->nfit_spa);
+ struct nd_mem *nd_mem, *found;
+ struct nd_memdev *nd_memdev;
+ u16 dcr_index;
+
+ if (type != NFIT_SPA_DCR)
+ continue;
+
+ /* multiple dimms may share a SPA_DCR when interleaved */
+ list_for_each_entry(nd_memdev, &nd_bus->memdevs, list) {
+ if (readw(&nd_memdev->nfit_mem->spa_index) != spa_index)
+ continue;
+ found = NULL;
+ dcr_index = readw(&nd_memdev->nfit_mem->dcr_index);
+ list_for_each_entry(nd_mem, &nd_bus->dimms, list)
+ if (readw(&nd_mem->nfit_mem_dcr->dcr_index)
+ == dcr_index) {
+ found = nd_mem;
+ break;
+ }
+ if (found)
+ continue;
+
+ nd_mem = kzalloc(sizeof(*nd_mem), GFP_KERNEL);
+ if (!nd_mem)
+ return -ENOMEM;
+ INIT_LIST_HEAD(&nd_mem->list);
+ nd_mem->nfit_spa_dcr = nd_spa->nfit_spa;
+ nd_mem->nfit_mem_dcr = nd_memdev->nfit_mem;
+ nd_mem_add(nd_bus, nd_mem);
+ }
+ }
+
+ list_sort(NULL, &nd_bus->dimms, nd_mem_cmp);
+
+ return 0;
+}
+
+static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
+{
+ struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
+ struct nfit __iomem *nfit = nfit_desc->nfit_base;
+ void __iomem *base = nfit;
+ const void __iomem *end;
+ u8 sum, signature[4];
+ u8 __iomem *data;
+ size_t size, i;
+ int rc;
+
+ size = nd_bus->nfit_desc->nfit_size;
+ if (size < sizeof(struct nfit))
+ goto err;
+
+ size = min_t(u32, size, readl(&nfit->length));
+ data = (u8 __iomem *) base;
+ for (i = 0, sum = 0; i < size; i++)
+ sum += readb(data + i);
+ if (sum != 0 && !warn_checksum) {
+ dev_dbg(&nd_bus->dev, "%s: nfit checksum failure\n", __func__);
+ goto err;
+ }
+ WARN_TAINT_ONCE(sum != 0, TAINT_FIRMWARE_WORKAROUND,
+ "nfit checksum failure, continuing...\n");
+
+ memcpy_fromio(signature, &nfit->signature, sizeof(signature));
+ if (memcmp(signature, "NFIT", 4) != 0) {
+ dev_dbg(&nd_bus->dev, "%s: nfit signature mismatch\n",
+ __func__);
+ goto err;
+ }
+
+ end = base + size;
+ base += sizeof(struct nfit);
+ base = add_table(nd_bus, base, end);
+ while (!IS_ERR_OR_NULL(base))
+ base = add_table(nd_bus, base, end);
+
+ if (IS_ERR(base)) {
+ dev_dbg(&nd_bus->dev, "%s: nfit table parsing error: %ld\n",
+ __func__, PTR_ERR(base));
+ goto err;
+ }
+
+ rc = nd_mem_init(nd_bus);
+ if (rc)
+ goto err;
+
+ return nd_bus;
+ err:
+ put_device(&nd_bus->dev);
return NULL;
+
+}
+
+struct nd_bus *nfit_bus_register(struct device *parent,
+ struct nfit_bus_descriptor *nfit_desc)
+{
+ static DEFINE_MUTEX(mutex);
+ struct nd_bus *nd_bus;
+
+ /* enforce single bus at a time registration */
+ mutex_lock(&mutex);
+ nd_bus = nd_bus_new(parent, nfit_desc);
+ nd_bus = nd_bus_probe(nd_bus);
+ mutex_unlock(&mutex);
+
+ if (!nd_bus)
+ return NULL;
+
+ return nd_bus;
}
EXPORT_SYMBOL(nfit_bus_register);
void nfit_bus_unregister(struct nd_bus *nd_bus)
{
+ if (!nd_bus)
+ return;
+ device_unregister(&nd_bus->dev);
}
EXPORT_SYMBOL(nfit_bus_unregister);
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
new file mode 100644
index 000000000000..0ede8818f320
--- /dev/null
+++ b/drivers/block/nd/nd-private.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __ND_PRIVATE_H__
+#define __ND_PRIVATE_H__
+#include <linux/device.h>
+extern struct list_head nd_bus_list;
+extern struct mutex nd_bus_list_mutex;
+
+struct nd_bus {
+ struct nfit_bus_descriptor *nfit_desc;
+ struct list_head memdevs;
+ struct list_head dimms;
+ struct list_head spas;
+ struct list_head dcrs;
+ struct list_head bdws;
+ struct list_head list;
+ struct device dev;
+ int id;
+};
+
+struct nd_spa {
+ struct nfit_spa __iomem *nfit_spa;
+ struct list_head list;
+};
+
+struct nd_dcr {
+ struct nfit_dcr __iomem *nfit_dcr;
+ struct list_head list;
+};
+
+struct nd_bdw {
+ struct nfit_bdw __iomem *nfit_bdw;
+ struct list_head list;
+};
+
+struct nd_memdev {
+ struct nfit_mem __iomem *nfit_mem;
+ struct list_head list;
+};
+
+/* assembled tables for a given dimm */
+struct nd_mem {
+ struct nfit_mem __iomem *nfit_mem_dcr;
+ struct nfit_dcr __iomem *nfit_dcr;
+ struct nfit_bdw __iomem *nfit_bdw;
+ struct nfit_spa __iomem *nfit_spa_dcr;
+ struct nfit_spa __iomem *nfit_spa_bdw;
+ struct list_head list;
+};
+struct nd_bus *to_nd_bus(struct device *dev);
+#endif /* __ND_PRIVATE_H__ */
diff --git a/drivers/block/nd/nfit.h b/drivers/block/nd/nfit.h
index 56a3b2dad124..72c317d04cb2 100644
--- a/drivers/block/nd/nfit.h
+++ b/drivers/block/nd/nfit.h
@@ -16,6 +16,31 @@
#define __NFIT_H__
#include <linux/types.h>
+#include <linux/uuid.h>
+
+static const uuid_le nfit_spa_uuid_volatile __maybe_unused = UUID_LE(0x7305944f,
+ 0xfdda, 0x44e3, 0xb1, 0x6c, 0x3f, 0x22, 0xd2, 0x52, 0xe5, 0xd0);
+
+static const uuid_le nfit_spa_uuid_pm __maybe_unused = UUID_LE(0x66f0d379,
+ 0xb4f3, 0x4074, 0xac, 0x43, 0x0d, 0x33, 0x18, 0xb7, 0x8c, 0xdb);
+
+static const uuid_le nfit_spa_uuid_dcr __maybe_unused = UUID_LE(0x92f701f6,
+ 0x13b4, 0x405d, 0x91, 0x0b, 0x29, 0x93, 0x67, 0xe8, 0x23, 0x4c);
+
+static const uuid_le nfit_spa_uuid_bdw __maybe_unused = UUID_LE(0x91af0530,
+ 0x5d86, 0x470e, 0xa6, 0xb0, 0x0a, 0x2d, 0xb9, 0x40, 0x82, 0x49);
+
+static const uuid_le nfit_spa_uuid_vdisk __maybe_unused = UUID_LE(0x77ab535a,
+ 0x45fc, 0x624b, 0x55, 0x60, 0xf7, 0xb2, 0x81, 0xd1, 0xf9, 0x6e);
+
+static const uuid_le nfit_spa_uuid_vcd __maybe_unused = UUID_LE(0x3d5abd30,
+ 0x4175, 0x87ce, 0x6d, 0x64, 0xd2, 0xad, 0xe5, 0x23, 0xc4, 0xbb);
+
+static const uuid_le nfit_spa_uuid_pdisk __maybe_unused = UUID_LE(0x5cea02c9,
+ 0x4d07, 0x69d3, 0x26, 0x9f, 0x44, 0x96, 0xfb, 0xe0, 0x96, 0xf9);
+
+static const uuid_le nfit_spa_uuid_pcd __maybe_unused = UUID_LE(0x08018188,
+ 0x42cd, 0xbb48, 0x10, 0x0f, 0x53, 0x87, 0xd5, 0x3d, 0xed, 0x3d);
enum {
NFIT_TABLE_SPA = 0,
Manually create and register NFITs to describe two topologies. Topology1
is an advanced, but plausible, configuration for BLK/PMEM-aliased
NVDIMMs. Topology2 is an example configuration for current platforms
that only ship with a persistent address range.
Kernel provider "nfit_test.0" produces an NFIT with the following attributes:
(a) (b) DIMM BLK-REGION
+-------------------+--------+--------+--------+
+------+ | pm0.0 | blk2.0 | pm1.0 | blk2.1 | 0 region2
| imc0 +--+- - - region0- - - +--------+ +--------+
+--+---+ | pm0.0 | blk3.0 | pm1.0 | blk3.1 | 1 region3
| +-------------------+--------v v--------+
+--+---+ | |
| cpu0 | region1
+--+---+ | |
| +----------------------------^ ^--------+
+--+---+ | blk4.0 | pm1.0 | blk4.0 | 2 region4
| imc1 +--+----------------------------| +--------+
+------+ | blk5.0 | pm1.0 | blk5.0 | 3 region5
+----------------------------+--------+--------+
*) In this layout we have four dimms and two memory controllers in one
socket. Each unique interface ("block" or "pmem") to DPA space
is identified by a region device with a dynamically assigned id.
*) The first portions of dimm0 and dimm1 are interleaved as REGION0.
A single "pmem" namespace is created in the REGION0-"spa"-range
that spans dimm0 and dimm1 with a user-specified name of "pm0.0".
Some of that interleaved "spa" range is reclaimed as "bdw"-accessed
space starting at offset (a) into each dimm. In that
reclaimed space we create two "bdw" namespaces from REGION2 and
REGION3, where "blk2.0" and "blk3.0" are just human-readable names
that could be set to any user-desired name in the label.
*) In the last portion of dimm0 and dimm1 we have an interleaved
"spa" range, REGION1, that spans those two dimms as well as dimm2
and dimm3. Some of REGION1 is allocated to a "pmem" namespace named
"pm1.0"; the rest is reclaimed in 4 "bdw" namespaces (one for each
dimm in the interleave set): "blk2.1", "blk3.1", "blk4.0", and
"blk5.0".
*) The portions of dimm2 and dimm3 that do not participate in the
REGION1 interleaved "spa" range (i.e. the DPA addresses below
offset (b)) are also included in the "blk4.0" and "blk5.0"
namespaces. Note that this example shows that "bdw" namespaces
don't need to be contiguous in DPA-space.
Kernel provider "nfit_test.1" produces an NFIT with the following attributes:
region2
+---------------------+
|---------------------|
|| pm2.0 ||
|---------------------|
+---------------------+
*) Describes a simple system-physical-address range with no backing
dimm or interleave description.
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Kconfig | 20 +
drivers/block/nd/Makefile | 16 +
drivers/block/nd/nfit.h | 9
drivers/block/nd/test/Makefile | 5
drivers/block/nd/test/iomap.c | 148 ++++++
drivers/block/nd/test/nfit.c | 930 +++++++++++++++++++++++++++++++++++++
drivers/block/nd/test/nfit_test.h | 25 +
7 files changed, 1153 insertions(+)
create mode 100644 drivers/block/nd/test/Makefile
create mode 100644 drivers/block/nd/test/iomap.c
create mode 100644 drivers/block/nd/test/nfit.c
create mode 100644 drivers/block/nd/test/nfit_test.h
diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
index 5fa74f124b3e..0106b3807202 100644
--- a/drivers/block/nd/Kconfig
+++ b/drivers/block/nd/Kconfig
@@ -41,4 +41,24 @@ config NFIT_ACPI
register the platform-global NFIT blob with the core. Also
enables the core to craft ACPI._DSM messages for platform/dimm
configuration.
+
+config NFIT_TEST
+ tristate "NFIT TEST: Manufactured NFIT for interface testing"
+ depends on DMA_CMA
+ depends on ND_CORE=m
+ depends on m
+ help
+ For development purposes, register a manufactured
+ NFIT table to verify the resulting device model topology.
+ Note, this module arranges for ioremap_cache() to be
+ overridden locally to allow simulation of system-memory as an
+ io-memory-resource.
+
+ Note, this test expects to be able to find at least
+ 256MB of CMA space (CONFIG_CMA_SIZE_MBYTES) or it will fail to
+ load. Kconfig does not allow for numerical value
+ dependencies, so we can only warn at runtime.
+
+ Say N unless you are doing development of the 'nd' subsystem.
+
endif
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 22701ab7dcae..c6bec0c185c5 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -1,3 +1,19 @@
+obj-$(CONFIG_NFIT_TEST) += test/
+
+ifdef CONFIG_NFIT_TEST
+# This obviously will cause symbol collisions if another
+# driver/sub-system attempts a similar mocked io-memory implementation.
+# When that happens we can either add a 'choice' kconfig option to
+# select one mocked instance at a time, or push for the linker to
+# include an option of the form "--wrap-prefix=<prefix>" to allow for
+# separate namespaces of mocked functions.
+ldflags-y += --wrap=ioremap_cache
+ldflags-y += --wrap=ioremap_nocache
+ldflags-y += --wrap=iounmap
+ldflags-y += --wrap=__request_region
+ldflags-y += --wrap=__release_region
+endif
+
obj-$(CONFIG_ND_CORE) += nd.o
obj-$(CONFIG_NFIT_ACPI) += nd_acpi.o
diff --git a/drivers/block/nd/nfit.h b/drivers/block/nd/nfit.h
index 72c317d04cb2..75b480f6ff03 100644
--- a/drivers/block/nd/nfit.h
+++ b/drivers/block/nd/nfit.h
@@ -123,6 +123,15 @@ struct nfit_mem {
__le16 reserved;
} __packed;
+#define NFIT_DIMM_HANDLE(node, socket, imc, chan, dimm) \
+ (((node & 0xfff) << 16) | ((socket & 0xf) << 12) \
+ | ((imc & 0xf) << 8) | ((chan & 0xf) << 4) | (dimm & 0xf))
+#define NFIT_DIMM_NODE(handle) ((handle) >> 16 & 0xfff)
+#define NFIT_DIMM_SOCKET(handle) ((handle) >> 12 & 0xf)
+#define NFIT_DIMM_IMC(handle) ((handle) >> 8 & 0xf)
+#define NFIT_DIMM_CHAN(handle) ((handle) >> 4 & 0xf)
+#define NFIT_DIMM_DIMM(handle) ((handle) & 0xf)
+
/**
* struct nfit_idt - Interleave description Table
*/
diff --git a/drivers/block/nd/test/Makefile b/drivers/block/nd/test/Makefile
new file mode 100644
index 000000000000..c7f319cbd082
--- /dev/null
+++ b/drivers/block/nd/test/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_NFIT_TEST) += nfit_test.o
+obj-$(CONFIG_NFIT_TEST) += nfit_test_iomap.o
+
+nfit_test-y := nfit.o
+nfit_test_iomap-y := iomap.o
diff --git a/drivers/block/nd/test/iomap.c b/drivers/block/nd/test/iomap.c
new file mode 100644
index 000000000000..87e6a1255237
--- /dev/null
+++ b/drivers/block/nd/test/iomap.c
@@ -0,0 +1,148 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/rculist.h>
+#include <linux/export.h>
+#include <linux/ioport.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/io.h>
+#include "nfit_test.h"
+
+static LIST_HEAD(iomap_head);
+
+static struct iomap_ops {
+ nfit_test_lookup_fn nfit_test_lookup;
+ struct list_head list;
+} iomap_ops;
+
+void nfit_test_setup(nfit_test_lookup_fn lookup)
+{
+ iomap_ops.nfit_test_lookup = lookup;
+ INIT_LIST_HEAD(&iomap_ops.list);
+ list_add_rcu(&iomap_ops.list, &iomap_head);
+}
+EXPORT_SYMBOL(nfit_test_setup);
+
+void nfit_test_teardown(void)
+{
+ list_del_rcu(&iomap_ops.list);
+ synchronize_rcu();
+}
+EXPORT_SYMBOL(nfit_test_teardown);
+
+static struct nfit_test_resource *get_nfit_res(resource_size_t resource)
+{
+ struct iomap_ops *ops;
+
+ ops = list_first_or_null_rcu(&iomap_head, typeof(*ops), list);
+ if (ops)
+ return ops->nfit_test_lookup(resource);
+ return NULL;
+}
+
+void __iomem *__nfit_test_ioremap(resource_size_t offset, unsigned long size,
+ void __iomem *(*fallback_fn)(resource_size_t, unsigned long))
+{
+ struct nfit_test_resource *nfit_res;
+
+ rcu_read_lock();
+ nfit_res = get_nfit_res(offset);
+ rcu_read_unlock();
+ if (nfit_res)
+ return (void __iomem *) nfit_res->buf + offset
+ - nfit_res->res->start;
+ return fallback_fn(offset, size);
+}
+
+void __iomem *__wrap_ioremap_cache(resource_size_t offset, unsigned long size)
+{
+ return __nfit_test_ioremap(offset, size, ioremap_cache);
+}
+EXPORT_SYMBOL(__wrap_ioremap_cache);
+
+void __iomem *__wrap_ioremap_nocache(resource_size_t offset, unsigned long size)
+{
+ return __nfit_test_ioremap(offset, size, ioremap_nocache);
+}
+EXPORT_SYMBOL(__wrap_ioremap_nocache);
+
+void __wrap_iounmap(volatile void __iomem *addr)
+{
+ struct nfit_test_resource *nfit_res;
+
+ rcu_read_lock();
+ nfit_res = get_nfit_res((unsigned long) addr);
+ rcu_read_unlock();
+ if (nfit_res)
+ return;
+ return iounmap(addr);
+}
+EXPORT_SYMBOL(__wrap_iounmap);
+
+struct resource *__wrap___request_region(struct resource *parent,
+ resource_size_t start, resource_size_t n, const char *name,
+ int flags)
+{
+ struct nfit_test_resource *nfit_res;
+
+ if (parent == &iomem_resource) {
+ rcu_read_lock();
+ nfit_res = get_nfit_res(start);
+ rcu_read_unlock();
+ if (nfit_res) {
+ struct resource *res = nfit_res->res + 1;
+
+ if (start + n > nfit_res->res->start
+ + resource_size(nfit_res->res)) {
+ pr_debug("%s: start: %llx n: %llx overflow: %pr\n",
+ __func__, start, n,
+ nfit_res->res);
+ return NULL;
+ }
+
+ res->start = start;
+ res->end = start + n - 1;
+ res->name = name;
+ res->flags = resource_type(parent);
+ res->flags |= IORESOURCE_BUSY | flags;
+ pr_debug("%s: %pr\n", __func__, res);
+ return res;
+ }
+ }
+ return __request_region(parent, start, n, name, flags);
+}
+EXPORT_SYMBOL(__wrap___request_region);
+
+void __wrap___release_region(struct resource *parent, resource_size_t start,
+ resource_size_t n)
+{
+ struct nfit_test_resource *nfit_res;
+
+ rcu_read_lock();
+ nfit_res = get_nfit_res(start);
+ if (nfit_res) {
+ struct resource *res = nfit_res->res + 1;
+
+ if (start != res->start || resource_size(res) != n)
+ pr_info("%s: start: %llx n: %llx mismatch: %pr\n",
+ __func__, start, n, res);
+ else
+ memset(res, 0, sizeof(*res));
+ }
+ rcu_read_unlock();
+ if (!nfit_res)
+ __release_region(parent, start, n);
+}
+EXPORT_SYMBOL(__wrap___release_region);
+
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/block/nd/test/nfit.c b/drivers/block/nd/test/nfit.c
new file mode 100644
index 000000000000..61227dec111a
--- /dev/null
+++ b/drivers/block/nd/test/nfit.c
@@ -0,0 +1,930 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include "nfit_test.h"
+#include "../nfit.h"
+
+#include <asm-generic/io-64-nonatomic-lo-hi.h>
+
+/*
+ * Generate an NFIT table to describe the following topology:
+ *
+ * BUS0: Interleaved PMEM regions, and aliasing with BLK regions
+ *
+ * (a) (b) DIMM BLK-REGION
+ * +----------+--------------+----------+---------+
+ * +------+ | blk2.0 | pm0.0 | blk2.1 | pm1.0 | 0 region2
+ * | imc0 +--+- - - - - region0 - - - -+----------+ +
+ * +--+---+ | blk3.0 | pm0.0 | blk3.1 | pm1.0 | 1 region3
+ * | +----------+--------------v----------v v
+ * +--+---+ | |
+ * | cpu0 | region1
+ * +--+---+ | |
+ * | +-------------------------^----------^ ^
+ * +--+---+ | blk4.0 | pm1.0 | 2 region4
+ * | imc1 +--+-------------------------+----------+ +
+ * +------+ | blk5.0 | pm1.0 | 3 region5
+ * +-------------------------+----------+-+-------+
+ *
+ * *) In this layout we have four dimms and two memory controllers in one
+ * socket. Each unique interface (BLK or PMEM) to DPA space
+ * is identified by a region device with a dynamically assigned id.
+ *
+ * *) The first portions of dimm0 and dimm1 are interleaved as REGION0.
+ * A single PMEM namespace "pm0.0" is created using half of the
+ * REGION0 SPA-range. REGION0 spans dimm0 and dimm1. PMEM namespaces
+ * allocate from the bottom of a region. The unallocated
+ * portion of REGION0 aliases with REGION2 and REGION3. That
+ * unallocated capacity is reclaimed as BLK namespaces ("blk2.0" and
+ * "blk3.0") starting at the base of each DIMM up to offset (a) in those
+ * DIMMs. "pm0.0", "blk2.0" and "blk3.0" are free-form, human-readable
+ * names that can be assigned to a namespace.
+ *
+ * *) In the last portion of dimm0 and dimm1 we have an interleaved
+ * SPA range, REGION1, that spans those two dimms as well as dimm2
+ * and dimm3. Some of REGION1 is allocated to a PMEM namespace named
+ * "pm1.0"; the rest is reclaimed in 4 BLK namespaces (one for each
+ * dimm in the interleave set): "blk2.1", "blk3.1", "blk4.0", and
+ * "blk5.0".
+ *
+ * *) The portions of dimm2 and dimm3 that do not participate in the
+ * REGION1 interleaved SPA range (i.e. the DPA addresses below offset
+ * (b)) are also included in the "blk4.0" and "blk5.0" namespaces.
+ * Note that BLK namespaces need not be contiguous in DPA-space, and
+ * can consume aliased capacity from multiple interleave sets.
+ *
+ * BUS1: Legacy NVDIMM (single contiguous range)
+ *
+ * region2
+ * +---------------------+
+ * |---------------------|
+ * || pm2.0 ||
+ * |---------------------|
+ * +---------------------+
+ *
+ * *) An NFIT table may describe a simple system-physical-address range
+ * with no backing dimm or interleave description.
+ */
+enum {
+ NUM_PM = 2,
+ NUM_DCR = 4,
+ NUM_BDW = NUM_DCR,
+ NUM_SPA = NUM_PM + NUM_DCR + NUM_BDW,
+ NUM_MEM = NUM_DCR + NUM_BDW + 2 /* spa0 iset */ + 4 /* spa1 iset */,
+ DIMM_SIZE = SZ_32M,
+ LABEL_SIZE = SZ_128K,
+ SPA0_SIZE = DIMM_SIZE,
+ SPA1_SIZE = DIMM_SIZE*2,
+ SPA2_SIZE = DIMM_SIZE,
+ BDW_SIZE = 64 << 8,
+ DCR_SIZE = 12,
+ NUM_NFITS = 2, /* permit testing multiple NFITs per system */
+};
+
+struct nfit_test_dcr {
+ __le64 bdw_addr;
+ __le32 bdw_status;
+ __u8 aperature[BDW_SIZE];
+};
+
+static u32 handle[NUM_DCR] = {
+ [0] = NFIT_DIMM_HANDLE(0, 0, 0, 0, 0),
+ [1] = NFIT_DIMM_HANDLE(0, 0, 0, 0, 1),
+ [2] = NFIT_DIMM_HANDLE(0, 0, 1, 0, 0),
+ [3] = NFIT_DIMM_HANDLE(0, 0, 1, 0, 1),
+};
+
+struct nfit_test {
+ struct nfit_bus_descriptor nfit_desc;
+ struct platform_device pdev;
+ struct list_head resources;
+ void __iomem *nfit_buf;
+ struct nd_bus *nd_bus;
+ dma_addr_t nfit_dma;
+ size_t nfit_size;
+ int num_dcr;
+ int num_pm;
+ void **dimm;
+ dma_addr_t *dimm_dma;
+ void **label;
+ dma_addr_t *label_dma;
+ void **spa_set;
+ dma_addr_t *spa_set_dma;
+ struct nfit_test_dcr **dcr;
+ dma_addr_t *dcr_dma;
+ int (*alloc)(struct nfit_test *t);
+ void (*setup)(struct nfit_test *t);
+};
+
+static struct nfit_test *to_nfit_test(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+
+ return container_of(pdev, struct nfit_test, pdev);
+}
+
+static int nfit_test_ctl(struct nfit_bus_descriptor *nfit_desc,
+ struct nd_dimm *nd_dimm, unsigned int cmd, void *buf,
+ unsigned int buf_len)
+{
+ return -ENOTTY;
+}
+
+static DEFINE_SPINLOCK(nfit_test_lock);
+static struct nfit_test *instances[NUM_NFITS];
+
+static void *alloc_coherent(struct nfit_test *t, size_t size, dma_addr_t *dma)
+{
+ struct device *dev = &t->pdev.dev;
+ struct resource *res = devm_kzalloc(dev, sizeof(*res) * 2, GFP_KERNEL);
+ void *buf = dmam_alloc_coherent(dev, size, dma, GFP_KERNEL);
+ struct nfit_test_resource *nfit_res = devm_kzalloc(dev,
+ sizeof(*nfit_res), GFP_KERNEL);
+
+ if (!res || !buf || !nfit_res)
+ return NULL;
+ INIT_LIST_HEAD(&nfit_res->list);
+ memset(buf, 0, size);
+ nfit_res->buf = buf;
+ nfit_res->res = res;
+ res->start = *dma;
+ res->end = *dma + size - 1;
+ res->name = "NFIT";
+ spin_lock(&nfit_test_lock);
+ list_add(&nfit_res->list, &t->resources);
+ spin_unlock(&nfit_test_lock);
+
+ return nfit_res->buf;
+}
+
+static struct nfit_test_resource *nfit_test_lookup(resource_size_t addr)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(instances); i++) {
+ struct nfit_test_resource *n, *nfit_res = NULL;
+ struct nfit_test *t = instances[i];
+
+ if (!t)
+ continue;
+ spin_lock(&nfit_test_lock);
+ list_for_each_entry(n, &t->resources, list) {
+ if (addr >= n->res->start && (addr < n->res->start
+ + resource_size(n->res))) {
+ nfit_res = n;
+ break;
+ } else if (addr >= (unsigned long) n->buf
+ && (addr < (unsigned long) n->buf
+ + resource_size(n->res))) {
+ nfit_res = n;
+ break;
+ }
+ }
+ spin_unlock(&nfit_test_lock);
+ if (nfit_res)
+ return nfit_res;
+ }
+
+ return NULL;
+}
+
+static int nfit_test0_alloc(struct nfit_test *t)
+{
+ size_t nfit_size = sizeof(struct nfit)
+ + sizeof(struct nfit_spa) * NUM_SPA
+ + sizeof(struct nfit_mem) * NUM_MEM
+ + sizeof(struct nfit_dcr) * NUM_DCR
+ + sizeof(struct nfit_bdw) * NUM_BDW;
+ int i;
+
+ t->nfit_buf = (void __iomem *) alloc_coherent(t, nfit_size,
+ &t->nfit_dma);
+ if (!t->nfit_buf)
+ return -ENOMEM;
+ t->nfit_size = nfit_size;
+
+ t->spa_set[0] = alloc_coherent(t, SPA0_SIZE, &t->spa_set_dma[0]);
+ if (!t->spa_set[0])
+ return -ENOMEM;
+
+ t->spa_set[1] = alloc_coherent(t, SPA1_SIZE, &t->spa_set_dma[1]);
+ if (!t->spa_set[1])
+ return -ENOMEM;
+
+ for (i = 0; i < NUM_DCR; i++) {
+ t->dimm[i] = alloc_coherent(t, DIMM_SIZE, &t->dimm_dma[i]);
+ if (!t->dimm[i])
+ return -ENOMEM;
+
+ t->label[i] = alloc_coherent(t, LABEL_SIZE, &t->label_dma[i]);
+ if (!t->label[i])
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < NUM_DCR; i++) {
+ t->dcr[i] = alloc_coherent(t, LABEL_SIZE, &t->dcr_dma[i]);
+ if (!t->dcr[i])
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static u8 nfit_checksum(void *buf, size_t size)
+{
+ u8 sum, *data = buf;
+ size_t i;
+
+ for (sum = 0, i = 0; i < size; i++)
+ sum += data[i];
+ return 0 - sum;
+}
+
+static int nfit_test1_alloc(struct nfit_test *t)
+{
+ size_t nfit_size = sizeof(struct nfit) + sizeof(struct nfit_spa);
+
+ t->nfit_buf = (void __iomem *) alloc_coherent(t, nfit_size,
+ &t->nfit_dma);
+ if (!t->nfit_buf)
+ return -ENOMEM;
+ t->nfit_size = nfit_size;
+
+ t->spa_set[0] = alloc_coherent(t, SPA2_SIZE, &t->spa_set_dma[0]);
+ if (!t->spa_set[0])
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void nfit_test0_setup(struct nfit_test *t)
+{
+ struct nfit_bus_descriptor *nfit_desc;
+ void __iomem *nfit_buf = t->nfit_buf;
+ struct nfit_spa __iomem *nfit_spa;
+ struct nfit_dcr __iomem *nfit_dcr;
+ struct nfit_bdw __iomem *nfit_bdw;
+ struct nfit_mem __iomem *nfit_mem;
+ size_t size = t->nfit_size;
+ struct nfit __iomem *nfit;
+ unsigned int offset;
+
+ /* nfit header */
+ nfit = nfit_buf;
+ memcpy_toio(nfit->signature, "NFIT", 4);
+ writel(size, &nfit->length);
+ writeb(1, &nfit->revision);
+ memcpy_toio(nfit->oemid, "NDTEST", 6);
+ writew(0x1234, &nfit->oem_tbl_id);
+ writel(1, &nfit->oem_revision);
+ writel(0xabcd0000, &nfit->creator_id);
+ writel(1, &nfit->creator_revision);
+
+ /*
+ * spa0 (interleave first half of dimm0 and dimm1, note storage
+ * does not actually alias the related block-data-window
+ * regions)
+ */
+ nfit_spa = nfit_buf + sizeof(*nfit);
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_pm, 16);
+ writew(0+1, &nfit_spa->spa_index);
+ writeq(t->spa_set_dma[0], &nfit_spa->spa_base);
+ writeq(SPA0_SIZE, &nfit_spa->spa_length);
+
+ /*
+ * spa1 (interleave last half of the 4 DIMMS, note storage
+ * does not actually alias the related block-data-window
+ * regions)
+ */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa);
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_pm, 16);
+ writew(1+1, &nfit_spa->spa_index);
+ writeq(t->spa_set_dma[1], &nfit_spa->spa_base);
+ writeq(SPA1_SIZE, &nfit_spa->spa_length);
+
+ /* spa2 (dcr0) dimm0 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 2;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_dcr, 16);
+ writew(2+1, &nfit_spa->spa_index);
+ writeq(t->dcr_dma[0], &nfit_spa->spa_base);
+ writeq(DCR_SIZE, &nfit_spa->spa_length);
+
+ /* spa3 (dcr1) dimm1 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 3;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_dcr, 16);
+ writew(3+1, &nfit_spa->spa_index);
+ writeq(t->dcr_dma[1], &nfit_spa->spa_base);
+ writeq(DCR_SIZE, &nfit_spa->spa_length);
+
+ /* spa4 (dcr2) dimm2 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 4;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_dcr, 16);
+ writew(4+1, &nfit_spa->spa_index);
+ writeq(t->dcr_dma[2], &nfit_spa->spa_base);
+ writeq(DCR_SIZE, &nfit_spa->spa_length);
+
+ /* spa5 (dcr3) dimm3 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 5;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_dcr, 16);
+ writew(5+1, &nfit_spa->spa_index);
+ writeq(t->dcr_dma[3], &nfit_spa->spa_base);
+ writeq(DCR_SIZE, &nfit_spa->spa_length);
+
+ /* spa6 (bdw for dcr0) dimm0 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 6;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_bdw, 16);
+ writew(6+1, &nfit_spa->spa_index);
+ writeq(t->dimm_dma[0], &nfit_spa->spa_base);
+ writeq(DIMM_SIZE, &nfit_spa->spa_length);
+ dev_dbg(&t->pdev.dev, "%s: BDW0: %#llx:%#x\n", __func__,
+ (unsigned long long) t->dimm_dma[0], DIMM_SIZE);
+
+ /* spa7 (bdw for dcr1) dimm1 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 7;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_bdw, 16);
+ writew(7+1, &nfit_spa->spa_index);
+ writeq(t->dimm_dma[1], &nfit_spa->spa_base);
+ writeq(DIMM_SIZE, &nfit_spa->spa_length);
+ dev_dbg(&t->pdev.dev, "%s: BDW1: %#llx:%#x\n", __func__,
+ (unsigned long long) t->dimm_dma[1], DIMM_SIZE);
+
+ /* spa8 (bdw for dcr2) dimm2 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 8;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_bdw, 16);
+ writew(8+1, &nfit_spa->spa_index);
+ writeq(t->dimm_dma[2], &nfit_spa->spa_base);
+ writeq(DIMM_SIZE, &nfit_spa->spa_length);
+ dev_dbg(&t->pdev.dev, "%s: BDW2: %#llx:%#x\n", __func__,
+ (unsigned long long) t->dimm_dma[2], DIMM_SIZE);
+
+ /* spa9 (bdw for dcr3) dimm3 */
+ nfit_spa = nfit_buf + sizeof(*nfit) + sizeof(*nfit_spa) * 9;
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_bdw, 16);
+ writew(9+1, &nfit_spa->spa_index);
+ writeq(t->dimm_dma[3], &nfit_spa->spa_base);
+ writeq(DIMM_SIZE, &nfit_spa->spa_length);
+ dev_dbg(&t->pdev.dev, "%s: BDW3: %#llx:%#x\n", __func__,
+ (unsigned long long) t->dimm_dma[3], DIMM_SIZE);
+
+ offset = sizeof(*nfit) + sizeof(*nfit_spa) * 10;
+ /* mem-region0 (spa0, dimm0) */
+ nfit_mem = nfit_buf + offset;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[0], &nfit_mem->nfit_handle);
+ writew(0, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(0+1, &nfit_mem->spa_index);
+ writew(0+1, &nfit_mem->dcr_index);
+ writeq(SPA0_SIZE/2, &nfit_mem->region_len);
+ writeq(t->spa_set_dma[0], &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(2, &nfit_mem->interleave_ways);
+
+ /* mem-region1 (spa0, dimm1) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem);
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[1], &nfit_mem->nfit_handle);
+ writew(1, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(0+1, &nfit_mem->spa_index);
+ writew(1+1, &nfit_mem->dcr_index);
+ writeq(SPA0_SIZE/2, &nfit_mem->region_len);
+ writeq(t->spa_set_dma[0] + SPA0_SIZE/2, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(2, &nfit_mem->interleave_ways);
+
+ /* mem-region2 (spa1, dimm0) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 2;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[0], &nfit_mem->nfit_handle);
+ writew(0, &nfit_mem->phys_id);
+ writew(1, &nfit_mem->region_id);
+ writew(1+1, &nfit_mem->spa_index);
+ writew(0+1, &nfit_mem->dcr_index);
+ writeq(SPA1_SIZE/4, &nfit_mem->region_len);
+ writeq(t->spa_set_dma[1], &nfit_mem->region_spa_offset);
+ writeq(SPA0_SIZE/2, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(4, &nfit_mem->interleave_ways);
+
+ /* mem-region3 (spa1, dimm1) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 3;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[1], &nfit_mem->nfit_handle);
+ writew(1, &nfit_mem->phys_id);
+ writew(1, &nfit_mem->region_id);
+ writew(1+1, &nfit_mem->spa_index);
+ writew(1+1, &nfit_mem->dcr_index);
+ writeq(SPA1_SIZE/4, &nfit_mem->region_len);
+ writeq(t->spa_set_dma[1] + SPA1_SIZE/4, &nfit_mem->region_spa_offset);
+ writeq(SPA0_SIZE/2, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(4, &nfit_mem->interleave_ways);
+
+ /* mem-region4 (spa1, dimm2) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 4;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[2], &nfit_mem->nfit_handle);
+ writew(2, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(1+1, &nfit_mem->spa_index);
+ writew(2+1, &nfit_mem->dcr_index);
+ writeq(SPA1_SIZE/4, &nfit_mem->region_len);
+ writeq(t->spa_set_dma[1] + 2*SPA1_SIZE/4, &nfit_mem->region_spa_offset);
+ writeq(SPA0_SIZE/2, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(4, &nfit_mem->interleave_ways);
+
+ /* mem-region5 (spa1, dimm3) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 5;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[3], &nfit_mem->nfit_handle);
+ writew(3, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(1+1, &nfit_mem->spa_index);
+ writew(3+1, &nfit_mem->dcr_index);
+ writeq(SPA1_SIZE/4, &nfit_mem->region_len);
+ writeq(t->spa_set_dma[1] + 3*SPA1_SIZE/4, &nfit_mem->region_spa_offset);
+ writeq(SPA0_SIZE/2, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(4, &nfit_mem->interleave_ways);
+
+ /* mem-region6 (spa/dcr0, dimm0) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 6;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[0], &nfit_mem->nfit_handle);
+ writew(0, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(2+1, &nfit_mem->spa_index);
+ writew(0+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+ /* mem-region7 (spa/dcr1, dimm1) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 7;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[1], &nfit_mem->nfit_handle);
+ writew(1, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(3+1, &nfit_mem->spa_index);
+ writew(1+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+ /* mem-region8 (spa/dcr2, dimm2) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 8;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[2], &nfit_mem->nfit_handle);
+ writew(2, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(4+1, &nfit_mem->spa_index);
+ writew(2+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+ /* mem-region9 (spa/dcr3, dimm3) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 9;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[3], &nfit_mem->nfit_handle);
+ writew(3, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(5+1, &nfit_mem->spa_index);
+ writew(3+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+ /* mem-region10 (spa/bdw0, dimm0) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 10;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[0], &nfit_mem->nfit_handle);
+ writew(0, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(6+1, &nfit_mem->spa_index);
+ writew(0+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+ /* mem-region11 (spa/bdw1, dimm1) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 11;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[1], &nfit_mem->nfit_handle);
+ writew(1, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(7+1, &nfit_mem->spa_index);
+ writew(1+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+ /* mem-region12 (spa/bdw2, dimm2) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 12;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[2], &nfit_mem->nfit_handle);
+ writew(2, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(8+1, &nfit_mem->spa_index);
+ writew(2+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+	/* mem-region13 (spa/bdw3, dimm3) */
+ nfit_mem = nfit_buf + offset + sizeof(struct nfit_mem) * 13;
+ writew(NFIT_TABLE_MEM, &nfit_mem->type);
+ writew(sizeof(*nfit_mem), &nfit_mem->length);
+ writel(handle[3], &nfit_mem->nfit_handle);
+ writew(3, &nfit_mem->phys_id);
+ writew(0, &nfit_mem->region_id);
+ writew(9+1, &nfit_mem->spa_index);
+ writew(3+1, &nfit_mem->dcr_index);
+ writeq(0, &nfit_mem->region_len);
+ writeq(0, &nfit_mem->region_spa_offset);
+ writeq(0, &nfit_mem->region_dpa);
+ writew(0, &nfit_mem->idt_index);
+ writew(1, &nfit_mem->interleave_ways);
+
+ offset = offset + sizeof(struct nfit_mem) * 14;
+ /* dcr-descriptor0 */
+ nfit_dcr = nfit_buf + offset;
+ writew(NFIT_TABLE_DCR, &nfit_dcr->type);
+ writew(sizeof(struct nfit_dcr), &nfit_dcr->length);
+ writew(0+1, &nfit_dcr->dcr_index);
+ writew(0xabcd, &nfit_dcr->vendor_id);
+ writew(0, &nfit_dcr->device_id);
+ writew(1, &nfit_dcr->revision_id);
+ writel(~handle[0], &nfit_dcr->serial_number);
+ writew(1, &nfit_dcr->num_bcw);
+ writeq(DCR_SIZE, &nfit_dcr->bcw_size);
+ writeq(0, &nfit_dcr->cmd_offset);
+ writeq(8, &nfit_dcr->cmd_size);
+ writeq(8, &nfit_dcr->status_offset);
+ writeq(4, &nfit_dcr->status_size);
+
+ /* dcr-descriptor1 */
+ nfit_dcr = nfit_buf + offset + sizeof(struct nfit_dcr);
+ writew(NFIT_TABLE_DCR, &nfit_dcr->type);
+ writew(sizeof(struct nfit_dcr), &nfit_dcr->length);
+ writew(1+1, &nfit_dcr->dcr_index);
+ writew(0xabcd, &nfit_dcr->vendor_id);
+ writew(0, &nfit_dcr->device_id);
+ writew(1, &nfit_dcr->revision_id);
+ writel(~handle[1], &nfit_dcr->serial_number);
+ writew(1, &nfit_dcr->num_bcw);
+ writeq(DCR_SIZE, &nfit_dcr->bcw_size);
+ writeq(0, &nfit_dcr->cmd_offset);
+ writeq(8, &nfit_dcr->cmd_size);
+ writeq(8, &nfit_dcr->status_offset);
+ writeq(4, &nfit_dcr->status_size);
+
+ /* dcr-descriptor2 */
+ nfit_dcr = nfit_buf + offset + sizeof(struct nfit_dcr) * 2;
+ writew(NFIT_TABLE_DCR, &nfit_dcr->type);
+ writew(sizeof(struct nfit_dcr), &nfit_dcr->length);
+ writew(2+1, &nfit_dcr->dcr_index);
+ writew(0xabcd, &nfit_dcr->vendor_id);
+ writew(0, &nfit_dcr->device_id);
+ writew(1, &nfit_dcr->revision_id);
+ writel(~handle[2], &nfit_dcr->serial_number);
+ writew(1, &nfit_dcr->num_bcw);
+ writeq(DCR_SIZE, &nfit_dcr->bcw_size);
+ writeq(0, &nfit_dcr->cmd_offset);
+ writeq(8, &nfit_dcr->cmd_size);
+ writeq(8, &nfit_dcr->status_offset);
+ writeq(4, &nfit_dcr->status_size);
+
+ /* dcr-descriptor3 */
+ nfit_dcr = nfit_buf + offset + sizeof(struct nfit_dcr) * 3;
+ writew(NFIT_TABLE_DCR, &nfit_dcr->type);
+ writew(sizeof(struct nfit_dcr), &nfit_dcr->length);
+ writew(3+1, &nfit_dcr->dcr_index);
+ writew(0xabcd, &nfit_dcr->vendor_id);
+ writew(0, &nfit_dcr->device_id);
+ writew(1, &nfit_dcr->revision_id);
+ writel(~handle[3], &nfit_dcr->serial_number);
+ writew(1, &nfit_dcr->num_bcw);
+ writeq(DCR_SIZE, &nfit_dcr->bcw_size);
+ writeq(0, &nfit_dcr->cmd_offset);
+ writeq(8, &nfit_dcr->cmd_size);
+ writeq(8, &nfit_dcr->status_offset);
+ writeq(4, &nfit_dcr->status_size);
+
+ offset = offset + sizeof(struct nfit_dcr) * 4;
+ /* bdw0 (spa/dcr0, dimm0) */
+ nfit_bdw = nfit_buf + offset;
+ writew(NFIT_TABLE_BDW, &nfit_bdw->type);
+ writew(sizeof(struct nfit_bdw), &nfit_bdw->length);
+ writew(0+1, &nfit_bdw->dcr_index);
+ writew(1, &nfit_bdw->num_bdw);
+ writeq(0, &nfit_bdw->bdw_offset);
+ writeq(BDW_SIZE, &nfit_bdw->bdw_size);
+ writeq(DIMM_SIZE, &nfit_bdw->blk_capacity);
+ writeq(0, &nfit_bdw->blk_offset);
+
+ /* bdw1 (spa/dcr1, dimm1) */
+ nfit_bdw = nfit_buf + offset + sizeof(struct nfit_bdw);
+ writew(NFIT_TABLE_BDW, &nfit_bdw->type);
+ writew(sizeof(struct nfit_bdw), &nfit_bdw->length);
+ writew(1+1, &nfit_bdw->dcr_index);
+ writew(1, &nfit_bdw->num_bdw);
+ writeq(0, &nfit_bdw->bdw_offset);
+ writeq(BDW_SIZE, &nfit_bdw->bdw_size);
+ writeq(DIMM_SIZE, &nfit_bdw->blk_capacity);
+ writeq(0, &nfit_bdw->blk_offset);
+
+ /* bdw2 (spa/dcr2, dimm2) */
+ nfit_bdw = nfit_buf + offset + sizeof(struct nfit_bdw) * 2;
+ writew(NFIT_TABLE_BDW, &nfit_bdw->type);
+ writew(sizeof(struct nfit_bdw), &nfit_bdw->length);
+ writew(2+1, &nfit_bdw->dcr_index);
+ writew(1, &nfit_bdw->num_bdw);
+ writeq(0, &nfit_bdw->bdw_offset);
+ writeq(BDW_SIZE, &nfit_bdw->bdw_size);
+ writeq(DIMM_SIZE, &nfit_bdw->blk_capacity);
+ writeq(0, &nfit_bdw->blk_offset);
+
+ /* bdw3 (spa/dcr3, dimm3) */
+ nfit_bdw = nfit_buf + offset + sizeof(struct nfit_bdw) * 3;
+ writew(NFIT_TABLE_BDW, &nfit_bdw->type);
+ writew(sizeof(struct nfit_bdw), &nfit_bdw->length);
+ writew(3+1, &nfit_bdw->dcr_index);
+ writew(1, &nfit_bdw->num_bdw);
+ writeq(0, &nfit_bdw->bdw_offset);
+ writeq(BDW_SIZE, &nfit_bdw->bdw_size);
+ writeq(DIMM_SIZE, &nfit_bdw->blk_capacity);
+ writeq(0, &nfit_bdw->blk_offset);
+
+ writeb(nfit_checksum(nfit_buf, size), &nfit->checksum);
+
+ nfit_desc = &t->nfit_desc;
+ nfit_desc->nfit_ctl = nfit_test_ctl;
+}
+
+static void nfit_test1_setup(struct nfit_test *t)
+{
+ void __iomem *nfit_buf = t->nfit_buf;
+ struct nfit_spa __iomem *nfit_spa;
+ size_t size = t->nfit_size;
+ struct nfit __iomem *nfit;
+
+ /* nfit header */
+ nfit = nfit_buf;
+ memcpy_toio(nfit->signature, "NFIT", 4);
+ writel(size, &nfit->length);
+ writeb(1, &nfit->revision);
+ memcpy_toio(nfit->oemid, "NDTEST", 6);
+ writew(0x1234, &nfit->oem_tbl_id);
+ writel(1, &nfit->oem_revision);
+ writel(0xabcd0000, &nfit->creator_id);
+ writel(1, &nfit->creator_revision);
+
+ /* spa0 (flat range with no bdw aliasing) */
+ nfit_spa = nfit_buf + sizeof(*nfit);
+ writew(NFIT_TABLE_SPA, &nfit_spa->type);
+ writew(sizeof(*nfit_spa), &nfit_spa->length);
+ memcpy_toio(&nfit_spa->type_uuid, &nfit_spa_uuid_pm, 16);
+ writew(0+1, &nfit_spa->spa_index);
+ writeq(t->spa_set_dma[0], &nfit_spa->spa_base);
+ writeq(SPA2_SIZE, &nfit_spa->spa_length);
+
+ writeb(nfit_checksum(nfit_buf, size), &nfit->checksum);
+}
+
+static int nfit_test_probe(struct platform_device *pdev)
+{
+ struct nfit_bus_descriptor *nfit_desc;
+ struct device *dev = &pdev->dev;
+ struct nfit_test *nfit_test;
+ int rc;
+
+ nfit_test = to_nfit_test(&pdev->dev);
+ rc = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (rc)
+ return rc;
+
+ /* common alloc */
+ if (nfit_test->num_dcr) {
+ int num = nfit_test->num_dcr;
+
+ nfit_test->dimm = devm_kcalloc(dev, num, sizeof(void *), GFP_KERNEL);
+ nfit_test->dimm_dma = devm_kcalloc(dev, num, sizeof(dma_addr_t), GFP_KERNEL);
+ nfit_test->label = devm_kcalloc(dev, num, sizeof(void *), GFP_KERNEL);
+ nfit_test->label_dma = devm_kcalloc(dev, num, sizeof(dma_addr_t), GFP_KERNEL);
+ nfit_test->dcr = devm_kcalloc(dev, num, sizeof(struct nfit_test_dcr *), GFP_KERNEL);
+ nfit_test->dcr_dma = devm_kcalloc(dev, num, sizeof(dma_addr_t), GFP_KERNEL);
+ if (nfit_test->dimm && nfit_test->dimm_dma && nfit_test->label
+ && nfit_test->label_dma && nfit_test->dcr
+ && nfit_test->dcr_dma)
+ /* pass */;
+ else
+ return -ENOMEM;
+ }
+
+ if (nfit_test->num_pm) {
+ int num = nfit_test->num_pm;
+
+ nfit_test->spa_set = devm_kcalloc(dev, num, sizeof(void *), GFP_KERNEL);
+ nfit_test->spa_set_dma = devm_kcalloc(dev, num,
+ sizeof(dma_addr_t), GFP_KERNEL);
+ if (nfit_test->spa_set && nfit_test->spa_set_dma)
+ /* pass */;
+ else
+ return -ENOMEM;
+ }
+
+ /* per-nfit specific alloc */
+ if (nfit_test->alloc(nfit_test))
+ return -ENOMEM;
+
+ nfit_test->setup(nfit_test);
+
+ nfit_desc = &nfit_test->nfit_desc;
+ nfit_desc->nfit_base = nfit_test->nfit_buf;
+ nfit_desc->nfit_size = nfit_test->nfit_size;
+
+ nfit_test->nd_bus = nfit_bus_register(&pdev->dev, nfit_desc);
+ if (!nfit_test->nd_bus)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int nfit_test_remove(struct platform_device *pdev)
+{
+ struct nfit_test *nfit_test = to_nfit_test(&pdev->dev);
+
+ nfit_bus_unregister(nfit_test->nd_bus);
+
+ return 0;
+}
+
+static void nfit_test_release(struct device *dev)
+{
+ struct nfit_test *nfit_test = to_nfit_test(dev);
+
+ kfree(nfit_test);
+}
+
+static const struct platform_device_id nfit_test_id[] = {
+ { KBUILD_MODNAME },
+ { },
+};
+
+static struct platform_driver nfit_test_driver = {
+ .probe = nfit_test_probe,
+ .remove = nfit_test_remove,
+ .driver = {
+ .name = KBUILD_MODNAME,
+ },
+ .id_table = nfit_test_id,
+};
+
+#ifdef CONFIG_CMA_SIZE_MBYTES
+#define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
+#else
+#define CMA_SIZE_MBYTES 0
+#endif
+
+static __init int nfit_test_init(void)
+{
+ int rc, i;
+
+ if (CMA_SIZE_MBYTES < 584) {
+ pr_err("need CONFIG_CMA_SIZE_MBYTES >= 584 to load\n");
+ return -EINVAL;
+ }
+
+ nfit_test_setup(nfit_test_lookup);
+
+ for (i = 0; i < NUM_NFITS; i++) {
+ struct nfit_test *nfit_test;
+ struct platform_device *pdev;
+
+ nfit_test = kzalloc(sizeof(*nfit_test), GFP_KERNEL);
+ if (!nfit_test) {
+ rc = -ENOMEM;
+ goto err_register;
+ }
+ INIT_LIST_HEAD(&nfit_test->resources);
+ switch (i) {
+ case 0:
+ nfit_test->num_pm = NUM_PM;
+ nfit_test->num_dcr = NUM_DCR;
+ nfit_test->alloc = nfit_test0_alloc;
+ nfit_test->setup = nfit_test0_setup;
+ break;
+ case 1:
+ nfit_test->num_pm = 1;
+ nfit_test->alloc = nfit_test1_alloc;
+ nfit_test->setup = nfit_test1_setup;
+ break;
+ default:
+ rc = -EINVAL;
+ goto err_register;
+ }
+ pdev = &nfit_test->pdev;
+ pdev->name = KBUILD_MODNAME;
+ pdev->id = i;
+ pdev->dev.release = nfit_test_release;
+ rc = platform_device_register(pdev);
+ if (rc) {
+ put_device(&pdev->dev);
+ goto err_register;
+ }
+ instances[i] = nfit_test;
+ }
+
+ rc = platform_driver_register(&nfit_test_driver);
+ if (rc)
+ goto err_register;
+ return 0;
+
+ err_register:
+ for (i = 0; i < NUM_NFITS; i++)
+ if (instances[i])
+ platform_device_unregister(&instances[i]->pdev);
+ return rc;
+}
+
+static __exit void nfit_test_exit(void)
+{
+ int i;
+
+ nfit_test_teardown();
+ for (i = 0; i < NUM_NFITS; i++)
+ platform_device_unregister(&instances[i]->pdev);
+ platform_driver_unregister(&nfit_test_driver);
+}
+
+module_init(nfit_test_init);
+module_exit(nfit_test_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Intel Corporation");
diff --git a/drivers/block/nd/test/nfit_test.h b/drivers/block/nd/test/nfit_test.h
new file mode 100644
index 000000000000..8a300c51b6bc
--- /dev/null
+++ b/drivers/block/nd/test/nfit_test.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __NFIT_TEST_H__
+#define __NFIT_TEST_H__
+
+struct nfit_test_resource {
+ struct list_head list;
+ struct resource *res;
+ void *buf;
+};
+
+typedef struct nfit_test_resource *(*nfit_test_lookup_fn)(resource_size_t);
+void nfit_test_setup(nfit_test_lookup_fn fn);
+void nfit_test_teardown(void);
+#endif
This is the position (device topology) independent method to find all
the NFIT-defined buses in the system. The expectation is that there
will only ever be one "nd" bus discovered via /sys/class/nd/ndctl0.
However, we allow for the possibility of multiple buses, and they will
be listed in discovery order as ndctl0...ndctlN. This character device
hosts the ioctl for passing control messages (as defined by the NFIT
spec). The "format" and "revision" attributes of this device identify
the format of the messages. If an NFIT is registered with an unknown
or unsupported control-message format, the "format" attribute will not
be visible.
Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Makefile | 1
drivers/block/nd/bus.c | 84 +++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/core.c | 71 ++++++++++++++++++++++++++++++++++-
drivers/block/nd/nd-private.h | 5 ++
4 files changed, 160 insertions(+), 1 deletion(-)
create mode 100644 drivers/block/nd/bus.c
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index c6bec0c185c5..7772fb599809 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -20,3 +20,4 @@ obj-$(CONFIG_NFIT_ACPI) += nd_acpi.o
nd_acpi-y := acpi.o
nd-y := core.o
+nd-y += bus.o
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
new file mode 100644
index 000000000000..c27db50511f2
--- /dev/null
+++ b/drivers/block/nd/bus.c
@@ -0,0 +1,84 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/uaccess.h>
+#include <linux/fcntl.h>
+#include <linux/slab.h>
+#include <linux/fs.h>
+#include <linux/io.h>
+#include "nd-private.h"
+#include "nfit.h"
+
+static int nd_major;
+static struct class *nd_class;
+
+int nd_bus_create_ndctl(struct nd_bus *nd_bus)
+{
+ dev_t devt = MKDEV(nd_major, nd_bus->id);
+ struct device *dev;
+
+ dev = device_create(nd_class, &nd_bus->dev, devt, nd_bus, "ndctl%d",
+ nd_bus->id);
+
+ if (IS_ERR(dev)) {
+ dev_dbg(&nd_bus->dev, "failed to register ndctl%d: %ld\n",
+ nd_bus->id, PTR_ERR(dev));
+ return PTR_ERR(dev);
+ }
+ return 0;
+}
+
+void nd_bus_destroy_ndctl(struct nd_bus *nd_bus)
+{
+ device_destroy(nd_class, MKDEV(nd_major, nd_bus->id));
+}
+
+static long nd_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ return -ENXIO;
+}
+
+static const struct file_operations nd_bus_fops = {
+ .owner = THIS_MODULE,
+ .open = nonseekable_open,
+ .unlocked_ioctl = nd_ioctl,
+ .compat_ioctl = nd_ioctl,
+ .llseek = noop_llseek,
+};
+
+int __init nd_bus_init(void)
+{
+ int rc;
+
+ rc = register_chrdev(0, "ndctl", &nd_bus_fops);
+ if (rc < 0)
+ return rc;
+ nd_major = rc;
+
+ nd_class = class_create(THIS_MODULE, "nd");
+ if (IS_ERR(nd_class))
+ goto err_class;
+
+ return 0;
+
+ err_class:
+ unregister_chrdev(nd_major, "ndctl");
+
+ return rc;
+}
+
+void __exit nd_bus_exit(void)
+{
+ class_destroy(nd_class);
+ unregister_chrdev(nd_major, "ndctl");
+}
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index d126799e7ff7..d6a666b9228b 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -14,12 +14,15 @@
#include <linux/export.h>
#include <linux/module.h>
#include <linux/device.h>
+#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/uuid.h>
#include <linux/io.h>
#include "nd-private.h"
#include "nfit.h"
+LIST_HEAD(nd_bus_list);
+DEFINE_MUTEX(nd_bus_list_mutex);
static DEFINE_IDA(nd_ida);
static bool warn_checksum;
@@ -68,6 +71,53 @@ struct nd_bus *to_nd_bus(struct device *dev)
return nd_bus;
}
+static const char *nd_bus_provider(struct nd_bus *nd_bus)
+{
+ struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
+ struct device *parent = nd_bus->dev.parent;
+
+ if (nfit_desc->provider_name)
+ return nfit_desc->provider_name;
+ else if (parent)
+ return dev_name(parent);
+ else
+ return "unknown";
+}
+
+static ssize_t provider_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_bus *nd_bus = to_nd_bus(dev);
+
+ return sprintf(buf, "%s\n", nd_bus_provider(nd_bus));
+}
+static DEVICE_ATTR_RO(provider);
+
+static ssize_t revision_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_bus *nd_bus = to_nd_bus(dev);
+ struct nfit __iomem *nfit = nd_bus->nfit_desc->nfit_base;
+
+ return sprintf(buf, "%d\n", readb(&nfit->revision));
+}
+static DEVICE_ATTR_RO(revision);
+
+static struct attribute *nd_bus_attributes[] = {
+ &dev_attr_provider.attr,
+ &dev_attr_revision.attr,
+ NULL,
+};
+
+static struct attribute_group nd_bus_attribute_group = {
+ .attrs = nd_bus_attributes,
+};
+
+static const struct attribute_group *nd_bus_attribute_groups[] = {
+ &nd_bus_attribute_group,
+ NULL,
+};
+
static void *nd_bus_new(struct device *parent,
struct nfit_bus_descriptor *nfit_desc)
{
@@ -81,6 +131,7 @@ static void *nd_bus_new(struct device *parent,
INIT_LIST_HEAD(&nd_bus->bdws);
INIT_LIST_HEAD(&nd_bus->memdevs);
INIT_LIST_HEAD(&nd_bus->dimms);
+ INIT_LIST_HEAD(&nd_bus->list);
nd_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL);
if (nd_bus->id < 0) {
kfree(nd_bus);
@@ -89,6 +140,7 @@ static void *nd_bus_new(struct device *parent,
nd_bus->nfit_desc = nfit_desc;
nd_bus->dev.parent = parent;
nd_bus->dev.release = nd_bus_release;
+ nd_bus->dev.groups = nd_bus_attribute_groups;
dev_set_name(&nd_bus->dev, "ndbus%d", nd_bus->id);
rc = device_register(&nd_bus->dev);
if (rc) {
@@ -428,6 +480,14 @@ static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
if (rc)
goto err;
+ rc = nd_bus_create_ndctl(nd_bus);
+ if (rc)
+ goto err;
+
+ mutex_lock(&nd_bus_list_mutex);
+ list_add_tail(&nd_bus->list, &nd_bus_list);
+ mutex_unlock(&nd_bus_list_mutex);
+
return nd_bus;
err:
put_device(&nd_bus->dev);
@@ -458,6 +518,13 @@ void nfit_bus_unregister(struct nd_bus *nd_bus)
{
if (!nd_bus)
return;
+
+ mutex_lock(&nd_bus_list_mutex);
+ list_del_init(&nd_bus->list);
+ mutex_unlock(&nd_bus_list_mutex);
+
+ nd_bus_destroy_ndctl(nd_bus);
+
device_unregister(&nd_bus->dev);
}
EXPORT_SYMBOL(nfit_bus_unregister);
@@ -472,11 +539,13 @@ static __init int nd_core_init(void)
BUILD_BUG_ON(sizeof(struct nfit_dcr) != 80);
BUILD_BUG_ON(sizeof(struct nfit_bdw) != 40);
- return 0;
+ return nd_bus_init();
}
static __exit void nd_core_exit(void)
{
+ WARN_ON(!list_empty(&nd_bus_list));
+ nd_bus_exit();
}
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Intel Corporation");
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 0ede8818f320..4bcc9c96cb4d 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -57,5 +57,10 @@ struct nd_mem {
struct nfit_spa __iomem *nfit_spa_bdw;
struct list_head list;
};
+
struct nd_bus *to_nd_bus(struct device *dev);
+int __init nd_bus_init(void);
+void __exit nd_bus_exit(void);
+int nd_bus_create_ndctl(struct nd_bus *nd_bus);
+void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
#endif /* __ND_PRIVATE_H__ */
Register the dimms described in the nfit as devices on an nd_bus, named
"dimmN" where N is a global ida index. The per-bus dimm numbering may
appear contiguous, since we only allow a single nd_bus to be registered
at a time. However, dimm hotplug eventually invalidates this property,
so dimms should be addressed via their NFIT handle.
Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Makefile | 1
drivers/block/nd/bus.c | 62 +++++++++-
drivers/block/nd/core.c | 55 +++++++++
drivers/block/nd/dimm_devs.c | 243 +++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/nd-private.h | 19 +++
5 files changed, 373 insertions(+), 7 deletions(-)
create mode 100644 drivers/block/nd/dimm_devs.c
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 7772fb599809..6b34dd4d4df8 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -21,3 +21,4 @@ nd_acpi-y := acpi.o
nd-y := core.o
nd-y += bus.o
+nd-y += dimm_devs.o
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index c27db50511f2..e24db67001d0 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -13,18 +13,59 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/uaccess.h>
#include <linux/fcntl.h>
+#include <linux/async.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/io.h>
#include "nd-private.h"
#include "nfit.h"
-static int nd_major;
+static int nd_bus_major;
static struct class *nd_class;
+struct bus_type nd_bus_type = {
+ .name = "nd",
+};
+
+static ASYNC_DOMAIN_EXCLUSIVE(nd_async_domain);
+
+static void nd_async_dimm_delete(void *d, async_cookie_t cookie)
+{
+ u32 nfit_handle;
+ struct nd_dimm_delete *del_info = d;
+ struct nd_bus *nd_bus = del_info->nd_bus;
+ struct nd_mem *nd_mem = del_info->nd_mem;
+
+ nfit_handle = readl(&nd_mem->nfit_mem_dcr->nfit_handle);
+
+ mutex_lock(&nd_bus_list_mutex);
+ radix_tree_delete(&nd_bus->dimm_radix, nfit_handle);
+ mutex_unlock(&nd_bus_list_mutex);
+
+ put_device(&nd_bus->dev);
+ kfree(del_info);
+}
+
+void nd_dimm_delete(struct nd_dimm *nd_dimm)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(&nd_dimm->dev);
+ struct nd_dimm_delete *del_info = nd_dimm->del_info;
+
+ del_info->nd_bus = nd_bus;
+ get_device(&nd_bus->dev);
+ del_info->nd_mem = nd_dimm->nd_mem;
+ async_schedule_domain(nd_async_dimm_delete, del_info,
+ &nd_async_domain);
+}
+
+void nd_synchronize(void)
+{
+ async_synchronize_full_domain(&nd_async_domain);
+}
+
int nd_bus_create_ndctl(struct nd_bus *nd_bus)
{
- dev_t devt = MKDEV(nd_major, nd_bus->id);
+ dev_t devt = MKDEV(nd_bus_major, nd_bus->id);
struct device *dev;
dev = device_create(nd_class, &nd_bus->dev, devt, nd_bus, "ndctl%d",
@@ -40,7 +81,7 @@ int nd_bus_create_ndctl(struct nd_bus *nd_bus)
void nd_bus_destroy_ndctl(struct nd_bus *nd_bus)
{
- device_destroy(nd_class, MKDEV(nd_major, nd_bus->id));
+ device_destroy(nd_class, MKDEV(nd_bus_major, nd_bus->id));
}
static long nd_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
@@ -60,10 +101,14 @@ int __init nd_bus_init(void)
{
int rc;
+ rc = bus_register(&nd_bus_type);
+ if (rc)
+ return rc;
+
rc = register_chrdev(0, "ndctl", &nd_bus_fops);
if (rc < 0)
- return rc;
- nd_major = rc;
+ goto err_chrdev;
+ nd_bus_major = rc;
nd_class = class_create(THIS_MODULE, "nd");
if (IS_ERR(nd_class))
@@ -72,7 +117,9 @@ int __init nd_bus_init(void)
return 0;
err_class:
- unregister_chrdev(nd_major, "ndctl");
+ unregister_chrdev(nd_bus_major, "ndctl");
+ err_chrdev:
+ bus_unregister(&nd_bus_type);
return rc;
}
@@ -80,5 +127,6 @@ int __init nd_bus_init(void)
void __exit nd_bus_exit(void)
{
class_destroy(nd_class);
- unregister_chrdev(nd_major, "ndctl");
+ unregister_chrdev(nd_bus_major, "ndctl");
+ bus_unregister(&nd_bus_type);
}
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index d6a666b9228b..a0d1623b3641 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -29,6 +29,24 @@ static bool warn_checksum;
module_param(warn_checksum, bool, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(warn_checksum, "Turn checksum errors into warnings");
+/**
+ * nd_dimm_by_handle - lookup an nd_dimm by its corresponding nfit_handle
+ * @nd_bus: parent bus of the dimm
+ * @nfit_handle: handle from the memory-device-to-spa (nfit_mem) structure
+ *
+ * LOCKING: expect nd_bus_list_mutex() held at entry
+ */
+struct nd_dimm *nd_dimm_by_handle(struct nd_bus *nd_bus, u32 nfit_handle)
+{
+ struct nd_dimm *nd_dimm;
+
+ WARN_ON_ONCE(!mutex_is_locked(&nd_bus_list_mutex));
+ nd_dimm = radix_tree_lookup(&nd_bus->dimm_radix, nfit_handle);
+ if (nd_dimm)
+ get_device(&nd_dimm->dev);
+ return nd_dimm;
+}
+
static void nd_bus_release(struct device *dev)
{
struct nd_bus *nd_bus = container_of(dev, struct nd_bus, dev);
@@ -71,6 +89,19 @@ struct nd_bus *to_nd_bus(struct device *dev)
return nd_bus;
}
+struct nd_bus *walk_to_nd_bus(struct device *nd_dev)
+{
+ struct device *dev;
+
+ for (dev = nd_dev; dev; dev = dev->parent)
+ if (dev->release == nd_bus_release)
+ break;
+ dev_WARN_ONCE(nd_dev, !dev, "invalid dev, not on nd bus\n");
+ if (dev)
+ return to_nd_bus(dev);
+ return NULL;
+}
+
static const char *nd_bus_provider(struct nd_bus *nd_bus)
{
struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
@@ -132,6 +163,7 @@ static void *nd_bus_new(struct device *parent,
INIT_LIST_HEAD(&nd_bus->memdevs);
INIT_LIST_HEAD(&nd_bus->dimms);
INIT_LIST_HEAD(&nd_bus->list);
+ INIT_RADIX_TREE(&nd_bus->dimm_radix, GFP_KERNEL);
nd_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL);
if (nd_bus->id < 0) {
kfree(nd_bus);
@@ -431,6 +463,21 @@ static int nd_mem_init(struct nd_bus *nd_bus)
return 0;
}
+static int child_unregister(struct device *dev, void *data)
+{
+ /*
+ * the singular ndctl class device per bus needs to be
+ * "device_destroy"ed, so skip it here
+ *
+ * i.e. remove classless children
+ */
+ if (dev->class)
+ /* pass */;
+ else
+ device_unregister(dev);
+ return 0;
+}
+
static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
{
struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
@@ -484,11 +531,18 @@ static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
if (rc)
goto err;
+ rc = nd_bus_register_dimms(nd_bus);
+ if (rc)
+ goto err_child;
+
mutex_lock(&nd_bus_list_mutex);
list_add_tail(&nd_bus->list, &nd_bus_list);
mutex_unlock(&nd_bus_list_mutex);
return nd_bus;
+ err_child:
+ device_for_each_child(&nd_bus->dev, NULL, child_unregister);
+ nd_bus_destroy_ndctl(nd_bus);
err:
put_device(&nd_bus->dev);
return NULL;
@@ -523,6 +577,7 @@ void nfit_bus_unregister(struct nd_bus *nd_bus)
list_del_init(&nd_bus->list);
mutex_unlock(&nd_bus_list_mutex);
+ device_for_each_child(&nd_bus->dev, NULL, child_unregister);
nd_bus_destroy_ndctl(nd_bus);
device_unregister(&nd_bus->dev);
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
new file mode 100644
index 000000000000..b74b23c297fb
--- /dev/null
+++ b/drivers/block/nd/dimm_devs.c
@@ -0,0 +1,243 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/io.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include "nd-private.h"
+#include "nfit.h"
+
+static DEFINE_IDA(dimm_ida);
+
+static void nd_dimm_release(struct device *dev)
+{
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+
+ ida_simple_remove(&dimm_ida, nd_dimm->id);
+ nd_dimm_delete(nd_dimm);
+ kfree(nd_dimm);
+}
+
+static struct device_type nd_dimm_device_type = {
+ .name = "nd_dimm",
+ .release = nd_dimm_release,
+};
+
+static bool is_nd_dimm(struct device *dev)
+{
+ return dev->type == &nd_dimm_device_type;
+}
+
+struct nd_dimm *to_nd_dimm(struct device *dev)
+{
+ struct nd_dimm *nd_dimm = container_of(dev, struct nd_dimm, dev);
+
+ WARN_ON(!is_nd_dimm(dev));
+ return nd_dimm;
+}
+
+static struct nfit_mem __iomem *to_nfit_mem(struct device *dev)
+{
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+ struct nd_mem *nd_mem = nd_dimm->nd_mem;
+ struct nfit_mem __iomem *nfit_mem = nd_mem->nfit_mem_dcr;
+
+ return nfit_mem;
+}
+
+static struct nfit_dcr __iomem *to_nfit_dcr(struct device *dev)
+{
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+ struct nd_mem *nd_mem = nd_dimm->nd_mem;
+ struct nfit_dcr __iomem *nfit_dcr = nd_mem->nfit_dcr;
+
+ return nfit_dcr;
+}
+
+static ssize_t handle_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nfit_mem __iomem *nfit_mem = to_nfit_mem(dev);
+
+ return sprintf(buf, "%#x\n", readl(&nfit_mem->nfit_handle));
+}
+static DEVICE_ATTR_RO(handle);
+
+static ssize_t phys_id_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nfit_mem __iomem *nfit_mem = to_nfit_mem(dev);
+
+ return sprintf(buf, "%#x\n", readw(&nfit_mem->phys_id));
+}
+static DEVICE_ATTR_RO(phys_id);
+
+static ssize_t vendor_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nfit_dcr __iomem *nfit_dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "%#x\n", readw(&nfit_dcr->vendor_id));
+}
+static DEVICE_ATTR_RO(vendor);
+
+static ssize_t revision_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nfit_dcr __iomem *nfit_dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "%#x\n", readw(&nfit_dcr->revision_id));
+}
+static DEVICE_ATTR_RO(revision);
+
+static ssize_t device_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nfit_dcr __iomem *nfit_dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "%#x\n", readw(&nfit_dcr->device_id));
+}
+static DEVICE_ATTR_RO(device);
+
+static ssize_t format_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nfit_dcr __iomem *nfit_dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "%#x\n", readw(&nfit_dcr->fic));
+}
+static DEVICE_ATTR_RO(format);
+
+static ssize_t serial_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nfit_dcr __iomem *nfit_dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "%#x\n", readl(&nfit_dcr->serial_number));
+}
+static DEVICE_ATTR_RO(serial);
+
+static struct attribute *nd_dimm_attributes[] = {
+ &dev_attr_handle.attr,
+ &dev_attr_phys_id.attr,
+ &dev_attr_vendor.attr,
+ &dev_attr_device.attr,
+ &dev_attr_format.attr,
+ &dev_attr_serial.attr,
+ &dev_attr_revision.attr,
+ NULL,
+};
+
+static umode_t nd_dimm_attr_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+
+ if (a == &dev_attr_handle.attr || a == &dev_attr_phys_id.attr
+ || to_nfit_dcr(&nd_dimm->dev))
+ return a->mode;
+ else
+ return 0;
+}
+
+static struct attribute_group nd_dimm_attribute_group = {
+ .attrs = nd_dimm_attributes,
+ .is_visible = nd_dimm_attr_visible,
+};
+
+static const struct attribute_group *nd_dimm_attribute_groups[] = {
+ &nd_dimm_attribute_group,
+ NULL,
+};
+
+static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
+ struct nd_mem *nd_mem)
+{
+ struct nd_dimm *nd_dimm = kzalloc(sizeof(*nd_dimm), GFP_KERNEL);
+ struct device *dev;
+ u32 nfit_handle;
+
+ if (!nd_dimm)
+ return NULL;
+
+ nd_dimm->del_info = kzalloc(sizeof(struct nd_dimm_delete), GFP_KERNEL);
+ if (!nd_dimm->del_info)
+ goto err_del_info;
+ nd_dimm->del_info->nd_bus = nd_bus;
+ nd_dimm->del_info->nd_mem = nd_mem;
+
+ nfit_handle = readl(&nd_mem->nfit_mem_dcr->nfit_handle);
+ if (radix_tree_insert(&nd_bus->dimm_radix, nfit_handle, nd_dimm) != 0)
+ goto err_radix;
+
+ nd_dimm->id = ida_simple_get(&dimm_ida, 0, 0, GFP_KERNEL);
+ if (nd_dimm->id < 0)
+ goto err_ida;
+
+ nd_dimm->nd_mem = nd_mem;
+ dev = &nd_dimm->dev;
+ dev_set_name(dev, "nmem%d", nd_dimm->id);
+ dev->parent = &nd_bus->dev;
+ dev->type = &nd_dimm_device_type;
+ dev->bus = &nd_bus_type;
+ dev->groups = nd_dimm_attribute_groups;
+ if (device_register(dev) != 0) {
+ put_device(dev);
+ return NULL;
+ }
+
+ return nd_dimm;
+ err_ida:
+ radix_tree_delete(&nd_bus->dimm_radix, nfit_handle);
+ err_radix:
+ kfree(nd_dimm->del_info);
+ err_del_info:
+ kfree(nd_dimm);
+ return NULL;
+}
+
+int nd_bus_register_dimms(struct nd_bus *nd_bus)
+{
+ int rc = 0, dimm_count = 0;
+ struct nd_mem *nd_mem;
+
+ mutex_lock(&nd_bus_list_mutex);
+ list_for_each_entry(nd_mem, &nd_bus->dimms, list) {
+ struct nd_dimm *nd_dimm;
+ u32 nfit_handle;
+
+ nfit_handle = readl(&nd_mem->nfit_mem_dcr->nfit_handle);
+ nd_dimm = nd_dimm_by_handle(nd_bus, nfit_handle);
+ if (nd_dimm) {
+ /*
+ * If for some reason we find multiple DCRs, the
+ * first one wins
+ */
+ dev_err(&nd_bus->dev, "duplicate DCR detected: %s\n",
+ dev_name(&nd_dimm->dev));
+ put_device(&nd_dimm->dev);
+ continue;
+ }
+
+ if (!nd_dimm_create(nd_bus, nd_mem)) {
+ rc = -ENOMEM;
+ break;
+ }
+ dimm_count++;
+ }
+ mutex_unlock(&nd_bus_list_mutex);
+
+ return rc;
+}
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 4bcc9c96cb4d..58a52c03f5ee 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -12,12 +12,15 @@
*/
#ifndef __ND_PRIVATE_H__
#define __ND_PRIVATE_H__
+#include <linux/radix-tree.h>
#include <linux/device.h>
extern struct list_head nd_bus_list;
extern struct mutex nd_bus_list_mutex;
+extern struct bus_type nd_bus_type;
struct nd_bus {
struct nfit_bus_descriptor *nfit_desc;
+ struct radix_tree_root dimm_radix;
struct list_head memdevs;
struct list_head dimms;
struct list_head spas;
@@ -28,6 +31,16 @@ struct nd_bus {
int id;
};
+struct nd_dimm {
+ struct nd_mem *nd_mem;
+ struct device dev;
+ int id;
+ struct nd_dimm_delete {
+ struct nd_bus *nd_bus;
+ struct nd_mem *nd_mem;
+ } *del_info;
+};
+
struct nd_spa {
struct nfit_spa __iomem *nfit_spa;
struct list_head list;
@@ -58,9 +71,15 @@ struct nd_mem {
struct list_head list;
};
+struct nd_dimm *nd_dimm_by_handle(struct nd_bus *nd_bus, u32 nfit_handle);
struct nd_bus *to_nd_bus(struct device *dev);
+struct nd_dimm *to_nd_dimm(struct device *dev);
+struct nd_bus *walk_to_nd_bus(struct device *nd_dev);
+void nd_synchronize(void);
int __init nd_bus_init(void);
void __exit nd_bus_exit(void);
+void nd_dimm_delete(struct nd_dimm *nd_dimm);
int nd_bus_create_ndctl(struct nd_bus *nd_bus);
void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
+int nd_bus_register_dimms(struct nd_bus *nd_bus);
#endif /* __ND_PRIVATE_H__ */
Most configuration of the nd-subsystem is done via nd-sysfs. However,
the NFIT specification defines a small set of messages that can be
passed to the subsystem via platform-firmware-defined methods. The
command set (as of the current version of the NFIT-DSM spec) is:
NFIT_CMD_SMART: media health and diagnostics
NFIT_CMD_GET_CONFIG_SIZE: size of the label space
NFIT_CMD_GET_CONFIG_DATA: read label
NFIT_CMD_SET_CONFIG_DATA: write label
NFIT_CMD_VENDOR: vendor-specific command passthrough
NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
NFIT_CMD_ARS_START: initiate scrubbing
NFIT_CMD_ARS_QUERY: report on scrubbing state
NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for SMART events
Most of the commands target a specific DIMM. However, the
address-range-scrubbing commands target the entire NFIT-bus / platform.
The 'commands' attribute of an nd-bus or an nd-dimm enumerates the
supported commands for that object.
Cc: <[email protected]>
Cc: Robert Moore <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Reported-by: Nicholas Moulin <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Kconfig | 11 +
drivers/block/nd/acpi.c | 333 +++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/bus.c | 230 ++++++++++++++++++++++++++++
drivers/block/nd/core.c | 17 ++
drivers/block/nd/dimm_devs.c | 69 ++++++++
drivers/block/nd/nd-private.h | 11 +
drivers/block/nd/nd.h | 21 +++
drivers/block/nd/test/nfit.c | 89 +++++++++++
include/uapi/linux/Kbuild | 1
include/uapi/linux/ndctl.h | 178 ++++++++++++++++++++++
10 files changed, 950 insertions(+), 10 deletions(-)
create mode 100644 drivers/block/nd/nd.h
create mode 100644 include/uapi/linux/ndctl.h
diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
index 0106b3807202..6c15d10bf4e0 100644
--- a/drivers/block/nd/Kconfig
+++ b/drivers/block/nd/Kconfig
@@ -42,6 +42,17 @@ config NFIT_ACPI
enables the core to craft ACPI._DSM messages for platform/dimm
configuration.
+config NFIT_ACPI_DEBUG
+ bool "NFIT ACPI: Turn on extra debugging"
+ depends on NFIT_ACPI
+ depends on DYNAMIC_DEBUG
+ default n
+ help
+ Enabling this option causes the nd_acpi driver to dump the
+ input and output buffers of _DSM operations on the ACPI0012
+ device, which can be very verbose. Leave it disabled unless
+ you are debugging a hardware / firmware issue.
+
config NFIT_TEST
tristate "NFIT TEST: Manufactured NFIT for interface testing"
depends on DMA_CMA
diff --git a/drivers/block/nd/acpi.c b/drivers/block/nd/acpi.c
index 48db723d7a90..073ff28fdbfe 100644
--- a/drivers/block/nd/acpi.c
+++ b/drivers/block/nd/acpi.c
@@ -13,8 +13,10 @@
#include <linux/list.h>
#include <linux/acpi.h>
#include <linux/mutex.h>
+#include <linux/ndctl.h>
#include <linux/module.h>
#include "nfit.h"
+#include "nd.h"
enum {
NFIT_ACPI_NOTIFY_TABLE = 0x80,
@@ -26,20 +28,330 @@ struct acpi_nfit {
struct nd_bus *nd_bus;
};
+static struct acpi_nfit *to_acpi_nfit(struct nfit_bus_descriptor *nfit_desc)
+{
+ return container_of(nfit_desc, struct acpi_nfit, nfit_desc);
+}
+
+#define NFIT_ACPI_MAX_ELEM 4
+struct nfit_cmd_desc {
+ int in_num;
+ int out_num;
+ u32 in_sizes[NFIT_ACPI_MAX_ELEM];
+ int out_sizes[NFIT_ACPI_MAX_ELEM];
+};
+
+static const struct nfit_cmd_desc nfit_dimm_descs[] = {
+ [NFIT_CMD_IMPLEMENTED] = { },
+ [NFIT_CMD_SMART] = {
+ .out_num = 2,
+ .out_sizes = { 4, 8, },
+ },
+ [NFIT_CMD_SMART_THRESHOLD] = {
+ .out_num = 2,
+ .out_sizes = { 4, 8, },
+ },
+ [NFIT_CMD_DIMM_FLAGS] = {
+ .out_num = 2,
+ .out_sizes = { 4, 4 },
+ },
+ [NFIT_CMD_GET_CONFIG_SIZE] = {
+ .out_num = 3,
+ .out_sizes = { 4, 4, 4, },
+ },
+ [NFIT_CMD_GET_CONFIG_DATA] = {
+ .in_num = 2,
+ .in_sizes = { 4, 4, },
+ .out_num = 2,
+ .out_sizes = { 4, UINT_MAX, },
+ },
+ [NFIT_CMD_SET_CONFIG_DATA] = {
+ .in_num = 3,
+ .in_sizes = { 4, 4, UINT_MAX, },
+ .out_num = 1,
+ .out_sizes = { 4, },
+ },
+ [NFIT_CMD_VENDOR] = {
+ .in_num = 3,
+ .in_sizes = { 4, 4, UINT_MAX, },
+ .out_num = 3,
+ .out_sizes = { 4, 4, UINT_MAX, },
+ },
+};
+
+static const struct nfit_cmd_desc nfit_acpi_descs[] = {
+ [NFIT_CMD_IMPLEMENTED] = { },
+ [NFIT_CMD_ARS_CAP] = {
+ .in_num = 2,
+ .in_sizes = { 8, 8, },
+ .out_num = 2,
+ .out_sizes = { 4, 4, },
+ },
+ [NFIT_CMD_ARS_START] = {
+ .in_num = 4,
+ .in_sizes = { 8, 8, 2, 6, },
+ .out_num = 1,
+ .out_sizes = { 4, },
+ },
+ [NFIT_CMD_ARS_QUERY] = {
+ .out_num = 2,
+ .out_sizes = { 4, UINT_MAX, },
+ },
+};
+
+static u32 to_cmd_in_size(struct nd_dimm *nd_dimm, int cmd,
+ const struct nfit_cmd_desc *desc, int idx, void *buf)
+{
+ if (idx >= desc->in_num)
+ return UINT_MAX;
+
+ if (desc->in_sizes[idx] < UINT_MAX)
+ return desc->in_sizes[idx];
+
+ if (nd_dimm && cmd == NFIT_CMD_SET_CONFIG_DATA && idx == 2) {
+ struct nfit_cmd_set_config_hdr *hdr = buf;
+
+ return hdr->in_length;
+ } else if (nd_dimm && cmd == NFIT_CMD_VENDOR && idx == 2) {
+ struct nfit_cmd_vendor_hdr *hdr = buf;
+
+ return hdr->in_length;
+ }
+
+ return UINT_MAX;
+}
+
+static u32 to_cmd_out_size(struct nd_dimm *nd_dimm, int cmd,
+ const struct nfit_cmd_desc *desc, int idx,
+ void *buf, u32 out_length, u32 offset)
+{
+ if (idx >= desc->out_num)
+ return UINT_MAX;
+
+ if (desc->out_sizes[idx] < UINT_MAX)
+ return desc->out_sizes[idx];
+
+ if (offset >= out_length)
+ return UINT_MAX;
+
+ if (nd_dimm && cmd == NFIT_CMD_GET_CONFIG_DATA && idx == 1)
+ return out_length - offset;
+ else if (nd_dimm && cmd == NFIT_CMD_VENDOR && idx == 2)
+ return out_length - offset;
+ else if (!nd_dimm && cmd == NFIT_CMD_ARS_QUERY && idx == 1)
+ return out_length - offset;
+
+ return UINT_MAX;
+}
+
+static u8 nd_acpi_uuids[2][16]; /* initialized at nd_acpi_init */
+
+static u8 *nd_acpi_bus_uuid(void)
+{
+ return nd_acpi_uuids[0];
+}
+
+static u8 *nd_acpi_dimm_uuid(void)
+{
+ return nd_acpi_uuids[1];
+}
+
static int nd_acpi_ctl(struct nfit_bus_descriptor *nfit_desc,
struct nd_dimm *nd_dimm, unsigned int cmd, void *buf,
unsigned int buf_len)
{
- return -ENOTTY;
+ struct acpi_nfit *nfit = to_acpi_nfit(nfit_desc);
+ union acpi_object in_obj, in_buf, *out_obj;
+ const struct nfit_cmd_desc *desc = NULL;
+ struct device *dev = &nfit->dev->dev;
+ const char *cmd_name, *dimm_name;
+ unsigned long dsm_mask;
+ acpi_handle handle;
+ u32 offset;
+ int rc, i;
+ u8 *uuid;
+
+ if (nd_dimm) {
+ struct acpi_device *adev = nd_dimm_get_pdata(nd_dimm);
+
+ if (cmd < ARRAY_SIZE(nfit_dimm_descs))
+ desc = &nfit_dimm_descs[cmd];
+ cmd_name = nfit_dimm_cmd_name(cmd);
+ dsm_mask = nd_dimm_get_dsm_mask(nd_dimm);
+ handle = adev->handle;
+ uuid = nd_acpi_dimm_uuid();
+ dimm_name = dev_name(&adev->dev);
+ } else {
+ if (cmd < ARRAY_SIZE(nfit_acpi_descs))
+ desc = &nfit_acpi_descs[cmd];
+ cmd_name = nfit_bus_cmd_name(cmd);
+ dsm_mask = nfit_desc->dsm_mask;
+ handle = nfit->dev->handle;
+ uuid = nd_acpi_bus_uuid();
+ dimm_name = "bus";
+ }
+
+ if (!desc || (cmd && (desc->out_num + desc->in_num == 0)))
+ return -ENOTTY;
+
+ if (!test_bit(cmd, &dsm_mask))
+ return -ENOTTY;
+
+ in_obj.type = ACPI_TYPE_PACKAGE;
+ in_obj.package.count = 1;
+ in_obj.package.elements = &in_buf;
+ in_buf.type = ACPI_TYPE_BUFFER;
+ in_buf.buffer.pointer = buf;
+ in_buf.buffer.length = 0;
+
+ /* double check that the nfit_acpi_cmd_descs table is self consistent */
+ if (desc->in_num > NFIT_ACPI_MAX_ELEM) {
+ WARN_ON_ONCE(1);
+ return -ENXIO;
+ }
+
+ for (i = 0; i < desc->in_num; i++) {
+ u32 in_size;
+
+ in_size = to_cmd_in_size(nd_dimm, cmd, desc, i, buf);
+ if (in_size == UINT_MAX) {
+ dev_err(dev, "%s:%s unknown input size cmd: %s field: %d\n",
+ __func__, dimm_name, cmd_name, i);
+ return -ENXIO;
+ }
+ in_buf.buffer.length += in_size;
+ if (in_buf.buffer.length > buf_len) {
+ dev_err(dev, "%s:%s input underrun cmd: %s field: %d\n",
+ __func__, dimm_name, cmd_name, i);
+ return -ENXIO;
+ }
+ }
+
+ dev_dbg(dev, "%s:%s cmd: %s input length: %d\n", __func__, dimm_name,
+ cmd_name, in_buf.buffer.length);
+ if (IS_ENABLED(CONFIG_NFIT_ACPI_DEBUG))
+ print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4,
+ 4, in_buf.buffer.pointer, min_t(u32, 128,
+ in_buf.buffer.length), true);
+
+ out_obj = acpi_evaluate_dsm(handle, uuid, 1, cmd, &in_obj);
+ if (!out_obj) {
+ dev_dbg(dev, "%s:%s _DSM failed cmd: %s\n", __func__, dimm_name,
+ cmd_name);
+ return -EINVAL;
+ }
+
+ if (out_obj->package.type != ACPI_TYPE_BUFFER) {
+ dev_dbg(dev, "%s:%s unexpected output object type cmd: %s type: %d\n",
+ __func__, dimm_name, cmd_name, out_obj->type);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ dev_dbg(dev, "%s:%s cmd: %s output length: %d\n", __func__, dimm_name,
+ cmd_name, out_obj->buffer.length);
+ if (IS_ENABLED(CONFIG_NFIT_ACPI_DEBUG))
+ print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4,
+ 4, out_obj->buffer.pointer, min_t(u32, 128,
+ out_obj->buffer.length), true);
+
+ for (i = 0, offset = 0; i < desc->out_num; i++) {
+ u32 out_size = to_cmd_out_size(nd_dimm, cmd, desc, i, buf,
+ out_obj->buffer.length, offset);
+
+ if (out_size == UINT_MAX) {
+ dev_dbg(dev, "%s:%s unknown output size cmd: %s field: %d\n",
+ __func__, dimm_name, cmd_name, i);
+ break;
+ }
+
+ if (offset + out_size > out_obj->buffer.length) {
+ dev_dbg(dev, "%s:%s output object underflow cmd: %s field: %d\n",
+ __func__, dimm_name, cmd_name, i);
+ break;
+ }
+
+ if (in_buf.buffer.length + offset + out_size > buf_len) {
+ dev_dbg(dev, "%s:%s output overrun cmd: %s field: %d\n",
+ __func__, dimm_name, cmd_name, i);
+ rc = -ENXIO;
+ goto out;
+ }
+ memcpy(buf + in_buf.buffer.length + offset,
+ out_obj->buffer.pointer + offset, out_size);
+ offset += out_size;
+ }
+ if (offset + in_buf.buffer.length < buf_len) {
+ if (i >= 1) {
+ /*
+ * status valid, return the number of bytes left
+ * unfilled in the output buffer
+ */
+ rc = buf_len - offset - in_buf.buffer.length;
+ } else {
+ dev_err(dev, "%s:%s underrun cmd: %s buf_len: %d out_len: %d\n",
+ __func__, dimm_name, cmd_name, buf_len, offset);
+ rc = -ENXIO;
+ }
+ } else
+ rc = 0;
+
+ out:
+ ACPI_FREE(out_obj);
+
+ return rc;
+}
+
+static int nd_acpi_add_dimm(struct nfit_bus_descriptor *nfit_desc,
+ struct nd_dimm *nd_dimm)
+{
+ struct acpi_nfit *nfit = to_acpi_nfit(nfit_desc);
+ u32 nfit_handle = to_nfit_handle(nd_dimm);
+ struct device *dev = &nfit->dev->dev;
+ struct acpi_device *acpi_dimm;
+ unsigned long dsm_mask = 0;
+ u8 *uuid = nd_acpi_dimm_uuid();
+ unsigned long long sta;
+ int i, rc = -ENODEV;
+ acpi_status status;
+
+ acpi_dimm = acpi_find_child_device(nfit->dev, nfit_handle, false);
+ if (!acpi_dimm) {
+ dev_err(dev, "no ACPI.NFIT device with _ADR %#x, disabling...\n",
+ nfit_handle);
+ return -ENODEV;
+ }
+
+ status = acpi_evaluate_integer(acpi_dimm->handle, "_STA", NULL, &sta);
+ if (status == AE_NOT_FOUND)
+ dev_err(dev, "%s missing _STA, disabling...\n",
+ dev_name(&acpi_dimm->dev));
+ else if (ACPI_FAILURE(status))
+ dev_err(dev, "%s failed to retrieve _STA, disabling...\n",
+ dev_name(&acpi_dimm->dev));
+ else if ((sta & ACPI_STA_DEVICE_ENABLED) == 0)
+ dev_info(dev, "%s disabled by firmware\n",
+ dev_name(&acpi_dimm->dev));
+ else
+ rc = 0;
+
+ for (i = NFIT_CMD_SMART; i <= NFIT_CMD_VENDOR; i++)
+ if (acpi_check_dsm(acpi_dimm->handle, uuid, 1, 1ULL << i))
+ set_bit(i, &dsm_mask);
+ nd_dimm_set_dsm_mask(nd_dimm, dsm_mask);
+ nd_dimm_set_pdata(nd_dimm, acpi_dimm);
+ return rc;
}
static int nd_acpi_add(struct acpi_device *dev)
{
struct nfit_bus_descriptor *nfit_desc;
struct acpi_table_header *tbl;
+ u8 *uuid = nd_acpi_bus_uuid();
acpi_status status = AE_OK;
struct acpi_nfit *nfit;
acpi_size sz;
+ int i;
status = acpi_get_table_with_size("NFIT", 0, &tbl, &sz);
if (ACPI_FAILURE(status)) {
@@ -56,6 +368,11 @@ static int nd_acpi_add(struct acpi_device *dev)
nfit_desc->nfit_size = sz;
nfit_desc->provider_name = "ACPI.NFIT";
nfit_desc->nfit_ctl = nd_acpi_ctl;
+ nfit_desc->add_dimm = nd_acpi_add_dimm;
+
+ for (i = NFIT_CMD_ARS_CAP; i <= NFIT_CMD_ARS_QUERY; i++)
+ if (acpi_check_dsm(dev->handle, uuid, 1, 1ULL << i))
+ set_bit(i, &nfit_desc->dsm_mask);
nfit->nd_bus = nfit_bus_register(&dev->dev, nfit_desc);
if (!nfit->nd_bus)
@@ -98,6 +415,20 @@ static struct acpi_driver nd_acpi_driver = {
static __init int nd_acpi_init(void)
{
+ char *uuids[] = {
+ /* bus interface */
+ "2f10e7a4-9e91-11e4-89d3-123b93f75cba",
+ /* per-dimm interface */
+ "4309ac30-0d11-11e4-9191-0800200c9a66",
+ };
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(uuids); i++)
+ if (acpi_str_to_uuid(uuids[i], nd_acpi_uuids[i]) != AE_OK) {
+ WARN_ON_ONCE(1);
+ return -ENXIO;
+ }
+
return acpi_bus_register_driver(&nd_acpi_driver);
}
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index e24db67001d0..67a0624c265b 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -11,15 +11,20 @@
* General Public License for more details.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/vmalloc.h>
#include <linux/uaccess.h>
#include <linux/fcntl.h>
#include <linux/async.h>
+#include <linux/ndctl.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/io.h>
+#include <linux/mm.h>
#include "nd-private.h"
#include "nfit.h"
+#include "nd.h"
+int nd_dimm_major;
static int nd_bus_major;
static struct class *nd_class;
@@ -84,19 +89,228 @@ void nd_bus_destroy_ndctl(struct nd_bus *nd_bus)
device_destroy(nd_class, MKDEV(nd_bus_major, nd_bus->id));
}
+static int __nd_ioctl(struct nd_bus *nd_bus, struct nd_dimm *nd_dimm,
+ int read_only, unsigned int cmd, unsigned long arg)
+{
+ struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
+ void __user *p = (void __user *) arg;
+ unsigned long dsm_mask;
+ size_t buf_len = 0;
+ void *buf = NULL;
+ int rc;
+
+ /* check if the command is supported */
+ dsm_mask = nd_dimm ? nd_dimm->dsm_mask : nfit_desc->dsm_mask;
+ if (!test_bit(_IOC_NR(cmd), &dsm_mask))
+ return -ENXIO;
+
+ /* fail write commands (when read-only), or unknown commands */
+ switch (cmd) {
+ case NFIT_IOCTL_VENDOR:
+ case NFIT_IOCTL_SET_CONFIG_DATA:
+ case NFIT_IOCTL_ARS_START:
+ if (read_only)
+ return -EPERM;
+ /* fallthrough */
+ case NFIT_IOCTL_SMART:
+ case NFIT_IOCTL_DIMM_FLAGS:
+ case NFIT_IOCTL_GET_CONFIG_SIZE:
+ case NFIT_IOCTL_GET_CONFIG_DATA:
+ case NFIT_IOCTL_ARS_CAP:
+ case NFIT_IOCTL_ARS_QUERY:
+ case NFIT_IOCTL_SMART_THRESHOLD:
+ break;
+ default:
+ pr_debug("%s: unknown cmd: %d\n", __func__, _IOC_NR(cmd));
+ return -ENOTTY;
+ }
+
+ /* validate input buffer / determine size */
+ switch (cmd) {
+ case NFIT_IOCTL_SMART:
+ buf_len = sizeof(struct nfit_cmd_smart);
+ break;
+ case NFIT_IOCTL_DIMM_FLAGS:
+ buf_len = sizeof(struct nfit_cmd_dimm_flags);
+ break;
+ case NFIT_IOCTL_VENDOR: {
+ struct nfit_cmd_vendor_hdr nfit_cmd_v;
+ struct nfit_cmd_vendor_tail nfit_cmd_vt;
+
+ if (!access_ok(VERIFY_WRITE, p, sizeof(nfit_cmd_v)))
+ return -EFAULT;
+ if (copy_from_user(&nfit_cmd_v, p, sizeof(nfit_cmd_v)))
+ return -EFAULT;
+ buf_len = sizeof(nfit_cmd_v) + nfit_cmd_v.in_length;
+ if (!access_ok(VERIFY_WRITE, p + buf_len, sizeof(nfit_cmd_vt)))
+ return -EFAULT;
+ if (copy_from_user(&nfit_cmd_vt, p + buf_len,
+ sizeof(nfit_cmd_vt)))
+ return -EFAULT;
+ buf_len += sizeof(nfit_cmd_vt) + nfit_cmd_vt.out_length;
+ break;
+ }
+ case NFIT_IOCTL_SET_CONFIG_DATA: {
+ struct nfit_cmd_set_config_hdr nfit_cmd_set;
+
+ if (!access_ok(VERIFY_WRITE, p, sizeof(nfit_cmd_set)))
+ return -EFAULT;
+ if (copy_from_user(&nfit_cmd_set, p, sizeof(nfit_cmd_set)))
+ return -EFAULT;
+ /* include input buffer size and trailing status */
+ buf_len = sizeof(nfit_cmd_set) + nfit_cmd_set.in_length + 4;
+ break;
+ }
+ case NFIT_IOCTL_ARS_START:
+ buf_len = sizeof(struct nfit_cmd_ars_start);
+ break;
+ case NFIT_IOCTL_GET_CONFIG_SIZE:
+ buf_len = sizeof(struct nfit_cmd_get_config_size);
+ break;
+ case NFIT_IOCTL_GET_CONFIG_DATA: {
+ struct nfit_cmd_get_config_data_hdr nfit_cmd_get;
+
+ if (!access_ok(VERIFY_WRITE, p, sizeof(nfit_cmd_get)))
+ return -EFAULT;
+ if (copy_from_user(&nfit_cmd_get, p, sizeof(nfit_cmd_get)))
+ return -EFAULT;
+ buf_len = sizeof(nfit_cmd_get) + nfit_cmd_get.in_length;
+ break;
+ }
+ case NFIT_IOCTL_ARS_CAP:
+ buf_len = sizeof(struct nfit_cmd_ars_cap);
+ break;
+ case NFIT_IOCTL_ARS_QUERY: {
+ struct nfit_cmd_ars_query nfit_cmd_query;
+
+ if (!access_ok(VERIFY_WRITE, p, sizeof(nfit_cmd_query)))
+ return -EFAULT;
+ if (copy_from_user(&nfit_cmd_query, p, sizeof(nfit_cmd_query)))
+ return -EFAULT;
+ buf_len = sizeof(nfit_cmd_query) + nfit_cmd_query.out_length
+ - offsetof(struct nfit_cmd_ars_query, out_length);
+ break;
+ }
+ case NFIT_IOCTL_SMART_THRESHOLD:
+ buf_len = sizeof(struct nfit_cmd_smart_threshold);
+ break;
+ }
+
+ if (!access_ok(VERIFY_WRITE, p, sizeof(buf_len)))
+ return -EFAULT;
+
+ if (buf_len > ND_IOCTL_MAX_BUFLEN) {
+ pr_debug("%s: buf_len: %zd > %d\n",
+ __func__, buf_len, ND_IOCTL_MAX_BUFLEN);
+ return -EINVAL;
+ }
+
+ if (buf_len < KMALLOC_MAX_SIZE)
+ buf = kmalloc(buf_len, GFP_KERNEL);
+
+ if (!buf)
+ buf = vmalloc(buf_len);
+
+ if (!buf)
+ return -ENOMEM;
+
+ if (copy_from_user(buf, p, buf_len)) {
+ rc = -EFAULT;
+ goto out;
+ }
+
+ rc = nfit_desc->nfit_ctl(nfit_desc, nd_dimm, _IOC_NR(cmd), buf, buf_len);
+ if (rc < 0)
+ goto out;
+ if (copy_to_user(p, buf, buf_len))
+ rc = -EFAULT;
+ out:
+ if (is_vmalloc_addr(buf))
+ vfree(buf);
+ else
+ kfree(buf);
+ return rc;
+}
+
static long nd_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
- return -ENXIO;
+ long id = (long) file->private_data;
+ int rc = -ENXIO, read_only;
+ struct nd_bus *nd_bus;
+
+ read_only = (O_RDWR != (file->f_flags & O_ACCMODE));
+ mutex_lock(&nd_bus_list_mutex);
+ list_for_each_entry(nd_bus, &nd_bus_list, list) {
+ if (nd_bus->id == id) {
+ rc = __nd_ioctl(nd_bus, NULL, read_only, cmd, arg);
+ break;
+ }
+ }
+ mutex_unlock(&nd_bus_list_mutex);
+
+ return rc;
+}
+
+static int match_dimm(struct device *dev, void *data)
+{
+ long id = (long) data;
+
+ if (is_nd_dimm(dev)) {
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+
+ return nd_dimm->id == id;
+ }
+
+ return 0;
+}
+
+static long nd_dimm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ int rc = -ENXIO, read_only;
+ struct nd_bus *nd_bus;
+
+ read_only = (O_RDWR != (file->f_flags & O_ACCMODE));
+ mutex_lock(&nd_bus_list_mutex);
+ list_for_each_entry(nd_bus, &nd_bus_list, list) {
+ struct device *dev = device_find_child(&nd_bus->dev,
+ file->private_data, match_dimm);
+
+ if (!dev)
+ continue;
+
+ rc = __nd_ioctl(nd_bus, to_nd_dimm(dev), read_only, cmd, arg);
+ put_device(dev);
+ break;
+ }
+ mutex_unlock(&nd_bus_list_mutex);
+
+ return rc;
+}
+
+static int nd_open(struct inode *inode, struct file *file)
+{
+ long minor = iminor(inode);
+
+ file->private_data = (void *) minor;
+ return 0;
}
static const struct file_operations nd_bus_fops = {
.owner = THIS_MODULE,
- .open = nonseekable_open,
+ .open = nd_open,
.unlocked_ioctl = nd_ioctl,
.compat_ioctl = nd_ioctl,
.llseek = noop_llseek,
};
+static const struct file_operations nd_dimm_fops = {
+ .owner = THIS_MODULE,
+ .open = nd_open,
+ .unlocked_ioctl = nd_dimm_ioctl,
+ .compat_ioctl = nd_dimm_ioctl,
+ .llseek = noop_llseek,
+};
+
int __init nd_bus_init(void)
{
int rc;
@@ -107,9 +321,14 @@ int __init nd_bus_init(void)
rc = register_chrdev(0, "ndctl", &nd_bus_fops);
if (rc < 0)
- goto err_chrdev;
+ goto err_bus_chrdev;
nd_bus_major = rc;
+ rc = register_chrdev(0, "dimmctl", &nd_dimm_fops);
+ if (rc < 0)
+ goto err_dimm_chrdev;
+ nd_dimm_major = rc;
+
nd_class = class_create(THIS_MODULE, "nd");
if (IS_ERR(nd_class))
goto err_class;
@@ -117,8 +336,10 @@ int __init nd_bus_init(void)
return 0;
err_class:
+ unregister_chrdev(nd_dimm_major, "dimmctl");
+ err_dimm_chrdev:
unregister_chrdev(nd_bus_major, "ndctl");
- err_chrdev:
+ err_bus_chrdev:
bus_unregister(&nd_bus_type);
return rc;
@@ -128,5 +349,6 @@ void __exit nd_bus_exit(void)
{
class_destroy(nd_class);
unregister_chrdev(nd_bus_major, "ndctl");
+ unregister_chrdev(nd_dimm_major, "dimmctl");
bus_unregister(&nd_bus_type);
}
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index a0d1623b3641..0df1e82fcb18 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -14,12 +14,14 @@
#include <linux/export.h>
#include <linux/module.h>
#include <linux/device.h>
+#include <linux/ndctl.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/uuid.h>
#include <linux/io.h>
#include "nd-private.h"
#include "nfit.h"
+#include "nd.h"
LIST_HEAD(nd_bus_list);
DEFINE_MUTEX(nd_bus_list_mutex);
@@ -102,6 +104,20 @@ struct nd_bus *walk_to_nd_bus(struct device *nd_dev)
return NULL;
}
+static ssize_t commands_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int cmd, len = 0;
+ struct nd_bus *nd_bus = to_nd_bus(dev);
+ struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
+
+ for_each_set_bit(cmd, &nfit_desc->dsm_mask, BITS_PER_LONG)
+ len += sprintf(buf + len, "%s ", nfit_bus_cmd_name(cmd));
+ len += sprintf(buf + len, "\n");
+ return len;
+}
+static DEVICE_ATTR_RO(commands);
+
static const char *nd_bus_provider(struct nd_bus *nd_bus)
{
struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
@@ -135,6 +151,7 @@ static ssize_t revision_show(struct device *dev,
static DEVICE_ATTR_RO(revision);
static struct attribute *nd_bus_attributes[] = {
+ &dev_attr_commands.attr,
&dev_attr_provider.attr,
&dev_attr_revision.attr,
NULL,
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index b74b23c297fb..b73006cfbf66 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -12,12 +12,14 @@
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/device.h>
+#include <linux/ndctl.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include "nd-private.h"
#include "nfit.h"
+#include "nd.h"
static DEFINE_IDA(dimm_ida);
@@ -35,7 +37,7 @@ static struct device_type nd_dimm_device_type = {
.release = nd_dimm_release,
};
-static bool is_nd_dimm(struct device *dev)
+bool is_nd_dimm(struct device *dev)
{
return dev->type == &nd_dimm_device_type;
}
@@ -66,12 +68,48 @@ static struct nfit_dcr __iomem *to_nfit_dcr(struct device *dev)
return nfit_dcr;
}
+u32 to_nfit_handle(struct nd_dimm *nd_dimm)
+{
+ struct nfit_mem __iomem *nfit_mem = nd_dimm->nd_mem->nfit_mem_dcr;
+
+ return readl(&nfit_mem->nfit_handle);
+}
+EXPORT_SYMBOL(to_nfit_handle);
+
+void *nd_dimm_get_pdata(struct nd_dimm *nd_dimm)
+{
+ if (nd_dimm)
+ return nd_dimm->provider_data;
+ return NULL;
+}
+EXPORT_SYMBOL(nd_dimm_get_pdata);
+
+void nd_dimm_set_pdata(struct nd_dimm *nd_dimm, void *data)
+{
+ if (nd_dimm)
+ nd_dimm->provider_data = data;
+}
+EXPORT_SYMBOL(nd_dimm_set_pdata);
+
+unsigned long nd_dimm_get_dsm_mask(struct nd_dimm *nd_dimm)
+{
+ if (nd_dimm)
+ return nd_dimm->dsm_mask;
+ return 0;
+}
+EXPORT_SYMBOL(nd_dimm_get_dsm_mask);
+
+void nd_dimm_set_dsm_mask(struct nd_dimm *nd_dimm, unsigned long dsm_mask)
+{
+ if (nd_dimm)
+ nd_dimm->dsm_mask = dsm_mask;
+}
+EXPORT_SYMBOL(nd_dimm_set_dsm_mask);
+
static ssize_t handle_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
- struct nfit_mem __iomem *nfit_mem = to_nfit_mem(dev);
-
- return sprintf(buf, "%#x\n", readl(&nfit_mem->nfit_handle));
+ return sprintf(buf, "%#x\n", to_nfit_handle(to_nd_dimm(dev)));
}
static DEVICE_ATTR_RO(handle);
@@ -129,6 +167,19 @@ static ssize_t serial_show(struct device *dev,
}
static DEVICE_ATTR_RO(serial);
+static ssize_t commands_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+ int cmd, len = 0;
+
+ for_each_set_bit(cmd, &nd_dimm->dsm_mask, BITS_PER_LONG)
+ len += sprintf(buf + len, "%s ", nfit_dimm_cmd_name(cmd));
+ len += sprintf(buf + len, "\n");
+ return len;
+}
+static DEVICE_ATTR_RO(commands);
+
static struct attribute *nd_dimm_attributes[] = {
&dev_attr_handle.attr,
&dev_attr_phys_id.attr,
@@ -137,6 +188,7 @@ static struct attribute *nd_dimm_attributes[] = {
&dev_attr_format.attr,
&dev_attr_serial.attr,
&dev_attr_revision.attr,
+ &dev_attr_commands.attr,
NULL,
};
@@ -166,6 +218,7 @@ static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
struct nd_mem *nd_mem)
{
struct nd_dimm *nd_dimm = kzalloc(sizeof(*nd_dimm), GFP_KERNEL);
+ struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
struct device *dev;
u32 nfit_handle;
@@ -193,6 +246,14 @@ static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
dev->type = &nd_dimm_device_type;
dev->bus = &nd_bus_type;
dev->groups = nd_dimm_attribute_groups;
+ dev->devt = MKDEV(nd_dimm_major, nd_dimm->id);
+ if (nfit_desc->add_dimm)
+ if (nfit_desc->add_dimm(nfit_desc, nd_dimm) != 0) {
+ device_initialize(dev);
+ put_device(dev);
+ return NULL;
+ }
+
if (device_register(dev) != 0) {
put_device(dev);
return NULL;
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 58a52c03f5ee..31239942b724 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -14,9 +14,17 @@
#define __ND_PRIVATE_H__
#include <linux/radix-tree.h>
#include <linux/device.h>
+#include <linux/sizes.h>
+
extern struct list_head nd_bus_list;
extern struct mutex nd_bus_list_mutex;
extern struct bus_type nd_bus_type;
+extern int nd_dimm_major;
+
+enum {
+ /* need to set a limit somewhere, but yes, this is likely overkill */
+ ND_IOCTL_MAX_BUFLEN = SZ_4M,
+};
struct nd_bus {
struct nfit_bus_descriptor *nfit_desc;
@@ -32,8 +40,10 @@ struct nd_bus {
};
struct nd_dimm {
+ unsigned long dsm_mask;
struct nd_mem *nd_mem;
struct device dev;
+ void *provider_data;
int id;
struct nd_dimm_delete {
struct nd_bus *nd_bus;
@@ -72,6 +82,7 @@ struct nd_mem {
};
struct nd_dimm *nd_dimm_by_handle(struct nd_bus *nd_bus, u32 nfit_handle);
+bool is_nd_dimm(struct device *dev);
struct nd_bus *to_nd_bus(struct device *dev);
struct nd_dimm *to_nd_dimm(struct device *dev);
struct nd_bus *walk_to_nd_bus(struct device *nd_dev);
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
new file mode 100644
index 000000000000..bf6313fffd4c
--- /dev/null
+++ b/drivers/block/nd/nd.h
@@ -0,0 +1,21 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __ND_H__
+#define __ND_H__
+struct nd_dimm;
+u32 to_nfit_handle(struct nd_dimm *nd_dimm);
+void *nd_dimm_get_pdata(struct nd_dimm *nd_dimm);
+void nd_dimm_set_pdata(struct nd_dimm *nd_dimm, void *data);
+unsigned long nd_dimm_get_dsm_mask(struct nd_dimm *nd_dimm);
+void nd_dimm_set_dsm_mask(struct nd_dimm *nd_dimm, unsigned long dsm_mask);
+#endif /* __ND_H__ */
diff --git a/drivers/block/nd/test/nfit.c b/drivers/block/nd/test/nfit.c
index 61227dec111a..e9fb9da765b9 100644
--- a/drivers/block/nd/test/nfit.c
+++ b/drivers/block/nd/test/nfit.c
@@ -14,10 +14,12 @@
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>
+#include <linux/ndctl.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include "nfit_test.h"
#include "../nfit.h"
+#include "../nd.h"
#include <asm-generic/io-64-nonatomic-lo-hi.h>
@@ -138,11 +140,94 @@ static struct nfit_test *to_nfit_test(struct device *dev)
return container_of(pdev, struct nfit_test, pdev);
}
+static int nfit_test_add_dimm(struct nfit_bus_descriptor *nfit_desc,
+ struct nd_dimm *nd_dimm)
+{
+ u32 nfit_handle = to_nfit_handle(nd_dimm);
+ unsigned long dsm_mask = 0;
+ long i;
+
+ for (i = 0; i < ARRAY_SIZE(handle); i++)
+ if (nfit_handle == handle[i])
+ break;
+ if (i >= ARRAY_SIZE(handle))
+ return -EINVAL;
+
+ set_bit(NFIT_CMD_GET_CONFIG_SIZE, &dsm_mask);
+ set_bit(NFIT_CMD_GET_CONFIG_DATA, &dsm_mask);
+ set_bit(NFIT_CMD_SET_CONFIG_DATA, &dsm_mask);
+ nd_dimm_set_dsm_mask(nd_dimm, dsm_mask);
+ nd_dimm_set_pdata(nd_dimm, (void *) i);
+ return 0;
+}
+
static int nfit_test_ctl(struct nfit_bus_descriptor *nfit_desc,
struct nd_dimm *nd_dimm, unsigned int cmd, void *buf,
unsigned int buf_len)
{
- return -ENOTTY;
+ struct nfit_test *t = container_of(nfit_desc, typeof(*t), nfit_desc);
+ unsigned long dsm_mask = nd_dimm_get_dsm_mask(nd_dimm);
+ int i, rc;
+
+ if (!nd_dimm || !test_bit(cmd, &dsm_mask))
+ return -ENXIO;
+
+ /* lookup label space for the given dimm */
+ i = (long) nd_dimm_get_pdata(nd_dimm);
+
+ switch (cmd) {
+ case NFIT_CMD_GET_CONFIG_SIZE: {
+ struct nfit_cmd_get_config_size *nfit_cmd = buf;
+
+ if (buf_len < sizeof(*nfit_cmd))
+ return -EINVAL;
+ nfit_cmd->status = 0;
+ nfit_cmd->config_size = LABEL_SIZE;
+ nfit_cmd->max_xfer = SZ_4K;
+ rc = 0;
+ break;
+ }
+ case NFIT_CMD_GET_CONFIG_DATA: {
+ struct nfit_cmd_get_config_data_hdr *nfit_cmd = buf;
+ unsigned int len, offset = nfit_cmd->in_offset;
+
+ if (buf_len < sizeof(*nfit_cmd))
+ return -EINVAL;
+ if (offset >= LABEL_SIZE)
+ return -EINVAL;
+ if (nfit_cmd->in_length + sizeof(*nfit_cmd) > buf_len)
+ return -EINVAL;
+
+ nfit_cmd->status = 0;
+ len = min(nfit_cmd->in_length, LABEL_SIZE - offset);
+ memcpy(nfit_cmd->out_buf, t->label[i] + offset, len);
+ rc = buf_len - sizeof(*nfit_cmd) - len;
+ break;
+ }
+ case NFIT_CMD_SET_CONFIG_DATA: {
+ struct nfit_cmd_set_config_hdr *nfit_cmd = buf;
+ unsigned int len, offset = nfit_cmd->in_offset;
+ u32 *status;
+
+ if (buf_len < sizeof(*nfit_cmd))
+ return -EINVAL;
+ if (offset >= LABEL_SIZE)
+ return -EINVAL;
+ if (nfit_cmd->in_length + sizeof(*nfit_cmd) + 4 > buf_len)
+ return -EINVAL;
+
+ status = buf + nfit_cmd->in_length + sizeof(*nfit_cmd);
+ *status = 0;
+ len = min(nfit_cmd->in_length, LABEL_SIZE - offset);
+ memcpy(t->label[i] + offset, nfit_cmd->in_buf, len);
+ rc = buf_len - sizeof(*nfit_cmd) - (len + 4);
+ break;
+ }
+ default:
+ return -ENOTTY;
+ }
+
+ return rc;
}
static DEFINE_SPINLOCK(nfit_test_lock);
@@ -234,6 +319,7 @@ static int nfit_test0_alloc(struct nfit_test *t)
t->label[i] = alloc_coherent(t, LABEL_SIZE, &t->label_dma[i]);
if (!t->label[i])
return -ENOMEM;
+ sprintf(t->label[i], "label%d", i);
}
for (i = 0; i < NUM_DCR; i++) {
@@ -726,6 +812,7 @@ static void nfit_test0_setup(struct nfit_test *t)
nfit_desc = &t->nfit_desc;
nfit_desc->nfit_ctl = nfit_test_ctl;
+ nfit_desc->add_dimm = nfit_test_add_dimm;
}
static void nfit_test1_setup(struct nfit_test *t)
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 68ceb97c458c..384e8d212b04 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -270,6 +270,7 @@ header-y += ncp_fs.h
header-y += ncp.h
header-y += ncp_mount.h
header-y += ncp_no.h
+header-y += ndctl.h
header-y += neighbour.h
header-y += netconf.h
header-y += netdevice.h
diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
new file mode 100644
index 000000000000..6cc8c91a0058
--- /dev/null
+++ b/include/uapi/linux/ndctl.h
@@ -0,0 +1,178 @@
+/*
+ * Copyright (c) 2014-2015, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU Lesser General Public License,
+ * version 2.1, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for
+ * more details.
+ */
+#ifndef __NDCTL_H__
+#define __NDCTL_H__
+
+#include <linux/types.h>
+
+struct nfit_cmd_smart {
+ __u32 status;
+ __u8 data[128];
+} __packed;
+
+struct nfit_cmd_smart_threshold {
+ __u32 status;
+ __u8 data[8];
+} __packed;
+
+struct nfit_cmd_dimm_flags {
+ __u32 status;
+ __u32 flags;
+} __packed;
+
+struct nfit_cmd_get_config_size {
+ __u32 status;
+ __u32 config_size;
+ __u32 max_xfer;
+} __packed;
+
+struct nfit_cmd_get_config_data_hdr {
+ __u32 in_offset;
+ __u32 in_length;
+ __u32 status;
+ __u8 out_buf[0];
+} __packed;
+
+struct nfit_cmd_set_config_hdr {
+ __u32 in_offset;
+ __u32 in_length;
+ __u8 in_buf[0];
+} __packed;
+
+struct nfit_cmd_vendor_hdr {
+ __u32 opcode;
+ __u32 in_length;
+ __u8 in_buf[0];
+} __packed;
+
+struct nfit_cmd_vendor_tail {
+ __u32 status;
+ __u32 out_length;
+ __u8 out_buf[0];
+} __packed;
+
+struct nfit_cmd_ars_cap {
+ __u64 address;
+ __u64 length;
+ __u32 status;
+ __u32 max_ars_out;
+} __packed;
+
+struct nfit_cmd_ars_start {
+ __u64 address;
+ __u64 length;
+ __u16 type;
+ __u8 reserved[6];
+ __u32 status;
+} __packed;
+
+struct nfit_cmd_ars_query {
+ __u32 status;
+ __u16 out_length;
+ __u64 address;
+ __u64 length;
+ __u16 type;
+ __u32 num_records;
+ struct nfit_ars_record {
+ __u32 nfit_handle;
+ __u32 flags;
+ __u64 err_address;
+ __u64 mask;
+ } __packed records[0];
+} __packed;
+
+enum {
+ NFIT_CMD_IMPLEMENTED = 0,
+
+ /* bus commands */
+ NFIT_CMD_ARS_CAP = 1,
+ NFIT_CMD_ARS_START = 2,
+ NFIT_CMD_ARS_QUERY = 3,
+
+ /* per-dimm commands */
+ NFIT_CMD_SMART = 1,
+ NFIT_CMD_SMART_THRESHOLD = 2,
+ NFIT_CMD_DIMM_FLAGS = 3,
+ NFIT_CMD_GET_CONFIG_SIZE = 4,
+ NFIT_CMD_GET_CONFIG_DATA = 5,
+ NFIT_CMD_SET_CONFIG_DATA = 6,
+ NFIT_CMD_VENDOR_EFFECT_LOG_SIZE = 7,
+ NFIT_CMD_VENDOR_EFFECT_LOG = 8,
+ NFIT_CMD_VENDOR = 9,
+};
+
+static inline const char *nfit_bus_cmd_name(unsigned cmd)
+{
+ static const char * const names[] = {
+ [NFIT_CMD_ARS_CAP] = "ars_cap",
+ [NFIT_CMD_ARS_START] = "ars_start",
+ [NFIT_CMD_ARS_QUERY] = "ars_query",
+ };
+
+ if (cmd < ARRAY_SIZE(names) && names[cmd])
+ return names[cmd];
+ return "unknown";
+}
+
+static inline const char *nfit_dimm_cmd_name(unsigned cmd)
+{
+ static const char * const names[] = {
+ [NFIT_CMD_SMART] = "smart",
+ [NFIT_CMD_SMART_THRESHOLD] = "smart_thresh",
+ [NFIT_CMD_DIMM_FLAGS] = "flags",
+ [NFIT_CMD_GET_CONFIG_SIZE] = "get_size",
+ [NFIT_CMD_GET_CONFIG_DATA] = "get_data",
+ [NFIT_CMD_SET_CONFIG_DATA] = "set_data",
+ [NFIT_CMD_VENDOR_EFFECT_LOG_SIZE] = "effect_size",
+ [NFIT_CMD_VENDOR_EFFECT_LOG] = "effect_log",
+ [NFIT_CMD_VENDOR] = "vendor",
+ };
+
+ if (cmd < ARRAY_SIZE(names) && names[cmd])
+ return names[cmd];
+ return "unknown";
+}
+
+#define ND_IOCTL 'N'
+
+#define NFIT_IOCTL_SMART _IOWR(ND_IOCTL, NFIT_CMD_SMART,\
+ struct nfit_cmd_smart)
+
+#define NFIT_IOCTL_SMART_THRESHOLD _IOWR(ND_IOCTL, NFIT_CMD_SMART_THRESHOLD,\
+ struct nfit_cmd_smart_threshold)
+
+#define NFIT_IOCTL_DIMM_FLAGS _IOWR(ND_IOCTL, NFIT_CMD_DIMM_FLAGS,\
+ struct nfit_cmd_dimm_flags)
+
+#define NFIT_IOCTL_GET_CONFIG_SIZE _IOWR(ND_IOCTL, NFIT_CMD_GET_CONFIG_SIZE,\
+ struct nfit_cmd_get_config_size)
+
+#define NFIT_IOCTL_GET_CONFIG_DATA _IOWR(ND_IOCTL, NFIT_CMD_GET_CONFIG_DATA,\
+ struct nfit_cmd_get_config_data_hdr)
+
+#define NFIT_IOCTL_SET_CONFIG_DATA _IOWR(ND_IOCTL, NFIT_CMD_SET_CONFIG_DATA,\
+ struct nfit_cmd_set_config_hdr)
+
+#define NFIT_IOCTL_VENDOR _IOWR(ND_IOCTL, NFIT_CMD_VENDOR,\
+ struct nfit_cmd_vendor_hdr)
+
+#define NFIT_IOCTL_ARS_CAP _IOWR(ND_IOCTL, NFIT_CMD_ARS_CAP,\
+ struct nfit_cmd_ars_cap)
+
+#define NFIT_IOCTL_ARS_START _IOWR(ND_IOCTL, NFIT_CMD_ARS_START,\
+ struct nfit_cmd_ars_start)
+
+#define NFIT_IOCTL_ARS_QUERY _IOWR(ND_IOCTL, NFIT_CMD_ARS_QUERY,\
+ struct nfit_cmd_ars_query)
+
+#endif /* __NDCTL_H__ */
* Implement the device-model infrastructure for loading modules and
attaching drivers to nd devices. This is a simple association of an
nd-device-type number with a driver that has a bitmask of supported
device types. To facilitate userspace bind/unbind operations,
'modalias' and 'devtype', which also appear in the uevent, are added as
generic sysfs attributes for all nd devices. The reason for the
device-type number is to support sub-types within a given parent
devtype, be it a vendor-specific sub-type or otherwise.
* The first consumer of this infrastructure is the driver
for dimm devices. It simply uses control messages to retrieve and
store the configuration-data image (label set) from each dimm.
Note: nd_device_register() arranges for asynchronous registration of
nd bus devices.
Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Makefile | 1
drivers/block/nd/bus.c | 158 ++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/core.c | 43 ++++++++++-
drivers/block/nd/dimm.c | 103 ++++++++++++++++++++++++++
drivers/block/nd/dimm_devs.c | 161 ++++++++++++++++++++++++++++++++++++++---
drivers/block/nd/nd-private.h | 12 ++-
drivers/block/nd/nd.h | 21 +++++
include/linux/nd.h | 39 ++++++++++
include/uapi/linux/ndctl.h | 6 ++
9 files changed, 526 insertions(+), 18 deletions(-)
create mode 100644 drivers/block/nd/dimm.c
create mode 100644 include/linux/nd.h
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 6b34dd4d4df8..9f1b69c86fba 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -22,3 +22,4 @@ nd_acpi-y := acpi.o
nd-y := core.o
nd-y += bus.o
nd-y += dimm_devs.o
+nd-y += dimm.o
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index 67a0624c265b..c815dd425a49 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -16,10 +16,12 @@
#include <linux/fcntl.h>
#include <linux/async.h>
#include <linux/ndctl.h>
+#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/mm.h>
+#include <linux/nd.h>
#include "nd-private.h"
#include "nfit.h"
#include "nd.h"
@@ -28,8 +30,57 @@ int nd_dimm_major;
static int nd_bus_major;
static struct class *nd_class;
-struct bus_type nd_bus_type = {
+static int to_nd_device_type(struct device *dev)
+{
+ if (is_nd_dimm(dev))
+ return ND_DEVICE_DIMM;
+
+ return 0;
+}
+
+static int nd_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+ return add_uevent_var(env, "MODALIAS=" ND_DEVICE_MODALIAS_FMT,
+ to_nd_device_type(dev));
+}
+
+static int nd_bus_match(struct device *dev, struct device_driver *drv)
+{
+ struct nd_device_driver *nd_drv = to_nd_device_driver(drv);
+
+ return test_bit(to_nd_device_type(dev), &nd_drv->type);
+}
+
+static int nd_bus_probe(struct device *dev)
+{
+ struct nd_device_driver *nd_drv = to_nd_device_driver(dev->driver);
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+ int rc;
+
+ rc = nd_drv->probe(dev);
+ dev_dbg(&nd_bus->dev, "%s.probe(%s) = %d\n", dev->driver->name,
+ dev_name(dev), rc);
+ return rc;
+}
+
+static int nd_bus_remove(struct device *dev)
+{
+ struct nd_device_driver *nd_drv = to_nd_device_driver(dev->driver);
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+ int rc;
+
+ rc = nd_drv->remove(dev);
+ dev_dbg(&nd_bus->dev, "%s.remove(%s) = %d\n", dev->driver->name,
+ dev_name(dev), rc);
+ return rc;
+}
+
+static struct bus_type nd_bus_type = {
.name = "nd",
+ .uevent = nd_bus_uevent,
+ .match = nd_bus_match,
+ .probe = nd_bus_probe,
+ .remove = nd_bus_remove,
};
static ASYNC_DOMAIN_EXCLUSIVE(nd_async_domain);
@@ -68,6 +119,109 @@ void nd_synchronize(void)
async_synchronize_full_domain(&nd_async_domain);
}
+static void nd_async_device_register(void *d, async_cookie_t cookie)
+{
+ struct device *dev = d;
+
+ if (device_add(dev) != 0) {
+ dev_err(dev, "%s: failed\n", __func__);
+ put_device(dev);
+ }
+ put_device(dev);
+}
+
+static void nd_async_device_unregister(void *d, async_cookie_t cookie)
+{
+ struct device *dev = d;
+
+ device_unregister(dev);
+ put_device(dev);
+}
+
+void nd_device_register(struct device *dev)
+{
+ dev->bus = &nd_bus_type;
+ device_initialize(dev);
+ get_device(dev);
+ async_schedule_domain(nd_async_device_register, dev,
+ &nd_async_domain);
+}
+EXPORT_SYMBOL(nd_device_register);
+
+void nd_device_unregister(struct device *dev, enum nd_async_mode mode)
+{
+ switch (mode) {
+ case ND_ASYNC:
+ get_device(dev);
+ async_schedule_domain(nd_async_device_unregister, dev,
+ &nd_async_domain);
+ break;
+ case ND_SYNC:
+ nd_synchronize();
+ device_unregister(dev);
+ break;
+ }
+}
+EXPORT_SYMBOL(nd_device_unregister);
+
+/**
+ * __nd_driver_register() - register a region or a namespace driver
+ * @nd_drv: driver to register
+ * @owner: automatically set by nd_driver_register() macro
+ * @mod_name: automatically set by nd_driver_register() macro
+ */
+int __nd_driver_register(struct nd_device_driver *nd_drv, struct module *owner,
+ const char *mod_name)
+{
+ struct device_driver *drv = &nd_drv->drv;
+
+ if (!nd_drv->type) {
+ pr_debug("driver type bitmask not set (%pf)\n",
+ __builtin_return_address(0));
+ return -EINVAL;
+ }
+
+ if (!nd_drv->probe || !nd_drv->remove) {
+ pr_debug("->probe() and ->remove() must be specified\n");
+ return -EINVAL;
+ }
+
+ drv->bus = &nd_bus_type;
+ drv->owner = owner;
+ drv->mod_name = mod_name;
+
+ return driver_register(drv);
+}
+EXPORT_SYMBOL(__nd_driver_register);
+
+static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, ND_DEVICE_MODALIAS_FMT "\n",
+ to_nd_device_type(dev));
+}
+static DEVICE_ATTR_RO(modalias);
+
+static ssize_t devtype_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%s\n", dev->type->name);
+}
+DEVICE_ATTR_RO(devtype);
+
+static struct attribute *nd_device_attributes[] = {
+ &dev_attr_modalias.attr,
+ &dev_attr_devtype.attr,
+ NULL,
+};
+
+/**
+ * nd_device_attribute_group - generic attributes for all devices on an nd bus
+ */
+struct attribute_group nd_device_attribute_group = {
+ .attrs = nd_device_attributes,
+};
+
int nd_bus_create_ndctl(struct nd_bus *nd_bus)
{
dev_t devt = MKDEV(nd_bus_major, nd_bus->id);
@@ -345,7 +499,7 @@ int __init nd_bus_init(void)
return rc;
}
-void __exit nd_bus_exit(void)
+void nd_bus_exit(void)
{
class_destroy(nd_class);
unregister_chrdev(nd_bus_major, "ndctl");
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 0df1e82fcb18..426f96b02594 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -150,8 +150,33 @@ static ssize_t revision_show(struct device *dev,
}
static DEVICE_ATTR_RO(revision);
+static int flush_namespaces(struct device *dev, void *data)
+{
+ device_lock(dev);
+ device_unlock(dev);
+ return 0;
+}
+
+static int flush_regions_dimms(struct device *dev, void *data)
+{
+ device_lock(dev);
+ device_unlock(dev);
+ device_for_each_child(dev, NULL, flush_namespaces);
+ return 0;
+}
+
+static ssize_t wait_probe_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ nd_synchronize();
+ device_for_each_child(dev, NULL, flush_regions_dimms);
+ return sprintf(buf, "1\n");
+}
+static DEVICE_ATTR_RO(wait_probe);
+
static struct attribute *nd_bus_attributes[] = {
&dev_attr_commands.attr,
+ &dev_attr_wait_probe.attr,
&dev_attr_provider.attr,
&dev_attr_revision.attr,
NULL,
@@ -491,7 +516,7 @@ static int child_unregister(struct device *dev, void *data)
if (dev->class)
/* pass */;
else
- device_unregister(dev);
+ nd_device_unregister(dev, ND_SYNC);
return 0;
}
@@ -594,6 +619,7 @@ void nfit_bus_unregister(struct nd_bus *nd_bus)
list_del_init(&nd_bus->list);
mutex_unlock(&nd_bus_list_mutex);
+ nd_synchronize();
device_for_each_child(&nd_bus->dev, NULL, child_unregister);
nd_bus_destroy_ndctl(nd_bus);
@@ -603,6 +629,8 @@ EXPORT_SYMBOL(nfit_bus_unregister);
static __init int nd_core_init(void)
{
+ int rc;
+
BUILD_BUG_ON(sizeof(struct nfit) != 40);
BUILD_BUG_ON(sizeof(struct nfit_spa) != 56);
BUILD_BUG_ON(sizeof(struct nfit_mem) != 48);
@@ -611,12 +639,23 @@ static __init int nd_core_init(void)
BUILD_BUG_ON(sizeof(struct nfit_dcr) != 80);
BUILD_BUG_ON(sizeof(struct nfit_bdw) != 40);
- return nd_bus_init();
+ rc = nd_bus_init();
+ if (rc)
+ return rc;
+ rc = nd_dimm_init();
+ if (rc)
+ goto err_dimm;
+ return 0;
+ err_dimm:
+ nd_bus_exit();
+ return rc;
+
}
static __exit void nd_core_exit(void)
{
WARN_ON(!list_empty(&nd_bus_list));
+ nd_dimm_exit();
nd_bus_exit();
}
MODULE_LICENSE("GPL v2");
diff --git a/drivers/block/nd/dimm.c b/drivers/block/nd/dimm.c
new file mode 100644
index 000000000000..fec7229afb58
--- /dev/null
+++ b/drivers/block/nd/dimm.c
@@ -0,0 +1,103 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/vmalloc.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/sizes.h>
+#include <linux/ndctl.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/nd.h>
+#include "nd.h"
+
+static bool force_enable_dimms;
+module_param(force_enable_dimms, bool, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(force_enable_dimms, "Ignore DIMM NFIT/firmware status");
+
+static void free_data(struct nd_dimm_drvdata *ndd)
+{
+ if (!ndd)
+ return;
+
+ if (ndd->data && is_vmalloc_addr(ndd->data))
+ vfree(ndd->data);
+ else
+ kfree(ndd->data);
+ kfree(ndd);
+}
+
+static int nd_dimm_probe(struct device *dev)
+{
+ struct nd_dimm_drvdata *ndd;
+ int rc;
+
+ rc = nd_dimm_firmware_status(dev);
+ if (rc < 0) {
+ dev_info(dev, "disabled by firmware: %d\n", rc);
+ if (!force_enable_dimms)
+ return rc;
+ }
+
+ ndd = kzalloc(sizeof(*ndd), GFP_KERNEL);
+ if (!ndd)
+ return -ENOMEM;
+
+ dev_set_drvdata(dev, ndd);
+
+ rc = nd_dimm_init_nsarea(ndd);
+ if (rc)
+ goto err;
+
+ rc = nd_dimm_init_config_data(ndd);
+ if (rc)
+ goto err;
+
+ dev_dbg(dev, "config data size: %d\n", ndd->nsarea.config_size);
+
+ return 0;
+
+ err:
+ free_data(ndd);
+ return rc;
+
+}
+
+static int nd_dimm_remove(struct device *dev)
+{
+ struct nd_dimm_drvdata *ndd = dev_get_drvdata(dev);
+
+ free_data(ndd);
+
+ return 0;
+}
+
+static struct nd_device_driver nd_dimm_driver = {
+ .probe = nd_dimm_probe,
+ .remove = nd_dimm_remove,
+ .drv = {
+ .name = "nd_dimm",
+ },
+ .type = ND_DRIVER_DIMM,
+};
+
+int __init nd_dimm_init(void)
+{
+ return nd_driver_register(&nd_dimm_driver);
+}
+
+void __exit nd_dimm_exit(void)
+{
+ driver_unregister(&nd_dimm_driver.drv);
+}
+
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DIMM);
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index b73006cfbf66..d15ca75804ac 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -11,6 +11,7 @@
* General Public License for more details.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/vmalloc.h>
#include <linux/device.h>
#include <linux/ndctl.h>
#include <linux/slab.h>
@@ -23,6 +24,112 @@
static DEFINE_IDA(dimm_ida);
+/*
+ * Retrieve bus and dimm handle and return if this bus supports
+ * get_config_data commands
+ */
+static int __validate_dimm(struct nd_dimm_drvdata *ndd)
+{
+ struct nd_dimm *nd_dimm;
+
+ if (!ndd)
+ return -EINVAL;
+
+ nd_dimm = to_nd_dimm(ndd->dev);
+
+ if (!test_bit(NFIT_CMD_GET_CONFIG_DATA, &nd_dimm->dsm_mask))
+ return -ENXIO;
+
+ /* TODO: validate common format interface code */
+ if (!nd_dimm->nd_mem->nfit_dcr)
+ return -ENODEV;
+ return 0;
+}
+
+static int validate_dimm(struct nd_dimm_drvdata *ndd)
+{
+ int rc = __validate_dimm(ndd);
+
+ if (rc && ndd)
+ dev_dbg(ndd->dev, "%pf: %s error: %d\n",
+ __builtin_return_address(0), __func__, rc);
+ return rc;
+}
+
+/**
+ * nd_dimm_init_nsarea - determine the geometry of a dimm's namespace area
+ * @ndd: dimm driver-data for the dimm to initialize
+ */
+int nd_dimm_init_nsarea(struct nd_dimm_drvdata *ndd)
+{
+ struct nfit_cmd_get_config_size *cmd = &ndd->nsarea;
+ struct nd_bus *nd_bus = walk_to_nd_bus(ndd->dev);
+ struct nfit_bus_descriptor *nfit_desc;
+ int rc = validate_dimm(ndd);
+
+ if (rc)
+ return rc;
+
+ if (cmd->config_size)
+ return 0; /* already valid */
+
+ memset(cmd, 0, sizeof(*cmd));
+ nfit_desc = nd_bus->nfit_desc;
+ return nfit_desc->nfit_ctl(nfit_desc, to_nd_dimm(ndd->dev),
+ NFIT_CMD_GET_CONFIG_SIZE, cmd, sizeof(*cmd));
+}
+
+int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(ndd->dev);
+ struct nfit_cmd_get_config_data_hdr *cmd;
+ struct nfit_bus_descriptor *nfit_desc;
+ int rc = validate_dimm(ndd);
+ u32 max_cmd_size, config_size;
+ size_t offset;
+
+ if (rc)
+ return rc;
+
+ if (ndd->data)
+ return 0;
+
+ if (ndd->nsarea.status || ndd->nsarea.max_xfer == 0)
+ return -ENXIO;
+
+ ndd->data = kmalloc(ndd->nsarea.config_size, GFP_KERNEL);
+ if (!ndd->data)
+ ndd->data = vmalloc(ndd->nsarea.config_size);
+
+ if (!ndd->data)
+ return -ENOMEM;
+
+ max_cmd_size = min_t(u32, PAGE_SIZE, ndd->nsarea.max_xfer);
+ cmd = kzalloc(max_cmd_size + sizeof(*cmd), GFP_KERNEL);
+ if (!cmd)
+ return -ENOMEM;
+
+ nfit_desc = nd_bus->nfit_desc;
+ for (config_size = ndd->nsarea.config_size, offset = 0;
+ config_size; config_size -= cmd->in_length,
+ offset += cmd->in_length) {
+ cmd->in_length = min(config_size, max_cmd_size);
+ cmd->in_offset = offset;
+ rc = nfit_desc->nfit_ctl(nfit_desc, to_nd_dimm(ndd->dev),
+ NFIT_CMD_GET_CONFIG_DATA, cmd,
+ cmd->in_length + sizeof(*cmd));
+ if (rc || cmd->status) {
+ rc = -ENXIO;
+ break;
+ }
+ memcpy(ndd->data + offset, cmd->out_buf, cmd->in_length);
+ }
+ dev_dbg(ndd->dev, "%s: len: %zd rc: %d\n", __func__, offset, rc);
+ kfree(cmd);
+
+ return rc;
+}
+
static void nd_dimm_release(struct device *dev)
{
struct nd_dimm *nd_dimm = to_nd_dimm(dev);
@@ -211,9 +318,27 @@ static struct attribute_group nd_dimm_attribute_group = {
static const struct attribute_group *nd_dimm_attribute_groups[] = {
&nd_dimm_attribute_group,
+ &nd_device_attribute_group,
NULL,
};
+/**
+ * nd_dimm_firmware_status - retrieve NFIT-specific state of the dimm
+ * @dev: dimm device to interrogate
+ *
+ * At init time as the NFIT parsing code discovers DIMMs (memdevs) it
+ * validates the state of those devices against the NFIT provider. It
+ * is possible that an NFIT entry exists for the DIMM but the device is
+ * disabled. In that case we will still create an nd_dimm, but prevent
+ * it from binding to its driver.
+ */
+int nd_dimm_firmware_status(struct device *dev)
+{
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+
+ return nd_dimm->nfit_status;
+}
+
static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
struct nd_mem *nd_mem)
{
@@ -244,20 +369,12 @@ static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
dev_set_name(dev, "nmem%d", nd_dimm->id);
dev->parent = &nd_bus->dev;
dev->type = &nd_dimm_device_type;
- dev->bus = &nd_bus_type;
dev->groups = nd_dimm_attribute_groups;
dev->devt = MKDEV(nd_dimm_major, nd_dimm->id);
if (nfit_desc->add_dimm)
- if (nfit_desc->add_dimm(nfit_desc, nd_dimm) != 0) {
- device_initialize(dev);
- put_device(dev);
- return NULL;
- }
+ nd_dimm->nfit_status = nfit_desc->add_dimm(nfit_desc, nd_dimm);
- if (device_register(dev) != 0) {
- put_device(dev);
- return NULL;
- }
+ nd_device_register(dev);
return nd_dimm;
err_ida:
@@ -269,6 +386,15 @@ static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
return NULL;
}
+static int count_dimms(struct device *dev, void *c)
+{
+ int *count = c;
+
+ if (is_nd_dimm(dev))
+ (*count)++;
+ return 0;
+}
+
int nd_bus_register_dimms(struct nd_bus *nd_bus)
{
int rc = 0, dimm_count = 0;
@@ -300,5 +426,18 @@ int nd_bus_register_dimms(struct nd_bus *nd_bus)
}
mutex_unlock(&nd_bus_list_mutex);
- return rc;
+ /*
+ * Flush dimm registration as 'nd_region' registration depends on
+ * finding 'nd_dimm's on the bus.
+ */
+ nd_synchronize();
+ if (rc)
+ return rc;
+
+ rc = 0;
+ device_for_each_child(&nd_bus->dev, &rc, count_dimms);
+ dev_dbg(&nd_bus->dev, "%s: count: %d\n", __func__, rc);
+ if (rc != dimm_count)
+ return -ENXIO;
+ return 0;
}
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 31239942b724..72197992e386 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -18,7 +18,6 @@
extern struct list_head nd_bus_list;
extern struct mutex nd_bus_list_mutex;
-extern struct bus_type nd_bus_type;
extern int nd_dimm_major;
enum {
@@ -26,6 +25,11 @@ enum {
ND_IOCTL_MAX_BUFLEN = SZ_4M,
};
+/*
+ * List manipulation is protected by nd_bus_list_mutex, except for the
+ * deferred probe tracking list which nests under instances where
+ * nd_bus_list_mutex is locked
+ */
struct nd_bus {
struct nfit_bus_descriptor *nfit_desc;
struct radix_tree_root dimm_radix;
@@ -44,7 +48,7 @@ struct nd_dimm {
struct nd_mem *nd_mem;
struct device dev;
void *provider_data;
- int id;
+ int id, nfit_status;
struct nd_dimm_delete {
struct nd_bus *nd_bus;
struct nd_mem *nd_mem;
@@ -88,8 +92,10 @@ struct nd_dimm *to_nd_dimm(struct device *dev);
struct nd_bus *walk_to_nd_bus(struct device *nd_dev);
void nd_synchronize(void);
int __init nd_bus_init(void);
-void __exit nd_bus_exit(void);
+void nd_bus_exit(void);
void nd_dimm_delete(struct nd_dimm *nd_dimm);
+int __init nd_dimm_init(void);
+void __exit nd_dimm_exit(void);
int nd_bus_create_ndctl(struct nd_bus *nd_bus);
void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
int nd_bus_register_dimms(struct nd_bus *nd_bus);
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index bf6313fffd4c..f277440c72b4 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -12,10 +12,31 @@
*/
#ifndef __ND_H__
#define __ND_H__
+#include <linux/device.h>
+#include <linux/mutex.h>
+#include <linux/ndctl.h>
+
+struct nd_dimm_drvdata {
+ struct device *dev;
+ struct nfit_cmd_get_config_size nsarea;
+ void *data;
+};
+
+enum nd_async_mode {
+ ND_SYNC,
+ ND_ASYNC,
+};
+
+void nd_device_register(struct device *dev);
+void nd_device_unregister(struct device *dev, enum nd_async_mode mode);
+extern struct attribute_group nd_device_attribute_group;
struct nd_dimm;
u32 to_nfit_handle(struct nd_dimm *nd_dimm);
void *nd_dimm_get_pdata(struct nd_dimm *nd_dimm);
void nd_dimm_set_pdata(struct nd_dimm *nd_dimm, void *data);
unsigned long nd_dimm_get_dsm_mask(struct nd_dimm *nd_dimm);
void nd_dimm_set_dsm_mask(struct nd_dimm *nd_dimm, unsigned long dsm_mask);
+int nd_dimm_init_nsarea(struct nd_dimm_drvdata *ndd);
+int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd);
+int nd_dimm_firmware_status(struct device *dev);
#endif /* __ND_H__ */
diff --git a/include/linux/nd.h b/include/linux/nd.h
new file mode 100644
index 000000000000..e074f67e53a3
--- /dev/null
+++ b/include/linux/nd.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __LINUX_ND_H__
+#define __LINUX_ND_H__
+#include <linux/ndctl.h>
+#include <linux/device.h>
+
+struct nd_device_driver {
+ struct device_driver drv;
+ unsigned long type;
+ int (*probe)(struct device *dev);
+ int (*remove)(struct device *dev);
+};
+
+static inline struct nd_device_driver *to_nd_device_driver(
+ struct device_driver *drv)
+{
+ return container_of(drv, struct nd_device_driver, drv);
+}
+
+#define MODULE_ALIAS_ND_DEVICE(type) \
+ MODULE_ALIAS("nd:t" __stringify(type) "*")
+#define ND_DEVICE_MODALIAS_FMT "nd:t%d"
+
+int __must_check __nd_driver_register(struct nd_device_driver *nd_drv,
+ struct module *module, const char *mod_name);
+#define nd_driver_register(driver) \
+ __nd_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
+#endif /* __LINUX_ND_H__ */
diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
index 6cc8c91a0058..f11a9f706bbf 100644
--- a/include/uapi/linux/ndctl.h
+++ b/include/uapi/linux/ndctl.h
@@ -175,4 +175,10 @@ static inline const char *nfit_dimm_cmd_name(unsigned cmd)
#define NFIT_IOCTL_ARS_QUERY _IOWR(ND_IOCTL, NFIT_CMD_ARS_QUERY,\
struct nfit_cmd_ars_query)
+
+#define ND_DEVICE_DIMM 1 /* nd_dimm: container for "config data" */
+
+enum nd_driver_flags {
+ ND_DRIVER_DIMM = 1 << ND_DEVICE_DIMM,
+};
#endif /* __NDCTL_H__ */
A "region" device represents the maximum capacity of a
block-data-window, or an interleaved spa range (direct-access persistent
memory or volatile memory), without regard for aliasing. Aliasing is
resolved by the label data on the dimm to designate which exclusive
interface will access the aliased data. Enabling for the
label-designated sub-device is in a subsequent patch.
The "region" types are defined in the NFIT System Physical Address (spa)
table. In the case of persistent memory the spa-range describes the
direct memory address range of the storage (NFIT_SPA_PM). A block
"region" region (NFIT_SPA_DCR) points to a DIMM Control Region (DCR) or
an interleaved group of DCRs. Those DCRs are (optionally) referenced by
a block-data-window (BDW) set to describe the access mechanism and
capacity of the BLK-accessible storage. If the related BDW is not
published then the dimm is only available for control/configuration
commands. Finally, a volatile "region" (NFIT_SPA_VOLATILE) indicates
the portions of NVDIMMs that have been re-assigned as normal volatile
system memory by platform firmware.
The name format of "region" devices is "regionN" where, like dimms, N is
a global ida index assigned at discovery time. This id is not reliable
across reboots nor in the presence of hotplug. Look to attributes of
the region or static id-data of the sub-namespace to generate a
persistent name.
"region"s have 2 generic attributes "size", and "mapping"s where:
- size: the block-data-window accessible capacity or the span of the
spa-range in the case of pm.
- mappingN: a tuple describing a dimm's contribution to the region's
capacity in the format (<nfit-dimm-handle>,<dpa>,<size>). For a
pm-region there will be at least one mapping per dimm in the interleave
set. For a block-region there is only "mapping0" listing the starting dimm
offset of the block-data-window and the available capacity of that
window (matches "size" above).
The maximum number of mappings per "region" is hard-coded per the constraints
of sysfs attribute groups.  That said, the number of mappings per region
should never exceed the maximum number of possible dimms in the system.  If
the current limit turns out not to be enough, the "mappings" attribute
reports how many there are supposed to be.  "32 should be enough for
anybody...".
Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Makefile | 1
drivers/block/nd/core.c | 8 +
drivers/block/nd/nd-private.h | 5
drivers/block/nd/nd.h | 17 ++
drivers/block/nd/region_devs.c | 426 ++++++++++++++++++++++++++++++++++++++++
5 files changed, 455 insertions(+), 2 deletions(-)
create mode 100644 drivers/block/nd/region_devs.c
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 9f1b69c86fba..6698acbe7b44 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -23,3 +23,4 @@ nd-y := core.o
nd-y += bus.o
nd-y += dimm_devs.o
nd-y += dimm.o
+nd-y += region_devs.o
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 426f96b02594..32ecd6f05c90 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -230,7 +230,7 @@ struct nfit_table_header {
__le16 length;
};
-static const char *spa_type_name(u16 type)
+const char *spa_type_name(u16 type)
{
switch (type) {
case NFIT_SPA_VOLATILE: return "volatile";
@@ -241,7 +241,7 @@ static const char *spa_type_name(u16 type)
}
}
-static int nfit_spa_type(struct nfit_spa __iomem *nfit_spa)
+int nfit_spa_type(struct nfit_spa __iomem *nfit_spa)
{
__u8 uuid[16];
@@ -577,6 +577,10 @@ static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
if (rc)
goto err_child;
+ rc = nd_bus_register_regions(nd_bus);
+ if (rc)
+ goto err_child;
+
mutex_lock(&nd_bus_list_mutex);
list_add_tail(&nd_bus->list, &nd_bus_list);
mutex_unlock(&nd_bus_list_mutex);
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 72197992e386..d254ff688ad6 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -85,6 +85,8 @@ struct nd_mem {
struct list_head list;
};
+const char *spa_type_name(u16 type);
+int nfit_spa_type(struct nfit_spa __iomem *nfit_spa);
struct nd_dimm *nd_dimm_by_handle(struct nd_bus *nd_bus, u32 nfit_handle);
bool is_nd_dimm(struct device *dev);
struct nd_bus *to_nd_bus(struct device *dev);
@@ -99,4 +101,7 @@ void __exit nd_dimm_exit(void);
int nd_bus_create_ndctl(struct nd_bus *nd_bus);
void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
int nd_bus_register_dimms(struct nd_bus *nd_bus);
+int nd_bus_register_regions(struct nd_bus *nd_bus);
+int nd_match_dimm(struct device *dev, void *data);
+bool is_nd_dimm(struct device *dev);
#endif /* __ND_PRIVATE_H__ */
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index f277440c72b4..13eba9bd74c7 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -22,6 +22,22 @@ struct nd_dimm_drvdata {
void *data;
};
+struct nd_mapping {
+ struct nd_dimm *nd_dimm;
+ u64 start;
+ u64 size;
+};
+
+struct nd_region {
+ struct device dev;
+ struct nd_spa *nd_spa;
+ u16 ndr_mappings;
+ u64 ndr_size;
+ u64 ndr_start;
+ int id;
+ struct nd_mapping mapping[0];
+};
+
enum nd_async_mode {
ND_SYNC,
ND_ASYNC,
@@ -39,4 +55,5 @@ void nd_dimm_set_dsm_mask(struct nd_dimm *nd_dimm, unsigned long dsm_mask);
int nd_dimm_init_nsarea(struct nd_dimm_drvdata *ndd);
int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd);
int nd_dimm_firmware_status(struct device *dev);
+struct nd_region *to_nd_region(struct device *dev);
#endif /* __ND_H__ */
diff --git a/drivers/block/nd/region_devs.c b/drivers/block/nd/region_devs.c
new file mode 100644
index 000000000000..f474c32d6dad
--- /dev/null
+++ b/drivers/block/nd/region_devs.c
@@ -0,0 +1,426 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/slab.h>
+#include <linux/io.h>
+#include "nd-private.h"
+#include "nfit.h"
+#include "nd.h"
+
+#include <asm-generic/io-64-nonatomic-lo-hi.h>
+
+static DEFINE_IDA(region_ida);
+
+static void nd_region_release(struct device *dev)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+ u16 i;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
+
+ put_device(&nd_dimm->dev);
+ }
+ ida_simple_remove(&region_ida, nd_region->id);
+ kfree(nd_region);
+}
+
+static struct device_type nd_block_device_type = {
+ .name = "nd_blk",
+ .release = nd_region_release,
+};
+
+static struct device_type nd_pmem_device_type = {
+ .name = "nd_pmem",
+ .release = nd_region_release,
+};
+
+static struct device_type nd_volatile_device_type = {
+ .name = "nd_volatile",
+ .release = nd_region_release,
+};
+
+static bool is_nd_pmem(struct device *dev)
+{
+ return dev ? dev->type == &nd_pmem_device_type : false;
+}
+
+struct nd_region *to_nd_region(struct device *dev)
+{
+ struct nd_region *nd_region = container_of(dev, struct nd_region, dev);
+
+ WARN_ON(dev->type->release != nd_region_release);
+ return nd_region;
+}
+
+static ssize_t size_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+ unsigned long long size = 0;
+
+ if (is_nd_pmem(dev)) {
+ size = nd_region->ndr_size;
+ } else if (nd_region->ndr_mappings == 1) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+
+ size = nd_mapping->size;
+ }
+
+ return sprintf(buf, "%llu\n", size);
+}
+static DEVICE_ATTR_RO(size);
+
+static ssize_t mappings_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+
+ return sprintf(buf, "%d\n", nd_region->ndr_mappings);
+}
+static DEVICE_ATTR_RO(mappings);
+
+static struct attribute *nd_region_attributes[] = {
+ &dev_attr_size.attr,
+ &dev_attr_mappings.attr,
+ NULL,
+};
+
+static struct attribute_group nd_region_attribute_group = {
+ .attrs = nd_region_attributes,
+};
+
+/*
+ * Retrieve the nth entry referencing this spa.  For pm there may be
+ * not only multiple entries per device in the interleave, but multiple
+ * entries per dimm for each region of the dimm that maps into the
+ * interleave.
+ */
+static struct nd_memdev *nd_memdev_from_spa(struct nd_bus *nd_bus,
+ u16 spa_index, int n)
+{
+ struct nd_memdev *nd_memdev;
+
+ list_for_each_entry(nd_memdev, &nd_bus->memdevs, list)
+ if (readw(&nd_memdev->nfit_mem->spa_index) == spa_index)
+ if (n-- == 0)
+ return nd_memdev;
+ return NULL;
+}
+
+static int num_nd_mem(struct nd_bus *nd_bus, u16 spa_index)
+{
+ struct nd_memdev *nd_memdev;
+ int count = 0;
+
+ list_for_each_entry(nd_memdev, &nd_bus->memdevs, list)
+ if (readw(&nd_memdev->nfit_mem->spa_index) == spa_index)
+ count++;
+ return count;
+}
+
+/* convert an anonymous MEMDEV to its set of associated tables */
+static struct nd_mem *nd_memdev_to_mem(struct nd_bus *nd_bus,
+ struct nd_memdev *nd_memdev)
+{
+ u32 nfit_handle = readl(&nd_memdev->nfit_mem->nfit_handle);
+ struct nd_mem *nd_mem;
+
+ list_for_each_entry(nd_mem, &nd_bus->dimms, list)
+ if (readl(&nd_mem->nfit_mem_dcr->nfit_handle) == nfit_handle)
+ return nd_mem;
+ return NULL;
+}
+
+static ssize_t mappingN(struct device *dev, char *buf, int n)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+ struct nfit_mem __iomem *nfit_mem;
+ struct nd_mapping *nd_mapping;
+ struct nd_dimm *nd_dimm;
+
+ if (n >= nd_region->ndr_mappings)
+ return -ENXIO;
+ nd_mapping = &nd_region->mapping[n];
+ nd_dimm = nd_mapping->nd_dimm;
+ nfit_mem = nd_dimm->nd_mem->nfit_mem_dcr;
+
+ return sprintf(buf, "%#x,%llu,%llu\n", readl(&nfit_mem->nfit_handle),
+ nd_mapping->start, nd_mapping->size);
+}
+
+#define REGION_MAPPING(idx) \
+static ssize_t mapping##idx##_show(struct device *dev, \
+ struct device_attribute *attr, char *buf) \
+{ \
+ return mappingN(dev, buf, idx); \
+} \
+static DEVICE_ATTR_RO(mapping##idx)
+
+/*
+ * 32 should be enough for a while, even in the presence of socket
+ * interleave a 32-way interleave set is a degenerate case.
+ */
+REGION_MAPPING(0);
+REGION_MAPPING(1);
+REGION_MAPPING(2);
+REGION_MAPPING(3);
+REGION_MAPPING(4);
+REGION_MAPPING(5);
+REGION_MAPPING(6);
+REGION_MAPPING(7);
+REGION_MAPPING(8);
+REGION_MAPPING(9);
+REGION_MAPPING(10);
+REGION_MAPPING(11);
+REGION_MAPPING(12);
+REGION_MAPPING(13);
+REGION_MAPPING(14);
+REGION_MAPPING(15);
+REGION_MAPPING(16);
+REGION_MAPPING(17);
+REGION_MAPPING(18);
+REGION_MAPPING(19);
+REGION_MAPPING(20);
+REGION_MAPPING(21);
+REGION_MAPPING(22);
+REGION_MAPPING(23);
+REGION_MAPPING(24);
+REGION_MAPPING(25);
+REGION_MAPPING(26);
+REGION_MAPPING(27);
+REGION_MAPPING(28);
+REGION_MAPPING(29);
+REGION_MAPPING(30);
+REGION_MAPPING(31);
+
+static umode_t nd_mapping_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct nd_region *nd_region = to_nd_region(dev);
+
+ if (n < nd_region->ndr_mappings)
+ return a->mode;
+ return 0;
+}
+
+static struct attribute *nd_mapping_attributes[] = {
+ &dev_attr_mapping0.attr,
+ &dev_attr_mapping1.attr,
+ &dev_attr_mapping2.attr,
+ &dev_attr_mapping3.attr,
+ &dev_attr_mapping4.attr,
+ &dev_attr_mapping5.attr,
+ &dev_attr_mapping6.attr,
+ &dev_attr_mapping7.attr,
+ &dev_attr_mapping8.attr,
+ &dev_attr_mapping9.attr,
+ &dev_attr_mapping10.attr,
+ &dev_attr_mapping11.attr,
+ &dev_attr_mapping12.attr,
+ &dev_attr_mapping13.attr,
+ &dev_attr_mapping14.attr,
+ &dev_attr_mapping15.attr,
+ &dev_attr_mapping16.attr,
+ &dev_attr_mapping17.attr,
+ &dev_attr_mapping18.attr,
+ &dev_attr_mapping19.attr,
+ &dev_attr_mapping20.attr,
+ &dev_attr_mapping21.attr,
+ &dev_attr_mapping22.attr,
+ &dev_attr_mapping23.attr,
+ &dev_attr_mapping24.attr,
+ &dev_attr_mapping25.attr,
+ &dev_attr_mapping26.attr,
+ &dev_attr_mapping27.attr,
+ &dev_attr_mapping28.attr,
+ &dev_attr_mapping29.attr,
+ &dev_attr_mapping30.attr,
+ &dev_attr_mapping31.attr,
+ NULL,
+};
+
+static struct attribute_group nd_mapping_attribute_group = {
+ .is_visible = nd_mapping_visible,
+ .attrs = nd_mapping_attributes,
+};
+
+static const struct attribute_group *nd_region_attribute_groups[] = {
+ &nd_region_attribute_group,
+ &nd_mapping_attribute_group,
+ NULL,
+};
+
+static void nd_blk_init(struct nd_bus *nd_bus, struct nd_region *nd_region,
+ struct nd_mem *nd_mem)
+{
+ struct nd_mapping *nd_mapping;
+ struct nd_dimm *nd_dimm;
+ u32 nfit_handle;
+
+ nd_region->dev.type = &nd_block_device_type;
+ nfit_handle = readl(&nd_mem->nfit_mem_dcr->nfit_handle);
+ nd_dimm = nd_dimm_by_handle(nd_bus, nfit_handle);
+
+ /* mark this region invalid unless we find a BDW */
+ nd_region->ndr_mappings = 0;
+
+ if (!nd_mem->nfit_bdw) {
+ dev_dbg(&nd_region->dev,
+ "%s: %s no block-data-window descriptor\n",
+ __func__, dev_name(&nd_dimm->dev));
+ put_device(&nd_dimm->dev);
+ return;
+ }
+ if (readq(&nd_mem->nfit_bdw->blk_offset) % SZ_4K) {
+ dev_err(&nd_region->dev, "%s: %s block-capacity is not 4K aligned\n",
+ __func__, dev_name(&nd_dimm->dev));
+ put_device(&nd_dimm->dev);
+ return;
+ }
+
+ nd_region->ndr_mappings = 1;
+ nd_mapping = &nd_region->mapping[0];
+ nd_mapping->nd_dimm = nd_dimm;
+ nd_mapping->size = readq(&nd_mem->nfit_bdw->blk_capacity);
+ nd_mapping->start = readq(&nd_mem->nfit_bdw->blk_offset);
+}
+
+static void nd_spa_range_init(struct nd_bus *nd_bus, struct nd_region *nd_region,
+ struct device_type *type)
+{
+ u16 i;
+ struct nd_spa *nd_spa = nd_region->nd_spa;
+ u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+
+ nd_region->dev.type = type;
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_memdev *nd_memdev = nd_memdev_from_spa(nd_bus,
+ spa_index, i);
+ struct nd_mem *nd_mem = nd_memdev_to_mem(nd_bus, nd_memdev);
+ u32 nfit_handle = readl(&nd_mem->nfit_mem_dcr->nfit_handle);
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_dimm *nd_dimm;
+
+ nd_dimm = nd_dimm_by_handle(nd_bus, nfit_handle);
+ nd_mapping->nd_dimm = nd_dimm;
+ nd_mapping->start = readq(&nd_memdev->nfit_mem->region_dpa);
+ nd_mapping->size = readq(&nd_memdev->nfit_mem->region_len);
+
+ if ((nd_mapping->start | nd_mapping->size) % SZ_4K) {
+ dev_err(&nd_region->dev, "%s: %s mapping is not 4K aligned\n",
+ __func__, dev_name(&nd_dimm->dev));
+ nd_region->ndr_mappings = 0;
+ return;
+ }
+ }
+}
+
+static struct nd_region *nd_region_create(struct nd_bus *nd_bus,
+ struct nd_spa *nd_spa, struct nd_mem *nd_mem)
+{
+ u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+ int spa_type = nfit_spa_type(nd_spa->nfit_spa);
+ struct nd_region *nd_region;
+ struct device *dev;
+ u16 num_mappings;
+
+ if (nd_mem)
+ num_mappings = 1;
+ else
+ num_mappings = num_nd_mem(nd_bus, spa_index);
+ nd_region = kzalloc(sizeof(struct nd_region)
+ + sizeof(struct nd_mapping) * num_mappings, GFP_KERNEL);
+ if (!nd_region)
+ return NULL;
+ nd_region->id = ida_simple_get(&region_ida, 0, 0, GFP_KERNEL);
+ if (nd_region->id < 0) {
+ kfree(nd_region);
+ return NULL;
+ }
+ nd_region->nd_spa = nd_spa;
+ nd_region->ndr_mappings = num_mappings;
+ dev = &nd_region->dev;
+ dev_set_name(dev, "region%d", nd_region->id);
+ dev->parent = &nd_bus->dev;
+ dev->groups = nd_region_attribute_groups;
+ nd_region->ndr_size = readq(&nd_spa->nfit_spa->spa_length);
+ nd_region->ndr_start = readq(&nd_spa->nfit_spa->spa_base);
+ switch (spa_type) {
+ case NFIT_SPA_PM:
+ nd_spa_range_init(nd_bus, nd_region, &nd_pmem_device_type);
+ break;
+ case NFIT_SPA_VOLATILE:
+ nd_spa_range_init(nd_bus, nd_region, &nd_volatile_device_type);
+ break;
+ case NFIT_SPA_DCR:
+ nd_blk_init(nd_bus, nd_region, nd_mem);
+ break;
+ default:
+ break;
+ }
+ nd_device_register(dev);
+
+ return nd_region;
+}
+
+int nd_bus_register_regions(struct nd_bus *nd_bus)
+{
+ struct nd_spa *nd_spa;
+ int rc = 0;
+
+ mutex_lock(&nd_bus_list_mutex);
+ list_for_each_entry(nd_spa, &nd_bus->spas, list) {
+ int spa_type;
+ u16 spa_index;
+ struct nd_mem *nd_mem;
+ struct nd_region *nd_region;
+
+ spa_type = nfit_spa_type(nd_spa->nfit_spa);
+ spa_index = readw(&nd_spa->nfit_spa->spa_index);
+ if (spa_index == 0) {
+ dev_dbg(&nd_bus->dev, "detected invalid spa index\n");
+ continue;
+ }
+ switch (spa_type) {
+ case NFIT_SPA_PM:
+ case NFIT_SPA_VOLATILE:
+ nd_region = nd_region_create(nd_bus, nd_spa, NULL);
+ if (!nd_region)
+ rc = -ENOMEM;
+ break;
+ case NFIT_SPA_DCR:
+ list_for_each_entry(nd_mem, &nd_bus->dimms, list) {
+ if (readw(&nd_mem->nfit_spa_dcr->spa_index)
+ != spa_index)
+ continue;
+ nd_region = nd_region_create(nd_bus, nd_spa,
+ nd_mem);
+ if (!nd_region)
+ rc = -ENOMEM;
+ }
+ break;
+ case NFIT_SPA_BDW:
+ /* we'll consume this in nd_blk_register for the DCR */
+ break;
+ default:
+ dev_info(&nd_bus->dev, "spa[%d] unhandled type: %s\n",
+ spa_index, spa_type_name(spa_type));
+ break;
+ }
+ }
+ mutex_unlock(&nd_bus_list_mutex);
+
+ nd_synchronize();
+
+ return rc;
+}
The NFIT region driver is an intermediary driver that translates
NFIT-defined "region"s into "namespace" devices that are consumed by
persistent memory block drivers.  A "namespace" is a sub-division of a
region.
Support for NVDIMM labels is reserved for a later patch.  For now,
publish 'nd_namespace_io' devices, which are simply memory ranges with no
regard for dimm boundaries, interleave, or aliasing.  This also adds an
"nstype" attribute to the parent region so that userspace can know ahead
of time the type of namespaces a given region will produce.
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Makefile | 2 +
drivers/block/nd/bus.c | 26 +++++++++
drivers/block/nd/core.c | 18 ++++--
drivers/block/nd/dimm.c | 2 -
drivers/block/nd/namespace_devs.c | 111 +++++++++++++++++++++++++++++++++++++
drivers/block/nd/nd-private.h | 8 ++-
drivers/block/nd/nd.h | 7 ++
drivers/block/nd/nfit.h | 7 ++
drivers/block/nd/region.c | 88 +++++++++++++++++++++++++++++
drivers/block/nd/region_devs.c | 65 +++++++++++++++++++++-
include/linux/nd.h | 10 +++
include/uapi/linux/ndctl.h | 10 +++
12 files changed, 343 insertions(+), 11 deletions(-)
create mode 100644 drivers/block/nd/namespace_devs.c
create mode 100644 drivers/block/nd/region.c
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 6698acbe7b44..769ddc34f974 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -24,3 +24,5 @@ nd-y += bus.o
nd-y += dimm_devs.o
nd-y += dimm.o
nd-y += region_devs.o
+nd-y += region.o
+nd-y += namespace_devs.o
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index c815dd425a49..c98fe05a4c9b 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -13,6 +13,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/vmalloc.h>
#include <linux/uaccess.h>
+#include <linux/module.h>
#include <linux/fcntl.h>
#include <linux/async.h>
#include <linux/ndctl.h>
@@ -34,6 +35,12 @@ static int to_nd_device_type(struct device *dev)
{
if (is_nd_dimm(dev))
return ND_DEVICE_DIMM;
+ else if (is_nd_pmem(dev))
+ return ND_DEVICE_REGION_PMEM;
+ else if (is_nd_blk(dev))
+ return ND_DEVICE_REGION_BLOCK;
+ else if (is_nd_pmem(dev->parent) || is_nd_blk(dev->parent))
+ return nd_region_to_namespace_type(to_nd_region(dev->parent));
return 0;
}
@@ -51,27 +58,46 @@ static int nd_bus_match(struct device *dev, struct device_driver *drv)
return test_bit(to_nd_device_type(dev), &nd_drv->type);
}
+static struct module *to_bus_provider(struct device *dev)
+{
+ /* pin bus providers while regions are enabled */
+ if (is_nd_pmem(dev) || is_nd_blk(dev)) {
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+
+ return nd_bus->module;
+ }
+ return NULL;
+}
+
static int nd_bus_probe(struct device *dev)
{
struct nd_device_driver *nd_drv = to_nd_device_driver(dev->driver);
+ struct module *provider = to_bus_provider(dev);
struct nd_bus *nd_bus = walk_to_nd_bus(dev);
int rc;
+ if (!try_module_get(provider))
+ return -ENXIO;
+
rc = nd_drv->probe(dev);
dev_dbg(&nd_bus->dev, "%s.probe(%s) = %d\n", dev->driver->name,
dev_name(dev), rc);
+ if (rc != 0)
+ module_put(provider);
return rc;
}
static int nd_bus_remove(struct device *dev)
{
struct nd_device_driver *nd_drv = to_nd_device_driver(dev->driver);
+ struct module *provider = to_bus_provider(dev);
struct nd_bus *nd_bus = walk_to_nd_bus(dev);
int rc;
rc = nd_drv->remove(dev);
dev_dbg(&nd_bus->dev, "%s.remove(%s) = %d\n", dev->driver->name,
dev_name(dev), rc);
+ module_put(provider);
return rc;
}
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 32ecd6f05c90..c795e8057061 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -192,7 +192,7 @@ static const struct attribute_group *nd_bus_attribute_groups[] = {
};
static void *nd_bus_new(struct device *parent,
- struct nfit_bus_descriptor *nfit_desc)
+ struct nfit_bus_descriptor *nfit_desc, struct module *module)
{
struct nd_bus *nd_bus = kzalloc(sizeof(*nd_bus), GFP_KERNEL);
int rc;
@@ -212,6 +212,7 @@ static void *nd_bus_new(struct device *parent,
return NULL;
}
nd_bus->nfit_desc = nfit_desc;
+ nd_bus->module = module;
nd_bus->dev.parent = parent;
nd_bus->dev.release = nd_bus_release;
nd_bus->dev.groups = nd_bus_attribute_groups;
@@ -595,15 +596,16 @@ static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
}
-struct nd_bus *nfit_bus_register(struct device *parent,
- struct nfit_bus_descriptor *nfit_desc)
+struct nd_bus *__nfit_bus_register(struct device *parent,
+ struct nfit_bus_descriptor *nfit_desc,
+ struct module *module)
{
static DEFINE_MUTEX(mutex);
struct nd_bus *nd_bus;
/* enforce single bus at a time registration */
mutex_lock(&mutex);
- nd_bus = nd_bus_new(parent, nfit_desc);
+ nd_bus = nd_bus_new(parent, nfit_desc, module);
nd_bus = nd_bus_probe(nd_bus);
mutex_unlock(&mutex);
@@ -612,7 +614,7 @@ struct nd_bus *nfit_bus_register(struct device *parent,
return nd_bus;
}
-EXPORT_SYMBOL(nfit_bus_register);
+EXPORT_SYMBOL(__nfit_bus_register);
void nfit_bus_unregister(struct nd_bus *nd_bus)
{
@@ -649,7 +651,12 @@ static __init int nd_core_init(void)
rc = nd_dimm_init();
if (rc)
goto err_dimm;
+ rc = nd_region_init();
+ if (rc)
+ goto err_region;
return 0;
+ err_region:
+ nd_dimm_exit();
err_dimm:
nd_bus_exit();
return rc;
@@ -659,6 +666,7 @@ static __init int nd_core_init(void)
static __exit void nd_core_exit(void)
{
WARN_ON(!list_empty(&nd_bus_list));
+ nd_region_exit();
nd_dimm_exit();
nd_bus_exit();
}
diff --git a/drivers/block/nd/dimm.c b/drivers/block/nd/dimm.c
index fec7229afb58..7e043c0c1bf5 100644
--- a/drivers/block/nd/dimm.c
+++ b/drivers/block/nd/dimm.c
@@ -95,7 +95,7 @@ int __init nd_dimm_init(void)
return nd_driver_register(&nd_dimm_driver);
}
-void __exit nd_dimm_exit(void)
+void nd_dimm_exit(void)
{
driver_unregister(&nd_dimm_driver.drv);
}
diff --git a/drivers/block/nd/namespace_devs.c b/drivers/block/nd/namespace_devs.c
new file mode 100644
index 000000000000..6861327f4245
--- /dev/null
+++ b/drivers/block/nd/namespace_devs.c
@@ -0,0 +1,111 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/nd.h>
+#include "nd.h"
+
+static void namespace_io_release(struct device *dev)
+{
+ struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
+
+ kfree(nsio);
+}
+
+static struct device_type namespace_io_device_type = {
+ .name = "nd_namespace_io",
+ .release = namespace_io_release,
+};
+
+static ssize_t type_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+
+ return sprintf(buf, "%d\n", nd_region_to_namespace_type(nd_region));
+}
+static DEVICE_ATTR_RO(type);
+
+static struct attribute *nd_namespace_attributes[] = {
+ &dev_attr_type.attr,
+ NULL,
+};
+
+static struct attribute_group nd_namespace_attribute_group = {
+ .attrs = nd_namespace_attributes,
+};
+
+static const struct attribute_group *nd_namespace_attribute_groups[] = {
+ &nd_device_attribute_group,
+ &nd_namespace_attribute_group,
+ NULL,
+};
+
+static struct device **create_namespace_io(struct nd_region *nd_region)
+{
+ struct nd_namespace_io *nsio;
+ struct device *dev, **devs;
+ struct resource *res;
+
+ nsio = kzalloc(sizeof(*nsio), GFP_KERNEL);
+ if (!nsio)
+ return NULL;
+
+ devs = kcalloc(2, sizeof(struct device *), GFP_KERNEL);
+ if (!devs) {
+ kfree(nsio);
+ return NULL;
+ }
+
+ dev = &nsio->dev;
+ dev->type = &namespace_io_device_type;
+ res = &nsio->res;
+ res->name = dev_name(&nd_region->dev);
+ res->flags = IORESOURCE_MEM;
+ res->start = nd_region->ndr_start;
+ res->end = res->start + nd_region->ndr_size - 1;
+
+ devs[0] = dev;
+ return devs;
+}
+
+int nd_region_register_namespaces(struct nd_region *nd_region, int *err)
+{
+ struct device **devs = NULL;
+ int i;
+
+ *err = 0;
+ switch (nd_region_to_namespace_type(nd_region)) {
+ case ND_DEVICE_NAMESPACE_IO:
+ devs = create_namespace_io(nd_region);
+ break;
+ default:
+ break;
+ }
+
+ if (!devs)
+ return -ENODEV;
+
+ for (i = 0; devs[i]; i++) {
+ struct device *dev = devs[i];
+
+ dev_set_name(dev, "namespace%d.%d", nd_region->id, i);
+ dev->parent = &nd_region->dev;
+ dev->groups = nd_namespace_attribute_groups;
+ nd_device_register(dev);
+ }
+ kfree(devs);
+
+ return i;
+}
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index d254ff688ad6..db68e013b9d0 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -33,6 +33,7 @@ enum {
struct nd_bus {
struct nfit_bus_descriptor *nfit_desc;
struct radix_tree_root dimm_radix;
+ struct module *module;
struct list_head memdevs;
struct list_head dimms;
struct list_head spas;
@@ -89,6 +90,8 @@ const char *spa_type_name(u16 type);
int nfit_spa_type(struct nfit_spa __iomem *nfit_spa);
struct nd_dimm *nd_dimm_by_handle(struct nd_bus *nd_bus, u32 nfit_handle);
bool is_nd_dimm(struct device *dev);
+bool is_nd_blk(struct device *dev);
+bool is_nd_pmem(struct device *dev);
struct nd_bus *to_nd_bus(struct device *dev);
struct nd_dimm *to_nd_dimm(struct device *dev);
struct nd_bus *walk_to_nd_bus(struct device *nd_dev);
@@ -97,11 +100,12 @@ int __init nd_bus_init(void);
void nd_bus_exit(void);
void nd_dimm_delete(struct nd_dimm *nd_dimm);
int __init nd_dimm_init(void);
-void __exit nd_dimm_exit(void);
+int __init nd_region_init(void);
+void nd_dimm_exit(void);
+int nd_region_exit(void);
int nd_bus_create_ndctl(struct nd_bus *nd_bus);
void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
int nd_bus_register_dimms(struct nd_bus *nd_bus);
int nd_bus_register_regions(struct nd_bus *nd_bus);
int nd_match_dimm(struct device *dev, void *data);
-bool is_nd_dimm(struct device *dev);
#endif /* __ND_PRIVATE_H__ */
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index 13eba9bd74c7..4ac7ff2af4c8 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -22,6 +22,11 @@ struct nd_dimm_drvdata {
void *data;
};
+struct nd_region_namespaces {
+ int count;
+ int active;
+};
+
struct nd_mapping {
struct nd_dimm *nd_dimm;
u64 start;
@@ -56,4 +61,6 @@ int nd_dimm_init_nsarea(struct nd_dimm_drvdata *ndd);
int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd);
int nd_dimm_firmware_status(struct device *dev);
struct nd_region *to_nd_region(struct device *dev);
+int nd_region_to_namespace_type(struct nd_region *nd_region);
+int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
#endif /* __ND_H__ */
diff --git a/drivers/block/nd/nfit.h b/drivers/block/nd/nfit.h
index 75b480f6ff03..d8d0308f55a5 100644
--- a/drivers/block/nd/nfit.h
+++ b/drivers/block/nd/nfit.h
@@ -229,7 +229,10 @@ struct nfit_bus_descriptor {
};
struct nd_bus;
-struct nd_bus *nfit_bus_register(struct device *parent,
- struct nfit_bus_descriptor *nfit_desc);
+#define nfit_bus_register(parent, desc) \
+ __nfit_bus_register(parent, desc, THIS_MODULE)
+struct nd_bus *__nfit_bus_register(struct device *parent,
+ struct nfit_bus_descriptor *nfit_desc,
+ struct module *module);
void nfit_bus_unregister(struct nd_bus *nd_bus);
#endif /* __NFIT_H__ */
diff --git a/drivers/block/nd/region.c b/drivers/block/nd/region.c
new file mode 100644
index 000000000000..29019a65808e
--- /dev/null
+++ b/drivers/block/nd/region.c
@@ -0,0 +1,88 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/nd.h>
+#include "nd.h"
+
+static int nd_region_probe(struct device *dev)
+{
+ int err;
+ struct nd_region_namespaces *num_ns;
+ struct nd_region *nd_region = to_nd_region(dev);
+ int rc = nd_region_register_namespaces(nd_region, &err);
+
+ num_ns = devm_kzalloc(dev, sizeof(*num_ns), GFP_KERNEL);
+ if (!num_ns)
+ return -ENOMEM;
+
+ if (rc < 0)
+ return rc;
+
+ num_ns->active = rc;
+ num_ns->count = rc + err;
+ dev_set_drvdata(dev, num_ns);
+
+ if (err == 0)
+ return 0;
+
+ if (rc == err)
+ return -ENODEV;
+
+ /*
+ * Given multiple namespaces per region, we do not want to
+ * disable all the successfully registered peer namespaces upon
+ * a single registration failure. If userspace is missing a
+ * namespace that it expects it can disable/re-enable the region
+ * to retry discovery after correcting the failure.
+ * <regionX>/namespaces returns the current
+ * "<async-registered>/<total>" namespace count.
+ */
+ dev_err(dev, "failed to register %d namespace%s, continuing...\n",
+ err, err == 1 ? "" : "s");
+ return 0;
+}
+
+static int child_unregister(struct device *dev, void *data)
+{
+ nd_device_unregister(dev, ND_SYNC);
+ return 0;
+}
+
+static int nd_region_remove(struct device *dev)
+{
+ device_for_each_child(dev, NULL, child_unregister);
+ return 0;
+}
+
+static struct nd_device_driver nd_region_driver = {
+ .probe = nd_region_probe,
+ .remove = nd_region_remove,
+ .drv = {
+ .name = "nd_region",
+ },
+ .type = ND_DRIVER_REGION_BLOCK | ND_DRIVER_REGION_PMEM,
+};
+
+int __init nd_region_init(void)
+{
+ return nd_driver_register(&nd_region_driver);
+}
+
+void __exit nd_region_exit(void)
+{
+ driver_unregister(&nd_region_driver.drv);
+}
+
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_REGION_PMEM);
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_REGION_BLOCK);
diff --git a/drivers/block/nd/region_devs.c b/drivers/block/nd/region_devs.c
index f474c32d6dad..03b192368e1a 100644
--- a/drivers/block/nd/region_devs.c
+++ b/drivers/block/nd/region_devs.c
@@ -50,11 +50,16 @@ static struct device_type nd_volatile_device_type = {
.release = nd_region_release,
};
-static bool is_nd_pmem(struct device *dev)
+bool is_nd_pmem(struct device *dev)
{
return dev ? dev->type == &nd_pmem_device_type : false;
}
+bool is_nd_blk(struct device *dev)
+{
+ return dev ? dev->type == &nd_block_device_type : false;
+}
+
struct nd_region *to_nd_region(struct device *dev)
{
struct nd_region *nd_region = container_of(dev, struct nd_region, dev);
@@ -63,6 +68,28 @@ struct nd_region *to_nd_region(struct device *dev)
return nd_region;
}
+/**
+ * nd_region_to_namespace_type() - region to an integer namespace type
+ * @nd_region: region-device to interrogate
+ *
+ * This integer is the region's 'nstype' attribute, an input to the
+ * MODALIAS for namespace devices, and the bit number for an nd_bus to
+ * match namespace devices with namespace drivers.
+ */
+int nd_region_to_namespace_type(struct nd_region *nd_region)
+{
+ if (is_nd_pmem(&nd_region->dev)) {
+ if (nd_region->ndr_mappings)
+ return ND_DEVICE_NAMESPACE_PMEM;
+ else
+ return ND_DEVICE_NAMESPACE_IO;
+ } else if (is_nd_blk(&nd_region->dev)) {
+ return ND_DEVICE_NAMESPACE_BLOCK;
+ }
+
+ return 0;
+}
+
static ssize_t size_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -90,9 +117,44 @@ static ssize_t mappings_show(struct device *dev,
}
static DEVICE_ATTR_RO(mappings);
+static ssize_t spa_index_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+ struct nd_spa *nd_spa = nd_region->nd_spa;
+ u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+
+ return sprintf(buf, "%d\n", spa_index);
+}
+static DEVICE_ATTR_RO(spa_index);
+
+static ssize_t nstype_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+
+ return sprintf(buf, "%d\n", nd_region_to_namespace_type(nd_region));
+}
+static DEVICE_ATTR_RO(nstype);
+
+static ssize_t init_namespaces_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region_namespaces *num_ns = dev_get_drvdata(dev);
+
+ if (!num_ns)
+ return -ENXIO;
+
+ return sprintf(buf, "%d/%d\n", num_ns->active, num_ns->count);
+}
+static DEVICE_ATTR_RO(init_namespaces);
+
static struct attribute *nd_region_attributes[] = {
&dev_attr_size.attr,
+ &dev_attr_nstype.attr,
&dev_attr_mappings.attr,
+ &dev_attr_spa_index.attr,
+ &dev_attr_init_namespaces.attr,
NULL,
};
@@ -256,6 +318,7 @@ static struct attribute_group nd_mapping_attribute_group = {
static const struct attribute_group *nd_region_attribute_groups[] = {
&nd_region_attribute_group,
+ &nd_device_attribute_group,
&nd_mapping_attribute_group,
NULL,
};
diff --git a/include/linux/nd.h b/include/linux/nd.h
index e074f67e53a3..da70e9962197 100644
--- a/include/linux/nd.h
+++ b/include/linux/nd.h
@@ -26,6 +26,16 @@ static inline struct nd_device_driver *to_nd_device_driver(
struct device_driver *drv)
{
return container_of(drv, struct nd_device_driver, drv);
+};
+
+struct nd_namespace_io {
+ struct device dev;
+ struct resource res;
+};
+
+static inline struct nd_namespace_io *to_nd_namespace_io(struct device *dev)
+{
+ return container_of(dev, struct nd_namespace_io, dev);
}
#define MODULE_ALIAS_ND_DEVICE(type) \
diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
index f11a9f706bbf..0ccc0f2e5765 100644
--- a/include/uapi/linux/ndctl.h
+++ b/include/uapi/linux/ndctl.h
@@ -177,8 +177,18 @@ static inline const char *nfit_dimm_cmd_name(unsigned cmd)
#define ND_DEVICE_DIMM 1 /* nd_dimm: container for "config data" */
+#define ND_DEVICE_REGION_PMEM 2 /* nd_region: (parent of pmem namespaces) */
+#define ND_DEVICE_REGION_BLOCK 3 /* nd_region: (parent of block namespaces) */
+#define ND_DEVICE_NAMESPACE_IO 4 /* legacy persistent memory */
+#define ND_DEVICE_NAMESPACE_PMEM 5 /* persistent memory namespace (may alias) */
+#define ND_DEVICE_NAMESPACE_BLOCK 6 /* block-data-window namespace (may alias) */
enum nd_driver_flags {
ND_DRIVER_DIMM = 1 << ND_DEVICE_DIMM,
+ ND_DRIVER_REGION_PMEM = 1 << ND_DEVICE_REGION_PMEM,
+ ND_DRIVER_REGION_BLOCK = 1 << ND_DEVICE_REGION_BLOCK,
+ ND_DRIVER_NAMESPACE_IO = 1 << ND_DEVICE_NAMESPACE_IO,
+ ND_DRIVER_NAMESPACE_PMEM = 1 << ND_DEVICE_NAMESPACE_PMEM,
+ ND_DRIVER_NAMESPACE_BLOCK = 1 << ND_DEVICE_NAMESPACE_BLOCK,
};
#endif /* __NDCTL_H__ */
nd_pmem attaches to persistent memory regions and namespaces emitted by
the nd subsystem and, like the original pmem driver, presents the
system-physical-address range as a block device.
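As a side note on that block-device presentation, the geometry math in the
patch is simple enough to check in isolation. The following userspace sketch
mirrors the `set_capacity(disk, pmem->size >> 9)` and
`disk->first_minor = PMEM_MINORS * pmem->id` calculations below (the constants
are copied from the patch; the function names are hypothetical):

```c
#define PMEM_MINORS 16   /* minor numbers reserved per pmem disk, per the patch */
#define SECTOR_SHIFT 9   /* the block layer counts capacity in 512-byte sectors */

/* capacity in 512-byte sectors for a namespace spanning 'size' bytes */
static unsigned long long pmem_capacity_sectors(unsigned long long size)
{
	return size >> SECTOR_SHIFT;
}

/* first minor number for the disk whose ida-allocated id is 'id' */
static int pmem_first_minor(int id)
{
	return PMEM_MINORS * id;
}
```

Because ids now come from an IDA rather than a monotonic counter, minor
ranges are reclaimed when a namespace is disabled and re-used on re-enable.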
Cc: Andy Lutomirski <[email protected]>
Cc: Boaz Harrosh <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/Kconfig | 11 -------
drivers/block/Makefile | 1 -
drivers/block/nd/Kconfig | 17 +++++++++++
drivers/block/nd/Makefile | 3 ++
drivers/block/nd/pmem.c | 72 +++++++++++++++++++++++++++++++++++++++------
5 files changed, 83 insertions(+), 21 deletions(-)
rename drivers/block/{pmem.c => nd/pmem.c} (81%)
diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index dfe40e5ca9bd..1cef4ffb16c5 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -406,17 +406,6 @@ config BLK_DEV_RAM_DAX
and will prevent RAM block device backing store memory from being
allocated from highmem (only a problem for highmem systems).
-config BLK_DEV_PMEM
- tristate "Persistent memory block device support"
- help
- Saying Y here will allow you to use a contiguous range of reserved
- memory as one or more persistent block devices.
-
- To compile this driver as a module, choose M here: the module will be
- called 'pmem'.
-
- If unsure, say N.
-
config CDROM_PKTCDVD
tristate "Packet writing on CD/DVD media"
depends on !UML
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 18b27bb9cd2d..3a2f15be66a3 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -14,7 +14,6 @@ obj-$(CONFIG_PS3_VRAM) += ps3vram.o
obj-$(CONFIG_ATARI_FLOPPY) += ataflop.o
obj-$(CONFIG_AMIGA_Z2RAM) += z2ram.o
obj-$(CONFIG_BLK_DEV_RAM) += brd.o
-obj-$(CONFIG_BLK_DEV_PMEM) += pmem.o
obj-$(CONFIG_BLK_DEV_LOOP) += loop.o
obj-$(CONFIG_BLK_CPQ_DA) += cpqarray.o
obj-$(CONFIG_BLK_CPQ_CISS_DA) += cciss.o
diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
index 6c15d10bf4e0..38eae5f0ae4b 100644
--- a/drivers/block/nd/Kconfig
+++ b/drivers/block/nd/Kconfig
@@ -72,4 +72,21 @@ config NFIT_TEST
Say N unless you are doing development of the 'nd' subsystem.
+config BLK_DEV_PMEM
+ tristate "PMEM: Persistent memory block device support"
+ depends on ND_CORE || X86_PMEM_LEGACY
+ default ND_CORE
+ help
+ Memory ranges for PMEM are described by either an NFIT
+ (NVDIMM Firmware Interface Table, see CONFIG_NFIT_ACPI), a
+ non-standard OEM-specific E820 memory type (type-12, see
+	  CONFIG_X86_PMEM_LEGACY), or manually specified via the
+ 'memmap=nn[KMG]!ss[KMG]' kernel command line (see
+ Documentation/kernel-parameters.txt). This driver converts
+ these persistent memory ranges into block devices that are
+ capable of DAX (direct-access) file system mappings. See
+ Documentation/blockdev/nd.txt for more details.
+
+	  Say Y if you want to use an NVDIMM described by NFIT.
+
endif
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 769ddc34f974..c0194d52e5ad 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -16,6 +16,7 @@ endif
obj-$(CONFIG_ND_CORE) += nd.o
obj-$(CONFIG_NFIT_ACPI) += nd_acpi.o
+obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
nd_acpi-y := acpi.o
@@ -26,3 +27,5 @@ nd-y += dimm.o
nd-y += region_devs.o
nd-y += region.o
nd-y += namespace_devs.o
+
+nd_pmem-y := pmem.o
diff --git a/drivers/block/pmem.c b/drivers/block/nd/pmem.c
similarity index 81%
rename from drivers/block/pmem.c
rename to drivers/block/nd/pmem.c
index eabf4a8d0085..cd83a9a98d89 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/nd/pmem.c
@@ -1,7 +1,7 @@
/*
* Persistent Memory Driver
*
- * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2014-2015, Intel Corporation.
* Copyright (c) 2015, Christoph Hellwig <[email protected]>.
* Copyright (c) 2015, Boaz Harrosh <[email protected]>.
*
@@ -23,6 +23,7 @@
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/slab.h>
+#include <linux/nd.h>
#define PMEM_MINORS 16
@@ -34,10 +35,11 @@ struct pmem_device {
phys_addr_t phys_addr;
void *virt_addr;
size_t size;
+ int id;
};
static int pmem_major;
-static atomic_t pmem_index;
+static DEFINE_IDA(pmem_ida);
static void pmem_do_bvec(struct pmem_device *pmem, struct page *page,
unsigned int len, unsigned int off, int rw,
@@ -122,20 +124,26 @@ static struct pmem_device *pmem_alloc(struct device *dev, struct resource *res)
{
struct pmem_device *pmem;
struct gendisk *disk;
- int idx, err;
+ int err;
err = -ENOMEM;
pmem = kzalloc(sizeof(*pmem), GFP_KERNEL);
if (!pmem)
goto out;
+ pmem->id = ida_simple_get(&pmem_ida, 0, 0, GFP_KERNEL);
+ if (pmem->id < 0) {
+ err = pmem->id;
+ goto out_free_dev;
+ }
+
pmem->phys_addr = res->start;
pmem->size = resource_size(res);
err = -EINVAL;
if (!request_mem_region(pmem->phys_addr, pmem->size, "pmem")) {
dev_warn(dev, "could not reserve region [0x%pa:0x%zx]\n", &pmem->phys_addr, pmem->size);
- goto out_free_dev;
+ goto out_free_ida;
}
/*
@@ -159,15 +167,13 @@ static struct pmem_device *pmem_alloc(struct device *dev, struct resource *res)
if (!disk)
goto out_free_queue;
- idx = atomic_inc_return(&pmem_index) - 1;
-
disk->major = pmem_major;
- disk->first_minor = PMEM_MINORS * idx;
+ disk->first_minor = PMEM_MINORS * pmem->id;
disk->fops = &pmem_fops;
disk->private_data = pmem;
disk->queue = pmem->pmem_queue;
disk->flags = GENHD_FL_EXT_DEVT;
- sprintf(disk->disk_name, "pmem%d", idx);
+ sprintf(disk->disk_name, "pmem%d", pmem->id);
disk->driverfs_dev = dev;
set_capacity(disk, pmem->size >> 9);
pmem->pmem_disk = disk;
@@ -182,6 +188,8 @@ out_unmap:
iounmap(pmem->virt_addr);
out_release_region:
release_mem_region(pmem->phys_addr, pmem->size);
+out_free_ida:
+ ida_simple_remove(&pmem_ida, pmem->id);
out_free_dev:
kfree(pmem);
out:
@@ -195,6 +203,7 @@ static void pmem_free(struct pmem_device *pmem)
blk_cleanup_queue(pmem->pmem_queue);
iounmap(pmem->virt_addr);
release_mem_region(pmem->phys_addr, pmem->size);
+ ida_simple_remove(&pmem_ida, pmem->id);
kfree(pmem);
}
@@ -236,6 +245,39 @@ static struct platform_driver pmem_driver = {
},
};
+static int nd_pmem_probe(struct device *dev)
+{
+ struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
+ struct pmem_device *pmem;
+
+ pmem = pmem_alloc(dev, &nsio->res);
+ if (IS_ERR(pmem))
+ return PTR_ERR(pmem);
+
+ dev_set_drvdata(dev, pmem);
+
+ return 0;
+}
+
+static int nd_pmem_remove(struct device *dev)
+{
+ struct pmem_device *pmem = dev_get_drvdata(dev);
+
+ pmem_free(pmem);
+ return 0;
+}
+
+MODULE_ALIAS("pmem");
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_NAMESPACE_IO);
+static struct nd_device_driver nd_pmem_driver = {
+ .probe = nd_pmem_probe,
+ .remove = nd_pmem_remove,
+ .drv = {
+ .name = "pmem",
+ },
+ .type = ND_DRIVER_NAMESPACE_IO,
+};
+
static int __init pmem_init(void)
{
int error;
@@ -244,9 +286,20 @@ static int __init pmem_init(void)
if (pmem_major < 0)
return pmem_major;
+ error = nd_driver_register(&nd_pmem_driver);
+ if (error)
+ goto out_unregister_blkdev;
+
error = platform_driver_register(&pmem_driver);
if (error)
- unregister_blkdev(pmem_major, "pmem");
+ goto out_unregister_nd;
+
+ return 0;
+
+ out_unregister_nd:
+ driver_unregister(&nd_pmem_driver.drv);
+ out_unregister_blkdev:
+ unregister_blkdev(pmem_major, "pmem");
return error;
}
module_init(pmem_init);
@@ -254,6 +307,7 @@ module_init(pmem_init);
static void pmem_exit(void)
{
platform_driver_unregister(&pmem_driver);
+ driver_unregister(&nd_pmem_driver.drv);
unregister_blkdev(pmem_major, "pmem");
}
module_exit(pmem_exit);
On platforms that have firmware support for reading/writing per-dimm
label space, a portion of the dimm may be accessible via an interleave
set PMEM mapping in addition to the dimm's BLK (block-data-window
aperture(s)) interface. A label, stored in a "configuration data
region" on the dimm, disambiguates which dimm addresses are accessed
through which exclusive interface.
Add infrastructure that allows the kernel to block modifications to a
label in the set while any member dimm is active. Note that this is
meant only for enforcing "no modifications of active labels" via the
coarse ioctl command. Adding/deleting namespaces from an active
interleave set will only be possible via sysfs.
Another aspect of interleave-set management is verifying set integrity
when DIMMs in a set are physically re-ordered. For this purpose we
generate an "interleave-set cookie" that can be recorded in a label and
validated against the current configuration.
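For illustration, the cookie's checksum can be exercised in userspace. This
sketch mirrors the nd_fletcher64() loop added by this patch, keeping the
patch's exact semantics (the running sum of 32-bit words lands in the high
half of the result, the final word in the low half); only the function name
here is hypothetical:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/*
 * Userspace analogue of the patch's nd_fletcher64(): walk the buffer in
 * 32-bit words, accumulate the running sum in hi32, and return it in the
 * upper half of the result with the last word read in the lower half.
 */
static uint64_t cookie_fletcher64(const void *buf, size_t len)
{
	uint32_t lo32 = 0;
	uint64_t hi32 = 0;
	size_t i;

	for (i = 0; i + 4 <= len; i += 4) {
		memcpy(&lo32, (const char *)buf + i, 4);
		hi32 += lo32;
	}
	return hi32 << 32 | lo32;
}
```

Since the input (the sorted array of {region_spa_offset, serial_number}
mappings) is canonicalized before hashing, the same physical set always
yields the same cookie regardless of DIMM enumeration order.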
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/bus.c | 41 +++++++++
drivers/block/nd/core.c | 51 ++++++++++++
drivers/block/nd/dimm_devs.c | 18 ++++
drivers/block/nd/nd-private.h | 17 ++++
drivers/block/nd/nd.h | 4 +
drivers/block/nd/region_devs.c | 176 ++++++++++++++++++++++++++++++++++++++++
6 files changed, 305 insertions(+), 2 deletions(-)
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index c98fe05a4c9b..944d7d7845fe 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -79,7 +79,10 @@ static int nd_bus_probe(struct device *dev)
if (!try_module_get(provider))
return -ENXIO;
+ nd_region_probe_start(nd_bus, dev);
rc = nd_drv->probe(dev);
+ nd_region_probe_end(nd_bus, dev, rc);
+
dev_dbg(&nd_bus->dev, "%s.probe(%s) = %d\n", dev->driver->name,
dev_name(dev), rc);
if (rc != 0)
@@ -95,6 +98,8 @@ static int nd_bus_remove(struct device *dev)
int rc;
rc = nd_drv->remove(dev);
+ nd_region_notify_remove(nd_bus, dev, rc);
+
dev_dbg(&nd_bus->dev, "%s.remove(%s) = %d\n", dev->driver->name,
dev_name(dev), rc);
module_put(provider);
@@ -269,6 +274,33 @@ void nd_bus_destroy_ndctl(struct nd_bus *nd_bus)
device_destroy(nd_class, MKDEV(nd_bus_major, nd_bus->id));
}
+static void wait_nd_bus_probe_idle(struct nd_bus *nd_bus)
+{
+ do {
+ if (nd_bus->probe_active == 0)
+ break;
+ nd_bus_unlock(&nd_bus->dev);
+ wait_event(nd_bus->probe_wait, nd_bus->probe_active == 0);
+ nd_bus_lock(&nd_bus->dev);
+ } while (true);
+}
+
+/* set_config requires an idle interleave set */
+static int nd_cmd_clear_to_send(struct nd_dimm *nd_dimm, unsigned int cmd)
+{
+ struct nd_bus *nd_bus;
+
+ if (!nd_dimm || cmd != NFIT_CMD_SET_CONFIG_DATA)
+ return 0;
+
+ nd_bus = walk_to_nd_bus(&nd_dimm->dev);
+ wait_nd_bus_probe_idle(nd_bus);
+
+ if (atomic_read(&nd_dimm->busy))
+ return -EBUSY;
+ return 0;
+}
+
static int __nd_ioctl(struct nd_bus *nd_bus, struct nd_dimm *nd_dimm,
int read_only, unsigned int cmd, unsigned long arg)
{
@@ -399,11 +431,18 @@ static int __nd_ioctl(struct nd_bus *nd_bus, struct nd_dimm *nd_dimm,
goto out;
}
+ nd_bus_lock(&nd_bus->dev);
+ rc = nd_cmd_clear_to_send(nd_dimm, _IOC_NR(cmd));
+ if (rc)
+ goto out_unlock;
+
rc = nfit_desc->nfit_ctl(nfit_desc, nd_dimm, _IOC_NR(cmd), buf, buf_len);
if (rc < 0)
- goto out;
+ goto out_unlock;
if (copy_to_user(p, buf, buf_len))
rc = -EFAULT;
+ out_unlock:
+ nd_bus_unlock(&nd_bus->dev);
out:
if (is_vmalloc_addr(buf))
vfree(buf);
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index c795e8057061..976cd5e3ebaf 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -31,6 +31,36 @@ static bool warn_checksum;
module_param(warn_checksum, bool, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(warn_checksum, "Turn checksum errors into warnings");
+void nd_bus_lock(struct device *dev)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+
+ if (!nd_bus)
+ return;
+ mutex_lock(&nd_bus->reconfig_mutex);
+}
+EXPORT_SYMBOL(nd_bus_lock);
+
+void nd_bus_unlock(struct device *dev)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+
+ if (!nd_bus)
+ return;
+ mutex_unlock(&nd_bus->reconfig_mutex);
+}
+EXPORT_SYMBOL(nd_bus_unlock);
+
+bool is_nd_bus_locked(struct device *dev)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+
+ if (!nd_bus)
+ return false;
+ return mutex_is_locked(&nd_bus->reconfig_mutex);
+}
+EXPORT_SYMBOL(is_nd_bus_locked);
+
/**
* nd_dimm_by_handle - lookup an nd_dimm by its corresponding nfit_handle
* @nd_bus: parent bus of the dimm
@@ -49,6 +79,20 @@ struct nd_dimm *nd_dimm_by_handle(struct nd_bus *nd_bus, u32 nfit_handle)
return nd_dimm;
}
+u64 nd_fletcher64(void __iomem *addr, size_t len)
+{
+ u32 lo32 = 0;
+ u64 hi32 = 0;
+ int i;
+
+ for (i = 0; i < len; i += 4) {
+ lo32 = readl(addr + i);
+ hi32 += lo32;
+ }
+
+ return hi32 << 32 | lo32;
+}
+
static void nd_bus_release(struct device *dev)
{
struct nd_bus *nd_bus = container_of(dev, struct nd_bus, dev);
@@ -60,6 +104,7 @@ static void nd_bus_release(struct device *dev)
list_for_each_entry_safe(nd_spa, _spa, &nd_bus->spas, list) {
list_del_init(&nd_spa->list);
+ kfree(nd_spa->nd_set);
kfree(nd_spa);
}
list_for_each_entry_safe(nd_dcr, _dcr, &nd_bus->dcrs, list) {
@@ -205,8 +250,10 @@ static void *nd_bus_new(struct device *parent,
INIT_LIST_HEAD(&nd_bus->memdevs);
INIT_LIST_HEAD(&nd_bus->dimms);
INIT_LIST_HEAD(&nd_bus->list);
+ init_waitqueue_head(&nd_bus->probe_wait);
INIT_RADIX_TREE(&nd_bus->dimm_radix, GFP_KERNEL);
nd_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL);
+ mutex_init(&nd_bus->reconfig_mutex);
if (nd_bus->id < 0) {
kfree(nd_bus);
return NULL;
@@ -570,6 +617,10 @@ static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
if (rc)
goto err;
+ rc = nd_bus_init_interleave_sets(nd_bus);
+ if (rc)
+ goto err;
+
rc = nd_bus_create_ndctl(nd_bus);
if (rc)
goto err;
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index d15ca75804ac..6192d9c82b9b 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -287,6 +287,22 @@ static ssize_t commands_show(struct device *dev,
}
static DEVICE_ATTR_RO(commands);
+static ssize_t state_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct nd_dimm *nd_dimm = to_nd_dimm(dev);
+
+ /*
+ * The state may be in the process of changing, userspace should
+ * quiesce probing if it wants a static answer
+ */
+ nd_bus_lock(dev);
+ nd_bus_unlock(dev);
+ return sprintf(buf, "%s\n", atomic_read(&nd_dimm->busy)
+ ? "active" : "idle");
+}
+static DEVICE_ATTR_RO(state);
+
static struct attribute *nd_dimm_attributes[] = {
&dev_attr_handle.attr,
&dev_attr_phys_id.attr,
@@ -294,6 +310,7 @@ static struct attribute *nd_dimm_attributes[] = {
&dev_attr_device.attr,
&dev_attr_format.attr,
&dev_attr_serial.attr,
+ &dev_attr_state.attr,
&dev_attr_revision.attr,
&dev_attr_commands.attr,
NULL,
@@ -364,6 +381,7 @@ static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
if (nd_dimm->id < 0)
goto err_ida;
+ atomic_set(&nd_dimm->busy, 0);
nd_dimm->nd_mem = nd_mem;
dev = &nd_dimm->dev;
dev_set_name(dev, "nmem%d", nd_dimm->id);
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index db68e013b9d0..15ca7be507ce 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -15,6 +15,9 @@
#include <linux/radix-tree.h>
#include <linux/device.h>
#include <linux/sizes.h>
+#include <linux/mutex.h>
+#include <linux/io.h>
+#include "nfit.h"
extern struct list_head nd_bus_list;
extern struct mutex nd_bus_list_mutex;
@@ -33,6 +36,7 @@ enum {
struct nd_bus {
struct nfit_bus_descriptor *nfit_desc;
struct radix_tree_root dimm_radix;
+ wait_queue_head_t probe_wait;
struct module *module;
struct list_head memdevs;
struct list_head dimms;
@@ -41,7 +45,8 @@ struct nd_bus {
struct list_head bdws;
struct list_head list;
struct device dev;
- int id;
+ int id, probe_active;
+ struct mutex reconfig_mutex;
};
struct nd_dimm {
@@ -50,14 +55,20 @@ struct nd_dimm {
struct device dev;
void *provider_data;
int id, nfit_status;
+ atomic_t busy;
struct nd_dimm_delete {
struct nd_bus *nd_bus;
struct nd_mem *nd_mem;
} *del_info;
};
+struct nd_interleave_set {
+ u64 cookie;
+};
+
struct nd_spa {
struct nfit_spa __iomem *nfit_spa;
+ struct nd_interleave_set *nd_set;
struct list_head list;
};
@@ -103,9 +114,13 @@ int __init nd_dimm_init(void);
int __init nd_region_init(void);
void nd_dimm_exit(void);
int nd_region_exit(void);
+void nd_region_probe_start(struct nd_bus *nd_bus, struct device *dev);
+void nd_region_probe_end(struct nd_bus *nd_bus, struct device *dev, int rc);
+void nd_region_notify_remove(struct nd_bus *nd_bus, struct device *dev, int rc);
int nd_bus_create_ndctl(struct nd_bus *nd_bus);
void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
int nd_bus_register_dimms(struct nd_bus *nd_bus);
int nd_bus_register_regions(struct nd_bus *nd_bus);
+int nd_bus_init_interleave_sets(struct nd_bus *nd_bus);
int nd_match_dimm(struct device *dev, void *data);
#endif /* __ND_PRIVATE_H__ */
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index 4ac7ff2af4c8..4deed46884c1 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -50,6 +50,7 @@ enum nd_async_mode {
void nd_device_register(struct device *dev);
void nd_device_unregister(struct device *dev, enum nd_async_mode mode);
+u64 nd_fletcher64(void __iomem *addr, size_t len);
extern struct attribute_group nd_device_attribute_group;
struct nd_dimm;
u32 to_nfit_handle(struct nd_dimm *nd_dimm);
@@ -63,4 +64,7 @@ int nd_dimm_firmware_status(struct device *dev);
struct nd_region *to_nd_region(struct device *dev);
int nd_region_to_namespace_type(struct nd_region *nd_region);
int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
+void nd_bus_lock(struct device *dev);
+void nd_bus_unlock(struct device *dev);
+bool is_nd_bus_locked(struct device *dev);
#endif /* __ND_H__ */
diff --git a/drivers/block/nd/region_devs.c b/drivers/block/nd/region_devs.c
index 03b192368e1a..13f45be755a5 100644
--- a/drivers/block/nd/region_devs.c
+++ b/drivers/block/nd/region_devs.c
@@ -10,7 +10,10 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*/
+#include <linux/scatterlist.h>
+#include <linux/sched.h>
#include <linux/slab.h>
+#include <linux/sort.h>
#include <linux/io.h>
#include "nd-private.h"
#include "nfit.h"
@@ -137,6 +140,21 @@ static ssize_t nstype_show(struct device *dev,
}
static DEVICE_ATTR_RO(nstype);
+static ssize_t set_cookie_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+ struct nd_spa *nd_spa = nd_region->nd_spa;
+
+ if (is_nd_pmem(dev) && nd_spa->nd_set)
+ /* pass, should be precluded by nd_region_visible */;
+ else
+ return -ENXIO;
+
+ return sprintf(buf, "%#llx\n", nd_spa->nd_set->cookie);
+}
+static DEVICE_ATTR_RO(set_cookie);
+
static ssize_t init_namespaces_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -154,12 +172,29 @@ static struct attribute *nd_region_attributes[] = {
&dev_attr_nstype.attr,
&dev_attr_mappings.attr,
&dev_attr_spa_index.attr,
+ &dev_attr_set_cookie.attr,
&dev_attr_init_namespaces.attr,
NULL,
};
+static umode_t nd_region_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+ struct device *dev = container_of(kobj, typeof(*dev), kobj);
+ struct nd_region *nd_region = to_nd_region(dev);
+ struct nd_spa *nd_spa = nd_region->nd_spa;
+
+ if (a != &dev_attr_set_cookie.attr)
+ return a->mode;
+
+ if (is_nd_pmem(dev) && nd_spa->nd_set)
+ return a->mode;
+
+ return 0;
+}
+
static struct attribute_group nd_region_attribute_group = {
.attrs = nd_region_attributes,
+ .is_visible = nd_region_visible,
};
/*
@@ -203,6 +238,147 @@ static struct nd_mem *nd_memdev_to_mem(struct nd_bus *nd_bus,
return NULL;
}
+/* enough info to uniquely specify an interleave set */
+struct nd_set_info {
+ struct nd_set_info_map {
+ u64 region_spa_offset;
+ u32 serial_number;
+ u32 pad;
+ } mapping[0];
+};
+
+static size_t sizeof_nd_set_info(int num_mappings)
+{
+ return sizeof(struct nd_set_info)
+ + num_mappings * sizeof(struct nd_set_info_map);
+}
+
+static int cmp_map(const void *m0, const void *m1)
+{
+ const struct nd_set_info_map *map0 = m0;
+ const struct nd_set_info_map *map1 = m1;
+
+ return memcmp(&map0->region_spa_offset, &map1->region_spa_offset,
+ sizeof(u64));
+}
+
+static int init_interleave_set(struct nd_bus *nd_bus,
+ struct nd_interleave_set *nd_set, struct nd_spa *nd_spa)
+{
+ u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+ int num_mappings = num_nd_mem(nd_bus, spa_index);
+ struct nd_set_info *info;
+ int i;
+
+ info = kzalloc(sizeof_nd_set_info(num_mappings), GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+ for (i = 0; i < num_mappings; i++) {
+ struct nd_set_info_map *map = &info->mapping[i];
+ struct nd_memdev *nd_memdev = nd_memdev_from_spa(nd_bus,
+ spa_index, i);
+ struct nd_mem *nd_mem = nd_memdev_to_mem(nd_bus, nd_memdev);
+
+ if (!nd_mem) {
+ dev_err(&nd_bus->dev, "%s: failed to find DCR\n",
+ __func__);
+ kfree(info);
+ return -ENODEV;
+ }
+
+ map->region_spa_offset = readl(
+ &nd_memdev->nfit_mem->region_spa_offset);
+ map->serial_number = readl(&nd_mem->nfit_dcr->serial_number);
+ }
+
+ sort(&info->mapping[0], num_mappings, sizeof(struct nd_set_info_map),
+ cmp_map, NULL);
+ nd_set->cookie = nd_fletcher64(info, sizeof_nd_set_info(num_mappings));
+
+ kfree(info);
+
+ return 0;
+}
+
+int nd_bus_init_interleave_sets(struct nd_bus *nd_bus)
+{
+ struct nd_spa *nd_spa;
+ int rc = 0;
+
+ /* PMEM interleave sets */
+ list_for_each_entry(nd_spa, &nd_bus->spas, list) {
+ u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+ int spa_type = nfit_spa_type(nd_spa->nfit_spa);
+ struct nd_interleave_set *nd_set;
+
+ if (spa_type != NFIT_SPA_PM)
+ continue;
+ if (nd_memdev_from_spa(nd_bus, spa_index, 0) == NULL)
+ continue;
+ nd_set = kzalloc(sizeof(*nd_set), GFP_KERNEL);
+ if (!nd_set) {
+ rc = -ENOMEM;
+ break;
+ }
+ nd_spa->nd_set = nd_set;
+
+ rc = init_interleave_set(nd_bus, nd_set, nd_spa);
+ if (rc)
+ break;
+ }
+
+ return rc;
+}
+
+/*
+ * Upon successful probe/remove, take/release a reference on the
+ * associated interleave set (if present)
+ */
+static void nd_region_notify_driver_action(struct nd_bus *nd_bus,
+ struct device *dev, int rc, bool probe)
+{
+ if (rc)
+ return;
+
+ if (is_nd_pmem(dev) || is_nd_blk(dev)) {
+ struct nd_region *nd_region = to_nd_region(dev);
+ int i;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
+
+ if (probe)
+ atomic_inc(&nd_dimm->busy);
+ else
+ atomic_dec(&nd_dimm->busy);
+ }
+ }
+}
+
+void nd_region_probe_start(struct nd_bus *nd_bus, struct device *dev)
+{
+ nd_bus_lock(&nd_bus->dev);
+ nd_bus->probe_active++;
+ nd_bus_unlock(&nd_bus->dev);
+}
+
+void nd_region_probe_end(struct nd_bus *nd_bus, struct device *dev, int rc)
+{
+ nd_bus_lock(&nd_bus->dev);
+ nd_region_notify_driver_action(nd_bus, dev, rc, true);
+ if (--nd_bus->probe_active == 0)
+ wake_up(&nd_bus->probe_wait);
+ nd_bus_unlock(&nd_bus->dev);
+}
+
+void nd_region_notify_remove(struct nd_bus *nd_bus, struct device *dev, int rc)
+{
+ nd_bus_lock(dev);
+ nd_region_notify_driver_action(nd_bus, dev, rc, false);
+ nd_bus_unlock(dev);
+}
+
static ssize_t mappingN(struct device *dev, char *buf, int n)
{
struct nd_region *nd_region = to_nd_region(dev);
The on-media label format consists of two index blocks followed by an array
of labels. None of these structures are ever updated in place. A
sequence number tracks the current active index and the next one to
write, while labels are written to free slots.
+------------+
| |
| nsindex0 |
| |
+------------+
| |
| nsindex1 |
| |
+------------+
| label0 |
+------------+
| label1 |
+------------+
| |
....nslot...
| |
+------------+
| labelN |
+------------+
After reading valid labels, store the dpa ranges they claim into
per-dimm resource trees.
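The diagram above hinges on sequence numbers selecting between nsindex0 and
nsindex1. A small sketch may help; it follows the 2-bit cyclic scheme of the
nd_inc_seq()/best_seq() helpers added later in this patch (valid values are
1, 2, 3 with 3 wrapping back to 1, and 0 meaning "invalid"):

```c
/*
 * 2-bit cyclic sequence numbers: the successor of 1 is 2, of 2 is 3,
 * and 3 wraps back to 1; 0 is reserved to mean "invalid".
 */
static unsigned nd_inc_seq(unsigned seq)
{
	static const unsigned next[] = { 0, 2, 3, 1 };

	return next[seq & 3];
}

/* return the newer of two sequence numbers; an invalid (0) entry loses */
static unsigned best_seq(unsigned a, unsigned b)
{
	a &= 3;
	b &= 3;

	if (a == 0 || a == b)
		return b;
	else if (b == 0)
		return a;
	else if (nd_inc_seq(a) == b)
		return b;
	else
		return a;
}
```

With this scheme an interrupted update is always recoverable: the index
block carrying the older sequence number is the stale copy and becomes the
next write target.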
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Makefile | 1
drivers/block/nd/dimm.c | 25 +++-
drivers/block/nd/dimm_devs.c | 6 +
drivers/block/nd/label.c | 291 ++++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/label.h | 129 +++++++++++++++++++
drivers/block/nd/nd.h | 45 ++++++
include/uapi/linux/ndctl.h | 1
7 files changed, 495 insertions(+), 3 deletions(-)
create mode 100644 drivers/block/nd/label.c
create mode 100644 drivers/block/nd/label.h
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index c0194d52e5ad..93856f1c9dbd 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -27,5 +27,6 @@ nd-y += dimm.o
nd-y += region_devs.o
nd-y += region.o
nd-y += namespace_devs.o
+nd-y += label.o
nd_pmem-y := pmem.o
diff --git a/drivers/block/nd/dimm.c b/drivers/block/nd/dimm.c
index 7e043c0c1bf5..ccc96d8fe2e7 100644
--- a/drivers/block/nd/dimm.c
+++ b/drivers/block/nd/dimm.c
@@ -18,6 +18,7 @@
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/nd.h>
+#include "label.h"
#include "nd.h"
static bool force_enable_dimms;
@@ -53,6 +54,12 @@ static int nd_dimm_probe(struct device *dev)
return -ENOMEM;
dev_set_drvdata(dev, ndd);
+ ndd->dpa.name = dev_name(dev);
+ ndd->ns_current = -1;
+ ndd->ns_next = -1;
+ ndd->dpa.start = 0;
+ ndd->dpa.end = -1;
+ ndd->dev = dev;
rc = nd_dimm_init_nsarea(ndd);
if (rc)
@@ -64,18 +71,34 @@ static int nd_dimm_probe(struct device *dev)
dev_dbg(dev, "config data size: %d\n", ndd->nsarea.config_size);
+ nd_bus_lock(dev);
+ ndd->ns_current = nd_label_validate(ndd);
+ ndd->ns_next = nd_label_next_nsindex(ndd->ns_current);
+ nd_label_copy(ndd, to_next_namespace_index(ndd),
+ to_current_namespace_index(ndd));
+ rc = nd_label_reserve_dpa(ndd);
+ nd_bus_unlock(dev);
+
+ if (rc)
+ goto err;
+
return 0;
err:
free_data(ndd);
return rc;
-
}
static int nd_dimm_remove(struct device *dev)
{
struct nd_dimm_drvdata *ndd = dev_get_drvdata(dev);
+ struct resource *res, *_r;
+ nd_bus_lock(dev);
+ dev_set_drvdata(dev, NULL);
+ for_each_dpa_resource_safe(ndd, res, _r)
+ __release_region(&ndd->dpa, res->start, resource_size(res));
+ nd_bus_unlock(dev);
free_data(ndd);
return 0;
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index 6192d9c82b9b..652dee210fe8 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -94,8 +94,12 @@ int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd)
if (ndd->data)
return 0;
- if (ndd->nsarea.status || ndd->nsarea.max_xfer == 0)
+ if (ndd->nsarea.status || ndd->nsarea.max_xfer == 0
+ || ndd->nsarea.config_size < ND_LABEL_MIN_SIZE) {
+ dev_dbg(ndd->dev, "failed to init config data area: (%d:%d)\n",
+ ndd->nsarea.max_xfer, ndd->nsarea.config_size);
return -ENXIO;
+ }
ndd->data = kmalloc(ndd->nsarea.config_size, GFP_KERNEL);
if (!ndd->data)
diff --git a/drivers/block/nd/label.c b/drivers/block/nd/label.c
new file mode 100644
index 000000000000..e791ea8bbdde
--- /dev/null
+++ b/drivers/block/nd/label.c
@@ -0,0 +1,291 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/device.h>
+#include <linux/ndctl.h>
+#include <linux/io.h>
+#include <linux/nd.h>
+#include "nd-private.h"
+#include "label.h"
+#include "nd.h"
+
+#include <asm-generic/io-64-nonatomic-lo-hi.h>
+
+static u32 best_seq(u32 a, u32 b)
+{
+ a &= NSINDEX_SEQ_MASK;
+ b &= NSINDEX_SEQ_MASK;
+
+ if (a == 0 || a == b)
+ return b;
+ else if (b == 0)
+ return a;
+ else if (nd_inc_seq(a) == b)
+ return b;
+ else
+ return a;
+}
+
+size_t sizeof_namespace_index(struct nd_dimm_drvdata *ndd)
+{
+ u32 index_span;
+
+ if (ndd->nsindex_size)
+ return ndd->nsindex_size;
+
+ /*
+ * The minimum index space is 512 bytes, which can describe
+ * ~1400 labels at less than a byte of overhead per label.
+ * Round up to a byte of overhead per label and determine the
+ * size of the index region. Yes, this starts to waste space
+ * at larger config_sizes, but it's unlikely we'll ever see
+ * anything but 128K.
+ */
+ index_span = ndd->nsarea.config_size / 129;
+ index_span /= NSINDEX_ALIGN * 2;
+ ndd->nsindex_size = index_span * NSINDEX_ALIGN;
+
+ return ndd->nsindex_size;
+}
+
+int nd_label_validate(struct nd_dimm_drvdata *ndd)
+{
+ /*
+ * The on-media label format consists of two index blocks followed
+ * by an array of labels. None of these structures are ever
+ * updated in place. A sequence number tracks the current
+ * active index and the next one to write, while labels are
+ * written to free slots.
+ *
+ * +------------+
+ * | |
+ * | nsindex0 |
+ * | |
+ * +------------+
+ * | |
+ * | nsindex1 |
+ * | |
+ * +------------+
+ * | label0 |
+ * +------------+
+ * | label1 |
+ * +------------+
+ * | |
+ * ....nslot...
+ * | |
+ * +------------+
+ * | labelN |
+ * +------------+
+ */
+ struct nd_namespace_index __iomem *nsindex[] = {
+ to_namespace_index(ndd, 0),
+ to_namespace_index(ndd, 1),
+ };
+ const int num_index = ARRAY_SIZE(nsindex);
+ struct device *dev = ndd->dev;
+ bool valid[] = { false, false };
+ int i, num_valid = 0;
+ u32 seq;
+
+ for (i = 0; i < num_index; i++) {
+ u64 sum_save, sum;
+ u8 sig[NSINDEX_SIG_LEN];
+
+ memcpy_fromio(sig, nsindex[i]->sig, NSINDEX_SIG_LEN);
+ if (memcmp(sig, NSINDEX_SIGNATURE, NSINDEX_SIG_LEN) != 0) {
+ dev_dbg(dev, "%s: nsindex%d signature invalid\n",
+ __func__, i);
+ continue;
+ }
+ sum_save = readq(&nsindex[i]->checksum);
+ writeq(0, &nsindex[i]->checksum);
+ sum = nd_fletcher64(nsindex[i], sizeof_namespace_index(ndd));
+ writeq(sum_save, &nsindex[i]->checksum);
+ if (sum != sum_save) {
+ dev_dbg(dev, "%s: nsindex%d checksum invalid\n",
+ __func__, i);
+ continue;
+ }
+ if ((readl(&nsindex[i]->seq) & NSINDEX_SEQ_MASK) == 0) {
+ dev_dbg(dev, "%s: nsindex%d sequence: %#x invalid\n",
+ __func__, i, readl(&nsindex[i]->seq));
+ continue;
+ }
+
+ /* sanity check the index against expected values */
+ if (readq(&nsindex[i]->myoff)
+ != i * sizeof_namespace_index(ndd)) {
+ dev_dbg(dev, "%s: nsindex%d myoff: %#llx invalid\n",
+ __func__, i, (unsigned long long)
+ readq(&nsindex[i]->myoff));
+ continue;
+ }
+ if (readq(&nsindex[i]->otheroff)
+ != (!i) * sizeof_namespace_index(ndd)) {
+ dev_dbg(dev, "%s: nsindex%d otheroff: %#llx invalid\n",
+ __func__, i, (unsigned long long)
+ readq(&nsindex[i]->otheroff));
+ continue;
+ }
+ if (readq(&nsindex[i]->mysize) > sizeof_namespace_index(ndd)
+ || readq(&nsindex[i]->mysize)
+ < sizeof(struct nd_namespace_index)) {
+ dev_dbg(dev, "%s: nsindex%d mysize: %#llx invalid\n",
+ __func__, i, (unsigned long long)
+ readq(&nsindex[i]->mysize));
+ continue;
+ }
+ if (readl(&nsindex[i]->nslot) * sizeof(struct nd_namespace_label)
+ + 2 * sizeof_namespace_index(ndd)
+ > ndd->nsarea.config_size) {
+ dev_dbg(dev, "%s: nsindex%d nslot: %u invalid, config_size: %#x\n",
+ __func__, i, readl(&nsindex[i]->nslot),
+ ndd->nsarea.config_size);
+ continue;
+ }
+ valid[i] = true;
+ num_valid++;
+ }
+
+ switch (num_valid) {
+ case 0:
+ break;
+ case 1:
+ for (i = 0; i < num_index; i++)
+ if (valid[i])
+ return i;
+ /* can't have num_valid > 0 but valid[] = { false, false } */
+ WARN_ON(1);
+ break;
+ default:
+ /* pick the best index... */
+ seq = best_seq(readl(&nsindex[0]->seq), readl(&nsindex[1]->seq));
+ if (seq == (readl(&nsindex[1]->seq) & NSINDEX_SEQ_MASK))
+ return 1;
+ else
+ return 0;
+ break;
+ }
+
+ return -1;
+}
+
+void nd_label_copy(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_index __iomem *dst,
+ struct nd_namespace_index __iomem *src)
+{
+ void *s, *d;
+
+ if (!dst || !src)
+ return;
+
+ d = (void * __force) dst;
+ s = (void * __force) src;
+ memcpy(d, s, sizeof_namespace_index(ndd));
+}
+
+static struct nd_namespace_label __iomem *nd_label_base(struct nd_dimm_drvdata *ndd)
+{
+ void *base = to_namespace_index(ndd, 0);
+
+ return base + 2 * sizeof_namespace_index(ndd);
+}
+
+#define for_each_clear_bit_le(bit, addr, size) \
+ for ((bit) = find_next_zero_bit_le((addr), (size), 0); \
+ (bit) < (size); \
+ (bit) = find_next_zero_bit_le((addr), (size), (bit) + 1))
+
+/**
+ * preamble_current - common variable initialization for nd_label_* routines
+ * @ndd: dimm driver data containing the relevant label set
+ * @nsindex: on return set to the currently active namespace index
+ * @free: on return set to the free label bitmap in the index
+ * @nslot: on return set to the number of slots in the label space
+ */
+static bool preamble_current(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_index **nsindex,
+ unsigned long **free, u32 *nslot)
+{
+ *nsindex = to_current_namespace_index(ndd);
+ if (*nsindex == NULL)
+ return false;
+
+ *free = (unsigned long __force *) (*nsindex)->free;
+ *nslot = readl(&(*nsindex)->nslot);
+
+ return true;
+}
+
+static char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags)
+{
+ if (!label_id || !uuid)
+ return NULL;
+ snprintf(label_id->id, ND_LABEL_ID_SIZE, "%s-%pUb",
+ flags & NSLABEL_FLAG_LOCAL ? "blk" : "pmem", uuid);
+ return label_id->id;
+}
+
+static bool slot_valid(struct nd_namespace_label __iomem *nd_label, u32 slot)
+{
+ /* check that the label was written to the slot it records */
+ if (slot != readl(&nd_label->slot))
+ return false;
+
+ /* check that DPA allocations are page aligned */
+ if ((readq(&nd_label->dpa) | readq(&nd_label->rawsize)) % SZ_4K)
+ return false;
+
+ return true;
+}
+
+int nd_label_reserve_dpa(struct nd_dimm_drvdata *ndd)
+{
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free;
+ u32 nslot, slot;
+
+ if (!preamble_current(ndd, &nsindex, &free, &nslot))
+ return 0; /* no label, nothing to reserve */
+
+ for_each_clear_bit_le(slot, free, nslot) {
+ struct nd_namespace_label __iomem *nd_label;
+ struct nd_region *nd_region = NULL;
+ u8 label_uuid[NSLABEL_UUID_LEN];
+ struct nd_label_id *label_id;
+ struct resource *res;
+ u32 flags;
+
+ nd_label = nd_label_base(ndd) + slot;
+
+ if (!slot_valid(nd_label, slot))
+ continue;
+
+ label_id = devm_kzalloc(ndd->dev, sizeof(*label_id),
+ GFP_KERNEL);
+ if (!label_id)
+ return -ENOMEM;
+ memcpy_fromio(label_uuid, nd_label->uuid,
+ NSLABEL_UUID_LEN);
+ flags = readl(&nd_label->flags);
+ res = __request_region(&ndd->dpa, readq(&nd_label->dpa),
+ readq(&nd_label->rawsize),
+ nd_label_gen_id(label_id, label_uuid, flags), 0);
+ nd_dbg_dpa(nd_region, ndd, res, "reserve\n");
+ if (!res)
+ return -EBUSY;
+ }
+
+ return 0;
+}
diff --git a/drivers/block/nd/label.h b/drivers/block/nd/label.h
new file mode 100644
index 000000000000..79ed885a43c0
--- /dev/null
+++ b/drivers/block/nd/label.h
@@ -0,0 +1,129 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __LABEL_H__
+#define __LABEL_H__
+
+#include <linux/ndctl.h>
+#include <linux/sizes.h>
+#include <linux/io.h>
+
+enum {
+ NSINDEX_SIG_LEN = 16,
+ NSINDEX_ALIGN = 256,
+ NSINDEX_SEQ_MASK = 0x3,
+ NSLABEL_UUID_LEN = 16,
+ NSLABEL_NAME_LEN = 64,
+ NSLABEL_FLAG_ROLABEL = 0x1, /* read-only label */
+ NSLABEL_FLAG_LOCAL = 0x2, /* DIMM-local namespace */
+ NSLABEL_FLAG_BTT = 0x4, /* namespace contains a BTT */
+ NSLABEL_FLAG_UPDATING = 0x8, /* label being updated */
+ BTT_ALIGN = 4096, /* all btt structures */
+ BTTINFO_SIG_LEN = 16,
+ BTTINFO_UUID_LEN = 16,
+ BTTINFO_FLAG_ERROR = 0x1, /* error state (read-only) */
+ BTTINFO_MAJOR_VERSION = 1,
+ ND_LABEL_MIN_SIZE = 512 * 129, /* see sizeof_namespace_index() */
+ ND_LABEL_ID_SIZE = 50,
+};
+
+static const char NSINDEX_SIGNATURE[] = "NAMESPACE_INDEX\0";
+
+/**
+ * struct nd_namespace_index - label set superblock
+ * @sig: NAMESPACE_INDEX\0
+ * @flags: placeholder
+ * @seq: sequence number for this index
+ * @myoff: offset of this index in label area
+ * @mysize: size of this index struct
+ * @otheroff: offset of other index
+ * @labeloff: offset of first label slot
+ * @nslot: total number of label slots
+ * @major: label area major version
+ * @minor: label area minor version
+ * @checksum: fletcher64 of all fields
+ * @free: bitmap, nlabel bits
+ *
+ * The size of free[] is rounded up so the total struct size is a
+ * multiple of NSINDEX_ALIGN bytes. Any bits this allocates beyond
+ * nlabel bits must be zero.
+ */
+struct nd_namespace_index {
+ u8 sig[NSINDEX_SIG_LEN];
+ __le32 flags;
+ __le32 seq;
+ __le64 myoff;
+ __le64 mysize;
+ __le64 otheroff;
+ __le64 labeloff;
+ __le32 nslot;
+ __le16 major;
+ __le16 minor;
+ __le64 checksum;
+ u8 free[0];
+};
+
+/**
+ * struct nd_namespace_label - namespace superblock
+ * @uuid: UUID per RFC 4122
+ * @name: optional name (NULL-terminated)
+ * @flags: see NSLABEL_FLAG_*
+ * @nlabel: num labels to describe this ns
+ * @position: labels position in set
+ * @isetcookie: interleave set cookie
+ * @lbasize: LBA size in bytes or 0 for pmem
+ * @dpa: DPA of NVM range on this DIMM
+ * @rawsize: size of namespace
+ * @slot: slot of this label in label area
+ * @unused: must be zero
+ */
+struct nd_namespace_label {
+ u8 uuid[NSLABEL_UUID_LEN];
+ u8 name[NSLABEL_NAME_LEN];
+ __le32 flags;
+ __le16 nlabel;
+ __le16 position;
+ __le64 isetcookie;
+ __le64 lbasize;
+ __le64 dpa;
+ __le64 rawsize;
+ __le32 slot;
+ __le32 unused;
+};
+
+/**
+ * struct nd_label_id - identifier string for dpa allocation
+ * @id: "{blk|pmem}-<namespace uuid>"
+ */
+struct nd_label_id {
+ char id[ND_LABEL_ID_SIZE];
+};
+
+/*
+ * If the 'best' index is invalid, so is the 'next' index. Otherwise,
+ * the next index is MOD(index+1, 2)
+ */
+static inline int nd_label_next_nsindex(int index)
+{
+ if (index < 0)
+ return -1;
+
+ return (index + 1) % 2;
+}
+
+struct nd_dimm_drvdata;
+int nd_label_validate(struct nd_dimm_drvdata *ndd);
+void nd_label_copy(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_index *dst,
+ struct nd_namespace_index *src);
+size_t sizeof_namespace_index(struct nd_dimm_drvdata *ndd);
+#endif /* __LABEL_H__ */
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index 4deed46884c1..f8dee1df5e6a 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -15,11 +15,15 @@
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/ndctl.h>
+#include "label.h"
struct nd_dimm_drvdata {
struct device *dev;
+ int nsindex_size;
struct nfit_cmd_get_config_size nsarea;
void *data;
+ int ns_current, ns_next;
+ struct resource dpa;
};
struct nd_region_namespaces {
@@ -27,12 +31,43 @@ struct nd_region_namespaces {
int active;
};
+static inline struct nd_namespace_index __iomem *to_namespace_index(
+ struct nd_dimm_drvdata *ndd, int i)
+{
+ if (i < 0)
+ return NULL;
+
+ return ((void __iomem *) ndd->data + sizeof_namespace_index(ndd) * i);
+}
+
+static inline struct nd_namespace_index __iomem *to_current_namespace_index(
+ struct nd_dimm_drvdata *ndd)
+{
+ return to_namespace_index(ndd, ndd->ns_current);
+}
+
+static inline struct nd_namespace_index __iomem *to_next_namespace_index(
+ struct nd_dimm_drvdata *ndd)
+{
+ return to_namespace_index(ndd, ndd->ns_next);
+}
+
struct nd_mapping {
struct nd_dimm *nd_dimm;
u64 start;
u64 size;
};
+#define nd_dbg_dpa(r, d, res, fmt, arg...) \
+ dev_dbg((r) ? &(r)->dev : (d)->dev, "%s: %.13s: %#llx @ %#llx " fmt, \
+ (r) ? dev_name((d)->dev) : "", res ? res->name : "null", \
+ (unsigned long long) (res ? resource_size(res) : 0), \
+ (unsigned long long) (res ? res->start : 0), ##arg)
+
+#define for_each_dpa_resource_safe(ndd, res, next) \
+ for (res = (ndd)->dpa.child, next = res ? res->sibling : NULL; \
+ res; res = next, next = next ? next->sibling : NULL)
+
struct nd_region {
struct device dev;
struct nd_spa *nd_spa;
@@ -43,6 +78,15 @@ struct nd_region {
struct nd_mapping mapping[0];
};
+/*
+ * Look up the next value in the repeating sequence 01, 10, 11.
+ */
+static inline unsigned nd_inc_seq(unsigned seq)
+{
+ static const unsigned next[] = { 0, 2, 3, 1 };
+
+ return next[seq & 3];
+}
enum nd_async_mode {
ND_SYNC,
ND_ASYNC,
@@ -67,4 +111,5 @@ int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
void nd_bus_lock(struct device *dev);
void nd_bus_unlock(struct device *dev);
bool is_nd_bus_locked(struct device *dev);
+int nd_label_reserve_dpa(struct nd_dimm_drvdata *ndd);
#endif /* __ND_H__ */
diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
index 0ccc0f2e5765..097e67a8d477 100644
--- a/include/uapi/linux/ndctl.h
+++ b/include/uapi/linux/ndctl.h
@@ -175,7 +175,6 @@ static inline const char *nfit_dimm_cmd_name(unsigned cmd)
#define NFIT_IOCTL_ARS_QUERY _IOWR(ND_IOCTL, NFIT_CMD_ARS_QUERY,\
struct nfit_cmd_ars_query)
-
#define ND_DEVICE_DIMM 1 /* nd_dimm: container for "config data" */
#define ND_DEVICE_REGION_PMEM 2 /* nd_region: (parent of pmem namespaces) */
#define ND_DEVICE_REGION_BLOCK 3 /* nd_region: (parent of block namespaces) */
A complete label set is a PMEM-label per dimm where all the UUIDs
match and the interleave set cookie matches an active interleave set.
Present a sysfs ABI for manipulation of a PMEM-namespace's 'alt_name',
'uuid', and 'size' attributes. A later patch will make these settings
persistent by writing back the label.
Note that PMEM allocations grow forwards from the start of an interleave
set (lowest dimm-physical-address (DPA)). BLK-namespaces that alias
with a PMEM interleave set will grow allocations backward from the
highest DPA.
Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/bus.c | 6
drivers/block/nd/core.c | 64 ++
drivers/block/nd/dimm.c | 2
drivers/block/nd/dimm_devs.c | 127 +++++
drivers/block/nd/label.c | 54 ++
drivers/block/nd/label.h | 3
drivers/block/nd/namespace_devs.c | 1020 +++++++++++++++++++++++++++++++++++++
drivers/block/nd/nd-private.h | 14 +
drivers/block/nd/nd.h | 33 +
drivers/block/nd/pmem.c | 22 +
drivers/block/nd/region_devs.c | 147 +++++
include/linux/nd.h | 24 +
include/uapi/linux/ndctl.h | 4
13 files changed, 1508 insertions(+), 12 deletions(-)
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index 944d7d7845fe..8e70098b6cb0 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -274,8 +274,10 @@ void nd_bus_destroy_ndctl(struct nd_bus *nd_bus)
device_destroy(nd_class, MKDEV(nd_bus_major, nd_bus->id));
}
-static void wait_nd_bus_probe_idle(struct nd_bus *nd_bus)
+void wait_nd_bus_probe_idle(struct device *dev)
{
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+
do {
if (nd_bus->probe_active == 0)
break;
@@ -294,7 +296,7 @@ static int nd_cmd_clear_to_send(struct nd_dimm *nd_dimm, unsigned int cmd)
return 0;
nd_bus = walk_to_nd_bus(&nd_dimm->dev);
- wait_nd_bus_probe_idle(nd_bus);
+ wait_nd_bus_probe_idle(&nd_bus->dev);
if (atomic_read(&nd_dimm->busy))
return -EBUSY;
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 976cd5e3ebaf..560ed5555496 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -14,6 +14,7 @@
#include <linux/export.h>
#include <linux/module.h>
#include <linux/device.h>
+#include <linux/ctype.h>
#include <linux/ndctl.h>
#include <linux/mutex.h>
#include <linux/slab.h>
@@ -149,6 +150,69 @@ struct nd_bus *walk_to_nd_bus(struct device *nd_dev)
return NULL;
}
+static bool is_uuid_sep(char sep)
+{
+ if (sep == '\n' || sep == '-' || sep == ':' || sep == '\0')
+ return true;
+ return false;
+}
+
+static int nd_uuid_parse(struct device *dev, u8 *uuid_out, const char *buf,
+ size_t len)
+{
+ const char *str = buf;
+ u8 uuid[16];
+ int i;
+
+ for (i = 0; i < 16; i++) {
+ if (!isxdigit(str[0]) || !isxdigit(str[1])) {
+ dev_dbg(dev, "%s: pos: %d buf[%zd]: %c buf[%zd]: %c\n",
+ __func__, i, str - buf, str[0],
+ str + 1 - buf, str[1]);
+ return -EINVAL;
+ }
+
+ uuid[i] = (hex_to_bin(str[0]) << 4) | hex_to_bin(str[1]);
+ str += 2;
+ if (is_uuid_sep(*str))
+ str++;
+ }
+
+ memcpy(uuid_out, uuid, sizeof(uuid));
+ return 0;
+}
+
+/**
+ * nd_uuid_store: common implementation for writing 'uuid' sysfs attributes
+ * @dev: container device for the uuid property
+ * @uuid_out: uuid buffer to replace
+ * @buf: raw sysfs buffer to parse
+ *
+ * Enforce that uuids can only be changed while the device is disabled
+ * (driver detached)
+ * LOCKING: expects device_lock() is held on entry
+ */
+int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
+ size_t len)
+{
+ u8 uuid[16];
+ int rc;
+
+ if (dev->driver)
+ return -EBUSY;
+
+ rc = nd_uuid_parse(dev, uuid, buf, len);
+ if (rc)
+ return rc;
+
+ kfree(*uuid_out);
+ *uuid_out = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
+ if (!(*uuid_out))
+ return -ENOMEM;
+
+ return 0;
+}
+
static ssize_t commands_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
diff --git a/drivers/block/nd/dimm.c b/drivers/block/nd/dimm.c
index ccc96d8fe2e7..eb62bc2848d3 100644
--- a/drivers/block/nd/dimm.c
+++ b/drivers/block/nd/dimm.c
@@ -97,7 +97,7 @@ static int nd_dimm_remove(struct device *dev)
nd_bus_lock(dev);
dev_set_drvdata(dev, NULL);
for_each_dpa_resource_safe(ndd, res, _r)
- __release_region(&ndd->dpa, res->start, resource_size(res));
+ nd_dimm_free_dpa(ndd, res);
nd_bus_unlock(dev);
free_data(ndd);
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index 652dee210fe8..caa51d3ea6af 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -161,6 +161,14 @@ struct nd_dimm *to_nd_dimm(struct device *dev)
return nd_dimm;
}
+struct nd_dimm_drvdata *to_ndd(struct nd_mapping *nd_mapping)
+{
+ struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
+
+ return dev_get_drvdata(&nd_dimm->dev);
+}
+EXPORT_SYMBOL(to_ndd);
+
static struct nfit_mem __iomem *to_nfit_mem(struct device *dev)
{
struct nd_dimm *nd_dimm = to_nd_dimm(dev);
@@ -408,6 +416,125 @@ static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
return NULL;
}
+/**
+ * nd_pmem_available_dpa - for the given dimm+region account unallocated dpa
+ * @nd_region: constrain available space check to this reference region
+ * @nd_mapping: container of dpa-resource-root + labels
+ * @overlap: calculate available space assuming this level of overlap
+ *
+ * Validate that a PMEM label, if present, aligns with the start of an
+ * interleave set and truncate the available size at the lowest BLK
+ * overlap point.
+ *
+ * The expectation is that this routine is called multiple times as it
+ * probes for the largest BLK encroachment for any single member DIMM of
+ * the interleave set. Once that value is determined the PMEM-limit for
+ * the set can be established.
+ */
+resource_size_t nd_pmem_available_dpa(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, resource_size_t *overlap)
+{
+ resource_size_t map_end, busy = 0, available, blk_start;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct resource *res;
+ const char *reason;
+
+ if (!ndd)
+ return 0;
+
+ map_end = nd_mapping->start + nd_mapping->size - 1;
+ blk_start = max(nd_mapping->start, map_end + 1 - *overlap);
+ for_each_dpa_resource(ndd, res)
+ if (res->start >= nd_mapping->start && res->start < map_end) {
+ if (strncmp(res->name, "blk", 3) == 0)
+ blk_start = min(blk_start, res->start);
+ else if (res->start != nd_mapping->start) {
+ reason = "misaligned to iset";
+ goto err;
+ } else {
+ if (busy) {
+ reason = "duplicate overlapping PMEM reservations?";
+ goto err;
+ }
+ busy += resource_size(res);
+ continue;
+ }
+ } else if (res->end >= nd_mapping->start && res->end <= map_end) {
+ if (strncmp(res->name, "blk", 3) == 0) {
+ /*
+ * If a BLK allocation overlaps the start of
+ * PMEM the entire interleave set may now only
+ * be used for BLK.
+ */
+ blk_start = nd_mapping->start;
+ } else {
+ reason = "misaligned to iset";
+ goto err;
+ }
+ } else if (nd_mapping->start > res->start
+ && nd_mapping->start < res->end) {
+ /* total eclipse of the mapping */
+ busy += nd_mapping->size;
+ blk_start = nd_mapping->start;
+ }
+
+ *overlap = map_end + 1 - blk_start;
+ available = blk_start - nd_mapping->start;
+ if (busy < available)
+ return available - busy;
+ return 0;
+
+ err:
+ /*
+ * Something is wrong, PMEM must align with the start of the
+ * interleave set, and there can only be one allocation per set.
+ */
+ nd_dbg_dpa(nd_region, ndd, res, "%s\n", reason);
+ return 0;
+}
+
+void nd_dimm_free_dpa(struct nd_dimm_drvdata *ndd, struct resource *res)
+{
+ WARN_ON_ONCE(!is_nd_bus_locked(ndd->dev));
+ kfree(res->name);
+ __release_region(&ndd->dpa, res->start, resource_size(res));
+}
+
+struct resource *nd_dimm_allocate_dpa(struct nd_dimm_drvdata *ndd,
+ struct nd_label_id *label_id, resource_size_t start,
+ resource_size_t n)
+{
+ char *name = kmemdup(label_id, sizeof(*label_id), GFP_KERNEL);
+ struct resource *res;
+
+ if (!name)
+ return NULL;
+
+ WARN_ON_ONCE(!is_nd_bus_locked(ndd->dev));
+ res = __request_region(&ndd->dpa, start, n, name, 0);
+ if (!res)
+ kfree(name);
+ return res;
+}
+
+/**
+ * nd_dimm_allocated_dpa - sum up the dpa currently allocated to this label_id
+ * @ndd: container of dpa-resource-root + labels
+ * @label_id: dpa resource name of the form {pmem|blk}-<human readable uuid>
+ */
+resource_size_t nd_dimm_allocated_dpa(struct nd_dimm_drvdata *ndd,
+ struct nd_label_id *label_id)
+{
+ resource_size_t allocated = 0;
+ struct resource *res;
+
+ for_each_dpa_resource(ndd, res)
+ if (strcmp(res->name, label_id->id) == 0)
+ allocated += resource_size(res);
+
+ return allocated;
+}
+
static int count_dimms(struct device *dev, void *c)
{
int *count = c;
diff --git a/drivers/block/nd/label.c b/drivers/block/nd/label.c
index e791ea8bbdde..b55fa2a6f872 100644
--- a/drivers/block/nd/label.c
+++ b/drivers/block/nd/label.c
@@ -228,7 +228,7 @@ static bool preamble_current(struct nd_dimm_drvdata *ndd,
return true;
}
-static char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags)
+char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags)
{
if (!label_id || !uuid)
return NULL;
@@ -289,3 +289,55 @@ int nd_label_reserve_dpa(struct nd_dimm_drvdata *ndd)
return 0;
}
+
+int nd_label_active_count(struct nd_dimm_drvdata *ndd)
+{
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free;
+ u32 nslot, slot;
+ int count = 0;
+
+ if (!preamble_current(ndd, &nsindex, &free, &nslot))
+ return 0;
+
+ for_each_clear_bit_le(slot, free, nslot) {
+ struct nd_namespace_label __iomem *nd_label;
+
+ nd_label = nd_label_base(ndd) + slot;
+
+ if (!slot_valid(nd_label, slot)) {
+ dev_dbg(ndd->dev,
+ "%s: slot%d invalid slot: %d dpa: %lx rawsize: %lx\n",
+ __func__, slot, readl(&nd_label->slot),
+ (unsigned long) readq(&nd_label->dpa),
+ (unsigned long) readq(&nd_label->rawsize));
+ continue;
+ }
+ count++;
+ }
+ return count;
+}
+
+struct nd_namespace_label __iomem *nd_label_active(
+ struct nd_dimm_drvdata *ndd, int n)
+{
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free;
+ u32 nslot, slot;
+
+ if (!preamble_current(ndd, &nsindex, &free, &nslot))
+ return NULL;
+
+ for_each_clear_bit_le(slot, free, nslot) {
+ struct nd_namespace_label __iomem *nd_label;
+
+ nd_label = nd_label_base(ndd) + slot;
+ if (slot != readl(&nd_label->slot))
+ continue;
+
+ if (n-- == 0)
+ return nd_label_base(ndd) + slot;
+ }
+
+ return NULL;
+}
diff --git a/drivers/block/nd/label.h b/drivers/block/nd/label.h
index 79ed885a43c0..4436624f4146 100644
--- a/drivers/block/nd/label.h
+++ b/drivers/block/nd/label.h
@@ -126,4 +126,7 @@ void nd_label_copy(struct nd_dimm_drvdata *ndd,
struct nd_namespace_index *dst,
struct nd_namespace_index *src);
size_t sizeof_namespace_index(struct nd_dimm_drvdata *ndd);
+int nd_label_active_count(struct nd_dimm_drvdata *ndd);
+struct nd_namespace_label __iomem *nd_label_active(
+ struct nd_dimm_drvdata *ndd, int n);
#endif /* __LABEL_H__ */
diff --git a/drivers/block/nd/namespace_devs.c b/drivers/block/nd/namespace_devs.c
index 6861327f4245..386776845830 100644
--- a/drivers/block/nd/namespace_devs.c
+++ b/drivers/block/nd/namespace_devs.c
@@ -14,8 +14,11 @@
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/nd.h>
+#include "nd-private.h"
#include "nd.h"
+#include <asm-generic/io-64-nonatomic-lo-hi.h>
+
static void namespace_io_release(struct device *dev)
{
struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
@@ -23,11 +26,50 @@ static void namespace_io_release(struct device *dev)
kfree(nsio);
}
+static void namespace_pmem_release(struct device *dev)
+{
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ kfree(nspm->alt_name);
+ kfree(nspm->uuid);
+ kfree(nspm);
+}
+
+static void namespace_blk_release(struct device *dev)
+{
+ /* TODO: blk namespace support */
+}
+
static struct device_type namespace_io_device_type = {
.name = "nd_namespace_io",
.release = namespace_io_release,
};
+static struct device_type namespace_pmem_device_type = {
+ .name = "nd_namespace_pmem",
+ .release = namespace_pmem_release,
+};
+
+static struct device_type namespace_blk_device_type = {
+ .name = "nd_namespace_blk",
+ .release = namespace_blk_release,
+};
+
+static bool is_namespace_pmem(struct device *dev)
+{
+ return dev ? dev->type == &namespace_pmem_device_type : false;
+}
+
+static bool is_namespace_blk(struct device *dev)
+{
+ return dev ? dev->type == &namespace_blk_device_type : false;
+}
+
+static bool is_namespace_io(struct device *dev)
+{
+ return dev ? dev->type == &namespace_io_device_type : false;
+}
+
static ssize_t type_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -37,13 +79,674 @@ static ssize_t type_show(struct device *dev,
}
static DEVICE_ATTR_RO(type);
+static ssize_t __alt_name_store(struct device *dev, const char *buf,
+ const size_t len)
+{
+ char *input, *pos, *alt_name, **ns_altname;
+ ssize_t rc;
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ ns_altname = &nspm->alt_name;
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: blk namespace support */
+ return -ENXIO;
+ } else
+ return -ENXIO;
+
+ if (dev->driver)
+ return -EBUSY;
+
+ input = kmemdup(buf, len + 1, GFP_KERNEL);
+ if (!input)
+ return -ENOMEM;
+
+ input[len] = '\0';
+ pos = strim(input);
+ if (strlen(pos) + 1 > NSLABEL_NAME_LEN) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ alt_name = kzalloc(NSLABEL_NAME_LEN, GFP_KERNEL);
+ if (!alt_name) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ kfree(*ns_altname);
+ *ns_altname = alt_name;
+ sprintf(*ns_altname, "%s", pos);
+ rc = len;
+
+out:
+ kfree(input);
+ return rc;
+}
+
+static ssize_t alt_name_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ ssize_t rc;
+
+ device_lock(dev);
+ nd_bus_lock(dev);
+ wait_nd_bus_probe_idle(dev);
+ rc = __alt_name_store(dev, buf, len);
+ dev_dbg(dev, "%s: %s (%zd)\n", __func__, rc < 0 ? "fail" : "success", rc);
+ nd_bus_unlock(dev);
+ device_unlock(dev);
+
+ return rc;
+}
+
+static ssize_t alt_name_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ char *ns_altname;
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ ns_altname = nspm->alt_name;
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: blk namespace support */
+ return -ENXIO;
+ } else
+ return -ENXIO;
+
+ return sprintf(buf, "%s\n", ns_altname ? ns_altname : "");
+}
+static DEVICE_ATTR_RW(alt_name);
+
+static int scan_free(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, struct nd_label_id *label_id,
+ resource_size_t n)
+{
+ bool is_blk = strncmp(label_id->id, "blk", 3) == 0;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ int rc = 0;
+
+ while (n) {
+ struct resource *res, *last;
+ resource_size_t new_start;
+
+ last = NULL;
+ for_each_dpa_resource(ndd, res)
+ if (strcmp(res->name, label_id->id) == 0)
+ last = res;
+ res = last;
+ if (!res)
+ return 0;
+
+ if (n >= resource_size(res)) {
+ n -= resource_size(res);
+ nd_dbg_dpa(nd_region, ndd, res, "delete %d\n", rc);
+ nd_dimm_free_dpa(ndd, res);
+ /* retry with last resource deleted */
+ continue;
+ }
+
+ /*
+ * Keep BLK allocations relegated to high DPA as much as
+ * possible
+ */
+ if (is_blk)
+ new_start = res->start + n;
+ else
+ new_start = res->start;
+
+ rc = adjust_resource(res, new_start, resource_size(res) - n);
+ nd_dbg_dpa(nd_region, ndd, res, "shrink %d\n", rc);
+ break;
+ }
+
+ return rc;
+}
+
+/**
+ * shrink_dpa_allocation - for each dimm in region free n bytes for label_id
+ * @nd_region: the set of dimms to reclaim @n bytes from
+ * @label_id: unique identifier for the namespace consuming this dpa range
+ * @n: number of bytes per-dimm to release
+ *
+ * Assumes resources are ordered. Starting from the end try to
+ * adjust_resource() the allocation to @n, but if @n is larger than the
+ * allocation delete it and find the 'new' last allocation in the label
+ * set.
+ */
+static int shrink_dpa_allocation(struct nd_region *nd_region,
+ struct nd_label_id *label_id, resource_size_t n)
+{
+ int i;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ int rc;
+
+ rc = scan_free(nd_region, nd_mapping, label_id, n);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+static resource_size_t init_dpa_allocation(struct nd_label_id *label_id,
+ struct nd_region *nd_region, struct nd_mapping *nd_mapping,
+ resource_size_t n)
+{
+ bool is_blk = strncmp(label_id->id, "blk", 3) == 0;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ resource_size_t first_dpa;
+ struct resource *res;
+ int rc = 0;
+
+ /* allocate blk from highest dpa first */
+ if (is_blk)
+ first_dpa = nd_mapping->start + nd_mapping->size - n;
+ else
+ first_dpa = nd_mapping->start;
+
+ /* first resource allocation for this label-id or dimm */
+ res = nd_dimm_allocate_dpa(ndd, label_id, first_dpa, n);
+ if (!res)
+ rc = -EBUSY;
+
+ nd_dbg_dpa(nd_region, ndd, res, "init %d\n", rc);
+ return rc ? n : 0;
+}
+
+static bool space_valid(bool is_pmem, struct nd_label_id *label_id,
+ struct resource *res)
+{
+ /*
+ * For BLK-space any space is valid, for PMEM-space, it must be
+ * contiguous with an existing allocation.
+ */
+ if (!is_pmem)
+ return true;
+ if (!res || strcmp(res->name, label_id->id) == 0)
+ return true;
+ return false;
+}
+
+enum alloc_loc {
+ ALLOC_ERR = 0, ALLOC_BEFORE, ALLOC_MID, ALLOC_AFTER,
+};
+
+static resource_size_t scan_allocate(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, struct nd_label_id *label_id,
+ resource_size_t n)
+{
+ resource_size_t mapping_end = nd_mapping->start + nd_mapping->size - 1;
+ bool is_pmem = strncmp(label_id->id, "pmem", 4) == 0;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ const resource_size_t to_allocate = n;
+ struct resource *res;
+ int first;
+
+ retry:
+ first = 0;
+ for_each_dpa_resource(ndd, res) {
+ resource_size_t allocate, available = 0, free_start, free_end;
+ struct resource *next = res->sibling, *new_res = NULL;
+ enum alloc_loc loc = ALLOC_ERR;
+ const char *action;
+ int rc = 0;
+
+ /* ignore resources outside this nd_mapping */
+ if (res->start > mapping_end)
+ continue;
+ if (res->end < nd_mapping->start)
+ continue;
+
+ /* space at the beginning of the mapping */
+ if (!first++ && res->start > nd_mapping->start) {
+ free_start = nd_mapping->start;
+ available = res->start - free_start;
+ if (space_valid(is_pmem, label_id, NULL))
+ loc = ALLOC_BEFORE;
+ }
+
+ /* space between allocations */
+ if (!loc && next) {
+ free_start = res->start + resource_size(res);
+ free_end = min(mapping_end, next->start - 1);
+ if (space_valid(is_pmem, label_id, res)
+ && free_start < free_end) {
+ available = free_end + 1 - free_start;
+ loc = ALLOC_MID;
+ }
+ }
+
+ /* space at the end of the mapping */
+ if (!loc && !next) {
+ free_start = res->start + resource_size(res);
+ free_end = mapping_end;
+ if (space_valid(is_pmem, label_id, res)
+ && free_start < free_end) {
+ available = free_end + 1 - free_start;
+ loc = ALLOC_AFTER;
+ }
+ }
+
+ if (!loc || !available)
+ continue;
+ allocate = min(available, n);
+ switch (loc) {
+ case ALLOC_BEFORE:
+ if (strcmp(res->name, label_id->id) == 0) {
+ /* adjust current resource up */
+ if (is_pmem)
+ return n;
+ rc = adjust_resource(res, res->start - allocate,
+ resource_size(res) + allocate);
+ action = "cur grow up";
+ } else
+ action = "allocate";
+ break;
+ case ALLOC_MID:
+ if (strcmp(next->name, label_id->id) == 0) {
+ /* adjust next resource up */
+ if (is_pmem)
+ return n;
+ rc = adjust_resource(next, next->start
+ - allocate, resource_size(next)
+ + allocate);
+ new_res = next;
+ action = "next grow up";
+ } else if (strcmp(res->name, label_id->id) == 0) {
+ action = "grow down";
+ } else
+ action = "allocate";
+ break;
+ case ALLOC_AFTER:
+ if (strcmp(res->name, label_id->id) == 0)
+ action = "grow down";
+ else
+ action = "allocate";
+ break;
+ default:
+ return n;
+ }
+
+ if (strcmp(action, "allocate") == 0) {
+ /* BLK allocate bottom up */
+ if (!is_pmem)
+ free_start += available - allocate;
+ else if (free_start != nd_mapping->start)
+ return n;
+
+ new_res = nd_dimm_allocate_dpa(ndd, label_id,
+ free_start, allocate);
+ if (!new_res)
+ rc = -EBUSY;
+ } else if (strcmp(action, "grow down") == 0) {
+ /* adjust current resource down */
+ rc = adjust_resource(res, res->start, resource_size(res)
+ + allocate);
+ }
+
+ if (!new_res)
+ new_res = res;
+
+ nd_dbg_dpa(nd_region, ndd, new_res, "%s(%d) %d\n",
+ action, loc, rc);
+
+ if (rc)
+ return n;
+
+ n -= allocate;
+ if (n) {
+ /*
+ * Retry scan with newly inserted resources.
+ * For example, if we did an ALLOC_BEFORE
+ * insertion there may also have been space
+ * available for an ALLOC_AFTER insertion, so we
+ * need to check this same resource again
+ */
+ goto retry;
+ } else
+ return 0;
+ }
+
+ if (is_pmem && n == to_allocate)
+ return init_dpa_allocation(label_id, nd_region, nd_mapping, n);
+ return n;
+}
+
+/**
+ * grow_dpa_allocation - for each dimm allocate n bytes for @label_id
+ * @nd_region: the set of dimms to allocate @n more bytes from
+ * @label_id: unique identifier for the namespace consuming this dpa range
+ * @n: number of bytes per-dimm to add to the existing allocation
+ *
+ * Assumes resources are ordered. For BLK regions, first consume
+ * BLK-only available DPA free space, then consume PMEM-aliased DPA
+ * space starting at the highest DPA. For PMEM regions start
+ * allocations from the start of an interleave set and end at the first
+ * BLK allocation or the end of the interleave set, whichever comes
+ * first.
+ */
+static int grow_dpa_allocation(struct nd_region *nd_region,
+ struct nd_label_id *label_id, resource_size_t n)
+{
+ int i;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ int rc;
+
+ rc = scan_allocate(nd_region, nd_mapping, label_id, n);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+static void nd_namespace_pmem_set_size(struct nd_region *nd_region,
+ struct nd_namespace_pmem *nspm, resource_size_t size)
+{
+ struct resource *res = &nspm->nsio.res;
+
+ res->start = nd_region->ndr_start;
+ res->end = nd_region->ndr_start + size - 1;
+}
+
+static ssize_t __size_store(struct device *dev, unsigned long long val)
+{
+ resource_size_t allocated = 0, available = 0;
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+ struct nd_mapping *nd_mapping;
+ struct nd_dimm_drvdata *ndd;
+ struct nd_label_id label_id;
+ u32 flags = 0, remainder;
+ u8 *uuid = NULL;
+ int rc, i;
+
+ if (dev->driver)
+ return -EBUSY;
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ uuid = nspm->uuid;
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: blk namespace support */
+ return -ENXIO;
+ }
+
+ /*
+ * We need a uuid for the allocation-label and dimm(s) on which
+ * to store the label.
+ */
+ if (!uuid || nd_region->ndr_mappings == 0)
+ return -ENXIO;
+
+ div_u64_rem(val, SZ_4K * nd_region->ndr_mappings, &remainder);
+ if (remainder) {
+ dev_dbg(dev, "%llu is not %dK aligned\n", val,
+ (SZ_4K * nd_region->ndr_mappings) / SZ_1K);
+ return -EINVAL;
+ }
+
+ nd_label_gen_id(&label_id, uuid, flags);
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ nd_mapping = &nd_region->mapping[i];
+ ndd = to_ndd(nd_mapping);
+
+ /*
+ * All dimms in an interleave set, or the base dimm for a blk
+ * region, need to be enabled for the size to be changed.
+ */
+ if (!ndd)
+ return -ENXIO;
+
+ allocated += nd_dimm_allocated_dpa(ndd, &label_id);
+ }
+ available = nd_region_available_dpa(nd_region);
+
+ if (val > available + allocated)
+ return -ENOSPC;
+
+ if (val == allocated)
+ return 0;
+
+ val = div_u64(val, nd_region->ndr_mappings);
+ allocated = div_u64(allocated, nd_region->ndr_mappings);
+ if (val < allocated)
+ rc = shrink_dpa_allocation(nd_region, &label_id, allocated - val);
+ else
+ rc = grow_dpa_allocation(nd_region, &label_id, val - allocated);
+
+ if (rc)
+ return rc;
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ nd_namespace_pmem_set_size(nd_region, nspm,
+ val * nd_region->ndr_mappings);
+ }
+
+ return rc;
+}
+
+static ssize_t size_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ unsigned long long val;
+ u8 **uuid = NULL;
+ int rc;
+
+ rc = kstrtoull(buf, 0, &val);
+ if (rc)
+ return rc;
+
+ device_lock(dev);
+ nd_bus_lock(dev);
+ wait_nd_bus_probe_idle(dev);
+ rc = __size_store(dev, val);
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ uuid = &nspm->uuid;
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: blk namespace support */
+ rc = -ENXIO;
+ }
+
+ if (rc == 0 && val == 0 && uuid) {
+ /* setting size zero == 'delete namespace' */
+ kfree(*uuid);
+ *uuid = NULL;
+ }
+
+ dev_dbg(dev, "%s: %llx %s (%d)\n", __func__, val, rc < 0
+ ? "fail" : "success", rc);
+
+ nd_bus_unlock(dev);
+ device_unlock(dev);
+
+ return rc ? rc : len;
+}
+
+static ssize_t size_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ return sprintf(buf, "%llu\n", (unsigned long long)
+ resource_size(&nspm->nsio.res));
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: blk namespace support */
+ return -ENXIO;
+ } else if (is_namespace_io(dev)) {
+ struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
+
+ return sprintf(buf, "%llu\n", (unsigned long long)
+ resource_size(&nsio->res));
+ } else
+ return -ENXIO;
+}
+static DEVICE_ATTR(size, S_IRUGO, size_show, size_store);
+
+static ssize_t uuid_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u8 *uuid;
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ uuid = nspm->uuid;
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: blk namespace support */
+ return -ENXIO;
+ } else
+ return -ENXIO;
+
+ if (uuid)
+ return sprintf(buf, "%pUb\n", uuid);
+ return sprintf(buf, "\n");
+}
+
+/**
+ * namespace_update_uuid - check for a unique uuid and whether we're "renaming"
+ * @nd_region: parent region so we can update all dimms in the set
+ * @dev: namespace type for generating label_id
+ * @new_uuid: incoming uuid
+ * @old_uuid: reference to the uuid storage location in the namespace object
+ */
+static int namespace_update_uuid(struct nd_region *nd_region,
+ struct device *dev, u8 *new_uuid, u8 **old_uuid)
+{
+ u32 flags = is_namespace_blk(dev) ? NSLABEL_FLAG_LOCAL : 0;
+ struct nd_label_id old_label_id;
+ struct nd_label_id new_label_id;
+ int i, rc;
+
+ rc = nd_is_uuid_unique(dev, new_uuid) ? 0 : -EINVAL;
+ if (rc) {
+ kfree(new_uuid);
+ return rc;
+ }
+
+ if (*old_uuid == NULL)
+ goto out;
+
+ nd_label_gen_id(&old_label_id, *old_uuid, flags);
+ nd_label_gen_id(&new_label_id, new_uuid, flags);
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct resource *res;
+
+ for_each_dpa_resource(ndd, res)
+ if (strcmp(res->name, old_label_id.id) == 0)
+ sprintf((void *) res->name, "%s",
+ new_label_id.id);
+ }
+ kfree(*old_uuid);
+ out:
+ *old_uuid = new_uuid;
+ return 0;
+}
+
+static ssize_t uuid_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+ u8 *uuid = NULL;
+ u8 **ns_uuid;
+ ssize_t rc;
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ ns_uuid = &nspm->uuid;
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: blk namespace support */
+ return -ENXIO;
+ } else
+ return -ENXIO;
+
+ device_lock(dev);
+ nd_bus_lock(dev);
+ wait_nd_bus_probe_idle(dev);
+ rc = nd_uuid_store(dev, &uuid, buf, len);
+ if (rc >= 0)
+ rc = namespace_update_uuid(nd_region, dev, uuid, ns_uuid);
+ dev_dbg(dev, "%s: result: %zd wrote: %s%s", __func__,
+ rc, buf, buf[len - 1] == '\n' ? "" : "\n");
+ nd_bus_unlock(dev);
+ device_unlock(dev);
+
+ return rc ? rc : len;
+}
+static DEVICE_ATTR_RW(uuid);
+
+static ssize_t resource_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct resource *res;
+
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ res = &nspm->nsio.res;
+ } else if (is_namespace_io(dev)) {
+ struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
+
+ res = &nsio->res;
+ } else
+ return -ENXIO;
+
+ /* no address to convey if the namespace has no allocation */
+ if (resource_size(res) == 0)
+ return -ENXIO;
+ return sprintf(buf, "%#llx\n", (unsigned long long) res->start);
+}
+static DEVICE_ATTR_RO(resource);
+
static struct attribute *nd_namespace_attributes[] = {
&dev_attr_type.attr,
+ &dev_attr_size.attr,
+ &dev_attr_uuid.attr,
+ &dev_attr_resource.attr,
+ &dev_attr_alt_name.attr,
NULL,
};
+static umode_t nd_namespace_attr_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+
+ if (a == &dev_attr_resource.attr) {
+ if (is_namespace_blk(dev))
+ return 0;
+ return a->mode;
+ }
+
+ if (is_namespace_pmem(dev) || is_namespace_blk(dev)) {
+ if (a == &dev_attr_size.attr)
+ return S_IWUSR | S_IRUGO;
+ return a->mode;
+ }
+
+ if (a == &dev_attr_type.attr || a == &dev_attr_size.attr)
+ return a->mode;
+
+ return 0;
+}
+
static struct attribute_group nd_namespace_attribute_group = {
.attrs = nd_namespace_attributes,
+ .is_visible = nd_namespace_attr_visible,
};
static const struct attribute_group *nd_namespace_attribute_groups[] = {
@@ -80,23 +783,322 @@ static struct device **create_namespace_io(struct nd_region *nd_region)
return devs;
}
+static bool has_uuid_at_pos(struct nd_region *nd_region, u8 *uuid, u64 cookie, u16 pos)
+{
+ struct nd_namespace_label __iomem *found = NULL;
+ int i;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_namespace_label __iomem *nd_label;
+ u8 label_uuid[NSLABEL_UUID_LEN];
+ u8 *found_uuid = NULL;
+ int l;
+
+ for_each_label(l, nd_label, nd_mapping->labels) {
+ u64 isetcookie = readq(&nd_label->isetcookie);
+ u16 position = readw(&nd_label->position);
+ u16 nlabel = readw(&nd_label->nlabel);
+
+ if (isetcookie != cookie)
+ continue;
+
+ memcpy_fromio(label_uuid, nd_label->uuid,
+ NSLABEL_UUID_LEN);
+ if (memcmp(label_uuid, uuid, NSLABEL_UUID_LEN) != 0)
+ continue;
+
+ if (found_uuid) {
+ dev_dbg(to_ndd(nd_mapping)->dev,
+ "%s duplicate entry for uuid\n",
+ __func__);
+ return false;
+ }
+ found_uuid = label_uuid;
+ if (nlabel != nd_region->ndr_mappings)
+ continue;
+ if (position != pos)
+ continue;
+ found = nd_label;
+ break;
+ }
+ if (found)
+ break;
+ }
+ return found != NULL;
+}
+
+static int select_pmem_uuid(struct nd_region *nd_region, u8 *pmem_uuid)
+{
+ struct nd_namespace_label __iomem *select = NULL;
+ int i;
+
+ if (!pmem_uuid)
+ return -ENODEV;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_namespace_label __iomem *nd_label;
+ u64 hw_start, hw_end, pmem_start, pmem_end;
+ int l;
+
+ for_each_label(l, nd_label, nd_mapping->labels) {
+ u8 label_uuid[NSLABEL_UUID_LEN];
+
+ memcpy_fromio(label_uuid, nd_label->uuid,
+ NSLABEL_UUID_LEN);
+ if (memcmp(label_uuid, pmem_uuid, NSLABEL_UUID_LEN) == 0)
+ break;
+ }
+
+ if (!nd_label) {
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ select = nd_label;
+ /*
+ * Check that this label is compliant with the dpa
+ * range published in NFIT
+ */
+ hw_start = nd_mapping->start;
+ hw_end = hw_start + nd_mapping->size;
+ pmem_start = readq(&select->dpa);
+ pmem_end = pmem_start + readq(&select->rawsize);
+ if (pmem_start == hw_start && pmem_end <= hw_end)
+ /* pass */;
+ else
+ return -EINVAL;
+
+ nd_set_label(nd_mapping->labels, select, 0);
+ nd_set_label(nd_mapping->labels, (void __iomem *) NULL, 1);
+ }
+ return 0;
+}
+
+/**
+ * find_pmem_label_set - validate interleave set labelling, retrieve label0
+ * @nd_region: region with mappings to validate
+ */
+static int find_pmem_label_set(struct nd_region *nd_region,
+ struct nd_namespace_pmem *nspm)
+{
+ u64 cookie = nd_region_interleave_set_cookie(nd_region);
+ struct nd_namespace_label __iomem *nd_label;
+ u8 select_uuid[NSLABEL_UUID_LEN];
+ resource_size_t size = 0;
+ u8 *pmem_uuid = NULL;
+ int rc = -ENODEV, l;
+ u16 i;
+
+ if (cookie == 0)
+ return -ENXIO;
+
+ /*
+ * Find a complete set of labels by uuid. By definition we can start
+ * with any mapping as the reference label
+ */
+ for_each_label(l, nd_label, nd_region->mapping[0].labels) {
+ u64 isetcookie = readq(&nd_label->isetcookie);
+ u8 label_uuid[NSLABEL_UUID_LEN];
+
+ if (isetcookie != cookie)
+ continue;
+
+ memcpy_fromio(label_uuid, nd_label->uuid,
+ NSLABEL_UUID_LEN);
+ for (i = 0; i < nd_region->ndr_mappings; i++)
+ if (!has_uuid_at_pos(nd_region, label_uuid, cookie, i))
+ break;
+ if (i < nd_region->ndr_mappings) {
+ /*
+ * Give up if we don't find an instance of a
+ * uuid at each position (from 0 to
+ * nd_region->ndr_mappings - 1), or if we find a
+ * dimm with two instances of the same uuid.
+ */
+ rc = -EINVAL;
+ goto err;
+ } else if (pmem_uuid) {
+ /*
+ * If there is more than one valid uuid set, we
+ * need userspace to clean this up.
+ */
+ rc = -EBUSY;
+ goto err;
+ }
+ memcpy(select_uuid, label_uuid, NSLABEL_UUID_LEN);
+ pmem_uuid = select_uuid;
+ }
+
+ /*
+ * Fix up each mapping's 'labels' to have the validated pmem label for
+ * that position at labels[0], and NULL at labels[1]. In the process,
+ * check that the namespace aligns with the interleave set. We know
+ * that it does not overlap with any blk namespaces by virtue of
+ * the dimm being enabled (i.e. nd_label_reserve_dpa()
+ * succeeded).
+ */
+ rc = select_pmem_uuid(nd_region, pmem_uuid);
+ if (rc)
+ goto err;
+
+ /* Calculate total size and populate namespace properties from label0 */
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_namespace_label __iomem *label0;
+
+ label0 = nd_get_label(nd_mapping->labels, 0);
+ size += readq(&label0->rawsize);
+ if (readw(&label0->position) != 0)
+ continue;
+ WARN_ON(nspm->alt_name || nspm->uuid);
+ nspm->alt_name = kmemdup((void __force *) label0->name,
+ NSLABEL_NAME_LEN, GFP_KERNEL);
+ nspm->uuid = kmemdup((void __force *) label0->uuid,
+ NSLABEL_UUID_LEN, GFP_KERNEL);
+ }
+
+ if (!nspm->alt_name || !nspm->uuid) {
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ nd_namespace_pmem_set_size(nd_region, nspm, size);
+
+ return 0;
+ err:
+ switch (rc) {
+ case -EINVAL:
+ dev_dbg(&nd_region->dev, "%s: invalid label(s)\n", __func__);
+ break;
+ case -ENODEV:
+ dev_dbg(&nd_region->dev, "%s: label not found\n", __func__);
+ break;
+ default:
+ dev_dbg(&nd_region->dev, "%s: unexpected err: %d\n", __func__, rc);
+ break;
+ }
+ return rc;
+}
+
+static struct device **create_namespace_pmem(struct nd_region *nd_region)
+{
+ struct nd_namespace_pmem *nspm;
+ struct device *dev, **devs;
+ struct resource *res;
+ int rc;
+
+ nspm = kzalloc(sizeof(*nspm), GFP_KERNEL);
+ if (!nspm)
+ return NULL;
+
+ dev = &nspm->nsio.dev;
+ dev->type = &namespace_pmem_device_type;
+ res = &nspm->nsio.res;
+ res->name = dev_name(&nd_region->dev);
+ res->flags = IORESOURCE_MEM;
+ rc = find_pmem_label_set(nd_region, nspm);
+ if (rc == -ENODEV) {
+ int i;
+
+ /* Pass, try to permit namespace creation... */
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+
+ kfree(nd_mapping->labels);
+ nd_mapping->labels = NULL;
+ }
+
+ /* Publish a zero-sized namespace for userspace to configure. */
+ nd_namespace_pmem_set_size(nd_region, nspm, 0);
+
+ rc = 0;
+ } else if (rc)
+ goto err;
+
+ devs = kcalloc(2, sizeof(struct device *), GFP_KERNEL);
+ if (!devs)
+ goto err;
+
+ devs[0] = dev;
+ return devs;
+
+ err:
+ namespace_pmem_release(&nspm->nsio.dev);
+ return NULL;
+}
+
+static int init_active_labels(struct nd_region *nd_region)
+{
+ int i;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ int count, j;
+
+ /*
+ * If the dimm is disabled then prevent the region from
+ * being activated
+ */
+ if (!ndd) {
+ dev_dbg(&nd_region->dev, "%s: is disabled, failing probe\n",
+ dev_name(&nd_mapping->nd_dimm->dev));
+ return -ENXIO;
+ }
+
+ count = nd_label_active_count(ndd);
+ dev_dbg(ndd->dev, "%s: %d\n", __func__, count);
+ if (!count)
+ continue;
+ nd_mapping->labels = kcalloc(count + 1,
+ sizeof(struct nd_namespace_label *), GFP_KERNEL);
+ if (!nd_mapping->labels)
+ return -ENOMEM;
+ for (j = 0; j < count; j++) {
+ struct nd_namespace_label __iomem *label;
+
+ label = nd_label_active(ndd, j);
+ nd_set_label(nd_mapping->labels, label, j);
+ }
+ }
+
+ return 0;
+}
+
int nd_region_register_namespaces(struct nd_region *nd_region, int *err)
{
struct device **devs = NULL;
- int i;
+ int i, rc = 0, type;
*err = 0;
- switch (nd_region_to_namespace_type(nd_region)) {
+ nd_bus_lock(&nd_region->dev);
+ rc = init_active_labels(nd_region);
+ if (rc) {
+ nd_bus_unlock(&nd_region->dev);
+ return rc;
+ }
+
+ type = nd_region_to_namespace_type(nd_region);
+ switch (type) {
case ND_DEVICE_NAMESPACE_IO:
devs = create_namespace_io(nd_region);
break;
+ case ND_DEVICE_NAMESPACE_PMEM:
+ devs = create_namespace_pmem(nd_region);
+ break;
default:
break;
}
+ nd_bus_unlock(&nd_region->dev);
- if (!devs)
- return -ENODEV;
+ if (!devs) {
+ rc = -ENODEV;
+ goto err;
+ }
+ nd_region->ns_seed = devs[0];
for (i = 0; devs[i]; i++) {
struct device *dev = devs[i];
@@ -108,4 +1110,14 @@ int nd_region_register_namespaces(struct nd_region *nd_region, int *err)
kfree(devs);
return i;
+
+ err:
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+
+ kfree(nd_mapping->labels);
+ nd_mapping->labels = NULL;
+ }
+
+ return rc;
}
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 15ca7be507ce..03b14ab8fc7d 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -123,4 +123,18 @@ int nd_bus_register_dimms(struct nd_bus *nd_bus);
int nd_bus_register_regions(struct nd_bus *nd_bus);
int nd_bus_init_interleave_sets(struct nd_bus *nd_bus);
int nd_match_dimm(struct device *dev, void *data);
+struct nd_label_id;
+char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags);
+bool nd_is_uuid_unique(struct device *dev, u8 *uuid);
+struct nd_region;
+struct nd_dimm_drvdata;
+struct nd_mapping;
+resource_size_t nd_pmem_available_dpa(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, resource_size_t *overlap);
+resource_size_t nd_region_available_dpa(struct nd_region *nd_region);
+struct resource *nd_dimm_allocate_dpa(struct nd_dimm_drvdata *ndd,
+ struct nd_label_id *label_id, resource_size_t start,
+ resource_size_t n);
+resource_size_t nd_dimm_allocated_dpa(struct nd_dimm_drvdata *ndd,
+ struct nd_label_id *label_id);
#endif /* __ND_PRIVATE_H__ */
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index f8dee1df5e6a..386e17056d3c 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -15,6 +15,7 @@
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/ndctl.h>
+#include <linux/types.h>
#include "label.h"
struct nd_dimm_drvdata {
@@ -54,6 +55,7 @@ static inline struct nd_namespace_index __iomem *to_next_namespace_index(
struct nd_mapping {
struct nd_dimm *nd_dimm;
+ struct nd_namespace_label **labels;
u64 start;
u64 size;
};
@@ -64,6 +66,30 @@ struct nd_mapping {
(unsigned long long) (res ? resource_size(res) : 0), \
(unsigned long long) (res ? res->start : 0), ##arg)
+/* sparse helpers */
+static inline void nd_set_label(struct nd_namespace_label **labels,
+ struct nd_namespace_label __iomem *label, int idx)
+{
+ labels[idx] = (void __force *) label;
+}
+
+static inline struct nd_namespace_label __iomem *nd_get_label(
+ struct nd_namespace_label **labels, int idx)
+{
+ struct nd_namespace_label __iomem *label = NULL;
+
+ if (labels)
+ label = (struct nd_namespace_label __iomem *) labels[idx];
+
+ return label;
+}
+
+#define for_each_label(l, label, labels) \
+ for (l = 0; (label = nd_get_label(labels, l)); l++)
+
+#define for_each_dpa_resource(ndd, res) \
+ for (res = (ndd)->dpa.child; res; res = res->sibling)
+
#define for_each_dpa_resource_safe(ndd, res, next) \
for (res = (ndd)->dpa.child, next = res ? res->sibling : NULL; \
res; res = next, next = next ? next->sibling : NULL)
@@ -71,6 +97,7 @@ struct nd_mapping {
struct nd_region {
struct device dev;
struct nd_spa *nd_spa;
+ struct device *ns_seed;
u16 ndr_mappings;
u64 ndr_size;
u64 ndr_start;
@@ -92,11 +119,15 @@ enum nd_async_mode {
ND_ASYNC,
};
+void wait_nd_bus_probe_idle(struct device *dev);
void nd_device_register(struct device *dev);
void nd_device_unregister(struct device *dev, enum nd_async_mode mode);
u64 nd_fletcher64(void __iomem *addr, size_t len);
+int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
+ size_t len);
extern struct attribute_group nd_device_attribute_group;
struct nd_dimm;
+struct nd_dimm_drvdata *to_ndd(struct nd_mapping *nd_mapping);
u32 to_nfit_handle(struct nd_dimm *nd_dimm);
void *nd_dimm_get_pdata(struct nd_dimm *nd_dimm);
void nd_dimm_set_pdata(struct nd_dimm *nd_dimm, void *data);
@@ -108,8 +139,10 @@ int nd_dimm_firmware_status(struct device *dev);
struct nd_region *to_nd_region(struct device *dev);
int nd_region_to_namespace_type(struct nd_region *nd_region);
int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
+u64 nd_region_interleave_set_cookie(struct nd_region *nd_region);
void nd_bus_lock(struct device *dev);
void nd_bus_unlock(struct device *dev);
bool is_nd_bus_locked(struct device *dev);
int nd_label_reserve_dpa(struct nd_dimm_drvdata *ndd);
+void nd_dimm_free_dpa(struct nd_dimm_drvdata *ndd, struct resource *res);
#endif /* __ND_H__ */
diff --git a/drivers/block/nd/pmem.c b/drivers/block/nd/pmem.c
index cd83a9a98d89..aa2b4fb1f140 100644
--- a/drivers/block/nd/pmem.c
+++ b/drivers/block/nd/pmem.c
@@ -24,6 +24,7 @@
#include <linux/moduleparam.h>
#include <linux/slab.h>
#include <linux/nd.h>
+#include "nd.h"
#define PMEM_MINORS 16
@@ -247,9 +248,27 @@ static struct platform_driver pmem_driver = {
static int nd_pmem_probe(struct device *dev)
{
+ struct nd_region *nd_region = to_nd_region(dev->parent);
struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
struct pmem_device *pmem;
+ if (resource_size(&nsio->res) < ND_MIN_NAMESPACE_SIZE) {
+ resource_size_t size = resource_size(&nsio->res);
+
+ dev_dbg(dev, "%s: size: %pa, too small, must be at least %#x\n",
+ __func__, &size, ND_MIN_NAMESPACE_SIZE);
+ return -ENODEV;
+ }
+
+ if (nd_region_to_namespace_type(nd_region) == ND_DEVICE_NAMESPACE_PMEM) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ if (!nspm->uuid) {
+ dev_dbg(dev, "%s: uuid not set\n", __func__);
+ return -ENODEV;
+ }
+ }
+
pmem = pmem_alloc(dev, &nsio->res);
if (IS_ERR(pmem))
return PTR_ERR(pmem);
@@ -269,13 +288,14 @@ static int nd_pmem_remove(struct device *dev)
MODULE_ALIAS("pmem");
MODULE_ALIAS_ND_DEVICE(ND_DEVICE_NAMESPACE_IO);
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_NAMESPACE_PMEM);
static struct nd_device_driver nd_pmem_driver = {
.probe = nd_pmem_probe,
.remove = nd_pmem_remove,
.drv = {
.name = "pmem",
},
- .type = ND_DRIVER_NAMESPACE_IO,
+ .type = ND_DRIVER_NAMESPACE_IO | ND_DRIVER_NAMESPACE_PMEM,
};
static int __init pmem_init(void)
diff --git a/drivers/block/nd/region_devs.c b/drivers/block/nd/region_devs.c
index 13f45be755a5..8bcfd9b91a71 100644
--- a/drivers/block/nd/region_devs.c
+++ b/drivers/block/nd/region_devs.c
@@ -15,6 +15,7 @@
#include <linux/slab.h>
#include <linux/sort.h>
#include <linux/io.h>
+#include <linux/nd.h>
#include "nd-private.h"
#include "nfit.h"
#include "nd.h"
@@ -70,6 +71,7 @@ struct nd_region *to_nd_region(struct device *dev)
WARN_ON(dev->type->release != nd_region_release);
return nd_region;
}
+EXPORT_SYMBOL(to_nd_region);
/**
* nd_region_to_namespace_type() - region to an integer namespace type
@@ -92,6 +94,58 @@ int nd_region_to_namespace_type(struct nd_region *nd_region)
return 0;
}
+EXPORT_SYMBOL(nd_region_to_namespace_type);
+
+static int is_uuid_busy(struct device *dev, void *data)
+{
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+ u8 *uuid = data;
+
+ switch (nd_region_to_namespace_type(nd_region)) {
+ case ND_DEVICE_NAMESPACE_PMEM: {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ if (!nspm->uuid)
+ break;
+ if (memcmp(uuid, nspm->uuid, NSLABEL_UUID_LEN) == 0)
+ return -EBUSY;
+ break;
+ }
+ case ND_DEVICE_NAMESPACE_BLOCK: {
+ /* TODO: blk namespace support */
+ break;
+ }
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int is_namespace_uuid_busy(struct device *dev, void *data)
+{
+ if (is_nd_pmem(dev) || is_nd_blk(dev))
+ return device_for_each_child(dev, data, is_uuid_busy);
+ return 0;
+}
+
+/**
+ * nd_is_uuid_unique - verify that no other namespace has @uuid
+ * @dev: any device on a nd_bus
+ * @uuid: uuid to check
+ */
+bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+
+ if (!nd_bus)
+ return false;
+ WARN_ON_ONCE(!is_nd_bus_locked(&nd_bus->dev));
+ if (device_for_each_child(&nd_bus->dev, uuid,
+ is_namespace_uuid_busy) != 0)
+ return false;
+ return true;
+}
static ssize_t size_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -155,6 +209,60 @@ static ssize_t set_cookie_show(struct device *dev,
}
static DEVICE_ATTR_RO(set_cookie);
+resource_size_t nd_region_available_dpa(struct nd_region *nd_region)
+{
+ resource_size_t blk_max_overlap = 0, available, overlap;
+ int i;
+
+ WARN_ON(!is_nd_bus_locked(&nd_region->dev));
+
+ retry:
+ available = 0;
+ overlap = blk_max_overlap;
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+
+ /* if a dimm is disabled the available capacity is zero */
+ if (!ndd)
+ return 0;
+
+ if (is_nd_pmem(&nd_region->dev)) {
+ available += nd_pmem_available_dpa(nd_region,
+ nd_mapping, &overlap);
+ if (overlap > blk_max_overlap) {
+ blk_max_overlap = overlap;
+ goto retry;
+ }
+ } else if (is_nd_blk(&nd_region->dev)) {
+ /* TODO: BLK Namespace support */
+ }
+ }
+
+ return available;
+}
+
+static ssize_t available_size_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+ unsigned long long available = 0;
+
+ /*
+ * Flush in-flight updates and grab a snapshot of the available
+ * size. Of course, this value is potentially invalidated the
+ * moment nd_bus_lock() is dropped, but that's userspace's
+ * problem to not race itself.
+ */
+ nd_bus_lock(dev);
+ wait_nd_bus_probe_idle(dev);
+ available = nd_region_available_dpa(nd_region);
+ nd_bus_unlock(dev);
+
+ return sprintf(buf, "%llu\n", available);
+}
+static DEVICE_ATTR_RO(available_size);
+
static ssize_t init_namespaces_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -167,12 +275,30 @@ static ssize_t init_namespaces_show(struct device *dev,
}
static DEVICE_ATTR_RO(init_namespaces);
+static ssize_t namespace_seed_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev);
+ ssize_t rc;
+
+ nd_bus_lock(dev);
+ if (nd_region->ns_seed)
+ rc = sprintf(buf, "%s\n", dev_name(nd_region->ns_seed));
+ else
+ rc = sprintf(buf, "\n");
+ nd_bus_unlock(dev);
+ return rc;
+}
+static DEVICE_ATTR_RO(namespace_seed);
+
static struct attribute *nd_region_attributes[] = {
&dev_attr_size.attr,
&dev_attr_nstype.attr,
&dev_attr_mappings.attr,
&dev_attr_spa_index.attr,
&dev_attr_set_cookie.attr,
+ &dev_attr_available_size.attr,
+ &dev_attr_namespace_seed.attr,
&dev_attr_init_namespaces.attr,
NULL,
};
@@ -181,13 +307,18 @@ static umode_t nd_region_visible(struct kobject *kobj, struct attribute *a, int
{
struct device *dev = container_of(kobj, typeof(*dev), kobj);
struct nd_region *nd_region = to_nd_region(dev);
+ int type = nd_region_to_namespace_type(nd_region);
struct nd_spa *nd_spa = nd_region->nd_spa;
- if (a != &dev_attr_set_cookie.attr)
+ if (a != &dev_attr_set_cookie.attr && a != &dev_attr_available_size.attr)
return a->mode;
- if (is_nd_pmem(dev) && nd_spa->nd_set)
- return a->mode;
+ if ((type == ND_DEVICE_NAMESPACE_PMEM
+ || type == ND_DEVICE_NAMESPACE_BLOCK)
+ && a == &dev_attr_available_size.attr)
+ return a->mode;
+ else if (is_nd_pmem(dev) && nd_spa->nd_set)
+ return a->mode;
return 0;
}
@@ -330,6 +461,16 @@ int nd_bus_init_interleave_sets(struct nd_bus *nd_bus)
return rc;
}
+u64 nd_region_interleave_set_cookie(struct nd_region *nd_region)
+{
+ struct nd_spa *nd_spa = nd_region->nd_spa;
+ struct nd_interleave_set *nd_set = nd_spa ? nd_spa->nd_set : NULL;
+
+ if (nd_set)
+ return nd_set->cookie;
+ return 0;
+}
+
/*
* Upon successful probe/remove, take/release a reference on the
* associated interleave set (if present)
diff --git a/include/linux/nd.h b/include/linux/nd.h
index da70e9962197..255c38a83083 100644
--- a/include/linux/nd.h
+++ b/include/linux/nd.h
@@ -28,16 +28,40 @@ static inline struct nd_device_driver *to_nd_device_driver(
return container_of(drv, struct nd_device_driver, drv);
};
+/**
+ * struct nd_namespace_io - infrastructure for loading an nd_pmem instance
+ * @dev: namespace device created by the nd region driver
+ * @res: struct resource conversion of a NFIT SPA table
+ */
struct nd_namespace_io {
struct device dev;
struct resource res;
};
+/**
+ * struct nd_namespace_pmem - namespace device for dimm-backed interleaved memory
+ * @nsio: device and system physical address range to drive
+ * @alt_name: namespace name supplied in the dimm label
+ * @uuid: namespace uuid supplied in the dimm label
+ */
+struct nd_namespace_pmem {
+ struct nd_namespace_io nsio;
+ char *alt_name;
+ u8 *uuid;
+};
+
static inline struct nd_namespace_io *to_nd_namespace_io(struct device *dev)
{
return container_of(dev, struct nd_namespace_io, dev);
}
+static inline struct nd_namespace_pmem *to_nd_namespace_pmem(struct device *dev)
+{
+ struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
+
+ return container_of(nsio, struct nd_namespace_pmem, nsio);
+}
+
#define MODULE_ALIAS_ND_DEVICE(type) \
MODULE_ALIAS("nd:t" __stringify(type) "*")
#define ND_DEVICE_MODALIAS_FMT "nd:t%d"
diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
index 097e67a8d477..5f0cf00872e0 100644
--- a/include/uapi/linux/ndctl.h
+++ b/include/uapi/linux/ndctl.h
@@ -190,4 +190,8 @@ enum nd_driver_flags {
ND_DRIVER_NAMESPACE_PMEM = 1 << ND_DEVICE_NAMESPACE_PMEM,
ND_DRIVER_NAMESPACE_BLOCK = 1 << ND_DEVICE_NAMESPACE_BLOCK,
};
+
+enum {
+ ND_MIN_NAMESPACE_SIZE = 0x00400000,
+};
#endif /* __NDCTL_H__ */
A blk label set describes a namespace composed of one or more
discontiguous dpa ranges on a single dimm. These ranges may alias with
one or more pmem interleave sets that include the given dimm.
This is the runtime/volatile configuration infrastructure for sysfs
manipulation of 'alt_name', 'uuid', 'size', and 'sector_size'. A later
patch will make these settings persistent by writing back the label(s).
Unlike pmem namespaces, multiple blk namespaces can be created per
region. Once a blk namespace has been created, a new seed device (an
unconfigured child of the parent blk region) is instantiated. As long as
a region has 'available_size' != 0, new child namespaces may be created.
Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/core.c | 40 +++
drivers/block/nd/dimm_devs.c | 35 +++
drivers/block/nd/namespace_devs.c | 502 ++++++++++++++++++++++++++++++++++---
drivers/block/nd/nd-private.h | 10 +
drivers/block/nd/nd.h | 5
drivers/block/nd/region_devs.c | 15 +
include/linux/nd.h | 25 ++
7 files changed, 588 insertions(+), 44 deletions(-)
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 560ed5555496..880aef08f919 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -213,6 +213,46 @@ int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
return 0;
}
+ssize_t nd_sector_size_show(unsigned long current_lbasize,
+ const unsigned long *supported, char *buf)
+{
+ ssize_t len = 0;
+ int i;
+
+ for (i = 0; supported[i]; i++)
+ if (current_lbasize == supported[i])
+ len += sprintf(buf + len, "[%ld] ", supported[i]);
+ else
+ len += sprintf(buf + len, "%ld ", supported[i]);
+ len += sprintf(buf + len, "\n");
+ return len;
+}
+
+ssize_t nd_sector_size_store(struct device *dev, const char *buf,
+ unsigned long *current_lbasize, const unsigned long *supported)
+{
+ unsigned long lbasize;
+ int rc, i;
+
+ if (dev->driver)
+ return -EBUSY;
+
+ rc = kstrtoul(buf, 0, &lbasize);
+ if (rc)
+ return rc;
+
+ for (i = 0; supported[i]; i++)
+ if (lbasize == supported[i])
+ break;
+
+ if (supported[i]) {
+ *current_lbasize = lbasize;
+ return 0;
+ } else {
+ return -EINVAL;
+ }
+}
+
static ssize_t commands_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index caa51d3ea6af..ae77bf4a5188 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -417,6 +417,41 @@ static struct nd_dimm *nd_dimm_create(struct nd_bus *nd_bus,
}
/**
+ * nd_blk_available_dpa - account the unused dpa of a BLK region
+ * @nd_mapping: container of dpa-resource-root + labels
+ *
+ * Unlike PMEM, BLK namespaces can occupy discontiguous DPA ranges.
+ */
+resource_size_t nd_blk_available_dpa(struct nd_mapping *nd_mapping)
+{
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ resource_size_t map_end, busy = 0, available;
+ struct resource *res;
+
+ if (!ndd)
+ return 0;
+
+ map_end = nd_mapping->start + nd_mapping->size - 1;
+ for_each_dpa_resource(ndd, res)
+ if (res->start >= nd_mapping->start && res->start < map_end) {
+ resource_size_t end = min(map_end, res->end);
+
+ busy += end - res->start + 1;
+ } else if (res->end >= nd_mapping->start && res->end <= map_end) {
+ busy += res->end - nd_mapping->start + 1;
+ } else if (nd_mapping->start > res->start
+ && nd_mapping->start < res->end) {
+ /* total eclipse of the BLK region mapping */
+ busy += nd_mapping->size;
+ }
+
+ available = map_end - nd_mapping->start + 1;
+ if (busy < available)
+ return available - busy;
+ return 0;
+}
+
+/**
* nd_pmem_available_dpa - for the given dimm+region account unallocated dpa
* @nd_mapping: container of dpa-resource-root + labels
* @nd_region: constrain available space check to this reference region
diff --git a/drivers/block/nd/namespace_devs.c b/drivers/block/nd/namespace_devs.c
index 386776845830..de36f3891284 100644
--- a/drivers/block/nd/namespace_devs.c
+++ b/drivers/block/nd/namespace_devs.c
@@ -37,7 +37,15 @@ static void namespace_pmem_release(struct device *dev)
static void namespace_blk_release(struct device *dev)
{
- /* TODO: blk namespace support */
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+
+ if (nsblk->id >= 0)
+ ida_simple_remove(&nd_region->ns_ida, nsblk->id);
+ kfree(nsblk->alt_name);
+ kfree(nsblk->uuid);
+ kfree(nsblk->res);
+ kfree(nsblk);
}
static struct device_type namespace_io_device_type = {
@@ -90,8 +98,9 @@ static ssize_t __alt_name_store(struct device *dev, const char *buf,
ns_altname = &nspm->alt_name;
} else if (is_namespace_blk(dev)) {
- /* TODO: blk namespace support */
- return -ENXIO;
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ ns_altname = &nsblk->alt_name;
} else
return -ENXIO;
@@ -124,6 +133,24 @@ out:
return rc;
}
+static resource_size_t nd_namespace_blk_size(struct nd_namespace_blk *nsblk)
+{
+ struct nd_region *nd_region = to_nd_region(nsblk->dev.parent);
+ struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct nd_label_id label_id;
+ resource_size_t size = 0;
+ struct resource *res;
+
+ if (!nsblk->uuid)
+ return 0;
+ nd_label_gen_id(&label_id, nsblk->uuid, NSLABEL_FLAG_LOCAL);
+ for_each_dpa_resource(ndd, res)
+ if (strcmp(res->name, label_id.id) == 0)
+ size += resource_size(res);
+ return size;
+}
+
static ssize_t alt_name_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
@@ -150,8 +177,9 @@ static ssize_t alt_name_show(struct device *dev,
ns_altname = nspm->alt_name;
} else if (is_namespace_blk(dev)) {
- /* TODO: blk namespace support */
- return -ENXIO;
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ ns_altname = nsblk->alt_name;
} else
return -ENXIO;
@@ -197,6 +225,8 @@ static int scan_free(struct nd_region *nd_region,
new_start = res->start;
rc = adjust_resource(res, new_start, resource_size(res) - n);
+ if (rc == 0)
+ res->flags |= DPA_RESOURCE_ADJUSTED;
nd_dbg_dpa(nd_region, ndd, res, "shrink %d\n", rc);
break;
}
@@ -257,14 +287,15 @@ static resource_size_t init_dpa_allocation(struct nd_label_id *label_id,
return rc ? n : 0;
}
-static bool space_valid(bool is_pmem, struct nd_label_id *label_id,
- struct resource *res)
+static bool space_valid(bool is_pmem, bool is_reserve,
+ struct nd_label_id *label_id, struct resource *res)
{
/*
* For BLK-space any space is valid, for PMEM-space, it must be
- * contiguous with an existing allocation.
+ * contiguous with an existing allocation unless we are
+ * reserving pmem.
*/
- if (!is_pmem)
+ if (is_reserve || !is_pmem)
return true;
if (!res || strcmp(res->name, label_id->id) == 0)
return true;
@@ -280,6 +311,7 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
resource_size_t n)
{
resource_size_t mapping_end = nd_mapping->start + nd_mapping->size - 1;
+ bool is_reserve = strcmp(label_id->id, "pmem-reserve") == 0;
bool is_pmem = strncmp(label_id->id, "pmem", 4) == 0;
struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
const resource_size_t to_allocate = n;
@@ -305,7 +337,7 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
if (!first++ && res->start > nd_mapping->start) {
free_start = nd_mapping->start;
available = res->start - free_start;
- if (space_valid(is_pmem, label_id, NULL))
+ if (space_valid(is_pmem, is_reserve, label_id, NULL))
loc = ALLOC_BEFORE;
}
@@ -313,7 +345,7 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
if (!loc && next) {
free_start = res->start + resource_size(res);
free_end = min(mapping_end, next->start - 1);
- if (space_valid(is_pmem, label_id, res)
+ if (space_valid(is_pmem, is_reserve, label_id, res)
&& free_start < free_end) {
available = free_end + 1 - free_start;
loc = ALLOC_MID;
@@ -324,7 +356,7 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
if (!loc && !next) {
free_start = res->start + resource_size(res);
free_end = mapping_end;
- if (space_valid(is_pmem, label_id, res)
+ if (space_valid(is_pmem, is_reserve, label_id, res)
&& free_start < free_end) {
available = free_end + 1 - free_start;
loc = ALLOC_AFTER;
@@ -338,7 +370,7 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
case ALLOC_BEFORE:
if (strcmp(res->name, label_id->id) == 0) {
/* adjust current resource up */
- if (is_pmem)
+ if (is_pmem && !is_reserve)
return n;
rc = adjust_resource(res, res->start - allocate,
resource_size(res) + allocate);
@@ -349,7 +381,7 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
case ALLOC_MID:
if (strcmp(next->name, label_id->id) == 0) {
/* adjust next resource up */
- if (is_pmem)
+ if (is_pmem && !is_reserve)
return n;
rc = adjust_resource(next, next->start
- allocate, resource_size(next)
@@ -375,7 +407,7 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
/* BLK allocate bottom up */
if (!is_pmem)
free_start += available - allocate;
- else if (free_start != nd_mapping->start)
+ else if (!is_reserve && free_start != nd_mapping->start)
return n;
new_res = nd_dimm_allocate_dpa(ndd, label_id,
@@ -386,6 +418,8 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
/* adjust current resource down */
rc = adjust_resource(res, res->start, resource_size(res)
+ allocate);
+ if (rc == 0)
+ res->flags |= DPA_RESOURCE_ADJUSTED;
}
if (!new_res)
@@ -411,11 +445,106 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
return 0;
}
- if (is_pmem && n == to_allocate)
+ /*
+ * If we allocated nothing in the BLK case it may be because we are in
+ * an initial "pmem-reserve pass". Only do an initial BLK allocation
+ * when none of the DPA space is reserved.
+ */
+ if ((is_pmem || !ndd->dpa.child) && n == to_allocate)
return init_dpa_allocation(label_id, nd_region, nd_mapping, n);
return n;
}
+static int merge_dpa(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, struct nd_label_id *label_id)
+{
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct resource *res;
+
+ if (strncmp("pmem", label_id->id, 4) == 0)
+ return 0;
+ retry:
+ for_each_dpa_resource(ndd, res) {
+ int rc;
+ struct resource *next = res->sibling;
+ resource_size_t end = res->start + resource_size(res);
+
+ if (!next || strcmp(res->name, label_id->id) != 0
+ || strcmp(next->name, label_id->id) != 0
+ || end != next->start)
+ continue;
+ end += resource_size(next);
+ nd_dimm_free_dpa(ndd, next);
+ rc = adjust_resource(res, res->start, end - res->start);
+ nd_dbg_dpa(nd_region, ndd, res, "merge %d\n", rc);
+ if (rc)
+ return rc;
+ res->flags |= DPA_RESOURCE_ADJUSTED;
+ goto retry;
+ }
+
+ return 0;
+}
+
+static int __reserve_free_pmem(struct device *dev, void *data)
+{
+ struct nd_dimm *nd_dimm = data;
+ struct nd_region *nd_region;
+ struct nd_label_id label_id;
+ int i;
+
+ if (!is_nd_pmem(dev))
+ return 0;
+
+ nd_region = to_nd_region(dev);
+ if (nd_region->ndr_mappings == 0)
+ return 0;
+
+ memset(&label_id, 0, sizeof(label_id));
+ strcat(label_id.id, "pmem-reserve");
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ resource_size_t n, rem = 0;
+
+ if (nd_mapping->nd_dimm != nd_dimm)
+ continue;
+
+ n = nd_pmem_available_dpa(nd_region, nd_mapping, &rem);
+ if (n == 0)
+ return 0;
+ rem = scan_allocate(nd_region, nd_mapping, &label_id, n);
+ dev_WARN_ONCE(&nd_region->dev, rem,
+ "pmem reserve underrun: %#llx of %#llx bytes\n",
+ (unsigned long long) n - rem,
+ (unsigned long long) n);
+ return rem ? -ENXIO : 0;
+ }
+
+ return 0;
+}
+
+static void release_free_pmem(struct nd_bus *nd_bus, struct nd_mapping *nd_mapping)
+{
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct resource *res, *_res;
+
+ for_each_dpa_resource_safe(ndd, res, _res)
+ if (strcmp(res->name, "pmem-reserve") == 0)
+ nd_dimm_free_dpa(ndd, res);
+}
+
+static int reserve_free_pmem(struct nd_bus *nd_bus,
+ struct nd_mapping *nd_mapping)
+{
+ struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
+ int rc;
+
+ rc = device_for_each_child(&nd_bus->dev, nd_dimm, __reserve_free_pmem);
+ if (rc)
+ release_free_pmem(nd_bus, nd_mapping);
+ return rc;
+}
+
/**
* grow_dpa_allocation - for each dimm allocate n bytes for @label_id
* @nd_region: the set of dimms to allocate @n more bytes from
@@ -432,13 +561,44 @@ static resource_size_t scan_allocate(struct nd_region *nd_region,
static int grow_dpa_allocation(struct nd_region *nd_region,
struct nd_label_id *label_id, resource_size_t n)
{
+ struct nd_bus *nd_bus = walk_to_nd_bus(&nd_region->dev);
+ bool is_pmem = strncmp(label_id->id, "pmem", 4) == 0;
int i;
for (i = 0; i < nd_region->ndr_mappings; i++) {
struct nd_mapping *nd_mapping = &nd_region->mapping[i];
- int rc;
+ resource_size_t rem = n;
+ int rc, j;
- rc = scan_allocate(nd_region, nd_mapping, label_id, n);
+ /*
+ * In the BLK case try once with all unallocated PMEM
+ * reserved, and once without
+ */
+ for (j = is_pmem; j < 2; j++) {
+ bool blk_only = j == 0;
+
+ if (blk_only) {
+ rc = reserve_free_pmem(nd_bus, nd_mapping);
+ if (rc)
+ return rc;
+ }
+ rem = scan_allocate(nd_region, nd_mapping, label_id, rem);
+ if (blk_only)
+ release_free_pmem(nd_bus, nd_mapping);
+
+ /* try again and allow encroachments into PMEM */
+ if (rem == 0)
+ break;
+ }
+
+ dev_WARN_ONCE(&nd_region->dev, rem,
+ "allocation underrun: %#llx of %#llx bytes\n",
+ (unsigned long long) n - rem,
+ (unsigned long long) n);
+ if (rem)
+ return -ENXIO;
+
+ rc = merge_dpa(nd_region, nd_mapping, label_id);
if (rc)
return rc;
}
@@ -474,8 +634,10 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
uuid = nspm->uuid;
} else if (is_namespace_blk(dev)) {
- /* TODO: blk namespace support */
- return -ENXIO;
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ uuid = nsblk->uuid;
+ flags = NSLABEL_FLAG_LOCAL;
}
/*
@@ -529,6 +691,14 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
nd_namespace_pmem_set_size(nd_region, nspm,
val * nd_region->ndr_mappings);
+ } else if (is_namespace_blk(dev)) {
+ /*
+ * Try to delete the namespace if we deleted all of its
+ * allocation and this is not the seed device for the
+ * region.
+ */
+ if (val == 0 && nd_region->ns_seed != dev)
+ nd_device_unregister(dev, ND_ASYNC);
}
return rc;
@@ -555,8 +725,9 @@ static ssize_t size_store(struct device *dev,
uuid = &nspm->uuid;
} else if (is_namespace_blk(dev)) {
- /* TODO: blk namespace support */
- rc = -ENXIO;
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ uuid = &nsblk->uuid;
}
if (rc == 0 && val == 0 && uuid) {
@@ -577,21 +748,23 @@ static ssize_t size_store(struct device *dev,
static ssize_t size_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
+ unsigned long long size = 0;
+
+ nd_bus_lock(dev);
if (is_namespace_pmem(dev)) {
struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
- return sprintf(buf, "%llu\n", (unsigned long long)
- resource_size(&nspm->nsio.res));
+ size = resource_size(&nspm->nsio.res);
} else if (is_namespace_blk(dev)) {
- /* TODO: blk namespace support */
- return -ENXIO;
+ size = nd_namespace_blk_size(to_nd_namespace_blk(dev));
} else if (is_namespace_io(dev)) {
struct nd_namespace_io *nsio = to_nd_namespace_io(dev);
- return sprintf(buf, "%llu\n", (unsigned long long)
- resource_size(&nsio->res));
- } else
- return -ENXIO;
+ size = resource_size(&nsio->res);
+ }
+ nd_bus_unlock(dev);
+
+ return sprintf(buf, "%llu\n", size);
}
static DEVICE_ATTR(size, S_IRUGO, size_show, size_store);
@@ -605,8 +778,9 @@ static ssize_t uuid_show(struct device *dev,
uuid = nspm->uuid;
} else if (is_namespace_blk(dev)) {
- /* TODO: blk namespace support */
- return -ENXIO;
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ uuid = nsblk->uuid;
} else
return -ENXIO;
@@ -670,8 +844,9 @@ static ssize_t uuid_store(struct device *dev,
ns_uuid = &nspm->uuid;
} else if (is_namespace_blk(dev)) {
- /* TODO: blk namespace support */
- return -ENXIO;
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ ns_uuid = &nsblk->uuid;
} else
return -ENXIO;
@@ -713,12 +888,48 @@ static ssize_t resource_show(struct device *dev,
}
static DEVICE_ATTR_RO(resource);
+static const unsigned long ns_lbasize_supported[] = { 512, 0 };
+
+static ssize_t sector_size_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ if (!is_namespace_blk(dev))
+ return -ENXIO;
+
+ return nd_sector_size_show(nsblk->lbasize, ns_lbasize_supported, buf);
+}
+
+static ssize_t sector_size_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+ ssize_t rc;
+
+ if (!is_namespace_blk(dev))
+ return -ENXIO;
+
+ device_lock(dev);
+ nd_bus_lock(dev);
+ rc = nd_sector_size_store(dev, buf, &nsblk->lbasize,
+ ns_lbasize_supported);
+ dev_dbg(dev, "%s: result: %zd wrote: %s%s", __func__,
+ rc, buf, buf[len - 1] == '\n' ? "" : "\n");
+ nd_bus_unlock(dev);
+ device_unlock(dev);
+
+ return rc ? rc : len;
+}
+static DEVICE_ATTR_RW(sector_size);
+
static struct attribute *nd_namespace_attributes[] = {
&dev_attr_type.attr,
&dev_attr_size.attr,
&dev_attr_uuid.attr,
&dev_attr_resource.attr,
&dev_attr_alt_name.attr,
+ &dev_attr_sector_size.attr,
NULL,
};
@@ -735,6 +946,10 @@ static umode_t nd_namespace_attr_visible(struct kobject *kobj, struct attribute
if (is_namespace_pmem(dev) || is_namespace_blk(dev)) {
if (a == &dev_attr_size.attr)
return S_IWUSR;
+
+ if (is_namespace_pmem(dev) && a == &dev_attr_sector_size.attr)
+ return 0;
+
return a->mode;
}
@@ -1029,6 +1244,173 @@ static struct device **create_namespace_pmem(struct nd_region *nd_region)
return NULL;
}
+struct resource *nsblk_add_resource(struct nd_region *nd_region,
+ struct nd_dimm_drvdata *ndd, struct nd_namespace_blk *nsblk,
+ resource_size_t start)
+{
+ struct nd_label_id label_id;
+ struct resource *res;
+
+ nd_label_gen_id(&label_id, nsblk->uuid, NSLABEL_FLAG_LOCAL);
+ nsblk->res = krealloc(nsblk->res,
+ sizeof(void *) * (nsblk->num_resources + 1),
+ GFP_KERNEL);
+ if (!nsblk->res)
+ return NULL;
+ for_each_dpa_resource(ndd, res)
+ if (strcmp(res->name, label_id.id) == 0 && res->start == start) {
+ nsblk->res[nsblk->num_resources++] = res;
+ return res;
+ }
+ return NULL;
+}
+
+static struct device *nd_namespace_blk_create(struct nd_region *nd_region)
+{
+ struct nd_namespace_blk *nsblk;
+ struct device *dev;
+
+ if (!is_nd_blk(&nd_region->dev))
+ return NULL;
+
+ nsblk = kzalloc(sizeof(*nsblk), GFP_KERNEL);
+ if (!nsblk)
+ return NULL;
+
+ dev = &nsblk->dev;
+ dev->type = &namespace_blk_device_type;
+ nsblk->id = ida_simple_get(&nd_region->ns_ida, 0, 0, GFP_KERNEL);
+ if (nsblk->id < 0) {
+ kfree(nsblk);
+ return NULL;
+ }
+ dev_set_name(dev, "namespace%d.%d", nd_region->id, nsblk->id);
+ dev->parent = &nd_region->dev;
+ dev->groups = nd_namespace_attribute_groups;
+
+ return &nsblk->dev;
+}
+
+void nd_region_create_blk_seed(struct nd_region *nd_region)
+{
+ WARN_ON(!is_nd_bus_locked(&nd_region->dev));
+ nd_region->ns_seed = nd_namespace_blk_create(nd_region);
+ /*
+ * Seed creation failures are not fatal, provisioning is simply
+ * disabled until memory becomes available
+ */
+ if (!nd_region->ns_seed)
+ dev_err(&nd_region->dev, "failed to create blk namespace\n");
+ else
+ nd_device_register(nd_region->ns_seed);
+}
+
+static struct device **create_namespace_blk(struct nd_region *nd_region)
+{
+ struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+ struct nd_namespace_label __iomem *nd_label;
+ struct device *dev, **devs = NULL;
+ u8 label_uuid[NSLABEL_UUID_LEN];
+ struct nd_namespace_blk *nsblk;
+ struct nd_dimm_drvdata *ndd;
+ int i, l, count = 0;
+ struct resource *res;
+
+ if (nd_region->ndr_mappings == 0)
+ return NULL;
+
+ ndd = to_ndd(nd_mapping);
+ for_each_label(l, nd_label, nd_mapping->labels) {
+ u32 flags = readl(&nd_label->flags);
+ char name[NSLABEL_NAME_LEN];
+ struct device **__devs;
+
+ if (flags & NSLABEL_FLAG_LOCAL)
+ /* pass */;
+ else
+ continue;
+
+ memcpy_fromio(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
+ for (i = 0; i < count; i++) {
+ nsblk = to_nd_namespace_blk(devs[i]);
+ if (memcmp(nsblk->uuid, label_uuid,
+ NSLABEL_UUID_LEN) == 0) {
+ res = nsblk_add_resource(nd_region, ndd, nsblk,
+ readq(&nd_label->dpa));
+ if (!res)
+ goto err;
+ nd_dbg_dpa(nd_region, ndd, res, "%s assign\n",
+ dev_name(&nsblk->dev));
+ break;
+ }
+ }
+ if (i < count)
+ continue;
+ __devs = kcalloc(count + 2, sizeof(dev), GFP_KERNEL);
+ if (!__devs)
+ goto err;
+ memcpy(__devs, devs, sizeof(dev) * count);
+ kfree(devs);
+ devs = __devs;
+
+ nsblk = kzalloc(sizeof(*nsblk), GFP_KERNEL);
+ if (!nsblk)
+ goto err;
+ dev = &nsblk->dev;
+ dev->type = &namespace_blk_device_type;
+ dev_set_name(dev, "namespace%d.%d", nd_region->id, count);
+ devs[count++] = dev;
+ nsblk->id = -1;
+ nsblk->lbasize = readq(&nd_label->lbasize);
+ nsblk->uuid = kmemdup(label_uuid, NSLABEL_UUID_LEN, GFP_KERNEL);
+ if (!nsblk->uuid)
+ goto err;
+ memcpy_fromio(name, nd_label->name, NSLABEL_NAME_LEN);
+ if (name[0])
+ nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN,
+ GFP_KERNEL);
+ res = nsblk_add_resource(nd_region, ndd, nsblk,
+ readq(&nd_label->dpa));
+ if (!res)
+ goto err;
+ nd_dbg_dpa(nd_region, ndd, res, "%s assign\n",
+ dev_name(&nsblk->dev));
+ }
+
+ dev_dbg(&nd_region->dev, "%s: discovered %d blk namespace%s\n",
+ __func__, count, count == 1 ? "" : "s");
+
+ if (count == 0) {
+ /* Publish a zero-sized namespace for userspace to configure. */
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+
+ kfree(nd_mapping->labels);
+ nd_mapping->labels = NULL;
+ }
+
+ devs = kcalloc(2, sizeof(dev), GFP_KERNEL);
+ if (!devs)
+ goto err;
+ nsblk = kzalloc(sizeof(*nsblk), GFP_KERNEL);
+ if (!nsblk)
+ goto err;
+ dev = &nsblk->dev;
+ dev->type = &namespace_blk_device_type;
+ devs[count++] = dev;
+ }
+
+ return devs;
+
+err:
+ for (i = 0; i < count; i++) {
+ nsblk = to_nd_namespace_blk(devs[i]);
+ namespace_blk_release(&nsblk->dev);
+ }
+ kfree(devs);
+ return NULL;
+}
+
static int init_active_labels(struct nd_region *nd_region)
{
int i;
@@ -1088,6 +1470,9 @@ int nd_region_register_namespaces(struct nd_region *nd_region, int *err)
case ND_DEVICE_NAMESPACE_PMEM:
devs = create_namespace_pmem(nd_region);
break;
+ case ND_DEVICE_NAMESPACE_BLOCK:
+ devs = create_namespace_blk(nd_region);
+ break;
default:
break;
}
@@ -1101,23 +1486,56 @@ int nd_region_register_namespaces(struct nd_region *nd_region, int *err)
nd_region->ns_seed = devs[0];
for (i = 0; devs[i]; i++) {
struct device *dev = devs[i];
+ int id;
+
+ if (type == ND_DEVICE_NAMESPACE_BLOCK) {
+ struct nd_namespace_blk *nsblk;
- dev_set_name(dev, "namespace%d.%d", nd_region->id, i);
+ nsblk = to_nd_namespace_blk(dev);
+ id = ida_simple_get(&nd_region->ns_ida, 0, 0,
+ GFP_KERNEL);
+ nsblk->id = id;
+ } else
+ id = i;
+
+ if (id < 0)
+ break;
+ dev_set_name(dev, "namespace%d.%d", nd_region->id, id);
dev->parent = &nd_region->dev;
dev->groups = nd_namespace_attribute_groups;
nd_device_register(dev);
}
- kfree(devs);
- return i;
+ if (devs[i]) {
+ int j;
+
+ for (j = i; devs[j]; j++) {
+ struct device *dev = devs[j];
+
+ device_initialize(dev);
+ put_device(dev);
+ }
+ *err = j - i;
+ /*
+ * All of the namespaces we tried to register failed, so
+ * fail region activation.
+ */
+ if (*err == 0)
+ rc = -ENODEV;
+ }
+ kfree(devs);
err:
- for (i = 0; i < nd_region->ndr_mappings; i++) {
- struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ if (rc == -ENODEV) {
+ nd_region->ns_seed = NULL;
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
- kfree(nd_mapping->labels);
- nd_mapping->labels = NULL;
+ kfree(nd_mapping->labels);
+ nd_mapping->labels = NULL;
+ }
+ return rc;
}
- return rc;
+ return i;
}
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 03b14ab8fc7d..1a14ebb40a1a 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -17,6 +17,7 @@
#include <linux/sizes.h>
#include <linux/mutex.h>
#include <linux/io.h>
+#include <linux/nd.h>
#include "nfit.h"
extern struct list_head nd_bus_list;
@@ -26,6 +27,8 @@ extern int nd_dimm_major;
enum {
/* need to set a limit somewhere, but yes, this is likely overkill */
ND_IOCTL_MAX_BUFLEN = SZ_4M,
+ /* mark newly adjusted resources as requiring a label update */
+ DPA_RESOURCE_ADJUSTED = 1 << 0,
};
/*
@@ -116,6 +119,8 @@ void nd_dimm_exit(void);
int nd_region_exit(void);
void nd_region_probe_start(struct nd_bus *nd_bus, struct device *dev);
void nd_region_probe_end(struct nd_bus *nd_bus, struct device *dev, int rc);
+struct nd_region;
+void nd_region_create_blk_seed(struct nd_region *nd_region);
void nd_region_notify_remove(struct nd_bus *nd_bus, struct device *dev, int rc);
int nd_bus_create_ndctl(struct nd_bus *nd_bus);
void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
@@ -131,10 +136,15 @@ struct nd_dimm_drvdata;
struct nd_mapping;
resource_size_t nd_pmem_available_dpa(struct nd_region *nd_region,
struct nd_mapping *nd_mapping, resource_size_t *overlap);
+resource_size_t nd_blk_available_dpa(struct nd_mapping *nd_mapping);
resource_size_t nd_region_available_dpa(struct nd_region *nd_region);
struct resource *nd_dimm_allocate_dpa(struct nd_dimm_drvdata *ndd,
struct nd_label_id *label_id, resource_size_t start,
resource_size_t n);
resource_size_t nd_dimm_allocated_dpa(struct nd_dimm_drvdata *ndd,
struct nd_label_id *label_id);
+struct nd_mapping;
+struct resource *nsblk_add_resource(struct nd_region *nd_region,
+ struct nd_dimm_drvdata *ndd, struct nd_namespace_blk *nsblk,
+ resource_size_t start);
#endif /* __ND_PRIVATE_H__ */
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index 386e17056d3c..5adb55e76b33 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -97,6 +97,7 @@ static inline struct nd_namespace_label __iomem *nd_get_label(
struct nd_region {
struct device dev;
struct nd_spa *nd_spa;
+ struct ida ns_ida;
struct device *ns_seed;
u16 ndr_mappings;
u64 ndr_size;
@@ -125,6 +126,10 @@ void nd_device_unregister(struct device *dev, enum nd_async_mode mode);
u64 nd_fletcher64(void __iomem *addr, size_t len);
int nd_uuid_store(struct device *dev, u8 **uuid_out, const char *buf,
size_t len);
+ssize_t nd_sector_size_show(unsigned long current_lbasize,
+ const unsigned long *supported, char *buf);
+ssize_t nd_sector_size_store(struct device *dev, const char *buf,
+ unsigned long *current_lbasize, const unsigned long *supported);
extern struct attribute_group nd_device_attribute_group;
struct nd_dimm;
struct nd_dimm_drvdata *to_ndd(struct nd_mapping *nd_mapping);
diff --git a/drivers/block/nd/region_devs.c b/drivers/block/nd/region_devs.c
index 8bcfd9b91a71..1b2c81e0eb0f 100644
--- a/drivers/block/nd/region_devs.c
+++ b/drivers/block/nd/region_devs.c
@@ -112,7 +112,12 @@ static int is_uuid_busy(struct device *dev, void *data)
break;
}
case ND_DEVICE_NAMESPACE_BLOCK: {
- /* TODO: blk namespace support */
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ if (!nsblk->uuid)
+ break;
+ if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) == 0)
+ return -EBUSY;
break;
}
default:
@@ -235,7 +240,7 @@ resource_size_t nd_region_available_dpa(struct nd_region *nd_region)
goto retry;
}
} else if (is_nd_blk(&nd_region->dev)) {
- /* TODO: BLK Namespace support */
+ available += nd_blk_available_dpa(nd_mapping);
}
}
@@ -494,6 +499,11 @@ static void nd_region_notify_driver_action(struct nd_bus *nd_bus,
else
atomic_dec(&nd_dimm->busy);
}
+ } else if (dev->parent && is_nd_blk(dev->parent) && probe && rc == 0) {
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+
+ if (nd_region->ns_seed == dev)
+ nd_region_create_blk_seed(nd_region);
}
}
@@ -735,6 +745,7 @@ static struct nd_region *nd_region_create(struct nd_bus *nd_bus,
dev->groups = nd_region_attribute_groups;
nd_region->ndr_size = readq(&nd_spa->nfit_spa->spa_length);
nd_region->ndr_start = readq(&nd_spa->nfit_spa->spa_base);
+ ida_init(&nd_region->ns_ida);
switch (spa_type) {
case NFIT_SPA_PM:
nd_spa_range_init(nd_bus, nd_region, &nd_pmem_device_type);
diff --git a/include/linux/nd.h b/include/linux/nd.h
index 255c38a83083..23276ea91690 100644
--- a/include/linux/nd.h
+++ b/include/linux/nd.h
@@ -50,6 +50,26 @@ struct nd_namespace_pmem {
u8 *uuid;
};
+/**
+ * struct nd_namespace_blk - namespace for dimm-bounded persistent memory
+ * @dev: namespace device created by the nd region driver
+ * @alt_name: namespace name supplied in the dimm label
+ * @uuid: namespace uuid supplied in the dimm label
+ * @id: ida allocated id
+ * @lbasize: native sector size used when a btt is not present
+ * @num_resources: number of dpa extents to claim
+ * @res: discontiguous dpa extents for given dimm
+ */
+struct nd_namespace_blk {
+ struct device dev;
+ char *alt_name;
+ u8 *uuid;
+ int id;
+ unsigned long lbasize;
+ int num_resources;
+ struct resource **res;
+};
+
static inline struct nd_namespace_io *to_nd_namespace_io(struct device *dev)
{
return container_of(dev, struct nd_namespace_io, dev);
@@ -62,6 +82,11 @@ static inline struct nd_namespace_pmem *to_nd_namespace_pmem(struct device *dev)
return container_of(nsio, struct nd_namespace_pmem, nsio);
}
+static inline struct nd_namespace_blk *to_nd_namespace_blk(struct device *dev)
+{
+ return container_of(dev, struct nd_namespace_blk, dev);
+}
+
#define MODULE_ALIAS_ND_DEVICE(type) \
MODULE_ALIAS("nd:t" __stringify(type) "*")
#define ND_DEVICE_MODALIAS_FMT "nd:t%d"
After 'uuid', 'size', and optionally 'alt_name' have been set to valid
values, the labels on the dimms can be updated.
The write procedure is:
1/ Allocate and write new labels in the "next" index
2/ Free the old labels in the working copy
3/ Write the bitmap and the label space on the dimm
4/ Write the index to make the update valid
Label ranges directly mirror the dpa resource values for the given
label_id of the namespace.
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/dimm_devs.c | 49 ++++++
drivers/block/nd/label.c | 327 +++++++++++++++++++++++++++++++++++++
drivers/block/nd/label.h | 6 +
drivers/block/nd/namespace_devs.c | 82 ++++++++-
drivers/block/nd/nd.h | 3
5 files changed, 453 insertions(+), 14 deletions(-)
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index ae77bf4a5188..a1685c01a2bb 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -134,6 +134,55 @@ int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd)
return rc;
}
+int nd_dimm_set_config_data(struct nd_dimm_drvdata *ndd, size_t offset,
+ void *buf, size_t len)
+{
+ int rc = validate_dimm(ndd);
+ size_t max_cmd_size, buf_offset;
+ struct nfit_cmd_set_config_hdr *cmd;
+ struct nd_bus *nd_bus = walk_to_nd_bus(ndd->dev);
+ struct nfit_bus_descriptor *nfit_desc = nd_bus->nfit_desc;
+
+ if (rc)
+ return rc;
+
+ if (!ndd->data)
+ return -ENXIO;
+
+ if (offset + len > ndd->nsarea.config_size)
+ return -ENXIO;
+
+ max_cmd_size = min_t(u32, PAGE_SIZE, len);
+ max_cmd_size = min_t(u32, max_cmd_size, ndd->nsarea.max_xfer);
+ cmd = kzalloc(max_cmd_size + sizeof(*cmd) + sizeof(u32), GFP_KERNEL);
+ if (!cmd)
+ return -ENOMEM;
+
+ for (buf_offset = 0; len; len -= cmd->in_length,
+ buf_offset += cmd->in_length) {
+ size_t cmd_size;
+ u32 *status;
+
+ cmd->in_offset = offset + buf_offset;
+ cmd->in_length = min(max_cmd_size, len);
+ memcpy(cmd->in_buf, buf + buf_offset, cmd->in_length);
+
+ /* status is output in the last 4-bytes of the command buffer */
+ cmd_size = sizeof(*cmd) + cmd->in_length + sizeof(u32);
+ status = ((void *) cmd) + cmd_size - sizeof(u32);
+
+ rc = nfit_desc->nfit_ctl(nfit_desc, to_nd_dimm(ndd->dev),
+ NFIT_CMD_SET_CONFIG_DATA, cmd, cmd_size);
+ if (rc || *status) {
+ rc = rc ? rc : -ENXIO;
+ break;
+ }
+ }
+ kfree(cmd);
+
+ return rc;
+}
+
static void nd_dimm_release(struct device *dev)
{
struct nd_dimm *nd_dimm = to_nd_dimm(dev);
diff --git a/drivers/block/nd/label.c b/drivers/block/nd/label.c
index b55fa2a6f872..78898b642191 100644
--- a/drivers/block/nd/label.c
+++ b/drivers/block/nd/label.c
@@ -12,6 +12,7 @@
*/
#include <linux/device.h>
#include <linux/ndctl.h>
+#include <linux/slab.h>
#include <linux/io.h>
#include <linux/nd.h>
#include "nd-private.h"
@@ -57,6 +58,11 @@ size_t sizeof_namespace_index(struct nd_dimm_drvdata *ndd)
return ndd->nsindex_size;
}
+static int nd_dimm_num_label_slots(struct nd_dimm_drvdata *ndd)
+{
+ return ndd->nsarea.config_size / 129;
+}
+
int nd_label_validate(struct nd_dimm_drvdata *ndd)
{
/*
@@ -202,23 +208,30 @@ static struct nd_namespace_label __iomem *nd_label_base(struct nd_dimm_drvdata *
return base + 2 * sizeof_namespace_index(ndd);
}
+static int to_slot(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_label __iomem *nd_label)
+{
+ return nd_label - nd_label_base(ndd);
+}
+
#define for_each_clear_bit_le(bit, addr, size) \
for ((bit) = find_next_zero_bit_le((addr), (size), 0); \
(bit) < (size); \
(bit) = find_next_zero_bit_le((addr), (size), (bit) + 1))
/**
- * preamble_current - common variable initialization for nd_label_* routines
+ * preamble_index - common variable initialization for nd_label_* routines
* @nd_dimm: dimm container for the relevant label set
+ * @idx: namespace_index index
* @nsindex: on return set to the currently active namespace index
* @free: on return set to the free label bitmap in the index
* @nslot: on return set to the number of slots in the label space
*/
-static bool preamble_current(struct nd_dimm_drvdata *ndd,
+static bool preamble_index(struct nd_dimm_drvdata *ndd, int idx,
struct nd_namespace_index **nsindex,
unsigned long **free, u32 *nslot)
{
- *nsindex = to_current_namespace_index(ndd);
+ *nsindex = to_namespace_index(ndd, idx);
if (*nsindex == NULL)
return false;
@@ -237,6 +250,22 @@ char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags)
return label_id->id;
}
+static bool preamble_current(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_index **nsindex,
+ unsigned long **free, u32 *nslot)
+{
+ return preamble_index(ndd, ndd->ns_current, nsindex,
+ free, nslot);
+}
+
+static bool preamble_next(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_index **nsindex,
+ unsigned long **free, u32 *nslot)
+{
+ return preamble_index(ndd, ndd->ns_next, nsindex,
+ free, nslot);
+}
+
static bool slot_valid(struct nd_namespace_label __iomem *nd_label, u32 slot)
{
/* check that we are written where we expect to be written */
@@ -341,3 +370,295 @@ struct nd_namespace_label __iomem *nd_label_active(
return NULL;
}
+
+u32 nd_label_alloc_slot(struct nd_dimm_drvdata *ndd)
+{
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free;
+ u32 nslot, slot;
+
+ if (!preamble_next(ndd, &nsindex, &free, &nslot))
+ return UINT_MAX;
+
+ WARN_ON(!is_nd_bus_locked(ndd->dev));
+
+ slot = find_next_bit_le(free, nslot, 0);
+ if (slot == nslot)
+ return UINT_MAX;
+
+ clear_bit_le(slot, free);
+
+ return slot;
+}
+
+bool nd_label_free_slot(struct nd_dimm_drvdata *ndd, u32 slot)
+{
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free;
+ u32 nslot;
+
+ if (!preamble_next(ndd, &nsindex, &free, &nslot))
+ return false;
+
+ WARN_ON(!is_nd_bus_locked(ndd->dev));
+
+ if (slot < nslot)
+ return !test_and_set_bit_le(slot, free);
+ return false;
+}
+
+u32 nd_label_nfree(struct nd_dimm_drvdata *ndd)
+{
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free;
+ u32 nslot;
+
+ WARN_ON(!is_nd_bus_locked(ndd->dev));
+
+ if (!preamble_next(ndd, &nsindex, &free, &nslot))
+ return 0;
+
+ return bitmap_weight(free, nslot);
+}
+
+static int nd_label_write_index(struct nd_dimm_drvdata *ndd, int index, u32 seq,
+ unsigned long flags)
+{
+ struct nd_namespace_index *nsindex = to_namespace_index(ndd, index);
+ unsigned long offset;
+ u64 checksum;
+ u32 nslot;
+ int rc;
+
+ if (flags & ND_NSINDEX_INIT)
+ nslot = nd_dimm_num_label_slots(ndd);
+ else
+ nslot = readl(&nsindex->nslot);
+
+ memcpy_toio(nsindex->sig, NSINDEX_SIGNATURE, NSINDEX_SIG_LEN);
+ writel(0, &nsindex->flags);
+ writel(seq, &nsindex->seq);
+ offset = (unsigned long) nsindex
+ - (unsigned long) to_namespace_index(ndd, 0);
+ writeq(offset, &nsindex->myoff);
+ writeq(sizeof_namespace_index(ndd), &nsindex->mysize);
+ offset = (unsigned long) to_namespace_index(ndd,
+ nd_label_next_nsindex(index))
+ - (unsigned long) to_namespace_index(ndd, 0);
+ writeq(offset, &nsindex->otheroff);
+ offset = (unsigned long) nd_label_base(ndd)
+ - (unsigned long) to_namespace_index(ndd, 0);
+ writeq(offset, &nsindex->labeloff);
+ writel(nslot, &nsindex->nslot);
+ writew(1, &nsindex->major);
+ writew(1, &nsindex->minor);
+ writeq(0, &nsindex->checksum);
+ if (flags & ND_NSINDEX_INIT) {
+ unsigned long *free = (unsigned long __force *) nsindex->free;
+ u32 nfree = ALIGN(nslot, BITS_PER_LONG);
+ int last_bits, i;
+
+ memset_io(nsindex->free, 0xff, nfree / 8);
+ for (i = 0, last_bits = nfree - nslot; i < last_bits; i++)
+ clear_bit_le(nslot + i, free);
+ }
+ checksum = nd_fletcher64(nsindex, sizeof_namespace_index(ndd));
+ writeq(checksum, &nsindex->checksum);
+ rc = nd_dimm_set_config_data(ndd, readq(&nsindex->myoff),
+ nsindex, sizeof_namespace_index(ndd));
+ if (rc < 0)
+ return rc;
+
+ if (flags & ND_NSINDEX_INIT)
+ return 0;
+
+ /* copy the index we just wrote to the new 'next' */
+ WARN_ON(index != ndd->ns_next);
+ nd_label_copy(ndd, to_current_namespace_index(ndd), nsindex);
+ ndd->ns_current = nd_label_next_nsindex(ndd->ns_current);
+ ndd->ns_next = nd_label_next_nsindex(ndd->ns_next);
+ WARN_ON(ndd->ns_current == ndd->ns_next);
+
+ return 0;
+}
+
+static unsigned long nd_label_offset(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_label __iomem *nd_label)
+{
+ return (unsigned long) nd_label
+ - (unsigned long) to_namespace_index(ndd, 0);
+}
+
+static int __pmem_label_update(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
+ int pos)
+{
+ u64 cookie = nd_region_interleave_set_cookie(nd_region), rawsize;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct nd_namespace_label __iomem *victim_label;
+ struct nd_namespace_label __iomem *nd_label;
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free;
+ u32 nslot, slot;
+ size_t offset;
+ int rc;
+
+ if (!preamble_next(ndd, &nsindex, &free, &nslot))
+ return -ENXIO;
+
+ /* allocate and write the label to the staging (next) index */
+ slot = nd_label_alloc_slot(ndd);
+ if (slot == UINT_MAX)
+ return -ENXIO;
+ dev_dbg(ndd->dev, "%s: allocated: %d\n", __func__, slot);
+
+ nd_label = nd_label_base(ndd) + slot;
+ memset_io(nd_label, 0, sizeof(struct nd_namespace_label));
+ memcpy_toio(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
+ if (nspm->alt_name)
+ memcpy_toio(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN);
+ writel(NSLABEL_FLAG_UPDATING, &nd_label->flags);
+ writew(nd_region->ndr_mappings, &nd_label->nlabel);
+ writew(pos, &nd_label->position);
+ writeq(cookie, &nd_label->isetcookie);
+ rawsize = div_u64(resource_size(&nspm->nsio.res),
+ nd_region->ndr_mappings);
+ writeq(rawsize, &nd_label->rawsize);
+ writeq(nd_mapping->start, &nd_label->dpa);
+ writel(slot, &nd_label->slot);
+
+ /* update label */
+ offset = nd_label_offset(ndd, nd_label);
+ rc = nd_dimm_set_config_data(ndd, offset, nd_label,
+ sizeof(struct nd_namespace_label));
+ if (rc < 0)
+ return rc;
+
+ /* Garbage collect the previous label */
+ victim_label = nd_get_label(nd_mapping->labels, 0);
+ if (victim_label) {
+ slot = to_slot(ndd, victim_label);
+ nd_label_free_slot(ndd, slot);
+ dev_dbg(ndd->dev, "%s: free: %d\n", __func__, slot);
+ }
+
+ /* update index */
+ rc = nd_label_write_index(ndd, ndd->ns_next,
+ nd_inc_seq(readl(&nsindex->seq)), 0);
+ if (rc < 0)
+ return rc;
+
+ nd_set_label(nd_mapping->labels, nd_label, 0);
+
+ return 0;
+}
+
+static int init_labels(struct nd_mapping *nd_mapping)
+{
+ int i;
+ struct nd_namespace_index __iomem *nsindex;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+
+ if (!nd_mapping->labels)
+ nd_mapping->labels = kcalloc(2, sizeof(void *), GFP_KERNEL);
+
+ if (!nd_mapping->labels)
+ return -ENOMEM;
+
+ if (ndd->ns_current == -1 || ndd->ns_next == -1)
+ /* pass */;
+ else
+ return 0;
+
+ nsindex = to_namespace_index(ndd, 0);
+ memset_io(nsindex, 0, ndd->nsarea.config_size);
+ for (i = 0; i < 2; i++) {
+ int rc = nd_label_write_index(ndd, i, i*2, ND_NSINDEX_INIT);
+
+ if (rc)
+ return rc;
+ }
+ ndd->ns_next = 1;
+ ndd->ns_current = 0;
+
+ return 0;
+}
+
+static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
+{
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct nd_namespace_label __iomem *nd_label;
+ struct nd_namespace_index __iomem *nsindex;
+ u8 label_uuid[NSLABEL_UUID_LEN];
+ int l, num_freed = 0;
+ unsigned long *free;
+ u32 nslot, slot;
+
+ if (!uuid)
+ return 0;
+
+ /* no index || no labels == nothing to delete */
+ if (!preamble_next(ndd, &nsindex, &free, &nslot)
+ || !nd_mapping->labels)
+ return 0;
+
+ for_each_label(l, nd_label, nd_mapping->labels) {
+ int j;
+
+ memcpy_fromio(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
+ if (memcmp(label_uuid, uuid, NSLABEL_UUID_LEN) != 0)
+ continue;
+ slot = to_slot(ndd, nd_label);
+ nd_label_free_slot(ndd, slot);
+ dev_dbg(ndd->dev, "%s: free: %d\n", __func__, slot);
+ for (j = l; nd_get_label(nd_mapping->labels, j + 1); j++) {
+ struct nd_namespace_label __iomem *next_label;
+
+ next_label = nd_get_label(nd_mapping->labels, j + 1);
+ nd_set_label(nd_mapping->labels, next_label, j);
+ }
+ nd_set_label(nd_mapping->labels, NULL, j);
+ num_freed++;
+ }
+
+ if (num_freed > l) {
+ /*
+ * num_freed will only ever be > l when we delete the last
+ * label
+ */
+ kfree(nd_mapping->labels);
+ nd_mapping->labels = NULL;
+ dev_dbg(ndd->dev, "%s: no more labels\n", __func__);
+ }
+
+ return nd_label_write_index(ndd, ndd->ns_next,
+ nd_inc_seq(readl(&nsindex->seq)), 0);
+}
+
+int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ struct nd_namespace_pmem *nspm, resource_size_t size)
+{
+ int i;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ int rc;
+
+ if (size == 0) {
+ rc = del_labels(nd_mapping, nspm->uuid);
+ if (rc)
+ return rc;
+ continue;
+ }
+
+ rc = init_labels(nd_mapping);
+ if (rc)
+ return rc;
+
+ rc = __pmem_label_update(nd_region, nd_mapping, nspm, i);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/drivers/block/nd/label.h b/drivers/block/nd/label.h
index 4436624f4146..e17958941e34 100644
--- a/drivers/block/nd/label.h
+++ b/drivers/block/nd/label.h
@@ -34,6 +34,7 @@ enum {
BTTINFO_MAJOR_VERSION = 1,
ND_LABEL_MIN_SIZE = 512 * 129, /* see sizeof_namespace_index() */
ND_LABEL_ID_SIZE = 50,
+ ND_NSINDEX_INIT = 0x1,
};
static const char NSINDEX_SIGNATURE[] = "NAMESPACE_INDEX\0";
@@ -129,4 +130,9 @@ size_t sizeof_namespace_index(struct nd_dimm_drvdata *ndd);
int nd_label_active_count(struct nd_dimm_drvdata *ndd);
struct nd_namespace_label __iomem *nd_label_active(
struct nd_dimm_drvdata *ndd, int n);
+u32 nd_label_nfree(struct nd_dimm_drvdata *ndd);
+struct nd_region;
+struct nd_namespace_pmem;
+int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ struct nd_namespace_pmem *nspm, resource_size_t size);
#endif /* __LABEL_H__ */
diff --git a/drivers/block/nd/namespace_devs.c b/drivers/block/nd/namespace_devs.c
index de36f3891284..d63a71a5e711 100644
--- a/drivers/block/nd/namespace_devs.c
+++ b/drivers/block/nd/namespace_devs.c
@@ -151,20 +151,52 @@ static resource_size_t nd_namespace_blk_size(struct nd_namespace_blk *nsblk)
return size;
}
+static int nd_namespace_label_update(struct nd_region *nd_region, struct device *dev)
+{
+ dev_WARN_ONCE(dev, dev->driver,
+ "namespace must be idle during label update\n");
+ if (dev->driver)
+ return 0;
+
+ /*
+ * Only allow label writes that will result in a valid namespace
+ * or deletion of an existing namespace.
+ */
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+ struct resource *res = &nspm->nsio.res;
+ resource_size_t size = resource_size(res);
+
+ if (size == 0 && nspm->uuid)
+ /* delete allocation */;
+ else if (!nspm->uuid)
+ return 0;
+
+ return nd_pmem_namespace_label_update(nd_region, nspm, size);
+ } else if (is_namespace_blk(dev)) {
+ /* TODO: implement blk labels */
+ return 0;
+ } else
+ return -ENXIO;
+}
+
static ssize_t alt_name_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
+ struct nd_region *nd_region = to_nd_region(dev->parent);
ssize_t rc;
device_lock(dev);
nd_bus_lock(dev);
wait_nd_bus_probe_idle(dev);
rc = __alt_name_store(dev, buf, len);
+ if (rc >= 0)
+ rc = nd_namespace_label_update(nd_region, dev);
dev_dbg(dev, "%s: %s (%zd)\n", __func__, rc < 0 ? "fail" : "success", rc);
nd_bus_unlock(dev);
device_unlock(dev);
- return rc;
+ return rc < 0 ? rc : len;
}
static ssize_t alt_name_show(struct device *dev,
@@ -707,6 +739,7 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
static ssize_t size_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
+ struct nd_region *nd_region = to_nd_region(dev->parent);
unsigned long long val;
u8 **uuid = NULL;
int rc;
@@ -719,6 +752,8 @@ static ssize_t size_store(struct device *dev,
nd_bus_lock(dev);
wait_nd_bus_probe_idle(dev);
rc = __size_store(dev, val);
+ if (rc >= 0)
+ rc = nd_namespace_label_update(nd_region, dev);
if (is_namespace_pmem(dev)) {
struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
@@ -742,7 +777,7 @@ static ssize_t size_store(struct device *dev,
nd_bus_unlock(dev);
device_unlock(dev);
- return rc ? rc : len;
+ return rc < 0 ? rc : len;
}
static ssize_t size_show(struct device *dev,
@@ -802,17 +837,34 @@ static int namespace_update_uuid(struct nd_region *nd_region,
u32 flags = is_namespace_blk(dev) ? NSLABEL_FLAG_LOCAL : 0;
struct nd_label_id old_label_id;
struct nd_label_id new_label_id;
- int i, rc;
+ int i;
- rc = nd_is_uuid_unique(dev, new_uuid) ? 0 : -EINVAL;
- if (rc) {
- kfree(new_uuid);
- return rc;
- }
+ if (!nd_is_uuid_unique(dev, new_uuid))
+ return -EINVAL;
if (*old_uuid == NULL)
goto out;
+ /*
+ * If we've already written a label with this uuid, then it's
+ * too late to rename because we can't reliably update the uuid
+ * without losing the old namespace. Userspace must delete this
+ * namespace to abandon the old uuid.
+ */
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+
+ /*
+ * This check by itself is sufficient because old_uuid
+ * would be NULL above if this uuid did not exist in the
+ * currently written set.
+ *
+ * FIXME: can we delete uuid with zero dpa allocated?
+ */
+ if (nd_mapping->labels)
+ return -EBUSY;
+ }
+
nd_label_gen_id(&old_label_id, *old_uuid, flags);
nd_label_gen_id(&new_label_id, new_uuid, flags);
for (i = 0; i < nd_region->ndr_mappings; i++) {
@@ -856,12 +908,16 @@ static ssize_t uuid_store(struct device *dev,
rc = nd_uuid_store(dev, &uuid, buf, len);
if (rc >= 0)
rc = namespace_update_uuid(nd_region, dev, uuid, ns_uuid);
+ if (rc >= 0)
+ rc = nd_namespace_label_update(nd_region, dev);
+ else
+ kfree(uuid);
dev_dbg(dev, "%s: result: %zd wrote: %s%s", __func__,
rc, buf, buf[len - 1] == '\n' ? "" : "\n");
nd_bus_unlock(dev);
device_unlock(dev);
- return rc ? rc : len;
+ return rc < 0 ? rc : len;
}
static DEVICE_ATTR_RW(uuid);
@@ -905,6 +961,7 @@ static ssize_t sector_size_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+ struct nd_region *nd_region = to_nd_region(dev->parent);
ssize_t rc;
if (!is_namespace_blk(dev))
@@ -914,8 +971,11 @@ static ssize_t sector_size_store(struct device *dev,
nd_bus_lock(dev);
rc = nd_sector_size_store(dev, buf, &nsblk->lbasize,
ns_lbasize_supported);
- dev_dbg(dev, "%s: result: %zd wrote: %s%s", __func__,
- rc, buf, buf[len - 1] == '\n' ? "" : "\n");
+ if (rc >= 0)
+ rc = nd_namespace_label_update(nd_region, dev);
+ dev_dbg(dev, "%s: result: %zd %s: %s%s", __func__,
+ rc, rc < 0 ? "tried" : "wrote", buf,
+ buf[len - 1] == '\n' ? "" : "\n");
nd_bus_unlock(dev);
device_unlock(dev);
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index 5adb55e76b33..91f61e952003 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -115,6 +115,7 @@ static inline unsigned nd_inc_seq(unsigned seq)
return next[seq & 3];
}
+
enum nd_async_mode {
ND_SYNC,
ND_ASYNC,
@@ -141,6 +142,8 @@ void nd_dimm_set_dsm_mask(struct nd_dimm *nd_dimm, unsigned long dsm_mask);
int nd_dimm_init_nsarea(struct nd_dimm_drvdata *ndd);
int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd);
int nd_dimm_firmware_status(struct device *dev);
+int nd_dimm_set_config_data(struct nd_dimm_drvdata *ndd, size_t offset,
+ void *buf, size_t len);
struct nd_region *to_nd_region(struct device *dev);
int nd_region_to_namespace_type(struct nd_region *nd_region);
int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
After 'uuid', 'size', 'sector_size', and optionally 'alt_name' have been
set to valid values, the labels on the dimm can be updated. The
difference from the pmem case is that blk namespaces are limited to a
single dimm and can cover discontiguous ranges in dpa space.
Also, after allocating label slots, it is useful for userspace to know
how many slots are left. Export this information in sysfs.
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/bus.c | 4
drivers/block/nd/dimm_devs.c | 25 +++
drivers/block/nd/label.c | 297 +++++++++++++++++++++++++++++++++++--
drivers/block/nd/label.h | 5 +
drivers/block/nd/namespace_devs.c | 57 +++++++
drivers/block/nd/nd-private.h | 1
6 files changed, 367 insertions(+), 22 deletions(-)
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index 8e70098b6cb0..cb619d70166d 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -165,6 +165,10 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
{
struct device *dev = d;
+ /* flush bus operations before delete */
+ nd_bus_lock(dev);
+ nd_bus_unlock(dev);
+
device_unregister(dev);
put_device(dev);
}
diff --git a/drivers/block/nd/dimm_devs.c b/drivers/block/nd/dimm_devs.c
index a1685c01a2bb..eead15c98196 100644
--- a/drivers/block/nd/dimm_devs.c
+++ b/drivers/block/nd/dimm_devs.c
@@ -19,6 +19,7 @@
#include <linux/fs.h>
#include <linux/mm.h>
#include "nd-private.h"
+#include "label.h"
#include "nfit.h"
#include "nd.h"
@@ -364,6 +365,29 @@ static ssize_t state_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(state);
+static ssize_t available_slots_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_dimm_drvdata *ndd = dev_get_drvdata(dev);
+ ssize_t rc;
+ u32 nfree;
+
+ if (!ndd)
+ return -ENXIO;
+
+ nd_bus_lock(dev);
+ nfree = nd_label_nfree(ndd);
+ if (nfree - 1 > nfree) {
+ dev_WARN_ONCE(dev, 1, "we ate our last label?\n");
+ nfree = 0;
+ } else
+ nfree--;
+ rc = sprintf(buf, "%d\n", nfree);
+ nd_bus_unlock(dev);
+ return rc;
+}
+static DEVICE_ATTR_RO(available_slots);
+
static struct attribute *nd_dimm_attributes[] = {
&dev_attr_handle.attr,
&dev_attr_phys_id.attr,
@@ -374,6 +398,7 @@ static struct attribute *nd_dimm_attributes[] = {
&dev_attr_state.attr,
&dev_attr_revision.attr,
&dev_attr_commands.attr,
+ &dev_attr_available_slots.attr,
NULL,
};
diff --git a/drivers/block/nd/label.c b/drivers/block/nd/label.c
index 78898b642191..069c26d50ed1 100644
--- a/drivers/block/nd/label.c
+++ b/drivers/block/nd/label.c
@@ -58,7 +58,7 @@ size_t sizeof_namespace_index(struct nd_dimm_drvdata *ndd)
return ndd->nsindex_size;
}
-static int nd_dimm_num_label_slots(struct nd_dimm_drvdata *ndd)
+int nd_dimm_num_label_slots(struct nd_dimm_drvdata *ndd)
{
return ndd->nsarea.config_size / 129;
}
@@ -416,7 +416,7 @@ u32 nd_label_nfree(struct nd_dimm_drvdata *ndd)
WARN_ON(!is_nd_bus_locked(ndd->dev));
if (!preamble_next(ndd, &nsindex, &free, &nslot))
- return 0;
+ return nd_dimm_num_label_slots(ndd);
return bitmap_weight(free, nslot);
}
@@ -553,22 +553,270 @@ static int __pmem_label_update(struct nd_region *nd_region,
return 0;
}
-static int init_labels(struct nd_mapping *nd_mapping)
+static void del_label(struct nd_mapping *nd_mapping, int l)
+{
+ struct nd_namespace_label __iomem *next_label, __iomem *nd_label;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ unsigned int slot;
+ int j;
+
+ nd_label = nd_get_label(nd_mapping->labels, l);
+ slot = to_slot(ndd, nd_label);
+ dev_vdbg(ndd->dev, "%s: clear: %d\n", __func__, slot);
+
+ for (j = l; (next_label = nd_get_label(nd_mapping->labels, j + 1)); j++)
+ nd_set_label(nd_mapping->labels, next_label, j);
+ nd_set_label(nd_mapping->labels, NULL, j);
+}
+
+static bool is_old_resource(struct resource *res, struct resource **list, int n)
{
int i;
+
+ if (res->flags & DPA_RESOURCE_ADJUSTED)
+ return false;
+ for (i = 0; i < n; i++)
+ if (res == list[i])
+ return true;
+ return false;
+}
+
+static struct resource *to_resource(struct nd_dimm_drvdata *ndd,
+ struct nd_namespace_label __iomem *nd_label)
+{
+ struct resource *res;
+
+ for_each_dpa_resource(ndd, res) {
+ if (res->start != readq(&nd_label->dpa))
+ continue;
+ if (resource_size(res) != readq(&nd_label->rawsize))
+ continue;
+ return res;
+ }
+
+ return NULL;
+}
+
+/*
+ * 1/ Account all the labels that can be freed after this update
+ * 2/ Allocate and write the label to the staging (next) index
+ * 3/ Record the resources in the namespace device
+ */
+static int __blk_label_update(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, struct nd_namespace_blk *nsblk,
+ int num_labels)
+{
+ int i, l, alloc, victims, nfree, old_num_resources, nlabel, rc = -ENXIO;
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct nd_namespace_label __iomem *nd_label;
+ struct nd_namespace_index __iomem *nsindex;
+ unsigned long *free, *victim_map = NULL;
+ struct resource *res, **old_res_list;
+ struct nd_label_id label_id;
+ u8 uuid[NSLABEL_UUID_LEN];
+ u32 nslot, slot;
+
+ if (!preamble_next(ndd, &nsindex, &free, &nslot))
+ return -ENXIO;
+
+ old_res_list = nsblk->res;
+ nfree = nd_label_nfree(ndd);
+ old_num_resources = nsblk->num_resources;
+ nd_label_gen_id(&label_id, nsblk->uuid, NSLABEL_FLAG_LOCAL);
+
+ /*
+ * We need to loop over the old resources a few times, which seems a
+ * bit inefficient, but we need to know that we have the label
+ * space before we start mutating the tracking structures.
+ * Otherwise the recovery method of last resort for userspace is
+ * disable and re-enable the parent region.
+ */
+ alloc = 0;
+ for_each_dpa_resource(ndd, res) {
+ if (strcmp(res->name, label_id.id) != 0)
+ continue;
+ if (!is_old_resource(res, old_res_list, old_num_resources))
+ alloc++;
+ }
+
+ victims = 0;
+ if (old_num_resources) {
+ /* convert old local-label-map to dimm-slot victim-map */
+ victim_map = kcalloc(BITS_TO_LONGS(nslot), sizeof(long),
+ GFP_KERNEL);
+ if (!victim_map)
+ return -ENOMEM;
+
+ /* mark unused labels for garbage collection */
+ for_each_clear_bit_le(slot, free, nslot) {
+ nd_label = nd_label_base(ndd) + slot;
+ memcpy_fromio(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
+ if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
+ continue;
+ res = to_resource(ndd, nd_label);
+ if (res && is_old_resource(res, old_res_list,
+ old_num_resources))
+ continue;
+ slot = to_slot(ndd, nd_label);
+ set_bit(slot, victim_map);
+ victims++;
+ }
+ }
+
+ /* don't allow updates that consume the last label */
+ if (nfree - alloc < 0 || nfree - alloc + victims < 1) {
+ dev_info(&nsblk->dev, "insufficient label space\n");
+ kfree(victim_map);
+ return -ENOSPC;
+ }
+ /* from here on we need to abort on error */
+
+
+ /* assign all resources to the namespace before writing the labels */
+ nsblk->res = NULL;
+ nsblk->num_resources = 0;
+ for_each_dpa_resource(ndd, res) {
+ if (strcmp(res->name, label_id.id) != 0)
+ continue;
+ if (!nsblk_add_resource(nd_region, ndd, nsblk, res->start)) {
+ rc = -ENOMEM;
+ goto abort;
+ }
+ }
+
+ for (i = 0; i < nsblk->num_resources; i++) {
+ size_t offset;
+
+ res = nsblk->res[i];
+ if (is_old_resource(res, old_res_list, old_num_resources))
+ continue; /* carry-over */
+ slot = nd_label_alloc_slot(ndd);
+ if (slot == UINT_MAX)
+ goto abort;
+ dev_dbg(ndd->dev, "%s: allocated: %d\n", __func__, slot);
+
+ nd_label = nd_label_base(ndd) + slot;
+ memset_io(nd_label, 0, sizeof(struct nd_namespace_label));
+ memcpy_toio(nd_label->uuid, nsblk->uuid, NSLABEL_UUID_LEN);
+ if (nsblk->alt_name)
+ memcpy_toio(nd_label->name, nsblk->alt_name,
+ NSLABEL_NAME_LEN);
+ writel(NSLABEL_FLAG_LOCAL, &nd_label->flags);
+ writew(0, &nd_label->nlabel); /* N/A */
+ writew(0, &nd_label->position); /* N/A */
+ writeq(0, &nd_label->isetcookie); /* N/A */
+ writeq(res->start, &nd_label->dpa);
+ writeq(resource_size(res), &nd_label->rawsize);
+ writeq(nsblk->lbasize, &nd_label->lbasize);
+ writel(slot, &nd_label->slot);
+
+ /* update label */
+ offset = nd_label_offset(ndd, nd_label);
+ rc = nd_dimm_set_config_data(ndd, offset, nd_label,
+ sizeof(struct nd_namespace_label));
+ if (rc < 0)
+ goto abort;
+ }
+
+ /* free up now unused slots in the new index */
+ for_each_set_bit(slot, victim_map, victim_map ? nslot : 0) {
+ dev_dbg(ndd->dev, "%s: free: %d\n", __func__, slot);
+ nd_label_free_slot(ndd, slot);
+ }
+
+ /* update index */
+ rc = nd_label_write_index(ndd, ndd->ns_next,
+ nd_inc_seq(readl(&nsindex->seq)), 0);
+ if (rc)
+ goto abort;
+
+ /*
+ * Now that the on-dimm labels are up to date, fix up the tracking
+ * entries in nd_mapping->labels
+ */
+ nlabel = 0;
+ for_each_label(l, nd_label, nd_mapping->labels) {
+ nlabel++;
+ memcpy_fromio(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
+ if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
+ continue;
+ nlabel--;
+ del_label(nd_mapping, l);
+ l--; /* retry with the new label at this index */
+ }
+ if (nlabel + nsblk->num_resources > num_labels) {
+ /*
+ * Bug, we can't end up with more resources than
+ * available labels
+ */
+ WARN_ON_ONCE(1);
+ rc = -ENXIO;
+ goto out;
+ }
+
+ for_each_clear_bit_le(slot, free, nslot) {
+ nd_label = nd_label_base(ndd) + slot;
+ memcpy_fromio(uuid, nd_label->uuid, NSLABEL_UUID_LEN);
+ if (memcmp(uuid, nsblk->uuid, NSLABEL_UUID_LEN) != 0)
+ continue;
+ res = to_resource(ndd, nd_label);
+ res->flags &= ~DPA_RESOURCE_ADJUSTED;
+ dev_vdbg(&nsblk->dev, "assign label[%d] slot: %d\n", l, slot);
+ nd_set_label(nd_mapping->labels, nd_label, l++);
+ }
+ nd_set_label(nd_mapping->labels, NULL, l);
+
+ out:
+ kfree(old_res_list);
+ kfree(victim_map);
+ return rc;
+
+ abort:
+ /*
+ * 1/ repair the allocated label bitmap in the index
+ * 2/ restore the resource list
+ */
+ nd_label_copy(ndd, nsindex, to_current_namespace_index(ndd));
+ kfree(nsblk->res);
+ nsblk->res = old_res_list;
+ nsblk->num_resources = old_num_resources;
+ old_res_list = NULL;
+ goto out;
+}
+
+static int init_labels(struct nd_mapping *nd_mapping, int num_labels)
+{
+ int i, l, old_num_labels = 0;
struct nd_namespace_index __iomem *nsindex;
+ struct nd_namespace_label __iomem *nd_label;
struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ size_t size = (num_labels + 1) * sizeof(struct nd_namespace_label *);
- if (!nd_mapping->labels)
- nd_mapping->labels = kcalloc(2, sizeof(void *), GFP_KERNEL);
+ for_each_label(l, nd_label, nd_mapping->labels)
+ old_num_labels++;
+
+ /*
+ * We need to preserve all the old labels for the mapping so
+ * they can be garbage collected after writing the new labels.
+ */
+ if (num_labels > old_num_labels) {
+ struct nd_namespace_label **labels;
+ labels = krealloc(nd_mapping->labels, size, GFP_KERNEL);
+ if (!labels)
+ return -ENOMEM;
+ nd_mapping->labels = labels;
+ }
if (!nd_mapping->labels)
return -ENOMEM;
+ for (i = old_num_labels; i <= num_labels; i++)
+ nd_set_label(nd_mapping->labels, NULL, i);
+
if (ndd->ns_current == -1 || ndd->ns_next == -1)
/* pass */;
else
- return 0;
+ return max(num_labels, old_num_labels);
nsindex = to_namespace_index(ndd, 0);
memset_io(nsindex, 0, ndd->nsarea.config_size);
@@ -581,7 +829,7 @@ static int init_labels(struct nd_mapping *nd_mapping)
ndd->ns_next = 1;
ndd->ns_current = 0;
- return 0;
+ return max(num_labels, old_num_labels);
}
static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
@@ -603,22 +851,15 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
return 0;
for_each_label(l, nd_label, nd_mapping->labels) {
- int j;
-
memcpy_fromio(label_uuid, nd_label->uuid, NSLABEL_UUID_LEN);
if (memcmp(label_uuid, uuid, NSLABEL_UUID_LEN) != 0)
continue;
slot = to_slot(ndd, nd_label);
nd_label_free_slot(ndd, slot);
dev_dbg(ndd->dev, "%s: free: %d\n", __func__, slot);
- for (j = l; nd_get_label(nd_mapping->labels, j + 1); j++) {
- struct nd_namespace_label __iomem *next_label;
-
- next_label = nd_get_label(nd_mapping->labels, j + 1);
- nd_set_label(nd_mapping->labels, next_label, j);
- }
- nd_set_label(nd_mapping->labels, NULL, j);
+ del_label(nd_mapping, l);
num_freed++;
+ l--; /* retry with new label at this index */
}
if (num_freed > l) {
@@ -651,8 +892,8 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
continue;
}
- rc = init_labels(nd_mapping);
- if (rc)
+ rc = init_labels(nd_mapping, 1);
+ if (rc < 0)
return rc;
rc = __pmem_label_update(nd_region, nd_mapping, nspm, i);
@@ -662,3 +903,23 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
return 0;
}
+
+int nd_blk_namespace_label_update(struct nd_region *nd_region,
+ struct nd_namespace_blk *nsblk, resource_size_t size)
+{
+ struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+ struct resource *res;
+ int count = 0;
+
+ if (size == 0)
+ return del_labels(nd_mapping, nsblk->uuid);
+
+ for_each_dpa_resource(to_ndd(nd_mapping), res)
+ count++;
+
+ count = init_labels(nd_mapping, count);
+ if (count < 0)
+ return count;
+
+ return __blk_label_update(nd_region, nd_mapping, nsblk, count);
+}
diff --git a/drivers/block/nd/label.h b/drivers/block/nd/label.h
index e17958941e34..a26cebc9f389 100644
--- a/drivers/block/nd/label.h
+++ b/drivers/block/nd/label.h
@@ -130,9 +130,14 @@ size_t sizeof_namespace_index(struct nd_dimm_drvdata *ndd);
int nd_label_active_count(struct nd_dimm_drvdata *ndd);
struct nd_namespace_label __iomem *nd_label_active(
struct nd_dimm_drvdata *ndd, int n);
+u32 nd_label_alloc_slot(struct nd_dimm_drvdata *ndd);
+bool nd_label_free_slot(struct nd_dimm_drvdata *ndd, u32 slot);
u32 nd_label_nfree(struct nd_dimm_drvdata *ndd);
struct nd_region;
struct nd_namespace_pmem;
+struct nd_namespace_blk;
int nd_pmem_namespace_label_update(struct nd_region *nd_region,
struct nd_namespace_pmem *nspm, resource_size_t size);
+int nd_blk_namespace_label_update(struct nd_region *nd_region,
+ struct nd_namespace_blk *nsblk, resource_size_t size);
#endif /* __LABEL_H__ */
diff --git a/drivers/block/nd/namespace_devs.c b/drivers/block/nd/namespace_devs.c
index d63a71a5e711..8414ca21917d 100644
--- a/drivers/block/nd/namespace_devs.c
+++ b/drivers/block/nd/namespace_devs.c
@@ -164,8 +164,7 @@ static int nd_namespace_label_update(struct nd_region *nd_region, struct device
*/
if (is_namespace_pmem(dev)) {
struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
- struct resource *res = &nspm->nsio.res;
- resource_size_t size = resource_size(res);
+ resource_size_t size = resource_size(&nspm->nsio.res);
if (size == 0 && nspm->uuid)
/* delete allocation */;
@@ -174,8 +173,15 @@ static int nd_namespace_label_update(struct nd_region *nd_region, struct device
return nd_pmem_namespace_label_update(nd_region, nspm, size);
} else if (is_namespace_blk(dev)) {
- /* TODO: implement blk labels */
- return 0;
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+ resource_size_t size = nd_namespace_blk_size(nsblk);
+
+ if (size == 0 && nsblk->uuid)
+ /* delete allocation */;
+ else if (!nsblk->uuid || !nsblk->lbasize)
+ return 0;
+
+ return nd_blk_namespace_label_update(nd_region, nsblk, size);
} else
return -ENXIO;
}
@@ -983,6 +989,48 @@ static ssize_t sector_size_store(struct device *dev,
}
static DEVICE_ATTR_RW(sector_size);
+static ssize_t dpa_extents_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_region *nd_region = to_nd_region(dev->parent);
+ struct nd_label_id label_id;
+ int count = 0, i;
+ u8 *uuid = NULL;
+ u32 flags = 0;
+
+ nd_bus_lock(dev);
+ if (is_namespace_pmem(dev)) {
+ struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);
+
+ uuid = nspm->uuid;
+ flags = 0;
+ } else if (is_namespace_blk(dev)) {
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+
+ uuid = nsblk->uuid;
+ flags = NSLABEL_FLAG_LOCAL;
+ }
+
+ if (!uuid)
+ goto out;
+
+ nd_label_gen_id(&label_id, uuid, flags);
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct resource *res;
+
+ for_each_dpa_resource(ndd, res)
+ if (strcmp(res->name, label_id.id) == 0)
+ count++;
+ }
+ out:
+ nd_bus_unlock(dev);
+
+ return sprintf(buf, "%d\n", count);
+}
+static DEVICE_ATTR_RO(dpa_extents);
+
static struct attribute *nd_namespace_attributes[] = {
&dev_attr_type.attr,
&dev_attr_size.attr,
@@ -990,6 +1038,7 @@ static struct attribute *nd_namespace_attributes[] = {
&dev_attr_resource.attr,
&dev_attr_alt_name.attr,
&dev_attr_sector_size.attr,
+ &dev_attr_dpa_extents.attr,
NULL,
};
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 1a14ebb40a1a..f88140dceea7 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -147,4 +147,5 @@ struct nd_mapping;
struct resource *nsblk_add_resource(struct nd_region *nd_region,
struct nd_dimm_drvdata *ndd, struct nd_namespace_blk *nsblk,
resource_size_t start);
+int nd_dimm_num_label_slots(struct nd_dimm_drvdata *ndd);
#endif /* __ND_PRIVATE_H__ */
Block devices from an nd bus, in addition to accepting "struct bio"-based
requests, can also perform byte-aligned accesses. By default only the
bio/block interface is used. However, if another driver can make
effective use of byte-aligned access it can claim (and disable) the
block interface and use the byte-aligned "nd_io" interface instead.
The BTT driver is the intended first consumer of this mechanism,
layering atomic sector-update guarantees on top of nd_io-capable nd-bus
block devices.
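Concretely, a provider publishes the byte-aligned interface alongside its gendisk, mirroring what the pmem driver does elsewhere in this series. The sketch below is illustrative only: the "foo" names are hypothetical, and error handling is elided.

```c
/* Hypothetical "foo" namespace driver exposing an nd_io interface.
 * Sketch only: follows the nd_init_ndio()/nd_register_ndio() usage
 * established by pmem.c in this patch; error handling elided. */
static int foo_rw_bytes(struct nd_io *ndio, void *buf, size_t offset,
		size_t n, unsigned long flags)
{
	struct foo_device *foo = container_of(ndio, typeof(*foo), ndio);

	if (nd_data_dir(flags) == READ)
		memcpy(buf, foo->virt_addr + offset, n);
	else
		memcpy(foo->virt_addr + offset, buf, n);
	return 0;
}

static int foo_probe(struct device *dev)
{
	struct foo_device *foo = foo_alloc(dev);

	nd_bus_lock(dev);
	add_disk(foo->disk);
	/* align == 0: a single request may span any boundary */
	nd_init_ndio(&foo->ndio, foo_rw_bytes, dev, foo->disk, 0);
	nd_register_ndio(&foo->ndio);
	nd_bus_unlock(dev);
	return 0;
}
```

Registration is done under nd_bus_lock() so that any claimer (e.g. BTT autodetect) sees a consistent view of the disk and its ndio.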
Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Kconfig | 3
drivers/block/nd/Makefile | 2
drivers/block/nd/btt.h | 45 ++++
drivers/block/nd/btt_devs.c | 442 +++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/bus.c | 128 ++++++++++++
drivers/block/nd/core.c | 80 +++++++
drivers/block/nd/nd-private.h | 28 +++
drivers/block/nd/nd.h | 94 +++++++++
drivers/block/nd/pmem.c | 30 +++
include/uapi/linux/ndctl.h | 2
10 files changed, 850 insertions(+), 4 deletions(-)
create mode 100644 drivers/block/nd/btt.h
create mode 100644 drivers/block/nd/btt_devs.c
diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
index 38eae5f0ae4b..faa756841773 100644
--- a/drivers/block/nd/Kconfig
+++ b/drivers/block/nd/Kconfig
@@ -89,4 +89,7 @@ config BLK_DEV_PMEM
Say Y if you want to use a NVDIMM described by NFIT
+config ND_BTT_DEVS
+ def_bool y
+
endif
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 93856f1c9dbd..3e4878e0fe1d 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -29,4 +29,6 @@ nd-y += region.o
nd-y += namespace_devs.o
nd-y += label.o
+nd-$(CONFIG_ND_BTT_DEVS) += btt_devs.o
+
nd_pmem-y := pmem.o
diff --git a/drivers/block/nd/btt.h b/drivers/block/nd/btt.h
new file mode 100644
index 000000000000..e8f6d8e0ddd3
--- /dev/null
+++ b/drivers/block/nd/btt.h
@@ -0,0 +1,45 @@
+/*
+ * Block Translation Table library
+ * Copyright (c) 2014-2015, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#ifndef _LINUX_BTT_H
+#define _LINUX_BTT_H
+
+#include <linux/types.h>
+
+#define BTT_SIG_LEN 16
+#define BTT_SIG "BTT_ARENA_INFO\0"
+
+struct btt_sb {
+ u8 signature[BTT_SIG_LEN];
+ u8 uuid[16];
+ u8 parent_uuid[16];
+ __le32 flags;
+ __le16 version_major;
+ __le16 version_minor;
+ __le32 external_lbasize;
+ __le32 external_nlba;
+ __le32 internal_lbasize;
+ __le32 internal_nlba;
+ __le32 nfree;
+ __le32 infosize;
+ __le64 nextoff;
+ __le64 dataoff;
+ __le64 mapoff;
+ __le64 logoff;
+ __le64 info2off;
+ u8 padding[3968];
+ __le64 checksum;
+};
+
+#endif
diff --git a/drivers/block/nd/btt_devs.c b/drivers/block/nd/btt_devs.c
new file mode 100644
index 000000000000..746d582910b6
--- /dev/null
+++ b/drivers/block/nd/btt_devs.c
@@ -0,0 +1,442 @@
+/*
+ * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/device.h>
+#include <linux/genhd.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include "nd-private.h"
+#include "btt.h"
+#include "nd.h"
+
+static DEFINE_IDA(btt_ida);
+
+static void nd_btt_release(struct device *dev)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+
+ dev_dbg(dev, "%s\n", __func__);
+ WARN_ON(nd_btt->backing_dev);
+ ndio_del_claim(nd_btt->ndio_claim);
+ ida_simple_remove(&btt_ida, nd_btt->id);
+ kfree(nd_btt->uuid);
+ kfree(nd_btt);
+}
+
+static struct device_type nd_btt_device_type = {
+ .name = "nd_btt",
+ .release = nd_btt_release,
+};
+
+bool is_nd_btt(struct device *dev)
+{
+ return dev->type == &nd_btt_device_type;
+}
+
+struct nd_btt *to_nd_btt(struct device *dev)
+{
+ struct nd_btt *nd_btt = container_of(dev, struct nd_btt, dev);
+
+ WARN_ON(!is_nd_btt(dev));
+ return nd_btt;
+}
+EXPORT_SYMBOL(to_nd_btt);
+
+static const unsigned long btt_lbasize_supported[] = { 512, 4096, 0 };
+
+static ssize_t sector_size_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+
+ return nd_sector_size_show(nd_btt->lbasize, btt_lbasize_supported, buf);
+}
+
+static ssize_t sector_size_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+ ssize_t rc;
+
+ device_lock(dev);
+ nd_bus_lock(dev);
+ rc = nd_sector_size_store(dev, buf, &nd_btt->lbasize,
+ btt_lbasize_supported);
+ dev_dbg(dev, "%s: result: %zd wrote: %s%s", __func__,
+ rc, buf, buf[len - 1] == '\n' ? "" : "\n");
+ nd_bus_unlock(dev);
+ device_unlock(dev);
+
+ return rc ? rc : len;
+}
+static DEVICE_ATTR_RW(sector_size);
+
+static ssize_t uuid_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+
+ if (nd_btt->uuid)
+ return sprintf(buf, "%pUb\n", nd_btt->uuid);
+ return sprintf(buf, "\n");
+}
+
+static ssize_t uuid_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+ ssize_t rc;
+
+ device_lock(dev);
+ rc = nd_uuid_store(dev, &nd_btt->uuid, buf, len);
+ dev_dbg(dev, "%s: result: %zd wrote: %s%s", __func__,
+ rc, buf, buf[len - 1] == '\n' ? "" : "\n");
+ device_unlock(dev);
+
+ return rc ? rc : len;
+}
+static DEVICE_ATTR_RW(uuid);
+
+static ssize_t backing_dev_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+ char name[BDEVNAME_SIZE];
+
+ if (nd_btt->backing_dev)
+ return sprintf(buf, "/dev/%s\n",
+ bdevname(nd_btt->backing_dev, name));
+ else
+ return sprintf(buf, "\n");
+}
+
+static const fmode_t nd_btt_devs_mode = FMODE_READ | FMODE_WRITE | FMODE_EXCL;
+
+static void nd_btt_ndio_notify_remove(struct nd_io_claim *ndio_claim)
+{
+ char bdev_name[BDEVNAME_SIZE];
+ struct nd_btt *nd_btt;
+
+ if (!ndio_claim || !ndio_claim->holder)
+ return;
+
+ nd_btt = to_nd_btt(ndio_claim->holder);
+ WARN_ON_ONCE(!is_nd_bus_locked(&nd_btt->dev));
+ dev_dbg(&nd_btt->dev, "%pf: %s: release /dev/%s\n",
+ __builtin_return_address(0), __func__,
+ bdevname(nd_btt->backing_dev, bdev_name));
+ blkdev_put(nd_btt->backing_dev, nd_btt_devs_mode);
+ nd_btt->backing_dev = NULL;
+
+ /*
+ * Once we've had our backing device removed we need to be fully
+ * reconfigured. The bus will have already created a new seed
+ * for this purpose, so now is a good time to clean up this
+ * stale nd_btt instance.
+ */
+ if (nd_btt->dev.driver)
+ nd_device_unregister(&nd_btt->dev, ND_ASYNC);
+ else {
+ ndio_del_claim(ndio_claim);
+ nd_btt->ndio_claim = NULL;
+ }
+}
+
+static ssize_t __backing_dev_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+ char bdev_name[BDEVNAME_SIZE];
+ struct block_device *bdev;
+ struct nd_io *ndio;
+ char *path;
+
+ if (dev->driver) {
+ dev_dbg(dev, "%s: -EBUSY\n", __func__);
+ return -EBUSY;
+ }
+
+ path = kstrndup(buf, len, GFP_KERNEL);
+ if (!path)
+ return -ENOMEM;
+
+ /* detach the backing device */
+ if (strcmp(strim(path), "") == 0) {
+ if (!nd_btt->backing_dev)
+ goto out;
+ nd_btt_ndio_notify_remove(nd_btt->ndio_claim);
+ goto out;
+ } else if (nd_btt->backing_dev) {
+ dev_dbg(dev, "backing_dev already set\n");
+ len = -EBUSY;
+ goto out;
+ }
+
+ bdev = blkdev_get_by_path(strim(path), nd_btt_devs_mode, nd_btt);
+ if (IS_ERR(bdev)) {
+ dev_dbg(dev, "open '%s' failed: %ld\n", strim(path),
+ PTR_ERR(bdev));
+ len = PTR_ERR(bdev);
+ goto out;
+ }
+
+ if (get_capacity(bdev->bd_disk) < SZ_16M / 512) {
+ blkdev_put(bdev, nd_btt_devs_mode);
+ len = -ENXIO;
+ goto out;
+ }
+
+ ndio = ndio_lookup(nd_bus, bdevname(bdev->bd_contains, bdev_name));
+ if (!ndio) {
+ dev_dbg(dev, "%s does not have an ndio interface\n",
+ strim(path));
+ blkdev_put(bdev, nd_btt_devs_mode);
+ len = -ENXIO;
+ goto out;
+ }
+
+ nd_btt->ndio_claim = ndio_add_claim(ndio, &nd_btt->dev,
+ nd_btt_ndio_notify_remove);
+ if (!nd_btt->ndio_claim) {
+ blkdev_put(bdev, nd_btt_devs_mode);
+ len = -ENOMEM;
+ goto out;
+ }
+
+ WARN_ON_ONCE(!is_nd_bus_locked(&nd_btt->dev));
+ nd_btt->backing_dev = bdev;
+
+ out:
+ kfree(path);
+ return len;
+}
+
+static ssize_t backing_dev_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ ssize_t rc;
+
+ nd_bus_lock(dev);
+ device_lock(dev);
+ rc = __backing_dev_store(dev, attr, buf, len);
+ dev_dbg(dev, "%s: result: %zd wrote: %s%s", __func__,
+ rc, buf, buf[len - 1] == '\n' ? "" : "\n");
+ device_unlock(dev);
+ nd_bus_unlock(dev);
+
+ return rc;
+}
+static DEVICE_ATTR_RW(backing_dev);
+
+static bool is_nd_btt_idle(struct device *dev)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(dev);
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+
+ if (nd_bus->nd_btt == nd_btt || dev->driver || nd_btt->backing_dev)
+ return false;
+ return true;
+}
+
+static ssize_t delete_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ /* return 1 if can be deleted */
+ return sprintf(buf, "%d\n", is_nd_btt_idle(dev));
+}
+
+static ssize_t delete_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t len)
+{
+ unsigned long val;
+
+ /* write 1 to delete */
+ if (kstrtoul(buf, 0, &val) != 0 || val != 1)
+ return -EINVAL;
+
+ /* prevent deletion while this btt is active, or is the current seed */
+ if (!is_nd_btt_idle(dev))
+ return -EBUSY;
+
+ /*
+ * userspace raced itself if device goes active here and it gets
+ * to keep the pieces
+ */
+ nd_device_unregister(dev, ND_ASYNC);
+
+ return len;
+}
+static DEVICE_ATTR_RW(delete);
+
+static struct attribute *nd_btt_attributes[] = {
+ &dev_attr_sector_size.attr,
+ &dev_attr_backing_dev.attr,
+ &dev_attr_delete.attr,
+ &dev_attr_uuid.attr,
+ NULL,
+};
+
+static struct attribute_group nd_btt_attribute_group = {
+ .attrs = nd_btt_attributes,
+};
+
+static const struct attribute_group *nd_btt_attribute_groups[] = {
+ &nd_btt_attribute_group,
+ &nd_device_attribute_group,
+ NULL,
+};
+
+static struct nd_btt *__nd_btt_create(struct nd_bus *nd_bus,
+ unsigned long lbasize, u8 *uuid)
+{
+ struct nd_btt *nd_btt = kzalloc(sizeof(*nd_btt), GFP_KERNEL);
+ struct device *dev;
+
+ if (!nd_btt)
+ return NULL;
+ nd_btt->id = ida_simple_get(&btt_ida, 0, 0, GFP_KERNEL);
+ if (nd_btt->id < 0) {
+ kfree(nd_btt);
+ return NULL;
+ }
+
+ nd_btt->lbasize = lbasize;
+ if (uuid)
+ uuid = kmemdup(uuid, 16, GFP_KERNEL);
+ nd_btt->uuid = uuid;
+ dev = &nd_btt->dev;
+ dev_set_name(dev, "btt%d", nd_btt->id);
+ dev->parent = &nd_bus->dev;
+ dev->type = &nd_btt_device_type;
+ dev->groups = nd_btt_attribute_groups;
+ return nd_btt;
+}
+
+struct nd_btt *nd_btt_create(struct nd_bus *nd_bus)
+{
+ struct nd_btt *nd_btt = __nd_btt_create(nd_bus, 0, NULL);
+
+ if (!nd_btt)
+ return NULL;
+ nd_device_register(&nd_btt->dev);
+ return nd_btt;
+}
+
+/*
+ * btt_sb_checksum: compute checksum for btt info block
+ *
+ * Returns a fletcher64 checksum of everything in the given info block
+ * except the last field (since that's where the checksum lives).
+ */
+u64 btt_sb_checksum(struct btt_sb *btt_sb)
+{
+ u64 sum, sum_save;
+
+ sum_save = btt_sb->checksum;
+ btt_sb->checksum = 0;
+ sum = nd_fletcher64(btt_sb, sizeof(*btt_sb));
+ btt_sb->checksum = sum_save;
+ return sum;
+}
+EXPORT_SYMBOL(btt_sb_checksum);
+
+static int nd_btt_autodetect(struct nd_bus *nd_bus, struct nd_io *ndio,
+ struct block_device *bdev)
+{
+ char name[BDEVNAME_SIZE];
+ struct nd_btt *nd_btt;
+ struct btt_sb *btt_sb;
+ u64 offset, checksum;
+ u32 lbasize;
+ u8 *uuid;
+ int rc;
+
+ btt_sb = kzalloc(sizeof(*btt_sb), GFP_KERNEL);
+ if (!btt_sb)
+ return -ENODEV;
+
+ offset = nd_partition_offset(bdev);
+ rc = ndio->rw_bytes(ndio, btt_sb, offset + SZ_4K, sizeof(*btt_sb), READ);
+ if (rc)
+ goto out_free_sb;
+
+ if (get_capacity(bdev->bd_disk) < SZ_16M / 512)
+ goto out_free_sb;
+
+ if (memcmp(btt_sb->signature, BTT_SIG, BTT_SIG_LEN) != 0)
+ goto out_free_sb;
+
+ checksum = le64_to_cpu(btt_sb->checksum);
+ btt_sb->checksum = 0;
+ if (checksum != btt_sb_checksum(btt_sb))
+ goto out_free_sb;
+ btt_sb->checksum = cpu_to_le64(checksum);
+
+ uuid = kmemdup(btt_sb->uuid, 16, GFP_KERNEL);
+ if (!uuid)
+ goto out_free_sb;
+
+ lbasize = le32_to_cpu(btt_sb->external_lbasize);
+ nd_btt = __nd_btt_create(nd_bus, lbasize, uuid);
+ if (!nd_btt)
+ goto out_free_uuid;
+
+ device_initialize(&nd_btt->dev);
+ nd_btt->ndio_claim = ndio_add_claim(ndio, &nd_btt->dev,
+ nd_btt_ndio_notify_remove);
+ if (!nd_btt->ndio_claim)
+ goto out_free_btt;
+
+ nd_btt->backing_dev = bdev;
+ dev_dbg(&nd_btt->dev, "%s: activate %s\n", __func__,
+ bdevname(bdev, name));
+ __nd_device_register(&nd_btt->dev);
+ kfree(btt_sb);
+ return 0;
+
+ out_free_btt:
+ kfree(nd_btt);
+ out_free_uuid:
+ kfree(uuid);
+ out_free_sb:
+ kfree(btt_sb);
+
+ return -ENODEV;
+}
+
+void nd_btt_notify_ndio(struct nd_bus *nd_bus, struct nd_io *ndio)
+{
+ struct disk_part_iter piter;
+ struct hd_struct *part;
+
+ disk_part_iter_init(&piter, ndio->disk, DISK_PITER_INCL_PART0);
+ while ((part = disk_part_iter_next(&piter))) {
+ struct block_device *bdev;
+ int rc;
+
+ bdev = bdget_disk(ndio->disk, part->partno);
+ if (!bdev)
+ continue;
+ if (blkdev_get(bdev, nd_btt_devs_mode, nd_bus) != 0)
+ continue;
+ rc = nd_btt_autodetect(nd_bus, ndio, bdev);
+ if (rc)
+ blkdev_put(bdev, nd_btt_devs_mode);
+ /* no need to scan further in the case of whole disk btt */
+ if (rc == 0 && part->partno == 0)
+ break;
+ }
+ disk_part_iter_exit(&piter);
+}
diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
index cb619d70166d..7fbac49faad2 100644
--- a/drivers/block/nd/bus.c
+++ b/drivers/block/nd/bus.c
@@ -16,6 +16,7 @@
#include <linux/module.h>
#include <linux/fcntl.h>
#include <linux/async.h>
+#include <linux/genhd.h>
#include <linux/ndctl.h>
#include <linux/sched.h>
#include <linux/slab.h>
@@ -41,6 +42,8 @@ static int to_nd_device_type(struct device *dev)
return ND_DEVICE_REGION_BLOCK;
else if (is_nd_pmem(dev->parent) || is_nd_blk(dev->parent))
return nd_region_to_namespace_type(to_nd_region(dev->parent));
+ else if (is_nd_btt(dev))
+ return ND_DEVICE_BTT;
return 0;
}
@@ -85,6 +88,21 @@ static int nd_bus_probe(struct device *dev)
dev_dbg(&nd_bus->dev, "%s.probe(%s) = %d\n", dev->driver->name,
dev_name(dev), rc);
+
+ /* check if our btt-seed has sprouted, and plant another */
+ if (rc == 0 && is_nd_btt(dev) && dev == &nd_bus->nd_btt->dev) {
+ const char *sep = "", *name = "", *status = "failed";
+
+ nd_bus->nd_btt = nd_btt_create(nd_bus);
+ if (nd_bus->nd_btt) {
+ status = "succeeded";
+ sep = ": ";
+ name = dev_name(&nd_bus->nd_btt->dev);
+ }
+ dev_dbg(&nd_bus->dev, "btt seed creation %s%s%s\n",
+ status, sep, name);
+ }
+
if (rc != 0)
module_put(provider);
return rc;
@@ -173,14 +191,19 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
put_device(dev);
}
-void nd_device_register(struct device *dev)
+void __nd_device_register(struct device *dev)
{
dev->bus = &nd_bus_type;
- device_initialize(dev);
get_device(dev);
async_schedule_domain(nd_async_device_register, dev,
&nd_async_domain);
}
+
+void nd_device_register(struct device *dev)
+{
+ device_initialize(dev);
+ __nd_device_register(dev);
+}
EXPORT_SYMBOL(nd_device_register);
void nd_device_unregister(struct device *dev, enum nd_async_mode mode)
@@ -229,6 +252,107 @@ int __nd_driver_register(struct nd_device_driver *nd_drv, struct module *owner,
}
EXPORT_SYMBOL(__nd_driver_register);
+/**
+ * nd_register_ndio() - register byte-aligned access capability for an nd-bdev
+ * @ndio: initialized ndio instance to register; ndio->disk is the child
+ *	gendisk of the ndio namespace device
+ *
+ * LOCKING: hold nd_bus_lock() over the creation of ndio->disk and the
+ * subsequent nd_region_ndio event
+ */
+int nd_register_ndio(struct nd_io *ndio)
+{
+ struct nd_bus *nd_bus;
+ struct device *dev;
+
+ if (!ndio || !ndio->dev || !ndio->disk || !list_empty(&ndio->list)
+ || !ndio->rw_bytes || !list_empty(&ndio->claims)) {
+ pr_debug("%s bad parameters from %pf\n", __func__,
+ __builtin_return_address(0));
+ return -EINVAL;
+ }
+
+ dev = ndio->dev;
+ nd_bus = walk_to_nd_bus(dev);
+ if (!nd_bus)
+ return -EINVAL;
+
+ WARN_ON_ONCE(!is_nd_bus_locked(&nd_bus->dev));
+ list_add(&ndio->list, &nd_bus->ndios);
+
+ /* TODO: generic infrastructure for 3rd party ndio claimers */
+ nd_btt_notify_ndio(nd_bus, ndio);
+
+ return 0;
+}
+EXPORT_SYMBOL(nd_register_ndio);
+
+/**
+ * __nd_unregister_ndio() - try to remove an ndio interface
+ * @ndio: interface to remove
+ */
+static int __nd_unregister_ndio(struct nd_io *ndio)
+{
+ struct nd_io_claim *ndio_claim, *_n;
+ struct nd_bus *nd_bus;
+ LIST_HEAD(claims);
+
+ nd_bus = walk_to_nd_bus(ndio->dev);
+ if (!nd_bus || list_empty(&ndio->list))
+ return -ENXIO;
+
+ spin_lock(&ndio->lock);
+ list_splice_init(&ndio->claims, &claims);
+ spin_unlock(&ndio->lock);
+
+ list_for_each_entry_safe(ndio_claim, _n, &claims, list)
+ ndio_claim->notify_remove(ndio_claim);
+
+ list_del_init(&ndio->list);
+
+ return 0;
+}
+
+int nd_unregister_ndio(struct nd_io *ndio)
+{
+ struct device *dev = ndio->dev;
+ int rc;
+
+ nd_bus_lock(dev);
+ rc = __nd_unregister_ndio(ndio);
+ nd_bus_unlock(dev);
+
+ /*
+ * Flush in case ->notify_remove() kicked off asynchronous device
+ * unregistration
+ */
+ nd_synchronize();
+
+ return rc;
+}
+EXPORT_SYMBOL(nd_unregister_ndio);
+
+static struct nd_io *__ndio_lookup(struct nd_bus *nd_bus, const char *diskname)
+{
+ struct nd_io *ndio;
+
+ list_for_each_entry(ndio, &nd_bus->ndios, list)
+ if (strcmp(diskname, ndio->disk->disk_name) == 0)
+ return ndio;
+
+ return NULL;
+}
+
+struct nd_io *ndio_lookup(struct nd_bus *nd_bus, const char *diskname)
+{
+ struct nd_io *ndio;
+
+ WARN_ON_ONCE(!is_nd_bus_locked(&nd_bus->dev));
+ ndio = __ndio_lookup(nd_bus, diskname);
+
+ return ndio;
+}
+
static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 880aef08f919..a093c6468a53 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -62,6 +62,62 @@ bool is_nd_bus_locked(struct device *dev)
}
EXPORT_SYMBOL(is_nd_bus_locked);
+void nd_init_ndio(struct nd_io *ndio, nd_rw_bytes_fn rw_bytes,
+ struct device *dev, struct gendisk *disk, unsigned long align)
+{
+ memset(ndio, 0, sizeof(*ndio));
+ INIT_LIST_HEAD(&ndio->claims);
+ INIT_LIST_HEAD(&ndio->list);
+ spin_lock_init(&ndio->lock);
+ ndio->dev = dev;
+ ndio->disk = disk;
+ ndio->align = align;
+ ndio->rw_bytes = rw_bytes;
+}
+EXPORT_SYMBOL(nd_init_ndio);
+
+void ndio_del_claim(struct nd_io_claim *ndio_claim)
+{
+ struct nd_io *ndio;
+ struct device *holder;
+
+ if (!ndio_claim)
+ return;
+ ndio = ndio_claim->parent;
+ holder = ndio_claim->holder;
+
+ dev_dbg(holder, "%s: drop %s\n", __func__, dev_name(ndio->dev));
+ spin_lock(&ndio->lock);
+ list_del(&ndio_claim->list);
+ spin_unlock(&ndio->lock);
+ put_device(ndio->dev);
+ kfree(ndio_claim);
+ put_device(holder);
+}
+
+struct nd_io_claim *ndio_add_claim(struct nd_io *ndio, struct device *holder,
+ ndio_notify_remove_fn notify_remove)
+{
+ struct nd_io_claim *ndio_claim = kzalloc(sizeof(*ndio_claim), GFP_KERNEL);
+
+ if (!ndio_claim)
+ return NULL;
+
+ INIT_LIST_HEAD(&ndio_claim->list);
+ ndio_claim->parent = ndio;
+ get_device(ndio->dev);
+
+ spin_lock(&ndio->lock);
+ list_add(&ndio_claim->list, &ndio->claims);
+ spin_unlock(&ndio->lock);
+
+ ndio_claim->holder = holder;
+ ndio_claim->notify_remove = notify_remove;
+ get_device(holder);
+
+ return ndio_claim;
+}
+
/**
* nd_dimm_by_handle - lookup an nd_dimm by its corresponding nfit_handle
* @nd_bus: parent bus of the dimm
@@ -125,6 +181,8 @@ static void nd_bus_release(struct device *dev)
kfree(nd_mem);
}
+ WARN_ON(!list_empty(&nd_bus->ndios));
+
ida_simple_remove(&nd_ida, nd_bus->id);
kfree(nd_bus);
}
@@ -323,11 +381,29 @@ static ssize_t wait_probe_show(struct device *dev,
}
static DEVICE_ATTR_RO(wait_probe);
+static ssize_t btt_seed_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nd_bus *nd_bus = to_nd_bus(dev);
+ ssize_t rc;
+
+ nd_bus_lock(dev);
+ if (nd_bus->nd_btt)
+ rc = sprintf(buf, "%s\n", dev_name(&nd_bus->nd_btt->dev));
+ else
+ rc = sprintf(buf, "\n");
+ nd_bus_unlock(dev);
+
+ return rc;
+}
+static DEVICE_ATTR_RO(btt_seed);
+
static struct attribute *nd_bus_attributes[] = {
&dev_attr_commands.attr,
&dev_attr_wait_probe.attr,
&dev_attr_provider.attr,
&dev_attr_revision.attr,
+ &dev_attr_btt_seed.attr,
NULL,
};
@@ -353,6 +429,7 @@ static void *nd_bus_new(struct device *parent,
INIT_LIST_HEAD(&nd_bus->bdws);
INIT_LIST_HEAD(&nd_bus->memdevs);
INIT_LIST_HEAD(&nd_bus->dimms);
+ INIT_LIST_HEAD(&nd_bus->ndios);
INIT_LIST_HEAD(&nd_bus->list);
init_waitqueue_head(&nd_bus->probe_wait);
INIT_RADIX_TREE(&nd_bus->dimm_radix, GFP_KERNEL);
@@ -737,6 +814,9 @@ static struct nd_bus *nd_bus_probe(struct nd_bus *nd_bus)
if (rc)
goto err_child;
+ nd_bus->nd_btt = nd_btt_create(nd_bus);
+ nd_synchronize();
+
mutex_lock(&nd_bus_list_mutex);
list_add_tail(&nd_bus->list, &nd_bus_list);
mutex_unlock(&nd_bus_list_mutex);
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index f88140dceea7..5f58e8e96a41 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -31,6 +31,11 @@ enum {
DPA_RESOURCE_ADJUSTED = 1 << 0,
};
+struct block_device;
+struct nd_io_claim;
+struct nd_btt;
+struct nd_io;
+
/*
* List manipulation is protected by nd_bus_list_mutex, except for the
* deferred probe tracking list which nests under instances where
@@ -46,10 +51,12 @@ struct nd_bus {
struct list_head spas;
struct list_head dcrs;
struct list_head bdws;
+ struct list_head ndios;
struct list_head list;
struct device dev;
int id, probe_active;
struct mutex reconfig_mutex;
+ struct nd_btt *nd_btt;
};
struct nd_dimm {
@@ -100,12 +107,32 @@ struct nd_mem {
struct list_head list;
};
+struct nd_io *ndio_lookup(struct nd_bus *nd_bus, const char *diskname);
const char *spa_type_name(u16 type);
int nfit_spa_type(struct nfit_spa __iomem *nfit_spa);
struct nd_dimm *nd_dimm_by_handle(struct nd_bus *nd_bus, u32 nfit_handle);
bool is_nd_dimm(struct device *dev);
bool is_nd_blk(struct device *dev);
bool is_nd_pmem(struct device *dev);
+#if IS_ENABLED(CONFIG_ND_BTT_DEVS)
+bool is_nd_btt(struct device *dev);
+struct nd_btt *nd_btt_create(struct nd_bus *nd_bus);
+void nd_btt_notify_ndio(struct nd_bus *nd_bus, struct nd_io *ndio);
+#else
+static inline bool is_nd_btt(struct device *dev)
+{
+ return false;
+}
+
+static inline struct nd_btt *nd_btt_create(struct nd_bus *nd_bus)
+{
+ return NULL;
+}
+
+static inline void nd_btt_notify_ndio(struct nd_bus *nd_bus, struct nd_io *ndio)
+{
+}
+#endif
struct nd_bus *to_nd_bus(struct device *dev);
struct nd_dimm *to_nd_dimm(struct device *dev);
struct nd_bus *walk_to_nd_bus(struct device *nd_dev);
@@ -127,6 +154,7 @@ void nd_bus_destroy_ndctl(struct nd_bus *nd_bus);
int nd_bus_register_dimms(struct nd_bus *nd_bus);
int nd_bus_register_regions(struct nd_bus *nd_bus);
int nd_bus_init_interleave_sets(struct nd_bus *nd_bus);
+void __nd_device_register(struct device *dev);
int nd_match_dimm(struct device *dev, void *data);
struct nd_label_id;
char *nd_label_gen_id(struct nd_label_id *label_id, u8 *uuid, u32 flags);
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index 91f61e952003..b4f95ccc0252 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -12,12 +12,18 @@
*/
#ifndef __ND_H__
#define __ND_H__
+#include <linux/genhd.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/ndctl.h>
#include <linux/types.h>
+#include <linux/fs.h>
#include "label.h"
+enum {
+ SECTOR_SHIFT = 9,
+};
+
struct nd_dimm_drvdata {
struct device *dev;
int nsindex_size;
@@ -116,6 +122,84 @@ static inline unsigned nd_inc_seq(unsigned seq)
return next[seq & 3];
}
+struct nd_io;
+/**
+ * nd_rw_bytes_fn() - access bytes relative to the "whole disk" namespace device
+ * @ndio: per-namespace context
+ * @buf: source / target for the write / read
+ * @offset: offset relative to the start of the namespace device
+ * @n: num bytes to access
+ * @flags: READ, WRITE, and other REQ_* flags
+ *
+ * Note: Implementations may assume that offset + n never crosses ndio->align
+ */
+typedef int (*nd_rw_bytes_fn)(struct nd_io *ndio, void *buf, size_t offset,
+ size_t n, unsigned long flags);
+#define nd_data_dir(flags) ((flags) & 1)
+
+/**
+ * struct nd_io - info for byte-aligned access to nd devices
+ * @rw_bytes: operation to perform byte-aligned access
+ * @align: a single ->rw_bytes() request may not cross this alignment
+ * @gendisk: whole disk block device for the namespace
+ * @list: for the core to cache a list of "ndio"s for later association
+ * @dev: namespace device
+ * @claims: list of clients using this interface
+ * @lock: protect @claims mutation
+ */
+struct nd_io {
+ nd_rw_bytes_fn rw_bytes;
+ unsigned long align;
+ struct gendisk *disk;
+ struct list_head list;
+ struct device *dev;
+ struct list_head claims;
+ spinlock_t lock;
+};
+
+struct nd_io_claim;
+typedef void (*ndio_notify_remove_fn)(struct nd_io_claim *ndio_claim);
+
+/**
+ * struct nd_io_claim - instance of a claim on a parent ndio
+ * @notify_remove: ndio is going away, release resources
+ * @holder: object that has claimed this ndio
+ * @parent: ndio in use
+ * @list: claim peers
+ *
+ * An ndio may be claimed multiple times; consider the case of a btt
+ * instance per partition on a namespace.
+ */
+struct nd_io_claim {
+ struct nd_io *parent;
+ ndio_notify_remove_fn notify_remove;
+ struct list_head list;
+ struct device *holder;
+};
+
+struct nd_btt {
+ struct device dev;
+ struct nd_io *ndio;
+ struct block_device *backing_dev;
+ unsigned long lbasize;
+ u8 *uuid;
+ u64 offset;
+ int id;
+ struct nd_io_claim *ndio_claim;
+};
+
+static inline u64 nd_partition_offset(struct block_device *bdev)
+{
+ struct hd_struct *p;
+
+ if (bdev == bdev->bd_contains)
+ return 0;
+
+ p = bdev->bd_part;
+ return ((u64) p->start_sect) << SECTOR_SHIFT;
+}
+
enum nd_async_mode {
ND_SYNC,
ND_ASYNC,
@@ -131,6 +215,13 @@ ssize_t nd_sector_size_show(unsigned long current_lbasize,
const unsigned long *supported, char *buf);
ssize_t nd_sector_size_store(struct device *dev, const char *buf,
unsigned long *current_lbasize, const unsigned long *supported);
+int nd_register_ndio(struct nd_io *ndio);
+int nd_unregister_ndio(struct nd_io *ndio);
+void nd_init_ndio(struct nd_io *ndio, nd_rw_bytes_fn rw_bytes,
+ struct device *dev, struct gendisk *disk, unsigned long align);
+void ndio_del_claim(struct nd_io_claim *ndio_claim);
+struct nd_io_claim *ndio_add_claim(struct nd_io *ndio, struct device *holder,
+ ndio_notify_remove_fn notify_remove);
extern struct attribute_group nd_device_attribute_group;
struct nd_dimm;
struct nd_dimm_drvdata *to_ndd(struct nd_mapping *nd_mapping);
@@ -144,6 +235,9 @@ int nd_dimm_init_config_data(struct nd_dimm_drvdata *ndd);
int nd_dimm_firmware_status(struct device *dev);
int nd_dimm_set_config_data(struct nd_dimm_drvdata *ndd, size_t offset,
void *buf, size_t len);
+struct nd_btt *to_nd_btt(struct device *dev);
+struct btt_sb;
+u64 btt_sb_checksum(struct btt_sb *btt_sb);
struct nd_region *to_nd_region(struct device *dev);
int nd_region_to_namespace_type(struct nd_region *nd_region);
int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
diff --git a/drivers/block/nd/pmem.c b/drivers/block/nd/pmem.c
index aa2b4fb1f140..c902e74f10a7 100644
--- a/drivers/block/nd/pmem.c
+++ b/drivers/block/nd/pmem.c
@@ -31,6 +31,7 @@
struct pmem_device {
struct request_queue *pmem_queue;
struct gendisk *pmem_disk;
+ struct nd_io ndio;
/* One contiguous memory region per device */
phys_addr_t phys_addr;
@@ -100,6 +101,26 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
return 0;
}
+static int pmem_rw_bytes(struct nd_io *ndio, void *buf, size_t offset,
+ size_t n, unsigned long flags)
+{
+ struct pmem_device *pmem = container_of(ndio, typeof(*pmem), ndio);
+ int rw = nd_data_dir(flags);
+
+ if (unlikely(offset + n > pmem->size)) {
+ dev_WARN_ONCE(ndio->dev, 1, "%s: request out of range\n",
+ __func__);
+ return -EFAULT;
+ }
+
+ if (rw == READ)
+ memcpy(buf, pmem->virt_addr + offset, n);
+ else
+ memcpy(pmem->virt_addr + offset, buf, n);
+
+ return 0;
+}
+
static long pmem_direct_access(struct block_device *bdev, sector_t sector,
void **kaddr, unsigned long *pfn, long size)
{
@@ -179,8 +200,6 @@ static struct pmem_device *pmem_alloc(struct device *dev, struct resource *res)
set_capacity(disk, pmem->size >> 9);
pmem->pmem_disk = disk;
- add_disk(disk);
-
return pmem;
out_free_queue:
@@ -224,6 +243,7 @@ static int pmem_probe(struct platform_device *pdev)
if (IS_ERR(pmem))
return PTR_ERR(pmem);
+ add_disk(pmem->pmem_disk);
platform_set_drvdata(pdev, pmem);
return 0;
@@ -273,7 +293,12 @@ static int nd_pmem_probe(struct device *dev)
if (IS_ERR(pmem))
return PTR_ERR(pmem);
+ nd_bus_lock(dev);
+ add_disk(pmem->pmem_disk);
dev_set_drvdata(dev, pmem);
+ nd_init_ndio(&pmem->ndio, pmem_rw_bytes, dev, pmem->pmem_disk, 0);
+ nd_register_ndio(&pmem->ndio);
+ nd_bus_unlock(dev);
return 0;
}
@@ -282,6 +307,7 @@ static int nd_pmem_remove(struct device *dev)
{
struct pmem_device *pmem = dev_get_drvdata(dev);
+ nd_unregister_ndio(&pmem->ndio);
pmem_free(pmem);
return 0;
}
diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
index 5f0cf00872e0..4156a0f816ca 100644
--- a/include/uapi/linux/ndctl.h
+++ b/include/uapi/linux/ndctl.h
@@ -181,6 +181,7 @@ static inline const char *nfit_dimm_cmd_name(unsigned cmd)
#define ND_DEVICE_NAMESPACE_IO 4 /* legacy persistent memory */
#define ND_DEVICE_NAMESPACE_PMEM 5 /* persistent memory namespace (may alias) */
#define ND_DEVICE_NAMESPACE_BLOCK 6 /* block-data-window namespace (may alias) */
+#define ND_DEVICE_BTT 7 /* block-translation table device */
enum nd_driver_flags {
ND_DRIVER_DIMM = 1 << ND_DEVICE_DIMM,
@@ -189,6 +190,7 @@ enum nd_driver_flags {
ND_DRIVER_NAMESPACE_IO = 1 << ND_DEVICE_NAMESPACE_IO,
ND_DRIVER_NAMESPACE_PMEM = 1 << ND_DEVICE_NAMESPACE_PMEM,
ND_DRIVER_NAMESPACE_BLOCK = 1 << ND_DEVICE_NAMESPACE_BLOCK,
+ ND_DRIVER_BTT = 1 << ND_DEVICE_BTT,
};
enum {
From: Vishal Verma <[email protected]>
BTT stands for Block Translation Table, and is a way to provide power-fail
sector atomicity semantics for block devices that have the ability
to perform byte granularity IO. It relies on the ->rw_bytes() capability
of the underlying nd namespace devices.
The BTT works as a stacked block device, and reserves a chunk of space
from the backing device for its accounting metadata. BLK namespaces may
mandate use of a BTT and expect the bus to initialize one if it is not
already present. Otherwise, if a BTT is desired for other namespaces (or
partitions of a namespace), a BTT may be manually configured.
Cc: Andy Lutomirski <[email protected]>
Cc: Boaz Harrosh <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Greg KH <[email protected]>
[jmoyer: fix nmi watchdog timeout in btt_map_init]
[jmoyer: move btt initialization to module load path]
[jmoyer: fix memory leak in the btt initialization path]
[jmoyer: Don't overwrite corrupted arenas]
Signed-off-by: Vishal Verma <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
Documentation/blockdev/btt.txt | 273 ++++++++
drivers/block/nd/Kconfig | 18 -
drivers/block/nd/Makefile | 2
drivers/block/nd/btt.c | 1423 ++++++++++++++++++++++++++++++++++++++++
drivers/block/nd/btt.h | 140 ++++
drivers/block/nd/btt_devs.c | 3
drivers/block/nd/core.c | 1
drivers/block/nd/nd.h | 9
drivers/block/nd/region_devs.c | 77 ++
9 files changed, 1944 insertions(+), 2 deletions(-)
create mode 100644 Documentation/blockdev/btt.txt
create mode 100644 drivers/block/nd/btt.c
diff --git a/Documentation/blockdev/btt.txt b/Documentation/blockdev/btt.txt
new file mode 100644
index 000000000000..95134d5ec4a0
--- /dev/null
+++ b/Documentation/blockdev/btt.txt
@@ -0,0 +1,273 @@
+BTT - Block Translation Table
+=============================
+
+
+1. Introduction
+---------------
+
+Persistent memory based storage is able to perform IO at byte (or more
+accurately, cache line) granularity. However, we often want to expose such
+storage as traditional block devices. The block drivers for persistent memory
+will do exactly this. However, they do not provide any atomicity guarantees.
+Traditional SSDs typically provide protection against torn sectors in hardware
+or firmware, for example by using stored energy in capacitors to complete
+in-flight block writes. We don't have this luxury with persistent memory - if
+a write is in
+progress, and we experience a power failure, the block will contain a mix of old
+and new data. Applications may not be prepared to handle such a scenario.
+
+The Block Translation Table (BTT) provides atomic sector update semantics for
+persistent memory devices, so that applications that rely on sector writes not
+being torn can continue to do so. The BTT manifests itself as a stacked block
+device, and reserves a portion of the underlying storage for its metadata. At
+the heart of it is an indirection table that re-maps all the blocks on the
+volume. It can be thought of as an extremely simple file system that only
+provides atomic sector updates.
+
+
+2. Static Layout
+----------------
+
+The underlying storage on which a BTT can be laid out is not limited in any way.
+The BTT, however, splits the available space into chunks of up to 512 GiB,
+called "Arenas".
+
+Each arena follows the same layout for its metadata, and all references in an
+arena are internal to it (with the exception of one field that points to the
+next arena). The following depicts the "On-disk" metadata layout:
+
+
+ Backing Store +-------> Arena
++---------------+ | +------------------+
+| | | | Arena info block |
+| Arena 0 +---+ | 4K |
+| 512G | +------------------+
+| | | |
++---------------+ | |
+| | | |
+| Arena 1 | | Data Blocks |
+| 512G | | |
+| | | |
++---------------+ | |
+| . | | |
+| . | | |
+| . | | |
+| | | |
+| | | |
++---------------+ +------------------+
+ | |
+ | BTT Map |
+ | |
+ | |
+ +------------------+
+ | |
+ | BTT Flog |
+ | |
+ +------------------+
+ | Info block copy |
+ | 4K |
+ +------------------+
+
+
+3. Theory of Operation
+----------------------
+
+
+a. The BTT Map
+--------------
+
+The map is a simple lookup/indirection table that maps an LBA to an internal
+block. Each map entry is 32 bits. The two most significant bits are special
+flags, and the remaining form the internal block number.
+
+Bit Description
+31 : TRIM flag - marks if the block was trimmed or discarded
+30 : ERROR flag - marks an error block. Cleared on write.
+29 - 0 : Mappings to internal 'postmap' blocks
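As a user-space illustration only (not the driver code), decoding a raw map entry according to the bit layout above can be sketched as follows; the constant names mirror the kernel's btt.h, but the values here are transcribed from this document:

```c
#include <stdint.h>

/* Simplified model of the documented map entry layout:
 * bit 31 = TRIM (Z), bit 30 = ERROR (E), bits 29-0 = postmap ABA */
#define MAP_TRIM_SHIFT 31
#define MAP_ERR_SHIFT  30
#define MAP_LBA_MASK   0x3fffffffu

/* Return the postmap ABA for a raw entry and its premap ABA.
 * Z=0/E=0 is the initial state: the block maps to itself. */
static uint32_t map_decode(uint32_t raw, uint32_t premap,
			   int *trim, int *error)
{
	int z = (raw >> MAP_TRIM_SHIFT) & 1;
	int e = (raw >> MAP_ERR_SHIFT) & 1;

	*trim = 0;
	*error = 0;
	switch ((z << 1) + e) {
	case 0:			/* uninitialized: identity mapping */
		return premap;
	case 1:			/* error block */
		*error = 1;
		break;
	case 2:			/* trimmed block */
		*trim = 1;
		break;
	default:		/* 3: normal mapped block */
		break;
	}
	return raw & MAP_LBA_MASK;
}
```

Note that both flag bits set to '1' denotes a normal mapped block, while both clear denotes the never-written initial state.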
+
+
+Some of the terminology that will be subsequently used:
+
+External LBA : LBA as made visible to upper layers.
+ABA : Arena Block Address - Block offset/number within an arena
+Premap ABA : The block offset into an arena, which was decided upon by range
+ checking the External LBA
+Postmap ABA : The block number in the "Data Blocks" area obtained after
+ indirection from the map
+nfree : The number of free blocks that are maintained at any given time.
+ This is the number of concurrent writes that can happen to the
+ arena.
+
+
+For example, after adding a BTT, we surface a disk of 1024G. We get a read for
+the external LBA at 768G. This falls into the second arena, and of the 512G
+worth of blocks that this arena contributes, this block is at 256G. Thus, the
+premap ABA is 256G. We now refer to the map, and find that the mapping for
+block 'X' (256G) points to block 'Y', say '64'. Thus the postmap ABA is 64.
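The arena lookup in the example above amounts to a quotient/remainder split. A hypothetical helper (a simplification: it assumes every arena exposes the same number of blocks, whereas in practice the last arena may be smaller and the implementation walks the arena list):

```c
#include <stdint.h>

/* Hypothetical helper: split an external LBA into an arena index and a
 * premap ABA, assuming each arena exposes 'arena_nlba' blocks */
static void lba_to_arena(uint64_t external_lba, uint64_t arena_nlba,
			 uint64_t *arena, uint64_t *premap)
{
	*arena = external_lba / arena_nlba;
	*premap = external_lba % arena_nlba;
}
```

With 512 blocks per arena, external block 768 lands in arena 1 at premap ABA 256, matching the 1024G/768G example above.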
+
+
+b. The BTT Flog
+---------------
+
+The BTT provides sector atomicity by making every write an "allocating write",
+i.e. every write goes to a "free" block. A running list of free blocks is
+maintained in the form of the BTT flog. 'Flog' is a combination of the words
+"free list" and "log". The flog contains 'nfree' entries, and an entry contains:
+
+lba : The premap ABA that is being written to
+old_map : The old postmap ABA - after 'this' write completes, this will be a
+ free block.
+new_map : The new postmap ABA. The map will be updated to reflect this
+	   lba->postmap_aba mapping, but we log it here in case we have to
+	   recover.
+seq : Sequence number to mark which of the 2 sections of this flog entry is
+ valid/newest. It cycles between 01->10->11->01 (binary) under normal
+ operation, with 00 indicating an uninitialized state.
+lba' : alternate lba entry
+old_map': alternate old postmap entry
+new_map': alternate new postmap entry
+seq' : alternate sequence number.
+
+Each of the above fields is 32-bit, making one entry 16 bytes. Flog updates are
+done such that for any entry being written, it:
+a. overwrites the 'old' section in the entry based on sequence numbers
+b. writes the new entry such that the sequence number is written last.
+
+
+c. The concept of lanes
+-----------------------
+
+While 'nfree' describes the number of IOs an arena can process concurrently,
+'nlanes' is the number of IOs the BTT device as a whole can process
+concurrently:
+ nlanes = min(nfree, num_cpus)
+A lane number is obtained at the start of any IO, and is used for indexing into
+all the on-disk and in-memory data structures for the duration of the IO. It is
+protected by a spinlock.
+
+
+d. In-memory data structure: Read Tracking Table (RTT)
+------------------------------------------------------
+
+Consider a case where we have two threads, one doing reads and the other,
+writes. We can hit a condition where the writer thread grabs a free block to do
+a new IO, but the (slow) reader thread is still reading from it. In other words,
+the reader consulted a map entry, and started reading the corresponding block. A
+writer started writing to the same external LBA, and finished the write updating
+the map for that external LBA to point to its new postmap ABA. At this point the
+internal, postmap block that the reader is (still) reading has been inserted
+into the list of free blocks. If another write comes in for the same LBA, it can
+grab this free block, and start writing to it, causing the reader to read
+incorrect data. To prevent this, we introduce the RTT.
+
+The RTT is a simple, per arena table with 'nfree' entries. Every reader inserts
+into rtt[lane_number], the postmap ABA it is reading, and clears it after the
+read is complete. Every writer thread, after grabbing a free block, checks the
+RTT for its presence. If the postmap free block is in the RTT, it waits till the
+reader clears the RTT entry, and only then starts writing to it.
+
+
+e. In-memory data structure: map locks
+--------------------------------------
+
+Consider a case where two writer threads are writing to the same LBA. There can
+be a race in the following sequence of steps:
+
+free[lane] = map[premap_aba]
+map[premap_aba] = postmap_aba
+
+Both threads can update their respective free[lane] with the same old, freed
+postmap_aba. This has made the layout inconsistent by losing a free entry, and
+at the same time, duplicating another free entry for two lanes.
+
+To solve this, we could have a single map lock (per arena) that has to be taken
+before performing the above sequence, but we feel that could be too contentious.
+Instead we use an array of (nfree) map_locks that is indexed by
+(premap_aba modulo nfree).
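The lock selection itself is a one-liner; shown here as an illustrative helper (the aligned, padded lock array lives in the per-arena state):

```c
#include <stdint.h>

/* Two writers racing on the same premap ABA always hash to the same
 * lock, serializing the "read old map entry / write new map entry"
 * pair, while writes to different entries usually proceed in parallel */
static unsigned int map_lock_idx(uint32_t premap_aba, unsigned int nfree)
{
	return premap_aba % nfree;
}
```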
+
+
+f. Reconstruction from the Flog
+-------------------------------
+
+On startup, we analyze the BTT flog to create our list of free blocks. We walk
+through all the entries, and for each lane, out of the two possible 'sections'
+we only look at the most recent one (based on the sequence number). The
+reconstruction rules/steps are simple:
+- Read map[log_entry.lba].
+- If log_entry.new matches the map entry, then log_entry.old is free.
+- If log_entry.new does not match the map entry, then log_entry.new is free.
+ (This case can only be caused by power-fails/unsafe shutdowns)
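The rules above reduce to a single comparison per lane. A literal user-space transcription (the in-kernel btt_freelist_init() implements a variant of this that also repairs the map when the flog committed but the map write did not land):

```c
#include <stdint.h>

/* Given the newest flog sub-entry for a lane and the current map entry
 * for log.lba, return the block that belongs on the free list */
static uint32_t recover_free_block(uint32_t map_entry,
				   uint32_t log_old, uint32_t log_new)
{
	if (map_entry == log_new)	/* transaction completed normally */
		return log_old;
	return log_new;			/* torn: the map write never landed */
}
```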
+
+
+g. Summarizing - Read and Write flows
+-------------------------------------
+
+Read:
+
+1. Convert external LBA to arena number + pre-map ABA
+2. Get a lane (and take lane_lock)
+3. Read map to get the entry for this pre-map ABA
+4. Enter post-map ABA into RTT[lane]
+5. If TRIM flag set in map, return zeroes, and end IO (go to step 8)
+6. If ERROR flag set in map, end IO with EIO (go to step 8)
+7. Read data from this block
+8. Remove post-map ABA entry from RTT[lane]
+9. Release lane (and lane_lock)
+
+Write:
+
+1. Convert external LBA to Arena number + pre-map ABA
+2. Get a lane (and take lane_lock)
+3. Use lane to index into in-memory free list and obtain a new block, next flog
+ index, next sequence number
+4. Scan the RTT to check if free block is present, and spin/wait if it is.
+5. Write data to this free block
+6. Read map to get the existing post-map ABA entry for this pre-map ABA
+7. Write flog entry: [premap_aba / old postmap_aba / new postmap_aba / seq_num]
+8. Write new post-map ABA into map.
+9. Write old post-map entry into the free list
+10. Calculate next sequence number and write into the free list entry
+11. Release lane (and lane_lock)
+
+
+4. Error Handling
+=================
+
+An arena would be in an error state if any of the metadata is corrupted
+irrecoverably, either due to a bug or a media error. The following conditions
+indicate an error:
+- Info block checksum does not match (and recovering from the copy also fails)
+- The internal available blocks are not uniquely and entirely addressed by the
+  union of mapped blocks and free blocks (from the BTT flog).
+- Rebuilding free list from the flog reveals missing/duplicate/impossible
+ entries
+- A map entry is out of bounds
+
+If any of these error conditions are encountered, the arena is put into a read
+only state using a flag in the info block.
+
+
+5. In-kernel usage
+==================
+
+Any block driver that supports byte granularity IO to the storage may register
+with the BTT. It will have to provide the rw_bytes interface in its
+block_device_operations struct:
+
+ int (*rw_bytes)(struct gendisk *, void *, size_t, off_t, int rw);
+
+It may register with the BTT after it adds its own gendisk, using btt_init:
+
+ struct btt *btt_init(struct gendisk *disk, unsigned long long rawsize,
+ u32 lbasize, u8 uuid[], int maxlane);
+
+Note that 'maxlane' is the maximum amount of concurrency the driver wishes to
+allow the BTT to use.
+
+The BTT 'disk' appears as a stacked block device that grabs the underlying block
+device in O_EXCL mode.
+
+When the driver wishes to remove the backing disk, it should similarly call
+btt_fini using the same struct btt* handle that was provided to it by btt_init.
+
+ void btt_fini(struct btt *btt);
+
diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
index faa756841773..29d9f8e4eedb 100644
--- a/drivers/block/nd/Kconfig
+++ b/drivers/block/nd/Kconfig
@@ -90,6 +90,22 @@ config BLK_DEV_PMEM
Say Y if you want to use a NVDIMM described by NFIT
config ND_BTT_DEVS
- def_bool y
+ bool
+
+config ND_BTT
+ tristate "BTT: Block Translation Table (atomic sector updates)"
+ depends on ND_CORE
+ default ND_CORE
+ select ND_BTT_DEVS
+
+config ND_MAX_REGIONS
+ int "Maximum number of regions supported by the sub-system"
+ default 64
+ ---help---
+ A 'region' corresponds to an individual DIMM or an interleave
+ set of DIMMs. A typical maximally configured system may have
+ up to 32 DIMMs.
+
+ Leave the default of 64 if you are unsure.
endif
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 3e4878e0fe1d..2dc1ab6fdef2 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -17,6 +17,7 @@ endif
obj-$(CONFIG_ND_CORE) += nd.o
obj-$(CONFIG_NFIT_ACPI) += nd_acpi.o
obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
+obj-$(CONFIG_ND_BTT) += nd_btt.o
nd_acpi-y := acpi.o
@@ -32,3 +33,4 @@ nd-y += label.o
nd-$(CONFIG_ND_BTT_DEVS) += btt_devs.o
nd_pmem-y := pmem.o
+nd_btt-y := btt.o
diff --git a/drivers/block/nd/btt.c b/drivers/block/nd/btt.c
new file mode 100644
index 000000000000..1075012d13c0
--- /dev/null
+++ b/drivers/block/nd/btt.c
@@ -0,0 +1,1423 @@
+/*
+ * Block Translation Table
+ * Copyright (c) 2014-2015, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+#include <linux/highmem.h>
+#include <linux/debugfs.h>
+#include <linux/blkdev.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/mutex.h>
+#include <linux/hdreg.h>
+#include <linux/genhd.h>
+#include <linux/sizes.h>
+#include <linux/ndctl.h>
+#include <linux/fs.h>
+#include <linux/nd.h>
+#include "btt.h"
+#include "nd.h"
+
+enum log_ent_request {
+ LOG_NEW_ENT = 0,
+ LOG_OLD_ENT
+};
+
+static int btt_major;
+
+static int nd_btt_rw_bytes(struct nd_btt *nd_btt, void *buf, size_t offset,
+ size_t n, unsigned long flags)
+{
+ struct nd_io *ndio = nd_btt->ndio;
+
+ if (unlikely(nd_data_dir(flags) == WRITE)
+ && bdev_read_only(nd_btt->backing_dev))
+ return -EACCES;
+
+ return ndio->rw_bytes(ndio, buf, offset + nd_btt->offset, n, flags);
+}
+
+static int arena_rw_bytes(struct arena_info *arena, void *buf, size_t n,
+ size_t offset, unsigned long flags)
+{
+ /* yes, FIXME, 'offset' and 'n' are swapped */
+ return nd_btt_rw_bytes(arena->nd_btt, buf, offset, n, flags);
+}
+
+static int btt_info_write(struct arena_info *arena, struct btt_sb *super)
+{
+ int ret;
+
+ ret = arena_rw_bytes(arena, super, sizeof(struct btt_sb),
+ arena->info2off, WRITE);
+ if (ret)
+ return ret;
+
+ return arena_rw_bytes(arena, super, sizeof(struct btt_sb),
+ arena->infooff, WRITE);
+}
+
+static int btt_info_read(struct arena_info *arena, struct btt_sb *super)
+{
+ WARN_ON(!super);
+ return arena_rw_bytes(arena, super, sizeof(struct btt_sb),
+ arena->infooff, READ);
+
+}
+
+/*
+ * 'raw' version of btt_map write
+ * Assumptions:
+ * mapping is in little-endian
+ * mapping contains 'E' and 'Z' flags as desired
+ */
+static int __btt_map_write(struct arena_info *arena, u32 lba, __le32 mapping)
+{
+ u64 ns_off = arena->mapoff + (lba * MAP_ENT_SIZE);
+
+ WARN_ON(lba >= arena->external_nlba);
+ return arena_rw_bytes(arena, &mapping, MAP_ENT_SIZE, ns_off, WRITE);
+}
+
+static int btt_map_write(struct arena_info *arena, u32 lba, u32 mapping,
+ u32 z_flag, u32 e_flag)
+{
+ u32 ze;
+ __le32 mapping_le;
+
+ /*
+ * This 'mapping' is supposed to be just the LBA mapping, without
+ * any flags set, so strip the flag bits.
+ */
+ mapping &= MAP_LBA_MASK;
+
+ ze = (z_flag << 1) + e_flag;
+ switch (ze) {
+ case 0:
+ /*
+ * We want to set neither of the Z or E flags, and
+ * in the actual layout, this means setting the bit
+ * positions of both to '1' to indicate a 'normal'
+ * map entry
+ */
+ mapping |= MAP_ENT_NORMAL;
+ break;
+ case 1:
+ mapping |= (1 << MAP_ERR_SHIFT);
+ break;
+ case 2:
+ mapping |= (1 << MAP_TRIM_SHIFT);
+ break;
+ default:
+ /*
+ * The case where Z and E are both sent in as '1' could be
+ * construed as a valid 'normal' case, but we decide not to,
+ * to avoid confusion
+ */
+ WARN_ONCE(1, "Invalid use of Z and E flags\n");
+ return -EIO;
+ }
+
+ mapping_le = cpu_to_le32(mapping);
+ return __btt_map_write(arena, lba, mapping_le);
+}
+
+static int btt_map_read(struct arena_info *arena, u32 lba, u32 *mapping,
+ int *trim, int *error)
+{
+ int ret;
+ __le32 in;
+ u32 raw_mapping, postmap, ze, z_flag, e_flag;
+ u64 ns_off = arena->mapoff + (lba * MAP_ENT_SIZE);
+
+ WARN_ON(lba >= arena->external_nlba);
+
+ ret = arena_rw_bytes(arena, &in, MAP_ENT_SIZE, ns_off, READ);
+ if (ret)
+ return ret;
+
+ raw_mapping = le32_to_cpu(in);
+
+ z_flag = (raw_mapping & MAP_TRIM_MASK) >> MAP_TRIM_SHIFT;
+ e_flag = (raw_mapping & MAP_ERR_MASK) >> MAP_ERR_SHIFT;
+ ze = (z_flag << 1) + e_flag;
+ postmap = raw_mapping & MAP_LBA_MASK;
+
+ /* Reuse the {z,e}_flag variables for *trim and *error */
+ z_flag = 0;
+ e_flag = 0;
+
+ switch (ze) {
+ case 0:
+ /* Initial state. Return postmap = premap */
+ *mapping = lba;
+ break;
+ case 1:
+ *mapping = postmap;
+ e_flag = 1;
+ break;
+ case 2:
+ *mapping = postmap;
+ z_flag = 1;
+ break;
+ case 3:
+ *mapping = postmap;
+ break;
+ default:
+ return -EIO;
+ }
+
+ if (trim)
+ *trim = z_flag;
+ if (error)
+ *error = e_flag;
+
+ return ret;
+}
+
+static int btt_log_read_pair(struct arena_info *arena, u32 lane,
+ struct log_entry *ent)
+{
+ WARN_ON(!ent);
+ return arena_rw_bytes(arena, ent, 2 * LOG_ENT_SIZE,
+ arena->logoff + (2 * lane * LOG_ENT_SIZE), READ);
+}
+
+static struct dentry *debugfs_root;
+
+static void arena_debugfs_init(struct arena_info *a, struct dentry *parent,
+ int idx)
+{
+ char dirname[32];
+ struct dentry *d;
+
+ /* If for some reason, parent bttN was not created, exit */
+ if (!parent)
+ return;
+
+ snprintf(dirname, 32, "arena%d", idx);
+ d = debugfs_create_dir(dirname, parent);
+ if (IS_ERR_OR_NULL(d))
+ return;
+ a->debugfs_dir = d;
+
+ debugfs_create_x64("size", S_IRUGO, d, &a->size);
+ debugfs_create_x64("external_lba_start", S_IRUGO, d,
+ &a->external_lba_start);
+ debugfs_create_x32("internal_nlba", S_IRUGO, d, &a->internal_nlba);
+ debugfs_create_u32("internal_lbasize", S_IRUGO, d,
+ &a->internal_lbasize);
+ debugfs_create_x32("external_nlba", S_IRUGO, d, &a->external_nlba);
+ debugfs_create_u32("external_lbasize", S_IRUGO, d,
+ &a->external_lbasize);
+ debugfs_create_u32("nfree", S_IRUGO, d, &a->nfree);
+ debugfs_create_u16("version_major", S_IRUGO, d, &a->version_major);
+ debugfs_create_u16("version_minor", S_IRUGO, d, &a->version_minor);
+ debugfs_create_x64("nextoff", S_IRUGO, d, &a->nextoff);
+ debugfs_create_x64("infooff", S_IRUGO, d, &a->infooff);
+ debugfs_create_x64("dataoff", S_IRUGO, d, &a->dataoff);
+ debugfs_create_x64("mapoff", S_IRUGO, d, &a->mapoff);
+ debugfs_create_x64("logoff", S_IRUGO, d, &a->logoff);
+ debugfs_create_x64("info2off", S_IRUGO, d, &a->info2off);
+ debugfs_create_x32("flags", S_IRUGO, d, &a->flags);
+}
+
+static void btt_debugfs_init(struct btt *btt)
+{
+ int i = 0;
+ struct arena_info *arena;
+
+ btt->debugfs_dir = debugfs_create_dir(dev_name(&btt->nd_btt->dev),
+ debugfs_root);
+ if (IS_ERR_OR_NULL(btt->debugfs_dir))
+ return;
+
+ list_for_each_entry(arena, &btt->arena_list, list) {
+ arena_debugfs_init(arena, btt->debugfs_dir, i);
+ i++;
+ }
+}
+
+/*
+ * This function accepts two log entries, and uses the
+ * sequence number to find the 'older' entry.
+ * It also updates the sequence number in this old entry to
+ * make it the 'new' one if the mark_flag is set.
+ * Finally, it returns which of the entries was the older one.
+ *
+ * TODO The logic feels a bit kludge-y. make it better..
+ */
+static int btt_log_get_old(struct log_entry *ent)
+{
+ int old;
+
+ /*
+ * the first ever time this is seen, the entry goes into [0]
+ * the next time, the following logic works out to put this
+ * (next) entry into [1]
+ */
+ if (ent[0].seq == 0) {
+ ent[0].seq = cpu_to_le32(1);
+ return 0;
+ }
+
+ if (ent[0].seq == ent[1].seq)
+ return -EINVAL;
+ if (le32_to_cpu(ent[0].seq) + le32_to_cpu(ent[1].seq) > 5)
+ return -EINVAL;
+
+ if (le32_to_cpu(ent[0].seq) < le32_to_cpu(ent[1].seq)) {
+ if (le32_to_cpu(ent[1].seq) - le32_to_cpu(ent[0].seq) == 1)
+ old = 0;
+ else
+ old = 1;
+ } else {
+ if (le32_to_cpu(ent[0].seq) - le32_to_cpu(ent[1].seq) == 1)
+ old = 1;
+ else
+ old = 0;
+ }
+
+ return old;
+}
+
+static struct device *to_dev(struct arena_info *arena)
+{
+ return &arena->nd_btt->dev;
+}
+
+/*
+ * This function copies the desired (old/new) log entry into ent if
+ * it is not NULL. It returns the sub-slot number (0 or 1)
+ * where the desired log entry was found. Negative return values
+ * indicate errors.
+ */
+static int btt_log_read(struct arena_info *arena, u32 lane,
+ struct log_entry *ent, int old_flag)
+{
+ int ret;
+ int old_ent, ret_ent;
+ struct log_entry log[2];
+
+ ret = btt_log_read_pair(arena, lane, log);
+ if (ret)
+ return -EIO;
+
+ old_ent = btt_log_get_old(log);
+ if (old_ent < 0 || old_ent > 1) {
+ dev_info(to_dev(arena),
+ "log corruption (%d): lane %d seq [%d, %d]\n",
+ old_ent, lane, log[0].seq, log[1].seq);
+ /* TODO set error state? */
+ return -EIO;
+ }
+
+ ret_ent = (old_flag ? old_ent : (1 - old_ent));
+
+ if (ent != NULL)
+ memcpy(ent, &log[ret_ent], LOG_ENT_SIZE);
+
+ return ret_ent;
+}
+
+/*
+ * This function commits a log entry to media
+ * It does _not_ prepare the freelist entry for the next write
+ * btt_flog_write is the wrapper for updating the freelist elements
+ */
+static int __btt_log_write(struct arena_info *arena, u32 lane,
+ u32 sub, struct log_entry *ent)
+{
+ int ret;
+ /*
+ * Ignore the padding in log_entry for calculating log_half.
+ * The entry is 'committed' when we write the sequence number,
+ * and we want to ensure that that is the last thing written.
+ * We don't bother writing the padding as that would be extra
+ * media wear and write amplification
+ */
+ unsigned int log_half = (LOG_ENT_SIZE - 2 * sizeof(u64)) / 2;
+ u64 ns_off = arena->logoff + (((2 * lane) + sub) * LOG_ENT_SIZE);
+ void *src = ent;
+
+ /* split the 16B write into atomic, durable halves */
+ ret = arena_rw_bytes(arena, src, log_half, ns_off, WRITE);
+ if (ret)
+ return ret;
+
+ ns_off += log_half;
+ src += log_half;
+ return arena_rw_bytes(arena, src, log_half, ns_off, WRITE);
+}
+
+static int btt_flog_write(struct arena_info *arena, u32 lane, u32 sub,
+ struct log_entry *ent)
+{
+ int ret;
+
+ ret = __btt_log_write(arena, lane, sub, ent);
+ if (ret)
+ return ret;
+
+ /* prepare the next free entry */
+ arena->freelist[lane].sub = 1 - arena->freelist[lane].sub;
+ if (++(arena->freelist[lane].seq) == 4)
+ arena->freelist[lane].seq = 1;
+ arena->freelist[lane].block = le32_to_cpu(ent->old_map);
+
+ return ret;
+}
+
+/*
+ * This function initializes the BTT map to a state with all externally
+ * exposed blocks having an identity mapping, and the TRIM flag set
+ */
+static int btt_map_init(struct arena_info *arena)
+{
+ int ret = -EINVAL;
+ void *zerobuf;
+ size_t offset = 0;
+ size_t chunk_size = SZ_2M;
+ size_t mapsize = arena->logoff - arena->mapoff;
+
+ zerobuf = kzalloc(chunk_size, GFP_KERNEL);
+ if (!zerobuf)
+ return -ENOMEM;
+
+ while (mapsize) {
+ size_t size = min(mapsize, chunk_size);
+
+ ret = arena_rw_bytes(arena, zerobuf, size,
+ arena->mapoff + offset, WRITE);
+ if (ret)
+ goto free;
+
+ offset += size;
+ mapsize -= size;
+ cond_resched();
+ }
+
+ free:
+ kfree(zerobuf);
+ return ret;
+}
+
+/*
+ * This function initializes the BTT log with 'fake' entries pointing
+ * to the initial reserved set of blocks as being free
+ */
+static int btt_log_init(struct arena_info *arena)
+{
+ int ret;
+ u32 i;
+ struct log_entry log, zerolog;
+
+ memset(&zerolog, 0, sizeof(zerolog));
+
+ for (i = 0; i < arena->nfree; i++) {
+ log.lba = cpu_to_le32(i);
+ log.old_map = cpu_to_le32(arena->external_nlba + i);
+ log.new_map = cpu_to_le32(arena->external_nlba + i);
+ log.seq = cpu_to_le32(LOG_SEQ_INIT);
+ ret = __btt_log_write(arena, i, 0, &log);
+ if (ret)
+ return ret;
+ ret = __btt_log_write(arena, i, 1, &zerolog);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int btt_freelist_init(struct arena_info *arena)
+{
+ int old, new, ret;
+ u32 i, map_entry;
+ struct log_entry log_new, log_old;
+
+ arena->freelist = kcalloc(arena->nfree, sizeof(struct free_entry),
+ GFP_KERNEL);
+ if (!arena->freelist)
+ return -ENOMEM;
+
+ for (i = 0; i < arena->nfree; i++) {
+ old = btt_log_read(arena, i, &log_old, LOG_OLD_ENT);
+ if (old < 0)
+ return old;
+
+ new = btt_log_read(arena, i, &log_new, LOG_NEW_ENT);
+ if (new < 0)
+ return new;
+
+ /* sub points to the next one to be overwritten */
+ arena->freelist[i].sub = 1 - new;
+ arena->freelist[i].seq = nd_inc_seq(le32_to_cpu(log_new.seq));
+ arena->freelist[i].block = le32_to_cpu(log_new.old_map);
+
+ /* This implies a newly created or untouched flog entry */
+ if (log_new.old_map == log_new.new_map)
+ continue;
+
+ /* Check if map recovery is needed */
+ ret = btt_map_read(arena, le32_to_cpu(log_new.lba), &map_entry,
+ NULL, NULL);
+ if (ret)
+ return ret;
+ if ((le32_to_cpu(log_new.new_map) != map_entry) &&
+ (le32_to_cpu(log_new.old_map) == map_entry)) {
+ /*
+ * Last transaction wrote the flog, but wasn't able
+ * to complete the map write. So fix up the map.
+ */
+ ret = btt_map_write(arena, le32_to_cpu(log_new.lba),
+ le32_to_cpu(log_new.new_map), 0, 0);
+ if (ret)
+ return ret;
+ }
+
+ }
+
+ return 0;
+}
+
+static int btt_rtt_init(struct arena_info *arena)
+{
+ arena->rtt = kcalloc(arena->nfree, sizeof(u32), GFP_KERNEL);
+ if (arena->rtt == NULL)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int btt_maplocks_init(struct arena_info *arena)
+{
+ u32 i;
+
+ arena->map_locks = kcalloc(arena->nfree, sizeof(struct aligned_lock),
+ GFP_KERNEL);
+ if (!arena->map_locks)
+ return -ENOMEM;
+
+ for (i = 0; i < arena->nfree; i++)
+ spin_lock_init(&arena->map_locks[i].lock);
+
+ return 0;
+}
+
+static struct arena_info *alloc_arena(struct btt *btt, size_t size,
+ size_t start, size_t arena_off)
+{
+ struct arena_info *arena;
+ u64 logsize, mapsize, datasize;
+ u64 available = size;
+
+ arena = kzalloc(sizeof(struct arena_info), GFP_KERNEL);
+ if (!arena)
+ return NULL;
+ arena->nd_btt = btt->nd_btt;
+
+ if (!size)
+ return arena;
+
+ arena->size = size;
+ arena->external_lba_start = start;
+ arena->external_lbasize = btt->lbasize;
+ arena->internal_lbasize = roundup(arena->external_lbasize,
+ INT_LBASIZE_ALIGNMENT);
+ arena->nfree = BTT_DEFAULT_NFREE;
+ arena->version_major = 1;
+ arena->version_minor = 1;
+
+ if (available % BTT_PG_SIZE)
+ available -= (available % BTT_PG_SIZE);
+
+ /* Two pages are reserved for the super block and its copy */
+ available -= 2 * BTT_PG_SIZE;
+
+ /* The log takes a fixed amount of space based on nfree */
+ logsize = roundup(2 * arena->nfree * sizeof(struct log_entry),
+ BTT_PG_SIZE);
+ available -= logsize;
+
+ /* Calculate optimal split between map and data area */
+ arena->internal_nlba = div_u64(available - BTT_PG_SIZE,
+ arena->internal_lbasize + MAP_ENT_SIZE);
+ arena->external_nlba = arena->internal_nlba - arena->nfree;
+
+ mapsize = roundup((arena->external_nlba * MAP_ENT_SIZE), BTT_PG_SIZE);
+ datasize = available - mapsize;
+
+ /* 'Absolute' values, relative to start of storage space */
+ arena->infooff = arena_off;
+ arena->dataoff = arena->infooff + BTT_PG_SIZE;
+ arena->mapoff = arena->dataoff + datasize;
+ arena->logoff = arena->mapoff + mapsize;
+ arena->info2off = arena->logoff + logsize;
+ return arena;
+}
+
+static void free_arenas(struct btt *btt)
+{
+ struct arena_info *arena, *next;
+
+ list_for_each_entry_safe(arena, next, &btt->arena_list, list) {
+ list_del(&arena->list);
+ kfree(arena->rtt);
+ kfree(arena->map_locks);
+ kfree(arena->freelist);
+ debugfs_remove_recursive(arena->debugfs_dir);
+ kfree(arena);
+ }
+}
+
+/*
+ * This function checks if the metadata layout is valid and error free
+ */
+static int arena_is_valid(struct arena_info *arena, struct btt_sb *super,
+ u8 *uuid)
+{
+ u64 checksum;
+
+ if (memcmp(super->uuid, uuid, 16))
+ return 0;
+
+ checksum = le64_to_cpu(super->checksum);
+ super->checksum = 0;
+ if (checksum != btt_sb_checksum(super))
+ return 0;
+ super->checksum = cpu_to_le64(checksum);
+
+ /* TODO: figure out action for this */
+ if ((le32_to_cpu(super->flags) & IB_FLAG_ERROR_MASK) != 0)
+ dev_info(to_dev(arena), "Found arena with an error flag\n");
+
+ return 1;
+}
+
+/*
+ * This function reads an existing valid btt superblock and
+ * populates the corresponding arena_info struct
+ */
+static void parse_arena_meta(struct arena_info *arena, struct btt_sb *super,
+ u64 arena_off)
+{
+ arena->internal_nlba = le32_to_cpu(super->internal_nlba);
+ arena->internal_lbasize = le32_to_cpu(super->internal_lbasize);
+ arena->external_nlba = le32_to_cpu(super->external_nlba);
+ arena->external_lbasize = le32_to_cpu(super->external_lbasize);
+ arena->nfree = le32_to_cpu(super->nfree);
+ arena->version_major = le16_to_cpu(super->version_major);
+ arena->version_minor = le16_to_cpu(super->version_minor);
+
+ arena->nextoff = (super->nextoff == 0) ? 0 : (arena_off +
+ le64_to_cpu(super->nextoff));
+ arena->infooff = arena_off;
+ arena->dataoff = arena_off + le64_to_cpu(super->dataoff);
+ arena->mapoff = arena_off + le64_to_cpu(super->mapoff);
+ arena->logoff = arena_off + le64_to_cpu(super->logoff);
+ arena->info2off = arena_off + le64_to_cpu(super->info2off);
+
+ arena->size = (super->nextoff > 0) ? (le64_to_cpu(super->nextoff)) :
+ (arena->info2off - arena->infooff + BTT_PG_SIZE);
+
+ arena->flags = le32_to_cpu(super->flags);
+}
+
+static int discover_arenas(struct btt *btt)
+{
+ int ret = 0;
+ struct arena_info *arena;
+ struct btt_sb *super;
+ size_t remaining = btt->rawsize;
+ u64 cur_nlba = 0;
+ size_t cur_off = 0;
+ int num_arenas = 0;
+
+ super = kzalloc(sizeof(*super), GFP_KERNEL);
+ if (!super)
+ return -ENOMEM;
+
+ while (remaining) {
+ /* Alloc memory for arena */
+ arena = alloc_arena(btt, 0, 0, 0);
+ if (!arena) {
+ ret = -ENOMEM;
+ goto out_super;
+ }
+
+ arena->infooff = cur_off;
+ ret = btt_info_read(arena, super);
+ if (ret)
+ goto out;
+
+ if (!arena_is_valid(arena, super, btt->nd_btt->uuid)) {
+ if (remaining == btt->rawsize) {
+ btt->init_state = INIT_NOTFOUND;
+ dev_info(to_dev(arena), "No existing arenas\n");
+ goto out;
+ } else {
+ dev_info(to_dev(arena),
+ "Found corrupted metadata!\n");
+ ret = -ENODEV;
+ goto out;
+ }
+ }
+
+ arena->external_lba_start = cur_nlba;
+ parse_arena_meta(arena, super, cur_off);
+
+ ret = btt_freelist_init(arena);
+ if (ret)
+ goto out;
+
+ ret = btt_rtt_init(arena);
+ if (ret)
+ goto out;
+
+ ret = btt_maplocks_init(arena);
+ if (ret)
+ goto out;
+
+ list_add_tail(&arena->list, &btt->arena_list);
+
+ remaining -= arena->size;
+ cur_off += arena->size;
+ cur_nlba += arena->external_nlba;
+ num_arenas++;
+
+ if (arena->nextoff == 0)
+ break;
+ }
+ btt->num_arenas = num_arenas;
+ btt->nlba = cur_nlba;
+ btt->init_state = INIT_READY;
+
+ kfree(super);
+ return ret;
+
+ out:
+ kfree(arena);
+ free_arenas(btt);
+ out_super:
+ kfree(super);
+ return ret;
+}
+
+static int create_arenas(struct btt *btt)
+{
+ size_t remaining = btt->rawsize;
+ size_t cur_off = 0;
+
+ while (remaining) {
+ struct arena_info *arena;
+ size_t arena_size = min_t(u64, ARENA_MAX_SIZE, remaining);
+
+ remaining -= arena_size;
+ if (arena_size < ARENA_MIN_SIZE)
+ break;
+
+ arena = alloc_arena(btt, arena_size, btt->nlba, cur_off);
+ if (!arena) {
+ free_arenas(btt);
+ return -ENOMEM;
+ }
+ btt->nlba += arena->external_nlba;
+ if (remaining >= ARENA_MIN_SIZE)
+ arena->nextoff = arena->size;
+ else
+ arena->nextoff = 0;
+ cur_off += arena_size;
+ list_add_tail(&arena->list, &btt->arena_list);
+ }
+
+ return 0;
+}
+
+/*
+ * This function completes arena initialization by writing
+ * all the metadata.
+ * It is only called for an uninitialized arena when a write
+ * to that arena occurs for the first time.
+ */
+static int btt_arena_write_layout(struct arena_info *arena, u8 *uuid)
+{
+ int ret;
+ struct btt_sb *super;
+
+ ret = btt_map_init(arena);
+ if (ret)
+ return ret;
+
+ ret = btt_log_init(arena);
+ if (ret)
+ return ret;
+
+ super = kzalloc(sizeof(struct btt_sb), GFP_NOIO);
+ if (!super)
+ return -ENOMEM;
+
+ strncpy(super->signature, BTT_SIG, BTT_SIG_LEN);
+ memcpy(super->uuid, uuid, 16);
+ super->flags = cpu_to_le32(arena->flags);
+ super->version_major = cpu_to_le16(arena->version_major);
+ super->version_minor = cpu_to_le16(arena->version_minor);
+ super->external_lbasize = cpu_to_le32(arena->external_lbasize);
+ super->external_nlba = cpu_to_le32(arena->external_nlba);
+ super->internal_lbasize = cpu_to_le32(arena->internal_lbasize);
+ super->internal_nlba = cpu_to_le32(arena->internal_nlba);
+ super->nfree = cpu_to_le32(arena->nfree);
+ super->infosize = cpu_to_le32(sizeof(struct btt_sb));
+
+ /*
+ * TODO: make these relative to arena start. For now we get this
+ * since each file = 1 arena = 1 dimm, but will change
+ */
+ super->nextoff = cpu_to_le64(arena->nextoff);
+ /*
+ * Subtract arena->infooff (arena start) so numbers are relative
+ * to 'this' arena
+ */
+ super->dataoff = cpu_to_le64(arena->dataoff - arena->infooff);
+ super->mapoff = cpu_to_le64(arena->mapoff - arena->infooff);
+ super->logoff = cpu_to_le64(arena->logoff - arena->infooff);
+ super->info2off = cpu_to_le64(arena->info2off - arena->infooff);
+
+ super->checksum = cpu_to_le64(btt_sb_checksum(super));
+
+ ret = btt_info_write(arena, super);
+
+ kfree(super);
+ return ret;
+}
+
+/*
+ * This function completes the initialization for the BTT namespace
+ * such that it is ready to accept IOs
+ */
+static int btt_meta_init(struct btt *btt)
+{
+ int ret = 0;
+ struct arena_info *arena;
+
+ mutex_lock(&btt->init_lock);
+ list_for_each_entry(arena, &btt->arena_list, list) {
+ ret = btt_arena_write_layout(arena, btt->nd_btt->uuid);
+ if (ret)
+ goto unlock;
+
+ ret = btt_freelist_init(arena);
+ if (ret)
+ goto unlock;
+
+ ret = btt_rtt_init(arena);
+ if (ret)
+ goto unlock;
+
+ ret = btt_maplocks_init(arena);
+ if (ret)
+ goto unlock;
+ }
+
+ btt->init_state = INIT_READY;
+
+ unlock:
+ mutex_unlock(&btt->init_lock);
+ return ret;
+}
+
+/*
+ * This function calculates the arena in which the given LBA lies
+ * by doing a linear walk. This is acceptable since we expect only
+ * a few arenas. If we have backing devices that get much larger,
+ * we can construct a balanced binary tree of arenas at init time
+ * so that this range search becomes faster.
+ */
+static int lba_to_arena(struct btt *btt, sector_t sector, __u32 *premap,
+ struct arena_info **arena)
+{
+ struct arena_info *arena_list;
+ __u64 lba = div_u64(sector << SECTOR_SHIFT, btt->lbasize);
+
+ list_for_each_entry(arena_list, &btt->arena_list, list) {
+ if (lba < arena_list->external_nlba) {
+ *arena = arena_list;
+ *premap = lba;
+ return 0;
+ }
+ lba -= arena_list->external_nlba;
+ }
+
+ return -EIO;
+}
+
+/*
+ * The following (lock_map, unlock_map) are mostly just to improve
+ * readability, since they index into an array of locks
+ */
+static void lock_map(struct arena_info *arena, u32 premap)
+{
+ u32 idx = (premap * MAP_ENT_SIZE / L1_CACHE_BYTES) % arena->nfree;
+
+ spin_lock(&arena->map_locks[idx].lock);
+}
+
+static void unlock_map(struct arena_info *arena, u32 premap)
+{
+ u32 idx = (premap * MAP_ENT_SIZE / L1_CACHE_BYTES) % arena->nfree;
+
+ spin_unlock(&arena->map_locks[idx].lock);
+}
+
+static u64 to_namespace_offset(struct arena_info *arena, u64 lba)
+{
+ return arena->dataoff + ((u64)lba * arena->internal_lbasize);
+}
+
+static int btt_data_read(struct arena_info *arena, struct page *page,
+ unsigned int off, u32 lba, u32 len)
+{
+ int ret;
+ u64 nsoff = to_namespace_offset(arena, lba);
+ void *mem = kmap_atomic(page);
+
+ ret = arena_rw_bytes(arena, mem + off, len, nsoff, READ);
+ kunmap_atomic(mem);
+
+ return ret;
+}
+
+static int btt_data_write(struct arena_info *arena, u32 lba,
+ struct page *page, unsigned int off, u32 len)
+{
+ int ret;
+ u64 nsoff = to_namespace_offset(arena, lba);
+ void *mem = kmap_atomic(page);
+
+ ret = arena_rw_bytes(arena, mem + off, len, nsoff, WRITE);
+ kunmap_atomic(mem);
+
+ return ret;
+}
+
+static void zero_fill_data(struct page *page, unsigned int off, u32 len)
+{
+ void *mem = kmap_atomic(page);
+
+ memset(mem + off, 0, len);
+ kunmap_atomic(mem);
+}
+
+static int btt_read_pg(struct btt *btt, struct page *page, unsigned int off,
+ sector_t sector, unsigned int len)
+{
+ int ret = 0;
+ int t_flag, e_flag;
+ struct arena_info *arena = NULL;
+ u32 lane = 0, premap, postmap;
+
+ while (len) {
+ u32 cur_len;
+
+ lane = nd_region_acquire_lane(btt->nd_region);
+
+ ret = lba_to_arena(btt, sector, &premap, &arena);
+ if (ret)
+ goto out_lane;
+
+ cur_len = min(arena->external_lbasize, len);
+
+ ret = btt_map_read(arena, premap, &postmap, &t_flag, &e_flag);
+ if (ret)
+ goto out_lane;
+
+ /*
+ * We loop to make sure that the post map LBA didn't change
+ * from under us between writing the RTT and doing the actual
+ * read.
+ */
+ while (1) {
+ u32 new_map;
+
+ if (t_flag) {
+ zero_fill_data(page, off, cur_len);
+ goto out_lane;
+ }
+
+ if (e_flag) {
+ ret = -EIO;
+ goto out_lane;
+ }
+
+ arena->rtt[lane] = RTT_VALID | postmap;
+ /*
+ * Barrier to make sure this write is not reordered
+ * to do the verification map_read before the RTT store
+ */
+ barrier();
+
+ ret = btt_map_read(arena, premap, &new_map, &t_flag,
+ &e_flag);
+ if (ret)
+ goto out_rtt;
+
+ if (postmap == new_map)
+ break;
+
+ postmap = new_map;
+ }
+
+ ret = btt_data_read(arena, page, off, postmap, cur_len);
+ if (ret)
+ goto out_rtt;
+
+ arena->rtt[lane] = RTT_INVALID;
+ nd_region_release_lane(btt->nd_region, lane);
+
+ len -= cur_len;
+ off += cur_len;
+ sector += arena->external_lbasize >> SECTOR_SHIFT;
+ }
+
+ return 0;
+
+ out_rtt:
+ arena->rtt[lane] = RTT_INVALID;
+ out_lane:
+ nd_region_release_lane(btt->nd_region, lane);
+ return ret;
+}
+
+static int btt_write_pg(struct btt *btt, sector_t sector, struct page *page,
+ unsigned int off, unsigned int len)
+{
+ int ret = 0;
+ struct arena_info *arena = NULL;
+ u32 premap = 0, old_postmap, new_postmap, lane = 0, i;
+ struct log_entry log;
+ int sub;
+
+ while (len) {
+ u32 cur_len;
+
+ lane = nd_region_acquire_lane(btt->nd_region);
+
+ ret = lba_to_arena(btt, sector, &premap, &arena);
+ if (ret)
+ goto out_lane;
+ cur_len = min(arena->external_lbasize, len);
+
+ if ((arena->flags & IB_FLAG_ERROR_MASK) != 0) {
+ ret = -EIO;
+ goto out_lane;
+ }
+
+ new_postmap = arena->freelist[lane].block;
+
+ /* Wait if the new block is being read from */
+ for (i = 0; i < arena->nfree; i++)
+ while (arena->rtt[i] == (RTT_VALID | new_postmap))
+ cpu_relax();
+
+ if (new_postmap >= arena->internal_nlba) {
+ ret = -EIO;
+ goto out_lane;
+ } else
+ ret = btt_data_write(arena, new_postmap, page,
+ off, cur_len);
+ if (ret)
+ goto out_lane;
+
+ lock_map(arena, premap);
+ ret = btt_map_read(arena, premap, &old_postmap, NULL, NULL);
+ if (ret)
+ goto out_map;
+ if (old_postmap >= arena->internal_nlba) {
+ ret = -EIO;
+ goto out_map;
+ }
+
+ log.lba = cpu_to_le32(premap);
+ log.old_map = cpu_to_le32(old_postmap);
+ log.new_map = cpu_to_le32(new_postmap);
+ log.seq = cpu_to_le32(arena->freelist[lane].seq);
+ sub = arena->freelist[lane].sub;
+ ret = btt_flog_write(arena, lane, sub, &log);
+ if (ret)
+ goto out_map;
+
+ ret = btt_map_write(arena, premap, new_postmap, 0, 0);
+ if (ret)
+ goto out_map;
+
+ unlock_map(arena, premap);
+ nd_region_release_lane(btt->nd_region, lane);
+
+ len -= cur_len;
+ off += cur_len;
+ sector += arena->external_lbasize >> SECTOR_SHIFT;
+ }
+
+ return 0;
+
+ out_map:
+ unlock_map(arena, premap);
+ out_lane:
+ nd_region_release_lane(btt->nd_region, lane);
+ return ret;
+}
+
+static int btt_do_bvec(struct btt *btt, struct page *page,
+ unsigned int len, unsigned int off, int rw,
+ sector_t sector)
+{
+ int ret;
+
+ if (rw == READ) {
+ ret = btt_read_pg(btt, page, off, sector, len);
+ flush_dcache_page(page);
+ } else {
+ flush_dcache_page(page);
+ ret = btt_write_pg(btt, sector, page, off, len);
+ }
+
+ return ret;
+}
+
+static void btt_make_request(struct request_queue *q, struct bio *bio)
+{
+ struct block_device *bdev = bio->bi_bdev;
+ struct btt *btt = q->queuedata;
+ int rw;
+ struct bio_vec bvec;
+ sector_t sector;
+ struct bvec_iter iter;
+ int err = 0;
+
+ sector = bio->bi_iter.bi_sector;
+ if (bio_end_sector(bio) > get_capacity(bdev->bd_disk)) {
+ err = -EIO;
+ goto out;
+ }
+
+ BUG_ON(bio->bi_rw & REQ_DISCARD);
+
+ rw = bio_rw(bio);
+ if (rw == READA)
+ rw = READ;
+
+ bio_for_each_segment(bvec, bio, iter) {
+ unsigned int len = bvec.bv_len;
+
+ BUG_ON(len > PAGE_SIZE);
+ /* Make sure len is a multiple of lbasize */
+ /* XXX is this right? */
+ BUG_ON(len < btt->lbasize);
+ BUG_ON(len % btt->lbasize);
+
+ err = btt_do_bvec(btt, bvec.bv_page, len, bvec.bv_offset,
+ rw, sector);
+ if (err) {
+ dev_info(&btt->nd_btt->dev,
+ "io error in %s sector %lld, len %d,\n",
+ (rw == READ) ? "READ" : "WRITE",
+ (unsigned long long) sector, len);
+ goto out;
+ }
+ sector += len >> SECTOR_SHIFT;
+ }
+
+out:
+ bio_endio(bio, err);
+}
+
+static int btt_getgeo(struct block_device *bd, struct hd_geometry *geo)
+{
+ /* some standard values */
+ geo->heads = 1 << 6;
+ geo->sectors = 1 << 5;
+ geo->cylinders = get_capacity(bd->bd_disk) >> 11;
+ return 0;
+}
+
+static const struct block_device_operations btt_fops = {
+ .owner = THIS_MODULE,
+ /* TODO: Disable rw_page till lazy init is reworked */
+ /*.rw_page = btt_rw_page, */
+ .getgeo = btt_getgeo,
+};
+
+static int btt_blk_init(struct btt *btt)
+{
+ int ret;
+
+ /* create a new disk and request queue for btt */
+ btt->btt_queue = blk_alloc_queue(GFP_KERNEL);
+ if (!btt->btt_queue)
+ return -ENOMEM;
+
+ btt->btt_disk = alloc_disk(0);
+ if (!btt->btt_disk) {
+ ret = -ENOMEM;
+ goto out_free_queue;
+ }
+
+ sprintf(btt->btt_disk->disk_name, "%s", dev_name(&btt->nd_btt->dev));
+ btt->btt_disk->driverfs_dev = &btt->nd_btt->dev;
+ btt->btt_disk->major = btt_major;
+ btt->btt_disk->first_minor = 0;
+ btt->btt_disk->fops = &btt_fops;
+ btt->btt_disk->private_data = btt;
+ btt->btt_disk->queue = btt->btt_queue;
+ btt->btt_disk->flags = GENHD_FL_EXT_DEVT;
+
+ blk_queue_make_request(btt->btt_queue, btt_make_request);
+ blk_queue_max_hw_sectors(btt->btt_queue, 1024);
+ blk_queue_bounce_limit(btt->btt_queue, BLK_BOUNCE_ANY);
+ blk_queue_logical_block_size(btt->btt_queue, btt->lbasize);
+ btt->btt_queue->queuedata = btt;
+
+ set_capacity(btt->btt_disk, btt->nlba * btt->lbasize >> SECTOR_SHIFT);
+ add_disk(btt->btt_disk);
+
+ return 0;
+
+out_free_queue:
+ blk_cleanup_queue(btt->btt_queue);
+ return ret;
+}
+
+static void btt_blk_cleanup(struct btt *btt)
+{
+ del_gendisk(btt->btt_disk);
+ put_disk(btt->btt_disk);
+ blk_cleanup_queue(btt->btt_queue);
+}
+
+/**
+ * btt_init - initialize a block translation table for the given device
+ * @nd_btt: device with BTT geometry and backing device info
+ * @rawsize: raw size in bytes of the backing device
+ * @lbasize: lba size of the backing device
+ * @uuid: A uuid for the backing device - this is stored on media
+ * @maxlane: maximum number of parallel requests the device can handle
+ *
+ * Initialize a Block Translation Table on a backing device to provide
+ * single sector power fail atomicity.
+ *
+ * Context:
+ * Might sleep.
+ *
+ * Returns:
+ * Pointer to a new struct btt on success, NULL on failure.
+ */
+static struct btt *btt_init(struct nd_btt *nd_btt, unsigned long long rawsize,
+ u32 lbasize, u8 *uuid, struct nd_region *nd_region)
+{
+ int ret;
+ struct btt *btt;
+ struct device *dev = &nd_btt->dev;
+
+ btt = kzalloc(sizeof(struct btt), GFP_KERNEL);
+ if (!btt)
+ return NULL;
+
+ btt->nd_btt = nd_btt;
+ btt->rawsize = rawsize;
+ btt->lbasize = lbasize;
+ INIT_LIST_HEAD(&btt->arena_list);
+ mutex_init(&btt->init_lock);
+ btt->nd_region = nd_region;
+
+ ret = discover_arenas(btt);
+ if (ret) {
+ dev_err(dev, "init: error in arena_discover: %d\n", ret);
+ goto out_free;
+ }
+
+ if (btt->init_state != INIT_READY) {
+ btt->num_arenas = (rawsize / ARENA_MAX_SIZE) +
+ ((rawsize % ARENA_MAX_SIZE) ? 1 : 0);
+ dev_dbg(dev, "init: %d arenas for %llu rawsize\n",
+ btt->num_arenas, rawsize);
+
+ ret = create_arenas(btt);
+ if (ret) {
+ dev_info(dev, "init: create_arenas: %d\n", ret);
+ goto out_free;
+ }
+
+ ret = btt_meta_init(btt);
+ if (ret) {
+ dev_err(dev, "init: error in meta_init: %d\n", ret);
+ goto out_free;
+ }
+ }
+
+ ret = btt_blk_init(btt);
+ if (ret) {
+ dev_err(dev, "init: error in blk_init: %d\n", ret);
+ goto out_free;
+ }
+
+ btt_debugfs_init(btt);
+
+ return btt;
+
+ out_free:
+ kfree(btt);
+ return NULL;
+}
+
+/**
+ * btt_fini - de-initialize a BTT
+ * @btt: the BTT handle that was generated by btt_init
+ *
+ * De-initialize a Block Translation Table on device removal
+ *
+ * Context:
+ * Might sleep.
+ */
+static void btt_fini(struct btt *btt)
+{
+ if (btt) {
+ btt_blk_cleanup(btt);
+ free_arenas(btt);
+ debugfs_remove_recursive(btt->debugfs_dir);
+ kfree(btt);
+ }
+}
+
+static int link_btt(struct nd_btt *nd_btt)
+{
+ struct block_device *bdev = nd_btt->backing_dev;
+ struct kobject *dir = &part_to_dev(bdev->bd_part)->kobj;
+
+ return sysfs_create_link(dir, &nd_btt->dev.kobj, "nd_btt");
+}
+
+static void unlink_btt(struct nd_btt *nd_btt)
+{
+ struct block_device *bdev = nd_btt->backing_dev;
+ struct kobject *dir;
+
+ /* if backing_dev was deleted first we may have nothing to unlink */
+ if (!nd_btt->backing_dev)
+ return;
+
+ dir = &part_to_dev(bdev->bd_part)->kobj;
+ sysfs_remove_link(dir, "nd_btt");
+}
+
+static int nd_btt_probe(struct device *dev)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+ struct nd_io_claim *ndio_claim = nd_btt->ndio_claim;
+ struct nd_region *nd_region;
+ struct block_device *bdev;
+ struct btt *btt;
+ size_t rawsize;
+ int rc;
+
+ if (!ndio_claim || !nd_btt->uuid || !nd_btt->backing_dev
+ || !nd_btt->lbasize)
+ return -ENODEV;
+
+ rc = link_btt(nd_btt);
+ if (rc)
+ return rc;
+
+ bdev = nd_btt->backing_dev;
+ /* the first 4K of a device is padding */
+ nd_btt->offset = nd_partition_offset(bdev) + SZ_4K;
+ rawsize = (bdev->bd_part->nr_sects << SECTOR_SHIFT) - SZ_4K;
+ if (rawsize < ARENA_MIN_SIZE) {
+ rc = -ENXIO;
+ goto err_btt;
+ }
+ nd_btt->ndio = nd_btt->ndio_claim->parent;
+ nd_region = to_nd_region(nd_btt->ndio->dev->parent);
+ btt = btt_init(nd_btt, rawsize, nd_btt->lbasize, nd_btt->uuid,
+ nd_region);
+ if (!btt) {
+ rc = -ENOMEM;
+ goto err_btt;
+ }
+ btt->backing_dev = bdev;
+ dev_set_drvdata(dev, btt);
+
+ return 0;
+ err_btt:
+ unlink_btt(nd_btt);
+ return rc;
+}
+
+static int nd_btt_remove(struct device *dev)
+{
+ struct nd_btt *nd_btt = to_nd_btt(dev);
+ struct btt *btt = dev_get_drvdata(dev);
+
+ btt_fini(btt);
+ unlink_btt(nd_btt);
+
+ return 0;
+}
+
+static struct nd_device_driver nd_btt_driver = {
+ .probe = nd_btt_probe,
+ .remove = nd_btt_remove,
+ .drv = {
+ .name = "nd_btt",
+ },
+ .type = ND_DRIVER_BTT,
+};
+
+static int __init nd_btt_init(void)
+{
+ int rc;
+
+ BUILD_BUG_ON(sizeof(struct btt_sb) != SZ_4K);
+
+ btt_major = register_blkdev(0, "btt");
+ if (btt_major < 0)
+ return btt_major;
+
+ debugfs_root = debugfs_create_dir("btt", NULL);
+ if (IS_ERR_OR_NULL(debugfs_root)) {
+ rc = -ENXIO;
+ goto err_debugfs;
+ }
+
+ rc = nd_driver_register(&nd_btt_driver);
+ if (rc < 0)
+ goto err_driver;
+ return 0;
+
+ err_driver:
+ debugfs_remove_recursive(debugfs_root);
+ err_debugfs:
+ unregister_blkdev(btt_major, "btt");
+
+ return rc;
+}
+
+static void __exit nd_btt_exit(void)
+{
+ driver_unregister(&nd_btt_driver.drv);
+ debugfs_remove_recursive(debugfs_root);
+ unregister_blkdev(btt_major, "btt");
+}
+
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_BTT);
+MODULE_AUTHOR("Vishal Verma <[email protected]>");
+MODULE_LICENSE("GPL v2");
+module_init(nd_btt_init);
+module_exit(nd_btt_exit);
diff --git a/drivers/block/nd/btt.h b/drivers/block/nd/btt.h
index e8f6d8e0ddd3..d4e67c75c91f 100644
--- a/drivers/block/nd/btt.h
+++ b/drivers/block/nd/btt.h
@@ -19,6 +19,39 @@
#define BTT_SIG_LEN 16
#define BTT_SIG "BTT_ARENA_INFO\0"
+#define MAP_ENT_SIZE 4
+#define MAP_TRIM_SHIFT 31
+#define MAP_TRIM_MASK (1 << MAP_TRIM_SHIFT)
+#define MAP_ERR_SHIFT 30
+#define MAP_ERR_MASK (1 << MAP_ERR_SHIFT)
+#define MAP_LBA_MASK (~((1 << MAP_TRIM_SHIFT) | (1 << MAP_ERR_SHIFT)))
+#define MAP_ENT_NORMAL 0xC0000000
+#define LOG_ENT_SIZE sizeof(struct log_entry)
+#define ARENA_MIN_SIZE (1UL << 24) /* 16 MB */
+#define ARENA_MAX_SIZE (1ULL << 39) /* 512 GB */
+#define RTT_VALID (1UL << 31)
+#define RTT_INVALID 0
+#define INT_LBASIZE_ALIGNMENT 256
+#define BTT_PG_SIZE 4096
+#define BTT_DEFAULT_NFREE ND_MAX_LANES
+#define LOG_SEQ_INIT 1
+
+#define IB_FLAG_ERROR 0x00000001
+#define IB_FLAG_ERROR_MASK 0x00000001
+
+enum btt_init_state {
+ INIT_UNCHECKED = 0,
+ INIT_NOTFOUND,
+ INIT_READY
+};
+
+struct log_entry {
+ __le32 lba;
+ __le32 old_map;
+ __le32 new_map;
+ __le32 seq;
+ __le64 padding[2];
+};
struct btt_sb {
u8 signature[BTT_SIG_LEN];
@@ -42,4 +75,111 @@ struct btt_sb {
__le64 checksum;
};
+struct free_entry {
+ u32 block;
+ u8 sub;
+ u8 seq;
+};
+
+struct aligned_lock {
+ union {
+ spinlock_t lock;
+ u8 cacheline_padding[L1_CACHE_BYTES];
+ };
+};
+
+/**
+ * struct arena_info - handle for an arena
+ * @size: Size in bytes this arena occupies on the raw device.
+ * This includes arena metadata.
+ * @external_lba_start: The first external LBA in this arena.
+ * @internal_nlba: Number of internal blocks available in the arena
+ * including nfree reserved blocks
+ * @internal_lbasize: Internal and external lba sizes may be different as
+ * we can round up 'odd' external lbasizes such as 520B
+ * to be aligned.
+ * @external_nlba: Number of blocks contributed by the arena to the number
+ * reported to upper layers. (internal_nlba - nfree)
+ * @external_lbasize: LBA size as exposed to upper layers.
+ * @nfree: A reserved number of 'free' blocks that is used to
+ * handle incoming writes.
+ * @version_major: Metadata layout version major.
+ * @version_minor: Metadata layout version minor.
+ * @nextoff: Offset in bytes to the start of the next arena.
+ * @infooff: Offset in bytes to the info block of this arena.
+ * @dataoff: Offset in bytes to the data area of this arena.
+ * @mapoff: Offset in bytes to the map area of this arena.
+ * @logoff: Offset in bytes to the log area of this arena.
+ * @info2off: Offset in bytes to the backup info block of this arena.
+ * @freelist: Pointer to in-memory list of free blocks
+ * @rtt: Pointer to in-memory "Read Tracking Table"
+ * @map_locks: Spinlocks protecting concurrent map writes
+ * @nd_btt: Pointer to parent nd_btt structure.
+ * @list: List head for list of arenas
+ * @debugfs_dir: Debugfs dentry
+ * @flags: Arena flags - may signify error states.
+ *
+ * arena_info is a per-arena handle. Once an arena is narrowed down for an
+ * IO, this struct is passed around for the duration of the IO.
+ */
+struct arena_info {
+ u64 size; /* Total bytes for this arena */
+ u64 external_lba_start;
+ u32 internal_nlba;
+ u32 internal_lbasize;
+ u32 external_nlba;
+ u32 external_lbasize;
+ u32 nfree;
+ u16 version_major;
+ u16 version_minor;
+ /* Byte offsets to the different on-media structures */
+ u64 nextoff;
+ u64 infooff;
+ u64 dataoff;
+ u64 mapoff;
+ u64 logoff;
+ u64 info2off;
+ /* Pointers to other in-memory structures for this arena */
+ struct free_entry *freelist;
+ u32 *rtt;
+ struct aligned_lock *map_locks;
+ struct nd_btt *nd_btt;
+ struct list_head list;
+ struct dentry *debugfs_dir;
+ /* Arena flags */
+ u32 flags;
+};
+
+/**
+ * struct btt - handle for a BTT instance
+ * @btt_disk: Pointer to the gendisk for BTT device
+ * @btt_queue: Pointer to the request queue for the BTT device
+ * @arena_list: Head of the list of arenas
+ * @debugfs_dir: Debugfs dentry
+ * @backing_dev: Backing block device for the BTT
+ * @nd_btt: Parent nd_btt struct
+ * @nlba: Number of logical blocks exposed to the upper layers
+ * after removing the amount of space needed by metadata
+ * @rawsize: Total size in bytes of the available backing device
+ * @lbasize: LBA size as requested and presented to upper layers
+ * @lanes: Per-lane spinlocks
+ * @init_lock: Mutex used for the BTT initialization
+ * @init_state: Flag describing the initialization state for the BTT
+ * @num_arenas: Number of arenas in the BTT instance
+ */
+struct btt {
+ struct gendisk *btt_disk;
+ struct request_queue *btt_queue;
+ struct list_head arena_list;
+ struct dentry *debugfs_dir;
+ struct block_device *backing_dev;
+ struct nd_btt *nd_btt;
+ u64 nlba;
+ unsigned long long rawsize;
+ u32 lbasize;
+ struct nd_region *nd_region;
+ struct mutex init_lock;
+ int init_state;
+ int num_arenas;
+};
#endif
diff --git a/drivers/block/nd/btt_devs.c b/drivers/block/nd/btt_devs.c
index 746d582910b6..6db50443cb2f 100644
--- a/drivers/block/nd/btt_devs.c
+++ b/drivers/block/nd/btt_devs.c
@@ -342,7 +342,8 @@ struct nd_btt *nd_btt_create(struct nd_bus *nd_bus)
*/
u64 btt_sb_checksum(struct btt_sb *btt_sb)
{
- u64 sum, sum_save;
+ u64 sum;
+ __le64 sum_save;
sum_save = btt_sb->checksum;
btt_sb->checksum = 0;
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index a093c6468a53..065ab9b5ec61 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -149,6 +149,7 @@ u64 nd_fletcher64(void __iomem *addr, size_t len)
return hi32 << 32 | lo32;
}
+EXPORT_SYMBOL(nd_fletcher64);
static void nd_bus_release(struct device *dev)
{
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index b4f95ccc0252..ef77b5893628 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -21,6 +21,12 @@
#include "label.h"
enum {
+ /*
+ * Limits the maximum number of block apertures a dimm can
+ * support and is an input to the geometry/on-disk-format of a
+ * BTT instance
+ */
+ ND_MAX_LANES = 256,
SECTOR_SHIFT = 9,
};
@@ -109,6 +115,7 @@ struct nd_region {
u64 ndr_size;
u64 ndr_start;
int id;
+ int num_lanes;
struct nd_mapping mapping[0];
};
@@ -239,6 +246,8 @@ struct nd_btt *to_nd_btt(struct device *dev);
struct btt_sb;
u64 btt_sb_checksum(struct btt_sb *btt_sb);
struct nd_region *to_nd_region(struct device *dev);
+unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
+void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
int nd_region_to_namespace_type(struct nd_region *nd_region);
int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
u64 nd_region_interleave_set_cookie(struct nd_region *nd_region);
diff --git a/drivers/block/nd/region_devs.c b/drivers/block/nd/region_devs.c
index 1b2c81e0eb0f..0aab8bb0a982 100644
--- a/drivers/block/nd/region_devs.c
+++ b/drivers/block/nd/region_devs.c
@@ -22,6 +22,72 @@
#include <asm-generic/io-64-nonatomic-lo-hi.h>
+static struct {
+ struct {
+ int count[CONFIG_ND_MAX_REGIONS];
+ spinlock_t lock[CONFIG_ND_MAX_REGIONS];
+ } lane[NR_CPUS];
+} nd_percpu_lane;
+
+static void nd_region_init_locks(void)
+{
+ int i, j;
+
+ for (i = 0; i < NR_CPUS; i++)
+ for (j = 0; j < CONFIG_ND_MAX_REGIONS; j++)
+ spin_lock_init(&nd_percpu_lane.lane[i].lock[j]);
+}
+
+/**
+ * nd_region_acquire_lane - allocate and lock a lane
+ * @nd_region: region id and number of lanes possible
+ *
+ * A lane correlates to a BLK-data-window and/or a log slot in the BTT.
+ * We optimize for the common case where there are 256 lanes, one
+ * per-cpu. For larger systems we need to lock to share lanes. For now
+ * this implementation assumes the cost of maintaining an allocator for
+ * free lanes is on the order of the lock hold time, so it implements a
+ * static lane = cpu % num_lanes mapping.
+ *
+ * In the case of a BTT instance on top of a BLK namespace a lane may be
+ * acquired recursively. We lock on the first instance.
+ *
+ * In the case of a BTT instance on top of PMEM, we only acquire a lane
+ * for the BTT metadata updates.
+ */
+unsigned int nd_region_acquire_lane(struct nd_region *nd_region)
+{
+ unsigned int cpu, lane;
+
+ cpu = get_cpu();
+
+ if (nd_region->num_lanes < NR_CPUS) {
+ unsigned int id = nd_region->id;
+
+ lane = cpu % nd_region->num_lanes;
+ if (nd_percpu_lane.lane[cpu].count[id]++ == 0)
+ spin_lock(&nd_percpu_lane.lane[lane].lock[id]);
+ } else
+ lane = cpu;
+
+ return lane;
+}
+EXPORT_SYMBOL(nd_region_acquire_lane);
+
+void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane)
+{
+ if (nd_region->num_lanes < NR_CPUS) {
+ unsigned int cpu = get_cpu();
+ unsigned int id = nd_region->id;
+
+ if (--nd_percpu_lane.lane[cpu].count[id] == 0)
+ spin_unlock(&nd_percpu_lane.lane[lane].lock[id]);
+ put_cpu();
+ }
+ put_cpu();
+}
+EXPORT_SYMBOL(nd_region_release_lane);
+
static DEFINE_IDA(region_ida);
static void nd_region_release(struct device *dev)
@@ -679,6 +745,8 @@ static void nd_blk_init(struct nd_bus *nd_bus, struct nd_region *nd_region,
}
nd_region->ndr_mappings = 1;
+ nd_region->num_lanes = min_t(unsigned short,
+ readw(&nd_mem->nfit_bdw->num_bdw), ND_MAX_LANES);
nd_mapping = &nd_region->mapping[0];
nd_mapping->nd_dimm = nd_dimm;
nd_mapping->size = readq(&nd_mem->nfit_bdw->blk_capacity);
@@ -692,6 +760,7 @@ static void nd_spa_range_init(struct nd_bus *nd_bus, struct nd_region *nd_region
struct nd_spa *nd_spa = nd_region->nd_spa;
u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
+ nd_region->num_lanes = ND_MAX_LANES;
nd_region->dev.type = type;
for (i = 0; i < nd_region->ndr_mappings; i++) {
struct nd_memdev *nd_memdev = nd_memdev_from_spa(nd_bus,
@@ -736,6 +805,12 @@ static struct nd_region *nd_region_create(struct nd_bus *nd_bus,
if (nd_region->id < 0) {
kfree(nd_region);
return NULL;
+ } else if (nd_region->id >= CONFIG_ND_MAX_REGIONS) {
+ dev_err(&nd_bus->dev, "max region limit %d reached\n",
+ CONFIG_ND_MAX_REGIONS);
+ ida_simple_remove(&region_ida, nd_region->id);
+ kfree(nd_region);
+ return NULL;
}
nd_region->nd_spa = nd_spa;
nd_region->ndr_mappings = num_mappings;
@@ -769,6 +844,8 @@ int nd_bus_register_regions(struct nd_bus *nd_bus)
struct nd_spa *nd_spa;
int rc = 0;
+ nd_region_init_locks();
+
mutex_lock(&nd_bus_list_mutex);
list_for_each_entry(nd_spa, &nd_bus->spas, list) {
int spa_type;
From: Ross Zwisler <[email protected]>
Block-device driver for BLK namespaces described by DCR (dimm control
region), BDW (block data window), and IDT (interleave descriptor) NFIT
structures.
The BIOS may choose to interleave multiple dimms into a given SPA
(system physical address) range, so this driver includes core nd
infrastructure for multiplexing multiple BLK namespace devices on a
single request_mem_region() + ioremap() mapping. Note, the math and
table walking to de-interleave the memory space on each I/O may prove to
be too computationally expensive, in which case we would look to replace
it with a flat lookup implementation.
A new nd core api nd_blk_validate_namespace() is introduced to check
that the labels on the DIMM are in sync with the current set of
dpa-resources assigned to the namespace. nd_blk_validate_namespace()
prevents enabling the namespace when they are out of sync. Userspace
can retry writing the labels in that scenario.
Finally, enable testing of the BLK namespace infrastructure via
nfit_test. Provide a mock implementation of nd_blk_do_io() to route
block-data-window accesses to an nfit_test allocation simulating BLK
storage.
Cc: Andy Lutomirski <[email protected]>
Cc: Boaz Harrosh <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Ross Zwisler <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
---
drivers/block/nd/Kconfig | 19 ++
drivers/block/nd/Makefile | 3
drivers/block/nd/blk.c | 269 ++++++++++++++++++++++++++++++++
drivers/block/nd/core.c | 57 ++++++-
drivers/block/nd/namespace_devs.c | 47 ++++++
drivers/block/nd/nd-private.h | 24 +++
drivers/block/nd/nd.h | 51 ++++++
drivers/block/nd/region.c | 11 +
drivers/block/nd/region_devs.c | 314 ++++++++++++++++++++++++++++++++++++-
drivers/block/nd/test/iomap.c | 53 ++++++
drivers/block/nd/test/nfit.c | 3
drivers/block/nd/test/nfit_test.h | 14 ++
12 files changed, 851 insertions(+), 14 deletions(-)
create mode 100644 drivers/block/nd/blk.c
diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
index 29d9f8e4eedb..72580cb0e39c 100644
--- a/drivers/block/nd/Kconfig
+++ b/drivers/block/nd/Kconfig
@@ -70,6 +70,9 @@ config NFIT_TEST
load. Kconfig does not allow for numerical value
dependencies, so we can only warn at runtime.
+ Enabling this option will degrade the performance of other BLK
+ namespaces. Do not enable for production environments.
+
Say N unless you are doing development of the 'nd' subsystem.
config BLK_DEV_PMEM
@@ -89,6 +92,22 @@ config BLK_DEV_PMEM
Say Y if you want to use a NVDIMM described by NFIT
+config ND_BLK
+ tristate "BLK: Block data window (aperture) device support"
+ depends on ND_CORE
+ default ND_CORE
+ help
+ This driver performs I/O using a set of DCR/BDW defined
+ apertures. The set of apertures will all access the one
+ DIMM. Multiple windows allow multiple concurrent accesses,
+ much like tagged-command-queuing, and would likely be used
+ by different threads or different CPUs.
+
+ The NFIT specification defines a standard format for a Block
+ Data Window.
+
+	  Say Y if you want to use an NVDIMM described by the NFIT.
+
config ND_BTT_DEVS
bool
diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
index 2dc1ab6fdef2..df104f2123a4 100644
--- a/drivers/block/nd/Makefile
+++ b/drivers/block/nd/Makefile
@@ -12,12 +12,14 @@ ldflags-y += --wrap=ioremap_nocache
ldflags-y += --wrap=iounmap
ldflags-y += --wrap=__request_region
ldflags-y += --wrap=__release_region
+ldflags-y += --wrap=nd_blk_do_io
endif
obj-$(CONFIG_ND_CORE) += nd.o
obj-$(CONFIG_NFIT_ACPI) += nd_acpi.o
obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
obj-$(CONFIG_ND_BTT) += nd_btt.o
+obj-$(CONFIG_ND_BLK) += nd_blk.o
nd_acpi-y := acpi.o
@@ -34,3 +36,4 @@ nd-$(CONFIG_ND_BTT_DEVS) += btt_devs.o
nd_pmem-y := pmem.o
nd_btt-y := btt.o
+nd_blk-y := blk.o
diff --git a/drivers/block/nd/blk.c b/drivers/block/nd/blk.c
new file mode 100644
index 000000000000..9e32ae610d15
--- /dev/null
+++ b/drivers/block/nd/blk.c
@@ -0,0 +1,269 @@
+/*
+ * NVDIMM Block Window Driver
+ * Copyright (c) 2014, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/genhd.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/nd.h>
+#include <linux/sizes.h>
+#include "nd.h"
+
+struct nd_blk_device {
+ struct request_queue *queue;
+ struct gendisk *disk;
+ struct nd_namespace_blk *nsblk;
+ struct nd_blk_window *ndbw;
+ struct nd_io ndio;
+ size_t disk_size;
+ int id;
+};
+
+static int nd_blk_major;
+static DEFINE_IDA(nd_blk_ida);
+
+static resource_size_t to_dev_offset(struct nd_namespace_blk *nsblk,
+ resource_size_t ns_offset, unsigned int len)
+{
+ int i;
+
+ for (i = 0; i < nsblk->num_resources; i++) {
+ if (ns_offset < resource_size(nsblk->res[i])) {
+ if (ns_offset + len > resource_size(nsblk->res[i])) {
+ dev_WARN_ONCE(&nsblk->dev, 1,
+ "%s: illegal request\n", __func__);
+ return SIZE_MAX;
+ }
+ return nsblk->res[i]->start + ns_offset;
+ }
+ ns_offset -= resource_size(nsblk->res[i]);
+ }
+
+ dev_WARN_ONCE(&nsblk->dev, 1, "%s: request out of range\n", __func__);
+ return SIZE_MAX;
+}
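As an aside for reviewers: the namespace-to-device translation above is plain arithmetic over a list of discontiguous DPA resources, so it is easy to sanity-check outside the kernel. A minimal userspace sketch (hypothetical struct names, not the kernel types) of the same walk:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace sketch of to_dev_offset(): a BLK namespace is a virtual
 * concatenation of discontiguous DPA resources, so walk them until
 * the namespace offset falls inside one.  Returns UINT64_MAX when the
 * request is out of range or straddles a resource boundary, mirroring
 * the SIZE_MAX error returns in the patch.
 */
struct dpa_res {
	uint64_t start;	/* DIMM-physical-address base of this extent */
	uint64_t size;	/* length of this extent in bytes */
};

static uint64_t sketch_to_dev_offset(const struct dpa_res *res, int nres,
				     uint64_t ns_offset, unsigned int len)
{
	int i;

	for (i = 0; i < nres; i++) {
		if (ns_offset < res[i].size) {
			if (ns_offset + len > res[i].size)
				return UINT64_MAX; /* crosses an extent */
			return res[i].start + ns_offset;
		}
		ns_offset -= res[i].size;
	}
	return UINT64_MAX; /* past the end of the namespace */
}
```

Note a consequence visible in the sketch: callers must split I/O at resource boundaries (the bio path does so per-segment), since a single request may not span two extents.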
+
+static void nd_blk_make_request(struct request_queue *q, struct bio *bio)
+{
+ struct block_device *bdev = bio->bi_bdev;
+ struct gendisk *disk = bdev->bd_disk;
+ struct nd_namespace_blk *nsblk;
+ struct nd_blk_device *blk_dev;
+ struct nd_blk_window *ndbw;
+ struct bvec_iter iter;
+ struct bio_vec bvec;
+ int err = 0, rw;
+ sector_t sector;
+
+ sector = bio->bi_iter.bi_sector;
+ if (bio_end_sector(bio) > get_capacity(disk)) {
+ err = -EIO;
+ goto out;
+ }
+
+ BUG_ON(bio->bi_rw & REQ_DISCARD);
+
+ rw = bio_data_dir(bio);
+
+ blk_dev = disk->private_data;
+ nsblk = blk_dev->nsblk;
+ ndbw = blk_dev->ndbw;
+ bio_for_each_segment(bvec, bio, iter) {
+ unsigned int len = bvec.bv_len;
+ resource_size_t dev_offset;
+ void *iobuf;
+
+ BUG_ON(len > PAGE_SIZE);
+
+ dev_offset = to_dev_offset(nsblk, sector << SECTOR_SHIFT, len);
+ if (dev_offset == SIZE_MAX) {
+ err = -EIO;
+ goto out;
+ }
+
+ iobuf = kmap_atomic(bvec.bv_page);
+ err = nd_blk_do_io(ndbw, iobuf + bvec.bv_offset, len, rw,
+ dev_offset);
+ kunmap_atomic(iobuf);
+ if (err)
+ goto out;
+
+ sector += len >> SECTOR_SHIFT;
+ }
+
+ out:
+ bio_endio(bio, err);
+}
+
+static int nd_blk_rw_bytes(struct nd_io *ndio, void *iobuf, size_t offset,
+ size_t n, unsigned long flags)
+{
+ struct nd_namespace_blk *nsblk;
+ struct nd_blk_device *blk_dev;
+ int rw = nd_data_dir(flags);
+ struct nd_blk_window *ndbw;
+ resource_size_t dev_offset;
+
+ blk_dev = container_of(ndio, typeof(*blk_dev), ndio);
+ ndbw = blk_dev->ndbw;
+ nsblk = blk_dev->nsblk;
+ dev_offset = to_dev_offset(nsblk, offset, n);
+
+ if (unlikely(offset + n > blk_dev->disk_size)) {
+ dev_WARN_ONCE(ndio->dev, 1, "%s: request out of range\n",
+ __func__);
+ return -EFAULT;
+ }
+
+ if (dev_offset == SIZE_MAX)
+ return -EIO;
+
+ return nd_blk_do_io(ndbw, iobuf, n, rw, dev_offset);
+}
+
+static const struct block_device_operations nd_blk_fops = {
+ .owner = THIS_MODULE,
+};
+
+static int nd_blk_probe(struct device *dev)
+{
+ struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);
+ struct nd_blk_device *blk_dev;
+ resource_size_t disk_size;
+ struct gendisk *disk;
+ int err;
+
+ disk_size = nd_namespace_blk_validate(nsblk);
+ if (disk_size < ND_MIN_NAMESPACE_SIZE)
+ return -ENXIO;
+
+ blk_dev = kzalloc(sizeof(*blk_dev), GFP_KERNEL);
+ if (!blk_dev)
+ return -ENOMEM;
+
+ blk_dev->id = ida_simple_get(&nd_blk_ida, 0, 0, GFP_KERNEL);
+ if (blk_dev->id < 0) {
+ err = blk_dev->id;
+ goto err_ida;
+ }
+
+ blk_dev->disk_size = disk_size;
+
+ blk_dev->queue = blk_alloc_queue(GFP_KERNEL);
+ if (!blk_dev->queue) {
+ err = -ENOMEM;
+ goto err_alloc_queue;
+ }
+
+ blk_queue_make_request(blk_dev->queue, nd_blk_make_request);
+ blk_queue_max_hw_sectors(blk_dev->queue, 1024);
+ blk_queue_bounce_limit(blk_dev->queue, BLK_BOUNCE_ANY);
+
+ disk = blk_dev->disk = alloc_disk(0);
+ if (!disk) {
+ err = -ENOMEM;
+ goto err_alloc_disk;
+ }
+
+ blk_dev->ndbw = &to_nd_region(nsblk->dev.parent)->bw;
+ blk_dev->nsblk = nsblk;
+
+ disk->driverfs_dev = dev;
+ disk->major = nd_blk_major;
+ disk->first_minor = 0;
+ disk->fops = &nd_blk_fops;
+ disk->private_data = blk_dev;
+ disk->queue = blk_dev->queue;
+ disk->flags = GENHD_FL_EXT_DEVT;
+ sprintf(disk->disk_name, "nd%d", blk_dev->id);
+ set_capacity(disk, disk_size >> SECTOR_SHIFT);
+
+ nd_bus_lock(dev);
+ dev_set_drvdata(dev, blk_dev);
+
+ add_disk(disk);
+ nd_init_ndio(&blk_dev->ndio, nd_blk_rw_bytes, dev, disk, 0);
+ nd_register_ndio(&blk_dev->ndio);
+ nd_bus_unlock(dev);
+
+ return 0;
+
+ err_alloc_disk:
+ blk_cleanup_queue(blk_dev->queue);
+ err_alloc_queue:
+ ida_simple_remove(&nd_blk_ida, blk_dev->id);
+ err_ida:
+ kfree(blk_dev);
+ return err;
+}
+
+static int nd_blk_remove(struct device *dev)
+{
+ /* FIXME: eventually need to get to nd_blk_device from struct device.
+ struct nd_namespace_io *nsio = to_nd_namespace_io(dev); */
+
+ struct nd_blk_device *blk_dev = dev_get_drvdata(dev);
+
+ nd_unregister_ndio(&blk_dev->ndio);
+ del_gendisk(blk_dev->disk);
+ put_disk(blk_dev->disk);
+ blk_cleanup_queue(blk_dev->queue);
+ ida_simple_remove(&nd_blk_ida, blk_dev->id);
+ kfree(blk_dev);
+
+ return 0;
+}
+
+static struct nd_device_driver nd_blk_driver = {
+ .probe = nd_blk_probe,
+ .remove = nd_blk_remove,
+ .drv = {
+ .name = "nd_blk",
+ },
+ .type = ND_DRIVER_NAMESPACE_BLOCK,
+};
+
+static int __init nd_blk_init(void)
+{
+ int rc;
+
+ rc = nfit_test_blk_init();
+ if (rc)
+ return rc;
+
+ rc = register_blkdev(0, "nd_blk");
+ if (rc < 0)
+ return rc;
+
+ nd_blk_major = rc;
+ rc = nd_driver_register(&nd_blk_driver);
+
+ if (rc < 0)
+ unregister_blkdev(nd_blk_major, "nd_blk");
+
+ return rc;
+}
+
+static void __exit nd_blk_exit(void)
+{
+ driver_unregister(&nd_blk_driver.drv);
+ unregister_blkdev(nd_blk_major, "nd_blk");
+}
+
+MODULE_AUTHOR("Ross Zwisler <[email protected]>");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_NAMESPACE_BLOCK);
+module_init(nd_blk_init);
+module_exit(nd_blk_exit);
diff --git a/drivers/block/nd/core.c b/drivers/block/nd/core.c
index 065ab9b5ec61..43ced72f8676 100644
--- a/drivers/block/nd/core.c
+++ b/drivers/block/nd/core.c
@@ -159,6 +159,7 @@ static void nd_bus_release(struct device *dev)
struct nd_mem *nd_mem, *_mem;
struct nd_dcr *nd_dcr, *_dcr;
struct nd_bdw *nd_bdw, *_bdw;
+ struct nd_idt *nd_idt, *_idt;
list_for_each_entry_safe(nd_spa, _spa, &nd_bus->spas, list) {
list_del_init(&nd_spa->list);
@@ -177,6 +178,10 @@ static void nd_bus_release(struct device *dev)
list_del_init(&nd_memdev->list);
kfree(nd_memdev);
}
+ list_for_each_entry_safe(nd_idt, _idt, &nd_bus->idts, list) {
+ list_del_init(&nd_idt->list);
+ kfree(nd_idt);
+ }
list_for_each_entry_safe(nd_mem, _mem, &nd_bus->dimms, list) {
list_del_init(&nd_mem->list);
kfree(nd_mem);
@@ -427,7 +432,9 @@ static void *nd_bus_new(struct device *parent,
return NULL;
INIT_LIST_HEAD(&nd_bus->spas);
INIT_LIST_HEAD(&nd_bus->dcrs);
+ INIT_LIST_HEAD(&nd_bus->idts);
INIT_LIST_HEAD(&nd_bus->bdws);
+ INIT_LIST_HEAD(&nd_bus->spa_maps);
INIT_LIST_HEAD(&nd_bus->memdevs);
INIT_LIST_HEAD(&nd_bus->dimms);
INIT_LIST_HEAD(&nd_bus->ndios);
@@ -436,6 +443,7 @@ static void *nd_bus_new(struct device *parent,
INIT_RADIX_TREE(&nd_bus->dimm_radix, GFP_KERNEL);
nd_bus->id = ida_simple_get(&nd_ida, 0, 0, GFP_KERNEL);
mutex_init(&nd_bus->reconfig_mutex);
+ mutex_init(&nd_bus->spa_map_mutex);
if (nd_bus->id < 0) {
kfree(nd_bus);
return NULL;
@@ -574,10 +582,21 @@ static void __iomem *add_table(struct nd_bus *nd_bus, void __iomem *table,
readw(&nfit_bdw->num_bdw));
break;
}
- /* TODO */
- case NFIT_TABLE_IDT:
- dev_dbg(&nd_bus->dev, "%s: idt\n", __func__);
+ case NFIT_TABLE_IDT: {
+ struct nd_idt *nd_idt = kzalloc(sizeof(*nd_idt), GFP_KERNEL);
+ struct nfit_idt __iomem *nfit_idt = table;
+
+ if (!nd_idt)
+ goto err;
+ INIT_LIST_HEAD(&nd_idt->list);
+ nd_idt->nfit_idt = nfit_idt;
+ list_add_tail(&nd_idt->list, &nd_bus->idts);
+ dev_dbg(&nd_bus->dev, "%s: idt index: %d num_lines: %d\n", __func__,
+ readw(&nfit_idt->idt_index),
+ readl(&nfit_idt->num_lines));
break;
+ }
+ /* TODO */
case NFIT_TABLE_FLUSH:
dev_dbg(&nd_bus->dev, "%s: flush\n", __func__);
break;
@@ -632,8 +651,11 @@ static void nd_mem_add(struct nd_bus *nd_bus, struct nd_mem *nd_mem)
{
u16 dcr_index = readw(&nd_mem->nfit_mem_dcr->dcr_index);
u16 spa_index = readw(&nd_mem->nfit_spa_dcr->spa_index);
+ struct nd_memdev *nd_memdev;
struct nd_dcr *nd_dcr;
struct nd_bdw *nd_bdw;
+ struct nd_idt *nd_idt;
+ u16 idt_index;
list_for_each_entry(nd_dcr, &nd_bus->dcrs, list) {
if (readw(&nd_dcr->nfit_dcr->dcr_index) != dcr_index)
@@ -667,6 +689,26 @@ static void nd_mem_add(struct nd_bus *nd_bus, struct nd_mem *nd_mem)
return;
nd_mem_find_spa_bdw(nd_bus, nd_mem);
+
+ if (!nd_mem->nfit_spa_bdw)
+ return;
+
+ spa_index = readw(&nd_mem->nfit_spa_bdw->spa_index);
+
+ list_for_each_entry(nd_memdev, &nd_bus->memdevs, list) {
+ if (readw(&nd_memdev->nfit_mem->spa_index) != spa_index ||
+ readw(&nd_memdev->nfit_mem->dcr_index) != dcr_index)
+ continue;
+ nd_mem->nfit_mem_bdw = nd_memdev->nfit_mem;
+ idt_index = readw(&nd_memdev->nfit_mem->idt_index);
+ list_for_each_entry(nd_idt, &nd_bus->idts, list) {
+ if (readw(&nd_idt->nfit_idt->idt_index) != idt_index)
+ continue;
+ nd_mem->nfit_idt_bdw = nd_idt->nfit_idt;
+ break;
+ }
+ break;
+ }
}
static int nd_mem_cmp(void *priv, struct list_head *__a, struct list_head *__b)
@@ -700,7 +742,9 @@ static int nd_mem_init(struct nd_bus *nd_bus)
int type = nfit_spa_type(nd_spa->nfit_spa);
struct nd_mem *nd_mem, *found;
struct nd_memdev *nd_memdev;
+ struct nd_idt *nd_idt;
u16 dcr_index;
+ u16 idt_index;
if (type != NFIT_SPA_DCR)
continue;
@@ -726,6 +770,13 @@ static int nd_mem_init(struct nd_bus *nd_bus)
INIT_LIST_HEAD(&nd_mem->list);
nd_mem->nfit_spa_dcr = nd_spa->nfit_spa;
nd_mem->nfit_mem_dcr = nd_memdev->nfit_mem;
+ idt_index = readw(&nd_memdev->nfit_mem->idt_index);
+ list_for_each_entry(nd_idt, &nd_bus->idts, list) {
+ if (readw(&nd_idt->nfit_idt->idt_index) != idt_index)
+ continue;
+ nd_mem->nfit_idt_dcr = nd_idt->nfit_idt;
+ break;
+ }
nd_mem_add(nd_bus, nd_mem);
}
}
diff --git a/drivers/block/nd/namespace_devs.c b/drivers/block/nd/namespace_devs.c
index 8414ca21917d..3e0eb585119c 100644
--- a/drivers/block/nd/namespace_devs.c
+++ b/drivers/block/nd/namespace_devs.c
@@ -151,6 +151,53 @@ static resource_size_t nd_namespace_blk_size(struct nd_namespace_blk *nsblk)
return size;
}
+resource_size_t nd_namespace_blk_validate(struct nd_namespace_blk *nsblk)
+{
+ struct nd_region *nd_region = to_nd_region(nsblk->dev.parent);
+ struct nd_mapping *nd_mapping = &nd_region->mapping[0];
+ struct nd_dimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct nd_label_id label_id;
+ struct resource *res;
+ int count, i;
+
+ if (!nsblk->uuid || !nsblk->lbasize)
+ return 0;
+
+ count = 0;
+ nd_label_gen_id(&label_id, nsblk->uuid, NSLABEL_FLAG_LOCAL);
+ for_each_dpa_resource(ndd, res) {
+ if (strcmp(res->name, label_id.id) != 0)
+ continue;
+ /*
+	 * Resources with unacknowledged adjustments indicate a
+ * failure to update labels
+ */
+ if (res->flags & DPA_RESOURCE_ADJUSTED)
+ return 0;
+ count++;
+ }
+
+ /* These values match after a successful label update */
+ if (count != nsblk->num_resources)
+ return 0;
+
+ for (i = 0; i < nsblk->num_resources; i++) {
+ struct resource *found = NULL;
+
+ for_each_dpa_resource(ndd, res)
+ if (res == nsblk->res[i]) {
+ found = res;
+ break;
+ }
+ /* stale resource */
+ if (!found)
+ return 0;
+ }
+
+ return nd_namespace_blk_size(nsblk);
+}
+EXPORT_SYMBOL(nd_namespace_blk_validate);
+
static int nd_namespace_label_update(struct nd_region *nd_region, struct device *dev)
{
dev_WARN_ONCE(dev, dev->driver,
diff --git a/drivers/block/nd/nd-private.h b/drivers/block/nd/nd-private.h
index 5f58e8e96a41..f65309780df4 100644
--- a/drivers/block/nd/nd-private.h
+++ b/drivers/block/nd/nd-private.h
@@ -46,16 +46,19 @@ struct nd_bus {
struct radix_tree_root dimm_radix;
wait_queue_head_t probe_wait;
struct module *module;
+ struct list_head spa_maps;
struct list_head memdevs;
struct list_head dimms;
struct list_head spas;
struct list_head dcrs;
struct list_head bdws;
+ struct list_head idts;
struct list_head ndios;
struct list_head list;
struct device dev;
int id, probe_active;
struct mutex reconfig_mutex;
+ struct mutex spa_map_mutex;
struct nd_btt *nd_btt;
};
@@ -92,6 +95,11 @@ struct nd_bdw {
struct list_head list;
};
+struct nd_idt {
+ struct nfit_idt __iomem *nfit_idt;
+ struct list_head list;
+};
+
struct nd_memdev {
struct nfit_mem __iomem *nfit_mem;
struct list_head list;
@@ -100,13 +108,29 @@ struct nd_memdev {
/* assembled tables for a given dimm */
struct nd_mem {
struct nfit_mem __iomem *nfit_mem_dcr;
+ struct nfit_mem __iomem *nfit_mem_bdw;
struct nfit_dcr __iomem *nfit_dcr;
struct nfit_bdw __iomem *nfit_bdw;
struct nfit_spa __iomem *nfit_spa_dcr;
struct nfit_spa __iomem *nfit_spa_bdw;
+ struct nfit_idt __iomem *nfit_idt_dcr;
+ struct nfit_idt __iomem *nfit_idt_bdw;
+ struct list_head list;
+};
+
+struct nd_spa_mapping {
+ struct nfit_spa __iomem *nfit_spa;
struct list_head list;
+ struct nd_bus *nd_bus;
+ struct kref kref;
+ void *spa;
};
+static inline struct nd_spa_mapping *to_spa_map(struct kref *kref)
+{
+ return container_of(kref, struct nd_spa_mapping, kref);
+}
+
struct nd_io *ndio_lookup(struct nd_bus *nd_bus, const char *diskname);
const char *spa_type_name(u16 type);
int nfit_spa_type(struct nfit_spa __iomem *nfit_spa);
diff --git a/drivers/block/nd/nd.h b/drivers/block/nd/nd.h
index ef77b5893628..b092253f3521 100644
--- a/drivers/block/nd/nd.h
+++ b/drivers/block/nd/nd.h
@@ -106,6 +106,11 @@ static inline struct nd_namespace_label __iomem *nd_get_label(
for (res = (ndd)->dpa.child, next = res ? res->sibling : NULL; \
res; res = next, next = next ? next->sibling : NULL)
+enum nd_blk_mmio_selector {
+ BDW,
+ DCR,
+};
+
struct nd_region {
struct device dev;
struct nd_spa *nd_spa;
@@ -116,6 +121,22 @@ struct nd_region {
u64 ndr_start;
int id;
int num_lanes;
+ /* only valid for blk regions */
+ struct nd_blk_window {
+ struct nd_blk_mmio {
+ void *base;
+ u64 size;
+ u64 base_offset;
+ u32 line_size;
+ u32 num_lines;
+ u32 table_size;
+ struct nfit_idt __iomem *nfit_idt;
+ struct nfit_spa __iomem *nfit_spa;
+ } mmio[2];
+ u64 bdw_offset; /* post interleave offset */
+ u64 stat_offset;
+ u64 cmd_offset;
+ } bw;
struct nd_mapping mapping[0];
};
@@ -129,6 +150,11 @@ static inline unsigned nd_inc_seq(unsigned seq)
return next[seq & 3];
}
+static inline struct nd_region *ndbw_to_region(struct nd_blk_window *ndbw)
+{
+ return container_of(ndbw, struct nd_region, bw);
+}
+
struct nd_io;
/**
* nd_rw_bytes_fn() - access bytes relative to the "whole disk" namespace device
@@ -212,6 +238,27 @@ enum nd_async_mode {
ND_ASYNC,
};
+/*
+ * When testing BLK I/O (with CONFIG_NFIT_TEST) we override
+ * nd_blk_do_io() and optionally route it to simulated resources. Given
+ * circular dependencies nfit_test needs to be loaded for the BLK I/O
+ * fallback path in the case of real hardware. See
+ * __wrap_nd_blk_do_io().
+ */
+#if IS_ENABLED(CONFIG_NFIT_TEST)
+#include <linux/kmod.h>
+
+static inline int nfit_test_blk_init(void)
+{
+ return request_module("nfit_test");
+}
+#else
+static inline int nfit_test_blk_init(void)
+{
+ return 0;
+}
+#endif
+
void wait_nd_bus_probe_idle(struct device *dev);
void nd_device_register(struct device *dev);
void nd_device_unregister(struct device *dev, enum nd_async_mode mode);
@@ -248,6 +295,7 @@ u64 btt_sb_checksum(struct btt_sb *btt_sb);
struct nd_region *to_nd_region(struct device *dev);
unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
+int nd_blk_init_region(struct nd_region *nd_region);
int nd_region_to_namespace_type(struct nd_region *nd_region);
int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
u64 nd_region_interleave_set_cookie(struct nd_region *nd_region);
@@ -256,4 +304,7 @@ void nd_bus_unlock(struct device *dev);
bool is_nd_bus_locked(struct device *dev);
int nd_label_reserve_dpa(struct nd_dimm_drvdata *ndd);
void nd_dimm_free_dpa(struct nd_dimm_drvdata *ndd, struct resource *res);
+int nd_blk_do_io(struct nd_blk_window *ndbw, void *iobuf,
+ unsigned int len, int rw, resource_size_t dev_offset);
+resource_size_t nd_namespace_blk_validate(struct nd_namespace_blk *nsblk);
#endif /* __ND_H__ */
diff --git a/drivers/block/nd/region.c b/drivers/block/nd/region.c
index 29019a65808e..7f484ed0528c 100644
--- a/drivers/block/nd/region.c
+++ b/drivers/block/nd/region.c
@@ -17,11 +17,18 @@
static int nd_region_probe(struct device *dev)
{
- int err;
+ int err, rc;
struct nd_region_namespaces *num_ns;
struct nd_region *nd_region = to_nd_region(dev);
- int rc = nd_region_register_namespaces(nd_region, &err);
+ rc = nd_blk_init_region(nd_region);
+ if (rc) {
+ dev_err(&nd_region->dev, "%s: failed to map block windows: %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ rc = nd_region_register_namespaces(nd_region, &err);
num_ns = devm_kzalloc(dev, sizeof(*num_ns), GFP_KERNEL);
if (!num_ns)
return -ENOMEM;
diff --git a/drivers/block/nd/region_devs.c b/drivers/block/nd/region_devs.c
index 0aab8bb0a982..c1a69bcc7626 100644
--- a/drivers/block/nd/region_devs.c
+++ b/drivers/block/nd/region_devs.c
@@ -11,6 +11,7 @@
* General Public License for more details.
*/
#include <linux/scatterlist.h>
+#include <linux/highmem.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/sort.h>
@@ -542,29 +543,148 @@ u64 nd_region_interleave_set_cookie(struct nd_region *nd_region)
return 0;
}
+static void nd_spa_mapping_release(struct kref *kref)
+{
+ struct nd_spa_mapping *spa_map = to_spa_map(kref);
+ struct nfit_spa __iomem *nfit_spa = spa_map->nfit_spa;
+ struct nd_bus *nd_bus = spa_map->nd_bus;
+
+ WARN_ON(!mutex_is_locked(&nd_bus->spa_map_mutex));
+ dev_dbg(&nd_bus->dev, "%s: SPA%d\n", __func__,
+ readw(&nfit_spa->spa_index));
+ iounmap(spa_map->spa);
+ release_mem_region(readq(&nfit_spa->spa_base),
+ readq(&nfit_spa->spa_length));
+ list_del(&spa_map->list);
+ kfree(spa_map);
+}
+
+static struct nd_spa_mapping *find_spa_mapping(struct nd_bus *nd_bus,
+ struct nfit_spa __iomem *nfit_spa)
+{
+ struct nd_spa_mapping *spa_map;
+
+ WARN_ON(!mutex_is_locked(&nd_bus->spa_map_mutex));
+ list_for_each_entry(spa_map, &nd_bus->spa_maps, list)
+ if (spa_map->nfit_spa == nfit_spa)
+ return spa_map;
+
+ return NULL;
+}
+
+static void nd_spa_unmap(struct nd_bus *nd_bus, struct nfit_spa __iomem *nfit_spa)
+{
+ struct nd_spa_mapping *spa_map;
+
+ mutex_lock(&nd_bus->spa_map_mutex);
+ spa_map = find_spa_mapping(nd_bus, nfit_spa);
+
+ if (spa_map)
+ kref_put(&spa_map->kref, nd_spa_mapping_release);
+ mutex_unlock(&nd_bus->spa_map_mutex);
+}
+
+static void *__nd_spa_map(struct nd_bus *nd_bus, struct nfit_spa __iomem *nfit_spa)
+{
+ resource_size_t start = readq(&nfit_spa->spa_base);
+ resource_size_t n = readq(&nfit_spa->spa_length);
+ struct nd_spa_mapping *spa_map;
+ struct resource *res;
+
+ WARN_ON(!mutex_is_locked(&nd_bus->spa_map_mutex));
+
+ spa_map = find_spa_mapping(nd_bus, nfit_spa);
+ if (spa_map) {
+ kref_get(&spa_map->kref);
+ return spa_map->spa;
+ }
+
+ spa_map = kzalloc(sizeof(*spa_map), GFP_KERNEL);
+ if (!spa_map)
+ return NULL;
+
+ INIT_LIST_HEAD(&spa_map->list);
+ spa_map->nfit_spa = nfit_spa;
+ kref_init(&spa_map->kref);
+ spa_map->nd_bus = nd_bus;
+
+ res = request_mem_region(start, n, dev_name(&nd_bus->dev));
+ if (!res)
+ goto err_mem;
+
+ /* TODO: cacheability based on the spa type */
+ spa_map->spa = ioremap_nocache(start, n);
+ if (!spa_map->spa)
+ goto err_map;
+
+ list_add_tail(&spa_map->list, &nd_bus->spa_maps);
+ return spa_map->spa;
+
+ err_map:
+ release_mem_region(start, n);
+ err_mem:
+ kfree(spa_map);
+ return NULL;
+}
+
+/**
+ * nd_spa_map - nd core managed mappings of NFIT_SPA_DCR and NFIT_SPA_BDW ranges
+ * @nd_bus: NFIT-bus that provided the spa table entry
+ * @nfit_spa: spa table to map
+ *
+ * In the case where block-data-window apertures and
+ * dimm-control-regions are interleaved they will end up sharing a
+ * single request_mem_region() + ioremap() for the address range. In
+ * the style of devm nd_spa_map() mappings are automatically dropped
+ * when all region devices referencing the same mapping are disabled /
+ * unbound.
+ */
+static void *nd_spa_map(struct nd_bus *nd_bus, struct nfit_spa __iomem *nfit_spa)
+{
+	void *spa;
+
+	mutex_lock(&nd_bus->spa_map_mutex);
+	spa = __nd_spa_map(nd_bus, nfit_spa);
+	mutex_unlock(&nd_bus->spa_map_mutex);
+
+	return spa;
+}
+
/*
* Upon successful probe/remove, take/release a reference on the
- * associated interleave set (if present)
+ * associated dimms in the interleave set, on successful probe of a BLK
+ * namespace check if we need a new seed, and on remove or failed probe
+ * of a BLK region drop interleaved spa mappings.
*/
static void nd_region_notify_driver_action(struct nd_bus *nd_bus,
struct device *dev, int rc, bool probe)
{
- if (rc)
- return;
-
if (is_nd_pmem(dev) || is_nd_blk(dev)) {
struct nd_region *nd_region = to_nd_region(dev);
+ struct nd_blk_window *ndbw = &nd_region->bw;
int i;
for (i = 0; i < nd_region->ndr_mappings; i++) {
struct nd_mapping *nd_mapping = &nd_region->mapping[i];
struct nd_dimm *nd_dimm = nd_mapping->nd_dimm;
- if (probe)
+ if (probe && rc == 0)
atomic_inc(&nd_dimm->busy);
- else
+ else if (!probe)
atomic_dec(&nd_dimm->busy);
}
+
+ if (is_nd_pmem(dev) || (probe && rc == 0))
+ return;
+
+ /* auto-free BLK spa mappings */
+ for (i = 0; i < 2; i++) {
+ struct nd_blk_mmio *mmio = &ndbw->mmio[i];
+
+ if (mmio->base)
+ nd_spa_unmap(nd_bus, mmio->nfit_spa);
+ }
+ memset(ndbw, 0, sizeof(*ndbw));
} else if (dev->parent && is_nd_blk(dev->parent) && probe && rc == 0) {
struct nd_region *nd_region = to_nd_region(dev->parent);
@@ -716,6 +836,188 @@ static const struct attribute_group *nd_region_attribute_groups[] = {
NULL,
};
+static u64 to_interleave_offset(u64 offset, struct nd_blk_mmio *mmio)
+{
+ struct nfit_idt __iomem *nfit_idt = mmio->nfit_idt;
+ u32 sub_line_offset, line_index, line_offset;
+ u64 line_no, table_skip_count, table_offset;
+
+ line_no = div_u64_rem(offset, mmio->line_size, &sub_line_offset);
+ table_skip_count = div_u64_rem(line_no, mmio->num_lines, &line_index);
+ line_offset = readl(&nfit_idt->line_offset[line_index])
+ * mmio->line_size;
+ table_offset = table_skip_count * mmio->table_size;
+
+ return mmio->base_offset + line_offset + table_offset + sub_line_offset;
+}
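The interleave remap above is also pure arithmetic and worth checking in isolation. A hedged userspace sketch of the same math, assuming the IDT semantics described by this patch (struct and field names here are illustrative, not the kernel's):

```c
#include <stdint.h>

/*
 * Sketch of to_interleave_offset(): a register offset in the DIMM's
 * linear view is split into (line number, sub-line offset); the line
 * number selects an entry in the interleave description table, and
 * each full pass over the table skips table_size bytes of the
 * system-physical aperture.
 */
struct ileave {
	uint32_t line_size;		/* bytes per interleave line */
	uint32_t num_lines;		/* IDT entries per iteration */
	uint32_t table_size;		/* num_lines * ways * line_size */
	uint64_t base_offset;		/* region offset within the SPA */
	const uint32_t *line_offset;	/* per-line offsets from the IDT */
};

static uint64_t sketch_interleave_offset(uint64_t offset,
					 const struct ileave *m)
{
	/* caller guarantees num_lines != 0, as in the kernel code */
	uint64_t line_no = offset / m->line_size;
	uint32_t sub_line = offset % m->line_size;
	uint64_t table_skips = line_no / m->num_lines;
	uint32_t line_index = line_no % m->num_lines;

	return m->base_offset
		+ (uint64_t)m->line_offset[line_index] * m->line_size
		+ table_skips * m->table_size
		+ sub_line;
}
```

For a 2-way interleave with 256-byte lines and two lines per iteration, table_size is 1024, so linear offset 600 lands one full table-pass in: line 2 wraps to IDT index 0 with an 88-byte sub-line remainder.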
+
+static u64 read_blk_stat(struct nd_blk_window *ndbw, unsigned int bw)
+{
+ struct nd_blk_mmio *mmio = &ndbw->mmio[DCR];
+ u64 offset = ndbw->stat_offset + mmio->size * bw;
+
+ if (mmio->num_lines)
+ offset = to_interleave_offset(offset, mmio);
+
+ return readq(mmio->base + offset);
+}
+
+static void write_blk_ctl(struct nd_blk_window *ndbw, unsigned int bw,
+ resource_size_t dpa, unsigned int len, unsigned int write)
+{
+ u64 cmd, offset;
+ struct nd_blk_mmio *mmio = &ndbw->mmio[DCR];
+
+ enum {
+ BCW_OFFSET_MASK = (1ULL << 48)-1,
+ BCW_LEN_SHIFT = 48,
+ BCW_LEN_MASK = (1ULL << 8) - 1,
+ BCW_CMD_SHIFT = 56,
+ };
+
+ cmd = (dpa >> L1_CACHE_SHIFT) & BCW_OFFSET_MASK;
+ len = len >> L1_CACHE_SHIFT;
+ cmd |= ((u64) len & BCW_LEN_MASK) << BCW_LEN_SHIFT;
+ cmd |= ((u64) write) << BCW_CMD_SHIFT;
+
+ offset = ndbw->cmd_offset + mmio->size * bw;
+ if (mmio->num_lines)
+ offset = to_interleave_offset(offset, mmio);
+
+ writeq(cmd, mmio->base + offset);
+ /* FIXME: conditionally perform read-back if mandated by firmware */
+}
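The block-control-word packing is straightforward bit arithmetic; a userspace sketch of the encoding follows, mirroring the enum in write_blk_ctl() (macros are used instead of enum constants since a portable C enum need not hold 64-bit values; the 64-byte cacheline shift is an assumption matching L1_CACHE_SHIFT on x86):

```c
#include <stdint.h>

#define BCW_OFFSET_MASK	((1ULL << 48) - 1)
#define BCW_LEN_SHIFT	48
#define BCW_LEN_MASK	((1ULL << 8) - 1)
#define BCW_CMD_SHIFT	56
#define CACHELINE_SHIFT	6	/* assumes 64-byte cachelines */

/*
 * Sketch of the 64-bit block control word: bits 0-47 carry the DPA in
 * cacheline units, bits 48-55 the transfer length in cacheline units,
 * and bit 56 the write flag.
 */
static uint64_t sketch_bcw_encode(uint64_t dpa, unsigned int len,
				  unsigned int write)
{
	uint64_t cmd = (dpa >> CACHELINE_SHIFT) & BCW_OFFSET_MASK;

	cmd |= ((uint64_t)(len >> CACHELINE_SHIFT) & BCW_LEN_MASK)
		<< BCW_LEN_SHIFT;
	cmd |= (uint64_t)write << BCW_CMD_SHIFT;
	return cmd;
}
```

One observable property of this layout: the 8-bit length field caps a single aperture program at 255 cachelines, which is consistent with the driver splitting I/O into at-most-PAGE_SIZE chunks.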
+
+/* len is <= PAGE_SIZE by this point, so it can be done in a single BW I/O */
+int nd_blk_do_io(struct nd_blk_window *ndbw, void *iobuf, unsigned int len,
+ int write, resource_size_t dpa)
+{
+ struct nd_region *nd_region = ndbw_to_region(ndbw);
+ struct nd_blk_mmio *mmio = &ndbw->mmio[BDW];
+ unsigned int bw, copied = 0;
+ u64 base_offset;
+ int rc;
+
+ bw = nd_region_acquire_lane(nd_region);
+ base_offset = ndbw->bdw_offset + dpa % L1_CACHE_BYTES + bw * mmio->size;
+ /* TODO: non-temporal access, flush hints, cache management etc... */
+ write_blk_ctl(ndbw, bw, dpa, len, write);
+ while (len) {
+ unsigned int c;
+ u64 offset;
+
+ if (mmio->num_lines) {
+ u32 line_offset;
+
+ offset = to_interleave_offset(base_offset + copied,
+ mmio);
+ div_u64_rem(offset, mmio->line_size, &line_offset);
+ c = min(len, mmio->line_size - line_offset);
+ } else {
+			offset = base_offset + copied;
+ c = len;
+ }
+
+ if (write)
+ memcpy(mmio->base + offset, iobuf + copied, c);
+ else
+ memcpy(iobuf + copied, mmio->base + offset, c);
+
+ len -= c;
+ copied += c;
+ }
+ rc = read_blk_stat(ndbw, bw) ? -EIO : 0;
+ nd_region_release_lane(nd_region, bw);
+
+ return rc;
+}
+EXPORT_SYMBOL(nd_blk_do_io);
+
+static int nd_blk_init_interleave(struct nd_blk_mmio *mmio,
+ struct nfit_idt __iomem *nfit_idt, u16 interleave_ways)
+{
+ if (nfit_idt) {
+ mmio->num_lines = readl(&nfit_idt->num_lines);
+ mmio->line_size = readl(&nfit_idt->line_size);
+ if (interleave_ways == 0)
+ return -ENXIO;
+ mmio->table_size = mmio->num_lines * interleave_ways
+ * mmio->line_size;
+ }
+
+ return 0;
+}
+
+int nd_blk_init_region(struct nd_region *nd_region)
+{
+ struct nd_bus *nd_bus = walk_to_nd_bus(&nd_region->dev);
+ struct nd_blk_window *ndbw = &nd_region->bw;
+ struct nd_mapping *nd_mapping;
+ struct nd_blk_mmio *mmio;
+ struct nd_dimm *nd_dimm;
+ struct nd_mem *nd_mem;
+ int rc;
+
+ if (!is_nd_blk(&nd_region->dev))
+ return 0;
+
+ /* FIXME: use nfit values rather than hard coded */
+ if (nd_region->ndr_mappings != 1)
+ return -ENXIO;
+
+ nd_mapping = &nd_region->mapping[0];
+ nd_dimm = nd_mapping->nd_dimm;
+ nd_mem = nd_dimm->nd_mem;
+ if (!nd_mem->nfit_dcr || !nd_mem->nfit_bdw)
+ return -ENXIO;
+
+ /* map block aperture memory */
+ ndbw->bdw_offset = readq(&nd_mem->nfit_bdw->bdw_offset);
+ mmio = &ndbw->mmio[BDW];
+ mmio->base = nd_spa_map(nd_bus, nd_mem->nfit_spa_bdw);
+ if (!mmio->base)
+ return -ENOMEM;
+ mmio->size = readq(&nd_mem->nfit_bdw->bdw_size);
+ mmio->base_offset = readq(&nd_mem->nfit_mem_bdw->region_spa_offset);
+ mmio->nfit_idt = nd_mem->nfit_idt_bdw;
+ mmio->nfit_spa = nd_mem->nfit_spa_bdw;
+ rc = nd_blk_init_interleave(mmio, nd_mem->nfit_idt_bdw,
+ readw(&nd_mem->nfit_mem_bdw->interleave_ways));
+ if (rc)
+ return rc;
+
+ /* map block control memory */
+ ndbw->cmd_offset = readq(&nd_mem->nfit_dcr->cmd_offset);
+ ndbw->stat_offset = readq(&nd_mem->nfit_dcr->status_offset);
+ mmio = &ndbw->mmio[DCR];
+ mmio->base = nd_spa_map(nd_bus, nd_mem->nfit_spa_dcr);
+ if (!mmio->base)
+ return -ENOMEM;
+ mmio->size = readq(&nd_mem->nfit_dcr->bcw_size);
+ mmio->base_offset = readq(&nd_mem->nfit_mem_dcr->region_spa_offset);
+ mmio->nfit_idt = nd_mem->nfit_idt_dcr;
+ mmio->nfit_spa = nd_mem->nfit_spa_dcr;
+ rc = nd_blk_init_interleave(mmio, nd_mem->nfit_idt_dcr,
+ readw(&nd_mem->nfit_mem_dcr->interleave_ways));
+ if (rc)
+ return rc;
+
+ if (mmio->line_size == 0)
+ return 0;
+
+ if ((u32) ndbw->cmd_offset % mmio->line_size + 8 > mmio->line_size) {
+ dev_err(&nd_region->dev,
+ "cmd_offset crosses interleave boundary\n");
+ return -ENXIO;
+ } else if ((u32) ndbw->stat_offset % mmio->line_size + 8 > mmio->line_size) {
+ dev_err(&nd_region->dev,
+ "stat_offset crosses interleave boundary\n");
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
static void nd_blk_init(struct nd_bus *nd_bus, struct nd_region *nd_region,
struct nd_mem *nd_mem)
{
diff --git a/drivers/block/nd/test/iomap.c b/drivers/block/nd/test/iomap.c
index 87e6a1255237..2724e671c376 100644
--- a/drivers/block/nd/test/iomap.c
+++ b/drivers/block/nd/test/iomap.c
@@ -17,17 +17,27 @@
#include <linux/types.h>
#include <linux/io.h>
#include "nfit_test.h"
+#include "../nd.h"
static LIST_HEAD(iomap_head);
static struct iomap_ops {
nfit_test_lookup_fn nfit_test_lookup;
+ nfit_test_acquire_lane_fn nfit_test_acquire_lane;
+ nfit_test_release_lane_fn nfit_test_release_lane;
+ nfit_test_blk_do_io_fn nfit_test_blk_do_io;
struct list_head list;
} iomap_ops;
-void nfit_test_setup(nfit_test_lookup_fn lookup)
+void nfit_test_setup(nfit_test_lookup_fn lookup,
+ nfit_test_acquire_lane_fn acquire_lane,
+ nfit_test_release_lane_fn release_lane,
+ nfit_test_blk_do_io_fn blk_do_io)
{
iomap_ops.nfit_test_lookup = lookup;
+ iomap_ops.nfit_test_acquire_lane = acquire_lane;
+ iomap_ops.nfit_test_release_lane = release_lane;
+ iomap_ops.nfit_test_blk_do_io = blk_do_io;
INIT_LIST_HEAD(&iomap_ops.list);
list_add_rcu(&iomap_ops.list, &iomap_head);
}
@@ -145,4 +155,45 @@ void __wrap___release_region(struct resource *parent, resource_size_t start,
}
EXPORT_SYMBOL(__wrap___release_region);
+int __wrap_nd_blk_do_io(struct nd_blk_window *ndbw, void *iobuf,
+ unsigned int len, int rw, resource_size_t dpa)
+{
+ struct nd_region *nd_region = ndbw_to_region(ndbw);
+ struct nd_blk_mmio *mmio = &ndbw->mmio[BDW];
+ struct nfit_test_resource *nfit_res;
+ struct iomap_ops *ops;
+ int rc = 0;
+
+ rcu_read_lock();
+ ops = list_first_or_null_rcu(&iomap_head, typeof(*ops), list);
+ nfit_res = ops ? ops->nfit_test_lookup((unsigned long) mmio->base) : NULL;
+ if (nfit_res) {
+ unsigned int bw;
+
+ dev_vdbg(&nd_region->dev, "%s: base: %p offset: %pa\n",
+ __func__, mmio->base, &dpa);
+ bw = ops->nfit_test_acquire_lane(nd_region);
+ if (rw)
+ memcpy(nfit_res->buf + dpa, iobuf, len);
+ else
+ memcpy(iobuf, nfit_res->buf + dpa, len);
+ ops->nfit_test_release_lane(nd_region, bw);
+ } else if (ops) {
+ rc = ops->nfit_test_blk_do_io(ndbw, iobuf, len, rw, dpa);
+ } else {
+ /*
+ * We can't call nd_blk_do_io() directly here as it would
+ * create a circular dependency. nfit_test must remain loaded
+ * to maintain nfit_test_blk_do_io() => nd_blk_do_io().
+ */
+ dev_WARN_ONCE(&nd_region->dev, 1,
+ "load nfit_test.ko or disable CONFIG_NFIT_TEST\n");
+ rc = -EIO;
+ }
+ rcu_read_unlock();
+
+ return rc;
+}
+EXPORT_SYMBOL(__wrap_nd_blk_do_io);
+
MODULE_LICENSE("GPL v2");
diff --git a/drivers/block/nd/test/nfit.c b/drivers/block/nd/test/nfit.c
index e9fb9da765b9..7218b55a9a34 100644
--- a/drivers/block/nd/test/nfit.c
+++ b/drivers/block/nd/test/nfit.c
@@ -949,7 +949,8 @@ static __init int nfit_test_init(void)
return -EINVAL;
}
- nfit_test_setup(nfit_test_lookup);
+ nfit_test_setup(nfit_test_lookup, nd_region_acquire_lane,
+ nd_region_release_lane, nd_blk_do_io);
for (i = 0; i < NUM_NFITS; i++) {
struct nfit_test *nfit_test;
diff --git a/drivers/block/nd/test/nfit_test.h b/drivers/block/nd/test/nfit_test.h
index 8a300c51b6bc..a6978563ad4e 100644
--- a/drivers/block/nd/test/nfit_test.h
+++ b/drivers/block/nd/test/nfit_test.h
@@ -12,6 +12,7 @@
*/
#ifndef __NFIT_TEST_H__
#define __NFIT_TEST_H__
+#include <linux/types.h>
struct nfit_test_resource {
struct list_head list;
@@ -20,6 +21,17 @@ struct nfit_test_resource {
};
typedef struct nfit_test_resource *(*nfit_test_lookup_fn)(resource_size_t);
-void nfit_test_setup(nfit_test_lookup_fn fn);
+struct nd_region;
+typedef unsigned int (*nfit_test_acquire_lane_fn)(struct nd_region *nd_region);
+typedef void (*nfit_test_release_lane_fn)(struct nd_region *nd_region,
+ unsigned int lane);
+struct nd_blk_window;
+struct page;
+typedef int (*nfit_test_blk_do_io_fn)(struct nd_blk_window *ndbw, void *iobuf,
+ unsigned int len, int rw, resource_size_t dpa);
+void nfit_test_setup(nfit_test_lookup_fn lookup,
+ nfit_test_acquire_lane_fn acquire_lane,
+ nfit_test_release_lane_fn release_lane,
+ nfit_test_blk_do_io_fn blk_do_io);
void nfit_test_teardown(void);
#endif
On Fri, Apr 17, 2015 at 6:35 PM, Dan Williams <[email protected]> wrote:
> ACPI 6.0 formalizes e820-type-7 and efi-type-14 as persistent memory.
> Mark it "reserved" and allow it to be claimed by a persistent memory
> device driver.
>
> This definition is in addition to the Linux kernel's existing type-12
> definition that was recently added in support of shipping platforms with
> NVDIMM support that predate ACPI 6.0 (which now classifies type-12 as
> OEM reserved). We may choose to exploit this wealth of definitions for
> NVDIMMs to differentiate E820_PRAM (type-12) from E820_PMEM (type-7).
> One potential differentiation is that PMEM is not backed by struct page
> by default in contrast to PRAM. For now, they are effectively treated
> as aliases by the mm.
>
> Note, /proc/iomem can be consulted for differentiating legacy
> "Persistent RAM" E820_PRAM vs standard "Persistent I/O Memory"
> E820_PMEM.
>
Looks reasonable. Time to ask my vendor if they can give me ACPI
6.0-compliant firmware.
--Andy
> [..]
--
Andy Lutomirski
AMA Capital Management, LLC
On Fri, Apr 17, 2015 at 09:36:18PM -0400, Dan Williams wrote:
> nd_pmem attaches to persistent memory regions and namespaces emitted by
> the nd subsystem, and, same as the original pmem driver, presents the
> system-physical-address range as a block device.
I don't think there is any need to move the driver around. Also please split
the addition of the ida allocation from adding the new probe methods.
On Fri, Apr 17, 2015 at 09:35:52PM -0400, Dan Williams wrote:
> Register the dimms described in the nfit as devices on a nd_bus, named
> "dimmN" where N is a global ida index. The dimm numbering per-bus may
> appear contiguous, since we only allow a single nd_bus to be registered
> at a time. However, eventually, dimm-hotplug invalidates this
> property and dimms should be addressed via NFIT-handle.
>
> Cc: Greg KH <[email protected]>
> Cc: Neil Brown <[email protected]>
> Signed-off-by: Dan Williams <[email protected]>
> ---
> drivers/block/nd/Makefile | 1
> drivers/block/nd/bus.c | 62 +++++++++-
> drivers/block/nd/core.c | 55 +++++++++
> drivers/block/nd/dimm_devs.c | 243 +++++++++++++++++++++++++++++++++++++++++
> drivers/block/nd/nd-private.h | 19 +++
> 5 files changed, 373 insertions(+), 7 deletions(-)
> create mode 100644 drivers/block/nd/dimm_devs.c
>
> diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
> index 7772fb599809..6b34dd4d4df8 100644
> --- a/drivers/block/nd/Makefile
> +++ b/drivers/block/nd/Makefile
> @@ -21,3 +21,4 @@ nd_acpi-y := acpi.o
>
> nd-y := core.o
> nd-y += bus.o
> +nd-y += dimm_devs.o
> diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
> index c27db50511f2..e24db67001d0 100644
> --- a/drivers/block/nd/bus.c
> +++ b/drivers/block/nd/bus.c
> @@ -13,18 +13,59 @@
> #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> #include <linux/uaccess.h>
> #include <linux/fcntl.h>
> +#include <linux/async.h>
> #include <linux/slab.h>
> #include <linux/fs.h>
> #include <linux/io.h>
> #include "nd-private.h"
> #include "nfit.h"
>
> -static int nd_major;
> +static int nd_bus_major;
Call it nd_bus_major in the previous patch, and avoid the unneeded churn
in this patch.
thanks,
greg k-h
On Fri, Apr 17, 2015 at 09:35:46PM -0400, Dan Williams wrote:
> This is the position (device topology) independent method to find all
> the NFIT-defined buses in the system. The expectation is that there
> will only ever be one "nd" bus discovered via /sys/class/nd/ndctl0.
> However, we allow for the possibility of multiple buses, and they will be
> listed in discovery order as ndctl0...ndctlN. This character device
> hosts the ioctl for passing control messages (as defined by the NFIT
> spec). The "format" and "revision" attributes of this device identify
> the format of the messages. In the event an NFIT is registered with an
> unknown/unsupported control message format then the "format" attribute
> will not be visible.
>
> Cc: Greg KH <[email protected]>
> Cc: Neil Brown <[email protected]>
> Signed-off-by: Dan Williams <[email protected]>
> ---
> drivers/block/nd/Makefile | 1
> drivers/block/nd/bus.c | 84 +++++++++++++++++++++++++++++++++++++++++
> drivers/block/nd/core.c | 71 ++++++++++++++++++++++++++++++++++-
> drivers/block/nd/nd-private.h | 5 ++
> 4 files changed, 160 insertions(+), 1 deletion(-)
> create mode 100644 drivers/block/nd/bus.c
>
> diff --git a/drivers/block/nd/Makefile b/drivers/block/nd/Makefile
> index c6bec0c185c5..7772fb599809 100644
> --- a/drivers/block/nd/Makefile
> +++ b/drivers/block/nd/Makefile
> @@ -20,3 +20,4 @@ obj-$(CONFIG_NFIT_ACPI) += nd_acpi.o
> nd_acpi-y := acpi.o
>
> nd-y := core.o
> +nd-y += bus.o
> diff --git a/drivers/block/nd/bus.c b/drivers/block/nd/bus.c
> new file mode 100644
> index 000000000000..c27db50511f2
> --- /dev/null
> +++ b/drivers/block/nd/bus.c
> @@ -0,0 +1,84 @@
> +/*
> + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of version 2 of the GNU General Public License as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * General Public License for more details.
> + */
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +#include <linux/uaccess.h>
> +#include <linux/fcntl.h>
> +#include <linux/slab.h>
> +#include <linux/fs.h>
> +#include <linux/io.h>
> +#include "nd-private.h"
> +#include "nfit.h"
> +
> +static int nd_major;
> +static struct class *nd_class;
> +
> +int nd_bus_create_ndctl(struct nd_bus *nd_bus)
> +{
> + dev_t devt = MKDEV(nd_major, nd_bus->id);
> + struct device *dev;
> +
> + dev = device_create(nd_class, &nd_bus->dev, devt, nd_bus, "ndctl%d",
> + nd_bus->id);
> +
> + if (IS_ERR(dev)) {
> + dev_dbg(&nd_bus->dev, "failed to register ndctl%d: %ld\n",
> + nd_bus->id, PTR_ERR(dev));
> + return PTR_ERR(dev);
> + }
> + return 0;
> +}
> +
> +void nd_bus_destroy_ndctl(struct nd_bus *nd_bus)
> +{
> + device_destroy(nd_class, MKDEV(nd_major, nd_bus->id));
> +}
> +
> +static long nd_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> +{
> + return -ENXIO;
> +}
There is no ioctl call here, so why even have this character device?
thanks,
greg k-h
On Fri, Apr 17, 2015 at 6:35 PM, Dan Williams <[email protected]> wrote:
> Since 2010 Intel has included non-volatile memory support on a few
> storage-focused platforms with a feature named ADR (Asynchronous DRAM
> Refresh). These platforms were mostly targeted at custom applications
> and never enjoyed standard discovery mechanisms for platform firmware
> to advertise non-volatile memory capabilities. This now changes with
> the publication of version 6 of the ACPI specification [1] and its
> inclusion of a new table for describing platform memory capabilities.
> The NVDIMM Firmware Interface Table (NFIT), along with new EFI and E820
> memory types, enumerates persistent memory ranges, memory-mapped-I/O
> apertures, physical memory devices (DIMMs), and their associated
> properties.
>
> The ND-subsystem wraps a Linux device driver model around the objects
> and address boundaries defined in the specification and introduces 3 new
> drivers.
>
> nd_pmem: NFIT enabled version of the existing 'pmem' driver [2]
> nd_blk: mmio aperture method for accessing persistent storage
> nd_btt: give persistent memory disk semantics (atomic sector update)
>
> See the documentation in patch2 for more details, and there is
> supplemental documentation on pmem.io [4]. Please review, and
> patches welcome...
>
> For kicking the tires, this release is accompanied by a userspace
> management library 'ndctl' that includes unit tests (make check) for all
> of the kernel ABIs. The nfit_test.ko module can be used to explore a
> sample NFIT topology.
>
> [1]: http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf
> [2]: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/log/?h=x86/pmem
> [3]: https://github.com/pmem/ndctl
> [4]: http://pmem.io/documents/
nd.git and ndctl.git trees are now up to date. I'll aim to keep
nd.git non-rebasing, but what eventually goes upstream is likely to be
a re-flowed patch set. ndctl.git won't rebase.
nd.git: git://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git nd
ndctl.git: https://github.com/pmem/ndctl.git
On Fri, Apr 17, 2015 at 11:38 PM, Christoph Hellwig <[email protected]> wrote:
> On Fri, Apr 17, 2015 at 09:36:18PM -0400, Dan Williams wrote:
>> nd_pmem attaches to persistent memory regions and namespaces emitted by
>> the nd subsystem, and, same as the original pmem driver, presents the
>> system-physical-address range as a block device.
>
> I don't think there is any need to move the driver around.
At this point in the patch series I agree, but in later patches we
take advantage of nd bus services. "[PATCH 15/21] nd: pmem label sets
and namespace instantiation" adds support for labeled pmem namespaces,
and in "[PATCH 19/21] nd: infrastructure for btt devices" we make pmem
capable of hosting btt instances.
On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> --- /dev/null
> +++ b/drivers/block/nd/Kconfig
> + depends on (X86 || IA64 || ARM || ARM64 || SH || XTENSA)
I've only skimmed this series. I still noticed this patch contains the
only Kconfig typo I know by heart. Because I think you meant to say:
depends on (X86 || IA64 || ARM || ARM64 || SUPERH || XTENSA)
Is that right?
Paul Bolle
On Sat, Apr 18, 2015 at 1:07 AM, Greg KH <[email protected]> wrote:
> On Fri, Apr 17, 2015 at 09:35:46PM -0400, Dan Williams wrote:
>> This is the position (device topology) independent method to find all
>> the NFIT-defined buses in the system. The expectation is that there
>> will only ever be one "nd" bus discovered via /sys/class/nd/ndctl0.
>> However, we allow for the possibility of multiple buses, and they will be
>> listed in discovery order as ndctl0...ndctlN. This character device
>> hosts the ioctl for passing control messages (as defined by the NFIT
>> spec). The "format" and "revision" attributes of this device identify
>> the format of the messages. In the event an NFIT is registered with an
>> unknown/unsupported control message format then the "format" attribute
>> will not be visible.
>>
>> Cc: Greg KH <[email protected]>
>> Cc: Neil Brown <[email protected]>
>> Signed-off-by: Dan Williams <[email protected]>
>> [..]
>> +static long nd_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>> +{
>> + return -ENXIO;
>> +}
>
> There is no ioctl call here, so why even have this character device?
Our management library finds nd buses by /sys/class/nd. The
nd_ioctl() gets filled out in "[PATCH 08/21] nd: ndctl.h, the nd ioctl
abi".
On Sat, Apr 18, 2015 at 1:06 AM, Greg KH <[email protected]> wrote:
> On Fri, Apr 17, 2015 at 09:35:52PM -0400, Dan Williams wrote:
>> Register the dimms described in the nfit as devices on a nd_bus, named
>> "dimmN" where N is a global ida index. The dimm numbering per-bus may
>> appear contiguous, since we only allow a single nd_bus to be registered
>> at a time. However, eventually, dimm-hotplug invalidates this
>> property and dimms should be addressed via NFIT-handle.
>>
>> Cc: Greg KH <[email protected]>
>> Cc: Neil Brown <[email protected]>
>> Signed-off-by: Dan Williams <[email protected]>
>> [..]
>> -static int nd_major;
>> +static int nd_bus_major;
>
> Call it nd_bus_major in the previous patch, and avoid the unneeded churn
> in this patch.
Ok, will do when re-flowing this series for merging.
On 04/18/2015 04:35 AM, Dan Williams wrote:
> ACPI 6.0 formalizes e820-type-7 and efi-type-14 as persistent memory.
> Mark it "reserved" and allow it to be claimed by a persistent memory
> device driver.
>
> This definition is in addition to the Linux kernel's existing type-12
> definition that was recently added in support of shipping platforms with
> NVDIMM support that predate ACPI 6.0 (which now classifies type-12 as
> OEM reserved). We may choose to exploit this wealth of definitions for
> NVDIMMs to differentiate E820_PRAM (type-12) from E820_PMEM (type-7).
> One potential differentiation is that PMEM is not backed by struct page
> by default in contrast to PRAM. For now, they are effectively treated
> as aliases by the mm.
>
> Note, /proc/iomem can be consulted for differentiating legacy
> "Persistent RAM" E820_PRAM vs standard "Persistent I/O Memory"
> E820_PMEM.
>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Boaz Harrosh <[email protected]>
> Cc: H. Peter Anvin <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Signed-off-by: Dan Williams <[email protected]>
> Reviewed-by: Ross Zwisler <[email protected]>
> ---
> arch/arm64/kernel/efi.c | 1 +
> arch/ia64/kernel/efi.c | 1 +
> arch/x86/boot/compressed/eboot.c | 4 ++++
> arch/x86/include/uapi/asm/e820.h | 1 +
> arch/x86/kernel/e820.c | 25 +++++++++++++++++++------
> arch/x86/platform/efi/efi.c | 3 +++
> include/linux/efi.h | 3 ++-
> 7 files changed, 31 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
> index ab21e0d58278..9d4aa18f2a82 100644
> --- a/arch/arm64/kernel/efi.c
> +++ b/arch/arm64/kernel/efi.c
> @@ -158,6 +158,7 @@ static __init int is_reserve_region(efi_memory_desc_t *md)
> case EFI_BOOT_SERVICES_CODE:
> case EFI_BOOT_SERVICES_DATA:
> case EFI_CONVENTIONAL_MEMORY:
> + case EFI_PERSISTENT_MEMORY:
> return 0;
> default:
> break;
> diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
> index c52d7540dc05..cd8b7485e396 100644
> --- a/arch/ia64/kernel/efi.c
> +++ b/arch/ia64/kernel/efi.c
> @@ -1227,6 +1227,7 @@ efi_initialize_iomem_resources(struct resource *code_resource,
> case EFI_RUNTIME_SERVICES_CODE:
> case EFI_RUNTIME_SERVICES_DATA:
> case EFI_ACPI_RECLAIM_MEMORY:
> + case EFI_PERSISTENT_MEMORY:
> default:
> name = "reserved";
> break;
> diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
> index ef17683484e9..dde5bf7726f4 100644
> --- a/arch/x86/boot/compressed/eboot.c
> +++ b/arch/x86/boot/compressed/eboot.c
> @@ -1222,6 +1222,10 @@ static efi_status_t setup_e820(struct boot_params *params,
> e820_type = E820_NVS;
> break;
>
> + case EFI_PERSISTENT_MEMORY:
> + e820_type = E820_PMEM;
> + break;
> +
> default:
> continue;
> }
> diff --git a/arch/x86/include/uapi/asm/e820.h b/arch/x86/include/uapi/asm/e820.h
> index 960a8a9dc4ab..0f457e6eab18 100644
> --- a/arch/x86/include/uapi/asm/e820.h
> +++ b/arch/x86/include/uapi/asm/e820.h
> @@ -32,6 +32,7 @@
> #define E820_ACPI 3
> #define E820_NVS 4
> #define E820_UNUSABLE 5
> +#define E820_PMEM 7
>
> /*
> * This is a non-standardized way to represent ADR or NVDIMM regions that
> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
> index 11cc7d54ec3f..410af501a941 100644
> --- a/arch/x86/kernel/e820.c
> +++ b/arch/x86/kernel/e820.c
> @@ -137,6 +137,8 @@ static void __init e820_print_type(u32 type)
> case E820_RESERVED_KERN:
> printk(KERN_CONT "usable");
> break;
> + case E820_PMEM:
> + case E820_PRAM:
NACK!
This is the most important print in the system, and it is a pure
user interface. It has no effect whatsoever on functionality;
it is there to inform the user, through dmesg, of the contents of
the table.
> case E820_RESERVED:
> printk(KERN_CONT "reserved");
> break;
> @@ -149,9 +151,6 @@ static void __init e820_print_type(u32 type)
> case E820_UNUSABLE:
> printk(KERN_CONT "unusable");
> break;
+ case E820_PMEM:
> - case E820_PRAM:
> - printk(KERN_CONT "persistent (type %u)", type);
> - break;
Just add the new (7) entry here please. Here Christoph has bike shed
it for you.
> default:
> printk(KERN_CONT "type %u", type);
> break;
> @@ -919,10 +918,26 @@ static inline const char *e820_type_to_string(int e820_type)
> case E820_NVS: return "ACPI Non-volatile Storage";
> case E820_UNUSABLE: return "Unusable memory";
> case E820_PRAM: return "Persistent RAM";
> + case E820_PMEM: return "Persistent I/O Memory";
> default: return "reserved";
> }
> }
>
> +static bool do_mark_busy(u32 type, struct resource *res)
> +{
> + if (res->start < (1ULL<<20))
> + return true;
> +
> + switch (type) {
> + case E820_RESERVED:
> + case E820_PRAM:
> + case E820_PMEM:
> + return false;
> + default:
> + return true;
> + }
Sigh. Again an unknown type comes out busy. Busy means the
resource is in use; it does *not* mean "unknown type". It just
forces requesters to ignore the return value of request_region,
losing the protection against double claims, and does not really
prevent anything.
Thanks
Boaz
> +}
> +
> /*
> * Mark e820 reserved areas as busy for the resource manager.
> */
> @@ -952,9 +967,7 @@ void __init e820_reserve_resources(void)
> * pci device BAR resource and insert them later in
> * pcibios_resource_survey()
> */
> - if (((e820.map[i].type != E820_RESERVED) &&
> - (e820.map[i].type != E820_PRAM)) ||
> - res->start < (1ULL<<20)) {
> + if (do_mark_busy(e820.map[i].type, res)) {
> res->flags |= IORESOURCE_BUSY;
> insert_resource(&iomem_resource, res);
> }
> diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
> index dbc8627a5cdf..a116e236ac3f 100644
> --- a/arch/x86/platform/efi/efi.c
> +++ b/arch/x86/platform/efi/efi.c
> @@ -145,6 +145,9 @@ static void __init do_add_efi_memmap(void)
> case EFI_UNUSABLE_MEMORY:
> e820_type = E820_UNUSABLE;
> break;
> + case EFI_PERSISTENT_MEMORY:
> + e820_type = E820_PMEM;
> + break;
> default:
> /*
> * EFI_RESERVED_TYPE EFI_RUNTIME_SERVICES_CODE
> diff --git a/include/linux/efi.h b/include/linux/efi.h
> index cf7e431cbc73..28868504aa17 100644
> --- a/include/linux/efi.h
> +++ b/include/linux/efi.h
> @@ -85,7 +85,8 @@ typedef struct {
> #define EFI_MEMORY_MAPPED_IO 11
> #define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12
> #define EFI_PAL_CODE 13
> -#define EFI_MAX_MEMORY_TYPE 14
> +#define EFI_PERSISTENT_MEMORY 14
> +#define EFI_MAX_MEMORY_TYPE 15
>
> /* Attribute values: */
> #define EFI_MEMORY_UC ((u64)0x0000000000000001ULL) /* uncached */
>
On Friday, April 17, 2015 09:35:30 PM Dan Williams wrote:
> 1/ Autodetect an NFIT table for the ACPI namespace device with _HID of
> "ACPI0012"
>
> 2/ Skeleton implementation to register an NFIT bus.
>
> The NFIT provided by ACPI is the primary method by which platforms will
> discover NVDIMM resources. However, the intent of the
> nfit_bus_descriptor abstraction is to contain "provider" specific
> details, leaving the nd core to be NFIT-provider agnostic. This
> flexibility is exploited in later patches to implement special purpose
> providers of test and custom-defined NFITs.
>
> Cc: <[email protected]>
> Cc: Robert Moore <[email protected]>
> Cc: Rafael J. Wysocki <[email protected]>
> Signed-off-by: Dan Williams <[email protected]>
So as discussed internally, nfit.h will have to wait for ACPICA's
NFIT support, to prevent clashes from happening.
Also please CC *all* *patches* with "ACPI" (or "acpi" etc) anywhere in the
subject/changelog/body to [email protected].
More comments likely to follow.
Thanks!
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
* Dan Williams <[email protected]> wrote:
> Maintainer information and documentation for drivers/block/nd/
>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Boaz Harrosh <[email protected]>
> Cc: H. Peter Anvin <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Neil Brown <[email protected]>
> Cc: Greg KH <[email protected]>
> Signed-off-by: Dan Williams <[email protected]>
> ---
> Documentation/blockdev/nd.txt | 867 +++++++++++++++++++++++++++++++++++++++++
> MAINTAINERS | 34 +-
> 2 files changed, 895 insertions(+), 6 deletions(-)
> create mode 100644 Documentation/blockdev/nd.txt
>
> diff --git a/Documentation/blockdev/nd.txt b/Documentation/blockdev/nd.txt
> new file mode 100644
> index 000000000000..bcfdf21063ab
> --- /dev/null
> +++ b/Documentation/blockdev/nd.txt
> @@ -0,0 +1,867 @@
> + The NFIT-Defined/NVDIMM Sub-system (ND)
> +
> + nd - kernel abi / device-model & ndctl - userspace helper library
> + [email protected]
> + v9: April 17th, 2015
> +
> +
> + Glossary
> +
> + Overview
> + Supporting Documents
> + Git Trees
> +
> + NFIT Terminology and NVDIMM Types
>
> [...]
>
> +The “NVDIMM Firmware Interface Table” (NFIT) [...]
Ok, I'll bite.
So why on earth is this whole concept and the naming itself
('drivers/block/nd/' stands for 'NFIT Defined', apparently) revolving
around a specific 'firmware' mindset and revolving around specific,
weirdly named, overly complicated looking firmware interfaces that
come with their own new weird glossary??
Firmware might be a discovery method - or not. A non-volatile device
might be e820 enumerated, or PCI discovered - potentially with all
discovery handled by the driver.
Why do you restrict this driver to a naming and design that is so
firmware centric?
Discovery matters, but what matters _most_ to devices is actually its
runtime properties and runtime implementation - and I sure hope
firmware has no active role in that!
I really think this is backwards from the get go, it gives me a
feeling of someone having spent way too much time in committee and too
little time spent thinking about simple, proper kernel design and
reusing existing terminology ...
Also:
+ nd - kernel abi / device-model & ndctl - userspace helper library
WTF is a 'kernel ABI'??
Thanks,
Ingo
On Mon, Apr 20, 2015 at 12:06 AM, Ingo Molnar <[email protected]> wrote:
>
> * Dan Williams <[email protected]> wrote:
>
>> Maintainer information and documentation for drivers/block/nd/
>>
>> Cc: Andy Lutomirski <[email protected]>
>> Cc: Boaz Harrosh <[email protected]>
>> Cc: H. Peter Anvin <[email protected]>
>> Cc: Jens Axboe <[email protected]>
>> Cc: Ingo Molnar <[email protected]>
>> Cc: Christoph Hellwig <[email protected]>
>> Cc: Neil Brown <[email protected]>
>> Cc: Greg KH <[email protected]>
>> Signed-off-by: Dan Williams <[email protected]>
>> [..]
>> +The “NVDIMM Firmware Interface Table” (NFIT) [...]
>
> Ok, I'll bite.
>
> So why on earth is this whole concept and the naming itself
> ('drivers/block/nd/' stands for 'NFIT Defined', apparently) revolving
> around a specific 'firmware' mindset and revolving around specific,
> weirdly named, overly complicated looking firmware interfaces that
> come with their own new weird glossary??
There's only three core properties of NVDIMMs that this implementation
cares about.
1/ directly mapped interleaved persistent memory (PMEM)
2/ indirect mmio aperture accessed (windowed) persistent memory (BLK)
3/ the possibility that those 2 access modes may alias the same
on-media addresses
Most of the complexity of the implementation is dealing with aspect 3, but
that complexity can be, and is, bypassed in places.
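The two access modes and the aliasing property can be sketched as a tiny
user-space model (illustrative only; these are not the ND subsystem's
actual structures or names):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model only -- not the ND subsystem's real types. */
enum nd_mode { ND_PMEM, ND_BLK };

struct nd_range {
	enum nd_mode mode;	/* property 1 (PMEM) or 2 (BLK) */
	uint64_t dimm_start;	/* DIMM-relative start of the backing media */
	uint64_t len;
};

/*
 * Property 3: a PMEM mapping and a BLK aperture may reach the same
 * on-media addresses.  Detecting and resolving such overlap is where
 * most of the implementation complexity lives.
 */
static bool nd_ranges_alias(const struct nd_range *a, const struct nd_range *b)
{
	if (a->mode == b->mode)
		return false;	/* aliasing is across the two access modes */
	return a->dimm_start < b->dimm_start + b->len &&
	       b->dimm_start < a->dimm_start + a->len;
}
```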
> Firmware might be a discovery method - or not. A non-volatile device
> might be e820 enumerated, or PCI discovered - potentially with all
> discovery handled by the driver.
PCI attached non-volatile memory is NVMe. ND is handling address
ranges that support direct cpu load store.
> Why do you restrict this driver to a naming and design that is so
> firmware centric?
PMEM, BLK, and the fact that they may alias are the generic properties
that are independent of the specification. Granted some of the NFIT
terminology has leaked past the point of initial table parsing, but
its too early to start claiming "restrictive" design. We already
support three ways of attaching PMEM with varying degrees of backing
complexity, and we're more than willing to beat NFIT back where it
makes sense to accommodate more non-NFIT NVDIMM implementations.
> Discovery matters, but what matters _most_ about a device is actually
> its runtime properties and runtime implementation - and I sure hope
> firmware has no active role in that!
It doesn't. Once PMEM and BLK aliasing are resolved the firmware is
out of the picture. In some cases this aliasing is resolved from the
outset (a simple memory range, type-12, etc.); the bulk of the
implementation is bypassed in that case.
> I really think this is backwards from the get go, it gives me a
> feeling of someone having spent way too much time in committee and too
> little time spent thinking about simple, proper kernel design and
> reusing existing terminology ...
The simple paths are there, in addition to support for the rest of the
spec. Do we have an existing term for a dimm-relative-address in the
kernel? Some of this is simply novel to the kernel.
> Also:
>
> + nd - kernel abi / device-model & ndctl - userspace helper library
>
> WTF is a 'kernel ABI'??
"ABI" like Documentation/ABI/, the sysfs layout and ioctls for passing
a handful of management commands to firmware. Wherever possible all
the slow path configuration is done with sysfs.
[I haven't much time to look through the patches, so only high level
hand wavey comments for now, sorry..]
On Mon, Apr 20, 2015 at 01:14:42AM -0700, Dan Williams wrote:
> > So why on earth is this whole concept and the naming itself
> > ('drivers/block/nd/' stands for 'NFIT Defined', apparently) revolving
> > around a specific 'firmware' mindset and revolving around specific,
> > weirdly named, overly complicated looking firmware interfaces that
> > come with their own new weird glossary??
>
> There's only three core properties of NVDIMMs that this implementation
> cares about.
>
> 1/ directly mapped interleaved persistent memory (PMEM)
> 2/ indirect mmio aperture accessed (windowed) persistent memory (BLK)
> 3/ the possibility that those 2 access modes may alias the same
> on-media addresses
>
> Most of the complexity of the implementation is dealing with aspect 3, but
> that complexity can be, and is, bypassed in places.
>
> > Firmware might be a discovery method - or not. A non-volatile device
> > might be e820 enumerated, or PCI discovered - potentially with all
> > discovery handled by the driver.
>
> PCI attached non-volatile memory is NVMe. ND is handling address
> ranges that support direct cpu load store.
But those can be attached in all kinds of different ways. It's not like
this is a new thing - they've been used in Storage OEM systems for a long
time, both on Intel platforms and other CPUs.
And the current pmem.c can also handle cases like a PCI card exposing
a large mmio region that can be used as persistent memory.
So a big vote from me for naming this the pmem subsystem and trying
to have names not too tied to one specific firmware interface. Once
I go through this in more detail I'll comment more.
On Mon, Apr 20, 2015 at 5:53 AM, Christoph Hellwig <[email protected]> wrote:
> [I haven't much time to look through the patches, so only high level
> hand wavey comments for now, sorry..]
>
> On Mon, Apr 20, 2015 at 01:14:42AM -0700, Dan Williams wrote:
>> > So why on earth is this whole concept and the naming itself
>> > ('drivers/block/nd/' stands for 'NFIT Defined', apparently) revolving
>> > around a specific 'firmware' mindset and revolving around specific,
>> > weirdly named, overly complicated looking firmware interfaces that
>> > come with their own new weird glossary??
>>
>> There's only three core properties of NVDIMMs that this implementation
>> cares about.
>>
>> 1/ directly mapped interleaved persistent memory (PMEM)
>> 2/ indirect mmio aperture accessed (windowed) persistent memory (BLK)
>> 3/ the possibility that those 2 access modes may alias the same
>> on-media addresses
>>
>> Most of the complexity of the implementation is dealing with aspect 3, but
>> that complexity can be, and is, bypassed in places.
>>
>> > Firmware might be a discovery method - or not. A non-volatile device
>> > might be e820 enumerated, or PCI discovered - potentially with all
>> > discovery handled by the driver.
>>
>> PCI attached non-volatile memory is NVMe. ND is handling address
>> ranges that support direct cpu load store.
>
> But those can be attached in all kinds of different ways. It's not like
> this is a new thing - they've been used in Storage OEM systems for a long
> time, both on Intel platforms and other CPUs.
>
> And the current pmem.c can also handle cases like a PCI card exposing
> a large mmio region that can be used as persistent memory.
>
> So a big vote from me for naming this the pmem subsystem and trying
> to have names not too tied to one specific firmware interface.
While I understand a kernel developer's natural aversion to anything
committee defined, NFIT does seem to be a superset of all the base
mechanisms needed to describe NVDIMM resources. Also, it's worth
noting that the meaning of 'N' in ND is purposefully vague. The whole
point of listing it as "Nfit-Defined / NvDimm Subsystem" was to
indicate that ND is generic and could also refer generally to
"Non-volatile-Devices". What's missing, in my opinion, is an existing
NVDIMM platform that would like to leverage some of the base enabling
that this sub-system provides and will never have an NFIT capability.
In the absence of alternative concerns/implementations we reached for
NFIT terminology out of convenience, but I'm all for deprecating
"NFIT-Defined" as one of the meanings of 'ND'.
> Once I go through this in more detail I'll comment more.
Sounds good.
On Sun, Apr 19, 2015 at 12:46 AM, Boaz Harrosh <[email protected]> wrote:
> On 04/18/2015 04:35 AM, Dan Williams wrote:
[..]
>> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
>> index 11cc7d54ec3f..410af501a941 100644
>> --- a/arch/x86/kernel/e820.c
>> +++ b/arch/x86/kernel/e820.c
>> @@ -137,6 +137,8 @@ static void __init e820_print_type(u32 type)
>> case E820_RESERVED_KERN:
>> printk(KERN_CONT "usable");
>> break;
>> + case E820_PMEM:
>> + case E820_PRAM:
>
> NACK!
>
> This is the most important print in the system and it is a pure
> user interface. It has no effect whatsoever on functionality.
> It is to inform the user through dmesg what the content of the
> table is.
It still describes how the memory is used, which is "reserved" for a
driver. I don't see how increasing the verbosity here improves debugging
given the alternatives; see below...
>
>> case E820_RESERVED:
>> printk(KERN_CONT "reserved");
>> break;
>> @@ -149,9 +151,6 @@ static void __init e820_print_type(u32 type)
>> case E820_UNUSABLE:
>> printk(KERN_CONT "unusable");
>> break;
>
> + case E820_PMEM:
>> - case E820_PRAM:
>> - printk(KERN_CONT "persistent (type %u)", type);
>> - break;
>
> Just add the new (7) entry here please. Here Christoph has bike shed
> it for you.
/proc/iomem has these details to differentiate PRAM and PMEM as well
as show which driver(s)/device(s) have claimed the range(s).
>> default:
>> printk(KERN_CONT "type %u", type);
>> break;
Here is where you can see undefined/unknown types.
>> @@ -919,10 +918,26 @@ static inline const char *e820_type_to_string(int e820_type)
>> case E820_NVS: return "ACPI Non-volatile Storage";
>> case E820_UNUSABLE: return "Unusable memory";
>> case E820_PRAM: return "Persistent RAM";
>> + case E820_PMEM: return "Persistent I/O Memory";
>> default: return "reserved";
>> }
>> }
>>
>> +static bool do_mark_busy(u32 type, struct resource *res)
>> +{
>> + if (res->start < (1ULL<<20))
>> + return true;
>> +
>> + switch (type) {
>> + case E820_RESERVED:
>> + case E820_PRAM:
>> + case E820_PMEM:
>> + return false;
>> + default:
>> + return true;
>> + }
>
> Sigh. Again an unknown type comes out busy. Busy means
> resource used. It does *not* mean "unknown type".
>
> It just forces users to ignore the return value of
> request_region, and not be protected against double claims. It
> does not really prevent anything.
You're free to submit a standalone patch to change this policy... see
the new "OEM-reserved" memory types in ACPI 6.
That said, I think we're better off with the current policy. If
unknown memory types had been treated as permanently busy back when we
initially started experimenting with NVDIMM support (2010), I doubt
the e820-type-12 prototype would ever have escaped the lab, and we
could have avoided a good amount of confusion.
On Mon, Apr 20, 2015 at 8:57 AM, Dan Williams <[email protected]> wrote:
> On Mon, Apr 20, 2015 at 5:53 AM, Christoph Hellwig <[email protected]> wrote:
>> Once I go through this in more detail I'll comment more.
>
> Sounds good.
Given that the ACPICA folks are going to define their own nfit.h with
possibly different structure names, that damage should be limited to
just acpi.c. Currently, changing nfit.h structure field names would
impact multiple files. It's a straightforward rework to disentangle;
I'll post patches soon.
On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
:
> +
> +static int nd_mem_init(struct nd_bus *nd_bus)
> +{
> + struct nd_spa *nd_spa;
> +
> + /*
> + * For each SPA-DCR address range find its corresponding
> + * MEMDEV(s). From each MEMDEV find the corresponding DCR.
> + * Then, try to find a SPA-BDW and a corresponding BDW that
> + * references the DCR. Throw it all into an nd_mem object.
> + * Note, that BDWs are optional.
> + */
> + list_for_each_entry(nd_spa, &nd_bus->spas, list) {
> + u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
> + int type = nfit_spa_type(nd_spa->nfit_spa);
> + struct nd_mem *nd_mem, *found;
> + struct nd_memdev *nd_memdev;
> + u16 dcr_index;
> +
> + if (type != NFIT_SPA_DCR)
> + continue;
This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
Control Region GUID, for initializing an nd_mem object. However,
battery-backed DIMMs do not have such control region SPA. IIUC, the
NFIT spec does not require NFIT_SPA_DCR.
Can you change this function to work with NFIT_SPA_PM as well?
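A minimal stand-alone sketch of the requested relaxation (the type
constants and helper below are stand-ins for illustration, not the
driver's nfit.h definitions): since battery-backed DIMMs only expose an
NFIT_SPA_PM range, the init loop should accept either range type when
seeding an nd_mem object.

```c
#include <stdbool.h>

/* Stand-in SPA type codes for illustration; the real values come from
 * the driver's nfit.h, which this sketch does not reproduce. */
enum { NFIT_SPA_PM = 1, NFIT_SPA_DCR = 2, NFIT_SPA_BDW = 3 };

/*
 * The quoted loop skips everything but NFIT_SPA_DCR.  The suggested
 * relaxation: an nd_mem object can be seeded from either a control
 * region SPA or a plain persistent-memory SPA.
 */
static bool spa_seeds_nd_mem(int type)
{
	return type == NFIT_SPA_DCR || type == NFIT_SPA_PM;
}
```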
Thanks,
-Toshi
On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> > :
> >> +
> >> +static int nd_mem_init(struct nd_bus *nd_bus)
> >> +{
> >> + struct nd_spa *nd_spa;
> >> +
> >> + /*
> >> + * For each SPA-DCR address range find its corresponding
> >> + * MEMDEV(s). From each MEMDEV find the corresponding DCR.
> >> + * Then, try to find a SPA-BDW and a corresponding BDW that
> >> + * references the DCR. Throw it all into an nd_mem object.
> >> + * Note, that BDWs are optional.
> >> + */
> >> + list_for_each_entry(nd_spa, &nd_bus->spas, list) {
> >> + u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
> >> + int type = nfit_spa_type(nd_spa->nfit_spa);
> >> + struct nd_mem *nd_mem, *found;
> >> + struct nd_memdev *nd_memdev;
> >> + u16 dcr_index;
> >> +
> >> + if (type != NFIT_SPA_DCR)
> >> + continue;
> >
> > This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
> > Control Region GUID, for initializing an nd_mem object. However,
> > battery-backed DIMMs do not have such control region SPA. IIUC, the
> > NFIT spec does not require NFIT_SPA_DCR.
> >
> > Can you change this function to work with NFIT_SPA_PM as well?
>
> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
> nd_region_create() in patch 10.
If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
nd_bus_xxx() calls. So, nd_region_create() won't be called.
nd_bus_init_interleave_sets() fails because init_interleave_set()
returns -ENODEV if (!nd_mem).
BTW, there are two nd_bus_probe() in bus.c and core.c, which is
confusing.
Thanks,
-Toshi
On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> :
>> +
>> +static int nd_mem_init(struct nd_bus *nd_bus)
>> +{
>> + struct nd_spa *nd_spa;
>> +
>> + /*
>> + * For each SPA-DCR address range find its corresponding
>> + * MEMDEV(s). From each MEMDEV find the corresponding DCR.
>> + * Then, try to find a SPA-BDW and a corresponding BDW that
>> + * references the DCR. Throw it all into an nd_mem object.
>> + * Note, that BDWs are optional.
>> + */
>> + list_for_each_entry(nd_spa, &nd_bus->spas, list) {
>> + u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
>> + int type = nfit_spa_type(nd_spa->nfit_spa);
>> + struct nd_mem *nd_mem, *found;
>> + struct nd_memdev *nd_memdev;
>> + u16 dcr_index;
>> +
>> + if (type != NFIT_SPA_DCR)
>> + continue;
>
> This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
> Control Region GUID, for initializing an nd_mem object. However,
> battery-backed DIMMs do not have such control region SPA. IIUC, the
> NFIT spec does not require NFIT_SPA_DCR.
>
> Can you change this function to work with NFIT_SPA_PM as well?
NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
nd_region_create() in patch 10.
On Tue, 2015-04-21 at 13:35 -0700, Dan Williams wrote:
> On Tue, Apr 21, 2015 at 12:55 PM, Toshi Kani <[email protected]> wrote:
> > On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
> >> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
> >> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> >> > :
> >> >> +
> >> >> +static int nd_mem_init(struct nd_bus *nd_bus)
> >> >> +{
> >> >> + struct nd_spa *nd_spa;
> >> >> +
> >> >> + /*
> >> >> + * For each SPA-DCR address range find its corresponding
> >> >> + * MEMDEV(s). From each MEMDEV find the corresponding DCR.
> >> >> + * Then, try to find a SPA-BDW and a corresponding BDW that
> >> >> + * references the DCR. Throw it all into an nd_mem object.
> >> >> + * Note, that BDWs are optional.
> >> >> + */
> >> >> + list_for_each_entry(nd_spa, &nd_bus->spas, list) {
> >> >> + u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
> >> >> + int type = nfit_spa_type(nd_spa->nfit_spa);
> >> >> + struct nd_mem *nd_mem, *found;
> >> >> + struct nd_memdev *nd_memdev;
> >> >> + u16 dcr_index;
> >> >> +
> >> >> + if (type != NFIT_SPA_DCR)
> >> >> + continue;
> >> >
> >> > This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
> >> > Control Region GUID, for initializing an nd_mem object. However,
> >> > battery-backed DIMMs do not have such control region SPA. IIUC, the
> >> > NFIT spec does not require NFIT_SPA_DCR.
> >> >
> >> > Can you change this function to work with NFIT_SPA_PM as well?
> >>
> >> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
> >> nd_region_create() in patch 10.
> >
> > If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
> > core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
> > nd_bus_xxx() calls. So, nd_region_create() won't be called.
> >
> > nd_bus_init_interleave_sets() fails because init_interleave_set()
> > returns -ENODEV if (!nd_mem).
>
> Ah, ok your test case is specifying PMEM backed by memory device
> info. We have a test case for simple ranges (nfit_test1_setup()), but
> it doesn't hit this bug because it does not specify any memory-device
> tables.
>
> Thanks, will fix this in v2 of the patch set.
>
> > BTW, there are two nd_bus_probe() in bus.c and core.c, which is
> > confusing.
>
> Ok, will fix this as well in the v2 posting.
Cool! Thanks Dan!
-Toshi
On Tue, Apr 21, 2015 at 12:55 PM, Toshi Kani <[email protected]> wrote:
> On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
>> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
>> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>> > :
>> >> +
>> >> +static int nd_mem_init(struct nd_bus *nd_bus)
>> >> +{
>> >> + struct nd_spa *nd_spa;
>> >> +
>> >> + /*
>> >> + * For each SPA-DCR address range find its corresponding
>> >> + * MEMDEV(s). From each MEMDEV find the corresponding DCR.
>> >> + * Then, try to find a SPA-BDW and a corresponding BDW that
>> >> + * references the DCR. Throw it all into an nd_mem object.
>> >> + * Note, that BDWs are optional.
>> >> + */
>> >> + list_for_each_entry(nd_spa, &nd_bus->spas, list) {
>> >> + u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
>> >> + int type = nfit_spa_type(nd_spa->nfit_spa);
>> >> + struct nd_mem *nd_mem, *found;
>> >> + struct nd_memdev *nd_memdev;
>> >> + u16 dcr_index;
>> >> +
>> >> + if (type != NFIT_SPA_DCR)
>> >> + continue;
>> >
>> > This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
>> > Control Region GUID, for initializing an nd_mem object. However,
>> > battery-backed DIMMs do not have such control region SPA. IIUC, the
>> > NFIT spec does not require NFIT_SPA_DCR.
>> >
>> > Can you change this function to work with NFIT_SPA_PM as well?
>>
>> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
>> nd_region_create() in patch 10.
>
> If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
> core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
> nd_bus_xxx() calls. So, nd_region_create() won't be called.
>
> nd_bus_init_interleave_sets() fails because init_interleave_set()
> returns -ENODEV if (!nd_mem).
Ah, ok your test case is specifying PMEM backed by memory device
info. We have a test case for simple ranges (nfit_test1_setup()), but
it doesn't hit this bug because it does not specify any memory-device
tables.
Thanks, will fix this in v2 of the patch set.
> BTW, there are two nd_bus_probe() in bus.c and core.c, which is
> confusing.
Ok, will fix this as well in the v2 posting.
On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> Most configuration of the nd-subsystem is done via nd-sysfs. However,
> the NFIT specification defines a small set of messages that can be
> passed to the subsystem via platform-firmware-defined methods. The
> command set (as of the current version of the NFIT-DSM spec) is:
>
> NFIT_CMD_SMART: media health and diagnostics
> NFIT_CMD_GET_CONFIG_SIZE: size of the label space
> NFIT_CMD_GET_CONFIG_DATA: read label
> NFIT_CMD_SET_CONFIG_DATA: write label
> NFIT_CMD_VENDOR: vendor-specific command passthrough
> NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
> NFIT_CMD_START_ARS: initiate scrubbing
> NFIT_CMD_QUERY_ARS: report on scrubbing state
> NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
>
> Most of the commands target a specific dimm. However, the
> address-range-scrubbing commands target the entire NFIT-bus / platform.
> The 'commands' attribute of an nd-bus, or an nd-dimm enumerate the
> supported commands for that object.
>
> Cc: <[email protected]>
> Cc: Robert Moore <[email protected]>
> Cc: Rafael J. Wysocki <[email protected]>
> Reported-by: Nicholas Moulin <[email protected]>
> Signed-off-by: Dan Williams <[email protected]>
> ---
> drivers/block/nd/Kconfig | 11 +
> drivers/block/nd/acpi.c | 333 +++++++++++++++++++++++++++++++++++++++++
> drivers/block/nd/bus.c | 230 ++++++++++++++++++++++++++++
> drivers/block/nd/core.c | 17 ++
> drivers/block/nd/dimm_devs.c | 69 ++++++++
> drivers/block/nd/nd-private.h | 11 +
> drivers/block/nd/nd.h | 21 +++
> drivers/block/nd/test/nfit.c | 89 +++++++++++
> include/uapi/linux/Kbuild | 1
> include/uapi/linux/ndctl.h | 178 ++++++++++++++++++++++
> 10 files changed, 950 insertions(+), 10 deletions(-)
> create mode 100644 drivers/block/nd/nd.h
> create mode 100644 include/uapi/linux/ndctl.h
>
> diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
> index 0106b3807202..6c15d10bf4e0 100644
> --- a/drivers/block/nd/Kconfig
> +++ b/drivers/block/nd/Kconfig
> @@ -42,6 +42,17 @@ config NFIT_ACPI
> enables the core to craft ACPI._DSM messages for platform/dimm
> configuration.
>
> +config NFIT_ACPI_DEBUG
> + bool "NFIT ACPI: Turn on extra debugging"
> + depends on NFIT_ACPI
> + depends on DYNAMIC_DEBUG
> + default n
> + help
> + Enabling this option causes the nd_acpi driver to dump the
> + input and output buffers of _DSM operations on the ACPI0012
> + device, which can be very verbose. Leave it disabled unless
> + you are debugging a hardware / firmware issue.
> +
> config NFIT_TEST
> tristate "NFIT TEST: Manufactured NFIT for interface testing"
> depends on DMA_CMA
> diff --git a/drivers/block/nd/acpi.c b/drivers/block/nd/acpi.c
> index 48db723d7a90..073ff28fdbfe 100644
> --- a/drivers/block/nd/acpi.c
> +++ b/drivers/block/nd/acpi.c
> @@ -13,8 +13,10 @@
> #include <linux/list.h>
> #include <linux/acpi.h>
> #include <linux/mutex.h>
> +#include <linux/ndctl.h>
> #include <linux/module.h>
> #include "nfit.h"
> +#include "nd.h"
>
> enum {
> NFIT_ACPI_NOTIFY_TABLE = 0x80,
> @@ -26,20 +28,330 @@ struct acpi_nfit {
> struct nd_bus *nd_bus;
> };
>
> +static struct acpi_nfit *to_acpi_nfit(struct nfit_bus_descriptor *nfit_desc)
> +{
> + return container_of(nfit_desc, struct acpi_nfit, nfit_desc);
> +}
> +
> +#define NFIT_ACPI_MAX_ELEM 4
> +struct nfit_cmd_desc {
> + int in_num;
> + int out_num;
> + u32 in_sizes[NFIT_ACPI_MAX_ELEM];
> + int out_sizes[NFIT_ACPI_MAX_ELEM];
> +};
> +
> +static const struct nfit_cmd_desc nfit_dimm_descs[] = {
> + [NFIT_CMD_IMPLEMENTED] = { },
> + [NFIT_CMD_SMART] = {
> + .out_num = 2,
> + .out_sizes = { 4, 8, },
> + },
> + [NFIT_CMD_SMART_THRESHOLD] = {
> + .out_num = 2,
> + .out_sizes = { 4, 8, },
> + },
> + [NFIT_CMD_DIMM_FLAGS] = {
> + .out_num = 2,
> + .out_sizes = { 4, 4 },
> + },
> + [NFIT_CMD_GET_CONFIG_SIZE] = {
> + .out_num = 3,
> + .out_sizes = { 4, 4, 4, },
> + },
> + [NFIT_CMD_GET_CONFIG_DATA] = {
> + .in_num = 2,
> + .in_sizes = { 4, 4, },
> + .out_num = 2,
> + .out_sizes = { 4, UINT_MAX, },
> + },
> + [NFIT_CMD_SET_CONFIG_DATA] = {
> + .in_num = 3,
> + .in_sizes = { 4, 4, UINT_MAX, },
> + .out_num = 1,
> + .out_sizes = { 4, },
> + },
> + [NFIT_CMD_VENDOR] = {
> + .in_num = 3,
> + .in_sizes = { 4, 4, UINT_MAX, },
> + .out_num = 3,
> + .out_sizes = { 4, 4, UINT_MAX, },
> + },
> +};
> +
> +static const struct nfit_cmd_desc nfit_acpi_descs[] = {
> + [NFIT_CMD_IMPLEMENTED] = { },
> + [NFIT_CMD_ARS_CAP] = {
> + .in_num = 2,
> + .in_sizes = { 8, 8, },
> + .out_num = 2,
> + .out_sizes = { 4, 4, },
> + },
> + [NFIT_CMD_ARS_START] = {
> + .in_num = 4,
> + .in_sizes = { 8, 8, 2, 6, },
> + .out_num = 1,
> + .out_sizes = { 4, },
> + },
> + [NFIT_CMD_ARS_QUERY] = {
> + .out_num = 2,
> + .out_sizes = { 4, UINT_MAX, },
> + },
> +};
> +
> +static u32 to_cmd_in_size(struct nd_dimm *nd_dimm, int cmd,
> + const struct nfit_cmd_desc *desc, int idx, void *buf)
> +{
> + if (idx >= desc->in_num)
> + return UINT_MAX;
> +
> + if (desc->in_sizes[idx] < UINT_MAX)
> + return desc->in_sizes[idx];
> +
> + if (nd_dimm && cmd == NFIT_CMD_SET_CONFIG_DATA && idx == 2) {
> + struct nfit_cmd_set_config_hdr *hdr = buf;
> +
> + return hdr->in_length;
> + } else if (nd_dimm && cmd == NFIT_CMD_VENDOR && idx == 2) {
> + struct nfit_cmd_vendor_hdr *hdr = buf;
> +
> + return hdr->in_length;
> + }
> +
> + return UINT_MAX;
> +}
> +
> +static u32 to_cmd_out_size(struct nd_dimm *nd_dimm, int cmd,
> + const struct nfit_cmd_desc *desc, int idx,
> + void *buf, u32 out_length, u32 offset)
> +{
> + if (idx >= desc->out_num)
> + return UINT_MAX;
> +
> + if (desc->out_sizes[idx] < UINT_MAX)
> + return desc->out_sizes[idx];
> +
> + if (offset >= out_length)
> + return UINT_MAX;
> +
> + if (nd_dimm && cmd == NFIT_CMD_GET_CONFIG_DATA && idx == 1)
> + return out_length - offset;
> + else if (nd_dimm && cmd == NFIT_CMD_VENDOR && idx == 2)
> + return out_length - offset;
> + else if (!nd_dimm && cmd == NFIT_CMD_ARS_QUERY && idx == 1)
> + return out_length - offset;
> +
> + return UINT_MAX;
> +}
> +
> +static u8 nd_acpi_uuids[2][16]; /* initialized at nd_acpi_init */
> +
> +static u8 *nd_acpi_bus_uuid(void)
> +{
> + return nd_acpi_uuids[0];
> +}
> +
> +static u8 *nd_acpi_dimm_uuid(void)
> +{
> + return nd_acpi_uuids[1];
> +}
> +
> static int nd_acpi_ctl(struct nfit_bus_descriptor *nfit_desc,
> struct nd_dimm *nd_dimm, unsigned int cmd, void *buf,
> unsigned int buf_len)
> {
> - return -ENOTTY;
> + struct acpi_nfit *nfit = to_acpi_nfit(nfit_desc);
> + union acpi_object in_obj, in_buf, *out_obj;
> + const struct nfit_cmd_desc *desc = NULL;
> + struct device *dev = &nfit->dev->dev;
> + const char *cmd_name, *dimm_name;
> + unsigned long dsm_mask;
> + acpi_handle handle;
> + u32 offset;
> + int rc, i;
> + u8 *uuid;
> +
> + if (nd_dimm) {
> + struct acpi_device *adev = nd_dimm_get_pdata(nd_dimm);
> +
> + if (cmd < ARRAY_SIZE(nfit_dimm_descs))
> + desc = &nfit_dimm_descs[cmd];
> + cmd_name = nfit_dimm_cmd_name(cmd);
> + dsm_mask = nd_dimm_get_dsm_mask(nd_dimm);
> + handle = adev->handle;
> + uuid = nd_acpi_dimm_uuid();
> + dimm_name = dev_name(&adev->dev);
> + } else {
> + if (cmd < ARRAY_SIZE(nfit_acpi_descs))
> + desc = &nfit_acpi_descs[cmd];
> + cmd_name = nfit_bus_cmd_name(cmd);
> + dsm_mask = nfit_desc->dsm_mask;
> + handle = nfit->dev->handle;
> + uuid = nd_acpi_bus_uuid();
> + dimm_name = "bus";
> + }
> +
> + if (!desc || (cmd && (desc->out_num + desc->in_num == 0)))
> + return -ENOTTY;
> +
> + if (!test_bit(cmd, &dsm_mask))
> + return -ENOTTY;
> +
> + in_obj.type = ACPI_TYPE_PACKAGE;
> + in_obj.package.count = 1;
> + in_obj.package.elements = &in_buf;
> + in_buf.type = ACPI_TYPE_BUFFER;
> + in_buf.buffer.pointer = buf;
> + in_buf.buffer.length = 0;
> +
> + /* double check that the nfit_acpi_cmd_descs table is self consistent */
> + if (desc->in_num > NFIT_ACPI_MAX_ELEM) {
> + WARN_ON_ONCE(1);
> + return -ENXIO;
> + }
> +
> + for (i = 0; i < desc->in_num; i++) {
> + u32 in_size;
> +
> + in_size = to_cmd_in_size(nd_dimm, cmd, desc, i, buf);
> + if (in_size == UINT_MAX) {
> + dev_err(dev, "%s:%s unknown input size cmd: %s field: %d\n",
> + __func__, dimm_name, cmd_name, i);
> + return -ENXIO;
> + }
> + in_buf.buffer.length += in_size;
> + if (in_buf.buffer.length > buf_len) {
> + dev_err(dev, "%s:%s input underrun cmd: %s field: %d\n",
> + __func__, dimm_name, cmd_name, i);
> + return -ENXIO;
> + }
> + }
> +
> + dev_dbg(dev, "%s:%s cmd: %s input length: %d\n", __func__, dimm_name,
> + cmd_name, in_buf.buffer.length);
> + if (IS_ENABLED(CONFIG_NFIT_ACPI_DEBUG))
> + print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4,
> + 4, in_buf.buffer.pointer, min_t(u32, 128,
> + in_buf.buffer.length), true);
> +
> + out_obj = acpi_evaluate_dsm(handle, uuid, 1, cmd, &in_obj);
> + if (!out_obj) {
> + dev_dbg(dev, "%s:%s _DSM failed cmd: %s\n", __func__, dimm_name,
> + cmd_name);
> + return -EINVAL;
> + }
> +
> + if (out_obj->package.type != ACPI_TYPE_BUFFER) {
> + dev_dbg(dev, "%s:%s unexpected output object type cmd: %s type: %d\n",
> + __func__, dimm_name, cmd_name, out_obj->type);
> + rc = -EINVAL;
> + goto out;
> + }
> +
> + dev_dbg(dev, "%s:%s cmd: %s output length: %d\n", __func__, dimm_name,
> + cmd_name, out_obj->buffer.length);
> + if (IS_ENABLED(CONFIG_NFIT_ACPI_DEBUG))
> + print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4,
> + 4, out_obj->buffer.pointer, min_t(u32, 128,
> + out_obj->buffer.length), true);
> +
> + for (i = 0, offset = 0; i < desc->out_num; i++) {
> + u32 out_size = to_cmd_out_size(nd_dimm, cmd, desc, i, buf,
> + out_obj->buffer.length, offset);
> +
> + if (out_size == UINT_MAX) {
> + dev_dbg(dev, "%s:%s unknown output size cmd: %s field: %d\n",
> + __func__, dimm_name, cmd_name, i);
> + break;
> + }
> +
> + if (offset + out_size > out_obj->buffer.length) {
> + dev_dbg(dev, "%s:%s output object underflow cmd: %s field: %d\n",
> + __func__, dimm_name, cmd_name, i);
> + break;
> + }
> +
> + if (in_buf.buffer.length + offset + out_size > buf_len) {
> + dev_dbg(dev, "%s:%s output overrun cmd: %s field: %d\n",
> + __func__, dimm_name, cmd_name, i);
> + rc = -ENXIO;
> + goto out;
> + }
> + memcpy(buf + in_buf.buffer.length + offset,
> + out_obj->buffer.pointer + offset, out_size);
> + offset += out_size;
> + }
> + if (offset + in_buf.buffer.length < buf_len) {
> + if (i >= 1) {
> + /*
> + * status valid, return the number of bytes left
> + * unfilled in the output buffer
> + */
> + rc = buf_len - offset - in_buf.buffer.length;
> + } else {
> + dev_err(dev, "%s:%s underrun cmd: %s buf_len: %d out_len: %d\n",
> + __func__, dimm_name, cmd_name, buf_len, offset);
> + rc = -ENXIO;
> + }
> + } else
> + rc = 0;
> +
> + out:
> + ACPI_FREE(out_obj);
> +
> + return rc;
> +}
> +
> +static int nd_acpi_add_dimm(struct nfit_bus_descriptor *nfit_desc,
> + struct nd_dimm *nd_dimm)
> +{
> + struct acpi_nfit *nfit = to_acpi_nfit(nfit_desc);
> + u32 nfit_handle = to_nfit_handle(nd_dimm);
> + struct device *dev = &nfit->dev->dev;
> + struct acpi_device *acpi_dimm;
> + unsigned long dsm_mask = 0;
> + u8 *uuid = nd_acpi_dimm_uuid();
> + unsigned long long sta;
> + int i, rc = -ENODEV;
> + acpi_status status;
> +
> + acpi_dimm = acpi_find_child_device(nfit->dev, nfit_handle, false);
> + if (!acpi_dimm) {
> + dev_err(dev, "no ACPI.NFIT device with _ADR %#x, disabling...\n",
> + nfit_handle);
> + return -ENODEV;
> + }
> +
> + status = acpi_evaluate_integer(acpi_dimm->handle, "_STA", NULL, &sta);
> + if (status == AE_NOT_FOUND)
> + dev_err(dev, "%s missing _STA, disabling...\n",
> + dev_name(&acpi_dimm->dev));
I do not think it is correct to mark a DIMM _ADR object disabled when it
has no _STA. The ACPI 6.0 spec states the following:
- Section 6.3.7 _STA, "If a device object describes a device that is
not on an enumerable bus and the device object does not have an _STA
object, then OSPM assumes that the device is present, enabled, shown in
the UI, and functioning."
- Section 9.20.1 Hot Plug Support, "1. Prior to hot add of the NVDIMM,
the corresponding ACPI Name Space devices, NVD1, NVD2 return an address
from _ADR object (NFIT Device handle) which does not match any entries
present in NFIT (either the static or from _FIT) indicating that the
corresponding NVDIMM is not present."
So, in this case, it should treat the DIMM object as enabled or look up
the NFIT table to check for its presence.
Thanks,
-Toshi
On Tue, Apr 21, 2015 at 2:20 PM, Toshi Kani <[email protected]> wrote:
> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>> Most configuration of the nd-subsystem is done via nd-sysfs. However,
>> the NFIT specification defines a small set of messages that can be
>> passed to the subsystem via platform-firmware-defined methods. The
>> command set (as of the current version of the NFIT-DSM spec) is:
>>
>> NFIT_CMD_SMART: media health and diagnostics
>> NFIT_CMD_GET_CONFIG_SIZE: size of the label space
>> NFIT_CMD_GET_CONFIG_DATA: read label
>> NFIT_CMD_SET_CONFIG_DATA: write label
>> NFIT_CMD_VENDOR: vendor-specific command passthrough
>> NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
>> NFIT_CMD_ARS_START: initiate scrubbing
>> NFIT_CMD_ARS_QUERY: report on scrubbing state
>> NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
>>
>> Most of the commands target a specific dimm. However, the
>> address-range-scrubbing commands target the entire NFIT-bus / platform.
>> The 'commands' attribute of an nd-bus or an nd-dimm enumerates the
>> supported commands for that object.
>>
>> Cc: <[email protected]>
>> Cc: Robert Moore <[email protected]>
>> Cc: Rafael J. Wysocki <[email protected]>
>> Reported-by: Nicholas Moulin <[email protected]>
>> Signed-off-by: Dan Williams <[email protected]>
>> ---
>> drivers/block/nd/Kconfig | 11 +
>> drivers/block/nd/acpi.c | 333 +++++++++++++++++++++++++++++++++++++++++
>> drivers/block/nd/bus.c | 230 ++++++++++++++++++++++++++++
>> drivers/block/nd/core.c | 17 ++
>> drivers/block/nd/dimm_devs.c | 69 ++++++++
>> drivers/block/nd/nd-private.h | 11 +
>> drivers/block/nd/nd.h | 21 +++
>> drivers/block/nd/test/nfit.c | 89 +++++++++++
>> include/uapi/linux/Kbuild | 1
>> include/uapi/linux/ndctl.h | 178 ++++++++++++++++++++++
>> 10 files changed, 950 insertions(+), 10 deletions(-)
>> create mode 100644 drivers/block/nd/nd.h
>> create mode 100644 include/uapi/linux/ndctl.h
>>
>> diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
>> index 0106b3807202..6c15d10bf4e0 100644
>> --- a/drivers/block/nd/Kconfig
>> +++ b/drivers/block/nd/Kconfig
>> @@ -42,6 +42,17 @@ config NFIT_ACPI
>> enables the core to craft ACPI._DSM messages for platform/dimm
>> configuration.
>>
>> +config NFIT_ACPI_DEBUG
>> + bool "NFIT ACPI: Turn on extra debugging"
>> + depends on NFIT_ACPI
>> + depends on DYNAMIC_DEBUG
>> + default n
>> + help
>> + Enabling this option causes the nd_acpi driver to dump the
>> + input and output buffers of _DSM operations on the ACPI0012
>> + device, which can be very verbose. Leave it disabled unless
>> + you are debugging a hardware / firmware issue.
>> +
>> config NFIT_TEST
>> tristate "NFIT TEST: Manufactured NFIT for interface testing"
>> depends on DMA_CMA
>> diff --git a/drivers/block/nd/acpi.c b/drivers/block/nd/acpi.c
>> index 48db723d7a90..073ff28fdbfe 100644
>> --- a/drivers/block/nd/acpi.c
>> +++ b/drivers/block/nd/acpi.c
>> @@ -13,8 +13,10 @@
>> #include <linux/list.h>
>> #include <linux/acpi.h>
>> #include <linux/mutex.h>
>> +#include <linux/ndctl.h>
>> #include <linux/module.h>
>> #include "nfit.h"
>> +#include "nd.h"
>>
>> enum {
>> NFIT_ACPI_NOTIFY_TABLE = 0x80,
>> @@ -26,20 +28,330 @@ struct acpi_nfit {
>> struct nd_bus *nd_bus;
>> };
>>
>> +static struct acpi_nfit *to_acpi_nfit(struct nfit_bus_descriptor *nfit_desc)
>> +{
>> + return container_of(nfit_desc, struct acpi_nfit, nfit_desc);
>> +}
>> +
>> +#define NFIT_ACPI_MAX_ELEM 4
>> +struct nfit_cmd_desc {
>> + int in_num;
>> + int out_num;
>> + u32 in_sizes[NFIT_ACPI_MAX_ELEM];
>> + int out_sizes[NFIT_ACPI_MAX_ELEM];
>> +};
>> +
>> +static const struct nfit_cmd_desc nfit_dimm_descs[] = {
>> + [NFIT_CMD_IMPLEMENTED] = { },
>> + [NFIT_CMD_SMART] = {
>> + .out_num = 2,
>> + .out_sizes = { 4, 8, },
>> + },
>> + [NFIT_CMD_SMART_THRESHOLD] = {
>> + .out_num = 2,
>> + .out_sizes = { 4, 8, },
>> + },
>> + [NFIT_CMD_DIMM_FLAGS] = {
>> + .out_num = 2,
>> + .out_sizes = { 4, 4 },
>> + },
>> + [NFIT_CMD_GET_CONFIG_SIZE] = {
>> + .out_num = 3,
>> + .out_sizes = { 4, 4, 4, },
>> + },
>> + [NFIT_CMD_GET_CONFIG_DATA] = {
>> + .in_num = 2,
>> + .in_sizes = { 4, 4, },
>> + .out_num = 2,
>> + .out_sizes = { 4, UINT_MAX, },
>> + },
>> + [NFIT_CMD_SET_CONFIG_DATA] = {
>> + .in_num = 3,
>> + .in_sizes = { 4, 4, UINT_MAX, },
>> + .out_num = 1,
>> + .out_sizes = { 4, },
>> + },
>> + [NFIT_CMD_VENDOR] = {
>> + .in_num = 3,
>> + .in_sizes = { 4, 4, UINT_MAX, },
>> + .out_num = 3,
>> + .out_sizes = { 4, 4, UINT_MAX, },
>> + },
>> +};
>> +
>> +static const struct nfit_cmd_desc nfit_acpi_descs[] = {
>> + [NFIT_CMD_IMPLEMENTED] = { },
>> + [NFIT_CMD_ARS_CAP] = {
>> + .in_num = 2,
>> + .in_sizes = { 8, 8, },
>> + .out_num = 2,
>> + .out_sizes = { 4, 4, },
>> + },
>> + [NFIT_CMD_ARS_START] = {
>> + .in_num = 4,
>> + .in_sizes = { 8, 8, 2, 6, },
>> + .out_num = 1,
>> + .out_sizes = { 4, },
>> + },
>> + [NFIT_CMD_ARS_QUERY] = {
>> + .out_num = 2,
>> + .out_sizes = { 4, UINT_MAX, },
>> + },
>> +};
>> +
>> +static u32 to_cmd_in_size(struct nd_dimm *nd_dimm, int cmd,
>> + const struct nfit_cmd_desc *desc, int idx, void *buf)
>> +{
>> + if (idx >= desc->in_num)
>> + return UINT_MAX;
>> +
>> + if (desc->in_sizes[idx] < UINT_MAX)
>> + return desc->in_sizes[idx];
>> +
>> + if (nd_dimm && cmd == NFIT_CMD_SET_CONFIG_DATA && idx == 2) {
>> + struct nfit_cmd_set_config_hdr *hdr = buf;
>> +
>> + return hdr->in_length;
>> + } else if (nd_dimm && cmd == NFIT_CMD_VENDOR && idx == 2) {
>> + struct nfit_cmd_vendor_hdr *hdr = buf;
>> +
>> + return hdr->in_length;
>> + }
>> +
>> + return UINT_MAX;
>> +}
>> +
>> +static u32 to_cmd_out_size(struct nd_dimm *nd_dimm, int cmd,
>> + const struct nfit_cmd_desc *desc, int idx,
>> + void *buf, u32 out_length, u32 offset)
>> +{
>> + if (idx >= desc->out_num)
>> + return UINT_MAX;
>> +
>> + if (desc->out_sizes[idx] < UINT_MAX)
>> + return desc->out_sizes[idx];
>> +
>> + if (offset >= out_length)
>> + return UINT_MAX;
>> +
>> + if (nd_dimm && cmd == NFIT_CMD_GET_CONFIG_DATA && idx == 1)
>> + return out_length - offset;
>> + else if (nd_dimm && cmd == NFIT_CMD_VENDOR && idx == 2)
>> + return out_length - offset;
>> + else if (!nd_dimm && cmd == NFIT_CMD_ARS_QUERY && idx == 1)
>> + return out_length - offset;
>> +
>> + return UINT_MAX;
>> +}
>> +
>> +static u8 nd_acpi_uuids[2][16]; /* initialized at nd_acpi_init */
>> +
>> +static u8 *nd_acpi_bus_uuid(void)
>> +{
>> + return nd_acpi_uuids[0];
>> +}
>> +
>> +static u8 *nd_acpi_dimm_uuid(void)
>> +{
>> + return nd_acpi_uuids[1];
>> +}
>> +
>> static int nd_acpi_ctl(struct nfit_bus_descriptor *nfit_desc,
>> struct nd_dimm *nd_dimm, unsigned int cmd, void *buf,
>> unsigned int buf_len)
>> {
>> - return -ENOTTY;
>> + struct acpi_nfit *nfit = to_acpi_nfit(nfit_desc);
>> + union acpi_object in_obj, in_buf, *out_obj;
>> + const struct nfit_cmd_desc *desc = NULL;
>> + struct device *dev = &nfit->dev->dev;
>> + const char *cmd_name, *dimm_name;
>> + unsigned long dsm_mask;
>> + acpi_handle handle;
>> + u32 offset;
>> + int rc, i;
>> + u8 *uuid;
>> +
>> + if (nd_dimm) {
>> + struct acpi_device *adev = nd_dimm_get_pdata(nd_dimm);
>> +
>> + if (cmd < ARRAY_SIZE(nfit_dimm_descs))
>> + desc = &nfit_dimm_descs[cmd];
>> + cmd_name = nfit_dimm_cmd_name(cmd);
>> + dsm_mask = nd_dimm_get_dsm_mask(nd_dimm);
>> + handle = adev->handle;
>> + uuid = nd_acpi_dimm_uuid();
>> + dimm_name = dev_name(&adev->dev);
>> + } else {
>> + if (cmd < ARRAY_SIZE(nfit_acpi_descs))
>> + desc = &nfit_acpi_descs[cmd];
>> + cmd_name = nfit_bus_cmd_name(cmd);
>> + dsm_mask = nfit_desc->dsm_mask;
>> + handle = nfit->dev->handle;
>> + uuid = nd_acpi_bus_uuid();
>> + dimm_name = "bus";
>> + }
>> +
>> + if (!desc || (cmd && (desc->out_num + desc->in_num == 0)))
>> + return -ENOTTY;
>> +
>> + if (!test_bit(cmd, &dsm_mask))
>> + return -ENOTTY;
>> +
>> + in_obj.type = ACPI_TYPE_PACKAGE;
>> + in_obj.package.count = 1;
>> + in_obj.package.elements = &in_buf;
>> + in_buf.type = ACPI_TYPE_BUFFER;
>> + in_buf.buffer.pointer = buf;
>> + in_buf.buffer.length = 0;
>> +
>> + /* double check that the nfit_acpi_cmd_descs table is self consistent */
>> + if (desc->in_num > NFIT_ACPI_MAX_ELEM) {
>> + WARN_ON_ONCE(1);
>> + return -ENXIO;
>> + }
>> +
>> + for (i = 0; i < desc->in_num; i++) {
>> + u32 in_size;
>> +
>> + in_size = to_cmd_in_size(nd_dimm, cmd, desc, i, buf);
>> + if (in_size == UINT_MAX) {
>> + dev_err(dev, "%s:%s unknown input size cmd: %s field: %d\n",
>> + __func__, dimm_name, cmd_name, i);
>> + return -ENXIO;
>> + }
>> + in_buf.buffer.length += in_size;
>> + if (in_buf.buffer.length > buf_len) {
>> + dev_err(dev, "%s:%s input underrun cmd: %s field: %d\n",
>> + __func__, dimm_name, cmd_name, i);
>> + return -ENXIO;
>> + }
>> + }
>> +
>> + dev_dbg(dev, "%s:%s cmd: %s input length: %d\n", __func__, dimm_name,
>> + cmd_name, in_buf.buffer.length);
>> + if (IS_ENABLED(CONFIG_NFIT_ACPI_DEBUG))
>> + print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4,
>> + 4, in_buf.buffer.pointer, min_t(u32, 128,
>> + in_buf.buffer.length), true);
>> +
>> + out_obj = acpi_evaluate_dsm(handle, uuid, 1, cmd, &in_obj);
>> + if (!out_obj) {
>> + dev_dbg(dev, "%s:%s _DSM failed cmd: %s\n", __func__, dimm_name,
>> + cmd_name);
>> + return -EINVAL;
>> + }
>> +
>> + if (out_obj->package.type != ACPI_TYPE_BUFFER) {
>> + dev_dbg(dev, "%s:%s unexpected output object type cmd: %s type: %d\n",
>> + __func__, dimm_name, cmd_name, out_obj->type);
>> + rc = -EINVAL;
>> + goto out;
>> + }
>> +
>> + dev_dbg(dev, "%s:%s cmd: %s output length: %d\n", __func__, dimm_name,
>> + cmd_name, out_obj->buffer.length);
>> + if (IS_ENABLED(CONFIG_NFIT_ACPI_DEBUG))
>> + print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4,
>> + 4, out_obj->buffer.pointer, min_t(u32, 128,
>> + out_obj->buffer.length), true);
>> +
>> + for (i = 0, offset = 0; i < desc->out_num; i++) {
>> + u32 out_size = to_cmd_out_size(nd_dimm, cmd, desc, i, buf,
>> + out_obj->buffer.length, offset);
>> +
>> + if (out_size == UINT_MAX) {
>> + dev_dbg(dev, "%s:%s unknown output size cmd: %s field: %d\n",
>> + __func__, dimm_name, cmd_name, i);
>> + break;
>> + }
>> +
>> + if (offset + out_size > out_obj->buffer.length) {
>> + dev_dbg(dev, "%s:%s output object underflow cmd: %s field: %d\n",
>> + __func__, dimm_name, cmd_name, i);
>> + break;
>> + }
>> +
>> + if (in_buf.buffer.length + offset + out_size > buf_len) {
>> + dev_dbg(dev, "%s:%s output overrun cmd: %s field: %d\n",
>> + __func__, dimm_name, cmd_name, i);
>> + rc = -ENXIO;
>> + goto out;
>> + }
>> + memcpy(buf + in_buf.buffer.length + offset,
>> + out_obj->buffer.pointer + offset, out_size);
>> + offset += out_size;
>> + }
>> + if (offset + in_buf.buffer.length < buf_len) {
>> + if (i >= 1) {
>> + /*
>> + * status valid, return the number of bytes left
>> + * unfilled in the output buffer
>> + */
>> + rc = buf_len - offset - in_buf.buffer.length;
>> + } else {
>> + dev_err(dev, "%s:%s underrun cmd: %s buf_len: %d out_len: %d\n",
>> + __func__, dimm_name, cmd_name, buf_len, offset);
>> + rc = -ENXIO;
>> + }
>> + } else
>> + rc = 0;
>> +
>> + out:
>> + ACPI_FREE(out_obj);
>> +
>> + return rc;
>> +}
>> +
>> +static int nd_acpi_add_dimm(struct nfit_bus_descriptor *nfit_desc,
>> + struct nd_dimm *nd_dimm)
>> +{
>> + struct acpi_nfit *nfit = to_acpi_nfit(nfit_desc);
>> + u32 nfit_handle = to_nfit_handle(nd_dimm);
>> + struct device *dev = &nfit->dev->dev;
>> + struct acpi_device *acpi_dimm;
>> + unsigned long dsm_mask = 0;
>> + u8 *uuid = nd_acpi_dimm_uuid();
>> + unsigned long long sta;
>> + int i, rc = -ENODEV;
>> + acpi_status status;
>> +
>> + acpi_dimm = acpi_find_child_device(nfit->dev, nfit_handle, false);
>> + if (!acpi_dimm) {
>> + dev_err(dev, "no ACPI.NFIT device with _ADR %#x, disabling...\n",
>> + nfit_handle);
>> + return -ENODEV;
>> + }
>> +
>> + status = acpi_evaluate_integer(acpi_dimm->handle, "_STA", NULL, &sta);
>> + if (status == AE_NOT_FOUND)
>> + dev_err(dev, "%s missing _STA, disabling...\n",
>> + dev_name(&acpi_dimm->dev));
>
> I do not think it is correct to set a DIMM _ADR object disabled when it
> has no _STA. ACPI 6.0 spec states the followings:
>
> - Section 6.3.7 _STA, "If a device object describes a device that is
> not on an enumerable bus and the device object does not have an _STA
> object, then OSPM assumes that the device is present, enabled, shown in
> the UI, and functioning."
Ok, I'll take a look.
[..]
> So, in this case, it should set the DIMM object enabled or look up the
> NFIT table to check the presence.
At this point we've already determined that a dimm device is present
because nd_acpi_add_dimm() is called for each dimm found in the NFIT.
Does that count as "enumerable" and require an _STA?
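[Editor's aside: the variable-size field handling in the quoted descriptor tables (UINT_MAX as a "size comes from the command header" sentinel) can be modeled in plain userspace C. This is an illustrative sketch, not the kernel code; struct cmd_hdr is a hypothetical stand-in for the nfit_cmd_set_config_hdr / nfit_cmd_vendor_hdr types.]

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define NFIT_ACPI_MAX_ELEM 4

struct nfit_cmd_desc {
	int in_num;
	uint32_t in_sizes[NFIT_ACPI_MAX_ELEM];
};

/* stand-in for the headers that carry a variable payload length */
struct cmd_hdr {
	uint32_t in_offset;
	uint32_t in_length;	/* size of the trailing variable payload */
};

/* UINT_MAX in the table marks a variable-length field whose size must
 * be resolved from the command header itself */
static uint32_t cmd_in_size(const struct nfit_cmd_desc *desc, int idx,
		const struct cmd_hdr *hdr)
{
	if (idx >= desc->in_num)
		return UINT_MAX;		/* no such field */
	if (desc->in_sizes[idx] < UINT_MAX)
		return desc->in_sizes[idx];	/* fixed-size field */
	return hdr->in_length;			/* variable payload */
}
```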
On Tue, 2015-04-21 at 15:05 -0700, Dan Williams wrote:
> On Tue, Apr 21, 2015 at 2:20 PM, Toshi Kani <[email protected]> wrote:
:
> >> +static int nd_acpi_add_dimm(struct nfit_bus_descriptor *nfit_desc,
> >> + struct nd_dimm *nd_dimm)
> >> [..]
> >> + status = acpi_evaluate_integer(acpi_dimm->handle, "_STA", NULL, &sta);
> >> + if (status == AE_NOT_FOUND)
> >> + dev_err(dev, "%s missing _STA, disabling...\n",
> >> + dev_name(&acpi_dimm->dev));
> >
> > I do not think it is correct to set a DIMM _ADR object disabled when it
> > has no _STA. ACPI 6.0 spec states the followings:
> >
> > - Section 6.3.7 _STA, "If a device object describes a device that is
> > not on an enumerable bus and the device object does not have an _STA
> > object, then OSPM assumes that the device is present, enabled, shown in
> > the UI, and functioning."
>
> Ok, I'll take a look.
Great!
> [..]
> > So, in this case, it should set the DIMM object enabled or look up the
> > NFIT table to check the presence.
>
> At this point we've already determined that a dimm device is present
> because nd_acpi_add_dimm() is called for each dimm found in the NFIT.
> Does that count as "enumerable" and require an _STA?
I think it means that if a bus is enumerable, then the OS needs to
enumerate the bus to check the status, instead of assuming the device is
present. In other words, _STA is required for representing non-present
status on a non-enumerable bus.
In any case, we've already enumerated the NFIT table before this point,
so there is no reason to handle the non-_STA case as disabled.
Thanks,
-Toshi
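[Editor's aside: the behavior Toshi describes can be sketched in plain C. When _STA is absent, OSPM assumes the default status bits (present, enabled, shown in UI, functioning). This is an illustrative userspace model, not a patch; the enum and helper names are hypothetical.]

```c
#include <assert.h>
#include <stdbool.h>

/* _STA status bits, ACPI 6.0 section 6.3.7 */
#define ACPI_STA_DEVICE_PRESENT		0x01
#define ACPI_STA_DEVICE_ENABLED		0x02
#define ACPI_STA_DEVICE_UI		0x04
#define ACPI_STA_DEVICE_FUNCTIONING	0x08
#define ACPI_STA_DEFAULT		0x0f	/* all of the above */

/* outcome of evaluating _STA */
enum sta_result { STA_OK, STA_NOT_FOUND, STA_ERROR };

static bool dimm_usable(enum sta_result result, unsigned long long sta)
{
	if (result == STA_NOT_FOUND)
		sta = ACPI_STA_DEFAULT;	/* missing _STA: assume present+enabled */
	else if (result != STA_OK)
		return false;		/* genuine evaluation failure */
	return (sta & ACPI_STA_DEVICE_PRESENT) &&
		(sta & ACPI_STA_DEVICE_ENABLED);
}
```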
On Tue, 2015-04-21 at 13:35 -0700, Dan Williams wrote:
> On Tue, Apr 21, 2015 at 12:55 PM, Toshi Kani <[email protected]> wrote:
> > On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
> >> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
> >> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> >> > :
> >> >> +
> >> >> +static int nd_mem_init(struct nd_bus *nd_bus)
> >> >> +{
> >> >> + struct nd_spa *nd_spa;
> >> >> +
> >> >> + /*
> >> >> + * For each SPA-DCR address range find its corresponding
> >> >> + * MEMDEV(s). From each MEMDEV find the corresponding DCR.
> >> >> + * Then, try to find a SPA-BDW and a corresponding BDW that
> >> >> + * references the DCR. Throw it all into an nd_mem object.
> >> >> + * Note, that BDWs are optional.
> >> >> + */
> >> >> + list_for_each_entry(nd_spa, &nd_bus->spas, list) {
> >> >> + u16 spa_index = readw(&nd_spa->nfit_spa->spa_index);
> >> >> + int type = nfit_spa_type(nd_spa->nfit_spa);
> >> >> + struct nd_mem *nd_mem, *found;
> >> >> + struct nd_memdev *nd_memdev;
> >> >> + u16 dcr_index;
> >> >> +
> >> >> + if (type != NFIT_SPA_DCR)
> >> >> + continue;
> >> >
> >> > This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
> >> > Control Region GUID, for initializing an nd_mem object. However,
> >> > battery-backed DIMMs do not have such control region SPA. IIUC, the
> >> > NFIT spec does not require NFIT_SPA_DCR.
> >> >
> >> > Can you change this function to work with NFIT_SPA_PM as well?
> >>
> >> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
> >> nd_region_create() in patch 10.
> >
> > If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
> > core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
> > nd_bus_xxx() calls. So, nd_region_create() won't be called.
> >
> > nd_bus_init_interleave_sets() fails because init_interleave_set()
> > returns -ENODEV if (!nd_mem).
>
> Ah, ok your test case is specifying PMEM backed by memory device
> info. We have a test case for simple ranges (nfit_test1_setup()), but
> it doesn't hit this bug because it does not specify any memory-device
> tables.
Yes, we have an NFIT table with SPA Range (PM), Memory Device to SPA,
and NVDIMM Control Region structures. With the Memory Device to SPA
structure present, this code requires the full set of information,
including the namespace label data accessed via _DSM [1], which is
outside of ACPI 6.0 and is optional. Battery-backed DIMMs do not have
such label data. The driver needs to work with an NFIT containing these
structures but without this _DSM, or with a different type of _DSM which
this code may or may not need to support. It should also check the
Region Format Interface Code (RFIC) in the NVDIMM Control Region
structure before assuming the _DSM that implements RFIC 0x0201 is
present.
[1] http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf
Thanks,
-Toshi
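[Editor's aside: the association walk described in the quoted nd_mem_init() comment (for each SPA-DCR range, find its MEMDEV(s), and from each MEMDEV the DCR) can be modeled in plain C. The structure here is a simplified stand-in for the kernel's NFIT sub-table objects, for illustration only.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* simplified stand-in for an NFIT memory-device-to-SPA mapping entry */
struct memdev {
	uint16_t spa_index;	/* SPA range this mapping belongs to */
	uint16_t dcr_index;	/* DIMM control region it references */
};

/* find the DCR index that a given SPA range maps to, via its MEMDEV */
static int dcr_for_spa(const struct memdev *memdevs, size_t n,
		uint16_t spa_index)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (memdevs[i].spa_index == spa_index)
			return memdevs[i].dcr_index;
	return -1;	/* no MEMDEV references this SPA */
}
```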
On Wed, Apr 22, 2015 at 9:39 AM, Toshi Kani <[email protected]> wrote:
> On Tue, 2015-04-21 at 13:35 -0700, Dan Williams wrote:
>> On Tue, Apr 21, 2015 at 12:55 PM, Toshi Kani <[email protected]> wrote:
>> > On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
>> >> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
>> >> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>> >> > :
>> >> >> [..]
>> >> >
>> >> > This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
>> >> > Control Region GUID, for initializing an nd_mem object. However,
>> >> > battery-backed DIMMs do not have such control region SPA. IIUC, the
>> >> > NFIT spec does not require NFIT_SPA_DCR.
>> >> >
>> >> > Can you change this function to work with NFIT_SPA_PM as well?
>> >>
>> >> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
>> >> nd_region_create() in patch 10.
>> >
>> > If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
>> > core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
>> > nd_bus_xxx() calls. So, nd_region_create() won't be called.
>> >
>> > nd_bus_init_interleave_sets() fails because init_interleave_set()
>> > returns -ENODEV if (!nd_mem).
>>
>> Ah, ok your test case is specifying PMEM backed by memory device
>> info. We have a test case for simple ranges (nfit_test1_setup()), but
>> it doesn't hit this bug because it does not specify any memory-device
>> tables.
>
> Yes, we have NFIT table with SPA range (PM), memory device to SPA, and
> NVDIMM control region structures. With the memory device to SPA
> structure, this code requires full sets of information, including the
> namespace label data in _DSM [1], which is outside of ACPI 6.0 and is
> optional. Battery-backed DIMMs do not have such label data.
This is what "nd_namespace_io" devices are for; they do not require labels.
Question: if you don't have labels and you don't have DSMs, then why
publish a MEMDEV table at all? Why not simply publish an anonymous
range? See nfit_test1_setup().
> It needs
> to work with NFIT table with these structures without this _DSM or with
> a different type of _DSM which this code may or may not need to support.
> It should also check Region Format Interface Code (RFIC) in the NVDIMM
> control region structure before assuming this _DSM is present to
> implement RFIC 0x0201.
Ok I can look into adding this check, but I don't think it is
necessary if you simply refrain from publishing a MEMDEV entry.
On 4/22/2015 1:03 PM, Dan Williams wrote:
> On Wed, Apr 22, 2015 at 9:39 AM, Toshi Kani <[email protected]> wrote:
>> On Tue, 2015-04-21 at 13:35 -0700, Dan Williams wrote:
>>> On Tue, Apr 21, 2015 at 12:55 PM, Toshi Kani <[email protected]> wrote:
>>>> On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
>>>>> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
>>>>>> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>>>>>> :
>>>>>>> [..]
>>>>>>
>>>>>> This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
>>>>>> Control Region GUID, for initializing an nd_mem object. However,
>>>>>> battery-backed DIMMs do not have such control region SPA. IIUC, the
>>>>>> NFIT spec does not require NFIT_SPA_DCR.
>>>>>>
>>>>>> Can you change this function to work with NFIT_SPA_PM as well?
>>>>>
>>>>> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
>>>>> nd_region_create() in patch 10.
>>>>
>>>> If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
>>>> core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
>>>> nd_bus_xxx() calls. So, nd_region_create() won't be called.
>>>>
>>>> nd_bus_init_interleave_sets() fails because init_interleave_set()
>>>> returns -ENODEV if (!nd_mem).
>>>
>>> Ah, ok your test case is specifying PMEM backed by memory device
>>> info. We have a test case for simple ranges (nfit_test1_setup()), but
>>> it doesn't hit this bug because it does not specify any memory-device
>>> tables.
>>
>> Yes, we have NFIT table with SPA range (PM), memory device to SPA, and
>> NVDIMM control region structures. With the memory device to SPA
>> structure, this code requires full sets of information, including the
>> namespace label data in _DSM [1], which is outside of ACPI 6.0 and is
>> optional. Battery-backed DIMMs do not have such label data.
>
> This is what "nd_namespace_io" devices are for, they do not require labels.
>
> Question, if you don't have labels and you don't have DSMs then why
> publish a MEMDEV table at all? Why not simply publish an anonymous
> range? See nfit_test1_setup().
The MEMDEV table provides useful information, and there may be _DSMs,
perhaps just not the same _DSM as some other devices.
>> It needs
>> to work with NFIT table with these structures without this _DSM or with
>> a different type of _DSM which this code may or may not need to support.
>> It should also check Region Format Interface Code (RFIC) in the NVDIMM
>> control region structure before assuming this _DSM is present to
>> implement RFIC 0x0201.
>
> Ok I can look into adding this check, but I don't think it is
> necessary if you simply refrain from publishing a MEMDEV entry.
But we need the MEMDEV. And as Toshi mentions, we could have other
RFICs with other _DSMs than your example. That's why there is an RFIC.
-- ljk
On Wed, Apr 22, 2015 at 11:00 AM, Linda Knippers <[email protected]> wrote:
> On 4/22/2015 1:03 PM, Dan Williams wrote:
>> On Wed, Apr 22, 2015 at 9:39 AM, Toshi Kani <[email protected]> wrote:
>>> On Tue, 2015-04-21 at 13:35 -0700, Dan Williams wrote:
>>>> On Tue, Apr 21, 2015 at 12:55 PM, Toshi Kani <[email protected]> wrote:
>>>>> On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
>>>>>> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
>>>>>>> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>>>>>>> :
>>>>>>>> [..]
>>>>>>>
>>>>>>> This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
>>>>>>> Control Region GUID, for initializing an nd_mem object. However,
>>>>>>> battery-backed DIMMs do not have such control region SPA. IIUC, the
>>>>>>> NFIT spec does not require NFIT_SPA_DCR.
>>>>>>>
>>>>>>> Can you change this function to work with NFIT_SPA_PM as well?
>>>>>>
>>>>>> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
>>>>>> nd_region_create() in patch 10.
>>>>>
>>>>> If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
>>>>> core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
>>>>> nd_bus_xxx() calls. So, nd_region_create() won't be called.
>>>>>
>>>>> nd_bus_init_interleave_sets() fails because init_interleave_set()
>>>>> returns -ENODEV if (!nd_mem).
>>>>
>>>> Ah, ok your test case is specifying PMEM backed by memory device
>>>> info. We have a test case for simple ranges (nfit_test1_setup()), but
>>>> it doesn't hit this bug because it does not specify any memory-device
>>>> tables.
>>>
>>> Yes, we have NFIT table with SPA range (PM), memory device to SPA, and
>>> NVDIMM control region structures. With the memory device to SPA
>>> structure, this code requires full sets of information, including the
>>> namespace label data in _DSM [1], which is outside of ACPI 6.0 and is
>>> optional. Battery-backed DIMMs do not have such label data.
>>
>> This is what "nd_namespace_io" devices are for, they do not require labels.
>>
>> Question, if you don't have labels and you don't have DSMs then why
>> publish a MEMDEV table at all? Why not simply publish an anonymous
>> range? See nfit_test1_setup().
>
> The MEMDEV table provides useful information, and there may be _DSMs,
> perhaps just not the same _DSM as some other devices.
>
>>> It needs
>>> to work with NFIT table with these structures without this _DSM or with
>>> a different type of _DSM which this code may or may not need to support.
>>> It should also check Region Format Interface Code (RFIC) in the NVDIMM
>>> control region structure before assuming this _DSM is present to
>>> implement RFIC 0x0201.
>>
>> Ok I can look into adding this check, but I don't think it is
>> necessary if you simply refrain from publishing a MEMDEV entry.
>
> But we need the MEMDEV. And as Toshi mentions, we could have other
> RFICs with other _DSMs than your example. That's why there is an RFIC.
Wait, point of clarification, DCRs (dimm-control-regions) have RFICs,
not MEMDEVs (memory-device-to-spa-mapping). Toshi's original report
was that an NFIT with a SPA+MEMDEV was failing to enable a PMEM
device. That specific problem can be fixed by either deleting the
MEMDEV, or adding a DCR.
Of course, if you add a DCR with a different intended DSM layout than
the DSM-example-interface the driver will need to add support for
handling that case.
On Wed, 2015-04-22 at 11:20 -0700, Dan Williams wrote:
> On Wed, Apr 22, 2015 at 11:00 AM, Linda Knippers <[email protected]> wrote:
> > On 4/22/2015 1:03 PM, Dan Williams wrote:
> >> On Wed, Apr 22, 2015 at 9:39 AM, Toshi Kani <[email protected]> wrote:
> >>> On Tue, 2015-04-21 at 13:35 -0700, Dan Williams wrote:
> >>>> On Tue, Apr 21, 2015 at 12:55 PM, Toshi Kani <[email protected]> wrote:
> >>>>> On Tue, 2015-04-21 at 12:58 -0700, Dan Williams wrote:
> >>>>>> On Tue, Apr 21, 2015 at 12:35 PM, Toshi Kani <[email protected]> wrote:
> >>>>>>> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> >>>>>>> :
> >>>>>>>> [..]
> >>>>>>>
> >>>>>>> This function requires NFIT_SPA_DCR, SPA Range Structure with NVDIMM
> >>>>>>> Control Region GUID, for initializing an nd_mem object. However,
> >>>>>>> battery-backed DIMMs do not have such control region SPA. IIUC, the
> >>>>>>> NFIT spec does not require NFIT_SPA_DCR.
> >>>>>>>
> >>>>>>> Can you change this function to work with NFIT_SPA_PM as well?
> >>>>>>
> >>>>>> NFIT_SPA_PM ranges are handled separately from nd_mem_init(). See
> >>>>>> nd_region_create() in patch 10.
> >>>>>
> >>>>> If nd_mem_init() does not initialize nd_mem objects, nd_bus_probe() in
> >>>>> core.c fails in nd_bus_init_interleave_sets() and skips all subsequent
> >>>>> nd_bus_xxx() calls. So, nd_region_create() won't be called.
> >>>>>
> >>>>> nd_bus_init_interleave_sets() fails because init_interleave_set()
> >>>>> returns -ENODEV if (!nd_mem).
> >>>>
> >>>> Ah, ok your test case is specifying PMEM backed by memory device
> >>>> info. We have a test case for simple ranges (nfit_test1_setup()), but
> >>>> it doesn't hit this bug because it does not specify any memory-device
> >>>> tables.
> >>>
> >>> Yes, we have NFIT table with SPA range (PM), memory device to SPA, and
> >>> NVDIMM control region structures. With the memory device to SPA
> >>> structure, this code requires full sets of information, including the
> >>> namespace label data in _DSM [1], which is outside of ACPI 6.0 and is
> >>> optional. Battery-backed DIMMs do not have such label data.
> >>
> >> This is what "nd_namespace_io" devices are for, they do not require labels.
> >>
> >> Question, if you don't have labels and you don't have DSMs then why
> >> publish a MEMDEV table at all? Why not simply publish an anonymous
> >> range? See nfit_test1_setup().
> >
> > The MEMDEV table provides useful information, and there may be _DSMs,
> > perhaps just not the same _DSM as some other devices.
> >
> >>> It needs
> >>> to work with NFIT table with these structures without this _DSM or with
> >>> a different type of _DSM which this code may or may not need to support.
> >>> It should also check Region Format Interface Code (RFIC) in the NVDIMM
> >>> control region structure before assuming this _DSM is present to
> >>> implement RFIC 0x0201.
> >>
> >> Ok I can look into adding this check, but I don't think it is
> >> necessary if you simply refrain from publishing a MEMDEV entry.
> >
> > But we need the MEMDEV. And as Toshi mentions, we could have other
> > RFICs with other _DSMs than your example. That's why there is an RFIC.
>
> Wait, point of clarification, DCRs (dimm-control-regions) have RFICs,
> not MEMDEVs (memory-device-to-spa-mapping). Toshi's original report
> was that an NFIT with a SPA+MEMDEV was failing to enable a PMEM
> device. That specific problem can be fixed by either deleting the
> MEMDEV, or adding a DCR.
By a DCR, do you mean a DCR structure or SPA with Control Region GUID?
Adding a DCR structure does not solve this issue since it requires SPA
with Control Region GUID, which battery-backed DIMMs do not have.
> Of course, if you add a DCR with a different intended DSM layout than
> the DSM-example-interface the driver will need to add support for
> handling that case.
Yes, we are considering adding different _DSMs for management. We do not
need the nd_acpi driver to support them now, but we do need this framework
to work without the DSM-example-interface present.
Thanks,
-Toshi
> -----Original Message-----
> From: Linux-nvdimm [mailto:[email protected]] On Behalf Of
> Dan Williams
> Sent: Friday, April 17, 2015 8:35 PM
> To: [email protected]
> Subject: [Linux-nvdimm] [PATCH 00/21] ND: NFIT-Defined / NVDIMM Subsystem
>
...
> create mode 100644 drivers/block/nd/acpi.c
> create mode 100644 drivers/block/nd/blk.c
> create mode 100644 drivers/block/nd/bus.c
> create mode 100644 drivers/block/nd/core.c
...
The kernel already has lots of files with these names:
5 acpi.c
10 bus.c
66 core.c
I often use ctags like this:
vim -t core.c
but that doesn't immediately work with common filenames - it
presents a list of all 66 files to choose from.
Also, blk.c is a name one might expect to see in the block/
directory (e.g., next to blk.h).
An nd_ prefix on all the filenames would help.
> -----Original Message-----
> From: Linux-nvdimm [mailto:[email protected]] On Behalf Of
> Dan Williams
> Sent: Friday, April 17, 2015 8:37 PM
> To: [email protected]
> Subject: [Linux-nvdimm] [PATCH 19/21] nd: infrastructure for btt devices
>
...
> +/*
> + * btt_sb_checksum: compute checksum for btt info block
> + *
> + * Returns a fletcher64 checksum of everything in the given info block
> + * except the last field (since that's where the checksum lives).
> + */
> +u64 btt_sb_checksum(struct btt_sb *btt_sb)
> +{
> + u64 sum, sum_save;
> +
> + sum_save = btt_sb->checksum;
> + btt_sb->checksum = 0;
> + sum = nd_fletcher64(btt_sb, sizeof(*btt_sb));
> + btt_sb->checksum = sum_save;
> + return sum;
> +}
> +EXPORT_SYMBOL(btt_sb_checksum);
...
Of all the functions with prototypes in nd.h, this is the only
function that doesn't have a name starting with nd_.
Following such a convention helps ease setting up ftrace filters.
---
Robert Elliott, HP Server Storage
On Wed, Apr 22, 2015 at 11:23 AM, Toshi Kani <[email protected]> wrote:
> On Wed, 2015-04-22 at 11:20 -0700, Dan Williams wrote:
>> On Wed, Apr 22, 2015 at 11:00 AM, Linda Knippers <[email protected]> wrote:
>> Wait, point of clarification, DCRs (dimm-control-regions) have RFICs,
>> not MEMDEVs (memory-device-to-spa-mapping). Toshi's original report
>> was that an NFIT with a SPA+MEMDEV was failing to enable a PMEM
>> device. That specific problem can be fixed by either deleting the
>> MEMDEV, or adding a DCR.
>
> By a DCR, do you mean a DCR structure or SPA with Control Region GUID?
Hmm, I meant a DCR as defined below. I agree you would not need a "SPA-DCR".
> Adding a DCR structure does not solve this issue since it requires SPA
> with Control Region GUID, which battery-backed DIMMs do not have.
I would not go that far, half of a DCR entry is relevant for any
NVDIMM, and half is only relevant if a DIMM offers BLK access:
struct acpi_nfit_dcr {
u16 type;
u16 length;
u16 dcr_index;
u16 vendor_id;
u16 device_id;
u16 revision_id;
u16 sub_vendor_id;
u16 sub_device_id;
u16 sub_revision_id;
u8 reserved[6];
u32 serial_number;
u16 fic;
<<<<< BLK relevant fields start here <<<<<
u16 num_bcw;
u64 bcw_size;
u64 cmd_offset;
u64 cmd_size;
u64 status_offset;
u64 status_size;
u16 flags;
u8 reserved2[6];
};
>> Of course, if you add a DCR with a different intended DSM layout than
>> the DSM-example-interface the driver will need to add support for
>> handling that case.
>
> Yes, we consider to add different _DSMs for management. We do not need
> the nd_acpi driver to support it now, but we need this framework to work
> without the DSM-example-interface present.
>
One possible workaround is that I could ignore MEMDEV entries that do
not have a corresponding DCR. This would enable nd_namespace_io
devices to be surfaced for your use case. Would that work for you?
I.e. do you need the nfit_handle exposed?
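[Editorial note: the workaround proposed above (skip MEMDEV entries with no
matching DCR and fall back to surfacing the range as an nd_namespace_io
device) can be sketched as follows. The structures are trimmed stand-ins
with only the linking field, not the real NFIT definitions.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Trimmed stand-ins for the NFIT sub-tables (only the linking field). */
struct dcr    { uint16_t dcr_index; };
struct memdev { uint16_t dcr_index; };

/*
 * A MEMDEV entry is only fully usable if a DCR with a matching index
 * exists; per ACPI 6.0 the control-region index in a MEMDEV must be
 * non-zero.  The workaround skips MEMDEVs that fail this lookup so the
 * SPA range can still surface as an nd_namespace_io device.
 */
static bool memdev_has_dcr(const struct memdev *m,
			   const struct dcr *dcrs, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (dcrs[i].dcr_index && dcrs[i].dcr_index == m->dcr_index)
			return true;
	return false;
}
```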
On Wed, 2015-04-22 at 12:28 -0700, Dan Williams wrote:
> On Wed, Apr 22, 2015 at 11:23 AM, Toshi Kani <[email protected]> wrote:
> > On Wed, 2015-04-22 at 11:20 -0700, Dan Williams wrote:
> >> On Wed, Apr 22, 2015 at 11:00 AM, Linda Knippers <[email protected]> wrote:
> >> Wait, point of clarification, DCRs (dimm-control-regions) have RFICs,
> >> not MEMDEVs (memory-device-to-spa-mapping). Toshi's original report
> >> was that an NFIT with a SPA+MEMDEV was failing to enable a PMEM
> >> device. That specific problem can be fixed by either deleting the
> >> MEMDEV, or adding a DCR.
> >
> > By a DCR, do you mean a DCR structure or SPA with Control Region GUID?
>
> Hmm, I meant a DCR as defined below. I agree you would not need a "SPA-DCR".
>
> > Adding a DCR structure does not solve this issue since it requires SPA
> > with Control Region GUID, which battery-backed DIMMs do not have.
>
> I would not go that far, half of a DCR entry is relevant for any
> NVDIMM, and half is only relevant if a DIMM offers BLK access:
>
> struct acpi_nfit_dcr {
> u16 type;
> u16 length;
> u16 dcr_index;
> u16 vendor_id;
> u16 device_id;
> u16 revision_id;
> u16 sub_vendor_id;
> u16 sub_device_id;
> u16 sub_revision_id;
> u8 reserved[6];
> u32 serial_number;
> u16 fic;
> <<<<< BLK relevant fields start here <<<<<
> u16 num_bcw;
> u64 bcw_size;
> u64 cmd_offset;
> u64 cmd_size;
> u64 status_offset;
> u64 status_size;
> u16 flags;
> u8 reserved2[6];
> };
Yes, we do have a DCR entry. But we do not have a SPA-DCR.
The previous issue I reported with nd_mem_init() was caused by the fact
that there was no "SPA-DCR": nd_mem_init() requires a SPA-DCR to
initialize nd_mem objects.
> >> Of course, if you add a DCR with a different intended DSM layout than
> >> the DSM-example-interface the driver will need to add support for
> >> handling that case.
> >
> > Yes, we consider to add different _DSMs for management. We do not need
> > the nd_acpi driver to support it now, but we need this framework to work
> > without the DSM-example-interface present.
> >
>
> One possible workaround is that I could ignore MEMDEV entries that do
> not have a corresponding DCR. This would enable nd_namespace_io
> devices to be surfaced for your use case. Would that work for you?
> I.e. do you need the nfit_handle exposed?
We have MEMDEV entries and their corresponding DCR entries.  ACPI 6.0
states that the NVDIMM control region structure index in a MEMDEV entry
must contain a non-zero value, so I think they must correspond.
Yes, we need this framework to enumerate all entries.
Thanks,
-Toshi
On Wed, Apr 22, 2015 at 12:06 PM, Elliott, Robert (Server Storage)
<[email protected]> wrote:
>> -----Original Message-----
>> From: Linux-nvdimm [mailto:[email protected]] On Behalf Of
>> Dan Williams
>> Sent: Friday, April 17, 2015 8:35 PM
>> To: [email protected]
>> Subject: [Linux-nvdimm] [PATCH 00/21] ND: NFIT-Defined / NVDIMM Subsystem
>>
> ...
>> create mode 100644 drivers/block/nd/acpi.c
>> create mode 100644 drivers/block/nd/blk.c
>> create mode 100644 drivers/block/nd/bus.c
>> create mode 100644 drivers/block/nd/core.c
> ...
>
> The kernel already has lots of files with these names:
> 5 acpi.c
> 10 bus.c
> 66 core.c
>
> I often use ctags like this:
> vim -t core.c
> but that doesn’t immediately work with common filenames - it
> presents a list of all 66 files to choose from.
>
> Also, blk.c is a name one might expect to see in the block/
> directory (e.g., next to blk.h).
>
> An nd_ prefix on all the filenames would help.
>
I picked up the "don't duplicate the directory name in the source file
name" approach from a review comment from Linus on a SCSI driver a
long time back (iirc). I'm not motivated to stop that practice now.
On Wed, Apr 22, 2015 at 12:12 PM, Elliott, Robert (Server Storage)
<[email protected]> wrote:
>> -----Original Message-----
>> From: Linux-nvdimm [mailto:[email protected]] On Behalf Of
>> Dan Williams
>> Sent: Friday, April 17, 2015 8:37 PM
>> To: [email protected]
>> Subject: [Linux-nvdimm] [PATCH 19/21] nd: infrastructure for btt devices
>>
> ...
>> +/*
>> + * btt_sb_checksum: compute checksum for btt info block
>> + *
>> + * Returns a fletcher64 checksum of everything in the given info block
>> + * except the last field (since that's where the checksum lives).
>> + */
>> +u64 btt_sb_checksum(struct btt_sb *btt_sb)
>> +{
>> + u64 sum, sum_save;
>> +
>> + sum_save = btt_sb->checksum;
>> + btt_sb->checksum = 0;
>> + sum = nd_fletcher64(btt_sb, sizeof(*btt_sb));
>> + btt_sb->checksum = sum_save;
>> + return sum;
>> +}
>> +EXPORT_SYMBOL(btt_sb_checksum);
> ...
>
> Of all the functions with prototypes in nd.h, this is the only
> function that doesn't have a name starting with nd_.
>
> Following such a convention helps ease setting up ftrace filters.
Sure, I'll fix that up.
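[Editorial note: the checksum under discussion is a plain Fletcher-64 over
32-bit words. A minimal user-space sketch, assuming both running sums are
kept modulo 2^32; the in-kernel nd_fletcher64 may differ in endianness
handling.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Fletcher-64 over 32-bit words, both running sums kept modulo 2^32.
 * Sketch only; the in-kernel nd_fletcher64 may handle endianness and
 * overflow differently.
 */
static uint64_t fletcher64(const void *addr, size_t len)
{
	const uint32_t *buf = addr;
	uint32_t lo = 0, hi = 0;

	for (size_t i = 0; i < len / sizeof(uint32_t); i++) {
		lo += buf[i];	/* wraps mod 2^32 */
		hi += lo;
	}
	return ((uint64_t)hi << 32) | lo;
}
```

As in btt_sb_checksum, the caller zeroes the checksum field before summing
and restores it afterwards, so the stored checksum never covers itself.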
On Wed, Apr 22, 2015 at 12:38 PM, Toshi Kani <[email protected]> wrote:
> On Wed, 2015-04-22 at 12:28 -0700, Dan Williams wrote:
>> On Wed, Apr 22, 2015 at 11:23 AM, Toshi Kani <[email protected]> wrote:
>> > On Wed, 2015-04-22 at 11:20 -0700, Dan Williams wrote:
>> >> On Wed, Apr 22, 2015 at 11:00 AM, Linda Knippers <[email protected]> wrote:
>> >> Wait, point of clarification, DCRs (dimm-control-regions) have RFICs,
>> >> not MEMDEVs (memory-device-to-spa-mapping). Toshi's original report
>> >> was that an NFIT with a SPA+MEMDEV was failing to enable a PMEM
>> >> device. That specific problem can be fixed by either deleting the
>> >> MEMDEV, or adding a DCR.
>> >
>> > By a DCR, do you mean a DCR structure or SPA with Control Region GUID?
>>
>> Hmm, I meant a DCR as defined below. I agree you would not need a "SPA-DCR".
>>
>> > Adding a DCR structure does not solve this issue since it requires SPA
>> > with Control Region GUID, which battery-backed DIMMs do not have.
>>
>> I would not go that far, half of a DCR entry is relevant for any
>> NVDIMM, and half is only relevant if a DIMM offers BLK access:
>>
>> struct acpi_nfit_dcr {
>> u16 type;
>> u16 length;
>> u16 dcr_index;
>> u16 vendor_id;
>> u16 device_id;
>> u16 revision_id;
>> u16 sub_vendor_id;
>> u16 sub_device_id;
>> u16 sub_revision_id;
>> u8 reserved[6];
>> u32 serial_number;
>> u16 fic;
>> <<<<< BLK relevant fields start here <<<<<
>> u16 num_bcw;
>> u64 bcw_size;
>> u64 cmd_offset;
>> u64 cmd_size;
>> u64 status_offset;
>> u64 status_size;
>> u16 flags;
>> u8 reserved2[6];
>> };
>
> Yes, we do have a DCR entry. But we do not have a SPA-DCR.
Got it, will fix.
* Elliott, Robert (Server Storage) <[email protected]> wrote:
> > -----Original Message-----
> > From: Linux-nvdimm [mailto:[email protected]] On Behalf Of
> > Dan Williams
> > Sent: Friday, April 17, 2015 8:35 PM
> > To: [email protected]
> > Subject: [Linux-nvdimm] [PATCH 00/21] ND: NFIT-Defined / NVDIMM Subsystem
> >
> ...
> > create mode 100644 drivers/block/nd/acpi.c
> > create mode 100644 drivers/block/nd/blk.c
> > create mode 100644 drivers/block/nd/bus.c
> > create mode 100644 drivers/block/nd/core.c
> ...
>
> The kernel already has lots of files with these names:
> 5 acpi.c
> 10 bus.c
> 66 core.c
>
> I often use ctags like this:
> vim -t core.c
> but that doesn’t immediately work with common filenames - it
> presents a list of all 66 files to choose from.
>
> Also, blk.c is a name one might expect to see in the block/
> directory (e.g., next to blk.h).
>
> An nd_ prefix on all the filenames would help.
It's really stupid to duplicate information that is present in the
pathname.
To type:
vim -t nd/core.c
should be the same as:
vim -t nd_core.c
Thanks,
Ingo
On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> Most configuration of the nd-subsystem is done via nd-sysfs. However,
> the NFIT specification defines a small set of messages that can be
> passed to the subsystem via platform-firmware-defined methods. The
> command set (as of the current version of the NFIT-DSM spec) is:
>
> NFIT_CMD_SMART: media health and diagnostics
> NFIT_CMD_GET_CONFIG_SIZE: size of the label space
> NFIT_CMD_GET_CONFIG_DATA: read label
> NFIT_CMD_SET_CONFIG_DATA: write label
> NFIT_CMD_VENDOR: vendor-specific command passthrough
> NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
> NFIT_CMD_START_ARS: initiate scrubbing
> NFIT_CMD_QUERY_ARS: report on scrubbing state
> NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
"nd/bus.c" provides two features, 1) the top level ND bus driver which
is the central part of the ND, and 2) the ioctl interface specific to
the example-DSM-interface. I think the example-DSM-specific part should
be put into an example-DSM-support module, so that the ND can support
other _DSMs as necessary. Also, _DSM needs to be handled as optional.
Thanks,
-Toshi
On Fri, 2015-04-24 at 09:56 -0600, Toshi Kani wrote:
> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> > Most configuration of the nd-subsystem is done via nd-sysfs. However,
> > the NFIT specification defines a small set of messages that can be
> > passed to the subsystem via platform-firmware-defined methods. The
> > command set (as of the current version of the NFIT-DSM spec) is:
> >
> > NFIT_CMD_SMART: media health and diagnostics
> > NFIT_CMD_GET_CONFIG_SIZE: size of the label space
> > NFIT_CMD_GET_CONFIG_DATA: read label
> > NFIT_CMD_SET_CONFIG_DATA: write label
> > NFIT_CMD_VENDOR: vendor-specific command passthrough
> > NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
> > NFIT_CMD_START_ARS: initiate scrubbing
> > NFIT_CMD_QUERY_ARS: report on scrubbing state
> > NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
>
> "nd/bus.c" provides two features, 1) the top level ND bus driver which
> is the central part of the ND, and 2) the ioctl interface specific to
> the example-DSM-interface. I think the example-DSM-specific part should
> be put into an example-DSM-support module, so that the ND can support
> other _DSMs as necessary. Also, _DSM needs to be handled as optional.
And the same for "nd/acpi.c", which is 1) the ACPI0012 handler, and 2)
the example-DSM-support module. I think they need to be separated.
Thanks,
-Toshi
On Fri, Apr 24, 2015 at 8:56 AM, Toshi Kani <[email protected]> wrote:
> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>> Most configuration of the nd-subsystem is done via nd-sysfs. However,
>> the NFIT specification defines a small set of messages that can be
>> passed to the subsystem via platform-firmware-defined methods. The
>> command set (as of the current version of the NFIT-DSM spec) is:
>>
>> NFIT_CMD_SMART: media health and diagnostics
>> NFIT_CMD_GET_CONFIG_SIZE: size of the label space
>> NFIT_CMD_GET_CONFIG_DATA: read label
>> NFIT_CMD_SET_CONFIG_DATA: write label
>> NFIT_CMD_VENDOR: vendor-specific command passthrough
>> NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
>> NFIT_CMD_START_ARS: initiate scrubbing
>> NFIT_CMD_QUERY_ARS: report on scrubbing state
>> NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
>
> "nd/bus.c" provides two features, 1) the top level ND bus driver which
> is the central part of the ND, and 2) the ioctl interface specific to
> the example-DSM-interface. I think the example-DSM-specific part should
> be put into an example-DSM-support module, so that the ND can support
> other _DSMs as necessary. Also, _DSM needs to be handled as optional.
I don't think it needs to be separated, they'll both end up using the
same infrastructure just with different UUIDs on the ACPI device
interface or different format-interface-codes. A firmware
implementation is also free to disable individual DSMs (see
nd_acpi_add_dimm). That said, you're right, we do need a fix to allow
PMEM from DIMMs without DSMs to activate.
On Fri, Apr 24, 2015 at 9:09 AM, Toshi Kani <[email protected]> wrote:
> On Fri, 2015-04-24 at 09:56 -0600, Toshi Kani wrote:
>> On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>> > Most configuration of the nd-subsystem is done via nd-sysfs. However,
>> > the NFIT specification defines a small set of messages that can be
>> > passed to the subsystem via platform-firmware-defined methods. The
>> > command set (as of the current version of the NFIT-DSM spec) is:
>> >
>> > NFIT_CMD_SMART: media health and diagnostics
>> > NFIT_CMD_GET_CONFIG_SIZE: size of the label space
>> > NFIT_CMD_GET_CONFIG_DATA: read label
>> > NFIT_CMD_SET_CONFIG_DATA: write label
>> > NFIT_CMD_VENDOR: vendor-specific command passthrough
>> > NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
>> > NFIT_CMD_START_ARS: initiate scrubbing
>> > NFIT_CMD_QUERY_ARS: report on scrubbing state
>> > NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
>>
>> "nd/bus.c" provides two features, 1) the top level ND bus driver which
>> is the central part of the ND, and 2) the ioctl interface specific to
>> the example-DSM-interface. I think the example-DSM-specific part should
>> be put into an example-DSM-support module, so that the ND can support
>> other _DSMs as necessary. Also, _DSM needs to be handled as optional.
>
> And the same for "nd/acpi.c", which is 1) the ACPI0012 handler, and 2)
> the example-DSM-support module. I think they need to be separated.
>
Ok, send me a patch as I'm not sure what type of separation you are proposing.
On Fri, 2015-04-24 at 09:25 -0700, Dan Williams wrote:
> On Fri, Apr 24, 2015 at 8:56 AM, Toshi Kani <[email protected]> wrote:
> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> >> Most configuration of the nd-subsystem is done via nd-sysfs. However,
> >> the NFIT specification defines a small set of messages that can be
> >> passed to the subsystem via platform-firmware-defined methods. The
> >> command set (as of the current version of the NFIT-DSM spec) is:
> >>
> >> NFIT_CMD_SMART: media health and diagnostics
> >> NFIT_CMD_GET_CONFIG_SIZE: size of the label space
> >> NFIT_CMD_GET_CONFIG_DATA: read label
> >> NFIT_CMD_SET_CONFIG_DATA: write label
> >> NFIT_CMD_VENDOR: vendor-specific command passthrough
> >> NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
> >> NFIT_CMD_START_ARS: initiate scrubbing
> >> NFIT_CMD_QUERY_ARS: report on scrubbing state
> >> NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
> >
> > "nd/bus.c" provides two features, 1) the top level ND bus driver which
> > is the central part of the ND, and 2) the ioctl interface specific to
> > the example-DSM-interface. I think the example-DSM-specific part should
> > be put into an example-DSM-support module, so that the ND can support
> > other _DSMs as necessary. Also, _DSM needs to be handled as optional.
>
> I don't think it needs to be separated, they'll both end up using the
> same infrastructure just with different UUIDs on the ACPI device
> interface or different format-interface-codes. A firmware
> implementation is also free to disable individual DSMs (see
> nd_acpi_add_dimm).
Well, the ioctl cmd# is essentially the func# of the _DSM, and each cmd
structure needs to match its _DSM output data structure.  So I do
not think these cmds will work for other _DSMs.  That said, ND is
complex enough already, and we should not make it more complicated for
the initial version...  So, how about changing the name of /dev/ndctl0
to indicate RFIC 0x0201, e.g. /dev/nd0201ctl0?  That would allow
separate ioctl()s for other RFICs.  The code can be updated when another
_DSM actually needs to be supported by ND.
> That said, you're right, we do need a fix to allow
> PMEM from DIMMs without DSMs to activate.
Great!
Thanks,
-Toshi
On Fri, Apr 24, 2015 at 10:18 AM, Toshi Kani <[email protected]> wrote:
> On Fri, 2015-04-24 at 09:25 -0700, Dan Williams wrote:
>> On Fri, Apr 24, 2015 at 8:56 AM, Toshi Kani <[email protected]> wrote:
>> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
>> >> Most configuration of the nd-subsystem is done via nd-sysfs. However,
>> >> the NFIT specification defines a small set of messages that can be
>> >> passed to the subsystem via platform-firmware-defined methods. The
>> >> command set (as of the current version of the NFIT-DSM spec) is:
>> >>
>> >> NFIT_CMD_SMART: media health and diagnostics
>> >> NFIT_CMD_GET_CONFIG_SIZE: size of the label space
>> >> NFIT_CMD_GET_CONFIG_DATA: read label
>> >> NFIT_CMD_SET_CONFIG_DATA: write label
>> >> NFIT_CMD_VENDOR: vendor-specific command passthrough
>> >> NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
>> >> NFIT_CMD_START_ARS: initiate scrubbing
>> >> NFIT_CMD_QUERY_ARS: report on scrubbing state
>> >> NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
>> >
>> > "nd/bus.c" provides two features, 1) the top level ND bus driver which
>> > is the central part of the ND, and 2) the ioctl interface specific to
>> > the example-DSM-interface. I think the example-DSM-specific part should
>> > be put into an example-DSM-support module, so that the ND can support
>> > other _DSMs as necessary. Also, _DSM needs to be handled as optional.
>>
>> I don't think it needs to be separated, they'll both end up using the
>> same infrastructure just with different UUIDs on the ACPI device
>> interface or different format-interface-codes. A firmware
>> implementation is also free to disable individual DSMs (see
>> nd_acpi_add_dimm).
>
> Well, ioctl cmd# is essentially func# of the _DSM, and each cmd
> structure needs to match with its _DSM output data structure. So, I do
> not think these cmds will work for other _DSMs. That said, the ND is
> complex enough already, and we should not make it more complicated for
> the initial version... So, how about changing the name of /dev/ndctl0
> to indicate RFIC 0x0201, ex. /dev/nd0201ctl0? That should allow
> separate ioctl()s for other RFICs. The code can be updated when other
> _DSM actually needs to be supported by the ND.
No, all you need is unique command names (see libndctl
ndctl_{bus|dimm}_is_cmd_supported()) and then to translate the ND cmd
number to the firmware function number in the "provider".  It just so
happens that for this first set of commands the ND cmd number matches
the ACPI device function number in the DSM-interface-example, but
there is no reason that must always be the case.
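[Editorial note: the provider translation Dan describes can be illustrated
with two hypothetical providers. All command numbers and function indices
below are made up for illustration, not the real ABI.]

```c
#include <assert.h>

/* Hypothetical generic command numbers (illustrative only). */
enum nd_cmd { ND_CMD_SMART = 1, ND_CMD_GET_CONFIG_SIZE = 4 };

/* Example-DSM provider: the mapping happens to be the identity. */
static unsigned int example_dsm_func(enum nd_cmd cmd)
{
	return (unsigned int)cmd;
}

/* A different RFIC's provider is free to renumber its _DSM functions. */
static unsigned int other_rfic_func(enum nd_cmd cmd)
{
	switch (cmd) {
	case ND_CMD_SMART:           return 7;
	case ND_CMD_GET_CONFIG_SIZE: return 2;
	default:                     return 0; /* not supported */
	}
}
```

Userspace only sees the generic command names; which firmware function a
command lands on is private to the provider.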
On 4/17/2015 9:35 PM, Dan Williams wrote:
:
> diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
> index 5fa74f124b3e..0106b3807202 100644
> --- a/drivers/block/nd/Kconfig
> +++ b/drivers/block/nd/Kconfig
> @@ -41,4 +41,24 @@ config NFIT_ACPI
> register the platform-global NFIT blob with the core. Also
> enables the core to craft ACPI._DSM messages for platform/dimm
> configuration.
> +
> +config NFIT_TEST
> + tristate "NFIT TEST: Manufactured NFIT for interface testing"
> + depends on DMA_CMA
> + depends on ND_CORE=m
> + depends on m
> + help
> + For development purposes register a manufactured
> + NFIT table to verify the resulting device model topology.
> + Note, this module arranges for ioremap_cache() to be
> + overridden locally to allow simulation of system-memory as an
> + io-memory-resource.
> +
> + Note, this test expects to be able to find at least
> + 256MB of CMA space (CONFIG_CMA_SIZE_MBYTES) or it will fail to
It seems to actually be wanting >= 584MB.
-- ljk
> + load. Kconfig does not allow for numerical value
> + dependencies, so we can only warn at runtime.
> +
> + Say N unless you are doing development of the 'nd' subsystem.
> +
> endif
On Fri, Apr 24, 2015 at 2:47 PM, Linda Knippers <[email protected]> wrote:
> On 4/17/2015 9:35 PM, Dan Williams wrote:
> :
>> diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
>> index 5fa74f124b3e..0106b3807202 100644
>> --- a/drivers/block/nd/Kconfig
>> +++ b/drivers/block/nd/Kconfig
>> @@ -41,4 +41,24 @@ config NFIT_ACPI
>> register the platform-global NFIT blob with the core. Also
>> enables the core to craft ACPI._DSM messages for platform/dimm
>> configuration.
>> +
>> +config NFIT_TEST
>> + tristate "NFIT TEST: Manufactured NFIT for interface testing"
>> + depends on DMA_CMA
>> + depends on ND_CORE=m
>> + depends on m
>> + help
>> + For development purposes register a manufactured
>> + NFIT table to verify the resulting device model topology.
>> + Note, this module arranges for ioremap_cache() to be
>> + overridden locally to allow simulation of system-memory as an
>> + io-memory-resource.
>> +
>> + Note, this test expects to be able to find at least
>> + 256MB of CMA space (CONFIG_CMA_SIZE_MBYTES) or it will fail to
>
> It seems to actually be wanting >= 584MB.
Ah, true, this Kconfig text is stale. Will fix.
On 4/24/2015 5:50 PM, Dan Williams wrote:
> On Fri, Apr 24, 2015 at 2:47 PM, Linda Knippers <[email protected]> wrote:
>> On 4/17/2015 9:35 PM, Dan Williams wrote:
>> :
>>> diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
>>> index 5fa74f124b3e..0106b3807202 100644
>>> --- a/drivers/block/nd/Kconfig
>>> +++ b/drivers/block/nd/Kconfig
>>> @@ -41,4 +41,24 @@ config NFIT_ACPI
>>> register the platform-global NFIT blob with the core. Also
>>> enables the core to craft ACPI._DSM messages for platform/dimm
>>> configuration.
>>> +
>>> +config NFIT_TEST
>>> + tristate "NFIT TEST: Manufactured NFIT for interface testing"
>>> + depends on DMA_CMA
>>> + depends on ND_CORE=m
>>> + depends on m
>>> + help
>>> + For development purposes register a manufactured
>>> + NFIT table to verify the resulting device model topology.
>>> + Note, this module arranges for ioremap_cache() to be
>>> + overridden locally to allow simulation of system-memory as an
>>> + io-memory-resource.
>>> +
>>> + Note, this test expects to be able to find at least
>>> + 256MB of CMA space (CONFIG_CMA_SIZE_MBYTES) or it will fail to
>>
>> It seems to actually be wanting >= 584MB.
>
> Ah, true, this Kconfig text is stale. Will fix.
Thanks. One more question...
> +#ifdef CONFIG_CMA_SIZE_MBYTES
> +#define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
> +#else
> +#define CMA_SIZE_MBYTES 0
> +#endif
> +
> +static __init int nfit_test_init(void)
> +{
> + int rc, i;
> +
> + if (CMA_SIZE_MBYTES < 584) {
> + pr_err("need CONFIG_CMA_SIZE_MBYTES >= 584 to load\n");
> + return -EINVAL;
> + }
> +
Since the kernel takes a cma= boot parameter, it would be nice if
this check is against what the kernel is using rather than the config
option. Is that possible?
-- ljk
On Fri, Apr 24, 2015 at 2:59 PM, Linda Knippers <[email protected]> wrote:
> On 4/24/2015 5:50 PM, Dan Williams wrote:
>> On Fri, Apr 24, 2015 at 2:47 PM, Linda Knippers <[email protected]> wrote:
>>> On 4/17/2015 9:35 PM, Dan Williams wrote:
>>> :
>>>> diff --git a/drivers/block/nd/Kconfig b/drivers/block/nd/Kconfig
>>>> index 5fa74f124b3e..0106b3807202 100644
>>>> --- a/drivers/block/nd/Kconfig
>>>> +++ b/drivers/block/nd/Kconfig
>>>> @@ -41,4 +41,24 @@ config NFIT_ACPI
>>>> register the platform-global NFIT blob with the core. Also
>>>> enables the core to craft ACPI._DSM messages for platform/dimm
>>>> configuration.
>>>> +
>>>> +config NFIT_TEST
>>>> + tristate "NFIT TEST: Manufactured NFIT for interface testing"
>>>> + depends on DMA_CMA
>>>> + depends on ND_CORE=m
>>>> + depends on m
>>>> + help
>>>> + For development purposes register a manufactured
>>>> + NFIT table to verify the resulting device model topology.
>>>> + Note, this module arranges for ioremap_cache() to be
>>>> + overridden locally to allow simulation of system-memory as an
>>>> + io-memory-resource.
>>>> +
>>>> + Note, this test expects to be able to find at least
>>>> + 256MB of CMA space (CONFIG_CMA_SIZE_MBYTES) or it will fail to
>>>
>>> It seems to actually be wanting >= 584MB.
>>
>> Ah, true, this Kconfig text is stale. Will fix.
>
> Thanks. One more question...
>
>> +#ifdef CONFIG_CMA_SIZE_MBYTES
>> +#define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
>> +#else
>> +#define CMA_SIZE_MBYTES 0
>> +#endif
>> +
>> +static __init int nfit_test_init(void)
>> +{
>> + int rc, i;
>> +
>> + if (CMA_SIZE_MBYTES < 584) {
>> + pr_err("need CONFIG_CMA_SIZE_MBYTES >= 584 to load\n");
>> + return -EINVAL;
>> + }
>> +
>
> Since the kernel takes a cma= boot parameter, it would be nice if
> this check is against what the kernel is using rather than the config
> option. Is that possible?
Yeah, that would be more friendly. I also think we can reduce the BLK
aperture sizes. Since those don't need to be DAX capable they can
come from vmalloc memory rather than CMA. I'll take a look.
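[Editorial note: one way to honor a cma= boot override is to check the
effective pool size at runtime instead of the build-time default. In-kernel
code could compare against totalcma_pages; the user-space sketch below
parses the CmaTotal field that CONFIG_CMA kernels expose in /proc/meminfo
(584 MB corresponds to 598016 kB).]

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Return the CmaTotal value (in kB) from a /proc/meminfo-style buffer,
 * or -1 if the field is absent.
 */
static long cma_total_kb(const char *meminfo)
{
	const char *p = strstr(meminfo, "CmaTotal:");

	if (!p)
		return -1;
	return strtol(p + strlen("CmaTotal:"), NULL, 10);
}
```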
On Fri, 2015-04-24 at 10:45 -0700, Dan Williams wrote:
> On Fri, Apr 24, 2015 at 10:18 AM, Toshi Kani <[email protected]> wrote:
> > On Fri, 2015-04-24 at 09:25 -0700, Dan Williams wrote:
> >> On Fri, Apr 24, 2015 at 8:56 AM, Toshi Kani <[email protected]> wrote:
> >> > On Fri, 2015-04-17 at 21:35 -0400, Dan Williams wrote:
> >> >> Most configuration of the nd-subsystem is done via nd-sysfs. However,
> >> >> the NFIT specification defines a small set of messages that can be
> >> >> passed to the subsystem via platform-firmware-defined methods. The
> >> >> command set (as of the current version of the NFIT-DSM spec) is:
> >> >>
> >> >> NFIT_CMD_SMART: media health and diagnostics
> >> >> NFIT_CMD_GET_CONFIG_SIZE: size of the label space
> >> >> NFIT_CMD_GET_CONFIG_DATA: read label
> >> >> NFIT_CMD_SET_CONFIG_DATA: write label
> >> >> NFIT_CMD_VENDOR: vendor-specific command passthrough
> >> >> NFIT_CMD_ARS_CAP: report address-range-scrubbing capabilities
> >> >> NFIT_CMD_START_ARS: initiate scrubbing
> >> >> NFIT_CMD_QUERY_ARS: report on scrubbing state
> >> >> NFIT_CMD_SMART_THRESHOLD: configure alarm thresholds for smart events
> >> >
> >> > "nd/bus.c" provides two features, 1) the top level ND bus driver which
> >> > is the central part of the ND, and 2) the ioctl interface specific to
> >> > the example-DSM-interface. I think the example-DSM-specific part should
> >> > be put into an example-DSM-support module, so that the ND can support
> >> > other _DSMs as necessary. Also, _DSM needs to be handled as optional.
> >>
> >> I don't think it needs to be separated, they'll both end up using the
> >> same infrastructure just with different UUIDs on the ACPI device
> >> interface or different format-interface-codes. A firmware
> >> implementation is also free to disable individual DSMs (see
> >> nd_acpi_add_dimm).
> >
> > Well, ioctl cmd# is essentially func# of the _DSM, and each cmd
> > structure needs to match with its _DSM output data structure. So, I do
> > not think these cmds will work for other _DSMs. That said, the ND is
> > complex enough already, and we should not make it more complicated for
> > the initial version... So, how about changing the name of /dev/ndctl0
> > to indicate RFIC 0x0201, ex. /dev/nd0201ctl0? That should allow
> > separate ioctl()s for other RFICs. The code can be updated when other
> > _DSM actually needs to be supported by the ND.
>
> No, all you need is unique command names (see libndctl
> ndctl_{bus|dimm}_is_cmd_supported()) and then translate the ND cmd
> number to the firmware function number in the "provider". It just so
> happens that for these first set of commands the ND cmd number matches
> the ACPI device function number in the DSM-interface-example, but
> there is no reason that need always be the case.
I misread the code -- /dev/ndctlN is for a bus, and /dev/nmemN is for a
DIMM. RFIC 0x0201 matches DIMMs, not the bus. Since the _DSM under
ACPI0013 is generic, we are probably OK with ndctl.
The DIMM driver is fully integrated with the example DSM, so separating
out nd/acpi.c alone would not solve it... In your fix that makes the
DIMM _DSM optional, do you plan to make the DIMM driver more independent
of the example DIMM _DSM?
Thanks,
-Toshi
On Fri, Apr 17, 2015 at 09:35:19PM -0400, Dan Williams wrote:
> diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
> index c52d7540dc05..cd8b7485e396 100644
> --- a/arch/ia64/kernel/efi.c
> +++ b/arch/ia64/kernel/efi.c
> @@ -1227,6 +1227,7 @@ efi_initialize_iomem_resources(struct resource *code_resource,
> case EFI_RUNTIME_SERVICES_CODE:
> case EFI_RUNTIME_SERVICES_DATA:
> case EFI_ACPI_RECLAIM_MEMORY:
> + case EFI_PERSISTENT_MEMORY:
> default:
> name = "reserved";
You probably want "pmem" as the name here..
> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
> index 11cc7d54ec3f..410af501a941 100644
> --- a/arch/x86/kernel/e820.c
> +++ b/arch/x86/kernel/e820.c
> @@ -137,6 +137,8 @@ static void __init e820_print_type(u32 type)
> case E820_RESERVED_KERN:
> printk(KERN_CONT "usable");
> break;
> + case E820_PMEM:
> + case E820_PRAM:
> case E820_RESERVED:
> printk(KERN_CONT "reserved");
> break;
> @@ -149,9 +151,6 @@ static void __init e820_print_type(u32 type)
> case E820_UNUSABLE:
> printk(KERN_CONT "unusable");
> break;
> - case E820_PRAM:
> - printk(KERN_CONT "persistent (type %u)", type);
> - break;
Please keep this printk, and add the new E820_PMEM case to it as well.
> +static bool do_mark_busy(u32 type, struct resource *res)
> +{
> + if (res->start < (1ULL<<20))
> + return true;
> +
> + switch (type) {
> + case E820_RESERVED:
> + case E820_PRAM:
> + case E820_PMEM:
> + return false;
> + default:
> + return true;
> + }
> +}
Please add a comment explaining the choices once you start refactoring
this. Especially the address check is black magic..
On Fri, Apr 17, 2015 at 09:35:25PM -0400, Dan Williams wrote:
> Maintainer information and documentation for drivers/block/nd/
Usually this would go last in the series..
On Fri, Apr 17, 2015 at 09:35:30PM -0400, Dan Williams wrote:
> new file mode 100644
> index 000000000000..5fa74f124b3e
> --- /dev/null
> +++ b/drivers/block/nd/Kconfig
> @@ -0,0 +1,44 @@
> +config ND_ARCH_HAS_IOREMAP_CACHE
> + depends on (X86 || IA64 || ARM || ARM64 || SH || XTENSA)
> + def_bool y
As mentioned before, please either define this symbol in each
arch Kconfig, or just ensure every architecture provides a stub.
But more importantly, it doesn't seem like you're actually using
ioremap_cache anywhere. Allowing a cached ioremap would be a very
worthwhile addition to the pmem drivers once we have the proper
memcpy functions making it safe, and is one of the high-priority
todo items for the pmem driver.
> +
> +menuconfig NFIT_DEVICES
> + bool "NVDIMM (NFIT) Support"
Please just call all the symbols and file names nvdimm instead of nfit
or nd to make everyone's life simpler for the generic code. Just use the
EFI/ACPI terminology in those parts that actually parse those tables.
Eww, the --wrap stuff is too ugly to live. Just implement persistent
nvdimms in qemu where they belong.
Note that having a not-actually-persistent implementation that registers
with the subsystem and doesn't need these hacks still sounds ok to
me, although I suspect most users would much prefer the virtualization-
based variant.
On Sat, Apr 18, 2015 at 12:37:09PM -0700, Dan Williams wrote:
> At this point in the patch series I agree, but in later patches we
> take advantage of nd bus services. "[PATCH 15/21] nd: pmem label sets
> and namespace instantiation" adds support for labeled pmem namespaces,
> and in "[PATCH 19/21] nd: infrastructure for btt devices" we make pmem
> capable of hosting btt instances.
That's fine, but still doesn't require moving it around.
On Fri, Apr 17, 2015 at 09:36:55PM -0400, Dan Williams wrote:
> Block devices from an nd bus, in addition to accepting "struct bio"
> based requests, also have the capability to perform byte-aligned
> accesses. By default only the bio/block interface is used. However, if
> another driver can make effective use of the byte-aligned capability it
> can claim/disable the block interface and use the byte-aligned "nd_io"
> interface.
>
> The BTT driver is the intended first consumer of this mechanism to allow
> layering atomic sector update guarantees on top of nd_io capable
> nd-bus-block-devices.
Please don't add any of the nd-specific hacks into the pmem driver.
The rw_bytes functionality already is provided by the existing block
level ->rw_page method which pmem already implements, and any sort
of bus locking for different access methods should be in the bus
glue, not in pmem.c
On Tue, Apr 28, 2015 at 06:01:04AM -0700, Christoph Hellwig wrote:
> Please don't add any of the nd-specific hacks into the pmem driver.
> The rw_bytes functionality already is provided by the existing block
> level ->rw_page method which pmem already implements, and any sort
> of bus locking for different access methods should be in the bus
> glue, not in pmem.c
->rw_page only lets you do an entire page, not a range of bytes within
a page.
On Wed, 2015-04-22 at 13:00 -0700, Dan Williams wrote:
> On Wed, Apr 22, 2015 at 12:38 PM, Toshi Kani <[email protected]> wrote:
> > On Wed, 2015-04-22 at 12:28 -0700, Dan Williams wrote:
> >> On Wed, Apr 22, 2015 at 11:23 AM, Toshi Kani <[email protected]> wrote:
> >> > On Wed, 2015-04-22 at 11:20 -0700, Dan Williams wrote:
> >> >> On Wed, Apr 22, 2015 at 11:00 AM, Linda Knippers <[email protected]> wrote:
> >> >> Wait, point of clarification, DCRs (dimm-control-regions) have RFICs,
> >> >> not MEMDEVs (memory-device-to-spa-mapping). Toshi's original report
> >> >> was that an NFIT with a SPA+MEMDEV was failing to enable a PMEM
> >> >> device. That specific problem can be fixed by either deleting the
> >> >> MEMDEV, or adding a DCR.
> >> >
> >> > By a DCR, do you mean a DCR structure or SPA with Control Region GUID?
> >>
> >> Hmm, I meant a DCR as defined below. I agree you would not need a "SPA-DCR".
> >>
> >> > Adding a DCR structure does not solve this issue since it requires SPA
> >> > with Control Region GUID, which battery-backed DIMMs do not have.
> >>
> >> I would not go that far, half of a DCR entry is relevant for any
> >> NVDIMM, and half is only relevant if a DIMM offers BLK access:
> >>
> >> struct acpi_nfit_dcr {
> >> u16 type;
> >> u16 length;
> >> u16 dcr_index;
> >> u16 vendor_id;
> >> u16 device_id;
> >> u16 revision_id;
> >> u16 sub_vendor_id;
> >> u16 sub_device_id;
> >> u16 sub_revision_id;
> >> u8 reserved[6];
> >> u32 serial_number;
> >> u16 fic;
> >> <<<<< BLK relevant fields start here <<<<<
> >> u16 num_bcw;
> >> u64 bcw_size;
> >> u64 cmd_offset;
> >> u64 cmd_size;
> >> u64 status_offset;
> >> u64 status_size;
> >> u16 flags;
> >> u8 reserved2[6];
> >> };
> >
> > Yes, we do have a DCR entry. But we do not have a SPA-DCR.
>
> Got it. will fix.
Attached is an example implementation of the NFIT table with 2
battery-backed NVDIMM cards, which I have used for testing. I hope this
provides a good example of an NFIT table with SPA(PMEM), MEMDEV and DCR
entries, which allows optional _DSMs for battery-backed NVDIMMs as
necessary.
HP is also defining _DSM method for battery-backed NVDIMMs, and will
share the spec when it is ready.
Thanks,
-Toshi
On Tue, 2015-04-28 at 10:47 -0600, Toshi Kani wrote:
> On Wed, 2015-04-22 at 13:00 -0700, Dan Williams wrote:
> > On Wed, Apr 22, 2015 at 12:38 PM, Toshi Kani <[email protected]> wrote:
> > > On Wed, 2015-04-22 at 12:28 -0700, Dan Williams wrote:
> > >> On Wed, Apr 22, 2015 at 11:23 AM, Toshi Kani <[email protected]> wrote:
> > >> > On Wed, 2015-04-22 at 11:20 -0700, Dan Williams wrote:
> > >> >> On Wed, Apr 22, 2015 at 11:00 AM, Linda Knippers <[email protected]> wrote:
> > >> >> Wait, point of clarification, DCRs (dimm-control-regions) have RFICs,
> > >> >> not MEMDEVs (memory-device-to-spa-mapping). Toshi's original report
> > >> >> was that an NFIT with a SPA+MEMDEV was failing to enable a PMEM
> > >> >> device. That specific problem can be fixed by either deleting the
> > >> >> MEMDEV, or adding a DCR.
> > >> >
> > >> > By a DCR, do you mean a DCR structure or SPA with Control Region GUID?
> > >>
> > >> Hmm, I meant a DCR as defined below. I agree you would not need a "SPA-DCR".
> > >>
> > >> > Adding a DCR structure does not solve this issue since it requires SPA
> > >> > with Control Region GUID, which battery-backed DIMMs do not have.
> > >>
> > >> I would not go that far, half of a DCR entry is relevant for any
> > >> NVDIMM, and half is only relevant if a DIMM offers BLK access:
> > >>
> > >> struct acpi_nfit_dcr {
> > >> u16 type;
> > >> u16 length;
> > >> u16 dcr_index;
> > >> u16 vendor_id;
> > >> u16 device_id;
> > >> u16 revision_id;
> > >> u16 sub_vendor_id;
> > >> u16 sub_device_id;
> > >> u16 sub_revision_id;
> > >> u8 reserved[6];
> > >> u32 serial_number;
> > >> u16 fic;
> > >> <<<<< BLK relevant fields start here <<<<<
> > >> u16 num_bcw;
> > >> u64 bcw_size;
> > >> u64 cmd_offset;
> > >> u64 cmd_size;
> > >> u64 status_offset;
> > >> u64 status_size;
> > >> u16 flags;
> > >> u8 reserved2[6];
> > >> };
> > >
> > > Yes, we do have a DCR entry. But we do not have a SPA-DCR.
> >
> > Got it. will fix.
>
> Attached is an example implementation of the NFIT table with 2
> battery-backed NVDIMM cards, which I have used for testing. I hope this
> provides a good example of an NFIT table with SPA(PMEM), MEMDEV and DCR
> entries, which allows optional _DSMs for battery-backed NVDIMMs as
> necessary.
>
> HP is also defining _DSM method for battery-backed NVDIMMs, and will
> share the spec when it is ready.
Sorry, using a ".txt" extension on a Linux text file caused my mailer to
perform some unnecessary conversion... Attached is the same file without
".txt" this time.
-Toshi
On Tue, Apr 28, 2015 at 5:46 AM, Christoph Hellwig <[email protected]> wrote:
> On Fri, Apr 17, 2015 at 09:35:19PM -0400, Dan Williams wrote:
>> diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
>> index c52d7540dc05..cd8b7485e396 100644
>> --- a/arch/ia64/kernel/efi.c
>> +++ b/arch/ia64/kernel/efi.c
>> @@ -1227,6 +1227,7 @@ efi_initialize_iomem_resources(struct resource *code_resource,
>> case EFI_RUNTIME_SERVICES_CODE:
>> case EFI_RUNTIME_SERVICES_DATA:
>> case EFI_ACPI_RECLAIM_MEMORY:
>> + case EFI_PERSISTENT_MEMORY:
>> default:
>> name = "reserved";
>
> You probably want pmem as name here..
>
>> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
>> index 11cc7d54ec3f..410af501a941 100644
>> --- a/arch/x86/kernel/e820.c
>> +++ b/arch/x86/kernel/e820.c
>> @@ -137,6 +137,8 @@ static void __init e820_print_type(u32 type)
>> case E820_RESERVED_KERN:
>> printk(KERN_CONT "usable");
>> break;
>> + case E820_PMEM:
>> + case E820_PRAM:
>> case E820_RESERVED:
>> printk(KERN_CONT "reserved");
>> break;
>> @@ -149,9 +151,6 @@ static void __init e820_print_type(u32 type)
>> case E820_UNUSABLE:
>> printk(KERN_CONT "unusable");
>> break;
>> - case E820_PRAM:
>> - printk(KERN_CONT "persistent (type %u)", type);
>> - break;
>
> Please keep this printk, and add the new E820_PMEM case to it as well.
>
>> +static bool do_mark_busy(u32 type, struct resource *res)
>> +{
>> + if (res->start < (1ULL<<20))
>> + return true;
>> +
>> + switch (type) {
>> + case E820_RESERVED:
>> + case E820_PRAM:
>> + case E820_PMEM:
>> + return false;
>> + default:
>> + return true;
>> + }
>> +}
>
> Please add a comment explaining the choices once you start refactoring
> this. Especially the address check is black magic..
Ok, I was able to incorporate all these into v2.
On Tue, Apr 28, 2015 at 5:53 AM, Christoph Hellwig <[email protected]> wrote:
> On Fri, Apr 17, 2015 at 09:35:30PM -0400, Dan Williams wrote:
>> new file mode 100644
>> index 000000000000..5fa74f124b3e
>> --- /dev/null
>> +++ b/drivers/block/nd/Kconfig
>> @@ -0,0 +1,44 @@
>> +config ND_ARCH_HAS_IOREMAP_CACHE
>> + depends on (X86 || IA64 || ARM || ARM64 || SH || XTENSA)
>> + def_bool y
>
> As mentioned before, please either define this symbol in each
> arch Kconfig, or just ensure every architecture provides a stub.
>
> But more importantly, it doesn't seem like you're actually using
> ioremap_cache anywhere. Allowing a cached ioremap would be a very
> worthwhile addition to the pmem drivers once we have the proper
> memcpy functions making it safe, and is one of the high-priority
> todo items for the pmem driver.
>
>> +
>> +menuconfig NFIT_DEVICES
>> + bool "NVDIMM (NFIT) Support"
>
> Please just call all the symbols and file names nvdimm instead of nfit
> or nd to make everyone's life simpler for the generic code. Just use the
> EFI/ACPI terminology in those parts that actually parse those tables.
Done in v2.
On Tue, Apr 28, 2015 at 5:54 AM, Christoph Hellwig <[email protected]> wrote:
> Eww, the --wrap stuff is too ugly to live.
Since when are unit tests pretty?
> Just implement persistent
> nvdimms in qemu where they belong.
Ugh, no, I'm not keen to introduce yet another roadblock to running
the tests and another degree of freedom for things to bit rot. It
will never be pretty, but the implementation at least gets slightly
cleaner in v2 with the removal of the wrapping for nd_blk_do_io().
It's also worth noting that 0day is currently running our unit tests.
> Note that having a not-actually-persistent implementation that registers
> with the subsystem and doesn't need these hacks still sounds ok to me,
> although I suspect most users would much prefer the virtualization-based
> variant.
KVM NFIT enabling is happening, but I don't think it is useful as a
unit test vehicle.
On Tue, Apr 28, 2015 at 5:56 AM, Christoph Hellwig <[email protected]> wrote:
> On Sat, Apr 18, 2015 at 12:37:09PM -0700, Dan Williams wrote:
>> At this point in the patch series I agree, but in later patches we
>> take advantage of nd bus services. "[PATCH 15/21] nd: pmem label sets
>> and namespace instantiation" adds support for labeled pmem namespaces,
>> and in "[PATCH 19/21] nd: infrastructure for btt devices" we make pmem
>> capable of hosting btt instances.
>
> That's fine, but still doesn't require moving it around.
I ended up not moving it in v2. Let me know if the updated rationale
makes sense.