Subject: Re: [PATCH v2 00/20] libnd: non-volatile memory device support
From: Ross Zwisler
To: Andy Lutomirski
Cc: Dan Williams, linux-nvdimm, Boaz Harrosh, Neil Brown, Dave Chinner, "H. Peter Anvin", Ingo Molnar, "Rafael J. Wysocki", Robert Moore, Christoph Hellwig, Linux ACPI, Jeff Moyer, Nicholas Moulin, Matthew Wilcox, Vishal Verma, Jens Axboe, Borislav Petkov, Thomas Gleixner, Greg KH, linux-kernel@vger.kernel.org, Andrew Morton, Linus Torvalds
Date: Thu, 30 Apr 2015 14:56:30 -0600
Message-ID: <1430427390.23785.1.camel@theros.lm.intel.com>
In-Reply-To: References: <20150428181203.35812.60474.stgit@dwillia2-desk3.amr.corp.intel.com>

On Tue, 2015-04-28 at 16:05 -0700, Andy Lutomirski wrote:
> On Tue, Apr 28, 2015 at 3:28 PM, Dan Williams wrote:
> > On Tue, Apr 28, 2015 at 2:06 PM, Andy Lutomirski wrote:
> >> On Tue, Apr 28, 2015 at 1:59 PM, Dan Williams wrote:
> >>> On Tue, Apr 28, 2015 at 1:52 PM, Andy Lutomirski wrote:
> >>>> On Tue, Apr 28, 2015 at 11:24 AM, Dan Williams wrote:
> >>>> Mostly for my understanding: is there a name for "address relative to
> >>>> the address lines on the DIMM"?
> >>>> That is, a DIMM that exposes 8 GB of
> >>>> apparent physical memory, possibly interleaved, broken up, or weirdly
> >>>> remapped by the memory controller, would still have addresses between
> >>>> 0 and 8 GB. Some of those might be PMEM windows, some might be MMIO,
> >>>> some might be BLK apertures, etc.
> >>>>
> >>>> IIUC "DPA" refers to actual addressable storage, not this type of address?
> >>>
> >>> No, DPA is exactly as you describe above. You can't directly access
> >>> it except through a PMEM mapping (possibly interleaved with DPA from
> >>> other DIMMs) or a BLK aperture (an MMIO window into DPA).
> >>
> >> So the thing I'm describing has no name, then? Oh, well.
> >
> > What? The thing you are describing *is* DPA.
>
> I'm confused. Here are the two things I have in mind:
>
> 1. An address into on-DIMM storage. If I have a DIMM that is mapped
> to 8 GB of SPA but has 64 GB of usable storage (accessed through BLK
> apertures, say), then this address runs from 0 to 64 GB.
>
> 2. An address into the DIMM's view of physical address space. If I
> have a DIMM that is mapped to 8 GB of SPA but has 64 GB of usable
> storage (accessed through BLK apertures, say), then this address runs
> from 0 to 8 GB. There's a one-to-one mapping between SPA and this
> type of address.
>
> Since you said "a DIMM may provide both PMEM-mode and BLK-mode access
> to a range of DPA," I thought that DPA was #1.
>
> --Andy

I think you've got the right definition, #1 above, for DPA. The DPA is
relative to the DIMM, knows nothing about interleaving or SPA or anything
else in the system, and is basically equivalent to the idea of an LBA on
a disk. A DIMM that has 64 GiB of storage could have a DPA space ranging
from 0 to 64 GiB.

The second concept is a little trickier - we've been talking about this
using the term "N-way interleave set". Say you have your 64 GiB DIMM and
only the first 8 GiB are given to the OS in an SPA, and that DIMM isn't
interleaved with any other DIMMs.
This would be a 1-way interleave set, ranging over DPA 0 - 8 GiB on the
DIMM.

If you have 2 DIMMs of size 64 GiB, and they each have an 8 GiB region
given to the SPA space, those two regions could be interleaved together.
The OS would then see a 16 GiB 2-way interleave set, made up of DPAs
0 -> 8 GiB on each of the two DIMMs.

You can figure out exactly how all the interleaving works by looking at
the SPA tables, the Memory Device tables and the Interleave Tables.
These are in sections 5.2.25.1 - 5.2.25.3 in ACPI 6, and are in our code
as struct acpi_nfit_spa, struct acpi_nfit_memdev and struct acpi_nfit_idt.

- Ross