Date: Sun, 21 Jun 2015 09:31:02 -0700
Subject: Re: [PATCH 03/15] nd_btt: atomic sector updates
From: Dan Williams
To: Christoph Hellwig
Cc: Jens Axboe, "linux-nvdimm@lists.01.org", Boaz Harrosh, "Kani, Toshimitsu", Vishal Verma, Neil Brown, Greg KH, Dave Chinner, "linux-kernel@vger.kernel.org", Andy Lutomirski, Jens Axboe, Linux ACPI, Jeff Moyer, "H. Peter Anvin", linux-fsdevel, Ingo Molnar

On Sun, Jun 21, 2015 at 3:03 AM, Christoph Hellwig wrote:
>> +config ND_MAX_REGIONS
>> +	int "Maximum number of regions supported by the sub-system"
>> +	default 64
>> +	---help---
>> +	  A 'region' corresponds to an individual DIMM or an interleave
>> +	  set of DIMMs.  A typical maximally configured system may have
>> +	  up to 32 DIMMs.
>> +
>> +	  Leave the default of 64 if you are unsure.
>
> Having static limits in Kconfig is a bad idea.  What prevents you
> from handling any (reasonable) number dynamically?

Hmm, yes, this was a bad holdover from before we were using percpu
definitions for the lane locks.  Now that it's converted we can kill
the static definition of nd_percpu_lane and just do an alloc_percpu()
for each region dynamically.

Fixed in v2 and passing the test suite.
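For illustration, a minimal sketch of the dynamic scheme (hypothetical,
not the actual v2 patch; the nd_region layout and the helper names
nd_region_init_lanes()/nd_region_free_lanes() here are assumptions):

/*
 * Sketch: per-region percpu lane locks allocated at region creation,
 * replacing a static ND_MAX_REGIONS-sized array.  Field layout and
 * helpers are illustrative, not the real libnvdimm code.
 */
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct nd_percpu_lane {
	int count;
	spinlock_t lock;
};

struct nd_region {
	struct nd_percpu_lane __percpu *lane;
	/* ... other region state ... */
};

static int nd_region_init_lanes(struct nd_region *nd_region)
{
	int cpu;

	/* One lane per possible cpu, allocated when the region comes up */
	nd_region->lane = alloc_percpu(struct nd_percpu_lane);
	if (!nd_region->lane)
		return -ENOMEM;

	for_each_possible_cpu(cpu) {
		struct nd_percpu_lane *ndl = per_cpu_ptr(nd_region->lane, cpu);

		spin_lock_init(&ndl->lock);
		ndl->count = 0;
	}
	return 0;
}

static void nd_region_free_lanes(struct nd_region *nd_region)
{
	free_percpu(nd_region->lane);
}

Each region then carries its own percpu lane state, so any (reasonable)
number of regions works without a compile-time cap to configure.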