Subject: [PATCH v3 11/23] device-dax: Kill dax_kmem_res
From: Dan Williams
To: akpm@linux-foundation.org
Cc: David Hildenbrand, Vishal Verma, Dave Hansen, Pavel Tatashin,
 peterz@infradead.org, ard.biesheuvel@linaro.org, vishal.l.verma@intel.com,
 linux-mm@kvack.org, linux-nvdimm@lists.01.org, joao.m.martins@oracle.com,
 linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Date: Fri, 31 Jul 2020 20:26:01 -0700
Message-ID: <159625236129.3040297.8607704947114784109.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <159625229779.3040297.11363509688097221416.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <159625229779.3040297.11363509688097221416.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Several related issues surround this unneeded attribute:

- The dax_kmem_res property lets the kmem driver stash the adjusted
  resource range that was used for the hotplug operation, but that range
  can be recalculated from the original base range (sketched in the note
  below).

- kmem uses an open-coded release_resource() + kfree() where an idiomatic
  release_mem_region() is sufficient.

- The driver-managed resource need only manage the busy flag. Other flags
  are of no concern to the kmem driver. In fact, if kmem inherits some
  memory range that add_memory_driver_managed() rejects, that is a
  memory-hotplug-core policy that the driver is in no position to
  override.

- The implementation trusts that a failed remove_memory() leaves the
  entire resource range pinned busy. The driver need not make that
  layering-violation assumption; it can simply maintain the busy state in
  its own local resource.

- The "Hot-remove not yet implemented." comment is stale, since hotremove
  support is now included.

Cc: David Hildenbrand
Cc: Vishal Verma
Cc: Dave Hansen
Cc: Pavel Tatashin
Signed-off-by: Dan Williams
---
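
For reference, below is a minimal userspace sketch of the memory-block
alignment that the new dax_kmem_range() helper performs. The ALIGN() /
ALIGN_DOWN() macros mirror the kernel's, and the 128MiB block size and
example physical range are assumptions chosen only for illustration:

/*
 * Trim an arbitrary device-dax range to whole memory blocks, the same
 * calculation dax_kmem_range() performs in the patch below.
 */
#include <stdio.h>
#include <stdint.h>

#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

struct range { uint64_t start, end; };

static uint64_t memory_block_size_bytes(void)
{
	return 128ULL << 20; /* assume 128MiB memory blocks */
}

static struct range dax_kmem_range(const struct range *dax)
{
	struct range r;

	/* memory-block align the hotplug range */
	r.start = ALIGN(dax->start, memory_block_size_bytes());
	r.end = ALIGN_DOWN(dax->end + 1, memory_block_size_bytes()) - 1;
	return r;
}

int main(void)
{
	/* hypothetical device-dax range, unaligned at both ends */
	struct range dax = { .start = 0x146000000, .end = 0x20a123fff };
	struct range r = dax_kmem_range(&dax);

	/* prints 0x148000000-0x207ffffff: partial blocks are dropped */
	printf("hotplug range: %#llx-%#llx\n",
			(unsigned long long)r.start,
			(unsigned long long)r.end);
	return 0;
}

Because the aligned range is derived on demand, nothing about it needs to
be cached in struct dev_dax, which is what allows dax_kmem_res to go away.
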
 drivers/dax/dax-private.h |    3 -
 drivers/dax/kmem.c        |  123 +++++++++++++++++++++------------------------
 2 files changed, 58 insertions(+), 68 deletions(-)

diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index 6779f683671d..12a2dbc43b40 100644
--- a/drivers/dax/dax-private.h
+++ b/drivers/dax/dax-private.h
@@ -42,8 +42,6 @@ struct dax_region {
  * @dev - device core
  * @pgmap - pgmap for memmap setup / lifetime (driver owned)
  * @range: resource range for the instance
- * @dax_mem_res: physical address range of hotadded DAX memory
- * @dax_mem_name: name for hotadded DAX memory via add_memory_driver_managed()
  */
 struct dev_dax {
 	struct dax_region *region;
@@ -52,7 +50,6 @@ struct dev_dax {
 	struct device dev;
 	struct dev_pagemap *pgmap;
 	struct range range;
-	struct resource *dax_kmem_res;
 };
 
 static inline u64 range_len(struct range *range)
diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index 5bb133df147d..77e25361fbeb 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -19,16 +19,24 @@ static const char *kmem_name;
 /* Set if any memory will remain added when the driver will be unloaded. */
 static bool any_hotremove_failed;
 
+static struct range dax_kmem_range(struct dev_dax *dev_dax)
+{
+	struct range range;
+
+	/* memory-block align the hotplug range */
+	range.start = ALIGN(dev_dax->range.start, memory_block_size_bytes());
+	range.end = ALIGN_DOWN(dev_dax->range.end + 1,
+			memory_block_size_bytes()) - 1;
+	return range;
+}
+
 int dev_dax_kmem_probe(struct device *dev)
 {
 	struct dev_dax *dev_dax = to_dev_dax(dev);
-	struct range *range = &dev_dax->range;
-	resource_size_t kmem_start;
-	resource_size_t kmem_size;
-	resource_size_t kmem_end;
-	struct resource *new_res;
-	const char *new_res_name;
-	int numa_node;
+	struct range range = dax_kmem_range(dev_dax);
+	int numa_node = dev_dax->target_node;
+	struct resource *res;
+	char *res_name;
 	int rc;
 
 	/*
@@ -37,109 +45,94 @@ int dev_dax_kmem_probe(struct device *dev)
 	 * could be mixed in a node with faster memory, causing
 	 * unavoidable performance issues.
 	 */
-	numa_node = dev_dax->target_node;
 	if (numa_node < 0) {
 		dev_warn(dev, "rejecting DAX region with invalid node: %d\n",
 				numa_node);
 		return -EINVAL;
 	}
 
-	/* Hotplug starting at the beginning of the next block: */
-	kmem_start = ALIGN(range->start, memory_block_size_bytes());
-
-	kmem_size = range_len(range);
-	/* Adjust the size down to compensate for moving up kmem_start: */
-	kmem_size -= kmem_start - range->start;
-	/* Align the size down to cover only complete blocks: */
-	kmem_size &= ~(memory_block_size_bytes() - 1);
-	kmem_end = kmem_start + kmem_size;
-
-	new_res_name = kstrdup(dev_name(dev), GFP_KERNEL);
-	if (!new_res_name)
+	res_name = kstrdup(dev_name(dev), GFP_KERNEL);
+	if (!res_name)
 		return -ENOMEM;
 
-	/* Region is permanently reserved if hotremove fails. */
-	new_res = request_mem_region(kmem_start, kmem_size, new_res_name);
-	if (!new_res) {
-		dev_warn(dev, "could not reserve region [%pa-%pa]\n",
-				&kmem_start, &kmem_end);
-		kfree(new_res_name);
+	res = request_mem_region(range.start, range_len(&range), res_name);
+	if (!res) {
+		dev_warn(dev, "could not reserve region [%#llx-%#llx]\n",
+				range.start, range.end);
+		kfree(res_name);
 		return -EBUSY;
 	}
 
 	/*
-	 * Set flags appropriate for System RAM. Leave ..._BUSY clear
-	 * so that add_memory() can add a child resource. Do not
-	 * inherit flags from the parent since it may set new flags
-	 * unknown to us that will break add_memory() below.
+	 * Temporarily clear busy to allow add_memory_driver_managed()
+	 * to claim it.
 	 */
-	new_res->flags = IORESOURCE_SYSTEM_RAM;
+	res->flags &= ~IORESOURCE_BUSY;
 
 	/*
 	 * Ensure that future kexec'd kernels will not treat this as RAM
 	 * automatically.
 	 */
-	rc = add_memory_driver_managed(numa_node, new_res->start,
-				       resource_size(new_res), kmem_name);
+	rc = add_memory_driver_managed(numa_node, res->start,
+				       resource_size(res), kmem_name);
+
+	res->flags |= IORESOURCE_BUSY;
 	if (rc) {
-		release_resource(new_res);
-		kfree(new_res);
-		kfree(new_res_name);
+		release_mem_region(range.start, range_len(&range));
+		kfree(res_name);
 		return rc;
 	}
-	dev_dax->dax_kmem_res = new_res;
+
+	dev_set_drvdata(dev, res_name);
 
 	return 0;
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-static int dev_dax_kmem_remove(struct device *dev)
+static void dax_kmem_release(struct dev_dax *dev_dax)
 {
-	struct dev_dax *dev_dax = to_dev_dax(dev);
-	struct resource *res = dev_dax->dax_kmem_res;
-	resource_size_t kmem_start = res->start;
-	resource_size_t kmem_size = resource_size(res);
-	const char *res_name = res->name;
 	int rc;
+	struct device *dev = &dev_dax->dev;
+	const char *res_name = dev_get_drvdata(dev);
+	struct range range = dax_kmem_range(dev_dax);
 
 	/*
 	 * We have one shot for removing memory, if some memory blocks were not
 	 * offline prior to calling this function remove_memory() will fail, and
 	 * there is no way to hotremove this memory until reboot because device
-	 * unbind will succeed even if we return failure.
+	 * unbind will proceed regardless of the remove_memory result.
 	 */
-	rc = remove_memory(dev_dax->target_node, kmem_start, kmem_size);
-	if (rc) {
-		any_hotremove_failed = true;
-		dev_err(dev,
-			"DAX region %pR cannot be hotremoved until the next reboot\n",
-			res);
-		return rc;
+	rc = remove_memory(dev_dax->target_node, range.start, range_len(&range));
+	if (rc == 0) {
+		release_mem_region(range.start, range_len(&range));
+		dev_set_drvdata(dev, NULL);
+		kfree(res_name);
+		return;
 	}
 
-	/* Release and free dax resources */
-	release_resource(res);
-	kfree(res);
-	kfree(res_name);
-	dev_dax->dax_kmem_res = NULL;
-
-	return 0;
+	any_hotremove_failed = true;
+	dev_err(dev, "%#llx-%#llx cannot be hotremoved until the next reboot\n",
+			range.start, range.end);
 }
 #else
-static int dev_dax_kmem_remove(struct device *dev)
+static void dax_kmem_release(struct dev_dax *dev_dax)
 {
 	/*
-	 * Without hotremove purposely leak the request_mem_region() for the
-	 * device-dax range and return '0' to ->remove() attempts. The removal
-	 * of the device from the driver always succeeds, but the region is
-	 * permanently pinned as reserved by the unreleased
-	 * request_mem_region().
+	 * Without hotremove purposely leak the request_mem_region() for
+	 * the device-dax range and return '0' to ->remove() attempts. The
+	 * removal of the device from the driver always succeeds, but the
+	 * region is permanently pinned as reserved by the unreleased
+	 * request_mem_region().
 	 */
 	any_hotremove_failed = true;
-	return 0;
 }
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
+static int dev_dax_kmem_remove(struct device *dev)
+{
+	dax_kmem_release(to_dev_dax(dev));
+	return 0;
+}
+
 static struct dax_device_driver device_dax_kmem_driver = {
 	.drv = {
 		.probe = dev_dax_kmem_probe,