From: Dan Williams
Date: Wed, 23 Sep 2020 14:41:45 -0700
Subject: Re: [PATCH v4 11/23] device-dax: Kill dax_kmem_res
To: David Hildenbrand
Cc: Joao Martins, Andrew Morton, Vishal Verma, Dave Hansen,
 Pavel Tatashin, Peter Zijlstra, Ard Biesheuvel, Linux MM,
 linux-nvdimm, Linux Kernel Mailing List, Linux ACPI,
 Maling list - DRI developers
In-Reply-To: <9acc6148-72eb-7016-dba9-46fa87ded5a5@redhat.com>
References: <159643094279.4062302.17779410714418721328.stgit@dwillia2-desk3.amr.corp.intel.com>
 <159643100485.4062302.976628339798536960.stgit@dwillia2-desk3.amr.corp.intel.com>
 <17686fcc-202e-0982-d0de-54d5349cfb5d@oracle.com>
 <9acc6148-72eb-7016-dba9-46fa87ded5a5@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 23, 2020 at 1:04 AM David Hildenbrand wrote:
>
> On 08.09.20 17:33, Joao Martins wrote:
> > [Sorry for the late response]
> >
> > On 8/21/20 11:06 AM, David Hildenbrand wrote:
> >> On 03.08.20 07:03, Dan Williams wrote:
> >>> @@ -37,109 +45,94 @@ int dev_dax_kmem_probe(struct device *dev)
> >>>     * could be mixed in a node with faster memory, causing
> >>>     * unavoidable performance issues.
> >>>     */
> >>> -  numa_node = dev_dax->target_node;
> >>>    if (numa_node < 0) {
> >>>            dev_warn(dev, "rejecting DAX region with invalid node: %d\n",
> >>>                            numa_node);
> >>>            return -EINVAL;
> >>>    }
> >>>
> >>> -  /* Hotplug starting at the beginning of the next block: */
> >>> -  kmem_start = ALIGN(range->start, memory_block_size_bytes());
> >>> -
> >>> -  kmem_size = range_len(range);
> >>> -  /* Adjust the size down to compensate for moving up kmem_start: */
> >>> -  kmem_size -= kmem_start - range->start;
> >>> -  /* Align the size down to cover only complete blocks: */
> >>> -  kmem_size &= ~(memory_block_size_bytes() - 1);
> >>> -  kmem_end = kmem_start + kmem_size;
> >>> -
> >>> -  new_res_name = kstrdup(dev_name(dev), GFP_KERNEL);
> >>> -  if (!new_res_name)
> >>> +  res_name = kstrdup(dev_name(dev), GFP_KERNEL);
> >>> +  if (!res_name)
> >>>            return -ENOMEM;
> >>>
> >>> -  /* Region is permanently reserved if hotremove fails. */
> >>> -  new_res = request_mem_region(kmem_start, kmem_size, new_res_name);
> >>> -  if (!new_res) {
> >>> -          dev_warn(dev, "could not reserve region [%pa-%pa]\n",
> >>> -                          &kmem_start, &kmem_end);
> >>> -          kfree(new_res_name);
> >>> +  res = request_mem_region(range.start, range_len(&range), res_name);
> >>
> >> I think our range could be empty after aligning. I assume
> >> request_mem_region() would check that, but maybe we could report a
> >> better error/warning in that case.
> >>
> > dax_kmem_range() already returns a memory-block-aligned @range but
> > IIUC request_mem_region() isn't checking for that. Having said that
> > the returned @res wouldn't be different from the passed range.start.
> >
> >>>    /*
> >>>     * Ensure that future kexec'd kernels will not treat this as RAM
> >>>     * automatically.
> >>>     */
> >>> -  rc = add_memory_driver_managed(numa_node, new_res->start,
> >>> -                                 resource_size(new_res), kmem_name);
> >>> +  rc = add_memory_driver_managed(numa_node, res->start,
> >>> +                                 resource_size(res), kmem_name);
> >>> +
> >>> +  res->flags |= IORESOURCE_BUSY;
> >>
> >> Hm, I don't think that's correct. Any specific reason why to mark the
> >> not-added, unaligned parts BUSY? E.g., walk_system_ram_range() could
> >> suddenly stumble over it - and e.g., similarly kexec code when trying to
> >> find memory for placing kexec images. I think we should leave this
> >> !BUSY, just as it is right now.
> >>
> > Agreed.
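(Aside, for anyone following along: dax_kmem_range() is just a
memory-block alignment helper. A minimal sketch, assuming the v4 field
names -- the posted patch may differ in detail:

static struct range dax_kmem_range(struct dev_dax *dev_dax)
{
        struct range range;

        /* memory-block align the hotplug range, dropping partial blocks */
        range.start = ALIGN(dev_dax->range.start, memory_block_size_bytes());
        range.end = ALIGN_DOWN(dev_dax->range.end + 1,
                        memory_block_size_bytes()) - 1;
        return range;
}

so a device range smaller than one aligned memory block collapses to an
empty / inverted range, which is why reporting that case better is a
fair point.)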
> >
> >>>    if (rc) {
> >>> -          release_resource(new_res);
> >>> -          kfree(new_res);
> >>> -          kfree(new_res_name);
> >>> +          release_mem_region(range.start, range_len(&range));
> >>> +          kfree(res_name);
> >>>            return rc;
> >>>    }
> >>> -  dev_dax->dax_kmem_res = new_res;
> >>> +
> >>> +  dev_set_drvdata(dev, res_name);
> >>>
> >>>    return 0;
> >>>  }
> >>>
> >>>  #ifdef CONFIG_MEMORY_HOTREMOVE
> >>> -static int dev_dax_kmem_remove(struct device *dev)
> >>> +static void dax_kmem_release(struct dev_dax *dev_dax)
> >>>  {
> >>> -  struct dev_dax *dev_dax = to_dev_dax(dev);
> >>> -  struct resource *res = dev_dax->dax_kmem_res;
> >>> -  resource_size_t kmem_start = res->start;
> >>> -  resource_size_t kmem_size = resource_size(res);
> >>> -  const char *res_name = res->name;
> >>>    int rc;
> >>> +  struct device *dev = &dev_dax->dev;
> >>> +  const char *res_name = dev_get_drvdata(dev);
> >>> +  struct range range = dax_kmem_range(dev_dax);
> >>>
> >>>    /*
> >>>     * We have one shot for removing memory, if some memory blocks were not
> >>>     * offline prior to calling this function remove_memory() will fail, and
> >>>     * there is no way to hotremove this memory until reboot because device
> >>> -   * unbind will succeed even if we return failure.
> >>> +   * unbind will proceed regardless of the remove_memory result.
> >>>     */
> >>> -  rc = remove_memory(dev_dax->target_node, kmem_start, kmem_size);
> >>> -  if (rc) {
> >>> -          any_hotremove_failed = true;
> >>> -          dev_err(dev,
> >>> -                  "DAX region %pR cannot be hotremoved until the next reboot\n",
> >>> -                  res);
> >>> -          return rc;
> >>> +  rc = remove_memory(dev_dax->target_node, range.start, range_len(&range));
> >>> +  if (rc == 0) {
> >>
> >> if (!rc) ?
> >>
> > Better off would be to keep the old order:
> >
> >     if (rc) {
> >             any_hotremove_failed = true;
> >             dev_err(dev, "%#llx-%#llx cannot be hotremoved until the next reboot\n",
> >                     range.start, range.end);
> >             return;
> >     }
> >
> >     release_mem_region(range.start, range_len(&range));
> >     dev_set_drvdata(dev, NULL);
> >     kfree(res_name);
> >     return;
> >
> >
> >>> +  release_mem_region(range.start, range_len(&range));
> >>
> >> remove_memory() does a release_mem_region_adjustable(). Don't you
> >> actually want to release the *unaligned* region you requested?
> >>
> > Isn't it what we're doing here?
> > (The release_mem_region_adjustable() is using the same
> > dax_kmem-aligned range and there's no split/adjust)
> >
> > Meaning right now (+ parent marked as !BUSY), and if I am understanding
> > this correctly:
> >
> > request_mem_region(range.start, range_len)
> >   __request_region(iomem_res, range.start, range_len) -> alloc @parent
> > add_memory_driver_managed(parent.start, resource_size(parent))
> >   __request_region(parent.start, resource_size(parent)) -> alloc @child
> >
> > [...]
> >
> > remove_memory(range.start, range_len)
> >   release_mem_region_adjustable(range.start, range_len)
> >     __release_region(range.start, range_len) -> remove @child
> >
> > release_mem_region(range.start, range_len)
> >   __release_region(range.start, range_len) -> doesn't remove @parent because !BUSY?
> >
> > The add/removal of this relies on !BUSY. But now I am wondering if the
> > parent remaining unreleased is deliberate even on CONFIG_MEMORY_HOTREMOVE=y.
> >
> > Joao
> >
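Stitching Joao's reordering back into the new function, the whole
release path would then read something like this (untested sketch,
assembled from the fragments above rather than from the final patch):

static void dax_kmem_release(struct dev_dax *dev_dax)
{
        int rc;
        struct device *dev = &dev_dax->dev;
        const char *res_name = dev_get_drvdata(dev);
        struct range range = dax_kmem_range(dev_dax);

        /*
         * One shot at hotremove: if any blocks are still online,
         * remove_memory() fails and the region stays pinned until
         * reboot, because unbind proceeds regardless.
         */
        rc = remove_memory(dev_dax->target_node, range.start,
                        range_len(&range));
        if (rc) {
                any_hotremove_failed = true;
                dev_err(dev, "%#llx-%#llx cannot be hotremoved until the next reboot\n",
                                range.start, range.end);
                return;
        }

        /* Drop the driver-requested (parent) region and its name */
        release_mem_region(range.start, range_len(&range));
        dev_set_drvdata(dev, NULL);
        kfree(res_name);
}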
>
> Thinking about it, if we don't set the parent resource BUSY (which is
> what I think is the right way of doing things), and don't want to store
> the parent resource pointer, we could add something like
> lookup_resource() - e.g., lookup_mem_resource() -, however, searching
> properly in the whole hierarchy (instead of only the first level), and
> traversing down to the last hierarchy. Then it would be as simple as
>
> remove_memory(range.start, range_len)
> res = lookup_mem_resource(range.start);
> release_resource(res);

Another thought... I notice that you've taught
register_memory_resource() an IORESOURCE_MEM_DRIVER_MANAGED special
case. Let's just make it part of the add_memory_driver_managed()
contract that it is the driver's responsibility to mark the range busy
before calling, and the driver's responsibility to release the region.
I.e. validate (rather than request) that the range is busy in
register_memory_resource(), and teach release_memory_resource() to skip
releasing the region when the memory is marked driver managed. That
would let dax_kmem drop its manipulation of the 'busy' flag, which is a
layering violation no matter how many comments we put around it.
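Concretely, something like the below -- a rough sketch only, reusing
the flag name from this thread, and with the resource-name test as a
stand-in for however register_memory_resource() would actually detect
the driver-managed case:

static struct resource *register_memory_resource(u64 start, u64 size,
                const char *resource_name)
{
        struct resource *res;

        if (strcmp(resource_name, "System RAM") != 0) {
                /*
                 * Driver-managed: the driver already did
                 * request_mem_region(), so validate that the range is
                 * busy instead of requesting it a second time.
                 */
                res = lookup_resource(&iomem_resource, start);
                if (!res || !(res->flags & IORESOURCE_BUSY))
                        return ERR_PTR(-EINVAL);
                res->flags |= IORESOURCE_MEM_DRIVER_MANAGED;
                return res;
        }

        res = request_mem_region(start, size, resource_name);
        return res ? res : ERR_PTR(-EEXIST);
}

static void release_memory_resource(struct resource *res)
{
        if (!res)
                return;
        /* Driver-managed regions are the driver's to release, not ours */
        if (res->flags & IORESOURCE_MEM_DRIVER_MANAGED)
                return;
        release_resource(res);
        kfree(res);
}

With that, dax_kmem only ever does request_mem_region() /
release_mem_region() on its own range and never touches res->flags
directly.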