From: Dan Williams
Date: Wed, 26 Sep 2018 11:52:56 -0700
Subject: Re: [PATCH v5 4/4] mm: Defer ZONE_DEVICE page initialization to the point where we init pgmap
To: alexander.h.duyck@linux.intel.com
Cc: Michal Hocko, Linux MM, Andrew Morton, Linux Kernel Mailing List,
    linux-nvdimm, Pasha Tatashin, Dave Jiang, Dave Hansen,
    Jérôme Glisse, rppt@linux.vnet.ibm.com, Logan Gunthorpe,
    Ingo Molnar, "Kirill A. Shutemov"
In-Reply-To: <6f87a5d7-05e2-00f4-8568-bb3521869cea@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 26, 2018 at 11:25 AM Alexander Duyck wrote:
>
> On 9/26/2018 12:55 AM, Michal Hocko wrote:
> > On Tue 25-09-18 13:21:24, Alexander Duyck wrote:
> >> The ZONE_DEVICE pages were being initialized in two locations. One
> >> was with the memory_hotplug lock held and another was outside of
> >> that lock. The problem with this is that it was nearly doubling the
> >> memory initialization time.
> >> Instead of doing this twice, once while holding a global lock and
> >> once without, I am opting to defer the initialization to the one
> >> outside of the lock. This allows us to avoid serializing the
> >> overhead for memory init, and we can instead focus on per-node init
> >> times.
> >>
> >> One issue I encountered is that devm_memremap_pages and
> >> hmm_devmem_pages_create were initializing only the pgmap field the
> >> same way. One wasn't initializing hmm_data, and the other was
> >> initializing it to a poison value. Since this is something that is
> >> exposed to the driver in the case of hmm, I am opting for a third
> >> option and just initializing hmm_data to 0, since this is going to
> >> be exposed to unknown third-party drivers.
> >
> > Why can't you pull move_pfn_range_to_zone out of the hotplug lock?
> > In other words, why are you making zone device even more special in
> > the generic hotplug code when it already has its own means to
> > initialize the pfn range by calling move_pfn_range_to_zone? Not to
> > mention the code duplication.
>
> So there were a few things I wasn't sure we could pull outside of the
> hotplug lock. One specific example is the bits related to resizing the
> pgdat and zone; I wanted to avoid pulling those bits outside of the
> hotplug lock.
>
> The other bit that I left inside the hotplug lock with this approach
> was the initialization of the pages that contain the vmemmap.
>
> > That being said I really dislike this patch.
>
> In my mind this was a patch that killed two birds with one stone. I
> had two issues to address: the first being that we were performing
> memmap_init_zone while holding the hotplug lock, and the other being
> the loop that goes through and initializes pgmap in the hmm and
> memremap calls, which essentially added another 20 seconds (measured
> for 3TB of memory per node) to the init time.
> With this patch I was able to cut my init time per node by that 20
> seconds, and then made it so that we could scale as we added nodes,
> since they could run in parallel.

Yeah, at the very least there is no reason for devm_memremap_pages() to
do another loop through all the pages; the core should handle this. But
cleaning up the scope of the hotplug lock is needed as well.

> With that said, I am open to suggestions if you still feel like I need
> to follow this up with some additional work. I just want to avoid
> introducing any regressions with regard to functionality or
> performance.

Could we push the hotplug lock deeper, to the places that actually need
it? What I found in my initial investigation is that we don't even need
the hotplug lock for the vmemmap initialization with this patch [1].

Alternatively, it seems the hotplug lock wants to synchronize changes to
the zone with the page init work. If the hotplug lock were an rwsem, the
zone changes would take the write lock, but the init work could be done
under a read lock to allow parallelism. That is, still provide a sync
point to be able to assert that no hotplug work is in-flight while
holding the write lock, but otherwise allow threads that are touching
independent parts of the memmap to run at the same time.

[1]: https://patchwork.kernel.org/patch/10527229/ (just focus on the
mm/sparse-vmemmap.c changes at the end)