Date: Thu, 9 Apr 2020 22:41:19 +0800
From: Baoquan He
To: Mike Rapoport, Michal Hocko
Cc: Hoan Tran, Catalin Marinas, Will Deacon, Andrew Morton,
    Vlastimil Babka, Oscar Salvador, Pavel Tatashin, Alexander Duyck,
    Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Miller" , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , "open list:MEMORY MANAGEMENT" , linux-arm-kernel@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, lho@amperecomputing.com, mmorana@amperecomputing.com Subject: Re: [PATCH RFC] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP (was: Re: [PATCH v3 0/5] mm: Enable CONFIG_NODES_SPAN_OTHER_NODES by default for NUMA) Message-ID: <20200409144119.GE2129@MiWiFi-R3L-srv> References: <1585420282-25630-1-git-send-email-Hoan@os.amperecomputing.com> <20200330074246.GA14243@dhcp22.suse.cz> <20200330092127.GB30942@linux.ibm.com> <20200330095843.GF14243@dhcp22.suse.cz> <20200331215618.GG30942@linux.ibm.com> <20200401054227.GC2129@MiWiFi-R3L-srv> <20200401075155.GH30942@linux.ibm.com> <20200402080144.GK22681@dhcp22.suse.cz> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20200402080144.GK22681@dhcp22.suse.cz> User-Agent: Mutt/1.10.1 (2018-07-13) X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 04/02/20 at 10:01am, Michal Hocko wrote: > On Wed 01-04-20 10:51:55, Mike Rapoport wrote: > > Hi, > > > > On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote: > [...] > > > From above information, we can remove HAVE_MEMBLOCK_NODE_MAP, and > > > replace it with CONFIG_NUMA. That sounds more sensible to store nid into > > > memblock when NUMA support is enabled. > > > > Replacing CONFIG_HAVE_MEMBLOCK_NODE_MAP with CONFIG_NUMA will work, but > > this will not help cleaning up the whole node/zone initialization mess and > > we'll be stuck with two implementations. > > Yeah, this is far from optimal. > > > The overhead of enabling HAVE_MEMBLOCK_NODE_MAP is only for init time as > > most architectures will anyway discard the entire memblock, so having it in > > a UMA arch won't be a problem. The only exception is arm that uses > > memblock for pfn_valid(), here we may also think about a solution to > > compensate the addition of nid to the memblock structures. > > Well, we can make memblock_region->nid defined only for CONFIG_NUMA. > memblock_get_region_node would then unconditionally return 0 on UMA. > Essentially the same way we do NUMA for other MM code. I only see few > direct usage of region->nid. Checked code again, seems HAVE_MEMBLOCK_NODE_MAP is selected directly in all ARCHes which support it. Means HAVE_MEMBLOCK_NODE_MAP is enabled by default on those ARCHes, and has no dependency on CONFIG_NUMA at all. E.g on x86, it just calls free_area_init_nodes() in generic code path, while free_area_init_nodes() is defined in CONFIG_HAVE_MEMBLOCK_NODE_MAP ifdeffery scope. So I tend to agree with Mike to remove HAVE_MEMBLOCK_NODE_MAP firstly on all ARCHes. We can check if it's worth only defining memblock_region->nid for CONFIG_NUMA case after HAVE_MEMBLOCK_NODE_MAP is removed. config X86 def_bool y ... select HAVE_MEMBLOCK_NODE_MAP ...