From: David Rientjes
Date: Fri, 24 Jul 2015 16:06:08 -0700 (PDT)
To: Vlastimil Babka
Cc: Christoph Lameter, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton, Mel Gorman, Greg Thelen, "Aneesh Kumar K.V",
	Pekka Enberg, Joonsoo Kim, Naoya Horiguchi
Subject: Re: [RFC v2 4/4] mm: fallback for offline nodes in alloc_pages_node
In-Reply-To: <55B2A292.7080503@suse.cz>
References: <1437749126-25867-1-git-send-email-vbabka@suse.cz>
 <1437749126-25867-4-git-send-email-vbabka@suse.cz>
 <55B2A292.7080503@suse.cz>

On Fri, 24 Jul 2015, Vlastimil Babka wrote:

> >>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> >>> index 531c72d..104a027 100644
> >>> --- a/include/linux/gfp.h
> >>> +++ b/include/linux/gfp.h
> >>> @@ -321,8 +321,12 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
> >>>  						unsigned int order)
> >>>  {
> >>>  	/* Unknown node is current (or closest) node */
> >>> -	if (nid == NUMA_NO_NODE)
> >>> +	if (nid == NUMA_NO_NODE) {
> >>>  		nid = numa_mem_id();
> >>> +	} else if (!node_online(nid)) {
> >>> +		VM_WARN_ON(!node_online(nid));
> >>> +		nid = numa_mem_id();
> >>> +	}
> >>
> >> I would think you would only want this for debugging purposes.
> >> The overwhelming majority of hardware out there has no memory
> >> onlining/offlining capability after all, and this adds the overhead to
> >> each call to alloc_pages_node.
> >>
> >> Make this depend on CONFIG_VM_DEBUG or some such thing?
> >>
> >
> > Yeah, the suggestion was for VM_WARN_ON() in the conditional, but the
> > placement has changed somewhat because of the new __alloc_pages_node().
> > I think
> >
> > 	else if (VM_WARN_ON(!node_online(nid)))
> > 		nid = numa_mem_id();
> >
> > should be fine since it only triggers for CONFIG_DEBUG_VM.
>
> Um, so on your original suggestion I thought that you assumed that the
> condition inside VM_WARN_ON is evaluated regardless of CONFIG_DEBUG_VM,
> and it just will or will not generate a warning.  Which is how BUG_ON
> works, but VM_WARN_ON (and VM_BUG_ON) doesn't.  IIUC, VM_WARN_ON() with
> !CONFIG_DEBUG_VM will always be false.

Right, that's what Christoph is also suggesting.  VM_WARN_ON without
CONFIG_DEBUG_VM should permit the compiler to check the expression but not
generate any code, and we don't want to check node_online() here for every
allocation; it's only a debugging measure.

> Because I didn't think you would suggest that the "nid = numa_mem_id()"
> fixup for !node_online(nid) would happen only for CONFIG_DEBUG_VM
> kernels.  But it seems that you do suggest that?  I would understand if
> the fixup (correcting an offline node to one that's online) was done
> regardless of DEBUG_VM, and DEBUG_VM just switched between silent and
> noisy fixup.  But having a debug option alter the outcome seems wrong?

Hmm, not sure why this is surprising: I don't expect people to deploy
production kernels with CONFIG_DEBUG_VM enabled, it's far too expensive.
I was expecting they would enable it for, well... debug :)  In that case,
if nid is a valid node but offline, then the nid = numa_mem_id() fixup
seems fine to allow the kernel to continue debugging.
When a node is offlined as a result of memory hotplug, the pgdat doesn't
get freed, so it can be onlined later.  Thus, alloc_pages_node() with an
offline node and !CONFIG_DEBUG_VM may not panic.  If it does, this can
probably be removed because we're covered.