From: Michael Ellerman
To: Mike Rapoport, linux-mm@kvack.org
Cc: Andrew Morton, Catalin Marinas, Christoph Hellwig, "David S. Miller",
Miller" , Dennis Zhou , Geert Uytterhoeven , Greentime Hu , Greg Kroah-Hartman , Guan Xuetao , Guo Ren , Heiko Carstens , Mark Salter , Matt Turner , Max Filippov , Michal Simek , Paul Burton , Petr Mladek , Rich Felker , Richard Weinberger , Rob Herring , Russell King , Stafford Horne , Tony Luck , Vineet Gupta , Yoshinori Sato , devicetree@vger.kernel.org, kasan-dev@googlegroups.com, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org, linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org, linux-usb@vger.kernel.org, linux-xtensa@linux-xtensa.org, linuxppc-dev@lists.ozlabs.org, openrisc@lists.librecores.org, sparclinux@vger.kernel.org, uclinux-h8-devel@lists.sourceforge.jp, x86@kernel.org, xen-devel@lists.xenproject.org, Mike Rapoport Subject: Re: [PATCH v2 10/21] memblock: refactor internal allocation functions In-Reply-To: <1548057848-15136-11-git-send-email-rppt@linux.ibm.com> References: <1548057848-15136-1-git-send-email-rppt@linux.ibm.com> <1548057848-15136-11-git-send-email-rppt@linux.ibm.com> Date: Sun, 03 Feb 2019 20:39:20 +1100 Message-ID: <87ftt5nrcn.fsf@concordia.ellerman.id.au> MIME-Version: 1.0 Content-Type: text/plain Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Mike Rapoport writes: > Currently, memblock has several internal functions with overlapping > functionality. They all call memblock_find_in_range_node() to find free > memory and then reserve the allocated range and mark it with kmemleak. > However, there is difference in the allocation constraints and in fallback > strategies. > > The allocations returning physical address first attempt to find free > memory on the specified node within mirrored memory regions, then retry on > the same node without the requirement for memory mirroring and finally fall > back to all available memory. > > The allocations returning virtual address start with clamping the allowed > range to memblock.current_limit, attempt to allocate from the specified > node from regions with mirroring and with user defined minimal address. If > such allocation fails, next attempt is done with node restriction lifted. > Next, the allocation is retried with minimal address reset to zero and at > last without the requirement for mirrored regions. > > Let's consolidate various fallbacks handling and make them more consistent > for physical and virtual variants. Most of the fallback handling is moved > to memblock_alloc_range_nid() and it now handles node and mirror fallbacks. > > The memblock_alloc_internal() uses memblock_alloc_range_nid() to get a > physical address of the allocated range and converts it to virtual address. > > The fallback for allocation below the specified minimal address remains in > memblock_alloc_internal() because memblock_alloc_range_nid() is used by CMA > with exact requirement for lower bounds. This is causing problems on some of my machines. I see NODE_DATA allocations falling back to node 0 when they shouldn't, or didn't previously. 
This patch is causing problems on some of my machines. I see NODE_DATA
allocations falling back to node 0 when they shouldn't, or didn't
previously.

eg, before:

  57990190: (116011251): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
  58152042: (116373087): numa: NODE_DATA [mem 0x8fff90980-0x8fff97fff]

after:

  16356872061562: (6296877055): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
  16356872079279: (6296894772): numa: NODE_DATA [mem 0xfffcd300-0xfffd497f]
  16356872096376: (6296911869): numa: NODE_DATA(1) on node 0

On some of my other systems it does that, and then panics because it
can't allocate anything at all:

[ 0.000000] numa: NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
[ 0.000000] numa: NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
[ 0.000000] numa: NODE_DATA(1) on node 0
[ 0.000000] Kernel panic - not syncing: Cannot allocate 20864 bytes for node 16 data
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
[ 0.000000] Call Trace:
[ 0.000000] [c0000000011cfca0] [c000000000c11044] dump_stack+0xe8/0x164 (unreliable)
[ 0.000000] [c0000000011cfcf0] [c0000000000fdd6c] panic+0x17c/0x3e0
[ 0.000000] [c0000000011cfd90] [c000000000f61bc8] initmem_init+0x128/0x260
[ 0.000000] [c0000000011cfe60] [c000000000f57940] setup_arch+0x398/0x418
[ 0.000000] [c0000000011cfee0] [c000000000f50a94] start_kernel+0xa0/0x684
[ 0.000000] [c0000000011cff90] [c00000000000af70] start_here_common+0x1c/0x52c
[ 0.000000] Rebooting in 180 seconds..

So there's something going wrong there, I haven't had time to dig into
it though (Sunday night here).

cheers
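
PS: for reference, the messages above come from the powerpc NODE_DATA
setup, which does roughly the following (paraphrased from
arch/powerpc/mm/numa.c from memory, simplified, so don't take the
details as exact):

/* Paraphrase of the powerpc NODE_DATA allocation, heavily simplified. */
static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
{
	const size_t nd_size = roundup(sizeof(pg_data_t), SMP_CACHE_BYTES);
	u64 nd_pa;
	int tnid;

	/* ask memblock for the pg_data_t, preferably on this node */
	nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
	if (!nd_pa)
		panic("Cannot allocate %zu bytes for node %d data\n",
		      nd_size, nid);

	pr_info("  NODE_DATA [mem %#010Lx-%#010Lx]\n",
		nd_pa, nd_pa + nd_size - 1);

	/* this fires when the range we got is not on the node we asked for */
	tnid = early_pfn_to_nid(nd_pa >> PAGE_SHIFT);
	if (tnid != nid)
		pr_info("    NODE_DATA(%d) on node %d\n", nid, tnid);

	node_data[nid] = __va(nd_pa);
	NODE_DATA(nid)->node_id = nid;
	NODE_DATA(nid)->node_start_pfn = start_pfn;
	NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
}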