Date: Tue, 2 Nov 2021 13:39:06 +0100
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH] mm: fix panic in __alloc_pages
To: Michal Hocko
Cc: Alexey Makhalov, linux-mm@kvack.org, Andrew Morton,
    linux-kernel@vger.kernel.org, stable@vger.kernel.org, Oscar Salvador
References: <20211101201312.11589-1-amakhalov@vmware.com>
 <7136c959-63ff-b866-b8e4-f311e0454492@redhat.com>
 <42abfba6-b27e-ca8b-8cdf-883a9398b506@redhat.com>

>> Yes, but a zonelist cannot be correct for an offline node, where we
>> might not even have an allocated pgdat yet. No pgdat, no zonelist. So
>> as soon as we allocate the pgdat and set the node online
>> (->hotadd_new_pgdat()), the zonelists have to be correct. And I can
>> spot a build_all_zonelists() in hotadd_new_pgdat().
> 
> Yes, that is what I had in mind. We are talking about two things here.
> Memoryless nodes and offline nodes. The latter sounds like a bug to me.

Agreed. Memoryless nodes should just have proper zonelists -- which
seems to be the case.

>> Maybe __alloc_pages_bulk() and alloc_pages_node() should bail out
>> directly (VM_BUG()) in case we're providing an offline node with
>> eventually no/stale pgdat as preferred nid.
> 
> Historically, those allocation interfaces were not trying to be robust
> against wrong inputs, because that adds CPU cycles for everybody just
> for "what if buggy" code. This has worked (surprisingly) well.
> Memoryless nodes have brought in some confusion, but this is still
> something that we can address on a higher level. Nobody gives
> arbitrary nodes as an input. cpu_to_node might be tricky because it
> can point to a memoryless node which, along with __GFP_THISNODE, is
> very likely not something anybody wants. Hence cpu_to_mem should be
> used for allocations. I hate that we have two very similar APIs...

To be precise, I'm wondering if we should do:

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 55b2ec1f965a..8c49b88336ee 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -565,7 +565,7 @@ static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
-	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
+	VM_WARN_ON(!node_online(nid));
 
 	return __alloc_pages(gfp_mask, order, nid, NULL);
 }

(Or maybe VM_BUG_ON.) Because it cannot possibly work, and we'll
dereference NULL later.

> But something seems wrong in this case. cpu_to_node shouldn't return
> offline nodes. That is just a land mine. It is not clear to me how the
> cpu has been brought up so that the numa node allocation was left
> behind. As pointed out in another email, add_cpu resp. cpu_up is not
> it. Is it possible that the cpu bringup was only halfway done?

I tried to follow the code (what sets a CPU present, what sets a CPU
online, when do we update the cpu_to_node() mapping), and IMHO it's all
a big mess. Maybe it's clearer to people familiar with that code, but
CPU hotplug in general seems to be a confusing piece of (arch-specific)
code.
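As an aside, for anybody trying to follow along: the mapping itself is
just a per-cpu variable. The following is paraphrased from memory from
include/linux/topology.h -- the exact #ifdef/#ifndef guards and
per-arch overrides vary, so don't quote me on the details:

/*
 * Sketch of include/linux/topology.h, not verbatim. With
 * CONFIG_USE_PERCPU_NUMA_NODE_ID, cpu_to_node() just reads a per-cpu
 * integer that arch code fills in via set_cpu_numa_node().
 */
#ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
DECLARE_PER_CPU(int, numa_node);

static inline int cpu_to_node(int cpu)
{
	return per_cpu(numa_node, cpu);
}

static inline void set_cpu_numa_node(int cpu, int node)
{
	per_cpu(numa_node, cpu) = node;
}
#endif

#ifdef CONFIG_HAVE_MEMORYLESS_NODES
/* _numa_mem_: nearest node that actually has memory. */
DECLARE_PER_CPU(int, _numa_mem_);

static inline int cpu_to_mem(int cpu)
{
	return per_cpu(_numa_mem_, cpu);
}
#else
/* Without memoryless-node support, cpu_to_mem() is just cpu_to_node(). */
static inline int cpu_to_mem(int cpu)
{
	return cpu_to_node(cpu);
}
#endif

So cpu_to_node() simply hands out whatever arch code last stored via
set_cpu_numa_node(), and cpu_to_mem() is the variant that is supposed
to point at a node that actually has memory.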
Also, I have no clue if the cpu_to_node() mapping will get invalidated
after unplugging that CPU, or if the mapping will simply stay around
for all eternity ...

-- 
Thanks,

David / dhildenb