Subject: Re: Memory hotplug softlock issue
From: David Hildenbrand <david@redhat.com>
To: Baoquan He
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, mhocko@suse.com, akpm@linux-foundation.org, aarcange@redhat.com
Date: Wed, 14 Nov 2018 10:25:57 +0100
Message-ID: <8c03f925-8ca4-688c-569a-a7a449612782@redhat.com>
In-Reply-To: <20181114090042.GD2653@MiWiFi-R3L-srv>
References: <20181114070909.GB2653@MiWiFi-R3L-srv> <5a6c6d6b-ebcd-8bfa-d6e0-4312bfe86586@redhat.com> <20181114090042.GD2653@MiWiFi-R3L-srv>

On 14.11.18 10:00, Baoquan He wrote:
> Hi David,
>
> On 11/14/18 at 09:18am, David Hildenbrand wrote:
>> Code seems to be waiting for the mem_hotplug_lock in read.
>> We hold mem_hotplug_lock in write whenever we online/offline/add/remove
>> memory. There are two ways to trigger offlining of memory:
>>
>> 1. Offlining via "echo offline > /sys/devices/system/memory/memory0/state"
>>
>>    This always properly took the mem_hotplug_lock. Nothing changed.
>>
>> 2. Offlining via "echo 0 > /sys/devices/system/memory/memory0/online"
>>
>>    This didn't take the mem_hotplug_lock, and I fixed that for this
>>    release.
>>
>> So if you were testing with 1., you should have seen the same error
>> before this release (unless something else is now broken in this
>> release).
>
> Thanks a lot for looking into this.
>
> I triggered sysrq+t to check the threads' state. You can see that we use
> firmware to trigger an ACPI event to go into acpi_bus_offline(); it truly
> didn't take mem_hotplug_lock, and only takes it now with your fix in
> commit 381eab4a6ee ("mm/memory_hotplug: fix online/offline_pages called
> w.o. mem_hotplug_lock"):
>
> [ +0.007062] Workqueue: kacpi_hotplug acpi_hotplug_work_fn
> [ +0.005398] Call Trace:
> [ +0.002476] ? page_vma_mapped_walk+0x307/0x710
> [ +0.004538] ? page_remove_rmap+0xa2/0x340
> [ +0.004104] ? ptep_clear_flush+0x54/0x60
> [ +0.004027] ? enqueue_entity+0x11c/0x620
> [ +0.005904] ? schedule+0x28/0x80
> [ +0.003336] ? rmap_walk_file+0xf9/0x270
> [ +0.003940] ? try_to_unmap+0x9c/0xf0
> [ +0.003695] ? migrate_pages+0x2b0/0xb90
> [ +0.003959] ? try_offline_node+0x160/0x160
> [ +0.004214] ? __offline_pages+0x6ce/0x8e0
> [ +0.004134] ? memory_subsys_offline+0x40/0x60
> [ +0.004474] ? device_offline+0x81/0xb0
> [ +0.003867] ? acpi_bus_offline+0xdb/0x140
> [ +0.004117] ? acpi_device_hotplug+0x21c/0x460
> [ +0.004458] ? acpi_hotplug_work_fn+0x1a/0x30
> [ +0.004372] ? process_one_work+0x1a1/0x3a0
> [ +0.004195] ? worker_thread+0x30/0x380
> [ +0.003851] ? drain_workqueue+0x120/0x120
> [ +0.004117] ? kthread+0x112/0x130
> [ +0.003411] ? kthread_park+0x80/0x80
> [ +0.005325] ? ret_from_fork+0x35/0x40
>

Yes, this is indeed another code path that was fixed (and I didn't
actually realize it ;) ). Thanks for the callchain. Before my fix,
hotplug still would never have succeeded here (offline_pages would have
silently looped forever), as far as I can tell.
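To make the softlock itself concrete, here is a minimal self-contained
userspace analogy using pthreads. The whole setup (the offliner/reader
thread names, the sleeps) is invented for illustration and is not the
kernel code; it only models an offliner holding mem_hotplug_lock in
write while retrying forever, so that everything needing the lock in
read blocks, which is what the stuck tasks in a sysrq+t dump look like:

/* Userspace analogy of the reported softlock (illustrative only). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mem_hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Models __offline_pages(): takes the lock in write and never finishes
 * because "migration" keeps failing and is retried without bound. */
static void *offliner(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&mem_hotplug_lock);	/* mem_hotplug_begin() */
	for (;;)
		sleep(1);				/* endless retry loop */
	/* never reached, lock never released: mem_hotplug_done() */
}

/* Models any other task that needs the lock in read. */
static void *reader(void *arg)
{
	(void)arg;
	fprintf(stderr, "reader: waiting for mem_hotplug_lock in read...\n");
	pthread_rwlock_rdlock(&mem_hotplug_lock);	/* blocks forever */
	fprintf(stderr, "reader: never printed\n");
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, offliner, NULL);
	sleep(1);			/* let the writer grab the lock first */
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(r, NULL);		/* hangs, like the reported softlock */
	return 0;
}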
>
>>
>> The real question is, however, why offlining of the last block doesn't
>> succeed. In __offline_pages() we basically have an endless loop (while
>> holding the mem_hotplug_lock in write). Now I consider this piece of
>> code very problematic (we should automatically fail after X
>> attempts/after X seconds, we should not ignore -ENOMEM), and we've had
>> other BUGs whereby we would run into an endless loop here (e.g. related
>> to hugepages, I guess).
>
> Hmm, even though memory hotplug stalled there, there is still plenty of
> memory. E.g. this system has 8 nodes and each node has 64 GB of memory,
> so 512 GB in all. Now I run "stress -m 200" to spawn 200 processes that
> continuously malloc and then free 256 MB each, eating about 50 GB in
> all. In theory, there is still plenty of memory to migrate to.

Maybe a NUMA issue? But I am just making wild guesses. Maybe it is not
-ENOMEM but just some other migration condition that is not properly
handled (see Michal's reply).

>
>>
>> You mentioned memory pressure: if our host is under memory pressure we
>> can easily run into an endless loop there, because we basically ignore
>> -ENOMEM, e.g. when we cannot get a page to migrate some memory to be
>> offlined. I assume this is the case here. do_migrate_range() could be
>> the bad boy if it keeps failing forever and we keep retrying.
>
> Not sure what other people think about this. If memory removal fails
> while there is still plenty of free memory left, I worry customers will
> complain.

Indeed, we have to look into this.

> Yeah, it stopped at do_migrate_range() when trying to migrate the last
> memory block. And each time it's the last memory block that can't be
> offlined and hangs.

It would be interesting to see which error message we keep getting.

> If any message or information is needed, I can provide it.
>
> Thanks
> Baoquan
>

-- 

Thanks,

David / dhildenb
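P.S.: To spell out the "automatically fail after X attempts/after X
seconds, don't ignore -ENOMEM" idea in compilable form, here is a
minimal self-contained sketch. MAX_RETRIES, TIMEOUT_SECONDS and
try_migrate() are invented for illustration; this is a model of the
suggested behavior, not the actual __offline_pages() code:

/* Bounded-retry model: a persistent failure becomes an error code
 * instead of an endless loop under mem_hotplug_lock. */
#include <errno.h>
#include <stdio.h>
#include <time.h>

#define MAX_RETRIES	128	/* "fail after X attempts" */
#define TIMEOUT_SECONDS	120	/* "... or after X seconds" */

/* Stand-in for do_migrate_range(): a few transient failures, then a
 * hard allocation failure. Returns 0 on success or a negative errno. */
static int try_migrate(int attempt)
{
	return attempt < 5 ? -EAGAIN : -ENOMEM;
}

static int offline_pages_bounded(void)
{
	time_t deadline = time(NULL) + TIMEOUT_SECONDS;
	int attempt, ret;

	for (attempt = 0; attempt < MAX_RETRIES; attempt++) {
		ret = try_migrate(attempt);
		if (!ret)
			return 0;	/* everything migrated, offlining can finish */
		if (ret == -ENOMEM)
			return ret;	/* hopeless: report it instead of retrying */
		if (time(NULL) >= deadline)
			return -EBUSY;	/* bounded in time */
	}
	return -EBUSY;			/* bounded in attempts */
}

int main(void)
{
	/* prints "offline: -12" (-ENOMEM on Linux) instead of hanging */
	printf("offline: %d\n", offline_pages_bounded());
	return 0;
}

The point is only that the caller (the sysfs write or the ACPI path)
gets a real error back and can decide what to do, instead of the whole
machine wedging on mem_hotplug_lock.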