Date: Wed, 14 Nov 2018 11:04:07 +0100
From: Michal Hocko
To: David Hildenbrand
Cc: Baoquan He, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    akpm@linux-foundation.org, aarcange@redhat.com
Subject: Re: Memory hotplug softlock issue
Message-ID: <20181114100407.GL23419@dhcp22.suse.cz>
In-Reply-To: <9bb86c98-e062-b045-7c22-6f037bd56f36@redhat.com>

On Wed 14-11-18 10:48:09, David Hildenbrand wrote:
> On 14.11.18 10:41, Michal Hocko wrote:
> > On Wed 14-11-18 10:25:57, David Hildenbrand wrote:
> >> On 14.11.18 10:00, Baoquan He wrote:
> >>> Hi David,
> >>>
> >>> On 11/14/18 at 09:18am, David Hildenbrand wrote:
> >>>> Code seems to be waiting for the mem_hotplug_lock in read. We hold
> >>>> mem_hotplug_lock in write whenever we online/offline/add/remove
> >>>> memory. There are two ways to trigger offlining of memory:
> >>>>
> >>>> 1. Offlining via "echo offline > /sys/devices/system/memory/memory0/state"
> >>>>
> >>>>    This always properly took the mem_hotplug_lock. Nothing changed.
> >>>>
> >>>> 2. Offlining via "echo 0 > /sys/devices/system/memory/memory0/online"
> >>>>
> >>>>    This didn't take the mem_hotplug_lock, and I fixed that for this
> >>>>    release.
> >>>>
> >>>> So if you were testing with 1., you should have seen the same error
> >>>> before this release (unless something else is now broken in this
> >>>> release).
> >>>
> >>> Thanks a lot for looking into this.
> >>>
> >>> I triggered sysrq+t to check the threads' state. You can see that we
> >>> use firmware to trigger an ACPI event that goes through
> >>> acpi_bus_offline(); it truly didn't take mem_hotplug_lock before, and
> >>> it takes it now with your fix, commit 381eab4a6ee
> >>> ("mm/memory_hotplug: fix online/offline_pages called w.o.
> >>> mem_hotplug_lock"):
> >>>
> >>> [ +0.007062] Workqueue: kacpi_hotplug acpi_hotplug_work_fn
> >>> [ +0.005398] Call Trace:
> >>> [ +0.002476] ? page_vma_mapped_walk+0x307/0x710
> >>> [ +0.004538] ? page_remove_rmap+0xa2/0x340
> >>> [ +0.004104] ? ptep_clear_flush+0x54/0x60
> >>> [ +0.004027] ? enqueue_entity+0x11c/0x620
> >>> [ +0.005904] ? schedule+0x28/0x80
> >>> [ +0.003336] ? rmap_walk_file+0xf9/0x270
> >>> [ +0.003940] ? try_to_unmap+0x9c/0xf0
> >>> [ +0.003695] ? migrate_pages+0x2b0/0xb90
> >>> [ +0.003959] ? try_offline_node+0x160/0x160
> >>> [ +0.004214] ? __offline_pages+0x6ce/0x8e0
> >>> [ +0.004134] ? memory_subsys_offline+0x40/0x60
> >>> [ +0.004474] ? device_offline+0x81/0xb0
> >>> [ +0.003867] ? acpi_bus_offline+0xdb/0x140
> >>> [ +0.004117] ? acpi_device_hotplug+0x21c/0x460
> >>> [ +0.004458] ? acpi_hotplug_work_fn+0x1a/0x30
> >>> [ +0.004372] ? process_one_work+0x1a1/0x3a0
> >>> [ +0.004195] ? worker_thread+0x30/0x380
> >>> [ +0.003851] ? drain_workqueue+0x120/0x120
> >>> [ +0.004117] ? kthread+0x112/0x130
> >>> [ +0.003411] ? kthread_park+0x80/0x80
> >>> [ +0.005325] ? ret_from_fork+0x35/0x40
> >>>
> >>
> >> Yes, this is indeed another code path that was fixed (and I didn't
> >> actually realize it ;) ). Thanks for the call chain. Before my fix,
> >> hotplug would still never have succeeded (offline_pages() would have
> >> silently looped forever), as far as I can tell.
> >
> > I haven't studied your patch yet, so I am not really sure why you have
> > added the lock in this path. The memory hotplug locking is certainly
> > far from great, but I believe we should really rethink the scope of
> > the lock. There shouldn't be any fundamental reason to use the global
> > lock for the full offlining, so rather than moving the lock from one
> > place to another we need range locking, I believe.
>
> See the patches for details; the lock was removed on this path by
> mistake, not by design.

OK, so I guess we should plug that hole first.
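
For reference, this is roughly the locking scheme the fix restores (a
simplified sketch, not the literal kernel code: sketch_*() and
do_the_actual_offlining() are placeholder names, while
mem_hotplug_begin()/mem_hotplug_done() and
get_online_mems()/put_online_mems() are the real interfaces):

#include <linux/memory_hotplug.h>

/*
 * Writer side: every path that onlines/offlines memory -- whether it
 * comes from the sysfs "state" file, the "online" file, or the ACPI
 * hotplug worker above -- must funnel through the global lock in
 * write mode.
 */
static int sketch_offline_pages(unsigned long start_pfn,
				unsigned long nr_pages)
{
	int ret;

	mem_hotplug_begin();	/* mem_hotplug_lock taken for writing */
	ret = do_the_actual_offlining(start_pfn, nr_pages); /* placeholder */
	mem_hotplug_done();	/* write side released */
	return ret;
}

/*
 * Reader side: code that must not race with online/offline pins the
 * whole hotplug state with the read side of the same lock.
 */
static void sketch_walk_memory(void)
{
	get_online_mems();	/* blocks concurrent online/offline */
	/* ... inspect memory that must not be offlined under us ... */
	put_online_mems();
}
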
> Replacing the lock with some range lock can now be done. The tricky
> part will be get_online_mems(): we'll have to indicate a range somehow.
> For online/offline/add/remove, we have the range.

I would argue that get_online_mems() needs some rethinking. Callers
shouldn't really care that a node went offline. If they care about the
specific pfn range of the node not going away, then the range locking
should work just fine for them.
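
To make the direction concrete, a range-based replacement could look
something like the sketch below. This is an entirely hypothetical
interface: mem_range_lock()/mem_range_unlock(), MEM_RANGE_READ/WRITE
and struct mem_range_lock are invented names purely for illustration,
and no such API exists in the tree.

/*
 * Hypothetical sketch only. Writers lock just the pfn range they
 * operate on, and readers that know which range they care about lock
 * that range in shared mode instead of taking the global lock via
 * get_online_mems().
 */
struct mem_range_lock;	/* opaque; could be backed by an interval tree */

enum mem_range_mode { MEM_RANGE_READ, MEM_RANGE_WRITE };

struct mem_range_lock *mem_range_lock(unsigned long start_pfn,
				      unsigned long nr_pages,
				      enum mem_range_mode mode);
void mem_range_unlock(struct mem_range_lock *rl);

/* Writer: offlining only excludes users of the overlapping pfn range. */
static int sketch_offline_range(unsigned long start_pfn,
				unsigned long nr_pages)
{
	struct mem_range_lock *rl;
	int ret;

	rl = mem_range_lock(start_pfn, nr_pages, MEM_RANGE_WRITE);
	ret = do_the_actual_offlining(start_pfn, nr_pages); /* placeholder */
	mem_range_unlock(rl);
	return ret;
}

/*
 * Reader: a caller that today uses get_online_mems() but really only
 * cares about one pfn range would pin just that range instead.
 */
static void sketch_pin_range(unsigned long start_pfn,
			     unsigned long nr_pages)
{
	struct mem_range_lock *rl;

	rl = mem_range_lock(start_pfn, nr_pages, MEM_RANGE_READ);
	/* ... the range cannot be offlined while rl is held ... */
	mem_range_unlock(rl);
}

--
Michal Hocko
SUSE Labs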