Date: Wed, 14 Nov 2018 10:41:04 +0100
From: Michal Hocko
To: David Hildenbrand
Cc: Baoquan He, linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org, aarcange@redhat.com
Subject: Re: Memory hotplug softlock issue
Message-ID: <20181114094104.GJ23419@dhcp22.suse.cz>
References: <20181114070909.GB2653@MiWiFi-R3L-srv> <5a6c6d6b-ebcd-8bfa-d6e0-4312bfe86586@redhat.com> <20181114090042.GD2653@MiWiFi-R3L-srv> <8c03f925-8ca4-688c-569a-a7a449612782@redhat.com>
In-Reply-To: <8c03f925-8ca4-688c-569a-a7a449612782@redhat.com>

On Wed 14-11-18 10:25:57, David Hildenbrand wrote:
> On 14.11.18 10:00, Baoquan He wrote:
> > Hi David,
> >
> > On 11/14/18 at 09:18am, David Hildenbrand wrote:
> >> Code seems to be waiting for the mem_hotplug_lock in read.
> >> We hold mem_hotplug_lock in write whenever we online/offline/add/remove
> >> memory. There are two ways to trigger offlining of memory:
> >>
> >> 1. Offlining via "echo offline > /sys/devices/system/memory/memory0/state"
> >>
> >> This always properly took the mem_hotplug_lock. Nothing changed.
> >>
> >> 2. Offlining via "echo 0 > /sys/devices/system/memory/memory0/online"
> >>
> >> This didn't take the mem_hotplug_lock and I fixed that for this release.
> >>
> >> So if you were testing with 1., you should have seen the same error
> >> before this release (unless there is something else now broken in this
> >> release).
> >
> > Thanks a lot for looking into this.
> >
> > I triggered sysrq+t to check the threads' states. You can see that we use
> > firmware to trigger an ACPI event that ends up in acpi_bus_offline(). That
> > path truly didn't take mem_hotplug_lock before, and it does take it with
> > your fix in commit 381eab4a6ee ("mm/memory_hotplug: fix
> > online/offline_pages called w.o. mem_hotplug_lock"):
> >
> > [ +0.007062] Workqueue: kacpi_hotplug acpi_hotplug_work_fn
> > [ +0.005398] Call Trace:
> > [ +0.002476]  ? page_vma_mapped_walk+0x307/0x710
> > [ +0.004538]  ? page_remove_rmap+0xa2/0x340
> > [ +0.004104]  ? ptep_clear_flush+0x54/0x60
> > [ +0.004027]  ? enqueue_entity+0x11c/0x620
> > [ +0.005904]  ? schedule+0x28/0x80
> > [ +0.003336]  ? rmap_walk_file+0xf9/0x270
> > [ +0.003940]  ? try_to_unmap+0x9c/0xf0
> > [ +0.003695]  ? migrate_pages+0x2b0/0xb90
> > [ +0.003959]  ? try_offline_node+0x160/0x160
> > [ +0.004214]  ? __offline_pages+0x6ce/0x8e0
> > [ +0.004134]  ? memory_subsys_offline+0x40/0x60
> > [ +0.004474]  ? device_offline+0x81/0xb0
> > [ +0.003867]  ? acpi_bus_offline+0xdb/0x140
> > [ +0.004117]  ? acpi_device_hotplug+0x21c/0x460
> > [ +0.004458]  ? acpi_hotplug_work_fn+0x1a/0x30
> > [ +0.004372]  ? process_one_work+0x1a1/0x3a0
> > [ +0.004195]  ? worker_thread+0x30/0x380
> > [ +0.003851]  ? drain_workqueue+0x120/0x120
> > [ +0.004117]  ? kthread+0x112/0x130
> > [ +0.003411]  ? kthread_park+0x80/0x80
> > [ +0.005325]  ? ret_from_fork+0x35/0x40
>
> Yes, this is indeed another code path that was fixed (and I didn't
> actually realize it ;) ). Thanks for the callchain. Before my fix
> hotplug still would have never succeeded (offline_pages would have
> silently looped forever) as far as I can tell.

I haven't studied your patch yet so I am not really sure why you have
added the lock into this path. The memory hotplug locking is certainly
far from great, but I believe we should really rethink the scope of the
lock. There shouldn't be any fundamental reason to use the global lock
for the full offlining. So rather than moving the lock from one place to
another, we need range locking, I believe.
-- 
Michal Hocko
SUSE Labs