Date: Wed, 18 Aug 2021 15:30:42 +0900
From: Naoya Horiguchi
To: Yang Shi
Cc: osalvador@suse.de, tdmackey@twitter.com, akpm@linux-foundation.org,
    corbet@lwn.net, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Naoya Horiguchi
Subject: Re: [PATCH 1/2] mm: hwpoison: don't drop slab caches for offlining non-LRU page
Message-ID: <20210818063042.GA2310427@u2004>
References: <20210816180909.3603-1-shy828301@gmail.com>
In-Reply-To: <20210816180909.3603-1-shy828301@gmail.com>

On Mon, Aug 16, 2021 at 11:09:08AM -0700, Yang Shi wrote:
> In the current implementation of soft offline, if a non-LRU page is
> encountered, all slab caches are dropped in an attempt to free the page
> so it can be offlined. But if the page is not a slab page, all that
> effort is wasted. Even if it is a slab page, there is no guarantee the
> page can be freed at all.
>
> The side effect and cost, however, are quite high. It not only drops
> the slab caches but may also drop a significant amount of page cache
> associated with the inode caches. It can throw away most of the
> workingset just to offline a single page, and the offline is not even
> guaranteed to succeed; the success rate for real-life workloads is
> doubtful.
>
> Worse still, the system may lock up and become unusable, since
> releasing the page cache can queue a huge amount of work for memcg
> release.
>
> We actually ran into such an unpleasant case in our production
> environment. First, the workqueue running memory_failure_work_func
> locked up as below:
>
> BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 53s!
> Showing busy workqueues and worker pools:
> workqueue events: flags=0x0
>   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=14/256 refcnt=15
>     in-flight: 409271:memory_failure_work_func
>     pending: kfree_rcu_work, kfree_rcu_monitor, kfree_rcu_work, rht_deferred_worker, rht_deferred_worker, rht_deferred_worker, rht_deferred_worker, kfree_rcu_work, kfree_rcu_work, kfree_rcu_work, kfree_rcu_work, drain_local_stock, kfree_rcu_work
> workqueue mm_percpu_wq: flags=0x8
>   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
>     pending: vmstat_update
> workqueue cgroup_destroy: flags=0x0
>   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=12072
>     pending: css_release_work_fn
>
> There were over 12K css_release_work_fn work items queued, and this
> caused a few lockups due to contention on the worker pool lock with
> IRQs disabled, for example:
>
> NMI watchdog: Watchdog detected hard LOCKUP on cpu 1
> Modules linked in: amd64_edac_mod edac_mce_amd crct10dif_pclmul crc32_pclmul ghash_clmulni_intel xt_DSCP iptable_mangle kvm_amd bpfilter vfat fat acpi_ipmi i2c_piix4 usb_storage ipmi_si k10temp i2c_core ipmi_devintf ipmi_msghandler acpi_cpufreq sch_fq_codel xfs libcrc32c crc32c_intel mlx5_core mlxfw nvme xhci_pci ptp nvme_core pps_core xhci_hcd
> CPU: 1 PID: 205500 Comm: kworker/1:0 Tainted: G L 5.10.32-t1.el7.twitter.x86_64 #1
> Hardware name: TYAN F5AMT /z /S8026GM2NRE-CGN, BIOS V8.030 03/30/2021
> Workqueue: events memory_failure_work_func
> RIP: 0010:queued_spin_lock_slowpath+0x41/0x1a0
> Code: 41 f0 0f ba 2f 08 0f 92 c0 0f b6 c0 c1 e0 08 89 c2 8b 07 30 e4 09 d0 a9 00 01 ff ff 75 1b 85 c0 74 0e 8b 07 84 c0 74 08 f3 90 <8b> 07 84 c0 75 f8 b8 01 00 00 00 66 89 07 c3 f6 c4 01 75 04 c6 47
> RSP: 0018:ffff9b2ac278f900 EFLAGS: 00000002
> RAX: 0000000000480101 RBX: ffff8ce98ce71800 RCX: 0000000000000084
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff8ce98ce6a140
> RBP: 00000000000284c8 R08: ffffd7248dcb6808 R09: 0000000000000000
> R10: 0000000000000003 R11: ffff9b2ac278f9b0 R12: 0000000000000001
> R13: ffff8cb44dab9c00 R14: ffffffffbd1ce6a0 R15: ffff8cacaa37f068
> FS:  0000000000000000(0000) GS:ffff8ce98ce40000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007fcf6e8cb000 CR3: 0000000a0c60a000 CR4: 0000000000350ee0
> Call Trace:
>  __queue_work+0xd6/0x3c0
>  queue_work_on+0x1c/0x30
>  uncharge_batch+0x10e/0x110
>  mem_cgroup_uncharge_list+0x6d/0x80
>  release_pages+0x37f/0x3f0
>  __pagevec_release+0x1c/0x50
>  __invalidate_mapping_pages+0x348/0x380
>  ? xfs_alloc_buftarg+0xa4/0x120 [xfs]
>  inode_lru_isolate+0x10a/0x160
>  ? iput+0x1d0/0x1d0
>  __list_lru_walk_one+0x7b/0x170
>  ? iput+0x1d0/0x1d0
>  list_lru_walk_one+0x4a/0x60
>  prune_icache_sb+0x37/0x50
>  super_cache_scan+0x123/0x1a0
>  do_shrink_slab+0x10c/0x2c0
>  shrink_slab+0x1f1/0x290
>  drop_slab_node+0x4d/0x70
>  soft_offline_page+0x1ac/0x5b0
>  ? dev_mce_log+0xee/0x110
>  ? notifier_call_chain+0x39/0x90
>  memory_failure_work_func+0x6a/0x90
>  process_one_work+0x19e/0x340
>  ? process_one_work+0x340/0x340
>  worker_thread+0x30/0x360
>  ? process_one_work+0x340/0x340
>  kthread+0x116/0x130
>
> The lockup made the machine quite unusable, and it also threw away most
> of the workingset: the reclaimable slab caches shrank from 12G to
> 300MB, and the page cache dropped from 17G to 4G.
>
> The most disappointing part is that all this effort did not even
> offline the page; it just returned:
>
> soft_offline: 0x1469f2: unknown non LRU page type 5ffff0000000000 ()
>
> The aggressive behavior for non-LRU pages does not pay off, so it does
> not make much sense to keep it, considering the terrible side effects.
>
> Reported-by: David Mackey
> Cc: Naoya Horiguchi
> Cc: Oscar Salvador
> Signed-off-by: Yang Shi

Thank you. I agree with the idea of dropping drop_slab_node() in
shake_page(), hoping that a range-based slab shrinker will be
implemented in the future.
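As a side note, the shake-and-retry pattern in get_any_page() that this series touches can be modeled with a small userspace toy sketch. All names and fields below (`try_pin`, `transient_refs`, the stubbed `shake_page`) are simplified stand-ins for illustration, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct page: counts how many pin
 * attempts fail before the page's transient users have drained. */
struct page {
	int transient_refs;
};

/* Pretend pin attempt: succeeds once transient users are gone. */
static bool try_pin(struct page *p)
{
	if (p->transient_refs > 0) {
		p->transient_refs--;
		return false;
	}
	return true;
}

/* With this patch, shake_page() no longer drops slab caches for
 * non-LRU pages; modeled here as a no-op "wait and drain" step. */
static void shake_page(struct page *p)
{
	(void)p;
}

/* Simplified shape of the get_any_page() retry loop: up to three
 * shake-and-retry passes, then give up.
 * Returns 1 if the page was pinned, 0 otherwise. */
static int get_any_page(struct page *p)
{
	int pass = 0;
try_again:
	if (try_pin(p))
		return 1;
	if (pass++ < 3) {
		shake_page(p);
		goto try_again;
	}
	return 0;
}
```

In this model, a page whose transient users drain within the retry budget is eventually pinned, while a persistently busy page is given up on after the passes are exhausted, which is the path that previously triggered the costly drop_slab_node() fallback for non-LRU pages.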
This patch conflicts with the patch

  https://lore.kernel.org/linux-mm/20210817053703.2267588-1-naoya.horiguchi@linux.dev/T/#u

which adds another shake_page() call, so could you add the following
hunk to your patch?

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 64f8ac969544..7dd2ca665866 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1198,7 +1198,7 @@ static int get_any_page(struct page *p, unsigned long flags)
 		 * page, retry.
 		 */
 		if (pass++ < 3) {
-			shake_page(p, 1);
+			shake_page(p);
 			goto try_again;
 		}
 		goto out;

Thanks,
Naoya Horiguchi