Date: Thu, 13 Dec 2018 13:41:47 +0300
From: "Kirill A. Shutemov"
To: Michal Hocko
Cc: Andrew Morton, Liu Bo, Jan Kara, Dave Chinner, Theodore Ts'o,
	Johannes Weiner, Vladimir Davydov, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, LKML, Shakeel Butt, Michal Hocko,
	Stable tree
Subject: Re: [PATCH v3] mm, memcg: fix reclaim deadlock with writeback
Message-ID: <20181213104147.ud2lngxn5avri2zm@kshutemo-mobl1>
References: <20181212155055.1269-1-mhocko@kernel.org>
 <20181213092221.27270-1-mhocko@kernel.org>
In-Reply-To: <20181213092221.27270-1-mhocko@kernel.org>
User-Agent: NeoMutt/20180716

On Thu, Dec 13, 2018 at 10:22:21AM +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> Liu Bo has experienced a deadlock between memcg (legacy) reclaim and
> ext4 writeback:
>
> task1:
> [] wait_on_page_bit+0x82/0xa0
> [] shrink_page_list+0x907/0x960
> [] shrink_inactive_list+0x2c7/0x680
> [] shrink_node_memcg+0x404/0x830
> [] shrink_node+0xd8/0x300
> [] do_try_to_free_pages+0x10d/0x330
> [] try_to_free_mem_cgroup_pages+0xd5/0x1b0
> [] try_charge+0x14d/0x720
> [] memcg_kmem_charge_memcg+0x3c/0xa0
> [] memcg_kmem_charge+0x7e/0xd0
> [] __alloc_pages_nodemask+0x178/0x260
> [] alloc_pages_current+0x95/0x140
> [] pte_alloc_one+0x17/0x40
> [] __pte_alloc+0x1e/0x110
> [] alloc_set_pte+0x5fe/0xc20
> [] do_fault+0x103/0x970
> [] handle_mm_fault+0x61e/0xd10
> [] __do_page_fault+0x252/0x4d0
> [] do_page_fault+0x30/0x80
> [] page_fault+0x28/0x30
> [] 0xffffffffffffffff
>
> task2:
> [] __lock_page+0x86/0xa0
> [] mpage_prepare_extent_to_map+0x2e7/0x310 [ext4]
> [] ext4_writepages+0x479/0xd60
> [] do_writepages+0x1e/0x30
> [] __writeback_single_inode+0x45/0x320
> [] writeback_sb_inodes+0x272/0x600
> [] __writeback_inodes_wb+0x92/0xc0
> [] wb_writeback+0x268/0x300
> [] wb_workfn+0xb4/0x390
> [] process_one_work+0x189/0x420
> [] worker_thread+0x4e/0x4b0
> [] kthread+0xe6/0x100
> [] ret_from_fork+0x41/0x50
> [] 0xffffffffffffffff
>
> He adds:
> : task1 is waiting for the PageWriteback bit of the page that task2 has
> : collected in mpd->io_submit->io_bio, and task2 is waiting for the LOCKED
> : bit of the page which task1 has locked.
>
> More precisely, task1 is handling a page fault and it has a page locked
> while it charges a new page table to a memcg. That in turn hits a memory
> limit reclaim, and the memcg reclaim for the legacy controller is waiting
> on the writeback, but that is never going to finish because the writeback
> itself is waiting for the page locked in the #PF path.
> So this is
> essentially an ABBA deadlock:
>
> lock_page(A)
> SetPageWriteback(A)
> unlock_page(A)
>                         lock_page(B)
> lock_page(B)
>                         pte_alloc_pne
>                           shrink_page_list
>                             wait_on_page_writeback(A)
> SetPageWriteback(B)
> unlock_page(B)
>
> # flush A, B to clear the writeback
>
> Accumulating more pages to flush like this is used by several
> filesystems to generate more optimal IO patterns.
>
> Waiting for the writeback in the legacy memcg controller is a workaround
> for premature OOM killer invocations because there is no dirty IO
> throttling available for the controller. There is no easy way around
> that unfortunately. Therefore fix this specific issue by pre-allocating
> the page table outside of the page lock. We already have handy
> infrastructure for that, so simply reuse the fault-around pattern, which
> already does this.
>
> There are probably other hidden __GFP_ACCOUNT | GFP_KERNEL allocations
> from under a locked fs page, but they should be really rare. I am not
> aware of a better solution unfortunately.
>
> Reported-and-Debugged-by: Liu Bo
> Cc: stable
> Fixes: c3b94f44fcb0 ("memcg: further prevent OOM with too many dirty pages")
> Signed-off-by: Michal Hocko

Acked-by: Kirill A. Shutemov

Will you take care of converting vmf_insert_* to use the pre-allocated
page table?

-- 
Kirill A. Shutemov