Date: Tue, 30 Jun 2020 08:34:10 +1000
From: Dave Chinner
To: Mikulas Patocka
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-xfs@vger.kernel.org, dm-devel@redhat.com,
    Jens Axboe, NeilBrown
Subject: Re: [PATCH 0/6] Overhaul memalloc_no*
Message-ID: <20200629223410.GK2005@dread.disaster.area>
References: <20200625113122.7540-1-willy@infradead.org>
 <20200626230847.GI2005@dread.disaster.area>
 <20200629003550.GJ2005@dread.disaster.area>

On Mon, Jun 29, 2020 at 09:43:23AM -0400, Mikulas Patocka wrote:
> On Mon, 29 Jun 2020, Dave Chinner wrote:
> > On Sat, Jun 27, 2020 at 09:09:09AM -0400, Mikulas Patocka wrote:
> > > On Sat, 27 Jun 2020, Dave Chinner wrote:
> > > > On Fri, Jun 26, 2020 at 11:02:19AM -0400, Mikulas Patocka wrote:
> > > > > Hi
> > > > >
> > > > > I suggest joining memalloc_noio and memalloc_nofs into just one
> > > > > flag that prevents both filesystem recursion and I/O recursion.
> > > > >
> > > > > Note that any I/O can recurse into a filesystem via the loop
> > > > > device, thus it doesn't make much sense to have a context where
> > > > > PF_MEMALLOC_NOFS is set and PF_MEMALLOC_NOIO is not set.
> > > >
> > > > Correct me if I'm wrong, but I think that will prevent swapping
> > > > from GFP_NOFS memory reclaim contexts.
> > >
> > > Yes.
> > >
> > > > IOWs, this will substantially change the behaviour of the memory
> > > > reclaim system under sustained GFP_NOFS memory pressure.
> > > > Sustained GFP_NOFS memory pressure is quite common, so I really
> > > > don't think we want to be telling memory reclaim "you can't do IO
> > > > at all" when all we are trying to do is prevent recursion back
> > > > into the same filesystem.
> > >
> > > So, we can define __GFP_ONLY_SWAP_IO and __GFP_IO.
> >
> > Uh, why?
> >
> > Exactly what problem are you trying to solve here?
>
> This:
>
> 1. The filesystem does a GFP_NOFS allocation.
> 2. The allocation directly calls the dm-bufio shrinker.
> 3. The dm-bufio shrinker sees that __GFP_IO is set, so it assumes that
>    it can do I/O. It selects some dirty buffers, writes them back and
>    waits for the I/O to finish.

So you are doing IO in a GFP_NOFS context because someone thought the
block layer can't recurse back into filesystems? That's a broken
assumption, and it has been ever since the loop device was introduced a
couple of decades ago.

I mean, the dm-bufio IO submission path uses GFP_NOIO for obvious
reasons, but once the IO is in the next device down, dm-bufio loses all
control of the submission context.

This is what I mean about "looking at reclaim contexts above the
current layer" being a Big Red Flag. The fundamental assumption of
dm-bufio - that it can issue IO in GFP_NOFS context and not have a
lower layer recurse back into a filesystem - has always been incorrect.

Just because the loop device now does GFP_NOIO allocation, that doesn't
mean what dm-bufio is doing in this shrinker is correct or valid.
Because, as you point out:

> 4. The dirty buffers belong to a loop device.
> 5. The loop device thread calls the filesystem that did the GFP_NOFS
>    allocation in step 1 (and that is still waiting for the allocation
>    to succeed).
>
> Note that setting PF_MEMALLOC_NOIO on the loop thread won't help with
> this deadlock.

Right. Re-entering the filesystem might block on a lock, IO, memory
allocation, journal space reservation, etc. Indeed, it might not even
be able to issue transactions, because the allocating context is using
GFP_NOFS precisely because it is already running a transaction.
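To make that scoping concrete, here's a minimal sketch - not code from
any real filesystem, fs_run_transaction() is a made-up name - of how
the scoped allocation API in <linux/sched/mm.h> ties "running a
transaction" to "in GFP_NOFS context":

#include <linux/sched/mm.h>
#include <linux/slab.h>

/* Made-up illustration, not from any real filesystem. */
static int fs_run_transaction(void)
{
	unsigned int nofs_flags;
	void *buf;
	int ret = 0;

	/*
	 * Scope the transaction: from here to the matching restore,
	 * PF_MEMALLOC_NOFS is set on the task and allocations are
	 * implicitly treated as GFP_NOFS.
	 */
	nofs_flags = memalloc_nofs_save();

	/*
	 * This GFP_KERNEL allocation behaves as GFP_NOFS, so direct
	 * reclaim from it cannot recurse back into the filesystem.
	 */
	buf = kmalloc(4096, GFP_KERNEL);
	if (!buf)
		ret = -ENOMEM;

	/* ... transaction work would go here ... */

	kfree(buf);
	memalloc_nofs_restore(nofs_flags);
	return ret;
}

The point being that the NOFS constraint is a property of the task
context, not of the individual allocation site, so everything reclaim
recurses into from that allocation - shrinkers included - inherits it.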
> Do you argue that this is a bug in dm-bufio? Or a bug in the kernel?
> Or that it can't happen?

That's a bug in dm-bufio. dm is a layered block device and so has
_always_ been able to have filesystems both above and below it in the
storage stack, i.e. the assumption that there is no filesystem context
under the DM layers has always been wrong.

The memory reclaim context specifically directed dm-bufio that,
whatever the shrinker does, it must not recurse into the filesystem
layer. It is the responsibility of the shrinker to obey the constraints
it was given by memory reclaim, and the dm-bufio shrinker's assumption
that there cannot be a filesystem below the DM device violates this
directive because, quite clearly, there can be filesystems underneath
DM devices.

IOWs, assuming that you can issue and block on IO from a -layered
block device- in GFP_NOFS shrinker context is flawed: anything that
presents as a block device layered on top of another block device can
recurse into a filesystem, because it can sit on top of a loop device.
This has always been the case, and that means the assumptions the
dm-bufio shrinker is making about what it can do in GFP_NOFS shrinker
context have always been incorrect.

Remember that I explained "you should not block kswapd" in this
shrinker a year ago?

| What follows from that, and is pertinent for in this situation, is
| that if you don't block kswapd, then other reclaim contexts are not
| going to get stuck waiting for it regardless of the reclaim context
| they use.

https://lore.kernel.org/linux-fsdevel/20190809215733.GZ7777@dread.disaster.area/

If you had done that when I suggested it, this problem would already be
solved.

i.e. the only way to fix this problem once and for all is to stop using
the shrinker as a mechanism to issue and wait on IO. If you need
background writeback of dirty buffers, do it from a WQ_MEM_RECLAIM
workqueue that isn't directly in the memory reclaim path and so can
issue writeback and block safely from a GFP_KERNEL context. Kick the
workqueue from the shrinker context, but get rid of the IO submission
and waiting from the shrinker, and all the GFP_NOFS memory reclaim
recursion problems go away.
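Something along these lines - a rough, untested sketch, with
write_dirty_buffers_and_wait() and reclaim_clean_buffers() standing in
for whatever dm-bufio's real internals would be:

#include <linux/shrinker.h>
#include <linux/workqueue.h>

static struct workqueue_struct *bufio_wb_wq;
static struct work_struct bufio_wb_work;

static void bufio_writeback_workfn(struct work_struct *work)
{
	/*
	 * Runs from the workqueue in GFP_KERNEL context, outside the
	 * reclaim path, so it can issue IO and block on it safely.
	 */
	write_dirty_buffers_and_wait();		/* stand-in name */
}

static unsigned long bufio_shrink_scan(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	/* Kick background writeback; never issue or wait on IO here. */
	queue_work(bufio_wb_wq, &bufio_wb_work);

	/* Reclaim only buffers that are already clean. */
	return reclaim_clean_buffers(sc->nr_to_scan);	/* stand-in name */
}

static int bufio_wb_init(void)
{
	/*
	 * WQ_MEM_RECLAIM guarantees a rescuer thread, so the work can
	 * always make forward progress even under memory pressure.
	 */
	bufio_wb_wq = alloc_workqueue("bufio_wb", WQ_MEM_RECLAIM, 0);
	if (!bufio_wb_wq)
		return -ENOMEM;
	INIT_WORK(&bufio_wb_work, bufio_writeback_workfn);
	return 0;
}

The exact mechanics don't matter; what matters is that the shrinker
never blocks on IO itself, so no reclaim context can get stuck waiting
behind it.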
Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com