Date: Tue, 28 Jan 2020 00:30:44 -0800
From: Matthew Wilcox
To: Michal Hocko
Cc: Cong Wang, LKML, Andrew Morton, linux-mm, Mel Gorman, Vlastimil Babka
Subject: Re: [PATCH] mm: avoid blocking lock_page() in kcompactd
Message-ID: <20200128083044.GB6615@bombadil.infradead.org>
In-Reply-To: <20200128081712.GA18145@dhcp22.suse.cz>
References: <20200109225646.22983-1-xiyou.wangcong@gmail.com>
 <20200110073822.GC29802@dhcp22.suse.cz>
 <20200121090048.GG29276@dhcp22.suse.cz>
 <20200126233935.GA11536@bombadil.infradead.org>
 <20200127150024.GN1183@dhcp22.suse.cz>
 <20200127190653.GA8708@bombadil.infradead.org>
 <20200128081712.GA18145@dhcp22.suse.cz>

On Tue, Jan 28, 2020 at 09:17:12AM +0100, Michal Hocko wrote:
> On Mon 27-01-20 11:06:53, Matthew Wilcox wrote:
> > On Mon, Jan 27, 2020 at 04:00:24PM +0100, Michal Hocko wrote:
> > > On Sun 26-01-20 15:39:35, Matthew Wilcox wrote:
> > > > On Sun, Jan 26, 2020 at 11:53:55AM -0800, Cong Wang wrote:
> > > > > I suspect the process gets stuck in the retry loop in try_charge(), as
> > > > > the _shortest_ stacktrace of the perf samples indicated:
> > > > >
> > > > > cycles:ppp:
> > > > > ffffffffa72963db mem_cgroup_iter
> > > > > ffffffffa72980ca mem_cgroup_oom_unlock
> > > > > ffffffffa7298c15 try_charge
> > > > > ffffffffa729a886 mem_cgroup_try_charge
> > > > > ffffffffa720ec03 __add_to_page_cache_locked
> > > > > ffffffffa720ee3a add_to_page_cache_lru
> > > > > ffffffffa7312ddb iomap_readpages_actor
> > > > > ffffffffa73133f7 iomap_apply
> > > > > ffffffffa73135da iomap_readpages
> > > > > ffffffffa722062e read_pages
> > > > > ffffffffa7220b3f __do_page_cache_readahead
> > > > > ffffffffa7210554 filemap_fault
> > > > > ffffffffc039e41f __xfs_filemap_fault
> > > > > ffffffffa724f5e7 __do_fault
> > > > > ffffffffa724c5f2 __handle_mm_fault
> > > > > ffffffffa724cbc6 handle_mm_fault
> > > > > ffffffffa70a313e __do_page_fault
> > > > > ffffffffa7a00dfe page_fault
> > > > >
> > > > > But I don't see how it could be, the only possible case is when
> > > > > mem_cgroup_oom() returns OOM_SUCCESS. However I can't
> > > > > find any clue in dmesg pointing to OOM. These processes in the
> > > > > same memcg are either running or sleeping (that is not exiting or
> > > > > coredump'ing), I don't see how and why they could be selected as
> > > > > a victim of OOM killer. I don't see any signal pending either from
> > > > > their /proc/X/status.
> > > >
> > > > I think this is a situation where we might end up with a genuine deadlock
> > > > if we're not trylocking the pages. readahead allocates a batch of
> > > > locked pages and adds them to the pagecache. If it has allocated,
> > > > say, 5 pages, successfully inserted the first three into i_pages, then
> > > > needs to allocate memory to insert the fourth one into i_pages, and
> > > > the process then attempts to migrate the pages which are still locked,
> > > > they will never come unlocked because they haven't yet been submitted
> > > > to the filesystem for reading.
> > >
> > > Just to make sure I understand. Do you mean this?
> > > lock_page(A)
> > > alloc_pages
> > >  try_to_compact_pages
> > >   compact_zone_order
> > >    compact_zone(MIGRATE_SYNC_LIGHT)
> > >     migrate_pages
> > >      unmap_and_move
> > >       __unmap_and_move
> > >        lock_page(A)
> >
> > Yes. There's a little more to it than that, eg slab is involved, but
> > you have it in a nutshell.
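To make the shape of the deadlock concrete, here is a minimal userspace
model: ordinary pthread mutexes stand in for page locks, and all the
names (migrate_one, page_lock, NR_PAGES) are illustrative; none of this
is kernel code. The same thread that holds the "page locks" re-enters a
"migration" step from inside an allocation, so a blocking lock can never
be satisfied, while a trylock just skips the busy page:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 5

/* one mutex per "page", standing in for the page lock */
static pthread_mutex_t page_lock[NR_PAGES];

/* the migration step reached from inside an allocation */
static bool migrate_one(int page, bool trylock)
{
	if (trylock) {
		if (pthread_mutex_trylock(&page_lock[page]) != 0)
			return false;	/* busy: skip it, make progress elsewhere */
		pthread_mutex_unlock(&page_lock[page]);
		return true;
	}
	/*
	 * Blocking version: the lock holder is us, and the page would
	 * only be unlocked on read completion, which never comes because
	 * the page was never submitted to the filesystem for reading.
	 */
	pthread_mutex_lock(&page_lock[page]);
	pthread_mutex_unlock(&page_lock[page]);
	return true;
}

int main(void)
{
	for (int i = 0; i < NR_PAGES; i++)
		pthread_mutex_init(&page_lock[i], NULL);

	/* "readahead": pages 0-2 inserted into the cache, still locked */
	for (int i = 0; i < 3; i++)
		pthread_mutex_lock(&page_lock[i]);

	/*
	 * Inserting page 3 needs memory; that allocation falls into
	 * compaction, which tries to migrate page 0, a page this very
	 * thread still holds locked.
	 */
	if (!migrate_one(0, true))
		printf("trylock: skipped a locked page, no deadlock\n");

	/* migrate_one(0, false) would block in pthread_mutex_lock() forever */
	return 0;
}

(Build with cc -pthread. The blocking variant is only described in a
comment because, by construction, it would never return.)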
> I am not deeply familiar with the readahead code. But is there really a
> high order allocation (order > 1) that would trigger compaction in the
> phase when pages are locked?

Thanks to sl*b, yes:

radix_tree_node    80890 102536    584   28    4 : tunables    0    0    0 : slabdata   3662   3662      0

so it's allocating 4 pages for an allocation of a 576 byte node (the
arithmetic is spelled out at the end of this mail).

> Btw. compaction refuses to consider file backed pages when __GFP_FS
> is not present AFAIR.

Ah, that would save us.
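For the record, the arithmetic behind the slabinfo line above (the
fields are <active_objs> <num_objs> <objsize> <objperslab>
<pagesperslab>), assuming 4KiB pages:

	28 objects/slab * 584 bytes/object = 16352 bytes
	 4 pages/slab   * 4096 bytes/page  = 16384 bytes

so each radix_tree_node slab is an order-2, 4-page allocation, which is
exactly the "order > 1" case that can fall into compaction while
readahead is still holding its page locks.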