From: Hou Tao
To:
CC: , ,
Subject: [PATCH] squashfs: enable __GFP_FS in ->readpage to prevent hang in mem alloc
Date: Tue, 4 Dec 2018 10:08:40 +0800
Message-ID: <20181204020840.49576-1-houtao1@huawei.com>
X-Mailer: git-send-email 2.16.2.dirty
X-Mailing-List: linux-kernel@vger.kernel.org

There is no need to disable __GFP_FS in ->readpage:

* It's a read-only fs, so there will be no dirty/writeback page and
  there will be no deadlock against the caller's locked page.
* It just allocates one page, so compaction will not be invoked.
* It doesn't take any inode lock, so the reclamation of inodes will
  be fine.

And clearing __GFP_FS may lead to a hang in __alloc_pages_slowpath()
if a squashfs page fault occurs in the context of a memory hogger,
because the hogger will not be killed due to the logic in
__alloc_pages_may_oom().

Signed-off-by: Hou Tao
---
 fs/squashfs/file.c          |  3 ++-
 fs/squashfs/file_direct.c   |  4 +++-
 fs/squashfs/squashfs_fs_f.h | 25 +++++++++++++++++++++++++
 3 files changed, 30 insertions(+), 2 deletions(-)
 create mode 100644 fs/squashfs/squashfs_fs_f.h

diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
index f1c1430ae721..8603dda4a719 100644
--- a/fs/squashfs/file.c
+++ b/fs/squashfs/file.c
@@ -51,6 +51,7 @@
 #include "squashfs_fs.h"
 #include "squashfs_fs_sb.h"
 #include "squashfs_fs_i.h"
+#include "squashfs_fs_f.h"
 #include "squashfs.h"
 
 /*
@@ -414,7 +415,7 @@ void squashfs_copy_cache(struct page *page, struct squashfs_cache_entry *buffer,
 		TRACE("bytes %d, i %d, available_bytes %d\n", bytes, i, avail);
 
 		push_page = (i == page->index) ? page :
-			grab_cache_page_nowait(page->mapping, i);
+			squashfs_grab_cache_page_nowait(page->mapping, i);
 
 		if (!push_page)
 			continue;
diff --git a/fs/squashfs/file_direct.c b/fs/squashfs/file_direct.c
index 80db1b86a27c..a0fdd6215348 100644
--- a/fs/squashfs/file_direct.c
+++ b/fs/squashfs/file_direct.c
@@ -17,6 +17,7 @@
 #include "squashfs_fs.h"
 #include "squashfs_fs_sb.h"
 #include "squashfs_fs_i.h"
+#include "squashfs_fs_f.h"
 #include "squashfs.h"
 #include "page_actor.h"
 
@@ -60,7 +61,8 @@ int squashfs_readpage_block(struct page *target_page, u64 block, int bsize,
 	/* Try to grab all the pages covered by the Squashfs block */
 	for (missing_pages = 0, i = 0, n = start_index; i < pages; i++, n++) {
 		page[i] = (n == target_page->index) ? target_page :
-			grab_cache_page_nowait(target_page->mapping, n);
+			squashfs_grab_cache_page_nowait(
+				target_page->mapping, n);
 
 		if (page[i] == NULL) {
 			missing_pages++;
diff --git a/fs/squashfs/squashfs_fs_f.h b/fs/squashfs/squashfs_fs_f.h
new file mode 100644
index 000000000000..fc5fb7aeb27d
--- /dev/null
+++ b/fs/squashfs/squashfs_fs_f.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef SQUASHFS_FS_F
+#define SQUASHFS_FS_F
+
+/*
+ * No need to use FGP_NOFS here:
+ * 1. It's a read-only fs, so there will be no dirty/writeback page and
+ *    there will be no deadlock against the caller's locked page.
+ * 2. It just allocates one page, so compaction will not be invoked.
+ * 3. It doesn't take any inode lock, so the reclamation of inode
+ *    will be fine.
+ *
+ * And GFP_NOFS may lead to infinite loop in __alloc_pages_slowpath() if a
+ * squashfs page fault occurs in the context of a memory hogger, because
+ * the hogger will not be killed due to the logic in __alloc_pages_may_oom().
+ */
+static inline struct page *
+squashfs_grab_cache_page_nowait(struct address_space *mapping, pgoff_t index)
+{
+	return pagecache_get_page(mapping, index,
+				  FGP_LOCK|FGP_CREAT|FGP_NOWAIT,
+				  mapping_gfp_mask(mapping));
+}
+#endif
-- 
2.16.2.dirty