From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, Minchan Kim, Tim Chen, Yang Shi, Yu Zhao, Chris Li, Yosry Ahmed
Subject: [PATCH -V3 2/5] swap, __read_swap_cache_async(): enlarge get/put_swap_device protection range
Date: Mon, 29 May 2023 14:13:52 +0800
Message-Id: <20230529061355.125791-3-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230529061355.125791-1-ying.huang@intel.com>
References: <20230529061355.125791-1-ying.huang@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

This makes the function a little easier to understand, because we no
longer need to consider swapoff inside it.  It also makes it possible to
remove the get/put_swap_device() calls in some functions called by
__read_swap_cache_async().
Signed-off-by: "Huang, Ying"
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Michal Hocko
Cc: Minchan Kim
Cc: Tim Chen
Cc: Yang Shi
Cc: Yu Zhao
Cc: Chris Li
Cc: Yosry Ahmed
---
 mm/swap_state.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index b76a65ac28b3..a8450b4a110c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -417,9 +417,13 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 {
 	struct swap_info_struct *si;
 	struct folio *folio;
+	struct page *page;
 	void *shadow = NULL;
 
 	*new_page_allocated = false;
+	si = get_swap_device(entry);
+	if (!si)
+		return NULL;
 
 	for (;;) {
 		int err;
@@ -428,14 +432,12 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * called after swap_cache_get_folio() failed, re-calling
 		 * that would confuse statistics.
 		 */
-		si = get_swap_device(entry);
-		if (!si)
-			return NULL;
 		folio = filemap_get_folio(swap_address_space(entry),
 						swp_offset(entry));
-		put_swap_device(si);
-		if (!IS_ERR(folio))
-			return folio_file_page(folio, swp_offset(entry));
+		if (!IS_ERR(folio)) {
+			page = folio_file_page(folio, swp_offset(entry));
+			goto got_page;
+		}
 
 		/*
 		 * Just skip read ahead for unused swap slot.
@@ -446,7 +448,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 * else swap_off will be aborted if we return NULL.
 		 */
 		if (!__swp_swapcount(entry) && swap_slot_cache_enabled)
-			return NULL;
+			goto fail_put_swap;
 
 		/*
 		 * Get a new page to read into from swap.  Allocate it now,
@@ -455,7 +457,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		 */
 		folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false);
 		if (!folio)
-			return NULL;
+			goto fail_put_swap;
 
 		/*
 		 * Swap entry may have been freed since our caller observed it.
@@ -466,7 +468,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		folio_put(folio);
 		if (err != -EEXIST)
-			return NULL;
+			goto fail_put_swap;
 
 		/*
 		 * We might race against __delete_from_swap_cache(), and
@@ -500,12 +502,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	/* Caller will initiate read into locked folio */
 	folio_add_lru(folio);
 	*new_page_allocated = true;
-	return &folio->page;
+	page = &folio->page;
+got_page:
+	put_swap_device(si);
+	return page;
 
 fail_unlock:
 	put_swap_folio(folio, entry);
 	folio_unlock(folio);
 	folio_put(folio);
+fail_put_swap:
+	put_swap_device(si);
 	return NULL;
 }
 
@@ -514,6 +521,10 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  * and reading the disk if it is not already cached.
  * A failure return means that either the page allocation failed or that
  * the swap entry is no longer in use.
+ *
+ * get/put_swap_device() aren't needed to call this function, because
+ * __read_swap_cache_async() calls them and swap_readpage() holds the
+ * swap cache folio lock.
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
-- 
2.39.2