From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    David Hildenbrand, Yosry Ahmed, Hugh Dickins, Johannes Weiner,
    Matthew Wilcox, Michal Hocko, Minchan Kim, Tim Chen, Yang Shi,
    Yu Zhao, Chris Li
Subject: [PATCH -V3 5/5] swap: comments get_swap_device() with usage rule
Date: Mon, 29 May 2023 14:13:55 +0800
Message-Id: <20230529061355.125791-6-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230529061355.125791-1-ying.huang@intel.com>
References: <20230529061355.125791-1-ying.huang@intel.com>

The general rule for using a swap entry is as follows. When we get a
swap entry, if there is no other way to prevent swapoff, such as the
folio in swap cache being locked, the page table lock being held,
etc., the swap entry may become invalid because of swapoff. In that
case, we need to enclose all swap related functions with
get_swap_device() and put_swap_device(), unless the swap functions
call get/put_swap_device() by themselves.

Add the rule as a comment of get_swap_device().
Signed-off-by: "Huang, Ying"
Reviewed-by: David Hildenbrand
Reviewed-by: Yosry Ahmed
Cc: Hugh Dickins
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Michal Hocko
Cc: Minchan Kim
Cc: Tim Chen
Cc: Yang Shi
Cc: Yu Zhao
Cc: Chris Li
---
 mm/swapfile.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 4dbaea64635d..3d0e932497f0 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1219,6 +1219,13 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
 }
 
 /*
+ * When we get a swap entry, if there aren't some other ways to
+ * prevent swapoff, such as the folio in swap cache is locked, page
+ * table lock is held, etc., the swap entry may become invalid because
+ * of swapoff.  Then, we need to enclose all swap related functions
+ * with get_swap_device() and put_swap_device(), unless the swap
+ * functions call get/put_swap_device() by themselves.
+ *
  * Check whether swap entry is valid in the swap device.  If so,
  * return pointer to swap_info_struct, and keep the swap entry valid
  * via preventing the swap device from being swapoff, until
@@ -1227,9 +1234,8 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
 * Notice that swapoff or swapoff+swapon can still happen before the
 * percpu_ref_tryget_live() in get_swap_device() or after the
 * percpu_ref_put() in put_swap_device() if there isn't any other way
- * to prevent swapoff, such as page lock, page table lock, etc.  The
- * caller must be prepared for that.  For example, the following
- * situation is possible.
+ * to prevent swapoff.  The caller must be prepared for that.  For
+ * example, the following situation is possible.
 *
 * CPU1				CPU2
 * do_swap_page()
-- 
2.39.2