From: Miaohe Lin
Subject: [PATCH v2 2/5] mm/swapfile: use percpu_ref to serialize against concurrent swapoff
Date: Sat, 17 Apr 2021 05:40:36 -0400
Message-ID: <20210417094039.51711-3-linmiaohe@huawei.com>
In-Reply-To: <20210417094039.51711-1-linmiaohe@huawei.com>
References: <20210417094039.51711-1-linmiaohe@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use percpu_ref to serialize against concurrent swapoff. Also remove the
SWP_VALID flag, since it was only needed by the RCU-based scheme this
patch replaces.
Signed-off-by: Miaohe Lin
---
 include/linux/swap.h |  3 +--
 mm/swapfile.c        | 43 +++++++++++++++++--------------------------
 2 files changed, 18 insertions(+), 28 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8be36eb58b7a..993693b38109 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -177,7 +177,6 @@ enum {
 	SWP_PAGE_DISCARD = (1 << 10),	/* freed swap page-cluster discards */
 	SWP_STABLE_WRITES = (1 << 11),	/* no overwrite PG_writeback pages */
 	SWP_SYNCHRONOUS_IO = (1 << 12),	/* synchronous IO is efficient */
-	SWP_VALID	= (1 << 13),	/* swap is valid to be operated on? */
 	/* add others here before... */
 	SWP_SCANNING	= (1 << 14),	/* refcount in scan_swap_map */
 };
@@ -514,7 +513,7 @@ sector_t swap_page_sector(struct page *page);

 static inline void put_swap_device(struct swap_info_struct *si)
 {
-	rcu_read_unlock();
+	percpu_ref_put(&si->users);
 }

 #else /* CONFIG_SWAP */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 66515a3a2824..90e197bc2eeb 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1279,18 +1279,12 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
  * via preventing the swap device from being swapoff, until
  * put_swap_device() is called.  Otherwise return NULL.
  *
- * The entirety of the RCU read critical section must come before the
- * return from or after the call to synchronize_rcu() in
- * enable_swap_info() or swapoff().  So if "si->flags & SWP_VALID" is
- * true, the si->map, si->cluster_info, etc. must be valid in the
- * critical section.
- *
  * Notice that swapoff or swapoff+swapon can still happen before the
- * rcu_read_lock() in get_swap_device() or after the rcu_read_unlock()
- * in put_swap_device() if there isn't any other way to prevent
- * swapoff, such as page lock, page table lock, etc.  The caller must
- * be prepared for that.  For example, the following situation is
- * possible.
+ * percpu_ref_tryget_live() in get_swap_device() or after the
+ * percpu_ref_put() in put_swap_device() if there isn't any other way
+ * to prevent swapoff, such as page lock, page table lock, etc.  The
+ * caller must be prepared for that.  For example, the following
+ * situation is possible.
  *
  * CPU1				CPU2
  * do_swap_page()
@@ -1318,21 +1312,24 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	si = swp_swap_info(entry);
 	if (!si)
 		goto bad_nofile;
-
-	rcu_read_lock();
-	if (data_race(!(si->flags & SWP_VALID)))
-		goto unlock_out;
+	if (!percpu_ref_tryget_live(&si->users))
+		goto out;
+	/*
+	 * Guarantee we will not reference uninitialized fields
+	 * of swap_info_struct.
+	 */
+	smp_rmb();
 	offset = swp_offset(entry);
 	if (offset >= si->max)
-		goto unlock_out;
+		goto put_out;

 	return si;
 bad_nofile:
 	pr_err("%s: %s%08lx\n", __func__, Bad_file, entry.val);
 out:
 	return NULL;
-unlock_out:
-	rcu_read_unlock();
+put_out:
+	percpu_ref_put(&si->users);
 	return NULL;
 }
@@ -2475,7 +2472,7 @@ static void setup_swap_info(struct swap_info_struct *p, int prio,

 static void _enable_swap_info(struct swap_info_struct *p)
 {
-	p->flags |= SWP_WRITEOK | SWP_VALID;
+	p->flags |= SWP_WRITEOK;
 	atomic_long_add(p->pages, &nr_swap_pages);
 	total_swap_pages += p->pages;
@@ -2507,7 +2504,7 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
 	spin_unlock(&swap_lock);
 	/*
 	 * Guarantee swap_map, cluster_info, etc. fields are valid
-	 * between get/put_swap_device() if SWP_VALID bit is set
+	 * between get/put_swap_device().
 	 */
 	percpu_ref_resurrect(&p->users);
 	spin_lock(&swap_lock);
@@ -2625,12 +2622,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)

 	reenable_swap_slots_cache_unlock();

-	spin_lock(&swap_lock);
-	spin_lock(&p->lock);
-	p->flags &= ~SWP_VALID;	/* mark swap device as invalid */
-	spin_unlock(&p->lock);
-	spin_unlock(&swap_lock);
-
 	percpu_ref_kill(&p->users);
 	/*
 	 * We need synchronize_rcu() here to protect the accessing
-- 
2.19.1