Subject: Re: [PATCH v2 3/5] swap: fix do_swap_page() race with swapoff
From: Miaohe Lin
To: "Huang, Ying"
Date: Mon, 19 Apr 2021 14:54:50 +0800
Message-ID: <38d0ac56-62e0-76b8-36ef-711089b88d91@huawei.com>
In-Reply-To: <87k0ozko5c.fsf@yhuang6-desk1.ccr.corp.intel.com>
References: <20210417094039.51711-1-linmiaohe@huawei.com> <20210417094039.51711-4-linmiaohe@huawei.com> <87k0ozko5c.fsf@yhuang6-desk1.ccr.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org
On 2021/4/19 10:23, Huang, Ying wrote:
> Miaohe Lin writes:
>
>> When I was investigating the swap code, I found the below possible race
>> window:
>>
>> CPU 1                             CPU 2
>> -----                             -----
>> do_swap_page
>
> This is OK for swap cache cases.  So
>
>   if (data_race(si->flags & SWP_SYNCHRONOUS_IO))
>
> should be shown here.

Ok.

>
>>   swap_readpage(skip swap cache case)
>>     if (data_race(sis->flags & SWP_FS_OPS)) {
>>                                   swapoff
>>                                     p->flags &= ~SWP_VALID;
>>                                     ..
>>                                     synchronize_rcu();
>>                                     ..
>>                                     p->swap_file = NULL;
>>     struct file *swap_file = sis->swap_file;
>>     struct address_space *mapping = swap_file->f_mapping; [oops!]
>>
>> Note that for the pages that are swapped in through the swap cache, this
>> isn't an issue: the page is locked, and the swap entry will be marked
>> with SWAP_HAS_CACHE, so swapoff() cannot proceed until the page has been
>> unlocked.
>>
>> Using the current get/put_swap_device() to guard against concurrent swapoff
>> for swap_readpage() looks terrible because swap_readpage() may take a really
>> long time. And this race may not be really pernicious because swapoff is
>> usually done only at system shutdown. To reduce the performance overhead on
>> the hot path as much as possible, it appears we can use the percpu_ref to
>> close this race window (as suggested by Huang, Ying).
>
> I still suggest to squash PATCH 1-3, at least PATCH 1-2.  That will
> change the relevant code together and make it easier to review.
>

Will squash PATCH 1-2. Thanks.
> Best Regards,
> Huang, Ying
>
>> Fixes: 0bcac06f27d7 ("mm,swap: skip swapcache for swapin of synchronous device")
>> Reported-by: kernel test robot (auto build test ERROR)
>> Signed-off-by: Miaohe Lin
>> ---
>>  include/linux/swap.h | 9 +++++++++
>>  mm/memory.c          | 9 +++++++++
>>  2 files changed, 18 insertions(+)
>>
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index 993693b38109..523c2411a135 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -528,6 +528,15 @@ static inline struct swap_info_struct *swp_swap_info(swp_entry_t entry)
>>  	return NULL;
>>  }
>>
>> +static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
>> +{
>> +	return NULL;
>> +}
>> +
>> +static inline void put_swap_device(struct swap_info_struct *si)
>> +{
>> +}
>> +
>>  #define swap_address_space(entry)		(NULL)
>>  #define get_nr_swap_pages()			0L
>>  #define total_swap_pages			0L
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 27014c3bde9f..7a2fe12cf641 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -3311,6 +3311,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>  {
>>  	struct vm_area_struct *vma = vmf->vma;
>>  	struct page *page = NULL, *swapcache;
>> +	struct swap_info_struct *si = NULL;
>>  	swp_entry_t entry;
>>  	pte_t pte;
>>  	int locked;
>> @@ -3338,6 +3339,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>  		goto out;
>>  	}
>>
>> +	/* Prevent swapoff from happening to us. */
>> +	si = get_swap_device(entry);
>> +	if (unlikely(!si))
>> +		goto out;
>>
>>  	delayacct_set_flag(current, DELAYACCT_PF_SWAPIN);
>>  	page = lookup_swap_cache(entry, vma, vmf->address);
>> @@ -3514,6 +3519,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>  unlock:
>>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
>>  out:
>> +	if (si)
>> +		put_swap_device(si);
>>  	return ret;
>>  out_nomap:
>>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
>> @@ -3525,6 +3532,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>  		unlock_page(swapcache);
>>  		put_page(swapcache);
>>  	}
>> +	if (si)
>> +		put_swap_device(si);
>>  	return ret;
>>  }
> .
>