Subject: Re: [PATCH 2/5] swap: fix do_swap_page() race with swapoff
To: Tim Chen
From: Miaohe Lin
Date: Sat, 10 Apr 2021 11:17:29 +0800
References: <20210408130820.48233-1-linmiaohe@huawei.com>
 <20210408130820.48233-3-linmiaohe@huawei.com>
 <7684b3de-2824-9b1f-f033-d4bc14f9e195@linux.intel.com>
 <50d34b02-c155-bad7-da1f-03807ad31275@huawei.com>
 <995a130b-f07a-4771-1fe3-477d2f3c1e8e@linux.intel.com>
In-Reply-To: <995a130b-f07a-4771-1fe3-477d2f3c1e8e@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/4/10 1:17, Tim Chen wrote:
> On 4/9/21 1:42 AM, Miaohe Lin wrote:
>> On 2021/4/9 5:34, Tim Chen wrote:
>>> On 4/8/21 6:08 AM, Miaohe Lin wrote:
>>>> When I was investigating the swap code, I found the below possible race
>>>> window:
>>>>
>>>> CPU 1                                        CPU 2
>>>> -----                                        -----
>>>> do_swap_page
>>>>   synchronous swap_readpage
>>>>     alloc_page_vma
>>>>                                              swapoff
>>>>                                                release swap_file, bdev, or ...
>>>
>> Many thanks for the quick review and reply!
>>
>>> Perhaps I'm missing something. The release of swap_file, bdev etc.
>>> happens after we have cleared the SWP_VALID bit in si->flags in
>>> destroy_swap_extents, if I read the swapoff code correctly.
>>
>> Agree. Let's look at this more closely:
>>
>> CPU1                                           CPU2
>> -----                                          -----
>> swap_readpage
>>   if (data_race(sis->flags & SWP_FS_OPS)) {
>>                                                swapoff
>>                                                  p->swap_file = NULL;
>>     struct file *swap_file = sis->swap_file;
>>     struct address_space *mapping = swap_file->f_mapping; [oops!]
>>                                                  ...
>>                                                  p->flags = 0;
>> ...
>>
>> Does this make sense to you?
>
> p->swap_file = NULL happens after the p->flags &= ~SWP_VALID,
> synchronize_rcu(), destroy_swap_extents() sequence in swapoff().
>
> So I don't think the sequence you illustrated on CPU2 is in the right order.
> That said, without get_swap_device/put_swap_device in swap_readpage, you could
> potentially blow past synchronize_rcu() on CPU2 and cause a problem. So I think
> the problematic race looks something like the following:
>
> CPU1                                           CPU2
> -----                                          -----
> swap_readpage
>   if (data_race(sis->flags & SWP_FS_OPS)) {
>                                                swapoff
>                                                  p->flags &= ~SWP_VALID;
>                                                  ..
>                                                  synchronize_rcu();
>                                                  ..
>                                                  p->swap_file = NULL;
>     struct file *swap_file = sis->swap_file;
>     struct address_space *mapping = swap_file->f_mapping; [oops!]
> ...

Agree. This is also what I meant to illustrate, and you provide a better one. Many thanks!

> By adding get_swap_device/put_swap_device, the race is fixed.
>
> CPU1                                           CPU2
> -----                                          -----
> swap_readpage
>   get_swap_device()
>   ..
>   if (data_race(sis->flags & SWP_FS_OPS)) {
>                                                swapoff
>                                                  p->flags &= ~SWP_VALID;
>                                                  ..
>     struct file *swap_file = sis->swap_file;
>     struct address_space *mapping = swap_file->f_mapping; [valid value]
>   ..
>   put_swap_device()
>                                                  synchronize_rcu();
>                                                  ..
>                                                  p->swap_file = NULL;
>
>>>>
>>>> swap_readpage
>>>>   check sis->flags is ok
>>>>     access swap_file, bdev... [oops!]
>>>>                                              si->flags = 0
>>>
>>> This happens after we clear the si->flags:
>>>   synchronize_rcu()
>>>   release swap_file, bdev, in destroy_swap_extents()
>>>
>>> So I think if we have get_swap_device/put_swap_device in do_swap_page,
>>> it should fix the race you've pointed out here.
>>> Then synchronize_rcu() will wait till we have completed do_swap_page and
>>> called put_swap_device.
>>
>> Right, get_swap_device/put_swap_device could fix this race. __But__ rcu_read_lock()
>> in get_swap_device() disables preemption, and do_swap_page() may take a really long
>> time because it involves I/O. It may not be acceptable to disable preemption for
>> such a long time. :(
>
> I can see that it is not a good idea to hold the rcu read lock for a long
> time over a slow file I/O operation, which will be the side effect of
> introducing get/put_swap_device to swap_readpage. So using percpu_ref
> will then be preferable for synchronization once we introduce
> get/put_swap_device into swap_readpage.

The sis->bdev should also be protected by get/put_swap_device; it has a similar
issue. And swap_slot_free_notify (called from the end_swap_bio_read callback)
would race with swapoff too. So I use get/put_swap_device to protect
swap_readpage until the file I/O operation is completed. Thanks again!

> Tim