From: "Huang, Ying"
To: Ryan Roberts
Cc: Miaohe Lin, Andrew Morton, David Hildenbrand
Subject: Re: [PATCH v1] mm: swap: Fix race between free_swap_and_cache() and swapoff()
In-Reply-To: <29335a89-b14b-4ef3-abf8-0b41e6d0ec67@arm.com>
 (Ryan Roberts's message of "Thu, 7 Mar 2024 09:19:20 +0000")
References: <20240305151349.3781428-1-ryan.roberts@arm.com>
 <875xy0842q.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87bk7q7ffp.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <0925807f-d226-7f08-51d1-ab771b1a6c24@huawei.com>
 <8734t27awd.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <92672c62-47d8-44ff-bd05-951c813c95a5@arm.com>
 <87y1au5smu.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <29335a89-b14b-4ef3-abf8-0b41e6d0ec67@arm.com>
Date: Fri, 08 Mar 2024 08:55:18 +0800
Message-ID: <87jzmd5yq1.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)

Ryan Roberts writes:

> On 07/03/2024 08:54, Huang, Ying wrote:
>> Ryan Roberts writes:
>>
>>> On 07/03/2024 07:34, Huang, Ying wrote:
>>>> Miaohe Lin writes:
>>>>
>>>>> On 2024/3/7 13:56, Huang, Ying wrote:
>>>>>> Miaohe Lin writes:
>>>>>>
>>>>>>> On 2024/3/6 17:31, Ryan Roberts wrote:
>>>>>>>> On 06/03/2024 08:51, Miaohe Lin wrote:
>>>>>>>>> On 2024/3/6 10:52, Huang, Ying wrote:
>>>>>>>>>> Ryan Roberts writes:
>>>>>>>>>>
>>>>>>>>>>> There was previously a theoretical window where swapoff() could
>>>>>>>>>>> run and tear down a swap_info_struct while a call to
>>>>>>>>>>> free_swap_and_cache() was running in another thread. This could
>>>>>>>>>>> cause, amongst other bad possibilities,
>>>>>>>>>>> swap_page_trans_huge_swapped() (called by free_swap_and_cache())
>>>>>>>>>>> to access the freed memory for swap_map.
>>>>>>>>>>>
>>>>>>>>>>> This is a theoretical problem and I haven't been able to provoke
>>>>>>>>>>> it from a test case. But there has been agreement based on code
>>>>>>>>>>> review that this is possible (see link below).
>>>>>>>>>>>
>>>>>>>>>>> Fix it by using get_swap_device()/put_swap_device(), which will
>>>>>>>>>>> stall swapoff(). There was an extra check in _swap_info_get() to
>>>>>>>>>>> confirm that the swap entry was valid. This wasn't present in
>>>>>>>>>>> get_swap_device() so I've added it. I couldn't find any existing
>>>>>>>>>>> get_swap_device() call sites where this extra check would cause
>>>>>>>>>>> any false alarms.
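For concreteness, a minimal sketch of the shape this fix takes -- not
the literal patch hunk, with return values and error handling abridged:

int free_swap_and_cache(swp_entry_t entry)
{
	struct swap_info_struct *si;
	unsigned char count;

	if (non_swap_entry(entry))
		return 1;

	/*
	 * Pin the device: swapoff() cannot free si->swap_map until the
	 * matching put_swap_device() below. get_swap_device() also
	 * validates the entry, covering the check that used to be done
	 * in _swap_info_get().
	 */
	si = get_swap_device(entry);
	if (!si)
		return 0;

	count = __swap_entry_free(si, entry);
	if (count == SWAP_HAS_CACHE &&
	    !swap_page_trans_huge_swapped(si, entry))
		__try_to_reclaim_swap(si, swp_offset(entry),
				      TTRS_UNMAPPED | TTRS_FULL);

	put_swap_device(si);
	return 1;
}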
>>>>>>>>>>>
>>>>>>>>>>> Details of how to provoke one possible issue (thanks to David
>>>>>>>>>>> Hildenbrand for deriving this):
>>>>>>>>>>>
>>>>>>>>>>> --8<-----
>>>>>>>>>>>
>>>>>>>>>>> __swap_entry_free() might be the last user and result in
>>>>>>>>>>> "count == SWAP_HAS_CACHE".
>>>>>>>>>>>
>>>>>>>>>>> swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
>>>>>>>>>>>
>>>>>>>>>>> So the question is: could someone reclaim the folio and turn
>>>>>>>>>>> si->inuse_pages into 0 before we have completed
>>>>>>>>>>> swap_page_trans_huge_swapped()?
>>>>>>>>>>>
>>>>>>>>>>> Imagine the following: a 2 MiB folio in the swapcache. Only 2
>>>>>>>>>>> subpages are still referenced by swap entries.
>>>>>>>>>>>
>>>>>>>>>>> Process 1 still references subpage 0 via swap entry.
>>>>>>>>>>> Process 2 still references subpage 1 via swap entry.
>>>>>>>>>>>
>>>>>>>>>>> Process 1 quits. Calls free_swap_and_cache().
>>>>>>>>>>> -> count == SWAP_HAS_CACHE
>>>>>>>>>>> [then, preempted in the hypervisor etc.]
>>>>>>>>>>>
>>>>>>>>>>> Process 2 quits. Calls free_swap_and_cache().
>>>>>>>>>>> -> count == SWAP_HAS_CACHE
>>>>>>>>>>>
>>>>>>>>>>> Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and
>>>>>>>>>>> calls __try_to_reclaim_swap().
>>>>>>>>>>>
>>>>>>>>>>> __try_to_reclaim_swap()->folio_free_swap()->
>>>>>>>>>>> delete_from_swap_cache()->put_swap_folio()->free_swap_slot()->
>>>>>>>>>>> swapcache_free_entries()->swap_entry_free()->swap_range_free()->
>>>>>>>>>>> ...
>>>>>>>>>>> WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>>>>>>>>>>>
>>>>>>>>>>> What stops swapoff from succeeding after process 2 reclaimed the
>>>>>>>>>>> swap cache but before process 1 finished its call to
>>>>>>>>>>> swap_page_trans_huge_swapped()?
>>>>>>>>>>>
>>>>>>>>>>> --8<-----
>>>>>>>>>>
>>>>>>>>>> I think that this can be simplified. Even for a 4K folio, this
>>>>>>>>>> could happen.
>>>>>>>>>>
>>>>>>>>>> CPU0                            CPU1
>>>>>>>>>> ----                            ----
>>>>>>>>>>
>>>>>>>>>> zap_pte_range
>>>>>>>>>>   free_swap_and_cache
>>>>>>>>>>     __swap_entry_free
>>>>>>>>>>     /* swap count becomes 0 */
>>>>>>>>>>                                 swapoff
>>>>>>>>>>                                   try_to_unuse
>>>>>>>>>>                                     filemap_get_folio
>>>>>>>>>>                                     folio_free_swap
>>>>>>>>>>                                     /* remove swap cache */
>>>>>>>>>>                                   /* free si->swap_map[] */
>>>>>>>>>>     swap_page_trans_huge_swapped <-- access freed si->swap_map !!!
>>>>>>>>>
>>>>>>>>> Sorry for jumping into the discussion here. IMHO,
>>>>>>>>> free_swap_and_cache is called with the pte lock held.
>>>>>>>>
>>>>>>>> I don't believe it has the PTL when called by shmem.
>>>>>>>
>>>>>>> In the case of shmem, folio_lock is used to guard against the race.
>>>>>>
>>>>>> I don't see where the folio is locked for shmem. find_lock_entries()
>>>>>> will only lock the folio if (!xa_is_value()), that is, if it is not a
>>>>>> swap entry. Can you point out where the folio is locked for shmem?
>>>>>
>>>>> You're right, the folio is only locked if it is not a swap entry.
>>>>> That's my mistake. But it seems the above race is still nonexistent.
>>>>> shmem_unuse() will first be called to read all the shared memory data
>>>>> that resides in the swap device back into memory when doing swapoff.
>>>>> In that case, all the swapped pages are moved to the page cache, thus
>>>>> there won't be any xa_is_value(folio) cases when calling
>>>>> shmem_undo_range(). free_swap_and_cache() won't even be called from
>>>>> shmem_undo_range() after shmem_unuse(). Or am I missing something?
>>>>
>>>> I think the following situation is possible. Right?
>>>>
>>>> CPU0                            CPU1
>>>> ----                            ----
>>>> shmem_undo_range
>>>>   shmem_free_swap
>>>>     xa_cmpxchg_irq
>>>>   free_swap_and_cache
>>>>     __swap_entry_free
>>>>     /* swap count becomes 0 */
>>>>                                 swapoff
>>>>                                   try_to_unuse
>>>>                                     shmem_unuse /* cannot find swap entry */
>>>>                                     find_next_to_unuse
>>>>                                     filemap_get_folio
>>>>                                     folio_free_swap
>>>>                                     /* remove swap cache */
>>>>                                   /* free si->swap_map[] */
>>>>     swap_page_trans_huge_swapped <-- access freed si->swap_map !!!
>>>>
>>>> shmem_undo_range can run earlier.
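To make the synchronization concrete, a simplified sketch of how the two
sides pair up, based on a reading of mm/swapfile.c (si->users is the
device's percpu_ref and si->comp its completion; this is not a verbatim
excerpt):

/* Reader side: any access between get_swap_device()/put_swap_device() */
si = get_swap_device(entry);	/* percpu_ref_tryget_live(&si->users) */
if (si) {
	/* si->swap_map cannot be freed in this window */
	put_swap_device(si);	/* percpu_ref_put(&si->users) */
}

/* swapoff() side, before freeing si->swap_map: */
percpu_ref_kill(&si->users);	/* new get_swap_device() calls now fail */
synchronize_rcu();		/* also waits out RCU read-side sections,
				 * which is why IRQ-off/spinlock regions
				 * hold off swapoff too, as noted below */
wait_for_completion(&si->comp);	/* wait for existing users to drop off */
/* only now is si->swap_map freed */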
>>>
>>> Yes, that's the shmem problem I've been trying to convey. Perhaps there
>>> are other (extremely subtle) mechanisms that make this impossible, I
>>> don't know.
>>>
>>> Either way, given the length of this discussion, and the subtleties in
>>> the synchronization mechanisms that have so far been identified, I
>>> think the safest thing to do is just apply the patch. Then we have
>>> explicit synchronization that we can trivially reason about.
>>
>> Yes. This is tricky and we can improve it. So I suggest the following:
>>
>> - Revise the patch description to use the shmem race as the example,
>>   unless someone finds it to be impossible.
>>
>> - Revise the comments of get_swap_device() to note that an RCU
>>   reader-side lock (including IRQ off, spinlock, etc.) can prevent
>>   swapoff via the synchronize_rcu() in swapoff().
>>
>> - Revise the comments of the synchronize_rcu() in swapoff(): it prevents
>>   swapoff from running in parallel with an RCU reader-side lock,
>>   including swap cache operations, etc.
>
> The only problem with this is that Andrew has already put my v2 into
> mm-*stable* :-|
>
> So (1) from that list isn't possible. I could do a patch for (2) and
> (3), but to be honest, I think you would do a better job of writing it
> up than I would - any chance you could post the patch?

Sure. I will do that.

--
Best Regards,
Huang, Ying