Date: Wed, 11 May 2022 11:01:01 -0700
From: Minchan Kim
To: Sultan Alsawaf
Cc: stable@vger.kernel.org, Nitin Gupta, Sergey Senozhatsky,
	Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
References: <20220509024703.243847-1-sultan@kerneltoast.com>
In-Reply-To: <20220509024703.243847-1-sultan@kerneltoast.com>
On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> From: Sultan Alsawaf
>
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage and
> unsafely dereference page_private(), it can unsafely dereference a torn
> pointer to the next page (since there's a data race), and it can observe a
> spurious NULL pointer to the next page and thus not lock all of the
> zspage's pages (since a single page migration will reconstruct the entire
> page list, and create_page_chain() unconditionally zeroes out each list
> pointer in the process).
>
> Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
> with page migration.
>
> Cc: stable@vger.kernel.org
> Fixes: 48b4800a1c6a ("zsmalloc: page migration support")

Shouldn't the fix be

Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip unnecessary loops
but not return -EBUSY if zspage is not inuse")?

Because we didn't migrate ZS_EMPTY pages before.

> Signed-off-by: Sultan Alsawaf
> ---
>  mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
>  1 file changed, 33 insertions(+), 4 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..5d5fc04385b8 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
>   */
>  static void lock_zspage(struct zspage *zspage)
>  {
> -	struct page *page = get_first_page(zspage);
> +	struct page *curr_page, *page;
>
> -	do {
> -		lock_page(page);
> -	} while ((page = get_next_page(page)) != NULL);
> +	/*
> +	 * Pages we haven't locked yet can be migrated off the list while we're
> +	 * trying to lock them, so we need to be careful and only attempt to
> +	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
> +	 * may no longer belong to the zspage. This means that we may wait for
> +	 * the wrong page to unlock, so we must take a reference to the page
> +	 * prior to waiting for it to unlock outside migrate_read_lock().

I couldn't get the point here. Why couldn't we simply lock zspage
migration?
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..05ff2315b7b1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
 
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
 		list_del(&zspage->list);
+
+		migrate_read_lock(zspage);
 		lock_zspage(zspage);
+		migrate_read_unlock(zspage);
 
 		get_zspage_mapping(zspage, &class_idx, &fullness);
 		VM_BUG_ON(fullness != ZS_EMPTY);

> +	 */
> +	while (1) {
> +		migrate_read_lock(zspage);
> +		page = get_first_page(zspage);
> +		if (trylock_page(page))
> +			break;
> +		get_page(page);
> +		migrate_read_unlock(zspage);
> +		wait_on_page_locked(page);
> +		put_page(page);
> +	}
> +
> +	curr_page = page;
> +	while ((page = get_next_page(curr_page))) {
> +		if (trylock_page(page)) {
> +			curr_page = page;
> +		} else {
> +			get_page(page);
> +			migrate_read_unlock(zspage);
> +			wait_on_page_locked(page);
> +			put_page(page);
> +			migrate_read_lock(zspage);
> +		}
> +	}
> +	migrate_read_unlock(zspage);
>  }
>
>  static int zs_init_fs_context(struct fs_context *fc)
> --
> 2.36.0
>
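
For reference, one possible interleaving behind the races the patch
description refers to. This is an illustrative sketch only, not taken
from the original thread: the exact schedule is an assumption, and
migrate_write_lock()/create_page_chain() are zsmalloc migration-side
internals that the quoted hunks do not show.

    CPU 0: async_free_zspage()          CPU 1: page migration
    --------------------------          ---------------------
    lock_zspage(zspage)
      page = get_first_page(zspage)
                                        migrate_write_lock(zspage)
                                        create_page_chain()
                                          /* rebuilds the zspage page list,
                                             zeroing each next pointer */
                                        migrate_write_unlock(zspage)
      lock_page(page)
        /* page may no longer belong to
           the zspage, so page_private()
           is unsafe to dereference */
      page = get_next_page(page)
        /* may read a torn or spuriously
           NULL next pointer, leaving
           later pages unlocked */

With migrate_read_lock() held across each get_first_page()/get_next_page()
and trylock_page() pair, as in the quoted patch, CPU 1's
migrate_write_lock() cannot slip in between fetching a page pointer and
locking that page.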