From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sultan Alsawaf, Minchan Kim,
    Nitin Gupta, Sergey Senozhatsky, Andrew Morton
Subject: [PATCH 5.4 22/34] zsmalloc: fix races between asynchronous zspage free and page migration
Date: Fri, 3 Jun 2022 19:43:18 +0200
Message-Id: <20220603173816.638262170@linuxfoundation.org>
In-Reply-To: <20220603173815.990072516@linuxfoundation.org>
References: <20220603173815.990072516@linuxfoundation.org>

From: Sultan Alsawaf

commit 2505a981114dcb715f8977b8433f7540854851d8 upstream.

The asynchronous zspage free worker tries to lock a zspage's entire page
list without defending against page migration.
Since pages which haven't yet been locked can concurrently migrate off the
zspage page list while lock_zspage() churns away, lock_zspage() can suffer
from a few different lethal races.  It can lock a page which no longer
belongs to the zspage and unsafely dereference page_private(), it can
unsafely dereference a torn pointer to the next page (since there's a data
race), and it can observe a spurious NULL pointer to the next page and thus
not lock all of the zspage's pages (since a single page migration will
reconstruct the entire page list, and create_page_chain() unconditionally
zeroes out each list pointer in the process).

Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
with page migration.

Link: https://lkml.kernel.org/r/20220509024703.243847-1-sultan@kerneltoast.com
Fixes: 77ff465799c602 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not return -EBUSY if zspage is not inuse")
Signed-off-by: Sultan Alsawaf
Acked-by: Minchan Kim
Cc: Nitin Gupta
Cc: Sergey Senozhatsky
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/zsmalloc.c |   37 +++++++++++++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 4 deletions(-)

--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1748,11 +1748,40 @@ static enum fullness_group putback_zspag
  */
 static void lock_zspage(struct zspage *zspage)
 {
-	struct page *page = get_first_page(zspage);
+	struct page *curr_page, *page;
 
-	do {
-		lock_page(page);
-	} while ((page = get_next_page(page)) != NULL);
+	/*
+	 * Pages we haven't locked yet can be migrated off the list while we're
+	 * trying to lock them, so we need to be careful and only attempt to
+	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * may no longer belong to the zspage. This means that we may wait for
+	 * the wrong page to unlock, so we must take a reference to the page
+	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 */
+	while (1) {
+		migrate_read_lock(zspage);
+		page = get_first_page(zspage);
+		if (trylock_page(page))
+			break;
+		get_page(page);
+		migrate_read_unlock(zspage);
+		wait_on_page_locked(page);
+		put_page(page);
+	}
+
+	curr_page = page;
+	while ((page = get_next_page(curr_page))) {
+		if (trylock_page(page)) {
+			curr_page = page;
+		} else {
+			get_page(page);
+			migrate_read_unlock(zspage);
+			wait_on_page_locked(page);
+			put_page(page);
+			migrate_read_lock(zspage);
+		}
+	}
+	migrate_read_unlock(zspage);
 }
 
 static int zs_init_fs_context(struct fs_context *fc)
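
Not part of the patch, purely as illustration: below is a minimal userspace
sketch of the locking pattern the new lock_zspage() follows, using
hypothetical names (struct item, struct chain, lock_chain(),
wait_item_unlocked()) and pthreads in place of the kernel's
migrate_read_lock()/trylock_page()/get_page() primitives.  Each entry is
trylocked only while a read lock pins the chain layout; on contention the
code takes a reference, drops the read lock, waits for the entry to become
unlocked, and then retries.

#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct item {
	pthread_mutex_t lock;	/* per-item lock, standing in for the page lock */
	atomic_int refcount;	/* standing in for get_page()/put_page()        */
	struct item *next;	/* next entry, standing in for get_next_page()  */
};

struct chain {
	struct item *first;	/* standing in for get_first_page(zspage)       */
	pthread_rwlock_t lock;	/* standing in for the zspage migration rwlock  */
};

/* Wait until an item is unlocked without holding it afterwards, a rough
 * analogue of wait_on_page_locked(). */
static void wait_item_unlocked(struct item *it)
{
	pthread_mutex_lock(&it->lock);
	pthread_mutex_unlock(&it->lock);
}

/* Lock every item in the chain while tolerating concurrent rewrites of the
 * chain, which are only allowed under a write lock on chain->lock. */
static void lock_chain(struct chain *c)
{
	struct item *curr, *it;

	/* Lock the first item, retrying if it is contended: take a reference,
	 * drop the read lock, wait outside it, then start over. */
	for (;;) {
		pthread_rwlock_rdlock(&c->lock);
		it = c->first;
		if (pthread_mutex_trylock(&it->lock) == 0)
			break;
		atomic_fetch_add(&it->refcount, 1);
		pthread_rwlock_unlock(&c->lock);
		wait_item_unlocked(it);
		atomic_fetch_sub(&it->refcount, 1);
	}

	/* Walk the rest of the chain the same way.  After waiting we retake
	 * the read lock and re-read curr->next, since the chain may have been
	 * rewritten while the read lock was dropped. */
	curr = it;
	while ((it = curr->next) != NULL) {
		if (pthread_mutex_trylock(&it->lock) == 0) {
			curr = it;
		} else {
			atomic_fetch_add(&it->refcount, 1);
			pthread_rwlock_unlock(&c->lock);
			wait_item_unlocked(it);
			atomic_fetch_sub(&it->refcount, 1);
			pthread_rwlock_rdlock(&c->lock);
		}
	}
	pthread_rwlock_unlock(&c->lock);
}

int main(void)
{
	static struct item b = { PTHREAD_MUTEX_INITIALIZER, 0, NULL };
	static struct item a = { PTHREAD_MUTEX_INITIALIZER, 0, &b };
	struct chain c = { &a, PTHREAD_RWLOCK_INITIALIZER };

	lock_chain(&c);			/* both a.lock and b.lock are now held */
	pthread_mutex_unlock(&a.lock);
	pthread_mutex_unlock(&b.lock);
	return 0;
}

The point mirrored from the patch is that the potentially long wait happens
outside the read lock, with a reference held, so a waiter neither blocks
writers (page migration, in the kernel case) nor waits on an entry that
could otherwise disappear underneath it.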