Date: Thu, 7 Jul 2022 19:50:17 -0700 (PDT)
From: Hugh Dickins
To: "Matthew Wilcox (Oracle)"
cc: Andrew Morton, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-aio@kvack.org, linux-btrfs@vger.kernel.org, linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com, linux-mm@kvack.org, linux-xfs@vger.kernel.org, linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net, ocfs2-devel@oss.oracle.com, linux-mtd@lists.infradead.org, virtualization@lists.linux-foundation.org, Christoph Hellwig
Subject: Re: [PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
In-Reply-To: <20220608150249.3033815-8-willy@infradead.org>
Message-ID: <6e7599d1-8a5f-bf16-383c-febd753bd051@google.com>
References: <20220608150249.3033815-1-willy@infradead.org> <20220608150249.3033815-8-willy@infradead.org>

On Wed, 8 Jun 2022, Matthew Wilcox (Oracle) wrote:

> Now that both callers have a folio, convert this function to
> take a folio & rename it.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> Reviewed-by: Christoph Hellwig
> ---
>  mm/migrate.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2975f0c4d7cf..2e2f41572066 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -336,13 +336,18 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
>  }
>  #endif
>
> -static int expected_page_refs(struct address_space *mapping, struct page *page)
> +static int folio_expected_refs(struct address_space *mapping,
> +		struct folio *folio)
>  {
> -	int expected_count = 1;
> +	int refs = 1;
> +	if (!mapping)
> +		return refs;
>
> -	if (mapping)
> -		expected_count += compound_nr(page) + page_has_private(page);
> -	return expected_count;
> +	refs += folio_nr_pages(folio);
> +	if (folio_get_private(folio))
> +		refs++;
> +
> +	return refs;
>  }
>
>  /*
> @@ -359,7 +364,7 @@ int folio_migrate_mapping(struct address_space *mapping,
>  	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
>  	struct zone *oldzone, *newzone;
>  	int dirty;
> -	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
> +	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
>  	long nr = folio_nr_pages(folio);
>
>  	if (!mapping) {
> @@ -669,7 +674,7 @@ static int __buffer_migrate_folio(struct address_space *mapping,
>  		return migrate_page(mapping, &dst->page, &src->page, mode);
>
>  	/* Check whether page does not have extra refs before we do more work */
> -	expected_count = expected_page_refs(mapping, &src->page);
> +	expected_count = folio_expected_refs(mapping, src);
>  	if (folio_ref_count(src) != expected_count)
>  		return -EAGAIN;
>
> --
> 2.35.1

This commit (742e89c9e352d38df1a5825fe40c4de73a5d5f7a in pagecache.git
folio/for-next and recent linux-next) is dangerously wrong, at least for
swapcache, and probably for some others.

I say "dangerously" because it tells page migration a swapcache page is
safe for migration when it certainly is not. The fun that typically
ensues is "kernel BUG at include/linux/mm.h:750!", put_page_testzero()'s
VM_BUG_ON_PAGE(page_ref_count(page) == 0, page), if CONFIG_DEBUG_VM=y
(bisecting for that is what brought me to this). But I guess you might
get silent data corruption too.

I assumed at first that you'd changed the rules, and were now expecting
any subsystem that puts a non-zero value into folio->private to raise
its refcount; whereas the old convention (originating with buffer heads)
is that setting PG_private says an extra refcount has been taken, please
call try_to_release_page() to lower it, and maybe that will use data in
page->private to do so; but page->private itself is free for the
subsystem owning the page to use as it wishes, with no refcount
implication. I assumed, that is, that you had simply missed updating
swapcache.

So I got it working okay with the patch below; but before turning it
into a proper patch, I noticed that there were still plenty of other
places applying the test for PG_private: so now I think that maybe you
set out with the intention above, realized it wouldn't work, but got
distracted before cleaning up some places you'd already changed. And
the patch below now goes in the wrong direction. Or maybe you didn't
intend any change, but the PG_private test just got missed in a few
places. I don't know, and hope you remember; but current linux-next is
badly inconsistent.
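
To make the contrast I'm describing concrete, here is my rough sketch of
the helpers involved, paraphrased from memory rather than quoted (so
treat exact names and file placement as approximate): page_has_private()
tests the flag, which an owner only sets after taking a counted
reference, whereas folio_get_private() merely reads the pointer, which
swapcache uses for its swp_entry_t with no reference taken:

	/* Sketch, after include/linux/page-flags.h and pagemap.h */
	#define PAGE_FLAGS_PRIVATE \
		(1UL << PG_private | 1UL << PG_private_2)

	static inline int page_has_private(struct page *page)
	{
		/* flag set only together with a counted reference */
		return !!(page->flags & PAGE_FLAGS_PRIVATE);
	}

	static inline void *folio_get_private(struct folio *folio)
	{
		/* bare pointer: no refcount implication by itself */
		return folio->private;
	}

	static inline void folio_attach_private(struct folio *folio, void *data)
	{
		folio_get(folio);		/* the extra ref PG_private advertises */
		folio->private = data;
		folio_set_private(folio);
	}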
Over to you, thanks,
Hugh

--- a/mm/migrate.c	2022-07-06 14:24:44.499941975 -0700
+++ b/mm/migrate.c	2022-07-06 15:49:25.000000000 -0700
@@ -351,6 +351,10 @@ unlock:
 }
 #endif
 
+static inline bool folio_counted_private(struct folio *folio)
+{
+	return !folio_test_swapcache(folio) && folio_get_private(folio);
+}
 static int folio_expected_refs(struct address_space *mapping,
 		struct folio *folio)
 {
@@ -359,7 +363,7 @@ static int folio_expected_refs(struct ad
 		return refs;
 
 	refs += folio_nr_pages(folio);
-	if (folio_get_private(folio))
+	if (folio_counted_private(folio))
 		refs++;
 
 	return refs;
--- a/mm/vmscan.c	2022-07-06 14:24:44.531942217 -0700
+++ b/mm/vmscan.c	2022-07-06 15:49:37.000000000 -0700
@@ -2494,6 +2494,10 @@ shrink_inactive_list(unsigned long nr_to
  * The downside is that we have to touch folio->_refcount against each folio.
  * But we had to alter folio->flags anyway.
  */
+static inline bool folio_counted_private(struct folio *folio)
+{
+	return !folio_test_swapcache(folio) && folio_get_private(folio);
+}
 static void shrink_active_list(unsigned long nr_to_scan,
 			       struct lruvec *lruvec,
 			       struct scan_control *sc,
@@ -2538,8 +2542,9 @@ static void shrink_active_list(unsigned
 	}
 
 	if (unlikely(buffer_heads_over_limit)) {
-		if (folio_get_private(folio) && folio_trylock(folio)) {
-			if (folio_get_private(folio))
+		if (folio_counted_private(folio) &&
+		    folio_trylock(folio)) {
+			if (folio_counted_private(folio))
 				filemap_release_folio(folio, 0);
 			folio_unlock(folio);
 		}
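
For anyone following along, the arithmetic the helper restores, as I
understand it (my illustration only, not part of the patch), for an
order-0 swapcache folio with no other users:

	int refs = 1;				/* the migrating caller's own reference */
	refs += folio_nr_pages(folio);		/* swapcache's per-page references */
	if (folio_counted_private(folio))	/* false here: swapcache stores the
						 * swp_entry_t in ->private with no
						 * counted reference behind it */
		refs++;
	/*
	 * Testing folio_get_private() instead adds a phantom +1, so
	 * folio_ref_freeze(folio, expected_count) can succeed at exactly
	 * the moment some other user still holds one real reference; that
	 * user's later put_page() then trips the VM_BUG_ON above.
	 */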