Date: Tue, 10 Nov 2020 23:27:54 -0800 (PST)
From: Hugh Dickins <hughd@google.com>
To: Johannes Weiner <hannes@cmpxchg.org>
cc: Alex Shi <alex.shi@linux.alibaba.com>, akpm@linux-foundation.org,
    mgorman@techsingularity.net, tj@kernel.org, hughd@google.com,
    khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    willy@infradead.org, lkp@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    shakeelb@google.com, iamjoonsoo.kim@lge.com,
    richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com,
    Vlastimil Babka <vbabka@suse.cz>, Minchan Kim <minchan@kernel.org>
Subject: Re: [PATCH v20 08/20] mm: page_idle_get_page() does not need lru_lock
In-Reply-To: <20201102144110.GB724984@cmpxchg.org>
References: <1603968305-8026-1-git-send-email-alex.shi@linux.alibaba.com>
 <1603968305-8026-9-git-send-email-alex.shi@linux.alibaba.com>
 <20201102144110.GB724984@cmpxchg.org>

On Mon, 2 Nov 2020, Johannes Weiner wrote:
> On Thu, Oct 29, 2020 at 06:44:53PM +0800, Alex Shi wrote:
> > From: Hugh Dickins <hughd@google.com>
> >
> > It is necessary for page_idle_get_page() to recheck PageLRU() after
> > get_page_unless_zero(), but holding lru_lock around that serves no
> > useful purpose, and adds to lru_lock contention: delete it.
> >
> > See https://lore.kernel.org/lkml/20150504031722.GA2768@blaptop for the
> > discussion that led to lru_lock there; but __page_set_anon_rmap() now
> > uses WRITE_ONCE(),
>
> That doesn't seem to be the case in Linus's or Andrew's tree. Am I
> missing a dependent patch series?

Sorry, I was out of action, then slower than ever, for a while.
Many thanks for calling out my falsehood there, Johannes.

What led me to write that? It has baffled me, but at last I see: this
patch to page_idle_get_page() was 0002 in my lru_lock patchset against
v5.3 last year, and 0001 was the patch which made it true. Then when I
checked against mainline, I must have got confused by the similar
WRITE_ONCE in page_move_anon_rmap().

That patch is appended below, though not rediffed against mainline.
And let's not hold up Alex's set for the rest of it: it is all
theoretical until the kernel gets built with a suitably malicious
compiler; I'll follow up with a fresh version of the below once his
set is safely in.
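To make the theoretical worry concrete before the patch itself, here
is a minimal sketch (an illustration using the kernel's macro names,
not a quote from any tree) of the store-tearing scenario being guarded
against. page->mapping packs an anon_vma pointer together with
PAGE_MAPPING_ANON in its low bits, and nothing obliges the compiler to
emit a plain assignment of that composed value as a single store; a
lockless reader such as PageAnon() could then, in principle, observe
the flag bit without the matching pointer bits, or vice versa.

	/* Writer: compose pointer and flag, store them in one access */
	WRITE_ONCE(page->mapping,
		   (struct address_space *)((unsigned long)anon_vma |
					    PAGE_MAPPING_ANON));

	/*
	 * Reader: load once, then test the flag and extract the pointer
	 * from that same snapshot, never from a second racy load.
	 */
	mapping = (unsigned long)READ_ONCE(page->mapping);
	if ((mapping & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON)
		anon_vma = (struct anon_vma *)(mapping & ~PAGE_MAPPING_FLAGS);

WRITE_ONCE() and READ_ONCE() force single full-width accesses, taking
that freedom away from the compiler.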
From a1abcbc2aac70c6ba47b8991992bb85b86b4a160 Mon Sep 17 00:00:00 2001
From: Hugh Dickins <hughd@google.com>
Date: Thu, 22 Aug 2019 15:49:44 -0700
Subject: [PATCH 1/9] mm: more WRITE_ONCE and READ_ONCE on page->mapping

v4.2 commit 414e2fb8ce5a ("rmap: fix theoretical race between do_wp_page
and shrink_active_list") added a WRITE_ONCE() where page_move_anon_rmap()
composes page->mapping from anon_vma pointer and PAGE_MAPPING_ANON.

Now do the same where __page_set_anon_rmap() composes page->mapping in
the same way, where compaction.c applies PAGE_MAPPING_MOVABLE, and where
ksm.c applies PAGE_MAPPING_KSM.

rmap.c already uses READ_ONCE(page->mapping), but util.c should too:
add READ_ONCE() in page_rmapping(), page_anon_vma() and page_mapping().
Delete the then-unused helper __page_rmapping().

I doubt that this commit fixes anything, but it's harmless and
unintrusive, and makes reasoning about page mapping flags easier.
What if a compiler implements "page->mapping = mapping" in other places
by, say, first assigning the odd bits of mapping, then adding in the
even bits? Then we shall not build the kernel with such a compiler.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
---
 mm/compaction.c |  7 ++++---
 mm/ksm.c        |  2 +-
 mm/rmap.c       |  7 ++++++-
 mm/util.c       | 24 ++++++++++--------------
 4 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 952dc2fb24e5..c405f4362624 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -113,7 +113,8 @@ void __SetPageMovable(struct page *page, struct address_space *mapping)
 {
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE((unsigned long)mapping & PAGE_MAPPING_MOVABLE, page);
-	page->mapping = (void *)((unsigned long)mapping | PAGE_MAPPING_MOVABLE);
+	WRITE_ONCE(page->mapping,
+		   (unsigned long)mapping | PAGE_MAPPING_MOVABLE);
 }
 EXPORT_SYMBOL(__SetPageMovable);
 
@@ -126,8 +127,8 @@ void __ClearPageMovable(struct page *page)
 	 * flag so that VM can catch up released page by driver after isolation.
 	 * With it, VM migration doesn't try to put it back.
 	 */
-	page->mapping = (void *)((unsigned long)page->mapping &
-				PAGE_MAPPING_MOVABLE);
+	WRITE_ONCE(page->mapping,
+		   (unsigned long)page->mapping & PAGE_MAPPING_MOVABLE);
 }
 EXPORT_SYMBOL(__ClearPageMovable);
 
diff --git a/mm/ksm.c b/mm/ksm.c
index 3dc4346411e4..426b6a40ea41 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -865,7 +865,7 @@ static inline struct stable_node *page_stable_node(struct page *page)
 static inline void set_page_stable_node(struct page *page,
 					struct stable_node *stable_node)
 {
-	page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
+	WRITE_ONCE(page->mapping, (unsigned long)stable_node | PAGE_MAPPING_KSM);
 }
 
 #ifdef CONFIG_SYSFS
diff --git a/mm/rmap.c b/mm/rmap.c
index 003377e24232..9480df437edc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1044,7 +1044,12 @@ static void __page_set_anon_rmap(struct page *page,
 		anon_vma = anon_vma->root;
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
-	page->mapping = (struct address_space *) anon_vma;
+	/*
+	 * Ensure that anon_vma and the PAGE_MAPPING_ANON bit are written
+	 * simultaneously, so a concurrent reader (eg page_referenced()'s
+	 * PageAnon()) will not see one without the other.
+	 */
+	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
 	page->index = linear_page_index(vma, address);
 }
 
diff --git a/mm/util.c b/mm/util.c
index e6351a80f248..09b9fcbedac3 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -489,21 +489,14 @@ void kvfree(const void *addr)
 }
 EXPORT_SYMBOL(kvfree);
 
-static inline void *__page_rmapping(struct page *page)
-{
-	unsigned long mapping;
-
-	mapping = (unsigned long)page->mapping;
-	mapping &= ~PAGE_MAPPING_FLAGS;
-
-	return (void *)mapping;
-}
-
 /* Neutral page->mapping pointer to address_space or anon_vma or other */
 void *page_rmapping(struct page *page)
 {
+	unsigned long mapping;
+
 	page = compound_head(page);
-	return __page_rmapping(page);
+	mapping = (unsigned long)READ_ONCE(page->mapping);
+	return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
 }
 
 /*
@@ -534,10 +527,11 @@ struct anon_vma *page_anon_vma(struct page *page)
 	unsigned long mapping;
 
 	page = compound_head(page);
-	mapping = (unsigned long)page->mapping;
+	mapping = (unsigned long)READ_ONCE(page->mapping);
+	/* Return NULL if file or PageMovable or PageKsm */
 	if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
 		return NULL;
-	return __page_rmapping(page);
+	return (struct anon_vma *)(mapping & ~PAGE_MAPPING_FLAGS);
 }
 
 struct address_space *page_mapping(struct page *page)
@@ -557,10 +551,12 @@ struct address_space *page_mapping(struct page *page)
 		return swap_address_space(entry);
 	}
 
-	mapping = page->mapping;
+	mapping = READ_ONCE(page->mapping);
+	/* Return NULL if PageAnon (including PageKsm) */
 	if ((unsigned long)mapping & PAGE_MAPPING_ANON)
 		return NULL;
 
+	/* Return struct address_space pointer if file or PageMovable */
 	return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
 }
 EXPORT_SYMBOL(page_mapping);
-- 
2.23.0.187.g17f5b7556c-goog
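P.S. For the $SUBJECT patch itself, whose code is not quoted above:
with the lru_lock round trip deleted, page_idle_get_page() reduces to
roughly the following shape (a sketch of the intended result, not a
verbatim quote of the v20 series). get_page_unless_zero() pins the
page, and the bare PageLRU() recheck catches a page that was freed and
reused in the meantime:

static struct page *page_idle_get_page(unsigned long pfn)
{
	struct page *page = pfn_to_online_page(pfn);

	if (!page || !PageLRU(page) ||
	    !get_page_unless_zero(page))
		return NULL;

	/*
	 * Recheck after pinning: the page may have been freed and
	 * reused between the first PageLRU() test and the successful
	 * get_page_unless_zero(); no lru_lock needed for this.
	 */
	if (unlikely(!PageLRU(page))) {
		put_page(page);
		page = NULL;
	}
	return page;
}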