Subject: Re: [PATCH v20 08/20] mm: page_idle_get_page() does not need lru_lock
To: Matthew Wilcox
Cc: Johannes Weiner, akpm@linux-foundation.org, mgorman@techsingularity.net,
    tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru,
    daniel.m.jordan@oracle.com, lkp@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
    iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com, Vlastimil Babka, Minchan Kim
From: Alex Shi
Message-ID: <1e8f0162-cf2e-03eb-e7e0-ccc9f6a3eaf2@linux.alibaba.com>
Date: Thu, 5 Nov 2020 13:03:18 +0800
In-Reply-To: <20201105045702.GI17076@casper.infradead.org>
References: <1603968305-8026-1-git-send-email-alex.shi@linux.alibaba.com>
 <1603968305-8026-9-git-send-email-alex.shi@linux.alibaba.com>
 <20201102144110.GB724984@cmpxchg.org>
 <20201102144927.GN27442@casper.infradead.org>
 <20201102202003.GA740958@cmpxchg.org>
 <20201104174603.GB744831@cmpxchg.org>
 <6eea82d8-e406-06ee-2333-eb6e2f1944e5@linux.alibaba.com>
 <20201105045702.GI17076@casper.infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/11/5 12:57 PM, Matthew Wilcox wrote:
> On Thu, Nov 05, 2020 at 12:52:05PM +0800, Alex Shi wrote:
>> @@ -1054,8 +1054,27 @@ static void __page_set_anon_rmap(struct page *page,
>>  	if (!exclusive)
>>  		anon_vma = anon_vma->root;
>>  
>> +	/*
>> +	 * Without the WRITE_ONCE here, the following scenario may happen
>> +	 * due to store reordering.
>> +	 *
>> +	 *    CPU 0                           CPU 1
>> +	 *
>> +	 * do_anonymous_page                page_idle_clear_pte_refs
>> +	 *   __page_set_anon_rmap
>> +	 *     page->mapping = anon_vma + PAGE_MAPPING_ANON
>> +	 *   lru_cache_add_inactive_or_unevictable()
>> +	 *     SetPageLRU(page)
>> +	 *                                    rmap_walk
>> +	 *                                      if PageAnon(page)
>> +	 *
>> +	 * The SetPageLRU may be reordered before the page->mapping store,
>> +	 * and page->mapping may be seen as anon_vma without the anon bit,
>> +	 * so rmap_walk may go to rmap_walk_file() for an anon page.
>> +	 */
>> +
>>  	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
>> -	page->mapping = (struct address_space *) anon_vma;
>> +	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
>>  	page->index = linear_page_index(vma, address);
>>  }
>> 
> 
> I don't like these verbose comments with detailed descriptions in
> the source code.  They're fine in changelogs, but they clutter the
> code, and they get outdated really quickly.  My preference is for
> something more brief:
> 
> 	/*
> 	 * Prevent page->mapping from pointing to an anon_vma without
> 	 * the PAGE_MAPPING_ANON bit set.  This could happen if the
> 	 * compiler stores anon_vma and then adds PAGE_MAPPING_ANON to it.
> 	 */
> 

Yes, it's reasonable. So is the following fine?

From f166f0d5df350c5eae1218456b9e6e1bd43434e7 Mon Sep 17 00:00:00 2001
From: Alex Shi
Date: Thu, 5 Nov 2020 11:38:24 +0800
Subject: [PATCH] mm/rmap: stop store reordering issue on page->mapping

Hugh Dickins and Minchan Kim observed a long-standing issue that was
discussed here, but the fix mentioned there was never applied:
https://lore.kernel.org/lkml/20150504031722.GA2768@blaptop/

Store reordering may cause a problem in the following scenario:

	CPU 0					CPU 1

do_anonymous_page
  page_add_new_anon_rmap()
    page->mapping = anon_vma + PAGE_MAPPING_ANON
  lru_cache_add_inactive_or_unevictable()
    spin_lock(lruvec->lock)
    SetPageLRU()
    spin_unlock(lruvec->lock)
					/* idle tracking judged it as an
					 * LRU page, so pass the page to
					 * page_idle_clear_pte_refs
					 */
					page_idle_clear_pte_refs
					  rmap_walk
					    if PageAnon(page)

Johannes gave a detailed example of how the store reordering could cause
trouble: the concern is that SetPageLRU may be reordered before the
page->mapping store, so CPU 1 could observe one of the following at
page->mapping after observing PageLRU set on the page:
1. anon_vma + PAGE_MAPPING_ANON

   That's the in-order scenario and is fine.

2. NULL

   That's possible if the page->mapping store gets reordered to occur
   after SetPageLRU.  That's fine too because we check for it.

3. anon_vma without the PAGE_MAPPING_ANON bit

   That would be a problem and could lead to all kinds of undesirable
   behavior, including crashes and data corruption.

   Is it possible?  AFAICT the compiler is allowed to tear the store to
   page->mapping and I don't see anything that would prevent it.

   That said, I also don't see how the reader testing PageLRU under the
   lru_lock would prevent that in the first place.  AFAICT we need that
   WRITE_ONCE() around the page->mapping assignment.

Signed-off-by: Alex Shi
Cc: Johannes Weiner
Cc: Andrew Morton
Cc: Hugh Dickins
Cc: Matthew Wilcox
Cc: Minchan Kim
Cc: Vladimir Davydov
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 mm/rmap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index c050dab2ae65..73788505aa0a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1054,8 +1054,13 @@ static void __page_set_anon_rmap(struct page *page,
 	if (!exclusive)
 		anon_vma = anon_vma->root;
 
+	/*
+	 * Prevent page->mapping from pointing to an anon_vma without
+	 * the PAGE_MAPPING_ANON bit set.  This could happen if the
+	 * compiler stores anon_vma and then adds PAGE_MAPPING_ANON to it.
+	 */
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
-	page->mapping = (struct address_space *) anon_vma;
+	WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
 	page->index = linear_page_index(vma, address);
 }
 
-- 
1.8.3.1
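
To make the store-tearing concern above concrete, here is a minimal
user-space sketch (illustration only, not kernel code) of the
publish/observe pattern the patch enforces.  The WRITE_ONCE()/READ_ONCE()
macros below are simplified stand-ins for the kernel helpers, and
struct page_stub, struct anon_vma_stub, set_anon_rmap() and observe()
are hypothetical names invented for this example.

/*
 * Illustration only -- not kernel code.  Simplified stand-ins for the
 * kernel's WRITE_ONCE()/READ_ONCE(); 'typeof' is a GCC/Clang extension.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_MAPPING_ANON	0x1UL

#define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))
#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))

struct anon_vma_stub { int dummy; };	/* hypothetical stand-in */

struct page_stub {			/* hypothetical stand-in */
	unsigned long mapping;		/* pointer value, low bit = anon tag */
};

/* Writer side: mirrors the shape of __page_set_anon_rmap() after the patch. */
static void set_anon_rmap(struct page_stub *page, struct anon_vma_stub *av)
{
	/*
	 * Compute the tagged value first, then publish it with one store.
	 * Without WRITE_ONCE() the compiler may tear the store (e.g. write
	 * the bare pointer, then add the tag), exposing an untagged
	 * anon_vma to a concurrent reader -- Johannes' case 3.
	 */
	unsigned long val = (unsigned long)av + PAGE_MAPPING_ANON;

	WRITE_ONCE(page->mapping, val);
}

/* Reader side: roughly what rmap_walk()'s PageAnon() check relies on. */
static void observe(const char *when, struct page_stub *page)
{
	unsigned long m = READ_ONCE(page->mapping);

	if (!m) {
		printf("%s: mapping not published yet, skip (case 2)\n", when);
		return;
	}
	/* Case 1: fully tagged pointer.  Case 3 must never be observed. */
	assert(m & PAGE_MAPPING_ANON);
	printf("%s: anon mapping observed: %#lx (case 1)\n", when, m);
}

int main(void)
{
	static struct anon_vma_stub av;
	struct page_stub page = { 0 };

	/* In the kernel the two sides run on different CPUs; serialized here. */
	observe("before", &page);
	set_anon_rmap(&page, &av);
	observe("after", &page);
	return 0;
}

With the tagged value computed up front and published in a single
WRITE_ONCE(), a concurrent reader can only see NULL or the fully tagged
pointer (cases 1 and 2 above); compiler tearing of this particular store
can no longer produce the problematic case 3.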