Subject: Re: [PATCH] mm/memcg: remove useless check on page->mem_cgroup
To: Johannes Weiner
Cc: Michal Hocko, Vladimir Davydov, Andrew Morton, cgroups@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1596166480-22814-1-git-send-email-alex.shi@linux.alibaba.com>
 <20200731151655.GB491801@cmpxchg.org>
From: Alex Shi <alex.shi@linux.alibaba.com>
Message-ID: <9338716f-ca0e-057f-8d94-03e2b3f70281@linux.alibaba.com>
Date: Sat, 1 Aug 2020 11:58:41 +0800
In-Reply-To: <20200731151655.GB491801@cmpxchg.org>

On 2020/7/31 11:16 PM, Johannes Weiner wrote:
>>  	if (!entry.val) {
>>  		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
> Uncharged readahead pages are gone, but I'm not 100% sure uncharged
> pages in general are gone. ISTR that the !page->mem_cgroup check in
> mem_cgroup_uncharge() prevented a crash - although that is of course a
> much broader interface, whereas the ones you change should only apply
> to LRU pages (which are hopefully all charged).
> 
> Nevertheless, to avoid unnecessary crashes if we discover that we've
> been wrong, how about leaving the branches for now, but adding a (new)
> VM_WARN_ON_ONCE_PAGE() to them?

Right, let's see whether any other unexpected cases show up, and then act
on them. So here is the patch:

From 28893cf8e55b98665cce58c0ba6d54aeafb63a62 Mon Sep 17 00:00:00 2001
From: Alex Shi
Date: Sat, 1 Aug 2020 10:43:55 +0800
Subject: [PATCH] mm/memcg: warning on !memcg after readahead page charged

Since readahead pages are now charged to a memcg as well, in theory we
no longer need these !memcg checks. Before removing them all for good,
add a warning for the unexpected !memcg case.

Signed-off-by: Alex Shi
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/mmdebug.h |  8 ++++++++
 mm/memcontrol.c         | 15 ++++++++-------
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 2ad72d2c8cc5..639e98a3384e 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -37,6 +37,13 @@
 			BUG();						\
 		}							\
 	} while (0)
+#define VM_WARN_ON_ONCE_PAGE(cond, page)	\
+	do {					\
+		if (unlikely(cond)) {		\
+			dump_page(page, "VM_WARN_ON_ONCE_PAGE(" __stringify(cond)")");\
+			WARN_ON_ONCE(cond);	\
+		}				\
+	} while (0)
 #define VM_WARN_ON(cond) (void)WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
 #define VM_WARN_ONCE(cond, format...) (void)WARN_ONCE(cond, format)
@@ -48,6 +55,7 @@
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_PAGE(cond, page)  BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e84c2b5596f2..0174c31f6491 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1322,10 +1322,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 	}
 
 	memcg = page->mem_cgroup;
-	/*
-	 * Swapcache readahead pages are added to the LRU - and
-	 * possibly migrated - before they are charged.
-	 */
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		memcg = root_mem_cgroup;
 
@@ -6897,8 +6895,9 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	if (newpage->mem_cgroup)
 		return;
 
-	/* Swapcache readahead pages can get replaced before being charged */
 	memcg = oldpage->mem_cgroup;
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
 	if (!memcg)
 		return;
 
@@ -7095,7 +7094,8 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 
 	memcg = page->mem_cgroup;
 
-	/* Readahead page, never charged */
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return;
 
@@ -7159,7 +7159,8 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 
 	memcg = page->mem_cgroup;
 
-	/* Readahead page, never charged */
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return 0;
 
-- 
1.8.3.1
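
As an aside, for readers less familiar with the VM_WARN_* family: the macro
added above dumps the offending page's state via dump_page() on every hit,
while WARN_ON_ONCE() limits the loud warning/backtrace to the first hit at
each call site. The following is only a minimal userspace sketch of that
shape, not kernel code; struct fake_page, dump_fake_page() and
FAKE_WARN_ON_ONCE_PAGE() are invented for illustration.

/* Userspace illustration only -- not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	unsigned long flags;
	void *mem_cgroup;
};

/* Stand-in for dump_page(): print whatever state we have about the page. */
static void dump_fake_page(struct fake_page *page, const char *reason)
{
	fprintf(stderr, "page %p flags %#lx mem_cgroup %p: %s\n",
		(void *)page, page->flags, page->mem_cgroup, reason);
}

/*
 * Same shape as VM_WARN_ON_ONCE_PAGE() above: dump the page every time
 * the condition fires, but emit the louder warning only once per call site
 * (the static flag belongs to each macro expansion).
 */
#define FAKE_WARN_ON_ONCE_PAGE(cond, page)				\
	do {								\
		static bool __warned;					\
		if (cond) {						\
			dump_fake_page(page, #cond);			\
			if (!__warned) {				\
				__warned = true;			\
				fprintf(stderr, "WARNING (once): %s\n",	\
					#cond);				\
			}						\
		}							\
	} while (0)

int main(void)
{
	struct fake_page p = { .flags = 0x1, .mem_cgroup = NULL };

	/* First hit: dumps the page and prints the one-time warning. */
	FAKE_WARN_ON_ONCE_PAGE(!p.mem_cgroup, &p);
	/* Second hit: dumps the page again, but the warning stays silent. */
	FAKE_WARN_ON_ONCE_PAGE(!p.mem_cgroup, &p);
	return 0;
}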