From: Muchun Song <songmuchun@bytedance.com>
To: akpm@linux-foundation.org, hannes@cmpxchg.org, longman@redhat.com,
	mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com
Cc: cgroups@vger.kernel.org, duanxiongchun@bytedance.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 11/11] mm: lru: use lruvec lock to serialize memcg changes
Date: Tue, 21 Jun 2022 20:56:58 +0800
Message-Id: <20220621125658.64935-12-songmuchun@bytedance.com>
In-Reply-To: <20220621125658.64935-1-songmuchun@bytedance.com>
References: <20220621125658.64935-1-songmuchun@bytedance.com>

As described by commit fc574c23558c ("mm/swap.c: serialize memcg changes
in pagevec_lru_move_fn"), TestClearPageLRU() aims to serialize
mem_cgroup_move_account() during pagevec_lru_move_fn(). Now that
folio_lruvec_lock*() can detect whether the memcg of a page has been
changed, the lruvec lock itself can serialize mem_cgroup_move_account()
during pagevec_lru_move_fn(). This is therefore a partial revert of
commit fc574c23558c ("mm/swap.c: serialize memcg changes in
pagevec_lru_move_fn"). Since pagevec_lru_move_fn() is much hotter than
mem_cgroup_move_account(), removing an atomic operation from it is an
optimization. This change also avoids dirtying the cacheline of a page
that is not on the LRU.
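For reference, the retry pattern this relies on can be condensed as
follows. This is only an illustrative sketch of how folio_lruvec_lock()
reads after this patch; the retry label and RCU bracketing are implied
by the hunk below rather than shown in it, and the _irq/_irqsave
variants follow the same shape:

struct lruvec *folio_lruvec_lock(struct folio *folio)
{
	struct lruvec *lruvec;

	rcu_read_lock();
retry:
	lruvec = folio_lruvec(folio);
	spin_lock(&lruvec->lru_lock);
	/*
	 * The folio's memcg may have changed between folio_lruvec() and
	 * taking the lock, in which case we hold the lock of a stale
	 * lruvec and must retry against the new one.
	 */
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}
	/* folio_memcg(folio) is now stable until the lock is released. */
	rcu_read_unlock();
	return lruvec;
}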
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 34 ++++++++++++++++++++++++++++++++++
 mm/swap.c       | 32 +++++++++++++++-----------------
 mm/vmscan.c     | 16 +++++++---------
 3 files changed, 56 insertions(+), 26 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 803dbdf5f233..85adc43c5a25 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1330,10 +1330,39 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
 
+	/*
+	 * The memcg of the page can be changed by any of the following routines:
+	 *
+	 * 1) mem_cgroup_move_account() or
+	 * 2) memcg_reparent_objcgs()
+	 *
+	 * A possible bad scenario:
+	 *
+	 * CPU0:                CPU1:                CPU2:
+	 * lruvec = folio_lruvec()
+	 *
+	 *                      if (!isolate_lru_page())
+	 *                              mem_cgroup_move_account()
+	 *
+	 *                                           memcg_reparent_objcgs()
+	 *
+	 * spin_lock(&lruvec->lru_lock)
+	 *                ^^^^^^
+	 *              wrong lock
+	 *
+	 * Either CPU1 or CPU2 can change the page's memcg, so we need to
+	 * check whether it has changed and, if so, reacquire the new
+	 * lruvec's lock.
+	 */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock(&lruvec->lru_lock);
 		goto retry;
 	}
+
+	/*
+	 * When we reach here, it means that folio_memcg(folio) is
+	 * stable.
+	 */
 	rcu_read_unlock();
 
 	return lruvec;
@@ -1361,6 +1390,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irq(&lruvec->lru_lock);
 		goto retry;
@@ -1394,6 +1424,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
 		goto retry;
@@ -5809,7 +5840,10 @@ static int mem_cgroup_move_account(struct page *page,
 	obj_cgroup_put(rcu_dereference(from->objcg));
 	rcu_read_unlock();
 
+	/* See the comments in folio_lruvec_lock(). */
+	spin_lock(&from_vec->lru_lock);
 	folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);
+	spin_unlock(&from_vec->lru_lock);
 
 	__folio_memcg_unlock(from);
 
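The mm/swap.c changes below all follow one shape: folio_batch_move_lru()
now takes the lruvec lock unconditionally via
folio_lruvec_relock_irqsave(), and each move_fn callback revalidates the
folio under that lock instead of relying on TestClearPageLRU() up front.
As a condensed, hypothetical illustration (example_move_fn is not a
function in this patch):

static void example_move_fn(struct lruvec *lruvec, struct folio *folio)
{
	/*
	 * lruvec->lru_lock is held and is known to match
	 * folio_memcg(folio), so the folio cannot be moved to another
	 * memcg under us. A folio that was isolated in the meantime
	 * simply fails this check and is skipped, rather than being
	 * excluded with an atomic flag operation beforehand.
	 */
	if (!folio_test_lru(folio))
		return;

	/*
	 * ... move the folio, e.g. lruvec_del_folio() followed by
	 * lruvec_add_folio_tail(), as in lru_move_tail_fn() below ...
	 */
}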
diff --git a/mm/swap.c b/mm/swap.c
index 987dcbd93ffa..0fc59409e27d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -196,6 +196,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
+	folio_set_lru(folio);
 	/*
 	 * Is an smp_mb__after_atomic() still required here, before
 	 * folio_evictable() tests the mlocked flag, to rule out the possibility
@@ -238,14 +239,8 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	for (i = 0; i < folio_batch_count(fbatch); i++) {
 		struct folio *folio = fbatch->folios[i];
 
-		/* block memcg migration while the folio moves between lru */
-		if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
-			continue;
-
 		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
 		move_fn(lruvec, folio);
-
-		folio_set_lru(folio);
 	}
 
 	if (lruvec)
@@ -265,7 +260,7 @@ static void folio_batch_add_and_move(struct folio_batch *fbatch,
 
 static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (!folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_unevictable(folio)) {
 		lruvec_del_folio(lruvec, folio);
 		folio_clear_active(folio);
 		lruvec_add_folio_tail(lruvec, folio);
@@ -348,7 +343,8 @@ void lru_note_cost_folio(struct folio *folio)
 
 static void folio_activate_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_active(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
@@ -394,12 +390,9 @@ static void folio_activate(struct folio *folio)
 {
 	struct lruvec *lruvec;
 
-	if (folio_test_clear_lru(folio)) {
-		lruvec = folio_lruvec_lock_irq(folio);
-		folio_activate_fn(lruvec, folio);
-		lruvec_unlock_irq(lruvec);
-		folio_set_lru(folio);
-	}
+	lruvec = folio_lruvec_lock_irq(folio);
+	folio_activate_fn(lruvec, folio);
+	lruvec_unlock_irq(lruvec);
 }
 #endif
@@ -542,6 +535,9 @@ static void lru_deactivate_file_fn(struct lruvec *lruvec, struct folio *folio)
 	bool active = folio_test_active(folio);
 	long nr_pages = folio_nr_pages(folio);
 
+	if (!folio_test_lru(folio))
+		return;
+
 	if (folio_test_unevictable(folio))
 		return;
@@ -580,7 +576,8 @@ static void lru_deactivate_file_fn(struct lruvec *lruvec, struct folio *folio)
 
 static void lru_deactivate_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (folio_test_active(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && folio_test_active(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
@@ -596,8 +593,9 @@ static void lru_deactivate_fn(struct lruvec *lruvec, struct folio *folio)
 
 static void lru_lazyfree_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
-	    !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && folio_test_anon(folio) &&
+	    folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 51b1607c81e4..11e1f6fc5898 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4864,21 +4864,19 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		if (PageTransTail(page))
 			continue;
 
-		nr_pages = thp_nr_pages(page);
+		nr_pages = folio_nr_pages(folio);
 		pgscanned += nr_pages;
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
+		lruvec = folio_lruvec_relock_irq(folio, lruvec);
+		if (!folio_test_lru(folio) || !folio_test_unevictable(folio))
 			continue;
 
-		lruvec = folio_lruvec_relock_irq(folio, lruvec);
-		if (page_evictable(page) && PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
-			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+		if (folio_evictable(folio)) {
+			lruvec_del_folio(lruvec, folio);
+			folio_clear_unevictable(folio);
+			lruvec_add_folio(lruvec, folio);
 			pgrescued += nr_pages;
 		}
-		SetPageLRU(page);
 	}
 
 	if (lruvec) {
-- 
2.11.0