From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
 shakeelb@google.com
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 duanxiongchun@bytedance.com, longman@redhat.com,
 Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 05/11] mm: thp: introduce folio_split_queue_lock{_irqsave}()
Date: Tue, 24 May 2022 14:05:45 +0800
Message-Id: <20220524060551.80037-6-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.1 (Apple Git-133)
In-Reply-To: <20220524060551.80037-1-songmuchun@bytedance.com>
References: <20220524060551.80037-1-songmuchun@bytedance.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

We should make the THP deferred split queue lock safe when LRU pages are
reparented. Similar to folio_lruvec_lock{_irqsave,_irq}(), we introduce
folio_split_queue_lock{_irqsave}() to make the deferred split queue lock
easier to reparent. In the next patch, we can then use a similar approach
(just as the lruvec lock does) to make the THP deferred split queue lock
safe when LRU pages are reparented.
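As an illustration of the conversion this enables (a sketch only, mirroring
the callers updated below; the surrounding logic is elided), a caller that
previously looked up the queue and locked it by hand:

	struct deferred_split *ds_queue = get_deferred_split_queue(page);
	unsigned long flags;

	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	/* manipulate ds_queue->split_queue and ->split_queue_len */
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

can now combine the lookup and the locking behind the folio interface:

	struct folio *folio = page_folio(page);
	struct deferred_split *ds_queue;
	unsigned long flags;

	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
	/* manipulate ds_queue->split_queue and ->split_queue_len */
	split_queue_unlock_irqrestore(ds_queue, flags);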
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h |  10 +++++
 mm/huge_memory.c           | 100 +++++++++++++++++++++++++++++++++------------
 2 files changed, 84 insertions(+), 26 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4042e4d21fe2..8c2f1ba2f471 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1650,6 +1650,11 @@ int alloc_shrinker_info(struct mem_cgroup *memcg);
 void free_shrinker_info(struct mem_cgroup *memcg);
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
 void reparent_shrinker_deferred(struct mem_cgroup *memcg);
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return shrinker->id;
+}
 #else
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
@@ -1663,6 +1668,11 @@ static inline void set_shrinker_bit(struct mem_cgroup *memcg,
 				    int nid, int shrinker_id)
 {
 }
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return -1;
+}
 #endif
 
 #ifdef CONFIG_MEMCG_KMEM
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 910a138e9859..ea152bde441e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -503,25 +503,74 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+							 struct deferred_split *queue)
 {
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	if (mem_cgroup_disabled())
+		return NULL;
+	if (&NODE_DATA(folio_nid(folio))->deferred_split_queue == queue)
+		return NULL;
+	return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}
 
-	if (memcg)
-		return &memcg->deferred_split_queue;
-	else
-		return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+
+	return memcg ? &memcg->deferred_split_queue : NULL;
 }
 #else
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+							 struct deferred_split *queue)
 {
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	return NULL;
+}
 
-	return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	return NULL;
 }
 #endif
 
+static struct deferred_split *folio_split_queue(struct folio *folio)
+{
+	struct deferred_split *queue = folio_memcg_split_queue(folio);
+
+	return queue ? : &NODE_DATA(folio_nid(folio))->deferred_split_queue;
+}
+
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
+{
+	struct deferred_split *queue;
+
+	queue = folio_split_queue(folio);
+	spin_lock(&queue->split_queue_lock);
+
+	return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+	struct deferred_split *queue;
+
+	queue = folio_split_queue(folio);
+	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+	return queue;
+}
+
+static inline void split_queue_unlock(struct deferred_split *queue)
+{
+	spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
+						 unsigned long flags)
+{
+	spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
 void prep_transhuge_page(struct page *page)
 {
 	/*
@@ -2489,7 +2538,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
-	struct deferred_split *ds_queue = get_deferred_split_queue(head);
+	struct deferred_split *ds_queue;
 	XA_STATE(xas, &head->mapping->i_pages, head->index);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
@@ -2581,13 +2630,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
-	spin_lock(&ds_queue->split_queue_lock);
+	ds_queue = folio_split_queue_lock(folio);
 	if (page_ref_freeze(head, 1 + extra_pins)) {
 		if (!list_empty(page_deferred_list(head))) {
 			ds_queue->split_queue_len--;
 			list_del(page_deferred_list(head));
 		}
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 		if (mapping) {
 			int nr = thp_nr_pages(head);
 
@@ -2605,7 +2654,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		__split_huge_page(page, list, end);
 		ret = 0;
 	} else {
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 fail:
 		if (mapping)
 			xas_unlock(&xas);
@@ -2630,25 +2679,23 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 void free_transhuge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
+	struct deferred_split *ds_queue;
 	unsigned long flags;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
 	if (!list_empty(page_deferred_list(page))) {
 		ds_queue->split_queue_len--;
 		list_del(page_deferred_list(page));
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 	free_compound_page(page);
 }
 
 void deferred_split_huge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
-#ifdef CONFIG_MEMCG
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-#endif
+	struct deferred_split *ds_queue;
 	unsigned long flags;
+	struct folio *folio = page_folio(page);
 
 	VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 
@@ -2665,18 +2712,19 @@ void deferred_split_huge_page(struct page *page)
 	if (PageSwapCache(page))
 		return;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
 	if (list_empty(page_deferred_list(page))) {
+		struct mem_cgroup *memcg;
+
+		memcg = folio_split_queue_memcg(folio, ds_queue);
 		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
 		list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
 		ds_queue->split_queue_len++;
-#ifdef CONFIG_MEMCG
-		if (memcg)
-			set_shrinker_bit(memcg, page_to_nid(page),
-					 deferred_split_shrinker.id);
-#endif
+		if (memcg)
+			set_shrinker_bit(memcg, page_to_nid(page),
+					 shrinker_id(&deferred_split_shrinker));
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 }
 
 static unsigned long deferred_split_count(struct shrinker *shrink,
-- 
2.11.0