From: Liu Shixin
To: Andrew Morton, "Kirill A. Shutemov", Andrea Arcangeli
Cc: Liu Shixin, Kefeng Wang
Subject: [PATCH v2] mm/huge_memory: prevent THP_ZERO_PAGE_ALLOC from being increased twice
Date: Thu, 8 Sep 2022 11:55:33 +0800
Message-ID: <20220908035533.2186159-1-liushixin2@huawei.com>

If two or more threads call get_huge_zero_page() concurrently,
THP_ZERO_PAGE_ALLOC may be incremented two or more times. However, this
should be counted only once, since the extra zero pages allocated by the
losing threads have been freed. Redefine its meaning to indicate the
number of times a huge zero page used for thp is successfully allocated,
and update Documentation/admin-guide/mm/transhuge.rst accordingly.

Signed-off-by: Liu Shixin
---
v1->v2: Update document.

 Documentation/admin-guide/mm/transhuge.rst | 7 +++----
 mm/huge_memory.c                           | 2 +-
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index c9c37f16eef8..8e3418ec4503 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -366,10 +366,9 @@ thp_split_pmd
 	page table entry.
 
 thp_zero_page_alloc
-	is incremented every time a huge zero page is
-	successfully allocated. It includes allocations which where
-	dropped due race with other allocation. Note, it doesn't count
-	every map of the huge zero page, only its allocation.
+	is incremented every time a huge zero page used for thp is
+	successfully allocated. Note, it doesn't count every map of
+	the huge zero page, only its allocation.
 
 thp_zero_page_alloc_failed
 	is incremented if kernel fails to allocate
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 88d98241a635..5c83a424803a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -163,7 +163,6 @@ static bool get_huge_zero_page(void)
 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
 		return false;
 	}
-	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	preempt_disable();
 	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
 		preempt_enable();
@@ -175,6 +174,7 @@ static bool get_huge_zero_page(void)
 	/* We take additional reference here. It will be put back by shrinker */
 	atomic_set(&huge_zero_refcount, 2);
 	preempt_enable();
+	count_vm_event(THP_ZERO_PAGE_ALLOC);
 	return true;
 }
-- 
2.25.1