From: Miaohe Lin <linmiaohe@huawei.com>
To:
CC:
Subject: [PATCH v2 3/3] mm/swap: remove swap_cache_info statistics
Date: Wed, 8 Jun 2022 22:40:31 +0800
Message-ID: <20220608144031.829-4-linmiaohe@huawei.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20220608144031.829-1-linmiaohe@huawei.com>
References: <20220608144031.829-1-linmiaohe@huawei.com>

The swap_cache_info counters are not statistics that can easily be used
to tune system performance, because they are not easily accessible. They
also do not provide really useful information when an OOM occurs.
Removing them additionally helps mitigate unneeded cacheline contention
on the global swap_cache_info struct.

Suggested-by: David Hildenbrand
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/swap_state.c | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0a2021fc55ad..41c6a6053d5c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -59,24 +59,11 @@ static bool enable_vma_readahead __read_mostly = true;
 #define GET_SWAP_RA_VAL(vma)					\
 	(atomic_long_read(&(vma)->swap_readahead_info) ? : 4)
 
-#define INC_CACHE_INFO(x)	data_race(swap_cache_info.x++)
-#define ADD_CACHE_INFO(x, nr)	data_race(swap_cache_info.x += (nr))
-
-static struct {
-	unsigned long add_total;
-	unsigned long del_total;
-	unsigned long find_success;
-	unsigned long find_total;
-} swap_cache_info;
-
 static atomic_t swapin_readahead_hits = ATOMIC_INIT(4);
 
 void show_swap_cache_info(void)
 {
 	printk("%lu pages in swap cache\n", total_swapcache_pages());
-	printk("Swap cache stats: add %lu, delete %lu, find %lu/%lu\n",
-		swap_cache_info.add_total, swap_cache_info.del_total,
-		swap_cache_info.find_success, swap_cache_info.find_total);
 	printk("Free swap  = %ldkB\n",
 		get_nr_swap_pages() << (PAGE_SHIFT - 10));
 	printk("Total swap = %lukB\n", total_swap_pages << (PAGE_SHIFT - 10));
@@ -133,7 +120,6 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
 		address_space->nrpages += nr;
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		__mod_lruvec_page_state(page, NR_SWAPCACHE, nr);
-		ADD_CACHE_INFO(add_total, nr);
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -172,7 +158,6 @@ void __delete_from_swap_cache(struct page *page,
 	address_space->nrpages -= nr;
 	__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 	__mod_lruvec_page_state(page, NR_SWAPCACHE, -nr);
-	ADD_CACHE_INFO(del_total, nr);
 }
 
 /**
@@ -348,12 +333,10 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
 	page = find_get_page(swap_address_space(entry), swp_offset(entry));
 	put_swap_device(si);
 
-	INC_CACHE_INFO(find_total);
 	if (page) {
 		bool vma_ra = swap_use_vma_readahead();
 		bool readahead;
 
-		INC_CACHE_INFO(find_success);
 		/*
 		 * At the moment, we don't support PG_readahead for anon THP
 		 * so let's bail out rather than confusing the readahead stat.
-- 
2.23.0
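
For context on the cacheline-contention point in the changelog: the removed
INC_CACHE_INFO()/ADD_CACHE_INFO() helpers had every CPU incrementing fields of
the single global swap_cache_info struct via data_race(), so the cacheline
holding that struct could bounce between CPUs under swap-heavy load. If
comparable statistics were ever wanted again without that cost, one option
would be per-CPU counters, roughly along the lines of the sketch below. This
is illustrative only and not part of this patch; the swap_find_total counter
and the swap_stat_* helper names are made up for the example.

#include <linux/percpu.h>
#include <linux/cpumask.h>

/* Illustrative only: one counter per CPU, so an update touches only the
 * local CPU's copy instead of a shared global cacheline.
 */
static DEFINE_PER_CPU(unsigned long, swap_find_total);

static inline void swap_stat_inc_find_total(void)
{
	/* Increment the local CPU's counter; no cross-CPU cacheline bouncing. */
	this_cpu_inc(swap_find_total);
}

static unsigned long swap_stat_read_find_total(void)
{
	unsigned long sum = 0;
	int cpu;

	/* Reading is a slow path (e.g. when printing stats), so summing all
	 * per-CPU copies here is acceptable.
	 */
	for_each_possible_cpu(cpu)
		sum += per_cpu(swap_find_total, cpu);
	return sum;
}

This is essentially the same pattern the existing vm_events counters use
(count_vm_event() is a per-CPU increment), which is why those scale where a
single shared struct of counters would not.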