From: Aaron Lu <aaron.lu@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang, Tim Chen,
    Andi Kleen, Michal Hocko, Vlastimil Babka, Mel Gorman,
    Matthew Wilcox, Daniel Jordan
Subject: [RFC PATCH v2 4/4] mm/free_pcppages_bulk: reduce overhead of cluster operation on free path
Date: Tue, 20 Mar 2018 16:54:52 +0800
Message-Id: <20180320085452.24641-5-aaron.lu@intel.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180320085452.24641-1-aaron.lu@intel.com>
References: <20180320085452.24641-1-aaron.lu@intel.com>
After "no_merge for order 0", the biggest overhead on the free path
for order-0 pages is now add_to_cluster(). As pages are freed one by
one, add_to_cluster() gets called once per page.

Ideally, if only one migratetype pcp list has pages to free and
count == pcp->batch in free_pcppages_bulk(), we can avoid calling
add_to_cluster() once per page and instead add all of them in one go
as a single cluster.

Let's call this ideal case single_mt; single_mt_unmovable represents
the case when only the unmovable pcp list has pages and count in
free_pcppages_bulk() equals pcp->batch. Ditto for single_mt_movable
and single_mt_reclaimable.

I added some counters to see how often this ideal case occurs.
On my desktop, after boot:

  free_pcppages_bulk: 6268
  single_mt:          3885 (62%)

free_pcppages_bulk is the number of times this function got called;
single_mt is the number of times only one pcp migratetype list had
pages to be freed and count equalled pcp->batch. single_mt can be
further divided into the following 3 cases:

  single_mt_unmovable:    263 (4%)
  single_mt_movable:     2566 (41%)
  single_mt_reclaimable: 1056 (17%)

After kbuild with a distro kconfig:

  free_pcppages_bulk: 9100508
  single_mt:          8440310 (93%)

Again, single_mt can be further divided:

  single_mt_unmovable:      290 (0%)
  single_mt_movable:    8435483 (92.75%)
  single_mt_reclaimable:   4537 (0.05%)

Considering that capturing the single_mt_movable case requires fewer
lines of code and it is by far the most frequent ideal case, I think
capturing this case alone is enough. The case is detected in
free_pcppages_bulk() itself: when the round-robin scan finds the only
non-empty list with batch_free == MIGRATE_PCPTYPES while count still
equals the saved batch size, every page to be freed comes from that
single list.

With the parallel free workload, this optimization brings zone->lock
contention down from 25% to almost zero again.

Signed-off-by: Aaron Lu <aaron.lu@intel.com>
---
 mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ac93833a2877..ad15e4ef99d6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1281,6 +1281,36 @@ static bool bulkfree_pcp_prepare(struct page *page)
 }
 #endif /* CONFIG_DEBUG_VM */
 
+static inline bool free_cluster_pages(struct zone *zone, struct list_head *list,
+				      int mt, int count)
+{
+	struct cluster *c;
+	struct page *page, *n;
+
+	if (!can_skip_merge(zone, 0))
+		return false;
+
+	if (count != this_cpu_ptr(zone->pageset)->pcp.batch)
+		return false;
+
+	c = new_cluster(zone, count, list_first_entry(list, struct page, lru));
+	if (unlikely(!c))
+		return false;
+
+	list_for_each_entry_safe(page, n, list, lru) {
+		set_page_order(page, 0);
+		set_page_merge_skipped(page);
+		page->cluster = c;
+		list_add(&page->lru, &zone->free_area[0].free_list[mt]);
+	}
+
+	INIT_LIST_HEAD(list);
+	zone->free_area[0].nr_free += count;
+	__mod_zone_page_state(zone, NR_FREE_PAGES, count);
+
+	return true;
+}
+
 /*
  * Frees a number of pages from the PCP lists
  * Assumes all pages on list are in same zone, and of same order.
@@ -1295,9 +1325,9 @@ static bool bulkfree_pcp_prepare(struct page *page)
 static void free_pcppages_bulk(struct zone *zone, int count,
 					struct per_cpu_pages *pcp)
 {
-	int migratetype = 0;
-	int batch_free = 0;
-	bool isolated_pageblocks;
+	int migratetype = MIGRATE_MOVABLE;
+	int batch_free = 0, saved_count = count;
+	bool isolated_pageblocks, single_mt = false;
 	struct page *page, *tmp;
 	LIST_HEAD(head);
 
@@ -1319,8 +1349,11 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		} while (list_empty(list));
 
 		/* This is the only non-empty list. Free them all. */
-		if (batch_free == MIGRATE_PCPTYPES)
+		if (batch_free == MIGRATE_PCPTYPES) {
 			batch_free = count;
+			if (batch_free == saved_count)
+				single_mt = true;
+		}
 
 		do {
 			unsigned long pfn, buddy_pfn;
@@ -1359,9 +1392,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	spin_lock(&zone->lock);
 	isolated_pageblocks = has_isolate_pageblock(zone);
 
+	if (!isolated_pageblocks && single_mt)
+		free_cluster_pages(zone, &head, migratetype, saved_count);
+
 	/*
 	 * Use safe version since after __free_one_page(),
 	 * page->lru.next will not point to original list.
+	 *
+	 * If free_cluster_pages() succeeds, head will be an empty list here.
 	 */
 	list_for_each_entry_safe(page, tmp, &head, lru) {
 		int mt = get_pcppage_migratetype(page);
-- 
2.14.3
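
A note on the single_mt statistics quoted in the changelog: the
counters are debug-only instrumentation and are not part of this
patch. Below is a minimal sketch of what such instrumentation could
look like; the counter names and the record_single_mt() helper are
made up for illustration, and the actual debug patch may well have
used a different scheme (e.g. vmstat events):

	#include <linux/atomic.h>
	#include <linux/mmzone.h>

	/* One call counter, plus one hit counter per pcp migratetype. */
	static atomic_long_t nr_bulk_calls;
	static atomic_long_t nr_single_mt[MIGRATE_PCPTYPES];

	/*
	 * Called from free_pcppages_bulk() once the draining loop has
	 * decided whether the single_mt fast path applies.
	 */
	static inline void record_single_mt(int migratetype, bool single_mt)
	{
		atomic_long_inc(&nr_bulk_calls);
		if (single_mt)
			atomic_long_inc(&nr_single_mt[migratetype]);
	}

The percentages in the changelog are then nr_single_mt[mt] divided by
nr_bulk_calls for each migratetype.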