From: Yang Shi <yang.shi@linux.alibaba.com>
To: kirill.shutemov@linux.intel.com, hughd@google.com, aarcange@redhat.com,
    akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm: thp: don't need to drain lru cache when splitting and mlocking THP
Date: Sat, 28 Mar 2020 03:29:40 +0800
Message-Id: <1585337380-97368-1-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page
arrival") a THP no longer stays in the per-CPU pagevec.  The optimization
made by commit d965432234db ("thp: increase split_huge_page() success
rate"), which tries to unpin munlocked THPs from the pagevec by draining
it, therefore no longer makes sense.  Draining the lru cache before
isolating the THP in the mlock path is not necessary either.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
(A short, from-memory sketch of the __lru_cache_add() check this relies
on is appended after the patch.)

 mm/huge_memory.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199..1af2e7d6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1527,7 +1527,6 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		goto skip_mlock;
 	if (!trylock_page(page))
 		goto skip_mlock;
-	lru_add_drain();
 	if (page->mapping && !PageDoubleMap(page))
 		mlock_vma_page(page);
 	unlock_page(page);
@@ -2711,7 +2710,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	bool mlocked;
 	unsigned long flags;
 	pgoff_t end;
 
@@ -2770,14 +2768,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	mlocked = PageMlocked(head);
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* Make sure the page is not on per-CPU pagevec as it takes pin */
-	if (mlocked)
-		lru_add_drain();
-
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irqsave(&pgdata->lru_lock, flags);
 
-- 
1.8.3.1
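
For reference, the behavior the commit message relies on is the check
that 8f182270dfec added to the lru-cache add path: the per-CPU pagevec
is flushed as soon as a compound page is added, so a THP never sits in
it.  Below is a minimal sketch of __lru_cache_add() as it looks after
that commit, paraphrased from memory of mm/swap.c of that era (not
quoted verbatim from the tree this patch applies to):

static void __lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

	get_page(page);
	/*
	 * pagevec_add() returns the number of free slots left, so the
	 * pagevec is drained when it fills up -- or immediately when a
	 * compound page (e.g. a THP) arrives.
	 */
	if (!pagevec_add(pvec, page) || PageCompound(page))
		__pagevec_lru_add(pvec);
	put_cpu_var(lru_add_pvec);
}

Because the pagevec is drained on compound page arrival, neither
split_huge_page_to_list() nor the mlock path can find a THP pinned by
the lru cache, which is why the lru_add_drain() calls removed above
only added overhead.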