From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Yang Shi, Dan Carpenter, Mel Gorman, Christian Borntraeger, Gerald Schaefer, Heiko Carstens, Hugh Dickins, Andrea Arcangeli, Kirill A. Shutemov, Michal Hocko, Vasily Gorbik, Zi Yan
Subject: [PATCH] mm,do_huge_pmd_numa_page: remove unnecessary TLB flushing code
Date: Tue, 20 Jul 2021 14:55:29 +0800
Message-Id: <20210720065529.716031-1-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2

Before commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault handling"), TLB flushing was done in do_huge_pmd_numa_page() itself via flush_tlb_range().  After that commit, the TLB flushing is done in migrate_pages() instead, via the following call path:

  do_huge_pmd_numa_page
    migrate_misplaced_page
      migrate_pages

The TLB flushing code in do_huge_pmd_numa_page() has therefore become unnecessary, so delete it to simplify the code.  This is code cleanup only; there is no visible performance difference.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Yang Shi
Cc: Dan Carpenter
Cc: Mel Gorman
Cc: Christian Borntraeger
Cc: Gerald Schaefer
Cc: Heiko Carstens
Cc: Hugh Dickins
Cc: Andrea Arcangeli
Cc: Kirill A. Shutemov
Cc: Michal Hocko
Cc: Vasily Gorbik
Cc: Zi Yan
---
 mm/huge_memory.c | 26 --------------------------
 1 file changed, 26 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index afff3ac87067..9f21e44c9030 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1440,32 +1440,6 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		goto out;
 	}

-	/*
-	 * Since we took the NUMA fault, we must have observed the !accessible
-	 * bit. Make sure all other CPUs agree with that, to avoid them
-	 * modifying the page we're about to migrate.
-	 *
-	 * Must be done under PTL such that we'll observe the relevant
-	 * inc_tlb_flush_pending().
-	 *
-	 * We are not sure a pending tlb flush here is for a huge page
-	 * mapping or not.
-	 * Hence use the tlb range variant
-	 */
-	if (mm_tlb_flush_pending(vma->vm_mm)) {
-		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
-		/*
-		 * change_huge_pmd() released the pmd lock before
-		 * invalidating the secondary MMUs sharing the primary
-		 * MMU pagetables (with ->invalidate_range()). The
-		 * mmu_notifier_invalidate_range_end() (which
-		 * internally calls ->invalidate_range()) in
-		 * change_pmd_range() will run after us, so we can't
-		 * rely on it here and we need an explicit invalidate.
-		 */
-		mmu_notifier_invalidate_range(vma->vm_mm, haddr,
-					      haddr + HPAGE_PMD_SIZE);
-	}
-
 	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
 	page = vm_normal_page_pmd(vma, haddr, pmd);
 	if (!page)
--
2.30.2
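
For reference, the change in flush responsibility described in the commit message can be sketched as a simplified call-path comparison (pseudocode, not actual kernel code; the only calls named are those already mentioned above):

  Before this patch:
    do_huge_pmd_numa_page()
      if (mm_tlb_flush_pending())
        flush_tlb_range()          /* the block removed above */
      migrate_misplaced_page()
        migrate_pages()            /* flushes the TLB again during unmap */

  After this patch:
    do_huge_pmd_numa_page()
      migrate_misplaced_page()
        migrate_pages()            /* the remaining (sufficient) TLB flush */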