From: Wei Yang <richardw.yang@linux.intel.com>
To: akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com,
	kirill@shutemov.name, dan.j.williams@intel.com,
	yang.shi@linux.alibaba.com, thellstrom@vmware.com,
	richardw.yang@linux.intel.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, digetx@gmail.com
Subject: [Patch v2 2/4] mm/mremap: it is sure to have enough space when extent meets requirement
Date: Wed, 29 Jan 2020 08:26:40 +0800
Message-Id: <20200129002642.13508-3-richardw.yang@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200129002642.13508-1-richardw.yang@linux.intel.com>
References: <20200129002642.13508-1-richardw.yang@linux.intel.com>

old_end is passed to these two functions to check whether there is enough
space to do the move, but that check has already been done before they are
invoked: both functions are only called when the extent meets the
requirement, guarded in the caller by

	if (extent > old_end - old_addr)
		extent = old_end - old_addr;

This implies (old_end - old_addr) can never fail the check inside these two
functions, so drop the old_end parameter and the redundant check.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 include/linux/huge_mm.h |  2 +-
 mm/huge_memory.c        |  7 ++-----
 mm/mremap.c             | 11 ++++-------
 3 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 0b84e13e88e2..2a5281ca46c8 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -42,7 +42,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, unsigned long end,
 			unsigned char *vec);
 extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
-			 unsigned long new_addr, unsigned long old_end,
+			 unsigned long new_addr,
 			 pmd_t *old_pmd, pmd_t *new_pmd);
 extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, pgprot_t newprot,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5b2876942639..8f1bbbf01f5b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1871,17 +1871,14 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
 }
 
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
-		  unsigned long new_addr, unsigned long old_end,
-		  pmd_t *old_pmd, pmd_t *new_pmd)
+		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
 	pmd_t pmd;
 	struct mm_struct *mm = vma->vm_mm;
 	bool force_flush = false;
 
-	if ((old_addr & ~HPAGE_PMD_MASK) ||
-	    (new_addr & ~HPAGE_PMD_MASK) ||
-	    old_end - old_addr < HPAGE_PMD_SIZE)
+	if ((old_addr & ~HPAGE_PMD_MASK) || (new_addr & ~HPAGE_PMD_MASK))
 		return false;
 
 	/*
diff --git a/mm/mremap.c b/mm/mremap.c
index bcc7aa62f2d9..c2af8ba4ba43 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -193,16 +193,13 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 
 #ifdef CONFIG_HAVE_MOVE_PMD
 static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
-		  unsigned long new_addr, unsigned long old_end,
-		  pmd_t *old_pmd, pmd_t *new_pmd)
+		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
 	struct mm_struct *mm = vma->vm_mm;
 	pmd_t pmd;
 
-	if ((old_addr & ~PMD_MASK) ||
-	    (new_addr & ~PMD_MASK) ||
-	    old_end - old_addr < PMD_SIZE)
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK))
 		return false;
 
 	/*
@@ -274,7 +271,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			if (need_rmap_locks)
 				take_rmap_locks(vma);
 			moved = move_huge_pmd(vma, old_addr, new_addr,
-					      old_end, old_pmd, new_pmd);
+					      old_pmd, new_pmd);
 			if (need_rmap_locks)
 				drop_rmap_locks(vma);
 			if (moved)
@@ -294,7 +291,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			if (need_rmap_locks)
 				take_rmap_locks(vma);
 			moved = move_normal_pmd(vma, old_addr, new_addr,
-					old_end, old_pmd, new_pmd);
+					old_pmd, new_pmd);
 			if (need_rmap_locks)
 				drop_rmap_locks(vma);
 			if (moved)
-- 
2.17.1
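
As a quick illustration of the invariant the changelog relies on, below is a
stand-alone user-space sketch (not part of the patch; the addresses are made
up and PMD_SIZE is assumed to be the typical 2 MiB) of the caller-side clamp
in move_page_tables(): whenever a full PMD worth of address space is about to
be moved, old_end - old_addr is necessarily at least PMD_SIZE, so the checks
removed above could never trigger.

/* invariant.c - stand-alone sketch, NOT kernel code; PMD_SIZE value assumed */
#include <assert.h>
#include <stdio.h>

#define PMD_SIZE (1UL << 21)		/* 2 MiB, a typical x86-64 value */
#define PMD_MASK (~(PMD_SIZE - 1))

int main(void)
{
	/* made-up, deliberately unaligned range to "move" */
	unsigned long old_addr = 0x1ff000;
	unsigned long old_end  = 0x7ff000;
	unsigned long extent = 0;

	for (; old_addr < old_end; old_addr += extent) {
		unsigned long next = (old_addr + PMD_SIZE) & PMD_MASK;

		extent = next - old_addr;
		/* the clamp quoted in the changelog, done by the caller */
		if (extent > old_end - old_addr)
			extent = old_end - old_addr;

		/*
		 * move_huge_pmd()/move_normal_pmd() are only reached when a
		 * whole PMD is covered, i.e. extent == PMD_SIZE ...
		 */
		if (extent == PMD_SIZE) {
			/* ... so the removed "old_end - old_addr < PMD_SIZE"
			 * test could never have been true here. */
			assert(old_end - old_addr >= PMD_SIZE);
			printf("old_addr=%#lx: remaining=%#lx >= PMD_SIZE\n",
			       old_addr, old_end - old_addr);
		}
	}
	return 0;
}

Built with any C compiler, the assert never fires for any choice of start/end
pair, which is exactly the property that makes the dropped checks dead code.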