From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	yang.shi@linux.alibaba.com, vbabka@suse.cz, willy@infradead.org,
	thomas_os@shipmail.org, thellstrom@vmware.com,
	anshuman.khandual@arm.com, sean.j.christopherson@intel.com,
	aneesh.kumar@linux.ibm.com, peterx@redhat.com, walken@google.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, digetx@gmail.com,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: [RESEND Patch v2 2/4] mm/mremap: it is sure to have enough space when extent meets requirement
Date: Fri, 26 Jun 2020 21:52:14 +0800
Message-Id: <20200626135216.24314-3-richard.weiyang@linux.alibaba.com>
In-Reply-To:
 <20200626135216.24314-1-richard.weiyang@linux.alibaba.com>
References: <20200626135216.24314-1-richard.weiyang@linux.alibaba.com>

old_end is passed to these two functions to check whether there is enough
space to do the move, but this check has already been done before either
of them is invoked. These two functions are only invoked when extent meets
the size requirement, and move_page_tables() clamps extent beforehand:

	if (extent > old_end - old_addr)
		extent = old_end - old_addr;

This guarantees that (old_end - old_addr) can never fail the size check
inside these two functions, so the old_end parameter and the check can be
dropped.

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
---
 include/linux/huge_mm.h |  2 +-
 mm/huge_memory.c        |  7 ++-----
 mm/mremap.c             | 11 ++++-------
 3 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71f20776b06c..17c4c4975145 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -42,7 +42,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, unsigned long end,
 			unsigned char *vec);
 extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
-			 unsigned long new_addr, unsigned long old_end,
+			 unsigned long new_addr,
 			 pmd_t *old_pmd, pmd_t *new_pmd);
 extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, pgprot_t newprot,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78c84bee7e29..1e580fdad4d0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1722,17 +1722,14 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
 }
 
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
-		  unsigned long new_addr, unsigned long old_end,
-		  pmd_t *old_pmd, pmd_t *new_pmd)
+		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
 	pmd_t pmd;
 	struct mm_struct *mm = vma->vm_mm;
 	bool force_flush = false;
 
-	if ((old_addr & ~HPAGE_PMD_MASK) ||
-	    (new_addr & ~HPAGE_PMD_MASK) ||
-	    old_end - old_addr < HPAGE_PMD_SIZE)
+	if ((old_addr & ~HPAGE_PMD_MASK) || (new_addr & ~HPAGE_PMD_MASK))
 		return false;
 
 	/*
diff --git a/mm/mremap.c b/mm/mremap.c
index 97bf9a2a8bd5..de27b12c8a5a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -193,16 +193,13 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 
 #ifdef CONFIG_HAVE_MOVE_PMD
 static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
-		  unsigned long new_addr, unsigned long old_end,
-		  pmd_t *old_pmd, pmd_t *new_pmd)
+		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
 	struct mm_struct *mm = vma->vm_mm;
 	pmd_t pmd;
 
-	if ((old_addr & ~PMD_MASK) ||
-	    (new_addr & ~PMD_MASK) ||
-	    old_end - old_addr < PMD_SIZE)
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK))
 		return false;
 
 	/*
@@ -274,7 +271,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			if (need_rmap_locks)
 				take_rmap_locks(vma);
 			moved = move_huge_pmd(vma, old_addr, new_addr,
-					      old_end, old_pmd, new_pmd);
+					      old_pmd, new_pmd);
 			if (need_rmap_locks)
 				drop_rmap_locks(vma);
 			if (moved)
@@ -294,7 +291,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			if (need_rmap_locks)
 				take_rmap_locks(vma);
 			moved = move_normal_pmd(vma, old_addr, new_addr,
-					old_end, old_pmd, new_pmd);
+					old_pmd, new_pmd);
 			if (need_rmap_locks)
 				drop_rmap_locks(vma);
 			if (moved)
-- 
2.20.1 (Apple Git-117)