From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: [PATCH] mm/mmap: leave adjust_next as virtual address instead of page frame number
Date: Fri, 28 Aug 2020 16:10:31 +0800
Message-Id: <20200828081031.11306-1-richard.weiyang@linux.alibaba.com>
X-Mailer: git-send-email 2.20.1 (Apple Git-117)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of converting adjust_next between a virtual
address and a page frame number, let's just store the virtual address in
adjust_next. Also, this patch fixes one typo in the comment of
vma_adjust_trans_huge().

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 mm/huge_memory.c | 4 ++--
 mm/mmap.c        | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78c84bee7e29..2c633ba14440 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2300,13 +2300,13 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 
 	/*
 	 * If we're also updating the vma->vm_next->vm_start, if the new
-	 * vm_next->vm_start isn't page aligned and it could previously
+	 * vm_next->vm_start isn't hpage aligned and it could previously
 	 * contain an hugepage: check if we need to split an huge pmd.
 	 */
 	if (adjust_next > 0) {
 		struct vm_area_struct *next = vma->vm_next;
 		unsigned long nstart = next->vm_start;
-		nstart += adjust_next << PAGE_SHIFT;
+		nstart += adjust_next;
 		if (nstart & ~HPAGE_PMD_MASK &&
 		    (nstart & HPAGE_PMD_MASK) >= next->vm_start &&
 		    (nstart & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE <= next->vm_end)
diff --git a/mm/mmap.c b/mm/mmap.c
index 90b1298d4222..e4c9bbfd4103 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -758,7 +758,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			 * vma expands, overlapping part of the next:
 			 * mprotect case 5 shifting the boundary up.
 			 */
-			adjust_next = (end - next->vm_start) >> PAGE_SHIFT;
+			adjust_next = (end - next->vm_start);
 			exporter = next;
 			importer = vma;
 			VM_WARN_ON(expand != importer);
@@ -768,7 +768,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			 * split_vma inserting another: so it must be
 			 * mprotect case 4 shifting the boundary down.
 			 */
-			adjust_next = -((vma->vm_end - end) >> PAGE_SHIFT);
+			adjust_next = -(vma->vm_end - end);
 			exporter = vma;
 			importer = next;
 			VM_WARN_ON(expand != importer);
@@ -840,8 +840,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		}
 		vma->vm_pgoff = pgoff;
 		if (adjust_next) {
-			next->vm_start += adjust_next << PAGE_SHIFT;
-			next->vm_pgoff += adjust_next;
+			next->vm_start += adjust_next;
+			next->vm_pgoff += adjust_next >> PAGE_SHIFT;
 		}
 
 		if (root) {
-- 
2.20.1 (Apple Git-117)
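
[Editor's note] For readers unfamiliar with the unit change, below is a minimal
stand-alone sketch (not part of the patch, not kernel code) of the idea. The
PAGE_SHIFT value and the sample VMA values are assumptions made up for the
demonstration; only the field names (vm_start, vm_pgoff) come from the patch.
Before the patch, adjust_next carried a page count and was shifted left when
applied to byte addresses; after the patch it carries a byte offset and is
shifted right when applied to vm_pgoff. Both schemes agree whenever the delta
is a whole number of pages, which holds in __vma_adjust() because VMA
boundaries are page aligned.

/* sketch.c -- illustrative only; PAGE_SHIFT = 12 is an assumed page size */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

int main(void)
{
	/* made-up values standing in for next->vm_start / next->vm_pgoff */
	unsigned long vm_start = 0x7f0000400000UL;
	unsigned long vm_pgoff = 0x400;
	unsigned long end = vm_start + 16 * PAGE_SIZE;	/* new, page-aligned boundary */

	/* before the patch: adjust_next holds a page count */
	long adjust_pages = (end - vm_start) >> PAGE_SHIFT;
	unsigned long start_old = vm_start + (adjust_pages << PAGE_SHIFT);
	unsigned long pgoff_old = vm_pgoff + adjust_pages;

	/* after the patch: adjust_next holds a byte offset */
	long adjust_bytes = end - vm_start;
	unsigned long start_new = vm_start + adjust_bytes;
	unsigned long pgoff_new = vm_pgoff + (adjust_bytes >> PAGE_SHIFT);

	/* the two schemes must produce the same vm_start and vm_pgoff */
	assert(start_old == start_new && pgoff_old == pgoff_new);
	printf("vm_start=%#lx vm_pgoff=%#lx\n", start_new, pgoff_new);
	return 0;
}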