From: Christoph Hellwig
To:
    Jérôme Glisse, Jason Gunthorpe, Ben Skeggs, Felix Kuehling
Cc: Ralph Campbell, linux-mm@kvack.org, nouveau@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 06/13] mm: remove superfluous arguments from hmm_range_register
Date: Tue, 30 Jul 2019 08:51:56 +0300
Message-Id: <20190730055203.28467-7-hch@lst.de>
In-Reply-To: <20190730055203.28467-1-hch@lst.de>
References: <20190730055203.28467-1-hch@lst.de>

The start, end and page_shift values are all saved in the range
structure, so we might as well use that for argument passing.

Signed-off-by: Christoph Hellwig
---
 Documentation/vm/hmm.rst                |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  7 +++++--
 drivers/gpu/drm/nouveau/nouveau_svm.c   |  5 ++---
 include/linux/hmm.h                     |  6 +-----
 mm/hmm.c                                | 20 +++++---------------
 5 files changed, 14 insertions(+), 26 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index ddcb5ca8b296..e63c11f7e0e0 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -222,7 +222,7 @@ The usage pattern is::
       range.flags = ...;
       range.values = ...;
       range.pfn_shift = ...;
-      hmm_range_register(&range);
+      hmm_range_register(&range, mirror);
 
       /*
        * Just wait for range to be valid, safe to ignore return value as we
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index f0821638bbc6..71d6e7087b0b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -818,8 +818,11 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 		0 : range->flags[HMM_PFN_WRITE];
 	range->pfn_flags_mask = 0;
 	range->pfns = pfns;
-	hmm_range_register(range, mirror, start,
-			   start + ttm->num_pages * PAGE_SIZE, PAGE_SHIFT);
+	range->page_shift = PAGE_SHIFT;
+	range->start = start;
+	range->end = start + ttm->num_pages * PAGE_SIZE;
+
+	hmm_range_register(range, mirror);
 
 	/*
 	 * Just wait for range to be valid, safe to ignore return value as we
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index b889d5ec4c7e..40e706234554 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -492,9 +492,7 @@ nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 	range->default_flags = 0;
 	range->pfn_flags_mask = -1UL;
 
-	ret = hmm_range_register(range, &svmm->mirror,
-				 range->start, range->end,
-				 PAGE_SHIFT);
+	ret = hmm_range_register(range, &svmm->mirror);
 	if (ret) {
 		up_read(&range->hmm->mm->mmap_sem);
 		return (int)ret;
@@ -682,6 +680,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
 			args.i.p.addr + args.i.p.size, fn - fi);
 
 		/* Have HMM fault pages within the fault window to the GPU. */
+		range.page_shift = PAGE_SHIFT;
 		range.start = args.i.p.addr;
 		range.end = args.i.p.addr + args.i.p.size;
 		range.pfns = args.phys;
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 59be0aa2476d..c5b51376b453 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -400,11 +400,7 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror);
 /*
  * Please see Documentation/vm/hmm.rst for how to use the range API.
  */
-int hmm_range_register(struct hmm_range *range,
-		       struct hmm_mirror *mirror,
-		       unsigned long start,
-		       unsigned long end,
-		       unsigned page_shift);
+int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror);
 void hmm_range_unregister(struct hmm_range *range);
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index 3a3852660757..926735a3aef9 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -843,35 +843,25 @@ static void hmm_pfns_clear(struct hmm_range *range,
  * hmm_range_register() - start tracking change to CPU page table over a range
  * @range: range
  * @mm: the mm struct for the range of virtual address
- * @start: start virtual address (inclusive)
- * @end: end virtual address (exclusive)
- * @page_shift: expect page shift for the range
+ *
  * Return: 0 on success, -EFAULT if the address space is no longer valid
  *
  * Track updates to the CPU page table see include/linux/hmm.h
  */
-int hmm_range_register(struct hmm_range *range,
-		       struct hmm_mirror *mirror,
-		       unsigned long start,
-		       unsigned long end,
-		       unsigned page_shift)
+int hmm_range_register(struct hmm_range *range, struct hmm_mirror *mirror)
 {
-	unsigned long mask = ((1UL << page_shift) - 1UL);
+	unsigned long mask = ((1UL << range->page_shift) - 1UL);
 	struct hmm *hmm = mirror->hmm;
 	unsigned long flags;
 
 	range->valid = false;
 	range->hmm = NULL;
 
-	if ((start & mask) || (end & mask))
+	if ((range->start & mask) || (range->end & mask))
 		return -EINVAL;
-	if (start >= end)
+	if (range->start >= range->end)
 		return -EINVAL;
 
-	range->page_shift = page_shift;
-	range->start = start;
-	range->end = end;
-
 	/* Prevent hmm_release() from running while the range is valid */
 	if (!mmget_not_zero(hmm->mm))
 		return -EFAULT;
-- 
2.20.1