From: Ralph Campbell <rcampbell@nvidia.com>
To:
CC: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe, Ben Skeggs, Ralph Campbell
Subject: [PATCH] nouveau: fix the start/end range for migration
Date: Thu, 27 Aug 2020 14:37:44 -0700
Message-ID: <20200827213744.14074-1-rcampbell@nvidia.com>

The user-level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since it can be computed
from the start and end addresses.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
This is for Ben Skeggs' nouveau tree.

I have been working with Karol Herbst on the OpenCL mesa changes for
nouveau, which will be merged upstream soon. With or without those
changes, the user-visible effect of this patch only extends the range
by one page (round up vs. round down to a page boundary).

 drivers/gpu/drm/nouveau/nouveau_svm.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 2df1c0460559..888aa0908c5a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -105,11 +105,14 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
 	struct nouveau_cli *cli = nouveau_cli(file_priv);
 	struct drm_nouveau_svm_bind *args = data;
 	unsigned target, cmd, priority;
-	unsigned long addr, end, size;
+	unsigned long addr, end;
 	struct mm_struct *mm;
 
 	args->va_start &= PAGE_MASK;
-	args->va_end &= PAGE_MASK;
+	args->va_end = ALIGN(args->va_end, PAGE_SIZE);
+	/* If no end address is given, assume a single page. */
+	if (args->va_end == 0)
+		args->va_end = args->va_start + PAGE_SIZE;
 
 	/* Sanity check arguments */
 	if (args->reserved0 || args->reserved1)
@@ -118,8 +121,6 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
 		return -EINVAL;
 	if (args->va_start >= args->va_end)
 		return -EINVAL;
-	if (!args->npages)
-		return -EINVAL;
 
 	cmd = args->header >> NOUVEAU_SVM_BIND_COMMAND_SHIFT;
 	cmd &= NOUVEAU_SVM_BIND_COMMAND_MASK;
@@ -151,12 +152,6 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
 	if (args->stride)
 		return -EINVAL;
 
-	size = ((unsigned long)args->npages) << PAGE_SHIFT;
-	if ((args->va_start + size) <= args->va_start)
-		return -EINVAL;
-	if ((args->va_start + size) > args->va_end)
-		return -EINVAL;
-
 	/*
 	 * Ok we are ask to do something sane, for now we only support migrate
 	 * commands but we will add things like memory policy (what to do on
@@ -171,7 +166,7 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
 			return -EINVAL;
 	}
 
-	for (addr = args->va_start, end = args->va_start + size; addr < end;) {
+	for (addr = args->va_start, end = args->va_end; addr < end;) {
 		struct vm_area_struct *vma;
 		unsigned long next;
 
-- 
2.20.1
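
For reference, below is a minimal, untested user-space sketch (not part
of the patch) of what a caller can now do: pass va_start/va_end taken
straight from an arbitrary heap allocation, unaligned, with npages left
at zero. It assumes the existing drm_nouveau_svm_bind uapi and the
DRM_IOCTL_NOUVEAU_SVM_BIND ioctl from the nouveau uapi header; the
include path and macro names should be double-checked against the
kernel headers in use.

/* Hypothetical example only: migrate an arbitrary, unaligned user
 * buffer to GPU VRAM via the SVM bind ioctl. Assumes SVM was already
 * enabled on this fd with DRM_IOCTL_NOUVEAU_SVM_INIT, and that the
 * kernel/libdrm uapi headers are on the include path.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/nouveau_drm.h>

static int svm_migrate_to_vram(int drm_fd, void *buf, size_t len)
{
	struct drm_nouveau_svm_bind args;

	memset(&args, 0, sizeof(args));
	args.header = (NOUVEAU_SVM_BIND_COMMAND__MIGRATE <<
		       NOUVEAU_SVM_BIND_COMMAND_SHIFT) |
		      ((uint64_t)NOUVEAU_SVM_BIND_TARGET__GPU_VRAM <<
		       NOUVEAU_SVM_BIND_TARGET_SHIFT);
	/* With this patch, unaligned start/end are accepted; the driver
	 * rounds the range out to page boundaries itself.
	 */
	args.va_start = (uintptr_t)buf;
	args.va_end = (uintptr_t)buf + len;
	/* npages is no longer checked and can stay zero. */

	return ioctl(drm_fd, DRM_IOCTL_NOUVEAU_SVM_BIND, &args);
}

Before this change, a call like the one above would return -EINVAL
because npages is zero, and the end address would have been rounded
down rather than up; with the patch the driver derives the page range
from va_start/va_end alone.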