From: Ralph Campbell <rcampbell@nvidia.com>
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
	Andrew Morton, Ben Skeggs, Ralph Campbell
Subject: [PATCH v3 4/4] nouveau/hmm: map pages after migration
Date: Tue, 3 Mar 2020 16:13:39 -0800
Message-ID: <20200304001339.8248-5-rcampbell@nvidia.com>
In-Reply-To: <20200304001339.8248-1-rcampbell@nvidia.com>
References: <20200304001339.8248-1-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-kernel@vger.kernel.org

When memory is migrated to the GPU, it is likely to be accessed by GPU
code soon afterwards. Instead of waiting for a GPU fault, map the
migrated memory into the GPU page tables with the same access
permissions as the source CPU page table entries. This preserves
copy-on-write semantics.
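As an illustrative sketch (not part of the patch): the helper below
mirrors, per page, the pfn encoding that nouveau_dmem_migrate_copy_one()
in the diff produces. The NVIF_VMM_PFNMAP_V0_* flags are the ones from
nvif/if000c.h used in the diff; the name encode_vram_pfn() is
hypothetical.

static u64 encode_vram_pfn(unsigned long paddr, bool write)
{
	/* Mark the entry valid and backed by device (VRAM) memory,
	 * with the VRAM page address in the upper bits.
	 */
	u64 pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
		  ((u64)(paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);

	/* Grant GPU write access only if the CPU PTE was writable
	 * (MIGRATE_PFN_WRITE). Read-only entries keep copy-on-write
	 * intact: a GPU write still faults and triggers the copy.
	 */
	if (write)
		pfn |= NVIF_VMM_PFNMAP_V0_W;
	return pfn;
}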
Signed-off-by: Ralph Campbell
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: "Jérôme Glisse"
Cc: Ben Skeggs
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 46 +++++++++++++-------
 drivers/gpu/drm/nouveau/nouveau_dmem.h |  2 +
 drivers/gpu/drm/nouveau/nouveau_svm.c  | 59 +++++++++++++++++++++++++-
 drivers/gpu/drm/nouveau/nouveau_svm.h  |  5 +++
 4 files changed, 95 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 0ad5d87b5a8e..981c05a2a6ca 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -25,11 +25,13 @@
 #include "nouveau_dma.h"
 #include "nouveau_mem.h"
 #include "nouveau_bo.h"
+#include "nouveau_svm.h"
 
 #include <nvif/class.h>
 #include <nvif/object.h>
 #include <nvif/if500b.h>
 #include <nvif/if900b.h>
+#include <nvif/if000c.h>
 
 #include <linux/sched/mm.h>
 #include <linux/hmm.h>
@@ -558,10 +560,11 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 }
 
 static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
-		unsigned long src, dma_addr_t *dma_addr)
+		unsigned long src, dma_addr_t *dma_addr, u64 *pfn)
 {
 	struct device *dev = drm->dev->dev;
 	struct page *dpage, *spage;
+	unsigned long paddr;
 
 	spage = migrate_pfn_to_page(src);
 	if (!spage || !(src & MIGRATE_PFN_MIGRATE))
@@ -569,17 +572,21 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 
 	dpage = nouveau_dmem_page_alloc_locked(drm);
 	if (!dpage)
-		return 0;
+		goto out;
 
 	*dma_addr = dma_map_page(dev, spage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
 	if (dma_mapping_error(dev, *dma_addr))
 		goto out_free_page;
 
+	paddr = nouveau_dmem_page_addr(dpage);
 	if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_VRAM,
-			nouveau_dmem_page_addr(dpage), NOUVEAU_APER_HOST,
-			*dma_addr))
+			paddr, NOUVEAU_APER_HOST, *dma_addr))
 		goto out_dma_unmap;
 
+	*pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
+		((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
+	if (src & MIGRATE_PFN_WRITE)
+		*pfn |= NVIF_VMM_PFNMAP_V0_W;
 	return migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
 
 out_dma_unmap:
@@ -587,18 +594,20 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 out_free_page:
 	nouveau_dmem_page_free_locked(drm, dpage);
 out:
+	*pfn = NVIF_VMM_PFNMAP_V0_NONE;
 	return 0;
 }
 
 static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
-		struct migrate_vma *args, dma_addr_t *dma_addrs)
+		struct nouveau_svmm *svmm, struct migrate_vma *args,
+		dma_addr_t *dma_addrs, u64 *pfns)
 {
 	struct nouveau_fence *fence;
 	unsigned long addr = args->start, nr_dma = 0, i;
 
 	for (i = 0; addr < args->end; i++) {
 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->src[i],
-				dma_addrs + nr_dma);
+				dma_addrs + nr_dma, pfns + i);
 		if (args->dst[i])
 			nr_dma++;
 		addr += PAGE_SIZE;
@@ -607,20 +616,18 @@ static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
 	nouveau_fence_new(drm->dmem->migrate.chan, false, &fence);
 	migrate_vma_pages(args);
 	nouveau_dmem_fence_done(&fence);
+	nouveau_pfns_map(svmm, args->vma->vm_mm, args->start, pfns, i);
 
 	while (nr_dma--) {
 		dma_unmap_page(drm->dev->dev, dma_addrs[nr_dma], PAGE_SIZE,
 				DMA_BIDIRECTIONAL);
 	}
-	/*
-	 * FIXME optimization: update GPU page table to point to newly migrated
-	 * memory.
-	 */
 	migrate_vma_finalize(args);
 }
 
 int
 nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
+			 struct nouveau_svmm *svmm,
 			 struct vm_area_struct *vma,
 			 unsigned long start,
 			 unsigned long end)
@@ -632,7 +639,8 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 		.vma		= vma,
 		.start		= start,
 	};
-	unsigned long c, i;
+	unsigned long i;
+	u64 *pfns;
 	int ret = -ENOMEM;
 
 	args.src = kcalloc(max, sizeof(*args.src), GFP_KERNEL);
@@ -646,19 +654,25 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 	if (!dma_addrs)
 		goto out_free_dst;
 
-	for (i = 0; i < npages; i += c) {
-		c = min(SG_MAX_SINGLE_ALLOC, npages);
-		args.end = start + (c << PAGE_SHIFT);
+	pfns = nouveau_pfns_alloc(max);
+	if (!pfns)
+		goto out_free_dma;
+
+	for (i = 0; i < npages; i += max) {
+		args.end = start + (max << PAGE_SHIFT);
 		ret = migrate_vma_setup(&args);
 		if (ret)
-			goto out_free_dma;
+			goto out_free_pfns;
 
 		if (args.cpages)
-			nouveau_dmem_migrate_chunk(drm, &args, dma_addrs);
+			nouveau_dmem_migrate_chunk(drm, svmm, &args, dma_addrs,
+						   pfns);
 		args.start = args.end;
 	}
 
 	ret = 0;
+out_free_pfns:
+	nouveau_pfns_free(pfns);
 out_free_dma:
 	kfree(dma_addrs);
 out_free_dst:
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.h b/drivers/gpu/drm/nouveau/nouveau_dmem.h
index 92394be5d649..3e03d9629a38 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.h
@@ -25,6 +25,7 @@
 struct drm_device;
 struct drm_file;
 struct nouveau_drm;
+struct nouveau_svmm;
 struct hmm_range;
 
 #if IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM)
@@ -34,6 +35,7 @@ void nouveau_dmem_suspend(struct nouveau_drm *);
 void nouveau_dmem_resume(struct nouveau_drm *);
 
 int nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
+			     struct nouveau_svmm *svmm,
 			     struct vm_area_struct *vma,
 			     unsigned long start,
 			     unsigned long end);
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 8dfa5cb74826..d33ae94c28ba 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -70,6 +70,12 @@ struct nouveau_svm {
 #define SVM_DBG(s,f,a...) NV_DEBUG((s)->drm, "svm: "f"\n", ##a)
 #define SVM_ERR(s,f,a...) NV_WARN((s)->drm, "svm: "f"\n", ##a)
 
+struct nouveau_pfnmap_args {
+	struct nvif_ioctl_v0 i;
+	struct nvif_ioctl_mthd_v0 m;
+	struct nvif_vmm_pfnmap_v0 p;
+};
+
 struct nouveau_ivmm {
 	struct nouveau_svmm *svmm;
 	u64 inst;
@@ -187,7 +193,8 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
 		addr = max(addr, vma->vm_start);
 		next = min(vma->vm_end, end);
 		/* This is a best effort so we ignore errors */
-		nouveau_dmem_migrate_vma(cli->drm, vma, addr, next);
+		nouveau_dmem_migrate_vma(cli->drm, cli->svm.svmm, vma, addr,
+					 next);
 		addr = next;
 	}
 
@@ -785,6 +792,56 @@ nouveau_svm_fault(struct nvif_notify *notify)
 	return NVIF_NOTIFY_KEEP;
 }
 
+static struct nouveau_pfnmap_args *
+nouveau_pfns_to_args(void *pfns)
+{
+	return container_of(pfns, struct nouveau_pfnmap_args, p.phys);
+}
+
+u64 *
+nouveau_pfns_alloc(unsigned long npages)
+{
+	struct nouveau_pfnmap_args *args;
+
+	args = kzalloc(struct_size(args, p.phys, npages), GFP_KERNEL);
+	if (!args)
+		return NULL;
+
+	args->i.type = NVIF_IOCTL_V0_MTHD;
+	args->m.method = NVIF_VMM_V0_PFNMAP;
+	args->p.page = PAGE_SHIFT;
+
+	return args->p.phys;
+}
+
+void
+nouveau_pfns_free(u64 *pfns)
+{
+	struct nouveau_pfnmap_args *args = nouveau_pfns_to_args(pfns);
+
+	kfree(args);
+}
+
+void
+nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
+		 unsigned long addr, u64 *pfns, unsigned long npages)
+{
+	struct nouveau_pfnmap_args *args = nouveau_pfns_to_args(pfns);
+	int ret;
+
+	args->p.addr = addr;
+	args->p.size = npages << PAGE_SHIFT;
+
+	mutex_lock(&svmm->mutex);
+
+	svmm->vmm->vmm.object.client->super = true;
+	ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, sizeof(*args) +
+				npages * sizeof(args->p.phys[0]), NULL);
+	svmm->vmm->vmm.object.client->super = false;
+
+	mutex_unlock(&svmm->mutex);
+}
+
 static void
 nouveau_svm_fault_buffer_fini(struct nouveau_svm *svm, int id)
 {
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.h b/drivers/gpu/drm/nouveau/nouveau_svm.h
index e839d8189461..f0fcd1b72e8b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.h
@@ -18,6 +18,11 @@ void nouveau_svmm_fini(struct nouveau_svmm **);
 int nouveau_svmm_join(struct nouveau_svmm *, u64 inst);
 void nouveau_svmm_part(struct nouveau_svmm *, u64 inst);
 int nouveau_svmm_bind(struct drm_device *, void *, struct drm_file *);
+
+u64 *nouveau_pfns_alloc(unsigned long npages);
+void nouveau_pfns_free(u64 *pfns);
+void nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
+		      unsigned long addr, u64 *pfns, unsigned long npages);
 #else /* IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM) */
 static inline void nouveau_svm_init(struct nouveau_drm *drm) {}
 static inline void nouveau_svm_fini(struct nouveau_drm *drm) {}
-- 
2.20.1
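
For reference, a minimal usage sketch of the helpers added above,
following the call pattern of nouveau_dmem_migrate_vma() and
nouveau_dmem_migrate_chunk() in the patch; npages, svmm, mm, and addr
are assumed to be supplied by the surrounding migration code:

	/* Allocate the pfn array; it is embedded in a
	 * struct nouveau_pfnmap_args so it can be handed to the
	 * PFNMAP ioctl without copying.
	 */
	u64 *pfns = nouveau_pfns_alloc(npages);

	if (pfns) {
		/* Fill pfns[i] for each page that migrated; the copy
		 * helper sets entries for pages that did not migrate
		 * to NVIF_VMM_PFNMAP_V0_NONE.
		 */
		nouveau_pfns_map(svmm, mm, addr, pfns, npages);
		nouveau_pfns_free(pfns);
	}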