Date: Thu, 8 Aug 2019 09:07:01 +0200
From: Christoph Hellwig
To: Ralph Campbell
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, Christoph Hellwig, Jason Gunthorpe,
	Jérôme Glisse, Ben Skeggs
Subject: Re: [PATCH] nouveau/hmm: map pages after migration
Message-ID: <20190808070701.GC29382@lst.de>
References: <20190807150214.3629-1-rcampbell@nvidia.com>
In-Reply-To: <20190807150214.3629-1-rcampbell@nvidia.com>

On Wed, Aug 07, 2019 at 08:02:14AM -0700, Ralph Campbell wrote:
> When memory is migrated to the GPU it is likely to be accessed by GPU
> code soon afterwards. Instead of waiting for a GPU fault, map the
> migrated memory into the GPU page tables with the same access permissions
> as the source CPU page table entries. This preserves copy on write
> semantics.
>
> Signed-off-by: Ralph Campbell
> Cc: Christoph Hellwig
> Cc: Jason Gunthorpe
> Cc: "Jérôme Glisse"
> Cc: Ben Skeggs
> ---
>
> This patch is based on top of Christoph Hellwig's 9 patch series
> https://lore.kernel.org/linux-mm/20190729234611.GC7171@redhat.com/T/#u
> "turn the hmm migrate_vma upside down" but without patch 9
> "mm: remove the unused MIGRATE_PFN_WRITE" and adds a use for the flag.

This looks useful.  I've already dropped that patch for the pending
resend.

> static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
> -		struct vm_area_struct *vma, unsigned long addr,
> -		unsigned long src, dma_addr_t *dma_addr)
> +		struct vm_area_struct *vma, unsigned long src,
> +		dma_addr_t *dma_addr, u64 *pfn)

I'll pick up the removal of the unneeded addr argument for the patch
introducing nouveau_dmem_migrate_copy_one, thanks.

> static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
> -		struct nouveau_drm *drm, dma_addr_t *dma_addrs)
> +		struct nouveau_drm *drm, dma_addr_t *dma_addrs, u64 *pfns)
> {
> 	struct nouveau_fence *fence;
> 	unsigned long addr = args->start, nr_dma = 0, i;
>
> 	for (i = 0; addr < args->end; i++) {
> 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
> -				addr, args->src[i], &dma_addrs[nr_dma]);
> +				args->src[i], &dma_addrs[nr_dma], &pfns[i]);

Nit: I find the &pfns[i] way to pass the argument a little weird to read.
Why not "pfns + i"?

> +u64 *
> +nouveau_pfns_alloc(unsigned long npages)
> +{
> +	struct nouveau_pfnmap_args *args;
> +
> +	args = kzalloc(sizeof(*args) + npages * sizeof(args->p.phys[0]),

Can we use struct_size here?
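
Something like this, as a rough sketch only (assuming p.phys is the
trailing flexible array of struct nouveau_pfnmap_args, and with the GFP
flags that are cut off in the quote above filled in as GFP_KERNEL purely
for illustration; struct_size() comes from <linux/overflow.h>):

	/* overflow-checked version of
	 * sizeof(*args) + npages * sizeof(args->p.phys[0])
	 */
	args = kzalloc(struct_size(args, p.phys, npages), GFP_KERNEL);
	if (!args)
		return NULL;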
> +	int ret;
> +
> +	if (!svm)
> +		return;
> +
> +	mutex_lock(&svm->mutex);
> +	svmm = nouveau_find_svmm(svm, mm);
> +	if (!svmm) {
> +		mutex_unlock(&svm->mutex);
> +		return;
> +	}
> +	mutex_unlock(&svm->mutex);

Given that nouveau_find_svmm doesn't take any kind of reference, what
guarantees svmm doesn't go away after dropping the lock?

> @@ -44,5 +49,19 @@ static inline int nouveau_svmm_bind(struct drm_device *device, void *p,
>  {
>  	return -ENOSYS;
>  }
> +
> +u64 *nouveau_pfns_alloc(unsigned long npages)
> +{
> +	return NULL;
> +}
> +
> +void nouveau_pfns_free(u64 *pfns)
> +{
> +}
> +
> +void nouveau_pfns_map(struct nouveau_drm *drm, struct mm_struct *mm,
> +		unsigned long addr, u64 *pfns, unsigned long npages)
> +{
> +}
>  #endif /* IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM) */

nouveau_dmem.c and nouveau_svm.c are both built conditional on
CONFIG_DRM_NOUVEAU_SVM, so there is no need for stubs here.
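
In other words, declaring the new functions next to the existing
CONFIG_DRM_NOUVEAU_SVM prototypes should be all that is needed, roughly
(sketch based on the signatures quoted above):

	/* only called from nouveau_dmem.c, which is also only built
	 * when CONFIG_DRM_NOUVEAU_SVM is enabled, so no stubs needed
	 */
	u64 *nouveau_pfns_alloc(unsigned long npages);
	void nouveau_pfns_free(u64 *pfns);
	void nouveau_pfns_map(struct nouveau_drm *drm, struct mm_struct *mm,
			unsigned long addr, u64 *pfns, unsigned long npages);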