From: Ralph Campbell
To:
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe, Ben Skeggs,
Ralph Campbell
Subject: [RESEND PATCH 2/3] nouveau: fix mixed normal and device private page migration
Date: Mon, 22 Jun 2020 16:38:53 -0700
Message-ID: <20200622233854.10889-3-rcampbell@nvidia.com>
In-Reply-To: <20200622233854.10889-1-rcampbell@nvidia.com>
References: <20200622233854.10889-1-rcampbell@nvidia.com>

The OpenCL function clEnqueueSVMMigrateMem(), called without any flags, will
migrate memory in the given address range to device private memory. Some of
the source pages might already have been migrated to device private memory.
In that case, the source struct page is not checked to see if it is a device
private page, so the GPU's physical address of local memory is computed
incorrectly, leading to data corruption. Fix this by checking the source
struct page and computing the correct physical address.
Signed-off-by: Ralph Campbell
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index cc9993837508..f6a806ba3caa 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -540,6 +540,12 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 	if (!(src & MIGRATE_PFN_MIGRATE))
 		goto out;
 
+	if (spage && is_device_private_page(spage)) {
+		paddr = nouveau_dmem_page_addr(spage);
+		*dma_addr = DMA_MAPPING_ERROR;
+		goto done;
+	}
+
 	dpage = nouveau_dmem_page_alloc_locked(drm);
 	if (!dpage)
 		goto out;
@@ -560,6 +566,7 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 		goto out_free_page;
 	}
 
+done:
 	*pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
 	       ((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
 	if (src & MIGRATE_PFN_WRITE)
@@ -615,6 +622,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 	struct migrate_vma args = {
 		.vma = vma,
 		.start = start,
+		.src_owner = drm->dev,
 	};
 	unsigned long i;
 	u64 *pfns;
-- 
2.20.1