From: Ralph Campbell
To:
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe, Ben Skeggs,
Andrew Morton, Shuah Khan, Ralph Campbell
Subject: [PATCH 03/16] nouveau: fix mixed normal and device private page migration
Date: Fri, 19 Jun 2020 14:56:36 -0700
Message-ID: <20200619215649.32297-4-rcampbell@nvidia.com>
In-Reply-To: <20200619215649.32297-1-rcampbell@nvidia.com>
References: <20200619215649.32297-1-rcampbell@nvidia.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

The OpenCL function clEnqueueSVMMigrateMem(), called without any flags,
migrates memory in the given address range to device private memory. Some
of the source pages may already have been migrated to device private
memory. In that case, nouveau_dmem_migrate_copy_one() does not check
whether the source struct page is a device private page, so it computes an
incorrect GPU physical address for local memory, leading to data
corruption. Fix this by checking the source struct page and computing the
correct physical address.
Signed-off-by: Ralph Campbell
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index cc9993837508..f6a806ba3caa 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -540,6 +540,12 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 	if (!(src & MIGRATE_PFN_MIGRATE))
 		goto out;
 
+	if (spage && is_device_private_page(spage)) {
+		paddr = nouveau_dmem_page_addr(spage);
+		*dma_addr = DMA_MAPPING_ERROR;
+		goto done;
+	}
+
 	dpage = nouveau_dmem_page_alloc_locked(drm);
 	if (!dpage)
 		goto out;
@@ -560,6 +566,7 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 		goto out_free_page;
 	}
 
+done:
 	*pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
 		((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
 	if (src & MIGRATE_PFN_WRITE)
@@ -615,6 +622,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 	struct migrate_vma args = {
 		.vma = vma,
 		.start = start,
+		.src_owner = drm->dev,
 	};
 	unsigned long i;
 	u64 *pfns;
-- 
2.20.1