From: Ralph Campbell
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Andrew Morton, Shuah Khan, Ben Skeggs, Ralph Campbell
Subject: [PATCH v3 4/5] nouveau/hmm: support mapping large sysmem pages
Date: Wed, 1 Jul 2020 15:53:51 -0700
Message-ID: <20200701225352.9649-5-rcampbell@nvidia.com>
In-Reply-To: <20200701225352.9649-1-rcampbell@nvidia.com>
References: <20200701225352.9649-1-rcampbell@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Nouveau currently only supports mapping PAGE_SIZE sized pages of system
memory when shared virtual memory (SVM) is enabled. Use the new
hmm_pfn_to_map_order() function to support mapping system memory pages
that are PMD_SIZE.
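
(Editorial note, not part of the patch.) hmm_pfn_to_map_order() reports, as a
power of two in units of pages, how large a mapping the CPU page tables hold
for the faulting PFN. The patch turns that order into a mapping size and an
aligned base address before handing the range to the GPU. Below is a minimal
user-space sketch of that arithmetic only; the values (PAGE_SHIFT of 12, a
PMD-sized order of 9, the example fault address) are illustrative assumptions,
not taken from the driver.

/* Standalone sketch of the order -> size/alignment math; not driver code. */
#include <stdio.h>

#define PAGE_SHIFT 12UL	/* assumed 4 KiB base pages */

int main(void)
{
	unsigned long fault_addr = 0x7f3a2b4c5000UL;	/* example faulting address */
	unsigned int map_order = 9;			/* PMD-sized mapping on x86-64 */

	unsigned long shift = map_order + PAGE_SHIFT;	/* corresponds to args->p.page */
	unsigned long size = 1UL << shift;		/* corresponds to args->p.size */
	unsigned long base = fault_addr & ~(size - 1);	/* corresponds to args->p.addr */
	unsigned long page_offset = (fault_addr - base) >> PAGE_SHIFT;

	printf("map %lu KiB at 0x%lx (fault 0x%lx is page %lu within it)\n",
	       size >> 10, base, fault_addr, page_offset);
	return 0;
}

With these assumed values the sketch reports a 2048 KiB mapping whose aligned
base contains the faulting page, mirroring how args->p.addr, args->p.size, and
the page pointer are adjusted in nouveau_hmm_convert_pfn() below.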
Signed-off-by: Ralph Campbell
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 53 ++++++++++++++++++++-------
 1 file changed, 40 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 665dede69bd1..c5f8ca6fb2e3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -514,38 +514,57 @@ static const struct mmu_interval_notifier_ops nouveau_svm_mni_ops = {
 };
 
 static void nouveau_hmm_convert_pfn(struct nouveau_drm *drm,
-                                    struct hmm_range *range, u64 *ioctl_addr)
+                                    struct hmm_range *range,
+                                    struct nouveau_pfnmap_args *args)
 {
         struct page *page;
 
         /*
-         * The ioctl_addr prepared here is passed through nvif_object_ioctl()
+         * The address prepared here is passed through nvif_object_ioctl()
          * to an eventual DMA map in something like gp100_vmm_pgt_pfn()
          *
          * This is all just encoding the internal hmm representation into a
         * different nouveau internal representation.
         */
         if (!(range->hmm_pfns[0] & HMM_PFN_VALID)) {
-                ioctl_addr[0] = 0;
+                args->p.phys[0] = 0;
                 return;
         }
 
         page = hmm_pfn_to_page(range->hmm_pfns[0]);
+        /*
+         * Only map compound pages to the GPU if the CPU is also mapping the
+         * page as a compound page. Otherwise, the PTE protections might not be
+         * consistent (e.g., CPU only maps part of a compound page).
+         * Note that the underlying page might still be larger than the
+         * CPU mapping (e.g., a PUD sized compound page partially mapped with
+         * a PMD sized page table entry).
+         */
+        if (hmm_pfn_to_map_order(range->hmm_pfns[0])) {
+                unsigned long addr = args->p.addr;
+
+                args->p.page = hmm_pfn_to_map_order(range->hmm_pfns[0]) +
+                               PAGE_SHIFT;
+                args->p.size = 1UL << args->p.page;
+                args->p.addr &= ~(args->p.size - 1);
+                page -= (addr - args->p.addr) >> PAGE_SHIFT;
+        }
         if (is_device_private_page(page))
-                ioctl_addr[0] = nouveau_dmem_page_addr(page) |
+                args->p.phys[0] = nouveau_dmem_page_addr(page) |
                                 NVIF_VMM_PFNMAP_V0_V |
                                 NVIF_VMM_PFNMAP_V0_VRAM;
         else
-                ioctl_addr[0] = page_to_phys(page) |
+                args->p.phys[0] = page_to_phys(page) |
                                 NVIF_VMM_PFNMAP_V0_V |
                                 NVIF_VMM_PFNMAP_V0_HOST;
         if (range->hmm_pfns[0] & HMM_PFN_WRITE)
-                ioctl_addr[0] |= NVIF_VMM_PFNMAP_V0_W;
+                args->p.phys[0] |= NVIF_VMM_PFNMAP_V0_W;
 }
 
 static int nouveau_range_fault(struct nouveau_svmm *svmm,
-                               struct nouveau_drm *drm, void *data, u32 size,
-                               u64 *ioctl_addr, unsigned long hmm_flags,
+                               struct nouveau_drm *drm,
+                               struct nouveau_pfnmap_args *args, u32 size,
+                               unsigned long hmm_flags,
                                struct svm_notifier *notifier)
 {
         unsigned long timeout =
@@ -585,10 +604,10 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
                 break;
         }
 
-        nouveau_hmm_convert_pfn(drm, &range, ioctl_addr);
+        nouveau_hmm_convert_pfn(drm, &range, args);
 
         svmm->vmm->vmm.object.client->super = true;
-        ret = nvif_object_ioctl(&svmm->vmm->vmm.object, data, size, NULL);
+        ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL);
         svmm->vmm->vmm.object.client->super = false;
         mutex_unlock(&svmm->mutex);
 
@@ -717,12 +736,13 @@ nouveau_svm_fault(struct nvif_notify *notify)
                                            args.i.p.addr, args.i.p.size,
                                            &nouveau_svm_mni_ops);
                 if (!ret) {
-                        ret = nouveau_range_fault(svmm, svm->drm, &args,
-                                sizeof(args), args.phys, hmm_flags, &notifier);
+                        ret = nouveau_range_fault(svmm, svm->drm, &args.i,
+                                sizeof(args), hmm_flags, &notifier);
                         mmu_interval_notifier_remove(&notifier.notifier);
                 }
                 mmput(mm);
 
+                limit = args.i.p.addr + args.i.p.size;
                 for (fn = fi; ++fn < buffer->fault_nr; ) {
                         /* It's okay to skip over duplicate addresses from the
                          * same SVMM as faults are ordered by access type such
@@ -730,9 +750,16 @@ nouveau_svm_fault(struct nvif_notify *notify)
                          *
                          * ie. WRITE faults appear first, thus any handling of
                          * pending READ faults will already be satisfied.
+                         * But if a large page is mapped, make sure subsequent
+                         * fault addresses have sufficient access permission.
                          */
                         if (buffer->fault[fn]->svmm != svmm ||
-                            buffer->fault[fn]->addr >= limit)
+                            buffer->fault[fn]->addr >= limit ||
+                            (buffer->fault[fi]->access == 0 /* READ. */ &&
+                             !(args.phys[0] & NVIF_VMM_PFNMAP_V0_V)) ||
+                            (buffer->fault[fi]->access != 0 /* READ. */ &&
+                             buffer->fault[fi]->access != 3 /* PREFETCH. */ &&
+                             !(args.phys[0] & NVIF_VMM_PFNMAP_V0_W)))
                                 break;
                 }
 
-- 
2.20.1
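
(Editorial note, not part of the patch.) The extended break condition in
nouveau_svm_fault() keeps skipping follow-up faults that fall inside the range
just handled only when the mapping created for the handled fault carries
enough permission: a READ fault needs the mapping to be valid, and any access
other than READ or PREFETCH needs it to be writable. Below is a standalone
restatement of that decision; the constants stand in for NVIF_VMM_PFNMAP_V0_V
and NVIF_VMM_PFNMAP_V0_W and their values are assumptions, while the access
codes 0 (READ) and 3 (PREFETCH) come from the patch comments.

/* Standalone sketch of the skip decision; not driver code. */
#include <stdbool.h>
#include <stdio.h>

#define FAULT_ACCESS_READ     0	/* from the patch comments */
#define FAULT_ACCESS_PREFETCH 3	/* from the patch comments */
#define PFNMAP_V 0x1UL		/* assumed "mapping valid" flag */
#define PFNMAP_W 0x2UL		/* assumed "mapping writable" flag */

/* Can later faults covered by this mapping be skipped, given the handled
 * fault's access type and the PFN flags the mapping ended up with? */
static bool can_skip_following_faults(unsigned int access, unsigned long pfn_flags)
{
	if (access == FAULT_ACCESS_READ)
		return pfn_flags & PFNMAP_V;
	if (access != FAULT_ACCESS_PREFETCH)
		return pfn_flags & PFNMAP_W;
	return true;	/* prefetch is satisfied by any outcome */
}

int main(void)
{
	printf("READ fault, read-only mapping:  skip=%d\n",
	       can_skip_following_faults(FAULT_ACCESS_READ, PFNMAP_V));
	printf("WRITE fault, read-only mapping: skip=%d\n",
	       can_skip_following_faults(1 /* assumed WRITE code */, PFNMAP_V));
	return 0;
}

When the check fails, the loop breaks early so the later faults are replayed
and handled on their own rather than being dropped against an under-privileged
large-page mapping.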