This patch series provides a new UAPI for the Nouveau driver in order to
support Vulkan features, such as sparse bindings and sparse residency.
Furthermore, with the DRM GPUVA manager it provides a new DRM core feature to
keep track of GPU virtual address (VA) mappings in a more generic way.
The DRM GPUVA manager is intended to help drivers implement userspace-manageable
GPU VA spaces in reference to the Vulkan API. In order to achieve this goal it
serves the following purposes in this context (a minimal driver sketch follows
the list below).
1) Provide infrastructure to track GPU VA allocations and mappings,
using an interval tree (RB-tree).
2) Generically connect GPU VA mappings to their backing buffers, in
particular DRM GEM objects.
3) Provide a common implementation to perform more complex mapping
operations on the GPU VA space. In particular splitting and merging
of GPU VA mappings, e.g. for intersecting mapping requests or partial
unmap requests.
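As a rough illustration, the following is a minimal sketch of how a driver
might hook up the GPUVA manager; the my_driver_* names are hypothetical and
the drm_gpuva_* signatures follow the version proposed in this series:

    #include <drm/drm_gpuva_mgr.h>

    static struct drm_gpuva_manager mgr;

    static int my_driver_sm_step_map(struct drm_gpuva_op *op, void *priv)
    {
            /* A real driver would program its page tables from
             * op->map.va.addr / op->map.va.range here. */
            return 0;
    }

    static const struct drm_gpuva_fn_ops my_driver_gpuva_ops = {
            .sm_step_map = my_driver_sm_step_map,
            /* .sm_step_remap and .sm_step_unmap omitted for brevity */
    };

    static void my_driver_vm_init(void)
    {
            /* Manage a 128 TiB VA space, reserving the first page for
             * the kernel; drm_gpuva_sm_map() can then derive split and
             * remap steps for intersecting mapping requests. */
            drm_gpuva_manager_init(&mgr, "example-vm",
                                   0, 1ULL << 47,
                                   0, PAGE_SIZE,
                                   &my_driver_gpuva_ops);
    }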
The new VM_BIND Nouveau UAPI builds on top of the DRM GPUVA manager and
provides the following new interfaces.
1) Initialize a GPU VA space via the new DRM_IOCTL_NOUVEAU_VM_INIT ioctl
for UMDs to specify the portion of VA space managed by the kernel and
userspace, respectively.
2) Allocate and free a VA space region as well as bind and unbind memory
to the GPU's VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl.
3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl.
Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC make use of the DRM
scheduler to queue jobs and support asynchronous processing with DRM syncobjs
as the synchronization mechanism.
By default, DRM_IOCTL_NOUVEAU_VM_BIND does synchronous processing, while
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only; a hypothetical
userspace sketch is shown below.
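The uapi structure and field names in this sketch follow the header added by
this series, but the values are illustrative only:

    struct drm_nouveau_vm_init init = {
            .kernel_managed_addr = 0,
            .kernel_managed_size = 0x1000, /* VA reserved for the kernel */
    };
    drmIoctl(fd, DRM_IOCTL_NOUVEAU_VM_INIT, &init);

    /* Map a GEM object at a userspace-chosen virtual address. */
    struct drm_nouveau_vm_bind_op op = {
            .op = DRM_NOUVEAU_VM_BIND_OP_MAP,
            .handle = bo_handle,
            .addr = va,
            .bo_offset = 0,
            .range = bo_size,
    };
    struct drm_nouveau_vm_bind bind = {
            .op_count = 1,
            .op_ptr = (uintptr_t)&op,
    };
    drmIoctl(fd, DRM_IOCTL_NOUVEAU_VM_BIND, &bind);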
The new VM_BIND UAPI for Nouveau also makes use of drm_exec (execution context
for GEM buffers) by Christian König. Since the patch implementing drm_exec was
not yet merged into drm-next, it is part of this series, as is a small fix for
this patch found while testing this series.
This patch series is also available at [1].
There is a Mesa NVK merge request by Dave Airlie [2] implementing the
corresponding userspace parts for this series.
The Vulkan CTS test suite passes the sparse binding and sparse residency test
cases for the new UAPI together with Dave's Mesa work.
There are also some test cases in the igt-gpu-tools project [3] for the new UAPI
and hence the DRM GPUVA manager. However, most of them test the DRM GPUVA
manager's logic through Nouveau's new UAPI and should be considered just as
implementation helpers.
However, I absolutely intend to turn those test cases into proper kunit test
cases for the DRM GPUVA manager, once and if we agree on its usefulness and
design.
[1] https://gitlab.freedesktop.org/nouvelles/kernel/-/tree/new-uapi-drm-next /
https://gitlab.freedesktop.org/nouvelles/kernel/-/merge_requests/1
[2] https://gitlab.freedesktop.org/nouveau/mesa/-/merge_requests/150/
[3] https://gitlab.freedesktop.org/dakr/igt-gpu-tools/-/tree/wip_nouveau_vm_bind
Changes in V2:
==============
Nouveau:
- Reworked the Nouveau VM_BIND UAPI to avoid memory allocations in fence
signalling critical sections. Updates to the VA space are split up into three
separate stages, where only the second stage executes in a fence signalling
critical section:
1. update the VA space, allocate new structures and page tables
2. (un-)map the requested memory bindings
3. free structures and page tables
- Separated generic job scheduler code from specific job implementations.
- Separated the EXEC and VM_BIND implementation of the UAPI.
- Reworked the locking parts of the nvkm/vmm RAW interface, such that
(un-)map operations can be executed in fence signalling critical sections.
GPUVA Manager:
- made drm_gpuva_regions optional for users of the GPUVA manager
- allow NULL GEMs for drm_gpuva entries
- switched from drm_mm to maple_tree for tracking drm_gpuva / drm_gpuva_region
entries
- provide callbacks for users to allocate custom drm_gpuva_op structures to
allow inheritance
- added user bits to drm_gpuva_flags
- added a prefetch operation type in order to support generating prefetch
operations the same way other operations are generated
- hand the responsibility for mutual exclusion for a GEM's
drm_gpuva list to the user; simplified corresponding (un-)link functions
Maple Tree:
- I added two maple tree patches to the series, one to support custom tree
walk macros and one to hand the locking responsibility to the user of the
GPUVA manager without pre-defined lockdep checks.
Changes in V3:
==============
Nouveau:
- Reworked the Nouveau VM_BIND UAPI to do the job cleanup (including page
table cleanup) within a workqueue rather than the job_free() callback of
the scheduler itself. A job_free() callback can stall the execution (run()
callback) of the next job in the queue. Since the page table cleanup
requires taking the same locks as page table allocation, doing it
directly in the job_free() callback would still violate the fence
signalling critical path.
- Separated Nouveau fence allocation and emit, such that we do not violate
the fence signalling critical path in EXEC jobs.
- Implement "regions" (for handling sparse mappings through PDEs and dual
page tables) within Nouveau.
- Drop the requirement for every mapping to be contained within a region.
- Add necessary synchronization of VM_BIND job operation sequences in order
to work around limitations in page table handling. This will be addressed
in a future re-work of Nouveau's page table handling.
- Fixed a couple of race conditions found through more testing. Thanks to
Dave for consistently trying to break it. :-)
GPUVA Manager:
- Implement pre-allocation capabilities for tree modifications within fence
signalling critical sections.
- Implement accessors to apply tree modifications while walking the GPUVA
tree in order to actually support processing of drm_gpuva_ops through
callbacks in fence signalling critical sections rather than through
pre-allocated operation lists.
- Remove merging of GPUVAs; the kernel has little to no knowledge about
the semantics of mapping sequences. Hence, merging is purely speculative.
It seems that gaining a significant (or at least a measurable) performance
increase through merging is way more likely to happen when userspace is
responsible for merging mappings up to the next larger page size if
possible.
- Since merging was removed, regions pretty much lose their reason to exist.
They might still be useful for handling dual page tables or similar
mechanisms, but since Nouveau seems to be the only driver having a need
for this for now, regions were removed from the GPUVA manager.
- Fixed a couple of maple_tree related issues; thanks to Liam for helping me
out.
Changes in V4:
==============
Nouveau:
- Refactored how specific VM_BIND and EXEC jobs are created and how their
arguments are passed to the generic job implementation.
- Fixed a UAF race condition where bind job ops could have been freed
already while still waiting for a job cleanup to finish. This is because
in certain cases we need to wait for mappings to actually be unmapped
before creating sparse regions in the same area.
- Re-based the code onto drm_exec v4 patch.
GPUVA Manager:
- Fixed a maple tree related bug when pre-allocating MA states.
(Boris Brezillon)
- Made struct drm_gpuva_fn_ops a const object in all occurrences.
(Boris Brezillon)
Changes in V5:
==============
Nouveau:
- Link and unlink GPUVAs outside the fence signalling critical path in
nouveau_uvmm_bind_job_submit() holding the dma-resv lock. Mutual exclusion
of BO evicts causing mapping invalidation and regular mapping operations
is ensured with dma-fences.
GPUVA Manager:
- Removed the separate lock for the GEM's GPUVA list. Linking and unlinking,
as well as iterating the GEM's GPUVA list, should be protected by the GEM's
dma-resv lock instead.
- Renamed the DRM_GPUVA_EVICTED flag to DRM_GPUVA_INVALIDATED. Mappings do not
get evicted; they might get invalidated due to eviction.
- The maple tree uses the 'unsigned long' type for node entries. While this
works for GPU VA spaces larger than 32-bit on 64-bit kernels, the GPU VA
space is limited to 32-bit on 32-bit kernels as well.
As long as we do not have a 64-bit capable maple tree for 32-bit kernels,
the GPU VA manager contains checks to throw warnings when GPU VA entries
exceed the maple tree's storage capabilities.
- Extended the Documentation and added example code as requested by Donald
Robson.
Changes in V6:
==============
Nouveau:
- Re-based the code onto drm_exec v5 patch.
GPUVA Manager:
- Switch from maple tree to RB-tree.
It turned out that mas_preallocate() requires the maple tree not to change
in between pre-allocating nodes with mas_preallocate() and inserting an
entry with the help of the pre-allocated memory (mas_store_prealloc()).
However, considering that drivers typically implement interfaces where
jobs to create GPU mappings can be submitted by userspace, are queued up
by the kernel and are processed asynchronously in dma-fence signalling
critical paths, this is a major issue. In the ioctl() used to submit a job
we'd need to pre-allocate memory with mas_preallocate(); however,
previously queued up jobs could concurrently alter the maple tree,
resulting in potentially insufficient pre-allocated memory for the
currently submitted job at execution time.
There is a detailed and still ongoing discussion about this topic on the
-mm list [1]. So far the only solution seems to be to use GFP_ATOMIC
and allocate memory directly in the fence signalling critical path, where
we need it. However, I think that is not what we want to rely on.
I think we should definitely continue in trying to find a solution on how
to fit in the maple tree (or how to make the maple tree fit in). However,
for now it seems to be more expedient to move on using a RB-tree.
[1] https://lore.kernel.org/lkml/[email protected]/
- Provide a flag to let drivers optionally provide their own lock for
linking and unlinking of GPUVAs to GEM objects. The DRM GPUVA manager
still does not take the locks itself, but rather contains lockdep checks
on either the GEM's dma-resv lock (default) or, if
DRM_GPUVA_MANAGER_LOCK_EXTERN is set, the driver-provided lock.
(Boris Brezillon)
Changes in V7:
==============
Nouveau:
- Rebase to drm_exec v7.
- Move drm_gem_gpuva_init() before ttm_bo_init_validate(), but after
initialization of the corresponding dma-resv.
GPUVA Manager:
- Fix drm_gpuva_find_first() range parameter in drm_gpuva_for_each_va*
macros. (Boris)
- Simplify drm_gpuva_for_each_va* macros using a __drm_gpuva_next() helper.
(Boris)
- Move lockdep checks for an optional external GEM gpuva list lock out of
the GPUVA Manager to drm_gem.h. (Boris)
- Fix code style issues pointed out by Thomas.
- Switch to EXPORT_SYMBOL_GPL(). (Christoph)
Christian König (1):
drm: execution context for GEM buffers v7
Danilo Krummrich (12):
drm: manager to keep track of GPUs VA mappings
drm: debugfs: provide infrastructure to dump a DRM GPU VA space
drm/nouveau: new VM_BIND uapi interfaces
drm/nouveau: get vmm via nouveau_cli_vmm()
drm/nouveau: bo: initialize GEM GPU VA interface
drm/nouveau: move usercopy helpers to nouveau_drv.h
drm/nouveau: fence: separate fence alloc and emit
drm/nouveau: fence: fail to emit when fence context is killed
drm/nouveau: chan: provide nouveau_channel_kill()
drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm
drm/nouveau: implement new VM_BIND uAPI
drm/nouveau: debugfs: implement DRM GPU VA debugfs
Documentation/gpu/driver-uapi.rst | 11 +
Documentation/gpu/drm-mm.rst | 48 +
drivers/gpu/drm/Kconfig | 6 +
drivers/gpu/drm/Makefile | 3 +
drivers/gpu/drm/drm_debugfs.c | 40 +
drivers/gpu/drm/drm_exec.c | 333 +++
drivers/gpu/drm/drm_gem.c | 3 +
drivers/gpu/drm/drm_gpuva_mgr.c | 1730 +++++++++++++++
drivers/gpu/drm/nouveau/Kbuild | 3 +
drivers/gpu/drm/nouveau/Kconfig | 2 +
drivers/gpu/drm/nouveau/dispnv04/crtc.c | 9 +-
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 26 +-
drivers/gpu/drm/nouveau/include/nvif/vmm.h | 19 +-
.../gpu/drm/nouveau/include/nvkm/subdev/mmu.h | 20 +-
drivers/gpu/drm/nouveau/nouveau_abi16.c | 24 +
drivers/gpu/drm/nouveau/nouveau_abi16.h | 1 +
drivers/gpu/drm/nouveau/nouveau_bo.c | 209 +-
drivers/gpu/drm/nouveau/nouveau_bo.h | 2 +-
drivers/gpu/drm/nouveau/nouveau_chan.c | 22 +-
drivers/gpu/drm/nouveau/nouveau_chan.h | 1 +
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +
drivers/gpu/drm/nouveau/nouveau_dmem.c | 9 +-
drivers/gpu/drm/nouveau/nouveau_drm.c | 27 +-
drivers/gpu/drm/nouveau/nouveau_drv.h | 94 +-
drivers/gpu/drm/nouveau/nouveau_exec.c | 414 ++++
drivers/gpu/drm/nouveau/nouveau_exec.h | 54 +
drivers/gpu/drm/nouveau/nouveau_fence.c | 23 +-
drivers/gpu/drm/nouveau/nouveau_fence.h | 5 +-
drivers/gpu/drm/nouveau/nouveau_gem.c | 62 +-
drivers/gpu/drm/nouveau/nouveau_mem.h | 5 +
drivers/gpu/drm/nouveau/nouveau_prime.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_sched.c | 462 ++++
drivers/gpu/drm/nouveau/nouveau_sched.h | 123 +
drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 1970 +++++++++++++++++
drivers/gpu/drm/nouveau/nouveau_uvmm.h | 107 +
drivers/gpu/drm/nouveau/nouveau_vmm.c | 4 +-
drivers/gpu/drm/nouveau/nvif/vmm.c | 100 +-
.../gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c | 213 +-
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 197 +-
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 25 +
.../drm/nouveau/nvkm/subdev/mmu/vmmgf100.c | 16 +-
.../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 16 +-
.../gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c | 27 +-
include/drm/drm_debugfs.h | 25 +
include/drm/drm_drv.h | 6 +
include/drm/drm_exec.h | 123 +
include/drm/drm_gem.h | 79 +
include/drm/drm_gpuva_mgr.h | 706 ++++++
include/uapi/drm/nouveau_drm.h | 209 ++
50 files changed, 7416 insertions(+), 240 deletions(-)
create mode 100644 drivers/gpu/drm/drm_exec.c
create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.c
create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.h
create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.c
create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.h
create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.c
create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.h
create mode 100644 include/drm/drm_exec.h
create mode 100644 include/drm/drm_gpuva_mgr.h
base-commit: 6725f33228077902ddac2a05e0ab361dee36e4ba
--
2.41.0
Move the usercopy helpers to a common driver header file to make them
usable for the new API added in subsequent commits.
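For illustration, a hypothetical call site copies a userspace array of uapi
structs in one go and releases it with u_free():

    buffers = u_memcpya(req->buffers, req->nr_buffers, sizeof(*buffers));
    if (IS_ERR(buffers))
            return PTR_ERR(buffers);
    /* ... validate and use the copied array ... */
    u_free(buffers);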
Signed-off-by: Danilo Krummrich <[email protected]>
---
drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++++++++++++++++++++++++++
drivers/gpu/drm/nouveau/nouveau_gem.c | 26 --------------------------
2 files changed, 26 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
index 81350e685b50..20a7f31b9082 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -130,6 +130,32 @@ nouveau_cli(struct drm_file *fpriv)
return fpriv ? fpriv->driver_priv : NULL;
}
+static inline void
+u_free(void *addr)
+{
+ kvfree(addr);
+}
+
+static inline void *
+u_memcpya(uint64_t user, unsigned nmemb, unsigned size)
+{
+ void *mem;
+ void __user *userptr = (void __force __user *)(uintptr_t)user;
+
+ size *= nmemb;
+
+ mem = kvmalloc(size, GFP_KERNEL);
+ if (!mem)
+ return ERR_PTR(-ENOMEM);
+
+ if (copy_from_user(mem, userptr, size)) {
+ u_free(mem);
+ return ERR_PTR(-EFAULT);
+ }
+
+ return mem;
+}
+
#include <nvif/object.h>
#include <nvif/parent.h>
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 45ca4eb98f54..a48f42aaeab9 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -613,32 +613,6 @@ nouveau_gem_pushbuf_validate(struct nouveau_channel *chan,
return 0;
}
-static inline void
-u_free(void *addr)
-{
- kvfree(addr);
-}
-
-static inline void *
-u_memcpya(uint64_t user, unsigned nmemb, unsigned size)
-{
- void *mem;
- void __user *userptr = (void __force __user *)(uintptr_t)user;
-
- size *= nmemb;
-
- mem = kvmalloc(size, GFP_KERNEL);
- if (!mem)
- return ERR_PTR(-ENOMEM);
-
- if (copy_from_user(mem, userptr, size)) {
- u_free(mem);
- return ERR_PTR(-EFAULT);
- }
-
- return mem;
-}
-
static int
nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
struct drm_nouveau_gem_pushbuf *req,
--
2.41.0
The new VM_BIND UAPI uses the DRM GPU VA manager to manage the VA space.
Hence, we need a way to manipulate the MMU's page tables without going
through the internal range allocator implemented by nvkm/vmm.
This patch adds a raw interface to nvkm/vmm that passes the responsibility
for managing the address space and the corresponding map/unmap/sparse
operations to the upper layers.
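The intended call sequence for the upper layers looks roughly as follows (a
hedged sketch, error handling omitted; the signatures match the
nvif_vmm_raw_* helpers added below):

    /* Reference the page tables backing the range, map memory into it,
     * and later tear the mapping down again. */
    ret = nvif_vmm_raw_get(vmm, addr, size, shift);
    ret = nvif_vmm_raw_map(vmm, addr, size, shift, argv, argc, mem, offset);

    ret = nvif_vmm_raw_unmap(vmm, addr, size, shift, false);
    ret = nvif_vmm_raw_put(vmm, addr, size, shift);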
Signed-off-by: Danilo Krummrich <[email protected]>
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 26 ++-
drivers/gpu/drm/nouveau/include/nvif/vmm.h | 19 +-
.../gpu/drm/nouveau/include/nvkm/subdev/mmu.h | 20 +-
drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_vmm.c | 4 +-
drivers/gpu/drm/nouveau/nvif/vmm.c | 100 +++++++-
.../gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c | 213 ++++++++++++++++--
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 197 ++++++++++++----
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 25 ++
.../drm/nouveau/nvkm/subdev/mmu/vmmgf100.c | 16 +-
.../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 16 +-
.../gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c | 27 ++-
12 files changed, 566 insertions(+), 99 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/include/nvif/if000c.h b/drivers/gpu/drm/nouveau/include/nvif/if000c.h
index 9c7ff56831c5..a5a182b3c28d 100644
--- a/drivers/gpu/drm/nouveau/include/nvif/if000c.h
+++ b/drivers/gpu/drm/nouveau/include/nvif/if000c.h
@@ -3,7 +3,10 @@
struct nvif_vmm_v0 {
__u8 version;
__u8 page_nr;
- __u8 managed;
+#define NVIF_VMM_V0_TYPE_UNMANAGED 0x00
+#define NVIF_VMM_V0_TYPE_MANAGED 0x01
+#define NVIF_VMM_V0_TYPE_RAW 0x02
+ __u8 type;
__u8 pad03[5];
__u64 addr;
__u64 size;
@@ -17,6 +20,7 @@ struct nvif_vmm_v0 {
#define NVIF_VMM_V0_UNMAP 0x04
#define NVIF_VMM_V0_PFNMAP 0x05
#define NVIF_VMM_V0_PFNCLR 0x06
+#define NVIF_VMM_V0_RAW 0x07
#define NVIF_VMM_V0_MTHD(i) ((i) + 0x80)
struct nvif_vmm_page_v0 {
@@ -66,6 +70,26 @@ struct nvif_vmm_unmap_v0 {
__u64 addr;
};
+struct nvif_vmm_raw_v0 {
+ __u8 version;
+#define NVIF_VMM_RAW_V0_GET 0x0
+#define NVIF_VMM_RAW_V0_PUT 0x1
+#define NVIF_VMM_RAW_V0_MAP 0x2
+#define NVIF_VMM_RAW_V0_UNMAP 0x3
+#define NVIF_VMM_RAW_V0_SPARSE 0x4
+ __u8 op;
+ __u8 sparse;
+ __u8 ref;
+ __u8 shift;
+ __u32 argc;
+ __u8 pad01[7];
+ __u64 addr;
+ __u64 size;
+ __u64 offset;
+ __u64 memory;
+ __u64 argv;
+};
+
struct nvif_vmm_pfnmap_v0 {
__u8 version;
__u8 page;
diff --git a/drivers/gpu/drm/nouveau/include/nvif/vmm.h b/drivers/gpu/drm/nouveau/include/nvif/vmm.h
index a2ee92201ace..0ecedd0ee0a5 100644
--- a/drivers/gpu/drm/nouveau/include/nvif/vmm.h
+++ b/drivers/gpu/drm/nouveau/include/nvif/vmm.h
@@ -4,6 +4,12 @@
struct nvif_mem;
struct nvif_mmu;
+enum nvif_vmm_type {
+ UNMANAGED,
+ MANAGED,
+ RAW,
+};
+
enum nvif_vmm_get {
ADDR,
PTES,
@@ -30,8 +36,9 @@ struct nvif_vmm {
int page_nr;
};
-int nvif_vmm_ctor(struct nvif_mmu *, const char *name, s32 oclass, bool managed,
- u64 addr, u64 size, void *argv, u32 argc, struct nvif_vmm *);
+int nvif_vmm_ctor(struct nvif_mmu *, const char *name, s32 oclass,
+ enum nvif_vmm_type, u64 addr, u64 size, void *argv, u32 argc,
+ struct nvif_vmm *);
void nvif_vmm_dtor(struct nvif_vmm *);
int nvif_vmm_get(struct nvif_vmm *, enum nvif_vmm_get, bool sparse,
u8 page, u8 align, u64 size, struct nvif_vma *);
@@ -39,4 +46,12 @@ void nvif_vmm_put(struct nvif_vmm *, struct nvif_vma *);
int nvif_vmm_map(struct nvif_vmm *, u64 addr, u64 size, void *argv, u32 argc,
struct nvif_mem *, u64 offset);
int nvif_vmm_unmap(struct nvif_vmm *, u64);
+
+int nvif_vmm_raw_get(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift);
+int nvif_vmm_raw_put(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift);
+int nvif_vmm_raw_map(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift,
+ void *argv, u32 argc, struct nvif_mem *mem, u64 offset);
+int nvif_vmm_raw_unmap(struct nvif_vmm *vmm, u64 addr, u64 size,
+ u8 shift, bool sparse);
+int nvif_vmm_raw_sparse(struct nvif_vmm *vmm, u64 addr, u64 size, bool ref);
#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
index 70e7887ef4b4..2fd2f2433fc7 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
@@ -17,6 +17,7 @@ struct nvkm_vma {
bool part:1; /* Region was split from an allocated region by map(). */
bool busy:1; /* Region busy (for temporarily preventing user access). */
bool mapped:1; /* Region contains valid pages. */
+ bool no_comp:1; /* Force no memory compression. */
struct nvkm_memory *memory; /* Memory currently mapped into VMA. */
struct nvkm_tags *tags; /* Compression tag reference. */
};
@@ -27,10 +28,26 @@ struct nvkm_vmm {
const char *name;
u32 debug;
struct kref kref;
- struct mutex mutex;
+
+ struct {
+ struct mutex vmm;
+ struct mutex ref;
+ struct mutex map;
+ } mutex;
u64 start;
u64 limit;
+ struct {
+ struct {
+ u64 addr;
+ u64 size;
+ } p;
+ struct {
+ u64 addr;
+ u64 size;
+ } n;
+ bool raw;
+ } managed;
struct nvkm_vmm_pt *pd;
struct list_head join;
@@ -70,6 +87,7 @@ struct nvkm_vmm_map {
const struct nvkm_vmm_page *page;
+ bool no_comp;
struct nvkm_tags *tags;
u64 next;
u64 type;
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index a74ba8d84ba7..186351ecf72f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -350,7 +350,7 @@ nouveau_svmm_init(struct drm_device *dev, void *data,
* VMM instead of the standard one.
*/
ret = nvif_vmm_ctor(&cli->mmu, "svmVmm",
- cli->vmm.vmm.object.oclass, true,
+ cli->vmm.vmm.object.oclass, MANAGED,
args->unmanaged_addr, args->unmanaged_size,
&(struct gp100_vmm_v0) {
.fault_replay = true,
diff --git a/drivers/gpu/drm/nouveau/nouveau_vmm.c b/drivers/gpu/drm/nouveau/nouveau_vmm.c
index 67d6619fcd5e..a6602c012671 100644
--- a/drivers/gpu/drm/nouveau/nouveau_vmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_vmm.c
@@ -128,8 +128,8 @@ nouveau_vmm_fini(struct nouveau_vmm *vmm)
int
nouveau_vmm_init(struct nouveau_cli *cli, s32 oclass, struct nouveau_vmm *vmm)
{
- int ret = nvif_vmm_ctor(&cli->mmu, "drmVmm", oclass, false, PAGE_SIZE,
- 0, NULL, 0, &vmm->vmm);
+ int ret = nvif_vmm_ctor(&cli->mmu, "drmVmm", oclass, UNMANAGED,
+ PAGE_SIZE, 0, NULL, 0, &vmm->vmm);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/nouveau/nvif/vmm.c b/drivers/gpu/drm/nouveau/nvif/vmm.c
index 6053d6dc2184..99296f03371a 100644
--- a/drivers/gpu/drm/nouveau/nvif/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvif/vmm.c
@@ -104,6 +104,90 @@ nvif_vmm_get(struct nvif_vmm *vmm, enum nvif_vmm_get type, bool sparse,
return ret;
}
+int
+nvif_vmm_raw_get(struct nvif_vmm *vmm, u64 addr, u64 size,
+ u8 shift)
+{
+ struct nvif_vmm_raw_v0 args = {
+ .version = 0,
+ .op = NVIF_VMM_RAW_V0_GET,
+ .addr = addr,
+ .size = size,
+ .shift = shift,
+ };
+
+ return nvif_object_mthd(&vmm->object, NVIF_VMM_V0_RAW,
+ &args, sizeof(args));
+}
+
+int
+nvif_vmm_raw_put(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift)
+{
+ struct nvif_vmm_raw_v0 args = {
+ .version = 0,
+ .op = NVIF_VMM_RAW_V0_PUT,
+ .addr = addr,
+ .size = size,
+ .shift = shift,
+ };
+
+ return nvif_object_mthd(&vmm->object, NVIF_VMM_V0_RAW,
+ &args, sizeof(args));
+}
+
+int
+nvif_vmm_raw_map(struct nvif_vmm *vmm, u64 addr, u64 size, u8 shift,
+ void *argv, u32 argc, struct nvif_mem *mem, u64 offset)
+{
+ struct nvif_vmm_raw_v0 args = {
+ .version = 0,
+ .op = NVIF_VMM_RAW_V0_MAP,
+ .addr = addr,
+ .size = size,
+ .shift = shift,
+ .memory = nvif_handle(&mem->object),
+ .offset = offset,
+ .argv = (u64)(uintptr_t)argv,
+ .argc = argc,
+ };
+
+
+ return nvif_object_mthd(&vmm->object, NVIF_VMM_V0_RAW,
+ &args, sizeof(args));
+}
+
+int
+nvif_vmm_raw_unmap(struct nvif_vmm *vmm, u64 addr, u64 size,
+ u8 shift, bool sparse)
+{
+ struct nvif_vmm_raw_v0 args = {
+ .version = 0,
+ .op = NVIF_VMM_RAW_V0_UNMAP,
+ .addr = addr,
+ .size = size,
+ .shift = shift,
+ .sparse = sparse,
+ };
+
+ return nvif_object_mthd(&vmm->object, NVIF_VMM_V0_RAW,
+ &args, sizeof(args));
+}
+
+int
+nvif_vmm_raw_sparse(struct nvif_vmm *vmm, u64 addr, u64 size, bool ref)
+{
+ struct nvif_vmm_raw_v0 args = {
+ .version = 0,
+ .op = NVIF_VMM_RAW_V0_SPARSE,
+ .addr = addr,
+ .size = size,
+ .ref = ref,
+ };
+
+ return nvif_object_mthd(&vmm->object, NVIF_VMM_V0_RAW,
+ &args, sizeof(args));
+}
+
void
nvif_vmm_dtor(struct nvif_vmm *vmm)
{
@@ -112,8 +196,9 @@ nvif_vmm_dtor(struct nvif_vmm *vmm)
}
int
-nvif_vmm_ctor(struct nvif_mmu *mmu, const char *name, s32 oclass, bool managed,
- u64 addr, u64 size, void *argv, u32 argc, struct nvif_vmm *vmm)
+nvif_vmm_ctor(struct nvif_mmu *mmu, const char *name, s32 oclass,
+ enum nvif_vmm_type type, u64 addr, u64 size, void *argv, u32 argc,
+ struct nvif_vmm *vmm)
{
struct nvif_vmm_v0 *args;
u32 argn = sizeof(*args) + argc;
@@ -125,9 +210,18 @@ nvif_vmm_ctor(struct nvif_mmu *mmu, const char *name, s32 oclass, bool managed,
if (!(args = kmalloc(argn, GFP_KERNEL)))
return -ENOMEM;
args->version = 0;
- args->managed = managed;
args->addr = addr;
args->size = size;
+
+ switch (type) {
+ case UNMANAGED: args->type = NVIF_VMM_V0_TYPE_UNMANAGED; break;
+ case MANAGED: args->type = NVIF_VMM_V0_TYPE_MANAGED; break;
+ case RAW: args->type = NVIF_VMM_V0_TYPE_RAW; break;
+ default:
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
memcpy(args->data, argv, argc);
ret = nvif_object_ctor(&mmu->object, name ? name : "nvifVmm", 0,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c
index 524cd3c0e3fe..38b7ced934b1 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c
@@ -58,10 +58,13 @@ nvkm_uvmm_mthd_pfnclr(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
} else
return ret;
+ if (nvkm_vmm_in_managed_range(vmm, addr, size) && vmm->managed.raw)
+ return -EINVAL;
+
if (size) {
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
ret = nvkm_vmm_pfn_unmap(vmm, addr, size);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
}
return ret;
@@ -88,10 +91,13 @@ nvkm_uvmm_mthd_pfnmap(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
} else
return ret;
+ if (nvkm_vmm_in_managed_range(vmm, addr, size) && vmm->managed.raw)
+ return -EINVAL;
+
if (size) {
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
ret = nvkm_vmm_pfn_map(vmm, page, addr, size, phys);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
}
return ret;
@@ -113,7 +119,10 @@ nvkm_uvmm_mthd_unmap(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
} else
return ret;
- mutex_lock(&vmm->mutex);
+ if (nvkm_vmm_in_managed_range(vmm, addr, 0) && vmm->managed.raw)
+ return -EINVAL;
+
+ mutex_lock(&vmm->mutex.vmm);
vma = nvkm_vmm_node_search(vmm, addr);
if (ret = -ENOENT, !vma || vma->addr != addr) {
VMM_DEBUG(vmm, "lookup %016llx: %016llx",
@@ -134,7 +143,7 @@ nvkm_uvmm_mthd_unmap(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
nvkm_vmm_unmap_locked(vmm, vma, false);
ret = 0;
done:
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
return ret;
}
@@ -159,13 +168,16 @@ nvkm_uvmm_mthd_map(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
} else
return ret;
+ if (nvkm_vmm_in_managed_range(vmm, addr, size) && vmm->managed.raw)
+ return -EINVAL;
+
memory = nvkm_umem_search(client, handle);
if (IS_ERR(memory)) {
VMM_DEBUG(vmm, "memory %016llx %ld\n", handle, PTR_ERR(memory));
return PTR_ERR(memory);
}
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
if (ret = -ENOENT, !(vma = nvkm_vmm_node_search(vmm, addr))) {
VMM_DEBUG(vmm, "lookup %016llx", addr);
goto fail;
@@ -198,7 +210,7 @@ nvkm_uvmm_mthd_map(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
}
}
vma->busy = true;
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
ret = nvkm_memory_map(memory, offset, vmm, vma, argv, argc);
if (ret == 0) {
@@ -207,11 +219,11 @@ nvkm_uvmm_mthd_map(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
return 0;
}
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
vma->busy = false;
nvkm_vmm_unmap_region(vmm, vma);
fail:
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
nvkm_memory_unref(&memory);
return ret;
}
@@ -232,7 +244,7 @@ nvkm_uvmm_mthd_put(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
} else
return ret;
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
vma = nvkm_vmm_node_search(vmm, args->v0.addr);
if (ret = -ENOENT, !vma || vma->addr != addr || vma->part) {
VMM_DEBUG(vmm, "lookup %016llx: %016llx %d", addr,
@@ -248,7 +260,7 @@ nvkm_uvmm_mthd_put(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
nvkm_vmm_put_locked(vmm, vma);
ret = 0;
done:
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
return ret;
}
@@ -275,10 +287,10 @@ nvkm_uvmm_mthd_get(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
} else
return ret;
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
ret = nvkm_vmm_get_locked(vmm, getref, mapref, sparse,
page, align, size, &vma);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
if (ret)
return ret;
@@ -314,6 +326,167 @@ nvkm_uvmm_mthd_page(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
return 0;
}
+static inline int
+nvkm_uvmm_page_index(struct nvkm_uvmm *uvmm, u64 size, u8 shift, u8 *refd)
+{
+ struct nvkm_vmm *vmm = uvmm->vmm;
+ const struct nvkm_vmm_page *page;
+
+ if (likely(shift)) {
+ for (page = vmm->func->page; page->shift; page++) {
+ if (shift == page->shift)
+ break;
+ }
+
+ if (!page->shift || !IS_ALIGNED(size, 1ULL << page->shift)) {
+ VMM_DEBUG(vmm, "page %d %016llx", shift, size);
+ return -EINVAL;
+ }
+ } else {
+ return -EINVAL;
+ }
+ *refd = page - vmm->func->page;
+
+ return 0;
+}
+
+static int
+nvkm_uvmm_mthd_raw_get(struct nvkm_uvmm *uvmm, struct nvif_vmm_raw_v0 *args)
+{
+ struct nvkm_vmm *vmm = uvmm->vmm;
+ u8 refd;
+ int ret;
+
+ if (!nvkm_vmm_in_managed_range(vmm, args->addr, args->size))
+ return -EINVAL;
+
+ ret = nvkm_uvmm_page_index(uvmm, args->size, args->shift, &refd);
+ if (ret)
+ return ret;
+
+ return nvkm_vmm_raw_get(vmm, args->addr, args->size, refd);
+}
+
+static int
+nvkm_uvmm_mthd_raw_put(struct nvkm_uvmm *uvmm, struct nvif_vmm_raw_v0 *args)
+{
+ struct nvkm_vmm *vmm = uvmm->vmm;
+ u8 refd;
+ int ret;
+
+ if (!nvkm_vmm_in_managed_range(vmm, args->addr, args->size))
+ return -EINVAL;
+
+ ret = nvkm_uvmm_page_index(uvmm, args->size, args->shift, &refd);
+ if (ret)
+ return ret;
+
+ nvkm_vmm_raw_put(vmm, args->addr, args->size, refd);
+
+ return 0;
+}
+
+static int
+nvkm_uvmm_mthd_raw_map(struct nvkm_uvmm *uvmm, struct nvif_vmm_raw_v0 *args)
+{
+ struct nvkm_client *client = uvmm->object.client;
+ struct nvkm_vmm *vmm = uvmm->vmm;
+ struct nvkm_vma vma = {
+ .addr = args->addr,
+ .size = args->size,
+ .used = true,
+ .mapref = false,
+ .no_comp = true,
+ };
+ struct nvkm_memory *memory;
+ u64 handle = args->memory;
+ u8 refd;
+ int ret;
+
+ if (!nvkm_vmm_in_managed_range(vmm, args->addr, args->size))
+ return -EINVAL;
+
+ ret = nvkm_uvmm_page_index(uvmm, args->size, args->shift, &refd);
+ if (ret)
+ return ret;
+
+ vma.page = vma.refd = refd;
+
+ memory = nvkm_umem_search(client, args->memory);
+ if (IS_ERR(memory)) {
+ VMM_DEBUG(vmm, "memory %016llx %ld\n", handle, PTR_ERR(memory));
+ return PTR_ERR(memory);
+ }
+
+ ret = nvkm_memory_map(memory, args->offset, vmm, &vma,
+ (void *)args->argv, args->argc);
+
+ nvkm_memory_unref(&vma.memory);
+ nvkm_memory_unref(&memory);
+ return ret;
+}
+
+static int
+nvkm_uvmm_mthd_raw_unmap(struct nvkm_uvmm *uvmm, struct nvif_vmm_raw_v0 *args)
+{
+ struct nvkm_vmm *vmm = uvmm->vmm;
+ u8 refd;
+ int ret;
+
+ if (!nvkm_vmm_in_managed_range(vmm, args->addr, args->size))
+ return -EINVAL;
+
+ ret = nvkm_uvmm_page_index(uvmm, args->size, args->shift, &refd);
+ if (ret)
+ return ret;
+
+ nvkm_vmm_raw_unmap(vmm, args->addr, args->size,
+ args->sparse, refd);
+
+ return 0;
+}
+
+static int
+nvkm_uvmm_mthd_raw_sparse(struct nvkm_uvmm *uvmm, struct nvif_vmm_raw_v0 *args)
+{
+ struct nvkm_vmm *vmm = uvmm->vmm;
+
+ if (!nvkm_vmm_in_managed_range(vmm, args->addr, args->size))
+ return -EINVAL;
+
+ return nvkm_vmm_raw_sparse(vmm, args->addr, args->size, args->ref);
+}
+
+static int
+nvkm_uvmm_mthd_raw(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
+{
+ union {
+ struct nvif_vmm_raw_v0 v0;
+ } *args = argv;
+ int ret = -ENOSYS;
+
+ if (!uvmm->vmm->managed.raw)
+ return -EINVAL;
+
+ if ((ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, true)))
+ return ret;
+
+ switch (args->v0.op) {
+ case NVIF_VMM_RAW_V0_GET:
+ return nvkm_uvmm_mthd_raw_get(uvmm, &args->v0);
+ case NVIF_VMM_RAW_V0_PUT:
+ return nvkm_uvmm_mthd_raw_put(uvmm, &args->v0);
+ case NVIF_VMM_RAW_V0_MAP:
+ return nvkm_uvmm_mthd_raw_map(uvmm, &args->v0);
+ case NVIF_VMM_RAW_V0_UNMAP:
+ return nvkm_uvmm_mthd_raw_unmap(uvmm, &args->v0);
+ case NVIF_VMM_RAW_V0_SPARSE:
+ return nvkm_uvmm_mthd_raw_sparse(uvmm, &args->v0);
+ default:
+ return -EINVAL;
+ };
+}
+
static int
nvkm_uvmm_mthd(struct nvkm_object *object, u32 mthd, void *argv, u32 argc)
{
@@ -326,6 +499,7 @@ nvkm_uvmm_mthd(struct nvkm_object *object, u32 mthd, void *argv, u32 argc)
case NVIF_VMM_V0_UNMAP : return nvkm_uvmm_mthd_unmap (uvmm, argv, argc);
case NVIF_VMM_V0_PFNMAP: return nvkm_uvmm_mthd_pfnmap(uvmm, argv, argc);
case NVIF_VMM_V0_PFNCLR: return nvkm_uvmm_mthd_pfnclr(uvmm, argv, argc);
+ case NVIF_VMM_V0_RAW : return nvkm_uvmm_mthd_raw (uvmm, argv, argc);
case NVIF_VMM_V0_MTHD(0x00) ... NVIF_VMM_V0_MTHD(0x7f):
if (uvmm->vmm->func->mthd) {
return uvmm->vmm->func->mthd(uvmm->vmm,
@@ -366,10 +540,11 @@ nvkm_uvmm_new(const struct nvkm_oclass *oclass, void *argv, u32 argc,
struct nvkm_uvmm *uvmm;
int ret = -ENOSYS;
u64 addr, size;
- bool managed;
+ bool managed, raw;
if (!(ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, more))) {
- managed = args->v0.managed != 0;
+ managed = args->v0.type == NVIF_VMM_V0_TYPE_MANAGED;
+ raw = args->v0.type == NVIF_VMM_V0_TYPE_RAW;
addr = args->v0.addr;
size = args->v0.size;
} else
@@ -377,12 +552,13 @@ nvkm_uvmm_new(const struct nvkm_oclass *oclass, void *argv, u32 argc,
if (!(uvmm = kzalloc(sizeof(*uvmm), GFP_KERNEL)))
return -ENOMEM;
+
nvkm_object_ctor(&nvkm_uvmm, oclass, &uvmm->object);
*pobject = &uvmm->object;
if (!mmu->vmm) {
- ret = mmu->func->vmm.ctor(mmu, managed, addr, size, argv, argc,
- NULL, "user", &uvmm->vmm);
+ ret = mmu->func->vmm.ctor(mmu, managed || raw, addr, size,
+ argv, argc, NULL, "user", &uvmm->vmm);
if (ret)
return ret;
@@ -393,6 +569,7 @@ nvkm_uvmm_new(const struct nvkm_oclass *oclass, void *argv, u32 argc,
uvmm->vmm = nvkm_vmm_ref(mmu->vmm);
}
+ uvmm->vmm->managed.raw = raw;
page = uvmm->vmm->func->page;
args->v0.page_nr = 0;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index ae793f400ba1..eb5fcadcb39a 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -676,41 +676,18 @@ nvkm_vmm_ptes_sparse(struct nvkm_vmm *vmm, u64 addr, u64 size, bool ref)
return 0;
}
-static void
-nvkm_vmm_ptes_unmap_put(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
- u64 addr, u64 size, bool sparse, bool pfn)
-{
- const struct nvkm_vmm_desc_func *func = page->desc->func;
- nvkm_vmm_iter(vmm, page, addr, size, "unmap + unref",
- false, pfn, nvkm_vmm_unref_ptes, NULL, NULL,
- sparse ? func->sparse : func->invalid ? func->invalid :
- func->unmap);
-}
-
-static int
-nvkm_vmm_ptes_get_map(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
- u64 addr, u64 size, struct nvkm_vmm_map *map,
- nvkm_vmm_pte_func func)
-{
- u64 fail = nvkm_vmm_iter(vmm, page, addr, size, "ref + map", true,
- false, nvkm_vmm_ref_ptes, func, map, NULL);
- if (fail != ~0ULL) {
- if ((size = fail - addr))
- nvkm_vmm_ptes_unmap_put(vmm, page, addr, size, false, false);
- return -ENOMEM;
- }
- return 0;
-}
-
static void
nvkm_vmm_ptes_unmap(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
u64 addr, u64 size, bool sparse, bool pfn)
{
const struct nvkm_vmm_desc_func *func = page->desc->func;
+
+ mutex_lock(&vmm->mutex.map);
nvkm_vmm_iter(vmm, page, addr, size, "unmap", false, pfn,
NULL, NULL, NULL,
sparse ? func->sparse : func->invalid ? func->invalid :
func->unmap);
+ mutex_unlock(&vmm->mutex.map);
}
static void
@@ -718,33 +695,108 @@ nvkm_vmm_ptes_map(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
u64 addr, u64 size, struct nvkm_vmm_map *map,
nvkm_vmm_pte_func func)
{
+ mutex_lock(&vmm->mutex.map);
nvkm_vmm_iter(vmm, page, addr, size, "map", false, false,
NULL, func, map, NULL);
+ mutex_unlock(&vmm->mutex.map);
}
static void
-nvkm_vmm_ptes_put(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
- u64 addr, u64 size)
+nvkm_vmm_ptes_put_locked(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
+ u64 addr, u64 size)
{
nvkm_vmm_iter(vmm, page, addr, size, "unref", false, false,
nvkm_vmm_unref_ptes, NULL, NULL, NULL);
}
+static void
+nvkm_vmm_ptes_put(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
+ u64 addr, u64 size)
+{
+ mutex_lock(&vmm->mutex.ref);
+ nvkm_vmm_ptes_put_locked(vmm, page, addr, size);
+ mutex_unlock(&vmm->mutex.ref);
+}
+
static int
nvkm_vmm_ptes_get(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
u64 addr, u64 size)
{
- u64 fail = nvkm_vmm_iter(vmm, page, addr, size, "ref", true, false,
- nvkm_vmm_ref_ptes, NULL, NULL, NULL);
+ u64 fail;
+
+ mutex_lock(&vmm->mutex.ref);
+ fail = nvkm_vmm_iter(vmm, page, addr, size, "ref", true, false,
+ nvkm_vmm_ref_ptes, NULL, NULL, NULL);
if (fail != ~0ULL) {
if (fail != addr)
- nvkm_vmm_ptes_put(vmm, page, addr, fail - addr);
+ nvkm_vmm_ptes_put_locked(vmm, page, addr, fail - addr);
+ mutex_unlock(&vmm->mutex.ref);
+ return -ENOMEM;
+ }
+ mutex_unlock(&vmm->mutex.ref);
+ return 0;
+}
+
+static void
+__nvkm_vmm_ptes_unmap_put(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
+ u64 addr, u64 size, bool sparse, bool pfn)
+{
+ const struct nvkm_vmm_desc_func *func = page->desc->func;
+
+ nvkm_vmm_iter(vmm, page, addr, size, "unmap + unref",
+ false, pfn, nvkm_vmm_unref_ptes, NULL, NULL,
+ sparse ? func->sparse : func->invalid ? func->invalid :
+ func->unmap);
+}
+
+static void
+nvkm_vmm_ptes_unmap_put(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
+ u64 addr, u64 size, bool sparse, bool pfn)
+{
+ if (vmm->managed.raw) {
+ nvkm_vmm_ptes_unmap(vmm, page, addr, size, sparse, pfn);
+ nvkm_vmm_ptes_put(vmm, page, addr, size);
+ } else {
+ __nvkm_vmm_ptes_unmap_put(vmm, page, addr, size, sparse, pfn);
+ }
+}
+
+static int
+__nvkm_vmm_ptes_get_map(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
+ u64 addr, u64 size, struct nvkm_vmm_map *map,
+ nvkm_vmm_pte_func func)
+{
+ u64 fail = nvkm_vmm_iter(vmm, page, addr, size, "ref + map", true,
+ false, nvkm_vmm_ref_ptes, func, map, NULL);
+ if (fail != ~0ULL) {
+ if ((size = fail - addr))
+ nvkm_vmm_ptes_unmap_put(vmm, page, addr, size, false, false);
return -ENOMEM;
}
return 0;
}
-static inline struct nvkm_vma *
+static int
+nvkm_vmm_ptes_get_map(struct nvkm_vmm *vmm, const struct nvkm_vmm_page *page,
+ u64 addr, u64 size, struct nvkm_vmm_map *map,
+ nvkm_vmm_pte_func func)
+{
+ int ret;
+
+ if (vmm->managed.raw) {
+ ret = nvkm_vmm_ptes_get(vmm, page, addr, size);
+ if (ret)
+ return ret;
+
+ nvkm_vmm_ptes_map(vmm, page, addr, size, map, func);
+
+ return 0;
+ } else {
+ return __nvkm_vmm_ptes_get_map(vmm, page, addr, size, map, func);
+ }
+}
+
+struct nvkm_vma *
nvkm_vma_new(u64 addr, u64 size)
{
struct nvkm_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
@@ -1045,7 +1097,9 @@ nvkm_vmm_ctor(const struct nvkm_vmm_func *func, struct nvkm_mmu *mmu,
vmm->debug = mmu->subdev.debug;
kref_init(&vmm->kref);
- __mutex_init(&vmm->mutex, "&vmm->mutex", key ? key : &_key);
+ __mutex_init(&vmm->mutex.vmm, "&vmm->mutex.vmm", key ? key : &_key);
+ mutex_init(&vmm->mutex.ref);
+ mutex_init(&vmm->mutex.map);
/* Locate the smallest page size supported by the backend, it will
* have the deepest nesting of page tables.
@@ -1101,6 +1155,9 @@ nvkm_vmm_ctor(const struct nvkm_vmm_func *func, struct nvkm_mmu *mmu,
if (addr && (ret = nvkm_vmm_ctor_managed(vmm, 0, addr)))
return ret;
+ vmm->managed.p.addr = 0;
+ vmm->managed.p.size = addr;
+
/* NVKM-managed area. */
if (size) {
if (!(vma = nvkm_vma_new(addr, size)))
@@ -1114,6 +1171,9 @@ nvkm_vmm_ctor(const struct nvkm_vmm_func *func, struct nvkm_mmu *mmu,
size = vmm->limit - addr;
if (size && (ret = nvkm_vmm_ctor_managed(vmm, addr, size)))
return ret;
+
+ vmm->managed.n.addr = addr;
+ vmm->managed.n.size = size;
} else {
/* Address-space fully managed by NVKM, requiring calls to
* nvkm_vmm_get()/nvkm_vmm_put() to allocate address-space.
@@ -1362,9 +1422,9 @@ void
nvkm_vmm_unmap(struct nvkm_vmm *vmm, struct nvkm_vma *vma)
{
if (vma->memory) {
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
nvkm_vmm_unmap_locked(vmm, vma, false);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
}
}
@@ -1423,6 +1483,8 @@ nvkm_vmm_map_locked(struct nvkm_vmm *vmm, struct nvkm_vma *vma,
nvkm_vmm_pte_func func;
int ret;
+ map->no_comp = vma->no_comp;
+
/* Make sure we won't overrun the end of the memory object. */
if (unlikely(nvkm_memory_size(map->memory) < map->offset + vma->size)) {
VMM_DEBUG(vmm, "overrun %016llx %016llx %016llx",
@@ -1507,10 +1569,15 @@ nvkm_vmm_map(struct nvkm_vmm *vmm, struct nvkm_vma *vma, void *argv, u32 argc,
struct nvkm_vmm_map *map)
{
int ret;
- mutex_lock(&vmm->mutex);
+
+ if (nvkm_vmm_in_managed_range(vmm, vma->addr, vma->size) &&
+ vmm->managed.raw)
+ return nvkm_vmm_map_locked(vmm, vma, argv, argc, map);
+
+ mutex_lock(&vmm->mutex.vmm);
ret = nvkm_vmm_map_locked(vmm, vma, argv, argc, map);
vma->busy = false;
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
return ret;
}
@@ -1620,9 +1687,9 @@ nvkm_vmm_put(struct nvkm_vmm *vmm, struct nvkm_vma **pvma)
{
struct nvkm_vma *vma = *pvma;
if (vma) {
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
nvkm_vmm_put_locked(vmm, vma);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
*pvma = NULL;
}
}
@@ -1769,9 +1836,49 @@ int
nvkm_vmm_get(struct nvkm_vmm *vmm, u8 page, u64 size, struct nvkm_vma **pvma)
{
int ret;
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
ret = nvkm_vmm_get_locked(vmm, false, true, false, page, 0, size, pvma);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
+ return ret;
+}
+
+void
+nvkm_vmm_raw_unmap(struct nvkm_vmm *vmm, u64 addr, u64 size,
+ bool sparse, u8 refd)
+{
+ const struct nvkm_vmm_page *page = &vmm->func->page[refd];
+
+ nvkm_vmm_ptes_unmap(vmm, page, addr, size, sparse, false);
+}
+
+void
+nvkm_vmm_raw_put(struct nvkm_vmm *vmm, u64 addr, u64 size, u8 refd)
+{
+ const struct nvkm_vmm_page *page = vmm->func->page;
+
+ nvkm_vmm_ptes_put(vmm, &page[refd], addr, size);
+}
+
+int
+nvkm_vmm_raw_get(struct nvkm_vmm *vmm, u64 addr, u64 size, u8 refd)
+{
+ const struct nvkm_vmm_page *page = vmm->func->page;
+
+ if (unlikely(!size))
+ return -EINVAL;
+
+ return nvkm_vmm_ptes_get(vmm, &page[refd], addr, size);
+}
+
+int
+nvkm_vmm_raw_sparse(struct nvkm_vmm *vmm, u64 addr, u64 size, bool ref)
+{
+ int ret;
+
+ mutex_lock(&vmm->mutex.ref);
+ ret = nvkm_vmm_ptes_sparse(vmm, addr, size, ref);
+ mutex_unlock(&vmm->mutex.ref);
+
return ret;
}
@@ -1779,9 +1886,9 @@ void
nvkm_vmm_part(struct nvkm_vmm *vmm, struct nvkm_memory *inst)
{
if (inst && vmm && vmm->func->part) {
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
vmm->func->part(vmm, inst);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
}
}
@@ -1790,9 +1897,9 @@ nvkm_vmm_join(struct nvkm_vmm *vmm, struct nvkm_memory *inst)
{
int ret = 0;
if (vmm->func->join) {
- mutex_lock(&vmm->mutex);
+ mutex_lock(&vmm->mutex.vmm);
ret = vmm->func->join(vmm, inst);
- mutex_unlock(&vmm->mutex);
+ mutex_unlock(&vmm->mutex.vmm);
}
return ret;
}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
index f6188aa9171c..f9bc30cdb2b3 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
@@ -163,6 +163,7 @@ int nvkm_vmm_new_(const struct nvkm_vmm_func *, struct nvkm_mmu *,
u32 pd_header, bool managed, u64 addr, u64 size,
struct lock_class_key *, const char *name,
struct nvkm_vmm **);
+struct nvkm_vma *nvkm_vma_new(u64 addr, u64 size);
struct nvkm_vma *nvkm_vmm_node_search(struct nvkm_vmm *, u64 addr);
struct nvkm_vma *nvkm_vmm_node_split(struct nvkm_vmm *, struct nvkm_vma *,
u64 addr, u64 size);
@@ -173,6 +174,30 @@ void nvkm_vmm_put_locked(struct nvkm_vmm *, struct nvkm_vma *);
void nvkm_vmm_unmap_locked(struct nvkm_vmm *, struct nvkm_vma *, bool pfn);
void nvkm_vmm_unmap_region(struct nvkm_vmm *, struct nvkm_vma *);
+int nvkm_vmm_raw_get(struct nvkm_vmm *vmm, u64 addr, u64 size, u8 refd);
+void nvkm_vmm_raw_put(struct nvkm_vmm *vmm, u64 addr, u64 size, u8 refd);
+void nvkm_vmm_raw_unmap(struct nvkm_vmm *vmm, u64 addr, u64 size,
+ bool sparse, u8 refd);
+int nvkm_vmm_raw_sparse(struct nvkm_vmm *, u64 addr, u64 size, bool ref);
+
+static inline bool
+nvkm_vmm_in_managed_range(struct nvkm_vmm *vmm, u64 start, u64 size)
+{
+ u64 p_start = vmm->managed.p.addr;
+ u64 p_end = p_start + vmm->managed.p.size;
+ u64 n_start = vmm->managed.n.addr;
+ u64 n_end = n_start + vmm->managed.n.size;
+ u64 end = start + size;
+
+ if (start >= p_start && end <= p_end)
+ return true;
+
+ if (start >= n_start && end <= n_end)
+ return true;
+
+ return false;
+}
+
#define NVKM_VMM_PFN_ADDR 0xfffffffffffff000ULL
#define NVKM_VMM_PFN_ADDR_SHIFT 12
#define NVKM_VMM_PFN_APER 0x00000000000000f0ULL
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgf100.c
index 5438384d9a67..5e857c02e9aa 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgf100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgf100.c
@@ -287,15 +287,17 @@ gf100_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
return -EINVAL;
}
- ret = nvkm_memory_tags_get(memory, device, tags,
- nvkm_ltc_tags_clear,
- &map->tags);
- if (ret) {
- VMM_DEBUG(vmm, "comp %d", ret);
- return ret;
+ if (!map->no_comp) {
+ ret = nvkm_memory_tags_get(memory, device, tags,
+ nvkm_ltc_tags_clear,
+ &map->tags);
+ if (ret) {
+ VMM_DEBUG(vmm, "comp %d", ret);
+ return ret;
+ }
}
- if (map->tags->mn) {
+ if (!map->no_comp && map->tags->mn) {
u64 tags = map->tags->mn->offset + (map->offset >> 17);
if (page->shift == 17 || !gm20x) {
map->type |= tags << 44;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
index 17899fc95b2d..f3630d0e0d55 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
@@ -453,15 +453,17 @@ gp100_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
return -EINVAL;
}
- ret = nvkm_memory_tags_get(memory, device, tags,
- nvkm_ltc_tags_clear,
- &map->tags);
- if (ret) {
- VMM_DEBUG(vmm, "comp %d", ret);
- return ret;
+ if (!map->no_comp) {
+ ret = nvkm_memory_tags_get(memory, device, tags,
+ nvkm_ltc_tags_clear,
+ &map->tags);
+ if (ret) {
+ VMM_DEBUG(vmm, "comp %d", ret);
+ return ret;
+ }
}
- if (map->tags->mn) {
+ if (!map->no_comp && map->tags->mn) {
tags = map->tags->mn->offset + (map->offset >> 16);
map->ctag |= ((1ULL << page->shift) >> 16) << 36;
map->type |= tags << 36;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c
index b7548dcd72c7..ff08ad5005a9 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.c
@@ -296,19 +296,22 @@ nv50_vmm_valid(struct nvkm_vmm *vmm, void *argv, u32 argc,
return -EINVAL;
}
- ret = nvkm_memory_tags_get(memory, device, tags, NULL,
- &map->tags);
- if (ret) {
- VMM_DEBUG(vmm, "comp %d", ret);
- return ret;
- }
+ if (!map->no_comp) {
+ ret = nvkm_memory_tags_get(memory, device, tags, NULL,
+ &map->tags);
+ if (ret) {
+ VMM_DEBUG(vmm, "comp %d", ret);
+ return ret;
+ }
- if (map->tags->mn) {
- u32 tags = map->tags->mn->offset + (map->offset >> 16);
- map->ctag |= (u64)comp << 49;
- map->type |= (u64)comp << 47;
- map->type |= (u64)tags << 49;
- map->next |= map->ctag;
+ if (map->tags->mn) {
+ u32 tags = map->tags->mn->offset +
+ (map->offset >> 16);
+ map->ctag |= (u64)comp << 49;
+ map->type |= (u64)comp << 47;
+ map->type |= (u64)tags << 49;
+ map->next |= map->ctag;
+ }
}
}
--
2.41.0
The new VM_BIND UAPI implementation introduced in subsequent commits
will allow asynchronous jobs to process push buffers and emit fences.
If a job times out, we need a way to recover from this situation. For
now, simply kill the channel to unblock all hung-up jobs and signal
userspace that the device is dead on the next EXEC or VM_BIND ioctl.
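A hedged sketch of how a DRM scheduler timeout handler might use this
helper; to_nouveau_job() and the job->chan back-pointer are illustrative
only:

    static enum drm_gpu_sched_stat
    example_timedout_job(struct drm_sched_job *sched_job)
    {
            struct nouveau_job *job = to_nouveau_job(sched_job);

            /* Kill the channel; its fence context is killed with
             * -ENODEV and subsequent EXEC/VM_BIND ioctls fail. */
            nouveau_channel_kill(job->chan);

            return DRM_GPU_SCHED_STAT_ENODEV;
    }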
Signed-off-by: Danilo Krummrich <[email protected]>
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++++++++++---
drivers/gpu/drm/nouveau/nouveau_chan.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
index f47c0363683c..a975f8b0e0e5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -40,6 +40,14 @@ MODULE_PARM_DESC(vram_pushbuf, "Create DMA push buffers in VRAM");
int nouveau_vram_pushbuf;
module_param_named(vram_pushbuf, nouveau_vram_pushbuf, int, 0400);
+void
+nouveau_channel_kill(struct nouveau_channel *chan)
+{
+ atomic_set(&chan->killed, 1);
+ if (chan->fence)
+ nouveau_fence_context_kill(chan->fence, -ENODEV);
+}
+
static int
nouveau_channel_killed(struct nvif_event *event, void *repv, u32 repc)
{
@@ -47,9 +55,9 @@ nouveau_channel_killed(struct nvif_event *event, void *repv, u32 repc)
struct nouveau_cli *cli = (void *)chan->user.client;
NV_PRINTK(warn, cli, "channel %d killed!\n", chan->chid);
- atomic_set(&chan->killed, 1);
- if (chan->fence)
- nouveau_fence_context_kill(chan->fence, -ENODEV);
+
+ if (unlikely(!atomic_read(&chan->killed)))
+ nouveau_channel_kill(chan);
return NVIF_EVENT_DROP;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.h b/drivers/gpu/drm/nouveau/nouveau_chan.h
index e06a8ffed31a..e483f4a254da 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.h
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.h
@@ -65,6 +65,7 @@ int nouveau_channel_new(struct nouveau_drm *, struct nvif_device *, bool priv,
u32 vram, u32 gart, struct nouveau_channel **);
void nouveau_channel_del(struct nouveau_channel **);
int nouveau_channel_idle(struct nouveau_channel *);
+void nouveau_channel_kill(struct nouveau_channel *);
extern int nouveau_vram_pushbuf;
--
2.41.0