2022-02-18 16:51:43

by Chia-I Wu

Subject: Re: [PATCH] drm/virtio: Add USE_INTERNAL blob flag

On Fri, Feb 18, 2022 at 7:57 AM Rob Clark <[email protected]> wrote:
>
> From: Rob Clark <[email protected]>
>
> With native userspace drivers in the guest, a lot of GEM objects need
> to be neither shared nor mappable. In fact, making everything mappable
> and/or shareable results in unreasonably high fd usage in the host VMM.
>
> Signed-off-by: Rob Clark <[email protected]>
> ---
> This is for a thing I'm working on: a new virtgpu context type that
> allows running a native userspace driver in the guest, with a thin
> shim in the host VMM. In this case, the guest has a lot of GEM
> buffer objects which need to be neither shared nor mappable.
>
> An alternative idea is to just drop the restriction that blob_flags
> be non-zero. I'm ok with either approach.
Dropping the restriction sounds better to me.
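
For reference, a minimal sketch of the alternative, assuming the
non-zero check sits next to the mask check in verify_blob() (the
surrounding logic here is illustrative, not quoted from the driver):

	/* Hypothetical: with the non-zero requirement dropped, only
	 * unknown flag bits are rejected, and blob_flags == 0 simply
	 * means a guest-private, non-mappable, non-shareable blob.
	 */
	if (rc_blob->blob_flags & ~VIRTGPU_BLOB_FLAG_USE_MASK)
		return -EINVAL;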

What is the use case for such a resource? Does the host need to know
such a resource exists?

>
> drivers/gpu/drm/virtio/virtgpu_ioctl.c | 7 ++++++-
> include/uapi/drm/virtgpu_drm.h         | 1 +
> 2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 69f1952f3144..92e1ba6b8078 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -36,7 +36,8 @@
>
>  #define VIRTGPU_BLOB_FLAG_USE_MASK (VIRTGPU_BLOB_FLAG_USE_MAPPABLE | \
>                                      VIRTGPU_BLOB_FLAG_USE_SHAREABLE | \
> -                                    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE)
> +                                    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE | \
> +                                    VIRTGPU_BLOB_FLAG_USE_INTERNAL)
>
>  static int virtio_gpu_fence_event_create(struct drm_device *dev,
>                                           struct drm_file *file,
> @@ -662,6 +663,10 @@ static int verify_blob(struct virtio_gpu_device *vgdev,
>  	params->size = rc_blob->size;
>  	params->blob = true;
>  	params->blob_flags = rc_blob->blob_flags;
> +
> +	/* USE_INTERNAL is local to the guest kernel, don't pass it to the host: */
> +	params->blob_flags &= ~VIRTGPU_BLOB_FLAG_USE_INTERNAL;
> +
>  	return 0;
>  }
>
> diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
> index 0512fde5e697..62b7483e5c60 100644
> --- a/include/uapi/drm/virtgpu_drm.h
> +++ b/include/uapi/drm/virtgpu_drm.h
> @@ -163,6 +163,7 @@ struct drm_virtgpu_resource_create_blob {
>  #define VIRTGPU_BLOB_FLAG_USE_MAPPABLE     0x0001
>  #define VIRTGPU_BLOB_FLAG_USE_SHAREABLE    0x0002
>  #define VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
> +#define VIRTGPU_BLOB_FLAG_USE_INTERNAL     0x0008 /* not-mappable, not-shareable */
>  	/* zero is invalid blob_mem */
>  	__u32 blob_mem;
>  	__u32 blob_flags;
> --
> 2.34.1
>
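
For illustration, a minimal sketch of how guest userspace might
allocate such a guest-private blob with the proposed flag, assuming
an already-open virtgpu DRM fd (error handling trimmed, and the
uapi header include path varies by build):

	#include <stdint.h>
	#include <string.h>
	#include <xf86drm.h>        /* drmIoctl() */
	#include "virtgpu_drm.h"    /* uapi header with the blob ioctl */

	/* Allocate a guest-memory blob that is neither mappable nor
	 * shareable. Returns the guest-side GEM handle, or 0 on failure.
	 */
	static uint32_t alloc_internal_bo(int fd, uint64_t size)
	{
		struct drm_virtgpu_resource_create_blob args;

		memset(&args, 0, sizeof(args));
		args.blob_mem   = VIRTGPU_BLOB_MEM_GUEST;
		args.blob_flags = VIRTGPU_BLOB_FLAG_USE_INTERNAL; /* proposed */
		args.size       = size;

		if (drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB, &args))
			return 0;

		return args.bo_handle;
	}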


2022-02-19 21:56:49

by Rob Clark

Subject: Re: [PATCH] drm/virtio: Add USE_INTERNAL blob flag

On Fri, Feb 18, 2022 at 8:42 AM Chia-I Wu <[email protected]> wrote:
>
> On Fri, Feb 18, 2022 at 7:57 AM Rob Clark <[email protected]> wrote:
> >
> > From: Rob Clark <[email protected]>
> >
> > With native userspace drivers in the guest, a lot of GEM objects need
> > to be neither shared nor mappable. In fact, making everything mappable
> > and/or shareable results in unreasonably high fd usage in the host VMM.
> >
> > Signed-off-by: Rob Clark <[email protected]>
> > ---
> > This is for a thing I'm working on: a new virtgpu context type that
> > allows running a native userspace driver in the guest, with a thin
> > shim in the host VMM. In this case, the guest has a lot of GEM
> > buffer objects which need to be neither shared nor mappable.
> >
> > An alternative idea is to just drop the restriction that blob_flags
> > be non-zero. I'm ok with either approach.
> Dropping the restriction sounds better to me.
>
> What is the use case for such a resource? Does the host need to know
> such a resource exists?

There are a bunch of use cases, some internal (like visibility stream
buffers that are filled during the binning pass and consumed during the
draw pass), some external (tiled and/or UBWC buffers, which are never
accessed on the CPU).

In theory, at least currently, drm/virtgpu does not need to know about
them, but there are a lot of places in userspace that expect to have a
gem handle. Longer term, I think I want to extend virtgpu with a
MADVISE ioctl so we can track DONTNEED state in the guest and only
release buffers when the host and/or guest is under memory pressure.
For that we will definitely need guest-side gem handles.
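
A hypothetical sketch of what such an interface could look like
(none of these names exist in virtgpu today; loosely modeled on the
madvise-style interfaces other GEM drivers use):

	/* Hypothetical uapi -- not an existing virtgpu interface. */
	#define VIRTGPU_MADV_WILLNEED 0 /* in use again, must be retained */
	#define VIRTGPU_MADV_DONTNEED 1 /* may be purged under pressure */

	struct drm_virtgpu_gem_madvise {
		__u32 bo_handle; /* in: guest-side GEM handle */
		__u32 madv;      /* in: VIRTGPU_MADV_* */
		__u32 retained;  /* out: 0 if backing pages were purged */
		__u32 pad;
	};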

BR,
-R

> >
> > drivers/gpu/drm/virtio/virtgpu_ioctl.c | 7 ++++++-
> > include/uapi/drm/virtgpu_drm.h         | 1 +
> > 2 files changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > index 69f1952f3144..92e1ba6b8078 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > @@ -36,7 +36,8 @@
> >
> >  #define VIRTGPU_BLOB_FLAG_USE_MASK (VIRTGPU_BLOB_FLAG_USE_MAPPABLE | \
> >                                      VIRTGPU_BLOB_FLAG_USE_SHAREABLE | \
> > -                                    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE)
> > +                                    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE | \
> > +                                    VIRTGPU_BLOB_FLAG_USE_INTERNAL)
> >
> >  static int virtio_gpu_fence_event_create(struct drm_device *dev,
> >                                           struct drm_file *file,
> > @@ -662,6 +663,10 @@ static int verify_blob(struct virtio_gpu_device *vgdev,
> >  	params->size = rc_blob->size;
> >  	params->blob = true;
> >  	params->blob_flags = rc_blob->blob_flags;
> > +
> > +	/* USE_INTERNAL is local to the guest kernel, don't pass it to the host: */
> > +	params->blob_flags &= ~VIRTGPU_BLOB_FLAG_USE_INTERNAL;
> > +
> >  	return 0;
> >  }
> >
> > diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
> > index 0512fde5e697..62b7483e5c60 100644
> > --- a/include/uapi/drm/virtgpu_drm.h
> > +++ b/include/uapi/drm/virtgpu_drm.h
> > @@ -163,6 +163,7 @@ struct drm_virtgpu_resource_create_blob {
> >  #define VIRTGPU_BLOB_FLAG_USE_MAPPABLE     0x0001
> >  #define VIRTGPU_BLOB_FLAG_USE_SHAREABLE    0x0002
> >  #define VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
> > +#define VIRTGPU_BLOB_FLAG_USE_INTERNAL     0x0008 /* not-mappable, not-shareable */
> >  	/* zero is invalid blob_mem */
> >  	__u32 blob_mem;
> >  	__u32 blob_flags;
> > --
> > 2.34.1
> >

2022-02-23 00:29:54

by Chia-I Wu

Subject: Re: [PATCH] drm/virtio: Add USE_INTERNAL blob flag

On Fri, Feb 18, 2022 at 9:51 AM Rob Clark <[email protected]> wrote:
>
> On Fri, Feb 18, 2022 at 8:42 AM Chia-I Wu <[email protected]> wrote:
> >
> > On Fri, Feb 18, 2022 at 7:57 AM Rob Clark <[email protected]> wrote:
> > >
> > > From: Rob Clark <[email protected]>
> > >
> > > With native userspace drivers in the guest, a lot of GEM objects need
> > > to be neither shared nor mappable. In fact, making everything mappable
> > > and/or shareable results in unreasonably high fd usage in the host VMM.
> > >
> > > Signed-off-by: Rob Clark <[email protected]>
> > > ---
> > > This is for a thing I'm working on: a new virtgpu context type that
> > > allows running a native userspace driver in the guest, with a thin
> > > shim in the host VMM. In this case, the guest has a lot of GEM
> > > buffer objects which need to be neither shared nor mappable.
> > >
> > > An alternative idea is to just drop the restriction that blob_flags
> > > be non-zero. I'm ok with either approach.
> > Dropping the restriction sounds better to me.
> >
> > What is the use case for such a resource? Does the host need to know
> > such a resource exists?
>
> There are a bunch of use cases, some internal (like visibility stream
> buffers that are filled during the binning pass and consumed during the
> draw pass), some external (tiled and/or UBWC buffers, which are never
> accessed on the CPU).
For these use cases, it's true that userspace might want internal bos
and serialize them as res_ids, which the host then maps to host
gem_handles. But userspace could also skip the internal bos and encode
host gem_handles directly.

Either way, the kernel probably should not dictate what userspace does
by requiring non-zero blob flags.

>
> In theory, at least currently, drm/virtgpu does not need to know about
> them, but there are a lot of places in userspace that expect to have a
> gem handle. Longer term, I think I want to extend virtgpu with a
> MADVISE ioctl so we can track DONTNEED state in the guest and only
> release buffers when the host and/or guest is under memory pressure.
> For that we will definitely need guest-side gem handles.
MADVISE is a hint that userspace sets; setting it is not itself driven
by memory pressure. It is the receiver of the hint that takes action
when under memory pressure. I think it could be handled between the
guest userspace and the host?


>
> BR,
> -R
>
> > >
> > > drivers/gpu/drm/virtio/virtgpu_ioctl.c | 7 ++++++-
> > > include/uapi/drm/virtgpu_drm.h         | 1 +
> > > 2 files changed, 7 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > index 69f1952f3144..92e1ba6b8078 100644
> > > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > > @@ -36,7 +36,8 @@
> > >
> > >  #define VIRTGPU_BLOB_FLAG_USE_MASK (VIRTGPU_BLOB_FLAG_USE_MAPPABLE | \
> > >                                      VIRTGPU_BLOB_FLAG_USE_SHAREABLE | \
> > > -                                    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE)
> > > +                                    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE | \
> > > +                                    VIRTGPU_BLOB_FLAG_USE_INTERNAL)
> > >
> > >  static int virtio_gpu_fence_event_create(struct drm_device *dev,
> > >                                           struct drm_file *file,
> > > @@ -662,6 +663,10 @@ static int verify_blob(struct virtio_gpu_device *vgdev,
> > >  	params->size = rc_blob->size;
> > >  	params->blob = true;
> > >  	params->blob_flags = rc_blob->blob_flags;
> > > +
> > > +	/* USE_INTERNAL is local to the guest kernel, don't pass it to the host: */
> > > +	params->blob_flags &= ~VIRTGPU_BLOB_FLAG_USE_INTERNAL;
> > > +
> > >  	return 0;
> > >  }
> > >
> > > diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
> > > index 0512fde5e697..62b7483e5c60 100644
> > > --- a/include/uapi/drm/virtgpu_drm.h
> > > +++ b/include/uapi/drm/virtgpu_drm.h
> > > @@ -163,6 +163,7 @@ struct drm_virtgpu_resource_create_blob {
> > >  #define VIRTGPU_BLOB_FLAG_USE_MAPPABLE     0x0001
> > >  #define VIRTGPU_BLOB_FLAG_USE_SHAREABLE    0x0002
> > >  #define VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
> > > +#define VIRTGPU_BLOB_FLAG_USE_INTERNAL     0x0008 /* not-mappable, not-shareable */
> > >  	/* zero is invalid blob_mem */
> > >  	__u32 blob_mem;
> > >  	__u32 blob_flags;
> > > --
> > > 2.34.1
> > >