2024-04-09 21:25:17

by Michael Grzeschik

Subject: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

This patch series improves the size calculation and allocation
of the uvc requests. Using the currently configured frame duration of
the stream, it is possible to calculate the number of requests based
on the interval length.
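
As a rough standalone illustration of the idea (example values only, not
taken from the driver or from a measurement):

/* One usb_request per 125 us isoc service interval across the frame duration. */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned int requests_per_frame(unsigned int frame_interval_100ns)
{
	/* 30 fps: frame_interval_100ns = 333333 -> 267 requests per frame */
	return DIV_ROUND_UP(frame_interval_100ns, 1250 /* 125 us in 100 ns */);
}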

Signed-off-by: Michael Grzeschik <[email protected]>
---
Michael Grzeschik (3):
usb: gadget: function: uvc: set req_size once when the vb2 queue is calculated
usb: gadget: uvc: add g_parm and s_parm for frame interval
usb: gadget: uvc: set req_size and n_requests based on the frame interval

drivers/usb/gadget/function/uvc.h | 1 +
drivers/usb/gadget/function/uvc_queue.c | 30 ++++++++++++++-----
drivers/usb/gadget/function/uvc_v4l2.c | 52 +++++++++++++++++++++++++++++++++
drivers/usb/gadget/function/uvc_video.c | 17 ++---------
4 files changed, 79 insertions(+), 21 deletions(-)
---
base-commit: 3295f1b866bfbcabd625511968e8a5c541f9ab32
change-id: 20240403-uvc_request_length_by_interval-a7efd587d963

Best regards,
--
Michael Grzeschik <[email protected]>



2024-04-09 21:25:19

by Michael Grzeschik

Subject: [PATCH 2/3] usb: gadget: uvc: add g_parm and s_parm for frame interval

The uvc gadget driver lacks the information about which frame interval
was set by the host. We add this information by implementing the g_parm
and s_parm callbacks.
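
For reference, a minimal userspace sketch of how the negotiated frame
interval could be handed to the function driver through the new s_parm
callback. The device node path and the 30 fps value are assumptions for
illustration only:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_streamparm parm;
	int fd = open("/dev/video0", O_RDWR);	/* gadget UVC output node (assumed) */

	if (fd < 0)
		return 1;

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	parm.parm.output.timeperframe.numerator = 1;	/* 1/30 s per frame */
	parm.parm.output.timeperframe.denominator = 30;

	if (ioctl(fd, VIDIOC_S_PARM, &parm) < 0)
		perror("VIDIOC_S_PARM");

	return 0;
}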

Signed-off-by: Michael Grzeschik <[email protected]>
---
drivers/usb/gadget/function/uvc.h | 1 +
drivers/usb/gadget/function/uvc_v4l2.c | 52 ++++++++++++++++++++++++++++++++++
2 files changed, 53 insertions(+)

diff --git a/drivers/usb/gadget/function/uvc.h b/drivers/usb/gadget/function/uvc.h
index cb35687b11e7e..d153bd9e35e31 100644
--- a/drivers/usb/gadget/function/uvc.h
+++ b/drivers/usb/gadget/function/uvc.h
@@ -97,6 +97,7 @@ struct uvc_video {
unsigned int width;
unsigned int height;
unsigned int imagesize;
+ unsigned int interval;
struct mutex mutex; /* protects frame parameters */

unsigned int uvc_num_requests;
diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
index a024aecb76dc3..5b579ec1f5040 100644
--- a/drivers/usb/gadget/function/uvc_v4l2.c
+++ b/drivers/usb/gadget/function/uvc_v4l2.c
@@ -307,6 +307,56 @@ uvc_v4l2_set_format(struct file *file, void *fh, struct v4l2_format *fmt)
return ret;
}

+static int uvc_v4l2_g_parm(struct file *file, void *fh,
+ struct v4l2_streamparm *parm)
+{
+ struct video_device *vdev = video_devdata(file);
+ struct uvc_device *uvc = video_get_drvdata(vdev);
+ struct uvc_video *video = &uvc->video;
+ struct v4l2_fract timeperframe;
+
+ if (parm->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ /* Return the actual frame period. */
+ timeperframe.numerator = video->interval;
+ timeperframe.denominator = 10000000;
+ v4l2_simplify_fraction(&timeperframe.numerator,
+ &timeperframe.denominator, 8, 333);
+
+ uvcg_dbg(&uvc->func, "Getting frame interval of %u/%u (%u)\n",
+ timeperframe.numerator, timeperframe.denominator,
+ video->interval);
+
+ parm->parm.output.timeperframe = timeperframe;
+ parm->parm.output.capability = V4L2_CAP_TIMEPERFRAME;
+
+ return 0;
+}
+
+static int uvc_v4l2_s_parm(struct file *file, void *fh,
+ struct v4l2_streamparm *parm)
+{
+ struct video_device *vdev = video_devdata(file);
+ struct uvc_device *uvc = video_get_drvdata(vdev);
+ struct uvc_video *video = &uvc->video;
+ struct v4l2_fract timeperframe;
+
+ if (parm->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ timeperframe = parm->parm.output.timeperframe;
+
+ video->interval = v4l2_fraction_to_interval(timeperframe.numerator,
+ timeperframe.denominator);
+
+ uvcg_dbg(&uvc->func, "Setting frame interval to %u/%u (%u)\n",
+ timeperframe.numerator, timeperframe.denominator,
+ video->interval);
+
+ return 0;
+}
+
static int
uvc_v4l2_enum_frameintervals(struct file *file, void *fh,
struct v4l2_frmivalenum *fival)
@@ -577,6 +627,8 @@ const struct v4l2_ioctl_ops uvc_v4l2_ioctl_ops = {
.vidioc_dqbuf = uvc_v4l2_dqbuf,
.vidioc_streamon = uvc_v4l2_streamon,
.vidioc_streamoff = uvc_v4l2_streamoff,
+ .vidioc_s_parm = uvc_v4l2_s_parm,
+ .vidioc_g_parm = uvc_v4l2_g_parm,
.vidioc_subscribe_event = uvc_v4l2_subscribe_event,
.vidioc_unsubscribe_event = uvc_v4l2_unsubscribe_event,
.vidioc_default = uvc_v4l2_ioctl_default,

--
2.39.2


2024-04-09 21:25:42

by Michael Grzeschik

Subject: [PATCH 3/3] usb: gadget: uvc: set req_size and n_requests based on the frame interval

With the frame interval length known, it is now possible to calculate
the number of usb requests from the frame duration. Based on that
request count and the imagesize, we calculate the actual size per
request. This calculation has the benefit that the frame data is
distributed equally over all allocated requests.

We keep the current req_size calculation as a fallback, in case the
interval callbacks did not set the interval property.
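
A worked example of the resulting split, with assumed stream parameters
(1280x720 YUYV at 30 fps, 125 us service interval, SuperSpeed endpoint);
this is a standalone sketch, not the driver code itself:

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static int example_split(void)
{
	unsigned int interval     = 333333;	/* frame duration in 100 ns units */
	unsigned int imagesize    = 1843200;	/* 1280 * 720 * 2 bytes */
	unsigned int max_req_size = 15360;	/* maxpacket * maxburst * mult (assumed) */

	unsigned int nreq     = DIV_ROUND_UP(interval, 1250);	/* 267 requests */
	unsigned int req_size = DIV_ROUND_UP(imagesize, nreq);	/* 6904 bytes each */

	/* 6904 <= 15360: the evenly spread frame fits into the configured
	 * isoc bandwidth; otherwise queue_setup now returns -EINVAL. */
	return req_size <= max_req_size ? 0 : -1;
}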

Signed-off-by: Michael Grzeschik <[email protected]>
---
drivers/usb/gadget/function/uvc_queue.c | 28 +++++++++++++++++++++-------
drivers/usb/gadget/function/uvc_video.c | 2 +-
2 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
index ce51643fc4639..cd8a9793aa72e 100644
--- a/drivers/usb/gadget/function/uvc_queue.c
+++ b/drivers/usb/gadget/function/uvc_queue.c
@@ -44,7 +44,7 @@ static int uvc_queue_setup(struct vb2_queue *vq,
{
struct uvc_video_queue *queue = vb2_get_drv_priv(vq);
struct uvc_video *video = container_of(queue, struct uvc_video, queue);
- unsigned int req_size;
+ unsigned int req_size, max_req_size;
unsigned int nreq;

if (*nbuffers > UVC_MAX_VIDEO_BUFFERS)
@@ -54,15 +54,29 @@ static int uvc_queue_setup(struct vb2_queue *vq,

sizes[0] = video->imagesize;

- req_size = video->ep->maxpacket
+ nreq = DIV_ROUND_UP(video->interval, video->ep->desc->bInterval * 1250);
+
+ req_size = DIV_ROUND_UP(sizes[0], nreq);
+
+ max_req_size = video->ep->maxpacket
* max_t(unsigned int, video->ep->maxburst, 1)
* (video->ep->mult);

- /* We divide by two, to increase the chance to run
- * into fewer requests for smaller framesizes.
- */
- nreq = DIV_ROUND_UP(DIV_ROUND_UP(sizes[0], 2), req_size);
- nreq = clamp(nreq, 4U, 64U);
+ if (!req_size) {
+ req_size = max_req_size;
+
+ /* We divide by two, to increase the chance to run
+ * into fewer requests for smaller framesizes.
+ */
+ nreq = DIV_ROUND_UP(DIV_ROUND_UP(sizes[0], 2), req_size);
+ nreq = clamp(nreq, 4U, 64U);
+ } else if (req_size > max_req_size) {
+ /* The prepared interval length and expected buffer size
+ * is not possible to stream with the currently configured
+ * isoc bandwidth
+ */
+ return -EINVAL;
+ }

video->req_size = req_size;
video->uvc_num_requests = nreq;
diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
index 95bb64e16f3da..d197c46e93fb4 100644
--- a/drivers/usb/gadget/function/uvc_video.c
+++ b/drivers/usb/gadget/function/uvc_video.c
@@ -304,7 +304,7 @@ static int uvcg_video_usb_req_queue(struct uvc_video *video,
*/
if (list_empty(&video->req_free) || ureq->last_buf ||
!(video->req_int_count %
- DIV_ROUND_UP(video->uvc_num_requests, 4))) {
+ clamp(DIV_ROUND_UP(video->uvc_num_requests, 4), 4U, 16U))) {
video->req_int_count = 0;
req->no_interrupt = 0;
} else {

--
2.39.2


2024-04-09 21:25:44

by Michael Grzeschik

Subject: [PATCH 1/3] usb: gadget: function: uvc: set req_size once when the vb2 queue is calculated

The uvc gadget driver calculates the req_size on every
video_enable/alloc_request, based on the fixed configfs parameters
maxpacket, maxburst and mult.

As those parameters cannot be changed once the gadget is started, and
the same calculation is already done early on the
vb2_streamon/queue_setup path, it is safe to remove the extra
calculation and reuse the value from uvc_queue_setup for the
allocation step.
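
For illustration, the per-request capacity that uvc_queue_setup() derives
from those parameters; the SuperSpeed values below are assumptions, not
taken from a real configuration:

static unsigned int example_req_size(void)
{
	unsigned int maxpacket = 1024;	/* ep->maxpacket (assumed) */
	unsigned int maxburst  = 15;	/* max(ep->maxburst, 1) (assumed) */
	unsigned int mult      = 1;	/* ep->mult (assumed) */

	/* 15360 bytes; after this patch the value is computed once at queue
	 * setup time and reused unchanged by uvc_video_alloc_requests(). */
	return maxpacket * maxburst * mult;
}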

Signed-off-by: Michael Grzeschik <[email protected]>

---
Link to v1: https://lore.kernel.org/r/[email protected]
---
drivers/usb/gadget/function/uvc_queue.c | 2 ++
drivers/usb/gadget/function/uvc_video.c | 15 ++-------------
2 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
index 0aa3d7e1f3cc3..ce51643fc4639 100644
--- a/drivers/usb/gadget/function/uvc_queue.c
+++ b/drivers/usb/gadget/function/uvc_queue.c
@@ -63,6 +63,8 @@ static int uvc_queue_setup(struct vb2_queue *vq,
*/
nreq = DIV_ROUND_UP(DIV_ROUND_UP(sizes[0], 2), req_size);
nreq = clamp(nreq, 4U, 64U);
+
+ video->req_size = req_size;
video->uvc_num_requests = nreq;

return 0;
diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
index d41f5f31dadd5..95bb64e16f3da 100644
--- a/drivers/usb/gadget/function/uvc_video.c
+++ b/drivers/usb/gadget/function/uvc_video.c
@@ -496,7 +496,6 @@ uvc_video_free_requests(struct uvc_video *video)
INIT_LIST_HEAD(&video->ureqs);
INIT_LIST_HEAD(&video->req_free);
INIT_LIST_HEAD(&video->req_ready);
- video->req_size = 0;
return 0;
}

@@ -504,16 +503,9 @@ static int
uvc_video_alloc_requests(struct uvc_video *video)
{
struct uvc_request *ureq;
- unsigned int req_size;
unsigned int i;
int ret = -ENOMEM;

- BUG_ON(video->req_size);
-
- req_size = video->ep->maxpacket
- * max_t(unsigned int, video->ep->maxburst, 1)
- * (video->ep->mult);
-
for (i = 0; i < video->uvc_num_requests; i++) {
ureq = kzalloc(sizeof(struct uvc_request), GFP_KERNEL);
if (ureq == NULL)
@@ -523,7 +515,7 @@ uvc_video_alloc_requests(struct uvc_video *video)

list_add_tail(&ureq->list, &video->ureqs);

- ureq->req_buffer = kmalloc(req_size, GFP_KERNEL);
+ ureq->req_buffer = kmalloc(video->req_size, GFP_KERNEL);
if (ureq->req_buffer == NULL)
goto error;

@@ -541,12 +533,10 @@ uvc_video_alloc_requests(struct uvc_video *video)
list_add_tail(&ureq->req->list, &video->req_free);
/* req_size/PAGE_SIZE + 1 for overruns and + 1 for header */
sg_alloc_table(&ureq->sgt,
- DIV_ROUND_UP(req_size - UVCG_REQUEST_HEADER_LEN,
+ DIV_ROUND_UP(video->req_size - UVCG_REQUEST_HEADER_LEN,
PAGE_SIZE) + 2, GFP_KERNEL);
}

- video->req_size = req_size;
-
return 0;

error:
@@ -699,7 +689,6 @@ uvcg_video_disable(struct uvc_video *video)
INIT_LIST_HEAD(&video->ureqs);
INIT_LIST_HEAD(&video->req_free);
INIT_LIST_HEAD(&video->req_ready);
- video->req_size = 0;
spin_unlock_irqrestore(&video->req_lock, flags);

/*

--
2.39.2


2024-04-21 23:26:08

by Michael Grzeschik

Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
>This patch series is improving the size calculation and allocation
>of the uvc requests. Using the currenlty setup frame duration of the
>stream it is possible to calculate the number of requests based on the
>interval length.

The basic concept here is right. But unfortunately we found out that,
together with patch [1] and the current zero-length request pump
mechanism [2] and [3], this is not working as expected.

The conclusion that we cannot queue more than one frame at once into
the hw led to [1]. The current implementation of zero-length requests,
which are queued while we are waiting for the frame to finish
transferring, enlarges the frame duration, since every zero-length
request still takes up at least one frame interval of 125 us.

This longer frame duration for each enqueued frame will therefore
decrease the absolute framerate.
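
For illustration (made-up numbers, not a measurement): at 30 fps one frame
spans roughly 267 isoc intervals of 125 us; every interleaved zero-length
request occupies one more of those intervals, so around 30 of them already
stretch the transfer to about 37 ms and push the effective rate below 27 fps.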

Therefore, to make those patches work properly, we will have to get rid
of the zero-length pump mechanism again and make sure that the whole
business logic of what is to be queued and when is handled only in the
pump worker. It is possible to let the dwc3 udc run dry: as we are
actively waiting for the frame to finish, the last request in the
prepared and started list will stop the current dwc3 stream, and
therefore no underruns will occur with the next ep_queue.

Also, keeping the prepared list and doing the preparation in the pump
worker in any case is still something we need to keep.

With all these pending patches the whole uvc saga of underruns and
flickering video streams should come to an end™.

I have already started on this, but would be happy to have Avichal and
others review the patches once they are ready in my eyes.

mgr

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/all/[email protected]/
[3] https://lore.kernel.org/all/[email protected]/

>Signed-off-by: Michael Grzeschik <[email protected]>
>---
>Michael Grzeschik (3):
> usb: gadget: function: uvc: set req_size once when the vb2 queue is calculated
> usb: gadget: uvc: add g_parm and s_parm for frame interval
> usb: gadget: uvc: set req_size and n_requests based on the frame interval
>
> drivers/usb/gadget/function/uvc.h | 1 +
> drivers/usb/gadget/function/uvc_queue.c | 30 ++++++++++++++-----
> drivers/usb/gadget/function/uvc_v4l2.c | 52 +++++++++++++++++++++++++++++++++
> drivers/usb/gadget/function/uvc_video.c | 17 ++---------
> 4 files changed, 79 insertions(+), 21 deletions(-)
>---
>base-commit: 3295f1b866bfbcabd625511968e8a5c541f9ab32
>change-id: 20240403-uvc_request_length_by_interval-a7efd587d963
>
>Best regards,
>--
>Michael Grzeschik <[email protected]>
>
>

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-04-23 00:37:38

by Avichal Rakesh

Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize



On 4/21/24 16:25, Michael Grzeschik wrote:
> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
>> This patch series is improving the size calculation and allocation
>> of the uvc requests. Using the currenlty setup frame duration of the
>> stream it is possible to calculate the number of requests based on the
>> interval length.
>
> The basic concept here is right. But unfortunatly we found out that
> together with Patch [1] and the current zero length request pump
> mechanism [2] and [3] this is not working as expected.
>
> The conclusion that we can not queue more than one frame at once into
> the hw led to [1]. The current implementation of zero length reqeusts
> which will be queued while we are waiting for the frame to finish
> transferring will enlarge the frame duration. Since every zero-length
> request is still taking up at least one frame interval of 125 us.

I haven't taken a super close look at your patches, so please feel free
to correct me if I am misunderstanding something.

It looks like the goal of the patches is to determine a better number
and size of usb_requests from the given framerate, such that we send
exactly nreqs requests per frame, where nreqs is the exact number of
requests that can be sent in one frame interval?

As the logic stands, we need some 0-length requests to be circulating to
ensure that we don't miss ISOC deadlines. The current logic unconditionally
sends half of all allocated requests to be circulated.

With those two things in mind, this means that video_pump can encode
at most half a frame in one go, and then has to wait for complete
callbacks to come in. In such cases, the theoretical worst case for
the encode time is

  125 us * (number of requests needed per frame / 2) + scheduling delays

as after the first half of the frame has been encoded, the video_pump
thread will have to wait 125 us for each of the zero-length requests to
be returned.
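
For illustration with assumed numbers: at ~267 requests per 33 ms frame, as
patch 3/3 would calculate, that is 134 * 125 us, or roughly 16.8 ms of
waiting before the second half of the frame can even be encoded, on top of
the scheduling delays.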

The underlying assumption behind the "queue 0-length requests" approach
was that video_pump encodes the frames in as few requests as possible
and that there are spare requests to maintain pressure on the
ISOC queue without hindering the video_pump thread, and unfortunately
it seems like patch 3/3 breaks both of those assumptions?

Assuming my understanding of your patches is correct, my question
is: why do we want to spread the frame uniformly over the requests
instead of encoding it in as few requests as possible? Spreading
the frame over more requests artificially increases the encode time
required by video_pump, and AFAICT there is no real benefit to it?

> Therefor to properly make those patches work, we will have to get rid of
> the zero length pump mechanism again and make sure that the whole
> business logic of what to be queued and when will only be done in the
> pump worker. It is possible to let the dwc3 udc run dry, as we are
> actively waiting for the frame to finish, the last request in the
> prepared and started list will stop the current dwc3 stream and therfor
> no underruns will occur with the next ep_queue.

One thing to note here: The reason we moved to queuing 0-length requests
from complete callback was because even with realtime priority, video_pump
thread doesn't always meet the ISOC queueing cadence. I think stopping and
starting the stream was briefly discussed in our initial discussion in
https://lore.kernel.org/all/[email protected]/
and Thinh mentioned that dwc3 controller does it if it detects an underrun,
but I am not sure if starting and stopping an ISOC stream is good practice.
Someone better versed in USB protocol can probably confirm, but it seems
somewhat hacky to stop the ISOC stream at the end of the frame and restart
with the next frame.

> With all these pending patches the whole uvc saga of underruns and
> flickering videostreams should come to an end™.

This would indeed be nice!

>
> I already started with this but would be happy to see Avichal and others
> to review the patches when they are ready in my eyes.

Of course!

- Avi.

2024-04-23 14:25:26

by Michael Grzeschik

Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

Ccing:

Michael Riesch <[email protected]>
Thinh Nguyen <[email protected]>

On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
>On 4/21/24 16:25, Michael Grzeschik wrote:
>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
>>> This patch series is improving the size calculation and allocation
>>> of the uvc requests. Using the currenlty setup frame duration of the
>>> stream it is possible to calculate the number of requests based on the
>>> interval length.
>>
>> The basic concept here is right. But unfortunatly we found out that
>> together with Patch [1] and the current zero length request pump
>> mechanism [2] and [3] this is not working as expected.
>>
>> The conclusion that we can not queue more than one frame at once into
>> the hw led to [1]. The current implementation of zero length reqeusts
>> which will be queued while we are waiting for the frame to finish
>> transferring will enlarge the frame duration. Since every zero-length
>> request is still taking up at least one frame interval of 125 us.
>
>I haven't taken a super close look at your patches, so please feel free
>to correct me if I am misunderstanding something.
>
>It looks like the goal of the patches is to determine a better number
>and size of usb_requests from the given framerate such that we send exactly
>nreqs requests per frame where nreqs is determined to be the exact number
>of requests that can be sent in one frame interval?

It does not need to be the exact time; actually, it may not be exact.
Scattering the data over all requests would not leave any headroom for
any latencies or overhead.

>As the logic stands, we need some 0-length requests to be circulating to
>ensure that we don't miss ISOC deadlines. The current logic unconditionally
>sends half of all allocated requests to be circulated.
>
>With those two things in mind, this means than video_pump can at encode
>at most half a frame in one go, and then has to wait for complete
>callbacks to come in. In such cases, the theoretical worst case for
>encode time is
>125us * (number of requests needed per frame / 2) + scheduling delays
>as after the first half of the frame has been encoded, the video_pump
>thread will have to wait 125us for each of the zero length requests to
>be returned.
>
>The underlying assumption behind the "queue 0-length requests" approach
>was that video_pump encodes the frames in as few requests as possible
>and that there are spare requests to maintain a pressure on the
>ISOC queue without hindering the video_pump thread, and unfortunately
>it seems like patch 3/3 is breaking both of them?

Right.

>Assuming my understanding of your patches is correct, my question
>is: Why do we want to spread the frame uniformly over the requests
>instead of encoding it in as few requests as possible. Spreading
>the frame over more requests artificially increases the encode time
>required by video_pump, and AFAICT there is no real benefit to it?

Thinh gave me the advice that it is better to keep the isoc stream
constantly filled, rather than streaming big amounts of data at the
beginning of a frame interval and then having a lot of spare time
where the bandwidth is completely unused.

In our real-life scenario, streaming big requests had the effect that
the dwc3 core could not keep up with reading the amount of data
from the memory bus, as the bus was already under heavy load. When the
HW was then not able to transfer the requested and actually available
amount of data in the interval, the hw gave us the usual missed
interrupt answer.

Using smaller requests solved the problem here, as it was really
unnecessary to stress the memory and usb bus at the beginning when
we had enough headroom in the temporal domain.

This then led to the conclusion that the number of requests needed
per image frame interval is calculable, since we know the usb
interval length.

@Thinh: Correct me if I am saying something wrong here.

>> Therefor to properly make those patches work, we will have to get rid of
>> the zero length pump mechanism again and make sure that the whole
>> business logic of what to be queued and when will only be done in the
>> pump worker. It is possible to let the dwc3 udc run dry, as we are
>> actively waiting for the frame to finish, the last request in the
>> prepared and started list will stop the current dwc3 stream and therfor
>> no underruns will occur with the next ep_queue.
>
>One thing to note here: The reason we moved to queuing 0-length requests
>from complete callback was because even with realtime priority, video_pump
>thread doesn't always meet the ISOC queueing cadence. I think stopping and
>starting the stream was briefly discussed in our initial discussion in
>https://lore.kernel.org/all/[email protected]/
>and Thinh mentioned that dwc3 controller does it if it detects an underrun,
>but I am not sure if starting and stopping an ISOC stream is good practice.

The realtime latency aspect is not an issue anymore if we ensure that we
always keep only one frame in the hw ring buffer. When the pump worker
ensures that it always runs through one full frame, the scheduler has
no chance to break our running dwc3 stream. Since the pump is running
under a while(1) loop, this should be possible.

Also, with the request amount precalculated, we can always encode the
whole frame into the available requests and don't have to wait for
requests to become available again.

Together with the latest knowledge about the underlying hw, we only need
to keep one frame in the HW ring buffer. Since we have some interrupt
latency, keeping more frames in the ring buffer would mean that we are not
able to tag the currently streamed frame properly as erroneous if the dwc3
hw ring buffer is already telling the host some data about the next frame.
And as we already need to wait for the end of the frame to finish, based
on the assumption that only one frame is enqueued in the ring buffer, the
hw will stop the stream and the next request will start a new stream. So
no missed underruns will happen.

So the main fact here is that telling the host some status about a
frame in the past is impossible! Therefore the first request of the next
hw stream needs to be the one that tells the host whether the previous
frame is meant to be drawn or not.

>Someone better versed in USB protocol can probably confirm, but it seems
>somewhat hacky to stop the ISOC stream at the end of the frame and restart
>with the next frame.

All I know is that the HW mechanism that is reading from the trb ring
buffer is started or stopped. I don't know if the ISOC stream is really
stopped and restarted here, or what that means on the real wire. And if
so, I am unsure whether that is really a problem or not. Thinh?

Regards,
Michael

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-04-23 23:23:38

by Avichal Rakesh

Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize



On 4/23/24 07:25, Michael Grzeschik wrote:
> Ccing:
>
> Michael Riesch <[email protected]>
> Thinh Nguyen <[email protected]>
>
> On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
>> On 4/21/24 16:25, Michael Grzeschik wrote:
>>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
>>>> This patch series is improving the size calculation and allocation
>>>> of the uvc requests. Using the currenlty setup frame duration of the
>>>> stream it is possible to calculate the number of requests based on the
>>>> interval length.
>>>
>>> The basic concept here is right. But unfortunatly we found out that
>>> together with Patch [1] and the current zero length request pump
>>> mechanism [2] and [3] this is not working as expected.
>>>
>>> The conclusion that we can not queue more than one frame at once into
>>> the hw led to [1]. The current implementation of zero length reqeusts
>>> which will be queued while we are waiting for the frame to finish
>>> transferring will enlarge the frame duration. Since every zero-length
>>> request is still taking up at least one frame interval of 125 us.
>>
>> I haven't taken a super close look at your patches, so please feel free
>> to correct me if I am misunderstanding something.
>>
>> It looks like the goal of the patches is to determine a better number
>> and size of usb_requests from the given framerate such that we send exactly
>> nreqs requests per frame where nreqs is determined to be the exact number
>> of requests that can be sent in one frame interval?
>
> It does not need to be the exact time, actually it may not be exact.
> Scattering the data over all requests would not leave any headroom for
> any latencies or overhead.

IIUC, patch 3/3 sets the number of requests to frameinterval / 125 us,
which gives us the number of requests we can send in exactly one frame interval,
and then sets the size of the request as max framesize / nreq, which means the
frames will be evenly divided up into all available requests (with a little
fuzz factor here and there).

This effectively means that (assuming no other delays) one frame will take
~one frameinterval to be transmitted?

>
>> As the logic stands, we need some 0-length requests to be circulating to
>> ensure that we don't miss ISOC deadlines. The current logic unconditionally
>> sends half of all allocated requests to be circulated.
>>
>> With those two things in mind, this means than video_pump can at encode
>> at most half a frame in one go, and then has to wait for complete
>> callbacks to come in. In such cases, the theoretical worst case for
>> encode time is
>> 125us * (number of requests needed per frame / 2) + scheduling delays
>> as after the first half of the frame has been encoded, the video_pump
>> thread will have to wait 125us for each of the zero length requests to
>> be returned.
>>
>> The underlying assumption behind the "queue 0-length requests" approach
>> was that video_pump encodes the frames in as few requests as possible
>> and that there are spare requests to maintain a pressure on the
>> ISOC queue without hindering the video_pump thread, and unfortunately
>> it seems like patch 3/3 is breaking both of them?
>
> Right.
>
>> Assuming my understanding of your patches is correct, my question
>> is: Why do we want to spread the frame uniformly over the requests
>> instead of encoding it in as few requests as possible. Spreading
>> the frame over more requests artificially increases the encode time
>> required by video_pump, and AFAICT there is no real benefit to it?
>
> Thinh gave me the advise that it is better to use the isoc stream
> constantly filled. Rather then streaming big amounts of data in the
> beginning of an frameinterval and having then a lot of spare time
> where the bandwidth is completely unsused.
>
> In our reallife scenario streaming big requests had the impact, that
> the dwc3 core could not keep up with reading the amount of data
> from the memory bus, as the bus is already under heavy load. When the
> HW was then not able to transfer the requested and actually available
> amount of data in the interval, the hw did give us the usual missed
> interrupt answer.
>
> Using smaller requests solved the problem here, as it really was
> unnecessary to stress the memory and usb bus in the beginning as
> we had enough headroom in the temporal domain.

Ah, I see. This was not a consideration, and it makes sense if the USB
bus is under contention from a few different streams. So the solution
seems to be to spread the frame over as many requests as we can transmit
in one frame interval?

As an experiment, while we wait for others to respond, could you try
doubling (or 2.5x'ing to be extra safe) the number of requests allocated
by patch 3/3 without changing the request's buffer size?

It won't help with the error reporting but should help with ensuring
that frames are sent out in one frameinterval with little to no
0-length requests between them.

The idea is that video_pump will have enough requests available to fully
encode the frame in one burst, and another frame's worth of requests will
be re-added to the req_free list for video_pump to fill up by the time the
next frame comes in.
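
A hedged sketch of that experiment (illustrative helper, not an actual
patch): keep the even split from patch 3/3 but allocate ~2.5x the per-frame
request count so video_pump never has to wait for completions mid-frame.

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static void example_nreq_with_headroom(unsigned int interval_100ns,
				       unsigned int imagesize,
				       unsigned int *nreq,
				       unsigned int *req_size)
{
	unsigned int per_frame = DIV_ROUND_UP(interval_100ns, 1250);

	*req_size = DIV_ROUND_UP(imagesize, per_frame);	/* unchanged split */
	*nreq     = per_frame * 5 / 2;			/* 2.5x headroom */
}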

>
> Which then led to the conclusion that the number of needed requests
> per image frame interval is calculatable since we know the usb
> interval length.
>
> @Thinh: Correct me if I am saying something wrong here.
>
>>> Therefor to properly make those patches work, we will have to get rid of
>>> the zero length pump mechanism again and make sure that the whole
>>> business logic of what to be queued and when will only be done in the
>>> pump worker. It is possible to let the dwc3 udc run dry, as we are
>>> actively waiting for the frame to finish, the last request in the
>>> prepared and started list will stop the current dwc3 stream and therfor
>>> no underruns will occur with the next ep_queue.
>>
>> One thing to note here: The reason we moved to queuing 0-length requests
>> from complete callback was because even with realtime priority, video_pump
>> thread doesn't always meet the ISOC queueing cadence. I think stopping and
>> starting the stream was briefly discussed in our initial discussion in
>> https://lore.kernel.org/all/[email protected]/
>> and Thinh mentioned that dwc3 controller does it if it detects an underrun,
>> but I am not sure if starting and stopping an ISOC stream is good practice.
>
> The realtime latency aspect is not an issue anymore if we ensure that we
> always keep only one frame in the hw ring buffer. When the pump worker
> ensure that it will always run through one full frame the scheduler has
> no chance to break our running dwc3 stream. Since the pump is running
> under a while(1) this should be possible.

I'll wait for your patch to see, but are you saying that we should have the
pump worker busy spinning when there are no frames available? Cameras cannot
produce frames fast enough for video_pump to be constantly encoding frames.
IIRC, "encoding" a frame to usb_requests took less than a ms or two on my
device, and frame interval is 33ms for a 30fps stream, so the CPU would be
busy spinning for ~30ms which is an unreasonable time for a CPU to be
idling.

>
> Also with the request amount precalculation we can always encode the
> whole frame into all available requests and don't have to wait for
> requests to be available again.
>
> Together with the latest knowladge about the underlying hw we even need to only
> keep one frame in the HW ring buffer. Since we have some interrupt latency,
> keeping more frames in the ring buffer, would mean that we are not able to tag
> the currently streamed frame properly as errornous if the dwc3 hw ring buffer
> is already telling the host some data about the next frame. And as we already
> need to wait for the end of the frame to finish, based on the assumption that
> only one frame is enqueued in the ring buffer the hw will stop the stream and
> the next requst will start a new stream. So there will no missed underruns be
> happening.
>
> So the main fact here is, that telling the host some status about a
> frame in the past is impossible! Therefor the first request of the next
> hw stream need to be the one that is telling the Host if the previous frame
> is ment to be drawn or not.

This is a fair point, but the timing on this becomes a little difficult if
the frame is sent over the entire frameinterval. If we wait for the entire
frame to be transmitted, then we have 125us between the last request of a
frame being transmitted and the first request of the next frame being
queued. The userspace app producing the frames will have timing variations
larger than 125us, so we cannot rely on a frame being available exactly as
one frame is fully transmitted, or of us being notified of transmission
status by the time the next frame comes in.

>
>> Someone better versed in USB protocol can probably confirm, but it seems
>> somewhat hacky to stop the ISOC stream at the end of the frame and restart
>> with the next frame.
>
> All I know is that the HW mechanism that is reading from the trb ring buffer is
> started or stopped I don't know if really the ISOC stream is stopped and
> restarted here or what that means on the real wire. And if so, I am unsure if
> that is really a problem or not. Thinh?

Oh? That's great! If the controller can keep the ISOC stream from underrunning
without the gadget feeding it 0-length requests, then we can simplify the
gadget side implementation quite a bit!


Regards,
Avi.

2024-04-24 02:29:10

by Thinh Nguyen

Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Tue, Apr 23, 2024, Avichal Rakesh wrote:
>
>
> On 4/23/24 07:25, Michael Grzeschik wrote:
> > Ccing:
> >
> > Michael Riesch <[email protected]>
> > Thinh Nguyen <[email protected]>
> >
> > On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
> >> On 4/21/24 16:25, Michael Grzeschik wrote:
> >>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
> >>>> This patch series is improving the size calculation and allocation
> >>>> of the uvc requests. Using the currenlty setup frame duration of the
> >>>> stream it is possible to calculate the number of requests based on the
> >>>> interval length.
> >>>
> >>> The basic concept here is right. But unfortunatly we found out that
> >>> together with Patch [1] and the current zero length request pump
> >>> mechanism [2] and [3] this is not working as expected.
> >>>
> >>> The conclusion that we can not queue more than one frame at once into
> >>> the hw led to [1]. The current implementation of zero length reqeusts
> >>> which will be queued while we are waiting for the frame to finish
> >>> transferring will enlarge the frame duration. Since every zero-length
> >>> request is still taking up at least one frame interval of 125 us.
> >>
> >> I haven't taken a super close look at your patches, so please feel free
> >> to correct me if I am misunderstanding something.
> >>
> >> It looks like the goal of the patches is to determine a better number
> >> and size of usb_requests from the given framerate such that we send exactly
> >> nreqs requests per frame where nreqs is determined to be the exact number
> >> of requests that can be sent in one frame interval?
> >
> > It does not need to be the exact time, actually it may not be exact.
> > Scattering the data over all requests would not leave any headroom for
> > any latencies or overhead.
>
> IIUC, patch 3/3 sets the number of requests to frameinterval / 125 us,
> which gives us the number of requests we can send in exactly one frame interval,
> and then sets the size of the request as max framesize / nreq, which means the
> frames will be evenly divided up into all available requests (with a little
> fuzz factor here and there).
>
> This effectively means that (assuming no other delays) one frame will take
> ~one frameinterval to be transmitted?
>
> >
> >> As the logic stands, we need some 0-length requests to be circulating to
> >> ensure that we don't miss ISOC deadlines. The current logic unconditionally
> >> sends half of all allocated requests to be circulated.
> >>
> >> With those two things in mind, this means than video_pump can at encode
> >> at most half a frame in one go, and then has to wait for complete
> >> callbacks to come in. In such cases, the theoretical worst case for
> >> encode time is
> >> 125us * (number of requests needed per frame / 2) + scheduling delays
> >> as after the first half of the frame has been encoded, the video_pump
> >> thread will have to wait 125us for each of the zero length requests to
> >> be returned.
> >>
> >> The underlying assumption behind the "queue 0-length requests" approach
> >> was that video_pump encodes the frames in as few requests as possible
> >> and that there are spare requests to maintain a pressure on the
> >> ISOC queue without hindering the video_pump thread, and unfortunately
> >> it seems like patch 3/3 is breaking both of them?
> >
> > Right.
> >
> >> Assuming my understanding of your patches is correct, my question
> >> is: Why do we want to spread the frame uniformly over the requests
> >> instead of encoding it in as few requests as possible. Spreading
> >> the frame over more requests artificially increases the encode time
> >> required by video_pump, and AFAICT there is no real benefit to it?
> >
> > Thinh gave me the advise that it is better to use the isoc stream
> > constantly filled. Rather then streaming big amounts of data in the
> > beginning of an frameinterval and having then a lot of spare time
> > where the bandwidth is completely unsused.
> >
> > In our reallife scenario streaming big requests had the impact, that
> > the dwc3 core could not keep up with reading the amount of data
> > from the memory bus, as the bus is already under heavy load. When the
> > HW was then not able to transfer the requested and actually available
> > amount of data in the interval, the hw did give us the usual missed
> > interrupt answer.
> >
> > Using smaller requests solved the problem here, as it really was
> > unnecessary to stress the memory and usb bus in the beginning as
> > we had enough headroom in the temporal domain.
>
> Ah, I see. This was not a consideration, and it makes sense if USB
> bus is under contention from a few different streams. So the solution
> seems to be to spread the frame of as many requests as we can transmit
> in one frameinterval?
>
> As an experiment, while we wait for others to respond, could you try
> doubling (or 2.5x'ing to be extra safe) the number of requests allocated
> by patch 3/3 without changing the request's buffer size?
>
> It won't help with the error reporting but should help with ensuring
> that frames are sent out in one frameinterval with little to no
> 0-length requests between them.
>
> The idea is that video_pump will have enough requests available to fully
> encode the frame in one burst, and another frame's worth of request will be
> re-added to req_free list for video_pump to fill up in the time that the next
> frame comes in.
>
> >
> > Which then led to the conclusion that the number of needed requests
> > per image frame interval is calculatable since we know the usb
> > interval length.
> >
> > @Thinh: Correct me if I am saying something wrong here.

Right, if you max out the data rate per uframe, there's less opportunity
for the host to schedule everything for that interval (e.g. it is affected
by other endpoint/device traffic, link commands, etc.). It also
increases the latency of DMA. In many cases, other vendor hosts
can't handle 48KB/uframe for SuperSpeed and 96KB/uframe for SuperSpeed
Plus. So you'd need to test your platform to find the optimal request size
so it can work with most hosts.

> >
> >>> Therefor to properly make those patches work, we will have to get rid of

Sorry if I have missed the explanation, but why do we need to get rid
of this?

> >>> the zero length pump mechanism again and make sure that the whole
> >>> business logic of what to be queued and when will only be done in the
> >>> pump worker. It is possible to let the dwc3 udc run dry, as we are
> >>> actively waiting for the frame to finish, the last request in the
> >>> prepared and started list will stop the current dwc3 stream and therfor
> >>> no underruns will occur with the next ep_queue.
> >>
> >> One thing to note here: The reason we moved to queuing 0-length requests
> >> from complete callback was because even with realtime priority, video_pump
> >> thread doesn't always meet the ISOC queueing cadence. I think stopping and
> >> starting the stream was briefly discussed in our initial discussion in
> >> https://lore.kernel.org/all/[email protected]/
> >> and Thinh mentioned that dwc3 controller does it if it detects an underrun,
> >> but I am not sure if starting and stopping an ISOC stream is good practice.

There's a workaround specific to UVC in dwc3 to "guess" when underruns
happen. It's not foolproof. dwc3 should not need to do that.

Isoc data is periodic and continuous. We should not expect this
unconventional re-synchronization.

> >
> > The realtime latency aspect is not an issue anymore if we ensure that we
> > always keep only one frame in the hw ring buffer. When the pump worker
> > ensure that it will always run through one full frame the scheduler has
> > no chance to break our running dwc3 stream. Since the pump is running
> > under a while(1) this should be possible.
>
> I'll wait for your patch to see, but are you saying that we should have the
> pump worker busy spinning when there are no frames available? Cameras cannot
> produce frames fast enough for video_pump to be constantly encoding frames.
> IIRC, "encoding" a frame to usb_requests took less than a ms or two on my
> device, and frame interval is 33ms for a 30fps stream, so the CPU would be
> busy spinning for ~30ms which is an unreasonable time for a CPU to be
> idling.
>
> >
> > Also with the request amount precalculation we can always encode the
> > whole frame into all available requests and don't have to wait for
> > requests to be available again.
> >
> > Together with the latest knowladge about the underlying hw we even need to only
> > keep one frame in the HW ring buffer. Since we have some interrupt latency,
> > keeping more frames in the ring buffer, would mean that we are not able to tag
> > the currently streamed frame properly as errornous if the dwc3 hw ring buffer
> > is already telling the host some data about the next frame. And as we already
> > need to wait for the end of the frame to finish, based on the assumption that
> > only one frame is enqueued in the ring buffer the hw will stop the stream and
> > the next requst will start a new stream. So there will no missed underruns be
> > happening.
> >
> > So the main fact here is, that telling the host some status about a
> > frame in the past is impossible! Therefor the first request of the next
> > hw stream need to be the one that is telling the Host if the previous frame
> > is ment to be drawn or not.
>
> This is a fair point, but the timing on this becomes a little difficult if
> the frame is sent over the entire frameinterval. If we wait for the entire
> frame to be transmitted, then we have 125us between the last request of a
> frame being transmitted and the first request of the next frame being
> queued. The userspace app producing the frames will have timing variations
> larger than 125us, so we cannot rely on a frame being available exactly as
> one frame is fully transmitted, or of us being notified of transmission
> status by the time the next frame comes in.
>
> >
> >> Someone better versed in USB protocol can probably confirm, but it seems
> >> somewhat hacky to stop the ISOC stream at the end of the frame and restart
> >> with the next frame.
> >
> > All I know is that the HW mechanism that is reading from the trb ring buffer is
> > started or stopped I don't know if really the ISOC stream is stopped and
> > restarted here or what that means on the real wire. And if so, I am unsure if
> > that is really a problem or not. Thinh?

For an isoc IN endpoint, if the host requests data while there's no TRB
prepared, the controller would respond with 0-length data. When we stop
and start again, we reschedule the prepared isoc data to go out on a new
interval.

>
> Oh? That's great! If the controller can keep the ISOC stream from underruning
> without the gadget feeding it 0-length requests, then we can simplify the
> gadget side implementation quite a bit!
>

I'm not entirely clear on why we cannot use 0-length requests now. I
hope we can resolve this from the UVC function driver, as it seems to be
the proper place to handle this issue.

Thanks,
Thinh

2024-05-12 22:10:27

by Michael Grzeschik

Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Wed, Apr 24, 2024 at 02:28:10AM +0000, Thinh Nguyen wrote:
>On Tue, Apr 23, 2024, Avichal Rakesh wrote:
>>
>>
>> On 4/23/24 07:25, Michael Grzeschik wrote:
>> > Ccing:
>> >
>> > Michael Riesch <[email protected]>
>> > Thinh Nguyen <[email protected]>
>> >
>> > On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
>> >> On 4/21/24 16:25, Michael Grzeschik wrote:
>> >>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
>> >>>> This patch series is improving the size calculation and allocation
>> >>>> of the uvc requests. Using the currenlty setup frame duration of the
>> >>>> stream it is possible to calculate the number of requests based on the
>> >>>> interval length.
>> >>>
>> >>> The basic concept here is right. But unfortunatly we found out that
>> >>> together with Patch [1] and the current zero length request pump
>> >>> mechanism [2] and [3] this is not working as expected.
>> >>>
>> >>> The conclusion that we can not queue more than one frame at once into
>> >>> the hw led to [1]. The current implementation of zero length reqeusts
>> >>> which will be queued while we are waiting for the frame to finish
>> >>> transferring will enlarge the frame duration. Since every zero-length
>> >>> request is still taking up at least one frame interval of 125 us.
>> >>
>> >> I haven't taken a super close look at your patches, so please feel free
>> >> to correct me if I am misunderstanding something.
>> >>
>> >> It looks like the goal of the patches is to determine a better number
>> >> and size of usb_requests from the given framerate such that we send exactly
>> >> nreqs requests per frame where nreqs is determined to be the exact number
>> >> of requests that can be sent in one frame interval?
>> >
>> > It does not need to be the exact time, actually it may not be exact.
>> > Scattering the data over all requests would not leave any headroom for
>> > any latencies or overhead.
>>
>> IIUC, patch 3/3 sets the number of requests to frameinterval / 125 us,
>> which gives us the number of requests we can send in exactly one frame interval,
>> and then sets the size of the request as max framesize / nreq, which means the
>> frames will be evenly divided up into all available requests (with a little
>> fuzz factor here and there).
>>
>> This effectively means that (assuming no other delays) one frame will take
>> ~one frameinterval to be transmitted?
>>
>> >
>> >> As the logic stands, we need some 0-length requests to be circulating to
>> >> ensure that we don't miss ISOC deadlines. The current logic unconditionally
>> >> sends half of all allocated requests to be circulated.
>> >>
>> >> With those two things in mind, this means than video_pump can at encode
>> >> at most half a frame in one go, and then has to wait for complete
>> >> callbacks to come in. In such cases, the theoretical worst case for
>> >> encode time is
>> >> 125us * (number of requests needed per frame / 2) + scheduling delays
>> >> as after the first half of the frame has been encoded, the video_pump
>> >> thread will have to wait 125us for each of the zero length requests to
>> >> be returned.
>> >>
>> >> The underlying assumption behind the "queue 0-length requests" approach
>> >> was that video_pump encodes the frames in as few requests as possible
>> >> and that there are spare requests to maintain a pressure on the
>> >> ISOC queue without hindering the video_pump thread, and unfortunately
>> >> it seems like patch 3/3 is breaking both of them?
>> >
>> > Right.
>> >
>> >> Assuming my understanding of your patches is correct, my question
>> >> is: Why do we want to spread the frame uniformly over the requests
>> >> instead of encoding it in as few requests as possible. Spreading
>> >> the frame over more requests artificially increases the encode time
>> >> required by video_pump, and AFAICT there is no real benefit to it?
>> >
>> > Thinh gave me the advise that it is better to use the isoc stream
>> > constantly filled. Rather then streaming big amounts of data in the
>> > beginning of an frameinterval and having then a lot of spare time
>> > where the bandwidth is completely unsused.
>> >
>> > In our reallife scenario streaming big requests had the impact, that
>> > the dwc3 core could not keep up with reading the amount of data
>> > from the memory bus, as the bus is already under heavy load. When the
>> > HW was then not able to transfer the requested and actually available
>> > amount of data in the interval, the hw did give us the usual missed
>> > interrupt answer.
>> >
>> > Using smaller requests solved the problem here, as it really was
>> > unnecessary to stress the memory and usb bus in the beginning as
>> > we had enough headroom in the temporal domain.
>>
>> Ah, I see. This was not a consideration, and it makes sense if USB
>> bus is under contention from a few different streams. So the solution
>> seems to be to spread the frame of as many requests as we can transmit
>> in one frameinterval?
>>
>> As an experiment, while we wait for others to respond, could you try
>> doubling (or 2.5x'ing to be extra safe) the number of requests allocated
>> by patch 3/3 without changing the request's buffer size?
>>
>> It won't help with the error reporting but should help with ensuring
>> that frames are sent out in one frameinterval with little to no
>> 0-length requests between them.
>>
>> The idea is that video_pump will have enough requests available to fully
>> encode the frame in one burst, and another frame's worth of request will be
>> re-added to req_free list for video_pump to fill up in the time that the next
>> frame comes in.
>>
>> >
>> > Which then led to the conclusion that the number of needed requests
>> > per image frame interval is calculatable since we know the usb
>> > interval length.
>> >
>> > @Thinh: Correct me if I am saying something wrong here.
>
>Right, if you max out the data rate per uframe, there's less opportunity
>for the host to schedule everything for that interval (e.g. affected
>from other endpoint/device traffics, link commands etc). It also
>increases the latency of DMA. In many cases, many other vendor hosts
>can't handle 48KB/uframe for SuperSpeed and 96KB/uframe for SuperSpeed
>Plus. So, you'd need to test your platform find the optimal request size
>so it can work for most hosts.
>
>> >
>> >>> Therefor to properly make those patches work, we will have to get rid of
>
>Sorry if I may have missed the explaination, but why do we need to rid
>of this?


The uvc_video gadget queues requests with ep_queue whenever they
are prepared. However, for UVC we may not send EOF to the host until
we know whether the frame was transmitted correctly or not.

To ensure this, the gadget waits for the last request to be
completed by dwc3. Until this request has been received, the current
workflow is to enqueue zero-length requests into the dwc3 hw. With that,
the final EOF request for the frame will be transmitted after the
zero-length requests have passed through the hw. (They carry no data, but
they still take one frame interval duration.) This frame, sparsely
interleaved with zero-length requests, will interfere with the
precalculation of the request data we fill every request with, which is
based on the expected frame duration.

I know this seems very interlocked. It is indeed very complex. Tell
me if you still have questions and I will come up with some more
details about the current uvc_video driver.

>> >>> the zero length pump mechanism again and make sure that the whole
>> >>> business logic of what to be queued and when will only be done in the
>> >>> pump worker. It is possible to let the dwc3 udc run dry, as we are
>> >>> actively waiting for the frame to finish, the last request in the
>> >>> prepared and started list will stop the current dwc3 stream and therfor
>> >>> no underruns will occur with the next ep_queue.
>> >>
>> >> One thing to note here: The reason we moved to queuing 0-length requests
>> >> from complete callback was because even with realtime priority, video_pump
>> >> thread doesn't always meet the ISOC queueing cadence. I think stopping and
>> >> starting the stream was briefly discussed in our initial discussion in
>> >> https://lore.kernel.org/all/[email protected]/
>> >> and Thinh mentioned that dwc3 controller does it if it detects an underrun,
>> >> but I am not sure if starting and stopping an ISOC stream is good practice.
>
>There's a workaround specific for UVC in dwc3 to "guess" when underrun
>happen. It's not foolproof. dwc3 should not need to do that.
>
>Isoc data is periodic and continuous. We should not expect this
>unconventional re-synchronization.

I think we have to discuss what is meant by resynchronization here. If
the trb ring buffer did run dry and the software is aware of this
(element in the started and prepared list), then the interrupt handler
is already calling the End Stream command.

When the stream is stopped, what implications does this have on the bus?

When the Endpoint is enabled, will the hardware then send zero-length
requests on its own?

With the next ep_queue we start another stream, and if we keep up with
this stream there are no underruns, right?

I picture this scenario in my mind:

thread 1: uvc->queue_buf is called:
  - encode the frame buffer data into all available requests and put
    them into the per-uvc_buffer prepared list
    (as we precalculated the number of requests to match the expected
    frame duration and buffer size, there will be enough requests
    available)
  - wake up the pump thread

thread 2: pump_worker is triggered
  - take all requests from the prepared buffer and enqueue them into
    the hardware
    (the pump worker runs in a while(1) loop as long as it finds
    requests in the per-buffer prepared list and will therefore have a
    high chance of finishing the pumping of one complete frame)
  - check for any errors reported from the complete handlers
  - on error:
    - stop enqueueing new requests from the current frame
    - wait until the last request of the erroneous frame has returned
    - only start pumping new requests from the next buffer when the
      last request of the active frame has finished
    - at the beginning of the next frame, send one extra request with
      the EOF/ERR tag so the host knows whether the last frame was ok
      or not

thread 3: complete handler (interrupt)
  - give the requests back to the empty_list
  - report EXDEV and errors
  - wake up the pump thread

With this method we will continuously drain the hw trb stream of the dwc3
controller per frame and therefore will not hit a window where the
currently running stream could be missed. By spreading the data over
many requests we also avoid missed requests when the DMA is too slow.
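
A minimal sketch of the pump loop from thread 2 above, assuming
hypothetical helpers (sketch_pop_prepared, sketch_frame_completed) and a
per-video error flag; this is an illustration of the idea, not the actual
implementation:

#include <linux/usb/gadget.h>
#include <linux/wait.h>

/* Hedged sketch of the pump loop outlined above; all names are hypothetical. */
static void sketch_video_pump(struct sketch_video *video)
{
	struct usb_request *req;

	while ((req = sketch_pop_prepared(video->cur_buf))) {
		/*
		 * Stop feeding the current frame as soon as the complete
		 * handler reported an error; the remaining requests of this
		 * frame are dropped and the next frame starts with a
		 * header-only request carrying the EOF/ERR status.
		 */
		if (READ_ONCE(video->frame_error))
			break;

		if (usb_ep_queue(video->ep, req, GFP_ATOMIC) < 0) {
			WRITE_ONCE(video->frame_error, true);
			break;
		}
	}

	/*
	 * Wait until the last enqueued request of this frame has been given
	 * back before the next buffer may be pumped.
	 */
	wait_event(video->wq, sketch_frame_completed(video));
}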

>> > The realtime latency aspect is not an issue anymore if we ensure that we
>> > always keep only one frame in the hw ring buffer. When the pump worker
>> > ensure that it will always run through one full frame the scheduler has
>> > no chance to break our running dwc3 stream. Since the pump is running
>> > under a while(1) this should be possible.
>>
>> I'll wait for your patch to see, but are you saying that we should have the
>> pump worker busy spinning when there are no frames available? Cameras cannot
>> produce frames fast enough for video_pump to be constantly encoding frames.
>> IIRC, "encoding" a frame to usb_requests took less than a ms or two on my
>> device, and frame interval is 33ms for a 30fps stream, so the CPU would be
>> busy spinning for ~30ms which is an unreasonable time for a CPU to be
>> idling.
>>
>> >
>> > Also with the request amount precalculation we can always encode the
>> > whole frame into all available requests and don't have to wait for
>> > requests to be available again.
>> >
>> > Together with the latest knowladge about the underlying hw we even need to only
>> > keep one frame in the HW ring buffer. Since we have some interrupt latency,
>> > keeping more frames in the ring buffer, would mean that we are not able to tag
>> > the currently streamed frame properly as errornous if the dwc3 hw ring buffer
>> > is already telling the host some data about the next frame. And as we already
>> > need to wait for the end of the frame to finish, based on the assumption that
>> > only one frame is enqueued in the ring buffer the hw will stop the stream and
>> > the next requst will start a new stream. So there will no missed underruns be
>> > happening.
>> >
>> > So the main fact here is, that telling the host some status about a
>> > frame in the past is impossible! Therefor the first request of the next
>> > hw stream need to be the one that is telling the Host if the previous frame
>> > is ment to be drawn or not.
>>
>> This is a fair point, but the timing on this becomes a little difficult if
>> the frame is sent over the entire frameinterval. If we wait for the entire
>> frame to be transmitted, then we have 125us between the last request of a
>> frame being transmitted and the first request of the next frame being
>> queued. The userspace app producing the frames will have timing variations
>> larger than 125us, so we cannot rely on a frame being available exactly as
>> one frame is fully transmitted, or of us being notified of transmission
>> status by the time the next frame comes in.
>>
>> >
>> >> Someone better versed in USB protocol can probably confirm, but it seems
>> >> somewhat hacky to stop the ISOC stream at the end of the frame and restart
>> >> with the next frame.
>> >
>> > All I know is that the HW mechanism that is reading from the trb ring buffer is
>> > started or stopped I don't know if really the ISOC stream is stopped and
>> > restarted here or what that means on the real wire. And if so, I am unsure if
>> > that is really a problem or not. Thinh?
>
>For isoc IN endpoint, if the host requests for data while there's no TRB
>prepared, the controller would respond with 0-length data. When we stop
>and start again, we reschedule the prepared isoc data to go out on a new
>interval.
>
>>
>> Oh? That's great! If the controller can keep the ISOC stream from underruning
>> without the gadget feeding it 0-length requests, then we can simplify the
>> gadget side implementation quite a bit!

>I'm not entirely clear on why we cannot use 0-length requests now. I
>hope we can resolve this from UVC function driver as it seems to be a
>proper place to handle this issue.

See above.

Michael

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-05-17 01:44:35

by Thinh Nguyen

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Mon, May 13, 2024, Michael Grzeschik wrote:
> On Wed, Apr 24, 2024 at 02:28:10AM +0000, Thinh Nguyen wrote:
> > On Tue, Apr 23, 2024, Avichal Rakesh wrote:
> > >
> > >
> > > On 4/23/24 07:25, Michael Grzeschik wrote:
> > > > Ccing:
> > > >
> > > > Michael Riesch <[email protected]>
> > > > Thinh Nguyen <[email protected]>
> > > >
> > > > On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
> > > >> On 4/21/24 16:25, Michael Grzeschik wrote:
> > > >>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
> > > >>>> This patch series is improving the size calculation and allocation
> > > >>>> of the uvc requests. Using the currenlty setup frame duration of the
> > > >>>> stream it is possible to calculate the number of requests based on the
> > > >>>> interval length.
> > > >>>
> > > >>> The basic concept here is right. But unfortunatly we found out that
> > > >>> together with Patch [1] and the current zero length request pump
> > > >>> mechanism [2] and [3] this is not working as expected.
> > > >>>
> > > >>> The conclusion that we can not queue more than one frame at once into
> > > >>> the hw led to [1]. The current implementation of zero length reqeusts
> > > >>> which will be queued while we are waiting for the frame to finish
> > > >>> transferring will enlarge the frame duration. Since every zero-length
> > > >>> request is still taking up at least one frame interval of 125 us.
> > > >>
> > > >> I haven't taken a super close look at your patches, so please feel free
> > > >> to correct me if I am misunderstanding something.
> > > >>
> > > >> It looks like the goal of the patches is to determine a better number
> > > >> and size of usb_requests from the given framerate such that we send exactly
> > > >> nreqs requests per frame where nreqs is determined to be the exact number
> > > >> of requests that can be sent in one frame interval?
> > > >
> > > > It does not need to be the exact time, actually it may not be exact.
> > > > Scattering the data over all requests would not leave any headroom for
> > > > any latencies or overhead.
> > >
> > > IIUC, patch 3/3 sets the number of requests to frameinterval / 125 us,
> > > which gives us the number of requests we can send in exactly one frame interval,
> > > and then sets the size of the request as max framesize / nreq, which means the
> > > frames will be evenly divided up into all available requests (with a little
> > > fuzz factor here and there).
> > >
> > > This effectively means that (assuming no other delays) one frame will take
> > > ~one frameinterval to be transmitted?
> > >
> > > >
> > > >> As the logic stands, we need some 0-length requests to be circulating to
> > > >> ensure that we don't miss ISOC deadlines. The current logic unconditionally
> > > >> sends half of all allocated requests to be circulated.
> > > >>
> > > >> With those two things in mind, this means than video_pump can at encode
> > > >> at most half a frame in one go, and then has to wait for complete
> > > >> callbacks to come in. In such cases, the theoretical worst case for
> > > >> encode time is
> > > >> 125us * (number of requests needed per frame / 2) + scheduling delays
> > > >> as after the first half of the frame has been encoded, the video_pump
> > > >> thread will have to wait 125us for each of the zero length requests to
> > > >> be returned.
> > > >>
> > > >> The underlying assumption behind the "queue 0-length requests" approach
> > > >> was that video_pump encodes the frames in as few requests as possible
> > > >> and that there are spare requests to maintain a pressure on the
> > > >> ISOC queue without hindering the video_pump thread, and unfortunately
> > > >> it seems like patch 3/3 is breaking both of them?
> > > >
> > > > Right.
> > > >
> > > >> Assuming my understanding of your patches is correct, my question
> > > >> is: Why do we want to spread the frame uniformly over the requests
> > > >> instead of encoding it in as few requests as possible. Spreading
> > > >> the frame over more requests artificially increases the encode time
> > > >> required by video_pump, and AFAICT there is no real benefit to it?
> > > >
> > > > Thinh gave me the advise that it is better to use the isoc stream
> > > > constantly filled. Rather then streaming big amounts of data in the
> > > > beginning of an frameinterval and having then a lot of spare time
> > > > where the bandwidth is completely unsused.
> > > >
> > > > In our reallife scenario streaming big requests had the impact, that
> > > > the dwc3 core could not keep up with reading the amount of data
> > > > from the memory bus, as the bus is already under heavy load. When the
> > > > HW was then not able to transfer the requested and actually available
> > > > amount of data in the interval, the hw did give us the usual missed
> > > > interrupt answer.
> > > >
> > > > Using smaller requests solved the problem here, as it really was
> > > > unnecessary to stress the memory and usb bus in the beginning as
> > > > we had enough headroom in the temporal domain.
> > >
> > > Ah, I see. This was not a consideration, and it makes sense if USB
> > > bus is under contention from a few different streams. So the solution
> > > seems to be to spread the frame of as many requests as we can transmit
> > > in one frameinterval?
> > >
> > > As an experiment, while we wait for others to respond, could you try
> > > doubling (or 2.5x'ing to be extra safe) the number of requests allocated
> > > by patch 3/3 without changing the request's buffer size?
> > >
> > > It won't help with the error reporting but should help with ensuring
> > > that frames are sent out in one frameinterval with little to no
> > > 0-length requests between them.
> > >
> > > The idea is that video_pump will have enough requests available to fully
> > > encode the frame in one burst, and another frame's worth of request will be
> > > re-added to req_free list for video_pump to fill up in the time that the next
> > > frame comes in.
> > >
> > > >
> > > > Which then led to the conclusion that the number of needed requests
> > > > per image frame interval is calculatable since we know the usb
> > > > interval length.
> > > >
> > > > @Thinh: Correct me if I am saying something wrong here.
> >
> > Right, if you max out the data rate per uframe, there's less opportunity
> > for the host to schedule everything for that interval (e.g. affected
> > from other endpoint/device traffics, link commands etc). It also
> > increases the latency of DMA. In many cases, many other vendor hosts
> > can't handle 48KB/uframe for SuperSpeed and 96KB/uframe for SuperSpeed
> > Plus. So, you'd need to test your platform find the optimal request size
> > so it can work for most hosts.
> >
> > > >
> > > >>> Therefor to properly make those patches work, we will have to get rid of
> >
> > Sorry if I may have missed the explaination, but why do we need to rid
> > of this?
>
>
> The uvc_video gadget is queueing requests with ep_queue whenever they
> are prepared. However for uvc we may not send EOF to the host until
> we know that the frame was transmitted correct or wrong.
>
> To ensure this the gadget is waiting for the last request to be
> completed from dwc3. Until this request was not received, the current
> workflow is to enqueue zero-length requests into the dwc3 hw. With that,
> the final EOF request for the frame will be transmitted after the
> zero-length requests have passed the hw. (They have no data, but they
> still take one frameinterval durtion). This sparsed frame with
> zero-requests inbetween will interfere with the precalculation for
> request data we fill every request with based on the expected frame
> duration.
>
> I know this seems very interlocked. It is very complex indeed. Tell
> me if you still have questions and I will come up with some more
> details to the current uvc_video driver.
>
> > > >>> the zero length pump mechanism again and make sure that the whole
> > > >>> business logic of what to be queued and when will only be done in the
> > > >>> pump worker. It is possible to let the dwc3 udc run dry, as we are
> > > >>> actively waiting for the frame to finish, the last request in the
> > > >>> prepared and started list will stop the current dwc3 stream and  for
> > > >>> no underruns will occur with the next ep_queue.
> > > >>
> > > >> One thing to note here: The reason we moved to queuing 0-length requests
> > > >> from complete callback was because even with realtime priority, video_pump
> > > >> thread doesn't always meet the ISOC queueing cadence. I think stopping and
> > > >> starting the stream was briefly discussed in our initial discussion in
> > > >> https://lore.kernel.org/all/[email protected]/
> > > >> and Thinh mentioned that dwc3 controller does it if it detects an underrun,
> > > >> but I am not sure if starting and stopping an ISOC stream is good practice.
> >
> > There's a workaround specific for UVC in dwc3 to "guess" when underrun
> > happen. It's not foolproof. dwc3 should not need to do that.
> >
> > Isoc data is periodic and continuous. We should not expect this
> > unconventional re-synchronization.
>
> I think we have to discuss what is ment by resynchronization here. If
> the trb ring buffer did run dry and the software is aware of this
> (elemnt in the started and prepared list) then the interrupt handler
> already is calling End Stream Command.

The driver is only aware of this when the controller tells it, which may
already be too late.

>
> When the stream is stopped, what implications does this have on the bus?
>
> When the Endpoint is enabled, will the hardware then send zero-length
> requests on its own?

For an isoc IN endpoint, yes. If the host requests isoc data IN while
no TRB is prepared, then the controller will automatically respond with a
0-length packet.

>
> With the next ep_queue we start another stream and when we keep up with
> this stream there is no underruns, right?
>
> I picture this scenario in my mind:
>
> thread 1: uvc->queue_buf is called:
> - we encode the frame buffer data into all available requests
> and put them into the per uvc_buffer perpared list
> (as we precalculated the amount of requests properly to the expected
> frame duration and buffer size there will be enough requests
> available)
> - wake up the pump thread
>
> thread 2: pump_worker is triggered
> - take all requests from the prepared available buffer and enqueue them
> into the hardware
> (The pump worker is running with while(1) while it finds requests in
> the per buffer prepared list) and therefor will have a high chance
> to finish the pumping for one complete frame.
> - check for any errors reported from the complete handlers
> - on error
> - stop enqueing new requests from current frame
> - wait for the last request from errornous frame has returned
> - only start pumping new requests from the next buffer when the last
> request from the active frame has finished
> - In the beginning of the next frame send one extra request with
> EOF/ERR tag so the host knows that the last one was ok or not.
>
> thread 3: complete handler (interrupt)
> - give back the requests into the empty_list
> - report EXDEV and errors
> - wake up the pump thread
>
> With this method we will continously drain the hw trb stream of the dwc3
> controller per frame and therefor will not shoot into one window where
> the current stream could be missed. With the data spreading over the
> many requests we also avoid the missed requests when the DMA was to
> slow.
>

This sounds good.

As long as we can maintain more than X number of requests enqueued to
accommodate the worst latency, then we can avoid underrun. The driver
should monitor how many requests are enqueued and hopefully can keep up
the queue with zero-length requests.

Thanks,
Thinh

2024-05-22 00:08:32

by Michael Grzeschik

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>On Mon, May 13, 2024, Michael Grzeschik wrote:
>> On Wed, Apr 24, 2024 at 02:28:10AM +0000, Thinh Nguyen wrote:
>> > On Tue, Apr 23, 2024, Avichal Rakesh wrote:
>> > >
>> > >
>> > > On 4/23/24 07:25, Michael Grzeschik wrote:
>> > > > Ccing:
>> > > >
>> > > > Michael Riesch <[email protected]>
>> > > > Thinh Nguyen <[email protected]>
>> > > >
>> > > > On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
>> > > >> On 4/21/24 16:25, Michael Grzeschik wrote:
>> > > >>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
>> > > >>>> This patch series is improving the size calculation and allocation
>> > > >>>> of the uvc requests. Using the currenlty setup frame duration of the
>> > > >>>> stream it is possible to calculate the number of requests based on the
>> > > >>>> interval length.
>> > > >>>
>> > > >>> The basic concept here is right. But unfortunatly we found out that
>> > > >>> together with Patch [1] and the current zero length request pump
>> > > >>> mechanism [2] and [3] this is not working as expected.
>> > > >>>
>> > > >>> The conclusion that we can not queue more than one frame at once into
>> > > >>> the hw led to [1]. The current implementation of zero length reqeusts
>> > > >>> which will be queued while we are waiting for the frame to finish
>> > > >>> transferring will enlarge the frame duration. Since every zero-length
>> > > >>> request is still taking up at least one frame interval of 125 us.
>> > > >>
>> > > >> I haven't taken a super close look at your patches, so please feel free
>> > > >> to correct me if I am misunderstanding something.
>> > > >>
>> > > >> It looks like the goal of the patches is to determine a better number
>> > > >> and size of usb_requests from the given framerate such that we send exactly
>> > > >> nreqs requests per frame where nreqs is determined to be the exact number
>> > > >> of requests that can be sent in one frame interval?
>> > > >
>> > > > It does not need to be the exact time, actually it may not be exact.
>> > > > Scattering the data over all requests would not leave any headroom for
>> > > > any latencies or overhead.
>> > >
>> > > IIUC, patch 3/3 sets the number of requests to frameinterval / 125 us,
>> > > which gives us the number of requests we can send in exactly one frame interval,
>> > > and then sets the size of the request as max framesize / nreq, which means the
>> > > frames will be evenly divided up into all available requests (with a little
>> > > fuzz factor here and there).
>> > >
>> > > This effectively means that (assuming no other delays) one frame will take
>> > > ~one frameinterval to be transmitted?
>> > >
>> > > >
>> > > >> As the logic stands, we need some 0-length requests to be circulating to
>> > > >> ensure that we don't miss ISOC deadlines. The current logic unconditionally
>> > > >> sends half of all allocated requests to be circulated.
>> > > >>
>> > > >> With those two things in mind, this means than video_pump can at encode
>> > > >> at most half a frame in one go, and then has to wait for complete
>> > > >> callbacks to come in. In such cases, the theoretical worst case for
>> > > >> encode time is
>> > > >> 125us * (number of requests needed per frame / 2) + scheduling delays
>> > > >> as after the first half of the frame has been encoded, the video_pump
>> > > >> thread will have to wait 125us for each of the zero length requests to
>> > > >> be returned.
>> > > >>
>> > > >> The underlying assumption behind the "queue 0-length requests" approach
>> > > >> was that video_pump encodes the frames in as few requests as possible
>> > > >> and that there are spare requests to maintain a pressure on the
>> > > >> ISOC queue without hindering the video_pump thread, and unfortunately
>> > > >> it seems like patch 3/3 is breaking both of them?
>> > > >
>> > > > Right.
>> > > >
>> > > >> Assuming my understanding of your patches is correct, my question
>> > > >> is: Why do we want to spread the frame uniformly over the requests
>> > > >> instead of encoding it in as few requests as possible. Spreading
>> > > >> the frame over more requests artificially increases the encode time
>> > > >> required by video_pump, and AFAICT there is no real benefit to it?
>> > > >
>> > > > Thinh gave me the advise that it is better to use the isoc stream
>> > > > constantly filled. Rather then streaming big amounts of data in the
>> > > > beginning of an frameinterval and having then a lot of spare time
>> > > > where the bandwidth is completely unsused.
>> > > >
>> > > > In our reallife scenario streaming big requests had the impact, that
>> > > > the dwc3 core could not keep up with reading the amount of data
>> > > > from the memory bus, as the bus is already under heavy load. When the
>> > > > HW was then not able to transfer the requested and actually available
>> > > > amount of data in the interval, the hw did give us the usual missed
>> > > > interrupt answer.
>> > > >
>> > > > Using smaller requests solved the problem here, as it really was
>> > > > unnecessary to stress the memory and usb bus in the beginning as
>> > > > we had enough headroom in the temporal domain.
>> > >
>> > > Ah, I see. This was not a consideration, and it makes sense if USB
>> > > bus is under contention from a few different streams. So the solution
>> > > seems to be to spread the frame of as many requests as we can transmit
>> > > in one frameinterval?
>> > >
>> > > As an experiment, while we wait for others to respond, could you try
>> > > doubling (or 2.5x'ing to be extra safe) the number of requests allocated
>> > > by patch 3/3 without changing the request's buffer size?
>> > >
>> > > It won't help with the error reporting but should help with ensuring
>> > > that frames are sent out in one frameinterval with little to no
>> > > 0-length requests between them.
>> > >
>> > > The idea is that video_pump will have enough requests available to fully
>> > > encode the frame in one burst, and another frame's worth of request will be
>> > > re-added to req_free list for video_pump to fill up in the time that the next
>> > > frame comes in.
>> > >
>> > > >
>> > > > Which then led to the conclusion that the number of needed requests
>> > > > per image frame interval is calculatable since we know the usb
>> > > > interval length.
>> > > >
>> > > > @Thinh: Correct me if I am saying something wrong here.
>> >
>> > Right, if you max out the data rate per uframe, there's less opportunity
>> > for the host to schedule everything for that interval (e.g. affected
>> > from other endpoint/device traffics, link commands etc). It also
>> > increases the latency of DMA. In many cases, many other vendor hosts
>> > can't handle 48KB/uframe for SuperSpeed and 96KB/uframe for SuperSpeed
>> > Plus. So, you'd need to test your platform find the optimal request size
>> > so it can work for most hosts.
>> >
>> > > >
>> > > >>> Therefor to properly make those patches work, we will have to get rid of
>> >
>> > Sorry if I may have missed the explaination, but why do we need to rid
>> > of this?
>>
>>
>> The uvc_video gadget is queueing requests with ep_queue whenever they
>> are prepared. However for uvc we may not send EOF to the host until
>> we know that the frame was transmitted correct or wrong.
>>
>> To ensure this the gadget is waiting for the last request to be
>> completed from dwc3. Until this request was not received, the current
>> workflow is to enqueue zero-length requests into the dwc3 hw. With that,
>> the final EOF request for the frame will be transmitted after the
>> zero-length requests have passed the hw. (They have no data, but they
>> still take one frameinterval durtion). This sparsed frame with
>> zero-requests inbetween will interfere with the precalculation for
>> request data we fill every request with based on the expected frame
>> duration.
>>
>> I know this seems very interlocked. It is very complex indeed. Tell
>> me if you still have questions and I will come up with some more
>> details to the current uvc_video driver.
>>
>> > > >>> the zero length pump mechanism again and make sure that the whole
>> > > >>> business logic of what to be queued and when will only be done in the
>> > > >>> pump worker. It is possible to let the dwc3 udc run dry, as we are
>> > > >>> actively waiting for the frame to finish, the last request in the
>> > > >>> prepared and started list will stop the current dwc3 stream and? for
>> > > >>> no underruns will occur with the next ep_queue.
>> > > >>
>> > > >> One thing to note here: The reason we moved to queuing 0-length requests
>> > > >> from complete callback was because even with realtime priority, video_pump
>> > > >> thread doesn't always meet the ISOC queueing cadence. I think stopping and
>> > > >> starting the stream was briefly discussed in our initial discussion in
>> > > >> https://lore.kernel.org/all/[email protected]/
>> > > >> and Thinh mentioned that dwc3 controller does it if it detects an underrun,
>> > > >> but I am not sure if starting and stopping an ISOC stream is good practice.
>> >
>> > There's a workaround specific for UVC in dwc3 to "guess" when underrun
>> > happen. It's not foolproof. dwc3 should not need to do that.
>> >
>> > Isoc data is periodic and continuous. We should not expect this
>> > unconventional re-synchronization.
>>
>> I think we have to discuss what is ment by resynchronization here. If
>> the trb ring buffer did run dry and the software is aware of this
>> (elemnt in the started and prepared list) then the interrupt handler
>> already is calling End Stream Command.
>
>The driver only aware of this when the controller tells it, which may be
>already too late.

In our special case there should not be any "too late" anymore, since we
ensure that all requests of one transfer (which represents one frame)
are enqueued in time, and we do not depend on the complete handler for
anything other than telling the uvc driver that the last request came
back or that there was some error in the currently active frame.

As already stated, we also have to hold back the next frame and are only
allowed to enqueue one frame at a time into the hardware. Otherwise it
is not possible to tell the host whether the frame was broken or not.

I have the following scheme in my mind. It is simplified to take frames
of only 4 requests into account. (>80 chars warning)


frameinterval: | 125 us | 125 us | 125 us | 125 us | 125 us | 125 us | 125 us |
| | | | | | | |
pump thread: queue |rqA1 rqA2 rqA3 rqA4'| | | | |rqB0 rqB1 rqB2 rqB3 |rqB4' |
irq thread: complete | |rqA1 |rqA2 |rqA3 |rqA4' | |rqB0 | rqB1
qbuf thread: encode |rqB1 rqB2 rqB3 rqB4'| | | | |rqA1 rqA2 rqA3 rqA4'| |

dwc3 thread: Hardware < Start Transfer End Transfer > < Start Transfer ....

legend:
- rq' : last request of a frame
- rqB0 : first request of the next transfer with no payload but the header only
telling the host that the last frame was ok/broken

assumption:

- pump thread is never interrupted by a kernel thread but only by some short running irq
- if one request comes back with -EXDEV the rest of the enqueued requests should be flushed

In the no_interrupt case we would also only generate the interrupt for
the last request and give back all four requests in the last interval.
This should still work fine.

We also only start streaming when one frame is completely available to be
enqueued in one run. So in case both frames rqA and rqB came back with
errors, the start of the next frame will only begin after that frame has
been completely and fully encoded.
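
To illustrate what rqB0 could look like, here is a hedged sketch of a
header-only request that carries the status of the previous frame. Only
the UVC_STREAM_* bits come from <linux/usb/video.h>; the function and its
parameters are hypothetical, and how the status is actually conveyed is
still up for discussion:

#include <linux/types.h>
#include <linux/usb/gadget.h>
#include <linux/usb/video.h>

/*
 * Hedged sketch: one possible layout of the header-only rqB0 request.
 * The 2-byte UVC payload header and its bits are standard; everything
 * else (function name, parameters) is hypothetical.
 */
static void sketch_fill_status_request(struct usb_request *req,
				       bool prev_frame_broken, u8 fid)
{
	u8 *header = req->buf;

	header[0] = 2;				/* bHeaderLength: header only */
	header[1] = UVC_STREAM_EOH | UVC_STREAM_EOF | (fid & UVC_STREAM_FID);
	if (prev_frame_broken)
		header[1] |= UVC_STREAM_ERR;	/* tell the host to discard it */

	req->length = 2;			/* no image payload at all */
}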

>> When the stream is stopped, what implications does this have on the bus?
>>
>> When the Endpoint is enabled, will the hardware then send zero-length
>> requests on its own?
>
>For isoc endpoint IN, yes. If the host requests for isoc data IN while
>no TRB is prepared, then the controller will automatically send 0-length
>packet respond.

Perfect! This will help a lot and will make actively queueing our own
zero-length requests unnecessary.

>> With the next ep_queue we start another stream and when we keep up with
>> this stream there is no underruns, right?
>>
>> I picture this scenario in my mind:
>>
>> thread 1: uvc->queue_buf is called:
>> - we encode the frame buffer data into all available requests
>> and put them into the per uvc_buffer perpared list
>> (as we precalculated the amount of requests properly to the expected
>> frame duration and buffer size there will be enough requests
>> available)
>> - wake up the pump thread
>>
>> thread 2: pump_worker is triggered
>> - take all requests from the prepared available buffer and enqueue them
>> into the hardware
>> (The pump worker is running with while(1) while it finds requests in
>> the per buffer prepared list) and therefor will have a high chance
>> to finish the pumping for one complete frame.
>> - check for any errors reported from the complete handlers
>> - on error
>> - stop enqueing new requests from current frame
>> - wait for the last request from errornous frame has returned
>> - only start pumping new requests from the next buffer when the last
>> request from the active frame has finished
>> - In the beginning of the next frame send one extra request with
>> EOF/ERR tag so the host knows that the last one was ok or not.
>>
>> thread 3: complete handler (interrupt)
>> - give back the requests into the empty_list
>> - report EXDEV and errors
>> - wake up the pump thread
>>
>> With this method we will continously drain the hw trb stream of the dwc3
>> controller per frame and therefor will not shoot into one window where
>> the current stream could be missed. With the data spreading over the
>> many requests we also avoid the missed requests when the DMA was to
>> slow.
>>
>
>This sounds good.
>
>As long as we can maintain more than X number of requests enqueued to
>accomodate for the worst latency, then we can avoid underrun. The driver
>should monitor how many requests are enqueued and hopefully can keep up
>the queue with zero-length requests.

Unless I totally misunderstood something or oversimplified too much, the
above explanation should make this unnecessary.

We are, however, able to track the number of enqueued requests in the udc
driver and compare that with the number of completed requests.

We also have the usb_gadget_frame_number callback to the udc controller
to ask which (micro)frame it is operating in at the moment. This way we
would be able to avoid enqueuing requests into a transfer that has not
yet completely come back and would otherwise run into missed transfers.
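
A hedged sketch of such a check, using the existing
usb_gadget_frame_number() gadget API; note that its return value is
controller specific, so this is only an illustration of the idea:

#include <linux/usb/gadget.h>

/*
 * Hedged sketch: usb_gadget_frame_number() is an existing gadget API, but
 * what it returns is controller specific (microframe vs. frame number) and
 * wrap-around is ignored here, so this is only an illustration.
 */
static bool sketch_safe_to_enqueue(struct usb_gadget *gadget,
				   int last_xfer_frame, int nreqs)
{
	int now = usb_gadget_frame_number(gadget);

	/* Only enqueue once the previous transfer window has fully elapsed. */
	return now - last_xfer_frame >= nreqs;
}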

Michael

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-05-22 01:59:05

by Thinh Nguyen

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Wed, May 22, 2024, Michael Grzeschik wrote:
> On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
> > On Mon, May 13, 2024, Michael Grzeschik wrote:
> > >
> > > I think we have to discuss what is ment by resynchronization here. If
> > > the trb ring buffer did run dry and the software is aware of this
> > > (elemnt in the started and prepared list) then the interrupt handler
> > > already is calling End Stream Command.
> >
> > The driver only aware of this when the controller tells it, which may be
> > already too late.
>
> In our special case there should not be any too late any more. Since we
> ensure that all requests will be enqueued for one transfer (which will
> represent one frame) in time and we are not dependent on the complete
> handler for nothing else than telling the uvc driver that the last
> request came back or if there was some error in the current active
> frame.
>
> As already stated we also have to wait with enqueueing the next frame
> to the hardware and only are allowed to enqueue one frame at a time.
> Otherwise it is not possible to tell the host if the frame was broken or
> not.
>
> I have the following scheme in my mind. It is simplified to take frames
> of only 4 requests into account. (>80 chars warning)
>
>
> frameinterval: | 125 us | 125 us | 125 us | 125 us | 125 us | 125 us | 125 us |
> | | | | | | | |
> pump thread: queue |rqA1 rqA2 rqA3 rqA4'| | | | |rqB0 rqB1 rqB2 rqB3 |rqB4' |
> irq thread: complete | |rqA1 |rqA2 |rqA3 |rqA4' | |rqB0 | rqB1
> qbuf thread: encode |rqB1 rqB2 rqB3 rqB4'| | | | |rqA1 rqA2 rqA3 rqA4'| |
>
> dwc3 thread: Hardware < Start Transfer End Transfer > < Start Transfer ....
>
> legend:
> - rq' : last request of a frame
> - rqB0 : first request of the next transfer with no payload but the header only
> telling the host that the last frame was ok/broken
>
> assumption:
>
> - pump thread is never interrupted by a kernel thread but only by some short running irq
> - if one request comes back with -EXDEV the rest of the enqueued requests should be flushed
>
> In the no_interrupt case we would also only generate the interrupt for
> the last request and giveback all four requests in the last interval.
> This should still work fine.
>
> We also only start streaming when one frame is totally available to be
> enqueued in one run. So in case frames with rqA and rqB both did come back
> with errors the start of the next frame will only begin after the next
> frame was completely and fully encoded.

Yes. This is better.

>
> > > When the stream is stopped, what implications does this have on the bus?
> > >
> > > When the Endpoint is enabled, will the hardware then send zero-length
> > > requests on its own?
> >
> > For isoc endpoint IN, yes. If the host requests for isoc data IN while
> > no TRB is prepared, then the controller will automatically send 0-length
> > packet respond.
>
> Perfect! This will help a lot and will make active queueing of own
> zero-length requests run unnecessary.

Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
then this will work.

>
> > > With the next ep_queue we start another stream and when we keep up with
> > > this stream there is no underruns, right?
> > >
> > > I picture this scenario in my mind:
> > >
> > > thread 1: uvc->queue_buf is called:
> > > - we encode the frame buffer data into all available requests
> > > and put them into the per uvc_buffer perpared list
> > > (as we precalculated the amount of requests properly to the expected
> > > frame duration and buffer size there will be enough requests
> > > available)
> > > - wake up the pump thread
> > >
> > > thread 2: pump_worker is triggered
> > > - take all requests from the prepared available buffer and enqueue them
> > > into the hardware
> > > (The pump worker is running with while(1) while it finds requests in
> > > the per buffer prepared list) and therefor will have a high chance
> > > to finish the pumping for one complete frame.
> > > - check for any errors reported from the complete handlers
> > > - on error
> > > - stop enqueing new requests from current frame
> > > - wait for the last request from errornous frame has returned
> > > - only start pumping new requests from the next buffer when the last
> > > request from the active frame has finished
> > > - In the beginning of the next frame send one extra request with
> > > EOF/ERR tag so the host knows that the last one was ok or not.
> > >
> > > thread 3: complete handler (interrupt)
> > > - give back the requests into the empty_list
> > > - report EXDEV and errors
> > > - wake up the pump thread
> > >
> > > With this method we will continously drain the hw trb stream of the dwc3
> > > controller per frame and therefor will not shoot into one window where
> > > the current stream could be missed. With the data spreading over the
> > > many requests we also avoid the missed requests when the DMA was to
> > > slow.
> > >
> >
> > This sounds good.
> >
> > As long as we can maintain more than X number of requests enqueued to
> > accomodate for the worst latency, then we can avoid underrun. The driver
> > should monitor how many requests are enqueued and hopefully can keep up
> > the queue with zero-length requests.
>
> Except I totally misunderstood something or did oversimplify to much,
> the above explanation should run this unnecessary.
>
> Although we are able to track the amount of enqueued requests in the udc
> driver and compare that with the amount of completed requests.
>
> Also we have the usb_gadget_frame_number callback to the udc controller
> to ask in which frame it is operating at the moment. This way we would
> be able to calculate not to enqueue requests into a transfer that did
> not come back yet completely but would run into missed transfers.
>

I would not depend too much on usb_gadget_frame_number(). There's not
really a hard requirement for the output; it's controller specific. For
the dwc3 controller, if it operates at high speed or higher, it returns
the microframe number. For full speed, it returns the frame number.

As you noted, if you can wait and queue all the requests of a video
frame at once, then this will also work.

Thanks,
Thinh

2024-05-22 11:16:26

by Michael Grzeschik

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Tue, Apr 23, 2024 at 04:23:23PM -0700, Avichal Rakesh wrote:
>
>
>On 4/23/24 07:25, Michael Grzeschik wrote:
>> Ccing:
>>
>> Michael Riesch <[email protected]>
>> Thinh Nguyen <[email protected]>
>>
>> On Mon, Apr 22, 2024 at 05:21:09PM -0700, Avichal Rakesh wrote:
>>> On 4/21/24 16:25, Michael Grzeschik wrote:
>>>> On Tue, Apr 09, 2024 at 11:24:56PM +0200, Michael Grzeschik wrote:
>>>>> This patch series is improving the size calculation and allocation
>>>>> of the uvc requests. Using the currenlty setup frame duration of the
>>>>> stream it is possible to calculate the number of requests based on the
>>>>> interval length.
>>>>
>>>> The basic concept here is right. But unfortunatly we found out that
>>>> together with Patch [1] and the current zero length request pump
>>>> mechanism [2] and [3] this is not working as expected.
>>>>
>>>> The conclusion that we can not queue more than one frame at once into
>>>> the hw led to [1]. The current implementation of zero length reqeusts
>>>> which will be queued while we are waiting for the frame to finish
>>>> transferring will enlarge the frame duration. Since every zero-length
>>>> request is still taking up at least one frame interval of 125 us.
>>>
>>> I haven't taken a super close look at your patches, so please feel free
>>> to correct me if I am misunderstanding something.
>>>
>>> It looks like the goal of the patches is to determine a better number
>>> and size of usb_requests from the given framerate such that we send exactly
>>> nreqs requests per frame where nreqs is determined to be the exact number
>>> of requests that can be sent in one frame interval?
>>
>> It does not need to be the exact time, actually it may not be exact.
>> Scattering the data over all requests would not leave any headroom for
>> any latencies or overhead.
>
>IIUC, patch 3/3 sets the number of requests to frameinterval / 125 us,
>which gives us the number of requests we can send in exactly one frame interval,
>and then sets the size of the request as max framesize / nreq, which means the
>frames will be evenly divided up into all available requests (with a little
>fuzz factor here and there).
>
>This effectively means that (assuming no other delays) one frame will take
>~one frameinterval to be transmitted?

In theory, yes. But we have to add some headroom delay into the
calculation. The worst case we have to tackle here is to leave the
encoding thread enough time to encode one full frame.
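
For illustration, a hedged sketch of that calculation, assuming the
interval is stored in 100 ns units (as in the g_parm/s_parm patch) and
using a made-up headroom parameter; the names and the exact fudge factor
are not part of the series:

#include <linux/kernel.h>
#include <linux/minmax.h>

/* 125 us expressed in the 100 ns units used for the frame interval. */
#define SKETCH_UNITS_PER_UFRAME	1250

/*
 * Hedged sketch of the request precalculation with headroom. The
 * "headroom_ufr" parameter (number of 125 us slots left unused so the
 * encoding thread can keep up) is a made-up knob, not part of the series.
 */
static void sketch_calc_requests(unsigned int interval, unsigned int imagesize,
				 unsigned int headroom_ufr,
				 unsigned int *nreqs, unsigned int *req_size)
{
	unsigned int slots = interval / SKETCH_UNITS_PER_UFRAME;

	/* Do not plan to use every slot of the frame interval. */
	if (slots > headroom_ufr + 2)
		slots -= headroom_ufr;

	*nreqs = max(slots, 2u);
	*req_size = DIV_ROUND_UP(imagesize, *nreqs);
}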

>>> As the logic stands, we need some 0-length requests to be circulating to
>>> ensure that we don't miss ISOC deadlines. The current logic unconditionally
>>> sends half of all allocated requests to be circulated.
>>>
>>> With those two things in mind, this means than video_pump can at encode
>>> at most half a frame in one go, and then has to wait for complete
>>> callbacks to come in. In such cases, the theoretical worst case for
>>> encode time is
>>> 125us * (number of requests needed per frame / 2) + scheduling delays
>>> as after the first half of the frame has been encoded, the video_pump
>>> thread will have to wait 125us for each of the zero length requests to
>>> be returned.
>>>
>>> The underlying assumption behind the "queue 0-length requests" approach
>>> was that video_pump encodes the frames in as few requests as possible
>>> and that there are spare requests to maintain a pressure on the
>>> ISOC queue without hindering the video_pump thread, and unfortunately
>>> it seems like patch 3/3 is breaking both of them?
>>
>> Right.
>>
>>> Assuming my understanding of your patches is correct, my question
>>> is: Why do we want to spread the frame uniformly over the requests
>>> instead of encoding it in as few requests as possible. Spreading
>>> the frame over more requests artificially increases the encode time
>>> required by video_pump, and AFAICT there is no real benefit to it?
>>
>> Thinh gave me the advise that it is better to use the isoc stream
>> constantly filled. Rather then streaming big amounts of data in the
>> beginning of an frameinterval and having then a lot of spare time
>> where the bandwidth is completely unsused.
>>
>> In our reallife scenario streaming big requests had the impact, that
>> the dwc3 core could not keep up with reading the amount of data
>> from the memory bus, as the bus is already under heavy load. When the
>> HW was then not able to transfer the requested and actually available
>> amount of data in the interval, the hw did give us the usual missed
>> interrupt answer.
>>
>> Using smaller requests solved the problem here, as it really was
>> unnecessary to stress the memory and usb bus in the beginning as
>> we had enough headroom in the temporal domain.
>
>Ah, I see. This was not a consideration, and it makes sense if USB
>bus is under contention from a few different streams. So the solution
>seems to be to spread the frame of as many requests as we can transmit
>in one frameinterval?

Right.

>As an experiment, while we wait for others to respond, could you try
>doubling (or 2.5x'ing to be extra safe) the number of requests allocated
>by patch 3/3 without changing the request's buffer size?
>
>It won't help with the error reporting but should help with ensuring
>that frames are sent out in one frameinterval with little to no
>0-length requests between them.

This is okay, but will not help since we have to wait for the frame to
finish.

>The idea is that video_pump will have enough requests available to fully
>encode the frame in one burst, and another frame's worth of request will be
>re-added to req_free list for video_pump to fill up in the time that the next
>frame comes in.

Right, but because of the zero-length requests between the frame-finishing
requests we unnecessarily blow up the frame duration, although there are
not many zero-length requests.

>> Which then led to the conclusion that the number of needed requests
>> per image frame interval is calculatable since we know the usb
>> interval length.
>>
>> @Thinh: Correct me if I am saying something wrong here.
>>
>>>> Therefor to properly make those patches work, we will have to get rid of
>>>> the zero length pump mechanism again and make sure that the whole
>>>> business logic of what to be queued and when will only be done in the
>>>> pump worker. It is possible to let the dwc3 udc run dry, as we are
>>>> actively waiting for the frame to finish, the last request in the
>>>> prepared and started list will stop the current dwc3 stream and? for
>>>> no underruns will occur with the next ep_queue.
>>>
>>> One thing to note here: The reason we moved to queuing 0-length requests
>>> from complete callback was because even with realtime priority, video_pump
>>> thread doesn't always meet the ISOC queueing cadence. I think stopping and
>>> starting the stream was briefly discussed in our initial discussion in
>>> https://lore.kernel.org/all/[email protected]/
>>> and Thinh mentioned that dwc3 controller does it if it detects an underrun,
>>> but I am not sure if starting and stopping an ISOC stream is good practice.
>>
>> The realtime latency aspect is not an issue anymore if we ensure that we
>> always keep only one frame in the hw ring buffer. When the pump worker
>> ensure that it will always run through one full frame the scheduler has
>> no chance to break our running dwc3 stream. Since the pump is running
>> under a while(1) this should be possible.
>
>I'll wait for your patch to see, but are you saying that we should have the
>pump worker busy spinning when there are no frames available? Cameras cannot
>produce frames fast enough for video_pump to be constantly encoding frames.
>IIRC, "encoding" a frame to usb_requests took less than a ms or two on my
>device, and frame interval is 33ms for a 30fps stream, so the CPU would be
>busy spinning for ~30ms which is an unreasonable time for a CPU to be
>idling.

I hope the whole idea became much clearer to you while I was discussing
the topic and the roadmap with Thinh over the last few days.

>> Also with the request amount precalculation we can always encode the
>> whole frame into all available requests and don't have to wait for
>> requests to be available again.
>>
>> Together with the latest knowladge about the underlying hw we even need to only
>> keep one frame in the HW ring buffer. Since we have some interrupt latency,
>> keeping more frames in the ring buffer, would mean that we are not able to tag
>> the currently streamed frame properly as errornous if the dwc3 hw ring buffer
>> is already telling the host some data about the next frame. And as we already
>> need to wait for the end of the frame to finish, based on the assumption that
>> only one frame is enqueued in the ring buffer the hw will stop the stream and
>> the next requst will start a new stream. So there will no missed underruns be
>> happening.
>>
>> So the main fact here is, that telling the host some status about a
>> frame in the past is impossible! Therefor the first request of the next
>> hw stream need to be the one that is telling the Host if the previous frame
>> is ment to be drawn or not.
>
>This is a fair point, but the timing on this becomes a little difficult if
>the frame is sent over the entire frameinterval. If we wait for the entire
>frame to be transmitted, then we have 125us between the last request of a
>frame being transmitted and the first request of the next frame being
>queued. The userspace app producing the frames will have timing variations
>larger than 125us, so we cannot rely on a frame being available exactly as
>one frame is fully transmitted, or of us being notified of transmission
>status by the time the next frame comes in.
>
>>
>>> Someone better versed in USB protocol can probably confirm, but it seems
>>> somewhat hacky to stop the ISOC stream at the end of the frame and restart
>>> with the next frame.
>>
>> All I know is that the HW mechanism that is reading from the trb ring buffer is
>> started or stopped I don't know if really the ISOC stream is stopped and
>> restarted here or what that means on the real wire. And if so, I am unsure if
>> that is really a problem or not. Thinh?
>
>Oh? That's great! If the controller can keep the ISOC stream from underruning
>without the gadget feeding it 0-length requests, then we can simplify the
>gadget side implementation quite a bit!

Exactly that is the case.

Can you follow up on my discussion with Thinh and add your thoughts
on the whole idea?

Regards,
Michael

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-05-22 14:50:45

by Alan Stern

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
> On Wed, May 22, 2024, Michael Grzeschik wrote:
> > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
> > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
> > > no TRB is prepared, then the controller will automatically send 0-length
> > > packet respond.
> >
> > Perfect! This will help a lot and will make active queueing of own
> > zero-length requests run unnecessary.
>
> Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
> then this will work.

You shouldn't rely on this behavior. Other device controllers might not
behave this way; they might send no packet at all to the host (causing a
USB protocol error) instead of sending a zero-length packet.

On the other hand, it may not make any difference. The host's UVC
driver most likely won't care about the difference between no packet and
a 0-length packet. :-)

Alan Stern

2024-05-22 17:18:07

by Thinh Nguyen

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Wed, May 22, 2024, Alan Stern wrote:
> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
> > On Wed, May 22, 2024, Michael Grzeschik wrote:
> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
> > > > no TRB is prepared, then the controller will automatically send 0-length
> > > > packet respond.
> > >
> > > Perfect! This will help a lot and will make active queueing of own
> > > zero-length requests run unnecessary.
> >
> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
> > then this will work.
>
> You shouldn't rely on this behavior. Other device controllers might not
> behave this way; they might send no packet at all to the host (causing a
> USB protocol error) instead of sending a zero-length packet.

I agree. The dwc3 driver has this workaround to somewhat work with the
UVC. This behavior is not something the controller expected, and this
workaround should not become common behavior for other function
drivers/protocols. Since this behavior was added a long time ago, it will
remain the default behavior in dwc3 to avoid regression with UVC (at
least until the UVC is changed). However, it would be nice for UVC to
not rely on this.

Side note: when the dwc3 driver reschedules/starts the isoc transfer
again, the first transfer will be scheduled to go out at some future
interval and not the next immediate microframe. For UVC, it probably
won't be a problem since it doesn't seem to need data going out every
interval.

BR,
Thinh

>
> On the other hand, it may not make any difference. The host's UVC
> driver most likely won't care about the difference between no packet and
> a 0-length packet. :-)
>
> Alan Stern

2024-05-22 17:38:32

by Michael Grzeschik

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>On Wed, May 22, 2024, Alan Stern wrote:
>> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
>> > On Wed, May 22, 2024, Michael Grzeschik wrote:
>> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
>> > > > no TRB is prepared, then the controller will automatically send 0-length
>> > > > packet respond.
>> > >
>> > > Perfect! This will help a lot and will make active queueing of own
>> > > zero-length requests run unnecessary.
>> >
>> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
>> > then this will work.
>>
>> You shouldn't rely on this behavior. Other device controllers might not
>> behave this way; they might send no packet at all to the host (causing a
>> USB protocol error) instead of sending a zero-length packet.
>
>I agree. The dwc3 driver has this workaround to somewhat work with the
>UVC. This behavior is not something the controller expected, and this
>workaround should not be a common behavior for different function
>driver/protocol. Since this behavior was added a long time ago, it will
>remain the default behavior in dwc3 to avoid regression with UVC (at
>least until the UVC is changed). However, it would be nice for UVC to
>not rely on this.

With "this" you mean exactly the following commit, right?

(f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)

When we start questioning this, then let's dig deeper here.

With the fast data rate of at least USB SuperSpeed, shouldn't they all
work completely asynchronously with their in-flight TRBs?

In my understanding this confirms that, at least with SuperSpeed, we are
unlikely to react fast enough to maintain a steady isoc dataflow, since
the driver above has to react to errors in the processing context.

This makes the above patch (f5e46aa4) a gadget-independent solution
which has nothing to do with uvc in particular, IMHO.

How do other controllers and their drivers work?

>Side note, when the dwc3 driver reschedules/starts isoc transfer again,
>the first transfer will be scheduled go out at some future interval and
>not the next immediate microframe. For UVC, it probably won't be a
>problem since it doesn't seem to need data going out every interval.

It should not make a difference. [TM]

>>
>> On the other hand, it may not make any difference. The host's UVC
>> driver most likely won't care about the difference between no packet and
>> a 0-length packet. :-)
>>
>> Alan Stern

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-05-22 18:24:32

by Thinh Nguyen

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Wed, May 22, 2024, Michael Grzeschik wrote:
> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
> > On Wed, May 22, 2024, Alan Stern wrote:
> > > On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
> > > > On Wed, May 22, 2024, Michael Grzeschik wrote:
> > > > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
> > > > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
> > > > > > no TRB is prepared, then the controller will automatically send 0-length
> > > > > > packet respond.
> > > > >
> > > > > Perfect! This will help a lot and will make active queueing of own
> > > > > zero-length requests run unnecessary.
> > > >
> > > > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
> > > > then this will work.
> > >
> > > You shouldn't rely on this behavior. Other device controllers might not
> > > behave this way; they might send no packet at all to the host (causing a
> > > USB protocol error) instead of sending a zero-length packet.
> >
> > I agree. The dwc3 driver has this workaround to somewhat work with the
> > UVC. This behavior is not something the controller expected, and this
> > workaround should not be a common behavior for different function
> > driver/protocol. Since this behavior was added a long time ago, it will
> > remain the default behavior in dwc3 to avoid regression with UVC (at
> > least until the UVC is changed). However, it would be nice for UVC to
> > not rely on this.
>
> With "this" you mean exactly the following commit, right?
>
> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)

I believe that was the case. However, there were changes prior to that
which restart the isoc transfer on a missed isoc, and that shouldn't be
the default behavior.

>
> When we start questioning this, then lets dig deeper here.
>
> With the fast datarate of at least usb superspeed shouldn't they not all
> completely work asynchronous with their in flight trbs?

I'm not sure what you mean by asynchronous here.

>
> In my understanding this validates that, with at least superspeed we are
> unlikely to react fast enough to maintain a steady isoc dataflow, since
> the driver above has to react to errors in the processing context.

The point is that we don't stop the isoc transfers unless there's a
change to the interface. The dwc3 driver should not need to do anything
else except report the missed request to the function driver. The
application/function driver should keep up with the continuous data
based on the established periodic service interval. This is not bulk
transfer.

>
> This runs the above patch (f5e46aa4) a gadget independent solution
> which has nothing to do with uvc in particular IMHO.

If that's not the case, I stand corrected since I thought you sent
that patch in particular for UVC. Regardless, my other points still
stand.

>
> How do other controllers and their drivers work?

If we stop and start the transfer because the function driver can't keep
up, then the data will go out in the wrong interval. The UVC host seems
to work based on the video frame rather than at the USB microframe
interval that it uses. It may not be the case for other protocols. If an
isoc endpoint is used, then we expect timeliness to matter.

BR,
Thinh

>
> > Side note, when the dwc3 driver reschedules/starts isoc transfer again,
> > the first transfer will be scheduled go out at some future interval and
> > not the next immediate microframe. For UVC, it probably won't be a
> > problem since it doesn't seem to need data going out every interval.
>
> It should not make a difference. [TM]
>
> > >
> > > On the other hand, it may not make any difference. The host's UVC
> > > driver most likely won't care about the difference between no packet and
> > > a 0-length packet. :-)
> > >
> > > Alan Stern
>
> --
> Pengutronix e.K. | |
> Steuerwalder Str. 21 | http://www.pengutronix.de/ |
> 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
> Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |

2024-05-22 18:58:35

by Alan Stern

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Wed, May 22, 2024 at 07:37:59PM +0200, Michael Grzeschik wrote:
> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
> > I agree. The dwc3 driver has this workaround to somewhat work with the
> > UVC. This behavior is not something the controller expected, and this
> > workaround should not be a common behavior for different function
> > driver/protocol. Since this behavior was added a long time ago, it will
> > remain the default behavior in dwc3 to avoid regression with UVC (at
> > least until the UVC is changed). However, it would be nice for UVC to
> > not rely on this.
>
> With "this" you mean exactly the following commit, right?
>
> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)
>
> When we start questioning this, then lets dig deeper here.
>
> With the fast datarate of at least usb superspeed shouldn't they not all
> completely work asynchronous with their in flight trbs?
>
> In my understanding this validates that, with at least superspeed we are
> unlikely to react fast enough to maintain a steady isoc dataflow, since
> the driver above has to react to errors in the processing context.
>
> This runs the above patch (f5e46aa4) a gadget independent solution
> which has nothing to do with uvc in particular IMHO.
>
> How do other controllers and their drivers work?

You should think of isochronous transfer requests as a pipeline flowing
from the function driver to the UDC driver. As long as the pipeline
never becomes empty, transfers will occur with the desired timing: one
packet (ignoring high-bandwidth issues) per isochronous interval.

But if the pipeline does become empty, because the function driver
doesn't submit requests to the UDC driver quickly enough, the behavior
is undetermined. Obviously no data can be sent before the next request
is submitted. And even if the next request is submitted before the next
time interval expires, the UDC driver might delay transferring the
request's data until a later time interval. Or it might not. In short,
when the function driver does submit its next request, there's no way to
know in which interval its data will get sent to the host. Furthermore,
there's no way in the gadget framework for the function driver to ask
that the request be sent in a particular transfer window.

This means that function drivers should do their best to make sure the
pipeline never becomes empty, that there always is at least one request
in progress at all times. Even if this requires submitting zero-length
requests because the necessary data isn't available yet.
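
To illustrate this pattern, here is a minimal sketch of such a
completion handler (the function name and the my_fill_payload() helper
are hypothetical, not the actual uvc_video code):

#include <linux/printk.h>
#include <linux/usb/gadget.h>

/* Hypothetical helper, assumed to copy pending video data into req->buf
 * and return the number of bytes copied, or 0 when nothing is ready. */
static unsigned int my_fill_payload(struct usb_request *req);

/* Requeue the request right away so the isoc pipeline never runs empty,
 * falling back to a zero-length packet when no payload is available. */
static void my_isoc_complete(struct usb_ep *ep, struct usb_request *req)
{
        req->length = my_fill_payload(req);
        req->zero = 0;

        if (usb_ep_queue(ep, req, GFP_ATOMIC))
                pr_err("isoc requeue failed, the pipeline will underrun\n");
}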

Remember, isochronous transfers are meant only to be a best-effort
attempt at low-latency data delivery, without guarantees (other than a
reserved amount of bandwidth). If a packet gets lost or dropped from
time to time, whether because of transmission errors or because the data
source was unable to keep up, it shouldn't matter very much in the end.
The receiver (i.e., the host) should be able to recover, resynchronize,
and move on.

Alan Stern

2024-05-28 17:30:45

by Avichal Rakesh

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize



On 5/22/24 10:37, Michael Grzeschik wrote:
> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>> On Wed, May 22, 2024, Alan Stern wrote:
>>> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
>>> > On Wed, May 22, 2024, Michael Grzeschik wrote:
>>> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>>> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
>>> > > > no TRB is prepared, then the controller will automatically send 0-length
>>> > > > packet respond.
>>> > >
>>> > > Perfect! This will help a lot and will make active queueing of own
>>> > > zero-length requests run unnecessary.
>>> >
>>> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
>>> > then this will work.
>>>
>>> You shouldn't rely on this behavior.  Other device controllers might not
>>> behave this way; they might send no packet at all to the host (causing a
>>> USB protocol error) instead of sending a zero-length packet.
>>
>> I agree. The dwc3 driver has this workaround to somewhat work with the
>> UVC. This behavior is not something the controller expected, and this
>> workaround should not be a common behavior for different function
>> driver/protocol. Since this behavior was added a long time ago, it will
>> remain the default behavior in dwc3 to avoid regression with UVC (at
>> least until the UVC is changed). However, it would be nice for UVC to
>> not rely on this.
>
> With "this" you mean exactly the following commit, right?
>
> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)
>
> When we start questioning this, then lets dig deeper here.
>
> With the fast datarate of at least usb superspeed shouldn't they not all
> completely work asynchronous with their in flight trbs?
>
> In my understanding this validates that, with at least superspeed we are
> unlikely to react fast enough to maintain a steady isoc dataflow, since
> the driver above has to react to errors in the processing context.
>
> This runs the above patch (f5e46aa4) a gadget independent solution
> which has nothing to do with uvc in particular IMHO.
>
> How do other controllers and their drivers work?
>
>> Side note, when the dwc3 driver reschedules/starts isoc transfer again,
>> the first transfer will be scheduled go out at some future interval and
>> not the next immediate microframe. For UVC, it probably won't be a
>> problem since it doesn't seem to need data going out every interval.
>
> It should not make a difference. [TM]
>


Sorry for being absent for a lot of this discussion.

I want to take a step back from the details of how we're
solving the problem to what problems we're trying to solve.

So, question(s) for Michael, because I don't see an explicit
answer in this thread (and my sincerest apologies if they've
been answered already and I missed it):

What exactly is the bug (or bugs) we're trying to solve here?

So far, it looks like there are two main problems being
discussed:

1. Reducing the bandwidth usage of individual usb_requests
2. Error handling for when transmission over the wire fails.

Is that correct, or are there other issues at play here?

(1) in isolation should be relatively easy to solve: Just
smear the encoded frame across some percentage
(preferably < 100%) of the usb_requests in one
video frame interval.

(2) is more complicated, and your suggestion is to have a
sentinel request between two video frames that tells the
host if the transmission of the "current" video frame was
successful or not. (Is that correct?)

Assuming my understanding of (2) is correct, how should
the host behave if the transmission of the sentinel
request fails after the video frame was sent
successfully (isoc is best effort so transmission is
never guaranteed)?


Best,
Avi.

2024-05-28 20:41:33

by Michael Grzeschik

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Tue, May 28, 2024 at 10:30:30AM -0700, Avichal Rakesh wrote:
>
>
>On 5/22/24 10:37, Michael Grzeschik wrote:
>> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>>> On Wed, May 22, 2024, Alan Stern wrote:
>>>> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
>>>> > On Wed, May 22, 2024, Michael Grzeschik wrote:
>>>> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>>>> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
>>>> > > > no TRB is prepared, then the controller will automatically send 0-length
>>>> > > > packet respond.
>>>> > >
>>>> > > Perfect! This will help a lot and will make active queueing of own
>>>> > > zero-length requests run unnecessary.
>>>> >
>>>> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
>>>> > then this will work.
>>>>
>>>> You shouldn't rely on this behavior. Other device controllers might not
>>>> behave this way; they might send no packet at all to the host (causing a
>>>> USB protocol error) instead of sending a zero-length packet.
>>>
>>> I agree. The dwc3 driver has this workaround to somewhat work with the
>>> UVC. This behavior is not something the controller expected, and this
>>> workaround should not be a common behavior for different function
>>> driver/protocol. Since this behavior was added a long time ago, it will
>>> remain the default behavior in dwc3 to avoid regression with UVC (at
>>> least until the UVC is changed). However, it would be nice for UVC to
>>> not rely on this.
>>
>> With "this" you mean exactly the following commit, right?
>>
>> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)
>>
>> When we start questioning this, then lets dig deeper here.
>>
>> With the fast datarate of at least usb superspeed shouldn't they not all
>> completely work asynchronous with their in flight trbs?
>>
>> In my understanding this validates that, with at least superspeed we are
>> unlikely to react fast enough to maintain a steady isoc dataflow, since
>> the driver above has to react to errors in the processing context.
>>
>> This runs the above patch (f5e46aa4) a gadget independent solution
>> which has nothing to do with uvc in particular IMHO.
>>
>> How do other controllers and their drivers work?
>>
>>> Side note, when the dwc3 driver reschedules/starts isoc transfer again,
>>> the first transfer will be scheduled go out at some future interval and
>>> not the next immediate microframe. For UVC, it probably won't be a
>>> problem since it doesn't seem to need data going out every interval.
>>
>> It should not make a difference. [TM]
>>
>
>
>Sorry for being absent for a lot of this discussion.
>
>I want to take a step back from the details of how we're
>solving the problem to what problems we're trying to solve.
>
>So, question(s) for Michael, because I don't see an explicit
>answer in this thread (and my sincerest apologies if they've
>been answered already and I missed it):
>
>What exactly is the bug (or bugs) we're trying to solve here?
>
>So far, it looks like there are two main problems being
>discussed:
>
>1. Reducing the bandwidth usage of individual usb_requests
>2. Error handling for when transmission over the wire fails.
>
>Is that correct, or are there other issues at play here?

That is correct.

>(1) in isolation should be relatively easy to solve: Just
>smear the encoded frame across some percentage
>(prefereably < 100%) of the usb_requests in one
>video frame interval.

Right.
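
For a rough feel of the numbers, here is a sketch of the request-size
arithmetic (the 125 us service interval, the 80% headroom factor and the
helper name are assumptions, not taken from this series; video->interval
is in 100 ns units as set up by s_parm):

#include <linux/kernel.h>
#include "uvc.h"

/* Sketch: how many isoc intervals one video frame spans and how large
 * each request then has to be so that the whole frame fits into ~80% of
 * them, leaving headroom for the sentinel and zero-length requests. */
static unsigned int my_calc_req_size(struct uvc_video *video)
{
        unsigned int slots  = (video->interval * 100) / 125000;
        unsigned int usable = max(1U, slots * 80 / 100);

        /* e.g. 30 fps: interval = 333333 -> 266 slots -> 212 usable,
         * so a 500 KiB MJPEG frame needs roughly 2.4 KiB per request. */
        return DIV_ROUND_UP(video->imagesize, usable);
}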

>(2) is more complicated, and your suggestion is to have a
>sentinel request between two video frames that tells the
>host if the transmission of "current" video frame was
>successful or not. (Is that correct?)

Right.

>Assuming my understanding of (2) is correct, how should
>the host behave if the transmission of the sentinel
>request fails after the video frame was sent
>successfully (isoc is best effort so transmission is
>never guaranteed)?

If we have transmitted all requests of the frame and only the sentinel
request that carries the EOF is missed by the host, the host could
either show the frame or drop it. The drop would not be necessary since
the host did see all data of the frame. The user would not see any
distortion in either case.

If we have transmitted only partial data of the frame it would be
preferred if the host would drop the frame that did not finish with a
proper EOF tag.

AFAIK the Linux host driver would still show the frame when the FID of
the currently handled request changes, taking this as the end-of-frame
indication. But I am not totally sure if this is mandated by the spec or
optional.

One option to be totally sure would be to resend the sentinel request
until it is properly transmitted before starting the next frame. This
resend polling would probably include some extra zero-length requests.
But if this resend keeps failing n times, the driver should doubt that
anything sane is going on with the USB connection and bail out somehow.

Since we try to tackle case (1) to avoid transmit errors and also avoid
creating late enqueued requests in the running isoc transfer, the
overall chance of triggering missed transfers should be minimal.

Regards,
Michael

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-05-28 21:28:13

by Avichal Rakesh

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize



On 5/28/24 13:22, Michael Grzeschik wrote:
> On Tue, May 28, 2024 at 10:30:30AM -0700, Avichal Rakesh wrote:
>>
>>
>> On 5/22/24 10:37, Michael Grzeschik wrote:
>>> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>>>> On Wed, May 22, 2024, Alan Stern wrote:
>>>>> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
>>>>> > On Wed, May 22, 2024, Michael Grzeschik wrote:
>>>>> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>>>>> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
>>>>> > > > no TRB is prepared, then the controller will automatically send 0-length
>>>>> > > > packet respond.
>>>>> > >
>>>>> > > Perfect! This will help a lot and will make active queueing of own
>>>>> > > zero-length requests run unnecessary.
>>>>> >
>>>>> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
>>>>> > then this will work.
>>>>>
>>>>> You shouldn't rely on this behavior.  Other device controllers might not
>>>>> behave this way; they might send no packet at all to the host (causing a
>>>>> USB protocol error) instead of sending a zero-length packet.
>>>>
>>>> I agree. The dwc3 driver has this workaround to somewhat work with the
>>>> UVC. This behavior is not something the controller expected, and this
>>>> workaround should not be a common behavior for different function
>>>> driver/protocol. Since this behavior was added a long time ago, it will
>>>> remain the default behavior in dwc3 to avoid regression with UVC (at
>>>> least until the UVC is changed). However, it would be nice for UVC to
>>>> not rely on this.
>>>
>>> With "this" you mean exactly the following commit, right?
>>>
>>> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)
>>>
>>> When we start questioning this, then lets dig deeper here.
>>>
>>> With the fast datarate of at least usb superspeed shouldn't they not all
>>> completely work asynchronous with their in flight trbs?
>>>
>>> In my understanding this validates that, with at least superspeed we are
>>> unlikely to react fast enough to maintain a steady isoc dataflow, since
>>> the driver above has to react to errors in the processing context.
>>>
>>> This runs the above patch (f5e46aa4) a gadget independent solution
>>> which has nothing to do with uvc in particular IMHO.
>>>
>>> How do other controllers and their drivers work?
>>>
>>>> Side note, when the dwc3 driver reschedules/starts isoc transfer again,
>>>> the first transfer will be scheduled go out at some future interval and
>>>> not the next immediate microframe. For UVC, it probably won't be a
>>>> problem since it doesn't seem to need data going out every interval.
>>>
>>> It should not make a difference. [TM]
>>>
>>
>>
>> Sorry for being absent for a lot of this discussion.
>>
>> I want to take a step back from the details of how we're
>> solving the problem to what problems we're trying to solve.
>>
>> So, question(s) for Michael, because I don't see an explicit
>> answer in this thread (and my sincerest apologies if they've
>> been answered already and I missed it):
>>
>> What exactly is the bug (or bugs) we're trying to solve here?
>>
>> So far, it looks like there are two main problems being
>> discussed:
>>
>> 1. Reducing the bandwidth usage of individual usb_requests
>> 2. Error handling for when transmission over the wire fails.
>>
>> Is that correct, or are there other issues at play here?
>
> That is correct.
>
>> (1) in isolation should be relatively easy to solve: Just
>> smear the encoded frame across some percentage
>> (prefereably < 100%) of the usb_requests in one
>> video frame interval.
>
> Right.
>
>> (2) is more complicated, and your suggestion is to have a
>> sentinel request between two video frames that tells the
>> host if the transmission of "current" video frame was
>> successful or not. (Is that correct?)
>
> Right.
>
>> Assuming my understanding of (2) is correct, how should
>> the host behave if the transmission of the sentinel
>> request fails after the video frame was sent
>> successfully (isoc is best effort so transmission is
>> never guaranteed)?
>
> If we have transmitted all requests of the frame but would only miss the
> sentinel request to the host that includes the EOF, the host could
> rather show it or drop it. The drop would not be necessary since the
> host did see all data of the frame. The user would not see any
> distortion in both cases.
>
> If we have transmitted only partial data of the frame it would be
> preferred if the host would drop the frame that did not finish with an
> proper EOF tag.
>
> AFAIK the linux kernel would still show the frame if the FID of the
> currently handled request would change and would take this as the end of
> frame indication. But I am not totally sure if this is by spec or
> optional.
>
> One option to be totally sure would be to resend the sentinel request to
> be properly transmitted before starting the next frame. This resend
> polling would probably include some extra zero-length requests. But also
> if this resend keeps failing for n times, the driver should doubt there
> is anything sane going on with the USB connection and bail out somehow.
>
> Since we try to tackle case (1) to avoid transmit errors and also avoid
> creating late enqueued requests in the running isoc transfer, the over
> all chance to trigger missed transfers should be minimal.

Gotcha. It seems like the UVC gadget driver implicitly assumes that the
EOF flag will be used, although the userspace application can
technically make it optional.

Summarizing some of the discussions above:
1. The UVC gadget driver should _not_ rely on the usb controller to
enqueue 0-length requests on the UVC gadget driver's behalf;
2. However, keeping up the backpressure to the controller means the
EOF request will be delayed behind all the zero-length requests.

Out of curiosity: What is wrong with letting the host rely on
FID alone? Decoding the jpeg payload _should_ fail if any of the
usb_requests containing the payload failed to transmit.

Was there a situation where usb_requests were failing but jpeg
decoding succeeded (i.e. the host was unaware of failure)? Looking at
the error handling on the gadget driver side, I see there is space to
improve it, but maybe we can do it in a way that doesn't add more time
constraints!

- Avii.



2024-05-28 22:44:07

by Michael Grzeschik

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Tue, May 28, 2024 at 02:27:34PM -0700, Avichal Rakesh wrote:
>
>
>On 5/28/24 13:22, Michael Grzeschik wrote:
>> On Tue, May 28, 2024 at 10:30:30AM -0700, Avichal Rakesh wrote:
>>>
>>>
>>> On 5/22/24 10:37, Michael Grzeschik wrote:
>>>> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>>>>> On Wed, May 22, 2024, Alan Stern wrote:
>>>>>> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
>>>>>> > On Wed, May 22, 2024, Michael Grzeschik wrote:
>>>>>> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>>>>>> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
>>>>>> > > > no TRB is prepared, then the controller will automatically send 0-length
>>>>>> > > > packet respond.
>>>>>> > >
>>>>>> > > Perfect! This will help a lot and will make active queueing of own
>>>>>> > > zero-length requests run unnecessary.
>>>>>> >
>>>>>> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
>>>>>> > then this will work.
>>>>>>
>>>>>> You shouldn't rely on this behavior. Other device controllers might not
>>>>>> behave this way; they might send no packet at all to the host (causing a
>>>>>> USB protocol error) instead of sending a zero-length packet.
>>>>>
>>>>> I agree. The dwc3 driver has this workaround to somewhat work with the
>>>>> UVC. This behavior is not something the controller expected, and this
>>>>> workaround should not be a common behavior for different function
>>>>> driver/protocol. Since this behavior was added a long time ago, it will
>>>>> remain the default behavior in dwc3 to avoid regression with UVC (at
>>>>> least until the UVC is changed). However, it would be nice for UVC to
>>>>> not rely on this.
>>>>
>>>> With "this" you mean exactly the following commit, right?
>>>>
>>>> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)
>>>>
>>>> When we start questioning this, then lets dig deeper here.
>>>>
>>>> With the fast datarate of at least usb superspeed shouldn't they not all
>>>> completely work asynchronous with their in flight trbs?
>>>>
>>>> In my understanding this validates that, with at least superspeed we are
>>>> unlikely to react fast enough to maintain a steady isoc dataflow, since
>>>> the driver above has to react to errors in the processing context.
>>>>
>>>> This runs the above patch (f5e46aa4) a gadget independent solution
>>>> which has nothing to do with uvc in particular IMHO.
>>>>
>>>> How do other controllers and their drivers work?
>>>>
>>>>> Side note, when the dwc3 driver reschedules/starts isoc transfer again,
>>>>> the first transfer will be scheduled go out at some future interval and
>>>>> not the next immediate microframe. For UVC, it probably won't be a
>>>>> problem since it doesn't seem to need data going out every interval.
>>>>
>>>> It should not make a difference. [TM]
>>>>
>>>
>>>
>>> Sorry for being absent for a lot of this discussion.
>>>
>>> I want to take a step back from the details of how we're
>>> solving the problem to what problems we're trying to solve.
>>>
>>> So, question(s) for Michael, because I don't see an explicit
>>> answer in this thread (and my sincerest apologies if they've
>>> been answered already and I missed it):
>>>
>>> What exactly is the bug (or bugs) we're trying to solve here?
>>>
>>> So far, it looks like there are two main problems being
>>> discussed:
>>>
>>> 1. Reducing the bandwidth usage of individual usb_requests
>>> 2. Error handling for when transmission over the wire fails.
>>>
>>> Is that correct, or are there other issues at play here?
>>
>> That is correct.
>>
>>> (1) in isolation should be relatively easy to solve: Just
>>> smear the encoded frame across some percentage
>>> (prefereably < 100%) of the usb_requests in one
>>> video frame interval.
>>
>> Right.
>>
>>> (2) is more complicated, and your suggestion is to have a
>>> sentinel request between two video frames that tells the
>>> host if the transmission of "current" video frame was
>>> successful or not. (Is that correct?)
>>
>> Right.
>>
>>> Assuming my understanding of (2) is correct, how should
>>> the host behave if the transmission of the sentinel
>>> request fails after the video frame was sent
>>> successfully (isoc is best effort so transmission is
>>> never guaranteed)?
>>
>> If we have transmitted all requests of the frame but would only miss the
>> sentinel request to the host that includes the EOF, the host could
>> rather show it or drop it. The drop would not be necessary since the
>> host did see all data of the frame. The user would not see any
>> distortion in both cases.
>>
>> If we have transmitted only partial data of the frame it would be
>> preferred if the host would drop the frame that did not finish with an
>> proper EOF tag.
>>
>> AFAIK the linux kernel would still show the frame if the FID of the
>> currently handled request would change and would take this as the end of
>> frame indication. But I am not totally sure if this is by spec or
>> optional.
>>
>> One option to be totally sure would be to resend the sentinel request to
>> be properly transmitted before starting the next frame. This resend
>> polling would probably include some extra zero-length requests. But also
>> if this resend keeps failing for n times, the driver should doubt there
>> is anything sane going on with the USB connection and bail out somehow.
>>
>> Since we try to tackle case (1) to avoid transmit errors and also avoid
>> creating late enqueued requests in the running isoc transfer, the over
>> all chance to trigger missed transfers should be minimal.
>
>Gotcha. It seems like the UVC gadget driver implicitly assumes that EOF
>flag will be used although the userspace application can technically
>make it optional.

That is not all. The additional UVC_STREAM_ERR tag on the sentinel
request can optionally be set. But by spec the userspace application on
the host has to drop the frame when the flag is set.

With my proposal this flag will be set whenever we find out that
the currently transferred frame was erroneous.
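
For reference, a minimal sketch of how such a sentinel payload header
could look (UVC_STREAM_* are the existing flags from
<linux/usb/video.h>; the helper name and the frame_ok decision are
hypothetical, not part of this series):

#include <linux/types.h>
#include <linux/usb/video.h>

/* Build the 2-byte UVC payload header of the sentinel request that
 * closes the current video frame. Setting UVC_STREAM_ERR tells the
 * host-side application to drop the frame. */
static void my_fill_sentinel_header(u8 *header, bool frame_ok, bool fid)
{
        header[0] = 2;                                  /* bHeaderLength */
        header[1] = UVC_STREAM_EOH | UVC_STREAM_EOF;
        if (fid)
                header[1] |= UVC_STREAM_FID;            /* current frame ID */
        if (!frame_ok)
                header[1] |= UVC_STREAM_ERR;            /* drop this frame */
}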

>Summarizing some of the discussions above:
>1. UVC gadget driver should _not_ rely on the usb controller to
> enqueue 0-length requests on UVC gadget drivers behalf;
>2. However keeping up the backpressure to the controller means the
> EOF request will be delayed behind all the zero-length requests.

Exactly. This is why we have to somehow fine-tune the time delay between
requests that trigger interrupts, and also monitor the number of
requests currently enqueued in the hardware ring buffer, so that our
driver's enqueue/dequeue mechanism adds only the minimum number of
necessary zero-length requests to the hardware. This should be
possible.
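
A minimal sketch of that interrupt moderation idea (the stride value and
the queued_count bookkeeping are assumptions; no_interrupt and
usb_ep_queue() are the existing gadget API):

#include <linux/usb/gadget.h>
#include "uvc.h"

/* Only ask for a completion interrupt on every Nth request, so the pump
 * worker is woken often enough to top up the ring buffer but not once
 * per service interval. */
#define MY_INTERRUPT_STRIDE     8

static int my_queue_request(struct uvc_video *video, struct usb_request *req,
                            unsigned int queued_count)
{
        req->no_interrupt = (queued_count % MY_INTERRUPT_STRIDE) != 0;

        return usb_ep_queue(video->ep, req, GFP_ATOMIC);
}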

I am currently thinking through the remaining steps the pump worker has
to do on each wakeup to maintain the minimum threshold while holding
back requests that contain actual image payload.

>Out of curiosity: What is wrong with letting the host rely on
>FID alone? Decoding the jpeg payload _should_ fail if any of the
>usb_requests containing the payload failed to transmit.

This is not totally true. We saw partially rendered jpeg frames in the
host stream. How the host behaves with broken data is totally undefined
if the typical UVC flags EOF/ERR are not used as specified. Then think
about uncompressed formats. So relying on the transferred image format
to solve our problems is just as wrong as relying on the gadget's
hardware behavior.

>Was there a situation where usb_requests were failing but jpeg
>decoding succeeded (i.e. the host was unaware of failure)? Looking at
>the error handling on the gadget driver side, I see there is space to
>improve it, but maybe we can do it in a way that doesn't add more time
>constraints!

I think there is no way around time constraints. Since the gadget
hardware is introducing interrupt latency, we have to address this with
the sentinel request and a well-adjusted number of zero-length
requests.

Regards,
Michael

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-05-29 00:34:05

by Avichal Rakesh

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize



On 5/28/24 15:43, Michael Grzeschik wrote:
> On Tue, May 28, 2024 at 02:27:34PM -0700, Avichal Rakesh wrote:
>>
>>
>> On 5/28/24 13:22, Michael Grzeschik wrote:
>>> On Tue, May 28, 2024 at 10:30:30AM -0700, Avichal Rakesh wrote:
>>>>
>>>>
>>>> On 5/22/24 10:37, Michael Grzeschik wrote:
>>>>> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>>>>>> On Wed, May 22, 2024, Alan Stern wrote:
>>>>>>> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
>>>>>>> > On Wed, May 22, 2024, Michael Grzeschik wrote:
>>>>>>> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>>>>>>> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
>>>>>>> > > > no TRB is prepared, then the controller will automatically send 0-length
>>>>>>> > > > packet respond.
>>>>>>> > >
>>>>>>> > > Perfect! This will help a lot and will make active queueing of own
>>>>>>> > > zero-length requests run unnecessary.
>>>>>>> >
>>>>>>> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
>>>>>>> > then this will work.
>>>>>>>
>>>>>>> You shouldn't rely on this behavior.  Other device controllers might not
>>>>>>> behave this way; they might send no packet at all to the host (causing a
>>>>>>> USB protocol error) instead of sending a zero-length packet.
>>>>>>
>>>>>> I agree. The dwc3 driver has this workaround to somewhat work with the
>>>>>> UVC. This behavior is not something the controller expected, and this
>>>>>> workaround should not be a common behavior for different function
>>>>>> driver/protocol. Since this behavior was added a long time ago, it will
>>>>>> remain the default behavior in dwc3 to avoid regression with UVC (at
>>>>>> least until the UVC is changed). However, it would be nice for UVC to
>>>>>> not rely on this.
>>>>>
>>>>> With "this" you mean exactly the following commit, right?
>>>>>
>>>>> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)
>>>>>
>>>>> When we start questioning this, then lets dig deeper here.
>>>>>
>>>>> With the fast datarate of at least usb superspeed shouldn't they not all
>>>>> completely work asynchronous with their in flight trbs?
>>>>>
>>>>> In my understanding this validates that, with at least superspeed we are
>>>>> unlikely to react fast enough to maintain a steady isoc dataflow, since
>>>>> the driver above has to react to errors in the processing context.
>>>>>
>>>>> This runs the above patch (f5e46aa4) a gadget independent solution
>>>>> which has nothing to do with uvc in particular IMHO.
>>>>>
>>>>> How do other controllers and their drivers work?
>>>>>
>>>>>> Side note, when the dwc3 driver reschedules/starts isoc transfer again,
>>>>>> the first transfer will be scheduled go out at some future interval and
>>>>>> not the next immediate microframe. For UVC, it probably won't be a
>>>>>> problem since it doesn't seem to need data going out every interval.
>>>>>
>>>>> It should not make a difference. [TM]
>>>>>
>>>>
>>>>
>>>> Sorry for being absent for a lot of this discussion.
>>>>
>>>> I want to take a step back from the details of how we're
>>>> solving the problem to what problems we're trying to solve.
>>>>
>>>> So, question(s) for Michael, because I don't see an explicit
>>>> answer in this thread (and my sincerest apologies if they've
>>>> been answered already and I missed it):
>>>>
>>>> What exactly is the bug (or bugs) we're trying to solve here?
>>>>
>>>> So far, it looks like there are two main problems being
>>>> discussed:
>>>>
>>>> 1. Reducing the bandwidth usage of individual usb_requests
>>>> 2. Error handling for when transmission over the wire fails.
>>>>
>>>> Is that correct, or are there other issues at play here?
>>>
>>> That is correct.
>>>
>>>> (1) in isolation should be relatively easy to solve: Just
>>>> smear the encoded frame across some percentage
>>>> (prefereably < 100%) of the usb_requests in one
>>>> video frame interval.
>>>
>>> Right.
>>>
>>>> (2) is more complicated, and your suggestion is to have a
>>>> sentinel request between two video frames that tells the
>>>> host if the transmission of "current" video frame was
>>>> successful or not. (Is that correct?)
>>>
>>> Right.
>>>
>>>> Assuming my understanding of (2) is correct, how should
>>>> the host behave if the transmission of the sentinel
>>>> request fails after the video frame was sent
>>>> successfully (isoc is best effort so transmission is
>>>> never guaranteed)?
>>>
>>> If we have transmitted all requests of the frame but would only miss the
>>> sentinel request to the host that includes the EOF, the host could
>>> rather show it or drop it. The drop would not be necessary since the
>>> host did see all data of the frame. The user would not see any
>>> distortion in both cases.
>>>
>>> If we have transmitted only partial data of the frame it would be
>>> preferred if the host would drop the frame that did not finish with an
>>> proper EOF tag.
>>>
>>> AFAIK the linux kernel would still show the frame if the FID of the
>>> currently handled request would change and would take this as the end of
>>> frame indication. But I am not totally sure if this is by spec or
>>> optional.
>>>
>>> One option to be totally sure would be to resend the sentinel request to
>>> be properly transmitted before starting the next frame. This resend
>>> polling would probably include some extra zero-length requests. But also
>>> if this resend keeps failing for n times, the driver should doubt there
>>> is anything sane going on with the USB connection and bail out somehow.
>>>
>>> Since we try to tackle case (1) to avoid transmit errors and also avoid
>>> creating late enqueued requests in the running isoc transfer, the over
>>> all chance to trigger missed transfers should be minimal.
>>
>> Gotcha. It seems like the UVC gadget driver implicitly assumes that EOF
>> flag will be used although the userspace application can technically
>> make it optional.
>
> That is not all. The additional UVC_STREAM_ERR tag on the sentinel
> request can be set optional by the host driver. But by spec the
> userspace application has to drop the frame when the flag was set.

Looking at the UVC specs, the ERR bit doesn't seem to refer to actual
transmission errors, only errors in frame generation (Section 4.3.1.7
of the UVC 1.5 Class Specification). Maybe "data discontinuity" could be
used, but the examples given are bad media and encoder issues, which
suggests errors at a higher level than the wire.

>
> With my proposal this flag will be set, whenever we find out that
> the currently transferred frame was erroneous.
>
>> Summarizing some of the discussions above:
>> 1. UVC gadget driver should _not_ rely on the usb controller to
>>   enqueue 0-length requests on UVC gadget drivers behalf;
>> 2. However keeping up the backpressure to the controller means the
>>   EOF request will be delayed behind all the zero-length requests.
>
> Exactly, this is why we have to somehow finetune the timedelay between
> requests that trigger interrupts. And also monitor the amount of
> requests currently enqueued in the hw ringbuffer. So that our drivers
> enqueue dequeue mechanism is virtually adding only the minimum amount
> of necessary zero-length requests in the hardware. This should be
> possible.
>
> I am currently thinking through the remaining steps the pump worker has
> to do on each wakeup to maintain the minimum threshold while waiting
> with submitting requests that contain actual image payload.
>
>> Out of curiosity: What is wrong with letting the host rely on
>> FID alone? Decoding the jpeg payload _should_ fail if any of the
>> usb_requests containing the payload failed to transmit.
>
> This is not totally true. We saw partially rendered jpeg frames on the
> host stream. How the host behaves with broken data is totally undefined
> if the typical uvc flags EOF/ERR are not used as specified. Then think
> about uncompressed formats. So relying on the transferred image format
> to solve our problems is just as wrong as relying on the gadgets
> hardware behavior.

Do you know if the partially rendered frames were valid JPEGs, or
if the host was simply making a best effort at displaying a broken
JPEG? Perhaps the fix should go to the host instead?

Following is my opinion, feel free to disagree (and correct me if
something is factually incorrect):

The fundamental issue here is that ISOC doesn't guarantee
delivery of usb_requests or even basic data consistency upon delivery.
So the gadget driver has no way to know the state of transmitted data.
The gadget driver is notified of underruns but not of any other issues,
and ideally we should never have an underrun if the zero-length
backpressure is working as intended.

So, UVC gadget driver can reduce the number of errors, but it'll never be
able to guarantee that the data transmitted to the host isn't somehow
corrupted or missing unless a more reliable mode of transmission
(bulk, for example) is used.

All of this to say: The host absolutely needs to be able to handle
all sorts of invalid and broken payloads. How the host handles it
might be undefined, but the host can never rely on perfect knowledge
about the transmission state. In cases like these, where the underlying
transport is unreliable, the burden of enforcing consistency moves up
a layer, i.e. to the encoded payload in this case. So it is perfectly
fine for the host to rely on the encoding to determine if the payload
is corrupt and handle it accordingly.

As for uncompressed format, you're correct that subtle corruptions
may not be caught, but outright missing usb_requests can be easily
checked by simply looking at the number of bytes in the payload. YUV
frames are all of the same (predetermined) size for a given resolution.
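
For completeness, that check is a single comparison; a sketch assuming a
16 bits-per-pixel YUYV frame (names are hypothetical):

#include <linux/types.h>

/* A received YUYV frame is complete only if it carries exactly
 * width * height * 2 bytes; anything shorter means requests were lost. */
static bool my_yuyv_frame_complete(size_t received,
                                   unsigned int width, unsigned int height)
{
        return received == (size_t)width * height * 2;
}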

So my recommendation is the following:
1. Fix the bandwidth problem by splitting the encoded video frame
into more usb_requests (as your patch already does), making sure
there are enough free usb_requests to encode the video frame in
one burst so we don't accidentally inflate the transmission
duration of a video frame by sneaking in zero-length requests in
the middle.
2. Unless there is an unusually high rate of transmission failures
when using the UVC gadget driver, it might be worth fixing the
host side driver to handle broken frames better instead (assuming
host is linux as well).
3. Tighten up the error checking in UVC gadget driver -- We drop the
current frame whenever an EXDEV happens, which is wrong. We should
only be dropping the current frame if the EXDEV corresponds to the
frame currently being encoded (see the sketch below). If the frame
is already fully queued to the usb controller, the host can handle
missing payload as it sees fit.
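
A minimal sketch of the scoping suggested in point 3 (the buffer
pointers and how they are looked up are hypothetical; -EXDEV is what the
UDC reports for a missed isoc interval):

#include <linux/errno.h>
#include <linux/usb/gadget.h>
#include "uvc_queue.h"

/* Decide whether a missed isoc completion should invalidate a video
 * frame: only if the failed request carried data of the buffer that is
 * still being encoded/queued. */
static bool my_should_drop_frame(const struct usb_request *req,
                                 const struct uvc_buffer *req_buf,
                                 const struct uvc_buffer *current_buf)
{
        return req->status == -EXDEV && req_buf == current_buf;
}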


- Avi.

2024-05-29 21:24:46

by Michael Grzeschik

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Tue, May 28, 2024 at 05:33:46PM -0700, Avichal Rakesh wrote:
>
>
>On 5/28/24 15:43, Michael Grzeschik wrote:
>> On Tue, May 28, 2024 at 02:27:34PM -0700, Avichal Rakesh wrote:
>>>
>>>
>>> On 5/28/24 13:22, Michael Grzeschik wrote:
>>>> On Tue, May 28, 2024 at 10:30:30AM -0700, Avichal Rakesh wrote:
>>>>>
>>>>>
>>>>> On 5/22/24 10:37, Michael Grzeschik wrote:
>>>>>> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>>>>>>> On Wed, May 22, 2024, Alan Stern wrote:
>>>>>>>> On Wed, May 22, 2024 at 01:41:42AM +0000, Thinh Nguyen wrote:
>>>>>>>> > On Wed, May 22, 2024, Michael Grzeschik wrote:
>>>>>>>> > > On Fri, May 17, 2024 at 01:44:05AM +0000, Thinh Nguyen wrote:
>>>>>>>> > > > For isoc endpoint IN, yes. If the host requests for isoc data IN while
>>>>>>>> > > > no TRB is prepared, then the controller will automatically send 0-length
>>>>>>>> > > > packet respond.
>>>>>>>> > >
>>>>>>>> > > Perfect! This will help a lot and will make active queueing of own
>>>>>>>> > > zero-length requests run unnecessary.
>>>>>>>> >
>>>>>>>> > Yes, if we rely on the current start/stop isoc transfer scheme for UVC,
>>>>>>>> > then this will work.
>>>>>>>>
>>>>>>>> You shouldn't rely on this behavior. Other device controllers might not
>>>>>>>> behave this way; they might send no packet at all to the host (causing a
>>>>>>>> USB protocol error) instead of sending a zero-length packet.
>>>>>>>
>>>>>>> I agree. The dwc3 driver has this workaround to somewhat work with the
>>>>>>> UVC. This behavior is not something the controller expected, and this
>>>>>>> workaround should not be a common behavior for different function
>>>>>>> driver/protocol. Since this behavior was added a long time ago, it will
>>>>>>> remain the default behavior in dwc3 to avoid regression with UVC (at
>>>>>>> least until the UVC is changed). However, it would be nice for UVC to
>>>>>>> not rely on this.
>>>>>>
>>>>>> With "this" you mean exactly the following commit, right?
>>>>>>
>>>>>> (f5e46aa4 usb: dwc3: gadget: when the started list is empty stop the active xfer)
>>>>>>
>>>>>> When we start questioning this, then lets dig deeper here.
>>>>>>
>>>>>> With the fast datarate of at least usb superspeed shouldn't they not all
>>>>>> completely work asynchronous with their in flight trbs?
>>>>>>
>>>>>> In my understanding this validates that, with at least superspeed we are
>>>>>> unlikely to react fast enough to maintain a steady isoc dataflow, since
>>>>>> the driver above has to react to errors in the processing context.
>>>>>>
>>>>>> This runs the above patch (f5e46aa4) a gadget independent solution
>>>>>> which has nothing to do with uvc in particular IMHO.
>>>>>>
>>>>>> How do other controllers and their drivers work?
>>>>>>
>>>>>>> Side note, when the dwc3 driver reschedules/starts isoc transfer again,
>>>>>>> the first transfer will be scheduled go out at some future interval and
>>>>>>> not the next immediate microframe. For UVC, it probably won't be a
>>>>>>> problem since it doesn't seem to need data going out every interval.
>>>>>>
>>>>>> It should not make a difference. [TM]
>>>>>>
>>>>>
>>>>>
>>>>> Sorry for being absent for a lot of this discussion.
>>>>>
>>>>> I want to take a step back from the details of how we're
>>>>> solving the problem to what problems we're trying to solve.
>>>>>
>>>>> So, question(s) for Michael, because I don't see an explicit
>>>>> answer in this thread (and my sincerest apologies if they've
>>>>> been answered already and I missed it):
>>>>>
>>>>> What exactly is the bug (or bugs) we're trying to solve here?
>>>>>
>>>>> So far, it looks like there are two main problems being
>>>>> discussed:
>>>>>
>>>>> 1. Reducing the bandwidth usage of individual usb_requests
>>>>> 2. Error handling for when transmission over the wire fails.
>>>>>
>>>>> Is that correct, or are there other issues at play here?
>>>>
>>>> That is correct.
>>>>
>>>>> (1) in isolation should be relatively easy to solve: Just
>>>>> smear the encoded frame across some percentage
>>>>> (prefereably < 100%) of the usb_requests in one
>>>>> video frame interval.
>>>>
>>>> Right.
>>>>
>>>>> (2) is more complicated, and your suggestion is to have a
>>>>> sentinel request between two video frames that tells the
>>>>> host if the transmission of "current" video frame was
>>>>> successful or not. (Is that correct?)
>>>>
>>>> Right.
>>>>
>>>>> Assuming my understanding of (2) is correct, how should
>>>>> the host behave if the transmission of the sentinel
>>>>> request fails after the video frame was sent
>>>>> successfully (isoc is best effort so transmission is
>>>>> never guaranteed)?
>>>>
>>>> If we have transmitted all requests of the frame but would only miss the
>>>> sentinel request to the host that includes the EOF, the host could
>>>> rather show it or drop it. The drop would not be necessary since the
>>>> host did see all data of the frame. The user would not see any
>>>> distortion in both cases.
>>>>
>>>> If we have transmitted only partial data of the frame it would be
>>>> preferred if the host would drop the frame that did not finish with an
>>>> proper EOF tag.
>>>>
>>>> AFAIK the linux kernel would still show the frame if the FID of the
>>>> currently handled request would change and would take this as the end of
>>>> frame indication. But I am not totally sure if this is by spec or
>>>> optional.
>>>>
>>>> One option to be totally sure would be to resend the sentinel request to
>>>> be properly transmitted before starting the next frame. This resend
>>>> polling would probably include some extra zero-length requests. But also
>>>> if this resend keeps failing for n times, the driver should doubt there
>>>> is anything sane going on with the USB connection and bail out somehow.
>>>>
>>>> Since we try to tackle case (1) to avoid transmit errors and also avoid
>>>> creating late enqueued requests in the running isoc transfer, the over
>>>> all chance to trigger missed transfers should be minimal.
>>>
>>> Gotcha. It seems like the UVC gadget driver implicitly assumes that EOF
>>> flag will be used although the userspace application can technically
>>> make it optional.
>>
>> That is not all. The additional UVC_STREAM_ERR tag on the sentinel
>> request can be set optional by the host driver. But by spec the
>> userspace application has to drop the frame when the flag was set.
>
>Looking at the UVC specs, the ERR bit doesn't seem to refer to actual
>transmission error, only errors in frame generation (Section 4.3.1.7
>of UVC 1.5 Class Specification). Maybe "data discontinuity" can be
>used but the examples given are bad media, and encoder issues, which
>suggests errors at higher level than the wire.

Oh! That is a new perspective I did not consider.

Given the definition of UVC_STREAM_ERR in the spec, the uvc_video driver
would then in no case set this header bit for the current frame on its
own? Is that correct?

>> With my proposal this flag will be set, whenever we find out that
>> the currently transferred frame was erroneous.
>>
>>> Summarizing some of the discussions above:
>>> 1. UVC gadget driver should _not_ rely on the usb controller to
>>>   enqueue 0-length requests on UVC gadget drivers behalf;
>>> 2. However keeping up the backpressure to the controller means the
>>>   EOF request will be delayed behind all the zero-length requests.
>>
>> Exactly, this is why we have to somehow finetune the timedelay between
>> requests that trigger interrupts. And also monitor the amount of
>> requests currently enqueued in the hw ringbuffer. So that our drivers
>> enqueue dequeue mechanism is virtually adding only the minimum amount
>> of necessary zero-length requests in the hardware. This should be
>> possible.
>>
>> I am currently thinking through the remaining steps the pump worker has
>> to do on each wakeup to maintain the minimum threshold while waiting
>> with submitting requests that contain actual image payload.
>>
>>> Out of curiosity: What is wrong with letting the host rely on
>>> FID alone? Decoding the jpeg payload _should_ fail if any of the
>>> usb_requests containing the payload failed to transmit.
>>
>> This is not totally true. We saw partially rendered jpeg frames on the
>> host stream. How the host behaves with broken data is totally undefined
>> if the typical uvc flags EOF/ERR are not used as specified. Then think
>> about uncompressed formats. So relying on the transferred image format
>> to solve our problems is just as wrong as relying on the gadgets
>> hardware behavior.
>
>Do you know if the partially rendered frames were valid JPEGs, or
>if the host was simply making a best effort at displaying a broken
>JPEG? Perhaps the fix should go to the host instead?

I can fully reproduce this with Linux and Windows hosts. For Linux
machines I saw that the host was taking the FID change as a marker that
the previous frame was ready and just rendered what got through. This
did not lead to garbage but only to partially displayed frames, cut at
JPEG macroblock alignment.

>Following is my opinion, feel free to disagree (and correct me if
>something is factually incorrect):
>
>The fundamental issue here is that ISOC doesn't guarantee
>delivery of usb_requests or even basic data consistency upon delivery.
>So the gadget driver has no way to know the state of transmitted data.
>The gadget driver is notified of underruns but not of any other issues,
>and ideally we should never have an underrun if the zero-length
>backpressure is working as intended.
>
>So, UVC gadget driver can reduce the number of errors, but it'll never be
>able to guarantee that the data transmitted to the host isn't somehow
>corrupted or missing unless a more reliable mode of transmission
>(bulk, for example) is used.
>
>All of this to say: The host absolutely needs to be able to handle
>all sorts of invalid and broken payloads. How the host handles it
>might be undefined, but the host can never rely on perfect knowledge
>about the transmission state. In cases like these, where the underlying
>transport is unreliable, the burden of enforcing consistency moves up
>a layer, i.e. to the encoded payload in this case. So it is perfectly
>fine for the host to rely on the encoding to determine if the payload
>is corrupt and handle it accordingly.

Right.

>As for uncompressed format, you're correct that subtle corruptions
>may not be caught, but outright missing usb_requests can be easily
>checked by simply looking at the number of bytes in the payload. YUV
>frames are all of the same (predetermined) size for a given resolution.

That was also my thought about five minutes after I sent you the
previous mail. So sure, this is no real issue for the host.

>So my recommendation is the following:
>1. Fix the bandwidth problem by splitting the encoded video frame
> into more usb_requests (as your patch already does) making sure
> there are enough free usb_request to encode the video frame in
> one burst so we don't accidentally inflate the transmission
> duration of a video frame by sneaking in zero-length requests in
> the middle.

Ack. This should already solve a lot of issues.

For this I would still suggest moving the usb_ep_queue call back into
the pump worker. It's a bit back and forth, but IMHO it's worth the
extra mile, since only this way do we respect the dwc3 interrupt
thread's assumption that it runs *very* short.

>2. Unless there is an unusually high rate of transmission failures
> when using the UVC gadget driver, it might be worth fixing the
> host side driver to handle broken frames better instead (assuming
> host is linux as well).

Agreed, but this needs a separately scoped understanding of the
host-side behaviour across all layers.

>3. Tighten up the error checking in UVC gadget driver -- We drop the
> current frame whenever an EXDEV happens, which is wrong. We should
> only be dropping the current frame if the EXDEV corresponds to the
> frame currently being encoded.

What do you mean by drop?

I would suggest to immediately switch the uvc_buffer that is being
enqueued and start queueing prepared requests from the next buffer's
prep list. As suggested, the idea is to have per-uvc_buffer prep_list
requests, which would make this task easy.
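
Roughly, the per-buffer bookkeeping this implies could look like this (a
sketch only; the struct name and the embedding are assumptions):

#include <linux/list.h>
#include "uvc_queue.h"

/* Each video buffer owns the usb_requests prepared from its payload, so
 * the completion path can drop or switch one buffer without walking a
 * global request list. */
struct my_uvc_buffer {
        struct uvc_buffer buf;          /* existing gadget-side buffer */
        struct list_head prep_list;     /* requests prepared for this buffer */
};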

> If the frame is already fully queued to the usb controller, the host
> can handle missing payload as it sees fit.

Ack

This roadmap sounds like a good one.

Michael

--
Pengutronix e.K. | |
Steuerwalder Str. 21 | http://www.pengutronix.de/ |
31137 Hildesheim, Germany | Phone: +49-5121-206917-0 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |



2024-06-04 22:32:46

by Avichal Rakesh

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize



On 5/29/24 14:24, Michael Grzeschik wrote:
> On Tue, May 28, 2024 at 05:33:46PM -0700, Avichal Rakesh wrote:
>>
>>
>> On 5/28/24 15:43, Michael Grzeschik wrote:
>>> On Tue, May 28, 2024 at 02:27:34PM -0700, Avichal Rakesh wrote:
>>>>
>>>>
>>>> On 5/28/24 13:22, Michael Grzeschik wrote:
>>>>> On Tue, May 28, 2024 at 10:30:30AM -0700, Avichal Rakesh wrote:
>>>>>>
>>>>>>
>>>>>> On 5/22/24 10:37, Michael Grzeschik wrote:
>>>>>>> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>>>>> One option to be totally sure would be to resend the sentinel request to
>>>>> be properly transmitted before starting the next frame. This resend
>>>>> polling would probably include some extra zero-length requests. But also
>>>>> if this resend keeps failing for n times, the driver should doubt there
>>>>> is anything sane going on with the USB connection and bail out somehow.
>>>>>
>>>>> Since we try to tackle case (1) to avoid transmit errors and also avoid
>>>>> creating late enqueued requests in the running isoc transfer, the over
>>>>> all chance to trigger missed transfers should be minimal.
>>>>
>>>> Gotcha. It seems like the UVC gadget driver implicitly assumes that EOF
>>>> flag will be used although the userspace application can technically
>>>> make it optional.
>>>
>>> That is not all. The additional UVC_STREAM_ERR tag on the sentinel
>>> request can be set optional by the host driver. But by spec the
>>> userspace application has to drop the frame when the flag was set.
>>
>> Looking at the UVC specs, the ERR bit doesn't seem to refer to actual
>> transmission error, only errors in frame generation (Section 4.3.1.7
>> of UVC 1.5 Class Specification). Maybe "data discontinuity" can be
>> used but the examples given are bad media, and encoder issues, which
>> suggests errors at higher level than the wire.
>
> Oh! That is a new perspective I did not consider.
>
> With the definition of UVC_STREAM_ERR by spec, the uvc_video driver
> would in no case set this header bit for the current frame on its own?
> Is that correct?

It would indeed seem so. The way the gadget driver is architected makes
it impossible for the userspace application to notify the host of
any errors.

>
>>> With my proposal this flag will be set, whenever we find out that
>>> the currently transferred frame was erroneous.
>>>
>>>> Summarizing some of the discussions above:
>>>> 1. UVC gadget driver should _not_ rely on the usb controller to
>>>>   enqueue 0-length requests on UVC gadget drivers behalf;
>>>> 2. However keeping up the backpressure to the controller means the
>>>>   EOF request will be delayed behind all the zero-length requests.
>>>
>>> Exactly, this is why we have to somehow finetune the timedelay between
>>> requests that trigger interrupts. And also monitor the amount of
>>> requests currently enqueued in the hw ringbuffer. So that our drivers
>>> enqueue dequeue mechanism is virtually adding only the minimum amount
>>> of necessary zero-length requests in the hardware. This should be
>>> possible.
>>>
>>> I am currently thinking through the remaining steps the pump worker has
>>> to do on each wakeup to maintain the minimum threshold while waiting
>>> with submitting requests that contain actual image payload.
>>>
>>>> Out of curiosity: What is wrong with letting the host rely on
>>>> FID alone? Decoding the jpeg payload _should_ fail if any of the
>>>> usb_requests containing the payload failed to transmit.
>>>
>>> This is not totally true. We saw partially rendered jpeg frames on the
>>> host stream. How the host behaves with broken data is totally undefined
>>> if the typical uvc flags EOF/ERR are not used as specified. Then think
>>> about uncompressed formats. So relying on the transferred image format
>>> to solve our problems is just as wrong as relying on the gadgets
>>> hardware behavior.
>>
>> Do you know if the partially rendered frames were valid JPEGs, or
>> if the host was simply making a best effort at displaying a broken
>> JPEG? Perhaps the fix should go to the host instead?
>
> I can fully reproduce this with linux and windows hosts. For linux
> machines I saw that the host was taking the FID change as a marker
> to see the previous frame as ready and just rendered what got through.
> This did not lead to garbage but only to partially displayed frames
> with jpeg macroblock alignment.

I was aware of linux doing so, but I only ever saw this behavior on
Windows if there were a lot of invalid frames back to back.

I am not super familiar with the guarantees of JPEG, but I suppose
it is possible to have a "valid" JPEG with some middle blocks
missing as long as the EOI bits make it through? I am not sure how we
go about solving that.
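(To make the point concrete: a naive marker check like the sketch below
would happily accept such a payload. This is purely an illustration, not
anything taken from the host driver.)

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Checks only that the payload starts with SOI and ends with EOI. A
 * frame that lost usb_requests somewhere in the middle can still pass
 * this check, which is exactly the problem. */
static bool mjpeg_markers_present(const uint8_t *buf, size_t len)
{
        return len >= 4 &&
               buf[0] == 0xff && buf[1] == 0xd8 &&              /* SOI */
               buf[len - 2] == 0xff && buf[len - 1] == 0xd9;    /* EOI */
}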

>
>> Following is my opinion, feel free to disagree (and correct me if
>> something is factually incorrect):
>>
>> The fundamental issue here is that ISOC doesn't guarantee
>> delivery of usb_requests or even basic data consistency upon delivery.
>> So the gadget driver has no way to know the state of transmitted data.
>> The gadget driver is notified of underruns but not of any other issues,
>> and ideally we should never have an underrun if the zero-length
>> backpressure is working as intended.
>>
>> So, UVC gadget driver can reduce the number of errors, but it'll never be
>> able to guarantee that the data transmitted to the host isn't somehow
>> corrupted or missing unless a more reliable mode of transmission
>> (bulk, for example) is used.
>>
>> All of this to say: The host absolutely needs to be able to handle
>> all sorts of invalid and broken payloads. How the host handles it
>> might be undefined, but the host can never rely on perfect knowledge
>> about the transmission state. In cases like these, where the underlying
>> transport is unreliable, the burden of enforcing consistency moves up
>> a layer, i.e. to the encoded payload in this case. So it is perfectly
>> fine for the host to rely on the encoding to determine if the payload
>> is corrupt and handle it accordingly.
>
> Right.
>
>> As for uncompressed format, you're correct that subtle corruptions
>> may not be caught, but outright missing usb_requests can be easily
>> checked by simply looking at the number of bytes in the payload. YUV
>> frames are all of the same (predetermined) size for a given resolution.
>
> That was also my thought about five minutes after I did send you the
> previous mail. So sure, this is no real issue for the host.
>
>> So my recommendation is the following:
>> 1. Fix the bandwidth problem by splitting the encoded video frame
>>   into more usb_requests (as your patch already does) making sure
>>   there are enough free usb_request to encode the video frame in
>>   one burst so we don't accidentally inflate the transmission
>>   duration of a video frame by sneaking in zero-length requests in
>>   the middle.
>
> Ack. This should already solve a lot of issues.
>
> For this I would still suggest to move the usb_ep_queue to be done in
> the pump worker again. Its a bit back and forth, but IMHO its worth the
> extra mile since only this way we would respect the dwc3 interrupt
> threads assumption to run *very* short.

The main reason for queuing the requests from the complete handler
was to have a single point of usb_ep_queue call, which made reasoning
through the locking simpler. But if you find a way to do so from
the video_pump thread without making the locking a nightmare, then go
for it!

>
>> 2. Unless there is an unusually high rate of transmission failures
>>   when using the UVC gadget driver, it might be worth fixing the
>>   host side driver to handle broken frames better instead (assuming
>>   host is linux as well).
>
> Agreed, but this needs a separate scoped undestanding of the host side
> behaviour over all layers.

Agreed!

>
>> 2. Tighten up the error checking in UVC gadget driver -- We drop the
>>   current frame whenever an EXDEV happens which is wrong. We should
>>   only be dropping the current frame if the EXDEV corresponds to the
>>   frame currently being encoded.
>
> What do you mean by drop?
>
> I would suggest immediately switching the uvc_buffer that is being
> enqueued and starting to queue prepared requests from the next buffer's
> prep list. As suggested, the idea is to have a per-uvc_buffer prep_list
> of requests, which would make this task easy.

Currently, if the uvc gadget driver receives an EXDEV complete callback
all it does is set the UVC_QUEUE_DROP_INCOMPLETE flag.

So let's say that we receive an EXDEV for a usb_request containing data
for video frame N. With how video_pump is currently configured, chances
are that all usb_requests containing data for video frame N have already
been queued to the controller.

When the next video frame (N+1) comes in, video_pump's encode methods
will look at the UVC_QUEUE_DROP_INCOMPLETE flag and incorrectly
determine that "current" frame needs to be dropped, and stop encoding
video frame N+1 even though the error was for video frame N. So the
encode methods incorrectly drop video frame N+1 which isn't needed.

The encode methods should only be dropping the video frame if we
received an EXDEV for a usb_request for the video frame currently
being encoded.
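Something along these lines, purely as a sketch (none of these names
exist in the driver, the point is only that the error gets tracked per
frame instead of in one queue-wide flag):

#include <linux/types.h>

struct frame_sketch {
        bool error;                     /* EXDEV seen for this frame */
};

struct request_ctx_sketch {
        struct frame_sketch *frame;     /* frame this request carried data for */
};

/* complete handler, -EXDEV case: mark only the affected frame */
static void complete_exdev_sketch(struct request_ctx_sketch *ctx)
{
        if (ctx->frame)
                ctx->frame->error = true;
}

/* encode path: drop only if *this* frame saw an error */
static bool should_drop_sketch(const struct frame_sketch *frame)
{
        return frame->error;
}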

I hope that makes sense!


- Avi.

2024-06-10 21:45:03

by Michael Grzeschik

[permalink] [raw]
Subject: Re: [PATCH 0/3] usb: gadget: uvc: allocate requests based on frame interval length and buffersize

On Tue, Jun 04, 2024 at 03:32:15PM -0700, Avichal Rakesh wrote:
>
>
>On 5/29/24 14:24, Michael Grzeschik wrote:
>> On Tue, May 28, 2024 at 05:33:46PM -0700, Avichal Rakesh wrote:
>>>
>>>
>>> On 5/28/24 15:43, Michael Grzeschik wrote:
>>>> On Tue, May 28, 2024 at 02:27:34PM -0700, Avichal Rakesh wrote:
>>>>>
>>>>>
>>>>> On 5/28/24 13:22, Michael Grzeschik wrote:
>>>>>> On Tue, May 28, 2024 at 10:30:30AM -0700, Avichal Rakesh wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 5/22/24 10:37, Michael Grzeschik wrote:
>>>>>>>> On Wed, May 22, 2024 at 05:17:02PM +0000, Thinh Nguyen wrote:
>>>>>> One option to be totally sure would be to resend the sentinel request to
>>>>>> be properly transmitted before starting the next frame. This resend
>>>>>> polling would probably include some extra zero-length requests. But also
>>>>>> if this resend keeps failing for n times, the driver should doubt there
>>>>>> is anything sane going on with the USB connection and bail out somehow.
>>>>>>
>>>>>> Since we try to tackle case (1) to avoid transmit errors and also avoid
>>>>>> creating late enqueued requests in the running isoc transfer, the over
>>>>>> all chance to trigger missed transfers should be minimal.
>>>>>
>>>>> Gotcha. It seems like the UVC gadget driver implicitly assumes that EOF
>>>>> flag will be used although the userspace application can technically
>>>>> make it optional.
>>>>
>>>> That is not all. The additional UVC_STREAM_ERR tag on the sentinel
>>>> request can be set optional by the host driver. But by spec the
>>>> userspace application has to drop the frame when the flag was set.
>>>
>>> Looking at the UVC specs, the ERR bit doesn't seem to refer to actual
>>> transmission error, only errors in frame generation (Section 4.3.1.7
>>> of UVC 1.5 Class Specification). Maybe "data discontinuity" can be
>>> used but the examples given are bad media, and encoder issues, which
>>> suggests errors at higher level than the wire.
>>
>> Oh! That is a new perspective I did not consider.
>>
>> With the definition of UVC_STREAM_ERR by spec, the uvc_video driver
>> would in no case set this header bit for the current frame on its own?
>> Is that correct?
>
>It would indeed seem so. The way the gadget driver is architected makes
>it impossible for the userspace application to notify the host of
>any errors.
>
>>
>>>> With my proposal this flag will be set, whenever we find out that
>>>> the currently transferred frame was erroneous.
>>>>
>>>>> Summarizing some of the discussions above:
>>>>> 1. UVC gadget driver should _not_ rely on the usb controller to
>>>>>   enqueue 0-length requests on UVC gadget drivers behalf;
>>>>> 2. However keeping up the backpressure to the controller means the
>>>>>   EOF request will be delayed behind all the zero-length requests.
>>>>
>>>> Exactly, this is why we have to somehow finetune the timedelay between
>>>> requests that trigger interrupts. And also monitor the amount of
>>>> requests currently enqueued in the hw ringbuffer. So that our drivers
>>>> enqueue dequeue mechanism is virtually adding only the minimum amount
>>>> of necessary zero-length requests in the hardware. This should be
>>>> possible.
>>>>
>>>> I am currently thinking through the remaining steps the pump worker has
>>>> to do on each wakeup to maintain the minimum threshold while waiting
>>>> with submitting requests that contain actual image payload.
>>>>
>>>>> Out of curiosity: What is wrong with letting the host rely on
>>>>> FID alone? Decoding the jpeg payload _should_ fail if any of the
>>>>> usb_requests containing the payload failed to transmit.
>>>>
>>>> This is not totally true. We saw partially rendered jpeg frames on the
>>>> host stream. How the host behaves with broken data is totally undefined
>>>> if the typical uvc flags EOF/ERR are not used as specified. Then think
>>>> about uncompressed formats. So relying on the transferred image format
>>>> to solve our problems is just as wrong as relying on the gadgets
>>>> hardware behavior.
>>>
>>> Do you know if the partially rendered frames were valid JPEGs, or
>>> if the host was simply making a best effort at displaying a broken
>>> JPEG? Perhaps the fix should go to the host instead?
>>
>> I can fully reproduce this with linux and windows hosts. For linux
>> machines I saw that the host was taking the FID change as a marker
>> to see the previous frame as ready and just rendered what got through.
>> This did not lead to garbage but only to partially displayed frames
>> with jpeg macroblock alignment.
>
>I was aware of linux doing so, but I only ever saw this behavior on
>Windows if there were a lot of invalid frames back to back.
>
>I am not super familiar with the guarantees of JPEG, but I suppose
>it is possible to have a "valid" JPEG with some middle blocks
>missing as long as the EOI bits make it through? I am not sure how we
>go about solving that.

It is even worse. Since the EOF tag is not necessarily set, the host
will draw whatever content it got once the FID has changed. So it is
always possible that a frame that was erroneous, and therefore dropped
on the sending side, will be shown on the host up to the last macroblock
that was received. These partially drawn frames are therefore more
common than expected.

>>> Following is my opinion, feel free to disagree (and correct me if
>>> something is factually incorrect):
>>>
>>> The fundamental issue here is that ISOC doesn't guarantee
>>> delivery of usb_requests or even basic data consistency upon delivery.
>>> So the gadget driver has no way to know the state of transmitted data.
>>> The gadget driver is notified of underruns but not of any other issues,
>>> and ideally we should never have an underrun if the zero-length
>>> backpressure is working as intended.
>>>
>>> So, UVC gadget driver can reduce the number of errors, but it'll never be
>>> able to guarantee that the data transmitted to the host isn't somehow
>>> corrupted or missing unless a more reliable mode of transmission
>>> (bulk, for example) is used.
>>>
>>> All of this to say: The host absolutely needs to be able to handle
>>> all sorts of invalid and broken payloads. How the host handles it
>>> might be undefined, but the host can never rely on perfect knowledge
>>> about the transmission state. In cases like these, where the underlying
>>> transport is unreliable, the burden of enforcing consistency moves up
>>> a layer, i.e. to the encoded payload in this case. So it is perfectly
>>> fine for the host to rely on the encoding to determine if the payload
>>> is corrupt and handle it accordingly.
>>
>> Right.
>>
>>> As for uncompressed format, you're correct that subtle corruptions
>>> may not be caught, but outright missing usb_requests can be easily
>>> checked by simply looking at the number of bytes in the payload. YUV
>>> frames are all of the same (predetermined) size for a given resolution.
>>
>> That was also my thought about five minutes after I did send you the
>> previous mail. So sure, this is no real issue for the host.
>>
>>> So my recommendation is the following:
>>> 1. Fix the bandwidth problem by splitting the encoded video frame
>>>   into more usb_requests (as your patch already does) making sure
>>>   there are enough free usb_request to encode the video frame in
>>>   one burst so we don't accidentally inflate the transmission
>>>   duration of a video frame by sneaking in zero-length requests in
>>>   the middle.
>>
>> Ack. This should already solve a lot of issues.
>>
>> For this I would still suggest to move the usb_ep_queue to be done in
>> the pump worker again. Its a bit back and forth, but IMHO its worth the
>> extra mile since only this way we would respect the dwc3 interrupt
>> threads assumption to run *very* short.
>
>The main reason for queuing the requests from the complete handler
>was to have a single point of usb_ep_queue call, which made reasoning
>through the locking simpler. But if you find a way to do so from
>the video_pump thread without making the locking a nightmare, then go
>for it!
>
>>
>>> 2. Unless there is an unusually high rate of transmission failures
>>>   when using the UVC gadget driver, it might be worth fixing the
>>>   host side driver to handle broken frames better instead (assuming
>>>   host is linux as well).
>>
>> Agreed, but this needs a separate scoped undestanding of the host side
>> behaviour over all layers.
>
>Agreed!
>
>>
>>> 2. Tighten up the error checking in UVC gadget driver -- We drop the
>>>   current frame whenever an EXDEV happens which is wrong. We should
>>>   only be dropping the current frame if the EXDEV corresponds to the
>>>   frame currently being encoded.
>>
>> What do you mean by drop?
>>
>> I would suggest immediately switching the uvc_buffer that is being
>> enqueued and starting to queue prepared requests from the next buffer's
>> prep list. As suggested, the idea is to have a per-uvc_buffer prep_list
>> of requests, which would make this task easy.
>
>Currently, if the uvc gadget driver receives an EXDEV complete callback
>all it does is set the UVC_QUEUE_DROP_INCOMPLETE flag.
>
>So let's say that we receive an EXDEV for a usb_request containing data
>for video frame N. With how video_pump is currently configured, chances
>are that all usb_requests containing data for video frame N have already
>been queued to the controller.
>
>When the next video frame (N+1) comes in, video_pump's encode methods
>will look at the UVC_QUEUE_DROP_INCOMPLETE flag and incorrectly
>determine that "current" frame needs to be dropped, and stop encoding
>video frame N+1 even though the error was for video frame N. So the
>encode methods incorrectly drop video frame N+1 which isn't needed.
>
>The encode methods should only be dropping the video frame if we
>received an EXDEV for a usb_request for the video frame currently
>being encoded.
>
>I hope that makes sense!

This totally makes sense. I just wanted to make sure that, in your
understanding, this does not involve any UVC_STREAM_ERR tagging.

I totally agree with this concept. So now we "only" have to implement
this. :)

First I will review and update my patches that increase the number
of requests per frame.
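For reference, the calculation boils down to roughly this. It is only a
sketch with assumed units (frame interval in 100 ns steps, one isoc
request per 125 us (micro)frame), not the exact patch code:

#include <linux/kernel.h>

/* Sketch only: with the frame interval in 100 ns units and one isoc
 * request sent per 125 us (micro)frame, this is the number of requests
 * available for one video frame and the payload size each of them has
 * to carry. */
static unsigned int sketch_nreqs(unsigned int interval_100ns)
{
        return DIV_ROUND_UP(interval_100ns, 1250);      /* 125 us = 1250 * 100 ns */
}

static unsigned int sketch_req_size(unsigned int imagesize,
                                    unsigned int nreqs,
                                    unsigned int max_payload)
{
        return min_t(unsigned int,
                     DIV_ROUND_UP(imagesize, nreqs), max_payload);
}

At 30 fps (an interval of 333333) that would give 267 requests per
frame, for example.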

Regards,
Michael

--
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |

