Message-ID: <55AED708.903@hurleysoftware.com>
Date: Tue, 21 Jul 2015 19:34:32 -0400
From: Peter Hurley
To: Sven Brauch
CC: Oliver Neukum, Johan Hovold, Linux Kernel Mailing List,
    One Thousand Gnomes, Toby Gray, linux-usb@vger.kernel.org,
    linux-serial@vger.kernel.org
Subject: Re: [PATCH] Fix data loss in cdc-acm
References: <55AC1883.4050605@svenbrauch.de> <20150720172546.GF20628@localhost>
 <55AD38E5.1090807@svenbrauch.de> <1437486195.3823.13.camel@suse.com>
 <55AEBD06.6020402@svenbrauch.de>
In-Reply-To: <55AEBD06.6020402@svenbrauch.de>

On 07/21/2015 05:43 PM, Sven Brauch wrote:
> Hi,
>
> Thank you for your comments.
>
> On 21/07/15 15:43, Oliver Neukum wrote:
>> But others won't and we'd preserve stale data in preference over fresh
>> data.
> If that is important for your device, you should be using an isochronous
> endpoint, not bulk, no?
> Also note that the driver currently does this anyway. It loses a few kB
> of data, and _then_ it throttles the producer and forces it to wait.
>
> On 21/07/15 11:18, Johan Hovold wrote:
>> In general if data isn't being consumed fast enough we eventually need
>> to drop it (unless we can throttle the producer).
> Yes, maybe this is the first thing which should be cleared up: Is
> "throttle the producer" always preferable over "lose data"? I'd say yes
> for bulk transfers, no for isochronous. It is in principle easy enough
> to throttle the producer; that is what e.g. my patch does. Whether a
> different approach may be more appropriate than the "don't resubmit the
> urbs" thing is then of course open to debate.
>
> As far as I can see, throttling the producer is the only way to
> guarantee data delivery. So if we want that (and I certainly want it for
> my application, I don't know about the general case), I think all
> changes to the tty buffers or throttling mechanisms are "just"
> performance optimization, since no such modification will ever guarantee
> delivery if the producer is not throttled in time.
> And, this I want to mention again, if your producer is timing-sensitive
> you would not be using bulk anyway. The USB controller could just
> decide that your device cannot send data for the next five seconds, and
> it will have to handle that case as well. Thus I see no reason not to
> throttle the producer if necessary.

It's unclear to me that you haven't hit some other bug (buffer
miscalculation, failure to make progress, etc.) which is affecting other
users as well, just not to the extent you're experiencing. For example,
I made changes to the conditions required to restart the input worker;
I may have omitted some necessary condition which you've triggered.

> On 21/07/15 18:45, Peter Hurley wrote:
>> 1. Instrument acm_read_bulk_callback with tty_buffer_space_avail()
>>    to track the (approx) fill level of the tty buffers. I would
>>    use ftrace/trace_printk() to record this.
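To be concrete, what I had in mind is nothing more than a trace_printk()
at the top of the completion handler, roughly like this (untested sketch;
I'm assuming the urb->context -> acm_rb -> acm chain and the embedded
tty_port as in the current cdc-acm.c, so adjust the field names to
whatever your tree has):

static void acm_read_bulk_callback(struct urb *urb)
{
	struct acm_rb *rb = urb->context;	/* assumes current cdc-acm.c layout */
	struct acm *acm = rb->instance;

	/*
	 * Record the remaining tty buffer headroom for every completed
	 * rx urb; the output goes to the ftrace ring buffer.
	 */
	trace_printk("rx urb: len=%u space=%d\n",
		     urb->actual_length,
		     tty_buffer_space_avail(&acm->port));

	/* ... rest of the completion handling unchanged ... */
}

Reading /sys/kernel/debug/tracing/trace (or trace_pipe) afterwards gives
a timestamped per-urb record of how quickly the buffers fill up.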
> I already did this while debugging. For a while, the buffer is almost
> empty (fluctuates by a few kB), then it rapidly drops to zero bytes
> free. Only after a few urbs were submitted (or rather, not submitted)
> into the full buffer does the throttle flag get set.

I'd like to see that data, if you can share it; it will help me
understand at least the timing.

Regards,
Peter Hurley