Subject: Re: [alsa-devel] [PATCH 08/11] ALSA: vsnd: Add timer for period interrupt emulation
From: Clemens Ladisch
To: Oleksandr Andrushchenko
Cc: alsa-devel@alsa-project.org, xen-devel@lists.xen.org, linux-kernel@vger.kernel.org, Oleksandr Andrushchenko, tiwai@suse.com
Date: Mon, 7 Aug 2017 15:55:25 +0200
References: <1502091796-14413-1-git-send-email-andr2000@gmail.com> <1502091796-14413-9-git-send-email-andr2000@gmail.com> <4ce45eec-8657-66c4-c8c7-b851250da46a@ladisch.de> <45738e58-7ed2-8315-8d9c-138c3d3a2ecc@gmail.com>
In-Reply-To: <45738e58-7ed2-8315-8d9c-138c3d3a2ecc@gmail.com>

Oleksandr Andrushchenko wrote:
> On 08/07/2017 04:11 PM, Clemens Ladisch wrote:
>> How does that interface work?
>
> For the buffer received in .copy_user/.copy_kernel we send a request
> to the backend and get a response back (async) when it has copied the
> bytes into HW/mixer/etc, so the buffer at the frontend side can be
> reused.

So if the frontend sends too many (or too large) requests, does the
backend wait until there is enough free space in its buffer before it
does the actual copying and then acks?

If yes, then these acks can be used as interrupts.  (You still have to
count frames, and call snd_pcm_period_elapsed() exactly when a period
boundary was reached or crossed.)

Splitting a large read/write into smaller requests to the backend would
improve the granularity of the known stream position.

The overall latency would be the sum of the sizes of the frontend and
backend buffers.

Why is the protocol designed this way?  Wasn't the goal to expose some
'real' sound card?


Regards,
Clemens
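
As an illustration of the frame counting suggested above, here is a
minimal sketch of what a backend-ack handler could look like.  The
struct vsnd_stream and its fields (hw_ptr, frames_since_period), as
well as the function name, are hypothetical and not part of the posted
patch; only snd_pcm_period_elapsed(), bytes_to_frames(), and the
runtime fields are the real ALSA interfaces.

#include <sound/pcm.h>

/* Hypothetical per-stream state for this sketch; not from the patch. */
struct vsnd_stream {
	struct snd_pcm_substream *substream;
	snd_pcm_uframes_t hw_ptr;               /* reported by the .pointer callback */
	snd_pcm_uframes_t frames_since_period;  /* frames since the last period boundary */
};

/*
 * Called when the backend acknowledges that 'copied_bytes' have been
 * consumed.  The ack is used instead of a timer-based interrupt.
 */
static void vsnd_on_backend_ack(struct vsnd_stream *stream, size_t copied_bytes)
{
	struct snd_pcm_runtime *runtime = stream->substream->runtime;
	snd_pcm_uframes_t frames = bytes_to_frames(runtime, copied_bytes);

	/* advance the position returned by the .pointer callback */
	stream->hw_ptr = (stream->hw_ptr + frames) % runtime->buffer_size;

	/*
	 * Count frames and call snd_pcm_period_elapsed() only when a
	 * period boundary has been reached or crossed by this ack.
	 */
	stream->frames_since_period += frames;
	if (stream->frames_since_period >= runtime->period_size) {
		stream->frames_since_period %= runtime->period_size;
		snd_pcm_period_elapsed(stream->substream);
	}
}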