Date: Mon, 01 Mar 2021 15:56:01 +0100
From: Takashi Iwai
To: Anton Yakovlev
Cc: "Michael S. Tsirkin", Jaroslav Kysela, Takashi Iwai
Tsirkin" , Jaroslav Kysela , Takashi Iwai , Subject: Re: [PATCH v6 5/9] ALSA: virtio: handling control and I/O messages for the PCM device In-Reply-To: <85bbc067-e7ec-903a-1518-5aab01577655@opensynergy.com> References: <20210227085956.1700687-1-anton.yakovlev@opensynergy.com> <20210227085956.1700687-6-anton.yakovlev@opensynergy.com> <85bbc067-e7ec-903a-1518-5aab01577655@opensynergy.com> User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI/1.14.6 (Maruoka) FLIM/1.14.9 (=?UTF-8?B?R29qxY0=?=) APEL/10.8 Emacs/25.3 (x86_64-suse-linux-gnu) MULE/6.0 (HANACHIRUSATO) MIME-Version: 1.0 (generated by SEMI 1.14.6 - "Maruoka") Content-Type: text/plain; charset=US-ASCII Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, 01 Mar 2021 15:47:46 +0100, Anton Yakovlev wrote: > > On 01.03.2021 14:32, Takashi Iwai wrote: > > On Mon, 01 Mar 2021 10:25:05 +0100, > > Anton Yakovlev wrote: > >> > >> On 28.02.2021 12:27, Takashi Iwai wrote: > >>> On Sat, 27 Feb 2021 09:59:52 +0100, > >>> Anton Yakovlev wrote: > >>>> +/** > >>>> + * virtsnd_pcm_event() - Handle the PCM device event notification. > >>>> + * @snd: VirtIO sound device. > >>>> + * @event: VirtIO sound event. > >>>> + * > >>>> + * Context: Interrupt context. > >>> > >>> OK, then nonatomic PCM flag is invalid... > >> > >> Well, no. Here, events are kind of independent entities. PCM-related > >> events are just a special case of more generic events, which can carry > >> any kind of notification/payload. (And at the moment, only XRUN > >> notification is supported for PCM substreams.) So it has nothing to do > >> with the atomicity of the PCM device itself. > > > > OK, thanks. > > > > Basically the only question is how snd_pcm_period_elapsed() is called. > > And I see that it's called inside queue->lock, and this already > > invalidates the nonatomic PCM mode. So the code needs the fix: either > > fix this locking (and the context is guaranteed not to be an irq > > context), or change to the normal PCM mode without nonatomic flag. > > Both would bring some side-effect, and we need further changes, I > > suppose... > > Ok, I understood the problem. Well, I would say the nonatomic PCM mode > is more important option, since in this mode we can guarantee the > correct operation of the device. Which operation (except for the resume) in the trigger and the pointer callbacks need the action that need to sleep? I thought the sync with the command queue is done in the sync_stop. And most of others seem already taking the spinlock in themselves, so the non-atomic operation has little merit for them. > And if you say, that we need to get rid > of irq context here, then probably workqueue for calling > snd_pcm_period_elapsed() should be fine (of course, it should be shared > between all available substreams). That would work, but it's of course just papering over it :) > >>>> +/** > >>>> + * virtsnd_pcm_sg_num() - Count the number of sg-elements required to represent > >>>> + * vmalloc'ed buffer. > >>>> + * @data: Pointer to vmalloc'ed buffer. > >>>> + * @length: Buffer size. > >>>> + * > >>>> + * Context: Any context. > >>>> + * Return: Number of physically contiguous parts in the @data. 
> >>>> +/**
> >>>> + * virtsnd_pcm_sg_num() - Count the number of sg-elements required to
> >>>> + *                        represent vmalloc'ed buffer.
> >>>> + * @data: Pointer to vmalloc'ed buffer.
> >>>> + * @length: Buffer size.
> >>>> + *
> >>>> + * Context: Any context.
> >>>> + * Return: Number of physically contiguous parts in the @data.
> >>>> + */
> >>>> +static int virtsnd_pcm_sg_num(u8 *data, unsigned int length)
> >>>> +{
> >>>> +	phys_addr_t sg_address;
> >>>> +	unsigned int sg_length;
> >>>> +	int num = 0;
> >>>> +
> >>>> +	while (length) {
> >>>> +		struct page *pg = vmalloc_to_page(data);
> >>>> +		phys_addr_t pg_address = page_to_phys(pg);
> >>>> +		size_t pg_length;
> >>>> +
> >>>> +		pg_length = PAGE_SIZE - offset_in_page(data);
> >>>> +		if (pg_length > length)
> >>>> +			pg_length = length;
> >>>> +
> >>>> +		if (!num || sg_address + sg_length != pg_address) {
> >>>> +			sg_address = pg_address;
> >>>> +			sg_length = pg_length;
> >>>> +			num++;
> >>>> +		} else {
> >>>> +			sg_length += pg_length;
> >>>> +		}
> >>>> +
> >>>> +		data += pg_length;
> >>>> +		length -= pg_length;
> >>>> +	}
> >>>> +
> >>>> +	return num;
> >>>> +}
> >>>> +
> >>>> +/**
> >>>> + * virtsnd_pcm_sg_from() - Build sg-list from vmalloc'ed buffer.
> >>>> + * @sgs: Preallocated sg-list to populate.
> >>>> + * @nsgs: The maximum number of elements in the @sgs.
> >>>> + * @data: Pointer to vmalloc'ed buffer.
> >>>> + * @length: Buffer size.
> >>>> + *
> >>>> + * Splits the buffer into physically contiguous parts and makes an
> >>>> + * sg-list of such parts.
> >>>> + *
> >>>> + * Context: Any context.
> >>>> + */
> >>>> +static void virtsnd_pcm_sg_from(struct scatterlist *sgs, int nsgs, u8 *data,
> >>>> +				unsigned int length)
> >>>> +{
> >>>> +	int idx = -1;
> >>>> +
> >>>> +	while (length) {
> >>>> +		struct page *pg = vmalloc_to_page(data);
> >>>> +		size_t pg_length;
> >>>> +
> >>>> +		pg_length = PAGE_SIZE - offset_in_page(data);
> >>>> +		if (pg_length > length)
> >>>> +			pg_length = length;
> >>>> +
> >>>> +		if (idx == -1 ||
> >>>> +		    sg_phys(&sgs[idx]) + sgs[idx].length != page_to_phys(pg)) {
> >>>> +			if (idx + 1 == nsgs)
> >>>> +				break;
> >>>> +			sg_set_page(&sgs[++idx], pg, pg_length,
> >>>> +				    offset_in_page(data));
> >>>> +		} else {
> >>>> +			sgs[idx].length += pg_length;
> >>>> +		}
> >>>> +
> >>>> +		data += pg_length;
> >>>> +		length -= pg_length;
> >>>> +	}
> >>>> +
> >>>> +	sg_mark_end(&sgs[idx]);
> >>>> +}
> >>>
> >>> Hmm, I thought there can be already a handy helper to convert vmalloc
> >>> to sglist, but apparently not.  It should have been trivial to get the
> >>> page list from vmalloc, e.g.
> >>>
> >>> int vmalloc_to_page_list(void *p, struct page **page_ret)
> >>> {
> >>> 	struct vmap_area *va;
> >>>
> >>> 	va = find_vmap_area((unsigned long)p);
> >>> 	if (!va)
> >>> 		return 0;
> >>> 	*page_ret = va->vm->pages;
> >>> 	return va->vm->nr_pages;
> >>> }
> >>>
> >>> Then you can set up the sg list in a single call from the given page
> >>> list.
> >>>
> >>> But it's just a cleanup, and let's mark it as a room for
> >>> improvements.
> >>
> >> Yeah, we can take a look into some kind of optimizations here. But I
> >> suspect the overall code will look similar. It is not enough just to
> >> get a list of pages; you also need to build a list of physically
> >> contiguous regions from it.
> >
> > I believe the standard helper does it.  But it's for sg_table, hence
> > the plain scatterlist needs to be extracted from there, but most of
> > the complex things are in the standard code.
> >
> > But it's merely an optimization and something for the future.
>
> I quickly checked it. I think it's hardly possible to do anything here.
> These functions to deal with vmalloc'ed areas are not exported. And,
> according to the comments, they require some proper locking on top of
> that. At least, it does not look like a trivial thing.

Sure, it needs a function exposed from vmalloc.c.
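Just to illustrate the sg_table side with what is exported today, a
completely untested sketch (the function name is made up; the per-page
vmalloc_to_page() lookups are exactly what a core helper would avoid):

/* Untested sketch: let sg_alloc_table_from_pages() do the coalescing
 * of physically contiguous pages instead of open-coding it in the
 * driver.
 */
static int virtsnd_pcm_sgt_from_vmalloc(struct sg_table *sgt, u8 *data,
					unsigned int length)
{
	unsigned int nr_pages =
		PAGE_ALIGN(offset_in_page(data) + length) >> PAGE_SHIFT;
	struct page **pages;
	unsigned int i;
	int ret;

	pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* collect the page of each PAGE_SIZE step of the buffer */
	for (i = 0; i < nr_pages; i++)
		pages[i] = vmalloc_to_page(data + i * PAGE_SIZE);

	ret = sg_alloc_table_from_pages(sgt, pages, nr_pages,
					offset_in_page(data), length,
					GFP_KERNEL);
	kvfree(pages);
	return ret;
}

(Note this sleeps, unlike the open-coded variants above, so it would
only fit the non-atomic paths.)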
But I don't think the locking is the problem, as find_vmap_area()
already takes care of it, and we don't release the vmalloc'ed pages
while invoking this function.

Of course I might be overlooking something, but my point is that this
kind of work should rather be in the core (or at least most of the
important steps should be in the core code).


Takashi