Received: by 2002:a05:6a10:8c0a:0:0:0:0 with SMTP id go10csp3248570pxb; Mon, 1 Mar 2021 05:35:02 -0800 (PST) X-Google-Smtp-Source: ABdhPJwigvGUqE534VqG8JgsHMXkOf3PBZjquFR0OVKYDDIjQKR1FcZ49JbmWGmGxX1L2vPiz51V X-Received: by 2002:a17:907:3f98:: with SMTP id hr24mr15811479ejc.429.1614605702168; Mon, 01 Mar 2021 05:35:02 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1614605702; cv=none; d=google.com; s=arc-20160816; b=Q5SCg/gIuv19j2g0obQ77Ay0FLh2gI9kv24Bfl4bGCeMxxO3L8jxbhT0tctMrMe0u7 rQkyVdp+blK9GRwULokbh/bQrJY7SaMnG9qlWZ6i1EAnt1ZR5lPoBVi6tPawTYk1/Fp6 NcGlK8dz9fE6Qekp4WGe52bH11GwWxdNlU2UTg6ZMHoKf/wFaEvgEQAgzNwLaGaUo03K lYj6EeNzJxBqTq6vMPtpSdmTzBTwu/RZAuKmGZ3auiwUfcHxfg7CcggzvG2JCmy7gAYM 5cAm7rsfwHxQSn4e1pYHOuew9LSsotFqId7HmJIky6yZqj/MUI6AzRy6TI20uvlW9ORb aXAQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:user-agent:references:in-reply-to :subject:cc:to:from:message-id:date; bh=puffAYANwa3KVNn1P/p0yePuhzu1253GPPYNEws1mmk=; b=yddU1nbJM3ILKfm1i2lD761HqMZHvNqcYzHqshcUF5WByWMzMUDvG7hsBHlcYxQAi9 GLcRjpX5h4xOwIKWezeL1LYKyfuE2XDSEJr/Qq0e6BzEK46L5F0ZgOwbwxMWOWtJ3nJQ 81ju86AglIHttm1QwykM4yZf5yTHso03mN+TOWxNfQiwPYj2VEMo0CZw1wMXPJ78qp+g vyh+5U28/Zq7luL+IIBwcynPmAac/6MRtiwZSNQ3pOD43k9037PLjVWVKZUy/WRBok1D +ZkiWzsR38BWaz4pVe7hzxo/FIxEpQoH2flc7cquPtMzu2zU1FWO/IgyUVLY4tdeGlGK Cx6A== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [23.128.96.18]) by mx.google.com with ESMTP id he14si11178159ejc.274.2021.03.01.05.34.39; Mon, 01 Mar 2021 05:35:02 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235743AbhCANdC (ORCPT + 99 others); Mon, 1 Mar 2021 08:33:02 -0500 Received: from mx2.suse.de ([195.135.220.15]:38926 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235776AbhCANcq (ORCPT ); Mon, 1 Mar 2021 08:32:46 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id A073DAB8C; Mon, 1 Mar 2021 13:32:04 +0000 (UTC) Date: Mon, 01 Mar 2021 14:32:04 +0100 Message-ID: From: Takashi Iwai To: Anton Yakovlev Cc: , , , "Michael S. Tsirkin" , Jaroslav Kysela , Takashi Iwai , Subject: Re: [PATCH v6 5/9] ALSA: virtio: handling control and I/O messages for the PCM device In-Reply-To: References: <20210227085956.1700687-1-anton.yakovlev@opensynergy.com> <20210227085956.1700687-6-anton.yakovlev@opensynergy.com> User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI/1.14.6 (Maruoka) FLIM/1.14.9 (=?UTF-8?B?R29qxY0=?=) APEL/10.8 Emacs/25.3 (x86_64-suse-linux-gnu) MULE/6.0 (HANACHIRUSATO) MIME-Version: 1.0 (generated by SEMI 1.14.6 - "Maruoka") Content-Type: text/plain; charset=US-ASCII Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, 01 Mar 2021 10:25:05 +0100, Anton Yakovlev wrote: > > On 28.02.2021 12:27, Takashi Iwai wrote: > > On Sat, 27 Feb 2021 09:59:52 +0100, > > Anton Yakovlev wrote: > >> +/** > >> + * virtsnd_pcm_event() - Handle the PCM device event notification. > >> + * @snd: VirtIO sound device. > >> + * @event: VirtIO sound event. > >> + * > >> + * Context: Interrupt context. > > > > OK, then nonatomic PCM flag is invalid... > > Well, no. Here, events are kind of independent entities. PCM-related > events are just a special case of more generic events, which can carry > any kind of notification/payload. (And at the moment, only XRUN > notification is supported for PCM substreams.) So it has nothing to do > with the atomicity of the PCM device itself. OK, thanks. Basically the only question is how snd_pcm_period_elapsed() is called. And I see that it's called inside queue->lock, and this already invalidates the nonatomic PCM mode. So the code needs the fix: either fix this locking (and the context is guaranteed not to be an irq context), or change to the normal PCM mode without nonatomic flag. Both would bring some side-effect, and we need further changes, I suppose... > >> +/** > >> + * virtsnd_pcm_sg_num() - Count the number of sg-elements required to represent > >> + * vmalloc'ed buffer. > >> + * @data: Pointer to vmalloc'ed buffer. > >> + * @length: Buffer size. > >> + * > >> + * Context: Any context. > >> + * Return: Number of physically contiguous parts in the @data. > >> + */ > >> +static int virtsnd_pcm_sg_num(u8 *data, unsigned int length) > >> +{ > >> + phys_addr_t sg_address; > >> + unsigned int sg_length; > >> + int num = 0; > >> + > >> + while (length) { > >> + struct page *pg = vmalloc_to_page(data); > >> + phys_addr_t pg_address = page_to_phys(pg); > >> + size_t pg_length; > >> + > >> + pg_length = PAGE_SIZE - offset_in_page(data); > >> + if (pg_length > length) > >> + pg_length = length; > >> + > >> + if (!num || sg_address + sg_length != pg_address) { > >> + sg_address = pg_address; > >> + sg_length = pg_length; > >> + num++; > >> + } else { > >> + sg_length += pg_length; > >> + } > >> + > >> + data += pg_length; > >> + length -= pg_length; > >> + } > >> + > >> + return num; > >> +} > >> + > >> +/** > >> + * virtsnd_pcm_sg_from() - Build sg-list from vmalloc'ed buffer. > >> + * @sgs: Preallocated sg-list to populate. > >> + * @nsgs: The maximum number of elements in the @sgs. > >> + * @data: Pointer to vmalloc'ed buffer. > >> + * @length: Buffer size. > >> + * > >> + * Splits the buffer into physically contiguous parts and makes an sg-list of > >> + * such parts. > >> + * > >> + * Context: Any context. > >> + */ > >> +static void virtsnd_pcm_sg_from(struct scatterlist *sgs, int nsgs, u8 *data, > >> + unsigned int length) > >> +{ > >> + int idx = -1; > >> + > >> + while (length) { > >> + struct page *pg = vmalloc_to_page(data); > >> + size_t pg_length; > >> + > >> + pg_length = PAGE_SIZE - offset_in_page(data); > >> + if (pg_length > length) > >> + pg_length = length; > >> + > >> + if (idx == -1 || > >> + sg_phys(&sgs[idx]) + sgs[idx].length != page_to_phys(pg)) { > >> + if (idx + 1 == nsgs) > >> + break; > >> + sg_set_page(&sgs[++idx], pg, pg_length, > >> + offset_in_page(data)); > >> + } else { > >> + sgs[idx].length += pg_length; > >> + } > >> + > >> + data += pg_length; > >> + length -= pg_length; > >> + } > >> + > >> + sg_mark_end(&sgs[idx]); > >> +} > > > > Hmm, I thought there can be already a handy helper to convert vmalloc > > to sglist, but apparently not. It should have been trivial to get the > > page list from vmalloc, e.g. > > > > int vmalloc_to_page_list(void *p, struct page **page_ret) > > { > > struct vmap_area *va; > > > > va = find_vmap_area((unsigned long)p); > > if (!va) > > return 0; > > *page_ret = va->vm->pages; > > return va->vm->nr_pages; > > } > > > > Then you can set up the sg list in a single call from the given page > > list. > > > > But it's just a cleanup, and let's mark it as a room for > > improvements. > > Yeah, we can take a look into some kind of optimizations here. But I > suspect, the overall code will look similar. It is not enough just to > get a list of pages, you also need to build a list of physically > contiguous regions from it. I believe the standard helper does it. But it's for sg_table, hence the plain scatterlist needs to be extracted from there, but most of complex things are in the standard code. But it's merely an optimization and something for future. Takashi