Date: Wed, 23 Mar 2022 09:15:19 +0100
From: Takashi Iwai
To: Amadeusz Sławiński
Cc: alsa-devel@alsa-project.org, Hu Jiahui, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] ALSA: pcm: Fix races among concurrent prepare and hw_params/hw_free calls
References: <20220322170720.3529-1-tiwai@suse.de> <20220322170720.3529-4-tiwai@suse.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 23 Mar 2022 09:08:25 +0100,
Amadeusz Sławiński wrote:
> 
> On 3/22/2022 6:07 PM, Takashi Iwai wrote:
> > Like the previous fixes to hw_params and hw_free ioctl races, we need
> > to paper over the concurrent prepare ioctl calls against hw_params and
> > hw_free, too.
> > 
> > This patch implements the locking with the existing
> > runtime->buffer_mutex for prepare ioctls.  Unlike the previous case
> > for snd_pcm_hw_hw_params() and snd_pcm_hw_free(), snd_pcm_prepare() is
> > performed to the linked streams, hence the lock can't be applied
> > simply on the top.  For tracking the lock in each linked substream, we
> > modify snd_pcm_action_group() slightly and apply the buffer_mutex for
> > the case stream_lock=false (formerly there was no lock applied)
> > there.
> > 
> > Cc: 
> > Signed-off-by: Takashi Iwai 
> > ---
> >  sound/core/pcm_native.c | 32 ++++++++++++++++++--------------
> >  1 file changed, 18 insertions(+), 14 deletions(-)
> > 
> > diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
> > index 266895374b83..0e4fbf5fd87b 100644
> > --- a/sound/core/pcm_native.c
> > +++ b/sound/core/pcm_native.c
> > @@ -1190,15 +1190,17 @@ struct action_ops {
> >  static int snd_pcm_action_group(const struct action_ops *ops,
> >  				struct snd_pcm_substream *substream,
> >  				snd_pcm_state_t state,
> > -				bool do_lock)
> > +				bool stream_lock)
> >  {
> >  	struct snd_pcm_substream *s = NULL;
> >  	struct snd_pcm_substream *s1;
> >  	int res = 0, depth = 1;
> >  
> >  	snd_pcm_group_for_each_entry(s, substream) {
> > -		if (do_lock && s != substream) {
> > -			if (s->pcm->nonatomic)
> > +		if (s != substream) {
> > +			if (!stream_lock)
> > +				mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> > +			else if (s->pcm->nonatomic)
> >  				mutex_lock_nested(&s->self_group.mutex, depth);
> >  			else
> >  				spin_lock_nested(&s->self_group.lock, depth);
> 
> Maybe
> 	if (!stream_lock)
> 		mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> 	else
> 		snd_pcm_group_lock(&s->self_group, s->pcm->nonatomic);
> ?

No, it must be nested locks with the given subclass.  That's why it
has been the open code beforehand, too.

> > @@ -1226,18 +1228,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
> >  		ops->post_action(s, state);
> >  	}
> >   _unlock:
> > -	if (do_lock) {
> > -		/* unlock streams */
> > -		snd_pcm_group_for_each_entry(s1, substream) {
> > -			if (s1 != substream) {
> > -				if (s1->pcm->nonatomic)
> > -					mutex_unlock(&s1->self_group.mutex);
> > -				else
> > -					spin_unlock(&s1->self_group.lock);
> > -			}
> > -			if (s1 == s)	/* end */
> > -				break;
> > +	/* unlock streams */
> > +	snd_pcm_group_for_each_entry(s1, substream) {
> > +		if (s1 != substream) {
> > +			if (!stream_lock)
> > +				mutex_unlock(&s1->runtime->buffer_mutex);
> > +			else if (s1->pcm->nonatomic)
> > +				mutex_unlock(&s1->self_group.mutex);
> > +			else
> > +				spin_unlock(&s1->self_group.lock);
> 
> And similarly to above, use snd_pcm_group_unlock() here?

This side would be possible to use that macro but it's still better
to have the consistent call pattern.


thanks,

Takashi
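
For readers unfamiliar with the group-locking pattern under discussion, below is a minimal
standalone userspace sketch that models it; it is not part of the thread or of the kernel
patch, and all names in it (fake_stream, fake_action, fake_group_action) are hypothetical.
It reproduces only the traversal pattern of snd_pcm_action_group() -- lock every group
member except the caller's already-held stream, act on each member, then unlock exactly the
members that were locked, in the same order.  The lockdep subclass argument that makes
mutex_lock_nested()/spin_lock_nested() necessary in the kernel has no pthread equivalent,
so that part is not shown.

/*
 * Userspace analogy only -- not kernel code.  Models the lock/act/unlock
 * traversal over a group of linked streams.
 */
#include <pthread.h>
#include <stdio.h>

struct fake_stream {
	pthread_mutex_t lock;
	int id;
};

static int fake_action(struct fake_stream *s)
{
	printf("acting on stream %d\n", s->id);
	return 0;			/* a negative value would abort the group action */
}

static int fake_group_action(struct fake_stream *group, int n, int caller)
{
	int i, locked = -1, err = 0;

	/*
	 * Lock and act on each member.  The caller's own stream is not
	 * locked here because it is assumed to be held already by the
	 * caller (as in the kernel code).
	 */
	for (i = 0; i < n; i++) {
		if (i != caller)
			pthread_mutex_lock(&group[i].lock);
		locked = i;
		err = fake_action(&group[i]);
		if (err < 0)
			break;
	}

	/* Unlock only what was actually locked, in the same traversal order. */
	for (i = 0; i <= locked; i++)
		if (i != caller)
			pthread_mutex_unlock(&group[i].lock);

	return err;
}

int main(void)
{
	struct fake_stream group[3] = {
		{ PTHREAD_MUTEX_INITIALIZER, 0 },
		{ PTHREAD_MUTEX_INITIALIZER, 1 },
		{ PTHREAD_MUTEX_INITIALIZER, 2 },
	};

	return fake_group_action(group, 3, 0) < 0;
}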