Date: Wed, 14 Feb 2018 17:43:09 +0100
From: Takashi Iwai
To: Ben Hutchings
Cc: stable@vger.kernel.org, Greg Kroah-Hartman, LKML
Subject: Re: [PATCH 4.4 20/87] ALSA: pcm: Allow aborting mutex lock at OSS read/write loops
In-Reply-To: <1518625223.3422.19.camel@codethink.co.uk>
References: <20180115123349.252309699@linuxfoundation.org>
	<20180115123351.309626066@linuxfoundation.org>
	<1516750548.3417.34.camel@codethink.co.uk>
	<1518625223.3422.19.camel@codethink.co.uk>

On Wed, 14 Feb 2018 17:20:23 +0100,
Ben Hutchings wrote:
> 
> On Mon, 2018-02-12 at 09:34 +0100, Takashi Iwai wrote:
> > On Wed, 24 Jan 2018 00:35:48 +0100,
> > Ben Hutchings wrote:
> > > 
> > > On Mon, 2018-01-15 at 13:34 +0100, Greg Kroah-Hartman wrote:
> > > > 4.4-stable review patch.  If anyone has any objections, please let me know.
> > > > 
> > > > ------------------
> > > > 
> > > > From: Takashi Iwai
> > > > 
> > > > commit 900498a34a3ac9c611e9b425094c8106bdd7dc1c upstream.
> > > > 
> > > > The PCM OSS read/write loops keep holding the mutex lock for the
> > > > whole read/write, which can take very long when an exceptionally
> > > > large amount of data is given.  Also, since the lock is taken with
> > > > mutex_lock(), a concurrent read/write becomes unbreakable.
> > > > 
> > > > This patch addresses these issues by replacing mutex_lock() with
> > > > mutex_lock_interruptible(), and by dropping / re-taking the lock at
> > > > each read/write period chunk, so that the context can be switched
> > > > more finely if requested.
> > > 
> > > [...]
> > > > @@ -1414,18 +1417,18 @@ static ssize_t snd_pcm_oss_write1(struct
> > > >  			xfer += tmp;
> > > >  			if ((substream->f_flags & O_NONBLOCK) != 0 &&
> > > >  			    tmp != runtime->oss.period_bytes)
> > > > -				break;
> > > > +				tmp = -EAGAIN;
> > > >  		}
> > > > + err:
> > > > +		mutex_unlock(&runtime->oss.params_lock);
> > > > +		if (tmp < 0)
> > > > +			break;
> > > >  		if (signal_pending(current)) {
> > > >  			tmp = -ERESTARTSYS;
> > > > -			goto err;
> > > > +			break;
> > > >  		}
> > > > +		tmp = 0;
> > > >  	}
> > > > -	mutex_unlock(&runtime->oss.params_lock);
> > > > -	return xfer;
> > > > -
> > > > - err:
> > > > -	mutex_unlock(&runtime->oss.params_lock);
> > > >  	return xfer > 0 ? (snd_pcm_sframes_t)xfer : tmp;
> > > >  }
> > > 
> > > [...]
> > > 
> > > Some of the "goto err" statements in the loop are conditional on
> > > tmp <= 0, but if tmp == 0 this will no longer terminate the loop.
> > > Is that intentional or a bug?
> > 
> > The patch rather fixes an endless loop: the signal_pending() check is
> > added after the goto err target, so that the loop is aborted properly.
> 
> Let me rephrase then: if snd_pcm_oss_write2() returns 0, does that
> imply that signal_pending() is true?  If there is any other reason that
> it could return 0, then this appears to introduce a bug.

Under some conditions (depending on the plugin / conversion and a
partial write) it may return zero, but in practice this doesn't happen
anywhere in this loop.
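
To make the control flow concrete, here is a condensed sketch of the
snd_pcm_oss_write1() loop after the patch; the real function has more
branches for partial-chunk buffering, and this keeps only the paths
relevant to the question above:

	tmp = 0;
	while (bytes > 0) {
		if (mutex_lock_interruptible(&runtime->oss.params_lock)) {
			tmp = -ERESTARTSYS;
			break;
		}
		/* write one period chunk; may return <= 0 */
		tmp = snd_pcm_oss_write2(substream, buf,
					 runtime->oss.period_bytes);
		if (tmp <= 0)
			goto err;	/* tmp == 0 is not treated as fatal */
		buf += tmp;
		bytes -= tmp;
		xfer += tmp;
 err:
		mutex_unlock(&runtime->oss.params_lock); /* dropped per chunk */
		if (tmp < 0)
			break;		/* real error: abort the loop */
		if (signal_pending(current)) {
			tmp = -ERESTARTSYS;
			break;		/* a signal aborts even when tmp == 0 */
		}
		tmp = 0;		/* tmp == 0: loop around and retry */
	}
	return xfer > 0 ? (snd_pcm_sframes_t)xfer : tmp;

So a zero return from snd_pcm_oss_write2() only makes the loop retry
until either the next chunk makes progress or a signal arrives, which
is exactly the case in question.


Takashi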