Date: Tue, 26 Sep 2017 09:52:41 -0700
From: Dmitry Torokhov
To: Pankaj Gupta
Cc: Amos Kong, Herbert Xu, Rusty Russell, Michael Buesch, Matt Mackall, Amit Shah, linux-crypto@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org, lkml
Subject: Re: [PATCH v5 REPOST 1/6] hw_random: place mutex around read functions and buffers.

On Tue, Sep 26, 2017 at 02:36:57AM -0400, Pankaj Gupta wrote:
> > A bit late to the party, but:
> >
> > On Mon, Dec 8, 2014 at 12:50 AM, Amos Kong wrote:
> > > From: Rusty Russell
> > >
> > > There's currently a big lock around everything, and it means that we
> > > can't query sysfs (e.g. /sys/devices/virtual/misc/hw_random/rng_current)
> > > while the rng is reading. This is a real problem when the rng is slow,
> > > or blocked (e.g. virtio_rng with qemu's default /dev/random backend).
> > >
> > > This doesn't help with that (it leaves the current lock untouched); it
> > > just adds a lock to protect the read function and the static buffers,
> > > in preparation for the transition.
> > >
> > > Signed-off-by: Rusty Russell
> > > ---
> > ...
> > >
> > > @@ -160,13 +166,14 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> > >  			goto out_unlock;
> > >  		}
> > >
> > > +		mutex_lock(&reading_mutex);
> >
> > I think this breaks O_NONBLOCK: we have the hwrng core thread that
> > constantly pumps the underlying rng for data; the thread takes the
> > mutex and calls rng_get_data(), which blocks until the RNG responds.
> > This means that even if the user specified O_NONBLOCK, we'll be
> > waiting until the [hwrng] thread releases reading_mutex before we can
> > continue.
>
> I think for 'virtio_rng' with O_NONBLOCK, 'rng_get_data' returns
> without waiting for data, which would let the mutex be taken by other
> waiting threads, if any?
>
> rng_dev_read
>   rng_get_data
>     virtio_read

As I said in the paragraph above, the code that potentially holds the
mutex for a long time is the thread in the hwrng core: hwrng_fillfn().
Because it calls rng_get_data() with the "wait" argument == 1, it may
block while holding reading_mutex, which, in turn, will block
rng_dev_read(), even if that was called with O_NONBLOCK.

Thanks.

--
Dmitry
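
To make the failure mode concrete, below is a stripped-down sketch of
the two paths under discussion. It is modeled on
drivers/char/hw_random/core.c of that era; the function names
(hwrng_fillfn, rng_dev_read, rng_get_data) are real, but the bodies and
the extern declarations are simplified assumptions, not the actual
implementations:

#include <linux/mutex.h>
#include <linux/kthread.h>
#include <linux/fs.h>
#include <linux/errno.h>

struct hwrng;

/* Simplified stand-ins for symbols defined elsewhere in core.c. */
extern struct hwrng *current_rng;
extern u8 *rng_fillbuf;
extern size_t rng_buffer_size(void);
extern int rng_get_data(struct hwrng *rng, u8 *buf, size_t size, int wait);

static DEFINE_MUTEX(reading_mutex);

/* Kernel thread that keeps the input entropy pool topped up. */
static int hwrng_fillfn(void *unused)
{
	while (!kthread_should_stop()) {
		mutex_lock(&reading_mutex);
		/*
		 * wait == 1: this call may sleep indefinitely (e.g.
		 * virtio_rng backed by a starved /dev/random) while
		 * reading_mutex is still held.
		 */
		rng_get_data(current_rng, rng_fillbuf,
			     rng_buffer_size(), 1);
		mutex_unlock(&reading_mutex);
		/* ... mix the bytes into the input pool ... */
	}
	return 0;
}

static ssize_t rng_dev_read(struct file *filp, char __user *buf,
			    size_t size, loff_t *offp)
{
	/*
	 * Sleeps until hwrng_fillfn() drops the mutex, even when the
	 * caller opened the device with O_NONBLOCK.
	 */
	mutex_lock(&reading_mutex);
	/* ... rng_get_data(), copy_to_user(), etc. ... */
	mutex_unlock(&reading_mutex);
	return size;
}

The O_NONBLOCK flag never reaches the lock acquisition, so a
non-blocking reader can still sleep for as long as the fill thread's
rng_get_data() call takes.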
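
One way a non-blocking reader could avoid that sleep (a sketch of a
possible fix, not the patch that was eventually merged) is to take the
mutex with mutex_trylock() when O_NONBLOCK is set and return -EAGAIN if
the fill thread currently holds it:

static ssize_t rng_dev_read(struct file *filp, char __user *buf,
			    size_t size, loff_t *offp)
{
	if (filp->f_flags & O_NONBLOCK) {
		/* Fill thread is mid-read; report "try again later". */
		if (!mutex_trylock(&reading_mutex))
			return -EAGAIN;
	} else if (mutex_lock_interruptible(&reading_mutex)) {
		return -ERESTARTSYS;	/* interrupted by a signal */
	}

	/*
	 * ... rng_get_data() called with wait == !(filp->f_flags &
	 * O_NONBLOCK), then copy_to_user() ...
	 */

	mutex_unlock(&reading_mutex);
	return size;
}

Whether -EAGAIN is the right answer here, rather than returning any
already-buffered bytes first, is a policy question the thread does not
settle.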