Date: Tue, 26 Sep 2017 02:36:57 -0400 (EDT)
From: Pankaj Gupta
To: Dmitry Torokhov
Cc: Amos Kong, linux-crypto@vger.kernel.org,
	virtualization@lists.linux-foundation.org, Herbert Xu,
	Rusty Russell, kvm@vger.kernel.org, Michael Buesch,
	Matt Mackall, amit shah, lkml
Message-ID: <369186187.14365871.1506407817157.JavaMail.zimbra@redhat.com>
References: <1418028640-4891-1-git-send-email-akong@redhat.com>
	<1418028640-4891-2-git-send-email-akong@redhat.com>
Subject: Re: [PATCH v5 REPOST 1/6] hw_random: place mutex around read functions and buffers.

>
> A bit late to a party, but:
>
> On Mon, Dec 8, 2014 at 12:50 AM, Amos Kong wrote:
> > From: Rusty Russell
> >
> > There's currently a big lock around everything, and it means that we
> > can't query sysfs (eg /sys/devices/virtual/misc/hw_random/rng_current)
> > while the rng is reading. This is a real problem when the rng is slow,
> > or blocked (eg. virtio_rng with qemu's default /dev/random backend)
> >
> > This doesn't help (it leaves the current lock untouched), just adds a
> > lock to protect the read function and the static buffers, in preparation
> > for transition.
> >
> > Signed-off-by: Rusty Russell
> > ---
> ...
> >
> > @@ -160,13 +166,14 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> >                 goto out_unlock;
> >         }
> >
> > +       mutex_lock(&reading_mutex);
>
> I think this breaks O_NONBLOCK: we have a hwrng core thread that
> constantly pumps the underlying rng for data; the thread takes the mutex
> and calls rng_get_data(), which blocks until the RNG responds. This means
> that even if the user specified O_NONBLOCK here, we'll be waiting until
> the [hwrng] thread releases reading_mutex before we can continue.

I think for 'virtio_rng', with O_NONBLOCK, rng_get_data() returns without
waiting for data, which lets the mutex be taken by other waiting threads,
if any?
rng_dev_read -> rng_get_data -> virtio_read:

static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
{
	int ret;
	struct virtrng_info *vi = (struct virtrng_info *)rng->priv;

	if (vi->hwrng_removed)
		return -ENODEV;

	if (!vi->busy) {
		vi->busy = true;
		init_completion(&vi->have_data);
		register_buffer(vi, buf, size);
	}

	if (!wait)
		return 0;

	ret = wait_for_completion_killable(&vi->have_data);
	if (ret < 0)
		return ret;

	vi->busy = false;

	return vi->data_avail;
}

> >         if (!data_avail) {
> >                 bytes_read = rng_get_data(current_rng, rng_buffer,
> >                                           rng_buffer_size(),
> >                                           !(filp->f_flags & O_NONBLOCK));
> >                 if (bytes_read < 0) {
> >                         err = bytes_read;
> > -                       goto out_unlock;
> > +                       goto out_unlock_reading;
> >                 }
> >                 data_avail = bytes_read;
> >         }
>
> Thanks.
>
> --
> Dmitry
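
For context, a condensed sketch of the two code paths being discussed. This
is not the actual drivers/char/hw_random/core.c source: the bodies are
simplified and the fill-thread loop is reconstructed from Dmitry's
description of the hwrng core thread. It illustrates why an O_NONBLOCK
reader can still sleep here: mutex_lock(&reading_mutex) itself blocks while
the fill thread holds the mutex across a waiting rng_get_data() call, and
only the wait argument passed down to the driver honours O_NONBLOCK.

/* Sketch only -- simplified, based on the discussion above. */

/* hwrng core fill thread, per Dmitry's description */
static int hwrng_fillfn(void *unused)
{
	while (!kthread_should_stop()) {
		mutex_lock(&reading_mutex);
		/* wait == 1: may block until the backend produces data,
		 * e.g. virtio_rng backed by QEMU's default /dev/random */
		rng_get_data(current_rng, rng_fillbuf, rng_buffer_size(), 1);
		mutex_unlock(&reading_mutex);
		/* feed the buffer into the entropy pool, then sleep ... */
	}
	return 0;
}

/* read(2) path, condensed from the hunks quoted above */
static ssize_t rng_dev_read(struct file *filp, char __user *buf,
			    size_t size, loff_t *offp)
{
	ssize_t bytes_read;

	/* Sleeps while the fill thread holds reading_mutex, regardless of
	 * O_NONBLOCK; only the rng_get_data() call below honours the flag. */
	mutex_lock(&reading_mutex);
	bytes_read = rng_get_data(current_rng, rng_buffer,
				  rng_buffer_size(),
				  !(filp->f_flags & O_NONBLOCK));
	mutex_unlock(&reading_mutex);

	/* error handling and copy_to_user() omitted */
	return bytes_read;
}

In those terms, Pankaj's observation is about the driver end of the chain:
with wait == 0, virtio_read() only registers the buffer and returns 0, so a
non-blocking rng_get_data() call releases reading_mutex quickly. Dmitry's
concern is one step earlier: the reader may still block in mutex_lock()
while the fill thread's wait == 1 pass is in progress.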