Date: Wed, 14 Jun 2017 14:03:08 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Boris Ostrovsky
Cc: Stefano Stabellini, xen-devel@lists.xen.org, linux-kernel@vger.kernel.org, jgross@suse.com
Subject: Re: [PATCH v3 06/18] xen/pvcalls: handle commands from the frontend
In-Reply-To: <3f119021-c866-5138-fe4b-263befa377b2@oracle.com>
References: <1496431915-20774-1-git-send-email-sstabellini@kernel.org> <1496431915-20774-6-git-send-email-sstabellini@kernel.org> <3f119021-c866-5138-fe4b-263befa377b2@oracle.com>

On Mon, 12 Jun 2017, Boris Ostrovsky wrote:
> > +
> >  static void pvcalls_back_work(struct work_struct *work)
> >  {
> > +	struct pvcalls_fedata *priv = container_of(work,
> > +		struct pvcalls_fedata, register_work);
> > +	int notify, notify_all = 0, more = 1;
> > +	struct xen_pvcalls_request req;
> > +	struct xenbus_device *dev = priv->dev;
> > +
> > +	while (more) {
> > +		while (RING_HAS_UNCONSUMED_REQUESTS(&priv->ring)) {
> > +			RING_COPY_REQUEST(&priv->ring,
> > +					  priv->ring.req_cons++,
> > +					  &req);
> > +
> > +			if (!pvcalls_back_handle_cmd(dev, &req)) {
> > +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
> > +					&priv->ring, notify);
> > +				notify_all += notify;
> > +			}
> > +		}
> > +
> > +		if (notify_all)
> > +			notify_remote_via_irq(priv->irq);
> > +
> > +		RING_FINAL_CHECK_FOR_REQUESTS(&priv->ring, more);
> > +	}
> >  }
> >
> >  static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> >  {
> > +	struct xenbus_device *dev = dev_id;
> > +	struct pvcalls_fedata *priv = NULL;
> > +
> > +	if (dev == NULL)
> > +		return IRQ_HANDLED;
> > +
> > +	priv = dev_get_drvdata(&dev->dev);
> > +	if (priv == NULL)
> > +		return IRQ_HANDLED;
> > +
> > +	/*
> > +	 * TODO: a small theoretical race exists if we try to queue work
> > +	 * after pvcalls_back_work checked for final requests and before
> > +	 * it returns. The queuing will fail, and pvcalls_back_work
> > +	 * won't do the work because it is about to return. In that
> > +	 * case, we lose the notification.
> > +	 */
> > +	queue_work(priv->wq, &priv->register_work);
>
> Would queuing delayed work (if queue_work() failed) help? And canceling
> it on next invocation of pvcalls_back_event()?

Looking at the implementations of queue_delayed_work_on and queue_work_on,
it looks like that if queue_work fails, queue_delayed_work would fail too:
they both test WORK_STRUCT_PENDING_BIT.