Date: Fri, 8 Sep 2017 17:07:24 -0700 (PDT)
From: Stefano Stabellini
To: Boris Ostrovsky
Cc: Stefano Stabellini, xen-devel@lists.xen.org, linux-kernel@vger.kernel.org, jgross@suse.com
Subject: Re: [PATCH v3 02/13] xen/pvcalls: implement frontend disconnect
References: <1501541855-7354-1-git-send-email-sstabellini@kernel.org>
 <1501541855-7354-2-git-send-email-sstabellini@kernel.org>

On Fri, 11 Aug 2017, Boris Ostrovsky wrote:
> On 07/31/2017 06:57 PM, Stefano Stabellini wrote:
> > Introduce a data structure named pvcalls_bedata. It contains pointers to
> > the command ring, the event channel, a list of active sockets and a list
> > of passive sockets. List accesses are protected by a spin_lock.
> >
> > Introduce a waitqueue to allow waiting for a response on commands sent
> > to the backend.
> >
> > Introduce an array of struct xen_pvcalls_response to store command
> > responses.
> >
> > Implement the pvcalls frontend removal function. Go through the list of
> > active and passive sockets and free them all, one at a time.
> >
> > Signed-off-by: Stefano Stabellini
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-front.c | 51 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 51 insertions(+)
> >
> > diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> > index a8d38c2..a126195 100644
> > --- a/drivers/xen/pvcalls-front.c
> > +++ b/drivers/xen/pvcalls-front.c
> > @@ -20,6 +20,29 @@
> >  #include
> >  #include
> >
> > +#define PVCALLS_INVALID_ID UINT_MAX
> > +#define PVCALLS_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
> > +#define PVCALLS_NR_REQ_PER_RING __CONST_RING_SIZE(xen_pvcalls, XEN_PAGE_SIZE)
> > +
> > +struct pvcalls_bedata {
> > +	struct xen_pvcalls_front_ring ring;
> > +	grant_ref_t ref;
> > +	int irq;
> > +
> > +	struct list_head socket_mappings;
> > +	struct list_head socketpass_mappings;
> > +	spinlock_t pvcallss_lock;
>
> In the backend this is called socket_lock and (subjectively) it would
> sound like a better name here too.
I'll rename it.

> > +
> > +	wait_queue_head_t inflight_req;
> > +	struct xen_pvcalls_response rsp[PVCALLS_NR_REQ_PER_RING];
> > +};
> > +static struct xenbus_device *pvcalls_front_dev;
> > +
> > +static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
> > +{
> > +	return IRQ_HANDLED;
> > +}
> > +
> >  static const struct xenbus_device_id pvcalls_front_ids[] = {
> >  	{ "pvcalls" },
> >  	{ "" }
> > @@ -27,6 +50,34 @@
> >
> >  static int pvcalls_front_remove(struct xenbus_device *dev)
> >  {
> > +	struct pvcalls_bedata *bedata;
> > +	struct sock_mapping *map = NULL, *n;
> > +
> > +	bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
> > +
> > +	list_for_each_entry_safe(map, n, &bedata->socket_mappings, list) {
> > +		mutex_lock(&map->active.in_mutex);
> > +		mutex_lock(&map->active.out_mutex);
> > +		pvcalls_front_free_map(bedata, map);
> > +		mutex_unlock(&map->active.out_mutex);
> > +		mutex_unlock(&map->active.in_mutex);
> > +		kfree(map);
>
> I think this is the same issue as the one discussed for another patch
> --- unlocking and then immediately freeing a lock.

Yes, I'll fix this too.

> > +	}
> > +	list_for_each_entry_safe(map, n, &bedata->socketpass_mappings, list) {
> > +		spin_lock(&bedata->pvcallss_lock);
> > +		list_del_init(&map->list);
> > +		spin_unlock(&bedata->pvcallss_lock);
> > +		kfree(map);
> > +	}
> > +	if (bedata->irq > 0)
> > +		unbind_from_irqhandler(bedata->irq, dev);
> > +	if (bedata->ref >= 0)
> > +		gnttab_end_foreign_access(bedata->ref, 0, 0);
> > +	kfree(bedata->ring.sring);
> > +	kfree(bedata);
> > +	dev_set_drvdata(&dev->dev, NULL);
> > +	xenbus_switch_state(dev, XenbusStateClosed);
>
> Should we first move the state to Closed and then free things up? Or
> does it not matter?

I believe that is already done by the xenbus driver: this function is
supposed to be called after the frontend state is set to Closing.

> > +	pvcalls_front_dev = NULL;
> >  	return 0;
> >  }
> >
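For the unlock-then-free problem flagged above, one possible shape for
the fix is a global reference count that remove() waits on before
tearing anything down. The sketch below only illustrates that idea and
is not the code that will be posted: pvcalls_refcount, pvcalls_enter()
and pvcalls_exit() are hypothetical names that do not exist in this
patch.

/*
 * Sketch only.  Every code path that dereferences bedata or a
 * sock_mapping would bracket the access with pvcalls_enter() /
 * pvcalls_exit().  Once pvcalls_front_dev is cleared and the count has
 * dropped to zero, no CPU can be sleeping on (or about to take) a
 * mutex embedded in memory we are going to free, so the lock/unlock
 * dance before kfree() becomes unnecessary.
 */
static atomic_t pvcalls_refcount = ATOMIC_INIT(0);

#define pvcalls_enter()	atomic_inc(&pvcalls_refcount)
#define pvcalls_exit()	atomic_dec(&pvcalls_refcount)

static int pvcalls_front_remove(struct xenbus_device *dev)
{
	struct pvcalls_bedata *bedata;
	struct sock_mapping *map = NULL, *n;

	bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
	dev_set_drvdata(&dev->dev, NULL);
	pvcalls_front_dev = NULL;

	/* Wait until no other CPU is inside a pvcalls operation. */
	smp_mb();
	while (atomic_read(&pvcalls_refcount) > 0)
		cpu_relax();

	list_for_each_entry_safe(map, n, &bedata->socket_mappings, list) {
		pvcalls_front_free_map(bedata, map);
		kfree(map);
	}
	list_for_each_entry_safe(map, n, &bedata->socketpass_mappings, list)
		kfree(map);

	/* irq, grant reference, ring and bedata freed as in the patch. */

	return 0;
}

The trade-off is that every fast-path operation pays an atomic
inc/dec, but disconnect is rare and this keeps the teardown path free
of per-object locking entirely.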