Date: Fri, 2 Feb 2018 09:54:28 +0100
From: Eduardo Otubo
To: Oleksandr Andrushchenko
Cc: xen-devel@lists.xenproject.org, jgross@suse.com, wei.liu2@citrix.com,
        netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
        paul.durrant@citrix.com, cheshi@redhat.com, vkuznets@redhat.com,
        mgamal@redhat.com, cavery@redhat.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCHv2] xen-netfront: remove warning when unloading module
Message-ID: <20180202085428.GA26899@vader>
References: <20171123141835.5820-1-otubo@redhat.com>
User-Agent: Mutt/1.8.3+47 (5f034395e53d) (2017-05-23)

On Wed, Jan 31, 2018 at 05:00:23PM +0200, Oleksandr Andrushchenko wrote:
> Hi, Eduardo!
>
> I am working on a frontend driver (PV DRM) and am also seeing some strange
> things on driver unloading:
>
> xt# rmmod -f drm_xen_front.ko
> [ 3236.462497] [drm] Unregistering XEN PV vdispl
> [ 3236.485745] [drm:xen_drv_remove [drm_xen_front]] *ERROR* Backend state is
> InitWait while removing driver
> [ 3236.486950] vdispl vdispl-0: 22 freeing event channel 11
> [ 3236.496123] vdispl vdispl-0: failed to write error node for
> device/vdispl/0 (22 freeing event channel 11)
> [ 3236.496271] vdispl vdispl-0: 22 freeing event channel 12
> [ 3236.501633] vdispl vdispl-0: failed to write error node for
> device/vdispl/0 (22 freeing event channel 12)
>
> These are somewhat different from your use-case with grant references, but
> I have a question: do you really see the XenbusStateClosed and
> XenbusStateClosing transitions? In my driver I can't see those, and when I
> dug deeper into the problem I saw that on driver removal the device is
> disconnected from XenBus, so no backend state change events come in via
> the .otherend_changed callback. The only other difference I see here is
> that my backend is a user-space application.
>
> Thank you,
> Oleksandr

To be honest, most of my assumptions here came from discussions with the
maintainers on IRC. I wrote the code based on those assumptions, and the
behavior I observed matched them. But if you find something else that looks
wrong, please let me know and we can fix it.
> On 11/23/2017 04:18 PM, Eduardo Otubo wrote:
> > v2:
> >  * Replace busy wait with wait_event()/wake_up_all()
> >  * Cannot guarantee that, at the time xennet_remove is called, the
> >    xen_netback state will not be XenbusStateClosed, so added a
> >    condition for that
> >  * There's a small chance the xen_netback state is XenbusStateUnknown
> >    by the time the xen_netfront switches to Closed, so added a
> >    condition for that.
> >
> > When unloading the module xen_netfront from a guest, dmesg would output
> > warning messages like these:
> >
> >   [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
> >   [  105.236839] deferring g.e. 0x903 (pfn 0x35805)
> >
> > This problem is caused by netfront and netback being out of sync: by the
> > time netfront revokes the g.e.'s, netback hasn't had enough time to free
> > all of them, hence the warnings in dmesg.
> >
> > The trick here is to make netfront wait until netback frees all the
> > g.e.'s and only then continue the cleanup for module removal, and this
> > is done by manipulating both device states.
> >
> > Signed-off-by: Eduardo Otubo
> > ---
> >  drivers/net/xen-netfront.c | 18 ++++++++++++++++++
> >  1 file changed, 18 insertions(+)
> >
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index 8b8689c6d887..391432e2725d 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -87,6 +87,8 @@ struct netfront_cb {
> >  /* IRQ name is queue name with "-tx" or "-rx" appended */
> >  #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
> >
> > +static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
> > +
> >  struct netfront_stats {
> >  	u64 packets;
> >  	u64 bytes;
> > @@ -2021,10 +2023,12 @@ static void netback_changed(struct xenbus_device *dev,
> >  		break;
> >
> >  	case XenbusStateClosed:
> > +		wake_up_all(&module_unload_q);
> >  		if (dev->state == XenbusStateClosed)
> >  			break;
> >  		/* Missed the backend's CLOSING state -- fallthrough */
> >  	case XenbusStateClosing:
> > +		wake_up_all(&module_unload_q);
> >  		xenbus_frontend_closed(dev);
> >  		break;
> >  	}
> > @@ -2130,6 +2134,20 @@ static int xennet_remove(struct xenbus_device *dev)
> >  	dev_dbg(&dev->dev, "%s\n", dev->nodename);
> >
> > +	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
> > +		xenbus_switch_state(dev, XenbusStateClosing);
> > +		wait_event(module_unload_q,
> > +			   xenbus_read_driver_state(dev->otherend) ==
> > +			   XenbusStateClosing);
> > +
> > +		xenbus_switch_state(dev, XenbusStateClosed);
> > +		wait_event(module_unload_q,
> > +			   xenbus_read_driver_state(dev->otherend) ==
> > +			   XenbusStateClosed ||
> > +			   xenbus_read_driver_state(dev->otherend) ==
> > +			   XenbusStateUnknown);
> > +	}
> > +
> >  	xennet_disconnect_backend(info);
> >  	unregister_netdev(info->netdev);

-- 
Eduardo Otubo