Subject: Re: [Xen-devel] [PATCH] xen-netfront: wait xenbus state change when load module manually
To: Boris Ostrovsky, netdev@vger.kernel.org, xen-devel@lists.xenproject.org, davem@davemloft.net, jgross@suse.com
Cc: linux-kernel@vger.kernel.org
From: Xiao Liang <xiliang@redhat.com>
Message-ID: <41022efa-141a-01d9-3084-8460b5017592@redhat.com>
Date: Mon, 30 Jul 2018 15:43:45 +0800
References: <20180727095608.25210-1-xiliang@redhat.com> <4dfca465-8da3-7ebf-aac1-08ffe34a74ac@oracle.com>
In-Reply-To: <4dfca465-8da3-7ebf-aac1-08ffe34a74ac@oracle.com>

Thanks, Boris. Please see my reply inline.

On 07/28/2018 02:40 AM, Boris Ostrovsky wrote:
> On 07/27/2018 05:56 AM, Xiao Liang wrote:
>> When loading the module manually, after xenbus_switch_state initializes
>> the state of the netfront device, the driver state may not change fast
>> enough, which can result in no device being created on the latest kernel.
>> This patch adds a wait to make sure xenbus knows the driver is no longer
>> in the closed/unknown state.
>>
>> Current state:
>> [vm]# ethtool eth0
>> Settings for eth0:
>>         Link detected: yes
>> [vm]# modprobe -r xen_netfront
>> [vm]# modprobe xen_netfront
>> [vm]# ethtool eth0
>> Settings for eth0:
>> Cannot get device settings: No such device
>> Cannot get wake-on-lan settings: No such device
>> Cannot get message level: No such device
>> Cannot get link status: No such device
>> No data available
>>
>> With the patch installed:
>> [vm]# ethtool eth0
>> Settings for eth0:
>>         Link detected: yes
>> [vm]# modprobe -r xen_netfront
>> [vm]# modprobe xen_netfront
>> [vm]# ethtool eth0
>> Settings for eth0:
>>         Link detected: yes
>>
>> Signed-off-by: Xiao Liang
>> ---
>>  drivers/net/xen-netfront.c | 6 ++++++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index a57daecf1d57..2d8812dd1534 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -87,6 +87,7 @@ struct netfront_cb {
>>  /* IRQ name is queue name with "-tx" or "-rx" appended */
>>  #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
>>
>> +static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
>>  static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
>>
>>  struct netfront_stats {
>> @@ -1330,6 +1331,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>>  	netif_carrier_off(netdev);
>>
>>  	xenbus_switch_state(dev, XenbusStateInitialising);
>> +	wait_event(module_load_q,
>> +		   xenbus_read_driver_state(dev->otherend) !=
>> +		   XenbusStateClosed &&
>> +		   xenbus_read_driver_state(dev->otherend) !=
>> +		   XenbusStateUnknown);
>>  	return netdev;
>>
>>  exit:
>
> Should we have a wake_up somewhere?

In my understanding, netback_changed handles this: when the dev state is XenbusStateInitialising and the otherend is in XenbusStateInitWait, it creates the connection to the backend. But in most cases it breaks out early because dev->state is no longer in XenbusStateInitialising, so I added a wait here.

> And what about other states --- is,
> for example, XenbusStateClosing a valid reason to continue?

I think XenbusStateClosing should not be a valid reason to continue. My purpose is to wait for the otherend state to become XenbusStateInitWait (after the new dev is created). To avoid unnecessary impact, this patch only checks that it has left the Closed and Unknown states.
In my testing, I hotplugged vifs into the guest from the host, and loaded/unloaded the module in the guest, over 100 times. Both waiting only for XenbusStateInitWait and waiting as this patch does work: the vifs were created successfully each time.

Thanks,
Xiao Liang

>
> -boris
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel