From: Anchal Agarwal
Subject: [RFC PATCH 07/12] xen-netfront: add callbacks for PM suspend and hibernation support
Date: Tue, 12 Jun 2018 20:56:14 +0000
Message-ID: <20180612205619.28156-8-anchalag@amazon.com>
In-Reply-To: <20180612205619.28156-1-anchalag@amazon.com>
References: <20180612205619.28156-1-anchalag@amazon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Munehisa Kamata

Add freeze and restore callbacks for PM suspend and hibernation support.
The freeze handler simply disconnects the frontend from the backend and
frees the resources associated with the queues after disabling the
net_device from the system. The restore handler just changes the
frontend state and lets the xenbus handler re-allocate the resources and
re-connect to the backend. This can be performed transparently to the
rest of the system. The handlers are used for both PM suspend and
hibernation so that we can keep the existing suspend/resume callbacks
for Xen suspend without modification.
Freezing netfront devices is normally expected to finish within a few
hundred milliseconds, but it can rarely take more than 5 seconds and hit
the hard-coded timeout; whether it does depends on the backend state,
which may be congested and/or have a complex configuration. While this
is a rare case, a longer default timeout seems more reasonable here to
avoid hitting the timeout. Also, make the timeout configurable via a
module parameter so that we can cover broader setups than what we know
currently.

Signed-off-by: Munehisa Kamata
Signed-off-by: Anchal Agarwal
Reviewed-by: Eduardo Valentin
Reviewed-by: Munehisa Kamata
---
 drivers/net/xen-netfront.c | 97 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 96 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 4dd0668..4ea9284 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -43,6 +43,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -56,6 +57,12 @@
 #include
 #include
 
+enum netif_freeze_state {
+	NETIF_FREEZE_STATE_UNFROZEN,
+	NETIF_FREEZE_STATE_FREEZING,
+	NETIF_FREEZE_STATE_FROZEN,
+};
+
 /* Module parameters */
 #define MAX_QUEUES_DEFAULT 8
 static unsigned int xennet_max_queues;
@@ -63,6 +70,12 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
		 "Maximum number of queues per virtual interface");
 
+static unsigned int netfront_freeze_timeout_secs = 10;
+module_param_named(freeze_timeout_secs,
+		   netfront_freeze_timeout_secs, uint, 0644);
+MODULE_PARM_DESC(freeze_timeout_secs,
+		 "timeout when freezing netfront device in seconds");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -160,6 +173,10 @@ struct netfront_info {
 	struct netfront_stats __percpu *tx_stats;
 
 	atomic_t rx_gso_checksum_fixup;
+
+	int freeze_state;
+
+	struct completion wait_backend_disconnected;
 };
 
 struct netfront_rx_info {
@@ -723,6 +740,21 @@ static int xennet_close(struct net_device *dev)
 	return 0;
 }
 
+static int xennet_disable_interrupts(struct net_device *dev)
+{
+	struct netfront_info *np = netdev_priv(dev);
+	unsigned int num_queues = dev->real_num_tx_queues;
+	unsigned int i;
+	struct netfront_queue *queue;
+
+	for (i = 0; i < num_queues; ++i) {
+		queue = &np->queues[i];
+		disable_irq(queue->tx_irq);
+		disable_irq(queue->rx_irq);
+	}
+	return 0;
+}
+
 static void xennet_move_rx_slot(struct netfront_queue *queue,
				struct sk_buff *skb, grant_ref_t ref)
 {
@@ -1296,6 +1328,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	np->queues = NULL;
 
+	init_completion(&np->wait_backend_disconnected);
+
 	err = -ENOMEM;
 	np->rx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->rx_stats == NULL)
@@ -1782,6 +1816,50 @@ static int xennet_create_queues(struct netfront_info *info,
 	return 0;
 }
 
+static int netfront_freeze(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	unsigned long timeout = netfront_freeze_timeout_secs * HZ;
+	int err = 0;
+
+	xennet_disable_interrupts(info->netdev);
+
+	netif_device_detach(info->netdev);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FREEZING;
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/* We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err,
				 "Freezing timed out; the device may become inconsistent");
+		return err;
+	}
+
+	/* Tear down queues */
+	xennet_disconnect_backend(info);
+	xennet_destroy_queues(info);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FROZEN;
+
+	return err;
+}
+
+static int netfront_restore(struct xenbus_device *dev)
+{
+	/* Kick the backend to re-connect */
+	xenbus_switch_state(dev, XenbusStateInitialising);
+
+	return 0;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
			   struct netfront_info *info)
@@ -1986,6 +2064,8 @@ static int xennet_connect(struct net_device *dev)
 		spin_unlock_bh(&queue->rx_lock);
 	}
 
+	np->freeze_state = NETIF_FREEZE_STATE_UNFROZEN;
+
 	return 0;
 }
 
@@ -2025,11 +2105,23 @@ static void netback_changed(struct xenbus_device *dev,
 
 	case XenbusStateClosed:
 		wake_up_all(&module_unload_q);
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			/* dpm context is waiting for the backend */
+			if (np->freeze_state == NETIF_FREEZE_STATE_FREEZING)
+				complete(&np->wait_backend_disconnected);
 			break;
+		}
 		/* Missed the backend's CLOSING state -- fallthrough */
 	case XenbusStateClosing:
 		wake_up_all(&module_unload_q);
+		/* We may see an unexpected Closed or Closing from the backend.
+		 * Just ignore it so as not to prevent the frontend from being
+		 * re-connected in the case of PM suspend or hibernation.
+		 */
+		if (np->freeze_state == NETIF_FREEZE_STATE_FROZEN &&
+		    dev->state == XenbusStateInitialising) {
+			break;
+		}
 		xenbus_frontend_closed(dev);
 		break;
 	}
@@ -2176,6 +2268,9 @@ static struct xenbus_driver netfront_driver = {
	.probe = netfront_probe,
	.remove = xennet_remove,
	.resume = netfront_resume,
+	.freeze = netfront_freeze,
+	.thaw = netfront_restore,
+	.restore = netfront_restore,
	.otherend_changed = netback_changed,
 };
-- 
2.7.4