Date: Fri, 14 Feb 2020 23:25:34 +0000
From: Anchal Agarwal
Subject: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
Message-ID: <890c404c585d7790514527f0c021056a7be6e748.1581721799.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

From: Munehisa Kamata

Signed-off-by: Munehisa Kamata
---
 drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
 1 file changed, 112 insertions(+), 7 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 478120233750..d715ed3cb69a 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -47,6 +47,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include
 #include
@@ -79,6 +81,8 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -220,6 +224,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static void blkfront_gather_backend_features(struct blkfront_info *info);
 static int negotiate_mq(struct blkfront_info *info);
+static void __blkif_free(struct blkfront_info *info);
 
 static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
@@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
 /* Already hold rinfo->ring_lock. */
 static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
 {
+	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
+		return;
 	if (!RING_FULL(&rinfo->ring))
 		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
 }
@@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
 
 static void blkif_free(struct blkfront_info *info, int suspend)
 {
-	unsigned int i;
-
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+	__blkif_free(info);
+}
+
+static void __blkif_free(struct blkfront_info *info)
+{
+	unsigned int i;
+
 	for (i = 0; i < info->nr_rings; i++)
 		blkif_free_ring(&info->rinfo[i]);
 
@@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-		return IRQ_HANDLED;
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+		if (info->connected != BLKIF_STATE_FREEZING)
+			return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
 again:
@@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
 	struct bio *bio;
 	unsigned int segs;
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
@@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
 		return;
 
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				__blkif_free(info);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+				break;
+			}
+
 			break;
+		}
+
+		/*
+		 * We may somehow receive backend's Closed again while thawing
+		 * or restoring and it causes thawing or restoring to fail.
+		 * Ignore such unexpected state anyway.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN &&
+		    dev->state == XenbusStateInitialised) {
+			dev_dbg(&dev->dev,
+				"ignore the backend's Closed state: %s",
+				dev->nodename);
+			break;
+		}
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	int err = 0;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_freeze_queue(info->rq);
+	blk_mq_quiesce_queue(info->rq);
+
+	for (i = 0; i < info->nr_rings; i++) {
+		rinfo = &info->rinfo[i];
+
+		gnttab_cancel_free_callback(&rinfo->callback);
+		flush_work(&rinfo->work);
+	}
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; "
+				 "the device may become inconsistent state");
+	}
+
+	return err;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err = 0;
+
+	err = talk_to_blkback(dev, info);
+	blk_mq_unquiesce_queue(info->rq);
+	blk_mq_unfreeze_queue(info->rq);
+
+	if (err)
+		goto out;
+	blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+
+out:
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2647,6 +2749,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore
 };
 
 static void purge_persistent_grants(struct blkfront_info *info)
-- 
2.24.1.AMZN