Date: Mon, 6 Aug 2018 18:16:38 +0200
From: Roger Pau Monné
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 2/4] xen/blkfront: cleanup stale persistent grants
Message-ID: <20180806161638.nmjamflckekeuyzb@mac>
References: <20180806113403.24728-1-jgross@suse.com>
 <20180806113403.24728-4-jgross@suse.com>
In-Reply-To: <20180806113403.24728-4-jgross@suse.com>

On Mon, Aug 06, 2018 at 01:34:01PM +0200, Juergen Gross wrote:
> Add a periodic cleanup function to remove old persistent grants which
> are no longer in use on the backend side.
> This avoids starvation in case there are lots of persistent grants
> for a device which is no longer involved in I/O.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  drivers/block/xen-blkfront.c | 99 ++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 95 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index b5cedccb5d7d..19feb8835fc4 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -46,6 +46,7 @@
>  #include <linux/scatterlist.h>
>  #include <linux/bitmap.h>
>  #include <linux/list.h>
> +#include <linux/workqueue.h>
>
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
> @@ -121,6 +122,9 @@ static inline struct blkif_req *blkif_req(struct request *rq)
>
>  static DEFINE_MUTEX(blkfront_mutex);
>  static const struct block_device_operations xlvbd_block_fops;
> +static struct delayed_work blkfront_work;
> +static LIST_HEAD(info_list);
> +static bool blkfront_work_active;
>
>  /*
>   * Maximum number of segments in indirect requests, the actual value used by
> @@ -216,6 +220,7 @@ struct blkfront_info
>  	/* Save uncomplete reqs and bios for migration. */
>  	struct list_head requests;
>  	struct bio_list bio_list;
> +	struct list_head info_list;
>  };
>
>  static unsigned int nr_minors;
> @@ -1764,6 +1769,12 @@ static int write_per_ring_nodes(struct xenbus_transaction xbt,
>  	return err;
>  }
>
> +static void free_info(struct blkfront_info *info)
> +{
> +	list_del(&info->info_list);
> +	kfree(info);
> +}
> +
>  /* Common code used when first setting up, and when resuming. */
>  static int talk_to_blkback(struct xenbus_device *dev,
>  			   struct blkfront_info *info)
> @@ -1885,7 +1896,10 @@ static int talk_to_blkback(struct xenbus_device *dev,
>   destroy_blkring:
>  	blkif_free(info, 0);
>
> -	kfree(info);
> +	mutex_lock(&blkfront_mutex);
> +	free_info(info);
> +	mutex_unlock(&blkfront_mutex);
> +
>  	dev_set_drvdata(&dev->dev, NULL);
>
>  	return err;
> @@ -1996,6 +2010,10 @@ static int blkfront_probe(struct xenbus_device *dev,
>  	info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
>  	dev_set_drvdata(&dev->dev, info);
>
> +	mutex_lock(&blkfront_mutex);
> +	list_add(&info->info_list, &info_list);
> +	mutex_unlock(&blkfront_mutex);
> +
>  	return 0;
>  }
>
> @@ -2306,6 +2324,15 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
>  	if (indirect_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST)
>  		indirect_segments = 0;
>  	info->max_indirect_segments = indirect_segments;
> +
> +	if (info->feature_persistent) {
> +		mutex_lock(&blkfront_mutex);
> +		if (!blkfront_work_active) {
> +			blkfront_work_active = true;
> +			schedule_delayed_work(&blkfront_work, HZ * 10);

Does it make sense to provide a module parameter to tune the schedule of
the cleanup routine?

> +		}
> +		mutex_unlock(&blkfront_mutex);

Is it really necessary to have the blkfront_work_active boolean? What
happens if you queue the same delayed work more than once?

Thanks, Roger.
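To make the last question concrete: the workqueue core already tracks a
work item's pending state, and schedule_delayed_work() returns false when
the item is already queued. Below is a minimal, hypothetical module sketch
(the grant_cleanup_* names, the parameter name, and the 10-second default
are illustrative, not taken from the patch) showing both that return-value
behavior and a module parameter for tuning the cleanup interval:

// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/workqueue.h>

/* Hypothetical knob for the cleanup period, defaulting to 10 seconds. */
static unsigned int cleanup_interval = 10;
module_param(cleanup_interval, uint, 0644);
MODULE_PARM_DESC(cleanup_interval, "Seconds between cleanup passes");

static struct delayed_work grant_cleanup_work;

static void grant_cleanup_fn(struct work_struct *work)
{
	pr_info("cleanup pass\n");
	/*
	 * While the work function runs, its pending bit is clear, so
	 * re-arming from here succeeds and returns true.
	 */
	schedule_delayed_work(&grant_cleanup_work, cleanup_interval * HZ);
}

static int __init grant_cleanup_init(void)
{
	INIT_DELAYED_WORK(&grant_cleanup_work, grant_cleanup_fn);

	/* First call: queues the work and returns true. */
	schedule_delayed_work(&grant_cleanup_work, cleanup_interval * HZ);

	/*
	 * Second call while the work is still pending: the workqueue
	 * core sees the pending bit and returns false without queueing
	 * a duplicate, so no extra "active" flag is needed for
	 * duplicate-queue protection.
	 */
	if (!schedule_delayed_work(&grant_cleanup_work, cleanup_interval * HZ))
		pr_info("already pending, duplicate call ignored\n");

	return 0;
}

static void __exit grant_cleanup_exit(void)
{
	cancel_delayed_work_sync(&grant_cleanup_work);
}

module_init(grant_cleanup_init);
module_exit(grant_cleanup_exit);
MODULE_LICENSE("GPL");

With the 0644 permission the interval is also adjustable at runtime via
/sys/module/<name>/parameters/cleanup_interval, which is one way to expose
the knob asked about above.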