Date: Tue, 7 Aug 2018 16:14:04 +0200
From: Roger Pau Monné
To: Juergen Gross
CC: , , , , ,
Subject: Re: [PATCH 2/4] xen/blkfront: cleanup stale persistent grants
Message-ID: <20180807141404.lzsqtdd2seqgwtgx@mac>
References: <20180806113403.24728-1-jgross@suse.com>
 <20180806113403.24728-4-jgross@suse.com>
 <20180806161638.nmjamflckekeuyzb@mac>
User-Agent: NeoMutt/20180716
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Aug 07, 2018 at 08:31:31AM +0200, Juergen Gross wrote:
> On 06/08/18 18:16, Roger Pau Monné wrote:
> > On Mon, Aug 06, 2018 at 01:34:01PM +0200, Juergen Gross wrote:
> >> Add a periodic cleanup function to remove old persistent grants which
> >> are no longer in use on the backend side. This avoids starvation in
> >> case there are lots of persistent grants for a device which is no
> >> longer involved in any I/O.
> >>
> >> Signed-off-by: Juergen Gross <jgross@suse.com>
> >> ---
> >>  drivers/block/xen-blkfront.c | 99 ++++++++++++++++++++++++++++++++++++++++++--
> >>  1 file changed, 95 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> >> index b5cedccb5d7d..19feb8835fc4 100644
> >> --- a/drivers/block/xen-blkfront.c
> >> +++ b/drivers/block/xen-blkfront.c
> >> @@ -46,6 +46,7 @@
> >>  #include <linux/scatterlist.h>
> >>  #include <linux/bitmap.h>
> >>  #include <linux/list.h>
> >> +#include <linux/workqueue.h>
> >>
> >>  #include <xen/xen.h>
> >>  #include <xen/xenbus.h>
> >> @@ -121,6 +122,9 @@ static inline struct blkif_req *blkif_req(struct request *rq)
> >>
> >>  static DEFINE_MUTEX(blkfront_mutex);
> >>  static const struct block_device_operations xlvbd_block_fops;
> >> +static struct delayed_work blkfront_work;
> >> +static LIST_HEAD(info_list);
> >> +static bool blkfront_work_active;
> >>
> >>  /*
> >>   * Maximum number of segments in indirect requests, the actual value used by
> >> @@ -216,6 +220,7 @@ struct blkfront_info
> >>  	/* Save uncomplete reqs and bios for migration. */
> >>  	struct list_head requests;
> >>  	struct bio_list bio_list;
> >> +	struct list_head info_list;
> >>  };
> >>
> >>  static unsigned int nr_minors;
> >> @@ -1764,6 +1769,12 @@ static int write_per_ring_nodes(struct xenbus_transaction xbt,
> >>  	return err;
> >>  }
> >>
> >> +static void free_info(struct blkfront_info *info)
> >> +{
> >> +	list_del(&info->info_list);
> >> +	kfree(info);
> >> +}
> >> +
> >>  /* Common code used when first setting up, and when resuming. */
> >>  static int talk_to_blkback(struct xenbus_device *dev,
> >>  			   struct blkfront_info *info)
> >> @@ -1885,7 +1896,10 @@ static int talk_to_blkback(struct xenbus_device *dev,
> >>  destroy_blkring:
> >>  	blkif_free(info, 0);
> >>
> >> -	kfree(info);
> >> +	mutex_lock(&blkfront_mutex);
> >> +	free_info(info);
> >> +	mutex_unlock(&blkfront_mutex);
> >> +
> >>  	dev_set_drvdata(&dev->dev, NULL);
> >>
> >>  	return err;
> >> @@ -1996,6 +2010,10 @@ static int blkfront_probe(struct xenbus_device *dev,
> >>  	info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
> >>  	dev_set_drvdata(&dev->dev, info);
> >>
> >> +	mutex_lock(&blkfront_mutex);
> >> +	list_add(&info->info_list, &info_list);
> >> +	mutex_unlock(&blkfront_mutex);
> >> +
> >>  	return 0;
> >>  }
> >>
> >> @@ -2306,6 +2324,15 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
> >>  	if (indirect_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST)
> >>  		indirect_segments = 0;
> >>  	info->max_indirect_segments = indirect_segments;
> >> +
> >> +	if (info->feature_persistent) {
> >> +		mutex_lock(&blkfront_mutex);
> >> +		if (!blkfront_work_active) {
> >> +			blkfront_work_active = true;
> >> +			schedule_delayed_work(&blkfront_work, HZ * 10);
> >
> > Does it make sense to provide a module parameter to tune the schedule
> > of the cleanup routine?
>
> I don't think this is something anyone would like to tune.
>
> In case you think it should be tunable I can add a parameter, of course.

We can always add it later if required. I'm fine as-is now.

> >
> >> +		}
> >> +		mutex_unlock(&blkfront_mutex);
> >
> > Is it really necessary to have the blkfront_work_active boolean? What
> > happens if you queue the same delayed work more than once?
>
> In case there is already work queued, later calls of
> schedule_delayed_work() will be ignored.
>
> So yes, I can drop the global boolean (I still need a local flag in
> blkfront_delay_work() for controlling the need to call
> schedule_delayed_work() again).

Can't you just call schedule_delayed_work() if info->feature_persistent
is set, even if that means calling it multiple times when multiple
blkfront instances are using persistent grants?

Thanks, Roger.
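
For reference, the workqueue behaviour Juergen describes can be seen in a
minimal, stand-alone module sketch. This is a hedged illustration, not part
of the patch: all demo_* names are invented, and only the workqueue calls
are real kernel API. While a delayed work item is still pending, a second
schedule_delayed_work() call on it is a no-op and returns false, so
duplicate scheduling attempts are harmless.

/* Hypothetical sketch of schedule_delayed_work() no-op-if-pending
 * semantics; demo_* names are invented for illustration. */
#include <linux/module.h>
#include <linux/workqueue.h>

static void demo_fn(struct work_struct *work)
{
	pr_info("delayed work ran\n");
	/* A driver such as blkfront could reschedule itself here while
	 * any instance still has persistent grants to clean up. */
}

static DECLARE_DELAYED_WORK(demo_work, demo_fn);

static int __init demo_init(void)
{
	/* First call queues the work and returns true. */
	bool first = schedule_delayed_work(&demo_work, 10 * HZ);
	/* Second call finds the work already pending and returns false. */
	bool second = schedule_delayed_work(&demo_work, 10 * HZ);

	pr_info("first=%d second=%d\n", first, second);
	return 0;
}

static void __exit demo_exit(void)
{
	cancel_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

This is why the global blkfront_work_active flag is dispensable: calling
schedule_delayed_work() once per blkfront instance, as Roger suggests,
simply collapses into a single pending work item.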