Date: Wed, 8 Aug 2018 10:27:16 +0200
From: Roger Pau Monné
To: Juergen Gross
Subject: Re: [PATCH 2/4] xen/blkfront: cleanup stale persistent grants
Message-ID: <20180808082716.y4nwqz4y2gzp3yok@mac>
References: <20180806113403.24728-1-jgross@suse.com>
 <20180806113403.24728-4-jgross@suse.com>
 <20180806161638.nmjamflckekeuyzb@mac>
 <20180807141404.lzsqtdd2seqgwtgx@mac>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Aug 07, 2018 at 05:56:38PM +0200, Juergen Gross wrote:
> On 07/08/18 16:14, Roger Pau Monné wrote:
> > On Tue, Aug 07, 2018 at 08:31:31AM +0200, Juergen Gross wrote:
> >> On 06/08/18 18:16, Roger Pau Monné wrote:
> >>> On Mon, Aug 06, 2018 at 01:34:01PM +0200, Juergen Gross wrote:
> >>>> Add a periodic cleanup function to remove old persistent grants which
> >>>> are no longer in use on the backend side. This avoids starvation in
> >>>> case there are lots of persistent grants for a device which is no
> >>>> longer involved in I/O.
> >>>>
> >>>> Signed-off-by: Juergen Gross <jgross@suse.com>
> >>>> ---
> >>>>  drivers/block/xen-blkfront.c | 99 ++++++++++++++++++++++++++++++++++++++++++--
> >>>>  1 file changed, 95 insertions(+), 4 deletions(-)
> >>>>
> >>>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> >>>> index b5cedccb5d7d..19feb8835fc4 100644
> >>>> --- a/drivers/block/xen-blkfront.c
> >>>> +++ b/drivers/block/xen-blkfront.c
> >>>> @@ -46,6 +46,7 @@
> >>>>  #include <linux/scatterlist.h>
> >>>>  #include <linux/bitmap.h>
> >>>>  #include <linux/list.h>
> >>>> +#include <linux/workqueue.h>
> >>>>
> >>>>  #include <xen/xen.h>
> >>>>  #include <xen/xenbus.h>
> >>>> @@ -121,6 +122,9 @@ static inline struct blkif_req *blkif_req(struct request *rq)
> >>>>
> >>>>  static DEFINE_MUTEX(blkfront_mutex);
> >>>>  static const struct block_device_operations xlvbd_block_fops;
> >>>> +static struct delayed_work blkfront_work;
> >>>> +static LIST_HEAD(info_list);
> >>>> +static bool blkfront_work_active;
> >>>>
> >>>>  /*
> >>>>   * Maximum number of segments in indirect requests, the actual value used by
> >>>> @@ -216,6 +220,7 @@ struct blkfront_info
> >>>>  	/* Save uncomplete reqs and bios for migration. */
> >>>>  	struct list_head requests;
> >>>>  	struct bio_list bio_list;
> >>>> +	struct list_head info_list;
> >>>>  };
> >>>>
> >>>>  static unsigned int nr_minors;
> >>>> @@ -1764,6 +1769,12 @@ static int write_per_ring_nodes(struct xenbus_transaction xbt,
> >>>>  	return err;
> >>>>  }
> >>>>
> >>>> +static void free_info(struct blkfront_info *info)
> >>>> +{
> >>>> +	list_del(&info->info_list);
> >>>> +	kfree(info);
> >>>> +}
> >>>> +
> >>>>  /* Common code used when first setting up, and when resuming. */
> >>>>  static int talk_to_blkback(struct xenbus_device *dev,
> >>>>  			   struct blkfront_info *info)
> >>>> @@ -1885,7 +1896,10 @@ static int talk_to_blkback(struct xenbus_device *dev,
> >>>>   destroy_blkring:
> >>>>  	blkif_free(info, 0);
> >>>>
> >>>> -	kfree(info);
> >>>> +	mutex_lock(&blkfront_mutex);
> >>>> +	free_info(info);
> >>>> +	mutex_unlock(&blkfront_mutex);
> >>>> +
> >>>>  	dev_set_drvdata(&dev->dev, NULL);
> >>>>
> >>>>  	return err;
> >>>> @@ -1996,6 +2010,10 @@ static int blkfront_probe(struct xenbus_device *dev,
> >>>>  	info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
> >>>>  	dev_set_drvdata(&dev->dev, info);
> >>>>
> >>>> +	mutex_lock(&blkfront_mutex);
> >>>> +	list_add(&info->info_list, &info_list);
> >>>> +	mutex_unlock(&blkfront_mutex);
> >>>> +
> >>>>  	return 0;
> >>>>  }
> >>>>
> >>>> @@ -2306,6 +2324,15 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
> >>>>  	if (indirect_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST)
> >>>>  		indirect_segments = 0;
> >>>>  	info->max_indirect_segments = indirect_segments;
> >>>> +
> >>>> +	if (info->feature_persistent) {
> >>>> +		mutex_lock(&blkfront_mutex);
> >>>> +		if (!blkfront_work_active) {
> >>>> +			blkfront_work_active = true;
> >>>> +			schedule_delayed_work(&blkfront_work, HZ * 10);
> >>>
> >>> Does it make sense to provide a module parameter to tune the schedule
> >>> of the cleanup routine?
> >>
> >> I don't think this is something anyone would like to tune.
> >>
> >> In case you think it should be tunable I can add a parameter, of course.
> >
> > We can always add it later if required. I'm fine as-is now.
> >
> >>>
> >>>> +		}
> >>>> +		mutex_unlock(&blkfront_mutex);
> >>>
> >>> Is it really necessary to have the blkfront_work_active boolean? What
> >>> happens if you queue the same delayed work more than once?
> >>
> >> In case there is already work queued, later calls of
> >> schedule_delayed_work() will be ignored.
> >>
> >> So yes, I can drop the global boolean (I still need a local flag in
> >> blkfront_delay_work() for controlling the need to call
> >> schedule_delayed_work() again).
> >
> > Can't you just call schedule_delayed_work() if info->feature_persistent
> > is set, even if that means calling it multiple times if multiple
> > blkfront instances are using persistent grants?
>
> I don't like that. With mq we have a high chance for multiple instances
> to use persistent grants and a local bool is much cheaper than unneeded
> calls of schedule_delayed_work().

OK, I'm convinced with the local bool.

Thanks, Roger.
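
For illustration, a minimal sketch of the worker shape agreed on above: the
global blkfront_work_active bool is dropped in favour of a local flag that
decides whether the work re-arms itself. This is only a reading of the
discussion, not the final patch; purge_stale_grants() is a hypothetical
stand-in for the actual cleanup of unused persistent grants:

static void blkfront_delay_work(struct work_struct *work)
{
	struct blkfront_info *info;
	bool need_schedule_work = false; /* local flag replacing blkfront_work_active */

	/* blkfront_mutex protects info_list (added by the patch above). */
	mutex_lock(&blkfront_mutex);

	list_for_each_entry(info, &info_list, info_list) {
		if (info->feature_persistent) {
			need_schedule_work = true;
			purge_stale_grants(info); /* hypothetical cleanup helper */
		}
	}

	/* Re-arm only while some instance still uses persistent grants. */
	if (need_schedule_work)
		schedule_delayed_work(&blkfront_work, HZ * 10);

	mutex_unlock(&blkfront_mutex);
}

With this shape, callers can invoke schedule_delayed_work(&blkfront_work,
HZ * 10) unconditionally whenever persistent grants are negotiated:
queueing an already-pending delayed work is a no-op (the call returns
false), so no global "active" flag is needed for correctness. The local
bool merely stops the worker from re-arming once no device uses
persistent grants anymore.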