Subject: Re: [Xen-devel] [PATCH v1] xen-blkfront: dynamic configuration of per-vbd resources
To: Roger Pau Monné, Konrad Rzeszutek Wilk
Cc: Somasundaram Krishnasamy, Bob Liu, linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
References: <20180402174232.28642-1-konrad.wilk@oracle.com> <20180402174232.28642-2-konrad.wilk@oracle.com> <20180403112244.vlg7j7ynxwkxwhs7@MacBook-Pro-de-Roger.local>
From: Oleksandr Andrushchenko
Message-ID: <586dead3-7392-2873-e23a-1236ef14da3b@gmail.com>
Date: Fri, 6 Apr 2018 13:13:32 +0300
In-Reply-To: <20180403112244.vlg7j7ynxwkxwhs7@MacBook-Pro-de-Roger.local>
X-Mailing-List: linux-kernel@vger.kernel.org

On 04/03/2018 02:22 PM, Roger Pau Monné wrote:
> On Mon, Apr 02, 2018 at 01:42:32PM -0400, Konrad Rzeszutek Wilk wrote:
>> From: Bob Liu
>>
>> The current VBD layer reserves
>> buffer space for each attached device based on
>> three statically configured settings which are read at boot time.
>>  * max_indirect_segs: Maximum number of indirect segments.
>>  * max_ring_page_order: Maximum order of pages to be used for the shared ring.
>>  * max_queues: Maximum number of queues (rings) to be used.
>>
>> But the storage backend, workload, and guest memory result in very different
>> tuning requirements. It's impossible to centrally predict application
>> characteristics, so it's best to allow the settings to be dynamically
>> adjusted based on the workload inside the guest.
>>
>> Usage:
>> Show current values:
>> cat /sys/devices/vbd-xxx/max_indirect_segs
>> cat /sys/devices/vbd-xxx/max_ring_page_order
>> cat /sys/devices/vbd-xxx/max_queues
>>
>> Write new values:
>> echo > /sys/devices/vbd-xxx/max_indirect_segs
>> echo > /sys/devices/vbd-xxx/max_ring_page_order
>> echo > /sys/devices/vbd-xxx/max_queues
>>
>> Signed-off-by: Bob Liu
>> Signed-off-by: Somasundaram Krishnasamy
>> Signed-off-by: Konrad Rzeszutek Wilk
>> ---
>>  drivers/block/xen-blkfront.c | 320 ++++++++++++++++++++++++++++++++++++++++---
>>  1 file changed, 304 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>> index 92ec1bbece51..4ebd368f4d1a 100644
>> --- a/drivers/block/xen-blkfront.c
>> +++ b/drivers/block/xen-blkfront.c
>> @@ -46,6 +46,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>
>>  #include
>>  #include
>> @@ -217,6 +218,11 @@ struct blkfront_info
>>  	/* Save uncomplete reqs and bios for migration. */
>>  	struct list_head requests;
>>  	struct bio_list bio_list;
>> +	/* For dynamic configuration. */
>> +	unsigned int reconfiguring:1;
> bool reconfiguring:1 maybe?
>
> And I would likely place it together with the feature_ fields, so that
> no more padding is added to the struct.
>
>> +	int new_max_indirect_segments;
>> +	int new_max_ring_page_order;
>> +	int new_max_queues;
> All the ints should be unsigned ints AFAICT.
>
>> };
>>
>>  static unsigned int nr_minors;
>> @@ -1355,6 +1361,31 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>>  	for (i = 0; i < info->nr_rings; i++)
>>  		blkif_free_ring(&info->rinfo[i]);
>>
>> +	/* Remove old xenstore nodes. */
>> +	if (info->nr_ring_pages > 1)
>> +		xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-page-order");
>> +
>> +	if (info->nr_rings == 1) {
>> +		if (info->nr_ring_pages == 1) {
>> +			xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-ref");
>> +		} else {
>> +			for (i = 0; i < info->nr_ring_pages; i++) {
>> +				char ring_ref_name[RINGREF_NAME_LEN];
>> +
>> +				snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
>> +				xenbus_rm(XBT_NIL, info->xbdev->nodename, ring_ref_name);
>> +			}
>> +		}
>> +	} else {
>> +		xenbus_rm(XBT_NIL, info->xbdev->nodename, "multi-queue-num-queues");
>> +
>> +		for (i = 0; i < info->nr_rings; i++) {
>> +			char queuename[QUEUE_NAME_LEN];
>> +
>> +			snprintf(queuename, QUEUE_NAME_LEN, "queue-%u", i);
>> +			xenbus_rm(XBT_NIL, info->xbdev->nodename, queuename);
>> +		}
>> +	}
>>  	kfree(info->rinfo);
>>  	info->rinfo = NULL;
>>  	info->nr_rings = 0;
>> @@ -1778,10 +1809,18 @@ static int talk_to_blkback(struct xenbus_device *dev,
>>  	if (!info)
>>  		return -ENODEV;
>>
>> -	max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
>> -					      "max-ring-page-order", 0);
>> -	ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
>> -	info->nr_ring_pages = 1 << ring_page_order;
>> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>> +			   "max-ring-page-order", "%u", &max_page_order);
>> +	if (err != 1)
>> +		info->nr_ring_pages = 1;
>> +	else {
>> +		ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
>> +		if (info->new_max_ring_page_order) {
>> +			BUG_ON(info->new_max_ring_page_order > max_page_order);
Do you really want to BUG_ON here? IMO, this is just a misconfiguration
which can happen, but you will take the whole domain down with this...
>> +			ring_page_order = info->new_max_ring_page_order;
>> +		}
>> +		info->nr_ring_pages = 1 << ring_page_order;
>> +	}
> You could likely simplify this as:
>
> max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
>                                       "max-ring-page-order", 0);
> if (info->new_max_ring_page_order) {
>         BUG_ON(info->new_max_ring_page_order > max_page_order);
>         info->nr_ring_pages = 1 << info->new_max_ring_page_order;
> } else
>         info->nr_ring_pages = 1 << min(xen_blkif_max_ring_order, max_page_order);
>
> I'm not sure of the benefit of switching the xenbus_read_unsigned to a
> xenbus_scanf. IMO it seems to make the code more complex.
>
>>
>>  	err = negotiate_mq(info);
>>  	if (err)
>> @@ -1903,6 +1942,10 @@ static int negotiate_mq(struct blkfront_info *info)
>>  	backend_max_queues = xenbus_read_unsigned(info->xbdev->otherend,
>>  						  "multi-queue-max-queues", 1);
>>  	info->nr_rings = min(backend_max_queues, xen_blkif_max_queues);
>> +	if (info->new_max_queues) {
>> +		BUG_ON(info->new_max_queues > backend_max_queues);
Again, why BUG_ON?
>> +		info->nr_rings = info->new_max_queues;
>> +	}
>>  	/* We need at least one ring. */
>>  	if (!info->nr_rings)
>>  		info->nr_rings = 1;
>> @@ -2261,6 +2304,8 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
>>   */
>>  static void blkfront_gather_backend_features(struct blkfront_info *info)
>>  {
>> +	int err;
>> +	int persistent;
> unsigned int. You use the '%u' format specifier below.
>
>>  	unsigned int indirect_segments;
>>
>>  	info->feature_flush = 0;
>> @@ -2291,19 +2336,241 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
>>  	if (xenbus_read_unsigned(info->xbdev->otherend, "feature-discard", 0))
>>  		blkfront_setup_discard(info);
>>
>> -	info->feature_persistent =
>> -		!!xenbus_read_unsigned(info->xbdev->otherend,
>> -				       "feature-persistent", 0);
>> +	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
>> +			    "feature-persistent", "%u", &persistent,
>> +			    NULL);
>> +
>> +	info->feature_persistent = err ?
>> +		0 : persistent;
>> +
>> +	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
>> +			    "feature-max-indirect-segments", "%u", &indirect_segments,
>> +			    NULL);
>> +	if (err)
>> +		info->max_indirect_segments = 0;
>> +	else {
>> +		info->max_indirect_segments = min(indirect_segments,
>> +						  xen_blkif_max_segments);
>> +		if (info->new_max_indirect_segments) {
>> +			BUG_ON(info->new_max_indirect_segments > indirect_segments);
And here
>> +			info->max_indirect_segments = info->new_max_indirect_segments;
>> +		}
>> +	}
> Again I think using xenbus_read_unsigned makes the code simpler, see
> the suggestion regarding new_max_ring_page_order.
>
>> +}
>> +
>> +static ssize_t max_ring_page_order_show(struct device *dev,
>> +	struct device_attribute *attr, char *page)
>> +{
>> +	struct blkfront_info *info = dev_get_drvdata(dev);
>> +
>> +	return sprintf(page, "%u\n", get_order(info->nr_ring_pages * XEN_PAGE_SIZE));
get_order returns int, so "%u" should be "%d"?
>> +}
>> +
>> +static ssize_t max_indirect_segs_show(struct device *dev,
>> +	struct device_attribute *attr, char *page)
>> +{
>> +	struct blkfront_info *info = dev_get_drvdata(dev);
>> +
>> +	return sprintf(page, "%u\n", info->max_indirect_segments);
new_max_indirect_segments is currently defined as int
>> +}
>> +
>> +static ssize_t max_queues_show(struct device *dev,
>> +	struct device_attribute *attr, char *page)
>> +{
>> +	struct blkfront_info *info = dev_get_drvdata(dev);
>> +
>> +	return sprintf(page, "%u\n", info->nr_rings);
>> +}
>> +
>> +static ssize_t dynamic_reconfig_device(struct blkfront_info *info, ssize_t count)
> Not sure you need to pass 'count' here. dynamic_reconfig_device
> doesn't care about count at all. This function should just return < 0
> for error or 0 on success.
And also, why ssize_t and not size_t?
>
>> +{
>> +	unsigned int i;
>> +	int err = -EBUSY;
>> +	unsigned int inflight;
>> +
>> +	/*
>> +	 * Make sure no migration in parallel, device lock is actually a
>> +	 * mutex.
>> + */ >> + if (!device_trylock(&info->xbdev->dev)) { >> + pr_err("Fail to acquire dev:%s lock, may be in migration.\n", >> + dev_name(&info->xbdev->dev)); >> + return err; >> + } >> + >> + /* >> + * Prevent new requests and guarantee no uncompleted reqs. >> + */ >> + blk_mq_freeze_queue(info->rq); >> + inflight = atomic_read(&info->gd->part0.in_flight[0]) + >> + atomic_read(&info->gd->part0.in_flight[1]); >> + if (inflight) >> + goto out; > Er, I'm not sure I like this approach. Why not just switch the state > to closed, wait for the backend to also switch to closed, reconnect > and then requeue any pending requests on the shadow copy of the ring? > > Basically like what is currently done for migration. > >> + >> + /* >> + * Front Backend >> + * Switch to XenbusStateClosed >> + * frontend_changed(): >> + * case XenbusStateClosed: >> + * xen_blkif_disconnect() >> + * Switch to XenbusStateClosed >> + * blkfront_resume(): >> + * frontend_changed(): >> + * reconnect >> + * Wait until XenbusStateConnected >> + */ >> + info->reconfiguring = true; Not sure if this is directly applicable, but can we finally make use of XenbusStateReconfiguring/XenbusStateReconfigured bus states? We have it somewhat implemented in PV DRM [1] >> + xenbus_switch_state(info->xbdev, XenbusStateClosed); >> + >> + /* Poll every 100ms, 1 minute timeout. */ >> + for (i = 0; i < 600; i++) { >> + /* >> + * Wait backend enter XenbusStateClosed, blkback_changed() >> + * will clear reconfiguring. >> + */ >> + if (!info->reconfiguring) >> + goto resume; >> + schedule_timeout_interruptible(msecs_to_jiffies(100)); >> + } >> + goto out; > This shouldn't be done with a busy loop. Why not do this in > blkback_changed instead? > >> + >> +resume: >> + if (blkfront_resume(info->xbdev)) >> + goto out; >> + >> + /* Poll every 100ms, 1 minute timeout. */ >> + for (i = 0; i < 600; i++) { >> + /* Wait blkfront enter StateConnected which is done by blkif_recover(). 
>> +		 */
>> +		if (info->xbdev->state == XenbusStateConnected) {
>> +			err = count;
>> +			goto out;
>> +		}
>> +		schedule_timeout_interruptible(msecs_to_jiffies(100));
>> +	}
>> +
>> +out:
>> +	blk_mq_unfreeze_queue(info->rq);
>> +	device_unlock(&info->xbdev->dev);
>> +
>> +	return err;
>> +}
>> +
>> +static ssize_t max_indirect_segs_store(struct device *dev,
>> +	struct device_attribute *attr, const char *buf, size_t count)
>> +{
>> +	ssize_t ret;
>> +	unsigned int max_segs = 0, backend_max_segs = 0;
>> +	struct blkfront_info *info = dev_get_drvdata(dev);
>> +	int err;
>> +
>> +	ret = kstrtouint(buf, 10, &max_segs);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	if (max_segs == info->max_indirect_segments)
>> +		return count;
>> +
>> +	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
>> +			    "feature-max-indirect-segments", "%u", &backend_max_segs,
> Having to read all the backend features every time the user writes to
> the device nodes seems inefficient, although I assume this is not
> supposed to happen frequently...
>
>> +			    NULL);
>> +	if (err) {
>> +		pr_err("Backend %s doesn't support feature-indirect-segments.\n",
>> +		       info->xbdev->otherend);
>> +		return -EOPNOTSUPP;
>> +	}
>> +
>> +	if (max_segs > backend_max_segs) {
>> +		pr_err("Invalid max indirect segment (%u), backend-max: %u.\n",
>> +		       max_segs, backend_max_segs);
>> +		return -EINVAL;
>> +	}
>>
>> -	indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
>> -						 "feature-max-indirect-segments", 0);
>> -	if (indirect_segments > xen_blkif_max_segments)
>> -		indirect_segments = xen_blkif_max_segments;
>> -	if (indirect_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST)
>> -		indirect_segments = 0;
>> -	info->max_indirect_segments = indirect_segments;
>> +	info->new_max_indirect_segments = max_segs;
>> +
>> +	return dynamic_reconfig_device(info, count);
> No need to pass count, just use:
>
> return dynamic_reconfig_device(info) ?: count;
>
> (same for all the cases below).
>
>> }
>>
>> +static ssize_t max_ring_page_order_store(struct device *dev,
>> +					 struct device_attribute *attr,
>> +					 const char *buf, size_t count)
>> +{
>> +	ssize_t ret;
>> +	unsigned int max_order = 0, backend_max_order = 0;
>> +	struct blkfront_info *info = dev_get_drvdata(dev);
>> +	int err;
>> +
>> +	ret = kstrtouint(buf, 10, &max_order);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	if ((1 << max_order) == info->nr_ring_pages)
>> +		return count;
>> +
>> +	if (max_order > XENBUS_MAX_RING_GRANT_ORDER) {
>> +		pr_err("Invalid max_ring_page_order (%u), max: %u.\n",
>> +		       max_order, XENBUS_MAX_RING_GRANT_ORDER);
>> +		return -EINVAL;
>> +	}
>> +
>> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>> +			   "max-ring-page-order", "%u", &backend_max_order);
>> +	if (err != 1) {
>> +		pr_err("Backend %s doesn't support feature multi-page-ring.\n",
>> +		       info->xbdev->otherend);
>> +		return -EOPNOTSUPP;
>> +	}
>> +	if (max_order > backend_max_order) {
>> +		pr_err("Invalid max_ring_page_order (%u), backend supports max: %u.\n",
>> +		       max_order, backend_max_order);
>> +		return -EINVAL;
>> +	}
>> +	info->new_max_ring_page_order = max_order;
>> +
>> +	return dynamic_reconfig_device(info, count);
>> +}
>> +
>> +static ssize_t max_queues_store(struct device *dev,
>> +				struct device_attribute *attr,
>> +				const char *buf, size_t count)
>> +{
>> +	ssize_t ret;
>> +	unsigned int max_queues = 0, backend_max_queues = 0;
>> +	struct blkfront_info *info = dev_get_drvdata(dev);
>> +	int err;
>> +
>> +	ret = kstrtouint(buf, 10, &max_queues);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	if (max_queues == info->nr_rings)
>> +		return count;
>> +
>> +	if (max_queues > num_online_cpus()) {
>> +		pr_err("Invalid max_queues (%u), can't be bigger than online cpus: %u.\n",
>> +		       max_queues, num_online_cpus());
>> +		return -EINVAL;
>> +	}
>> +
>> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>> +			   "multi-queue-max-queues", "%u", &backend_max_queues);
>> +	if (err != 1) {
>> +		pr_err("Backend %s
>> +			doesn't support block multi queue.\n",
>> +		       info->xbdev->otherend);
>> +		return -EOPNOTSUPP;
>> +	}
>> +	if (max_queues > backend_max_queues) {
>> +		pr_err("Invalid max_queues (%u), backend supports max: %u.\n",
>> +		       max_queues, backend_max_queues);
>> +		return -EINVAL;
>> +	}
>> +	info->new_max_queues = max_queues;
>> +
>> +	return dynamic_reconfig_device(info, count);
>> +}
>> +
>> +static DEVICE_ATTR_RW(max_queues);
>> +static DEVICE_ATTR_RW(max_ring_page_order);
>> +static DEVICE_ATTR_RW(max_indirect_segs);
> Can't you just use the same attribute for all the nodes? Also this
> could be:
>
> const static DEVICE_ATTR_RW(node_attr);
>
> Thanks, Roger.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

[1] https://cgit.freedesktop.org/drm-misc/commit/?id=c575b7eeb89f94356997abd62d6d5a0590e259b7