From: Stefano Stabellini
To: jgross@suse.com
Cc: boris.ostrovsky@oracle.com, sstabellini@kernel.org, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, v9fs-developer@lists.sourceforge.net, ericvh@gmail.com, rminnich@sandia.gov, lucho@ionkov.net, Stefano Stabellini
Subject: [PATCH] 9p/xen: increase XEN_9PFS_RING_ORDER
Date: Wed, 20 May 2020 11:41:13 -0700
Message-Id:
<20200520184113.24727-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

From: Stefano Stabellini

Increase XEN_9PFS_RING_ORDER to 9 for performance reasons: order 9 is the
maximum allowed by the protocol.

We can't assume that all backends will support order 9, however. The xenstore
property max-ring-page-order specifies the maximum order supported by the
backend; use it to choose the order at setup time. As a result the ring size
is no longer static (XEN_FLEX_RING_SIZE(9)), so change XEN_9PFS_RING_SIZE to
take an argument and base the calculation on the order chosen at setup time.

Finally, make p9_xen_trans.maxsize a quarter of the original value. It has to
be halved because the in and out rings come out of the same order allocation,
so each ring only gets half of it; this was a mistake in the original code.
It is halved again because we don't want a single request or reply to fill up
the entire ring: multiple requests and replies can be outstanding at any given
time, and if one of them occupied the whole ring we risk forcing the backend
to wait for the client to read back replies before continuing, which hurts
performance.
Signed-off-by: Stefano Stabellini
---
 net/9p/trans_xen.c | 61 ++++++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index 086a4abdfa7c..cf5ea74be7cc 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -44,8 +44,8 @@
 #include

 #define XEN_9PFS_NUM_RINGS 2
-#define XEN_9PFS_RING_ORDER 6
-#define XEN_9PFS_RING_SIZE XEN_FLEX_RING_SIZE(XEN_9PFS_RING_ORDER)
+#define XEN_9PFS_RING_ORDER 9
+#define XEN_9PFS_RING_SIZE(ring) XEN_FLEX_RING_SIZE(ring->intf->ring_order)

 struct xen_9pfs_header {
 	uint32_t size;
@@ -130,8 +130,8 @@ static bool p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size)
 	prod = ring->intf->out_prod;
 	virt_mb();

-	return XEN_9PFS_RING_SIZE -
-		xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) >= size;
+	return XEN_9PFS_RING_SIZE(ring) -
+		xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) >= size;
 }

 static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
@@ -165,17 +165,18 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
 	prod = ring->intf->out_prod;
 	virt_mb();

-	if (XEN_9PFS_RING_SIZE - xen_9pfs_queued(prod, cons,
-						 XEN_9PFS_RING_SIZE) < size) {
+	if (XEN_9PFS_RING_SIZE(ring) -
+	    xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) < size) {
 		spin_unlock_irqrestore(&ring->lock, flags);
 		goto again;
 	}

-	masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
-	masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+	masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE(ring));
+	masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));

 	xen_9pfs_write_packet(ring->data.out, p9_req->tc->sdata, size,
-			      &masked_prod, masked_cons, XEN_9PFS_RING_SIZE);
+			      &masked_prod, masked_cons,
+			      XEN_9PFS_RING_SIZE(ring));

 	p9_req->status = REQ_STATUS_SENT;
 	virt_wmb();			/* write ring before updating pointer */
@@ -204,19 +205,19 @@ static void p9_xen_response(struct work_struct *work)
 		prod = ring->intf->in_prod;
 		virt_rmb();

-		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) <
+		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) <
 		    sizeof(h)) {
 			notify_remote_via_irq(ring->irq);
 			return;
 		}

-		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
-		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE(ring));
+		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));

 		/* First, read just the header */
 		xen_9pfs_read_packet(&h, ring->data.in, sizeof(h),
 				     masked_prod, &masked_cons,
-				     XEN_9PFS_RING_SIZE);
+				     XEN_9PFS_RING_SIZE(ring));

 		req = p9_tag_lookup(priv->client, h.tag);
 		if (!req || req->status != REQ_STATUS_SENT) {
@@ -230,11 +231,11 @@ static void p9_xen_response(struct work_struct *work)
 		memcpy(req->rc, &h, sizeof(h));
 		req->rc->offset = 0;

-		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));
 		/* Then, read the whole packet (including the header) */
 		xen_9pfs_read_packet(req->rc->sdata, ring->data.in, h.size,
 				     masked_prod, &masked_cons,
-				     XEN_9PFS_RING_SIZE);
+				     XEN_9PFS_RING_SIZE(ring));

 		virt_mb();
 		cons += h.size;
@@ -264,7 +265,7 @@ static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)

 static struct p9_trans_module p9_xen_trans = {
 	.name = "xen",
-	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
+	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT - 2),
 	.def = 1,
 	.create = p9_xen_create,
 	.close = p9_xen_close,
@@ -292,14 +293,16 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
 		if (priv->rings[i].irq > 0)
 			unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
 		if (priv->rings[i].data.in) {
-			for (j = 0; j < (1 << XEN_9PFS_RING_ORDER); j++) {
+			for (j = 0;
+			     j < (1 << priv->rings[i].intf->ring_order);
+			     j++) {
 				grant_ref_t ref;

 				ref = priv->rings[i].intf->ref[j];
 				gnttab_end_foreign_access(ref, 0, 0);
 			}
 			free_pages((unsigned long)priv->rings[i].data.in,
-				   XEN_9PFS_RING_ORDER -
+				   priv->rings[i].intf->ring_order -
 				   (PAGE_SHIFT - XEN_PAGE_SHIFT));
 		}
 		gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
@@ -320,7 +323,8 @@ static int xen_9pfs_front_remove(struct xenbus_device *dev)
 }

 static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
-					 struct xen_9pfs_dataring *ring)
+					 struct xen_9pfs_dataring *ring,
+					 unsigned int order)
 {
 	int i = 0;
 	int ret = -ENOMEM;
@@ -339,21 +343,21 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
 		goto out;
 	ring->ref = ret;
 	bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-			XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+			order - (PAGE_SHIFT - XEN_PAGE_SHIFT));
 	if (!bytes) {
 		ret = -ENOMEM;
 		goto out;
 	}
-	for (; i < (1 << XEN_9PFS_RING_ORDER); i++) {
+	for (; i < (1 << order); i++) {
 		ret = gnttab_grant_foreign_access(
 				dev->otherend_id, virt_to_gfn(bytes) + i, 0);
 		if (ret < 0)
 			goto out;
 		ring->intf->ref[i] = ret;
 	}
-	ring->intf->ring_order = XEN_9PFS_RING_ORDER;
+	ring->intf->ring_order = order;
 	ring->data.in = bytes;
-	ring->data.out = bytes + XEN_9PFS_RING_SIZE;
+	ring->data.out = bytes + XEN_FLEX_RING_SIZE(order);

 	ret = xenbus_alloc_evtchn(dev, &ring->evtchn);
 	if (ret)
@@ -371,7 +375,7 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
 	for (i--; i >= 0; i--)
 		gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
 	free_pages((unsigned long)bytes,
-		   XEN_9PFS_RING_ORDER -
+		   ring->intf->ring_order -
 		   (PAGE_SHIFT - XEN_PAGE_SHIFT));
 	}
 	gnttab_end_foreign_access(ring->ref, 0, 0);
@@ -401,8 +405,10 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
 		return -EINVAL;
 	max_ring_order = xenbus_read_unsigned(dev->otherend,
 					      "max-ring-page-order", 0);
-	if (max_ring_order < XEN_9PFS_RING_ORDER)
-		return -EINVAL;
+	if (max_ring_order > XEN_9PFS_RING_ORDER)
+		max_ring_order = XEN_9PFS_RING_ORDER;
+	if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
+		p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order);

 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
@@ -419,7 +425,8 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,

 	for (i = 0; i < priv->num_rings; i++) {
 		priv->rings[i].priv = priv;
-		ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i]);
+		ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i],
+						    max_ring_order);
 		if (ret < 0)
 			goto error;
 	}
--
2.17.1