Date: Wed, 20 May 2015 09:26:50 +0100
From: Wei Liu
To: Julien Grall
CC: Wei Liu, ...
Subject: Re: [Xen-devel] [RFC 21/23] net/xen-netback: Make it running on 64KB page granularity
Message-ID: <20150520082650.GU26335@zion.uk.xensource.com>
In-Reply-To: <555BBFA7.8030502@citrix.com>

On Tue, May 19, 2015 at 11:56:39PM +0100, Julien Grall wrote:
> >>diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> >>index 0eda6e9..c2a5402 100644
> >>--- a/drivers/net/xen-netback/common.h
> >>+++ b/drivers/net/xen-netback/common.h
> >>@@ -204,7 +204,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
> >> /* Maximum number of Rx slots a to-guest packet may use, including the
> >>  * slot needed for GSO meta-data.
> >>  */
> >>-#define XEN_NETBK_RX_SLOTS_MAX (MAX_SKB_FRAGS + 1)
> >>+#define XEN_NETBK_RX_SLOTS_MAX ((MAX_SKB_FRAGS + 1) * XEN_PFN_PER_PAGE)
> >>
> >> enum state_bit_shift {
> >>	/* This bit marks that the vif is connected */
> >>
> >>The function xenvif_wait_for_rx_work never returns. I guess it's because
> >>there are not enough slots available.
> >>
> >>With 64KB page granularity we ask for 16 times more slots than with 4KB
> >>page granularity, although it's very unlikely that all the slots will be
> >>used.
> >>
> >>FWIW I pointed out the same problem on blkfront.
> >>
> >
> >This is not going to work. The ring in netfront / netback has only 256
> >slots. Now you ask netback to reserve more than 256 slots -- (17 +
> >1) * (64 / 4) = 288, which can never be fulfilled. See the call to
> >xenvif_rx_ring_slots_available.
> >
> >I think XEN_NETBK_RX_SLOTS_MAX derives from the fact that each packet to
> >the guest cannot be larger than 64K. So you might be able to use
> >
> >#define XEN_NETBK_RX_SLOTS_MAX ((65536 / XEN_PAGE_SIZE) + 1)
>
> I didn't know that packets cannot be larger than 64KB. That simplifies
> the problem a lot.
>

Thinking about this more: you will need one more slot for GSO
information, so make it ((65536 / XEN_PAGE_SIZE) + 1 + 1).

> >
> >The blk driver may have a different story. But the default ring size (1
> >page) yields even fewer slots than net (given that sizeof(union(req/rsp))
> >is larger IIRC).
>
> I will check with Roger about blkback.
>
> --
> Julien Grall