Date: Sat, 19 May 2018 10:29:38 +0800
From: Tiwei Bie
To: Jason Wang
Cc: mst@redhat.com, virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	wexu@redhat.com, jfreimann@redhat.com
Subject: Re: [RFC v4 3/5] virtio_ring: add packed ring support
Message-ID: <20180519022938.GA18888@debian>
References: <20180516123909.GB986@debian>
 <20180516134550.GB4171@debian>
 <20180516143332.GA1957@debian>
 <20180518112950.GA28224@debian>
 <20180518143334.GA4537@debian>
 <1a661df0-8ca9-b31d-9c17-8684d608a33a@redhat.com>
In-Reply-To: <1a661df0-8ca9-b31d-9c17-8684d608a33a@redhat.com>
User-Agent: Mutt/1.9.5 (2018-04-13)

On Sat, May 19, 2018 at 09:12:30AM +0800, Jason Wang wrote:
> On 2018-05-18 22:33, Tiwei Bie wrote:
> > On Fri, May 18, 2018 at 09:17:05PM +0800, Jason Wang wrote:
> > > On 2018-05-18 19:29, Tiwei Bie wrote:
> > > > On Thu, May 17, 2018 at 08:01:52PM +0800, Jason Wang wrote:
> > > > > On 2018-05-16 22:33, Tiwei Bie wrote:
> > > > > > On Wed, May 16, 2018 at 10:05:44PM +0800, Jason Wang wrote:
> > > > > > > On 2018-05-16 21:45, Tiwei Bie wrote:
> > > > > > > > On Wed, May 16, 2018 at 08:51:43PM +0800, Jason Wang wrote:
> > > > > > > > > On 2018-05-16 20:39, Tiwei Bie wrote:
> > > > > > > > > > On Wed, May 16, 2018 at 07:50:16PM +0800, Jason Wang wrote:
> > > > > > > > > > > On 2018-05-16 16:37, Tiwei Bie wrote:
> > > > > > [...]
> > > > > > > > > > > > +static void detach_buf_packed(struct vring_virtqueue *vq, unsigned int head,
> > > > > > > > > > > > +			      unsigned int id, void **ctx)
> > > > > > > > > > > > +{
> > > > > > > > > > > > +	struct vring_packed_desc *desc;
> > > > > > > > > > > > +	unsigned int i, j;
> > > > > > > > > > > > +
> > > > > > > > > > > > +	/* Clear data ptr. */
> > > > > > > > > > > > +	vq->desc_state[id].data = NULL;
> > > > > > > > > > > > +
> > > > > > > > > > > > +	i = head;
> > > > > > > > > > > > +
> > > > > > > > > > > > +	for (j = 0; j < vq->desc_state[id].num; j++) {
> > > > > > > > > > > > +		desc = &vq->vring_packed.desc[i];
> > > > > > > > > > > > +		vring_unmap_one_packed(vq, desc);
> > > > > > > > > > > As mentioned in the previous discussion, this probably won't work for the case
> > > > > > > > > > > of out-of-order completion since it depends on the information in the
> > > > > > > > > > > descriptor ring.
> > > > > > > > > > > We probably need to extend ctx to record such information.
> > > > > > > > > > The above code doesn't depend on the information in the descriptor
> > > > > > > > > > ring. The vq->desc_state[] is the extended ctx.
> > > > > > > > > >
> > > > > > > > > > Best regards,
> > > > > > > > > > Tiwei Bie
> > > > > > > > > Yes, but desc is a pointer into the descriptor ring, I think, so
> > > > > > > > > vring_unmap_one_packed() still depends on the content of the descriptor ring?
> > > > > > > > I got your point now. I think it makes sense to reserve
> > > > > > > > the bits of the addr field. The driver shouldn't try to get
> > > > > > > > addrs from the descriptors when cleaning up the descriptors,
> > > > > > > > no matter whether we support out-of-order or not.
> > > > > > > Maybe I was wrong, but I remember the spec mentioned something like this.
> > > > > > You're right. The spec mentioned this. I was just repeating
> > > > > > the spec to emphasize that it does make sense. :)
> > > > > >
> > > > > > > > But combining it with the out-of-order support will
> > > > > > > > mean that the driver still needs to maintain a desc/ctx
> > > > > > > > list that is very similar to the desc ring in the split
> > > > > > > > ring. I'm not quite sure whether it's something we want.
> > > > > > > > If it is true, I'll do it. So do you think we also want
> > > > > > > > to maintain such a desc/ctx list for the packed ring?
> > > > > > > To make it work for OOO backends I think we need something like this
> > > > > > > (hardware NIC drivers usually have something like this).
> > > > > > Which hardware NIC drivers have this?
> > > > > It's quite common I think, e.g. drivers track the DMA addr and page frag
> > > > > somewhere, e.g. ring->rx_info in the mlx4 driver.
> > > > It seems that I had a misunderstanding of your
> > > > previous comments. I know it's quite common for
> > > > drivers to track e.g. DMA addrs somewhere (and
> > > > I think one reason behind this is that they want
> > > > to reuse the bits of the addr field).
> > > Yes, we may want this for virtio-net as well in the future.
> > >
> > > > But tracking
> > > > addrs somewhere doesn't mean supporting OOO.
> > > > I thought you were saying it's quite common for
> > > > hardware NIC drivers to support OOO (i.e. NICs
> > > > will return the descriptors OOO):
> > > >
> > > > I'm not familiar with mlx4, maybe I'm wrong.
> > > > I just had a quick glance, and I found the below
> > > > comment in mlx4_en_process_rx_cq():
> > > >
> > > > ```
> > > > /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
> > > >  * descriptor offset can be deduced from the CQE index instead of
> > > >  * reading 'cqe->index' */
> > > > index = cq->mcq.cons_index & ring->size_mask;
> > > > cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
> > > > ```
> > > >
> > > > It seems that although they have a completion
> > > > queue, they are still using the ring in order.
> > > I guess so (at least from the above bits). Git grep -i "out of order" in
> > > drivers/net gives some hints. It looks like few devices do this.
> > >
> > > > I guess maybe storage devices may want OOO.
> > > Right, some iSCSI did.
> > >
> > > But tracking them elsewhere is not only for OOO.
> > >
> > > The spec says:
> > >
> > > for element address:
> > >
> > > "
> > > In a used descriptor, Element Address is unused.
> > > "
> > >
> > > for the Next flag:
> > >
> > > "
> > > For example, if descriptors are used in the same order in which they are
> > > made available, this will result in
> > > the used descriptor overwriting the first available descriptor in the list,
> > > the used descriptor for the next list
> > > overwriting the first available descriptor in the next list, etc.
> > > "
> > >
> > > for in-order completion:
> > >
> > > "
> > > This will result in the used descriptor overwriting the first available
> > > descriptor in the batch, the used descriptor
> > > for the next batch overwriting the first available descriptor in the next
> > > batch, etc.
> > > "
> > >
> > > So:
> > >
> > > - It's an alignment to the spec
> > > - The device may (or should) overwrite the descriptor, which also makes the
> > >   address field useless.
> > You didn't get my point...
> I hope not.
>
> > I agreed the driver should track the DMA addrs or some
> > other necessary things from the very beginning. And
> > I also repeated the spec to emphasize that it does
> > make sense. And I'd like to do that.
> >
> > What I was saying is that, to support OOO, we may
> > need to manage these contexts (which save DMA addrs
> > etc.) via a list similar to the desc list
> > maintained via `next` in the split ring, instead of an
> > array whose elements can always be indexed directly.
> My point is these contexts are a must (not only for OOO).

Yeah, and I have exactly the same point after you pointed out
that I shouldn't get the addrs from the descs. I do think it
makes sense. I'll do it in the next version. I don't have
any doubt about it. All my questions are about the OOO,
not about whether we should save the context. It just
seems that you thought I didn't want to do it, and were
trying to convince me that I should do it.

> > The desc ring in the split ring is an array, but its
> > free entries are managed as a list via next. I was
> > just wondering whether we want to manage such a list
> > because of OOO. It's just a very simple question
> > on which I want to hear your opinion... (It doesn't
> > mean anything, e.g. it doesn't mean I don't want
> > to support OOO. It's just a simple question...)
> So the answer to the question is yes. But I admit I don't have a better idea
> other than what you propose here (something like the split ring, which is a
> little bit sad). Maybe Michael has.
Yeah, that's why I asked this question. It will make the
packed ring a bit similar to the split ring, at least in the
driver part. So I want to draw your attention to this to
make sure that we're on the same page.

Best regards,
Tiwei Bie

> Thanks
>
> > Best regards,
> > Tiwei Bie
> >
> > > Thanks
> > >
> > > > Best regards,
> > > > Tiwei Bie
> > > >
> > > > > Thanks
> > > > >
> > > > > > > Not for the patch, but it looks like having an OUT_OF_ORDER feature bit
> > > > > > > is much simpler to start with.
> > > > > > +1
> > > > > >
> > > > > > Best regards,
> > > > > > Tiwei Bie