Date: Tue, 17 Apr 2018 22:56:26 +0800
From: Tiwei Bie
To: "Michael S. Tsirkin"
Cc: Jason Wang, wexu@redhat.com, virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org, jfreimann@redhat.com
Subject: Re: [RFC v2] virtio: support packed ring
Message-ID: <20180417145626.y5vei4y6irrdw7ky@debian>
References: <20180401141216.8969-1-tiwei.bie@intel.com>
 <20180413071529.f4esh654dakodf4f@debian>
 <8dee7d62-ac0b-54ba-6bec-4bc4a6fb34e9@redhat.com>
 <20180417025133.7t7exmizgolr565z@debian>
 <20180417151654-mutt-send-email-mst@kernel.org>
 <20180417124716.wsypd5zl4n4galrz@debian>
 <20180417170354-mutt-send-email-mst@kernel.org>
In-Reply-To: <20180417170354-mutt-send-email-mst@kernel.org>

On Tue, Apr 17, 2018 at 05:04:59PM +0300, Michael S. Tsirkin wrote:
> On Tue, Apr 17, 2018 at 08:47:16PM +0800, Tiwei Bie wrote:
> > On Tue, Apr 17, 2018 at 03:17:41PM +0300, Michael S. Tsirkin wrote:
> > > On Tue, Apr 17, 2018 at 10:51:33AM +0800, Tiwei Bie wrote:
> > > > On Tue, Apr 17, 2018 at 10:11:58AM +0800, Jason Wang wrote:
> > > > > On 2018年04月13日 15:15, Tiwei Bie wrote:
> > > > > > On Fri, Apr 13, 2018 at 12:30:24PM +0800, Jason Wang wrote:
> > > > > > > On 2018年04月01日 22:12, Tiwei Bie wrote:
> > > > [...]
> > > > > > > > +static int detach_buf_packed(struct vring_virtqueue *vq, unsigned int head,
> > > > > > > > +			     void **ctx)
> > > > > > > > +{
> > > > > > > > +	struct vring_packed_desc *desc;
> > > > > > > > +	unsigned int i, j;
> > > > > > > > +
> > > > > > > > +	/* Clear data ptr. */
> > > > > > > > +	vq->desc_state[head].data = NULL;
> > > > > > > > +
> > > > > > > > +	i = head;
> > > > > > > > +
> > > > > > > > +	for (j = 0; j < vq->desc_state[head].num; j++) {
> > > > > > > > +		desc = &vq->vring_packed.desc[i];
> > > > > > > > +		vring_unmap_one_packed(vq, desc);
> > > > > > > > +		desc->flags = 0x0;
> > > > > > > Looks like this is unnecessary.
> > > > > > It's safer to zero it. If we don't zero it, after we
> > > > > > call virtqueue_detach_unused_buf_packed() which calls
> > > > > > this function, the desc is still available to the
> > > > > > device.
> > > > >
> > > > > Well detach_unused_buf_packed() should be called after device is stopped,
> > > > > otherwise even if you try to clear, there will still be a window that device
> > > > > may use it.
> > > >
> > > > This is not about whether the device has been stopped or
> > > > not. We don't have other places to re-initialize the ring
> > > > descriptors and wrap_counter. So they need to be set to
> > > > the correct values when doing detach_unused_buf.
> > > >
> > > > Best regards,
> > > > Tiwei Bie
> > >
> > > find vqs is the time to do it.
> >
> > The .find_vqs() will call .setup_vq() which will eventually
> > call vring_create_virtqueue(). It's a different case. Here
> > we're talking about re-initializing the descs and updating
> > the wrap counter when detaching the unused descs (In this
> > case, split ring just needs to decrease vring.avail->idx).
> >
> > Best regards,
> > Tiwei Bie
>
> There's no requirement that virtqueue_detach_unused_buf re-initializes
> the descs. It happens on cleanup path just before drivers delete the
> vqs.

Cool, I wasn't aware of it.
I saw split ring decrease vring.avail->idx after detaching an unused
desc, so I thought detaching an unused desc also needs to make sure
that the ring state will be updated correspondingly. If there is no
such requirement, do you think it's OK to remove the two lines below:

	vq->avail_idx_shadow--;
	vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev,
					       vq->avail_idx_shadow);

from virtqueue_detach_unused_buf() (the current split-ring version is
sketched below for reference), so that we could have one generic
function to handle both rings:

void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
{
	struct vring_virtqueue *vq = to_vvq(_vq);
	unsigned int num, i;
	void *buf;

	START_USE(vq);

	num = vq->packed ? vq->vring_packed.num : vq->vring.num;

	for (i = 0; i < num; i++) {
		if (!vq->desc_state[i].data)
			continue;
		/* detach_buf clears data, so grab it now. */
		buf = vq->desc_state[i].data;
		detach_buf(vq, i, NULL);
		END_USE(vq);
		return buf;
	}
	/* That should have freed everything. */
	BUG_ON(vq->vq.num_free != num);

	END_USE(vq);
	return NULL;
}

Best regards,
Tiwei Bie
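For reference, the split-ring virtqueue_detach_unused_buf() that the two
avail-index lines above come from looks roughly like this in
drivers/virtio/virtio_ring.c at the time of this thread (sketched from
memory, so details may differ slightly); the proposed generic version above
essentially drops those two updates and sizes the loop by whichever ring is
in use:

void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
{
	struct vring_virtqueue *vq = to_vvq(_vq);
	unsigned int i;
	void *buf;

	START_USE(vq);

	for (i = 0; i < vq->vring.num; i++) {
		if (!vq->desc_state[i].data)
			continue;
		/* detach_buf clears data, so grab it now. */
		buf = vq->desc_state[i].data;
		detach_buf(vq, i, NULL);
		/* The two avail-index updates in question: roll the avail
		 * index back so the detached entry is no longer advertised. */
		vq->avail_idx_shadow--;
		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev,
						       vq->avail_idx_shadow);
		END_USE(vq);
		return buf;
	}
	/* That should have freed everything. */
	BUG_ON(vq->vq.num_free != vq->vring.num);

	END_USE(vq);
	return NULL;
}

Note that detach_buf(vq, i, NULL) in the proposed generic version is
presumably dispatched internally to the split or packed variant (e.g. based
on vq->packed); that dispatch is not shown in this thread, so it is an
assumption about the RFC rather than quoted code.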