From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Michael S. Tsirkin", Ladi Prosek
Subject: [PATCH 4.4 38/69] virtio_ring: Make interrupt suppression spec compliant
Date: Wed, 9 Nov 2016 11:44:16 +0100
Message-Id: <20161109102902.717693565@linuxfoundation.org>
X-Mailer: git-send-email 2.10.2
In-Reply-To: <20161109102901.127641653@linuxfoundation.org>
References: <20161109102901.127641653@linuxfoundation.org>
User-Agent: quilt/0.64
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ladi Prosek

commit 0ea1e4a6d9b62cf29e210d2b4ba9fd43917522e3 upstream.

According to the spec, if the VIRTIO_RING_F_EVENT_IDX feature bit is
negotiated, the driver MUST set the available ring flags to 0.

Not dirtying the available ring in virtqueue_disable_cb also has a minor
positive performance impact, reducing L1 dcache load misses by ~0.5% in
vring_bench.

Writes to the used event field (vring_used_event) are still
unconditional.

Cc: Michael S. Tsirkin
Signed-off-by: Ladi Prosek
Signed-off-by: Michael S. Tsirkin
Signed-off-by: Greg Kroah-Hartman

---
 drivers/virtio/virtio_ring.c |   14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -548,7 +548,8 @@ void virtqueue_disable_cb(struct virtque
 
 	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
 		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
-		vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+		if (!vq->event)
+			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
 	}
 
 }
@@ -580,7 +581,8 @@ unsigned virtqueue_enable_cb_prepare(str
 	 * entry. Always do both to keep code simple. */
 	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
 		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
-		vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+		if (!vq->event)
+			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
 	}
 	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx);
 	END_USE(vq);
@@ -648,10 +650,11 @@ bool virtqueue_enable_cb_delayed(struct
 	 * more to do. */
 	/* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
 	 * either clear the flags bit or point the event index at the next
-	 * entry. Always do both to keep code simple. */
+	 * entry. Always update the event index to keep code simple. */
 	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
 		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
-		vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+		if (!vq->event)
+			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
 	}
 	/* TODO: tune this threshold */
 	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
@@ -770,7 +773,8 @@ struct virtqueue *vring_new_virtqueue(un
 	/* No callback?  Tell other side not to bother us. */
 	if (!callback) {
 		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
-		vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow);
+		if (!vq->event)
+			vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow);
 	}
 
 	/* Put everything in free lists. */
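
For readers outside the kernel tree, the pattern each hunk applies can be shown
in isolation. The sketch below is not the kernel's implementation: the struct
and all identifiers (vq_sketch, event_idx_negotiated, sketch_disable_cb, ...)
are made up for illustration, endianness conversion (cpu_to_virtio16) and
memory barriers are omitted, and it only encodes the split-ring rule quoted
above: once VIRTIO_RING_F_EVENT_IDX is negotiated, avail->flags must stay 0 and
interrupt suppression is communicated through the used event index alone.

/* Minimal illustrative sketch, not kernel code. */
#include <stdbool.h>
#include <stdint.h>

#define SKETCH_AVAIL_F_NO_INTERRUPT 1

struct vq_sketch {
	volatile uint16_t *avail_flags;   /* avail->flags word in the shared ring */
	volatile uint16_t *used_event;    /* the vring_used_event() slot          */
	uint16_t avail_flags_shadow;      /* driver-private copy of the flags     */
	uint16_t last_used_idx;
	bool event_idx_negotiated;        /* VIRTIO_RING_F_EVENT_IDX accepted     */
};

/* Ask the device not to interrupt us.  With EVENT_IDX negotiated the spec
 * requires flags to stay 0, so only the shadow copy is updated. */
static void sketch_disable_cb(struct vq_sketch *vq)
{
	if (!(vq->avail_flags_shadow & SKETCH_AVAIL_F_NO_INTERRUPT)) {
		vq->avail_flags_shadow |= SKETCH_AVAIL_F_NO_INTERRUPT;
		if (!vq->event_idx_negotiated)
			*vq->avail_flags = vq->avail_flags_shadow;
	}
}

/* Re-enable interrupts: clear the shadow bit (and the ring copy only when
 * EVENT_IDX was not negotiated), then point used_event at the next entry. */
static void sketch_enable_cb(struct vq_sketch *vq)
{
	if (vq->avail_flags_shadow & SKETCH_AVAIL_F_NO_INTERRUPT) {
		vq->avail_flags_shadow &= ~SKETCH_AVAIL_F_NO_INTERRUPT;
		if (!vq->event_idx_negotiated)
			*vq->avail_flags = vq->avail_flags_shadow;
	}
	*vq->used_event = vq->last_used_idx;
}

Keeping the shadow copy up to date in both cases means the driver's view of the
flags stays consistent either way, which is what lets the patch guard only the
write to the shared ring rather than the whole block.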