From: Jason Wang
Date: Mon, 13 Sep 2021 14:08:02 +0800
Subject: Re: [PATCH 6/9] virtio_pci: harden MSI-X interrupts
To: "Michael S. Tsirkin"
Cc: virtualization, linux-kernel, "Hetzelt, Felicitas", "kaplan, david",
    Konrad Rzeszutek Wilk, pbonzini, Andi Kleen, Dan Williams,
    "Kuppuswamy, Sathyanarayanan", Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Peter Zijlstra, Andy Lutomirski, Bjorn Helgaas,
    Richard Henderson, Thomas Bogendoerfer, James E J Bottomley,
    Helge Deller, "David S . Miller", Arnd Bergmann, Jonathan Corbet,
    Peter H Anvin, Dave Hansen, Tony Luck, Kirill Shutemov,
    Sean Christopherson, Kuppuswamy Sathyanarayanan, X86 ML
In-Reply-To: <20210913015711-mutt-send-email-mst@kernel.org>
References: <20210913055353.35219-1-jasowang@redhat.com>
    <20210913055353.35219-7-jasowang@redhat.com>
    <20210913015711-mutt-send-email-mst@kernel.org>

On Mon, Sep 13, 2021 at 2:04 PM Michael S. Tsirkin wrote:
>
> On Mon, Sep 13, 2021 at 01:53:50PM +0800, Jason Wang wrote:
> > We used to synchronize pending MSI-X irq handlers via
> > synchronize_irq(); this may not work for an untrusted device, which
> > may keep sending interrupts after reset and lead to unexpected
> > results. Similarly, we should not enable MSI-X interrupts until the
> > device is ready. So this patch fixes those two issues by:
> >
> > 1) switching to disable_irq() to prevent the virtio interrupt
> >    handlers from being called after the device is reset.
> > 2) using IRQF_NO_AUTOEN and enabling the MSI-X irq during .ready()
> >
> > This makes sure the virtio interrupt handler won't be called before
> > virtio_device_ready() or after reset.
> >
> > Signed-off-by: Jason Wang
>
> I don't get the threat model here. Isn't disabling irqs done by the
> hypervisor anyway? Is there a reason to trust disable_irq but not
> device reset?

My understanding is that, e.g. in the case of SEV/TDX, we don't trust
the hypervisor, so the hypervisor can keep sending interrupts even if
the device is reset. The guest can only trust its own software
interrupt management logic to avoid calling the virtio callback in
this case.

Thanks
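As a rough illustration of the request_irq()/IRQF_NO_AUTOEN pattern the
commit message relies on (the demo_* names below are made up for this
sketch and are not part of the patch):

#include <linux/interrupt.h>

static irqreturn_t demo_handler(int irq, void *data)
{
	/* Would run the virtqueue/config callback in a real driver. */
	return IRQ_HANDLED;
}

static int demo_setup(int irq, void *data)
{
	int err;

	/*
	 * IRQF_NO_AUTOEN registers the handler but leaves the line
	 * disabled, so anything the device injects before we are ready
	 * never reaches demo_handler().
	 */
	err = request_irq(irq, demo_handler, IRQF_NO_AUTOEN, "demo", data);
	if (err)
		return err;

	/* ... initialize everything demo_handler() depends on ... */

	enable_irq(irq);	/* the .ready() step in this patch */
	return 0;
}

static void demo_reset(int irq)
{
	/*
	 * disable_irq() waits for any running handler and keeps the line
	 * masked afterwards, unlike synchronize_irq(), which only waits.
	 */
	disable_irq(irq);
}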
>
> Cc a bunch more people ...
>
>
> > ---
> >  drivers/virtio/virtio_pci_common.c | 27 +++++++++++++++++++++------
> >  drivers/virtio/virtio_pci_common.h |  6 ++++--
> >  drivers/virtio/virtio_pci_legacy.c |  5 +++--
> >  drivers/virtio/virtio_pci_modern.c |  6 ++++--
> >  4 files changed, 32 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
> > index b35bb2d57f62..0b9523e6dd39 100644
> > --- a/drivers/virtio/virtio_pci_common.c
> > +++ b/drivers/virtio/virtio_pci_common.c
> > @@ -24,8 +24,8 @@ MODULE_PARM_DESC(force_legacy,
> >  		 "Force legacy mode for transitional virtio 1 devices");
> >  #endif
> >
> > -/* wait for pending irq handlers */
> > -void vp_synchronize_vectors(struct virtio_device *vdev)
> > +/* disable irq handlers */
> > +void vp_disable_vectors(struct virtio_device *vdev)
> >  {
> >  	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> >  	int i;
> > @@ -34,7 +34,20 @@ void vp_synchronize_vectors(struct virtio_device *vdev)
> >  		synchronize_irq(vp_dev->pci_dev->irq);
> >
> >  	for (i = 0; i < vp_dev->msix_vectors; ++i)
> > -		synchronize_irq(pci_irq_vector(vp_dev->pci_dev, i));
> > +		disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
> > +}
> > +
> > +/* enable irq handlers */
> > +void vp_enable_vectors(struct virtio_device *vdev)
> > +{
> > +	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> > +	int i;
> > +
> > +	if (vp_dev->intx_enabled)
> > +		return;
> > +
> > +	for (i = 0; i < vp_dev->msix_vectors; ++i)
> > +		enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
> >  }
> >
> >  /* the notify function used when creating a virt queue */
> > @@ -141,7 +154,8 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
> >  	snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
> >  		 "%s-config", name);
> >  	err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
> > -			  vp_config_changed, 0, vp_dev->msix_names[v],
> > +			  vp_config_changed, IRQF_NO_AUTOEN,
> > +			  vp_dev->msix_names[v],
> >  			  vp_dev);
> >  	if (err)
> >  		goto error;
> > @@ -160,7 +174,8 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
> >  		snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
> >  			 "%s-virtqueues", name);
> >  		err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
> > -				  vp_vring_interrupt, 0, vp_dev->msix_names[v],
> > +				  vp_vring_interrupt, IRQF_NO_AUTOEN,
> > +				  vp_dev->msix_names[v],
> >  				  vp_dev);
> >  		if (err)
> >  			goto error;
> > @@ -337,7 +352,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs,
> >  			 "%s-%s",
> >  			 dev_name(&vp_dev->vdev.dev), names[i]);
> >  		err = request_irq(pci_irq_vector(vp_dev->pci_dev, msix_vec),
> > -				  vring_interrupt, 0,
> > +				  vring_interrupt, IRQF_NO_AUTOEN,
> >  				  vp_dev->msix_names[msix_vec],
> >  				  vqs[i]);
> >  		if (err)
> > diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
> > index beec047a8f8d..a235ce9ff6a5 100644
> > --- a/drivers/virtio/virtio_pci_common.h
> > +++ b/drivers/virtio/virtio_pci_common.h
> > @@ -102,8 +102,10 @@ static struct virtio_pci_device *to_vp_device(struct virtio_device *vdev)
> >  	return container_of(vdev, struct virtio_pci_device, vdev);
> >  }
> >
> > -/* wait for pending irq handlers */
> > -void vp_synchronize_vectors(struct virtio_device *vdev);
> > +/* disable irq handlers */
> > +void vp_disable_vectors(struct virtio_device *vdev);
> > +/* enable irq handlers */
> > +void vp_enable_vectors(struct virtio_device *vdev);
> >  /* the notify function used when creating a virt queue */
> >  bool vp_notify(struct virtqueue *vq);
> >  /* the config->del_vqs() implementation */
> > diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
> > index d62e9835aeec..bdf6bc667ab5 100644
> > --- a/drivers/virtio/virtio_pci_legacy.c
> > +++ b/drivers/virtio/virtio_pci_legacy.c
> > @@ -97,8 +97,8 @@ static void vp_reset(struct virtio_device *vdev)
> >  	/* Flush out the status write, and flush in device writes,
> >  	 * including MSi-X interrupts, if any. */
> >  	ioread8(vp_dev->ioaddr + VIRTIO_PCI_STATUS);
> > -	/* Flush pending VQ/configuration callbacks. */
> > -	vp_synchronize_vectors(vdev);
> > +	/* Disable VQ/configuration callbacks. */
> > +	vp_disable_vectors(vdev);
> >  }
> >
> >  static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
> > @@ -194,6 +194,7 @@ static void del_vq(struct virtio_pci_vq_info *info)
> >  }
> >
> >  static const struct virtio_config_ops virtio_pci_config_ops = {
> > +	.ready		= vp_enable_vectors,
> >  	.get		= vp_get,
> >  	.set		= vp_set,
> >  	.get_status	= vp_get_status,
> > diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
> > index 30654d3a0b41..acf0f6b6381d 100644
> > --- a/drivers/virtio/virtio_pci_modern.c
> > +++ b/drivers/virtio/virtio_pci_modern.c
> > @@ -172,8 +172,8 @@ static void vp_reset(struct virtio_device *vdev)
> >  	 */
> >  	while (vp_modern_get_status(mdev))
> >  		msleep(1);
> > -	/* Flush pending VQ/configuration callbacks. */
> > -	vp_synchronize_vectors(vdev);
> > +	/* Disable VQ/configuration callbacks. */
> > +	vp_disable_vectors(vdev);
> >  }
> >
> >  static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
> > @@ -380,6 +380,7 @@ static bool vp_get_shm_region(struct virtio_device *vdev,
> >  }
> >
> >  static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
> > +	.ready		= vp_enable_vectors,
> >  	.get		= NULL,
> >  	.set		= NULL,
> >  	.generation	= vp_generation,
> > @@ -397,6 +398,7 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
> >  };
> >
> >  static const struct virtio_config_ops virtio_pci_config_ops = {
> > +	.ready		= vp_enable_vectors,
> >  	.get		= vp_get,
> >  	.set		= vp_set,
> >  	.generation	= vp_generation,
> > --
> > 2.25.1
>
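As a rough sketch of how this looks from a driver's probe path,
assuming (as this series intends) that virtio_device_ready() ends up
invoking the new config->ready() hook; the my_* names are made up:

#include <linux/err.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

static void my_vq_callback(struct virtqueue *vq)
{
	/* Runs from vring_interrupt(); with this series that can only
	 * happen between virtio_device_ready() and the next reset. */
}

static int my_virtio_probe(struct virtio_device *vdev)
{
	struct virtqueue *vq;

	/* The transport requests the per-vector irqs with IRQF_NO_AUTOEN,
	 * so my_vq_callback() cannot fire yet. */
	vq = virtio_find_single_vq(vdev, my_vq_callback, "requests");
	if (IS_ERR(vq))
		return PTR_ERR(vq);

	/* ... set up whatever state my_vq_callback() needs ... */

	/* Sets DRIVER_OK and, assuming config->ready() is wired up as in
	 * this series, runs enable_irq() on each vector: callbacks become
	 * possible only from this point on. */
	virtio_device_ready(vdev);
	return 0;
}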