Date: Tue, 23 Mar 2021 14:17:09 +0100
From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Christoph Hellwig, Alex Williamson, Max Gurtovoy, Alexey Kardashevskiy,
    cohuck@redhat.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    liranl@nvidia.com, oren@nvidia.com, tzahio@nvidia.com, leonro@nvidia.com,
    yarong@nvidia.com, aviadye@nvidia.com, shahafs@nvidia.com,
    artemp@nvidia.com, kwankhede@nvidia.com, ACurrid@nvidia.com,
    cjia@nvidia.com, yishaih@nvidia.com, mjrosato@linux.ibm.com
Subject: Re: [PATCH 8/9] vfio/pci: export nvlink2 support into vendor vfio_pci drivers
Message-ID: <20210323131709.GA1982@lst.de>
References: <8941cf42-0c40-776e-6c02-9227146d3d66@nvidia.com>
 <20210319092341.14bb179a@omen.home.shazbot.org>
 <20210319161722.GY2356281@nvidia.com>
 <20210319162033.GA18218@lst.de>
 <20210319162848.GZ2356281@nvidia.com>
 <20210319163449.GA19186@lst.de>
 <20210319113642.4a9b0be1@omen.home.shazbot.org>
 <20210319200749.GB2356281@nvidia.com>
 <20210322151125.GA1051@lst.de>
 <20210322164411.GV2356281@nvidia.com>
In-Reply-To: <20210322164411.GV2356281@nvidia.com>

On Mon, Mar 22, 2021 at 01:44:11PM -0300, Jason Gunthorpe wrote:
> This isn't quite the scenario that needs solving. Let's go back to
> Max's V1 posting:
>
> The mlx5_vfio_pci.c pci_driver matches this:
>
> +	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
> +			 PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID) }, /* Virtio SNAP controllers */
>
> This overlaps with the match table in
> drivers/virtio/virtio_pci_common.c:
>
>	{ PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },
>
> So, if we do as you propose, we would have to add something
> Mellanox-specific to virtio_pci_common, which seems to me to just
> repeat this whole problem in more drivers.

Oh, yikes.

> The general thing that is happening is that people are adding VM
> migration capability to existing standard PCI interfaces like VFIO,
> NVMe, etc.

Well, if a migration capability is added to virtio (or NVMe) it should
be standardized and not vendor specific.
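
For reference, here is a minimal sketch of why the two quoted id_table
entries collide. It is not code from either patch series: the helper name
both_tables_match() and the example layout are assumptions made purely to
illustrate the overlap Jason describes.

	/*
	 * Illustrative sketch only. A "virtio SNAP controller" style device
	 * (vendor 0x1af4 Red Hat/Qumranet, device 0x1042, subsystem vendor
	 * 0x15b3 Mellanox) is matched by both tables below.
	 */
	#include <linux/pci.h>
	#include <linux/mod_devicetable.h>

	/* Vendor-specific table, as quoted above: narrowed by the Mellanox
	 * subsystem vendor ID. */
	static const struct pci_device_id mlx5_vfio_pci_table[] = {
		{ PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1042,
				 PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID) },
		{ }
	};

	/* Generic table from drivers/virtio/virtio_pci_common.c: matches any
	 * device with the Red Hat/Qumranet vendor ID, so it also matches the
	 * SNAP controller above. */
	static const struct pci_device_id virtio_pci_id_table[] = {
		{ PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },
		{ }
	};

	/* Both lookups return a hit for the same pci_dev, so which driver
	 * actually binds is decided by probe/load order or an explicit
	 * driver_override from userspace, not by the match tables. */
	static bool both_tables_match(struct pci_dev *pdev)
	{
		return pci_match_id(mlx5_vfio_pci_table, pdev) &&
		       pci_match_id(virtio_pci_id_table, pdev);
	}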