Received: by 2002:a25:683:0:0:0:0:0 with SMTP id 125csp4240226ybg; Mon, 8 Jun 2020 02:48:48 -0700 (PDT) X-Google-Smtp-Source: ABdhPJydoCUrhKrQ/kraIgqWJ6LwgPP39ZMUdVuQqcooBus4LwN8YCKZUD5tg9WNTTvPfy2T2DgF X-Received: by 2002:a05:6402:1216:: with SMTP id c22mr22022746edw.208.1591609727999; Mon, 08 Jun 2020 02:48:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1591609727; cv=none; d=google.com; s=arc-20160816; b=htxB6lLPDu1lGPFQW5+gbtYN1BNcesRdABEOkvIHGreC41/TFne1ixD0y59A1muKaL LAPipbcmbWkMRcBWGwUCYl+mYd0Sa2DgBV27g+2ehJ1xEuX0Cnj4OO1xxFXK1u/VjUjH TdNsoCINFBwRJwEGwPdsvDK4GkxiNQc1QDoDhWhofF2Lg+s6BFejMBUwBeNVgGwAmQUn 2C09bG/qp48C223QP+ijrgX7lXW4l6L3UN5Ez7biIG6QxiJWaovhSimo/R68umd2mWvN Eko6MtDYYBWDRxdwJ2J3xf2UjK2ApSpgsiPFsqauieOTYh1sdCKBC1V4urlwFtMQzvnR nBUw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-language :content-transfer-encoding:in-reply-to:mime-version:user-agent:date :message-id:from:references:cc:to:subject:dkim-signature; bh=/UosqSxCh5JuCf83w6SVNyttSVeqWDozCD9jkpIiB4s=; b=XBaqsQd+oxCm3oaR68XJ3qm6GlXBNhEQZuBreA23oqoiUJTxgJQzQatfhAXbkwdsB6 T2lA8SBJrdDbJ4fZT8t70/84kGQ1Qg/sSqSBdNtVNRZHBvrAZ+uPIbv7FThPWeLSF2cX lVlKVREq9EOatjgoXvdLF3Ri8o9IHFgSd9yJlVjlYQRmaGGGjrqQy19G+CZY5iELMfOD 323uyIIwMnCcsZZGvtNogPxVUS4T87Y7hlmUZrFRwWBrqfPMOLJU/GKAuZyzGXAGvy5s I1Ut7trhIRRovohv+g6VrvyR9q1jYK7vYitco2kzydXTqU90IpaHeT3zkLgyXermfuxW sG9w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=D5dYpCpC; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. [23.128.96.18]) by mx.google.com with ESMTP id r25si7963265ejz.752.2020.06.08.02.48.25; Mon, 08 Jun 2020 02:48:47 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=D5dYpCpC; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729293AbgFHJo2 (ORCPT + 99 others); Mon, 8 Jun 2020 05:44:28 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:52683 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1729267AbgFHJo1 (ORCPT ); Mon, 8 Jun 2020 05:44:27 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1591609465; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/UosqSxCh5JuCf83w6SVNyttSVeqWDozCD9jkpIiB4s=; b=D5dYpCpCgX0V9rzCQGH7WCd+iiU/zNy3bMD7Dpbhe5aJ7x/EMe+tplvVx2DsgG7rl5M6gC 91HGbYny9395GzR0J1q5os0hu6t0uhM9ZX165KFyh0wOCOi62haCKlsCXyMq2VUO8tedsA dasWTjQLrkQYfLgnKTRCBwDM7OTPWRo= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-196-C0fOS73UNGSerR78MqhKCw-1; Mon, 08 Jun 2020 05:44:21 -0400 X-MC-Unique: C0fOS73UNGSerR78MqhKCw-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1D37A107ACF3; Mon, 8 Jun 2020 09:44:19 +0000 (UTC) Received: from [10.72.13.71] (ovpn-13-71.pek2.redhat.com [10.72.13.71]) by smtp.corp.redhat.com (Postfix) with ESMTP id 456F21084436; Mon, 8 Jun 2020 09:44:02 +0000 (UTC) Subject: Re: [PATCH 5/6] vdpa: introduce virtio pci driver To: "Michael S. Tsirkin" Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, rob.miller@broadcom.com, lingshan.zhu@intel.com, eperezma@redhat.com, lulu@redhat.com, shahafs@mellanox.com, hanand@xilinx.com, mhabets@solarflare.com, gdawar@xilinx.com, saugatm@xilinx.com, vmireyno@marvell.com, zhangweining@ruijie.com.cn, eli@mellanox.com References: <20200529080303.15449-1-jasowang@redhat.com> <20200529080303.15449-6-jasowang@redhat.com> <20200602010332-mutt-send-email-mst@kernel.org> <5dbb0386-beeb-5bf4-d12e-fb5427486bb8@redhat.com> <6b1d1ef3-d65e-08c2-5b65-32969bb5ecbc@redhat.com> <20200607095012-mutt-send-email-mst@kernel.org> <9b1abd2b-232c-aa0f-d8bb-03e65fd47de2@redhat.com> <20200608021438-mutt-send-email-mst@kernel.org> <20200608052041-mutt-send-email-mst@kernel.org> From: Jason Wang Message-ID: <9d2571b6-0b95-53b3-6989-b4d801eeb623@redhat.com> Date: Mon, 8 Jun 2020 17:43:58 +0800 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0 MIME-Version: 1.0 In-Reply-To: <20200608052041-mutt-send-email-mst@kernel.org> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit Content-Language: en-US X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 2020/6/8 下午5:31, Michael S. Tsirkin wrote: > On Mon, Jun 08, 2020 at 05:18:44PM +0800, Jason Wang wrote: >> On 2020/6/8 下午2:32, Michael S. Tsirkin wrote: >>> On Mon, Jun 08, 2020 at 11:32:31AM +0800, Jason Wang wrote: >>>> On 2020/6/7 下午9:51, Michael S. Tsirkin wrote: >>>>> On Fri, Jun 05, 2020 at 04:54:17PM +0800, Jason Wang wrote: >>>>>> On 2020/6/2 下午3:08, Jason Wang wrote: >>>>>>>>> +static const struct pci_device_id vp_vdpa_id_table[] = { >>>>>>>>> +    { PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) }, >>>>>>>>> +    { 0 } >>>>>>>>> +}; >>>>>>>> This looks like it'll create a mess with either virtio pci >>>>>>>> or vdpa being loaded at random. Maybe just don't specify >>>>>>>> any IDs for now. Down the road we could get a >>>>>>>> distinct vendor ID or a range of device IDs for this. >>>>>>> Right, will do. >>>>>>> >>>>>>> Thanks >>>>>> Rethink about this. If we don't specify any ID, the binding won't work. >>>>> We can bind manually. It's not really for production anyway, so >>>>> not a big deal imho. >>>> I think you mean doing it via "new_id", right. >>> I really meant driver_override. This is what people have been using >>> with pci-stub for years now. >> >> Do you want me to implement "driver_overrid" in this series, or a NULL >> id_table is sufficient? > > Doesn't the pci subsystem create driver_override for all devices > on the pci bus? Yes, I miss this. >>>>>> How about using a dedicated subsystem vendor id for this? >>>>>> >>>>>> Thanks >>>>> If virtio vendor id is used then standard driver is expected >>>>> to bind, right? Maybe use a dedicated vendor id? >>>> I meant something like: >>>> >>>> static const struct pci_device_id vp_vdpa_id_table[] = { >>>>     { PCI_DEVICE_SUB(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID, >>>> VP_TEST_VENDOR_ID, VP_TEST_DEVICE_ID) }, >>>>     { 0 } >>>> }; >>>> >>>> Thanks >>>> >>> Then regular virtio will still bind to it. It has >>> >>> drivers/virtio/virtio_pci_common.c: { PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) }, >>> >>> >> IFCVF use this to avoid the binding to regular virtio device. > > Ow. Indeed: > > #define IFCVF_VENDOR_ID 0x1AF4 > > Which is of course not an IFCVF vendor id, it's the Red Hat vendor ID. > > I missed that. > > Does it actually work if you bind a virtio driver to it? It works. > I'm guessing no otherwise they wouldn't need IFC driver, right? > Looking at the driver, they used a dedicated bar for dealing with virtqueue state save/restore. It > > >> Looking at >> pci_match_one_device() it checks both subvendor and subdevice there. >> >> Thanks > > But IIUC there is no guarantee that driver with a specific subvendor > matches in presence of a generic one. > So either IFC or virtio pci can win, whichever binds first. I'm not sure I get there. But I try manually bind IFCVF to qemu's virtio-net-pci, and it fails. Thanks > > I guess we need to blacklist IFC in virtio pci probe code. Ugh. >