From: Eli Cohen <eli@mellanox.com>
To: mst@redhat.com, jasowang@redhat.com, virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: shahafs@mellanox.com, saeedm@mellanox.com, parav@mellanox.com, Eli Cohen <eli@mellanox.com>
Subject: [PATCH V3 vhost next 00/10] VDPA support for Mellanox ConnectX devices
Date: Tue, 28 Jul 2020 09:05:29 +0300
Message-Id: <20200728060539.4163-1-eli@mellanox.com>
Hi Michael,

Please note that this series depends on mlx5 core device driver patches in the mlx5-next branch of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git:

  git pull git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mlx5-next

It also depends on Jason Wang's patches submitted a couple of weeks ago:

  vdpa_sim: use the batching API
  vhost-vdpa: support batch updating

The following series of patches provides VDPA support for Mellanox devices. The supported devices are ConnectX-6 Dx and newer. Currently, only a network driver is implemented; future patches will introduce a block device driver. iperf performance on a single queue is around 12 Gbps. Future patches will introduce multi-queue support.

The files are organized such that code that can be shared by different VDPA implementations is placed in a common area under drivers/vdpa/mlx5/core.

Only virtual functions are currently supported; physical functions (PFs) are skipped by the driver. In addition, certain firmware capabilities must be set to enable the driver.

To make use of the VDPA net driver, one must load mlx5_vdpa. In that case, VFs will be operated by the VDPA driver. Although one can still see a regular instance of a network driver on the VF, the VDPA driver takes precedence over the NIC driver, steering-wise.

Currently, the device/interface infrastructure in mlx5_core is used to probe drivers. Future patches will introduce virtbus as a means to register devices and drivers, and VDPA will be adapted to it.

The mlx5 mode of operation required to support VDPA is switchdev mode. One can use a Linux or OVS bridge to take care of layer 2 switching.

In order to provide virtio networking to a guest, an updated version of qemu is required.
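As a rough sketch of the setup above (the PCI address and interface name are examples, and the exact steps depend on your firmware and kernel configuration):

```shell
# Put the PF's e-switch into switchdev mode (PCI address is an example)
devlink dev eswitch set pci/0000:06:00.0 mode switchdev

# Create a VF on the PF netdev (interface name is an example)
echo 1 > /sys/class/net/ens1f0/device/sriov_numvfs

# Load the VDPA net driver; it will take over supported VFs
modprobe mlx5_vdpa
```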
This series has been tested with the following qemu version:
url: https://github.com/jasowang/qemu.git
branch: vdpa
Commit ID: 6f4e59b807db

V2->V3:
Fix Makefile to use an include path relative to the root of the kernel tree.

Eli Cohen (7):
  net/vdpa: Use struct for set/get vq state
  vhost: Fix documentation
  vdpa: Modify get_vq_state() to return error code
  vdpa/mlx5: Add hardware descriptive header file
  vdpa/mlx5: Add support library for mlx5 VDPA implementation
  vdpa/mlx5: Add shared memory registration code
  vdpa/mlx5: Add VDPA driver for supported mlx5 devices

Jason Wang (2):
  vhost-vdpa: support batch updating
  vdpa_sim: use the batching API

Max Gurtovoy (1):
  vdpa: remove hard coded virtq num

 drivers/vdpa/Kconfig                   |   18 +
 drivers/vdpa/Makefile                  |    1 +
 drivers/vdpa/ifcvf/ifcvf_base.c        |    4 +-
 drivers/vdpa/ifcvf/ifcvf_base.h        |    4 +-
 drivers/vdpa/ifcvf/ifcvf_main.c        |   13 +-
 drivers/vdpa/mlx5/Makefile             |    4 +
 drivers/vdpa/mlx5/core/mlx5_vdpa.h     |   91 ++
 drivers/vdpa/mlx5/core/mlx5_vdpa_ifc.h |  168 ++
 drivers/vdpa/mlx5/core/mr.c            |  473 ++++++
 drivers/vdpa/mlx5/core/resources.c     |  284 ++++
 drivers/vdpa/mlx5/net/main.c           |   76 +
 drivers/vdpa/mlx5/net/mlx5_vnet.c      | 1950 ++++++++++++++++++++++++
 drivers/vdpa/mlx5/net/mlx5_vnet.h      |   24 +
 drivers/vdpa/vdpa.c                    |    3 +
 drivers/vdpa/vdpa_sim/vdpa_sim.c       |   35 +-
 drivers/vhost/iotlb.c                  |    4 +-
 drivers/vhost/vdpa.c                   |   46 +-
 include/linux/vdpa.h                   |   24 +-
 include/uapi/linux/vhost_types.h       |    2 +
 19 files changed, 3165 insertions(+), 59 deletions(-)
 create mode 100644 drivers/vdpa/mlx5/Makefile
 create mode 100644 drivers/vdpa/mlx5/core/mlx5_vdpa.h
 create mode 100644 drivers/vdpa/mlx5/core/mlx5_vdpa_ifc.h
 create mode 100644 drivers/vdpa/mlx5/core/mr.c
 create mode 100644 drivers/vdpa/mlx5/core/resources.c
 create mode 100644 drivers/vdpa/mlx5/net/main.c
 create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.c
 create mode 100644 drivers/vdpa/mlx5/net/mlx5_vnet.h

-- 
2.26.0