Date: Tue, 20 Jul 2021 20:16:03 +0200
From: "gregkh@linuxfoundation.org"
To: Long Li
Cc: Bart Van Assche, Christoph Hellwig, "longli@linuxonhyperv.com",
	"linux-fs@vger.kernel.org", "linux-block@vger.kernel.org",
	"linux-kernel@vger.kernel.org", "linux-hyperv@vger.kernel.org"
Subject: Re: [Patch v4 0/3] Introduce a driver to support host accelerated
	access to Microsoft Azure Blob
References: <1626751866-15765-1-git-send-email-longli@linuxonhyperv.com>
	<82e8bec6-4f6f-08d7-90db-9661f675749d@acm.org>
	<115d864c-46c2-2bc8-c392-fd63d34c9ed0@acm.org>

On Tue, Jul 20, 2021 at 05:33:47PM +0000, Long Li wrote:
> > Subject: Re: [Patch v4 0/3] Introduce a driver to support host accelerated
> > access to Microsoft Azure Blob
> >
> > On 7/20/21 12:05 AM, Long Li wrote:
> > >> Subject: Re: [Patch v4 0/3] Introduce a driver to support host
> > >> accelerated access to Microsoft Azure Blob
> > >>
> > >> On Mon, Jul 19, 2021 at 09:37:56PM -0700, Bart Van Assche wrote:
> > >>> such that this object storage driver can be implemented as a
> > >>> user-space library instead of as a kernel driver? As you may know,
> > >>> vfio users can either use eventfds for completion notifications or
> > >>> polling. An interface like io_uring can be built easily on top of vfio.
> > >>
> > >> Yes. Similar to, say, the NVMe K/V command set, this does not look like
> > >> a candidate for a kernel driver.
> > >
> > > The driver is modeled to support multiple processes/users over a VMBUS
> > > channel. I don't see a way that this can be implemented through VFIO.
> > >
> > > Even if it could be done, it would expose a security risk, as the same
> > > VMBUS channel would be shared by multiple processes in user mode.
> >
> > Sharing a VMBUS channel among processes is not necessary. I propose to
> > assign one VMBUS channel to each process and to multiplex I/O submitted
> > to channels associated with the same blob storage object inside, e.g.,
> > the hypervisor. This is not a new idea: the NVMe specification contains
> > a diagram showing that multiple NVMe controllers can provide access to
> > the same NVMe namespace. See "Figure 416: NVM Subsystem with Three I/O
> > Controllers" in version 1.4 of the NVMe specification.
> >
> > Bart.
>
> Currently, Hyper-V is not designed to offer one VMBUS channel per
> process.

So it's a slow interface :(

> In Hyper-V, a channel is offered from the host to the guest VM. The
> host doesn't know in advance how many processes are going to use this
> service, so it can't offer those channels in advance. There is no
> mechanism to offer dynamically allocated per-process channels based on
> guest needs. Some devices (e.g. network and storage) use multiple
> channels for scalability, but those are not for serving individual
> processes.
>
> Assigning one VMBUS channel per process would need significant changes
> on the Hyper-V side.

What is the throughput of a single channel as-is?  You provided no
benchmarks or numbers at all in this patchset which would justify this
new kernel driver :(

thanks,

greg k-h
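
For readers unfamiliar with the eventfd-based completion notification
Bart refers to above, here is a minimal user-space sketch of the
pattern. It is not from the patchset: it uses only the eventfd(2),
poll(2), and read(2) syscalls, and the step where a vfio consumer would
actually hand the fd to the kernel (via the VFIO_DEVICE_SET_IRQS ioctl)
is only noted in a comment.

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	/* Create the eventfd the device will signal on each completion. */
	int efd = eventfd(0, EFD_CLOEXEC);
	if (efd < 0) {
		perror("eventfd");
		return 1;
	}

	/*
	 * A real vfio consumer would now register efd with the device,
	 * e.g. ioctl(device_fd, VFIO_DEVICE_SET_IRQS, ...), so the
	 * kernel signals it on each interrupt/completion. Omitted here.
	 */

	struct pollfd pfd = { .fd = efd, .events = POLLIN };
	while (poll(&pfd, 1, -1) > 0) {
		uint64_t count;

		/* Reading drains the counter: 'count' completions arrived. */
		if (read(efd, &count, sizeof(count)) == sizeof(count))
			printf("%llu completion(s)\n",
			       (unsigned long long)count);
	}

	close(efd);
	return 0;
}

Polling instead of blocking, as Bart also mentions, would simply pass a
zero timeout to poll() (or skip the eventfd and poll a completion ring
directly).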