Date: Fri, 22 Sep 2017 16:28:55 +0100
From: Stefan Hajnoczi
To: Jeff Layton
Cc: Matt Benjamin, Steven Whitehouse, "J. Bruce Fields",
    "Daniel P. Berrange", Chuck Lever, Steve Dickson,
    Linux NFS Mailing List, Justin Mitchell
Subject: Re: [PATCH nfs-utils v3 00/14] add NFS over AF_VSOCK support
Message-ID: <20170922152855.GD13709@stefanha-x1.localdomain>
References: <67608054-B771-44F4-8B2F-5F7FDC506CDD@oracle.com>
 <20170919151051.GS9536@redhat.com>
 <3534278B-FC7B-4AA5-AF86-92AA19BFD1DC@oracle.com>
 <20170919164427.GV9536@redhat.com>
 <20170919172452.GB29104@fieldses.org>
 <20170921170017.GK32364@stefanha-x1.localdomain>
 <1506079954.4740.21.camel@redhat.com>
 <1506083199.4740.38.camel@redhat.com>
In-Reply-To: <1506083199.4740.38.camel@redhat.com>

On Fri, Sep 22, 2017 at 08:26:39AM -0400, Jeff Layton wrote:
> I'm not sure there is a strong one. I mostly just thought it sounded
> like a possible solution here.
>
> There's already a standard in place for doing RPC over AF_LOCAL, so
> there's less work to be done there. We also already have an AF_LOCAL
> transport in the kernel (mostly for talking to rpcbind), so that helps
> reduce the maintenance burden there.
>
> It utilizes something that looks like a traditional unix socket, which
> may make it easier to alter other applications to use it.
>
> There's also a clear way to "firewall" this -- just don't mount
> hvsockfs (or whatever), or don't build it into the kernel. No
> filesystem, no sockets.
>
> I'm not sure I'd agree about this being more restrictive, necessarily.
> If we did this, you could envision eventually building something that
> looks like this to a running host, but where the remote end is
> something else entirely. Whether that's truly useful, IDK...

This approach, where communication channels appear on the file system,
is similar to the existing virtio-serial device. The guest driver
creates a character device for each serial communication channel
configured on the host. It's a character device node, though, not a
UNIX domain socket. One of the main reasons for adding virtio-vsock was
to get the native Sockets API communication that most applications
expect (including NFS!). Serial char device semantics are awkward.

Sticking with AF_LOCAL for a moment, another approach is to tunnel the
NFS traffic over AF_VSOCK:

  (host)# vsock-proxy-daemon --unix-domain-socket path/to/local.sock \
              --listen --port 2049
  (host)# nfsd --local path/to/local.sock ...

  (guest)# vsock-proxy-daemon --unix-domain-socket path/to/local.sock \
              --cid 2 --port 2049
  (guest)# mount -t nfs -o proto=local path/to/local.sock /mnt

This has drawbacks compared to native AF_VSOCK support:

1. Certain NFS protocol features become impossible to implement, since
   there is no meaningful address information that can be exchanged
   between client and server (e.g. a separate backchannel connection,
   pNFS, etc.). Are you sure AF_LOCAL makes sense for NFS?

2. Performance is worse due to the extra proxy daemon.

If I understand correctly, both Linux and nfs-utils lack NFS AF_LOCAL
support, although it is present in sunrpc. For example, today
fs/nfsd/nfsctl.c cannot add UNIX domain sockets. Similarly, the
nfs-utils nfsd program has no command-line syntax for UNIX domain
sockets.
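To make the tunnelling idea concrete, here is a minimal sketch of what
the guest side of the (hypothetical) vsock-proxy-daemon above could
look like. It is Python rather than the C we'd actually ship, assumes
socket.AF_VSOCK (available in Python on Linux since 3.7), and reuses
the placeholder path and port from the example; it is only meant to
show the shape of the tunnel, not a real implementation:

  #!/usr/bin/env python3
  # Guest-side proxy sketch: accept connections on a UNIX domain
  # socket and forward each one to the host over AF_VSOCK.
  import os
  import socket
  import threading

  UNIX_PATH = "path/to/local.sock"  # placeholder from the example above
  HOST_CID = 2                      # CID 2 = the host/hypervisor
  PORT = 2049                       # NFS

  def pump(src, dst):
      # Copy bytes one way until EOF, then half-close the peer.
      while True:
          data = src.recv(65536)
          if not data:
              break
          dst.sendall(data)
      try:
          dst.shutdown(socket.SHUT_WR)
      except OSError:
          pass

  def handle(unix_conn):
      vsock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
      vsock.connect((HOST_CID, PORT))
      t = threading.Thread(target=pump, args=(vsock, unix_conn),
                           daemon=True)
      t.start()
      pump(unix_conn, vsock)
      t.join()
      vsock.close()
      unix_conn.close()

  try:
      os.unlink(UNIX_PATH)
  except FileNotFoundError:
      pass
  listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  listener.bind(UNIX_PATH)
  listener.listen()
  while True:
      conn, _ = listener.accept()
      threading.Thread(target=handle, args=(conn,), daemon=True).start()

The host side would be the mirror image: listen on AF_VSOCK port 2049
and connect each accepted connection to nfsd's UNIX domain socket.
Note that every byte crosses userspace twice, which is where drawback
2 above comes from.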
Funnily enough, making AF_LOCAL work for NFS requires changes similar
to the patches I've posted for AF_VSOCK. I think AF_LOCAL tunnelling is
a technically inferior solution to native AF_VSOCK support (for the
reasons mentioned above), but I appreciate that it insulates NFS from
AF_VSOCK specifics and could be used in other use cases too.

Can someone with more knowledge than me confirm that NFS over AF_LOCAL
would actually work? I thought the ability to exchange addressing
information over RPC was quite important for the NFS protocol.

Stefan