Date: Tue, 19 May 2015 10:42:58 -0400
From: Bill Speirs
To: Rob Landley
Cc: Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: Userspace Block Device

On Tue, May 19, 2015 at 1:34 AM, Rob Landley wrote:
> On Mon, May 18, 2015 at 2:01 PM, Bill Speirs wrote:
>> My goal is to provide Amazon S3 or Google Cloud Storage as a block
>> device. I would like to leverage the libraries that exist for both
>> systems by servicing requests via a user space program.
>> ... nbd seems like a bit of a Rube Goldberg solution.
>
> I wrote the busybox and toybox nbd clients, and have a todo list item
> to write an nbd server for toybox. I believe there's also an nbd
> server in qemu. I haven't found any decent documentation on the
> protocol yet, but what specifically makes you describe it as Rube
> Goldberg?

My understanding of using nbd is:

- Write an nbd server that is essentially a gateway between nbd and
  S3/Google: for each nbd request, translate it into the appropriate
  S3/Google request and send back the corresponding reply.
- Run that server on the local machine, listening on some port.
- Run an nbd client on the same machine, pointed at 127.0.0.1 and that
  port, which provides the nbd block device.
- Go drink a beer while I rack up a huge bill with Amazon or Google.

It seems a bit much to run both a client and a server on the same
machine, with the TCP socket overhead, etc. Looking at the code for
your nbd client
(https://github.com/landley/toybox/blob/master/toys/other/nbd_client.c),
I'm wondering if I couldn't just pass a pipe (or one end of a
socketpair) instead of a TCP socket in the ioctl(nbd, NBD_SET_SOCK,
sock) step, then have the same process (or a fork of it) servicing the
other end, so it's all a single process/codebase; I've appended a rough
sketch of what I mean. Thoughts on this approach?

That said, the bottleneck in all of this will clearly be the
communication with S3/Google, and putting something like dm-cache in
front would make most requests appear fast. So maybe my Rube Goldberg
comment was over the top.

Thank you for the pointers and the feedback!

Bill
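
P.S. To make the single-process idea concrete, here is a rough,
untested sketch of what I have in mind, pieced together from a read of
<linux/nbd.h>. Since NBD_SET_SOCK appears to want an actual socket fd,
the sketch substitutes a socketpair for the pipe and forks, so one half
sits in NBD_DO_IT while the other half speaks the nbd wire protocol.
The s3_read()/s3_write() functions are made-up placeholders for
whatever the S3/Google client library provides, and error handling,
flush/trim, and the real object-store logic are all glossed over:

/*
 * Sketch: single-codebase nbd-to-object-store gateway.
 * Untested; assumes root, "modprobe nbd", and the request/reply struct
 * layouts in <linux/nbd.h> circa 4.x. s3_read()/s3_write() are
 * placeholders, not a real API.
 */
#include <endian.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/nbd.h>

#define DEV_SIZE (1ULL << 30)   /* advertise a 1 GiB device, for example */

/* Placeholder hooks for the S3/Google client library (hypothetical names). */
static void s3_read(void *buf, uint64_t off, uint32_t len)
{
        (void)off;
        memset(buf, 0, len);    /* pretend the object store is all zeroes */
}

static void s3_write(const void *buf, uint64_t off, uint32_t len)
{
        (void)buf; (void)off; (void)len;        /* would upload the range here */
}

/* Move exactly len bytes over the stream socket (short transfers retried). */
static void xfer(int fd, void *buf, size_t len, int writing)
{
        char *p = buf;

        while (len) {
                ssize_t n = writing ? write(fd, p, len) : read(fd, p, len);

                if (n <= 0) _exit(1);
                p += n;
                len -= (size_t)n;
        }
}

int main(void)
{
        int sv[2], nbd;

        /* NBD_SET_SOCK wants a real socket, so a socketpair stands in
         * for the pipe. */
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        nbd = open("/dev/nbd0", O_RDWR);
        ioctl(nbd, NBD_SET_SIZE, DEV_SIZE);

        if (fork() == 0) {              /* "client" half */
                close(sv[0]);
                ioctl(nbd, NBD_SET_SOCK, sv[1]);
                ioctl(nbd, NBD_DO_IT);  /* blocks until disconnect */
                ioctl(nbd, NBD_CLEAR_SOCK);
                _exit(0);
        }

        /* "server" half: speak the nbd protocol, translate to S3/Google. */
        close(sv[1]);
        for (;;) {
                struct nbd_request req;
                struct nbd_reply reply = { .magic = htonl(NBD_REPLY_MAGIC) };
                uint64_t off;
                uint32_t len;
                char *buf;

                xfer(sv[0], &req, sizeof(req), 0);
                memcpy(reply.handle, req.handle, sizeof(reply.handle));
                off = be64toh(req.from);
                len = ntohl(req.len);
                buf = malloc(len);

                switch (ntohl(req.type)) {
                case NBD_CMD_READ:
                        s3_read(buf, off, len);
                        xfer(sv[0], &reply, sizeof(reply), 1);
                        xfer(sv[0], buf, len, 1);
                        break;
                case NBD_CMD_WRITE:
                        xfer(sv[0], buf, len, 0);
                        s3_write(buf, off, len);
                        xfer(sv[0], &reply, sizeof(reply), 1);
                        break;
                case NBD_CMD_DISC:
                        free(buf);
                        return 0;
                default:                /* flush/trim/etc: just acknowledge */
                        xfer(sv[0], &reply, sizeof(reply), 1);
                }
                free(buf);
        }
}

The parent loop is where the S3/Google translation (and something like
a dm-cache layer in front of it) would actually live; the child just
parks the kernel's nbd driver on our end of the socketpair until
disconnect.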