From: Damien Le Moal
To: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-kernel@vger.kernel.org, Linus Torvalds
Cc: Johannes Thumshirn, Naohiro Aota, "Darrick J. Wong",
    Hannes Reinecke
Subject: [PATCH v5 2/2] zonefs: Add documentation
Date: Fri, 3 Jan 2020 11:34:45 +0900
Message-Id: <20200103023445.1352524-3-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200103023445.1352524-1-damien.lemoal@wdc.com>
References: <20200103023445.1352524-1-damien.lemoal@wdc.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

Add the new file Documentation/filesystems/zonefs.txt to document
zonefs principles and user-space tool usage.

Signed-off-by: Damien Le Moal
Reviewed-by: Darrick J. Wong
---
 Documentation/filesystems/zonefs.txt | 215 +++++++++++++++++++++++++++
 MAINTAINERS                          |   1 +
 2 files changed, 216 insertions(+)
 create mode 100644 Documentation/filesystems/zonefs.txt

diff --git a/Documentation/filesystems/zonefs.txt b/Documentation/filesystems/zonefs.txt
new file mode 100644
index 000000000000..4623c659e5ee
--- /dev/null
+++ b/Documentation/filesystems/zonefs.txt
@@ -0,0 +1,215 @@
+ZoneFS - Zone filesystem for Zoned block devices
+
+Overview
+========
+
+zonefs is a very simple file system exposing each zone of a zoned block device
+as a file. Unlike a regular file system with zoned block device support (e.g.
+f2fs), zonefs does not hide the sequential write constraint of zoned block
+devices to the user. Files representing sequential write zones of the device
+must be written sequentially, starting from the end of the file (append-only
+writes).
+
+As such, zonefs is in essence closer to a raw block device access interface
+than to a full featured POSIX file system. The goal of zonefs is to simplify
+the implementation of zoned block device support in applications by replacing
+raw block device file accesses with a richer file API, avoiding reliance on
+direct block device file ioctls which may be more obscure to developers. One
+example of this approach is the implementation of LSM (log-structured merge)
+tree structures (such as used in RocksDB and LevelDB) on zoned block devices,
+by allowing SSTables to be stored in a zone file similarly to a regular file
+system rather than as a range of sectors of the entire disk. The introduction
+of the higher level construct "one file is one zone" can help reduce the
+number of changes needed in the application and makes it easier to add
+support for different application programming languages.
+
+zonefs on-disk metadata
+-----------------------
+
+zonefs on-disk metadata is reduced to an immutable super block which
+persistently stores a magic number and optional feature flags and values. On
+mount, zonefs uses blkdev_report_zones() to obtain the device zone
+configuration and populates the mount point with a static file tree based
+solely on this information. File sizes come from the device zone type and
+write pointer position managed by the device itself.
+
+The super block is always written on disk at sector 0. The first zone of the
+device, which stores the super block, is never exposed as a zone file by
+zonefs. If the zone containing the super block is a sequential zone, the
+mkzonefs format tool always "finishes" the zone, that is, it transitions the
+zone to a full state to make it read-only, preventing any data write.
+
+Zone type sub-directories
+-------------------------
+
+Files representing zones of the same type are grouped together under the same
+sub-directory, automatically created on mount.
+
+For conventional zones, the sub-directory "cnv" is used.
+This directory is, however, created only if the device has usable
+conventional zones. If the device only has a single conventional zone at
+sector 0, that zone will not be exposed as a file since it is used to store
+the zonefs super block. For such devices, the "cnv" sub-directory is not
+created.
+
+For sequential write zones, the sub-directory "seq" is used.
+
+These two directories are the only directories that exist in zonefs. Users
+cannot create other directories and cannot rename nor delete the "cnv" and
+"seq" sub-directories.
+
+The size of a directory, as indicated by the st_size field of struct stat
+obtained with the stat() or fstat() system calls, indicates the number of
+files existing under the directory.
+
+Zone files
+----------
+
+Zone files are named using the number of the zone they represent within the
+set of zones of a particular type. That is, both the "cnv" and "seq"
+directories contain files named "0", "1", "2", ... The file numbers also
+represent increasing zone start sectors on the device.
+
+Read and write operations to zone files are not allowed beyond the file
+maximum size, that is, beyond the zone size. Any access exceeding the zone
+size fails with the -EFBIG error.
+
+Creating, deleting, renaming or modifying any attribute of files and
+sub-directories is not allowed.
+
+The number of blocks of a file as reported by stat() and fstat() indicates the
+size of the file zone, or in other words, the maximum file size.
+
+Conventional zone files
+-----------------------
+
+The size of conventional zone files is fixed to the size of the zone they
+represent. Conventional zone files cannot be truncated.
+
+These files can be randomly read and written using any form of IO operation:
+buffered IOs, direct IOs, memory mapped IOs (mmap), etc. There are no IO
+constraints for these files beyond the file size limit mentioned above.
+
+Sequential zone files
+---------------------
+
+The size of sequential zone files present in the "seq" sub-directory
+represents the file's zone write pointer position relative to the zone start
+sector.
+
+Sequential zone files can only be written sequentially, starting from the file
+end, that is, write operations can only be append writes. Zonefs does not
+accept random writes and will fail any write request whose start offset does
+not correspond to the end of the last issued write.
+
+In order to give guarantees regarding write ordering, zonefs also prevents
+buffered writes and mmap writes for sequential files. Only direct IO writes
+are accepted. There are no restrictions on read operations nor on the type of
+IO used to request reads (buffered IOs, direct IOs and mmap reads are all
+accepted).
+
+Truncating sequential zone files is allowed only down to 0, in which case the
+zone is reset to rewind the file zone write pointer position to the start of
+the zone, or up to the zone size, in which case the file's zone is
+transitioned to the FULL state (finish zone operation).
+
+zonefs format options
+---------------------
+
+Several optional features of zonefs can be enabled at format time:
+* Conventional zone aggregation: ranges of contiguous conventional zones can
+  be aggregated into a single larger file instead of the default one file per
+  zone.
+* File ownership: the owner UID and GID of zone files are by default 0 (root)
+  but can be changed to any valid UID/GID.
+* File access permissions: the default 640 access permissions can be changed.
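+
+As a rough illustration of the sequential zone file constraints described
+above (this sketch is not part of zonefs itself; it only assumes the
+/mnt/seq/0 zone file path used in the examples below and a 4096 B IO size
+matching the IO Block size shown in the example stat output below), an
+application can append data to a sequential zone file with direct IO as
+follows, using fstat() to find the current write position:
+
+	#define _GNU_SOURCE	/* for O_DIRECT */
+	#include <fcntl.h>
+	#include <stdio.h>
+	#include <stdlib.h>
+	#include <string.h>
+	#include <sys/stat.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		const char *path = "/mnt/seq/0";	/* example zone file */
+		size_t iosize = 4096;	/* assumed device block size */
+		struct stat st;
+		void *buf;
+		int fd;
+
+		fd = open(path, O_WRONLY | O_DIRECT);
+		if (fd < 0) {
+			perror("open");
+			return 1;
+		}
+
+		/* The file size is the zone write pointer position: an
+		 * append write must start exactly at st_size. */
+		if (fstat(fd, &st) < 0) {
+			perror("fstat");
+			return 1;
+		}
+
+		/* Direct IO requires an aligned buffer. */
+		if (posix_memalign(&buf, iosize, iosize)) {
+			fprintf(stderr, "posix_memalign failed\n");
+			return 1;
+		}
+		memset(buf, 0, iosize);
+
+		/* A write at any other offset is failed by zonefs. */
+		if (pwrite(fd, buf, iosize, st.st_size) < 0) {
+			perror("pwrite");
+			return 1;
+		}
+
+		free(buf);
+		close(fd);
+		return 0;
+	}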
+
+User Space Tools
+----------------
+
+The mkzonefs tool is used to format zoned block devices for use with zonefs.
+This tool is available on GitHub at:
+
+https://github.com/damien-lemoal/zonefs-tools
+
+zonefs-tools also includes a test suite which can be run against any zoned
+block device, including null_blk block devices created in zoned mode.
+
+Examples
+--------
+
+The following formats a 15TB host-managed SMR HDD with 256 MB zones
+with the conventional zone aggregation feature enabled.
+
+# mkzonefs -o aggr_cnv /dev/sdX
+# mount -t zonefs /dev/sdX /mnt
+# ls -l /mnt/
+total 0
+dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
+dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq
+
+The sizes of the zone type sub-directories indicate the number of files
+existing for each type of zone. In this example, there is only one
+conventional zone file (all conventional zones are aggregated under a single
+file).
+
+# ls -l /mnt/cnv
+total 137101312
+-rw-r----- 1 root root 140391743488 Nov 25 13:23 0
+
+This aggregated conventional zone file can be used as a regular file.
+
+# mkfs.ext4 /mnt/cnv/0
+# mount -o loop /mnt/cnv/0 /data
+
+The "seq" sub-directory, which groups the files for sequential write zones,
+contains 55356 zone files in this example.
+
+# ls -lv /mnt/seq
+total 14511243264
+-rw-r----- 1 root root 0 Nov 25 13:23 0
+-rw-r----- 1 root root 0 Nov 25 13:23 1
+-rw-r----- 1 root root 0 Nov 25 13:23 2
+...
+-rw-r----- 1 root root 0 Nov 25 13:23 55354
+-rw-r----- 1 root root 0 Nov 25 13:23 55355
+
+For sequential write zone files, the file size changes as data is appended at
+the end of the file, similarly to any regular file system.
+
+# dd if=/dev/zero of=/mnt/seq/0 bs=4096 count=1 conv=notrunc oflag=direct
+1+0 records in
+1+0 records out
+4096 bytes (4.1 kB, 4.0 KiB) copied, 1.05112 s, 3.9 kB/s
+
+# ls -l /mnt/seq/0
+-rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/seq/0
+
+The written file can be truncated to the zone size, preventing any further
+write operation.
+
+# truncate -s 268435456 /mnt/seq/0
+# ls -l /mnt/seq/0
+-rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0
+
+Truncating the file to 0 frees the file zone storage space and allows
+append-writes to the file to restart.
+
+# truncate -s 0 /mnt/seq/0
+# ls -l /mnt/seq/0
+-rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0
+
+Since files are statically mapped to zones on the disk, the number of blocks
+of a file as reported by stat() and fstat() indicates the size of the file
+zone.
+
+# stat /mnt/seq/0
+  File: /mnt/seq/0
+  Size: 0            Blocks: 524288     IO Block: 4096   regular empty file
+Device: 870h/2160d   Inode: 50431       Links: 1
+Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/    root)
+Access: 2019-11-25 13:23:57.048971997 +0900
+Modify: 2019-11-25 13:52:25.553805765 +0900
+Change: 2019-11-25 13:52:25.553805765 +0900
+ Birth: -
+
+The number of blocks of the file ("Blocks") in units of 512 B blocks gives the
+maximum file size of 524288 * 512 B = 256 MB, corresponding to the device zone
+size in this example. Note that the "IO Block" field always indicates the
+minimum IO size for writes and corresponds to the device physical sector size.

diff --git a/MAINTAINERS b/MAINTAINERS
index 8eb6f02a1efa..66f348fa90df 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18292,6 +18292,7 @@ L:	linux-fsdevel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs.git
 S:	Maintained
 F:	fs/zonefs/
+F:	Documentation/filesystems/zonefs.txt
 
 ZPOOL COMPRESSED PAGE STORAGE API
 M:	Dan Streetman
-- 
2.24.1
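
As a small companion to the stat output shown in the documentation above, the
following sketch (illustrative only; it reuses the /mnt/seq/0 path from the
examples) derives a zone file's maximum size from st_blocks, which is reported
in units of 512 B blocks:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/stat.h>
	#include <unistd.h>

	int main(void)
	{
		struct stat st;
		int fd = open("/mnt/seq/0", O_RDONLY);	/* example zone file */

		if (fd < 0 || fstat(fd, &st) < 0) {
			perror("stat");
			return 1;
		}

		/* For the 256 MB zones of the example above, this prints a
		 * maximum size of 524288 * 512 = 268435456 bytes. */
		printf("size %lld B, max size %lld B\n",
		       (long long)st.st_size, (long long)st.st_blocks * 512);

		close(fd);
		return 0;
	}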