This series brings a nice refresh to the cramfs filesystem, adding the
following capabilities:
- Direct memory access, bypassing the block and/or MTD layers entirely.
- Ability to store individual data blocks uncompressed.
- Ability to locate individual data blocks anywhere in the filesystem.
The end result is a very tight filesystem that can be accessed directly
from ROM without any other subsystem underneath. This also allows for
user-space XIP, which is a very important feature for tiny embedded
systems.
Why cramfs?
Because cramfs is very simple and small. With CONFIG_CRAMFS_BLOCK=n and
CONFIG_CRAMFS_PHYSMEM=y the cramfs driver may use as little as 3704 bytes
of code. That's many times smaller than squashfs. And the runtime memory
usage is also much lower with cramfs than with squashfs. It already
packs very tightly compared to romfs, which has no compression support. And the cramfs
format was simple to extend, allowing for both compressed and uncompressed
blocks within the same file.
Why not access ROM via MTD?
The MTD layer is nice and flexible. It also represents a huge overhead
considering that its core, with no other options enabled, weighs 19KB.
That's many times the size of the cramfs code for something that, in
this case, essentially boils down to a glorified argument parser and a
call to memremap(). And if someone still wants to use cramfs via MTD,
that is already possible with mtdblock.
Why not use DAX?
DAX stands for "Direct Access" and is a generic kernel layer helping
with the necessary tasks involved with XIP. It is tailored for large
writable filesystems and relies on the presence of an MMU. It also has
the following shortcoming: "The DAX code does not work correctly on
architectures which have virtually mapped caches such as ARM, MIPS and
SPARC." That makes it unsuitable for a large portion of the intended
targets for this series. And because cramfs is read-only, the intended
result can be achieved with a much simpler approach, making DAX
overkill in this context.
The maximum size of a cramfs image is 272MB. In practice it is likely
to be much less. Given that this series is concerned with small-memory
systems, even in the MMU case there is always plenty of vmalloc space
left to map it all, and even a 272MB memremap() wouldn't be a problem.
If it is, then your system is probably big enough, with resources large
enough, that you're unlikely to be using cramfs in the first place.
Of course, while this cramfs remains backward compatible with existing
filesystem images, a newer mkcramfs version is necessary to take
advantage of the extended data layout. I created a version of mkcramfs
that automatically detects ELF files, marks their text+rodata segments
for XIP, and compresses the rest of those files.
So here it is. I'm also willing to step up as cramfs maintainer, given
that there has been no sign of maintenance activity for years.
This series is also available based on v4.13-rc4 via git here:
http://git.linaro.org/people/nicolas.pitre/linux xipcramfs
Changes from v1:
- Improved mmap() support by adding the ability to partially populate a
mapping and lazily split the non-directly-mappable pages into a separate
vma at fault time (thanks to Chris Brandt for testing).
- Clarified the documentation some more.
diffstat:
Documentation/filesystems/cramfs.txt | 42 ++
MAINTAINERS | 4 +-
fs/cramfs/Kconfig | 39 +-
fs/cramfs/README | 31 +-
fs/cramfs/inode.c | 621 +++++++++++++++++++++++++----
include/uapi/linux/cramfs_fs.h | 20 +-
init/do_mounts.c | 8 +
7 files changed, 688 insertions(+), 77 deletions(-)
Two new capabilities are introduced here:
- The ability to store some blocks uncompressed.
- The ability to locate blocks anywhere.
Those capabilities can be used independently, but the combination
opens the possibility for execute-in-place (XIP) of program text segments
that must remain uncompressed, and in the MMU case, must have a specific
alignment. It is even possible to still have the writable data segments
from the same file compressed, as they have to be copied into RAM anyway.
This is achieved by giving special meanings to some unused block pointer
bits while remaining compatible with legacy cramfs images.
Signed-off-by: Nicolas Pitre <[email protected]>
---
fs/cramfs/README | 31 ++++++++++++++-
fs/cramfs/inode.c | 87 +++++++++++++++++++++++++++++++++---------
include/uapi/linux/cramfs_fs.h | 20 +++++++++-
3 files changed, 118 insertions(+), 20 deletions(-)
diff --git a/fs/cramfs/README b/fs/cramfs/README
index 9d4e7ea311..d71b27e0ff 100644
--- a/fs/cramfs/README
+++ b/fs/cramfs/README
@@ -49,17 +49,46 @@ same as the start of the (i+1)'th <block> if there is one). The first
<block> immediately follows the last <block_pointer> for the file.
<block_pointer>s are each 32 bits long.
+When the CRAMFS_FLAG_EXT_BLOCK_POINTERS capability bit is set, each
+<block_pointer>'s top bits may contain special flags as follows:
+
+CRAMFS_BLK_FLAG_UNCOMPRESSED (bit 31):
+ The block data is not compressed and should be copied verbatim.
+
+CRAMFS_BLK_FLAG_DIRECT_PTR (bit 30):
+ The <block_pointer> stores the actual block start offset and not
+ its end, shifted right by 2 bits. The block must therefore be
+ aligned to a 4-byte boundary. The block size is blksize if
+ CRAMFS_BLK_FLAG_UNCOMPRESSED is also specified; otherwise
+ the compressed data length is included in the first 2 bytes of
+ the block data. This is used to allow discontiguous data layout
+ and specific data block alignments e.g. for XIP applications.
+
+
The order of <file_data>'s is a depth-first descent of the directory
tree, i.e. the same order as `find -size +0 \( -type f -o -type l \)
-print'.
<block>: The i'th <block> is the output of zlib's compress function
-applied to the i'th blksize-sized chunk of the input data.
+applied to the i'th blksize-sized chunk of the input data if the
+corresponding CRAMFS_BLK_FLAG_UNCOMPRESSED <block_ptr> bit is not set,
+otherwise it is the input data directly.
(For the last <block> of the file, the input may of course be smaller.)
Each <block> may be a different size. (See <block_pointer> above.)
+
<block>s are merely byte-aligned, not generally u32-aligned.
+When CRAMFS_BLK_FLAG_DIRECT_PTR is specified then the corresponding
+<block> may be located anywhere and not necessarily contiguous with
+the previous/next blocks. In that case it is minimally u32-aligned.
+If CRAMFS_BLK_FLAG_UNCOMPRESSED is also specified then the size is always
+blksize except for the last block which is limited by the file length.
+If CRAMFS_BLK_FLAG_DIRECT_PTR is set and CRAMFS_BLK_FLAG_UNCOMPRESSED
+is not set then the first 2 bytes of the block contain the size of the
+remaining block data as this cannot be determined from the placement of
+logically adjacent blocks.
+
Holes
-----
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 393eb27ef4..b825ae162c 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -636,33 +636,84 @@ static int cramfs_readpage(struct file *file, struct page *page)
if (page->index < maxblock) {
struct super_block *sb = inode->i_sb;
u32 blkptr_offset = OFFSET(inode) + page->index*4;
- u32 start_offset, compr_len;
+ u32 block_ptr, block_start, block_len;
+ bool uncompressed, direct;
- start_offset = OFFSET(inode) + maxblock*4;
mutex_lock(&read_mutex);
- if (page->index)
- start_offset = *(u32 *) cramfs_read(sb, blkptr_offset-4,
- 4);
- compr_len = (*(u32 *) cramfs_read(sb, blkptr_offset, 4) -
- start_offset);
- mutex_unlock(&read_mutex);
+ block_ptr = *(u32 *) cramfs_read(sb, blkptr_offset, 4);
+ uncompressed = (block_ptr & CRAMFS_BLK_FLAG_UNCOMPRESSED);
+ direct = (block_ptr & CRAMFS_BLK_FLAG_DIRECT_PTR);
+ block_ptr &= ~CRAMFS_BLK_FLAGS;
+
+ if (direct) {
+ /*
+ * The block pointer is an absolute start pointer,
+ * shifted by 2 bits. The size is included in the
+ * first 2 bytes of the data block when compressed,
+ * or PAGE_SIZE otherwise.
+ */
+ block_start = block_ptr << 2;
+ if (uncompressed) {
+ block_len = PAGE_SIZE;
+ /* if last block: cap to file length */
+ if (page->index == maxblock - 1)
+ block_len = offset_in_page(inode->i_size);
+ } else {
+ block_len = *(u16 *)
+ cramfs_read(sb, block_start, 2);
+ block_start += 2;
+ }
+ } else {
+ /*
+ * The block pointer indicates one past the end of
+ * the current block (start of next block). If this
+ * is the first block then it starts where the block
+ * pointer table ends, otherwise its start comes
+ * from the previous block's pointer.
+ */
+ block_start = OFFSET(inode) + maxblock*4;
+ if (page->index)
+ block_start = *(u32 *)
+ cramfs_read(sb, blkptr_offset-4, 4);
+ /* Beware... previous ptr might be a direct ptr */
+ if (unlikely(block_start & CRAMFS_BLK_FLAG_DIRECT_PTR)) {
+ /* See comments on earlier code. */
+ u32 prev_start = block_start;
+ block_start = prev_start & ~CRAMFS_BLK_FLAGS;
+ block_start <<= 2;
+ if (prev_start & CRAMFS_BLK_FLAG_UNCOMPRESSED) {
+ block_start += PAGE_SIZE;
+ } else {
+ block_len = *(u16 *)
+ cramfs_read(sb, block_start, 2);
+ block_start += 2 + block_len;
+ }
+ }
+ block_start &= ~CRAMFS_BLK_FLAGS;
+ block_len = block_ptr - block_start;
+ }
- if (compr_len == 0)
+ if (block_len == 0)
; /* hole */
- else if (unlikely(compr_len > (PAGE_SIZE << 1))) {
- pr_err("bad compressed blocksize %u\n",
- compr_len);
+ else if (unlikely(block_len > 2*PAGE_SIZE ||
+ (uncompressed && block_len > PAGE_SIZE))) {
+ mutex_unlock(&read_mutex);
+ pr_err("bad data blocksize %u\n", block_len);
goto err;
+ } else if (uncompressed) {
+ memcpy(pgdata,
+ cramfs_read(sb, block_start, block_len),
+ block_len);
+ bytes_filled = block_len;
} else {
- mutex_lock(&read_mutex);
bytes_filled = cramfs_uncompress_block(pgdata,
PAGE_SIZE,
- cramfs_read(sb, start_offset, compr_len),
- compr_len);
- mutex_unlock(&read_mutex);
- if (unlikely(bytes_filled < 0))
- goto err;
+ cramfs_read(sb, block_start, block_len),
+ block_len);
}
+ mutex_unlock(&read_mutex);
+ if (unlikely(bytes_filled < 0))
+ goto err;
}
memset(pgdata + bytes_filled, 0, PAGE_SIZE - bytes_filled);
diff --git a/include/uapi/linux/cramfs_fs.h b/include/uapi/linux/cramfs_fs.h
index e4611a9b92..ed250aa372 100644
--- a/include/uapi/linux/cramfs_fs.h
+++ b/include/uapi/linux/cramfs_fs.h
@@ -73,6 +73,7 @@ struct cramfs_super {
#define CRAMFS_FLAG_HOLES 0x00000100 /* support for holes */
#define CRAMFS_FLAG_WRONG_SIGNATURE 0x00000200 /* reserved */
#define CRAMFS_FLAG_SHIFTED_ROOT_OFFSET 0x00000400 /* shifted root fs */
+#define CRAMFS_FLAG_EXT_BLOCK_POINTERS 0x00000800 /* block pointer extensions */
/*
* Valid values in super.flags. Currently we refuse to mount
@@ -82,7 +83,24 @@ struct cramfs_super {
#define CRAMFS_SUPPORTED_FLAGS ( 0x000000ff \
| CRAMFS_FLAG_HOLES \
| CRAMFS_FLAG_WRONG_SIGNATURE \
- | CRAMFS_FLAG_SHIFTED_ROOT_OFFSET )
+ | CRAMFS_FLAG_SHIFTED_ROOT_OFFSET \
+ | CRAMFS_FLAG_EXT_BLOCK_POINTERS )
+/*
+ * Block pointer flags
+ *
+ * The maximum block offset that needs to be represented is roughly:
+ *
+ * (1 << CRAMFS_OFFSET_WIDTH) * 4 +
+ * (1 << CRAMFS_SIZE_WIDTH) / PAGE_SIZE * (4 + PAGE_SIZE)
+ * = 0x11004000
+ *
+ * That leaves room for 3 flag bits in the block pointer table.
+ */
+#define CRAMFS_BLK_FLAG_UNCOMPRESSED (1 << 31)
+#define CRAMFS_BLK_FLAG_DIRECT_PTR (1 << 30)
+
+#define CRAMFS_BLK_FLAGS ( CRAMFS_BLK_FLAG_UNCOMPRESSED \
+ | CRAMFS_BLK_FLAG_DIRECT_PTR )
#endif /* _UAPI__CRAMFS_H */
--
2.9.5
Signed-off-by: Nicolas Pitre <[email protected]>
---
init/do_mounts.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/init/do_mounts.c b/init/do_mounts.c
index c2de5104aa..43b5817f60 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -556,6 +556,14 @@ void __init prepare_namespace(void)
ssleep(root_delay);
}
+ if (IS_ENABLED(CONFIG_CRAMFS_PHYSMEM) && root_fs_names &&
+ !strcmp(root_fs_names, "cramfs_physmem")) {
+ int err = do_mount_root("cramfs", "cramfs_physmem",
+ root_mountflags, root_mount_data);
+ if (!err)
+ goto out;
+ }
+
/*
* wait for the known devices to complete their probing
*
--
2.9.5
Small embedded systems typically execute the kernel code in place (XIP)
directly from flash to save on precious RAM. This adds to the cramfs
filesystem the ability to consume its data directly from flash as
well. Cramfs is particularly well suited to this feature as it is
very simple and its RAM usage is already very low, and with this feature
it is possible to use it with no block device support and even lower RAM
usage.
This patch was inspired by a similar patch from Shane Nay dated 17 years
ago that used to be very popular in embedded circles but never made it
into mainline. This is a cleaned-up implementation that uses far less
memory at run time when both access methods are configured in. In the
context of small IoT deployments, this functionality has become
relevant and useful again.
To distinguish between both access types, the cramfs_physmem filesystem
type must be specified when using a memory-accessible cramfs image, and
the physaddr argument must provide the actual filesystem image's physical
memory location.
Signed-off-by: Nicolas Pitre <[email protected]>
---
fs/cramfs/Kconfig | 30 ++++++-
fs/cramfs/inode.c | 264 +++++++++++++++++++++++++++++++++++++++++++-----------
2 files changed, 242 insertions(+), 52 deletions(-)
diff --git a/fs/cramfs/Kconfig b/fs/cramfs/Kconfig
index 11b29d491b..5eed4ad2d5 100644
--- a/fs/cramfs/Kconfig
+++ b/fs/cramfs/Kconfig
@@ -1,6 +1,5 @@
config CRAMFS
tristate "Compressed ROM file system support (cramfs) (OBSOLETE)"
- depends on BLOCK
select ZLIB_INFLATE
help
Saying Y here includes support for CramFs (Compressed ROM File
@@ -20,3 +19,32 @@ config CRAMFS
in terms of performance and features.
If unsure, say N.
+
+config CRAMFS_BLOCKDEV
+ bool "Support CramFs image over a regular block device" if EXPERT
+ depends on CRAMFS && BLOCK
+ default y
+ help
+ This option allows the CramFs driver to load data from a regular
+ block device such as a disk partition or a ramdisk.
+
+config CRAMFS_PHYSMEM
+ bool "Support CramFs image directly mapped in physical memory"
+ depends on CRAMFS
+ default y if !CRAMFS_BLOCKDEV
+ help
+ This option allows the CramFs driver to load data directly from
+ a linearly addressed memory range (usually non-volatile memory
+ like flash) instead of going through the block device layer.
+ This saves some memory since no intermediate buffering is
+ necessary.
+
+ The filesystem type for this feature is "cramfs_physmem".
+ The location of the CramFs image in memory is board
+ dependent. Therefore, if you say Y, you must know the proper
+ physical address where to store the CramFs image and specify
+ it using the physaddr=0x******** mount option (for example:
+ "mount -t cramfs_physmem -o physaddr=0x100000 none /mnt").
+
+ If unsure, say N.
+
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 7919967488..393eb27ef4 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -24,6 +24,7 @@
#include <linux/mutex.h>
#include <uapi/linux/cramfs_fs.h>
#include <linux/uaccess.h>
+#include <linux/io.h>
#include "internal.h"
@@ -36,6 +37,8 @@ struct cramfs_sb_info {
unsigned long blocks;
unsigned long files;
unsigned long flags;
+ void *linear_virt_addr;
+ phys_addr_t linear_phys_addr;
};
static inline struct cramfs_sb_info *CRAMFS_SB(struct super_block *sb)
@@ -140,6 +143,9 @@ static struct inode *get_cramfs_inode(struct super_block *sb,
* BLKS_PER_BUF*PAGE_SIZE, so that the caller doesn't need to
* worry about end-of-buffer issues even when decompressing a full
* page cache.
+ *
+ * Note: This is all optimized away at compile time when
+ * CONFIG_CRAMFS_BLOCKDEV=n.
*/
#define READ_BUFFERS (2)
/* NEXT_BUFFER(): Loop over [0..(READ_BUFFERS-1)]. */
@@ -160,10 +166,10 @@ static struct super_block *buffer_dev[READ_BUFFERS];
static int next_buffer;
/*
- * Returns a pointer to a buffer containing at least LEN bytes of
- * filesystem starting at byte offset OFFSET into the filesystem.
+ * Populate our block cache and return a pointer from it.
*/
-static void *cramfs_read(struct super_block *sb, unsigned int offset, unsigned int len)
+static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
+ unsigned int len)
{
struct address_space *mapping = sb->s_bdev->bd_inode->i_mapping;
struct page *pages[BLKS_PER_BUF];
@@ -239,7 +245,39 @@ static void *cramfs_read(struct super_block *sb, unsigned int offset, unsigned i
return read_buffers[buffer] + offset;
}
-static void cramfs_kill_sb(struct super_block *sb)
+/*
+ * Return a pointer to the linearly addressed cramfs image in memory.
+ */
+static void *cramfs_direct_read(struct super_block *sb, unsigned int offset,
+ unsigned int len)
+{
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+
+ if (!len)
+ return NULL;
+ if (len > sbi->size || offset > sbi->size - len)
+ return page_address(ZERO_PAGE(0));
+ return sbi->linear_virt_addr + offset;
+}
+
+/*
+ * Returns a pointer to a buffer containing at least LEN bytes of
+ * filesystem starting at byte offset OFFSET into the filesystem.
+ */
+static void *cramfs_read(struct super_block *sb, unsigned int offset,
+ unsigned int len)
+{
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+
+ if (IS_ENABLED(CONFIG_CRAMFS_PHYSMEM) && sbi->linear_virt_addr)
+ return cramfs_direct_read(sb, offset, len);
+ else if (IS_ENABLED(CONFIG_CRAMFS_BLOCKDEV))
+ return cramfs_blkdev_read(sb, offset, len);
+ else
+ return NULL;
+}
+
+static void cramfs_blkdev_kill_sb(struct super_block *sb)
{
struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
@@ -247,6 +285,16 @@ static void cramfs_kill_sb(struct super_block *sb)
kfree(sbi);
}
+static void cramfs_physmem_kill_sb(struct super_block *sb)
+{
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+
+ if (sbi->linear_virt_addr)
+ memunmap(sbi->linear_virt_addr);
+ kill_anon_super(sb);
+ kfree(sbi);
+}
+
static int cramfs_remount(struct super_block *sb, int *flags, char *data)
{
sync_filesystem(sb);
@@ -254,34 +302,24 @@ static int cramfs_remount(struct super_block *sb, int *flags, char *data)
return 0;
}
-static int cramfs_fill_super(struct super_block *sb, void *data, int silent)
+static int cramfs_read_super(struct super_block *sb,
+ struct cramfs_super *super, int silent)
{
- int i;
- struct cramfs_super super;
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
unsigned long root_offset;
- struct cramfs_sb_info *sbi;
- struct inode *root;
-
- sb->s_flags |= MS_RDONLY;
-
- sbi = kzalloc(sizeof(struct cramfs_sb_info), GFP_KERNEL);
- if (!sbi)
- return -ENOMEM;
- sb->s_fs_info = sbi;
- /* Invalidate the read buffers on mount: think disk change.. */
- mutex_lock(&read_mutex);
- for (i = 0; i < READ_BUFFERS; i++)
- buffer_blocknr[i] = -1;
+ /* We don't know the real size yet */
+ sbi->size = PAGE_SIZE;
/* Read the first block and get the superblock from it */
- memcpy(&super, cramfs_read(sb, 0, sizeof(super)), sizeof(super));
+ mutex_lock(&read_mutex);
+ memcpy(super, cramfs_read(sb, 0, sizeof(*super)), sizeof(*super));
mutex_unlock(&read_mutex);
/* Do sanity checks on the superblock */
- if (super.magic != CRAMFS_MAGIC) {
+ if (super->magic != CRAMFS_MAGIC) {
/* check for wrong endianness */
- if (super.magic == CRAMFS_MAGIC_WEND) {
+ if (super->magic == CRAMFS_MAGIC_WEND) {
if (!silent)
pr_err("wrong endianness\n");
return -EINVAL;
@@ -289,10 +327,10 @@ static int cramfs_fill_super(struct super_block *sb, void *data, int silent)
/* check at 512 byte offset */
mutex_lock(&read_mutex);
- memcpy(&super, cramfs_read(sb, 512, sizeof(super)), sizeof(super));
+ memcpy(super, cramfs_read(sb, 512, sizeof(*super)), sizeof(*super));
mutex_unlock(&read_mutex);
- if (super.magic != CRAMFS_MAGIC) {
- if (super.magic == CRAMFS_MAGIC_WEND && !silent)
+ if (super->magic != CRAMFS_MAGIC) {
+ if (super->magic == CRAMFS_MAGIC_WEND && !silent)
pr_err("wrong endianness\n");
else if (!silent)
pr_err("wrong magic\n");
@@ -301,34 +339,34 @@ static int cramfs_fill_super(struct super_block *sb, void *data, int silent)
}
/* get feature flags first */
- if (super.flags & ~CRAMFS_SUPPORTED_FLAGS) {
+ if (super->flags & ~CRAMFS_SUPPORTED_FLAGS) {
pr_err("unsupported filesystem features\n");
return -EINVAL;
}
/* Check that the root inode is in a sane state */
- if (!S_ISDIR(super.root.mode)) {
+ if (!S_ISDIR(super->root.mode)) {
pr_err("root is not a directory\n");
return -EINVAL;
}
/* correct strange, hard-coded permissions of mkcramfs */
- super.root.mode |= (S_IRUSR | S_IXUSR | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH);
+ super->root.mode |= (S_IRUSR | S_IXUSR | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH);
- root_offset = super.root.offset << 2;
- if (super.flags & CRAMFS_FLAG_FSID_VERSION_2) {
- sbi->size = super.size;
- sbi->blocks = super.fsid.blocks;
- sbi->files = super.fsid.files;
+ root_offset = super->root.offset << 2;
+ if (super->flags & CRAMFS_FLAG_FSID_VERSION_2) {
+ sbi->size = super->size;
+ sbi->blocks = super->fsid.blocks;
+ sbi->files = super->fsid.files;
} else {
sbi->size = 1<<28;
sbi->blocks = 0;
sbi->files = 0;
}
- sbi->magic = super.magic;
- sbi->flags = super.flags;
+ sbi->magic = super->magic;
+ sbi->flags = super->flags;
if (root_offset == 0)
pr_info("empty filesystem");
- else if (!(super.flags & CRAMFS_FLAG_SHIFTED_ROOT_OFFSET) &&
+ else if (!(super->flags & CRAMFS_FLAG_SHIFTED_ROOT_OFFSET) &&
((root_offset != sizeof(struct cramfs_super)) &&
(root_offset != 512 + sizeof(struct cramfs_super))))
{
@@ -336,9 +374,18 @@ static int cramfs_fill_super(struct super_block *sb, void *data, int silent)
return -EINVAL;
}
+ return 0;
+}
+
+static int cramfs_finalize_super(struct super_block *sb,
+ struct cramfs_inode *cramfs_root)
+{
+ struct inode *root;
+
/* Set it all up.. */
+ sb->s_flags |= MS_RDONLY;
sb->s_op = &cramfs_ops;
- root = get_cramfs_inode(sb, &super.root, 0);
+ root = get_cramfs_inode(sb, cramfs_root, 0);
if (IS_ERR(root))
return PTR_ERR(root);
sb->s_root = d_make_root(root);
@@ -347,6 +394,92 @@ static int cramfs_fill_super(struct super_block *sb, void *data, int silent)
return 0;
}
+static int cramfs_blkdev_fill_super(struct super_block *sb, void *data, int silent)
+{
+ struct cramfs_sb_info *sbi;
+ struct cramfs_super super;
+ int i, err;
+
+ sbi = kzalloc(sizeof(struct cramfs_sb_info), GFP_KERNEL);
+ if (!sbi)
+ return -ENOMEM;
+ sb->s_fs_info = sbi;
+
+ /* Invalidate the read buffers on mount: think disk change.. */
+ for (i = 0; i < READ_BUFFERS; i++)
+ buffer_blocknr[i] = -1;
+
+ err = cramfs_read_super(sb, &super, silent);
+ if (err)
+ return err;
+ return cramfs_finalize_super(sb, &super.root);
+}
+
+static int cramfs_physmem_fill_super(struct super_block *sb, void *data, int silent)
+{
+ struct cramfs_sb_info *sbi;
+ struct cramfs_super super;
+ char *p;
+ int err;
+
+ sbi = kzalloc(sizeof(struct cramfs_sb_info), GFP_KERNEL);
+ if (!sbi)
+ return -ENOMEM;
+ sb->s_fs_info = sbi;
+
+ /*
+ * The physical location of the cramfs image is specified as
+ * a mount parameter. This parameter is mandatory for obvious
+ * reasons. Some validation is made on the phys address but this
+ * is not exhaustive and we count on the fact that someone using
+ * this feature is supposed to know what he/she's doing.
+ */
+ if (!data || !(p = strstr((char *)data, "physaddr="))) {
+ pr_err("unknown physical address for linear cramfs image\n");
+ return -EINVAL;
+ }
+ sbi->linear_phys_addr = memparse(p + 9, NULL);
+ if (!sbi->linear_phys_addr) {
+ pr_err("bad value for cramfs image physical address\n");
+ return -EINVAL;
+ }
+ if (sbi->linear_phys_addr & (PAGE_SIZE-1)) {
+ pr_err("physical address %pap for linear cramfs isn't aligned to a page boundary\n",
+ &sbi->linear_phys_addr);
+ return -EINVAL;
+ }
+
+ /*
+ * Map only one page for now. Will remap it when fs size is known.
+ * Although we'll only read from it, we want the CPU cache to
+ * kick in for the higher throughput it provides, hence MEMREMAP_WB.
+ */
+ pr_info("checking physical address %pap for linear cramfs image\n", &sbi->linear_phys_addr);
+ sbi->linear_virt_addr = memremap(sbi->linear_phys_addr, PAGE_SIZE,
+ MEMREMAP_WB);
+ if (!sbi->linear_virt_addr) {
+ pr_err("memremap of the linear cramfs image failed\n");
+ return -ENOMEM;
+ }
+
+ err = cramfs_read_super(sb, &super, silent);
+ if (err)
+ return err;
+
+ /* Remap the whole filesystem now */
+ pr_info("linear cramfs image appears to be %lu KB in size\n",
+ sbi->size/1024);
+ memunmap(sbi->linear_virt_addr);
+ sbi->linear_virt_addr = memremap(sbi->linear_phys_addr, sbi->size,
+ MEMREMAP_WB);
+ if (!sbi->linear_virt_addr) {
+ pr_err("memremap of the linear cramfs image failed\n");
+ return -ENOMEM;
+ }
+
+ return cramfs_finalize_super(sb, &super.root);
+}
+
static int cramfs_statfs(struct dentry *dentry, struct kstatfs *buf)
{
struct super_block *sb = dentry->d_sb;
@@ -573,38 +706,67 @@ static const struct super_operations cramfs_ops = {
.statfs = cramfs_statfs,
};
-static struct dentry *cramfs_mount(struct file_system_type *fs_type,
- int flags, const char *dev_name, void *data)
+static struct dentry *cramfs_blkdev_mount(struct file_system_type *fs_type,
+ int flags, const char *dev_name, void *data)
+{
+ return mount_bdev(fs_type, flags, dev_name, data, cramfs_blkdev_fill_super);
+}
+
+static struct dentry *cramfs_physmem_mount(struct file_system_type *fs_type,
+ int flags, const char *dev_name, void *data)
{
- return mount_bdev(fs_type, flags, dev_name, data, cramfs_fill_super);
+ return mount_nodev(fs_type, flags, data, cramfs_physmem_fill_super);
}
static struct file_system_type cramfs_fs_type = {
.owner = THIS_MODULE,
.name = "cramfs",
- .mount = cramfs_mount,
- .kill_sb = cramfs_kill_sb,
+ .mount = cramfs_blkdev_mount,
+ .kill_sb = cramfs_blkdev_kill_sb,
.fs_flags = FS_REQUIRES_DEV,
};
+
+static struct file_system_type cramfs_physmem_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "cramfs_physmem",
+ .mount = cramfs_physmem_mount,
+ .kill_sb = cramfs_physmem_kill_sb,
+};
+
+#ifdef CONFIG_CRAMFS_BLOCKDEV
MODULE_ALIAS_FS("cramfs");
+#endif
+#ifdef CONFIG_CRAMFS_PHYSMEM
+MODULE_ALIAS_FS("cramfs_physmem");
+#endif
static int __init init_cramfs_fs(void)
{
int rv;
- rv = cramfs_uncompress_init();
- if (rv < 0)
- return rv;
- rv = register_filesystem(&cramfs_fs_type);
- if (rv < 0)
- cramfs_uncompress_exit();
- return rv;
+ if ((rv = cramfs_uncompress_init()) < 0)
+ goto err0;
+ if (IS_ENABLED(CONFIG_CRAMFS_BLOCKDEV) &&
+ (rv = register_filesystem(&cramfs_fs_type)) < 0)
+ goto err1;
+ if (IS_ENABLED(CONFIG_CRAMFS_PHYSMEM) &&
+ (rv = register_filesystem(&cramfs_physmem_fs_type)) < 0)
+ goto err2;
+ return 0;
+
+err2: if (IS_ENABLED(CONFIG_CRAMFS_BLOCKDEV))
+ unregister_filesystem(&cramfs_fs_type);
+err1: cramfs_uncompress_exit();
+err0: return rv;
}
static void __exit exit_cramfs_fs(void)
{
cramfs_uncompress_exit();
- unregister_filesystem(&cramfs_fs_type);
+ if (IS_ENABLED(CONFIG_CRAMFS_BLOCKDEV))
+ unregister_filesystem(&cramfs_fs_type);
+ if (IS_ENABLED(CONFIG_CRAMFS_PHYSMEM))
+ unregister_filesystem(&cramfs_physmem_fs_type);
}
module_init(init_cramfs_fs)
--
2.9.5
When cramfs_physmem is used, we have the opportunity to map files from
ROM directly into user space, saving on RAM usage. This gives us
execute-in-place (XIP) support.
For a file to be mmap()-able, the map area has to correspond to a range
of uncompressed and contiguous blocks, and in the MMU case it also has
to be page aligned. A version of mkcramfs with appropriate support is
necessary to create such a filesystem image.
In the MMU case it may happen that a vma extends beyond the actual file
size. This is notably the case in binfmt_elf.c:elf_map(). Or the file's
last block may be shared with other files and cannot be mapped as is.
Rather than refusing to mmap it, we do a partial map and set up a
special vm_ops fault handler that splits the vma in two: the directly
mapped vma and a memory-backed vma populated by the readpage method.
In the non-MMU case it is the get_unmapped_area method that is responsible
for providing the address where the actual data can be found. No mapping
is necessary of course.
Signed-off-by: Nicolas Pitre <[email protected]>
---
fs/cramfs/inode.c | 270 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 270 insertions(+)
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index b825ae162c..e3884c607b 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -16,6 +16,7 @@
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
+#include <linux/ramfs.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/blkdev.h>
@@ -49,6 +50,7 @@ static inline struct cramfs_sb_info *CRAMFS_SB(struct super_block *sb)
static const struct super_operations cramfs_ops;
static const struct inode_operations cramfs_dir_inode_operations;
static const struct file_operations cramfs_directory_operations;
+static const struct file_operations cramfs_physmem_fops;
static const struct address_space_operations cramfs_aops;
static DEFINE_MUTEX(read_mutex);
@@ -96,6 +98,10 @@ static struct inode *get_cramfs_inode(struct super_block *sb,
case S_IFREG:
inode->i_fop = &generic_ro_fops;
inode->i_data.a_ops = &cramfs_aops;
+ if (IS_ENABLED(CONFIG_CRAMFS_PHYSMEM) &&
+ CRAMFS_SB(sb)->flags & CRAMFS_FLAG_EXT_BLOCK_POINTERS &&
+ CRAMFS_SB(sb)->linear_phys_addr)
+ inode->i_fop = &cramfs_physmem_fops;
break;
case S_IFDIR:
inode->i_op = &cramfs_dir_inode_operations;
@@ -277,6 +283,270 @@ static void *cramfs_read(struct super_block *sb, unsigned int offset,
return NULL;
}
+/*
+ * For a mapping to be possible, we need a range of uncompressed and
+ * contiguous blocks. Return the offset for the first block and number of
+ * valid blocks for which that is true, or zero otherwise.
+ */
+static u32 cramfs_get_block_range(struct inode *inode, u32 pgoff, u32 *pages)
+{
+ struct super_block *sb = inode->i_sb;
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+ int i;
+ u32 *blockptrs, blockaddr;
+
+ /*
+ * We can dereference memory directly here as this code may be
+ * reached only when there is a direct filesystem image mapping
+ * available in memory.
+ */
+ blockptrs = (u32 *)(sbi->linear_virt_addr + OFFSET(inode) + pgoff*4);
+ blockaddr = blockptrs[0] & ~CRAMFS_BLK_FLAGS;
+ i = 0;
+ do {
+ u32 expect = blockaddr + i * (PAGE_SIZE >> 2);
+ expect |= CRAMFS_BLK_FLAG_DIRECT_PTR|CRAMFS_BLK_FLAG_UNCOMPRESSED;
+ if (blockptrs[i] != expect) {
+ pr_debug("range: block %d/%d got %#x expects %#x\n",
+ pgoff+i, pgoff+*pages-1, blockptrs[i], expect);
+ if (i == 0)
+ return 0;
+ break;
+ }
+ } while (++i < *pages);
+
+ *pages = i;
+
+ /* stored "direct" block ptrs are shifted down by 2 bits */
+ return blockaddr << 2;
+}
+
+/*
+ * It is possible for cramfs_physmem_mmap() to partially populate the mapping
+ * causing page faults in the unmapped area. When that happens, we need to
+ * split the vma so that the unmapped area gets its own vma that can be backed
+ * with actual memory pages and loaded normally. This is necessary because
+ * remap_pfn_range() overwrites vma->vm_pgoff with the pfn and filemap_fault()
+ * no longer works with it. Furthermore this makes /proc/x/maps right.
+ * Q: is there a way to do split vma at mmap() time?
+ */
+static const struct vm_operations_struct cramfs_vmasplit_ops;
+static int cramfs_vmasplit_fault(struct vm_fault *vmf)
+{
+ struct mm_struct *mm = vmf->vma->vm_mm;
+ struct vm_area_struct *vma, *new_vma;
+ unsigned long split_val, split_addr;
+ unsigned int split_pgoff, split_page;
+ int ret;
+
+ /* Retrieve the vma split address and validate it */
+ vma = vmf->vma;
+ split_val = (unsigned long)vma->vm_private_data;
+ split_pgoff = split_val & 0xffff;
+ split_page = split_val >> 16;
+ split_addr = vma->vm_start + split_page * PAGE_SIZE;
+ pr_debug("fault: addr=%#lx vma=%#lx-%#lx split=%#lx\n",
+ vmf->address, vma->vm_start, vma->vm_end, split_addr);
+ if (!split_val || split_addr >= vma->vm_end || vmf->address < split_addr)
+ return VM_FAULT_SIGSEGV;
+
+ /* We have some vma surgery to do and need the write lock. */
+ up_read(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return VM_FAULT_RETRY;
+
+ /* Make sure the vma didn't change between the locks */
+ vma = find_vma(mm, vmf->address);
+ if (vma->vm_ops != &cramfs_vmasplit_ops) {
+ /*
+ * Someone else raced with us and could have handled the fault.
+ * Let it go back to user space and fault again if necessary.
+ */
+ downgrade_write(&mm->mmap_sem);
+ return VM_FAULT_NOPAGE;
+ }
+
+ /* Split the vma between the directly mapped area and the rest */
+ ret = split_vma(mm, vma, split_addr, 0);
+ if (ret) {
+ downgrade_write(&mm->mmap_sem);
+ return VM_FAULT_OOM;
+ }
+
+ /* The direct vma should no longer ever fault */
+ vma->vm_ops = NULL;
+
+ /* Retrieve the new vma covering the unmapped area */
+ new_vma = find_vma(mm, split_addr);
+ BUG_ON(new_vma == vma);
+ if (!new_vma) {
+ downgrade_write(&mm->mmap_sem);
+ return VM_FAULT_SIGSEGV;
+ }
+
+ /*
+ * Readjust the new vma with the actual file based pgoff and
+ * process the fault normally on it.
+ */
+ new_vma->vm_pgoff = split_pgoff;
+ new_vma->vm_ops = &generic_file_vm_ops;
+ vmf->vma = new_vma;
+ vmf->pgoff = split_pgoff;
+ vmf->pgoff += (vmf->address - new_vma->vm_start) >> PAGE_SHIFT;
+ downgrade_write(&mm->mmap_sem);
+ return filemap_fault(vmf);
+}
+
+static const struct vm_operations_struct cramfs_vmasplit_ops = {
+ .fault = cramfs_vmasplit_fault,
+};
+
+static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct inode *inode = file_inode(file);
+ struct super_block *sb = inode->i_sb;
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+ unsigned int pages, vma_pages, max_pages, offset;
+ unsigned long address;
+ char *fail_reason;
+ int ret;
+
+ if (!IS_ENABLED(CONFIG_MMU))
+ return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -ENOSYS;
+
+ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
+ return -EINVAL;
+
+ /* Could COW work here? */
+ fail_reason = "vma is writable";
+ if (vma->vm_flags & VM_WRITE)
+ goto fail;
+
+ vma_pages = (vma->vm_end - vma->vm_start + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ max_pages = (inode->i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ fail_reason = "beyond file limit";
+ if (vma->vm_pgoff >= max_pages)
+ goto fail;
+ pages = vma_pages;
+ if (pages > max_pages - vma->vm_pgoff)
+ pages = max_pages - vma->vm_pgoff;
+
+ offset = cramfs_get_block_range(inode, vma->vm_pgoff, &pages);
+ fail_reason = "unsuitable block layout";
+ if (!offset)
+ goto fail;
+ address = sbi->linear_phys_addr + offset;
+ fail_reason = "data is not page aligned";
+ if (!PAGE_ALIGNED(address))
+ goto fail;
+
+ /* Don't map the last page if it contains some other data */
+ if (unlikely(vma->vm_pgoff + pages == max_pages)) {
+ unsigned int partial = offset_in_page(inode->i_size);
+ if (partial) {
+ char *data = sbi->linear_virt_addr + offset;
+ data += (pages - 1) * PAGE_SIZE + partial;
+ while ((unsigned long)data & 7)
+ if (*data++ != 0)
+ goto nonzero;
+ while (offset_in_page(data)) {
+ if (*(u64 *)data != 0) {
+ nonzero:
+ pr_debug("mmap: %s: last page is shared\n",
+ file_dentry(file)->d_name.name);
+ pages--;
+ break;
+ }
+ data += 8;
+ }
+ }
+ }
+
+ if (pages) {
+ /*
+ * If we can't map it all, page faults will occur if the
+ * unmapped area is accessed. Let's handle them to split the
+ * vma and let the normal paging machinery take care of the
+ * rest through cramfs_readpage(). Because remap_pfn_range()
+ * repurposes vma->vm_pgoff, we have to save it somewhere.
+ * Let's use vma->vm_private_data to hold both the pgoff and the
+ * actual address split point. Maximum file size is 16MB so we can
+ * pack both together.
+ */
+ if (pages != vma_pages) {
+ unsigned int split_pgoff = vma->vm_pgoff + pages;
+ unsigned long split_val = split_pgoff + (pages << 16);
+ vma->vm_private_data = (void *)split_val;
+ vma->vm_ops = &cramfs_vmasplit_ops;
+ /* to keep remap_pfn_range() happy */
+ vma->vm_end = vma->vm_start + pages * PAGE_SIZE;
+ }
+
+ ret = remap_pfn_range(vma, vma->vm_start, address >> PAGE_SHIFT,
+ pages * PAGE_SIZE, vma->vm_page_prot);
+ /* restore vm_end in case we cheated it above */
+ vma->vm_end = vma->vm_start + vma_pages * PAGE_SIZE;
+ if (ret)
+ return ret;
+
+ pr_debug("mapped %s at 0x%08lx (%u/%u pages) to vma 0x%08lx, "
+ "page_prot 0x%llx\n", file_dentry(file)->d_name.name,
+ address, pages, vma_pages, vma->vm_start,
+ (unsigned long long)pgprot_val(vma->vm_page_prot));
+ return 0;
+ }
+ fail_reason = "no suitable block remaining";
+
+fail:
+ pr_debug("%s: direct mmap failed: %s\n",
+ file_dentry(file)->d_name.name, fail_reason);
+
+ /* We failed to do a direct map, but normal paging will do it */
+ vma->vm_ops = &generic_file_vm_ops;
+ return 0;
+}
+
+#ifndef CONFIG_MMU
+
+static unsigned long cramfs_physmem_get_unmapped_area(struct file *file,
+ unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags)
+{
+ struct inode *inode = file_inode(file);
+ struct super_block *sb = inode->i_sb;
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+ unsigned int pages, block_pages, max_pages, offset;
+
+ pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ max_pages = (inode->i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ if (pgoff >= max_pages || pages > max_pages - pgoff)
+ return -EINVAL;
+ block_pages = pages;
+ offset = cramfs_get_block_range(inode, pgoff, &block_pages);
+ if (!offset || block_pages != pages)
+ return -ENOSYS;
+ addr = sbi->linear_phys_addr + offset;
+ pr_debug("get_unmapped for %s ofs %#lx siz %lu at 0x%08lx\n",
+ file_dentry(file)->d_name.name, pgoff*PAGE_SIZE, len, addr);
+ return addr;
+}
+
+static unsigned cramfs_physmem_mmap_capabilities(struct file *file)
+{
+ return NOMMU_MAP_COPY | NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_EXEC;
+}
+#endif
+
+static const struct file_operations cramfs_physmem_fops = {
+ .llseek = generic_file_llseek,
+ .read_iter = generic_file_read_iter,
+ .splice_read = generic_file_splice_read,
+ .mmap = cramfs_physmem_mmap,
+#ifndef CONFIG_MMU
+ .get_unmapped_area = cramfs_physmem_get_unmapped_area,
+ .mmap_capabilities = cramfs_physmem_mmap_capabilities,
+#endif
+};
+
static void cramfs_blkdev_kill_sb(struct super_block *sb)
{
struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
--
2.9.5
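For readers following the mmap path above, the contiguous-range scan in cramfs_get_block_range() can be modeled as a stand-alone user-space sketch. Flag values follow the block-pointer extension patch earlier in this series (bits 31 and 30 of each pointer); the helper name and 4 KB page size are our assumptions, not the kernel code itself:

```c
#include <assert.h>
#include <stdint.h>

#define BLK_UNCOMPRESSED (1u << 31)  /* CRAMFS_BLK_FLAG_UNCOMPRESSED */
#define BLK_DIRECT_PTR   (1u << 30)  /* CRAMFS_BLK_FLAG_DIRECT_PTR */
#define BLK_FLAGS        (BLK_UNCOMPRESSED | BLK_DIRECT_PTR)
#define PG_SIZE          4096u       /* assumes 4 KB pages */

/*
 * Mirror of the loop in cramfs_get_block_range(): starting at the first
 * block pointer, count how many consecutive pages are stored direct,
 * uncompressed and physically contiguous. Returns the byte offset of
 * the first block (stored pointers are shifted down by 2 bits), or 0 if
 * even the first block does not qualify; *pages is trimmed down to the
 * number of usable pages.
 */
static uint32_t get_block_range(const uint32_t *blockptrs, uint32_t npages,
                                uint32_t *pages)
{
    uint32_t blockaddr = blockptrs[0] & ~BLK_FLAGS;
    uint32_t i = 0;

    do {
        /* the next contiguous page starts PG_SIZE bytes later,
         * i.e. PG_SIZE >> 2 in stored (shifted) form */
        uint32_t expect = (blockaddr + i * (PG_SIZE >> 2)) | BLK_FLAGS;
        if (blockptrs[i] != expect) {
            if (i == 0)
                return 0;
            break;
        }
    } while (++i < npages);

    *pages = i;
    return blockaddr << 2;
}
```

Given two contiguous direct uncompressed pointers followed by anything else, the sketch reports two usable pages and the byte offset of the first block, matching what the mmap code needs for remap_pfn_range().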
Update the documentation, point to the latest tools, and appoint myself as maintainer.
Given it's been unloved for so long, I don't expect anyone will protest.
Signed-off-by: Nicolas Pitre <[email protected]>
---
Documentation/filesystems/cramfs.txt | 42 ++++++++++++++++++++++++++++++++++++
MAINTAINERS | 4 ++--
fs/cramfs/Kconfig | 9 +++++---
3 files changed, 50 insertions(+), 5 deletions(-)
diff --git a/Documentation/filesystems/cramfs.txt b/Documentation/filesystems/cramfs.txt
index 4006298f67..8875d306bc 100644
--- a/Documentation/filesystems/cramfs.txt
+++ b/Documentation/filesystems/cramfs.txt
@@ -45,6 +45,48 @@ you can just change the #define in mkcramfs.c, so long as you don't
mind the filesystem becoming unreadable to future kernels.
+Memory Mapped cramfs image
+--------------------------
+
+The CRAMFS_PHYSMEM Kconfig option adds support for loading cramfs data
+directly from a physical linear memory range (usually non-volatile memory
+like flash) instead of going through the block device layer. This saves
+some memory since no intermediate buffering is necessary to hold the data
+before decompressing.
+
+And when data blocks are kept uncompressed and properly aligned, they will
+automatically be mapped directly into user space whenever possible, providing
+eXecute-In-Place (XIP) of read-only segments straight from ROM. Data segments
+mapped read-write (which therefore must be copied to RAM) may still be kept
+compressed in the same cramfs image, alongside uncompressed read-only
+segments. Both MMU and no-MMU systems are supported. This is particularly
+handy for tiny embedded systems with very tight memory constraints.
+
+The filesystem type for this feature is "cramfs_physmem" to distinguish it
+from the block device (or MTD) based access. The location of the cramfs
+image in memory is system dependent. You must know the physical address
+where the cramfs image is located and specify it with the
+physaddr=0x******** mount option. For example, if the physical address
+of the cramfs image is 0x80100000, the following command would mount it
+on /mnt:
+
+$ mount -t cramfs_physmem -o physaddr=0x80100000 none /mnt
+
+To boot such an image as the root filesystem, the following kernel
+command line parameters must be provided:
+
+ "rootfstype=cramfs_physmem rootflags=physaddr=0x80100000"
+
+
+Tools
+-----
+
+A version of mkcramfs that can take advantage of the latest capabilities
+described above can be found here:
+
+https://github.com/npitre/cramfs-tools
+
+
For /usr/share/magic
--------------------
diff --git a/MAINTAINERS b/MAINTAINERS
index 44cb004c76..12f8155cfe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3612,8 +3612,8 @@ F: drivers/cpuidle/*
F: include/linux/cpuidle.h
CRAMFS FILESYSTEM
-W: http://sourceforge.net/projects/cramfs/
-S: Orphan / Obsolete
+M: Nicolas Pitre <[email protected]>
+S: Maintained
F: Documentation/filesystems/cramfs.txt
F: fs/cramfs/
diff --git a/fs/cramfs/Kconfig b/fs/cramfs/Kconfig
index 5eed4ad2d5..8ed27e41bd 100644
--- a/fs/cramfs/Kconfig
+++ b/fs/cramfs/Kconfig
@@ -1,5 +1,5 @@
config CRAMFS
- tristate "Compressed ROM file system support (cramfs) (OBSOLETE)"
+ tristate "Compressed ROM file system support (cramfs)"
select ZLIB_INFLATE
help
Saying Y here includes support for CramFs (Compressed ROM File
@@ -15,8 +15,11 @@ config CRAMFS
cramfs. Note that the root file system (the one containing the
directory /) cannot be compiled as a module.
- This filesystem is obsoleted by SquashFS, which is much better
- in terms of performance and features.
+ This filesystem is intentionally limited in capabilities and
+ performance to remain small and light on RAM usage. It is most
+ suitable for small embedded systems. For a more capable compressed
+ filesystem, look at SquashFS, which is much better in terms of
+ performance and features.
If unsure, say N.
--
2.9.5
On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> Small embedded systems typically execute the kernel code in place (XIP)
> directly from flash to save on precious RAM usage. This adds the ability
> to consume filesystem data directly from flash to the cramfs filesystem
> as well. Cramfs is particularly well suited to this feature as it is
> very simple and its RAM usage is already very low, and with this feature
> it is possible to use it with no block device support and even lower RAM
> usage.
>
> This patch was inspired by a similar patch from Shane Nay dated 17 years
> ago that used to be very popular in embedded circles but never made it
> into mainline. This is a cleaned-up implementation that uses far less
> memory at run time when both methods are configured in. In the
> context of small IoT deployments, this functionality has become relevant
> and useful again.
>
> To distinguish between both access types, the cramfs_physmem filesystem
> type must be specified when using a memory accessible cramfs image, and
> the physaddr argument must provide the actual filesystem image's physical
> memory location.
>
> Signed-off-by: Nicolas Pitre <[email protected]>
> ---
> fs/cramfs/Kconfig | 30 ++++++-
> fs/cramfs/inode.c | 264 +++++++++++++++++++++++++++++++++++++++++++-----------
> 2 files changed, 242 insertions(+), 52 deletions(-)
>
> diff --git a/fs/cramfs/Kconfig b/fs/cramfs/Kconfig
> index 11b29d491b..5eed4ad2d5 100644
> --- a/fs/cramfs/Kconfig
> +++ b/fs/cramfs/Kconfig
> @@ -1,6 +1,5 @@
> config CRAMFS
> tristate "Compressed ROM file system support (cramfs) (OBSOLETE)"
> - depends on BLOCK
> select ZLIB_INFLATE
> help
> Saying Y here includes support for CramFs (Compressed ROM File
> @@ -20,3 +19,32 @@ config CRAMFS
> in terms of performance and features.
>
> If unsure, say N.
> +
> +config CRAMFS_BLOCKDEV
> + bool "Support CramFs image over a regular block device" if EXPERT
> + depends on CRAMFS && BLOCK
> + default y
> + help
> + This option allows the CramFs driver to load data from a regular
> + block device such as a disk partition or a ramdisk.
> +
trailing whitespace
> +config CRAMFS_PHYSMEM
> + bool "Support CramFs image directly mapped in physical memory"
> + depends on CRAMFS
> + default y if !CRAMFS_BLOCKDEV
> + help
> + This option allows the CramFs driver to load data directly from
> + a linearly addressed memory range (usually non-volatile memory
> + like flash) instead of going through the block device layer.
> + This saves some memory since no intermediate buffering is
> + necessary.
> +
> + The filesystem type for this feature is "cramfs_physmem".
> + The location of the CramFs image in memory is board
> + dependent. Therefore, if you say Y, you must know the proper
> + physical address where to store the CramFs image and specify
> + it using the physaddr=0x******** mount option (for example:
> + "mount -t cramfs_physmem -o physaddr=0x100000 none /mnt").
> +
> + If unsure, say N.
> +
new blank line at EOF
-Chris
On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> Two new capabilities are introduced here:
>
> - The ability to store some blocks uncompressed.
>
> - The ability to locate blocks anywhere.
>
> Those capabilities can be used independently, but the combination
> opens the possibility for execute-in-place (XIP) of program text segments
> that must remain uncompressed, and in the MMU case, must have a specific
> alignment. It is even possible to still have the writable data segments
> from the same file compressed as they have to be copied into RAM anyway.
>
> This is achieved by giving special meanings to some unused block pointer
> bits while remaining compatible with legacy cramfs images.
>
> Signed-off-by: Nicolas Pitre <[email protected]>
> ---
> fs/cramfs/README | 31 ++++++++++++++-
> fs/cramfs/inode.c | 87 +++++++++++++++++++++++++++++++++---------
> include/uapi/linux/cramfs_fs.h | 20 +++++++++-
> 3 files changed, 118 insertions(+), 20 deletions(-)
>
> diff --git a/fs/cramfs/README b/fs/cramfs/README
> index 9d4e7ea311..d71b27e0ff 100644
> --- a/fs/cramfs/README
> +++ b/fs/cramfs/README
> @@ -49,17 +49,46 @@ same as the start of the (i+1)'th <block> if there is
> one). The first
> <block> immediately follows the last <block_pointer> for the file.
> <block_pointer>s are each 32 bits long.
>
> +When the CRAMFS_FLAG_EXT_BLOCK_POINTERS capability bit is set, each
> +<block_pointer>'s top bits may contain special flags as follows:
> +
> +CRAMFS_BLK_FLAG_UNCOMPRESSED (bit 31):
> + The block data is not compressed and should be copied verbatim.
> +
> +CRAMFS_BLK_FLAG_DIRECT_PTR (bit 30):
> + The <block_pointer> stores the actual block start offset and not
> + its end, shifted right by 2 bits. The block must therefore be
> + aligned to a 4-byte boundary. The block size is either blksize
> + if CRAMFS_BLK_FLAG_UNCOMPRESSED is also specified, otherwise
> + the compressed data length is included in the first 2 bytes of
> + the block data. This is used to allow discontiguous data layout
> + and specific data block alignments e.g. for XIP applications.
> +
> +
> The order of <file_data>'s is a depth-first descent of the directory
> tree, i.e. the same order as `find -size +0 \( -type f -o -type l \)
> -print'.
>
>
> <block>: The i'th <block> is the output of zlib's compress function
> -applied to the i'th blksize-sized chunk of the input data.
> +applied to the i'th blksize-sized chunk of the input data if the
> +corresponding CRAMFS_BLK_FLAG_UNCOMPRESSED <block_ptr> bit is not set,
> +otherwise it is the input data directly.
> (For the last <block> of the file, the input may of course be smaller.)
> Each <block> may be a different size. (See <block_pointer> above.)
> +
> <block>s are merely byte-aligned, not generally u32-aligned.
>
> +When CRAMFS_BLK_FLAG_DIRECT_PTR is specified then the corresponding
> +<block> may be located anywhere and not necessarily contiguous with
> +the previous/next blocks. In that case it is minimally u32-aligned.
> +If CRAMFS_BLK_FLAG_UNCOMPRESSED is also specified then the size is always
> +blksize except for the last block which is limited by the file length.
> +If CRAMFS_BLK_FLAG_DIRECT_PTR is set and CRAMFS_BLK_FLAG_UNCOMPRESSED
> +is not set then the first 2 bytes of the block contains the size of the
> +remaining block data as this cannot be determined from the placement of
> +logically adjacent blocks.
> +
>
> Holes
> -----
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index 393eb27ef4..b825ae162c 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c
> @@ -636,33 +636,84 @@ static int cramfs_readpage(struct file *file, struct page *page)
> if (page->index < maxblock) {
> struct super_block *sb = inode->i_sb;
> u32 blkptr_offset = OFFSET(inode) + page->index*4;
> - u32 start_offset, compr_len;
> + u32 block_ptr, block_start, block_len;
> + bool uncompressed, direct;
>
> - start_offset = OFFSET(inode) + maxblock*4;
> mutex_lock(&read_mutex);
> - if (page->index)
> - start_offset = *(u32 *) cramfs_read(sb, blkptr_offset-4,
> - 4);
> - compr_len = (*(u32 *) cramfs_read(sb, blkptr_offset, 4) -
> - start_offset);
> - mutex_unlock(&read_mutex);
> + block_ptr = *(u32 *) cramfs_read(sb, blkptr_offset, 4);
> + uncompressed = (block_ptr & CRAMFS_BLK_FLAG_UNCOMPRESSED);
> + direct = (block_ptr & CRAMFS_BLK_FLAG_DIRECT_PTR);
> + block_ptr &= ~CRAMFS_BLK_FLAGS;
> +
> + if (direct) {
> + /*
> + * The block pointer is an absolute start pointer,
> + * shifted by 2 bits. The size is included in the
> + * first 2 bytes of the data block when compressed,
> + * or PAGE_SIZE otherwise.
> + */
> + block_start = block_ptr << 2;
> + if (uncompressed) {
> + block_len = PAGE_SIZE;
> + /* if last block: cap to file length */
> + if (page->index == maxblock - 1)
> + block_len = offset_in_page(inode->i_size);
> + } else {
> + block_len = *(u16 *)
> + cramfs_read(sb, block_start, 2);
> + block_start += 2;
> + }
> + } else {
> + /*
> + * The block pointer indicates one past the end of
> + * the current block (start of next block). If this
> + * is the first block then it starts where the block
> + * pointer table ends, otherwise its start comes
> + * from the previous block's pointer.
> + */
> + block_start = OFFSET(inode) + maxblock*4;
> + if (page->index)
> + block_start = *(u32 *)
> + cramfs_read(sb, blkptr_offset-4, 4);
> + /* Beware... previous ptr might be a direct ptr */
> + if (unlikely(block_start & CRAMFS_BLK_FLAG_DIRECT_PTR)) {
> + /* See comments on earlier code. */
> + u32 prev_start = block_start;
> + block_start = prev_start & ~CRAMFS_BLK_FLAGS;
> + block_start <<= 2;
> + if (prev_start & CRAMFS_BLK_FLAG_UNCOMPRESSED) {
> + block_start += PAGE_SIZE;
> + } else {
> + block_len = *(u16 *)
> + cramfs_read(sb, block_start, 2);
> + block_start += 2 + block_len;
> + }
> + }
> + block_start &= ~CRAMFS_BLK_FLAGS;
> + block_len = block_ptr - block_start;
> + }
>
> - if (compr_len == 0)
> + if (block_len == 0)
> ; /* hole */
> - else if (unlikely(compr_len > (PAGE_SIZE << 1))) {
> - pr_err("bad compressed blocksize %u\n",
> - compr_len);
> + else if (unlikely(block_len > 2*PAGE_SIZE ||
> + (uncompressed && block_len > PAGE_SIZE))) {
> + mutex_unlock(&read_mutex);
> + pr_err("bad data blocksize %u\n", block_len);
> goto err;
> + } else if (uncompressed) {
> + memcpy(pgdata,
> + cramfs_read(sb, block_start, block_len),
> + block_len);
> + bytes_filled = block_len;
> } else {
> - mutex_lock(&read_mutex);
> bytes_filled = cramfs_uncompress_block(pgdata,
> PAGE_SIZE,
> - cramfs_read(sb, start_offset, compr_len),
> - compr_len);
> - mutex_unlock(&read_mutex);
> - if (unlikely(bytes_filled < 0))
> - goto err;
> + cramfs_read(sb, block_start, block_len),
> + block_len);
> }
> + mutex_unlock(&read_mutex);
> + if (unlikely(bytes_filled < 0))
> + goto err;
> }
>
> memset(pgdata + bytes_filled, 0, PAGE_SIZE - bytes_filled);
> diff --git a/include/uapi/linux/cramfs_fs.h
> b/include/uapi/linux/cramfs_fs.h
> index e4611a9b92..ed250aa372 100644
> --- a/include/uapi/linux/cramfs_fs.h
> +++ b/include/uapi/linux/cramfs_fs.h
> @@ -73,6 +73,7 @@ struct cramfs_super {
> #define CRAMFS_FLAG_HOLES 0x00000100 /* support for holes */
> #define CRAMFS_FLAG_WRONG_SIGNATURE 0x00000200 /* reserved */
> #define CRAMFS_FLAG_SHIFTED_ROOT_OFFSET 0x00000400 /* shifted root fs
> */
> +#define CRAMFS_FLAG_EXT_BLOCK_POINTERS 0x00000800 /* block pointer
> extensions */
>
> /*
> * Valid values in super.flags. Currently we refuse to mount
> @@ -82,7 +83,24 @@ struct cramfs_super {
> #define CRAMFS_SUPPORTED_FLAGS ( 0x000000ff \
> | CRAMFS_FLAG_HOLES \
> | CRAMFS_FLAG_WRONG_SIGNATURE \
> - | CRAMFS_FLAG_SHIFTED_ROOT_OFFSET )
> + | CRAMFS_FLAG_SHIFTED_ROOT_OFFSET \
> + | CRAMFS_FLAG_EXT_BLOCK_POINTERS )
>
> +/*
> + * Block pointer flags
> + *
> + * The maximum block offset that needs to be represented is roughly:
> + *
trailing whitespace
-Chris
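The extended block pointer format reviewed above (flags in the top two bits, direct pointers shifted right by 2) can be illustrated with a small user-space decoder. This is a sketch based on the README hunk in the patch, not kernel code; the struct and function names are ours:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BLK_UNCOMPRESSED (1u << 31)  /* CRAMFS_BLK_FLAG_UNCOMPRESSED */
#define BLK_DIRECT_PTR   (1u << 30)  /* CRAMFS_BLK_FLAG_DIRECT_PTR */
#define BLK_FLAGS        (BLK_UNCOMPRESSED | BLK_DIRECT_PTR)

struct blk_info {
    bool uncompressed;   /* copy the block verbatim instead of inflating */
    bool direct;         /* pointer holds the block start, not its end */
    uint32_t offset;     /* start offset if direct, end offset otherwise */
};

/* Decode one extended <block_pointer> as described in the README hunk. */
static struct blk_info decode_block_ptr(uint32_t ptr)
{
    struct blk_info b;

    b.uncompressed = (ptr & BLK_UNCOMPRESSED) != 0;
    b.direct = (ptr & BLK_DIRECT_PTR) != 0;
    ptr &= ~BLK_FLAGS;
    /* direct pointers store a 4-byte-aligned start, shifted right by 2 */
    b.offset = b.direct ? ptr << 2 : ptr;
    return b;
}
```

A legacy image never sets the top bits, so a plain end-offset pointer decodes unchanged, which is how the format stays backward compatible.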
On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> When cramfs_physmem is used then we have the opportunity to map files
> directly from ROM, directly into user space, saving on RAM usage.
> This gives us Execute-In-Place (XIP) support.
>
> For a file to be mmap()-able, the map area has to correspond to a range
> of uncompressed and contiguous blocks, and in the MMU case it also has
> to be page aligned. A version of mkcramfs with appropriate support is
> necessary to create such a filesystem image.
>
> In the MMU case it may happen for a vma structure to extend beyond the
> actual file size. This is notably the case in binfmt_elf.c:elf_map().
> Or the file's last block is shared with other files and cannot be mapped
> as is. Rather than refusing to mmap it, we do a partial map and set up a
> special vm_ops fault handler that splits the vma in two: the direct
> mapping
> vma and the memory-backed vma populated by the readpage method.
>
> In the non-MMU case it is the get_unmapped_area method that is responsible
> for providing the address where the actual data can be found. No mapping
> is necessary of course.
>
> Signed-off-by: Nicolas Pitre <[email protected]>
> ---
> fs/cramfs/inode.c | 270 ++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 270 insertions(+)
>
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index b825ae162c..e3884c607b 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c
> @@ -16,6 +16,7 @@
> #include <linux/module.h>
> #include <linux/fs.h>
> #include <linux/pagemap.h>
> +#include <linux/ramfs.h>
> #include <linux/init.h>
> #include <linux/string.h>
> #include <linux/blkdev.h>
> @@ -49,6 +50,7 @@ static inline struct cramfs_sb_info *CRAMFS_SB(struct
> super_block *sb)
> static const struct super_operations cramfs_ops;
> static const struct inode_operations cramfs_dir_inode_operations;
> static const struct file_operations cramfs_directory_operations;
> +static const struct file_operations cramfs_physmem_fops;
> static const struct address_space_operations cramfs_aops;
>
> static DEFINE_MUTEX(read_mutex);
> @@ -96,6 +98,10 @@ static struct inode *get_cramfs_inode(struct
> super_block *sb,
> case S_IFREG:
> inode->i_fop = &generic_ro_fops;
> inode->i_data.a_ops = &cramfs_aops;
> + if (IS_ENABLED(CONFIG_CRAMFS_PHYSMEM) &&
> + CRAMFS_SB(sb)->flags & CRAMFS_FLAG_EXT_BLOCK_POINTERS &&
> + CRAMFS_SB(sb)->linear_phys_addr)
> + inode->i_fop = &cramfs_physmem_fops;
> break;
> case S_IFDIR:
> inode->i_op = &cramfs_dir_inode_operations;
> @@ -277,6 +283,270 @@ static void *cramfs_read(struct super_block *sb,
> unsigned int offset,
> return NULL;
> }
>
> +/*
> + * For a mapping to be possible, we need a range of uncompressed and
> + * contiguous blocks. Return the offset for the first block and number of
> + * valid blocks for which that is true, or zero otherwise.
> + */
> +static u32 cramfs_get_block_range(struct inode *inode, u32 pgoff, u32
> *pages)
> +{
> + struct super_block *sb = inode->i_sb;
> + struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
> + int i;
> + u32 *blockptrs, blockaddr;
> +
> + /*
> + * We can dereference memory directly here as this code may be
> + * reached only when there is a direct filesystem image mapping
> + * available in memory.
> + */
> + blockptrs = (u32 *)(sbi->linear_virt_addr + OFFSET(inode) +
> pgoff*4);
> + blockaddr = blockptrs[0] & ~CRAMFS_BLK_FLAGS;
> + i = 0;
> + do {
> + u32 expect = blockaddr + i * (PAGE_SIZE >> 2);
> + expect |= CRAMFS_BLK_FLAG_DIRECT_PTR|CRAMFS_BLK_FLAG_UNCOMPRESSED;
> + if (blockptrs[i] != expect) {
> + pr_debug("range: block %d/%d got %#x expects %#x\n",
> + pgoff+i, pgoff+*pages-1, blockptrs[i], expect);
> + if (i == 0)
> + return 0;
> + break;
> + }
> + } while (++i < *pages);
> +
> + *pages = i;
> +
> + /* stored "direct" block ptrs are shifted down by 2 bits */
> + return blockaddr << 2;
> +}
> +
> +/*
> + * It is possible for cramfs_physmem_mmap() to partially populate the
> mapping
> + * causing page faults in the unmapped area. When that happens, we need
> to
> + * split the vma so that the unmapped area gets its own vma that can be
> backed
> + * with actual memory pages and loaded normally. This is necessary
> because
> + * remap_pfn_range() overwrites vma->vm_pgoff with the pfn and
> filemap_fault()
> + * no longer works with it. Furthermore this makes /proc/x/maps right.
> + * Q: is there a way to do split vma at mmap() time?
> + */
> +static const struct vm_operations_struct cramfs_vmasplit_ops;
> +static int cramfs_vmasplit_fault(struct vm_fault *vmf)
> +{
> + struct mm_struct *mm = vmf->vma->vm_mm;
> + struct vm_area_struct *vma, *new_vma;
> + unsigned long split_val, split_addr;
> + unsigned int split_pgoff, split_page;
> + int ret;
> +
> + /* Retrieve the vma split address and validate it */
> + vma = vmf->vma;
> + split_val = (unsigned long)vma->vm_private_data;
> + split_pgoff = split_val & 0xffff;
> + split_page = split_val >> 16;
> + split_addr = vma->vm_start + split_page * PAGE_SIZE;
> + pr_debug("fault: addr=%#lx vma=%#lx-%#lx split=%#lx\n",
> + vmf->address, vma->vm_start, vma->vm_end, split_addr);
> + if (!split_val || split_addr >= vma->vm_end || vmf->address < split_addr)
> + return VM_FAULT_SIGSEGV;
> +
> + /* We have some vma surgery to do and need the write lock. */
> + up_read(&mm->mmap_sem);
> + if (down_write_killable(&mm->mmap_sem))
> + return VM_FAULT_RETRY;
> +
> + /* Make sure the vma didn't change between the locks */
> + vma = find_vma(mm, vmf->address);
> + if (vma->vm_ops != &cramfs_vmasplit_ops) {
> + /*
> + * Someone else raced with us and could have handled the fault.
> + * Let it go back to user space and fault again if necessary.
> + */
> + downgrade_write(&mm->mmap_sem);
> + return VM_FAULT_NOPAGE;
> + }
> +
> + /* Split the vma between the directly mapped area and the rest */
> + ret = split_vma(mm, vma, split_addr, 0);
> + if (ret) {
> + downgrade_write(&mm->mmap_sem);
> + return VM_FAULT_OOM;
> + }
> +
> + /* The direct vma should no longer ever fault */
> + vma->vm_ops = NULL;
> +
> + /* Retrieve the new vma covering the unmapped area */
> + new_vma = find_vma(mm, split_addr);
> + BUG_ON(new_vma == vma);
> + if (!new_vma) {
> + downgrade_write(&mm->mmap_sem);
> + return VM_FAULT_SIGSEGV;
> + }
> +
> + /*
> + * Readjust the new vma with the actual file based pgoff and
> + * process the fault normally on it.
> + */
> + new_vma->vm_pgoff = split_pgoff;
> + new_vma->vm_ops = &generic_file_vm_ops;
> + vmf->vma = new_vma;
> + vmf->pgoff = split_pgoff;
> + vmf->pgoff += (vmf->address - new_vma->vm_start) >> PAGE_SHIFT;
> + downgrade_write(&mm->mmap_sem);
> + return filemap_fault(vmf);
> +}
> +
> +static const struct vm_operations_struct cramfs_vmasplit_ops = {
> + .fault = cramfs_vmasplit_fault,
> +};
> +
> +static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct
> *vma)
> +{
> + struct inode *inode = file_inode(file);
> + struct super_block *sb = inode->i_sb;
> + struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
> + unsigned int pages, vma_pages, max_pages, offset;
> + unsigned long address;
> + char *fail_reason;
> + int ret;
> +
> + if (!IS_ENABLED(CONFIG_MMU))
> + return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -ENOSYS;
> +
> + if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
> + return -EINVAL;
> +
> + /* Could COW work here? */
> + fail_reason = "vma is writable";
> + if (vma->vm_flags & VM_WRITE)
> + goto fail;
> +
> + vma_pages = (vma->vm_end - vma->vm_start + PAGE_SIZE - 1) >> PAGE_SHIFT;
> + max_pages = (inode->i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
> + fail_reason = "beyond file limit";
> + if (vma->vm_pgoff >= max_pages)
> + goto fail;
> + pages = vma_pages;
> + if (pages > max_pages - vma->vm_pgoff)
> + pages = max_pages - vma->vm_pgoff;
> +
> + offset = cramfs_get_block_range(inode, vma->vm_pgoff, &pages);
> + fail_reason = "unsuitable block layout";
> + if (!offset)
> + goto fail;
> + address = sbi->linear_phys_addr + offset;
> + fail_reason = "data is not page aligned";
> + if (!PAGE_ALIGNED(address))
> + goto fail;
> +
> + /* Don't map the last page if it contains some other data */
> + if (unlikely(vma->vm_pgoff + pages == max_pages)) {
> + unsigned int partial = offset_in_page(inode->i_size);
> + if (partial) {
> + char *data = sbi->linear_virt_addr + offset;
> + data += (max_pages - 1) * PAGE_SIZE + partial;
> + while ((unsigned long)data & 7)
> + if (*data++ != 0)
> + goto nonzero;
> + while (offset_in_page(data)) {
> + if (*(u64 *)data != 0) {
> + nonzero:
> + pr_debug("mmap: %s: last page is shared\n",
> + file_dentry(file)->d_name.name);
> + pages--;
> + break;
> + }
> + data += 8;
> + }
> + }
> + }
> +
> + if (pages) {
> + /*
> + * If we can't map it all, page faults will occur if the
> + * unmapped area is accessed. Let's handle them to split the
> + * vma and let the normal paging machinery take care of the
> + * rest through cramfs_readpage(). Because remap_pfn_range()
> + * repurposes vma->vm_pgoff, we have to save it somewhere.
> + * Let's use vma->vm_private_data to hold both the pgoff and the
> + * actual address split point.
> + * Maximum file size is 16MB so we can pack both together.
> + */
> + if (pages != vma_pages) {
> + unsigned int split_pgoff = vma->vm_pgoff + pages;
> + unsigned long split_val = split_pgoff + (pages << 16);
> + vma->vm_private_data = (void *)split_val;
> + vma->vm_ops = &cramfs_vmasplit_ops;
> + /* to keep remap_pfn_range() happy */
> + vma->vm_end = vma->vm_start + pages * PAGE_SIZE;
> + }
> +
> + ret = remap_pfn_range(vma, vma->vm_start, address >> PAGE_SHIFT,
> + pages * PAGE_SIZE, vma->vm_page_prot);
space before tab in indent
-Chris
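The vm_private_data trick quoted above relies on both halves of the split information fitting in one pointer-sized value. A user-space sketch of the packing (helper names are ours; the 16 MB / 4 KB-page limits come from the comment in the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * vm_private_data can carry only a single pointer-sized value, so the
 * patch packs the file-relative pgoff of the split point together with
 * the page index of the split within the vma. A cramfs file is at most
 * 16 MB, i.e. 4096 pages of 4 KB, so each value fits in 16 bits.
 */
static unsigned long pack_split(uint32_t split_pgoff, uint32_t split_page)
{
    return (unsigned long)split_pgoff | ((unsigned long)split_page << 16);
}

static void unpack_split(unsigned long split_val,
                         uint32_t *split_pgoff, uint32_t *split_page)
{
    *split_pgoff = split_val & 0xffff;
    *split_page = split_val >> 16;
}
```

The fault handler then recovers split_addr as vm_start + split_page * PAGE_SIZE and restores split_pgoff into the new vma's vm_pgoff after the split.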
On Wed, 16 Aug 2017, Chris Brandt wrote:
> On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> > + bool "Support CramFs image over a regular block device" if EXPERT
> > + depends on CRAMFS && BLOCK
> > + default y
> > + help
> > + This option allows the CramFs driver to load data from a regular
> > + block device such as a disk partition or a ramdisk.
> > +
>
>
> trailing whitespace
Yeah... Fixed it and the others in my git repo, thanks.
This is something that can be done with git apply --whitespace=fix so I
won't repost unless I get more comments.
Nicolas
On Wed, Aug 16, 2017 at 01:35:35PM -0400, Nicolas Pitre wrote:
> +static const struct vm_operations_struct cramfs_vmasplit_ops;
> +static int cramfs_vmasplit_fault(struct vm_fault *vmf)
> +{
> + struct mm_struct *mm = vmf->vma->vm_mm;
> + struct vm_area_struct *vma, *new_vma;
> + unsigned long split_val, split_addr;
> + unsigned int split_pgoff, split_page;
> + int ret;
> +
> + /* Retrieve the vma split address and validate it */
> + vma = vmf->vma;
> + split_val = (unsigned long)vma->vm_private_data;
> + split_pgoff = split_val & 0xffff;
> + split_page = split_val >> 16;
> + split_addr = vma->vm_start + split_page * PAGE_SIZE;
> + pr_debug("fault: addr=%#lx vma=%#lx-%#lx split=%#lx\n",
> + vmf->address, vma->vm_start, vma->vm_end, split_addr);
> + if (!split_val || split_addr >= vma->vm_end || vmf->address < split_addr)
> + return VM_FAULT_SIGSEGV;
> +
> + /* We have some vma surgery to do and need the write lock. */
> + up_read(&mm->mmap_sem);
> + if (down_write_killable(&mm->mmap_sem))
> + return VM_FAULT_RETRY;
> +
> + /* Make sure the vma didn't change between the locks */
> + vma = find_vma(mm, vmf->address);
> + if (vma->vm_ops != &cramfs_vmasplit_ops) {
> + /*
> + * Someone else raced with us and could have handled the fault.
> + * Let it go back to user space and fault again if necessary.
> + */
> + downgrade_write(&mm->mmap_sem);
> + return VM_FAULT_NOPAGE;
> + }
> +
> + /* Split the vma between the directly mapped area and the rest */
> + ret = split_vma(mm, vma, split_addr, 0);
Egads... Everything else aside, who said that your split_... will have
anything to do with the vma you get from find_vma()?
On Mon, 28 Aug 2017, Al Viro wrote:
> On Wed, Aug 16, 2017 at 01:35:35PM -0400, Nicolas Pitre wrote:
>
> > +static const struct vm_operations_struct cramfs_vmasplit_ops;
> > +static int cramfs_vmasplit_fault(struct vm_fault *vmf)
> > +{
> > + struct mm_struct *mm = vmf->vma->vm_mm;
> > + struct vm_area_struct *vma, *new_vma;
> > + unsigned long split_val, split_addr;
> > + unsigned int split_pgoff, split_page;
> > + int ret;
> > +
> > + /* Retrieve the vma split address and validate it */
> > + vma = vmf->vma;
> > + split_val = (unsigned long)vma->vm_private_data;
> > + split_pgoff = split_val & 0xffff;
> > + split_page = split_val >> 16;
> > + split_addr = vma->vm_start + split_page * PAGE_SIZE;
> > + pr_debug("fault: addr=%#lx vma=%#lx-%#lx split=%#lx\n",
> > + vmf->address, vma->vm_start, vma->vm_end, split_addr);
> > + if (!split_val || split_addr >= vma->vm_end || vmf->address < split_addr)
> > + return VM_FAULT_SIGSEGV;
> > +
> > + /* We have some vma surgery to do and need the write lock. */
> > + up_read(&mm->mmap_sem);
> > + if (down_write_killable(&mm->mmap_sem))
> > + return VM_FAULT_RETRY;
> > +
> > + /* Make sure the vma didn't change between the locks */
> > + vma = find_vma(mm, vmf->address);
> > + if (vma->vm_ops != &cramfs_vmasplit_ops) {
> > + /*
> > + * Someone else raced with us and could have handled the fault.
> > + * Let it go back to user space and fault again if necessary.
> > + */
> > + downgrade_write(&mm->mmap_sem);
> > + return VM_FAULT_NOPAGE;
> > + }
> > +
> > + /* Split the vma between the directly mapped area and the rest */
> > + ret = split_vma(mm, vma, split_addr, 0);
>
> Egads... Everything else aside, who said that your split_... will have
> anything to do with the vma you get from find_vma()?
When vma->vm_ops == &cramfs_vmasplit_ops it is guaranteed that the vma
is not fully populated and that the unpopulated area starts at
split_addr. That split_addr was stored in vma->vm_private_data at the
same time as vma->vm_ops. Given that mm->mmap_sem is held all along
across find_vma(), split_vma() and the second find_vma() I hope that I
can trust that things will be related.
Nicolas
On Mon, Aug 28, 2017 at 09:29:58AM -0400, Nicolas Pitre wrote:
> > > + /* Make sure the vma didn't change between the locks */
> > > + vma = find_vma(mm, vmf->address);
> > > + if (vma->vm_ops != &cramfs_vmasplit_ops) {
> > > + /*
> > > + * Someone else raced with us and could have handled the fault.
> > > + * Let it go back to user space and fault again if necessary.
> > > + */
> > > + downgrade_write(&mm->mmap_sem);
> > > + return VM_FAULT_NOPAGE;
> > > + }
> > > +
> > > + /* Split the vma between the directly mapped area and the rest */
> > > + ret = split_vma(mm, vma, split_addr, 0);
> >
> > Egads... Everything else aside, who said that your split_... will have
> > anything to do with the vma you get from find_vma()?
>
> When vma->vm_ops == &cramfs_vmasplit_ops it is guaranteed that the vma
> is not fully populated and that the unpopulated area starts at
> split_addr. That split_addr was stored in vma->vm_private_data at the
> same time as vma->vm_ops. Given that mm->mmap_sem is held all along
> across find_vma(), split_vma() and the second find_vma() I hope that I
> can trust that things will be related.
Huh? You do realize that another thread might've been blocked on that ->mmap_sem
in mremap(), get it, have ours block on attempt to get ->mmap_sem exclusive,
exterminate the original vma and put there a vma that has also come from cramfs,
but other than that had not a damn thing in common with the original. Different
memory area, etc.
Matching ->vm_ops is nowhere near enough.
While we are at it, what happens if you mmap 120Kb, then munmap() the middle
40Kb. Leaving two 40Kb VMAs with 40Kb gap between them, that is. Will your
->vm_private_data be correct for both?
On Mon, 28 Aug 2017, Al Viro wrote:
> On Mon, Aug 28, 2017 at 09:29:58AM -0400, Nicolas Pitre wrote:
> > > > + /* Make sure the vma didn't change between the locks */
> > > > + vma = find_vma(mm, vmf->address);
> > > > + if (vma->vm_ops != &cramfs_vmasplit_ops) {
> > > > + /*
> > > > + * Someone else raced with us and could have handled the fault.
> > > > + * Let it go back to user space and fault again if necessary.
> > > > + */
> > > > + downgrade_write(&mm->mmap_sem);
> > > > + return VM_FAULT_NOPAGE;
> > > > + }
> > > > +
> > > > + /* Split the vma between the directly mapped area and the rest */
> > > > + ret = split_vma(mm, vma, split_addr, 0);
> > >
> > > Egads... Everything else aside, who said that your split_... will have
> > > anything to do with the vma you get from find_vma()?
> >
> > When vma->vm_ops == &cramfs_vmasplit_ops it is guaranteed that the vma
> > is not fully populated and that the unpopulated area starts at
> > split_addr. That split_addr was stored in vma->vm_private_data at the
> > same time as vma->vm_ops. Given that mm->mmap_sem is held all along
> > across find_vma(), split_vma() and the second find_vma() I hope that I
> > can trust that things will be related.
>
> Huh? You do realize that another thread might've been blocked on that ->mmap_sem
> in mremap(), get it, have ours block on attempt to get ->mmap_sem exclusive,
> exterminate the original vma and put there a vma that has also come from cramfs,
> but other than that had not a damn thing in common with the original. Different
> memory area, etc.
>
> Matching ->vm_ops is nowhere near enough.
Right... good point.
OK I moved the lock promotion right at the beginning _before_ validating
the split point. Also got a reference on the file to make sure that
hasn't changed too.
> While we are at it, what happens if you mmap 120Kb, then munmap() the middle
> 40Kb. Leaving two 40Kb VMAs with 40Kb gap between them, that is. Will your
> ->vm_private_data be correct for both?
It wouldn't, but I have now changed it to contain absolute values, so it
will. And if the split point lands in the hole then the code just
readjusts the pgoff at the beginning of the remaining part.
Here's the revised patch:
From: Nicolas Pitre <[email protected]>
Subject: [PATCH] cramfs: add mmap support
When cramfs_physmem is used then we have the opportunity to map files
directly from ROM, directly into user space, saving on RAM usage.
This gives us Execute-In-Place (XIP) support.
For a file to be mmap()-able, the map area has to correspond to a range
of uncompressed and contiguous blocks, and in the MMU case it also has
to be page aligned. A version of mkcramfs with appropriate support is
necessary to create such a filesystem image.
In the MMU case a vma structure may extend beyond the actual file size.
This is notably the case in binfmt_elf.c:elf_map(). Or the file's last
block may be shared with other files and therefore cannot be mapped as
is. Rather than refusing to mmap it, we do a partial map and set up
a special vm_ops fault handler that splits the vma in two: the direct
mapping vma and the memory-backed vma populated by the readpage method.
In practice the unmapped area is seldom accessed so the split might never
occur before this area is discarded.
In the non-MMU case it is the get_unmapped_area method that is responsible
for providing the address where the actual data can be found. No mapping
is necessary of course.
Signed-off-by: Nicolas Pitre <[email protected]>
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 2fc886092b..1d7d61354b 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -15,7 +15,9 @@
#include <linux/module.h>
#include <linux/fs.h>
+#include <linux/file.h>
#include <linux/pagemap.h>
+#include <linux/ramfs.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/blkdev.h>
@@ -49,6 +51,7 @@ static inline struct cramfs_sb_info *CRAMFS_SB(struct super_block *sb)
static const struct super_operations cramfs_ops;
static const struct inode_operations cramfs_dir_inode_operations;
static const struct file_operations cramfs_directory_operations;
+static const struct file_operations cramfs_physmem_fops;
static const struct address_space_operations cramfs_aops;
static DEFINE_MUTEX(read_mutex);
@@ -96,6 +99,10 @@ static struct inode *get_cramfs_inode(struct super_block *sb,
case S_IFREG:
inode->i_fop = &generic_ro_fops;
inode->i_data.a_ops = &cramfs_aops;
+ if (IS_ENABLED(CONFIG_CRAMFS_PHYSMEM) &&
+ CRAMFS_SB(sb)->flags & CRAMFS_FLAG_EXT_BLOCK_POINTERS &&
+ CRAMFS_SB(sb)->linear_phys_addr)
+ inode->i_fop = &cramfs_physmem_fops;
break;
case S_IFDIR:
inode->i_op = &cramfs_dir_inode_operations;
@@ -277,6 +284,294 @@ static void *cramfs_read(struct super_block *sb, unsigned int offset,
return NULL;
}
+/*
+ * For a mapping to be possible, we need a range of uncompressed and
+ * contiguous blocks. Return the offset for the first block and number of
+ * valid blocks for which that is true, or zero otherwise.
+ */
+static u32 cramfs_get_block_range(struct inode *inode, u32 pgoff, u32 *pages)
+{
+ struct super_block *sb = inode->i_sb;
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+ int i;
+ u32 *blockptrs, blockaddr;
+
+ /*
+ * We can dereference memory directly here as this code may be
+ * reached only when there is a direct filesystem image mapping
+ * available in memory.
+ */
+ blockptrs = (u32 *)(sbi->linear_virt_addr + OFFSET(inode) + pgoff*4);
+ blockaddr = blockptrs[0] & ~CRAMFS_BLK_FLAGS;
+ i = 0;
+ do {
+ u32 expect = blockaddr + i * (PAGE_SIZE >> 2);
+ expect |= CRAMFS_BLK_FLAG_DIRECT_PTR|CRAMFS_BLK_FLAG_UNCOMPRESSED;
+ if (blockptrs[i] != expect) {
+ pr_debug("range: block %d/%d got %#x expects %#x\n",
+ pgoff+i, pgoff+*pages-1, blockptrs[i], expect);
+ if (i == 0)
+ return 0;
+ break;
+ }
+ } while (++i < *pages);
+
+ *pages = i;
+
+ /* stored "direct" block ptrs are shifted down by 2 bits */
+ return blockaddr << 2;
+}
+
+/*
+ * It is possible for cramfs_physmem_mmap() to partially populate the mapping
+ * causing page faults in the unmapped area. When that happens, we need to
+ * split the vma so that the unmapped area gets its own vma that can be backed
+ * with actual memory pages and loaded normally. This is necessary because
+ * remap_pfn_range() overwrites vma->vm_pgoff with the pfn and filemap_fault()
+ * no longer works with it. Furthermore this makes /proc/x/maps right.
+ * Q: is there a way to split the vma at mmap() time?
+ */
+static const struct vm_operations_struct cramfs_vmasplit_ops;
+static int cramfs_vmasplit_fault(struct vm_fault *vmf)
+{
+ struct mm_struct *mm = vmf->vma->vm_mm;
+ struct vm_area_struct *vma, *new_vma;
+ struct file *vma_file = get_file(vmf->vma->vm_file);
+ unsigned long split_val, split_addr;
+ unsigned int split_pgoff;
+ int ret;
+
+ /* We have some vma surgery to do and need the write lock. */
+ up_read(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem)) {
+ fput(vma_file);
+ return VM_FAULT_RETRY;
+ }
+
+ /* Make sure the vma didn't change between the locks */
+ ret = VM_FAULT_SIGSEGV;
+ vma = find_vma(mm, vmf->address);
+ if (!vma)
+ goto out_fput;
+
+ /*
+ * Someone else might have raced with us and handled the fault,
+ * changed the vma, etc. If so let it go back to user space and
+ * fault again if necessary.
+ */
+ ret = VM_FAULT_NOPAGE;
+ if (vma->vm_ops != &cramfs_vmasplit_ops || vma->vm_file != vma_file)
+ goto out_fput;
+ fput(vma_file);
+
+ /* Retrieve the vma split address and validate it */
+ split_val = (unsigned long)vma->vm_private_data;
+ split_pgoff = split_val & 0xfff;
+ split_addr = (split_val >> 12) << PAGE_SHIFT;
+ if (split_addr < vma->vm_start) {
+ /* bottom of vma was unmapped */
+ split_pgoff += (vma->vm_start - split_addr) >> PAGE_SHIFT;
+ split_addr = vma->vm_start;
+ }
+ pr_debug("fault: addr=%#lx vma=%#lx-%#lx split=%#lx\n",
+ vmf->address, vma->vm_start, vma->vm_end, split_addr);
+ ret = VM_FAULT_SIGSEGV;
+ if (!split_val || split_addr > vmf->address || vma->vm_end <= vmf->address)
+ goto out;
+
+ if (unlikely(vma->vm_start == split_addr)) {
+ /* nothing to split */
+ new_vma = vma;
+ } else {
+ /* Split away the directly mapped area */
+ ret = VM_FAULT_OOM;
+ if (split_vma(mm, vma, split_addr, 0) != 0)
+ goto out;
+
+ /* The direct vma should no longer ever fault */
+ vma->vm_ops = NULL;
+
+ /* Retrieve the new vma covering the unmapped area */
+ new_vma = find_vma(mm, split_addr);
+ BUG_ON(new_vma == vma);
+ ret = VM_FAULT_SIGSEGV;
+ if (!new_vma)
+ goto out;
+ }
+
+ /*
+ * Readjust the new vma with the actual file based pgoff and
+ * process the fault normally on it.
+ */
+ new_vma->vm_pgoff = split_pgoff;
+ new_vma->vm_ops = &generic_file_vm_ops;
+ new_vma->vm_flags &= ~(VM_IO | VM_PFNMAP | VM_DONTEXPAND);
+ vmf->vma = new_vma;
+ vmf->pgoff = split_pgoff;
+ vmf->pgoff += (vmf->address - new_vma->vm_start) >> PAGE_SHIFT;
+ downgrade_write(&mm->mmap_sem);
+ return filemap_fault(vmf);
+
+out_fput:
+ fput(vma_file);
+out:
+ downgrade_write(&mm->mmap_sem);
+ return ret;
+}
+
+static const struct vm_operations_struct cramfs_vmasplit_ops = {
+ .fault = cramfs_vmasplit_fault,
+};
+
+static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct inode *inode = file_inode(file);
+ struct super_block *sb = inode->i_sb;
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+ unsigned int pages, vma_pages, max_pages, offset;
+ unsigned long address;
+ char *fail_reason;
+ int ret;
+
+ if (!IS_ENABLED(CONFIG_MMU))
+ return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -ENOSYS;
+
+ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
+ return -EINVAL;
+
+ /* Could COW work here? */
+ fail_reason = "vma is writable";
+ if (vma->vm_flags & VM_WRITE)
+ goto fail;
+
+ vma_pages = (vma->vm_end - vma->vm_start + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ max_pages = (inode->i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ fail_reason = "beyond file limit";
+ if (vma->vm_pgoff >= max_pages)
+ goto fail;
+ pages = vma_pages;
+ if (pages > max_pages - vma->vm_pgoff)
+ pages = max_pages - vma->vm_pgoff;
+
+ offset = cramfs_get_block_range(inode, vma->vm_pgoff, &pages);
+ fail_reason = "unsuitable block layout";
+ if (!offset)
+ goto fail;
+ address = sbi->linear_phys_addr + offset;
+ fail_reason = "data is not page aligned";
+ if (!PAGE_ALIGNED(address))
+ goto fail;
+
+ /* Don't map the last page if it contains some other data */
+ if (unlikely(vma->vm_pgoff + pages == max_pages)) {
+ unsigned int partial = offset_in_page(inode->i_size);
+ if (partial) {
+ char *data = sbi->linear_virt_addr + offset;
+ data += (max_pages - 1) * PAGE_SIZE + partial;
+ while ((unsigned long)data & 7)
+ if (*data++ != 0)
+ goto nonzero;
+ while (offset_in_page(data)) {
+ if (*(u64 *)data != 0) {
+ nonzero:
+ pr_debug("mmap: %s: last page is shared\n",
+ file_dentry(file)->d_name.name);
+ pages--;
+ break;
+ }
+ data += 8;
+ }
+ }
+ }
+
+ if (pages) {
+ /*
+ * If we can't map it all, page faults will occur if the
+ * unmapped area is accessed. Let's handle them to split the
+ * vma and let the normal paging machinery take care of the
+ * rest through cramfs_readpage(). Because remap_pfn_range()
+ * repurposes vma->vm_pgoff, we have to save it somewhere.
+ * Let's use vma->vm_private_data to hold both the pgoff and
+ * the actual address split point. Maximum file size is 16MB
+ * (12 bits pgoff) and max 20 bits pfn where a long is 32 bits
+ * so we can pack both together.
+ */
+ if (pages != vma_pages) {
+ unsigned int split_pgoff = vma->vm_pgoff + pages;
+ unsigned long split_pfn = (vma->vm_start >> PAGE_SHIFT) + pages;
+ unsigned long split_val = split_pgoff | (split_pfn << 12);
+ vma->vm_private_data = (void *)split_val;
+ vma->vm_ops = &cramfs_vmasplit_ops;
+ /* to keep remap_pfn_range() happy */
+ vma->vm_end = vma->vm_start + pages * PAGE_SIZE;
+ }
+
+ ret = remap_pfn_range(vma, vma->vm_start, address >> PAGE_SHIFT,
+ pages * PAGE_SIZE, vma->vm_page_prot);
+ /* restore vm_end in case we cheated it above */
+ vma->vm_end = vma->vm_start + vma_pages * PAGE_SIZE;
+ if (ret)
+ return ret;
+
+ pr_debug("mapped %s at 0x%08lx (%u/%u pages) to vma 0x%08lx, "
+ "page_prot 0x%llx\n", file_dentry(file)->d_name.name,
+ address, pages, vma_pages, vma->vm_start,
+ (unsigned long long)pgprot_val(vma->vm_page_prot));
+ return 0;
+ }
+ fail_reason = "no suitable block remaining";
+
+fail:
+ pr_debug("%s: direct mmap failed: %s\n",
+ file_dentry(file)->d_name.name, fail_reason);
+
+ /* We failed to do a direct map, but normal paging will do it */
+ vma->vm_ops = &generic_file_vm_ops;
+ return 0;
+}
+
+#ifndef CONFIG_MMU
+
+static unsigned long cramfs_physmem_get_unmapped_area(struct file *file,
+ unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags)
+{
+ struct inode *inode = file_inode(file);
+ struct super_block *sb = inode->i_sb;
+ struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
+ unsigned int pages, block_pages, max_pages, offset;
+
+ pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ max_pages = (inode->i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ if (pgoff >= max_pages || pages > max_pages - pgoff)
+ return -EINVAL;
+ block_pages = pages;
+ offset = cramfs_get_block_range(inode, pgoff, &block_pages);
+ if (!offset || block_pages != pages)
+ return -ENOSYS;
+ addr = sbi->linear_phys_addr + offset;
+ pr_debug("get_unmapped for %s ofs %#lx siz %lu at 0x%08lx\n",
+ file_dentry(file)->d_name.name, pgoff*PAGE_SIZE, len, addr);
+ return addr;
+}
+
+static unsigned cramfs_physmem_mmap_capabilities(struct file *file)
+{
+ return NOMMU_MAP_COPY | NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_EXEC;
+}
+#endif
+
+static const struct file_operations cramfs_physmem_fops = {
+ .llseek = generic_file_llseek,
+ .read_iter = generic_file_read_iter,
+ .splice_read = generic_file_splice_read,
+ .mmap = cramfs_physmem_mmap,
+#ifndef CONFIG_MMU
+ .get_unmapped_area = cramfs_physmem_get_unmapped_area,
+ .mmap_capabilities = cramfs_physmem_mmap_capabilities,
+#endif
+};
+
static void cramfs_blkdev_kill_sb(struct super_block *sb)
{
struct cramfs_sb_info *sbi = CRAMFS_SB(sb);
On Monday, August 28, 2017, Nicolas Pitre wrote:
> OK I moved the lock promotion right at the beginning _before_ validating
> the split point. Also got a reference on the file to make sure that
> hasn't changed too.
>
> > While we are at it, what happens if you mmap 120Kb, then munmap() the
> > middle 40Kb. Leaving two 40Kb VMAs with 40Kb gap between them, that is.
> > Will your ->vm_private_data be correct for both?
>
> It wouldn't, but I now changed it to contain absolute values so now it
> will. And if the split point lands in the hole then the code just
> readjusts the pgoff at the beginning of the remaining part.
>
> Here's the revised patch:
For whatever it's worth, as soon as I moved to 4.13-rc7,
CONFIG_CRAMFS_PHYSMEM=y crashes my XIP_KERNEL system before it can even
get to any console output.
(both the old patch and the new patch)
If CONFIG_CRAMFS_PHYSMEM is not set, my XIP system boots fine.
However, if I boot -rc7 as a uImage, the new patch works just as well as
the old patch.
(mounting after boot, or booting with rootfstype=cramfs_physmem)
I guess I'll have to figure out what happened between -rc4 and -rc7.
Damn!
Chris
On Tue, 29 Aug 2017, Chris Brandt wrote:
> On Monday, August 28, 2017, Nicolas Pitre wrote:
> > OK I moved the lock promotion right at the beginning _before_ validating
> > the split point. Also got a reference on the file to make sure that
> > hasn't changed too.
> >
> > > While we are at it, what happens if you mmap 120Kb, then munmap() the
> > > middle 40Kb. Leaving two 40Kb VMAs with 40Kb gap between them, that is.
> > > Will your ->vm_private_data be correct for both?
> >
> > It wouldn't, but I now changed it to contain absolute values so now it
> > will. And if the split point lands in the hole then the code just
> > readjusts the pgoff at the beginning of the remaining part.
> >
> > Here's the revised patch:
>
>
> For whatever it's worth, as soon as I moved to 4.13-rc7,
> CONFIG_CRAMFS_PHYSMEM=y crashes my XIP_KERNEL system before it can even
> get to any console output.
>
> (both the old patch and the new patch)
>
> If CONFIG_CRAMFS_PHYSMEM is not set, my XIP system boots fine.
>
> However, if I boot -rc7 as a uImage, the new patch works just as well as
> the old patch.
When not a uImage, do you mean by that a XIP kernel? If so you should
know by now from that other thread on LAK that the XIP linker script is
broken and probably just worked by luck till now. Still, if you could
bisect between -rc4 and -rc7 and pinpoint the change that makes it not
work that would be better than speculations.
Nicolas
On Tuesday, August 29, 2017, Nicolas Pitre wrote:
> On Tue, 29 Aug 2017, Chris Brandt wrote:
>
> > On Monday, August 28, 2017, Nicolas Pitre wrote:
> > > OK I moved the lock promotion right at the beginning _before_
> > > validating the split point. Also got a reference on the file to make
> > > sure that hasn't changed too.
> > >
> > > > While we are at it, what happens if you mmap 120Kb, then munmap()
> > > > the middle 40Kb. Leaving two 40Kb VMAs with 40Kb gap between them,
> > > > that is. Will your ->vm_private_data be correct for both?
> > >
> > > It wouldn't, but I now changed it to contain absolute values so now it
> > > will. And if the split point lands in the hole then the code just
> > > readjusts the pgoff at the beginning of the remaining part.
> > >
> > > Here's the revised patch:
> >
> >
> > For whatever it's worth, as soon as I moved to 4.13-rc7,
> > CONFIG_CRAMFS_PHYSMEM=y crashes my XIP_KERNEL system before it can even
> > get to any console output.
> >
> > (both the old patch and the new patch)
> >
> > If CONFIG_CRAMFS_PHYSMEM is not set, my XIP system boots fine.
> >
> > However, if I boot -rc7 as a uImage, the new patch works just as well
> > as the old patch.
>
> When not a uImage, do you mean by that a XIP kernel?
Yes, CONFIG_XIP_KERNEL.
> If so you should
> know by now from that other thread on LAK that the XIP linker script is
> broken and probably just worked by luck till now. Still, if you could
> bisect between -rc4 and -rc7 and pinpoint the change that makes it not
> work that would be better than speculations.
Note that everything else seems OK when CONFIG_XIP_KERNEL=y. It's just the
combination of CONFIG_XIP_KERNEL=y and CONFIG_CRAMFS_PHYSMEM=y that is odd.
So hopefully that means it will be easy to track down.
Chris
On Tuesday, August 29, 2017, Chris Brandt wrote:
> On Tuesday, August 29, 2017, Nicolas Pitre wrote:
> > On Tue, 29 Aug 2017, Chris Brandt wrote:
> >
> > > On Monday, August 28, 2017, Nicolas Pitre wrote:
> > > > OK I moved the lock promotion right at the beginning _before_
> > > > validating the split point. Also got a reference on the file to make
> > > > sure that hasn't changed too.
> > > >
> > > > > While we are at it, what happens if you mmap 120Kb, then munmap()
> > > > > the middle 40Kb. Leaving two 40Kb VMAs with 40Kb gap between them,
> > > > > that is. Will your ->vm_private_data be correct for both?
> > > >
> > > > It wouldn't, but I now changed it to contain absolute values so now
> > > > it will. And if the split point lands in the hole then the code just
> > > > readjusts the pgoff at the beginning of the remaining part.
> > > >
> > > > Here's the revised patch:
> > >
> > >
> > > For whatever it's worth, as soon as I moved to 4.13-rc7,
> > > CONFIG_CRAMFS_PHYSMEM=y crashes my XIP_KERNEL system before it can
> > > even get to any console output.
> > >
> > > (both the old patch and the new patch)
> > >
> > > If CONFIG_CRAMFS_PHYSMEM is not set, my XIP system boots fine.
> > >
> > > However, if I boot -rc7 as a uImage, the new patch works just as well
> > > as the old patch.
> >
> > When not a uImage, do you mean by that a XIP kernel?
>
> Yes, CONFIG_XIP_KERNEL.
>
> > If so you should
> > know by now from that other thread on LAK that the XIP linker script is
> > broken and probably just worked by luck till now. Still, if you could
> > bisect between -rc4 and -rc7 and pinpoint the change that makes it not
> > work that would be better than speculations.
>
> Note that everything else seems OK when CONFIG_XIP_KERNEL=y. It's just
> the combination of CONFIG_XIP_KERNEL=y and CONFIG_CRAMFS_PHYSMEM=y that
> is odd. So hopefully that means it will be easy to track down.
Update:
My issue was caused by the XIP linker script (vmlinux-xip.lds.S).
After applying the following patch series from the linux-arm-kernel
mailing list, my system boots normally:
[PATCH v2 0/5] make XIP kernel .data compressed in ROM
[PATCH v2 1/5] ARM: head-common.S: speed up startup code
[PATCH v2 2/5] ARM: vmlinux*.lds.S: some decruftification
[PATCH v2 3/5] ARM: vmlinux.lds.S: replace open coded .data sections with generic macros
[PATCH v2 4/5] ARM: vmlinux-xip.lds.S: fix multiple issues
[PATCH v2 5/5] ARM: XIP kernel: store .data compressed in ROM
Now that I could boot again, this cramfs series of patches operates as
designed.
Notice that busybox, libc and ld have physical addresses in flash (i.e., XIP):
$ cat /proc/self/maps
00008000-000a1000 r-xp 1b005000 00:0c 18192 /bin/busybox
000a9000-000aa000 rw-p 00099000 00:0c 18192 /bin/busybox
000aa000-000ac000 rw-p 00000000 00:00 0 [heap]
b6eed000-b6fc6000 r-xp 1b0bc000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6fc6000-b6fce000 ---p 1b195000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6fce000-b6fd0000 r--p 000d9000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6fd0000-b6fd1000 rw-p 000db000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6fd1000-b6fd4000 rw-p 00000000 00:00 0
b6fd4000-b6feb000 r-xp 1b0a4000 00:0c 670372 /lib/ld-2.18-2013.10.so
b6fee000-b6fef000 rw-p 00000000 00:00 0
b6ff0000-b6ff2000 rw-p 00000000 00:00 0
b6ff2000-b6ff3000 r--p 00016000 00:0c 670372 /lib/ld-2.18-2013.10.so
b6ff3000-b6ff4000 rw-p 00017000 00:0c 670372 /lib/ld-2.18-2013.10.so
bee27000-bee48000 rw-p 00000000 00:00 0 [stack]
beea4000-beea5000 r-xp 00000000 00:00 0 [sigpage]
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]
Tested-by: Chris Brandt <[email protected]>
On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> This series brings a nice refresh to the cramfs filesystem, adding the
> following capabilities:
>
> - Direct memory access, bypassing the block and/or MTD layers entirely.
>
> - Ability to store individual data blocks uncompressed.
>
> - Ability to locate individual data blocks anywhere in the filesystem.
>
> The end result is a very tight filesystem that can be accessed directly
> from ROM without any other subsystem underneath. Also this allows for
> user space XIP which is a very important feature for tiny embedded
> systems.
>
> Why cramfs?
>
> Because cramfs is very simple and small. With CONFIG_CRAMFS_BLOCK=n and
> CONFIG_CRAMFS_PHYSMEM=y the cramfs driver may use as little as 3704 bytes
> of code. That's many times smaller than squashfs. And the runtime memory
> usage is also much less with cramfs than squashfs. It packs very tightly
> already compared to romfs which has no compression support. And the cramfs
> format was simple to extend, allowing for both compressed and uncompressed
> blocks within the same file.
>
> Why not accessing ROM via MTD?
>
> The MTD layer is nice and flexible. It also represents a huge overhead
> considering its core with no other enabled options weighs 19KB.
> That's many times the size of the cramfs code for something that
> essentially boils down to a glorified argument parser and a call to
> memremap() in this case. And if someone still wants to use cramfs via
> MTD then it is already possible with mtdblock.
>
> Why not using DAX?
>
> DAX stands for "Direct Access" and is a generic kernel layer helping
> with the necessary tasks involved with XIP. It is tailored for large
> writable filesystems and relies on the presence of an MMU. It also has
> the following shortcoming: "The DAX code does not work correctly on
> architectures which have virtually mapped caches such as ARM, MIPS and
> SPARC." That makes it unsuitable for a large portion of the intended
> targets for this series. And due to the read-only nature of cramfs, it is
> possible to achieve the intended result with a much simpler approach
> making DAX somewhat overkill in this context.
>
> The maximum size of a cramfs image can't exceed 272MB. In practice it is
> likely to be much much less. Given this series is concerned with small
> memory systems, even in the MMU case there is always plenty of vmalloc
> space left to map it all and even a 272MB memremap() wouldn't be a
> problem. If it is then maybe your system is big enough with large
> resources to manage already and you're pretty unlikely to be using cramfs
> in the first place.
>
> Of course, while this cramfs remains backward compatible with existing
> filesystem images, a newer mkcramfs version is necessary to take advantage
> of the extended data layout. I created a version of mkcramfs that
> detects ELF files and marks text+rodata segments for XIP and compresses
> the rest of those ELF files automatically.
>
> So here it is. I'm also willing to step up as cramfs maintainer given
> that no sign of any maintenance activities appeared for years.
>
> This series is also available based on v4.13-rc4 via git here:
>
> http://git.linaro.org/people/nicolas.pitre/linux xipcramfs
>
>
> Changes from v1:
>
> - Improved mmap() support by adding the ability to partially populate a
> mapping and lazily split the non-directly-mappable pages to a separate
> vma at fault time (thanks to Chris Brandt for testing).
>
> - Clarified the documentation some more.
>
>
> diffstat:
>
> Documentation/filesystems/cramfs.txt | 42 ++
> MAINTAINERS | 4 +-
> fs/cramfs/Kconfig | 39 +-
> fs/cramfs/README | 31 +-
> fs/cramfs/inode.c | 621 +++++++++++++++++++++++++----
> include/uapi/linux/cramfs_fs.h | 20 +-
> init/do_mounts.c | 8 +
> 7 files changed, 688 insertions(+), 77 deletions(-)
For this whole series:
Tested-by: Chris Brandt <[email protected]>
On Thu, 31 Aug 2017, Chris Brandt wrote:
> On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> > This series brings a nice refresh to the cramfs filesystem, adding the
> > following capabilities:
[...]
> For this whole series:
>
> Tested-by: Chris Brandt <[email protected]>
Thanks
Nicolas