2020-06-23 00:52:14

by Sven Van Asbroeck

Subject: [PATCH v1 2/2] romfs: address performance regression since v3.10

Problem
-------
romfs sequential read performance has regressed badly since
v3.10. Currently, reading a large file inside a romfs image is
up to 12x slower than reading the romfs image directly.

Benchmarks:
- use a romfs image which contains a single 250M file
- calculate the md5sum of the romfs image directly (test 1)
$ time md5sum image.romfs
- loop-mount the romfs image, and calculate the md5sum of the file
inside it (test 2)
$ mount -o loop,ro image.romfs /mnt/romfs
$ time md5sum /mnt/romfs/file
- drop caches before each timed run (see the combined sequence below)
$ echo 3 > /proc/sys/vm/drop_caches
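
Putting the steps together, a minimal sketch of the full
sequence (same file and mount point names as above, run as root):

$ echo 3 > /proc/sys/vm/drop_caches
$ time md5sum image.romfs                  # test 1
$ mount -o loop,ro image.romfs /mnt/romfs
$ echo 3 > /proc/sys/vm/drop_caches
$ time md5sum /mnt/romfs/file              # test 2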

imx6 (arm cortex a9) on emmc, running v5.7.2:
(test 1) 5 seconds
(test 2) 60 seconds (12x slower)

Intel i7-3630QM on Samsung SSD 850 EVO (EMT02B6Q),
running Ubuntu with v4.15.0-106-generic:
(test 1) 1.3 seconds
(test 2) 3.3 seconds (2.5x slower)

To show that a regression has occurred since v3.10:

imx6 on emmc, running v3.10.17:
(test 1) 16 seconds
(test 2) 18 seconds

Proposed Solution
-----------------
Increase the blocksize from 1K to PAGE_SIZE. This brings the
sequential read performance close to where it was on v3.10:

imx6 on emmc, running v5.7.2:
(test 2 1K blocksize) 60 seconds
(test 2 4K blocksize) 22 seconds

Intel on Ubuntu running v4.15:
(test 2 1K blocksize) 3.3 seconds
(test 2 4K blocksize) 1.9 seconds

Increasing the blocksize may increase latency on random-access
workloads, but the benchmark below suggests that this is not a
concern:

Benchmark:
- use a 630M romfs image consisting of 9600 files
- loop-mount the romfs image
$ mount -o loop,ro image.romfs /mnt/romfs
- drop all caches
$ echo 3 > /proc/sys/vm/drop_caches
- list all files in the filesystem (test 3)
$ time find /mnt/romfs > /dev/null

imx6 on emmc, running v5.7.2:
(test 3 1K blocksize) 9.5 seconds
(test 3 4K blocksize) 9 seconds

Intel on Ubuntu, running v4.15:
(test 3 1K blocksize) 1.4 seconds
(test 3 4K blocksize) 1.2 seconds

Practical Solution
------------------
Introduce a mount option called 'largeblocks'. If present,
increase the blocksize for much better sequential read
performance.

Note that the Linux block layer can only support n-K blocks if
the underlying block device length is also aligned to n-K. This
may not always be the case. Therefore, the driver picks the
largest blocksize the underlying block device can support.
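
With the patch applied, the option is passed like any other
mount flag, e.g. for the loop-mounted benchmark image above:

$ mount -o loop,ro,largeblocks image.romfs /mnt/romfs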

Cc: Al Viro <[email protected]>
Cc: Deepa Dinamani <[email protected]>
Cc: David Howells <[email protected]>
Cc: "Darrick J. Wong" <[email protected]>
Cc: Janos Farkas <[email protected]>
Cc: Jeff Layton <[email protected]>
To: [email protected]
Signed-off-by: Sven Van Asbroeck <[email protected]>
---
fs/romfs/super.c | 62 ++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 57 insertions(+), 5 deletions(-)

diff --git a/fs/romfs/super.c b/fs/romfs/super.c
index 6fecdea791f1..93565aeaa43c 100644
--- a/fs/romfs/super.c
+++ b/fs/romfs/super.c
@@ -65,7 +65,7 @@
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/blkdev.h>
-#include <linux/fs_context.h>
+#include <linux/fs_parser.h>
#include <linux/mount.h>
#include <linux/namei.h>
#include <linux/statfs.h>
@@ -460,6 +460,54 @@ static __u32 romfs_checksum(const void *data, int size)
return sum;
}

+enum romfs_param {
+ Opt_largeblocks,
+};
+
+static const struct fs_parameter_spec romfs_fs_parameters[] = {
+ fsparam_flag("largeblocks", Opt_largeblocks),
+ {}
+};
+
+/*
+ * Parse a single mount parameter.
+ */
+static int romfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
+{
+ struct fs_parse_result result;
+ int opt;
+
+ opt = fs_parse(fc, romfs_fs_parameters, param, &result);
+ if (opt < 0)
+ return opt;
+
+ switch (opt) {
+ case Opt_largeblocks:
+ fc->fs_private = (void *) 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/*
+ * pick the largest blocksize which the underlying block device
+ * is a multiple of. Or fall back to legacy (ROMBSIZE).
+ */
+static int romfs_largest_blocksize(struct super_block *sb)
+{
+ loff_t device_sz = i_size_read(sb->s_bdev->bd_inode);
+ int blksz;
+
+ for (blksz = PAGE_SIZE; blksz > ROMBSIZE; blksz >>= 1)
+ if ((device_sz % blksz) == 0)
+ break;
+
+ return blksz;
+}
+
/*
* fill in the superblock
*/
@@ -467,17 +515,19 @@ static int romfs_fill_super(struct super_block *sb, struct fs_context *fc)
{
struct romfs_super_block *rsb;
struct inode *root;
- unsigned long pos, img_size;
+ unsigned long pos, img_size, dev_blocksize;
const char *storage;
size_t len;
int ret;

#ifdef CONFIG_BLOCK
+ dev_blocksize = fc->fs_private ? romfs_largest_blocksize(sb) :
+ ROMBSIZE;
if (!sb->s_mtd) {
- sb_set_blocksize(sb, ROMBSIZE);
+ sb_set_blocksize(sb, dev_blocksize);
} else {
- sb->s_blocksize = ROMBSIZE;
- sb->s_blocksize_bits = blksize_bits(ROMBSIZE);
+ sb->s_blocksize = dev_blocksize;
+ sb->s_blocksize_bits = blksize_bits(dev_blocksize);
}
#endif

@@ -573,6 +623,7 @@ static int romfs_get_tree(struct fs_context *fc)
static const struct fs_context_operations romfs_context_ops = {
.get_tree = romfs_get_tree,
.reconfigure = romfs_reconfigure,
+ .parse_param = romfs_parse_param,
};

/*
@@ -607,6 +658,7 @@ static struct file_system_type romfs_fs_type = {
.owner = THIS_MODULE,
.name = "romfs",
.init_fs_context = romfs_init_fs_context,
+ .parameters = romfs_fs_parameters,
.kill_sb = romfs_kill_sb,
.fs_flags = FS_REQUIRES_DEV,
};
--
2.17.1


2020-06-23 16:47:18

by kernel test robot

Subject: Re: [PATCH v1 2/2] romfs: address performance regression since v3.10

Hi Sven,

I love your patch! Yet something to improve:

[auto build test ERROR on linux/master]
[also build test ERROR on linus/master v5.8-rc2 next-20200623]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Sven-Van-Asbroeck/address-romfs-performance-regression/20200623-085856
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 5e857ce6eae7ca21b2055cca4885545e29228fe2
config: i386-randconfig-r015-20200623 (attached as .config)
compiler: gcc-5 (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010
reproduce (this is a W=1 build):
# save the attached .config to linux build tree
make W=1 ARCH=i386

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

ld: fs/romfs/super.o: in function `romfs_largest_blocksize':
>> fs/romfs/super.c:505: undefined reference to `__moddi3'

vim +505 fs/romfs/super.c

494
495 /*
496 * pick the largest blocksize which the underlying block device
497 * is a multiple of. Or fall back to legacy (ROMBSIZE).
498 */
499 static int romfs_largest_blocksize(struct super_block *sb)
500 {
501 loff_t device_sz = i_size_read(sb->s_bdev->bd_inode);
502 int blksz;
503
504 for (blksz = PAGE_SIZE; blksz > ROMBSIZE; blksz >>= 1)
> 505 if ((device_sz % blksz) == 0)
506 break;
507
508 return blksz;
509 }
510

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]



2020-06-23 17:06:47

by Sven Van Asbroeck

Subject: Re: [PATCH v1 2/2] romfs: address performance regression since v3.10

On Tue, Jun 23, 2020 at 12:17 PM kernel test robot <[email protected]> wrote:
>
> ld: fs/romfs/super.o: in function `romfs_largest_blocksize':
> >> fs/romfs/super.c:505: undefined reference to `__moddi3'
>
> > 505 if ((device_sz % blksz) == 0)

I can change this to device_sz & (blksz-1) if this patch gets
positive feedback.
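
For reference, blksz only ever holds powers of two in that loop,
so the remainder check can be a mask, avoiding the 64-bit modulo
helper (__moddi3) that 32-bit kernels don't provide. A sketch of
the reworked helper (untested):

static int romfs_largest_blocksize(struct super_block *sb)
{
	loff_t device_sz = i_size_read(sb->s_bdev->bd_inode);
	int blksz;

	/* blksz is a power of two, so the '%' in the original
	 * can be written as a mask, keeping __moddi3 out of
	 * the build on 32-bit architectures.
	 */
	for (blksz = PAGE_SIZE; blksz > ROMBSIZE; blksz >>= 1)
		if ((device_sz & (blksz - 1)) == 0)
			break;

	return blksz;
}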

2020-07-09 12:18:11

by Sven Van Asbroeck

Subject: Re: [PATCH v1 2/2] romfs: address performance regression since v3.10

Hello Al,

You are the closest I could find to a romfs maintainer. get_maintainer.pl
doesn't appear to list any.

This attempted performance regression fix didn't generate much
feedback (to say the least). It is, however, a real issue for us:
we support a legacy product where we don't have the luxury of
switching to a better-supported filesystem.

Is there anything I can do to move this forward? Is lkml
currently accepting bug/regression fixes for romfs, or is the
filesystem considered obsolete?

Many thanks,
Sven
