From: Xiaoming Ni <nixiaoming@huawei.com>
Subject: [PATCH v4 2/2] squashfs: Allows users to configure the number of decompression threads
Date: Fri, 16 Sep 2022 16:36:04 +0800
Message-ID: <20220916083604.33408-3-nixiaoming@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220916083604.33408-1-nixiaoming@huawei.com>
References: <20220902094855.22666-1-nixiaoming@huawei.com>
 <20220916083604.33408-1-nixiaoming@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7BIT
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

The maximum number of decompression threads in decompressor_multi.c is
a fixed compile-time value and cannot be adjusted to the user's needs.
Add a "threads=" mount parameter so that users can configure the number
of decompression threads as required. The upper limit is
num_online_cpus() * 2.
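An illustrative usage sketch (the image file, mount point, and enabled
Kconfig options below are assumptions for the example, not part of the
patch):

  # with CONFIG_SQUASHFS_CHOICE_DECOMP_BY_MOUNT=y: pick the decompressor
  # implementation by name ("image.squashfs" and "/mnt/squashfs" are
  # placeholder paths)
  mount -t squashfs -o threads=percpu image.squashfs /mnt/squashfs

  # with CONFIG_SQUASHFS_MOUNT_DECOMP_THREADS=y: use the multi-threaded
  # decompressor capped at 4 threads (upper limit: num_online_cpus() * 2)
  mount -t squashfs -o threads=4 image.squashfs /mnt/squashfs

The value actually in effect is reported by squashfs_show_options(),
e.g. as ",threads=4" in /proc/mounts.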
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
---
 fs/squashfs/Kconfig              | 16 ++++++++--
 fs/squashfs/decompressor_multi.c |  4 +--
 fs/squashfs/squashfs_fs_sb.h     |  1 +
 fs/squashfs/super.c              | 67 +++++++++++++++++++++++++++++++++-------
 4 files changed, 72 insertions(+), 16 deletions(-)

diff --git a/fs/squashfs/Kconfig b/fs/squashfs/Kconfig
index 218bacdd4298..60fc98bdf421 100644
--- a/fs/squashfs/Kconfig
+++ b/fs/squashfs/Kconfig
@@ -73,11 +73,10 @@ config SQUASHFS_CHOICE_DECOMP_BY_MOUNT
 	select SQUASHFS_DECOMP_SINGLE
 	select SQUASHFS_DECOMP_MULTI
 	select SQUASHFS_DECOMP_MULTI_PERCPU
+	select SQUASHFS_MOUNT_DECOMP_THREADS
 	help
 	  Compile all parallel decompression modes and specify the
 	  decompression mode by setting "threads=" during mount.
-	  threads=<single|multi|percpu>
-
 	  default Decompressor parallelisation is SQUASHFS_DECOMP_SINGLE

 choice
@@ -127,6 +126,19 @@ config SQUASHFS_COMPILE_DECOMP_MULTI_PERCPU
 	  decompression is load-balanced across the cores.
 endchoice

+config SQUASHFS_MOUNT_DECOMP_THREADS
+	bool "Add the mount parameter 'threads=' for squashfs"
+	depends on SQUASHFS
+	depends on SQUASHFS_DECOMP_MULTI
+	default n
+	help
+	  Use threads= to set the decompression parallel mode and the number of threads.
+	  If SQUASHFS_CHOICE_DECOMP_BY_MOUNT=y
+	  threads=<single|multi|percpu|1|2|3|...>
+	  else
+	  threads=<2|3|...>
+	  The upper limit is num_online_cpus() * 2.
+
 config SQUASHFS_XATTR
 	bool "Squashfs XATTR support"
 	depends on SQUASHFS
diff --git a/fs/squashfs/decompressor_multi.c b/fs/squashfs/decompressor_multi.c
index eb25432bd149..416c53eedbd1 100644
--- a/fs/squashfs/decompressor_multi.c
+++ b/fs/squashfs/decompressor_multi.c
@@ -144,7 +144,7 @@ static struct decomp_stream *get_decomp_stream(struct squashfs_sb_info *msblk,
 		 * If there is no available decomp and already full,
 		 * let's wait for releasing decomp from other users.
 		 */
-		if (stream->avail_decomp >= MAX_DECOMPRESSOR)
+		if (stream->avail_decomp >= msblk->max_thread_num)
 			goto wait;

 		/* Let's allocate new decomp */
@@ -160,7 +160,7 @@ static struct decomp_stream *get_decomp_stream(struct squashfs_sb_info *msblk,
 		}

 		stream->avail_decomp++;
-		WARN_ON(stream->avail_decomp > MAX_DECOMPRESSOR);
+		WARN_ON(stream->avail_decomp > msblk->max_thread_num);
 		mutex_unlock(&stream->mutex);
 		break;

diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
index f1e5dad8ae0a..659082e9e51d 100644
--- a/fs/squashfs/squashfs_fs_sb.h
+++ b/fs/squashfs/squashfs_fs_sb.h
@@ -67,5 +67,6 @@ struct squashfs_sb_info {
 	unsigned int				ids;
 	bool					panic_on_errors;
 	const struct squashfs_decompressor_thread_ops *thread_ops;
+	int					max_thread_num;
 };
 #endif
diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
index 2a33ffe85292..cd7b3e530dcf 100644
--- a/fs/squashfs/super.c
+++ b/fs/squashfs/super.c
@@ -53,6 +53,7 @@ enum squashfs_param {
 struct squashfs_mount_opts {
 	enum Opt_errors errors;
 	const struct squashfs_decompressor_thread_ops *thread_ops;
+	int thread_num;
 };

 static const struct constant_table squashfs_param_errors[] = {
@@ -67,7 +68,8 @@ static const struct fs_parameter_spec squashfs_fs_parameters[] = {
 	{}
 };

-static int squashfs_parse_param_threads(const char *str, struct squashfs_mount_opts *opts)
+
+static int squashfs_parse_param_threads_str(const char *str, struct squashfs_mount_opts *opts)
 {
 #ifdef CONFIG_SQUASHFS_CHOICE_DECOMP_BY_MOUNT
 	if (strcmp(str, "single") == 0) {
@@ -86,6 +88,42 @@ static int squashfs_parse_param_threads(const char *str, struct squashfs_mount_o
 	return -EINVAL;
 }

+static int squashfs_parse_param_threads_num(const char *str, struct squashfs_mount_opts *opts)
+{
+#ifdef CONFIG_SQUASHFS_MOUNT_DECOMP_THREADS
+	int ret;
+	unsigned long num;
+
+	ret = kstrtoul(str, 0, &num);
+	if (ret != 0)
+		return -EINVAL;
+	if (num > 1) {
+		opts->thread_ops = &squashfs_decompressor_multi;
+		if (num > opts->thread_ops->max_decompressors())
+			return -EINVAL;
+		opts->thread_num = (int)num;
+		return 0;
+	}
+#ifdef CONFIG_SQUASHFS_DECOMP_SINGLE
+	if (num == 1) {
+		opts->thread_ops = &squashfs_decompressor_single;
+		opts->thread_num = 1;
+		return 0;
+	}
+#endif
+#endif /* !CONFIG_SQUASHFS_MOUNT_DECOMP_THREADS */
+	return -EINVAL;
+}
+
+static int squashfs_parse_param_threads(const char *str, struct squashfs_mount_opts *opts)
+{
+	int ret = squashfs_parse_param_threads_str(str, opts);
+
+	if (ret == 0)
+		return ret;
+	return squashfs_parse_param_threads_num(str, opts);
+}
+
 static int squashfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
 {
 	struct squashfs_mount_opts *opts = fc->fs_private;
@@ -194,6 +232,11 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
 		goto failed_mount;
 	}
 	msblk->thread_ops = opts->thread_ops;
+	if (opts->thread_num == 0) {
+		msblk->max_thread_num = msblk->thread_ops->max_decompressors();
+	} else {
+		msblk->max_thread_num = opts->thread_num;
+	}

 	/* Check the MAJOR & MINOR versions and lookup compression type */
 	msblk->decompressor = supported_squashfs_filesystem(
@@ -279,7 +322,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)

 	/* Allocate read_page block */
 	msblk->read_page = squashfs_cache_init("data",
-			msblk->thread_ops->max_decompressors(), msblk->block_size);
+			msblk->max_thread_num, msblk->block_size);
 	if (msblk->read_page == NULL) {
 		errorf(fc, "Failed to allocate read_page block");
 		goto failed_mount;
@@ -463,18 +506,17 @@ static int squashfs_show_options(struct seq_file *s, struct dentry *root)
 		seq_puts(s, ",errors=continue");

 #ifdef CONFIG_SQUASHFS_CHOICE_DECOMP_BY_MOUNT
-	if (msblk->thread_ops == &squashfs_decompressor_single) {
-		seq_puts(s, ",threads=single");
-		return 0;
-	}
-	if (msblk->thread_ops == &squashfs_decompressor_multi) {
-		seq_puts(s, ",threads=multi");
-		return 0;
-	}
 	if (msblk->thread_ops == &squashfs_decompressor_percpu) {
 		seq_puts(s, ",threads=percpu");
 		return 0;
 	}
+	if (msblk->thread_ops == &squashfs_decompressor_single) {
+		seq_puts(s, ",threads=single");
+		return 0;
+	}
+#endif
+#ifdef CONFIG_SQUASHFS_MOUNT_DECOMP_THREADS
+	seq_printf(s, ",threads=%d", msblk->max_thread_num);
 #endif
 	return 0;
 }
@@ -489,11 +531,12 @@ static int squashfs_init_fs_context(struct fs_context *fc)

 #ifdef CONFIG_SQUASHFS_DECOMP_SINGLE
 	opts->thread_ops = &squashfs_decompressor_single;
-#elif CONFIG_SQUASHFS_DECOMP_MULTI
+#elif defined(CONFIG_SQUASHFS_DECOMP_MULTI)
 	opts->thread_ops = &squashfs_decompressor_multi;
-#elif CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU
+#elif defined(CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU)
 	opts->thread_ops = &squashfs_decompressor_percpu;
 #endif
+	opts->thread_num = 0;
 	fc->fs_private = opts;
 	fc->ops = &squashfs_context_ops;
 	return 0;
-- 
2.12.3