Date: Mon, 9 Sep 2019 04:56:53 +0100
From: Al Viro
To: Hugh Dickins
Cc: kernel test robot, David Howells, Andrew Morton, LKML,
    linux-fsdevel@vger.kernel.org, lkp@01.org
Subject: Re: [vfs] 8bb3c61baf: vm-scalability.median -23.7% regression
Message-ID: <20190909035653.GF1131@ZenIV.linux.org.uk>
References: <20190903084122.GH15734@shao2-debian>
    <20190908214601.GC1131@ZenIV.linux.org.uk>
    <20190908234722.GE1131@ZenIV.linux.org.uk>

On Sun, Sep 08, 2019 at 08:10:17PM -0700, Hugh Dickins wrote:

Hmm...  FWIW, I'd ended up redoing most of the thing, with hopefully
sane carve-up.  Differences:

* we really care only about three things having been set -
  ->max_blocks, ->max_inodes and ->huge.
  This __set_bit() hack is cute, but asking for trouble (and getting
  it).  Explicit ctx->seen & SHMEM_SEEN_BLOCKS, etc. is cleaner.

*
>  const struct fs_parameter_description shmem_fs_parameters = {
> -	.name		= "shmem",
> +	.name		= "tmpfs",
> 	.specs		= shmem_param_specs,
> 	.enums		= shmem_param_enums,
> };

  Missed that one, will fold.

*
> @@ -3448,9 +3446,9 @@ static void shmem_apply_options(struct s

  The whole "apply" thing is useless - in remount we need to copy
  max_inodes/max_blocks/huge/mpol under the lock after checks, and we
  can do that manually just fine.  Other options (uid/gid/mode) get
  ignored.  There is very little overlap with the fill_super case,
  really.

> -	old = sbinfo->mpol;
> -	sbinfo->mpol = ctx->mpol;
> +	/*
> +	 * Update sbinfo->mpol now while stat_lock is held.
> +	 * Leave shmem_free_fc() to free the old mpol if any.
> +	 */
> +	swap(sbinfo->mpol, ctx->mpol);

  Umm...  Missed that use-after-free due to destructor, TBH (in
  remount, that is).  Fixed (in a slightly different way).

> 	}
> 	if (*rest)
> -		return invalf(fc, "shmem: Invalid size");
> +		goto bad_value;
> 	ctx->max_blocks = DIV_ROUND_UP(size, PAGE_SIZE);
> 	break;

  FWIW, I had those with s/shmem/tmpfs/, no problem with merging like
  that.  Will fold.

[snip]

> 	case Opt_huge:
> -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
> -		if (!has_transparent_hugepage() &&
> -		    result.uint_32 != SHMEM_HUGE_NEVER)
> -			return invalf(fc, "shmem: Huge pages disabled");
> -
> 		ctx->huge = result.uint_32;
> +		if (ctx->huge != SHMEM_HUGE_NEVER &&
> +		    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
> +		      has_transparent_hugepage()))
> +			goto unsupported_parameter;
> 		break;
> -#else
> -		return invalf(fc, "shmem: huge= option disabled");
> -#endif
> -
> -	case Opt_mpol: {
> -#ifdef CONFIG_NUMA
> -		struct mempolicy *mpol;
> -		if (mpol_parse_str(param->string, &mpol))
> -			return invalf(fc, "shmem: Invalid mpol=");
> -		mpol_put(ctx->mpol);
> -		ctx->mpol = mpol;
> -#endif
> -		break;
> -	}

  OK...
> +	case Opt_mpol:
> +		if (IS_ENABLED(CONFIG_NUMA)) {
> +			struct mempolicy *mpol;
> +			if (mpol_parse_str(param->string, &mpol))
> +				goto bad_value;
> +			mpol_put(ctx->mpol);
> +			ctx->mpol = mpol;
> +			break;
> +		}
> +		goto unsupported_parameter;

  Slightly different here - I'd done that bit as

	mpol_put(ctx->mpol);
	ctx->mpol = NULL;
	if (mpol_parse_str(param->string, &ctx->mpol))
		return invalf (goto bad_value now)

> +unsupported_parameter:
> +	return invalf(fc, "tmpfs: Unsupported parameter '%s'", param->key);
> +bad_value:
> +	return invalf(fc, "tmpfs: Bad value for '%s'", param->key);

> -		return invalf(fc, "shmem: Can't retroactively limit nr_blocks");
> +		return invalf(fc, "tmpfs: Cannot retroactively limit size");
> 	}
> 	if (percpu_counter_compare(&sbinfo->used_blocks, ctx->max_blocks) > 0) {
> 		spin_unlock(&sbinfo->stat_lock);
> -		return invalf(fc, "shmem: Too few blocks for current use");
> +		return invalf(fc, "tmpfs: Too small a size for current use");
> 	}
> }
>
> @@ -3587,11 +3591,11 @@ static int shmem_reconfigure(struct fs_c
> 	if (test_bit(Opt_nr_inodes, &ctx->changes)) {
> 		if (ctx->max_inodes && !sbinfo->max_inodes) {
> 			spin_unlock(&sbinfo->stat_lock);
> -			return invalf(fc, "shmem: Can't retroactively limit nr_inodes");
> +			return invalf(fc, "tmpfs: Cannot retroactively limit inodes");
> 		}
> 		if (ctx->max_inodes < inodes_in_use) {
> 			spin_unlock(&sbinfo->stat_lock);
> -			return invalf(fc, "shmem: Too few inodes for current use");
> +			return invalf(fc, "tmpfs: Too few inodes for current use");
> 		}
> 	}

  s/Can't/Cannot/ and s/few blocks/small a size/?  No problem, except
  that I'd done

	err = "Too few inodes for current use";
	goto out;
	...
out:
	return invalf(fc, "tmpfs: %s", err);

Anyway, see vfs.git#uncertain.shmem for what I've got with those folded
in.  Do you see any problems with that one?  That's the last 5 commits
in there...