Subject: Re: [PATCH v5 1/2] tmpfs: Add per-superblock i_ino support
To: Chris Down
Cc: Hugh Dickins, Andrew Morton, Al Viro, Matthew Wilcox, Amir Goldstein,
    Jeff Layton, Johannes Weiner, Tejun Heo
From: "zhengbin (A)"
Date: Mon, 6 Jan 2020 10:03:57 +0800
Message-ID: <4106bf3f-5c99-77a4-717e-10a0ffa6a3fa@huawei.com>
In-Reply-To: <91b4ed6727712cb6d426cf60c740fe2f473f7638.1578225806.git.chris@chrisdown.name>
References: <91b4ed6727712cb6d426cf60c740fe2f473f7638.1578225806.git.chris@chrisdown.name>
On 2020/1/5 20:06, Chris Down wrote:
> get_next_ino has a number of problems:
>
> - It uses and returns a uint, which is susceptible to overflow if a lot
>   of volatile inodes that use get_next_ino are created.
> - It's global, with no specificity per-sb or even per-filesystem. This
>   means it's not that difficult to cause inode number wraparounds on a
>   single device, which can result in having multiple distinct inodes
>   with the same inode number.
>
> This patch adds a per-superblock counter that mitigates the second case.
> This design also allows us to later have a specific i_ino size
> per-device, for example, allowing users to choose whether to use 32- or
> 64-bit inodes for each tmpfs mount. This is implemented in the next
> commit.
>
> Signed-off-by: Chris Down
> Reviewed-by: Amir Goldstein
> Cc: Hugh Dickins
> Cc: Andrew Morton
> Cc: Al Viro
> Cc: Matthew Wilcox
> Cc: Jeff Layton
> Cc: Johannes Weiner
> Cc: Tejun Heo
> Cc: linux-mm@kvack.org
> Cc: linux-fsdevel@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: kernel-team@fb.com
> ---
>  include/linux/shmem_fs.h |  1 +
>  mm/shmem.c               | 30 +++++++++++++++++++++++++++++-
>  2 files changed, 30 insertions(+), 1 deletion(-)
>
> v5: Nothing in code, just resending with the correct linux-mm domain.
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index de8e4b71e3ba..7fac91f490dc 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -35,6 +35,7 @@ struct shmem_sb_info {
>  	unsigned char huge;	    /* Whether to try for hugepages */
>  	kuid_t uid;		    /* Mount uid for root directory */
>  	kgid_t gid;		    /* Mount gid for root directory */
> +	ino_t next_ino;		    /* The next per-sb inode number to use */
>  	struct mempolicy *mpol;     /* default memory policy for mappings */
>  	spinlock_t shrinklist_lock;   /* Protects shrinklist */
>  	struct list_head shrinklist;  /* List of shinkable inodes */
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 8793e8cc1a48..9e97ba972225 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2236,6 +2236,12 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
>  	return 0;
>  }
>
> +/*
> + * shmem_get_inode - reserve, allocate, and initialise a new inode
> + *
> + * If this tmpfs is from kern_mount we use get_next_ino, which is global, since
> + * inum churn there is low and this avoids taking locks.
> + */
>  static struct inode *shmem_get_inode(struct super_block *sb, const struct inode *dir,
>  				     umode_t mode, dev_t dev, unsigned long flags)
>  {
> @@ -2248,7 +2254,28 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
>
>  	inode = new_inode(sb);
>  	if (inode) {
> -		inode->i_ino = get_next_ino();
> +		if (sb->s_flags & SB_KERNMOUNT) {
> +			/*
> +			 * __shmem_file_setup, one of our callers, is lock-free:
> +			 * it doesn't hold stat_lock in shmem_reserve_inode
> +			 * since max_inodes is always 0, and is called from
> +			 * potentially unknown contexts. As such, use the global
> +			 * allocator which doesn't require the per-sb stat_lock.
> +			 */
> +			inode->i_ino = get_next_ino();
> +		} else {
> +			spin_lock(&sbinfo->stat_lock);

Using spin_lock here will affect performance. How about defining

	unsigned long __percpu *last_ino_number;  /* Last inode number */
	atomic64_t shared_last_ino_number;        /* Shared last inode number */

in shmem_sb_info instead? That should perform better.

> +			if (unlikely(sbinfo->next_ino > UINT_MAX)) {
> +				/*
> +				 * Emulate get_next_ino uint wraparound for
> +				 * compatibility
> +				 */
> +				sbinfo->next_ino = 1;
> +			}
> +			inode->i_ino = sbinfo->next_ino++;
> +			spin_unlock(&sbinfo->stat_lock);
> +		}
> +
>  		inode_init_owner(inode, dir, mode);
>  		inode->i_blocks = 0;
>  		inode->i_atime = inode->i_mtime = inode->i_ctime = current_time(inode);
> @@ -3662,6 +3689,7 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc)
>  #else
>  	sb->s_flags |= SB_NOUSER;
>  #endif
> +	sbinfo->next_ino = 1;
>  	sbinfo->max_blocks = ctx->blocks;
>  	sbinfo->free_inodes = sbinfo->max_inodes = ctx->inodes;
>  	sbinfo->uid = ctx->uid;