Date: Wed, 27 Apr 2022 16:51:03 +0100
From: Luís Henriques
To: Jeff Layton
Cc: Xiubo Li, Ilya Dryomov, ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org, Ryan Taylor
Subject: Re: [PATCH v2] ceph: fix statfs for subdir mounts
References: <20220427143303.950-1-lhenriques@suse.de> <9beddcc315595751d5fbcb83f73cd94533b62cbd.camel@kernel.org>
In-Reply-To: <9beddcc315595751d5fbcb83f73cd94533b62cbd.camel@kernel.org>

On Wed, Apr 27, 2022 at 11:07:40AM -0400, Jeff Layton wrote:
> On Wed, 2022-04-27 at 15:33 +0100, Luís Henriques wrote:
> > When doing a mount using as its base a directory that has the
> > 'max_bytes' quota set, statfs uses that value as the total; if a
> > subdirectory is used instead, the same 'max_bytes' is used by statfs
> > too, unless there is another quota set on the subdirectory.
> >
> > Unfortunately, if this subdirectory only has the 'max_files' quota set,
> > then statfs uses the filesystem total. Fix this by making sure we only
> > look up realms that have the 'max_bytes' quota set.
> >
> > Link: https://tracker.ceph.com/issues/55090
> > Cc: Ryan Taylor
> > Signed-off-by: Luís Henriques
> > ---
> > As I mentioned in v1, I do *not* think this really fixes the tracker
> > issue above, as the bug reporter never mentioned setting quotas on the
> > subdirectory.
> >
> > Changes since v1:
> > Moved some more logic into the __ceph_has_any_quota() function.
> >
> >  fs/ceph/inode.c |  2 +-
> >  fs/ceph/quota.c | 19 +++++++++++--------
> >  fs/ceph/super.h | 28 ++++++++++++++++++++++++----
> >  3 files changed, 36 insertions(+), 13 deletions(-)
> >
> > diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
> > index 5de7bb9048b7..4b7406d6fbe4 100644
> > --- a/fs/ceph/inode.c
> > +++ b/fs/ceph/inode.c
> > @@ -691,7 +691,7 @@ void ceph_evict_inode(struct inode *inode)
> >
> >  	__ceph_remove_caps(ci);
> >
> > -	if (__ceph_has_any_quota(ci))
> > +	if (__ceph_has_any_quota(ci, QUOTA_GET_ANY))
> >  		ceph_adjust_quota_realms_count(inode, false);
> >
> >  	/*
> > diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c
> > index a338a3ec0dc4..e9f7ca18cdb7 100644
> > --- a/fs/ceph/quota.c
> > +++ b/fs/ceph/quota.c
> > @@ -195,9 +195,9 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
> >
> >  /*
> >   * This function walks through the snaprealm for an inode and returns the
> > - * ceph_snap_realm for the first snaprealm that has quotas set (either max_files
> > - * or max_bytes). If the root is reached, return the root ceph_snap_realm
> > - * instead.
> > + * ceph_snap_realm for the first snaprealm that has quotas set (max_files,
> > + * max_bytes, or any, depending on the 'which_quota' argument). If the root is
> > + * reached, return the root ceph_snap_realm instead.
> >   *
> >   * Note that the caller is responsible for calling ceph_put_snap_realm() on the
> >   * returned realm.
> > @@ -209,7 +209,9 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
> >   * will be restarted.
> >   */
> >  static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
> > -					       struct inode *inode, bool retry)
> > +					       struct inode *inode,
> > +					       enum quota_get_realm which_quota,
> > +					       bool retry)
> >  {
> >  	struct ceph_inode_info *ci = NULL;
> >  	struct ceph_snap_realm *realm, *next;
> > @@ -248,7 +250,7 @@ static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
> >  		}
> >
> >  		ci = ceph_inode(in);
> > -		has_quota = __ceph_has_any_quota(ci);
> > +		has_quota = __ceph_has_any_quota(ci, which_quota);
> >  		iput(in);
> >
> >  		next = realm->parent;
> > @@ -279,8 +281,8 @@ bool ceph_quota_is_same_realm(struct inode *old, struct inode *new)
> >  	 * dropped and we can then restart the whole operation.
> >  	 */
> >  	down_read(&mdsc->snap_rwsem);
> > -	old_realm = get_quota_realm(mdsc, old, true);
> > -	new_realm = get_quota_realm(mdsc, new, false);
> > +	old_realm = get_quota_realm(mdsc, old, QUOTA_GET_ANY, true);
> > +	new_realm = get_quota_realm(mdsc, new, QUOTA_GET_ANY, false);
> >  	if (PTR_ERR(new_realm) == -EAGAIN) {
> >  		up_read(&mdsc->snap_rwsem);
> >  		if (old_realm)
> > @@ -483,7 +485,8 @@ bool ceph_quota_update_statfs(struct ceph_fs_client *fsc, struct kstatfs *buf)
> >  	bool is_updated = false;
> >
> >  	down_read(&mdsc->snap_rwsem);
> > -	realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root), true);
> > +	realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root),
> > +				QUOTA_GET_MAX_BYTES, true);
> >  	up_read(&mdsc->snap_rwsem);
> >  	if (!realm)
> >  		return false;
> > diff --git a/fs/ceph/super.h b/fs/ceph/super.h
> > index a2e1c83ab29a..3cd96720f14a 100644
> > --- a/fs/ceph/super.h
> > +++ b/fs/ceph/super.h
> > @@ -1317,9 +1317,29 @@ extern void ceph_fs_debugfs_init(struct ceph_fs_client *client);
> >  extern void ceph_fs_debugfs_cleanup(struct ceph_fs_client *client);
> >
> >  /* quota.c */
> > -static inline bool __ceph_has_any_quota(struct ceph_inode_info *ci)
> > +
> > +enum quota_get_realm {
> > +	QUOTA_GET_MAX_FILES,
> > +	QUOTA_GET_MAX_BYTES,
> > +	QUOTA_GET_ANY
> > +};
> > +
> > +static inline bool __ceph_has_any_quota(struct ceph_inode_info *ci,
> > +					enum quota_get_realm which)
> >  {
> > -	return ci->i_max_files || ci->i_max_bytes;
> > +	bool has_quota = false;
> > +
> > +	switch (which) {
> > +	case QUOTA_GET_MAX_BYTES:
> > +		has_quota = !!ci->i_max_bytes;
> > +		break;
> > +	case QUOTA_GET_MAX_FILES:
> > +		has_quota = !!ci->i_max_files;
> > +		break;
> > +	default:
> > +		has_quota = !!(ci->i_max_files || ci->i_max_bytes);
> > +	}
> > +	return has_quota;
> >  }
> >
> >  extern void ceph_adjust_quota_realms_count(struct inode *inode, bool inc);
> > @@ -1328,10 +1348,10 @@ static inline void __ceph_update_quota(struct ceph_inode_info *ci,
> >  				       u64 max_bytes, u64 max_files)
> >  {
> >  	bool had_quota, has_quota;
> > -	had_quota = __ceph_has_any_quota(ci);
> > +	had_quota = __ceph_has_any_quota(ci, QUOTA_GET_ANY);
> >  	ci->i_max_bytes = max_bytes;
> >  	ci->i_max_files = max_files;
> > -	has_quota = __ceph_has_any_quota(ci);
> > +	has_quota = __ceph_has_any_quota(ci, QUOTA_GET_ANY);
> >
> >  	if (had_quota != has_quota)
> >  		ceph_adjust_quota_realms_count(&ci->vfs_inode, has_quota);

> Code looks fine. I think Xiubo had suggested renaming the function to
> __ceph_has_quota(), but other than that this looks good.

Doh! Yeah, of course. Let me quickly respin v3 to fix that. Thanks.

> Reviewed-by: Jeff Layton

Cheers,
--
Luís