Subject: Re: [PATCH v3] ceph: fix statfs for subdir mounts
From: Jeff Layton
To: Luís Henriques, Xiubo Li, Ilya Dryomov
Cc: ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org, Ryan Taylor
Date: Wed, 27 Apr 2022 12:26:55 -0400
Message-ID: <878e726c3caf36c624d2f43d6b8e6f8a488f97b0.camel@kernel.org>
In-Reply-To: <20220427155704.4758-1-lhenriques@suse.de>
References: <20220427155704.4758-1-lhenriques@suse.de>

On Wed, 2022-04-27 at 16:57 +0100, Luís Henriques wrote:
> When doing a mount using as its base a directory that has a 'max_bytes'
> quota, statfs uses that value as the total; if a subdirectory is used
> instead, the same 'max_bytes' is used in statfs too, unless another
> quota is set.
> 
> Unfortunately, if this subdirectory only has the 'max_files' quota set,
> then statfs uses the filesystem total.  Fix this by making sure we only
> look up realms that contain the 'max_bytes' quota.
> 
> Link: https://tracker.ceph.com/issues/55090
> Cc: Ryan Taylor
> Signed-off-by: Luís Henriques
> ---
> Changes since v2:
> Renamed function __ceph_has_any_quota() to __ceph_has_quota()
> 
> Changes since v1:
> Moved some more logic into __ceph_has_any_quota() function.
> 
>  fs/ceph/inode.c |  2 +-
>  fs/ceph/quota.c | 19 +++++++++++--------
>  fs/ceph/super.h | 28 ++++++++++++++++++++++++----
>  3 files changed, 36 insertions(+), 13 deletions(-)
> 
> diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
> index 5de7bb9048b7..1067209cf6f6 100644
> --- a/fs/ceph/inode.c
> +++ b/fs/ceph/inode.c
> @@ -691,7 +691,7 @@ void ceph_evict_inode(struct inode *inode)
>  
>  	__ceph_remove_caps(ci);
>  
> -	if (__ceph_has_any_quota(ci))
> +	if (__ceph_has_quota(ci, QUOTA_GET_ANY))
>  		ceph_adjust_quota_realms_count(inode, false);
>  
>  	/*
> diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c
> index a338a3ec0dc4..64592adfe48f 100644
> --- a/fs/ceph/quota.c
> +++ b/fs/ceph/quota.c
> @@ -195,9 +195,9 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
>  
>  /*
>   * This function walks through the snaprealm for an inode and returns the
> - * ceph_snap_realm for the first snaprealm that has quotas set (either max_files
> - * or max_bytes).  If the root is reached, return the root ceph_snap_realm
> - * instead.
> + * ceph_snap_realm for the first snaprealm that has quotas set (max_files,
> + * max_bytes, or any, depending on the 'which_quota' argument).  If the root is
> + * reached, return the root ceph_snap_realm instead.
>   *
>   * Note that the caller is responsible for calling ceph_put_snap_realm() on the
>   * returned realm.
> @@ -209,7 +209,9 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
>   * will be restarted.
>   */
>  static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
> -					       struct inode *inode, bool retry)
> +					       struct inode *inode,
> +					       enum quota_get_realm which_quota,
> +					       bool retry)
>  {
>  	struct ceph_inode_info *ci = NULL;
>  	struct ceph_snap_realm *realm, *next;
> @@ -248,7 +250,7 @@ static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
>  	}
>  
>  	ci = ceph_inode(in);
> -	has_quota = __ceph_has_any_quota(ci);
> +	has_quota = __ceph_has_quota(ci, which_quota);
>  	iput(in);
>  
>  	next = realm->parent;
> @@ -279,8 +281,8 @@ bool ceph_quota_is_same_realm(struct inode *old, struct inode *new)
>  	 * dropped and we can then restart the whole operation.
>  	 */
>  	down_read(&mdsc->snap_rwsem);
> -	old_realm = get_quota_realm(mdsc, old, true);
> -	new_realm = get_quota_realm(mdsc, new, false);
> +	old_realm = get_quota_realm(mdsc, old, QUOTA_GET_ANY, true);
> +	new_realm = get_quota_realm(mdsc, new, QUOTA_GET_ANY, false);
>  	if (PTR_ERR(new_realm) == -EAGAIN) {
>  		up_read(&mdsc->snap_rwsem);
>  		if (old_realm)
> @@ -483,7 +485,8 @@ bool ceph_quota_update_statfs(struct ceph_fs_client *fsc, struct kstatfs *buf)
>  	bool is_updated = false;
>  
>  	down_read(&mdsc->snap_rwsem);
> -	realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root), true);
> +	realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root),
> +				QUOTA_GET_MAX_BYTES, true);
>  	up_read(&mdsc->snap_rwsem);
>  	if (!realm)
>  		return false;
> diff --git a/fs/ceph/super.h b/fs/ceph/super.h
> index a2e1c83ab29a..0ecde1c12fee 100644
> --- a/fs/ceph/super.h
> +++ b/fs/ceph/super.h
> @@ -1317,9 +1317,29 @@ extern void ceph_fs_debugfs_init(struct ceph_fs_client *client);
>  extern void ceph_fs_debugfs_cleanup(struct ceph_fs_client *client);
>  
>  /* quota.c */
> -static inline bool __ceph_has_any_quota(struct ceph_inode_info *ci)
> +
> +enum quota_get_realm {
> +	QUOTA_GET_MAX_FILES,
> +	QUOTA_GET_MAX_BYTES,
> +	QUOTA_GET_ANY
> +};
> +
> +static inline bool __ceph_has_quota(struct ceph_inode_info *ci,
> +				    enum quota_get_realm which)
>  {
> -	return ci->i_max_files || ci->i_max_bytes;
> +	bool has_quota = false;
> +
> +	switch (which) {
> +	case QUOTA_GET_MAX_BYTES:
> +		has_quota = !!ci->i_max_bytes;
> +		break;
> +	case QUOTA_GET_MAX_FILES:
> +		has_quota = !!ci->i_max_files;
> +		break;
> +	default:
> +		has_quota = !!(ci->i_max_files || ci->i_max_bytes);
> +	}
> +	return has_quota;
>  }
>  
>  extern void ceph_adjust_quota_realms_count(struct inode *inode, bool inc);
> @@ -1328,10 +1348,10 @@ static inline void __ceph_update_quota(struct ceph_inode_info *ci,
>  				       u64 max_bytes, u64 max_files)
>  {
>  	bool had_quota, has_quota;
> -	had_quota = __ceph_has_any_quota(ci);
> +	had_quota = __ceph_has_quota(ci, QUOTA_GET_ANY);
>  	ci->i_max_bytes = max_bytes;
>  	ci->i_max_files = max_files;
> -	has_quota = __ceph_has_any_quota(ci);
> +	has_quota = __ceph_has_quota(ci, QUOTA_GET_ANY);
>  
>  	if (had_quota != has_quota)
>  		ceph_adjust_quota_realms_count(&ci->vfs_inode, has_quota);

Reviewed-by: Jeff Layton