Message-ID: <35b35315a248188a67ff998cb018286411b0a813.camel@kernel.org>
Subject: Re: [PATCH v5 2/2] NFSD: add rpc_status entry in nfsd debug filesystem
From: Jeff Layton
To: Chuck Lever, Lorenzo Bianconi
Cc: linux-nfs@vger.kernel.org, lorenzo.bianconi@redhat.com, neilb@suse.de
Date: Mon, 07 Aug 2023 10:38:24 -0400

On Mon, 2023-08-07 at 10:25 -0400, Chuck Lever wrote:
> On Fri, Aug 04, 2023 at 07:16:08PM +0200, Lorenzo Bianconi wrote:
> > Introduce rpc_status entry in nfsd debug filesystem in order to dump
> > pending RPC requests debugging information.
> >
> > Link: https://bugzilla.linux-nfs.org/show_bug.cgi?id=366
> > Signed-off-by: Lorenzo Bianconi
>
> Hi Lorenzo, thanks for this new feature. It's been applied to the
> nfsd-next branch (for v6.6). I've played with it a little using:
>
>   # watch cat /proc/fs/nfsd/rpc_status
>
> And it works a lot like a simple "top" command for RPCs. Nice!
>
> Until this work is merged upstream in a few weeks, there is still an
> easy opportunity to refine the information and format of the new
> file, if anyone sees the need.
> The only thing I might think of adding is a comment in line one like
> this:
>
>   # version 1
>
> to make extending the file format easier.

Good idea. I guess we could also add a header to the file after all
too, and just prefix it with '#'. Then any scripting we want to write
will always know that any line with a # is part of the header.

> Thinking aloud, it occurs to me a similar status file for NFSv4
> callback operations would be great to have.

ACK, that would be nice. I don't think there is a handy list of
nfsd4_callback structures though. We'd probably need to add one.

> > ---
> >  fs/nfsd/nfs4proc.c         |   4 +-
> >  fs/nfsd/nfsctl.c           |   9 +++
> >  fs/nfsd/nfsd.h             |   7 ++
> >  fs/nfsd/nfssvc.c           | 140 +++++++++++++++++++++++++++++++++++++
> >  include/linux/sunrpc/svc.h |   1 +
> >  net/sunrpc/svc.c           |   2 +-
> >  6 files changed, 159 insertions(+), 4 deletions(-)
> >
> > diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
> > index f0f318e78630..b7ad3081bc36 100644
> > --- a/fs/nfsd/nfs4proc.c
> > +++ b/fs/nfsd/nfs4proc.c
> > @@ -2497,8 +2497,6 @@ static inline void nfsd4_increment_op_stats(u32 opnum)
> >
> >  static const struct nfsd4_operation nfsd4_ops[];
> >
> > -static const char *nfsd4_op_name(unsigned opnum);
> > -
> >  /*
> >   * Enforce NFSv4.1 COMPOUND ordering rules:
> >   *
> > @@ -3628,7 +3626,7 @@ void warn_on_nonidempotent_op(struct nfsd4_op *op)
> >  	}
> >  }
> >
> > -static const char *nfsd4_op_name(unsigned opnum)
> > +const char *nfsd4_op_name(unsigned opnum)
> >  {
> >  	if (opnum < ARRAY_SIZE(nfsd4_ops))
> >  		return nfsd4_ops[opnum].op_name;
> > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > index 35d2e2cde1eb..d47b98bad96e 100644
> > --- a/fs/nfsd/nfsctl.c
> > +++ b/fs/nfsd/nfsctl.c
> > @@ -47,6 +47,7 @@ enum {
> >  	NFSD_MaxBlkSize,
> >  	NFSD_MaxConnections,
> >  	NFSD_Filecache,
> > +	NFSD_Rpc_Status,
> >  	/*
> >  	 * The below MUST come last.
> >  	 * Otherwise we leave a hole in nfsd_files[]
> >  	 * with !CONFIG_NFSD_V4 and simple_fill_super() goes oops
> > @@ -195,6 +196,13 @@ static inline struct net *netns(struct file *file)
> >  	return file_inode(file)->i_sb->s_fs_info;
> >  }
> >
> > +static const struct file_operations nfsd_rpc_status_operations = {
> > +	.open		= nfsd_rpc_status_open,
> > +	.read		= seq_read,
> > +	.llseek		= seq_lseek,
> > +	.release	= nfsd_pool_stats_release,
> > +};
> > +
> >  /*
> >   * write_unlock_ip - Release all locks used by a client
> >   *
> > @@ -1400,6 +1408,7 @@ static int nfsd_fill_super(struct super_block *sb, struct fs_context *fc)
> >  	[NFSD_RecoveryDir] = {"nfsv4recoverydir", &transaction_ops, S_IWUSR|S_IRUSR},
> >  	[NFSD_V4EndGrace] = {"v4_end_grace", &transaction_ops, S_IWUSR|S_IRUGO},
> >  #endif
> > +	[NFSD_Rpc_Status] = {"rpc_status", &nfsd_rpc_status_operations, S_IRUGO},
> >  	/* last one */ {""}
> >  };
> >
> > diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> > index d88498f8b275..50c82bb42e88 100644
> > --- a/fs/nfsd/nfsd.h
> > +++ b/fs/nfsd/nfsd.h
> > @@ -94,6 +94,7 @@ int nfsd_get_nrthreads(int n, int *, struct net *);
> >  int nfsd_set_nrthreads(int n, int *, struct net *);
> >  int nfsd_pool_stats_open(struct inode *, struct file *);
> >  int nfsd_pool_stats_release(struct inode *, struct file *);
> > +int nfsd_rpc_status_open(struct inode *inode, struct file *file);
> >  void nfsd_shutdown_threads(struct net *net);
> >
> >  void nfsd_put(struct net *net);
> > @@ -506,12 +507,18 @@ extern void nfsd4_ssc_init_umount_work(struct nfsd_net *nn);
> >
> >  extern void nfsd4_init_leases_net(struct nfsd_net *nn);
> >
> > +const char *nfsd4_op_name(unsigned opnum);
> >  #else /* CONFIG_NFSD_V4 */
> >  static inline int nfsd4_is_junction(struct dentry *dentry)
> >  {
> >  	return 0;
> >  }
> >
> > +static inline const char *nfsd4_op_name(unsigned opnum)
> > +{
> > +	return "unknown_operation";
> > +}
> > +
> >  static inline void
> >  nfsd4_init_leases_net(struct nfsd_net *nn) { };
> >
> >  #define register_cld_notifier() 0
> > diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> > index 97830e28c140..5e115dbbe9dc 100644
> > --- a/fs/nfsd/nfssvc.c
> > +++ b/fs/nfsd/nfssvc.c
> > @@ -1057,6 +1057,15 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
> >  	if (!proc->pc_decode(rqstp, &rqstp->rq_arg_stream))
> >  		goto out_decode_err;
> >
> > +	/*
> > +	 * Release rq_status_counter setting it to an odd value after the rpc
> > +	 * request has been properly parsed. rq_status_counter is used to
> > +	 * notify the consumers if the rqstp fields are stable
> > +	 * (rq_status_counter is odd) or not meaningful (rq_status_counter
> > +	 * is even).
> > +	 */
> > +	smp_store_release(&rqstp->rq_status_counter, rqstp->rq_status_counter | 1);
> > +
> >  	rp = NULL;
> >  	switch (nfsd_cache_lookup(rqstp, &rp)) {
> >  	case RC_DOIT:
> > @@ -1074,6 +1083,12 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
> >  	if (!proc->pc_encode(rqstp, &rqstp->rq_res_stream))
> >  		goto out_encode_err;
> >
> > +	/*
> > +	 * Release rq_status_counter setting it to an even value after the rpc
> > +	 * request has been properly processed.
> > +	 */
> > +	smp_store_release(&rqstp->rq_status_counter, rqstp->rq_status_counter + 1);
> > +
> >  	nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, statp + 1);
> >  out_cached_reply:
> >  	return 1;
> > @@ -1149,3 +1164,128 @@ int nfsd_pool_stats_release(struct inode *inode, struct file *file)
> >  	mutex_unlock(&nfsd_mutex);
> >  	return ret;
> >  }
> > +
> > +static int nfsd_rpc_status_show(struct seq_file *m, void *v)
> > +{
> > +	struct inode *inode = file_inode(m->file);
> > +	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
> > +	int i;
> > +
> > +	rcu_read_lock();
> > +
> > +	for (i = 0; i < nn->nfsd_serv->sv_nrpools; i++) {
> > +		struct svc_rqst *rqstp;
> > +
> > +		list_for_each_entry_rcu(rqstp,
> > +				&nn->nfsd_serv->sv_pools[i].sp_all_threads,
> > +				rq_all) {
> > +			struct {
> > +				struct sockaddr daddr;
> > +				struct sockaddr saddr;
> > +				unsigned long rq_flags;
> > +				const char *pc_name;
> > +				ktime_t rq_stime;
> > +				__be32 rq_xid;
> > +				u32 rq_prog;
> > +				u32 rq_vers;
> > +				/* NFSv4 compound */
> > +				u32 opnum[NFSD_MAX_OPS_PER_COMPOUND];
> > +				u8 opcnt;
> > +			} rqstp_info;
> > +			unsigned int status_counter;
> > +			char buf[RPC_MAX_ADDRBUFLEN];
> > +			int j;
> > +
> > +			/*
> > +			 * Acquire rq_status_counter before parsing the rqst
> > +			 * fields. rq_status_counter is set to an odd value in
> > +			 * order to notify the consumers the rqstp fields are
> > +			 * meaningful.
> > +			 */
> > +			status_counter = smp_load_acquire(&rqstp->rq_status_counter);
> > +			if (!(status_counter & 1))
> > +				continue;
> > +
> > +			rqstp_info.rq_xid = rqstp->rq_xid;
> > +			rqstp_info.rq_flags = rqstp->rq_flags;
> > +			rqstp_info.rq_prog = rqstp->rq_prog;
> > +			rqstp_info.rq_vers = rqstp->rq_vers;
> > +			rqstp_info.pc_name = svc_proc_name(rqstp);
> > +			rqstp_info.rq_stime = rqstp->rq_stime;
> > +			rqstp_info.opcnt = 0;
> > +			memcpy(&rqstp_info.daddr, svc_daddr(rqstp),
> > +			       sizeof(struct sockaddr));
> > +			memcpy(&rqstp_info.saddr, svc_addr(rqstp),
> > +			       sizeof(struct sockaddr));
> > +
> > +#ifdef CONFIG_NFSD_V4
> > +			if (rqstp->rq_vers == NFS4_VERSION &&
> > +			    rqstp->rq_proc == NFSPROC4_COMPOUND) {
> > +				/* NFSv4 compound */
> > +				struct nfsd4_compoundargs *args = rqstp->rq_argp;
> > +
> > +				rqstp_info.opcnt = args->opcnt;
> > +				for (j = 0; j < rqstp_info.opcnt; j++) {
> > +					struct nfsd4_op *op = &args->ops[j];
> > +
> > +					rqstp_info.opnum[j] = op->opnum;
> > +				}
> > +			}
> > +#endif /* CONFIG_NFSD_V4 */
> > +
> > +			/*
> > +			 * Acquire rq_status_counter before reporting the rqst
> > +			 * fields to the user.
> > +			 */
> > +			if (smp_load_acquire(&rqstp->rq_status_counter) != status_counter)
> > +				continue;
> > +
> > +			seq_printf(m,
> > +				   "0x%08x 0x%08lx 0x%08x NFSv%d %s %016lld",
> > +				   be32_to_cpu(rqstp_info.rq_xid),
> > +				   rqstp_info.rq_flags,
> > +				   rqstp_info.rq_prog,
> > +				   rqstp_info.rq_vers,
> > +				   rqstp_info.pc_name,
> > +				   ktime_to_us(rqstp_info.rq_stime));
> > +			seq_printf(m, " %s",
> > +				   __svc_print_addr(&rqstp_info.saddr, buf,
> > +						    sizeof(buf), false));
> > +			seq_printf(m, " %s",
> > +				   __svc_print_addr(&rqstp_info.daddr, buf,
> > +						    sizeof(buf), false));
> > +			for (j = 0; j < rqstp_info.opcnt; j++)
> > +				seq_printf(m, " %s",
> > +					   nfsd4_op_name(rqstp_info.opnum[j]));
> > +			seq_puts(m, "\n");
> > +		}
> > +	}
> > +
> > +	rcu_read_unlock();
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * nfsd_rpc_status_open - open routine for nfsd_rpc_status handler
> > + * @inode: entry inode pointer.
> > + * @file: entry file pointer.
> > + *
> > + * nfsd_rpc_status_open is the open routine for nfsd_rpc_status procfs handler.
> > + * nfsd_rpc_status dumps pending RPC requests info queued into nfs server.
> > + */
> > +int nfsd_rpc_status_open(struct inode *inode, struct file *file)
> > +{
> > +	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
> > +
> > +	mutex_lock(&nfsd_mutex);
> > +	if (!nn->nfsd_serv) {
> > +		mutex_unlock(&nfsd_mutex);
> > +		return -ENODEV;
> > +	}
> > +
> > +	svc_get(nn->nfsd_serv);
> > +	mutex_unlock(&nfsd_mutex);
> > +
> > +	return single_open(file, nfsd_rpc_status_show, inode->i_private);
> > +}
> > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> > index fe1394cc1371..542a60b78bab 100644
> > --- a/include/linux/sunrpc/svc.h
> > +++ b/include/linux/sunrpc/svc.h
> > @@ -270,6 +270,7 @@ struct svc_rqst {
> >  					 * net namespace
> >  					 */
> >  	void **			rq_lease_breaker; /* The v4 client breaking a lease */
> > +	unsigned int		rq_status_counter; /* RPC processing counter */
> >  };
> >
> >  #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)
> > diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> > index 587811a002c9..44eac83b35a1 100644
> > --- a/net/sunrpc/svc.c
> > +++ b/net/sunrpc/svc.c
> > @@ -1629,7 +1629,7 @@ const char *svc_proc_name(const struct svc_rqst *rqstp)
> >  		return rqstp->rq_procinfo->pc_name;
> >  	return "unknown";
> >  }
> > -
> > +EXPORT_SYMBOL_GPL(svc_proc_name);
> >
> >  /**
> >   * svc_encode_result_payload - mark a range of bytes as a result payload
> > --
> > 2.41.0
> >
>

--
Jeff Layton
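P.S. A minimal sketch of the kind of '#'-tolerant scripting discussed
above. Assumptions: a "# version 1"-style header line may appear in the
future, and the procedure name is the fifth whitespace-separated field
(per the seq_printf format in this patch); neither is guaranteed to be
the final rpc_status layout.

```shell
#!/bin/sh
# Count in-flight RPCs per procedure name from an rpc_status-style
# file, skipping any '#'-prefixed header/comment lines so the script
# keeps working if a header is added later.
count_by_proc() {
	grep -v '^#' "$1" | awk '{ print $5 }' | sort | uniq -c
}

# Only query the live file when nfsd is mounted and running.
[ -r /proc/fs/nfsd/rpc_status ] && count_by_proc /proc/fs/nfsd/rpc_status || true
```

Pairing this with watch(1), as in the `watch cat` example earlier in the
thread, gives a rough per-procedure "top" view.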