Date: Wed, 30 Oct 2019 16:33:37 -0400
From: Bruce Fields
To: Simo Sorce
Cc: Chuck Lever, Linux NFS Mailing List
Subject: Re: [PATCH v1 2/2] SUNRPC: Fix svcauth_gss_proxy_init()
Message-ID: <20191030203337.GA13537@fieldses.org>
References: <20191024133410.2148.3456.stgit@klimt.1015granger.net>
 <20191024133416.2148.96218.stgit@klimt.1015granger.net>
 <1DD83F8D-10A7-4273-A53F-3AB858EBE2D1@oracle.com>
 <59d0188a2195f7f8b2416179a7cc8813a147939f.camel@redhat.com>
In-Reply-To: <59d0188a2195f7f8b2416179a7cc8813a147939f.camel@redhat.com>
X-Mailing-List: linux-nfs@vger.kernel.org

On Thu, Oct 24, 2019 at 10:02:27AM -0400, Simo Sorce wrote:
> On Thu, 2019-10-24 at 09:35 -0400, Chuck Lever wrote:
> > Whoops, was going to Cc: Simo on this one...
> 
> Doesn't look like it is changing any behavior wrt GSS, so fine with
> me.

Thanks, Simo. I've added a Reviewed-by: Simo Sorce to this one.

--b.

> 
> Simo.
> 
> > On Oct 24, 2019, at 9:34 AM, Chuck Lever wrote:
> > 
> > > gss_read_proxy_verf() assumes things about the XDR buffer containing
> > > the RPC Call that are not true for buffers generated by
> > > svc_rdma_recv().
> > > 
> > > RDMA's buffers look more like what the upper layer generates for
> > > sending: head is a kmalloc'd buffer; it does not point to a page
> > > whose contents are contiguous with the first page in the buffers'
> > > page array. The result is that ACCEPT_SEC_CONTEXT via RPC/RDMA has
> > > stopped working on Linux NFS servers that use gssproxy.
> > > 
> > > This does not affect clients that use only TCP to send their
> > > ACCEPT_SEC_CONTEXT operation (that's all Linux clients). Other
> > > clients, like Solaris NFS clients, send ACCEPT_SEC_CONTEXT on the
> > > same transport as they send all other NFS operations. Such clients
> > > can send ACCEPT_SEC_CONTEXT via RPC/RDMA.
> > > 
> > > I thought I had found every direct reference in the server RPC code
> > > to the rqstp->rq_pages field.
> > > 
> > > Bug found at the 2019 Westford NFS bake-a-thon.
> > > 
> > > Fixes: 3316f0631139 ("svcrdma: Persistently allocate and DMA- ... ")
> > > Signed-off-by: Chuck Lever
> > > Tested-by: Bill Baker
> > > ---
> > >  net/sunrpc/auth_gss/svcauth_gss.c |   84 ++++++++++++++++++++++++++++---------
> > >  1 file changed, 63 insertions(+), 21 deletions(-)
> > > 
> > > diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
> > > index f130990..c62d1f1 100644
> > > --- a/net/sunrpc/auth_gss/svcauth_gss.c
> > > +++ b/net/sunrpc/auth_gss/svcauth_gss.c
> > > @@ -1078,24 +1078,32 @@ struct gss_svc_data {
> > >          return 0;
> > >  }
> > >  
> > > -/* Ok this is really heavily depending on a set of semantics in
> > > - * how rqstp is set up by svc_recv and pages laid down by the
> > > - * server when reading a request. We are basically guaranteed that
> > > - * the token lays all down linearly across a set of pages, starting
> > > - * at iov_base in rq_arg.head[0] which happens to be the first of a
> > > - * set of pages stored in rq_pages[].
> > > - * rq_arg.head[0].iov_base will provide us the page_base to pass
> > > - * to the upcall.
> > > - */
> > > -static inline int
> > > -gss_read_proxy_verf(struct svc_rqst *rqstp,
> > > -                    struct rpc_gss_wire_cred *gc, __be32 *authp,
> > > -                    struct xdr_netobj *in_handle,
> > > -                    struct gssp_in_token *in_token)
> > > +static void gss_free_in_token_pages(struct gssp_in_token *in_token)
> > >  {
> > > -        struct kvec *argv = &rqstp->rq_arg.head[0];
> > >          u32 inlen;
> > > -        int res;
> > > +        int i;
> > > +
> > > +        i = 0;
> > > +        inlen = in_token->page_len;
> > > +        while (inlen) {
> > > +                if (in_token->pages[i])
> > > +                        put_page(in_token->pages[i]);
> > > +                inlen -= inlen > PAGE_SIZE ? PAGE_SIZE : inlen;
> > > +        }
> > > +
> > > +        kfree(in_token->pages);
> > > +        in_token->pages = NULL;
> > > +}
> > > +
> > > +static int gss_read_proxy_verf(struct svc_rqst *rqstp,
> > > +                               struct rpc_gss_wire_cred *gc, __be32 *authp,
> > > +                               struct xdr_netobj *in_handle,
> > > +                               struct gssp_in_token *in_token)
> > > +{
> > > +        struct kvec *argv = &rqstp->rq_arg.head[0];
> > > +        unsigned int page_base, length;
> > > +        int pages, i, res;
> > > +        size_t inlen;
> > >  
> > >          res = gss_read_common_verf(gc, argv, authp, in_handle);
> > >          if (res)
> > > @@ -1105,10 +1113,36 @@ struct gss_svc_data {
> > >          if (inlen > (argv->iov_len + rqstp->rq_arg.page_len))
> > >                  return SVC_DENIED;
> > >  
> > > -        in_token->pages = rqstp->rq_pages;
> > > -        in_token->page_base = (ulong)argv->iov_base & ~PAGE_MASK;
> > > +        pages = DIV_ROUND_UP(inlen, PAGE_SIZE);
> > > +        in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL);
> > > +        if (!in_token->pages)
> > > +                return SVC_DENIED;
> > > +        in_token->page_base = 0;
> > >          in_token->page_len = inlen;
> > > +        for (i = 0; i < pages; i++) {
> > > +                in_token->pages[i] = alloc_page(GFP_KERNEL);
> > > +                if (!in_token->pages[i]) {
> > > +                        gss_free_in_token_pages(in_token);
> > > +                        return SVC_DENIED;
> > > +                }
> > > +        }
> > >  
> > > +        length = min_t(unsigned int, inlen, argv->iov_len);
> > > +        memcpy(page_address(in_token->pages[0]), argv->iov_base, length);
> > > +        inlen -= length;
> > > +
> > > +        i = 1;
> > > +        page_base = rqstp->rq_arg.page_base;
> > > +        while (inlen) {
> > > +                length = min_t(unsigned int, inlen, PAGE_SIZE);
> > > +                memcpy(page_address(in_token->pages[i]),
> > > +                       page_address(rqstp->rq_arg.pages[i]) + page_base,
> > > +                       length);
> > > +
> > > +                inlen -= length;
> > > +                page_base = 0;
> > > +                i++;
> > > +        }
> > >          return 0;
> > >  }
> > >  
> > > @@ -1282,8 +1316,11 @@ static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
> > >                  break;
> > >          case GSS_S_COMPLETE:
> > >                  status = gss_proxy_save_rsc(sn->rsc_cache, &ud, &handle);
> > > -                if (status)
> > > +                if (status) {
> > > +                        pr_info("%s: gss_proxy_save_rsc failed (%d)\n",
> > > +                                __func__, status);
> > >                          goto out;
> > > +                }
> > >                  cli_handle.data = (u8 *)&handle;
> > >                  cli_handle.len = sizeof(handle);
> > >                  break;
> > > @@ -1294,15 +1331,20 @@ static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
> > >  
> > >          /* Got an answer to the upcall; use it: */
> > >          if (gss_write_init_verf(sn->rsc_cache, rqstp,
> > > -                                &cli_handle, &ud.major_status))
> > > +                                &cli_handle, &ud.major_status)) {
> > > +                pr_info("%s: gss_write_init_verf failed\n", __func__);
> > >                  goto out;
> > > +        }
> > >          if (gss_write_resv(resv, PAGE_SIZE,
> > >                             &cli_handle, &ud.out_token,
> > > -                           ud.major_status, ud.minor_status))
> > > +                           ud.major_status, ud.minor_status)) {
> > > +                pr_info("%s: gss_write_resv failed\n", __func__);
> > >                  goto out;
> > > +        }
> > >  
> > >          ret = SVC_COMPLETE;
> > >  out:
> > > +        gss_free_in_token_pages(&ud.in_token);
> > >          gssp_free_upcall_data(&ud);
> > >          return ret;
> > >  }
> > > 
> > 
> > -- 
> > Chuck Lever
> 
> -- 
> Simo Sorce
> RHEL Crypto Team
> Red Hat, Inc
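
For readers outside the kernel tree, here is a minimal user-space sketch of the copy strategy the patch adopts: the GSS token may begin in a separately allocated "head" buffer that is not contiguous with the page array, so it is copied out into fresh storage before being handed to the upcall. Everything here is an illustrative assumption, not kernel API: linearize_token, FAKE_PAGE_SIZE, the two-part input layout, and the test harness are all invented for this sketch, and it linearizes into one flat malloc'd buffer, whereas the patch fills an array of newly allocated pages (which gss_free_in_token_pages() later releases) because gssp_in_token carries pages rather than a flat buffer.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FAKE_PAGE_SIZE 4096 /* stand-in for the kernel's PAGE_SIZE */

/*
 * Copy a token that starts in a separate "head" buffer and continues
 * across an array of fixed-size pages into one contiguous allocation,
 * mirroring the head-first, then page-by-page loop in the patch.
 * Returns a malloc'd buffer of total_len bytes, or NULL on failure.
 */
static char *linearize_token(const char *head, size_t head_len,
                             char *const *arg_pages, size_t total_len)
{
        char *buf = malloc(total_len);
        size_t copied, len, i;

        if (buf == NULL)
                return NULL;

        /* First, as much of the token as lives in the head buffer. */
        len = total_len < head_len ? total_len : head_len;
        memcpy(buf, head, len);
        copied = len;

        /* Then the remainder, one source page at a time. */
        for (i = 0; copied < total_len; i++) {
                len = total_len - copied;
                if (len > FAKE_PAGE_SIZE)
                        len = FAKE_PAGE_SIZE;
                memcpy(buf + copied, arg_pages[i], len);
                copied += len;
        }
        return buf;
}

int main(void)
{
        /* Fake token: 10 bytes in the head, 6000 more across two pages. */
        char head[10], page0[FAKE_PAGE_SIZE], page1[FAKE_PAGE_SIZE];
        char *pages[] = { page0, page1 };
        size_t total = sizeof(head) + 6000;

        memset(head, 'h', sizeof(head));
        memset(page0, 'a', sizeof(page0));
        memset(page1, 'b', sizeof(page1));

        char *flat = linearize_token(head, sizeof(head), pages, total);
        if (flat == NULL)
                return 1;

        /* Spot-check the seams: head -> page0 -> page1 prints "h a b". */
        printf("%c %c %c\n", flat[0], flat[sizeof(head)],
               flat[sizeof(head) + FAKE_PAGE_SIZE]);
        free(flat);
        return 0;
}

The point of the copy, in both the sketch and the patch, is that the consumer no longer needs any assumption about how the transport laid the request out in memory; that is what lets the same gssproxy upcall path serve both TCP and RDMA receive buffers.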