Message-ID: <59d0188a2195f7f8b2416179a7cc8813a147939f.camel@redhat.com>
Subject: Re: [PATCH v1 2/2] SUNRPC: Fix svcauth_gss_proxy_init()
From: Simo Sorce
To: Chuck Lever
Cc: Bruce Fields, Linux NFS Mailing List
Date: Thu, 24 Oct 2019 10:02:27 -0400
In-Reply-To: <1DD83F8D-10A7-4273-A53F-3AB858EBE2D1@oracle.com>
References: <20191024133410.2148.3456.stgit@klimt.1015granger.net>
 <20191024133416.2148.96218.stgit@klimt.1015granger.net>
 <1DD83F8D-10A7-4273-A53F-3AB858EBE2D1@oracle.com>
Organization: Red Hat, Inc.
X-Mailing-List: linux-nfs@vger.kernel.org

On Thu, 2019-10-24 at 09:35 -0400, Chuck Lever wrote:
> Whoops, was going to Cc: Simo on this one...

Doesn't look like it changes any behavior wrt GSS, so it's fine with me.

Simo.

> > On Oct 24, 2019, at 9:34 AM, Chuck Lever wrote:
> > 
> > gss_read_proxy_verf() assumes things about the XDR buffer containing
> > the RPC Call that are not true for buffers generated by
> > svc_rdma_recv().
> > 
> > RDMA's buffers look more like what the upper layer generates for
> > sending: head is a kmalloc'd buffer; it does not point to a page
> > whose contents are contiguous with the first page in the buffers'
> > page array. The result is that ACCEPT_SEC_CONTEXT via RPC/RDMA has
> > stopped working on Linux NFS servers that use gssproxy.
> > 
> > This does not affect clients that use only TCP to send their
> > ACCEPT_SEC_CONTEXT operation (that's all Linux clients). Other
> > clients, like Solaris NFS clients, send ACCEPT_SEC_CONTEXT on the
> > same transport as they send all other NFS operations. Such clients
> > can send ACCEPT_SEC_CONTEXT via RPC/RDMA.
> > 
> > I thought I had found every direct reference in the server RPC code
> > to the rqstp->rq_pages field.
> > 
> > Bug found at the 2019 Westford NFS bake-a-thon.
> > 
> > Fixes: 3316f0631139 ("svcrdma: Persistently allocate and DMA- ... ")
> > Signed-off-by: Chuck Lever
> > Tested-by: Bill Baker
> > ---
> > net/sunrpc/auth_gss/svcauth_gss.c | 84 ++++++++++++++++++++++++++++---------
> > 1 file changed, 63 insertions(+), 21 deletions(-)
> > 
> > diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
> > index f130990..c62d1f1 100644
> > --- a/net/sunrpc/auth_gss/svcauth_gss.c
> > +++ b/net/sunrpc/auth_gss/svcauth_gss.c
> > @@ -1078,24 +1078,32 @@ struct gss_svc_data {
> >  	return 0;
> >  }
> > 
> > -/* Ok this is really heavily depending on a set of semantics in
> > - * how rqstp is set up by svc_recv and pages laid down by the
> > - * server when reading a request. We are basically guaranteed that
> > - * the token lays all down linearly across a set of pages, starting
> > - * at iov_base in rq_arg.head[0] which happens to be the first of a
> > - * set of pages stored in rq_pages[].
> > - * rq_arg.head[0].iov_base will provide us the page_base to pass
> > - * to the upcall.
> > - */
> > -static inline int
> > -gss_read_proxy_verf(struct svc_rqst *rqstp,
> > -		    struct rpc_gss_wire_cred *gc, __be32 *authp,
> > -		    struct xdr_netobj *in_handle,
> > -		    struct gssp_in_token *in_token)
> > +static void gss_free_in_token_pages(struct gssp_in_token *in_token)
> >  {
> > -	struct kvec *argv = &rqstp->rq_arg.head[0];
> >  	u32 inlen;
> > -	int res;
> > +	int i;
> > +
> > +	i = 0;
> > +	inlen = in_token->page_len;
> > +	while (inlen) {
> > +		if (in_token->pages[i])
> > +			put_page(in_token->pages[i]);
> > +		inlen -= inlen > PAGE_SIZE ? PAGE_SIZE : inlen;
> > +	}
> > +
> > +	kfree(in_token->pages);
> > +	in_token->pages = NULL;
> > +}
> > +
> > +static int gss_read_proxy_verf(struct svc_rqst *rqstp,
> > +			       struct rpc_gss_wire_cred *gc, __be32 *authp,
> > +			       struct xdr_netobj *in_handle,
> > +			       struct gssp_in_token *in_token)
> > +{
> > +	struct kvec *argv = &rqstp->rq_arg.head[0];
> > +	unsigned int page_base, length;
> > +	int pages, i, res;
> > +	size_t inlen;
> > 
> >  	res = gss_read_common_verf(gc, argv, authp, in_handle);
> >  	if (res)
> > @@ -1105,10 +1113,36 @@ struct gss_svc_data {
> >  	if (inlen > (argv->iov_len + rqstp->rq_arg.page_len))
> >  		return SVC_DENIED;
> > 
> > -	in_token->pages = rqstp->rq_pages;
> > -	in_token->page_base = (ulong)argv->iov_base & ~PAGE_MASK;
> > +	pages = DIV_ROUND_UP(inlen, PAGE_SIZE);
> > +	in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL);
> > +	if (!in_token->pages)
> > +		return SVC_DENIED;
> > +	in_token->page_base = 0;
> >  	in_token->page_len = inlen;
> > +	for (i = 0; i < pages; i++) {
> > +		in_token->pages[i] = alloc_page(GFP_KERNEL);
> > +		if (!in_token->pages[i]) {
> > +			gss_free_in_token_pages(in_token);
> > +			return SVC_DENIED;
> > +		}
> > +	}
> > 
> > +	length = min_t(unsigned int, inlen, argv->iov_len);
> > +	memcpy(page_address(in_token->pages[0]), argv->iov_base, length);
> > +	inlen -= length;
> > +
> > +	i = 1;
> > +	page_base = rqstp->rq_arg.page_base;
> > +	while (inlen) {
> > +		length = min_t(unsigned int, inlen, PAGE_SIZE);
> > +		memcpy(page_address(in_token->pages[i]),
> > +		       page_address(rqstp->rq_arg.pages[i]) + page_base,
> > +		       length);
> > +
> > +		inlen -= length;
> > +		page_base = 0;
> > +		i++;
> > +	}
> >  	return 0;
> >  }
> > 
> > @@ -1282,8 +1316,11 @@ static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
> >  		break;
> >  	case GSS_S_COMPLETE:
> >  		status = gss_proxy_save_rsc(sn->rsc_cache, &ud, &handle);
> > -		if (status)
> > +		if (status) {
> > +			pr_info("%s: gss_proxy_save_rsc failed (%d)\n",
> > +				__func__, status);
> >  			goto out;
> > +		}
> >  		cli_handle.data = (u8 *)&handle;
> >  		cli_handle.len = sizeof(handle);
> >  		break;
> > @@ -1294,15 +1331,20 @@ static int svcauth_gss_proxy_init(struct svc_rqst *rqstp,
> > 
> >  	/* Got an answer to the upcall; use it: */
> >  	if (gss_write_init_verf(sn->rsc_cache, rqstp,
> > -				&cli_handle, &ud.major_status))
> > +				&cli_handle, &ud.major_status)) {
> > +		pr_info("%s: gss_write_init_verf failed\n", __func__);
> >  		goto out;
> > +	}
> >  	if (gss_write_resv(resv, PAGE_SIZE,
> >  			   &cli_handle, &ud.out_token,
> > -			   ud.major_status, ud.minor_status))
> > +			   ud.major_status, ud.minor_status)) {
> > +		pr_info("%s: gss_write_resv failed\n", __func__);
> >  		goto out;
> > +	}
> > 
> >  	ret = SVC_COMPLETE;
> > out:
> > +	gss_free_in_token_pages(&ud.in_token);
> >  	gssp_free_upcall_data(&ud);
> >  	return ret;
> >  }
> > 
> 
> -- 
> Chuck Lever
> 
> 
> 
-- 
Simo Sorce
RHEL Crypto Team
Red Hat, Inc
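
For anyone who wants to poke at the copy scheme the patch describes without
building a kernel, below is a small userspace sketch of the same idea: the
token starts in a separately allocated head buffer and continues in a page
array, and it is gathered into one contiguous region before being handed off.
FAKE_PAGE_SIZE, struct fake_rq_arg, and gather_token() are made-up stand-ins
for the kernel's rq_arg layout, and the sketch flattens the token into a
single malloc'd buffer rather than an array of pages, so treat it only as an
illustration of the head-then-pages copy order, not as the kernel code.

/*
 * Userspace illustration only: FAKE_PAGE_SIZE and struct fake_rq_arg
 * are stand-ins for the kernel's rq_arg layout, not real kernel types.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FAKE_PAGE_SIZE 4096

struct fake_rq_arg {
	const char *head;	/* head buffer (think rq_arg.head[0].iov_base) */
	size_t head_len;	/* bytes of token held in the head buffer */
	const char **pages;	/* remaining token bytes (think rq_arg.pages[]) */
	size_t page_base;	/* offset of the token within pages[0] */
};

/*
 * Gather inlen token bytes into one contiguous buffer: the first chunk
 * comes from the head buffer, the rest from the page array one page at
 * a time, honoring page_base only for the first page.  Returns a
 * malloc'd buffer the caller must free, or NULL on allocation failure.
 */
static char *gather_token(const struct fake_rq_arg *arg, size_t inlen)
{
	char *out = malloc(inlen ? inlen : 1);
	size_t copied, length, page_base, i;

	if (!out)
		return NULL;

	/* First chunk: as much of the token as the head buffer holds. */
	length = inlen < arg->head_len ? inlen : arg->head_len;
	memcpy(out, arg->head, length);
	copied = length;

	/* Remaining chunks: walk the page array. */
	i = 0;
	page_base = arg->page_base;
	while (copied < inlen) {
		length = inlen - copied;
		if (length > FAKE_PAGE_SIZE - page_base)
			length = FAKE_PAGE_SIZE - page_base;
		memcpy(out + copied, arg->pages[i] + page_base, length);
		copied += length;
		page_base = 0;	/* only the first page has an offset */
		i++;
	}
	return out;
}

int main(void)
{
	/* Token split as: 5 bytes in the head, 6 more in one "page". */
	char page0[FAKE_PAGE_SIZE] = "-world";
	const char *pages[] = { page0 };
	struct fake_rq_arg arg = {
		.head = "hello", .head_len = 5,
		.pages = pages, .page_base = 0,
	};
	char *tok = gather_token(&arg, 11);

	if (tok)
		printf("%.*s\n", 11, tok);	/* prints "hello-world" */
	free(tok);
	return 0;
}

The sketch makes the same point the commit message does: nothing about the
head buffer's address tells you where the page array lives, so the token has
to be copied into a layout the upcall can walk linearly rather than referenced
in place.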