From: David Howells
Organization: Red Hat UK Ltd.
To: Chuck Lever III
Cc: dhowells@redhat.com, Matthew Wilcox, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Al Viro, Christoph Hellwig, Jens Axboe,
    Jeff Layton, Christian Brauner, Linus Torvalds,
    "open list:NETWORKING [GENERAL]", linux-fsdevel,
    Linux Kernel Mailing List, Linux Memory Management List,
    Trond Myklebust, Anna Schumaker, Linux NFS Mailing List
Subject: Re: [RFC PATCH v2 40/48] sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage
Date: Thu, 30 Mar 2023 14:01:25 +0100
Message-ID: <812034.1680181285@warthog.procyon.org.uk>
In-Reply-To: <6F2985FF-2474-4F36-BD94-5F8E97E46AC2@oracle.com>
References: <6F2985FF-2474-4F36-BD94-5F8E97E46AC2@oracle.com>
    <20230329141354.516864-1-dhowells@redhat.com>
    <20230329141354.516864-41-dhowells@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Chuck Lever III wrote:

> Simply replacing the kernel_sendpage() loop would be a
> straightforward change and easy to evaluate and test, and
> I'd welcome that without hesitation.

How about the attached for a first phase?  It does three sendmsgs: one for
the marker + header, one for the body and one for the tail.

David
---
sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage

When transmitting data, call down into TCP using sendmsg with
MSG_SPLICE_PAGES to indicate that content should be spliced rather than
performing sendpage calls to transmit header, data pages and trailer.

The marker and the header are passed in an array of kvecs.  The marker will
get copied and the header will get spliced.

Signed-off-by: David Howells
cc: Trond Myklebust
cc: Anna Schumaker
cc: Chuck Lever
cc: Jeff Layton
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-nfs@vger.kernel.org
cc: netdev@vger.kernel.org
---
 include/linux/sunrpc/svc.h | 11 +++---
 net/sunrpc/svcsock.c       | 75 ++++++++++++++--------------------
 2 files changed, 29 insertions(+), 57 deletions(-)

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 877891536c2f..456ae554aa11 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv)
 extern u32 svc_max_payload(const struct svc_rqst *rqstp);
 
 /*
- * RPC Requsts and replies are stored in one or more pages.
+ * RPC Requests and replies are stored in one or more pages.
  * We maintain an array of pages for each server thread.
  * Requests are copied into these pages as they arrive.  Remaining
  * pages are available to write the reply into.
  *
- * Pages are sent using ->sendpage so each server thread needs to
- * allocate more to replace those used in sending.  To help keep track
- * of these pages we have a receive list where all pages initialy live,
- * and a send list where pages are moved to when there are to be part
- * of a reply.
+ * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread
+ * needs to allocate more to replace those used in sending.  To help keep track
+ * of these pages we have a receive list where all pages initialy live, and a
+ * send list where pages are moved to when there are to be part of a reply.
  *
  * We use xdr_buf for holding responses as it fits well with NFS
  * read responses (that have a header, and some data pages, and possibly
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 03a4f5615086..14efcc08c6f8 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1060,16 +1060,8 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
 	return 0;	/* record not complete */
 }
 
-static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
-			     int flags)
-{
-	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
-			       offset_in_page(vec->iov_base),
-			       vec->iov_len, flags);
-}
-
 /*
- * kernel_sendpage() is used exclusively to reduce the number of
+ * MSG_SPLICE_PAGES is used exclusively to reduce the number of
  * copy operations in this path. Therefore the caller must ensure
  * that the pages backing @xdr are unchanging.
  *
@@ -1081,13 +1073,9 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
 {
 	const struct kvec *head = xdr->head;
 	const struct kvec *tail = xdr->tail;
-	struct kvec rm = {
-		.iov_base	= &marker,
-		.iov_len	= sizeof(marker),
-	};
-	struct msghdr msg = {
-		.msg_flags	= 0,
-	};
+	struct kvec kv[2];
+	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | MSG_MORE, };
+	size_t sent;
 	int ret;
 
 	*sentp = 0;
@@ -1095,51 +1083,36 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
 	if (ret < 0)
 		return ret;
 
-	ret = kernel_sendmsg(sock, &msg, &rm, 1, rm.iov_len);
+	kv[0].iov_base = &marker;
+	kv[0].iov_len = sizeof(marker);
+	kv[1] = *head;
+	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, kv, 2, sizeof(marker) + head->iov_len);
+	ret = sock_sendmsg(sock, &msg);
 	if (ret < 0)
 		return ret;
-	*sentp += ret;
-	if (ret != rm.iov_len)
-		return -EAGAIN;
+	sent = ret;
 
-	ret = svc_tcp_send_kvec(sock, head, 0);
+	if (!tail->iov_len)
+		msg.msg_flags &= ~MSG_MORE;
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
+		      xdr_buf_pagecount(xdr), xdr->page_len);
+	ret = sock_sendmsg(sock, &msg);
 	if (ret < 0)
 		return ret;
-	*sentp += ret;
-	if (ret != head->iov_len)
-		goto out;
-
-	if (xdr->page_len) {
-		unsigned int offset, len, remaining;
-		struct bio_vec *bvec;
-
-		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
-		offset = offset_in_page(xdr->page_base);
-		remaining = xdr->page_len;
-		while (remaining > 0) {
-			len = min(remaining, bvec->bv_len - offset);
-			ret = kernel_sendpage(sock, bvec->bv_page,
-					      bvec->bv_offset + offset,
-					      len, 0);
-			if (ret < 0)
-				return ret;
-			*sentp += ret;
-			if (ret != len)
-				goto out;
-			remaining -= len;
-			offset = 0;
-			bvec++;
-		}
-	}
+	sent += ret;
 
 	if (tail->iov_len) {
-		ret = svc_tcp_send_kvec(sock, tail, 0);
+		msg.msg_flags &= ~MSG_MORE;
+		iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, tail, 1, tail->iov_len);
+		ret = sock_sendmsg(sock, &msg);
 		if (ret < 0)
 			return ret;
-		*sentp += ret;
+		sent += ret;
 	}
-
-out:
+	if (sent > 0)
+		*sentp = sent;
+	if (sent != sizeof(marker) + xdr->len)
+		return -EAGAIN;
 	return 0;
 }
 