Message-ID: <3d1d76c9-2fdb-3dfe-222a-b2184cf17708@sberdevices.ru>
Date: Tue, 25 Jul 2023 12:16:11 +0300
Subject: Re: [PATCH net-next v3 4/4] vsock/virtio: MSG_ZEROCOPY flag support
From: Arseniy Krasnov
To: Stefano Garzarella
CC: Stefan Hajnoczi, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, "Michael S. Tsirkin", Jason Wang, Bobby Eshleman
References: <20230720214245.457298-1-AVKrasnov@sberdevices.ru>
 <20230720214245.457298-5-AVKrasnov@sberdevices.ru>
 <091c067b-43a0-da7f-265f-30c8c7e62977@sberdevices.ru>
 <2k3cbz762ua3fmlben5vcm7rs624sktaltbz3ldeevwiguwk2w@klggxj5e3ueu>
 <51022d5f-5b50-b943-ad92-b06f60bef433@sberdevices.ru>
In-Reply-To: <51022d5f-5b50-b943-ad92-b06f60bef433@sberdevices.ru>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On 25.07.2023 11:46, Arseniy Krasnov wrote:
>
>
> On 25.07.2023 11:43, Stefano Garzarella wrote:
>> On Fri, Jul 21, 2023 at 08:09:03AM +0300, Arseniy Krasnov wrote:
>>>
>>>
>>> On 21.07.2023 00:42, Arseniy Krasnov wrote:
>>>> This adds handling of MSG_ZEROCOPY flag on transmission path: if this
>>>> flag is set and zerocopy transmission is possible (enabled in socket
>>>> options and transport allows zerocopy), then non-linear skb will be
>>>> created and filled with the pages of user's buffer. Pages of user's
>>>> buffer are locked in memory by 'get_user_pages()'. Second thing that
>>>> this patch does is replace type of skb owning: instead of calling
>>>> 'skb_set_owner_sk_safe()' it calls 'skb_set_owner_w()'. Reason of this
>>>> change is that '__zerocopy_sg_from_iter()' increments 'sk_wmem_alloc'
>>>> of socket, so to decrease this field correctly proper skb destructor is
>>>> needed: 'sock_wfree()'. This destructor is set by 'skb_set_owner_w()'.
>>>>
>>>> Signed-off-by: Arseniy Krasnov
>>>> ---
>>>>  Changelog:
>>>>  v5(big patchset) -> v1:
>>>>   * Refactorings of 'if' conditions.
>>>>   * Remove extra blank line.
>>>>   * Remove 'frag_off' field unneeded init.
>>>>   * Add function 'virtio_transport_fill_skb()' which fills both linear
>>>>     and non-linear skb with provided data.
>>>>  v1 -> v2:
>>>>   * Use original order of last four arguments in 'virtio_transport_alloc_skb()'.
>>>>  v2 -> v3:
>>>>   * Add new transport callback: 'msgzerocopy_check_iov'. It checks that
>>>>     provided 'iov_iter' with data could be sent in a zerocopy mode.
>>>>     If this callback is not set in transport - transport allows to send
>>>>     any 'iov_iter' in zerocopy mode. Otherwise - if callback returns 'true'
>>>>     then zerocopy is allowed. Reason of this callback is that in case of
>>>>     G2H transmission we insert whole skb to the tx virtio queue and such
>>>>     skb must fit to the size of the virtio queue to be sent in a single
>>>>     iteration (may be tx logic in 'virtio_transport.c' could be reworked
>>>>     as in vhost to support partial send of current skb). This callback
>>>>     will be enabled only for G2H path. For details pls see comment
>>>>     'Check that tx queue...' below.
>>>>
>>>>  include/net/af_vsock.h                  |   3 +
>>>>  net/vmw_vsock/virtio_transport.c        |  39 ++++
>>>>  net/vmw_vsock/virtio_transport_common.c | 257 ++++++++++++++++++------
>>>>  3 files changed, 241 insertions(+), 58 deletions(-)
>>>>
>>>> diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
>>>> index 0e7504a42925..a6b346eeeb8e 100644
>>>> --- a/include/net/af_vsock.h
>>>> +++ b/include/net/af_vsock.h
>>>> @@ -177,6 +177,9 @@ struct vsock_transport {
>>>>
>>>>      /* Read a single skb */
>>>>      int (*read_skb)(struct vsock_sock *, skb_read_actor_t);
>>>> +
>>>> +    /* Zero-copy. */
>>>> +    bool (*msgzerocopy_check_iov)(const struct iov_iter *);
>>>>  };
>>>>
>>>>  /**** CORE ****/
>>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>>> index 7bbcc8093e51..23cb8ed638c4 100644
>>>> --- a/net/vmw_vsock/virtio_transport.c
>>>> +++ b/net/vmw_vsock/virtio_transport.c
>>>> @@ -442,6 +442,43 @@ static void virtio_vsock_rx_done(struct virtqueue *vq)
>>>>      queue_work(virtio_vsock_workqueue, &vsock->rx_work);
>>>>  }
>>>>
>>>> +static bool virtio_transport_msgzerocopy_check_iov(const struct iov_iter *iov)
>>>> +{
>>>> +    struct virtio_vsock *vsock;
>>>> +    bool res = false;
>>>> +
>>>> +    rcu_read_lock();
>>>> +
>>>> +    vsock = rcu_dereference(the_virtio_vsock);
>>>> +    if (vsock) {
>>>> +        struct virtqueue *vq;
>>>> +        int iov_pages;
>>>> +
>>>> +        vq = vsock->vqs[VSOCK_VQ_TX];
>>>> +
>>>> +        iov_pages = round_up(iov->count, PAGE_SIZE) / PAGE_SIZE;
>>>> +
>>>> +        /* Check that tx queue is large enough to keep whole
>>>> +         * data to send. This is needed, because when there is
>>>> +         * not enough free space in the queue, current skb to
>>>> +         * send will be reinserted to the head of tx list of
>>>> +         * the socket to retry transmission later, so if skb
>>>> +         * is bigger than whole queue, it will be reinserted
>>>> +         * again and again, thus blocking other skbs to be sent.
>>>> +         * Each page of the user provided buffer will be added
>>>> +         * as a single buffer to the tx virtqueue, so compare
>>>> +         * number of pages against maximum capacity of the queue.
>>>> +         * +1 means buffer for the packet header.
>>>> +         */
>>>> +        if (iov_pages + 1 <= vq->num_max)
>>>
>>> I think this check is actual only for case one we don't have indirect buffer feature.
>>> With indirect mode whole data to send will be packed into one indirect buffer.
>>
>> I think so.
>> So, should we check also that here?
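
Something like the following could work here, I think. It is only a rough,
untested sketch to show the idea; it assumes the negotiated features are
reachable through 'vsock->vdev' at this point:

        /* Sketch: when VIRTIO_RING_F_INDIRECT_DESC was negotiated, the whole
         * payload is packed into a single indirect buffer, so the per-page
         * comparison against 'vq->num_max' only matters without that feature.
         */
        if (virtio_has_feature(vsock->vdev, VIRTIO_RING_F_INDIRECT_DESC) ||
            iov_pages + 1 <= vq->num_max)
            res = true;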
>>
>>>
>>> Thanks, Arseniy
>>>
>>>> +            res = true;
>>>> +    }
>>>> +
>>>> +    rcu_read_unlock();
>>>> +
>>>> +    return res;
>>>> +}
>>>> +
>>>>  static bool virtio_transport_seqpacket_allow(u32 remote_cid);
>>>>
>>>>  static struct virtio_transport virtio_transport = {
>>>> @@ -475,6 +512,8 @@ static struct virtio_transport virtio_transport = {
>>>>          .seqpacket_allow          = virtio_transport_seqpacket_allow,
>>>>          .seqpacket_has_data       = virtio_transport_seqpacket_has_data,
>>>>
>>>> +        .msgzerocopy_check_iov      = virtio_transport_msgzerocopy_check_iov,
>>>> +
>>>>          .notify_poll_in           = virtio_transport_notify_poll_in,
>>>>          .notify_poll_out          = virtio_transport_notify_poll_out,
>>>>          .notify_recv_init         = virtio_transport_notify_recv_init,
>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>> index 26a4d10da205..e4e3d541aff4 100644
>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>> @@ -37,73 +37,122 @@ virtio_transport_get_ops(struct vsock_sock *vsk)
>>>>      return container_of(t, struct virtio_transport, transport);
>>>>  }
>>>>
>>>> -/* Returns a new packet on success, otherwise returns NULL.
>>>> - *
>>>> - * If NULL is returned, errp is set to a negative errno.
>>>> - */
>>>> -static struct sk_buff *
>>>> -virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info,
>>>> -               size_t len,
>>>> -               u32 src_cid,
>>>> -               u32 src_port,
>>>> -               u32 dst_cid,
>>>> -               u32 dst_port)
>>>> -{
>>>> -    const size_t skb_len = VIRTIO_VSOCK_SKB_HEADROOM + len;
>>>> -    struct virtio_vsock_hdr *hdr;
>>>> -    struct sk_buff *skb;
>>>> -    void *payload;
>>>> -    int err;
>>>> +static bool virtio_transport_can_zcopy(struct virtio_vsock_pkt_info *info,
>>>> +                       size_t max_to_send)
>>>> +{
>>>> +    const struct vsock_transport *t;
>>>> +    struct iov_iter *iov_iter;
>>>>
>>>> -    skb = virtio_vsock_alloc_skb(skb_len, GFP_KERNEL);
>>>> -    if (!skb)
>>>> -        return NULL;
>>>> +    if (!info->msg)
>>>> +        return false;
>>>>
>>>> -    hdr = virtio_vsock_hdr(skb);
>>>> -    hdr->type    = cpu_to_le16(info->type);
>>>> -    hdr->op        = cpu_to_le16(info->op);
>>>> -    hdr->src_cid    = cpu_to_le64(src_cid);
>>>> -    hdr->dst_cid    = cpu_to_le64(dst_cid);
>>>> -    hdr->src_port    = cpu_to_le32(src_port);
>>>> -    hdr->dst_port    = cpu_to_le32(dst_port);
>>>> -    hdr->flags    = cpu_to_le32(info->flags);
>>>> -    hdr->len    = cpu_to_le32(len);
>>>> +    iov_iter = &info->msg->msg_iter;
>>>>
>>>> -    if (info->msg && len > 0) {
>>>> -        payload = skb_put(skb, len);
>>>> -        err = memcpy_from_msg(payload, info->msg, len);
>>>> -        if (err)
>>>> -            goto out;
>>>> +    t = vsock_core_get_transport(info->vsk);
>>>>
>>>> -        if (msg_data_left(info->msg) == 0 &&
>>>> -            info->type == VIRTIO_VSOCK_TYPE_SEQPACKET) {
>>>> -            hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOM);
>>>> +    if (t->msgzerocopy_check_iov &&
>>>> +        !t->msgzerocopy_check_iov(iov_iter))
>>>> +        return false;
>>
>> I'd avoid adding a new transport callback used only internally in virtio
>> transports.
>
> Ok, I see.
>
>>
>> Usually the transport callbacks are used in af_vsock.c, if we need a
>> callback just for virtio transports, maybe better to add it in struct
>> virtio_vsock_pkt_info or struct virtio_vsock_sock.

Hm, maybe I just need to move this callback from 'struct vsock_transport' to
the parent 'struct virtio_transport', right after the 'send_pkt' callback
(a rough sketch of what I mean is at the end of this mail). In this case:
1) The AF_VSOCK part is not touched.
2) The callback stays in 'virtio_transport.c' and is also set there. vhost and
   loopback are unchanged - as before, only 'send_pkt' is set for these two
   transports.

Thanks, Arseniy

>>
>> Maybe better the last one so we don't have to allocate pointer space
>> for each packet and you should reach it via info.
>
> Ok, thanks, I'll try this way
>
> Thanks, Arseniy
>
>>
>> Thanks,
>> Stefano
>>
>>>>
>>>> -            if (info->msg->msg_flags & MSG_EOR)
>>>> -                hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
>>>> -        }
>>>> +    /* Data is simple buffer. */
>>>> +    if (iter_is_ubuf(iov_iter))
>>>> +        return true;
>>>> +
>>>> +    if (!iter_is_iovec(iov_iter))
>>>> +        return false;
>>>> +
>>>> +    if (iov_iter->iov_offset)
>>>> +        return false;
>>>> +
>>>> +    /* We can't send whole iov. */
>>>> +    if (iov_iter->count > max_to_send)
>>>> +        return false;
>>>> +
>>>> +    return true;
>>>> +}
>>>> +
>>>> +static int virtio_transport_init_zcopy_skb(struct vsock_sock *vsk,
>>>> +                       struct sk_buff *skb,
>>>> +                       struct msghdr *msg,
>>>> +                       bool zerocopy)
>>>> +{
>>>> +    struct ubuf_info *uarg;
>>>> +
>>>> +    if (msg->msg_ubuf) {
>>>> +        uarg = msg->msg_ubuf;
>>>> +        net_zcopy_get(uarg);
>>>> +    } else {
>>>> +        struct iov_iter *iter = &msg->msg_iter;
>>>> +        struct ubuf_info_msgzc *uarg_zc;
>>>> +        int len;
>>>> +
>>>> +        /* Only ITER_IOVEC or ITER_UBUF are allowed and
>>>> +         * checked before.
>>>> +         */
>>>> +        if (iter_is_iovec(iter))
>>>> +            len = iov_length(iter->__iov, iter->nr_segs);
>>>> +        else
>>>> +            len = iter->count;
>>>> +
>>>> +        uarg = msg_zerocopy_realloc(sk_vsock(vsk),
>>>> +                        len,
>>>> +                        NULL);
>>>> +        if (!uarg)
>>>> +            return -1;
>>>> +
>>>> +        uarg_zc = uarg_to_msgzc(uarg);
>>>> +        uarg_zc->zerocopy = zerocopy ? 1 : 0;
>>>>      }
>>>>
>>>> -    if (info->reply)
>>>> -        virtio_vsock_skb_set_reply(skb);
>>>> +    skb_zcopy_init(skb, uarg);
>>>>
>>>> -    trace_virtio_transport_alloc_pkt(src_cid, src_port,
>>>> -                     dst_cid, dst_port,
>>>> -                     len,
>>>> -                     info->type,
>>>> -                     info->op,
>>>> -                     info->flags);
>>>> +    return 0;
>>>> +}
>>>>
>>>> -    if (info->vsk && !skb_set_owner_sk_safe(skb, sk_vsock(info->vsk))) {
>>>> -        WARN_ONCE(1, "failed to allocate skb on vsock socket with sk_refcnt == 0\n");
>>>> -        goto out;
>>>> +static int virtio_transport_fill_skb(struct sk_buff *skb,
>>>> +                     struct virtio_vsock_pkt_info *info,
>>>> +                     size_t len,
>>>> +                     bool zcopy)
>>>> +{
>>>> +    if (zcopy) {
>>>> +        return __zerocopy_sg_from_iter(info->msg, NULL, skb,
>>>> +                          &info->msg->msg_iter,
>>>> +                          len);
>>>> +    } else {
>>>> +        void *payload;
>>>> +        int err;
>>>> +
>>>> +        payload = skb_put(skb, len);
>>>> +        err = memcpy_from_msg(payload, info->msg, len);
>>>> +        if (err)
>>>> +            return -1;
>>>> +
>>>> +        if (msg_data_left(info->msg))
>>>> +            return 0;
>>>> +
>>>> +        return 0;
>>>>      }
>>>> +}
>>>>
>>>> -    return skb;
>>>> +static void virtio_transport_init_hdr(struct sk_buff *skb,
>>>> +                      struct virtio_vsock_pkt_info *info,
>>>> +                      u32 src_cid,
>>>> +                      u32 src_port,
>>>> +                      u32 dst_cid,
>>>> +                      u32 dst_port,
>>>> +                      size_t len)
>>>> +{
>>>> +    struct virtio_vsock_hdr *hdr;
>>>>
>>>> -out:
>>>> -    kfree_skb(skb);
>>>> -    return NULL;
>>>> +    hdr = virtio_vsock_hdr(skb);
>>>> +    hdr->type    = cpu_to_le16(info->type);
>>>> +    hdr->op        = cpu_to_le16(info->op);
>>>> +    hdr->src_cid    = cpu_to_le64(src_cid);
>>>> +    hdr->dst_cid    = cpu_to_le64(dst_cid);
>>>> +    hdr->src_port    = cpu_to_le32(src_port);
>>>> +    hdr->dst_port    = cpu_to_le32(dst_port);
>>>> +    hdr->flags    = cpu_to_le32(info->flags);
>>>> +    hdr->len    = cpu_to_le32(len);
>>>>  }
>>>>
>>>>  static void virtio_transport_copy_nonlinear_skb(const struct sk_buff *skb,
>>>> @@ -214,6 +263,70 @@ static u16 virtio_transport_get_type(struct sock *sk)
>>>>          return VIRTIO_VSOCK_TYPE_SEQPACKET;
>>>>  }
>>>>
>>>> +static struct sk_buff *virtio_transport_alloc_skb(struct vsock_sock *vsk,
>>>> +                          struct virtio_vsock_pkt_info *info,
>>>> +                          size_t payload_len,
>>>> +                          bool zcopy,
>>>> +                          u32 src_cid,
>>>> +                          u32 src_port,
>>>> +                          u32 dst_cid,
>>>> +                          u32 dst_port)
>>>> +{
>>>> +    struct sk_buff *skb;
>>>> +    size_t skb_len;
>>>> +
>>>> +    skb_len = VIRTIO_VSOCK_SKB_HEADROOM;
>>>> +
>>>> +    if (!zcopy)
>>>> +        skb_len += payload_len;
>>>> +
>>>> +    skb = virtio_vsock_alloc_skb(skb_len, GFP_KERNEL);
>>>> +    if (!skb)
>>>> +        return NULL;
>>>> +
>>>> +    virtio_transport_init_hdr(skb, info, src_cid, src_port,
>>>> +                  dst_cid, dst_port,
>>>> +                  payload_len);
>>>> +
>>>> +    /* Set owner here, because '__zerocopy_sg_from_iter()' uses
>>>> +     * owner of skb without check to update 'sk_wmem_alloc'.
>>>> +     */
>>>> +    if (vsk)
>>>> +        skb_set_owner_w(skb, sk_vsock(vsk));
>>>> +
>>>> +    if (info->msg && payload_len > 0) {
>>>> +        int err;
>>>> +
>>>> +        err = virtio_transport_fill_skb(skb, info, payload_len, zcopy);
>>>> +        if (err)
>>>> +            goto out;
>>>> +
>>>> +        if (info->type == VIRTIO_VSOCK_TYPE_SEQPACKET) {
>>>> +            struct virtio_vsock_hdr *hdr = virtio_vsock_hdr(skb);
>>>> +
>>>> +            hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOM);
>>>> +
>>>> +            if (info->msg->msg_flags & MSG_EOR)
>>>> +                hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
>>>> +        }
>>>> +    }
>>>> +
>>>> +    if (info->reply)
>>>> +        virtio_vsock_skb_set_reply(skb);
>>>> +
>>>> +    trace_virtio_transport_alloc_pkt(src_cid, src_port,
>>>> +                     dst_cid, dst_port,
>>>> +                     payload_len,
>>>> +                     info->type,
>>>> +                     info->op,
>>>> +                     info->flags);
>>>> +
>>>> +    return skb;
>>>> +out:
>>>> +    kfree_skb(skb);
>>>> +    return NULL;
>>>> +}
>>>> +
>>>>  /* This function can only be used on connecting/connected sockets,
>>>>   * since a socket assigned to a transport is required.
>>>>   *
>>>> @@ -222,10 +335,12 @@ static u16 virtio_transport_get_type(struct sock *sk)
>>>>  static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
>>>>                        struct virtio_vsock_pkt_info *info)
>>>>  {
>>>> +    u32 max_skb_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;
>>>>      u32 src_cid, src_port, dst_cid, dst_port;
>>>>      const struct virtio_transport *t_ops;
>>>>      struct virtio_vsock_sock *vvs;
>>>>      u32 pkt_len = info->pkt_len;
>>>> +    bool can_zcopy = false;
>>>>      u32 rest_len;
>>>>      int ret;
>>>>
>>>> @@ -254,15 +369,30 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
>>>>      if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
>>>>          return pkt_len;
>>>>
>>>> +    if (info->msg) {
>>>> +        /* If zerocopy is not enabled by 'setsockopt()', we behave as
>>>> +         * there is no MSG_ZEROCOPY flag set.
>>>> +         */
>>>> +        if (!sock_flag(sk_vsock(vsk), SOCK_ZEROCOPY))
>>>> +            info->msg->msg_flags &= ~MSG_ZEROCOPY;
>>>> +
>>>> +        if (info->msg->msg_flags & MSG_ZEROCOPY)
>>>> +            can_zcopy = virtio_transport_can_zcopy(info, pkt_len);
>>>> +
>>>> +        if (can_zcopy)
>>>> +            max_skb_len = min_t(u32, VIRTIO_VSOCK_MAX_PKT_BUF_SIZE,
>>>> +                        (MAX_SKB_FRAGS * PAGE_SIZE));
>>>> +    }
>>>> +
>>>>      rest_len = pkt_len;
>>>>
>>>>      do {
>>>>          struct sk_buff *skb;
>>>>          size_t skb_len;
>>>>
>>>> -        skb_len = min_t(u32, VIRTIO_VSOCK_MAX_PKT_BUF_SIZE, rest_len);
>>>> +        skb_len = min(max_skb_len, rest_len);
>>>>
>>>> -        skb = virtio_transport_alloc_skb(info, skb_len,
>>>> +        skb = virtio_transport_alloc_skb(vsk, info, skb_len, can_zcopy,
>>>>                           src_cid, src_port,
>>>>                           dst_cid, dst_port);
>>>>          if (!skb) {
>>>> @@ -270,6 +400,17 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
>>>>              break;
>>>>          }
>>>>
>>>> +        /* This is last skb to send this portion of data. */
>>>> +        if (info->msg && info->msg->msg_flags & MSG_ZEROCOPY &&
>>>> +            skb_len == rest_len && info->op == VIRTIO_VSOCK_OP_RW) {
>>>> +            if (virtio_transport_init_zcopy_skb(vsk, skb,
>>>> +                                info->msg,
>>>> +                                can_zcopy)) {
>>>> +                ret = -ENOMEM;
>>>> +                break;
>>>> +            }
>>>> +        }
>>>> +
>>>>          virtio_transport_inc_tx_pkt(vvs, skb);
>>>>
>>>>          ret = t_ops->send_pkt(skb);
>>>> @@ -934,7 +1075,7 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t,
>>>>      if (!t)
>>>>          return -ENOTCONN;
>>>>
>>>> -    reply = virtio_transport_alloc_skb(&info, 0,
>>>> +    reply = virtio_transport_alloc_skb(NULL, &info, 0, false,
>>>>                         le64_to_cpu(hdr->dst_cid),
>>>>                         le32_to_cpu(hdr->dst_port),
>>>>                         le64_to_cpu(hdr->src_cid),
>>>
>>
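
P.S. Here is the rough sketch I mentioned above for moving the callback out of
'struct vsock_transport'. It is only to illustrate the idea (not compiled; the
existing fields of 'struct virtio_transport' in include/linux/virtio_vsock.h
are reproduced from memory, so treat them as approximate):

    struct virtio_transport {
        /* This must be the first field */
        struct vsock_transport transport;

        /* Takes ownership of the packet */
        int (*send_pkt)(struct sk_buff *skb);

        /* New: return true if this iov_iter may be sent in zerocopy mode;
         * NULL means "no restriction". Only virtio_transport.c would set
         * it, vhost and loopback stay unchanged and keep it NULL.
         */
        bool (*msgzerocopy_check_iov)(const struct iov_iter *iov);
    };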