Date: Thu, 8 Sep 2022 16:41:14 +0800 (CST)
From: Guo Zhi
To: jasowang
Cc: eperezma, sgarzare, Michael Tsirkin, netdev, linux-kernel, kvm list, virtualization
Message-ID:
<1010358496.165709.1662626474492.JavaMail.zimbra@sjtu.edu.cn>
In-Reply-To:
References: <20220901055434.824-1-qtxuning1999@sjtu.edu.cn>
 <20220901055434.824-4-qtxuning1999@sjtu.edu.cn>
Subject: Re: [RFC v3 3/7] vsock: batch buffers in tx

----- Original Message -----
> From: "jasowang"
> To: "Guo Zhi", "eperezma", "sgarzare", "Michael Tsirkin"
> Cc: "netdev", "linux-kernel", "kvm list", "virtualization"
> Sent: Wednesday, September 7, 2022 12:27:40 PM
> Subject: Re: [RFC v3 3/7] vsock: batch buffers in tx
>
> On 2022/9/1 13:54, Guo Zhi wrote:
>> Vsock uses buffers in order, and for tx the driver doesn't have to
>> know the length of the buffer. So we can do a batch for vsock if
>> in-order is negotiated, and only write one used ring entry for a
>> batch of buffers.
>>
>> Signed-off-by: Guo Zhi
>> ---
>>  drivers/vhost/vsock.c | 12 ++++++++++--
>>  1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>> index 368330417bde..e08fbbb5439e 100644
>> --- a/drivers/vhost/vsock.c
>> +++ b/drivers/vhost/vsock.c
>> @@ -497,7 +497,7 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
>>  	struct vhost_vsock *vsock = container_of(vq->dev, struct vhost_vsock, dev);
>>  	struct virtio_vsock_pkt *pkt;
>> -	int head, pkts = 0, total_len = 0;
>> +	int head, pkts = 0, total_len = 0, add = 0;
>>  	unsigned int out, in;
>>  	bool added = false;
>>
>> @@ -551,10 +551,18 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
>>  		else
>>  			virtio_transport_free_pkt(pkt);
>>
>> -		vhost_add_used(vq, head, 0);
>> +		if (!vhost_has_feature(vq, VIRTIO_F_IN_ORDER)) {
>> +			vhost_add_used(vq, head, 0);
>
> I'd do this step by step:
>
> 1) switch to vhost_add_used_n() for vsock -- fewer copy_to_user()
>    calls, better performance
> 2) do in-order on top
>

LGTM! I think that is the correct way.

>> +		} else {
>> +			vq->heads[add].id = head;
>> +			vq->heads[add++].len = 0;
>
> How can we guarantee that we stay within the boundary of the heads
> array?
>
> Btw, in the in-order case we don't need to record all the heads;
> we just need to know the head of the last buffer, which reduces the
> pressure on the cache.
>
> Thanks

Yeah, I will change this and only copy the last head when the in-order
feature is negotiated. Thanks.

>> +		}
>>  		added = true;
>>  	} while (likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
>>
>> +	/* If the in-order feature is negotiated, we can do a batch to
>> +	 * increase performance */
>> +	if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER) && added)
>> +		vhost_add_used_n(vq, vq->heads, add);
>>  no_more_replies:
>>  	if (added)
>>  		vhost_signal(&vsock->dev, vq);