Date: Fri, 19 Jul 2019 10:29:54 +0200
From: Stefano Garzarella
To: "Michael S. Tsirkin"
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Stefan Hajnoczi, "David S. Miller",
    virtualization@lists.linux-foundation.org, Jason Wang,
    kvm@vger.kernel.org
Subject: Re: [PATCH v4 5/5] vsock/virtio: change the maximum packet size allowed
Message-ID: <20190719082954.m2lw77adpp5dylxw@steredhat>
References: <20190717113030.163499-1-sgarzare@redhat.com>
 <20190717113030.163499-6-sgarzare@redhat.com>
 <20190717105703-mutt-send-email-mst@kernel.org>
 <20190718083105-mutt-send-email-mst@kernel.org>
In-Reply-To: <20190718083105-mutt-send-email-mst@kernel.org>

On Thu, Jul 18, 2019 at 08:33:40AM -0400, Michael S. Tsirkin wrote:
> On Thu, Jul 18, 2019 at 09:52:41AM +0200, Stefano Garzarella wrote:
> > On Wed, Jul 17, 2019 at 5:00 PM Michael S. Tsirkin wrote:
> > >
> > > On Wed, Jul 17, 2019 at 01:30:30PM +0200, Stefano Garzarella wrote:
> > > > Since now we are able to split packets, we can avoid limiting
> > > > their sizes to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
> > > > Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max
> > > > packet size.
> > > >
> > > > Signed-off-by: Stefano Garzarella
> > >
> > > OK, so this is kind of like GSO, where we are passing 64K packets
> > > to the vsock and then splitting them at the low level.
> >
> > Exactly, something like that in the Host->Guest path; in the
> > Guest->Host path we use the entire 64K packet.
> >
> > Thanks,
> > Stefano
>
> btw two allocations for each packet isn't great. How about
> allocating the struct linearly with the data?

Are you referring to the kzalloc() used to allocate the
'struct virtio_vsock_pkt', followed by the kmalloc() used to allocate
the buffer?

Indeed, they don't look great; I will try to do a single allocation.

> And all buffers are the same length for you - so you can actually
> do alloc_pages.

Yes, Jason also suggested it, and we decided to postpone it since we
will try to reuse the virtio-net code, where this comes for free.

> Allocating/freeing pages in a batch should also be considered.

For the allocation of guest rx buffers we already do some batching (we
refill the queue when it drops below half full), but only in this
case :(

I'll try to do more alloc/free batching.

Thanks,
Stefano
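
P.S. for anyone following the thread, a few rough sketches of the ideas
discussed above. They are illustrative only, not actual patch hunks, and
any names not already in the virtio_vsock code are made up.

The size cap this patch changes amounts to clamping the requested length
to the 64K maximum (instead of the default rx buffer size) before the
transport splits it into rx-buffer-sized chunks, roughly like this:

	/* Sketch: allow a single packet up to VIRTIO_VSOCK_MAX_PKT_BUF_SIZE
	 * (64K) instead of VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE; anything larger
	 * is split by the caller into multiple packets.
	 */
	if (pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
		pkt_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;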
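
For the single allocation, the idea would be something along these lines:
a flexible array member, so the packet descriptor and its payload come
from one kzalloc() instead of a kzalloc() plus a kmalloc(). The struct
here is a simplified stand-in; the real 'struct virtio_vsock_pkt' has
more members:

	#include <linux/slab.h>
	#include <linux/overflow.h>
	#include <linux/types.h>
	#include <linux/virtio_vsock.h>

	/* Simplified stand-in for struct virtio_vsock_pkt: header, length
	 * and payload allocated linearly, instead of a separate buf.
	 */
	struct vsock_pkt_linear {
		struct virtio_vsock_hdr hdr;
		u32 len;
		u8 data[];		/* payload follows the descriptor */
	};

	static struct vsock_pkt_linear *vsock_pkt_alloc_linear(u32 len, gfp_t gfp)
	{
		struct vsock_pkt_linear *pkt;

		/* one allocation for descriptor + payload */
		pkt = kzalloc(struct_size(pkt, data, len), gfp);
		if (!pkt)
			return NULL;

		pkt->len = len;
		return pkt;
	}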
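
Since all the rx buffers have the same size, the alloc_pages() variant
Michael mentions could look roughly like this (again just a sketch):

	#include <linux/gfp.h>
	#include <linux/mm.h>
	#include <linux/virtio_vsock.h>

	/* All rx buffers are VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE bytes, so the
	 * order is constant and whole pages can be used directly.
	 */
	static void *vsock_rx_buf_alloc(gfp_t gfp)
	{
		struct page *page;

		page = alloc_pages(gfp, get_order(VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE));

		return page ? page_address(page) : NULL;
	}

	static void vsock_rx_buf_free(void *buf)
	{
		free_pages((unsigned long)buf,
			   get_order(VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE));
	}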
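
And the existing rx batching mentioned above is basically the "top up
the virtqueue once it drops below half" pattern, something like (the
struct and helper names are placeholders, not the driver's own):

	/* Placeholder state: how many buffers are queued on the rx vq and
	 * the high-water mark we refill back up to.
	 */
	struct vsock_rx_state {
		int rx_buf_nr;
		int rx_buf_max_nr;
	};

	static void vsock_rx_fill(struct vsock_rx_state *vsock);	/* queues a batch of buffers */

	/* Sketch: only refill once fewer than half the buffers remain, so
	 * allocation happens in bursts rather than once per received packet.
	 */
	static void vsock_rx_maybe_refill(struct vsock_rx_state *vsock)
	{
		if (vsock->rx_buf_nr < vsock->rx_buf_max_nr / 2)
			vsock_rx_fill(vsock);
	}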