From: Jesper Dangaard Brouer
Date: Mon, 24 Jul 2023 16:56:27 +0200
Subject: Re: Memory providers multiplexing (Was: [PATCH net-next v4 4/5] page_pool: remove PP_FLAG_PAGE_FRAG flag)
To: Mina Almasry, Jason Gunthorpe
Cc: brouer@redhat.com, Christian König, Hari Ramakrishnan, David Ahern,
 Samiullah Khawaja, Willem de Bruijn, Jakub Kicinski, Christoph Hellwig,
 John Hubbard, Dan Williams, Alexander Duyck, Yunsheng Lin,
 davem@davemloft.net, pabeni@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, Lorenzo Bianconi, Yisen Zhuang, Salil Mehta,
 Eric Dumazet, Sunil Goutham, Geetha sowjanya, Subbaraya Sundeep,
 hariprasad, Saeed Mahameed, Leon Romanovsky, Felix Fietkau, Ryder Lee,
 Shayne Chen, Sean Wang, Kalle Valo, Matthias Brugger,
 AngeloGioacchino Del Regno, Ilias Apalodimas, linux-rdma@vger.kernel.org,
 linux-wireless@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-mediatek@lists.infradead.org, Jonathan Lemon, logang@deltatee.com,
 Bjorn Helgaas

On 17/07/2023 03.53, Mina Almasry wrote:
> On Fri, Jul 14, 2023 at 8:55 AM Jason Gunthorpe wrote:
>>
>> On Fri, Jul 14, 2023 at 07:55:15AM -0700, Mina Almasry wrote:
>>
>>> Once the skb frags with struct new_abstraction are in the TCP stack,
>>> they will need some special handling in code accessing the frags. But
>>> my RFC already addressed that somewhat because the frags were
>>> inaccessible in that case. In this case the frags will be both
>>> inaccessible and will not be struct pages at all (things like
>>> get_page() will not work), so more special handling will be required,
>>> maybe.
>>
>> It seems sort of reasonable, though there will be interesting concerns
>> about coherence and synchronization with general-purpose DMABUFs that
>> will need tackling.
>>
>> Still, it is such a lot of churn and weirdness on the netdev side that
>> I think you'd do well to present an actual full application as
>> justification.
>>
>> Yes, you showed you can stick unordered TCP data frags into GPU memory
>> sort of quickly, but have you gone further with this to actually show
>> it is useful for a real-world GPU-centric application?
>>
>> BTW, your cover letter said 96% utilization, and the usual server
>> configuration is one NIC per GPU, so you were able to hit 1500 Gb/sec
>> of TCP BW with this?
>>
>
> I do notice that the number of NICs is missing from our public
> documentation so far, so I will refrain from specifying how many NICs
> are on those A3 VMs until the information is public. But I think I can
> confirm that your general thinking is correct: the perf that we're
> getting is 96.6% line rate of each GPU/NIC pair,

What do you mean by 96.6% "line rate"? Is it the Ethernet line rate?
Is the measured throughput the TCP data "goodput"?

Assuming:
 - MTU 1500 bytes (1514 on wire)
 - Ethernet header 14 bytes
 - IP header 20 bytes
 - TCP header 20 bytes

Due to header overhead, the goodput will be approx 96.4%:
 - (1514-(14+20+20))/1514 = 0.9643
 - (not taking the Ethernet interframe gap into account)

Thus, maybe you have hit the Ethernet wire line rate already?

> and scales linearly
> for each NIC/GPU pair we've tested with so far. Line rate of each
> NIC/GPU pair is 200 Gb/sec.
>
> So if we have 8 NIC/GPU pairs we'd be hitting 96.6% * 200 * 8 = 1545 GB/sec.

Let's keep our units straight. Here you mean 1545 Gbit/sec, which is
193 GBytes/sec.

> If we have, say, 2 NIC/GPU pairs, we'd be hitting 96.6% * 200 * 2 = 384 GB/sec

Here you mean 384 Gbit/sec, which is 48 GBytes/sec.

> ...
> etc.
>

These massive throughput numbers are important because they *exceed* the
physical host RAM/DIMM memory speeds. This is the *real argument* for why
software cannot afford even a single copy of the data from host RAM into
GPU memory: the CPU's memory bandwidth to DRAM/DIMMs is insufficient.

My testlab CPU E5-1650 has 4 DIMM slots of DDR4:
 - Data Width: 64 bits (= 8 bytes)
 - Configured Memory Speed: 2400 MT/s
 - Theoretical maximum memory bandwidth: 76.8 GBytes/s (2400*8*4)

Even the theoretical max of 76.8 GBytes/s (614 Gbit/s) is not enough for
the 193 GBytes/s or 1545 Gbit/s (8 NIC/GPU pairs).

When testing this with the lmbench tool bw_mem, the results (below the
signature) are in the area of 14.8 GBytes/sec (118 Gbit/s) as soon as the
working set exceeds the L3 cache size. In practice it looks like main
memory is limited to reading 118 Gbit/s *once*. (Mina's NICs run at
200 Gbit/s.)

Given that DDIO can deliver network packets into L3, I also tried to
figure out the L3 read bandwidth, which I measured to be 42.4 GBytes/sec
(339 Gbit/s), in hopes that it would be enough, but it was not. (The
arithmetic above is consolidated in a small standalone program below the
signature.)

--Jesper
(data below signature)
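To keep the back-of-envelope numbers checkable, here is a minimal
standalone C sketch. It is illustrative only: every constant is one
quoted in this thread, nothing is measured, and the file name bw_math.c
is just a placeholder.

  /* bw_math.c - consolidates the back-of-envelope arithmetic from this
   * mail: the TCP goodput fraction, the aggregate NIC bandwidth, and
   * the theoretical DDR4 bandwidth.
   */
  #include <stdio.h>

  int main(void)
  {
          /* TCP goodput fraction at MTU 1500 (1514 bytes on the wire,
           * ignoring the Ethernet interframe gap) */
          double wire    = 1514.0;
          double headers = 14.0 + 20.0 + 20.0;  /* Ethernet + IP + TCP */
          printf("goodput fraction : %.4f\n", (wire - headers) / wire);

          /* Aggregate NIC bandwidth: 8 NIC/GPU pairs, 200 Gbit/s each,
           * at 96.6% of line rate */
          double agg_gbit = 0.966 * 200.0 * 8.0;
          printf("aggregate        : %.1f Gbit/s = %.1f GBytes/s\n",
                 agg_gbit, agg_gbit / 8.0);

          /* Theoretical DRAM bandwidth: DDR4-2400, 64-bit (8 byte) wide
           * data bus, 4 DIMM slots */
          double dram_gb = 2400e6 * 8.0 * 4.0 / 1e9;
          printf("DDR4 theoretical : %.1f GBytes/s = %.1f Gbit/s\n",
                 dram_gb, dram_gb * 8.0);
          return 0;
  }

It prints ~0.9643, 1545.6 Gbit/s (193.2 GBytes/s), and 76.8 GBytes/s
(614.4 Gbit/s), matching the figures above.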
CPU under test:

$ cat /proc/cpuinfo | egrep -e 'model name|cache size' | head -2
model name : Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
cache size : 15360 KB

Providing some cmdline outputs from the lmbench "bw_mem" tool.
(Output format is "%0.2f %.2f\n", megabytes, megabytes_per_second.)

$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 256M rd
256.00 14924.50
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 256M wr
256.00 9895.25
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 256M rdwr
256.00 9737.54
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 256M bcopy
256.00 12462.88
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 256M bzero
256.00 14869.89

The next outputs show reducing the size below the L3 cache size, which
shows an increase in speed, likely the L3 bandwidth:

$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 64M rd
64.00 14840.58
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 32M rd
32.00 14823.97
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 16M rd
16.00 24743.86
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 8M rd
8.00 40852.26
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 4M rd
4.00 42545.65
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 2M rd
2.00 42447.82
$ taskset -c 2 /usr/lib/lmbench/bin/x86_64-linux-gnu/bw_mem 1M rd
1.00 42447.82
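For anyone without lmbench installed, a rough self-contained equivalent
of the "bw_mem <size> rd" runs above could look like the sketch below.
This is an approximation only: lmbench's exact access pattern and loop
unrolling differ, so absolute numbers will not match exactly, and the
file name rd_bw.c is a placeholder.

  /* rd_bw.c - sequentially read a buffer many times and report the
   * read bandwidth, roughly mimicking "bw_mem <size>M rd".
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  int main(int argc, char **argv)
  {
          /* Buffer size in MBytes, like bw_mem's size argument */
          size_t mb  = (argc > 1) ? strtoul(argv[1], NULL, 0) : 256;
          size_t len = mb << 20;
          long *buf  = malloc(len);
          volatile long sink = 0;   /* defeats dead-code elimination */
          struct timespec t0, t1;
          int iters = 16;

          if (!buf)
                  return 1;
          memset(buf, 1, len);      /* fault the pages in before timing */

          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (int i = 0; i < iters; i++) {
                  long sum = 0;
                  for (size_t j = 0; j < len / sizeof(long); j++)
                          sum += buf[j];   /* sequential read, like "rd" */
                  sink += sum;
          }
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double secs = (t1.tv_sec - t0.tv_sec) +
                        (t1.tv_nsec - t0.tv_nsec) / 1e9;
          /* Same output shape as bw_mem: megabytes, megabytes_per_second */
          printf("%.2f %.2f\n", (double)mb, (double)mb * iters / secs);
          free(buf);
          return 0;
  }

Build with "gcc -O2 rd_bw.c -o rd_bw" and pin it to one core as in the
runs above, e.g. "taskset -c 2 ./rd_bw 256".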