Subject: Re: [PATCH v2 0/4] Optimise 64-bit IOVA allocations
From: "Leizhen (ThunderTown)"
To: Ganapatrao Kulkarni
CC: Joerg Roedel, Robin Murphy, Lorenzo Pieralisi, Ganapatrao Kulkarni
Date: Wed, 9 Aug 2017 12:09:00 +0800
Message-ID: <598A8ADC.5080906@huawei.com>
References: <20170726110807.GN15833@8bytes.org> <59787A48.6060200@huawei.com> <598A6877.4050307@huawei.com>
List: linux-kernel@vger.kernel.org

On 2017/8/9 11:24, Ganapatrao Kulkarni wrote:
> On Wed, Aug 9, 2017 at 7:12 AM, Leizhen (ThunderTown)
> wrote:
>>
>>
>> On 2017/8/8 20:03, Ganapatrao Kulkarni wrote:
>>> On Wed, Jul 26, 2017 at 4:47 PM, Leizhen (ThunderTown)
>>> wrote:
>>>>
>>>>
>>>> On 2017/7/26 19:08, Joerg Roedel wrote:
>>>>> Hi Robin.
>>>>>
>>>>> On Fri, Jul 21, 2017 at 12:41:57PM +0100, Robin Murphy wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> In the wake of the ARM SMMU optimisation efforts, it seems that certain
>>>>>> workloads (e.g. storage I/O with large scatterlists) probably remain quite
>>>>>> heavily influenced by IOVA allocation performance. Separately, Ard also
>>>>>> reported massive performance drops for a graphical desktop on AMD Seattle
>>>>>> when enabling SMMUs via IORT, which we traced to dma_32bit_pfn in the DMA
>>>>>> ops domain getting initialised differently for ACPI vs. DT, and exposing
>>>>>> the overhead of the rbtree slow path. Whilst we could go around trying to
>>>>>> close up all the little gaps that lead to hitting the slowest case, it
>>>>>> seems a much better idea to simply make said slowest case a lot less slow.
>>>>>
>>>>> Do you have some numbers here? How big was the impact before these
>>>>> patches and how is it with the patches?
>>>> Here are some numbers:
>>>>
>>>> (before)$ iperf -s
>>>> ------------------------------------------------------------
>>>> Server listening on TCP port 5001
>>>> TCP window size: 85.3 KByte (default)
>>>> ------------------------------------------------------------
>>>> [ 4] local 192.168.1.106 port 5001 connected with 192.168.1.198 port 35898
>>>> [ ID] Interval       Transfer     Bandwidth
>>>> [ 4]  0.0-10.2 sec  7.88 MBytes  6.48 Mbits/sec
>>>> [ 5] local 192.168.1.106 port 5001 connected with 192.168.1.198 port 35900
>>>> [ 5]  0.0-10.3 sec  7.88 MBytes  6.43 Mbits/sec
>>>> [ 4] local 192.168.1.106 port 5001 connected with 192.168.1.198 port 35902
>>>> [ 4]  0.0-10.3 sec  7.88 MBytes  6.43 Mbits/sec
>>>>
>>>> (after)$ iperf -s
>>>> ------------------------------------------------------------
>>>> Server listening on TCP port 5001
>>>> TCP window size: 85.3 KByte (default)
>>>> ------------------------------------------------------------
>>>> [ 4] local 192.168.1.106 port 5001 connected with 192.168.1.198 port 36330
>>>> [ ID] Interval       Transfer     Bandwidth
>>>> [ 4]  0.0-10.0 sec  1.09 GBytes   933 Mbits/sec
>>>> [ 5] local 192.168.1.106 port 5001 connected with 192.168.1.198 port 36332
>>>> [ 5]  0.0-10.0 sec  1.10 GBytes   939 Mbits/sec
>>>> [ 4] local 192.168.1.106 port 5001 connected with 192.168.1.198 port 36334
>>>> [ 4]  0.0-10.0 sec  1.10 GBytes   938 Mbits/sec
>>>>
>>>
>>> Is this testing done on Host or on Guest/VM?
>> Host
>
> As per your log, iperf throughput is improved to 938 Mbits/sec
> from 6.43 Mbits/sec.
> IMO, this seems to be unrealistic, some thing wrong with the testing?

For 64-bit non-PCI devices, the IOVA allocation always starts its search
from the last rb-tree node (rb_last). When many IOVAs have been allocated
and are held for a long time, the search has to walk a large number of rb
nodes before it finds a suitable free range. From my tracing, the average
number of nodes checked exceeds 10K.

[free-space][free][used][...][used]
      ^        ^               ^
      |        |               |----- rb_last
      |        |--------- maybe more than 10K allocated iova nodes
      |------- for 32-bit devices, cached32_node remembers the latest freed
               node, which helps reduce the number of nodes checked

This patch series adds a new member, "cached_node", to serve 64-bit devices,
just as cached32_node serves 32-bit devices. A small toy sketch of the idea
is appended below my signature.

>
>>
>>>
>>>>>
>>>>>
>>>>> Joerg
>>>>>
>>>>>
>>>>> .
>>>>>
>>>>
>>>> --
>>>> Thanks!
>>>> BestRegards
>>>>
>>>>
>>>> _______________________________________________
>>>> linux-arm-kernel mailing list
>>>> linux-arm-kernel@lists.infradead.org
>>>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>>>
>>> thanks
>>> Ganapat
>>>
>>> .
>>>
>>
>> --
>> Thanks!
>> BestRegards
>>
>
> thanks
> Ganapat
>
> .
>

--
Thanks!
Best Regards
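
P.S. To illustrate the effect of the search hint, here is a toy userspace
model. It is only my own simplification for illustration, not the code in
drivers/iommu/iova.c and not this patch series; the names NSLOTS,
alloc_slot() and free_slot() are invented for the sketch. The address space
is modelled as fixed-size slots allocated top-down, and a cached index plays
the role that cached_node / cached32_node play for the rb-tree search.

/*
 * Toy model of the search-hint idea, NOT the kernel's iova.c code.
 * The address space is modelled as fixed-size slots allocated top-down.
 * Without a cached hint, every allocation rescans from the top; with
 * the hint, the scan starts where the last free (or allocation)
 * happened, which is the effect cached_node / cached32_node have on
 * the rb-tree search.
 */
#include <stdbool.h>
#include <stdio.h>

#define NSLOTS 1024

static bool used[NSLOTS];
static long cached = NSLOTS - 1;   /* hint: no free slot lies above this */

/* Allocate the highest free slot; returns its index or -1 if full. */
static long alloc_slot(bool use_hint)
{
	long start = use_hint ? cached : NSLOTS - 1;
	long i;

	for (i = start; i >= 0; i--) {
		if (!used[i]) {
			used[i] = true;
			if (use_hint)
				cached = i;   /* next search starts here */
			return i;
		}
	}
	return -1;
}

static void free_slot(long i)
{
	used[i] = false;
	if (i > cached)
		cached = i;   /* a slot above the hint became free */
}

int main(void)
{
	long i;

	/* Fill most of the space, then free and re-allocate in the middle. */
	for (i = 0; i < 900; i++)
		alloc_slot(true);

	free_slot(500);
	printf("realloc with hint    -> slot %ld\n", alloc_slot(true));

	free_slot(500);
	printf("realloc without hint -> slot %ld\n", alloc_slot(false));
	return 0;
}

Both calls return slot 500, but with the hint the scan starts right at the
freed slot, while without it every used slot above 500 is walked again. In
this toy model that is a few hundred extra checks; in the rb-tree case it
corresponds to the >10K nodes seen in my tracing above.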