Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node
To: Ming Lei, linux-nvme@lists.infradead.org, Will Deacon, linux-arm-kernel@lists.infradead.org, iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
From: Robin Murphy
Message-ID: <23e7956b-f3b5-b585-3c18-724165994051@arm.com>
Date: Fri, 9 Jul 2021 11:26:53 +0100

On 2021-07-09 09:38, Ming Lei wrote:
> Hello,
>
> I observed that NVMe performance is very bad when running fio on one
> CPU (aarch64) in a remote NUMA node, compared with the NVMe PCI NUMA node.
>
> Please see the test result[1]: 327K vs. 34.9K IOPS.
>
> Latency trace shows that one big difference is in iommu_dma_unmap_sg():
> 1111 nsecs vs. 25437 nsecs.

Are you able to dig down further into that? iommu_dma_unmap_sg() itself
doesn't do anything particularly special, so whatever makes a difference
is probably happening at a lower level, and I suspect there's probably an
SMMU involved. If for instance it turns out to go all the way down to
__arm_smmu_cmdq_poll_until_consumed() because polling MMIO from the wrong
node is slow, there's unlikely to be much you can do about that other
than the global "go faster" knobs (iommu.strict and iommu.passthrough)
with their associated compromises.

Robin.

> [1] fio test & results
>
> 1) fio test result:
>
> - run fio on local CPU
> taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
>
> IOPS: 327K
> avg latency of iommu_dma_unmap_sg(): 1111 nsecs
>
> - run fio on remote CPU
> taskset -c 80 ~/git/tools/test/nvme/io_uring 10 1 /dev/nvme1n1 4k
> + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread --name=test --group_reporting
>
> IOPS: 34.9K
> avg latency of iommu_dma_unmap_sg(): 25437 nsecs
>
> 2) system info
> [root@ampere-mtjade-04 ~]# lscpu | grep NUMA
> NUMA node(s):        2
> NUMA node0 CPU(s):   0-79
> NUMA node1 CPU(s):   80-159
>
> lspci | grep NVMe
> 0003:01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
>
> [root@ampere-mtjade-04 ~]# cat /sys/block/nvme1n1/device/device/numa_node
> 0
>
> Thanks,
> Ming
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
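
For reference, the two "go faster" knobs mentioned above are standard kernel
command-line parameters (documented in
Documentation/admin-guide/kernel-parameters.txt). A minimal sketch of how they
might be applied; whether either trade-off is acceptable depends on the
system's DMA isolation requirements, and defaults vary by kernel version:

```text
# Append one of these to the kernel command line (e.g. GRUB_CMDLINE_LINUX):

# Lazy (non-strict) DMA unmapping: batch IOTLB invalidations and flush
# them periodically, instead of issuing one synchronous invalidation per
# unmap -- fewer SMMU command-queue polls on the unmap path, at the cost
# of a window where stale translations remain valid.
iommu.strict=0

# Or bypass IOMMU translation for DMA entirely (identity/passthrough
# mode) -- fastest option, but gives up DMA isolation for the device.
iommu.passthrough=1
```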