Date: Tue, 10 Aug 2021 18:35:45 +0800
From: Ming Lei
To: John Garry
Cc: Robin Murphy, linux-kernel@vger.kernel.org,
    linux-nvme@lists.infradead.org, iommu@lists.linux-foundation.org,
    Will Deacon, linux-arm-kernel@lists.infradead.org
Subject: Re: [bug report] iommu_dma_unmap_sg() is very slow then running IO from remote numa node

On Tue, Aug 10, 2021 at 10:36:47AM +0100, John Garry wrote:
> On 28/07/2021 16:17, Ming Lei wrote:
> > > > > Have you tried turning off the IOMMU to ensure that this is
> > > > > really just an IOMMU problem?
> > > > >
> > > > > You can try setting CONFIG_ARM_SMMU_V3=n in the defconfig or
> > > > > passing the cmdline param iommu.passthrough=1 to bypass the SMMU
> > > > > (equivalent to disabling it for kernel drivers).
> > > > Bypassing the SMMU via iommu.passthrough=1 basically makes no
> > > > difference to this issue.
> > > A ~90% throughput drop still seems to me too high to be a software
> > > issue, more so since I don't see anything similar on my system. And
> > > that throughput drop does not come with a corresponding drop in
> > > total CPU usage, going by the fio log.
> > >
> > > Do you know if anyone has run memory benchmarks on this board to
> > > measure the NUMA effect? I think lmbench or stream could be used
> > > for this.
> > https://lore.kernel.org/lkml/YOhbc5C47IzC893B@T590/
>
> Hi Ming,
>
> Out of curiosity, did you investigate this topic any further?

IMO, the issue is probably on the device/system side, since completion
latency increases a lot while submission latency is unchanged. Either
submissions aren't committed to the hardware in time, or the completion
status isn't updated by the hardware in time, from the CPU's point of
view. We have tried updating to new FW, but saw no difference.
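FWIW, that submission/completion split can be read directly from fio's
per-job latency output, which reports slat (submission) and clat
(completion) separately; a run along these lines shows both sides (the
device path and job parameters here are only placeholders):

    fio --name=latcheck --filename=/dev/nvme0n1 --direct=1 --rw=randread \
        --bs=4k --iodepth=64 --ioengine=io_uring --runtime=30 --time_based
    # "slat" is submission latency and "clat" completion latency; a large
    # clat regression with flat slat points at the device/system side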
> And you also asked about my results earlier:
>
> On 22/07/2021 16:54, Ming Lei wrote:
> >> [   52.035895] nvme 0000:81:00.0: Adding to iommu group 5
> >> [   52.047732] nvme nvme0: pci function 0000:81:00.0
> >> [   52.067216] nvme nvme0: 22/0/2 default/read/poll queues
> >> [   52.087318]  nvme0n1: p1
> >>
> >> So I get these results:
> >> cpu0  335K
> >> cpu32 346K
> >> cpu64 300K
> >> cpu96 300K
> >>
> >> So still no massive changes.
> >
> > In your last email, the results were the following with irq-mode
> > io_uring:
> >
> > cpu0  497K
> > cpu4  307K
> > cpu32 566K
> > cpu64 488K
> > cpu96 508K
> >
> > So it looks like you get a much worse result with real io_polling?
>
> Would the expectation be that I at least get the same performance with
> io_polling here?

io_polling is supposed to improve IO latency a lot compared with irq
mode, and the perf data shows that clearly on x86_64.

> Anything else you can suggest trying to investigate this lower
> performance?

You may try comparing irq mode and polling to narrow down the possible
reasons (e.g. along the lines sketched below); I have no exact suggestion
beyond that, :-(

Thanks,
Ming
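As a concrete starting point for that comparison, the same fio job can be
run in both modes, toggling only polled completions via --hipri (device
path, CPU number and job parameters are placeholders; polled IO also
needs nvme poll queues, as in the 22/0/2 default/read/poll queues setup
above):

    # irq mode: completions are delivered by interrupt
    taskset -c 0 fio --name=irq --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --iodepth=64 --runtime=30 --time_based \
        --ioengine=io_uring
    # polled mode: the identical job plus --hipri, so io_uring busy-polls
    # the nvme poll queues instead of waiting for an interrupt
    taskset -c 0 fio --name=poll --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --iodepth=64 --runtime=30 --time_based \
        --ioengine=io_uring --hipri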
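And on the memory-benchmark question earlier in the thread: the NUMA
effect can be exposed by pinning a benchmark such as STREAM to one node
while binding its memory locally vs. remotely (the ./stream binary path
and node numbers are placeholders):

    numactl --cpunodebind=0 --membind=0 ./stream   # node-local memory
    numactl --cpunodebind=0 --membind=1 ./stream   # remote-node memory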