Subject: Re: [PATCH 0/7] add non-strict mode support for arm-smmu-v3
From: "Leizhen (ThunderTown)"
To: Robin Murphy, Hanjun Guo, Will Deacon, Matthias Brugger, Rob Clark,
 Joerg Roedel, linux-mediatek, linux-arm-msm, linux-arm-kernel, iommu,
 linux-kernel
CC: Libin, Guozhu Li, Xinwei Hu
Date: Sun, 10 Jun 2018 19:13:13 +0800
Message-ID: <5B1D07C9.6090507@huawei.com>
In-Reply-To: <5B10ECBB.209@huawei.com>

On 2018/6/1 14:50, Leizhen (ThunderTown) wrote:
>
> On 2018/5/31 22:25, Robin Murphy wrote:
>> On 31/05/18 14:49, Hanjun Guo wrote:
>>> Hi Robin,
>>>
>>> On 2018/5/31 19:24, Robin Murphy wrote:
>>>> On 31/05/18 08:42, Zhen Lei wrote:
>>>>> In general, an IOMMU unmap operation follows the steps below:
>>>>> 1. remove the mapping from the page table for the specified IOVA range
>>>>> 2. execute a TLBI command to invalidate the mapping cached in the TLB
>>>>> 3. wait for the TLBI operation to finish
>>>>> 4. free the IOVA resource
>>>>> 5. free the physical memory resource
>>>>>
>>>>> This can be a problem when unmaps are very frequent, because the
>>>>> combination of the TLBI and the wait consumes a lot of time. A
>>>>> feasible method is to defer the TLBI and IOVA-free operations: once
>>>>> a certain number have accumulated, or a specified time has elapsed,
>>>>> execute a single tlbi_all command to clean up the TLB, then free the
>>>>> backed-up IOVAs. We call this non-strict mode.
>>>>>
>>>>> It must be noted that, although the mapping has already been removed
>>>>> from the page table, it may still exist in the TLB, and the freed
>>>>> physical memory may be reused by others. So an attacker can keep
>>>>> accessing memory through the just-freed IOVA to obtain sensitive
>>>>> data or corrupt memory. Therefore VFIO should always choose strict
>>>>> mode.
>>>>>
>>>>> Some may consider deferring the physical memory free as well, which
>>>>> would still preserve strict semantics. But for the map_sg cases, the
>>>>> memory allocation is not controlled by the IOMMU APIs, so that is
>>>>> not enforceable.
>>>>>
>>>>> Fortunately, Intel and AMD have already implemented non-strict mode
>>>>> and put the queue_iova() operation into the common file dma-iommu.c,
>>>>> and my work is based on that. The difference is that the arm-smmu-v3
>>>>> driver calls the common IOMMU APIs to unmap, while the Intel and AMD
>>>>> IOMMU drivers do not.
>>>>>
>>>>> Below is the performance data of strict vs non-strict for an NVMe
>>>>> device:
>>>>> Random Read IOPS: 146K (strict) vs 573K (non-strict)
>>>>> Random Write IOPS: 143K (strict) vs 513K (non-strict)
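
To make the deferred-invalidation scheme quoted above concrete, here is a
minimal C sketch of a flush queue. Everything in it is illustrative: the
type and helper names (struct flush_queue, fq_queue_iova(), tlbi_all(),
iova_free()) and the thresholds are invented for this example and are not
the actual dma-iommu.c implementation.

#include <stddef.h>

/* Assumed externals for this sketch: one invalidate-all-plus-sync
 * command, and the IOVA allocator's free routine. */
extern void tlbi_all(void);
extern void iova_free(unsigned long iova, size_t size);

#define FQ_SIZE       256     /* flush after this many deferred unmaps */
#define FQ_TIMEOUT_MS 10      /* ...or after this long (timer omitted) */

struct fq_entry {
	unsigned long iova;
	size_t size;
};

struct flush_queue {
	struct fq_entry entries[FQ_SIZE];
	unsigned int count;
};

/* Steps 2-4 of the strict sequence, batched: one TLBI-all covers every
 * deferred entry, then all the backed-up IOVAs are released at once. */
static void fq_flush(struct flush_queue *fq)
{
	unsigned int i;

	tlbi_all();
	for (i = 0; i < fq->count; i++)
		iova_free(fq->entries[i].iova, fq->entries[i].size);
	fq->count = 0;
}

/* Called from unmap after step 1 (the page-table entry is already
 * cleared); instead of a TLBI+sync per unmap, just record the IOVA. */
static void fq_queue_iova(struct flush_queue *fq,
			  unsigned long iova, size_t size)
{
	fq->entries[fq->count].iova = iova;
	fq->entries[fq->count].size = size;
	if (++fq->count == FQ_SIZE)
		fq_flush(fq);
	/* a timer firing every FQ_TIMEOUT_MS would also call fq_flush() */
}

The security note in the cover letter is exactly the window between
fq_queue_iova() and the eventual fq_flush(), during which a stale TLB
entry can still translate the just-freed IOVA.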
>>>>
>>>> What hardware is this on? If it's SMMUv3 without MSIs (e.g. D05),
>>>> then you'll still be using the rubbish globally-blocking sync
>>>> implementation. If that is the case, I'd be very interested to see
>>>> how much there is to gain from just improving that - I've had a patch
>>>> kicking around for a while[1] (also on a rebased branch at [2]), but
>>>> don't have the means for serious performance testing.

> I will try your patch to see how much it can improve.

Hi Robin, I applied your patch and got the improvement below:
Random Read IOPS: 146K --> 214K
Random Write IOPS: 143K --> 212K

> I think the best way to resolve the globally-blocking sync is for the
> hardware to provide a 64-bit CONS register, so that it can never wrap
> and the spinlock can also be removed.
>
>>>
>>> The hardware is the new D06, whose SMMU supports MSIs,
>>
>> Cool! Now that profiling is fairly useful since we got rid of most of
>> the locks, are you able to get an idea of how the overhead in the
>> normal case is distributed between arm_smmu_cmdq_insert_cmd() and
>> __arm_smmu_sync_poll_msi()? We're always trying to improve our
>> understanding of where command-queue-related overheads turn out to be
>> in practice, and there's still potentially room to do nicer things
>> than TLBI_NH_ALL ;)

> Even if the software has no overhead, there may still be a problem,
> because the SMMU needs to execute the commands in sequence, especially
> before the globally-blocking sync has been removed. Based on the actual
> execution time of a single TLBI and sync, we can derive the theoretical
> upper limit.
>
> BTW, I will reply to the rest of the mail next week. I'm busy with
> other things now.

>>
>> Robin.
>>
>>> it's not D05 :)
>>>
>>> Thanks
>>> Hanjun

-- 
Thanks!
Best regards
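
As a rough worked example of that theoretical upper limit, here is a tiny
C calculation. All the timing numbers are assumptions for illustration,
not measurements on any real SMMU:

#include <stdio.h>

int main(void)
{
	/* Assumed per-command costs; real values depend on hardware. */
	double tlbi_ns = 500.0;              /* one TLBI command        */
	double sync_ns = 1500.0;             /* one CMD_SYNC completion */
	double per_unmap_ns = tlbi_ns + sync_ns;

	/* With commands executed strictly in sequence, one TLBI+sync
	 * per unmap caps the achievable unmap rate regardless of how
	 * fast the driver software is: */
	printf("cap: %.0fK unmaps/s\n", 1e9 / per_unmap_ns / 1e3);
	return 0;
}

With these assumed numbers the cap is 500K unmaps/s, so an I/O pattern
issuing one unmap per request would be capped at the same IOPS figure
even with zero software overhead, which is the sense in which the
sequential-execution bound matters.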