Subject: Re: [PATCH v3 01/12] iommu: Introduce dirty log tracking framework
From: Keqian Zhu
To: Lu Baolu, Robin Murphy, Will Deacon, Joerg Roedel, Yi Sun,
    Jean-Philippe Brucker, Jonathan Cameron, Tian Kevin
Cc: Alex Williamson, Cornelia Huck, Kirti Wankhede
References: <20210413085457.25400-1-zhukeqian1@huawei.com>
    <20210413085457.25400-2-zhukeqian1@huawei.com>
Date: Thu, 15 Apr 2021 14:18:09 +0800
X-Mailing-List:
linux-kernel@vger.kernel.org

Hi Baolu,

Thanks for the review!

On 2021/4/14 15:00, Lu Baolu wrote:
> Hi Keqian,
>
> On 4/13/21 4:54 PM, Keqian Zhu wrote:
>> Some types of IOMMU are capable of tracking DMA dirty log, such as
>> ARM SMMU with HTTU or Intel IOMMU with SLADE. This introduces the
>> dirty log tracking framework in the IOMMU base layer.
>>
>> Three new essential interfaces are added, and we maintain the status
>> of dirty log tracking in iommu_domain:
>> 1. iommu_switch_dirty_log: Perform actions to start|stop dirty log tracking
>> 2. iommu_sync_dirty_log: Sync dirty log from IOMMU into a dirty bitmap
>> 3. iommu_clear_dirty_log: Clear dirty log of IOMMU by a mask bitmap
>>
>> A new dev feature is added to indicate whether a specific type of
>> IOMMU hardware supports dirty log tracking and whether its driver
>> implements these interfaces.
>>
>> Signed-off-by: Keqian Zhu
>> Signed-off-by: Kunkun Jiang
>> ---
>>  drivers/iommu/iommu.c | 150 ++++++++++++++++++++++++++++++++++++++++++
>>  include/linux/iommu.h |  53 +++++++++++++++
>>  2 files changed, 203 insertions(+)
>>
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index d0b0a15dba84..667b2d6d2fc0 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -1922,6 +1922,7 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
>>  	domain->type = type;
>>  	/* Assume all sizes by default; the driver may override this later */
>>  	domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;
>> +	mutex_init(&domain->switch_log_lock);
>>  	return domain;
>>  }
>> @@ -2720,6 +2721,155 @@ int iommu_domain_set_attr(struct iommu_domain *domain,
>>  }
>>  EXPORT_SYMBOL_GPL(iommu_domain_set_attr);
>> +int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
>> +			   unsigned long iova, size_t size, int prot)
>> +{
>> +	const struct iommu_ops *ops = domain->ops;
>> +	int ret;
>> +
>> +	if (unlikely(!ops || !ops->switch_dirty_log))
>> +		return -ENODEV;
>> +
>> +	mutex_lock(&domain->switch_log_lock);
>> +	if (enable && domain->dirty_log_tracking) {
>> +		ret = -EBUSY;
>> +		goto out;
>> +	} else if (!enable && !domain->dirty_log_tracking) {
>> +		ret = -EINVAL;
>> +		goto out;
>> +	}
>> +
>> +	ret = ops->switch_dirty_log(domain, enable, iova, size, prot);
>> +	if (ret)
>> +		goto out;
>> +
>> +	domain->dirty_log_tracking = enable;
>> +out:
>> +	mutex_unlock(&domain->switch_log_lock);
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(iommu_switch_dirty_log);
>
> Since you also added IOMMU_DEV_FEAT_HWDBM, I am wondering what's the
> difference between
>
> iommu_switch_dirty_log(on) vs. iommu_dev_enable_feature(IOMMU_DEV_FEAT_HWDBM)
>
> iommu_switch_dirty_log(off) vs. iommu_dev_disable_feature(IOMMU_DEV_FEAT_HWDBM)

Indeed. As I see it, IOMMU_DEV_FEAT_AUX is not switchable, so
enable/disable do not apply to it, while IOMMU_DEV_FEAT_SVA is
switchable, so those interfaces can be used for it. IOMMU_DEV_FEAT_HWDBM
only indicates whether the hardware supports HWDBM, so we should design
it as not switchable.

I will modify the commit message of patch #12, thanks!
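[Not part of the patch; just to make the discussion concrete. The
enable/disable checks in iommu_switch_dirty_log() form a small state
machine. A minimal user-space sketch, with a stubbed struct and a
hypothetical function name standing in for the kernel API:]

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's struct iommu_domain. */
struct iommu_domain {
	bool dirty_log_tracking;
};

/*
 * Mirrors the double-switch checks quoted above: enabling twice is
 * -EBUSY, disabling when already off is -EINVAL; otherwise the state
 * flips (the real code calls ops->switch_dirty_log() in between).
 */
static int switch_dirty_log(struct iommu_domain *d, bool enable)
{
	if (enable && d->dirty_log_tracking)
		return -EBUSY;		/* tracking already started */
	if (!enable && !d->dirty_log_tracking)
		return -EINVAL;		/* tracking already stopped */
	d->dirty_log_tracking = enable;
	return 0;
}
```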
>
>> +
>> +int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova,
>> +			 size_t size, unsigned long *bitmap,
>> +			 unsigned long base_iova, unsigned long bitmap_pgshift)
>> +{
>> +	const struct iommu_ops *ops = domain->ops;
>> +	unsigned int min_pagesz;
>> +	size_t pgsize;
>> +	int ret = 0;
>> +
>> +	if (unlikely(!ops || !ops->sync_dirty_log))
>> +		return -ENODEV;
>> +
>> +	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
>> +	if (!IS_ALIGNED(iova | size, min_pagesz)) {
>> +		pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n",
>> +		       iova, size, min_pagesz);
>> +		return -EINVAL;
>> +	}
>> +
>> +	mutex_lock(&domain->switch_log_lock);
>> +	if (!domain->dirty_log_tracking) {
>> +		ret = -EINVAL;
>> +		goto out;
>> +	}
>> +
>> +	while (size) {
>> +		pgsize = iommu_pgsize(domain, iova, size);
>> +
>> +		ret = ops->sync_dirty_log(domain, iova, pgsize,
>> +					  bitmap, base_iova, bitmap_pgshift);
>
> Any reason why you want to do this in a per-4K-page manner? This can
> lead to a lot of indirect calls and bad performance.
>
> How about a sync_dirty_pages()?

The name of iommu_pgsize() is a bit puzzling. It actually computes the
max page size that fits into the remaining range, so pgsize can be a
large page size even if the underlying mapping is 4K. __iommu_unmap()
has similar logic.

BRs,
Keqian
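PS: for readers following the thread, a rough user-space replica of the
page-size selection described above. pick_pgsize(), fls_ul() and
ffs_ul() are hypothetical stand-ins for iommu_pgsize() and the kernel's
__fls()/__ffs(); the real helper lives in drivers/iommu/iommu.c and (in
the map path) also folds the physical address into the alignment check.

```c
#include <assert.h>
#include <stddef.h>

/* User-space stand-ins for the kernel bit helpers (x must be non-zero). */
static unsigned int fls_ul(unsigned long x) { return 63 - __builtin_clzl(x); }
static unsigned int ffs_ul(unsigned long x) { return __builtin_ctzl(x); }

/*
 * Pick the largest page size from pgsize_bitmap that fits into the
 * remaining 'size' and respects the alignment of 'iova'. This is why
 * the per-iteration callback is not necessarily per-4K: a 2M-aligned,
 * 2M-sized chunk is handled in one call when 2M is in the bitmap.
 */
static size_t pick_pgsize(unsigned long pgsize_bitmap,
			  unsigned long iova, size_t size)
{
	unsigned int pgsize_idx = fls_ul(size);	/* max size that fits */
	unsigned long mask;

	if (iova) {				/* alignment requirement */
		unsigned int align_idx = ffs_ul(iova);
		if (align_idx < pgsize_idx)
			pgsize_idx = align_idx;
	}

	mask = (1UL << (pgsize_idx + 1)) - 1;	/* all sizes <= limit */
	mask &= pgsize_bitmap;			/* ... that hw supports */
	return 1UL << fls_ul(mask);		/* largest of those */
}
```

With a 4K|2M|1G bitmap (0x40201000), a 2M-aligned 2M range yields one
2M step, while shifting the iova by 4K drops the selection back to 4K.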