From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Yi Sun,
	Jean-Philippe Brucker, Jonathan Cameron, Tian Kevin, Lu Baolu
Cc: Alex Williamson, Cornelia Huck, Kirti Wankhede
Subject: [PATCH v3 10/12] iommu/arm-smmu-v3: Realize sync_dirty_log iommu ops
Date: Tue, 13 Apr 2021 16:54:55 +0800
Message-ID: <20210413085457.25400-11-zhukeqian1@huawei.com>
X-Mailer: git-send-email 2.8.4.windows.1
In-Reply-To: <20210413085457.25400-1-zhukeqian1@huawei.com>
References: <20210413085457.25400-1-zhukeqian1@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kunkun Jiang

During dirty
log tracking, userspace will try to retrieve the dirty log from the
IOMMU if it supports hardware dirty log tracking. Scan each leaf TTD
and treat it as dirty if it is writable. As HTTU is enabled only for
stage 1, it is sufficient to check that AP[2] is not set.

Co-developed-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 30 +++++++
 drivers/iommu/io-pgtable-arm.c              | 90 +++++++++++++++++++++
 include/linux/io-pgtable.h                  |  4 +
 3 files changed, 124 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 52c6f3e74d6f..9eb209a07acc 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2627,6 +2627,35 @@ static int arm_smmu_switch_dirty_log(struct iommu_domain *domain, bool enable,
 	return 0;
 }
 
+static int arm_smmu_sync_dirty_log(struct iommu_domain *domain,
+				   unsigned long iova, size_t size,
+				   unsigned long *bitmap,
+				   unsigned long base_iova,
+				   unsigned long bitmap_pgshift)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_HD))
+		return -ENODEV;
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_S1)
+		return -EINVAL;
+
+	if (!ops || !ops->sync_dirty_log) {
+		pr_err("io-pgtable does not implement sync_dirty_log\n");
+		return -ENODEV;
+	}
+
+	/*
+	 * Flush the IOTLB to ensure all in-flight transactions are completed.
+	 * See doc IHI0070Da 3.13.4 "HTTU behavior summary".
+	 */
+	arm_smmu_flush_iotlb_all(domain);
+	return ops->sync_dirty_log(ops, iova, size, bitmap, base_iova,
+				   bitmap_pgshift);
+}
+
 static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
 {
 	return iommu_fwspec_add_ids(dev, args->args, 1);
@@ -2729,6 +2758,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.split_block		= arm_smmu_split_block,
 	.merge_page		= arm_smmu_merge_page,
 	.switch_dirty_log	= arm_smmu_switch_dirty_log,
+	.sync_dirty_log		= arm_smmu_sync_dirty_log,
 	.of_xlate		= arm_smmu_of_xlate,
 	.get_resv_regions	= arm_smmu_get_resv_regions,
 	.put_resv_regions	= generic_iommu_put_resv_regions,
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 9028328b99b0..67a208a05ab2 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -877,6 +877,95 @@ static size_t arm_lpae_merge_page(struct io_pgtable_ops *ops, unsigned long iova
 	return __arm_lpae_merge_page(data, iova, paddr, size, lvl, ptep, prot);
 }
 
+static int __arm_lpae_sync_dirty_log(struct arm_lpae_io_pgtable *data,
+				     unsigned long iova, size_t size,
+				     int lvl, arm_lpae_iopte *ptep,
+				     unsigned long *bitmap,
+				     unsigned long base_iova,
+				     unsigned long bitmap_pgshift)
+{
+	arm_lpae_iopte pte;
+	struct io_pgtable *iop = &data->iop;
+	size_t base, next_size;
+	unsigned long offset;
+	int nbits, ret;
+
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return -EINVAL;
+
+	ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
+	pte = READ_ONCE(*ptep);
+	if (WARN_ON(!pte))
+		return -EINVAL;
+
+	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
+		if (iopte_leaf(pte, lvl, iop->fmt)) {
+			if (pte & ARM_LPAE_PTE_AP_RDONLY)
+				return 0;
+
+			/* It is writable, set the bitmap */
+			nbits = size >> bitmap_pgshift;
+			offset = (iova - base_iova) >> bitmap_pgshift;
+			bitmap_set(bitmap, offset, nbits);
+			return 0;
+		} else {
+			/* Traverse the next level */
+			next_size = ARM_LPAE_BLOCK_SIZE(lvl + 1, data);
+			ptep = iopte_deref(pte, data);
+			for (base = 0; base < size; base += next_size) {
+				ret = __arm_lpae_sync_dirty_log(data,
+						iova + base, next_size, lvl + 1,
+						ptep, bitmap, base_iova,
+						bitmap_pgshift);
+				if (ret)
+					return ret;
+			}
+			return 0;
+		}
+	} else if (iopte_leaf(pte, lvl, iop->fmt)) {
+		if (pte & ARM_LPAE_PTE_AP_RDONLY)
+			return 0;
+
+		/* Though the size is smaller than the block, still set the bitmap */
+		nbits = size >> bitmap_pgshift;
+		offset = (iova - base_iova) >> bitmap_pgshift;
+		bitmap_set(bitmap, offset, nbits);
+		return 0;
+	}
+
+	/* Keep on walking */
+	ptep = iopte_deref(pte, data);
+	return __arm_lpae_sync_dirty_log(data, iova, size, lvl + 1, ptep,
+					 bitmap, base_iova, bitmap_pgshift);
+}
+
+static int arm_lpae_sync_dirty_log(struct io_pgtable_ops *ops,
+				   unsigned long iova, size_t size,
+				   unsigned long *bitmap,
+				   unsigned long base_iova,
+				   unsigned long bitmap_pgshift)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	arm_lpae_iopte *ptep = data->pgd;
+	int lvl = data->start_level;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	long iaext = (s64)iova >> cfg->ias;
+
+	if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size))
+		return -EINVAL;
+
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
+		iaext = ~iaext;
+	if (WARN_ON(iaext))
+		return -EINVAL;
+
+	if (data->iop.fmt != ARM_64_LPAE_S1 &&
+	    data->iop.fmt != ARM_32_LPAE_S1)
+		return -EINVAL;
+
+	return __arm_lpae_sync_dirty_log(data, iova, size, lvl, ptep,
+					 bitmap, base_iova, bitmap_pgshift);
+}
+
 static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg)
 {
 	unsigned long granule, page_sizes;
@@ -957,6 +1046,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg)
 		.iova_to_phys	= arm_lpae_iova_to_phys,
 		.split_block	= arm_lpae_split_block,
 		.merge_page	= arm_lpae_merge_page,
+		.sync_dirty_log	= arm_lpae_sync_dirty_log,
 	};
 
 	return data;
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index dbee6170de7f..9e2757d12066 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -171,6 +171,10 @@ struct io_pgtable_ops {
 			      size_t size);
 	size_t (*merge_page)(struct io_pgtable_ops *ops, unsigned long iova,
 			     phys_addr_t phys, size_t size, int prot);
+	int (*sync_dirty_log)(struct io_pgtable_ops *ops,
+			      unsigned long iova, size_t size,
+			      unsigned long *bitmap, unsigned long base_iova,
+			      unsigned long bitmap_pgshift);
 };
 
 /**
-- 
2.19.1