From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Yi Sun, Tian Kevin
Cc: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker,
    Jonathan Cameron, Lu Baolu
Subject: [PATCH 1/3] vfio/iommu_type1: Add HWDBM status maintenance
Date: Tue, 13 Apr 2021 17:14:43 +0800
Message-ID: <20210413091445.7448-2-zhukeqian1@huawei.com>
In-Reply-To: <20210413091445.7448-1-zhukeqian1@huawei.com>
References: <20210413091445.7448-1-zhukeqian1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kunkun Jiang

We are going to optimize
dirty log tracking based on the iommu HWDBM feature, but the dirty log
reported by the iommu is useful only when all iommu-backed groups support
HWDBM.

This maintains a counter in vfio_iommu, which the dirty bitmap population
policy of the next patch relies on, and a counter in vfio_domain, which
the dirty log switching policy of the next patch relies on.

Co-developed-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
 drivers/vfio/vfio_iommu_type1.c | 44 +++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 45cbfd4879a5..9cb9ce021b22 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -73,6 +73,7 @@ struct vfio_iommu {
 	unsigned int		vaddr_invalid_count;
 	uint64_t		pgsize_bitmap;
 	uint64_t		num_non_pinned_groups;
+	uint64_t		num_non_hwdbm_groups;
 	wait_queue_head_t	vaddr_wait;
 	bool			v2;
 	bool			nesting;
@@ -85,6 +86,7 @@ struct vfio_domain {
 	struct iommu_domain	*domain;
 	struct list_head	next;
 	struct list_head	group_list;
+	uint64_t		num_non_hwdbm_groups;
 	int			prot;		/* IOMMU_CACHE */
 	bool			fgsp;		/* Fine-grained super pages */
 };
@@ -116,6 +118,7 @@ struct vfio_group {
 	struct list_head	next;
 	bool			mdev_group;	/* An mdev group */
 	bool			pinned_page_dirty_scope;
+	bool			iommu_hwdbm;	/* For iommu-backed group */
 };
 
 struct vfio_iova {
@@ -2252,6 +2255,44 @@ static void vfio_iommu_iova_insert_copy(struct vfio_iommu *iommu,
 	list_splice_tail(iova_copy, iova);
 }
 
+static int vfio_dev_enable_feature(struct device *dev, void *data)
+{
+	enum iommu_dev_features *feat = data;
+
+	if (iommu_dev_feature_enabled(dev, *feat))
+		return 0;
+
+	return iommu_dev_enable_feature(dev, *feat);
+}
+
+static bool vfio_group_supports_hwdbm(struct vfio_group *group)
+{
+	enum iommu_dev_features feat = IOMMU_DEV_FEAT_HWDBM;
+
+	return !iommu_group_for_each_dev(group->iommu_group, &feat,
+					 vfio_dev_enable_feature);
+}
+
+/*
+ * Called after a new group is added to the group_list of domain, or before an
+ * old group is removed from the group_list of domain.
+ */
+static void vfio_iommu_update_hwdbm(struct vfio_iommu *iommu,
+				    struct vfio_domain *domain,
+				    struct vfio_group *group,
+				    bool attach)
+{
+	/* Update the HWDBM status of group, domain and iommu */
+	group->iommu_hwdbm = vfio_group_supports_hwdbm(group);
+	if (!group->iommu_hwdbm && attach) {
+		domain->num_non_hwdbm_groups++;
+		iommu->num_non_hwdbm_groups++;
+	} else if (!group->iommu_hwdbm && !attach) {
+		domain->num_non_hwdbm_groups--;
+		iommu->num_non_hwdbm_groups--;
+	}
+}
+
 static int vfio_iommu_type1_attach_group(void *iommu_data,
 					 struct iommu_group *iommu_group)
 {
@@ -2409,6 +2450,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			vfio_iommu_detach_group(domain, group);
 			if (!vfio_iommu_attach_group(d, group)) {
 				list_add(&group->next, &d->group_list);
+				vfio_iommu_update_hwdbm(iommu, d, group, true);
 				iommu_domain_free(domain->domain);
 				kfree(domain);
 				goto done;
@@ -2435,6 +2477,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	list_add(&domain->next, &iommu->domain_list);
 
 	vfio_update_pgsize_bitmap(iommu);
+	vfio_iommu_update_hwdbm(iommu, domain, group, true);
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
@@ -2618,6 +2661,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
+		vfio_iommu_update_hwdbm(iommu, domain, group, false);
 		update_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
-- 
2.19.1