Date: Sun, 7 Feb 2021 17:56:30 +0800
From: Yi Sun
To: Keqian Zhu
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
    iommu@lists.linux-foundation.org, Will Deacon, Alex Williamson,
    Marc Zyngier, Catalin Marinas, Kirti Wankhede, Cornelia Huck,
    Mark Rutland, James Morse, Robin Murphy, Suzuki K Poulose,
    wanghaibin.wang@huawei.com, jiangkunkun@huawei.com,
    yuzenghui@huawei.com, lushenming@huawei.com, kevin.tian@intel.com,
    yan.y.zhao@intel.com, baolu.lu@linux.intel.com
Subject: Re: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population based on iommu HWDBM
Message-ID: <20210207095630.GA28580@yi.y.sun>
References: <20210128151742.18840-1-zhukeqian1@huawei.com>
 <20210128151742.18840-11-zhukeqian1@huawei.com>
In-Reply-To: <20210128151742.18840-11-zhukeqian1@huawei.com>

Hi,

On 21-01-28 23:17:41, Keqian Zhu wrote:
[...]
> +static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
> +				      struct vfio_dma *dma)
> +{
> +	struct vfio_domain *d;
> +
> +	list_for_each_entry(d, &iommu->domain_list, next) {
> +		/* Go through all domain anyway even if we fail */
> +		iommu_split_block(d->domain, dma->iova, dma->size);
> +	}
> +}

This should be a switch to prepare for dirty log start. Per the Intel
VT-d spec, there is an SLADE bit defined in the Scalable-Mode PASID
Table Entry, which enables the Accessed/Dirty flags in second-level
paging entries. So a generic iommu interface is better here: for Intel
iommu it enables SLADE, while for ARM it splits the block mappings.
(A rough sketch of such an interface is at the end of this mail.)

> +
> +static void vfio_dma_dirty_log_stop(struct vfio_iommu *iommu,
> +				    struct vfio_dma *dma)
> +{
> +	struct vfio_domain *d;
> +
> +	list_for_each_entry(d, &iommu->domain_list, next) {
> +		/* Go through all domain anyway even if we fail */
> +		iommu_merge_page(d->domain, dma->iova, dma->size,
> +				 d->prot | dma->prot);
> +	}
> +}

Same as the comment above: a generic interface is needed here as well.

> +
> +static void vfio_iommu_dirty_log_switch(struct vfio_iommu *iommu, bool start)
> +{
> +	struct rb_node *n;
> +
> +	/* Split and merge even if all iommu don't support HWDBM now */
> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> +		if (!dma->iommu_mapped)
> +			continue;
> +
> +		/* Go through all dma range anyway even if we fail */
> +		if (start)
> +			vfio_dma_dirty_log_start(iommu, dma);
> +		else
> +			vfio_dma_dirty_log_stop(iommu, dma);
> +	}
> +}
> +
>  static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>  					unsigned long arg)
>  {
> @@ -2812,8 +2900,10 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>  	pgsize = 1 << __ffs(iommu->pgsize_bitmap);
>  	if (!iommu->dirty_page_tracking) {
>  		ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
> -		if (!ret)
> +		if (!ret) {
>  			iommu->dirty_page_tracking = true;
> +			vfio_iommu_dirty_log_switch(iommu, true);
> +		}
>  	}
>  	mutex_unlock(&iommu->lock);
>  	return ret;
> @@ -2822,6 +2912,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>  	if (iommu->dirty_page_tracking) {
>  		iommu->dirty_page_tracking = false;
>  		vfio_dma_bitmap_free_all(iommu);
> +		vfio_iommu_dirty_log_switch(iommu, false);
>  	}
>  	mutex_unlock(&iommu->lock);
>  	return 0;
> --
> 2.19.1
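
To make the suggestion concrete, here is a rough sketch of the generic
interface I have in mind. It is untested and the op name is illustrative
only (switch_dirty_log does not exist in the iommu core today); the
point is just that the vendor driver decides how to react:

/*
 * New op in struct iommu_ops: prepare/finish dirty log tracking for
 * [iova, iova + size). VT-d would set/clear SLADE in the scalable-mode
 * PASID table entry; SMMU would split block mappings on start and
 * merge them back on stop.
 */
int (*switch_dirty_log)(struct iommu_domain *domain, bool enable,
			unsigned long iova, size_t size, int prot);

/* Generic wrapper, e.g. in drivers/iommu/iommu.c. */
int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
			   unsigned long iova, size_t size, int prot)
{
	if (unlikely(!domain->ops->switch_dirty_log))
		return -ENODEV;

	return domain->ops->switch_dirty_log(domain, enable, iova,
					     size, prot);
}

Then vfio_dma_dirty_log_start()/_stop() collapse into one helper that
calls iommu_switch_dirty_log() for each domain, instead of calling
iommu_split_block()/iommu_merge_page() directly.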