Subject: Re: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population based on iommu HWDBM
To: Yi Sun, Keqian Zhu
Cc: Mark Rutland, kvm@vger.kernel.org, Catalin Marinas, Kirti Wankhede,
    Will Deacon, kvmarm@lists.cs.columbia.edu, Marc Zyngier,
    jiangkunkun@huawei.com, wanghaibin.wang@huawei.com, kevin.tian@intel.com,
    yan.y.zhao@intel.com, Suzuki K Poulose, Alex Williamson,
    linux-arm-kernel@lists.infradead.org, Cornelia Huck,
    linux-kernel@vger.kernel.org, lushenming@huawei.com,
    iommu@lists.linux-foundation.org, James Morse
References: <20210128151742.18840-1-zhukeqian1@huawei.com>
 <20210128151742.18840-11-zhukeqian1@huawei.com>
 <20210207095630.GA28580@yi.y.sun>
From: Robin Murphy
Message-ID: <8150bd3a-dbb9-2e2b-386b-04e66f4b68dc@arm.com>
Date: Tue, 9 Feb 2021 11:16:08 +0000
In-Reply-To: <20210207095630.GA28580@yi.y.sun>

On 2021-02-07 09:56, Yi Sun wrote:
> Hi,
>
> On 21-01-28 23:17:41, Keqian Zhu wrote:
>
> [...]
>
>> +static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
>> +                                     struct vfio_dma *dma)
>> +{
>> +        struct vfio_domain *d;
>> +
>> +        list_for_each_entry(d, &iommu->domain_list, next) {
>> +                /* Go through all domains anyway, even if some fail */
>> +                iommu_split_block(d->domain, dma->iova, dma->size);
>> +        }
>> +}
>
> This should be a switch to prepare for dirty log start. Per the Intel
> VT-d spec, there is a SLADE bit defined in the Scalable-Mode PASID
> Table Entry; it enables the Accessed/Dirty flags in second-level
> paging entries. So a generic iommu interface would be better here:
> for the Intel iommu it would enable SLADE, while for ARM it would
> split blocks.

From a quick look, VT-d's SLADE and SMMU's HTTU appear to be exactly
the same thing. This step isn't about enabling or disabling that
feature itself (the proposal for SMMU is to simply leave HTTU enabled
all the time); it's about controlling the granularity at which the
dirty status can be detected/reported at all, since that's tied to the
pagetable structure. However, if an IOMMU were to come along with some
other way of reporting dirty status that didn't depend on the
granularity of individual mappings, then indeed it wouldn't need this
operation.

Robin.

>> +
>> +static void vfio_dma_dirty_log_stop(struct vfio_iommu *iommu,
>> +                                    struct vfio_dma *dma)
>> +{
>> +        struct vfio_domain *d;
>> +
>> +        list_for_each_entry(d, &iommu->domain_list, next) {
>> +                /* Go through all domains anyway, even if some fail */
>> +                iommu_merge_page(d->domain, dma->iova, dma->size,
>> +                                 d->prot | dma->prot);
>> +        }
>> +}
>
> Same as the above comment: a generic interface is required here.
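To make that concrete, a generic hook could look something like the
sketch below. To be clear, this is illustrative only - the op name and
signature are made up here, not something that exists in this series
or in the iommu core today. The VT-d callback could then simply toggle
SLADE, while the SMMU one would do the split/merge dance:

/* Hypothetical sketch of a generic dirty-log switch - not from the series */
struct iommu_ops {
        /* ...existing ops... */

        /*
         * Prepare (enable=true) or tear down (enable=false) dirty
         * logging for [iova, iova + size). Drivers that keep dirty
         * state in leaf PTEs may adjust mapping granularity here;
         * others might just flip an enable bit.
         */
        int (*switch_dirty_log)(struct iommu_domain *domain, bool enable,
                                unsigned long iova, size_t size, int prot);
};

int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
                           unsigned long iova, size_t size, int prot)
{
        if (!domain->ops->switch_dirty_log)
                return -ENODEV;

        return domain->ops->switch_dirty_log(domain, enable, iova, size,
                                             prot);
}

With something like that, vfio_dma_dirty_log_start()/stop() would
collapse into a single iommu_switch_dirty_log() call, without VFIO
caring which mechanism the driver uses underneath.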
>
>> +
>> +static void vfio_iommu_dirty_log_switch(struct vfio_iommu *iommu, bool start)
>> +{
>> +        struct rb_node *n;
>> +
>> +        /* Split and merge even if no iommu supports HWDBM yet */
>> +        for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
>> +                struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +
>> +                if (!dma->iommu_mapped)
>> +                        continue;
>> +
>> +                /* Go through all dma ranges anyway, even if some fail */
>> +                if (start)
>> +                        vfio_dma_dirty_log_start(iommu, dma);
>> +                else
>> +                        vfio_dma_dirty_log_stop(iommu, dma);
>> +        }
>> +}
>> +
>>  static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>>                                          unsigned long arg)
>>  {
>> @@ -2812,8 +2900,10 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>>                  pgsize = 1 << __ffs(iommu->pgsize_bitmap);
>>                  if (!iommu->dirty_page_tracking) {
>>                          ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
>> -                        if (!ret)
>> +                        if (!ret) {
>>                                  iommu->dirty_page_tracking = true;
>> +                                vfio_iommu_dirty_log_switch(iommu, true);
>> +                        }
>>                  }
>>                  mutex_unlock(&iommu->lock);
>>                  return ret;
>> @@ -2822,6 +2912,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>>          if (iommu->dirty_page_tracking) {
>>                  iommu->dirty_page_tracking = false;
>>                  vfio_dma_bitmap_free_all(iommu);
>> +                vfio_iommu_dirty_log_switch(iommu, false);
>>          }
>>          mutex_unlock(&iommu->lock);
>>          return 0;
>> --
>> 2.19.1
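P.S. To put the granularity point above in concrete terms: with
HTTU/SLADE the dirty state lives in the leaf pagetable entries, so the
smallest region that can ever be reported dirty is whatever one leaf
maps. A rough illustration (the helper below is made up purely for the
sake of the example; bitmap_set() is the real kernel primitive, and
the iova is assumed relative to the start of the tracked range):

/*
 * If the IOMMU records dirty state per leaf PTE, a single dirty 2MiB
 * block mapping must be reported as 512 dirty 4KiB pages in the
 * user-visible bitmap - hence splitting blocks down to pgsize before
 * logging starts, and merging them back again afterwards.
 */
static void report_dirty_leaf(unsigned long *bitmap, unsigned long iova,
                              size_t leaf_size, size_t pgsize)
{
        /* Mark every pgsize-grained bit covered by the dirty leaf */
        bitmap_set(bitmap, iova / pgsize, leaf_size / pgsize);
}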