Subject: Re: [PATCH 4/7] vfio: iommu_type1: Fix missing dirty page when promote pinned_scope
From: zhukeqian
To: Alex Williamson
CC: Cornelia Huck, Marc Zyngier, Will Deacon, Robin Murphy, Joerg Roedel,
 Catalin Marinas, James Morse, Suzuki K Poulose, Sean Christopherson,
 Julien Thierry, Mark Brown, Thomas Gleixner, Andrew Morton, Alexios Zavras
Date: Tue, 15 Dec 2020 17:37:11 +0800
References: <20201210073425.25960-1-zhukeqian1@huawei.com>
 <20201210073425.25960-5-zhukeqian1@huawei.com>
 <20201214170459.50cb8729@omen.home>
In-Reply-To: <20201214170459.50cb8729@omen.home>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Alex,

On 2020/12/15 8:04, Alex Williamson wrote:
> On Thu, 10 Dec 2020 15:34:22 +0800
> Keqian Zhu wrote:
>
>> When we pin or detach a group which is not dirty tracking capable,
>> we will try to promote pinned_scope of vfio_iommu.
>>
>> If we succeed to do so, vfio only report pinned_scope as dirty to
>> userspace next time, but these memory written before pin or detach
>> is missed.
>>
>> The solution is that we must populate all dma range as dirty before
>> promoting pinned_scope of vfio_iommu.
>
> Please don't bury fixes patches into a series with other optimizations
> and semantic changes.  Send it separately.
>
OK, I will.

>>
>> Signed-off-by: Keqian Zhu
>> ---
>>  drivers/vfio/vfio_iommu_type1.c | 18 ++++++++++++++++++
>>  1 file changed, 18 insertions(+)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index bd9a94590ebc..00684597b098 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -1633,6 +1633,20 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
>>  	return group;
>>  }
>>
>> +static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
>> +{
>> +	struct rb_node *n;
>> +	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
>> +
>> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +		unsigned long nbits = dma->size >> pgshift;
>> +
>> +		if (dma->iommu_mapped)
>> +			bitmap_set(dma->bitmap, 0, nbits);
>> +	}
>> +}
>
> If we detach a group which results in only non-IOMMU backed mdevs,
> don't we also clear dma->iommu_mapped as part of vfio_unmap_unpin()
> such that this test is invalid?  Thanks,

Good spot :-). In that situation the code will indeed skip bitmap_set.
We should set the bitmap unconditionally when the vfio_iommu is promoted,
as we must have had an IOMMU backed domain before promoting the vfio_iommu.

Besides, I think we should also mark dirty in vfio_remove_dma if dirty
tracking is active. Right?

Thanks,
Keqian

>
> Alex
>
>> +
>>  static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
>>  {
>>  	struct vfio_domain *domain;
>> @@ -1657,6 +1671,10 @@ static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
>>  	}
>>
>>  	iommu->pinned_page_dirty_scope = true;
>> +
>> +	/* Set all bitmap to avoid missing dirty page */
>> +	if (iommu->dirty_page_tracking)
>> +		vfio_populate_bitmap_all(iommu);
>>  }
>>
>>  static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
>
> .
>