Date: Mon, 14 Dec 2020 17:04:59 -0700
From: Alex Williamson
To: Keqian Zhu
Cc: Cornelia Huck, Marc Zyngier, Will Deacon, Robin Murphy, Joerg Roedel,
    Catalin Marinas, James Morse, Suzuki K Poulose, Sean Christopherson,
    Julien Thierry, Mark Brown, Thomas Gleixner, Andrew Morton,
    Alexios Zavras
Subject: Re: [PATCH 4/7] vfio: iommu_type1: Fix missing dirty page when promote pinned_scope
Message-ID: <20201214170459.50cb8729@omen.home>
In-Reply-To: <20201210073425.25960-5-zhukeqian1@huawei.com>
References: <20201210073425.25960-1-zhukeqian1@huawei.com>
 <20201210073425.25960-5-zhukeqian1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 10 Dec 2020 15:34:22 +0800
Keqian Zhu wrote:

> When we pin or detach a group that is not dirty-tracking capable, we
> try to promote the pinned_scope of the vfio_iommu.
>
> If we succeed, vfio will subsequently report only the pinned scope as
> dirty to userspace, so any memory written before the pin or detach is
> missed.
>
> The solution is to populate every dma range as dirty before promoting
> the pinned_scope of the vfio_iommu.
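As context for the discussion below, here is a minimal stand-alone model of
the problem the commit message describes. Every name in it (struct model_dma,
query_dirty(), the main() driver) is hypothetical and only mimics the dirty
reporting decision; it is a sketch under that assumption, not the kernel's
vfio_iommu_type1 code.

    #include <stdbool.h>
    #include <stdio.h>

    struct model_dma {
            bool iommu_mapped;   /* range is mapped through the IOMMU       */
            bool pinned_dirty;   /* dirtied via the pinned-page reporting   */
    };

    /* Models the reporting decision: while any group without pinned-page
     * dirty scope is attached, an iommu_mapped range is conservatively
     * reported all-dirty; after promotion, only pinned pages are reported. */
    static bool query_dirty(bool pinned_page_dirty_scope,
                            const struct model_dma *dma)
    {
            if (!pinned_page_dirty_scope && dma->iommu_mapped)
                    return true;
            return dma->pinned_dirty;
    }

    int main(void)
    {
            struct model_dma dma = { .iommu_mapped = true,
                                     .pinned_dirty = false };

            /* The device may have written anywhere in this range so far. */
            printf("before promotion: dirty=%d\n", query_dirty(false, &dma));

            /* Promote to pinned-page scope without populating the bitmap. */
            printf("after promotion:  dirty=%d (earlier writes are lost)\n",
                   query_dirty(true, &dma));

            /* The patch's approach: mark everything dirty at promotion time. */
            dma.pinned_dirty = true;
            printf("with the fix:     dirty=%d\n", query_dirty(true, &dma));
            return 0;
    }

Run as-is this prints dirty=1, dirty=0, dirty=1; the middle case is the lost
dirty page the patch below is trying to avoid.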
Please don't bury fix patches in a series with other optimizations and
semantic changes; send them separately.

>
> Signed-off-by: Keqian Zhu
> ---
>  drivers/vfio/vfio_iommu_type1.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index bd9a94590ebc..00684597b098 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -1633,6 +1633,20 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
>  	return group;
>  }
>  
> +static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
> +{
> +	struct rb_node *n;
> +	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> +
> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +		unsigned long nbits = dma->size >> pgshift;
> +
> +		if (dma->iommu_mapped)
> +			bitmap_set(dma->bitmap, 0, nbits);
> +	}
> +}

If we detach a group which results in only non-IOMMU backed mdevs,
don't we also clear dma->iommu_mapped as part of vfio_unmap_unpin()
such that this test is invalid?  Thanks,

Alex

> +
>  static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
>  {
>  	struct vfio_domain *domain;
> @@ -1657,6 +1671,10 @@ static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
>  	}
>  
>  	iommu->pinned_page_dirty_scope = true;
> +
> +	/* Set all bitmap to avoid missing dirty page */
> +	if (iommu->dirty_page_tracking)
> +		vfio_populate_bitmap_all(iommu);
>  }
>  
>  static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
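To make the ordering Alex is asking about concrete, here is a minimal
stand-alone model. The helper names (detach_last_iommu_group(),
populate_bitmap_all(), struct model_dma) are hypothetical; the model only
assumes, as the question suggests, that the detach path clears
dma->iommu_mapped via vfio_unmap_unpin() before the promotion-time populate
walks the dma list. It is a sketch of that reading, not the kernel code.

    #include <stdbool.h>
    #include <stdio.h>

    struct model_dma {
            bool iommu_mapped;    /* set while the range is mapped through the IOMMU  */
            bool bitmap_all_set;  /* stand-in for bitmap_set(dma->bitmap, 0, nbits)   */
    };

    /* Stand-in for vfio_unmap_unpin() on the last IOMMU-backed domain:
     * tearing down the mapping clears the flag. */
    static void detach_last_iommu_group(struct model_dma *dma)
    {
            dma->iommu_mapped = false;
    }

    /* Stand-in for the patch's vfio_populate_bitmap_all(): it only marks
     * ranges that are still flagged as IOMMU mapped. */
    static void populate_bitmap_all(struct model_dma *dma)
    {
            if (dma->iommu_mapped)
                    dma->bitmap_all_set = true;
    }

    int main(void)
    {
            struct model_dma dma = { .iommu_mapped = true,
                                     .bitmap_all_set = false };

            detach_last_iommu_group(&dma);  /* flag cleared first ...            */
            populate_bitmap_all(&dma);      /* ... so this range is skipped here */

            printf("bitmap populated: %d (pre-detach writes would be missed)\n",
                   dma.bitmap_all_set);
            return 0;
    }

If that ordering holds in the real detach path, the dma->iommu_mapped test
would indeed skip exactly the ranges that need to be marked dirty.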