Subject: Re: [PATCH 4/7] vfio: iommu_type1: Fix missing dirty page when promote pinned_scope
From: Keqian Zhu
To: Alex Williamson
CC: Cornelia Huck, Marc Zyngier, Will Deacon, Robin Murphy, Joerg Roedel, Catalin Marinas, James Morse, Suzuki K Poulose, Sean Christopherson, Julien Thierry, Mark Brown, Thomas Gleixner, Andrew Morton, Alexios Zavras
Date: Fri, 18 Dec 2020 16:21:58 +0800
Message-ID: <340a58c3-3781-db31-59fa-06b015d27a5e@huawei.com>
References: <20201210073425.25960-1-zhukeqian1@huawei.com> <20201210073425.25960-5-zhukeqian1@huawei.com> <20201214170459.50cb8729@omen.home> <20201215085359.053e73ed@x1.home>
In-Reply-To: <20201215085359.053e73ed@x1.home>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/12/15 23:53, Alex Williamson wrote:
> On Tue, 15 Dec 2020 17:37:11 +0800
> zhukeqian wrote:
>
>> Hi Alex,
>>
>> On 2020/12/15 8:04, Alex Williamson wrote:
[...]
>>>>
>>>> +static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
>>>> +{
>>>> +	struct rb_node *n;
>>>> +	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
>>>> +
>>>> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
>>>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>>>> +		unsigned long nbits = dma->size >> pgshift;
>>>> +
>>>> +		if (dma->iommu_mapped)
>>>> +			bitmap_set(dma->bitmap, 0, nbits);
>>>> +	}
>>>> +}
>>>
>>> If we detach a group which results in only non-IOMMU backed mdevs,
>>> don't we also clear dma->iommu_mapped as part of vfio_unmap_unpin()
>>> such that this test is invalid? Thanks,
>>
>> Good spot :-). The code will skip bitmap_set in this situation.
>>
>> We should set the bitmap unconditionally when the vfio_iommu is promoted,
>> as we must have an IOMMU-backed domain before promoting the vfio_iommu.
>>
>> Besides, I think we should also mark pages dirty in vfio_remove_dma if
>> dirty tracking is active. Right?
>
> There's no remaining bitmap to mark dirty if the vfio_dma is removed.
> In this case it's the user's responsibility to collect remaining dirty
> pages using the VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP support in the
> VFIO_IOMMU_UNMAP_DMA ioctl. Thanks,

Hi Alex,

Thanks for pointing this out. I also notice that vfio_iommu_type1_detach_group
removes all DMA ranges (in vfio_iommu_unmap_unpin_all). If this happens during
dirty tracking, we have no chance to report the dirty log to userspace.
Besides, we will certainly add more dirty log tracking mechanisms to VFIO,
but we have no framework to support them, which makes the code inconvenient
to extend and makes it easy to lose dirty log. Given the above, I plan to
refactor our dirty tracking code. One core idea is that we should distinguish
the Dirty Range Limit (such as pin, fully dirty) from the Real Dirty Track
(such as IOPF, SMMU HTTU).

Thanks,
Keqian