Date: Wed, 7 Apr 2021 20:43:50 +0200
From: Jean-Philippe Brucker
To: "Tian, Kevin"
Cc: Jason Gunthorpe, Jason Wang, Alex Williamson, "Raj, Ashok",
    Jonathan Corbet, LKML, "Jiang, Dave", iommu@lists.linux-foundation.org,
    Li Zefan, Johannes Weiner, Tejun Heo, cgroups@vger.kernel.org,
    "Wu, Hao", David Woodhouse
Subject: Re: [PATCH V4 05/18] iommu/ioasid: Redefine IOASID set and allocation APIs

On Wed, Apr 07, 2021 at 08:17:50AM +0000, Tian, Kevin wrote:
> btw this discussion was raised when discussing the I/O page fault
> handling process.
> Currently the IOMMU layer implements a per-device fault reporting
> mechanism, which requires VFIO to register a handler to receive all
> faults on its device and then forwards to ioasid if it's due to
> 1st-level. Possibly it makes more sense to convert it into a
> per-pgtable reporting scheme, and then the owner of each pgtable
> should register its own handler.

Maybe, but you do need device information in there, since that's how the
fault is reported to the guest and how the response is routed back to
the faulting device (only PASID+PRGI would cause aliasing). And we need
to report non-recoverable faults, as well as recoverable ones without
PASID, once we hand control of level-1 page tables to guests.

> It means for 1) VFIO will register a 2nd-level pgtable handler while
> /dev/ioasid will register a 1st-level pgtable handler, while for 3)
> /dev/ioasid will register handlers for both 1st-level and 2nd-level
> pgtable. Jean? also want to know your thoughts...

Moving all IOMMU controls to /dev/ioasid rather than splitting them is
probably better. Hopefully the implementation can reuse most of
vfio_iommu_type1.

I'm trying to sketch what may work for Arm, if we have to reuse
/dev/ioasid to avoid duplication of fault and inval queues:

* Get a container handle out of /dev/ioasid (or /dev/iommu, really). No
  operation is available at this point, since we don't yet know what the
  device and IOMMU capabilities are.

* Attach the handle to a VF. With VFIO that would be
  VFIO_GROUP_SET_CONTAINER. That causes the kernel to associate an IOMMU
  with the handle, and to decide which operations are available.

* With a map/unmap vIOMMU (or shadow mappings), a single translation
  level is supported. With a nesting vIOMMU, we're populating the
  level-2 translation (some day maybe by binding the KVM page tables,
  but currently with the map/unmap ioctl). Single-level translation
  needs a single VF per container. Two levels would allow sharing
  stage-2 between multiple VFs, though it's a pain to define and
  implement.
* Without a vIOMMU, or if the vIOMMU starts in bypass, populate the
  container page tables. Start the guest.

* With a map/unmap vIOMMU, the guest creates mappings and userspace
  populates the page tables with the map/unmap ioctl. It would be
  possible to add a PASID mode there: the guest requests an address
  space with a specific PASID, userspace derives an IOASID handle from
  the container handle and populates that address space with the
  map/unmap ioctl. That would enable PASID on sub-VF assignment, which
  requires the host to control which PASID is programmed into the VF
  (with DEVICE_ALLOW_IOASID, I guess). And either the host allocates
  the PASID in this case (which isn't supported by a vSMMU), or we have
  to do a vPASID -> pPASID translation. I don't know if it's worth the
  effort.

Or

* With a nesting vIOMMU, the guest attaches a PASID table to a VF and
  userspace issues a SET_PASID_TABLE ioctl on the container handle. If
  we support multiple VFs per container, we first need to derive a
  child container from the main one and the device, then attach the
  PASID table to that. The guest programs the PASID table and sends
  invalidations when removing mappings, which are relayed to the host
  on the child container. Page faults and the response queue would be
  per container, so with multiple VFs per container we could have one
  queue for the parent (level-2 faults) and one for each child
  (level-1 faults).

Thanks,
Jean
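As a concrete illustration of the nesting flow sketched above, a userspace
sequence might look like the following. This is pure pseudocode under the
assumptions of the sketch: /dev/ioasid was only a proposal at the time, so
apart from VFIO_GROUP_SET_CONTAINER every ioctl name, struct, and constant
below is invented for illustration.

```c
/*
 * Pseudocode sketch of the nesting-vIOMMU flow. /dev/ioasid does not
 * exist; every name below except VFIO_GROUP_SET_CONTAINER is invented.
 * Error handling omitted.
 */

/* 1. Get a container handle out of /dev/ioasid */
int container = open("/dev/ioasid", O_RDWR);

/* 2. Attach the handle to the VF's VFIO group; the kernel now knows
 *    which IOMMU backs the container and which ops are available */
int group = open("/dev/vfio/42", O_RDWR);    /* group number made up */
ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

/* 3. Populate the level-2 (stage-2) translation with map/unmap */
struct ioasid_dma_map map = {
    .iova  = guest_pa,                       /* guest physical address */
    .vaddr = (uintptr_t)buffer,              /* host userspace buffer  */
    .size  = length,
};
ioctl(container, IOASID_MAP_DMA, &map);      /* invented ioctl */

/* 4. Nesting with multiple VFs per container: derive a child container
 *    for one device, then attach the guest's PASID table to it */
int child = ioctl(container, IOASID_GET_CHILD, device_fd);  /* invented */

struct ioasid_pasid_table pt = {
    .base   = pasid_table_gpa,               /* guest PA of PASID table */
    .format = IOASID_FMT_SMMUV3,             /* invented constant */
};
ioctl(child, IOASID_SET_PASID_TABLE, &pt);

/* 5. From here: relay guest invalidations on the child container;
 *    service level-1 faults from the child's fault queue and level-2
 *    faults from the parent's. */
```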