Date: Wed, 5 May 2021 10:22:59 -0700
From: Jacob Pan
To: Jason Gunthorpe
Cc: "Tian, Kevin", Alex Williamson, "Liu, Yi L", Auger Eric,
 Jean-Philippe Brucker, LKML, Joerg Roedel, Lu Baolu, David Woodhouse,
 iommu@lists.linux-foundation.org, cgroups@vger.kernel.org, Tejun Heo,
 Li Zefan, Johannes Weiner, Jean-Philippe Brucker, Jonathan Corbet,
 "Raj, Ashok", "Wu, Hao", "Jiang, Dave", jacob.jun.pan@linux.intel.com
Subject: Re: [PATCH V4 05/18] iommu/ioasid: Redefine IOASID set and allocation APIs
Message-ID: <20210505102259.044cafdf@jacob-builder>
In-Reply-To: <20210504231530.GE1370958@nvidia.com>
References: <20210422121020.GT1370958@nvidia.com>
 <20210423114944.GF1370958@nvidia.com>
 <20210426123817.GQ1370958@nvidia.com>
 <20210504084148.4f61d0b5@jacob-builder>
 <20210504180050.GB1370958@nvidia.com>
 <20210504151154.02908c63@jacob-builder>
 <20210504231530.GE1370958@nvidia.com>
Organization: OTC
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Jason,

On Tue, 4 May 2021 20:15:30 -0300, Jason Gunthorpe wrote:

> On Tue, May 04, 2021 at 03:11:54PM -0700, Jacob Pan wrote:
> > > It is a weird way to use xarray to have a structure which itself
> > > is just a wrapper around another RCU protected structure.
> > >
> > > Make the caller supply the ioasid_data memory, embedded in its own
> > > element, get rid of the void * and rely on XA_ZERO_ENTRY to hold
> > > allocated but not active entries.
> > >
> > Let me try to paraphrase to make sure I understand. Currently struct
> > ioasid_data is private to the ioasid core; its memory is allocated
> > by the ioasid core.
> >
> > You are suggesting the following:
> > 1. make struct ioasid_data public
> > 2. caller allocates memory for ioasid_data, initializes it, then
> >    passes it to ioasid_alloc to store in the xarray
> > 3. caller will be responsible for setting private data inside
> >    ioasid_data and doing call_rcu after update if needed
>
> Basically, but you probably won't need a "private data" once the
> caller has this struct as it can just embed it in whatever larger
> struct makes sense for it and use container_of/etc
>
> I didn't look too closely at the whole thing though.

That makes sense, thanks!
> Honestly I'm a bit puzzled why we need a pluggable global allocator
> framework.. The whole framework went to some trouble to isolate
> everything into iommu drivers then that whole design is disturbed by
> this global thing.

Global and pluggable are for slightly separate reasons.

- We need a global PASID on VT-d in that we need to support shared
  workqueues (SWQ). E.g. one SWQ can be wrapped into two mdevs, then
  assigned to two VMs. Each VM uses its private guest PASID to submit
  work, but each guest PASID must be translated to a global
  (system-wide) host PASID to avoid conflict. Also, since PASID table
  storage is per PF, if two mdevs of the same PF are assigned to
  different VMs, the PASIDs must be unique.

- The pluggable allocator is to support the option where the guest
  PASIDs are allocated by the hypervisor: either the same as the host
  PASID, or some arbitrary number cooked up by the hypervisor but
  backed by a host HW PASID. The VT-d spec has a virtual command
  interface that requires the guest to use it instead of allocating
  from the guest ioasid xarray. This is the reason why it has to go
  down to the iommu vendor driver. I guess that is what you meant by
  "went to some trouble to isolate everything into iommu"?

For ARM, since the guest owns the per-device PASID table, there is no
need to allocate PASIDs from the host or the hypervisor. Without SWQ,
there is no need for a global PASID/SSID either. So PASID being global
for ARM is for simplicity in the case of host PASID/SSID.

> Jason

Thanks,

Jacob