From: Jacob Pan
To: LKML, Joerg Roedel, "Lu Baolu", David Woodhouse,
    iommu@lists.linux-foundation.org, cgroups@vger.kernel.org, Tejun Heo,
    Li Zefan, Johannes Weiner, Jean-Philippe Brucker
Cc: Alex Williamson, Eric Auger, Jason Gunthorpe, Jonathan Corbet,
    Raj Ashok, "Tian, Kevin",
Kevin" , Yi Liu , Wu Hao , Dave Jiang , Jacob Pan Subject: [PATCH V4 08/18] iommu/ioasid: Introduce ioasid_set private ID Date: Sat, 27 Feb 2021 14:01:16 -0800 Message-Id: <1614463286-97618-9-git-send-email-jacob.jun.pan@linux.intel.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1614463286-97618-1-git-send-email-jacob.jun.pan@linux.intel.com> References: <1614463286-97618-1-git-send-email-jacob.jun.pan@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org When an IOASID set is used for guest SVA, each VM will acquire its ioasid_set for IOASID allocations. IOASIDs within the VM must have a host/physical IOASID backing, mapping between guest and host IOASIDs can be non-identical. IOASID set private ID (SPID) is introduced in this patch to be used as guest IOASID. However, the concept of ioasid_set specific namespace is generic, thus named SPID. As SPID namespace is within the IOASID set, the IOASID core can provide lookup services at both directions. SPIDs may not be available when its IOASID is allocated, the mapping between SPID and IOASID is usually established when a guest page table is bound to a host PASID. Signed-off-by: Jacob Pan --- drivers/iommu/ioasid.c | 104 +++++++++++++++++++++++++++++++++++++++++ include/linux/ioasid.h | 18 +++++++ 2 files changed, 122 insertions(+) diff --git a/drivers/iommu/ioasid.c b/drivers/iommu/ioasid.c index 9a3ba157dec3..7707bb608bdd 100644 --- a/drivers/iommu/ioasid.c +++ b/drivers/iommu/ioasid.c @@ -26,6 +26,7 @@ enum ioasid_state { * struct ioasid_data - Meta data about ioasid * * @id: Unique ID + * @spid: Private ID unique within a set * @refs: Number of active users * @state: Track state of the IOASID * @set: ioasid_set of the IOASID belongs to @@ -34,6 +35,7 @@ enum ioasid_state { */ struct ioasid_data { ioasid_t id; + ioasid_t spid; enum ioasid_state state; struct ioasid_set *set; void *private; @@ -413,6 +415,107 @@ void ioasid_detach_data(ioasid_t ioasid) } EXPORT_SYMBOL_GPL(ioasid_detach_data); +static ioasid_t ioasid_find_by_spid_locked(struct ioasid_set *set, ioasid_t spid, bool get) +{ + ioasid_t ioasid = INVALID_IOASID; + struct ioasid_data *entry; + unsigned long index; + + if (!xa_load(&ioasid_sets, set->id)) { + pr_warn("Invalid set\n"); + goto done; + } + + xa_for_each(&set->xa, index, entry) { + if (spid == entry->spid) { + if (get) + refcount_inc(&entry->refs); + ioasid = index; + } + } +done: + return ioasid; +} + +/** + * ioasid_attach_spid - Attach ioasid_set private ID to an IOASID + * + * @ioasid: the system-wide IOASID to attach + * @spid: the ioasid_set private ID of @ioasid + * + * After attching SPID, future lookup can be done via ioasid_find_by_spid(). 
Signed-off-by: Jacob Pan
---
 drivers/iommu/ioasid.c | 104 +++++++++++++++++++++++++++++++++++++++++
 include/linux/ioasid.h |  18 +++++++
 2 files changed, 122 insertions(+)

diff --git a/drivers/iommu/ioasid.c b/drivers/iommu/ioasid.c
index 9a3ba157dec3..7707bb608bdd 100644
--- a/drivers/iommu/ioasid.c
+++ b/drivers/iommu/ioasid.c
@@ -26,6 +26,7 @@ enum ioasid_state {
  * struct ioasid_data - Meta data about ioasid
  *
  * @id:		Unique ID
+ * @spid:	Private ID unique within a set
  * @refs:	Number of active users
  * @state:	Track state of the IOASID
  * @set:	ioasid_set of the IOASID belongs to
@@ -34,6 +35,7 @@ enum ioasid_state {
  */
 struct ioasid_data {
 	ioasid_t id;
+	ioasid_t spid;
 	enum ioasid_state state;
 	struct ioasid_set *set;
 	void *private;
@@ -413,6 +415,107 @@ void ioasid_detach_data(ioasid_t ioasid)
 }
 EXPORT_SYMBOL_GPL(ioasid_detach_data);
 
+static ioasid_t ioasid_find_by_spid_locked(struct ioasid_set *set, ioasid_t spid, bool get)
+{
+	ioasid_t ioasid = INVALID_IOASID;
+	struct ioasid_data *entry;
+	unsigned long index;
+
+	if (!xa_load(&ioasid_sets, set->id)) {
+		pr_warn("Invalid set\n");
+		goto done;
+	}
+
+	xa_for_each(&set->xa, index, entry) {
+		if (spid == entry->spid) {
+			if (get)
+				refcount_inc(&entry->refs);
+			ioasid = index;
+		}
+	}
+done:
+	return ioasid;
+}
+
+/**
+ * ioasid_attach_spid - Attach ioasid_set private ID to an IOASID
+ *
+ * @ioasid:	the system-wide IOASID to attach
+ * @spid:	the ioasid_set private ID of @ioasid
+ *
+ * After attaching SPID, future lookup can be done via ioasid_find_by_spid().
+ */
+int ioasid_attach_spid(ioasid_t ioasid, ioasid_t spid)
+{
+	struct ioasid_data *data;
+	int ret = 0;
+
+	if (spid == INVALID_IOASID)
+		return -EINVAL;
+
+	spin_lock(&ioasid_allocator_lock);
+	data = xa_load(&active_allocator->xa, ioasid);
+
+	if (!data) {
+		pr_err("No IOASID entry %d to attach SPID %d\n",
+			ioasid, spid);
+		ret = -ENOENT;
+		goto done_unlock;
+	}
+	/* Check if SPID is unique within the set */
+	if (ioasid_find_by_spid_locked(data->set, spid, false) != INVALID_IOASID) {
+		ret = -EINVAL;
+		goto done_unlock;
+	}
+	data->spid = spid;
+
+done_unlock:
+	spin_unlock(&ioasid_allocator_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(ioasid_attach_spid);
+
+void ioasid_detach_spid(ioasid_t ioasid)
+{
+	struct ioasid_data *data;
+
+	spin_lock(&ioasid_allocator_lock);
+	data = xa_load(&active_allocator->xa, ioasid);
+
+	if (!data || data->spid == INVALID_IOASID) {
+		pr_err("Invalid IOASID entry %d to detach\n", ioasid);
+		goto done_unlock;
+	}
+	data->spid = INVALID_IOASID;
+
+done_unlock:
+	spin_unlock(&ioasid_allocator_lock);
+}
+EXPORT_SYMBOL_GPL(ioasid_detach_spid);
+
+/**
+ * ioasid_find_by_spid - Find the system-wide IOASID by a set private ID and
+ *                       its set.
+ *
+ * @set:	the ioasid_set to search within
+ * @spid:	the set private ID
+ * @get:	flag indicates whether to take a reference once found
+ *
+ * Given a set private ID and its IOASID set, find the system-wide IOASID. Take
+ * a reference upon finding the matching IOASID if @get is true. Return
+ * INVALID_IOASID if the IOASID is not found in the set or the set is not valid.
+ */
+ioasid_t ioasid_find_by_spid(struct ioasid_set *set, ioasid_t spid, bool get)
+{
+	ioasid_t ioasid;
+
+	spin_lock(&ioasid_allocator_lock);
+	ioasid = ioasid_find_by_spid_locked(set, spid, get);
+	spin_unlock(&ioasid_allocator_lock);
+	return ioasid;
+}
+EXPORT_SYMBOL_GPL(ioasid_find_by_spid);
+
 static inline bool ioasid_set_is_valid(struct ioasid_set *set)
 {
 	return xa_load(&ioasid_sets, set->id) == set;
@@ -616,6 +719,7 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
 	}
 	data->id = id;
 	data->state = IOASID_STATE_IDLE;
+	data->spid = INVALID_IOASID;
 
 	/* Store IOASID in the per set data */
 	if (xa_err(xa_store(&set->xa, id, data, GFP_ATOMIC))) {
diff --git a/include/linux/ioasid.h b/include/linux/ioasid.h
index e7f3e6108724..dcab02886cb5 100644
--- a/include/linux/ioasid.h
+++ b/include/linux/ioasid.h
@@ -81,6 +81,9 @@ int ioasid_register_allocator(struct ioasid_allocator_ops *allocator);
 void ioasid_unregister_allocator(struct ioasid_allocator_ops *allocator);
 int ioasid_attach_data(ioasid_t ioasid, void *data);
 void ioasid_detach_data(ioasid_t ioasid);
+int ioasid_attach_spid(ioasid_t ioasid, ioasid_t spid);
+void ioasid_detach_spid(ioasid_t ioasid);
+ioasid_t ioasid_find_by_spid(struct ioasid_set *set, ioasid_t spid, bool get);
 void ioasid_set_for_each_ioasid(struct ioasid_set *sdata,
 				void (*fn)(ioasid_t id, void *data),
 				void *data);
@@ -173,6 +176,21 @@ static inline struct ioasid_set *ioasid_find_set(ioasid_t ioasid)
 	return ERR_PTR(-ENOTSUPP);
 }
 
+static inline int ioasid_attach_spid(ioasid_t ioasid, ioasid_t spid)
+{
+	return -ENOTSUPP;
+}
+
+static inline void ioasid_detach_spid(ioasid_t ioasid)
+{
+}
+
+static inline ioasid_t ioasid_find_by_spid(struct ioasid_set *set,
+					   ioasid_t spid, bool get)
+{
+	return INVALID_IOASID;
+}
+
 static inline void ioasid_set_for_each_ioasid(struct ioasid_set *sdata,
 					      void (*fn)(ioasid_t id, void *data),
 					      void *data)
-- 
2.25.1