From: Yi Liu <yi.l.liu@intel.com>
To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com,
	kevin.tian@intel.com, robin.murphy@arm.com, baolu.lu@linux.intel.com
Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com,
	kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com,
	yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com,
	jasowang@redhat.com,
	shameerali.kolothum.thodi@huawei.com, lulu@redhat.com,
	suravee.suthikulpanit@amd.com, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	zhenzhong.duan@intel.com, joao.m.martins@oracle.com,
	xin.zeng@intel.com, yan.y.zhao@intel.com, j.granados@samsung.com
Subject: [PATCH v7 7/9] iommu/vt-d: Allow qi_submit_sync() to return the QI faults
Date: Thu, 21 Dec 2023 07:39:46 -0800
Message-Id: <20231221153948.119007-8-yi.l.liu@intel.com>
In-Reply-To: <20231221153948.119007-1-yi.l.liu@intel.com>
References: <20231221153948.119007-1-yi.l.liu@intel.com>

From: Lu Baolu <baolu.lu@linux.intel.com>

This allows qi_submit_sync() to return the QI faults it observes to its
callers. A caller that passes a non-NULL @fault pointer receives the
accumulated fault status bits (DMA_FSTS_IQE/ITE/ICE); existing callers
pass NULL and see no change in behavior.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/iommu/intel/dmar.c          | 29 ++++++++++++++++++-----------
 drivers/iommu/intel/iommu.h         |  2 +-
 drivers/iommu/intel/irq_remapping.c |  2 +-
 drivers/iommu/intel/pasid.c         |  2 +-
 drivers/iommu/intel/svm.c           |  6 +++---
 5 files changed, 24 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 23cb80d62a9a..a3e0cb720e06 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1267,7 +1267,8 @@ static void qi_dump_fault(struct intel_iommu *iommu, u32 fault)
 		(unsigned long long)desc->qw1);
 }
 
-static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
+static int qi_check_fault(struct intel_iommu *iommu, int index,
+			  int wait_index, u32 *fsts)
 {
 	u32 fault;
 	int head, tail;
@@ -1278,8 +1279,12 @@ static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
 		return -EAGAIN;
 
 	fault = readl(iommu->reg + DMAR_FSTS_REG);
-	if (fault & (DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE))
+	fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE;
+	if (fault) {
+		if (fsts)
+			*fsts |= fault;
 		qi_dump_fault(iommu, fault);
+	}
 
 	/*
 	 * If IQE happens, the head points to the descriptor associated
@@ -1342,9 +1347,11 @@ static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
  * time, a wait descriptor will be appended to each submission to ensure
  * hardware has completed the invalidation before return. Wait descriptors
  * can be part of the submission but it will not be polled for completion.
+ * If callers are interested in the QI faults that occur during the handling
+ * of requests, the QI faults are saved in @fault.
  */
 int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
-		   unsigned int count, unsigned long options)
+		   unsigned int count, unsigned long options, u32 *fault)
 {
 	struct q_inval *qi = iommu->qi;
 	s64 devtlb_start_ktime = 0;
@@ -1430,7 +1437,7 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
 		 * a deadlock where the interrupt context can wait indefinitely
 		 * for free slots in the queue.
 		 */
-		rc = qi_check_fault(iommu, index, wait_index);
+		rc = qi_check_fault(iommu, index, wait_index, fault);
 		if (rc)
 			break;
 
@@ -1476,7 +1483,7 @@ void qi_global_iec(struct intel_iommu *iommu)
 	desc.qw3 = 0;
 
 	/* should never fail */
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
@@ -1490,7 +1497,7 @@ void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
 	desc.qw2 = 0;
 	desc.qw3 = 0;
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
@@ -1514,7 +1521,7 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
 	desc.qw2 = 0;
 	desc.qw3 = 0;
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
@@ -1545,7 +1552,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 	desc.qw2 = 0;
 	desc.qw3 = 0;
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 /* PASID-based IOTLB invalidation */
@@ -1586,7 +1593,7 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
 			QI_EIOTLB_AM(mask);
 	}
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 /* PASID-based device IOTLB Invalidate */
@@ -1639,7 +1646,7 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
 	}
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did,
@@ -1649,7 +1656,7 @@ void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did,
 	desc.qw0 = QI_PC_PASID(pasid) | QI_PC_DID(did) |
 			QI_PC_GRAN(granu) | QI_PC_TYPE;
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 /*
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index ce030c5b5772..c6de958e4f54 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -881,7 +881,7 @@ void qi_flush_pasid_cache(struct intel_iommu *iommu, u16 did, u64 granu,
 			  u32 pasid);
 
 int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
-		   unsigned int count, unsigned long options);
+		   unsigned int count, unsigned long options, u32 *fault);
 /*
  * Options used in qi_submit_sync:
  * QI_OPT_WAIT_DRAIN - Wait for PRQ drain completion, spec 6.5.2.8.
diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
index 29b9e55dcf26..f834afa3672d 100644
--- a/drivers/iommu/intel/irq_remapping.c
+++ b/drivers/iommu/intel/irq_remapping.c
@@ -153,7 +153,7 @@ static int qi_flush_iec(struct intel_iommu *iommu, int index, int mask)
 	desc.qw2 = 0;
 	desc.qw3 = 0;
 
-	return qi_submit_sync(iommu, &desc, 1, 0);
+	return qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 static int modify_irte(struct irq_2_iommu *irq_iommu,
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 74e8e4c17e81..67f924760ba8 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -467,7 +467,7 @@ pasid_cache_invalidation_with_pasid(struct intel_iommu *iommu,
 	desc.qw2 = 0;
 	desc.qw3 = 0;
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 static void
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index ac12f76c1212..660d049ad5b6 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -543,7 +543,7 @@ void intel_drain_pasid_prq(struct device *dev, u32 pasid)
 			QI_DEV_IOTLB_PFSID(info->pfsid);
 qi_retry:
 	reinit_completion(&iommu->prq_complete);
-	qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN);
+	qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN, NULL);
 	if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
 		wait_for_completion(&iommu->prq_complete);
 		goto qi_retry;
@@ -646,7 +646,7 @@ static void handle_bad_prq_event(struct intel_iommu *iommu,
 		desc.qw3 = 0;
 	}
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 
 static irqreturn_t prq_event_thread(int irq, void *d)
@@ -811,7 +811,7 @@ int intel_svm_page_response(struct device *dev,
 			ktime_to_ns(ktime_get()) - prm->private_data[0]);
 	}
 
-	qi_submit_sync(iommu, &desc, 1, 0);
+	qi_submit_sync(iommu, &desc, 1, 0, NULL);
 }
 out:
 	return ret;
-- 
2.34.1
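
[Illustrative sketch, not part of the patch] The snippet below only shows how a
hypothetical future caller might consume the new @fault argument introduced by
this change. The function name example_submit_and_report() is made up for
illustration; qi_submit_sync(), struct qi_desc and the DMA_FSTS_* bits are the
ones touched by the diff above. Note that the fault word must start at zero,
since qi_check_fault() only ORs the observed DMA_FSTS_IQE/ITE/ICE bits into it.

/* Hypothetical caller: submit one descriptor and report any QI fault seen. */
static int example_submit_and_report(struct intel_iommu *iommu,
				     struct qi_desc *desc)
{
	u32 fault = 0;
	int ret;

	/* Faults raised while this submission is handled accumulate in @fault. */
	ret = qi_submit_sync(iommu, desc, 1, 0, &fault);
	if (fault & (DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE))
		pr_debug("QI fault status 0x%x\n", fault);

	return ret;
}

The call sites converted in the diff simply pass NULL, which keeps today's
behavior of only logging faults through qi_dump_fault().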