From: Eric Auger <eric.auger@redhat.com>
To: eric.auger.pro@gmail.com, eric.auger@redhat.com,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, joro@8bytes.org,
	alex.williamson@redhat.com, jacob.jun.pan@linux.intel.com,
	yi.l.liu@linux.intel.com, jean-philippe.brucker@arm.com,
	will.deacon@arm.com, robin.murphy@arm.com
Cc: kevin.tian@intel.com, ashok.raj@intel.com, marc.zyngier@arm.com,
	christoffer.dall@arm.com, peter.maydell@linaro.org,
	vincent.stehle@arm.com
Subject: [PATCH v6 17/22] iommu/smmuv3: Report non recoverable faults
Date: Sun, 17 Mar 2019 18:22:27 +0100
Message-Id: <20190317172232.1068-18-eric.auger@redhat.com>
In-Reply-To: <20190317172232.1068-1-eric.auger@redhat.com>
References: <20190317172232.1068-1-eric.auger@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a stage 1 related fault event is read from the event queue, let's
propagate it to potential external fault listeners, i.e. users who
registered a fault handler.

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v4 -> v5:
- s/IOMMU_FAULT_PERM_INST/IOMMU_FAULT_PERM_EXEC
---
 drivers/iommu/arm-smmu-v3.c | 169 +++++++++++++++++++++++++++++++++---
 1 file changed, 158 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index a4b82c520647..8b2160788c9b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -167,6 +167,26 @@
 #define ARM_SMMU_PRIQ_IRQ_CFG1		0xd8
 #define ARM_SMMU_PRIQ_IRQ_CFG2		0xdc
 
+/* Events */
+#define ARM_SMMU_EVT_F_UUT		0x01
+#define ARM_SMMU_EVT_C_BAD_STREAMID	0x02
+#define ARM_SMMU_EVT_F_STE_FETCH	0x03
+#define ARM_SMMU_EVT_C_BAD_STE		0x04
+#define ARM_SMMU_EVT_F_BAD_ATS_TREQ	0x05
+#define ARM_SMMU_EVT_F_STREAM_DISABLED	0x06
+#define ARM_SMMU_EVT_F_TRANSL_FORBIDDEN	0x07
+#define ARM_SMMU_EVT_C_BAD_SUBSTREAMID	0x08
+#define ARM_SMMU_EVT_F_CD_FETCH		0x09
+#define ARM_SMMU_EVT_C_BAD_CD		0x0a
+#define ARM_SMMU_EVT_F_WALK_EABT	0x0b
+#define ARM_SMMU_EVT_F_TRANSLATION	0x10
+#define ARM_SMMU_EVT_F_ADDR_SIZE	0x11
+#define ARM_SMMU_EVT_F_ACCESS		0x12
+#define ARM_SMMU_EVT_F_PERMISSION	0x13
+#define ARM_SMMU_EVT_F_TLB_CONFLICT	0x20
+#define ARM_SMMU_EVT_F_CFG_CONFLICT	0x21
+#define ARM_SMMU_EVT_E_PAGE_REQUEST	0x24
+
 /* Common MSI config fields */
 #define MSI_CFG0_ADDR_MASK		GENMASK_ULL(51, 2)
 #define MSI_CFG2_SH			GENMASK(5, 4)
@@ -332,6 +352,15 @@
 #define EVTQ_MAX_SZ_SHIFT		7
 
 #define EVTQ_0_ID			GENMASK_ULL(7, 0)
+#define EVTQ_0_SSV			GENMASK_ULL(11, 11)
+#define EVTQ_0_SUBSTREAMID		GENMASK_ULL(31, 12)
+#define EVTQ_0_STREAMID			GENMASK_ULL(63, 32)
+#define EVTQ_1_PNU			GENMASK_ULL(33, 33)
+#define EVTQ_1_IND			GENMASK_ULL(34, 34)
+#define EVTQ_1_RNW			GENMASK_ULL(35, 35)
+#define EVTQ_1_S2			GENMASK_ULL(39, 39)
+#define EVTQ_1_CLASS			GENMASK_ULL(41, 40)
+#define EVTQ_3_FETCH_ADDR		GENMASK_ULL(51, 3)
 
 /* PRI queue */
 #define PRIQ_ENT_DWORDS			2
@@ -639,6 +668,64 @@ struct arm_smmu_domain {
 	spinlock_t			devices_lock;
 };
 
+/* fault propagation */
+
+#define IOMMU_FAULT_F_FIELDS	(IOMMU_FAULT_UNRECOV_PASID_VALID | \
+				 IOMMU_FAULT_UNRECOV_PERM_VALID | \
+				 IOMMU_FAULT_UNRECOV_ADDR_VALID)
+
+struct arm_smmu_fault_propagation_data {
+	enum iommu_fault_reason reason;
+	bool s1_check;
+	u32 fields; /* IOMMU_FAULT_UNRECOV_*_VALID bits */
+};
+
+/*
+ * Describes how SMMU faults translate into generic IOMMU faults
+ * and if they need to be reported externally
+ */
+static const struct arm_smmu_fault_propagation_data fault_propagation[] = {
+[ARM_SMMU_EVT_F_UUT]			= { },
+[ARM_SMMU_EVT_C_BAD_STREAMID]		= { },
+[ARM_SMMU_EVT_F_STE_FETCH]		= { },
+[ARM_SMMU_EVT_C_BAD_STE]		= { },
+[ARM_SMMU_EVT_F_BAD_ATS_TREQ]		= { },
+[ARM_SMMU_EVT_F_STREAM_DISABLED]	= { },
+[ARM_SMMU_EVT_F_TRANSL_FORBIDDEN]	= { },
+[ARM_SMMU_EVT_C_BAD_SUBSTREAMID]	= {IOMMU_FAULT_REASON_PASID_INVALID,
+					   false,
+					   IOMMU_FAULT_UNRECOV_PASID_VALID
+					  },
+[ARM_SMMU_EVT_F_CD_FETCH]		= {IOMMU_FAULT_REASON_PASID_FETCH,
+					   false,
+					   IOMMU_FAULT_UNRECOV_PASID_VALID |
+					   IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID
+					  },
+[ARM_SMMU_EVT_C_BAD_CD]			= {IOMMU_FAULT_REASON_BAD_PASID_ENTRY,
+					   false,
+					   IOMMU_FAULT_UNRECOV_PASID_VALID
+					  },
+[ARM_SMMU_EVT_F_WALK_EABT]		= {IOMMU_FAULT_REASON_WALK_EABT, true,
+					   IOMMU_FAULT_F_FIELDS |
+					   IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID
+					  },
+[ARM_SMMU_EVT_F_TRANSLATION]		= {IOMMU_FAULT_REASON_PTE_FETCH, true,
+					   IOMMU_FAULT_F_FIELDS
+					  },
+[ARM_SMMU_EVT_F_ADDR_SIZE]		= {IOMMU_FAULT_REASON_OOR_ADDRESS, true,
+					   IOMMU_FAULT_F_FIELDS
+					  },
+[ARM_SMMU_EVT_F_ACCESS]			= {IOMMU_FAULT_REASON_ACCESS, true,
+					   IOMMU_FAULT_F_FIELDS
+					  },
+[ARM_SMMU_EVT_F_PERMISSION]		= {IOMMU_FAULT_REASON_PERMISSION, true,
+					   IOMMU_FAULT_F_FIELDS
+					  },
+[ARM_SMMU_EVT_F_TLB_CONFLICT]		= { },
+[ARM_SMMU_EVT_F_CFG_CONFLICT]		= { },
+[ARM_SMMU_EVT_E_PAGE_REQUEST]		= { },
+};
+
 struct arm_smmu_option_prop {
 	u32 opt;
 	const char *prop;
@@ -1258,7 +1345,6 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
 	return 0;
 }
 
-__maybe_unused
 static struct arm_smmu_master_data *
 arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
 {
@@ -1284,24 +1370,85 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
 	return master;
 }
 
+/* Populates the record fields according to the input SMMU event */
+static bool arm_smmu_transcode_fault(u64 *evt, u8 type,
+				     struct iommu_fault_unrecoverable *record)
+{
+	const struct arm_smmu_fault_propagation_data *data;
+	u32 fields;
+
+	if (type >= ARRAY_SIZE(fault_propagation))
+		return false;
+
+	data = &fault_propagation[type];
+	if (!data->reason)
+		return false;
+
+	fields = data->fields;
+
+	if (data->s1_check & FIELD_GET(EVTQ_1_S2, evt[1]))
+		return false; /* S2 related fault, don't propagate */
+
+	if (fields & IOMMU_FAULT_UNRECOV_PASID_VALID) {
+		if (FIELD_GET(EVTQ_0_SSV, evt[0]))
+			record->pasid = FIELD_GET(EVTQ_0_SUBSTREAMID, evt[0]);
+		else
+			fields &= ~IOMMU_FAULT_UNRECOV_PASID_VALID;
+	}
+	if (fields & IOMMU_FAULT_UNRECOV_PERM_VALID) {
+		if (!FIELD_GET(EVTQ_1_RNW, evt[1]))
+			record->perm |= IOMMU_FAULT_PERM_WRITE;
+		if (FIELD_GET(EVTQ_1_PNU, evt[1]))
+			record->perm |= IOMMU_FAULT_PERM_PRIV;
+		if (FIELD_GET(EVTQ_1_IND, evt[1]))
+			record->perm |= IOMMU_FAULT_PERM_EXEC;
+	}
+	if (fields & IOMMU_FAULT_UNRECOV_ADDR_VALID)
+		record->addr = evt[2];
+
+	if (fields & IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID)
+		record->fetch_addr = FIELD_GET(EVTQ_3_FETCH_ADDR, evt[3]);
+
+	record->flags = fields;
+	return true;
+}
+
+static void arm_smmu_report_event(struct arm_smmu_device *smmu, u64 *evt)
+{
+	u32 sid = FIELD_GET(EVTQ_0_STREAMID, evt[0]);
+	u8 type = FIELD_GET(EVTQ_0_ID, evt[0]);
+	struct arm_smmu_master_data *master;
+	struct iommu_fault_event event = {};
+	int i;
+
+	master = arm_smmu_find_master(smmu, sid);
+	if (WARN_ON(!master))
+		return;
+
+	event.fault.type = IOMMU_FAULT_DMA_UNRECOV;
+
+	if (arm_smmu_transcode_fault(evt, type, &event.fault.event)) {
+		iommu_report_device_fault(master->dev, &event);
+		return;
+	}
+
+	dev_info(smmu->dev, "event 0x%02x received:\n", type);
+	for (i = 0; i < EVTQ_ENT_DWORDS; ++i) {
+		dev_info(smmu->dev, "\t0x%016llx\n",
+			 (unsigned long long)evt[i]);
+	}
+}
+
 /* IRQ and event handlers */
 static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 {
-	int i;
 	struct arm_smmu_device *smmu = dev;
 	struct arm_smmu_queue *q = &smmu->evtq.q;
 	u64 evt[EVTQ_ENT_DWORDS];
 
 	do {
-		while (!queue_remove_raw(q, evt)) {
-			u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
-
-			dev_info(smmu->dev, "event 0x%02x received:\n", id);
-			for (i = 0; i < ARRAY_SIZE(evt); ++i)
-				dev_info(smmu->dev, "\t0x%016llx\n",
-					 (unsigned long long)evt[i]);
-
-		}
+		while (!queue_remove_raw(q, evt))
+			arm_smmu_report_event(smmu, evt);
 
 		/*
 		 * Not much we can do on overflow, so scream and pretend we're
-- 
2.20.1
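For reference, a consumer of these reports (for instance a device driver or
VFIO) is expected to subscribe through the device fault reporting API
introduced earlier in this series. The snippet below is only an illustrative
sketch, not part of the patch: it assumes iommu_register_device_fault_handler()
and the iommu_fault layout used above; the handler and probe-hook names are
hypothetical and the exact handler signature may differ between revisions of
the series.

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/iommu.h>

/* Hypothetical consumer-side handler, invoked via iommu_report_device_fault() */
static int my_iommu_fault_handler(struct iommu_fault *fault, void *data)
{
	struct device *dev = data;

	if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
		return -EOPNOTSUPP;

	/* These fields are the ones filled in by arm_smmu_transcode_fault() */
	dev_warn(dev, "unrecoverable DMA fault: flags=0x%x addr=0x%llx\n",
		 fault->event.flags,
		 (unsigned long long)fault->event.addr);
	return 0;
}

/* Hypothetical probe-time hook in the consuming driver */
static int my_driver_enable_fault_reporting(struct device *dev)
{
	/* Assumed API from earlier patches in this series */
	return iommu_register_device_fault_handler(dev, my_iommu_fault_handler,
						   dev);
}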