From: Lu Baolu
To: Joerg Roedel, Jason Gunthorpe, Christoph Hellwig, Kevin Tian, Ashok Raj,
	Will Deacon, Robin Murphy, Jean-Philippe Brucker, Dave Jiang, Vinod Koul
Cc: Eric Auger, Liu Yi L, Jacob jun Pan, iommu@lists.linux-foundation.org,
	iommu@lists.linux.dev, linux-kernel@vger.kernel.org, Lu Baolu,
	Jean-Philippe Brucker
Subject: [PATCH v9 09/11] iommu: Prepare IOMMU domain for IOPF
Date: Tue, 21 Jun 2022 22:43:51 +0800
Message-Id: <20220621144353.17547-10-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220621144353.17547-1-baolu.lu@linux.intel.com>
References: <20220621144353.17547-1-baolu.lu@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add some mechanisms around the iommu_domain so that the I/O page fault
handling framework can route a page fault to the domain and call the fault
handler installed there. Add pointers to the page fault handler and its
private data in struct iommu_domain. The fault handler is called with the
private data as a parameter once a page fault is routed to the domain. Any
kernel component that owns an iommu domain can install a handler and its
private data so that page faults can be further routed and handled.

This also prepares the SVA implementation to be the first consumer of the
per-domain page fault handling model. The I/O page fault handler for SVA is
copied to the SVA file, with mmget_not_zero() added before mmap_read_lock().

Suggested-by: Jean-Philippe Brucker
Signed-off-by: Lu Baolu
Reviewed-by: Jean-Philippe Brucker
---
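As a quick illustration of how the two new fields are meant to be used (a
minimal sketch, not part of this patch): the iopf_handler signature, the
fault_data pointer and the response codes come from the hunks below, while
my_ctx, my_iopf_handler() and my_domain_setup() are hypothetical names used
only for illustration.

/* Sketch only; assumes <linux/iommu.h> with the fields added below. */
struct my_ctx {				/* hypothetical owner-private state */
	unsigned long base;
};

static enum iommu_page_response_code
my_iopf_handler(struct iommu_fault *fault, void *data)
{
	struct my_ctx *ctx = data;	/* whatever was stored in fault_data */

	/* Resolve fault->prm.addr against the owner's mappings here. */
	return ctx ? IOMMU_PAGE_RESP_SUCCESS : IOMMU_PAGE_RESP_INVALID;
}

static void my_domain_setup(struct iommu_domain *domain, struct my_ctx *ctx)
{
	/* The framework passes fault_data back as the handler's second argument. */
	domain->iopf_handler = my_iopf_handler;
	domain->fault_data = ctx;
}

The SVA case in this series does exactly this in iommu_sva_domain_alloc():
the domain itself is stored in fault_data and iommu_sva_handle_iopf() is
installed as the handler.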
 include/linux/iommu.h         |  3 ++
 drivers/iommu/iommu-sva-lib.h |  8 +++++
 drivers/iommu/io-pgfault.c    |  7 ++++
 drivers/iommu/iommu-sva-lib.c | 60 +++++++++++++++++++++++++++++++++++
 drivers/iommu/iommu.c         |  4 +++
 5 files changed, 82 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 17780537db6e..36c822a5b135 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -105,6 +105,9 @@ struct iommu_domain {
 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
 	struct iommu_domain_geometry geometry;
 	struct iommu_dma_cookie *iova_cookie;
+	enum iommu_page_response_code (*iopf_handler)(struct iommu_fault *fault,
+						      void *data);
+	void *fault_data;
 	union {
 		struct {	/* IOMMU_DOMAIN_DMA */
 			iommu_fault_handler_t handler;
diff --git a/drivers/iommu/iommu-sva-lib.h b/drivers/iommu/iommu-sva-lib.h
index 8909ea1094e3..1b3ace4b5863 100644
--- a/drivers/iommu/iommu-sva-lib.h
+++ b/drivers/iommu/iommu-sva-lib.h
@@ -26,6 +26,8 @@ int iopf_queue_flush_dev(struct device *dev);
 struct iopf_queue *iopf_queue_alloc(const char *name);
 void iopf_queue_free(struct iopf_queue *queue);
 int iopf_queue_discard_partial(struct iopf_queue *queue);
+enum iommu_page_response_code
+iommu_sva_handle_iopf(struct iommu_fault *fault, void *data);
 
 #else /* CONFIG_IOMMU_SVA */
 static inline int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
@@ -63,5 +65,11 @@ static inline int iopf_queue_discard_partial(struct iopf_queue *queue)
 {
 	return -ENODEV;
 }
+
+static inline enum iommu_page_response_code
+iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
+{
+	return IOMMU_PAGE_RESP_INVALID;
+}
 #endif /* CONFIG_IOMMU_SVA */
 #endif /* _IOMMU_SVA_LIB_H */
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 1df8c1dcae77..aee9e033012f 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -181,6 +181,13 @@ static void iopf_handle_group(struct work_struct *work)
  * request completes, outstanding faults will have been dealt with by the time
  * the PASID is freed.
  *
+ * Any valid page fault will be eventually routed to an iommu domain and the
+ * page fault handler installed there will get called. The users of this
+ * handling framework should guarantee that the iommu domain could only be
+ * freed after the device has stopped generating page faults (or the iommu
+ * hardware has been set to block the page faults) and the pending page faults
+ * have been flushed.
+ *
  * Return: 0 on success and <0 on error.
  */
 int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
diff --git a/drivers/iommu/iommu-sva-lib.c b/drivers/iommu/iommu-sva-lib.c
index 1e3e2b395b1e..dee8e2e42e06 100644
--- a/drivers/iommu/iommu-sva-lib.c
+++ b/drivers/iommu/iommu-sva-lib.c
@@ -167,3 +167,63 @@ u32 iommu_sva_get_pasid(struct iommu_sva *handle)
 	return domain->mm->pasid;
 }
 EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
+
+/*
+ * I/O page fault handler for SVA
+ */
+enum iommu_page_response_code
+iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
+{
+	vm_fault_t ret;
+	struct mm_struct *mm;
+	struct vm_area_struct *vma;
+	unsigned int access_flags = 0;
+	struct iommu_domain *domain = data;
+	unsigned int fault_flags = FAULT_FLAG_REMOTE;
+	struct iommu_fault_page_request *prm = &fault->prm;
+	enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
+
+	if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
+		return status;
+
+	mm = domain->mm;
+	if (IS_ERR_OR_NULL(mm) || !mmget_not_zero(mm))
+		return status;
+
+	mmap_read_lock(mm);
+
+	vma = find_extend_vma(mm, prm->addr);
+	if (!vma)
+		/* Unmapped area */
+		goto out_put_mm;
+
+	if (prm->perm & IOMMU_FAULT_PERM_READ)
+		access_flags |= VM_READ;
+
+	if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
+		access_flags |= VM_WRITE;
+		fault_flags |= FAULT_FLAG_WRITE;
+	}
+
+	if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
+		access_flags |= VM_EXEC;
+		fault_flags |= FAULT_FLAG_INSTRUCTION;
+	}
+
+	if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
+		fault_flags |= FAULT_FLAG_USER;
+
+	if (access_flags & ~vma->vm_flags)
+		/* Access fault */
+		goto out_put_mm;
+
+	ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
+	status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
+		IOMMU_PAGE_RESP_SUCCESS;
+
+out_put_mm:
+	mmap_read_unlock(mm);
+	mmput(mm);
+
+	return status;
+}
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 34d71418e7c7..a0e3d8083943 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -29,6 +29,8 @@
 #include
 #include
 
+#include "iommu-sva-lib.h"
+
 static struct kset *iommu_group_kset;
 static DEFINE_IDA(iommu_group_ida);
@@ -3199,6 +3201,8 @@ struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
 	domain->type = IOMMU_DOMAIN_SVA;
 	mmgrab(mm);
 	domain->mm = mm;
+	domain->iopf_handler = iommu_sva_handle_iopf;
+	domain->fault_data = domain;
 
 	return domain;
 }
-- 
2.25.1
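A rough sketch (not part of the patch) of the teardown ordering required by
the comment added to io-pgfault.c above: my_device_stop_page_requests() is a
hypothetical stand-in for whatever device-specific step stops new page
requests, while iopf_queue_flush_dev() and iommu_domain_free() are existing
helpers.

/* Sketch of the ordering expected before freeing a fault-capable domain. */
static void my_domain_teardown(struct device *dev, struct iommu_domain *domain)
{
	/* 1. Make sure the device can no longer generate page faults. */
	my_device_stop_page_requests(dev);

	/* 2. Flush page faults still pending for this device. */
	iopf_queue_flush_dev(dev);

	/* 3. Only now is it safe to free the domain. */
	iommu_domain_free(domain);
}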