From: Lu Baolu
To: Joerg Roedel, Jason Gunthorpe, Christoph Hellwig, Bjorn Helgaas,
	Kevin Tian, Ashok Raj, Will Deacon, Robin Murphy,
	Jean-Philippe Brucker, Dave Jiang, Vinod Koul
Cc: Eric Auger, Liu Yi L, Jacob jun Pan, Zhangfei Gao, Zhu Tony,
	iommu@lists.linux.dev, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, Lu Baolu, Jean-Philippe Brucker
Subject: [PATCH v11 12/13] iommu: Per-domain I/O page fault handling
Date: Wed, 17 Aug 2022 09:20:23 +0800
Message-Id: <20220817012024.3251276-13-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220817012024.3251276-1-baolu.lu@linux.intel.com>
References: <20220817012024.3251276-1-baolu.lu@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Tweak the I/O page fault handling framework to route the page faults to
the domain and call the
page fault handler retrieved from the domain. This makes it possible
for the I/O page fault handling framework to serve more usage
scenarios, as long as they have an IOMMU domain and install a page
fault handler in it. Some unused functions are also removed to avoid
dead code.

iommu_get_domain_for_dev_pasid(), which retrieves the domain attached
to a {device, PASID} pair, is used here; the page fault handling
framework knows the {device, PASID} reported by the iommu driver, so it
can look up the right domain for each fault group.

We have a guarantee that the SVA domain doesn't go away during IOPF
handling, because unbind() won't free the domain until all the pending
page requests have been flushed from the pipeline. Drivers either call
iopf_queue_flush_dev() explicitly, or, in the stall case, the device
driver is required to flush all DMA, including stalled transactions,
before calling unbind().

This also renames iopf_handle_group() to iopf_handler() to avoid
confusion.

Signed-off-by: Lu Baolu
Reviewed-by: Jean-Philippe Brucker
Reviewed-by: Kevin Tian
Reviewed-by: Jason Gunthorpe
---
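For illustration only (not part of this patch): a minimal sketch of how
a fault consumer could plug into this path. It assumes the iopf_handler
and fault_data members that an earlier patch in this series adds to
struct iommu_domain; my_iopf_handler(), my_install_handler() and
my_resolve_fault() are hypothetical names used only for this sketch.

	/*
	 * Hypothetical per-domain fault handler: resolve the faulting
	 * address for the reported PASID using caller-supplied context.
	 */
	static enum iommu_page_response_code
	my_iopf_handler(struct iommu_fault *fault, void *data)
	{
		struct iommu_fault_page_request *prm = &fault->prm;

		if (my_resolve_fault(data, prm->pasid, prm->addr, prm->perm))
			return IOMMU_PAGE_RESP_INVALID;

		return IOMMU_PAGE_RESP_SUCCESS;
	}

	static void my_install_handler(struct iommu_domain *domain, void *ctx)
	{
		/* Install before any page request for this domain can arrive. */
		domain->iopf_handler = my_iopf_handler;
		domain->fault_data = ctx;
	}

With this patch applied, iopf_handler() looks up the attached domain by
{device, PASID} and calls domain->iopf_handler(&iopf->fault,
domain->fault_data); the SVA code elsewhere in this series installs its
mm-based handler the same way.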
 drivers/iommu/io-pgfault.c | 68 +++++---------------------------------
 1 file changed, 9 insertions(+), 59 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index aee9e033012f..d1c522f4ab34 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
 	return iommu_page_response(dev, &resp);
 }
 
-static enum iommu_page_response_code
-iopf_handle_single(struct iopf_fault *iopf)
-{
-	vm_fault_t ret;
-	struct mm_struct *mm;
-	struct vm_area_struct *vma;
-	unsigned int access_flags = 0;
-	unsigned int fault_flags = FAULT_FLAG_REMOTE;
-	struct iommu_fault_page_request *prm = &iopf->fault.prm;
-	enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
-
-	if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
-		return status;
-
-	mm = iommu_sva_find(prm->pasid);
-	if (IS_ERR_OR_NULL(mm))
-		return status;
-
-	mmap_read_lock(mm);
-
-	vma = find_extend_vma(mm, prm->addr);
-	if (!vma)
-		/* Unmapped area */
-		goto out_put_mm;
-
-	if (prm->perm & IOMMU_FAULT_PERM_READ)
-		access_flags |= VM_READ;
-
-	if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
-		access_flags |= VM_WRITE;
-		fault_flags |= FAULT_FLAG_WRITE;
-	}
-
-	if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
-		access_flags |= VM_EXEC;
-		fault_flags |= FAULT_FLAG_INSTRUCTION;
-	}
-
-	if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
-		fault_flags |= FAULT_FLAG_USER;
-
-	if (access_flags & ~vma->vm_flags)
-		/* Access fault */
-		goto out_put_mm;
-
-	ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
-	status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
-		IOMMU_PAGE_RESP_SUCCESS;
-
-out_put_mm:
-	mmap_read_unlock(mm);
-	mmput(mm);
-
-	return status;
-}
-
-static void iopf_handle_group(struct work_struct *work)
+static void iopf_handler(struct work_struct *work)
 {
 	struct iopf_group *group;
+	struct iommu_domain *domain;
 	struct iopf_fault *iopf, *next;
 	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
 
 	group = container_of(work, struct iopf_group, work);
+	domain = iommu_get_domain_for_dev_pasid(group->dev,
+				group->last_fault.fault.prm.pasid);
+	if (!domain || !domain->iopf_handler)
+		status = IOMMU_PAGE_RESP_INVALID;
 
 	list_for_each_entry_safe(iopf, next, &group->faults, list) {
 		/*
@@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)
 		 * faults in the group if there is an error.
 		 */
 		if (status == IOMMU_PAGE_RESP_SUCCESS)
-			status = iopf_handle_single(iopf);
+			status = domain->iopf_handler(&iopf->fault,
+						      domain->fault_data);
 
 		if (!(iopf->fault.prm.flags &
 		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
@@ -242,7 +192,7 @@ int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
 	group->last_fault.fault = *fault;
 	INIT_LIST_HEAD(&group->faults);
 	list_add(&group->last_fault.list, &group->faults);
-	INIT_WORK(&group->work, iopf_handle_group);
+	INIT_WORK(&group->work, iopf_handler);
 
 	/* See if we have partial faults for this group */
 	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
-- 
2.25.1