From: Lu Baolu
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Kevin Tian,
	Jean-Philippe Brucker, Nicolin Chen
Cc: Yi Liu, Jacob Pan, Longfang Liu, Yan Zhao, iommu@lists.linux.dev,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu,
	Jason Gunthorpe
Subject: [PATCH v9 08/14] iommu: Prepare for separating SVA and IOPF
Date: Wed, 20 Dec 2023 09:23:26 +0800
Message-Id: <20231220012332.168188-9-baolu.lu@linux.intel.com>
In-Reply-To: <20231220012332.168188-1-baolu.lu@linux.intel.com>
References: <20231220012332.168188-1-baolu.lu@linux.intel.com>

Move the iopf_group data structure to iommu.h to make it the minimal set
of faults that a domain's page fault handler should handle.

Add a new function, iopf_free_group(), to free a fault group after all
faults in the group are handled. This function will be made global so
that it can be called from other files, such as iommu-sva.c.

Move the iopf_queue data structure to iommu.h to allow the workqueue to
be scheduled out of this file.

This will simplify the subsequent patches.

Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin Tian
Reviewed-by: Yi Liu
Tested-by: Yan Zhao
Tested-by: Longfang Liu
---
 include/linux/iommu.h      | 20 +++++++++++++++++++-
 drivers/iommu/io-pgfault.c | 37 +++++++++++++------------------------
 2 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index f97a5ab52af6..799b56563026 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -41,7 +41,6 @@ struct iommu_dirty_ops;
 struct notifier_block;
 struct iommu_sva;
 struct iommu_dma_cookie;
-struct iopf_queue;
 
 #define IOMMU_FAULT_PERM_READ	(1 << 0) /* read */
 #define IOMMU_FAULT_PERM_WRITE	(1 << 1) /* write */
@@ -126,6 +125,25 @@ struct iopf_fault {
 	struct list_head list;
 };
 
+struct iopf_group {
+	struct iopf_fault last_fault;
+	struct list_head faults;
+	struct work_struct work;
+	struct device *dev;
+};
+
+/**
+ * struct iopf_queue - IO Page Fault queue
+ * @wq: the fault workqueue
+ * @devices: devices attached to this queue
+ * @lock: protects the device list
+ */
+struct iopf_queue {
+	struct workqueue_struct *wq;
+	struct list_head devices;
+	struct mutex lock;
+};
+
 /* iommu fault flags */
 #define IOMMU_FAULT_READ	0x0
 #define IOMMU_FAULT_WRITE	0x1
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 10d48eb72608..c7e6bbed5c05 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -13,24 +13,17 @@
 
 #include "iommu-sva.h"
 
-/**
- * struct iopf_queue - IO Page Fault queue
- * @wq: the fault workqueue
- * @devices: devices attached to this queue
- * @lock: protects the device list
- */
-struct iopf_queue {
-	struct workqueue_struct *wq;
-	struct list_head devices;
-	struct mutex lock;
-};
+static void iopf_free_group(struct iopf_group *group)
+{
+	struct iopf_fault *iopf, *next;
 
-struct iopf_group {
-	struct iopf_fault last_fault;
-	struct list_head faults;
-	struct work_struct work;
-	struct device *dev;
-};
+	list_for_each_entry_safe(iopf, next, &group->faults, list) {
+		if (!(iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
+			kfree(iopf);
+	}
+
+	kfree(group);
+}
 
 static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
 			       enum iommu_page_response_code status)
@@ -50,9 +43,9 @@
 
 static void iopf_handler(struct work_struct *work)
 {
+	struct iopf_fault *iopf;
 	struct iopf_group *group;
 	struct iommu_domain *domain;
-	struct iopf_fault *iopf, *next;
 	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
 
 	group = container_of(work, struct iopf_group, work);
@@ -61,7 +54,7 @@ static void iopf_handler(struct work_struct *work)
 	if (!domain || !domain->iopf_handler)
 		status = IOMMU_PAGE_RESP_INVALID;
 
-	list_for_each_entry_safe(iopf, next, &group->faults, list) {
+	list_for_each_entry(iopf, &group->faults, list) {
 		/*
 		 * For the moment, errors are sticky: don't handle subsequent
 		 * faults in the group if there is an error.
@@ -69,14 +62,10 @@ static void iopf_handler(struct work_struct *work)
 		if (status == IOMMU_PAGE_RESP_SUCCESS)
 			status = domain->iopf_handler(&iopf->fault,
 						      domain->fault_data);
-
-		if (!(iopf->fault.prm.flags &
-		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
-			kfree(iopf);
 	}
 
 	iopf_complete_group(group->dev, &group->last_fault, status);
-	kfree(group);
+	iopf_free_group(group);
 }
 
 /**
-- 
2.34.1
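
A minimal usage sketch for readers following the series (not part of the patch):
once a later patch in this series exports iopf_free_group(), a group-level
consumer such as iommu-sva.c is expected to handle every fault in a struct
iopf_group and then release the whole group with a single call. The sketch
mirrors the iopf_handler() logic above and assumes the declarations this patch
adds to <linux/iommu.h>; the function name sva_handle_iopf_group() is
hypothetical, and iopf_complete_group() is still static to io-pgfault.c at this
point, so this only illustrates the intended calling convention.

/* Hypothetical sketch, not part of this patch. */
static void sva_handle_iopf_group(struct iommu_domain *domain,
				  struct iopf_group *group)
{
	struct iopf_fault *iopf;
	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;

	/* Errors are sticky: stop calling the handler after the first failure. */
	list_for_each_entry(iopf, &group->faults, list) {
		if (status == IOMMU_PAGE_RESP_SUCCESS)
			status = domain->iopf_handler(&iopf->fault,
						      domain->fault_data);
	}

	/* Respond to the device, then free every fault in the group. */
	iopf_complete_group(group->dev, &group->last_fault, status);
	iopf_free_group(group);
}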