From: John Garry <john.garry@huawei.com>
Subject: [PATCH 1/3] iova: Add CPU hotplug handler to flush rcaches
Date: Mon, 1 Mar 2021 20:12:19 +0800
Message-ID: <1614600741-15696-2-git-send-email-john.garry@huawei.com>
In-Reply-To: <1614600741-15696-1-git-send-email-john.garry@huawei.com>
References: <1614600741-15696-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Like the Intel IOMMU driver already does, flush the per-IOVA domain CPU
rcache when a CPU goes offline - there's no point in keeping it.
Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/iova.c       | 30 +++++++++++++++++++++++++++++-
 include/linux/cpuhotplug.h |  1 +
 include/linux/iova.h       |  1 +
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index e6e2fa85271c..c78312560425 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -25,6 +25,17 @@ static void init_iova_rcaches(struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 static void fq_destroy_all_entries(struct iova_domain *iovad);
 static void fq_flush_timeout(struct timer_list *t);
+
+static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
+{
+	struct iova_domain *iovad;
+
+	iovad = hlist_entry_safe(node, struct iova_domain, cpuhp_dead);
+
+	free_cpu_cached_iovas(cpu, iovad);
+	return 0;
+}
+
 static void free_global_cached_iovas(struct iova_domain *iovad);
 
 void
@@ -51,6 +62,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	iovad->anchor.pfn_lo = iovad->anchor.pfn_hi = IOVA_ANCHOR;
 	rb_link_node(&iovad->anchor.node, NULL, &iovad->rbroot.rb_node);
 	rb_insert_color(&iovad->anchor.node, &iovad->rbroot);
+	cpuhp_state_add_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD, &iovad->cpuhp_dead);
 	init_iova_rcaches(iovad);
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
@@ -257,10 +269,21 @@ int iova_cache_get(void)
 {
 	mutex_lock(&iova_cache_mutex);
 	if (!iova_cache_users) {
+		int ret;
+
+		ret = cpuhp_setup_state_multi(CPUHP_IOMMU_IOVA_DEAD, "iommu/iova:dead", NULL,
+					      iova_cpuhp_dead);
+		if (ret) {
+			mutex_unlock(&iova_cache_mutex);
+			pr_err("Couldn't register cpuhp handler\n");
+			return ret;
+		}
+
 		iova_cache = kmem_cache_create(
 			"iommu_iova", sizeof(struct iova), 0,
 			SLAB_HWCACHE_ALIGN, NULL);
 		if (!iova_cache) {
+			cpuhp_remove_multi_state(CPUHP_IOMMU_IOVA_DEAD);
 			mutex_unlock(&iova_cache_mutex);
 			pr_err("Couldn't create iova cache\n");
 			return -ENOMEM;
@@ -282,8 +305,10 @@ void iova_cache_put(void)
 		return;
 	}
 	iova_cache_users--;
-	if (!iova_cache_users)
+	if (!iova_cache_users) {
+		cpuhp_remove_multi_state(CPUHP_IOMMU_IOVA_DEAD);
 		kmem_cache_destroy(iova_cache);
+	}
 	mutex_unlock(&iova_cache_mutex);
 }
 EXPORT_SYMBOL_GPL(iova_cache_put);
@@ -606,6 +631,9 @@ void put_iova_domain(struct iova_domain *iovad)
 {
 	struct iova *iova, *tmp;
 
+	cpuhp_state_remove_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD,
+					    &iovad->cpuhp_dead);
+
 	free_iova_flush_queue(iovad);
 	free_iova_rcaches(iovad);
 	rbtree_postorder_for_each_entry_safe(iova, tmp, &iovad->rbroot, node)
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index f14adb882338..cedac9986557 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -58,6 +58,7 @@ enum cpuhp_state {
 	CPUHP_NET_DEV_DEAD,
 	CPUHP_PCI_XGENE_DEAD,
 	CPUHP_IOMMU_INTEL_DEAD,
+	CPUHP_IOMMU_IOVA_DEAD,
 	CPUHP_LUSTRE_CFS_DEAD,
 	CPUHP_AP_ARM_CACHE_B15_RAC_DEAD,
 	CPUHP_PADATA_DEAD,
diff --git a/include/linux/iova.h b/include/linux/iova.h
index c834c01c0a5b..4be6c0ab4997 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -95,6 +95,7 @@ struct iova_domain {
 						   flush-queues */
 	atomic_t fq_timer_on;		/* 1 when timer is active, 0 when not */
+	struct hlist_node cpuhp_dead;
 };
 
 static inline unsigned long iova_size(struct iova *iova)
-- 
2.26.2