From: John Garry
To: , , , ,
CC: , , , John Garry
Subject: [PATCH v2 1/4] iova: Add CPU hotplug handler to flush rcaches
Date: Thu, 25 Mar 2021 20:29:58 +0800
Message-ID: <1616675401-151997-2-git-send-email-john.garry@huawei.com>
In-Reply-To: <1616675401-151997-1-git-send-email-john.garry@huawei.com>
References: <1616675401-151997-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Like the Intel IOMMU driver already does, flush the per-IOVA domain CPU
rcache when a CPU goes offline - there's no point in keeping it.
Reviewed-by: Robin Murphy
Signed-off-by: John Garry
---
 drivers/iommu/iova.c       | 30 +++++++++++++++++++++++++++++-
 include/linux/cpuhotplug.h |  1 +
 include/linux/iova.h       |  1 +
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index e6e2fa85271c..c78312560425 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -25,6 +25,17 @@ static void init_iova_rcaches(struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 static void fq_destroy_all_entries(struct iova_domain *iovad);
 static void fq_flush_timeout(struct timer_list *t);
+
+static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
+{
+	struct iova_domain *iovad;
+
+	iovad = hlist_entry_safe(node, struct iova_domain, cpuhp_dead);
+
+	free_cpu_cached_iovas(cpu, iovad);
+	return 0;
+}
+
 static void free_global_cached_iovas(struct iova_domain *iovad);
 
 void
@@ -51,6 +62,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	iovad->anchor.pfn_lo = iovad->anchor.pfn_hi = IOVA_ANCHOR;
 	rb_link_node(&iovad->anchor.node, NULL, &iovad->rbroot.rb_node);
 	rb_insert_color(&iovad->anchor.node, &iovad->rbroot);
+	cpuhp_state_add_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD, &iovad->cpuhp_dead);
 	init_iova_rcaches(iovad);
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
@@ -257,10 +269,21 @@ int iova_cache_get(void)
 {
 	mutex_lock(&iova_cache_mutex);
 	if (!iova_cache_users) {
+		int ret;
+
+		ret = cpuhp_setup_state_multi(CPUHP_IOMMU_IOVA_DEAD, "iommu/iova:dead", NULL,
+					      iova_cpuhp_dead);
+		if (ret) {
+			mutex_unlock(&iova_cache_mutex);
+			pr_err("Couldn't register cpuhp handler\n");
+			return ret;
+		}
+
 		iova_cache = kmem_cache_create(
 			"iommu_iova", sizeof(struct iova), 0,
 			SLAB_HWCACHE_ALIGN, NULL);
 		if (!iova_cache) {
+			cpuhp_remove_multi_state(CPUHP_IOMMU_IOVA_DEAD);
 			mutex_unlock(&iova_cache_mutex);
 			pr_err("Couldn't create iova cache\n");
 			return -ENOMEM;
@@ -282,8 +305,10 @@ void iova_cache_put(void)
 		return;
 	}
 	iova_cache_users--;
-	if (!iova_cache_users)
+	if (!iova_cache_users) {
+		cpuhp_remove_multi_state(CPUHP_IOMMU_IOVA_DEAD);
 		kmem_cache_destroy(iova_cache);
+	}
 	mutex_unlock(&iova_cache_mutex);
 }
 EXPORT_SYMBOL_GPL(iova_cache_put);
@@ -606,6 +631,9 @@ void put_iova_domain(struct iova_domain *iovad)
 {
 	struct iova *iova, *tmp;
 
+	cpuhp_state_remove_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD,
+					    &iovad->cpuhp_dead);
+
 	free_iova_flush_queue(iovad);
 	free_iova_rcaches(iovad);
 	rbtree_postorder_for_each_entry_safe(iova, tmp, &iovad->rbroot, node)
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index f14adb882338..cedac9986557 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -58,6 +58,7 @@ enum cpuhp_state {
 	CPUHP_NET_DEV_DEAD,
 	CPUHP_PCI_XGENE_DEAD,
 	CPUHP_IOMMU_INTEL_DEAD,
+	CPUHP_IOMMU_IOVA_DEAD,
 	CPUHP_LUSTRE_CFS_DEAD,
 	CPUHP_AP_ARM_CACHE_B15_RAC_DEAD,
 	CPUHP_PADATA_DEAD,
diff --git a/include/linux/iova.h b/include/linux/iova.h
index c834c01c0a5b..4be6c0ab4997 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -95,6 +95,7 @@ struct iova_domain {
 						   flush-queues */
 	atomic_t fq_timer_on;			/* 1 when timer is active, 0
 						   when not */
+	struct hlist_node cpuhp_dead;
 };
 
 static inline unsigned long iova_size(struct iova *iova)
-- 
2.26.2