From: Robin Murphy <robin.murphy@arm.com>
To: joro@8bytes.org
Cc: iommu@lists.linux-foundation.org, thunder.leizhen@huawei.com,
    nwatters@codeaurora.org, tomasz.nowicki@caviumnetworks.com,
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 7/6] iommu/iova: Make cached_node always valid
Date: Wed, 20 Sep 2017 12:02:48 +0100
Message-Id: <0c9865fdc53820a81f81adec5a7bb4239aba7bc1.1505904195.git.robin.murphy@arm.com>
X-Mailer: git-send-email 2.13.4.dirty

With the anchor node at the top of the rbtree, there is always a valid
node for rb_next() to return, such that cached_node is only ever NULL
until the first allocation. Initialising it to point at the anchor node
gets rid of that window and makes the NULL checking entirely redundant.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---

Oops, spotted this one slightly too late. This could be squashed into
patch #5 (which I'll do myself if there's any cause to resend the whole
series again).

Robin.

 drivers/iommu/iova.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index a7af8273fa98..ec443c0a8319 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -51,7 +51,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 
 	spin_lock_init(&iovad->iova_rbtree_lock);
 	iovad->rbroot = RB_ROOT;
-	iovad->cached_node = NULL;
+	iovad->cached_node = &iovad->anchor.node;
 	iovad->cached32_node = NULL;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
@@ -120,10 +120,7 @@ __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
 	if (limit_pfn <= iovad->dma_32bit_pfn && iovad->cached32_node)
 		return iovad->cached32_node;
 
-	if (iovad->cached_node)
-		return iovad->cached_node;
-
-	return &iovad->anchor.node;
+	return iovad->cached_node;
 }
 
 static void
@@ -141,14 +138,11 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	struct iova *cached_iova;
 	struct rb_node **curr;
 
-	if (free->pfn_hi < iovad->dma_32bit_pfn)
+	if (free->pfn_hi < iovad->dma_32bit_pfn && iovad->cached32_node)
 		curr = &iovad->cached32_node;
 	else
 		curr = &iovad->cached_node;
 
-	if (!*curr)
-		return;
-
 	cached_iova = rb_entry(*curr, struct iova, node);
 	if (free->pfn_lo >= cached_iova->pfn_lo)
 		*curr = rb_next(&free->node);
-- 
2.13.4.dirty
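
P.S. For anyone skimming the series who hasn't followed the anchor-node
discussion: the trick is the classic sentinel pattern. Below is a minimal
user-space sketch of the same idea, assuming a plain sorted list standing
in for the rbtree; struct range, struct domain and every function name
here are invented for illustration and are not the kernel's API.

	#include <assert.h>
	#include <stdio.h>

	struct range {
		unsigned long pfn_lo, pfn_hi;
		struct range *next;	/* next-higher range, like rb_next() */
	};

	struct domain {
		struct range anchor;	/* sentinel at the very top of the space */
		struct range *cached;	/* search start; never NULL after init */
	};

	static void domain_init(struct domain *d)
	{
		d->anchor.pfn_lo = d->anchor.pfn_hi = ~0UL;
		d->anchor.next = NULL;
		/*
		 * The point of the patch: start the cache at the anchor
		 * rather than NULL, so every later lookup can dereference
		 * it unconditionally.
		 */
		d->cached = &d->anchor;
	}

	/* Analogue of __get_cached_rbnode() after the patch: no NULL check. */
	static struct range *get_cached(struct domain *d)
	{
		return d->cached;
	}

	/*
	 * Analogue of __cached_rbnode_delete_update(): when freeing a range
	 * at or above the cached one, move the cache up to the next range.
	 * Because the anchor sits above everything, "next" always exists,
	 * just as rb_next() always returns a valid node in the real code.
	 */
	static void delete_update(struct domain *d, struct range *freed)
	{
		struct range *cached = get_cached(d);

		if (freed->pfn_lo >= cached->pfn_lo)
			d->cached = freed->next;
	}

	int main(void)
	{
		struct domain d;
		struct range r = { .pfn_lo = 0x100, .pfn_hi = 0x1ff };

		domain_init(&d);
		r.next = &d.anchor;		/* anchor tops the ordering */

		assert(get_cached(&d) == &d.anchor);	/* valid pre-alloc */

		d.cached = &r;			/* pretend r was just allocated */
		delete_update(&d, &r);		/* freeing r falls back to anchor */
		assert(get_cached(&d) == &d.anchor);

		printf("cache always valid: pfn_lo=0x%lx\n",
		       get_cached(&d)->pfn_lo);
		return 0;
	}

The design payoff is exactly what the commit message says: once the cache
is born pointing at the sentinel, the "is it NULL yet?" window disappears
and both call sites lose a branch.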