From: Davidlohr Bueso
To: akpm@linux-foundation.org
Cc: dave@stgolabs.net, linux-kernel@vger.kernel.org, mcgrof@kernel.org,
	broonie@kernel.org, alex.williamson@redhat.com, cohuck@redhat.com,
	kvm@vger.kernel.org, Davidlohr Bueso
Subject: [PATCH 4/5] vfio/type1: optimize dma_list tree iterations
Date: Fri, 7 Feb 2020 10:03:04 -0800
Message-Id: <20200207180305.11092-5-dave@stgolabs.net>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20200207180305.11092-1-dave@stgolabs.net>
References: <20200207180305.11092-1-dave@stgolabs.net>

Both the ->release() and ->attach_group() vfio_iommu driver callbacks can
incur full in-order iommu->dma_list traversals, which can be suboptimal
with regular rbtree iterators.
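(For reference, the pattern in question is the standard in-order walk
below, simplified from the loops this patch touches; each rb_next() step
has to chase child/parent pointers through data-dependent branches, which
is what makes a full traversal comparatively expensive:)

	struct rb_node *n;

	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);

		/* unmap/unpin or re-account this mapping ... */
	}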
This patch proposes using the optimized llrbtree interface such that we
always have a branchless O(1) next node available. The cost is a higher
memory footprint, mainly enlarging struct vfio_dma by two pointers; this
is what keeps the runtime overhead of tree modifications minimal.

Cc: alex.williamson@redhat.com
Cc: cohuck@redhat.com
Cc: kvm@vger.kernel.org
Signed-off-by: Davidlohr Bueso
---
 drivers/vfio/vfio_iommu_type1.c | 50 +++++++++++++++++++++++------------------
 1 file changed, 28 insertions(+), 22 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2ada8e6cdb88..875170fc8515 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -65,7 +66,7 @@ struct vfio_iommu {
 	struct list_head	iova_list;
 	struct vfio_domain	*external_domain; /* domain for external user */
 	struct mutex		lock;
-	struct rb_root		dma_list;
+	struct llrb_root	dma_list;
 	struct blocking_notifier_head notifier;
 	unsigned int		dma_avail;
 	bool			v2;
@@ -81,7 +82,7 @@ struct vfio_domain {
 };
 
 struct vfio_dma {
-	struct rb_node		node;
+	struct llrb_node	node;
 	dma_addr_t		iova;		/* Device address */
 	unsigned long		vaddr;		/* Process virtual addr */
 	size_t			size;		/* Map size (bytes) */
@@ -134,10 +135,10 @@ static int put_pfn(unsigned long pfn, int prot);
 static struct vfio_dma *vfio_find_dma(struct vfio_iommu *iommu,
 				      dma_addr_t start, size_t size)
 {
-	struct rb_node *node = iommu->dma_list.rb_node;
+	struct rb_node *node = iommu->dma_list.rb_root.rb_node;
 
 	while (node) {
-		struct vfio_dma *dma = rb_entry(node, struct vfio_dma, node);
+		struct vfio_dma *dma = llrb_entry(node, struct vfio_dma, node);
 
 		if (start + size <= dma->iova)
 			node = node->rb_left;
@@ -152,26 +153,30 @@ static struct vfio_dma *vfio_find_dma(struct vfio_iommu *iommu,
 
 static void vfio_link_dma(struct vfio_iommu *iommu, struct vfio_dma *new)
 {
-	struct rb_node **link = &iommu->dma_list.rb_node, *parent = NULL;
+	struct rb_node **link = &iommu->dma_list.rb_root.rb_node;
+	struct rb_node *parent = NULL;
+	struct llrb_node *prev = NULL;
 	struct vfio_dma *dma;
 
 	while (*link) {
 		parent = *link;
-		dma = rb_entry(parent, struct vfio_dma, node);
+		dma = llrb_entry(parent, struct vfio_dma, node);
 
 		if (new->iova + new->size <= dma->iova)
 			link = &(*link)->rb_left;
-		else
+		else {
 			link = &(*link)->rb_right;
+			prev = llrb_from_rb(parent);
+		}
 	}
 
-	rb_link_node(&new->node, parent, link);
-	rb_insert_color(&new->node, &iommu->dma_list);
+	rb_link_node(&new->node.rb_node, parent, link);
+	llrb_insert(&iommu->dma_list, &new->node, prev);
 }
 
 static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 {
-	rb_erase(&old->node, &iommu->dma_list);
+	llrb_erase(&old->node, &iommu->dma_list);
 }
 
 /*
@@ -1170,15 +1175,15 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 			     struct vfio_domain *domain)
 {
 	struct vfio_domain *d;
-	struct rb_node *n;
+	struct llrb_node *n;
 	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
 	int ret;
 
 	/* Arbitrarily pick the first domain in the list for lookups */
 	d = list_first_entry(&iommu->domain_list, struct vfio_domain, next);
-	n = rb_first(&iommu->dma_list);
+	n = llrb_first(&iommu->dma_list);
 
-	for (; n; n = rb_next(n)) {
+	for (; n; n = llrb_next(n)) {
 		struct vfio_dma *dma;
 		dma_addr_t iova;
 
@@ -1835,18 +1840,19 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 
 static void vfio_iommu_unmap_unpin_all(struct vfio_iommu *iommu)
 {
-	struct rb_node *node;
+	struct llrb_node *node;
 
-	while ((node = rb_first(&iommu->dma_list)))
+	while ((node = llrb_first(&iommu->dma_list)))
 		vfio_remove_dma(iommu, rb_entry(node, struct vfio_dma, node));
 }
 
 static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
 {
-	struct rb_node *n, *p;
+	struct llrb_node *n;
+	struct rb_node *p;
 
-	n = rb_first(&iommu->dma_list);
-	for (; n; n = rb_next(n)) {
+	n = llrb_first(&iommu->dma_list);
+	for (; n; n = llrb_next(n)) {
 		struct vfio_dma *dma;
 		long locked = 0, unlocked = 0;
 
@@ -1866,10 +1872,10 @@ static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
 
 static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
 {
-	struct rb_node *n;
+	struct llrb_node *n;
 
-	n = rb_first(&iommu->dma_list);
-	for (; n; n = rb_next(n)) {
+	n = llrb_first(&iommu->dma_list);
+	for (; n; n = llrb_next(n)) {
 		struct vfio_dma *dma;
 
 		dma = rb_entry(n, struct vfio_dma, node);
@@ -2060,7 +2066,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 
 	INIT_LIST_HEAD(&iommu->domain_list);
 	INIT_LIST_HEAD(&iommu->iova_list);
-	iommu->dma_list = RB_ROOT;
+	iommu->dma_list = LLRB_ROOT;
 	iommu->dma_avail = dma_entry_limit;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
-- 
2.16.4
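
For readers without the llrbtree patch of this series in front of them,
the usage above implies roughly the following shape for the interface: an
rbtree whose nodes are also threaded, in key order, on a doubly linked
list, so in-order iteration never walks the tree. The sketch below is
only an illustration inferred from this patch alone; the field names,
LLRB_ROOT and the helper bodies are assumptions, not the actual library
implementation introduced earlier in the series:

/*
 * Illustrative sketch only (assumed shape of the llrb interface, inferred
 * from how vfio_iommu_type1.c uses it above): an rb_node paired with
 * prev/next pointers that keep all nodes threaded in key order.
 */
#include <linux/kernel.h>
#include <linux/rbtree.h>

struct llrb_node {
	struct rb_node rb_node;		/* keyed lookup, as before          */
	struct llrb_node *next, *prev;	/* the two extra pointers per node  */
};

struct llrb_root {
	struct rb_root rb_root;
	struct llrb_node *first;	/* smallest key, i.e. llrb_first()  */
};

#define LLRB_ROOT (struct llrb_root){ .rb_root = RB_ROOT, .first = NULL }

static inline struct llrb_node *llrb_from_rb(struct rb_node *node)
{
	return container_of(node, struct llrb_node, rb_node);
}

#define llrb_entry(ptr, type, member) \
	container_of(llrb_from_rb(ptr), type, member)

/* O(1): no descent into the leftmost branch as rb_first() does. */
static inline struct llrb_node *llrb_first(struct llrb_root *root)
{
	return root->first;
}

/* O(1) and branchless: the property the changelog relies on. */
static inline struct llrb_node *llrb_next(struct llrb_node *node)
{
	return node->next;
}

/*
 * Caller has already done rb_link_node(); @prev is the in-order
 * predecessor remembered while descending (NULL if @node becomes the
 * smallest), exactly what vfio_link_dma() tracks above.
 */
static inline void llrb_insert(struct llrb_root *root,
			       struct llrb_node *node,
			       struct llrb_node *prev)
{
	node->prev = prev;
	node->next = prev ? prev->next : root->first;
	if (node->next)
		node->next->prev = node;
	if (prev)
		prev->next = node;
	else
		root->first = node;

	rb_insert_color(&node->rb_node, &root->rb_root);
}

static inline void llrb_erase(struct llrb_node *node, struct llrb_root *root)
{
	if (node->prev)
		node->prev->next = node->next;
	else
		root->first = node->next;
	if (node->next)
		node->next->prev = node->prev;

	rb_erase(&node->rb_node, &root->rb_root);
}

Under that reading, llrb_first()/llrb_next() reduce to plain pointer
loads, while inserts and erases only pay a few extra pointer updates next
to the usual rebalancing, which is the "two pointers per vfio_dma,
minimal runtime overhead" trade-off the changelog describes.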