From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-usb@vger.kernel.org, Bjorn Andersson, Stefano Stabellini,
	iommu@lists.linux-foundation.org, linux-remoteproc@vger.kernel.org,
	linux-s390@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	cgroups@vger.kernel.org, linux-sh@vger.kernel.org, David Howells
Subject: [PATCH v6 96/99] dma-debug: Convert to XArray
Date: Wed, 17 Jan 2018 12:22:00 -0800
Message-Id: <20180117202203.19756-97-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Matthew Wilcox

This is an unusual way to use the xarray tags.  If any other users come
up, we can add an xas_get_tags() / xas_set_tags() API, but until then
I don't want to encourage this kind of abuse.  (An illustrative sketch
of what such an API might look like follows after the patch.)

Signed-off-by: Matthew Wilcox
---
 lib/dma-debug.c | 105 +++++++++++++++++++++++++-------------------------------
 1 file changed, 46 insertions(+), 59 deletions(-)

diff --git a/lib/dma-debug.c b/lib/dma-debug.c
index fb4af570ce04..965b3837d060 100644
--- a/lib/dma-debug.c
+++ b/lib/dma-debug.c
@@ -22,7 +22,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -30,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -465,9 +465,8 @@ EXPORT_SYMBOL(debug_dma_dump_mappings);
  * At any time debug_dma_assert_idle() can be called to trigger a
  * warning if any cachelines in the given page are in the active set.
  */
-static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT);
-static DEFINE_SPINLOCK(radix_lock);
-#define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+static DEFINE_XARRAY_FLAGS(dma_active_cacheline, XA_FLAGS_LOCK_IRQ);
+#define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << XA_MAX_TAGS) - 1)
 #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
 #define CACHELINES_PER_PAGE (1 << CACHELINE_PER_PAGE_SHIFT)
 
@@ -477,37 +476,40 @@ static phys_addr_t to_cacheline_number(struct dma_debug_entry *entry)
 		(entry->offset >> L1_CACHE_SHIFT);
 }
 
-static int active_cacheline_read_overlap(phys_addr_t cln)
+static unsigned int active_cacheline_read_overlap(struct xa_state *xas)
 {
-	int overlap = 0, i;
+	unsigned int tags = 0;
+	xa_tag_t tag;
 
-	for (i = RADIX_TREE_MAX_TAGS - 1; i >= 0; i--)
-		if (radix_tree_tag_get(&dma_active_cacheline, cln, i))
-			overlap |= 1 << i;
-	return overlap;
+	for (tag = 0; tag < XA_MAX_TAGS; tag++)
+		if (xas_get_tag(xas, tag))
+			tags |= 1U << tag;
+
+	return tags;
 }
 
-static int active_cacheline_set_overlap(phys_addr_t cln, int overlap)
+static int active_cacheline_set_overlap(struct xa_state *xas, int overlap)
 {
-	int i;
+	xa_tag_t tag;
 
 	if (overlap > ACTIVE_CACHELINE_MAX_OVERLAP || overlap < 0)
 		return overlap;
 
-	for (i = RADIX_TREE_MAX_TAGS - 1; i >= 0; i--)
-		if (overlap & 1 << i)
-			radix_tree_tag_set(&dma_active_cacheline, cln, i);
+	for (tag = 0; tag < XA_MAX_TAGS; tag++) {
+		if (overlap & (1U << tag))
+			xas_set_tag(xas, tag);
 		else
-			radix_tree_tag_clear(&dma_active_cacheline, cln, i);
+			xas_clear_tag(xas, tag);
+	}
 
 	return overlap;
 }
 
-static void active_cacheline_inc_overlap(phys_addr_t cln)
+static void active_cacheline_inc_overlap(struct xa_state *xas)
 {
-	int overlap = active_cacheline_read_overlap(cln);
+	int overlap = active_cacheline_read_overlap(xas);
 
-	overlap = active_cacheline_set_overlap(cln, ++overlap);
+	overlap = active_cacheline_set_overlap(xas, ++overlap);
 
 	/* If we overflowed the overlap counter then we're potentially
 	 * leaking dma-mappings. Otherwise, if maps and unmaps are
@@ -517,21 +519,22 @@ static void active_cacheline_inc_overlap(phys_addr_t cln)
 	 */
 	WARN_ONCE(overlap > ACTIVE_CACHELINE_MAX_OVERLAP,
 		  "DMA-API: exceeded %d overlapping mappings of cacheline %pa\n",
-		  ACTIVE_CACHELINE_MAX_OVERLAP, &cln);
+		  ACTIVE_CACHELINE_MAX_OVERLAP, &xas->xa_index);
 }
 
-static int active_cacheline_dec_overlap(phys_addr_t cln)
+static int active_cacheline_dec_overlap(struct xa_state *xas)
 {
-	int overlap = active_cacheline_read_overlap(cln);
+	int overlap = active_cacheline_read_overlap(xas);
 
-	return active_cacheline_set_overlap(cln, --overlap);
+	return active_cacheline_set_overlap(xas, --overlap);
 }
 
 static int active_cacheline_insert(struct dma_debug_entry *entry)
 {
 	phys_addr_t cln = to_cacheline_number(entry);
+	XA_STATE(xas, &dma_active_cacheline, cln);
 	unsigned long flags;
-	int rc;
+	struct dma_debug_entry *exists;
 
 	/* If the device is not writing memory then we don't have any
 	 * concerns about the cpu consuming stale data. This mitigates
@@ -540,32 +543,32 @@ static int active_cacheline_insert(struct dma_debug_entry *entry)
 	if (entry->direction == DMA_TO_DEVICE)
 		return 0;
 
-	spin_lock_irqsave(&radix_lock, flags);
-	rc = radix_tree_insert(&dma_active_cacheline, cln, entry);
-	if (rc == -EEXIST)
-		active_cacheline_inc_overlap(cln);
-	spin_unlock_irqrestore(&radix_lock, flags);
+	xas_lock_irqsave(&xas, flags);
+	exists = xas_create(&xas);
+	if (exists)
+		active_cacheline_inc_overlap(&xas);
+	else
+		xas_store(&xas, entry);
+	xas_unlock_irqrestore(&xas, flags);
 
-	return rc;
+	return xas_error(&xas);
 }
 
 static void active_cacheline_remove(struct dma_debug_entry *entry)
 {
 	phys_addr_t cln = to_cacheline_number(entry);
+	XA_STATE(xas, &dma_active_cacheline, cln);
 	unsigned long flags;
 
 	/* ...mirror the insert case */
 	if (entry->direction == DMA_TO_DEVICE)
 		return;
 
-	spin_lock_irqsave(&radix_lock, flags);
-	/* since we are counting overlaps the final put of the
-	 * cacheline will occur when the overlap count is 0.
-	 * active_cacheline_dec_overlap() returns -1 in that case
-	 */
-	if (active_cacheline_dec_overlap(cln) < 0)
-		radix_tree_delete(&dma_active_cacheline, cln);
-	spin_unlock_irqrestore(&radix_lock, flags);
+	xas_lock_irqsave(&xas, flags);
+	xas_load(&xas);
+	if (active_cacheline_dec_overlap(&xas) < 0)
+		xas_store(&xas, NULL);
+	xas_unlock_irqrestore(&xas, flags);
 }
 
 /**
@@ -578,12 +581,8 @@ static void active_cacheline_remove(struct dma_debug_entry *entry)
  */
 void debug_dma_assert_idle(struct page *page)
 {
-	static struct dma_debug_entry *ents[CACHELINES_PER_PAGE];
-	struct dma_debug_entry *entry = NULL;
-	void **results = (void **) &ents;
-	unsigned int nents, i;
-	unsigned long flags;
-	phys_addr_t cln;
+	struct dma_debug_entry *entry;
+	unsigned long cln;
 
 	if (dma_debug_disabled())
 		return;
@@ -591,21 +590,9 @@ void debug_dma_assert_idle(struct page *page)
 	if (!page)
 		return;
 
-	cln = (phys_addr_t) page_to_pfn(page) << CACHELINE_PER_PAGE_SHIFT;
-	spin_lock_irqsave(&radix_lock, flags);
-	nents = radix_tree_gang_lookup(&dma_active_cacheline, results, cln,
-				       CACHELINES_PER_PAGE);
-	for (i = 0; i < nents; i++) {
-		phys_addr_t ent_cln = to_cacheline_number(ents[i]);
-
-		if (ent_cln == cln) {
-			entry = ents[i];
-			break;
-		} else if (ent_cln >= cln + CACHELINES_PER_PAGE)
-			break;
-	}
-	spin_unlock_irqrestore(&radix_lock, flags);
-
+	cln = page_to_pfn(page) << CACHELINE_PER_PAGE_SHIFT;
+	entry = xa_find(&dma_active_cacheline, &cln,
+			cln + CACHELINES_PER_PAGE - 1, XA_PRESENT);
 	if (!entry)
 		return;
 
-- 
2.15.1
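
For illustration only: the commit message above mentions a possible
xas_get_tags() / xas_set_tags() API that does not exist in this series.
A minimal sketch of such helpers, assuming only the xa_tag_t type, the
XA_MAX_TAGS constant and the xas_get_tag()/xas_set_tag()/xas_clear_tag()
calls already used in this patch, might look like this (callers would
hold the xarray lock, as dma-debug does):

/* Hypothetical helper: gather every tag set on the entry @xas points at
 * into a bitmask, mirroring active_cacheline_read_overlap() above. */
static inline unsigned int xas_get_tags(struct xa_state *xas)
{
	unsigned int tags = 0;
	xa_tag_t tag;

	for (tag = 0; tag < XA_MAX_TAGS; tag++)
		if (xas_get_tag(xas, tag))
			tags |= 1U << tag;

	return tags;
}

/* Hypothetical helper: set or clear each tag so the entry carries
 * exactly @tags, mirroring active_cacheline_set_overlap() above. */
static inline void xas_set_tags(struct xa_state *xas, unsigned int tags)
{
	xa_tag_t tag;

	for (tag = 0; tag < XA_MAX_TAGS; tag++) {
		if (tags & (1U << tag))
			xas_set_tag(xas, tag);
		else
			xas_clear_tag(xas, tag);
	}
}

This is exactly the loop dma-debug now open-codes: the per-cacheline
overlap count is packed one bit per xarray tag on that cacheline's
entry, which is why it saturates at ACTIVE_CACHELINE_MAX_OVERLAP.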