From: John Stultz
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal, Liam Mark, Chris Goldsworthy, Laura Abbott, Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC][PATCH v6 3/7] drm: ttm_pool: Rework ttm_pool_free_page to allow us to use it as a function pointer
Date: Fri, 5 Feb 2021 08:06:17 +0000
Message-Id: <20210205080621.3102035-4-john.stultz@linaro.org>
In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org>
References: <20210205080621.3102035-1-john.stultz@linaro.org>
X-Mailer: git-send-email 2.25.1

This refactors ttm_pool_free_page(): by adding extra entries to
ttm_pool_page_dat and using it for all allocations, we can simplify the
arguments that need to be passed to ttm_pool_free_page(). This is
critical for allowing the free function to be called by the shareable
drm_page_pool logic.

Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/gpu/drm/ttm/ttm_pool.c | 60 ++++++++++++++++++----------------
 1 file changed, 32 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index c0274e256be3..eca36678f967 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -44,10 +44,14 @@
 /**
  * struct ttm_pool_page_dat - Helper object for coherent DMA mappings
  *
+ * @pool: ttm_pool pointer the page was allocated by
+ * @caching: the caching value the allocated page was configured for
  * @addr: original DMA address returned for the mapping
  * @vaddr: original vaddr return for the mapping and order in the lower bits
  */
 struct ttm_pool_page_dat {
+	struct ttm_pool *pool;
+	enum ttm_caching caching;
 	dma_addr_t addr;
 	unsigned long vaddr;
 };
@@ -71,13 +75,20 @@ static struct shrinker mm_shrinker;
 
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
-					unsigned int order)
+					unsigned int order, enum ttm_caching caching)
 {
 	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
 	struct ttm_pool_page_dat *dat;
 	struct page *p;
 	void *vaddr;
 
+	dat = kmalloc(sizeof(*dat), GFP_KERNEL);
+	if (!dat)
+		return NULL;
+
+	dat->pool = pool;
+	dat->caching = caching;
+
 	/* Don't set the __GFP_COMP flag for higher order allocations.
 	 * Mapping pages directly into an userspace process and calling
 	 * put_page() on a TTM allocated page is illegal.
@@ -88,15 +99,13 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 
 	if (!pool->use_dma_alloc) {
 		p = alloc_pages(gfp_flags, order);
-		if (p)
-			p->private = order;
+		if (!p)
+			goto error_free;
+		dat->vaddr = order;
+		p->private = (unsigned long)dat;
 		return p;
 	}
 
-	dat = kmalloc(sizeof(*dat), GFP_KERNEL);
-	if (!dat)
-		return NULL;
-
 	if (order)
 		attr |= DMA_ATTR_NO_WARN;
 
@@ -123,34 +132,34 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 }
 
 /* Reset the caching and pages of size 1 << order */
-static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
-			       unsigned int order, struct page *p)
+static int ttm_pool_free_page(struct page *p, unsigned int order)
 {
 	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
-	struct ttm_pool_page_dat *dat;
+	struct ttm_pool_page_dat *dat = (void *)p->private;
 	void *vaddr;
 
 #ifdef CONFIG_X86
 	/* We don't care that set_pages_wb is inefficient here. This is only
 	 * used when we have to shrink and CPU overhead is irrelevant then.
 	 */
-	if (caching != ttm_cached && !PageHighMem(p))
+	if (dat->caching != ttm_cached && !PageHighMem(p))
 		set_pages_wb(p, 1 << order);
 #endif
 
-	if (!pool || !pool->use_dma_alloc) {
+	if (!dat->pool || !dat->pool->use_dma_alloc) {
 		__free_pages(p, order);
-		return;
+		goto out;
 	}
 
 	if (order)
 		attr |= DMA_ATTR_NO_WARN;
 
-	dat = (void *)p->private;
 	vaddr = (void *)(dat->vaddr & PAGE_MASK);
-	dma_free_attrs(pool->dev, (1UL << order) * PAGE_SIZE, vaddr, dat->addr,
+	dma_free_attrs(dat->pool->dev, (1UL << order) * PAGE_SIZE, vaddr, dat->addr,
 		       attr);
+out:
 	kfree(dat);
+	return 1 << order;
 }
 
 /* Apply a new caching to an array of pages */
@@ -264,7 +273,7 @@ static void ttm_pool_type_fini(struct ttm_pool_type *pt)
 	mutex_unlock(&shrinker_lock);
 
 	list_for_each_entry_safe(p, tmp, &pt->pages, lru)
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
+		ttm_pool_free_page(p, pt->order);
 }
 
 /* Return the pool_type to use for the given caching and order */
@@ -307,7 +316,7 @@ static unsigned int ttm_pool_shrink(void)
 
 	p = ttm_pool_type_take(pt);
 	if (p) {
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
+		ttm_pool_free_page(p, pt->order);
 		num_freed = 1 << pt->order;
 	} else {
 		num_freed = 0;
@@ -322,13 +331,9 @@ static unsigned int ttm_pool_shrink(void)
 /* Return the allocation order based for a page */
 static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
 {
-	if (pool->use_dma_alloc) {
-		struct ttm_pool_page_dat *dat = (void *)p->private;
-
-		return dat->vaddr & ~PAGE_MASK;
-	}
+	struct ttm_pool_page_dat *dat = (void *)p->private;
 
-	return p->private;
+	return dat->vaddr & ~PAGE_MASK;
 }
 
 /**
@@ -379,7 +384,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		if (p) {
 			apply_caching = true;
 		} else {
-			p = ttm_pool_alloc_page(pool, gfp_flags, order);
+			p = ttm_pool_alloc_page(pool, gfp_flags, order, tt->caching);
 			if (p && PageHighMem(p))
 				apply_caching = true;
 		}
@@ -428,13 +433,13 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	ttm_mem_global_free_page(&ttm_mem_glob, p,
 				 (1 << order) * PAGE_SIZE);
 error_free_page:
-	ttm_pool_free_page(pool, tt->caching, order, p);
+	ttm_pool_free_page(p, order);
 
 error_free_all:
 	num_pages = tt->num_pages - num_pages;
 	for (i = 0; i < num_pages; ) {
 		order = ttm_pool_page_order(pool, tt->pages[i]);
-		ttm_pool_free_page(pool, tt->caching, order, tt->pages[i]);
+		ttm_pool_free_page(tt->pages[i], order);
 		i += 1 << order;
 	}
 
@@ -470,8 +475,7 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 		if (pt)
 			ttm_pool_type_give(pt, tt->pages[i]);
 		else
-			ttm_pool_free_page(pool, tt->caching, order,
-					   tt->pages[i]);
+			ttm_pool_free_page(tt->pages[i], order);
 
 		i += num_pages;
 	}
-- 
2.25.1