From: Yisheng Xie <xieyisheng1@huawei.com>
Subject: [PATCH 2/2] staging: android: ion: Combine cache and uncache pools
Date: Wed, 7 Feb 2018 11:59:46 +0800
Message-ID: <1517975986-46917-2-git-send-email-xieyisheng1@huawei.com>
X-Mailer: git-send-email 1.7.12.4
In-Reply-To: <1517975986-46917-1-git-send-email-xieyisheng1@huawei.com>
References: <1517975986-46917-1-git-send-email-xieyisheng1@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Now that dma_map is called from the dma_buf API callbacks and explicit
caching is handled through the dma_buf sync API, cached and uncached
buffers go through the same handling flow, so the cached and uncached
page pools can be combined into a single set of pools.
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
---
 drivers/staging/android/ion/ion.c             |  5 --
 drivers/staging/android/ion/ion.h             | 13 +----
 drivers/staging/android/ion/ion_page_pool.c   |  5 +-
 drivers/staging/android/ion/ion_system_heap.c | 76 +++++----------------------
 4 files changed, 16 insertions(+), 83 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 461b193..c094be2 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -33,11 +33,6 @@
 static struct ion_device *internal_dev;
 static int heap_id;
 
-bool ion_buffer_cached(struct ion_buffer *buffer)
-{
-	return !!(buffer->flags & ION_FLAG_CACHED);
-}
-
 /* this function should only be called while dev->lock is held */
 static void ion_buffer_add(struct ion_device *dev,
 			   struct ion_buffer *buffer)
diff --git a/drivers/staging/android/ion/ion.h b/drivers/staging/android/ion/ion.h
index 1bc443f..ea08978 100644
--- a/drivers/staging/android/ion/ion.h
+++ b/drivers/staging/android/ion/ion.h
@@ -185,14 +185,6 @@ struct ion_heap {
 };
 
 /**
- * ion_buffer_cached - this ion buffer is cached
- * @buffer:		buffer
- *
- * indicates whether this ion buffer is cached
- */
-bool ion_buffer_cached(struct ion_buffer *buffer);
-
-/**
  * ion_device_add_heap - adds a heap to the ion device
  * @heap:		the heap to add
  */
@@ -302,7 +294,6 @@ size_t ion_heap_freelist_shrink(struct ion_heap *heap,
 * @gfp_mask:		gfp_mask to use from alloc
 * @order:		order of pages in the pool
 * @list:		plist node for list of pools
- * @cached:		it's cached pool or not
 *
 * Allows you to keep a pool of pre allocated pages to use from your heap.
 * Keeping a pool of pages that is ready for dma, ie any cached mapping have
@@ -312,7 +303,6 @@ size_t ion_heap_freelist_shrink(struct ion_heap *heap,
struct ion_page_pool {
	int high_count;
	int low_count;
-	bool cached;
	struct list_head high_items;
	struct list_head low_items;
	struct mutex mutex;
@@ -321,8 +311,7 @@ struct ion_page_pool {
	struct plist_node list;
};

-struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
-					   bool cached);
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order);
void ion_page_pool_destroy(struct ion_page_pool *pool);
struct page *ion_page_pool_alloc(struct ion_page_pool *pool);
void ion_page_pool_free(struct ion_page_pool *pool, struct page *page);
diff --git a/drivers/staging/android/ion/ion_page_pool.c b/drivers/staging/android/ion/ion_page_pool.c
index 6d2caf0..db8f614 100644
--- a/drivers/staging/android/ion/ion_page_pool.c
+++ b/drivers/staging/android/ion/ion_page_pool.c
@@ -123,8 +123,7 @@ int ion_page_pool_shrink(struct ion_page_pool *pool, gfp_t gfp_mask,
 	return freed;
 }
 
-struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
-					   bool cached)
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order)
 {
 	struct ion_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
 
@@ -138,8 +137,6 @@ struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
 	pool->order = order;
 	mutex_init(&pool->mutex);
 	plist_node_init(&pool->list, order);
-	if (cached)
-		pool->cached = true;
 
 	return pool;
 }
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index bc19cdd..701eb9f 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -41,31 +41,16 @@ static inline unsigned int order_to_size(int order)
 
 struct ion_system_heap {
 	struct ion_heap heap;
-	struct ion_page_pool *uncached_pools[NUM_ORDERS];
-	struct ion_page_pool *cached_pools[NUM_ORDERS];
+	struct ion_page_pool *pools[NUM_ORDERS];
 };
 
-/**
- * The page from page-pool are all zeroed before. We need do cache
- * clean for cached buffer. The uncached buffer are always non-cached
- * since it's allocated. So no need for non-cached pages.
- */
 static struct page *alloc_buffer_page(struct ion_system_heap *heap,
 				      struct ion_buffer *buffer,
 				      unsigned long order)
 {
-	bool cached = ion_buffer_cached(buffer);
-	struct ion_page_pool *pool;
-	struct page *page;
+	struct ion_page_pool *pool = heap->pools[order_to_index(order)];
 
-	if (!cached)
-		pool = heap->uncached_pools[order_to_index(order)];
-	else
-		pool = heap->cached_pools[order_to_index(order)];
-
-	page = ion_page_pool_alloc(pool);
-
-	return page;
+	return ion_page_pool_alloc(pool);
 }
 
 static void free_buffer_page(struct ion_system_heap *heap,
@@ -73,7 +58,6 @@ static void free_buffer_page(struct ion_system_heap *heap,
 {
 	struct ion_page_pool *pool;
 	unsigned int order = compound_order(page);
-	bool cached = ion_buffer_cached(buffer);
 
 	/* go to system */
 	if (buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE) {
@@ -81,10 +65,7 @@ static void free_buffer_page(struct ion_system_heap *heap,
 		return;
 	}
 
-	if (!cached)
-		pool = heap->uncached_pools[order_to_index(order)];
-	else
-		pool = heap->cached_pools[order_to_index(order)];
+	pool = heap->pools[order_to_index(order)];
 
 	ion_page_pool_free(pool, page);
 }
@@ -190,8 +171,7 @@ static void ion_system_heap_free(struct ion_buffer *buffer)
 static int ion_system_heap_shrink(struct ion_heap *heap, gfp_t gfp_mask,
 				  int nr_to_scan)
 {
-	struct ion_page_pool *uncached_pool;
-	struct ion_page_pool *cached_pool;
+	struct ion_page_pool *pool;
 	struct ion_system_heap *sys_heap;
 	int nr_total = 0;
 	int i, nr_freed;
@@ -203,26 +183,15 @@ static int ion_system_heap_shrink(struct ion_heap *heap, gfp_t gfp_mask,
 		only_scan = 1;
 
 	for (i = 0; i < NUM_ORDERS; i++) {
-		uncached_pool = sys_heap->uncached_pools[i];
-		cached_pool = sys_heap->cached_pools[i];
+		pool = sys_heap->pools[i];
 
 		if (only_scan) {
-			nr_total += ion_page_pool_shrink(uncached_pool,
+			nr_total += ion_page_pool_shrink(pool,
 							 gfp_mask,
 							 nr_to_scan);
 
-			nr_total += ion_page_pool_shrink(cached_pool,
-							 gfp_mask,
-							 nr_to_scan);
 		} else {
-			nr_freed = ion_page_pool_shrink(uncached_pool,
-							gfp_mask,
-							nr_to_scan);
-			nr_to_scan -= nr_freed;
-			nr_total += nr_freed;
-			if (nr_to_scan <= 0)
-				break;
-			nr_freed = ion_page_pool_shrink(cached_pool,
+			nr_freed = ion_page_pool_shrink(pool,
 							gfp_mask,
 							nr_to_scan);
 			nr_to_scan -= nr_freed;
@@ -253,26 +222,16 @@ static int ion_system_heap_debug_show(struct ion_heap *heap, struct seq_file *s,
 	struct ion_page_pool *pool;
 
 	for (i = 0; i < NUM_ORDERS; i++) {
-		pool = sys_heap->uncached_pools[i];
+		pool = sys_heap->pools[i];
 
-		seq_printf(s, "%d order %u highmem pages uncached %lu total\n",
+		seq_printf(s, "%d order %u highmem pages %lu total\n",
 			   pool->high_count, pool->order,
 			   (PAGE_SIZE << pool->order) * pool->high_count);
-		seq_printf(s, "%d order %u lowmem pages uncached %lu total\n",
+		seq_printf(s, "%d order %u lowmem pages %lu total\n",
 			   pool->low_count, pool->order,
 			   (PAGE_SIZE << pool->order) * pool->low_count);
 	}
 
-	for (i = 0; i < NUM_ORDERS; i++) {
-		pool = sys_heap->cached_pools[i];
-
-		seq_printf(s, "%d order %u highmem pages cached %lu total\n",
-			   pool->high_count, pool->order,
-			   (PAGE_SIZE << pool->order) * pool->high_count);
-		seq_printf(s, "%d order %u lowmem pages cached %lu total\n",
-			   pool->low_count, pool->order,
-			   (PAGE_SIZE << pool->order) * pool->low_count);
-	}
 	return 0;
 }
 
@@ -285,8 +244,7 @@ static void ion_system_heap_destroy_pools(struct ion_page_pool **pools)
 			ion_page_pool_destroy(pools[i]);
 }
 
-static int ion_system_heap_create_pools(struct ion_page_pool **pools,
-					bool cached)
+static int ion_system_heap_create_pools(struct ion_page_pool **pools)
 {
 	int i;
 	gfp_t gfp_flags = low_order_gfp_flags;
@@ -297,7 +255,7 @@ static int ion_system_heap_create_pools(struct ion_page_pool **pools,
 		if (orders[i] > 4)
 			gfp_flags = high_order_gfp_flags;
 
-		pool = ion_page_pool_create(gfp_flags, orders[i], cached);
+		pool = ion_page_pool_create(gfp_flags, orders[i]);
 		if (!pool)
 			goto err_create_pool;
 		pools[i] = pool;
@@ -320,18 +278,12 @@ static struct ion_heap *__ion_system_heap_create(void)
 
 	heap->heap.type = ION_HEAP_TYPE_SYSTEM;
 	heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
-	if (ion_system_heap_create_pools(heap->uncached_pools, false))
+	if (ion_system_heap_create_pools(heap->pools))
 		goto free_heap;
-	if (ion_system_heap_create_pools(heap->cached_pools, true))
-		goto destroy_uncached_pools;
-
 	heap->heap.debug_show = ion_system_heap_debug_show;
 
 	return &heap->heap;
 
-destroy_uncached_pools:
-	ion_system_heap_destroy_pools(heap->uncached_pools);
-
 free_heap:
 	kfree(heap);
 	return ERR_PTR(-ENOMEM);
-- 
1.7.12.4