From: Yisheng Xie
Subject: [PATCH v2 9/9] staging: android: ion: Combine cache and uncache pools
Date: Mon, 12 Feb 2018 18:43:14 +0800
Message-ID: <1518432194-41536-10-git-send-email-xieyisheng1@huawei.com>
In-Reply-To: <1518432194-41536-1-git-send-email-xieyisheng1@huawei.com>
References: <1518432194-41536-1-git-send-email-xieyisheng1@huawei.com>
X-Mailer: git-send-email 1.7.12.4
X-Mailing-List: linux-kernel@vger.kernel.org

Now that dma_map is called from the dma_buf API callbacks and explicit
cache maintenance is handled through the dma_buf sync API, cached and
uncached buffers go through the same handling flow, so the cached and
uncached page pools can be
combined.

Acked-by: Sumit Semwal
Signed-off-by: Yisheng Xie
---
 drivers/staging/android/ion/ion.c             |  5 --
 drivers/staging/android/ion/ion.h             | 13 +----
 drivers/staging/android/ion/ion_page_pool.c   |  5 +-
 drivers/staging/android/ion/ion_system_heap.c | 76 +++++----------------------
 4 files changed, 16 insertions(+), 83 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 461b193..c094be2 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -33,11 +33,6 @@
 static struct ion_device *internal_dev;
 static int heap_id;
 
-bool ion_buffer_cached(struct ion_buffer *buffer)
-{
-	return !!(buffer->flags & ION_FLAG_CACHED);
-}
-
 /* this function should only be called while dev->lock is held */
 static void ion_buffer_add(struct ion_device *dev,
			    struct ion_buffer *buffer)
diff --git a/drivers/staging/android/ion/ion.h b/drivers/staging/android/ion/ion.h
index 1bc443f..ea08978 100644
--- a/drivers/staging/android/ion/ion.h
+++ b/drivers/staging/android/ion/ion.h
@@ -185,14 +185,6 @@ struct ion_heap {
 };
 
 /**
- * ion_buffer_cached - this ion buffer is cached
- * @buffer:		buffer
- *
- * indicates whether this ion buffer is cached
- */
-bool ion_buffer_cached(struct ion_buffer *buffer);
-
-/**
  * ion_device_add_heap - adds a heap to the ion device
  * @heap:		the heap to add
  */
@@ -302,7 +294,6 @@ size_t ion_heap_freelist_shrink(struct ion_heap *heap,
  * @gfp_mask:		gfp_mask to use from alloc
  * @order:		order of pages in the pool
  * @list:		plist node for list of pools
- * @cached:		it's cached pool or not
  *
  * Allows you to keep a pool of pre allocated pages to use from your heap.
  * Keeping a pool of pages that is ready for dma, ie any cached mapping have
@@ -312,7 +303,6 @@ size_t ion_heap_freelist_shrink(struct ion_heap *heap,
 struct ion_page_pool {
 	int high_count;
 	int low_count;
-	bool cached;
 	struct list_head high_items;
 	struct list_head low_items;
 	struct mutex mutex;
@@ -321,8 +311,7 @@ struct ion_page_pool {
 	struct plist_node list;
 };
 
-struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
-					   bool cached);
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order);
 void ion_page_pool_destroy(struct ion_page_pool *pool);
 struct page *ion_page_pool_alloc(struct ion_page_pool *pool);
 void ion_page_pool_free(struct ion_page_pool *pool, struct page *page);
diff --git a/drivers/staging/android/ion/ion_page_pool.c b/drivers/staging/android/ion/ion_page_pool.c
index 6d2caf0..db8f614 100644
--- a/drivers/staging/android/ion/ion_page_pool.c
+++ b/drivers/staging/android/ion/ion_page_pool.c
@@ -123,8 +123,7 @@ int ion_page_pool_shrink(struct ion_page_pool *pool, gfp_t gfp_mask,
 	return freed;
 }
 
-struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
-					   bool cached)
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order)
 {
 	struct ion_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
 
@@ -138,8 +137,6 @@ struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order,
 	pool->order = order;
 	mutex_init(&pool->mutex);
 	plist_node_init(&pool->list, order);
-	if (cached)
-		pool->cached = true;
 
 	return pool;
 }
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index bc19cdd..701eb9f 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -41,31 +41,16 @@ static inline unsigned int order_to_size(int order)
 
 struct ion_system_heap {
 	struct ion_heap heap;
-	struct ion_page_pool *uncached_pools[NUM_ORDERS];
-	struct ion_page_pool *cached_pools[NUM_ORDERS];
+	struct ion_page_pool *pools[NUM_ORDERS];
 };
 
-/**
- * The page from page-pool are all zeroed before. We need do cache
- * clean for cached buffer. The uncached buffer are always non-cached
- * since it's allocated. So no need for non-cached pages.
- */
 static struct page *alloc_buffer_page(struct ion_system_heap *heap,
 				      struct ion_buffer *buffer,
 				      unsigned long order)
 {
-	bool cached = ion_buffer_cached(buffer);
-	struct ion_page_pool *pool;
-	struct page *page;
+	struct ion_page_pool *pool = heap->pools[order_to_index(order)];
 
-	if (!cached)
-		pool = heap->uncached_pools[order_to_index(order)];
-	else
-		pool = heap->cached_pools[order_to_index(order)];
-
-	page = ion_page_pool_alloc(pool);
-
-	return page;
+	return ion_page_pool_alloc(pool);
 }
 
 static void free_buffer_page(struct ion_system_heap *heap,
@@ -73,7 +58,6 @@ static void free_buffer_page(struct ion_system_heap *heap,
 {
 	struct ion_page_pool *pool;
 	unsigned int order = compound_order(page);
-	bool cached = ion_buffer_cached(buffer);
 
 	/* go to system */
 	if (buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE) {
@@ -81,10 +65,7 @@ static void free_buffer_page(struct ion_system_heap *heap,
 		return;
 	}
 
-	if (!cached)
-		pool = heap->uncached_pools[order_to_index(order)];
-	else
-		pool = heap->cached_pools[order_to_index(order)];
+	pool = heap->pools[order_to_index(order)];
 
 	ion_page_pool_free(pool, page);
 }
@@ -190,8 +171,7 @@ static void ion_system_heap_free(struct ion_buffer *buffer)
 static int ion_system_heap_shrink(struct ion_heap *heap, gfp_t gfp_mask,
 				  int nr_to_scan)
 {
-	struct ion_page_pool *uncached_pool;
-	struct ion_page_pool *cached_pool;
+	struct ion_page_pool *pool;
 	struct ion_system_heap *sys_heap;
 	int nr_total = 0;
 	int i, nr_freed;
@@ -203,26 +183,15 @@ static int ion_system_heap_shrink(struct ion_heap *heap, gfp_t gfp_mask,
 		only_scan = 1;
 
 	for (i = 0; i < NUM_ORDERS; i++) {
-		uncached_pool = sys_heap->uncached_pools[i];
-		cached_pool = sys_heap->cached_pools[i];
+		pool = sys_heap->pools[i];
 
 		if (only_scan) {
-			nr_total += ion_page_pool_shrink(uncached_pool,
+			nr_total += ion_page_pool_shrink(pool,
 							 gfp_mask,
 							 nr_to_scan);
-			nr_total += ion_page_pool_shrink(cached_pool,
-							 gfp_mask,
-							 nr_to_scan);
 		} else {
-			nr_freed = ion_page_pool_shrink(uncached_pool,
-							gfp_mask,
-							nr_to_scan);
-			nr_to_scan -= nr_freed;
-			nr_total += nr_freed;
-			if (nr_to_scan <= 0)
-				break;
-			nr_freed = ion_page_pool_shrink(cached_pool,
+			nr_freed = ion_page_pool_shrink(pool,
 							gfp_mask,
 							nr_to_scan);
 			nr_to_scan -= nr_freed;
@@ -253,26 +222,16 @@ static int ion_system_heap_debug_show(struct ion_heap *heap, struct seq_file *s,
 	struct ion_page_pool *pool;
 
 	for (i = 0; i < NUM_ORDERS; i++) {
-		pool = sys_heap->uncached_pools[i];
+		pool = sys_heap->pools[i];
 
-		seq_printf(s, "%d order %u highmem pages uncached %lu total\n",
+		seq_printf(s, "%d order %u highmem pages %lu total\n",
 			   pool->high_count, pool->order,
 			   (PAGE_SIZE << pool->order) * pool->high_count);
-		seq_printf(s, "%d order %u lowmem pages uncached %lu total\n",
+		seq_printf(s, "%d order %u lowmem pages %lu total\n",
 			   pool->low_count, pool->order,
 			   (PAGE_SIZE << pool->order) * pool->low_count);
 	}
 
-	for (i = 0; i < NUM_ORDERS; i++) {
-		pool = sys_heap->cached_pools[i];
-
-		seq_printf(s, "%d order %u highmem pages cached %lu total\n",
-			   pool->high_count, pool->order,
-			   (PAGE_SIZE << pool->order) * pool->high_count);
-		seq_printf(s, "%d order %u lowmem pages cached %lu total\n",
-			   pool->low_count, pool->order,
-			   (PAGE_SIZE << pool->order) * pool->low_count);
-	}
 	return 0;
 }
 
@@ -285,8 +244,7 @@ static void ion_system_heap_destroy_pools(struct ion_page_pool **pools)
 			ion_page_pool_destroy(pools[i]);
 }
 
-static int ion_system_heap_create_pools(struct ion_page_pool **pools,
-					bool cached)
+static int ion_system_heap_create_pools(struct ion_page_pool **pools)
 {
 	int i;
 	gfp_t gfp_flags = low_order_gfp_flags;
@@ -297,7 +255,7 @@ static int ion_system_heap_create_pools(struct ion_page_pool **pools,
 		if (orders[i] > 4)
 			gfp_flags = high_order_gfp_flags;
 
-		pool = ion_page_pool_create(gfp_flags, orders[i], cached);
+		pool = ion_page_pool_create(gfp_flags, orders[i]);
 		if (!pool)
 			goto err_create_pool;
 		pools[i] = pool;
@@ -320,18 +278,12 @@ static struct ion_heap *__ion_system_heap_create(void)
 	heap->heap.type = ION_HEAP_TYPE_SYSTEM;
 	heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
 
-	if (ion_system_heap_create_pools(heap->uncached_pools, false))
+	if (ion_system_heap_create_pools(heap->pools))
 		goto free_heap;
 
-	if (ion_system_heap_create_pools(heap->cached_pools, true))
-		goto destroy_uncached_pools;
-
 	heap->heap.debug_show = ion_system_heap_debug_show;
 
 	return &heap->heap;
 
-destroy_uncached_pools:
-	ion_system_heap_destroy_pools(heap->uncached_pools);
-
 free_heap:
 	kfree(heap);
 	return ERR_PTR(-ENOMEM);
-- 
1.7.12.4