From: Michał Nazarewicz
To: Laura Abbott
Cc: Weijie Yang, iamjoonsoo.kim@lge.com, Marek Szyprowski, Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Sat, 6 Jun 2015 02:19:17 +0900
Subject: Re: [PATCH] cma: allow concurrent cma pages allocation for multi-cma areas

On Fri, Jun 05 2015, Laura Abbott wrote:
> On 06/05/2015 01:01 AM, Weijie Yang wrote:
>> Currently we have to hold the single cma_mutex when allocating cma pages,
>> which is fine when there is only one cma area in the system.
>> However, when there are several cma areas, as on our Android smart
>> phones, the single cma_mutex prevents concurrent cma page allocation.
>>
>> This patch removes the single cma_mutex and uses a per-cma-area
>> alloc_lock.  This allows concurrent cma page allocation for different
>> cma areas while still protecting access to the same pageblocks.
>>
>> Signed-off-by: Weijie Yang
>
> Last I knew, alloc_contig_range needed to be serialized, which is why we
> still had the global CMA mutex: https://lkml.org/lkml/2014/2/18/462
>
> So NAK unless something has changed to allow this.

This patch should be fine.
The change you pointed to would have got rid of any serialisation around
alloc_contig_range, which is dangerous; but since CMA regions are
pageblock-aligned:

	/*
	 * Sanitise input arguments.
	 * Pages both ends in CMA area could be merged into adjacent unmovable
	 * migratetype page by page allocator's buddy algorithm. In the case,
	 * you couldn't get a contiguous memory, which is not what we want.
	 */
	alignment = max(alignment, (phys_addr_t)PAGE_SIZE <<
			max(MAX_ORDER - 1, pageblock_order));
	base = ALIGN(base, alignment);
	size = ALIGN(size, alignment);
	limit &= ~(alignment - 1);

synchronising allocations within each area should work fine.

>> ---
>>  mm/cma.c | 6 +++---
>>  mm/cma.h | 1 +
>>  2 files changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/cma.c b/mm/cma.c
>> index 3a7a67b..eaf1afe 100644
>> --- a/mm/cma.c
>> +++ b/mm/cma.c
>> @@ -41,7 +41,6 @@
>>
>>  struct cma cma_areas[MAX_CMA_AREAS];
>>  unsigned cma_area_count;
>> -static DEFINE_MUTEX(cma_mutex);
>>
>>  phys_addr_t cma_get_base(const struct cma *cma)
>>  {
>> @@ -128,6 +127,7 @@ static int __init cma_activate_area(struct cma *cma)
>>  	} while (--i);
>>
>>  	mutex_init(&cma->lock);

Since we now have two mutexes in the structure, consider renaming this
one to bitmap_lock.
>> +	mutex_init(&cma->alloc_lock);
>>
>>  #ifdef CONFIG_CMA_DEBUGFS
>>  	INIT_HLIST_HEAD(&cma->mem_head);
>> @@ -398,9 +398,9 @@ struct page *cma_alloc(struct cma *cma, unsigned int count, unsigned int align)
>>  		mutex_unlock(&cma->lock);
>>
>>  		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
>> -		mutex_lock(&cma_mutex);
>> +		mutex_lock(&cma->alloc_lock);
>>  		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
>> -		mutex_unlock(&cma_mutex);
>> +		mutex_unlock(&cma->alloc_lock);
>>  		if (ret == 0) {
>>  			page = pfn_to_page(pfn);
>>  			break;
>> diff --git a/mm/cma.h b/mm/cma.h
>> index 1132d73..2084c9f 100644
>> --- a/mm/cma.h
>> +++ b/mm/cma.h
>> @@ -7,6 +7,7 @@ struct cma {
>>  	unsigned long *bitmap;
>>  	unsigned int order_per_bit;	/* Order of pages represented by one bit */
>>  	struct mutex lock;
>> +	struct mutex alloc_lock;
>>  #ifdef CONFIG_CMA_DEBUGFS
>>  	struct hlist_head mem_head;
>>  	spinlock_t mem_head_lock;

-- 
Best regards,                                          _     _
.o. | Liege of Serenely Enlightened Majesty of       o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz      (o o)
ooo +------ooO--(_)--Ooo--