From: Tony Battersby <tonyb@cybernetics.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: iommu@lists.linux-foundation.org, kernel-team@fb.com, Matthew Wilcox,
 Keith Busch, Andy Shevchenko, Robin Murphy, Tony Lindgren
Date: Tue, 31 May 2022 14:23:02 -0400
Subject: [PATCH 09/10] dmapool: improve scalability of dma_pool_alloc
References: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com>
In-Reply-To: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com>

dma_pool_alloc() scales poorly when allocating a large number of pages
because it does a linear scan of all previously-allocated pages before
allocating a new one.  Improve its scalability by maintaining a separate
list of pages that have free blocks ready to (re)allocate.
In big O notation, this improves the algorithm from O(n^2) to O(n).

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
---
 mm/dmapool.c | 27 +++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 58c11dcaa4e4..b3dd2ace0d2a 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -17,6 +17,10 @@
  * least 'size' bytes.  Free blocks are tracked in an unsorted singly-linked
  * list of free blocks within the page.  Used blocks aren't tracked, but we
  * keep a count of how many are currently allocated from each page.
+ *
+ * The avail_page_list keeps track of pages that have one or more free blocks
+ * available to (re)allocate.  Pages are moved in and out of avail_page_list
+ * as their blocks are allocated and freed.
  */
 
 #include
@@ -42,6 +46,7 @@ struct dma_pool {	/* the pool */
 	struct list_head page_list;
+	struct list_head avail_page_list;
 	spinlock_t lock;
 	unsigned int size;
 	struct device *dev;
@@ -54,6 +59,7 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
 	struct list_head page_list;
+	struct list_head avail_page_link;
 	void *vaddr;
 	dma_addr_t dma;
 	unsigned int in_use;
@@ -164,6 +170,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 	retval->dev = dev;
 
 	INIT_LIST_HEAD(&retval->page_list);
+	INIT_LIST_HEAD(&retval->avail_page_list);
 	spin_lock_init(&retval->lock);
 	retval->size = size;
 	retval->boundary = boundary;
@@ -270,6 +277,7 @@ static void pool_free_page(struct dma_pool *pool,
 	}
 
 	list_del(&page->page_list);
+	list_del(&page->avail_page_link);
 	kfree(page);
 }
@@ -330,10 +338,11 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	might_alloc(mem_flags);
 
 	spin_lock_irqsave(&pool->lock, flags);
-	list_for_each_entry(page, &pool->page_list, page_list) {
-		if (page->offset < pool->allocation)
-			goto ready;
-	}
+	page = list_first_entry_or_null(&pool->avail_page_list,
+					struct dma_page,
+					avail_page_link);
+	if (page)
+		goto ready;
 
 	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
 	spin_unlock_irqrestore(&pool->lock, flags);
@@ -345,10 +354,13 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 
 	spin_lock_irqsave(&pool->lock, flags);
 	list_add(&page->page_list, &pool->page_list);
+	list_add(&page->avail_page_link, &pool->avail_page_list);
  ready:
 	page->in_use++;
 	offset = page->offset;
 	page->offset = *(int *)(page->vaddr + offset);
+	if (page->offset >= pool->allocation)
+		list_del_init(&page->avail_page_link);
 	retval = offset + page->vaddr;
 	*handle = offset + page->dma;
 #ifdef DMAPOOL_DEBUG
@@ -470,6 +482,13 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 	memset(vaddr, 0, pool->size);
 #endif
 
+	/*
+	 * list_empty() on the page tests if the page is already linked into
+	 * avail_page_list to avoid adding it more than once.
+	 */
+	if (list_empty(&page->avail_page_link))
+		list_add(&page->avail_page_link, &pool->avail_page_list);
+
 	page->in_use--;
 	*(int *)vaddr = page->offset;
 	page->offset = offset;
-- 
2.25.1
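For readers outside the kernel tree, the bookkeeping in this patch can be
modeled in plain user-space C.  This is a minimal sketch, not the patched
dmapool code: `struct page`, `struct pool`, the `on_avail` flag, and the
NULL-terminated singly-linked avail list are simplified stand-ins for the
kernel's `struct dma_page`, `struct dma_pool`, the `list_empty()` membership
test, and the `list_head` API.

```c
#include <assert.h>
#include <stddef.h>

#define BLOCKS_PER_PAGE 4

struct page {
	struct page *avail_next; /* next page with free blocks, or NULL */
	int on_avail;            /* stand-in for !list_empty(&avail_page_link) */
	int free_blocks;
};

struct pool {
	struct page *avail_head; /* pages with at least one free block */
};

/*
 * O(1): take a block from the first available page, like the patched
 * dma_pool_alloc() taking list_first_entry_or_null(&avail_page_list)
 * instead of scanning every page in the pool.
 */
static struct page *pool_alloc(struct pool *p)
{
	struct page *pg = p->avail_head;

	if (!pg)
		return NULL;
	pg->free_blocks--;
	if (pg->free_blocks == 0) {
		/* page now full: drop it from the avail list */
		p->avail_head = pg->avail_next;
		pg->on_avail = 0;
	}
	return pg;
}

/*
 * O(1): return a block to its page; relink the page only if it is not
 * already on the avail list, mirroring the list_empty() check in the
 * patched dma_pool_free().
 */
static void pool_free(struct pool *p, struct page *pg)
{
	pg->free_blocks++;
	if (!pg->on_avail) {
		pg->avail_next = p->avail_head;
		p->avail_head = pg;
		pg->on_avail = 1;
	}
}
```

The point of the design is that both paths touch only the head of the avail
list, so the per-allocation cost no longer grows with the number of pages
already owned by the pool.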