Date: Sun, 1 Mar 2020 16:05:18 -0800 (PST)
From: David Rientjes
To: Christoph Hellwig, Tom Lendacky
Cc: "Singh, Brijesh", "Grimm, Jon", Joerg Roedel, baekhw@google.com,
    linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org
Subject: [rfc 4/6] dma-remap: dynamically expanding atomic pools

When an atomic pool becomes fully depleted because it is now relied upon
for all non-blocking allocations through the DMA API, allow background
expansion of each pool by a kworker.

When an atomic pool has less than the default size of memory left, kick
off a kworker to dynamically expand the pool in the background.  The pool
is doubled in size.  This allows the default size to be kept quite low
when one or more of the atomic pools is not used.

Also switch over some node ids to the more appropriate NUMA_NO_NODE.
Signed-off-by: David Rientjes
---
 kernel/dma/remap.c | 79 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 58 insertions(+), 21 deletions(-)

diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 struct page **dma_common_find_pages(void *cpu_addr)
 {
@@ -104,7 +105,10 @@ static struct gen_pool *atomic_pool_dma32 __ro_after_init;
 static struct gen_pool *atomic_pool_normal __ro_after_init;
 
 #define DEFAULT_DMA_COHERENT_POOL_SIZE  SZ_256K
-static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE;
+static size_t atomic_pool_size = DEFAULT_DMA_COHERENT_POOL_SIZE;
+
+/* Dynamic background expansion when the atomic pool is near capacity */
+struct work_struct atomic_pool_work;
 
 static int __init early_coherent_pool(char *p)
 {
@@ -113,14 +117,14 @@ static int __init early_coherent_pool(char *p)
 }
 early_param("coherent_pool", early_coherent_pool);
 
-static int __init __dma_atomic_pool_init(struct gen_pool **pool,
-					 size_t pool_size, gfp_t gfp)
+static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
+			      gfp_t gfp)
 {
-	const unsigned int order = get_order(pool_size);
 	const unsigned long nr_pages = pool_size >> PAGE_SHIFT;
+	const unsigned int order = get_order(pool_size);
 	struct page *page;
 	void *addr;
-	int ret;
+	int ret = -ENOMEM;
 
 	if (dev_get_cma_area(NULL))
 		page = dma_alloc_from_contiguous(NULL, nr_pages, order, false);
@@ -131,38 +135,67 @@ static int __init __dma_atomic_pool_init(struct gen_pool **pool,
 
 	arch_dma_prep_coherent(page, pool_size);
 
-	*pool = gen_pool_create(PAGE_SHIFT, -1);
-	if (!*pool)
-		goto free_page;
-
 	addr = dma_common_contiguous_remap(page, pool_size,
 					   pgprot_dmacoherent(PAGE_KERNEL),
 					   __builtin_return_address(0));
 	if (!addr)
-		goto destroy_genpool;
+		goto free_page;
 
-	ret = gen_pool_add_virt(*pool, (unsigned long)addr, page_to_phys(page),
-				pool_size, -1);
+	ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page),
+				pool_size, NUMA_NO_NODE);
 	if (ret)
 		goto remove_mapping;
-	gen_pool_set_algo(*pool, gen_pool_first_fit_order_align, NULL);
 
-	pr_info("DMA: preallocated %zu KiB %pGg pool for atomic allocations\n",
-		pool_size >> 10, &gfp);
 	return 0;
 
 remove_mapping:
 	dma_common_free_remap(addr, pool_size);
-destroy_genpool:
-	gen_pool_destroy(*pool);
-	*pool = NULL;
 free_page:
 	if (!dma_release_from_contiguous(NULL, page, nr_pages))
 		__free_pages(page, order);
 out:
-	pr_err("DMA: failed to allocate %zu KiB %pGg pool for atomic allocation\n",
-	       atomic_pool_size >> 10, &gfp);
-	return -ENOMEM;
+	return ret;
+}
+
+static void atomic_pool_resize(struct gen_pool *pool, gfp_t gfp)
+{
+	if (pool && gen_pool_avail(pool) < atomic_pool_size)
+		atomic_pool_expand(pool, gen_pool_size(pool), gfp);
+}
+
+static void atomic_pool_work_fn(struct work_struct *work)
+{
+	if (IS_ENABLED(CONFIG_ZONE_DMA))
+		atomic_pool_resize(atomic_pool, GFP_DMA);
+	if (IS_ENABLED(CONFIG_ZONE_DMA32))
+		atomic_pool_resize(atomic_pool_dma32, GFP_DMA32);
+	atomic_pool_resize(atomic_pool_normal, GFP_KERNEL);
+}
+
+static int __init __dma_atomic_pool_init(struct gen_pool **pool,
+					 size_t pool_size, gfp_t gfp)
+{
+	int ret;
+
+	*pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+	if (!*pool)
+		return -ENOMEM;
+
+	gen_pool_set_algo(*pool, gen_pool_first_fit_order_align, NULL);
+
+	ret = atomic_pool_expand(*pool, pool_size, gfp);
+	if (ret) {
+		gen_pool_destroy(*pool);
+		*pool = NULL;
+		pr_err("DMA: failed to allocate %zu KiB %pGg pool for atomic allocation\n",
+		       atomic_pool_size >> 10, &gfp);
+		return ret;
+	}
+
+	pr_info("DMA: preallocated %zu KiB %pGg pool for atomic allocations\n",
+		pool_size >> 10, &gfp);
+	return 0;
 }
 
 static int __init dma_atomic_pool_init(void)
@@ -170,6 +203,8 @@ static int __init dma_atomic_pool_init(void)
 	int ret = 0;
 	int err;
 
+	INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
+
 	ret = __dma_atomic_pool_init(&atomic_pool_normal, atomic_pool_size,
 				     GFP_KERNEL);
 	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
@@ -231,6 +266,8 @@ void *dma_alloc_from_pool(struct device *dev, size_t size,
 		ptr = (void *)val;
 		memset(ptr, 0, size);
 	}
+	if (gen_pool_avail(pool) < atomic_pool_size)
+		schedule_work(&atomic_pool_work);
 
 	return ptr;
 }