Date: Tue, 16 Jan 2024 09:23:47 +0100
X-Mailing-List: linux-kernel@vger.kernel.org
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2] riscv: mm: still create
 swiotlb buffer for kmalloc() bouncing if required
To: Jisheng Zhang, Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
References: <20231202134224.4029-1-jszhang@kernel.org>
From: Alexandre Ghiti
In-Reply-To: <20231202134224.4029-1-jszhang@kernel.org>

Hi Jisheng,

On 02/12/2023 14:42, Jisheng Zhang wrote:
> After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC
> for !dma_coherent"), for non-coherent platforms with less than 4GB
> memory, we rely on users to pass "swiotlb=mmnn,force" kernel parameters
> to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go
> further: If no bouncing needed for ZONE_DMA, let kernel automatically
> allocate 1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing on
> non-coherent platforms, so that no need to pass "swiotlb=mmnn,force"
> any more.

IIUC, DMA_BOUNCE_UNALIGNED_KMALLOC is enabled for all non-coherent
platforms, even those with less than 4GB of memory. But the DMA bouncing
(which is necessary to enable kmalloc-8/16/32/96...) was not enabled
unless the user specified "swiotlb=mmnn,force" on the kernel command
line.

But does that mean that if the user did not specify "swiotlb=mmnn,force",
the kmalloc-8/16/32/96 caches were enabled anyway and the behaviour was
wrong (for lack of DMA bouncing)?

I'm trying to understand whether this is a fix or an enhancement.

Thanks,

Alex

> The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing"
> is taken from arm64. Users can still force smaller swiotlb buffer by
> passing "swiotlb=mmnn".
>
> Signed-off-by: Jisheng Zhang
> ---
>
> since v2:
>  - fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n
>
>  arch/riscv/include/asm/cache.h |  2 +-
>  arch/riscv/mm/init.c           | 16 +++++++++++++++-
>  2 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
> index 2174fe7bac9a..570e9d8acad1 100644
> --- a/arch/riscv/include/asm/cache.h
> +++ b/arch/riscv/include/asm/cache.h
> @@ -26,8 +26,8 @@
>  
>  #ifndef __ASSEMBLY__
>  
> -#ifdef CONFIG_RISCV_DMA_NONCOHERENT
>  extern int dma_cache_alignment;
> +#ifdef CONFIG_RISCV_DMA_NONCOHERENT
>  #define dma_get_cache_alignment dma_get_cache_alignment
>  static inline int dma_get_cache_alignment(void)
>  {
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index 2e011cbddf3a..cbcb9918f721 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -162,11 +162,25 @@ static void print_vm_layout(void) { }
>  
>  void __init mem_init(void)
>  {
> +	bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit);
>  #ifdef CONFIG_FLATMEM
>  	BUG_ON(!mem_map);
>  #endif /* CONFIG_FLATMEM */
>  
> -	swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE);
> +	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb &&
> +	    dma_cache_alignment != 1) {
> +		/*
> +		 * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb
> +		 * buffer per 1GB of RAM for kmalloc() bouncing on
> +		 * non-coherent platforms.
> +		 */
> +		unsigned long size =
> +			DIV_ROUND_UP(memblock_phys_mem_size(), 1024);
> +		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
> +		swiotlb = true;
> +	}
> +
> +	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
>  	memblock_free_all();
>  
>  	print_vm_layout();