Date: Tue, 1 Nov 2022 12:09:08 +0100
From: Christoph Hellwig
To: Alexey Kardashevskiy
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	Robin Murphy, Marek Szyprowski, Christoph Hellwig, Ashish Kalra,
	Pankaj Gupta, Tom Lendacky
Subject: Re: [PATCH kernel v2] swiotlb: Half the size if allocation failed
Message-ID: <20221101110908.GA14146@lst.de>
In-Reply-To: <20221031081327.47089-1-aik@amd.com>
References: <20221031081327.47089-1-aik@amd.com>
Thanks.  I've applied this with minor edits (see below).

---
From 8d58aa484920c4f9be4834a7aeb446cdced21a37 Mon Sep 17 00:00:00 2001
From: Alexey Kardashevskiy
Date: Mon, 31 Oct 2022 19:13:27 +1100
Subject: swiotlb: reduce the swiotlb buffer size on allocation failure

At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB,
or 1GB, whichever is less. However, it is possible that there is no block
big enough in low memory, which makes the SWIOTLB allocation fail, and
the kernel then continues without DMA. In such a case a VM hangs on DMA.

This moves the alloc+remap code to a helper and calls it from a loop
where the size is halved on each iteration.

This also updates default_nslabs on successful allocation; not doing so
looks like an oversight, as it should have broken callers of
swiotlb_size_or_default().

Signed-off-by: Alexey Kardashevskiy
Reviewed-by: Pankaj Gupta
Signed-off-by: Christoph Hellwig
---
 kernel/dma/swiotlb.c | 63 +++++++++++++++++++++++++++-----------------
 1 file changed, 39 insertions(+), 24 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 339a990554e7f..a34c38bbe28f1 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -300,6 +300,37 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
 	return;
 }
 
+static void *swiotlb_memblock_alloc(unsigned long nslabs, unsigned int flags,
+		int (*remap)(void *tlb, unsigned long nslabs))
+{
+	size_t bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
+	void *tlb;
+
+	/*
+	 * By default allocate the bounce buffer memory from low memory, but
+	 * allow to pick a location everywhere for hypervisors with guest
+	 * memory encryption.
+	 */
+	if (flags & SWIOTLB_ANY)
+		tlb = memblock_alloc(bytes, PAGE_SIZE);
+	else
+		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+
+	if (!tlb) {
+		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
+				__func__, bytes);
+		return NULL;
+	}
+
+	if (remap && remap(tlb, nslabs) < 0) {
+		memblock_free(tlb, PAGE_ALIGN(bytes));
+		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
+		return NULL;
+	}
+
+	return tlb;
+}
+
 /*
  * Statically reserve bounce buffer space and initialize bounce buffer data
  * structures for the software IO TLB used to implement the DMA API.
@@ -310,7 +341,6 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 	struct io_tlb_mem *mem = &io_tlb_default_mem;
 	unsigned long nslabs;
 	size_t alloc_size;
-	size_t bytes;
 	void *tlb;
 
 	if (!addressing_limit && !swiotlb_force_bounce)
@@ -326,31 +356,16 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 		swiotlb_adjust_nareas(num_possible_cpus());
 
 	nslabs = default_nslabs;
-	/*
-	 * By default allocate the bounce buffer memory from low memory, but
-	 * allow to pick a location everywhere for hypervisors with guest
-	 * memory encryption.
-	 */
-retry:
-	bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
-	if (flags & SWIOTLB_ANY)
-		tlb = memblock_alloc(bytes, PAGE_SIZE);
-	else
-		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
-	if (!tlb) {
-		pr_warn("%s: failed to allocate tlb structure\n", __func__);
-		return;
-	}
-
-	if (remap && remap(tlb, nslabs) < 0) {
-		memblock_free(tlb, PAGE_ALIGN(bytes));
-
-		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
-		if (nslabs >= IO_TLB_MIN_SLABS)
-			goto retry;
-
-		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
-		return;
+	while ((tlb = swiotlb_memblock_alloc(nslabs, flags, remap)) == NULL) {
+		if (nslabs <= IO_TLB_MIN_SLABS)
+			return;
+		nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
+	}
+
+	if (default_nslabs != nslabs) {
+		pr_info("SWIOTLB bounce buffer size adjusted %lu -> %lu slabs",
+			default_nslabs, nslabs);
+		default_nslabs = nslabs;
 	}
 
 	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
-- 
2.30.2