Date: Tue, 7 Jul 2020 08:55:49 +0200
From: Christoph Hellwig
To: Nicolas Saenz Julienne
Cc: David Rientjes, Robin Murphy, Jeremy Linton,
    "linux-arm-kernel@lists.infradead.org", linux-mm@kvack.org,
    "linux-usb@vger.kernel.org", Christoph Hellwig,
    "linux-kernel@vger.kernel.org", linux-rpi-kernel
Subject: Re: [BUG] XHCI getting ZONE_DMA32 memory > than its bus_dma_limit
Message-ID: <20200707065549.GA23760@lst.de>
References: <34619bdf-6527-ae82-7e4d-e2ea7c67ed56@arm.com>
 <32ee3bf222b1966caa98b67a9cec8712817a4b52.camel@suse.de>
In-Reply-To: <32ee3bf222b1966caa98b67a9cec8712817a4b52.camel@suse.de>

On Mon, Jul 06, 2020 at 04:09:36PM +0200, Nicolas Saenz Julienne wrote:
> On Sun, 2020-07-05 at 16:41 -0700, David Rientjes wrote:
> > On Fri, 3 Jul 2020, Robin Murphy wrote:
> > > Or perhaps just get rid of atomic_pool_dma32 (and allocate
> > > atomic_pool_dma from ZONE_DMA32 if !ZONE_DMA). That should make it
> > > fall pretty much back in line while still preserving the potential
> > > benefit of the kernel pool for non-address-constrained devices.
> >
> > I assume it depends on how often we have devices where
> > __dma_direct_alloc_pages() behavior is required, i.e. what requires the
> > dma_coherent_ok() checks and altering of the gfp flags to get memory
> > that works.
> >
> > Is the idea that getting rid of atomic_pool_dma32 would use GFP_KERNEL
> > (and atomic_pool_kernel) as the default policy here? That doesn't do
> > any dma_coherent_ok() checks, so dma_direct_alloc_pages() would return
> > from ZONE_NORMAL without a < 3G check?
>
> IIUC this is not what Robin proposes.
>
> The idea is to have only one DMA pool, located in ZONE_DMA if enabled and
> in ZONE_DMA32 otherwise. This way you're always sure the memory is going
> to be good enough for any device, while maintaining the benefits of
> atomic_pool_kernel.

That is how I understood Robin's proposal as well, and I think it is the
right thing to do.

> > It *seems* like we want to check if dma_coherent_ok() succeeds for ret
> > in dma_direct_alloc_pages() when allocating from the atomic pool and,
> > based on criteria that allow fallback, just fall into
> > __dma_direct_alloc_pages()?
>
> I suspect I don't have enough perspective here, but isn't that defeating
> the point of having an atomic pool? Wouldn't that generate big latency
> spikes? I can see how audio transfers over USB could be affected by this
> specifically; IIRC those are allocated atomically and have timing
> constraints.
>
> That said, if Robin's solution works for you, I don't mind having a go
> at it.

We can't just fall back to __dma_direct_alloc_pages() when allocation from
the atomic pool fails: the atomic pool exists to provide allocations that
would otherwise require sleeping to callers that can't sleep.
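
To make the single-pool idea concrete, here is a rough, untested sketch of
how the pool selection could look. Only atomic_pool_dma, atomic_pool_kernel
and the CONFIG_ZONE_*/GFP_* symbols below correspond to existing kernel
code; the helper names and their exact placement in kernel/dma/pool.c are
assumptions made purely for illustration:

/*
 * Sketch only, not actual kernel/dma/pool.c code.  Illustrates the
 * "one constrained pool" idea: back the atomic DMA pool with ZONE_DMA
 * when that zone is configured, otherwise with ZONE_DMA32, and keep
 * the plain GFP_KERNEL pool for devices without addressing limits.
 */

/* Pick the zone backing the single constrained atomic pool. */
static gfp_t dma_atomic_pool_gfp(void)
{
	if (IS_ENABLED(CONFIG_ZONE_DMA))
		return GFP_DMA;
	if (IS_ENABLED(CONFIG_ZONE_DMA32))
		return GFP_DMA32;
	return GFP_KERNEL;
}

/*
 * Devices without addressing restrictions keep using the plain
 * GFP_KERNEL pool; everyone else gets the constrained pool, which by
 * construction is low enough for any mask.
 * (dma_atomic_pool_for_dev is a hypothetical helper name.)
 */
static struct gen_pool *dma_atomic_pool_for_dev(struct device *dev)
{
	u64 limit = min_not_zero(dev->coherent_dma_mask,
				 dev->bus_dma_limit);

	if (limit == DMA_BIT_MASK(64))
		return atomic_pool_kernel;
	return atomic_pool_dma;
}

The point of the sketch is that the constrained pool is always backed by
the most restrictive zone that is actually configured, so a single
device-to-pool decision suffices and no per-allocation dma_coherent_ok()
retry or fallback into __dma_direct_alloc_pages() is needed.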