Date: Sun, 5 Jul 2020 16:41:59 -0700 (PDT)
From: David Rientjes
To: Robin Murphy
Cc: Nicolas Saenz Julienne, Jeremy Linton, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-usb@vger.kernel.org, Christoph Hellwig,
    linux-kernel@vger.kernel.org, linux-rpi-kernel
Subject: Re: [BUG] XHCI getting ZONE_DMA32 memory > than its bus_dma_limit
References: <34619bdf-6527-ae82-7e4d-e2ea7c67ed56@arm.com>

On Fri, 3 Jul 2020, Robin Murphy wrote:

> > Just for the record, the offending commit is: c84dc6e68a1d2 ("dma-pool:
> > add additional coherent pools to map to gfp mask").
> >
> > On Thu, 2020-07-02 at 12:49 -0500, Jeremy Linton wrote:
> > > Hi,
> > >
> > > Using 5.8-rc3:
> > >
> > > The rpi4 has a 3G dev->bus_dma_limit on its XHCI controller. With a usb3
> > > hub, plus a few devices plugged in, devices will randomly fail
> > > operations. This appears to be because xhci_alloc_container_ctx() is
> > > getting buffers > 3G via dma_pool_zalloc().
> > >
> > > Tracking that down, it seems to be caused by dma_alloc_from_pool() using
> > > dev_to_pool()->dma_direct_optimal_gfp_mask() to "optimistically" select
> > > the atomic_pool_dma32, but then failing to verify that the allocations
> > > in that pool are below the dev bus_dma_limit.
> >
> > I can reproduce this too.
> >
> > The way I see it, dev_to_pool() wants a strict dma_direct_optimal_gfp_mask()
> > that is never wrong, since it's going to stick to that pool for the
> > device's lifetime. I've been looking at how to implement it, and it's not
> > so trivial, as I can't see a foolproof way to distinguish between who
> > needs DMA32 and who is OK with plain KERNEL memory.
> >
> > Otherwise, as Jeremy points out, the patch needs to implement allocations
> > with an algorithm similar to __dma_direct_alloc_pages()'s, which TBH may
> > be a little overkill for the atomic context.
> >
> > Short of finding a fix in the coming rc's, I suggest we revert this.
>
> Or perhaps just get rid of atomic_pool_dma32 (and allocate atomic_pool_dma
> from ZONE_DMA32 if !ZONE_DMA). That should make it fall pretty much back in
> line while still preserving the potential benefit of the kernel pool for
> non-address-constrained devices.
>

I assume it depends on how often we have devices where the
__dma_direct_alloc_pages() behavior is required, i.e. what requires the
dma_coherent_ok() checks and altering of the gfp flags to get memory that
works.

Is the idea that getting rid of atomic_pool_dma32 would use GFP_KERNEL
(and atomic_pool_kernel) as the default policy here?  That doesn't do any
dma_coherent_ok() checks, so dma_direct_alloc_pages() would return memory
from ZONE_NORMAL without a < 3G check?

It *seems* like we want to check whether dma_coherent_ok() succeeds for
ret in dma_direct_alloc_pages() when allocating from the atomic pool and,
based on criteria that allows fallback, just fall into
__dma_direct_alloc_pages()?
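
For concreteness, that check might look something like the below -- a
rough, untested sketch against 5.8's kernel/dma/direct.c, not a real
patch.  The rest of dma_direct_alloc_pages() (including the done: label)
is elided, and how the failure case could fall back to
__dma_direct_alloc_pages() is exactly the open question above:

	/*
	 * Rough, untested sketch only -- the surrounding body of
	 * dma_direct_alloc_pages() is elided.  The new part is the
	 * dma_coherent_ok() check on what the atomic pool handed back.
	 */
	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
	    dma_alloc_need_uncached(dev, attrs) &&
	    !gfpflags_allow_blocking(gfp)) {
		ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp);
		if (!ret)
			return NULL;
		if (!dma_coherent_ok(dev, page_to_phys(page), size)) {
			/*
			 * The pool picked by dev_to_pool() (e.g.
			 * atomic_pool_dma32) can hold memory above the
			 * device's bus_dma_limit -- the rpi4 XHCI case.
			 * Give it back rather than handing the device a
			 * buffer it can't reach.  Whether we can instead
			 * fall through to __dma_direct_alloc_pages() here
			 * is the fallback-criteria question above.
			 */
			dma_free_from_pool(dev, ret, PAGE_ALIGN(size));
			return NULL;
		}
		goto done;
	}

(Returning NULL there just makes the failure explicit rather than handing
out an unreachable buffer; the interesting part is deciding when it's
safe to fall back to the non-atomic path instead.)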