Date: Fri, 9 Aug 2019 10:14:55 +0200 (CEST)
From: Christoph Hellwig
To: Christoph Hellwig, Rob Clark, dri-devel, Catalin Marinas, Will Deacon,
	Maarten Lankhorst, Maxime Ripard, Sean Paul, David Airlie,
	Allison Randal, Greg Kroah-Hartman, Thomas Gleixner, Linux ARM, LKML
Subject: Re: [PATCH 1/2] drm: add cache support for arm64
Message-ID: <20190809081455.GA21967@lst.de>
In-Reply-To: <20190808115808.GN7444@phenom.ffwll.local>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Aug 08, 2019 at 01:58:08PM +0200, Daniel Vetter wrote:
> > > We use shmem to get at swappable pages.
> > > We generally just assume that the gpu can get at those pages, but
> > > things fall apart in fun ways:
> > > - some setups somehow inject bounce buffers. Some drivers just give
> > >   up, others try to allocate a pool of pages with dma_alloc_coherent.
> > > - some devices are misdesigned and can't access as much as the cpu.
> > >   We allocate using GFP_DMA32 to fix that.
> >
> > Well, for shmem you can't really call allocators directly, right?
>
> We can pass gfp flags to shmem_read_mapping_page_gfp, which is just about
> enough for the 2 cases on intel platforms where the gpu can only access
> 4G, but the cpu has way more.

Right.  And that works for architectures without weird DMA offsets and
for devices that have exactly a 32-bit DMA limit.  It falls flat for all
the more complex ones, unfortunately.

> > But userspace malloc really means dma_map_* anyway, so not really
> > relevant for memory allocations.
>
> It does tie in, since we'll want a dma_map which fails if a direct mapping
> isn't possible.  It also helps the driver code a lot if we could use the
> same low-level flushing functions between our own memory (whatever that
> is) and anon pages from malloc.  And in all the cases where it's not
> possible, we want a failure, not elaborate attempts at hiding the
> differences between all possible architectures out there.

At the very lowest level it all comes down to the same three primitives
we talked about anyway, but there are different ways they are combined.
For the streaming mappings look at the table in arch/arc/mm/dma.c I
mentioned earlier.  For memory that is prepared for just mmapping to
userspace without a kernel user we'll always do a wb+inv.  But as the
other subthread shows, we'll eventually need to look into unmapping (or
remapping with the same attributes) that memory in kernel space to avoid
speculation bugs (or just invalid attribute combinations on x86, where we
check for that), so the API will be a little more complex.
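[Editor's sketch, not part of the original mail: the gfp-flag trick Daniel
describes, roughly as a driver with a 32-bit-limited device might use it.
The helper name and the device_needs_dma32 flag are hypothetical;
shmem_read_mapping_page_gfp and the gfp machinery are the real kernel APIs.
Note __GFP_HIGHMEM must be dropped before __GFP_DMA32 is added, since the
two zone modifiers conflict.]

```c
#include <linux/shmem_fs.h>
#include <linux/pagemap.h>
#include <linux/gfp.h>

/* Fetch (allocating or swapping in as needed) one page of a shmem-backed
 * buffer object, constrained below 4G when the device requires it. */
static struct page *bo_get_page(struct file *shmem_file, pgoff_t index,
				bool device_needs_dma32)
{
	struct address_space *mapping = shmem_file->f_mapping;
	gfp_t gfp = mapping_gfp_mask(mapping);

	if (device_needs_dma32) {
		/* __GFP_DMA32 and __GFP_HIGHMEM are mutually exclusive
		 * zone modifiers; clear HIGHMEM before adding DMA32. */
		gfp &= ~__GFP_HIGHMEM;
		gfp |= __GFP_DMA32;
	}

	return shmem_read_mapping_page_gfp(mapping, index, gfp);
}
```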
Btw, are all DRM drivers using vmf_insert_* to pre-populate the mapping like the MSM case, or are some doing dynamic faulting from vm_ops->fault?
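[Editor's sketch, not part of the original mail: the two population styles
the question contrasts, in rough outline.  The my_bo structure and function
names are hypothetical; vmf_insert_pfn is the real primitive in both paths,
and both assume a VM_PFNMAP vma.]

```c
#include <linux/mm.h>

struct my_bo {
	struct page **pages;
	unsigned long num_pages;
};

/* Style 1: pre-populate the whole mapping at mmap time (the MSM case). */
static int bo_mmap_prepopulate(struct my_bo *bo, struct vm_area_struct *vma)
{
	unsigned long addr = vma->vm_start;
	unsigned long i;
	vm_fault_t ret;

	for (i = 0; i < bo->num_pages; i++, addr += PAGE_SIZE) {
		ret = vmf_insert_pfn(vma, addr, page_to_pfn(bo->pages[i]));
		if (ret & VM_FAULT_ERROR)
			return -EFAULT;
	}
	return 0;
}

/* Style 2: populate lazily, one page per fault, from vm_ops->fault. */
static vm_fault_t bo_fault(struct vm_fault *vmf)
{
	struct my_bo *bo = vmf->vma->vm_private_data;
	pgoff_t off = vmf->pgoff;

	if (off >= bo->num_pages)
		return VM_FAULT_SIGBUS;
	return vmf_insert_pfn(vmf->vma, vmf->address,
			      page_to_pfn(bo->pages[off]));
}
```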