Date: Wed, 7 Aug 2019 08:25:45 +0200
From: Christoph Hellwig
To: Rob Clark
Cc: Christoph Hellwig, Rob Clark, dri-devel, Catalin Marinas, Will Deacon,
	Maarten Lankhorst, Maxime Ripard, Sean Paul, David Airlie,
	Daniel Vetter, Allison Randal, Greg Kroah-Hartman, Thomas Gleixner,
	linux-arm-kernel@lists.infradead.org, LKML
Subject: Re: [PATCH 1/2] drm: add cache support for arm64
Message-ID: <20190807062545.GF6627@lst.de>
References: <20190805211451.20176-1-robdclark@gmail.com>
	<20190806084821.GA17129@lst.de>
	<20190806155044.GC25050@lst.de>

On Tue, Aug 06, 2019 at 09:23:51AM -0700, Rob Clark wrote:
> On Tue, Aug 6, 2019 at 8:50 AM Christoph Hellwig wrote:
> >
> > On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote:
> > > Agreed that drm_cflush_* isn't a great API.  In this particular case
> > > (IIUC), I need wb+inv so that there aren't dirty cache lines that drop
> > > out to memory later, and so that I don't get a cache hit on
> > > uncached/wc mmap'ing.
> >
> > So what is the use case here?  Allocate pages using the page allocator
> > (or CMA for that matter), and then mmapping them to userspace and never
> > touching them again from the kernel?
>
> Currently, it is pages coming from tmpfs.  Ideally we want pages that
> are swappable when unpinned.

tmpfs is basically a (complicated) frontend for alloc_pages as far as
page allocation is concerned.
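
For reference, the allocation path being described looks roughly like
the following when built on the generic GEM helpers.  A minimal sketch
only, not the actual msm code; the function names here are made up and
error handling is omitted:

#include <drm/drm_gem.h>

/*
 * Sketch of the shmem-backed allocation path: drm_gem_get_pages()
 * populates the object from its shmem mapping (via
 * shmem_read_mapping_page() underneath), which is what keeps the
 * pages swappable while they are not pinned.
 */
static struct page **gem_pin_pages(struct drm_gem_object *obj)
{
	return drm_gem_get_pages(obj);
}

static void gem_unpin_pages(struct drm_gem_object *obj, struct page **pages)
{
	/* dirty = true, accessed = true so reclaim does the right thing */
	drm_gem_put_pages(obj, pages, true, true);
}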
> CPU mappings are *mostly* just mapping to userspace.  There are a few
> exceptions that are vmap'd (fbcon, and ringbuffer).

And those use the same backend?

> (Eventually I'd like to support pages passed in from userspace.. but
> that is down the road.)

Eww.  Please talk to the iommu list before starting on that.

> > > Tying it in w/ iommu seems a bit weird to me.. but maybe that is just
> > > me, I'm certainly willing to consider proposals or to try things and
> > > see how they work out.
> >
> > This was just my thought as the fit seems easy.  But maybe you'll
> > need to explain your use case(s) a bit more so that we can figure out
> > what a good high level API is.
>
> Tying it to iommu_map/unmap would be awkward, as we could need to
> set up the cpu mmap before it ends up mapped to the iommu.  And the
> plan to support per-process pagetables involved creating an
> iommu_domain per userspace gl context.. some buffers would end up
> mapped into multiple contexts/iommu_domains.
>
> If the cache operation was detached from iommu_map/unmap, then it
> would seem weird for it to be part of the iommu API.
>
> I guess I'm not entirely sure what you had in mind, but this is why
> iommu seemed to me like a bad fit.

So back to the question, I'd like to understand your use case (and
maybe hear from the other drm folks whether it is common):

 - you allocate pages from shmem (why shmem, btw?  if this is done by
   other drm drivers, how do they guarantee addressability without an
   iommu?)
 - then the memory is either mapped to userspace or vmapped (or even
   both, although the lack of aliasing you mentioned would speak
   against it) as writecombine (aka arm v6+ normal uncached), as
   sketched below.  Does the mapping live on until the memory is
   freed?
 - as you mention swapping - how do you guarantee there are no
   aliases left in the kernel direct mapping after the page has been
   swapped back in?
 - then the memory is potentially mapped to the iommu.  Is that a
   long-lived mapping, or does it get unmapped/remapped repeatedly?
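
To make the second and last points concrete, the mappings under
discussion look roughly like this.  A minimal sketch only, assuming an
msm-style shmem-backed GEM object; the helper names are made up and
error handling is omitted:

#include <linux/iommu.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/*
 * Kernel-side access (the fbcon/ringbuffer case): a long-lived
 * writecombine alias built with vmap().
 */
static void *gem_wc_vmap(struct page **pages, unsigned int npages)
{
	return vmap(pages, npages, VM_MAP,
		    pgprot_writecombine(PAGE_KERNEL));
}

/*
 * Userspace side: the mmap/fault path switches the vma over to
 * writecombine before any pages are inserted.
 */
static void gem_wc_mmap_prep(struct vm_area_struct *vma)
{
	vma->vm_page_prot =
		pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
}

/*
 * Per-process pagetables as described above: each gl context owns an
 * iommu_domain, and the same buffer may be iommu_map_sg()ed into
 * several of them.
 */
static int gem_map_into_ctx(struct iommu_domain *domain, unsigned long iova,
			    struct sg_table *sgt)
{
	size_t mapped = iommu_map_sg(domain, iova, sgt->sgl, sgt->nents,
				     IOMMU_READ | IOMMU_WRITE);

	return mapped ? 0 : -ENOMEM;
}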