Date: Wed, 9 Mar 2016 10:15:25 +0100
From: Ingo Molnar
To: Toshi Kani
Cc: "Luis R. Rodriguez", Toshi Kani, Paul McKenney, Dave Airlie,
    Benjamin Herrenschmidt, linux-kernel@vger.kernel.org,
    linux-arch@vger.kernel.org, X86 ML, Daniel Vetter, Thomas Gleixner,
    "H. Peter Anvin", Peter Zijlstra, Borislav Petkov, Linus Torvalds,
    Andrew Morton, Andy Lutomirski, Brian Gerst
Subject: Re: Overlapping ioremap() calls, set_memory_*() semantics
Message-ID: <20160309091525.GA11866@gmail.com>
References: <20160304094424.GA16228@gmail.com>
    <1457115514.15454.216.camel@hpe.com>
    <20160305114012.GA7259@gmail.com>
    <1457370228.15454.311.camel@hpe.com>
    <20160308121601.GA6573@gmail.com>
    <1457483385.15454.519.camel@hpe.com>
In-Reply-To: <1457483385.15454.519.camel@hpe.com>

* Toshi Kani wrote:

> On Tue, 2016-03-08 at 13:16 +0100, Ingo Molnar wrote:
> > * Toshi Kani wrote:
> > 
> > > > So where is the problem? The memtype implementation and hence most
> > > > ioremap() users are supposed to be safe. set_memory_*() APIs are
> > > > supposed to be safe as well, as they too go via the memtype API.
> > > 
> > > Let me try to summarize...
> > > 
> > > The original issue Luis brought up was that drivers written to work
> > > with MTRR may create a single ioremap range covering multiple cache
> > > attributes, since MTRR can overwrite the cache attribute of a certain
> > > range. Converting such drivers to the PAT-based ioremap interfaces,
> > > i.e. ioremap_wc() and ioremap_nocache(), requires a separate ioremap
> > > map for each cache attribute, which can be challenging as it may
> > > result in overlapping ioremap ranges (in his terms) with different
> > > cache attributes.
> > > 
> > > So, Luis asked about the 'semantics of overlapping ioremap()' calls.
> > > Hence, I responded that an aliasing mapping itself is supported, but
> > > an alias with a different cache attribute is not. We have checks in
> > > place to detect such a condition. Overlapping ioremap calls with a
> > > different cache attribute either fail or get redirected to the
> > > existing cache attribute on x86.
> > 
> > Ok, fair enough!
> > 
> > So to go back to the original suggestion from Luis, I've quoted it, but
> > with a s/overlapping/aliased substitution:
> > 
> > > I had suggested long ago then that one possible resolution was for us
> > > to add an API that *enables* aliased ioremap() calls, and only use it
> > > on select locations in the kernel. This means we only have to convert
> > > a few users to that call to white-list such semantics, and by default
> > > we'd disable aliased calls. To kick things off -- is this strategy
> > > agreeable for all other architectures?
> > 
> > I'd say that since the overwhelming majority of ioremap() calls are
> > never aliased, making it 'harder' to accidentally alias is probably a
> > good idea.
> 
> Did you mean 'aliased' or 'aliased with a different cache attribute'? The
> former check might be too strict.

I'd say even 'same attribute' aliasing is probably relatively rare.
And a 'different but compatible cache attribute' is in fact more of a sign
that the driver author does the aliasing for a valid _reason_: to have two
different types of access methods to the same piece of physical address
space...

> > The memtype infrastructure of physical memory ranges in that case acts
> > as a security measure, to avoid unintended (not just physically
> > incompatible) aliasing. I suspect it would be helpful during driver
> > development as well.
> 
> The memtype infrastructure does not track caller interfaces. So, it will
> check against any map, i.e. both kernel and user maps. I assume a kernel
> map is created before a user map, though.

I don't understand this, could you give me a specific example?

> > the other problem listed:
> > 
> > > As another problem case, set_memory_*() will not fail on MMIO even
> > > though set_memory_*() is designed only for RAM.
> > 
> > So what does this mean exactly? Having WB caching on an MMIO area is
> > not good, but UC, WC and WB sure are still sensible in some cases,
> > right?
> 
> I responded to Luis in another email that:
> | Drivers use the ioremap family with the right cache type when mapping
> | MMIO ranges, e.g. ioremap_wc(). They do not need to change the type of
> | MMIO. RAM is different since it's already mapped with WB at boot-time.
> | set_memory_*() allows us to change the type from WB, and put it back
> | to WB.

So there are two different uses here:

 - set_memory_x() to set the caching attribute
 - set_memory_x() to set any of the RWX access attributes

I'd in fact suggest we split these two uses to avoid confusion: keep the
set_memory_x() APIs for the RWX access attribute uses, and introduce a new
API that sets caching attributes:

	set_cache_attr_wc()
	set_cache_attr_uc()
	set_cache_attr_wb()
	set_cache_attr_wt()

It has the same arguments, so it's basically just a rename initially.
And at that point we could definitely argue that the set_cache_attr_*()
APIs should probably generate a warning for _RAM_, because they mostly make
sense for MMIO types of physical addresses, right? Regular RAM should
always be WB. Are there cases where we change the caching attribute of RAM
for valid reasons, outside of legacy quirks?

> > Basically if ioremap_uc() and ioremap_wc() are allowed on MMIO areas,
> > then I see no reason in principle why it should be invalid to change
> > the area from UC to WC after it has been ioremap()ed.
> 
> The current implementation does not support MMIO.
>  - It does not track the cache attribute correctly for MMIO since it
>    uses __pa().

Hm, so __pa() works on MMIO addresses as well - at least on x86. The whole
memtype tree is physical address based - which too is MMIO compatible in
principle. The only problem would be if it was fundamentally struct page *
based - but it's not AFAICS. We have a few APIs that work on struct page *
arrays, but those are just vectored helper APIs AFAICS.

What am I missing?

>  - It only supports the attribute transition {WB -> NewType -> WB} for
>    RAM. RAM is tracked differently in that WB is treated as "no map".
>    So, this transition does not cause a conflict on RAM. It will cause a
>    conflict on MMIO when MMIO is tracked correctly.

That looks like a bug?

Thanks,

	Ingo