Date: Wed, 04 Mar 2009 12:26:00 -0500 (EST)
From: Nicolas Pitre
To: Minchan Kim
Cc: lkml , linux-mm@kvack.org, Russell King - ARM Linux
Subject: Re: [RFC] atomic highmem kmap page pinning
In-reply-to: <20090304171429.c013013c.minchan.kim@barrios-desktop>
References: <20090304171429.c013013c.minchan.kim@barrios-desktop>
User-Agent: Alpine 2.00 (LFD 1167 2008-08-23)

On Wed, 4 Mar 2009, Minchan Kim wrote:

> On Wed, 04 Mar 2009 00:58:13 -0500 (EST)
> Nicolas Pitre wrote:
>
> > I've implemented highmem for ARM. Yes, some ARM machines do have lots
> > of memory...
> >
> > The problem is that most ARM machines have a non-IO-coherent cache,
> > meaning that the dma_map_* set of functions must clean and/or invalidate
> > the affected memory manually. And because the majority of those
> > machines have a VIVT cache, the cache maintenance operations must be
> > performed using virtual addresses.
> >
> > In dma_map_page(), a highmem page could still be mapped and cached
> > even after kunmap() was called on it. As long as highmem pages are
> > mapped, page_address(page) is non-null and we can use that to
> > synchronize the cache.
> > It is unlikely but still possible for kmap() to race, recycle the
> > virtual address obtained above, and use it for another page, though. In
> > that case, the new mapping could end up with dirty cache lines for
> > another page, and the unsuspecting cache invalidation loop in
> > dma_map_page() won't notice, resulting in data loss. Hence the need for
> > some kind of kmap page pinning which can be used in any context,
> > including IRQ context.
> >
> > This is an RFC patch implementing the necessary part in the core code, as
> > suggested by RMK. Please comment.
>
> I am not sure if I understand your concern totally.
> I can understand it can be recycled, but why is it racing?

Suppose this sequence of events:

- dma_map_page(..., DMA_FROM_DEVICE) is called on a highmem page.

  --> vaddr = page_address(page) is non-null. In this case it is likely
      that the page has valid cache lines associated with vaddr.
      Remember that the cache is VIVT.

  --> for (i = vaddr; i < vaddr + PAGE_SIZE; i += 32)
          invalidate_cache_line(i);

*** preemption occurs in the middle of the loop above ***

- kmap_high() is called for a different page.

  --> last_pkmap_nr wraps to zero and flush_all_zero_pkmaps() is called.
      The pkmap_count value for the page passed to dma_map_page() above
      happens to be 1, so it is unmapped. But prior to that,
      flush_cache_kmaps() cleared the cache for it. So far so good.

- A fresh pkmap entry is assigned for this kmap request. Murphy's law
  says it will eventually happen to use the same vaddr as the one which
  used to belong to the other page being processed by dma_map_page() in
  the preempted thread above.

- The caller of kmap_high() starts dirtying the cache using the new
  virtual mapping for its page.

*** the first thread is rescheduled ***

- The for loop is resumed, but now cached data belonging to a different
  physical page is being discarded!

And this is not only a preemption issue.
ARM can be SMP as well, where this scenario is just as likely, and
disabling preemption in dma_map_page() won't prevent it.

> Now, kmap semantic is that it can't be called in interrupt context.

I know. And in this case I don't need the full kmap_high() semantics.
What I need is a guarantee that, if I start invalidating cache lines
from a highmem page, its virtual mapping won't go away. Meaning that I
need to increase pkmap_count whenever it is not zero. And if it is zero
then there is simply no cache invalidation to worry about. And that
pkmap_count increment must be possible from any context, as its primary
user would be dma_map_page().

> As far as I understand, making irq_disable prevent this problem is
> rather a big cost.

How big? Could you please elaborate on the significance of this cost?

> I think it would be better to make page_address able to return null in
> that case where pkmap_count is less than one

This is already the case, and when it happens there is no cache
invalidation to perform, as I say above. The race is possible when
pkmap_count is 1 or becomes 1.

> or it's not previous page mapping.

Even if the cache invalidation loop checked on every iteration whether
the page mapping had changed, which would be terribly inefficient, there
would still be a race window for the mapping to change between the
mapping test and the actual cache line invalidation instruction.


Nicolas