Date: Wed, 30 Aug 2017 14:20:14 -0400
From: Jerome Glisse <jglisse@redhat.com>
To: Andrea Arcangeli
Cc: Nadav Amit, Linux Kernel Mailing List,
    "open list:MEMORY MANAGEMENT", Dan Williams, Ross Zwisler,
    Linus Torvalds, Bernhard Held, Adam Borowski, Radim Krčmář,
    Wanpeng Li, Paolo Bonzini, Takashi Iwai, Mike Galbraith,
    "Kirill A. Shutemov", axie, Andrew Morton
Subject: Re: [PATCH 02/13] mm/rmap: update to new mmu_notifier semantic
Message-ID: <20170830182013.GD2386@redhat.com>
References: <20170829235447.10050-1-jglisse@redhat.com>
    <20170829235447.10050-3-jglisse@redhat.com>
    <6D58FBE4-5D03-49CC-AAFF-3C1279A5A849@gmail.com>
    <20170830172747.GE13559@redhat.com>
In-Reply-To: <20170830172747.GE13559@redhat.com>

On Wed, Aug 30, 2017 at 07:27:47PM +0200, Andrea Arcangeli wrote:
> On Tue, Aug 29, 2017 at 07:46:07PM -0700, Nadav Amit wrote:
> > Therefore, IIUC, try_to_unmap_one() should only call
> > mmu_notifier_invalidate_range() after ptep_get_and_clear() and
>
> That would trigger an unnecessary double call to
> ->invalidate_range(), both from mmu_notifier_invalidate_range()
> after ptep_get_and_clear() and later at the end in
> mmu_notifier_invalidate_range_end().
>
> The only advantage of adding a mmu_notifier_invalidate_range() after
> ptep_get_and_clear() is to flush the secondary MMU TLBs (keep in mind
> the pagetables have to be shared with the primary MMU in order to use
> ->invalidate_range()) inside the PT lock.
>
> So whether mmu_notifier_invalidate_range() after ptep_get_and_clear()
> is needed again boils down to the question of whether the old code,
> which called ->invalidate_page outside the PT lock, was always broken
> before or not. I don't see why exactly it was broken; we even hold
> the page lock there, so I don't see a migration race possible either.
> Of course the constraint to be safe is that the put_page in
> try_to_unmap_one cannot be the last one, and that had to be enforced
> by the caller taking an additional reference on it.
>
> One can wonder if the primary MMU TLB flush in ptep_clear_flush
> (should_defer_flush returning false) could be put after releasing the
> PT lock too (because that's not different from deferring the
> secondary MMU notifier TLB flush in ->invalidate_range down to
> mmu_notifier_invalidate_range_end) even if TTU_BATCH_FLUSH isn't
> set, which may be safe too for the same reasons.
>
> When should_defer_flush returns true we already defer the primary MMU
> TLB flush to much later, even to mmu_notifier_invalidate_range_end,
> not just after the PT lock release, so at least when
> should_defer_flush is true it looks obviously safe to defer the
> secondary MMU TLB flush to mmu_notifier_invalidate_range_end for the
> drivers implementing ->invalidate_range.
>
> If I'm wrong and all TLB flushes must happen inside the PT lock, then
> we should at least reconsider the implicit call to the
> ->invalidate_range method from mmu_notifier_invalidate_range_end, or
> we would call it twice unnecessarily; either way this doesn't look
> optimal. We would need to introduce a
> mmu_notifier_invalidate_range_end_full that also calls
> ->invalidate_range in such a case, so that we skip the
> ->invalidate_range call in mmu_notifier_invalidate_range_end if we
> put an explicit mmu_notifier_invalidate_range() after
> ptep_get_and_clear inside the PT lock like you suggested above.

So I went over the calls to try_to_unmap() (try_to_munlock() is fine
as it does not clear the CPU page table entry). I believe there are 2
cases where you can get a new pte entry after we release the spinlock
and before we call mmu_notifier_invalidate_range_end().

The first case is:

    if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
        ...
        break;
    }

The pte is cleared, and this is an error condition that should never
happen, but a racing thread might install a new pte in the meantime.
Maybe we should restore the pte value here. Anyway, when this happens
bad things are going on.

The second case is a non-dirty anonymous page and MADV_FREE. But here
the application is telling us that no one should care about that
virtual address anymore, so I am not sure how much we should care.

If we ignore these 2 cases then the CPU pte can never be replaced by
something else between the time we release the spinlock and the time
we call mmu_notifier_invalidate_range_end(), so not invalidating the
device's TLB is ok here.

Do we want this kind of optimization?

Jérôme