Date: Thu, 23 Jul 2015 17:49:21 +0100
From: Catalin Marinas
To: Andrea Arcangeli
Cc: Dave Hansen, David Rientjes, linux-mm, Linux Kernel Mailing List,
    Andrew Morton, Martin Schwidefsky, Heiko Carstens
Subject: Re: [PATCH] mm: Flush the TLB for a single address in a huge page

On Thu, Jul 23, 2015 at 03:13:03PM +0100, Andrea Arcangeli wrote:
> On Thu, Jul 23, 2015 at 11:49:38AM +0100, Catalin Marinas wrote:
> > On Thu, Jul 23, 2015 at 12:05:21AM +0100, Dave Hansen wrote:
> > > On 07/22/2015 03:48 PM, Catalin Marinas wrote:
> > > > You are right, on x86 the tlb_single_page_flush_ceiling seems to be
> > > > 33, so for an HPAGE_SIZE range the code always does a
> > > > local_flush_tlb(). I would say a single-page TLB flush is more
> > > > efficient than a whole TLB flush, but I'm not familiar enough with x86.
> > >
> > > The last time I looked, the instruction to invalidate a single page is
> > > more expensive than the instruction to flush the entire TLB. [...]
> >
> > Another question is whether flushing a single address is enough for a
> > huge page. I assumed it is, since tlb_remove_pmd_tlb_entry() only
> > adjusts [...] the mmu_gather range by PAGE_SIZE (rather than
> > HPAGE_SIZE) and no-one complained so far. AFAICT, there are only 3
> > architectures that don't use asm-generic/tlb.h, but they all seem to
> > handle this case:
>
> Agreed that archs using the generic tlb.h, which sets tlb->end to
> address + PAGE_SIZE, should be fine with flush_tlb_page().
>
> > arch/arm: it implements tlb_remove_pmd_tlb_entry() in a similar way to
> > the generic one
> >
> > arch/s390: tlb_remove_pmd_tlb_entry() is a no-op
>
> I guess s390 is fine too, but I'm not convinced that the fact it won't
> adjust tlb->start/end is a guarantee that flush_tlb_page() is enough
> when a single 2MB TLB entry has to be invalidated (not during range
> zapping).
>
> For the range zapping, could the arch decide to unconditionally flush
> the whole TLB, without doing the tlb->start/end tracking, by overriding
> tlb_gather_mmu in a way that won't call __tlb_reset_range? There seems
> to be quite some flexibility in the per-arch tlb_gather_mmu setup to
> unconditionally set tlb->start/end to the total range zapped, without
> actually narrowing it down during the pagetable walk.

You are right: looking at the s390 code, tlb_finish_mmu() flushes the
whole TLB, so the ranges don't seem to matter there.
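
For reference, the PAGE_SIZE-only adjustment mentioned above comes from
the generic mmu_gather helpers. Roughly (paraphrasing from memory, not
quoting the header verbatim), asm-generic/tlb.h does something like:

	static inline void __tlb_adjust_range(struct mmu_gather *tlb,
					      unsigned long address)
	{
		/* Widen the gathered flush range by one base page only. */
		tlb->start = min(tlb->start, address);
		tlb->end   = max(tlb->end, address + PAGE_SIZE);
	}

	#define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)		\
		do {							\
			__tlb_adjust_range(tlb, address);		\
			__tlb_remove_pmd_tlb_entry(tlb, pmdp, address);	\
		} while (0)

so a PMD-mapped huge page only ever grows tlb->start/end by PAGE_SIZE,
which is why a single-address flush is assumed to cover it.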
I'm cc'ing the s390 maintainers to confirm whether this patch affects
them in any way: https://lkml.org/lkml/2015/7/22/521

IIUC, all the functions touched by this patch are implemented by s390
in its own specific way, so I don't think it makes any difference:

  pmdp_set_access_flags
  pmdp_clear_flush_young
  pmdp_huge_clear_flush
  pmdp_splitting_flush
  pmdp_invalidate

--
Catalin