Date: Thu, 23 Jul 2015 19:13:32 +0200
From: Andrea Arcangeli
To: Dave Hansen
Cc: Catalin Marinas, David Rientjes, linux-mm, Linux Kernel Mailing List, Andrew Morton
Subject: Re: [PATCH] mm: Flush the TLB for a single address in a huge page
Message-ID: <20150723171332.GD23799@redhat.com>
In-Reply-To: <55B11C85.5070900@intel.com>

On Thu, Jul 23, 2015 at 09:55:33AM -0700, Dave Hansen wrote:
> On 07/23/2015 09:16 AM, Catalin Marinas wrote:
> > Anyway, if you want to keep the option of a full TLB flush for x86 on
> > huge pages, I'm happy to repost a v2 with a separate
> > flush_tlb_pmd_huge_page that arch code can define as it sees fit.
>
> I think your patch is fine on x86. We need to keep an eye out for any
> regressions, but I think it's OK.

That's my view as well. I've read more of the other thread, and I quote Ingo:

"It barely makes sense for a 2 pages and gets exponentially worse. It's
probably done in microcode and its performance is horrible."
" So in our case it's just 1 page (not 2, not 33), and considering it prevents to invalidate all other TLB entries, it's most certainly a win: it requires zero additional infrastructure and best of all it can also avoid to flush the entire TLB for remote CPUs too again without infrastructure or pfn arrays or multiple invlpg. As further confirmation that for 1 entry invlpg is worth it, even flush_tlb_page->flush_tlb_func invokes __flush_tlb_single in the IPI handler instead of local_flush_tlb(). So the discussion there was about the additional infrastructure and a flood of invlpg, perhaps more than 33, I agree a local_flush_tlb() sounds better for that. The question left for x86 is if invlpg is even slower for 2MB pages than it is for 4k pages, but I'd be surprised if it is, especially on newer CPUs where the TLB can use different page size for each TLB entry. Why we didn't do flush_tlb_page before wasn't related to such a concern at least. -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/