From: Andy Lutomirski <luto@kernel.org>
Date: Thu, 22 Jun 2017 08:55:36 -0700
Subject: Re: [PATCH v3 05/11] x86/mm: Track the TLB's tlb_gen and update the flushing algorithm
To: Borislav Petkov
Cc: Andy Lutomirski, X86 ML, linux-kernel@vger.kernel.org,
 Linus Torvalds, Andrew Morton, Mel Gorman, linux-mm@kvack.org,
 Nadav Amit, Rik van Riel, Dave Hansen, Arjan van de Ven,
 Peter Zijlstra

On Thu, Jun 22, 2017 at 7:59 AM, Borislav Petkov wrote:
> On Thu, Jun 22, 2017 at 07:48:21AM -0700, Andy Lutomirski wrote:
>> On Thu, Jun 22, 2017 at 12:24 AM, Borislav Petkov wrote:
>> > On Wed, Jun 21, 2017 at 07:46:05PM -0700, Andy Lutomirski wrote:
>> >> > I'm certainly still missing something here:
>> >> >
>> >> > We have f->new_tlb_gen and mm_tlb_gen to control the flushing,
>> >> > i.e., we do once
>> >> >
>> >> >         bump_mm_tlb_gen(mm);
>> >> >
>> >> > and once
>> >> >
>> >> >         info.new_tlb_gen = bump_mm_tlb_gen(mm);
>> >> >
>> >> > and in both cases, the bumping is done on mm->context.tlb_gen.
>> >> >
>> >> > So why isn't that enough to do the flushing and we have to
>> >> > consult info.new_tlb_gen too?
>> >>
>> >> The issue is a possible race.  Suppose we start at tlb_gen == 1
>> >> and then two concurrent flushes happen.  The first flush is a
>> >> full flush and sets tlb_gen to 2.  The second is a partial flush
>> >> and sets tlb_gen to 3.  If the second flush gets propagated to a
>> >> given CPU first and it
>> >
>> > Maybe I'm still missing something, which is likely...
>> >
>> > but if the second flush gets propagated to the CPU first, the CPU
>> > will have local tlb_gen 1 and thus enforce a full flush anyway
>> > because we will go 1 -> 3 on that particular CPU. Or?
>>
>> Yes, exactly.  Which means I'm probably just misunderstanding your
>> original question.  Can you re-ask it?
>
> Ah, simple: we control the flushing with info.new_tlb_gen and
> mm->context.tlb_gen. I.e., this check:
>
>         if (f->end != TLB_FLUSH_ALL &&
>             f->new_tlb_gen == local_tlb_gen + 1 &&
>             f->new_tlb_gen == mm_tlb_gen) {
>
> why can't we write:
>
>         if (f->end != TLB_FLUSH_ALL &&
>             mm_tlb_gen == local_tlb_gen + 1)
>
> ?

Ah, I thought you were asking about why I needed mm_tlb_gen ==
local_tlb_gen + 1.  This is just an optimization, or at least I hope
it is.  The idea is that, if we know that another flush is coming, it
seems likely that it would be faster to do a full flush and increase
local_tlb_gen all the way to mm_tlb_gen rather than doing a partial
flush, increasing local_tlb_gen to something less than mm_tlb_gen,
and needing to flush again very soon.
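Concretely, here's a toy user-space model of that check (the names
mirror the patch, but the harness, the flush_info struct, and the
partial flag are made up for illustration -- this is a sketch of the
decision, not the kernel code):

#include <stdint.h>
#include <stdio.h>

/*
 * Toy stand-in for struct flush_tlb_info; "partial" models
 * f->end != TLB_FLUSH_ALL.
 */
struct flush_info {
        uint64_t new_tlb_gen;
        int partial;
};

/*
 * The check from the patch: only do the cheap partial flush when it
 * brings local_tlb_gen all the way up to mm_tlb_gen.  If mm_tlb_gen
 * is already further ahead, another flush is pending, so do one full
 * flush and jump straight to mm_tlb_gen instead of flushing twice.
 */
static const char *flush_kind(const struct flush_info *f,
                              uint64_t local_tlb_gen, uint64_t mm_tlb_gen)
{
        if (f->partial &&
            f->new_tlb_gen == local_tlb_gen + 1 &&
            f->new_tlb_gen == mm_tlb_gen)
                return "partial";
        return "full";
}

int main(void)
{
        struct flush_info f = { .new_tlb_gen = 3, .partial = 1 };

        /* IPI arrives in order: 2 -> 3 is the last pending flush. */
        printf("in order:     %s\n", flush_kind(&f, 2, 3));

        /* Your reordering case: local tlb_gen is still 1. */
        printf("reordered:    %s\n", flush_kind(&f, 1, 3));

        /* Another flush already pending: mm_tlb_gen moved on to 4. */
        printf("more pending: %s\n", flush_kind(&f, 2, 4));
        return 0;
}

The last case is the optimization: mm_tlb_gen has already moved past
f->new_tlb_gen, so a partial flush would leave us needing another
flush almost immediately, and we go full and catch up in one shot.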
> If mm_tlb_gen is + 2, then we'll do a full flush, if it is + 1, then
> partial.
>
> If the second flush, as you say, is a partial one and still gets
> propagated first, the check will force a full flush anyway.
>
> When the first flush propagates after the second, we'll ignore it
> because local_tlb_gen has advanced already due to the second flush.
>
> As a matter of fact, we could simplify the logic: if local_tlb_gen is
> only mm_tlb_gen - 1, then do the requested flush type.

Hmm.  I'd be nervous that there are more subtle races if we do this.
For example, suppose that a partial flush increments tlb_gen from 1
to 2 and a full flush increments tlb_gen from 2 to 3.  Meanwhile, the
CPU is busy switching back and forth between mms, so the partial
flush sees the cpu set in mm_cpumask but the full flush doesn't see
the cpu set in mm_cpumask.

The flush IPI hits after a switch_mm_irqs_off() call notices the
change from 1 to 2.  switch_mm_irqs_off() will do a full flush and
increment the local tlb_gen to 2, and the IPI handler for the partial
flush will see local_tlb_gen == mm_tlb_gen - 1 (because local_tlb_gen
== 2 and mm_tlb_gen == 3) and do a partial flush.

The problem here is that it's not obvious to me that this actually
ends up flushing everything that's needed.  Maybe all the memory
ordering gets this right, but I can imagine scenarios in which
switch_mm_irqs_off() does its flush early enough that the TLB picks
up an entry that was supposed to get zapped by the full flush.

IOW it *might* be valid, but I think it would need very careful
review and documentation.

--Andy
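P.S. To make the divergence concrete, using the same toy conventions
as the model earlier in this mail: when the delayed partial-flush IPI
finally runs in the scenario above, f->new_tlb_gen == 2,
local_tlb_gen == 2, and mm_tlb_gen == 3.  My check rejects the
partial flush (f->new_tlb_gen != local_tlb_gen + 1) and goes full;
the simplified check accepts it (mm_tlb_gen == local_tlb_gen + 1):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* State when the delayed partial-flush IPI finally runs. */
        uint64_t local_tlb_gen = 2, mm_tlb_gen = 3;
        uint64_t f_new_tlb_gen = 2;     /* the partial flush's gen */

        /* The check in the patch: falls back to a full flush here. */
        int patch_partial = (f_new_tlb_gen == local_tlb_gen + 1 &&
                             f_new_tlb_gen == mm_tlb_gen);

        /* The simplified check: does only the partial flush. */
        int simplified_partial = (mm_tlb_gen == local_tlb_gen + 1);

        printf("patch check:      %s\n",
               patch_partial ? "partial" : "full");
        printf("simplified check: %s\n",
               simplified_partial ? "partial" : "full");
        return 0;
}

The full flush in the first case is the backstop that covers anything
the gen-3 flush was supposed to zap; the simplified check has no such
backstop, which is exactly the part I'd want carefully reviewed.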