From: Andy Lutomirski <luto@kernel.org>
Date: Mon, 19 Jun 2017 15:00:51 -0700
Subject: Re: [PATCH v2 05/10] x86/mm: Rework lazy TLB mode and TLB freshness tracking
To: Juergen Gross
Cc: Andy Lutomirski, X86 ML, linux-kernel@vger.kernel.org, Borislav Petkov, Linus Torvalds, Andrew Morton, Mel Gorman, linux-mm@kvack.org, Nadav Amit, Rik van Riel, Dave Hansen, Arjan van de Ven, Peter Zijlstra, Andrew Banman, Mike Travis, Dimitri Sivanich, Boris Ostrovsky

On Tue, Jun 13, 2017 at 11:09 PM, Juergen Gross wrote:
> On 14/06/17 06:56, Andy Lutomirski wrote:
>> x86's lazy TLB mode used to be fairly weak -- it would switch to
>> init_mm the first time it tried to flush a lazy TLB. This meant an
>> unnecessary CR3 write and, if the flush was remote, an unnecessary
>> IPI.
>>
>> Rewrite it entirely. When we enter lazy mode, we simply remove the
>> CPU from mm_cpumask. This means that we need a way to figure out
>> whether we've missed a flush when we switch back out of lazy mode.
>> I use the tlb_gen machinery to track whether a context is up to
>> date.
>>
>> Note to reviewers: this patch, by itself, looks a bit odd. I'm
>> using an array of length 1 containing (ctx_id, tlb_gen) rather than
>> just storing tlb_gen, and making it an array isn't necessary yet.
>> I'm doing this because the next few patches add PCID support, and,
>> with PCID, we need ctx_id, and the array will end up with a length
>> greater than 1. Making it an array now means that there will be
>> less churn and therefore less stress on your eyeballs.
>>
>> NB: This is dubious but, AFAICT, still correct on Xen and UV.
>> xen_exit_mmap() uses mm_cpumask() for nefarious purposes and this
>> patch changes the way that mm_cpumask() works. This should be okay,
>> since Xen *also* iterates all online CPUs to find all the CPUs it
>> needs to twiddle.
>
> There is an allocation failure path in xen_drop_mm_ref() which might
> be wrong with this patch. As this path should only be taken very
> rarely, I'd suggest removing the test for mm_cpumask() bit zero in
> this path.
>

Right, fixed.

>
> Juergen
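
To make the bookkeeping described above concrete, here is a minimal, self-contained C sketch of the (ctx_id, tlb_gen) tracking and the check a CPU would perform when leaving lazy mode. This is not the patch's code: the identifiers (tlb_state_sketch, exit_lazy_mode_sketch, local_tlb_flush_stub) and the standalone framing are assumptions made for illustration; the real implementation lives in the kernel's x86 mm code and uses the kernel's own types and flush primitives.

#include <stdint.h>
#include <stdbool.h>

/*
 * Illustrative sketch only: names are chosen for readability, not
 * taken from the actual patch.
 */
struct tlb_context {
	uint64_t ctx_id;   /* identifies the mm; not reused across mms */
	uint64_t tlb_gen;  /* last flush generation this CPU caught up to */
};

struct tlb_state_sketch {
	/*
	 * Length 1 for now; with PCID each entry tracks one address
	 * space ID, so the array grows in the follow-up patches.
	 */
	struct tlb_context ctxs[1];
	bool is_lazy;
};

/* Stand-in for the real local TLB flush primitive. */
static void local_tlb_flush_stub(void)
{
}

/*
 * Leaving lazy mode: while lazy, the CPU was cleared from mm_cpumask
 * and therefore skipped flush IPIs.  Compare the recorded
 * (ctx_id, tlb_gen) with the mm's current values to decide whether a
 * catch-up flush is needed before running user code again.
 */
static void exit_lazy_mode_sketch(struct tlb_state_sketch *ts,
				  uint64_t mm_ctx_id, uint64_t mm_tlb_gen)
{
	if (ts->ctxs[0].ctx_id != mm_ctx_id ||
	    ts->ctxs[0].tlb_gen < mm_tlb_gen) {
		/* We missed at least one flush while lazy; catch up. */
		local_tlb_flush_stub();
		ts->ctxs[0].ctx_id = mm_ctx_id;
		ts->ctxs[0].tlb_gen = mm_tlb_gen;
	}
	ts->is_lazy = false;
	/* The caller would also set this CPU's bit in mm_cpumask again,
	 * so that future flush IPIs reach it. */
}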