From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@fb.com, peterz@infradead.org, luto@kernel.org,
    x86@kernel.org, vkuznets@redhat.com, mingo@kernel.org, efault@gmx.de,
    dave.hansen@intel.com, will.deacon@arm.com, catalin.marinas@arm.com,
    benh@kernel.crashing.org, Rik van Riel <riel@surriel.com>
Subject: [PATCH 04/10] x86,mm: use on_each_cpu_cond for TLB flushes
Date: Sat, 28 Jul 2018 17:53:51 -0400
Message-Id: <20180728215357.3249-5-riel@surriel.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20180728215357.3249-1-riel@surriel.com>
References: <20180728215357.3249-1-riel@surriel.com>

Instead of open coding bitmap magic, use on_each_cpu_cond()
to determine which CPUs to send TLB flush IPIs to.

This might be a little bit slower than examining the bitmaps,
but it should be a lot easier to maintain in the long run.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
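For context, a minimal sketch of what the six-argument
on_each_cpu_cond_mask(cond_func, func, info, wait, gfp_flags, mask)
variant used below is expected to do for these callers. This is an
illustration only (the _sketch name is made up for this note), not the
actual kernel/smp.c implementation, and it assumes normal kernel
context with <linux/smp.h>, <linux/cpumask.h> and <linux/gfp.h>:

/*
 * Illustrative sketch, not the real implementation: evaluate
 * cond_func() for every CPU in @mask and send the IPI (func) only
 * to the CPUs for which it returned true.  If the temporary cpumask
 * cannot be allocated with @gfp_flags, fall back to calling each
 * qualifying CPU one at a time.
 */
static void on_each_cpu_cond_mask_sketch(bool (*cond_func)(int cpu, void *info),
					 smp_call_func_t func, void *info,
					 bool wait, gfp_t gfp_flags,
					 const struct cpumask *mask)
{
	cpumask_var_t cpus;
	int cpu;

	if (likely(zalloc_cpumask_var(&cpus, gfp_flags))) {
		preempt_disable();
		/* Build the set of CPUs that actually need the IPI. */
		for_each_cpu(cpu, mask)
			if (cond_func(cpu, info))
				cpumask_set_cpu(cpu, cpus);
		smp_call_function_many(cpus, func, info, wait);
		preempt_enable();
		free_cpumask_var(cpus);
	} else {
		/* No memory for a temporary cpumask; IPI CPUs one by one. */
		preempt_disable();
		for_each_cpu(cpu, mask)
			if (cond_func(cpu, info))
				smp_call_function_single(cpu, func, info, wait);
		preempt_enable();
	}
}

In native_flush_tlb_others() the condition is tlb_is_not_lazy(), so CPUs
sitting in lazy TLB mode are skipped; in tlb_flush_remove_tables() it is
tlb_is_lazy(), so only the lazy CPUs receive the page table freeing
shootdown.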
 arch/x86/mm/tlb.c | 75 +++++++++++--------------------------------------------
 1 file changed, 15 insertions(+), 60 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5321e02c4e09..671cc66df801 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -582,12 +582,19 @@ static void flush_tlb_func_remote(void *info)
 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
 }
 
+static bool tlb_is_lazy(int cpu, void *data)
+{
+	return per_cpu(cpu_tlbstate.is_lazy, cpu);
+}
+
+static bool tlb_is_not_lazy(int cpu, void *data)
+{
+	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
+}
+
 void native_flush_tlb_others(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info)
 {
-	cpumask_var_t lazymask;
-	unsigned int cpu;
-
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
 	if (info->end == TLB_FLUSH_ALL)
 		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
@@ -596,6 +603,7 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 				(info->end - info->start) >> PAGE_SHIFT);
 
 	if (is_uv_system()) {
+		unsigned int cpu;
 		/*
 		 * This whole special case is confused.  UV has a "Broadcast
 		 * Assist Unit", which seems to be a fancy way to send IPIs.
@@ -619,28 +627,8 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 		return;
 	}
 
-	/*
-	 * A temporary cpumask is used in order to skip sending IPIs
-	 * to CPUs in lazy TLB state, while keeping them in mm_cpumask(mm).
-	 * If the allocation fails, simply IPI every CPU in mm_cpumask.
-	 */
-	if (!alloc_cpumask_var(&lazymask, GFP_ATOMIC)) {
-		smp_call_function_many(cpumask, flush_tlb_func_remote,
-				       (void *)info, 1);
-		return;
-	}
-
-	cpumask_copy(lazymask, cpumask);
-
-	for_each_cpu(cpu, lazymask) {
-		if (per_cpu(cpu_tlbstate.is_lazy, cpu))
-			cpumask_clear_cpu(cpu, lazymask);
-	}
-
-	smp_call_function_many(lazymask, flush_tlb_func_remote,
-			       (void *)info, 1);
-
-	free_cpumask_var(lazymask);
+	on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
+			      (void *)info, 1, GFP_ATOMIC, cpumask);
 }
 
 /*
@@ -709,50 +697,17 @@ void tlb_flush_remove_tables_local(void *arg)
 	}
 }
 
-static void mm_fill_lazy_tlb_cpu_mask(struct mm_struct *mm,
-				      struct cpumask *lazy_cpus)
-{
-	int cpu;
-
-	for_each_cpu(cpu, mm_cpumask(mm)) {
-		if (!per_cpu(cpu_tlbstate.is_lazy, cpu))
-			cpumask_set_cpu(cpu, lazy_cpus);
-	}
-}
-
 void tlb_flush_remove_tables(struct mm_struct *mm)
 {
 	int cpu = get_cpu();
-	cpumask_var_t lazy_cpus;
 
 	if (cpumask_any_but(mm_cpumask(mm), cpu) >= nr_cpu_ids) {
 		put_cpu();
 		return;
 	}
 
-	if (!zalloc_cpumask_var(&lazy_cpus, GFP_ATOMIC)) {
-		/*
-		 * If the cpumask allocation fails, do a brute force flush
-		 * on all the CPUs that have this mm loaded.
-		 */
-		smp_call_function_many(mm_cpumask(mm),
-				tlb_flush_remove_tables_local, (void *)mm, 1);
-		put_cpu();
-		return;
-	}
-
-	/*
-	 * CPUs with !is_lazy either received a TLB flush IPI while the user
-	 * pages in this address range were unmapped, or have context switched
-	 * and reloaded %CR3 since then.
-	 *
-	 * Shootdown IPIs at page table freeing time only need to be sent to
-	 * CPUs that may have out of date TLB contents.
-	 */
-	mm_fill_lazy_tlb_cpu_mask(mm, lazy_cpus);
-	smp_call_function_many(lazy_cpus,
-			tlb_flush_remove_tables_local, (void *)mm, 1);
-	free_cpumask_var(lazy_cpus);
+	on_each_cpu_cond_mask(tlb_is_lazy, tlb_flush_remove_tables_local,
+			      (void *)mm, 1, GFP_ATOMIC, mm_cpumask(mm));
 	put_cpu();
 }
 
-- 
2.14.4