From: Anup Patel
To: Atish Patra
Cc: "linux-kernel@vger.kernel.org List", Albert Ou, Allison Randal, linux-riscv, Palmer Dabbelt, Paul Walmsley, Andreas Schwab, "hch@infradead.org"
Date: Tue, 20 Aug 2019 14:21:34 +0530
Subject: Re: [v2 PATCH] RISC-V: Optimize tlb flush path.
In-Reply-To: <20190820004735.18518-1-atish.patra@wdc.com>

On Tue, Aug 20, 2019 at 6:17 AM Atish Patra wrote:
>
> In RISC-V, a tlb flush happens via an SBI call, which is expensive.
> If the target cpumask contains the local hartid, some of that cost
> can be saved by issuing a local tlb flush, as we do that in OpenSBI
> anyway. There is also no need for an SBI call if the cpumask is
> empty.
>
> Do a local flush first if the current cpu is present in the cpumask.
> Invoke the SBI call only if the target cpumask contains any cpus
> other than the local cpu.
>
> Signed-off-by: Atish Patra
> ---
>  arch/riscv/include/asm/tlbflush.h | 37 ++++++++++++++++++++++++++-----
>  1 file changed, 31 insertions(+), 6 deletions(-)
>
> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> index b5e64dc19b9e..3f9cd17b5402 100644
> --- a/arch/riscv/include/asm/tlbflush.h
> +++ b/arch/riscv/include/asm/tlbflush.h
> @@ -8,6 +8,7 @@
>  #define _ASM_RISCV_TLBFLUSH_H
>
>  #include
> +#include
>  #include
>
>  /*
> @@ -42,20 +43,44 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
>
>  #include
>
> -static inline void remote_sfence_vma(struct cpumask *cmask, unsigned long start,
> -				     unsigned long size)
> +static void __riscv_flush_tlb(struct cpumask *cmask, unsigned long start,
> +			      unsigned long size)
>  {
>  	struct cpumask hmask;
> +	unsigned int hartid;
> +	unsigned int cpuid;
>
>  	cpumask_clear(&hmask);
> +
> +	if (!cmask) {
> +		riscv_cpuid_to_hartid_mask(cpu_online_mask, &hmask);
> +		goto issue_sfence;
> +	}
> +
> +	cpuid = get_cpu();
> +	if (cpumask_test_cpu(cpuid, cmask)) {
> +		/* Save trap cost by issuing a local tlb flush here */
> +		if ((start == 0 && size == -1) || (size > PAGE_SIZE))
> +			local_flush_tlb_all();
> +		else if (size == PAGE_SIZE)
> +			local_flush_tlb_page(start);
> +	}
> +	if (cpumask_any_but(cmask, cpuid) >= nr_cpu_ids)
> +		goto done;
> +
>  	riscv_cpuid_to_hartid_mask(cmask, &hmask);
> +	hartid = cpuid_to_hartid_map(cpuid);
> +	cpumask_clear_cpu(hartid, &hmask);
> +
> +issue_sfence:
>  	sbi_remote_sfence_vma(hmask.bits, start, size);
> +done:
> +	put_cpu();
>  }
>
> -#define flush_tlb_all() sbi_remote_sfence_vma(NULL, 0, -1)
> -
> +#define flush_tlb_all() __riscv_flush_tlb(NULL, 0, -1)
>  #define flush_tlb_range(vma, start, end) \
> -	remote_sfence_vma(mm_cpumask((vma)->vm_mm), start, (end) - (start))
> +	__riscv_flush_tlb(mm_cpumask((vma)->vm_mm), start, (end) - (start))
>
>  static inline void flush_tlb_page(struct vm_area_struct *vma,
>  				  unsigned long addr) {
> @@ -63,7 +88,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>  }
>
>  #define flush_tlb_mm(mm) \
> -	remote_sfence_vma(mm_cpumask(mm), 0, -1)
> +	__riscv_flush_tlb(mm_cpumask(mm), 0, -1)
>
>  #endif /* CONFIG_SMP */
>
> --
> 2.21.0
>

I think we should move __riscv_flush_tlb() to mm/tlbflush.c because it
has become quite big now. In the future, we will also have
__riscv_flush_tlb_asid(), which will flush the TLB based on ASID.

Regards,
Anup
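
For illustration, a rough sketch of how such an ASID-based variant could
look in arch/riscv/mm/tlbflush.c, following the same local-flush-first
pattern as __riscv_flush_tlb() above. This is not part of the posted
patch: the name __riscv_flush_tlb_asid() is only taken from the
suggestion above, the local_flush_tlb_all_asid() helper and its inline
assembly are assumptions, and sbi_remote_sfence_vma_asid() is the
existing SBI wrapper from asm/sbi.h.

/*
 * Hypothetical sketch only -- not part of the posted patch.
 */
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <asm/sbi.h>
#include <asm/smp.h>

/* Assumed helper: ASID-scoped local flush via "sfence.vma x0, asid" */
static inline void local_flush_tlb_all_asid(unsigned long asid)
{
	__asm__ __volatile__ ("sfence.vma x0, %0" : : "r" (asid) : "memory");
}

void __riscv_flush_tlb_asid(const struct cpumask *cmask, unsigned long start,
			    unsigned long size, unsigned long asid)
{
	struct cpumask hmask;
	unsigned int cpuid;

	if (!cmask)
		cmask = cpu_online_mask;

	cpuid = get_cpu();

	/* Flush the local hart directly to avoid an SBI trap for it. */
	if (cpumask_test_cpu(cpuid, cmask))
		local_flush_tlb_all_asid(asid);

	/* Trap into SBI only if harts other than the local one remain. */
	if (cpumask_any_but(cmask, cpuid) < nr_cpu_ids) {
		riscv_cpuid_to_hartid_mask(cmask, &hmask);
		cpumask_clear_cpu(cpuid_to_hartid_map(cpuid), &hmask);
		sbi_remote_sfence_vma_asid(hmask.bits, start, size, asid);
	}

	put_cpu();
}

As in the patch, the local hart is flushed directly and then cleared
from the hart mask, so the SBI call only reaches the remaining harts.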