From: Zhenyu Ye
Subject: [PATCH v4 5/6] arm64: tlb: Set the TTL field in flush_tlb_range
Date: Tue, 2 Jun 2020 21:58:35 +0800
Message-ID: <20200602135836.1620-6-yezhenyu2@huawei.com>
In-Reply-To: <20200602135836.1620-1-yezhenyu2@huawei.com>
References: <20200602135836.1620-1-yezhenyu2@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch uses the cleared_* fields in struct mmu_gather to set the
TTL field in flush_tlb_range().
Signed-off-by: Zhenyu Ye
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/tlb.h      | 29 ++++++++++++++++++++++++++++-
 arch/arm64/include/asm/tlbflush.h | 14 ++++++++------
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index b76df828e6b7..61c97d3b58c7 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -21,11 +21,37 @@ static void tlb_flush(struct mmu_gather *tlb);

 #include <asm-generic/tlb.h>

+/*
+ * get the tlbi levels in arm64.  Default value is 0 if more than one
+ * of cleared_* is set or neither is set.
+ * Arm64 doesn't support p4ds now.
+ */
+static inline int tlb_get_level(struct mmu_gather *tlb)
+{
+	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
+				   tlb->cleared_puds ||
+				   tlb->cleared_p4ds))
+		return 3;
+
+	if (tlb->cleared_pmds && !(tlb->cleared_ptes ||
+				   tlb->cleared_puds ||
+				   tlb->cleared_p4ds))
+		return 2;
+
+	if (tlb->cleared_puds && !(tlb->cleared_ptes ||
+				   tlb->cleared_pmds ||
+				   tlb->cleared_p4ds))
+		return 1;
+
+	return 0;
+}
+
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
 	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
 	bool last_level = !tlb->freed_tables;
 	unsigned long stride = tlb_get_unmap_size(tlb);
+	int tlb_level = tlb_get_level(tlb);

 	/*
 	 * If we're tearing down the address space then we only care about
@@ -38,7 +64,8 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 		return;
 	}

-	__flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
+	__flush_tlb_range(&vma, tlb->start, tlb->end, stride,
+			  last_level, tlb_level);
 }

 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index bfb58e62c127..84cb98b60b7b 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -215,7 +215,8 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,

 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
-				     unsigned long stride, bool last_level)
+				     unsigned long stride, bool last_level,
+				     int tlb_level)
 {
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
@@ -237,11 +238,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
-			__tlbi_level(vale1is, addr, 0);
-			__tlbi_user_level(vale1is, addr, 0);
+			__tlbi_level(vale1is, addr, tlb_level);
+			__tlbi_user_level(vale1is, addr, tlb_level);
 		} else {
-			__tlbi_level(vae1is, addr, 0);
-			__tlbi_user_level(vae1is, addr, 0);
+			__tlbi_level(vae1is, addr, tlb_level);
+			__tlbi_user_level(vae1is, addr, tlb_level);
 		}
 	}
 	dsb(ish);
@@ -253,8 +254,9 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 	/*
 	 * We cannot use leaf-only invalidation here, since we may be invalidating
 	 * table entries as part of collapsing hugepages or moving page tables.
+	 * Set the tlb_level to 0 because we can not get enough information here.
 	 */
-	__flush_tlb_range(vma, start, end, PAGE_SIZE, false);
+	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
 }

 static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
--
2.19.1