From: Zhenyu Ye
Subject: [RFC PATCH v4 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64
Date: Mon, 1 Jun 2020 22:47:13 +0800
Message-ID: <20200601144713.2222-3-yezhenyu2@huawei.com>
In-Reply-To: <20200601144713.2222-1-yezhenyu2@huawei.com>
References: <20200601144713.2222-1-yezhenyu2@huawei.com>

Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
In this patch, we only use the TLBI RANGE feature if the stride == PAGE_SIZE,
because when stride > PAGE_SIZE, usually only a small number of pages need to
be flushed and classic TLBI instructions are more effective.

We could also use 'end - start < threshold' to decide which way to go;
however, different hardware may have different thresholds, so I'm not sure
this is feasible.

Signed-off-by: Zhenyu Ye
---
(Note: two small userspace sketches illustrating the range-operand encoding
and the (scale, num) decomposition are appended after the patch.)

 arch/arm64/include/asm/tlbflush.h | 98 +++++++++++++++++++++++++++----
 1 file changed, 86 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index bc3949064725..818f27c82024 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -50,6 +50,16 @@
 		__tlbi(op, (arg) | USER_ASID_FLAG);			\
 } while (0)
 
+#define __tlbi_last_level(op1, op2, arg, last_level) do {		\
+	if (last_level) {						\
+		__tlbi(op1, arg);					\
+		__tlbi_user(op1, arg);					\
+	} else {							\
+		__tlbi(op2, arg);					\
+		__tlbi_user(op2, arg);					\
+	}								\
+} while (0)
+
 /* This macro creates a properly formatted VA operand for the TLBI */
 #define __TLBI_VADDR(addr, asid)					\
 	({								\
@@ -59,6 +69,47 @@
 		__ta;							\
 	})
 
+/*
+ * __TG defines the translation granule of the system, which is determined
+ * by PAGE_SHIFT. Used by the TTL field.
+ *  - 4KB	: 1
+ *  - 16KB	: 2
+ *  - 64KB	: 3
+ */
+#define __TG	((PAGE_SHIFT - 12) / 2 + 1)
+
+/*
+ * This macro creates a properly formatted VA operand for the TLBI RANGE.
+ * The value bit assignments are:
+ *
+ * +----------+------+-------+-------+-------+----------------------+
+ * |   ASID   |  TG  | SCALE |  NUM  |  TTL  |        BADDR         |
+ * +----------+------+-------+-------+-------+----------------------+
+ * |63      48|47  46|45   44|43   39|38   37|36                   0|
+ *
+ * The address range is determined by the formula below:
+ * [BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)
+ *
+ */
+#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)			\
+	({								\
+		unsigned long __ta = (addr) >> PAGE_SHIFT;		\
+		__ta &= GENMASK_ULL(36, 0);				\
+		__ta |= (unsigned long)(ttl) << 37;			\
+		__ta |= (unsigned long)(num) << 39;			\
+		__ta |= (unsigned long)(scale) << 44;			\
+		__ta |= (unsigned long)(__TG) << 46;			\
+		__ta |= (unsigned long)(asid) << 48;			\
+		__ta;							\
+	})
+
+/* This macro defines the size of the address range flushed by TLBI RANGE. */
+#define __TLBI_RANGE_SIZES(num, scale)	((num + 1) << (5 * scale + 1) << PAGE_SHIFT)
+
+#define TLB_RANGE_MASK_SHIFT	5
+#define TLB_RANGE_MASK		GENMASK_ULL(TLB_RANGE_MASK_SHIFT - 1, 0)
+
+
 /*
  * TLB Invalidation
  * ================
@@ -181,32 +232,55 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level)
 {
+	int num = 0;
+	int scale = 0;
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
+	unsigned long range_pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
+	range_pages = (end - start) >> PAGE_SHIFT;
 
 	if ((end - start) >= (MAX_TLBI_OPS * stride)) {
 		flush_tlb_mm(vma->vm_mm);
 		return;
 	}
 
-	/* Convert the stride into units of 4k */
-	stride >>= 12;
+	dsb(ishst);
 
-	start = __TLBI_VADDR(start, asid);
-	end = __TLBI_VADDR(end, asid);
+	/*
+	 * The minimum range covered by TLBI RANGE is 2 pages;
+	 * use a normal TLBI instruction to handle the odd page.
+	 * If stride != PAGE_SIZE, this will never happen.
+	 */
+	if (range_pages % 2 == 1) {
+		addr = __TLBI_VADDR(start, asid);
+		__tlbi_last_level(vale1is, vae1is, addr, last_level);
+		start += 1 << PAGE_SHIFT;
+		range_pages >>= 1;
+	}
 
-	dsb(ishst);
-	for (addr = start; addr < end; addr += stride) {
-		if (last_level) {
-			__tlbi(vale1is, addr);
-			__tlbi_user(vale1is, addr);
-		} else {
-			__tlbi(vae1is, addr);
-			__tlbi_user(vae1is, addr);
+	while (range_pages > 0) {
+		if (cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
+		    stride == PAGE_SIZE) {
+			num = (range_pages & TLB_RANGE_MASK) - 1;
+			if (num >= 0) {
+				addr = __TLBI_VADDR_RANGE(start, asid, scale,
+							  num, 0);
+				__tlbi_last_level(rvale1is, rvae1is, addr,
+						  last_level);
+				start += __TLBI_RANGE_SIZES(num, scale);
+			}
+			scale++;
+			range_pages >>= TLB_RANGE_MASK_SHIFT;
+			continue;
 		}
+
+		addr = __TLBI_VADDR(start, asid);
+		__tlbi_last_level(vale1is, vae1is, addr, last_level);
+		start += stride;
+		range_pages -= stride >> 12;
 	}
 	dsb(ish);
 }
-- 
2.19.1
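
As an illustration of the operand layout documented above __TLBI_VADDR_RANGE,
here is a minimal userspace sketch (not part of the patch). It assumes a 4KB
granule (PAGE_SHIFT == 12, so the TG field is 1), open-codes GENMASK_ULL, and
uses made-up local names; it only shows how the BADDR/TTL/NUM/SCALE/TG/ASID
fields are packed.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define TG		((PAGE_SHIFT - 12) / 2 + 1)	/* 1 for a 4KB granule */
#define GENMASK_ULL(h, l) \
	(((~0ULL) >> (63 - (h))) & ~((1ULL << (l)) - 1))

/* Mirrors the field layout documented above __TLBI_VADDR_RANGE. */
static uint64_t tlbi_vaddr_range(uint64_t addr, uint64_t asid, int scale,
				 int num, int ttl)
{
	uint64_t ta = addr >> PAGE_SHIFT;

	ta &= GENMASK_ULL(36, 0);	/* BADDR: bits [36:0]  */
	ta |= (uint64_t)ttl << 37;	/* TTL:   bits [38:37] */
	ta |= (uint64_t)num << 39;	/* NUM:   bits [43:39] */
	ta |= (uint64_t)scale << 44;	/* SCALE: bits [45:44] */
	ta |= (uint64_t)TG << 46;	/* TG:    bits [47:46] */
	ta |= asid << 48;		/* ASID:  bits [63:48] */
	return ta;
}

int main(void)
{
	/* 32 pages (scale = 0, num = 15) at VA 0x400000, ASID 0x42, TTL 0 */
	printf("operand = 0x%016llx\n",
	       (unsigned long long)tlbi_vaddr_range(0x400000, 0x42, 0, 15, 0));
	return 0;
}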
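
Similarly, the sketch below (also not part of the patch, with made-up names and
sample values) mimics how the rewritten __flush_tlb_range() splits a page count
into TLBI operations when the range feature is available: one classic per-page
TLBI for an odd page count, then a decomposition of the remaining count into
5-bit digits, one SCALE per digit.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12
#define TLB_RANGE_MASK_SHIFT	5
#define TLB_RANGE_MASK		((1ULL << TLB_RANGE_MASK_SHIFT) - 1)

/* Bytes covered by one range operation: (num + 1) * 2^(5*scale + 1) pages. */
static uint64_t range_bytes(int num, int scale)
{
	return (uint64_t)(num + 1) << (5 * scale + 1) << PAGE_SHIFT;
}

static void decompose(uint64_t start, uint64_t range_pages)
{
	int scale = 0;

	/* Odd page count: flush one page with a classic TLBI, as the patch does. */
	if (range_pages % 2 == 1) {
		printf("tlbi vae1is,  va=0x%llx (single page)\n",
		       (unsigned long long)start);
		start += 1ULL << PAGE_SHIFT;
		range_pages >>= 1;
	}

	/*
	 * Consume the count in 5-bit digits; each digit at SCALE s maps to one
	 * range operation of (num + 1) * 2^(5*s + 1) pages.  Very large ranges
	 * never reach this loop: __flush_tlb_range() above falls back to
	 * flush_tlb_mm() first.
	 */
	while (range_pages > 0) {
		int num = (int)(range_pages & TLB_RANGE_MASK) - 1;

		if (num >= 0) {
			printf("tlbi rvae1is, base=0x%llx scale=%d num=%d (%llu bytes)\n",
			       (unsigned long long)start, scale, num,
			       (unsigned long long)range_bytes(num, scale));
			start += range_bytes(num, scale);
		}
		scale++;
		range_pages >>= TLB_RANGE_MASK_SHIFT;
	}
}

int main(void)
{
	/* Example: 263 pages of 4KB starting at VA 0x400000. */
	decompose(0x400000, 263);
	return 0;
}

With these sample values the output is one single-page invalidation followed by
two range operations: scale=0/num=2 covering 6 pages and scale=1/num=3 covering
the remaining 256 pages, i.e. 263 pages in total.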