From: Huacai Chen
To: Arnd Bergmann, Andy Lutomirski, Thomas Gleixner, Peter Zijlstra,
	Andrew Morton, David Airlie, Jonathan Corbet, Linus Torvalds
Cc: linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, Xuefeng Li, Yanteng Si, Huacai Chen,
	Guo Ren, Xuerui Wang, Jiaxun Yang, Stephen Rothwell, WANG Xuerui
Subject: [PATCH V11 17/22] LoongArch: Add some library functions
Date: Wed, 18 May 2022 17:57:04 +0800
Message-Id: <20220518095709.1313120-1-chenhuacai@loongson.cn>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220518092619.1269111-1-chenhuacai@loongson.cn>
References: <20220518092619.1269111-1-chenhuacai@loongson.cn>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add some library functions for LoongArch, including: delay, clear_user,
copy_user and TLB dump functions, as well as prototypes for memset,
memcpy and memmove.

Reviewed-by: WANG Xuerui
Signed-off-by: Huacai Chen
---
 arch/loongarch/include/asm/delay.h  |  26 ++++++
 arch/loongarch/include/asm/string.h |  12 +++
 arch/loongarch/lib/clear_user.S     |  43 ++++++++++
 arch/loongarch/lib/copy_user.S      |  47 +++++++++++
 arch/loongarch/lib/delay.c          |  43 ++++++++++
 arch/loongarch/lib/dump_tlb.c       | 111 ++++++++++++++++++++++
 6 files changed, 282 insertions(+)
 create mode 100644 arch/loongarch/include/asm/delay.h
 create mode 100644 arch/loongarch/include/asm/string.h
 create mode 100644 arch/loongarch/lib/clear_user.S
 create mode 100644 arch/loongarch/lib/copy_user.S
 create mode 100644 arch/loongarch/lib/delay.c
 create mode 100644 arch/loongarch/lib/dump_tlb.c

diff --git a/arch/loongarch/include/asm/delay.h b/arch/loongarch/include/asm/delay.h
new file mode 100644
index 000000000000..36d775191310
--- /dev/null
+++ b/arch/loongarch/include/asm/delay.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+ */
+#ifndef _ASM_DELAY_H
+#define _ASM_DELAY_H
+
+#include
+
+extern void __delay(unsigned long cycles);
+extern void __ndelay(unsigned long ns);
+extern void __udelay(unsigned long us);
+
+#define ndelay(ns) __ndelay(ns)
+#define udelay(us) __udelay(us)
+
+/* make sure "usecs *= ..." in udelay do not overflow. */
+#if HZ >= 1000
+#define MAX_UDELAY_MS 1
+#elif HZ <= 200
+#define MAX_UDELAY_MS 5
+#else
+#define MAX_UDELAY_MS (1000 / HZ)
+#endif
+
+#endif /* _ASM_DELAY_H */
diff --git a/arch/loongarch/include/asm/string.h b/arch/loongarch/include/asm/string.h
new file mode 100644
index 000000000000..b07e60ded957
--- /dev/null
+++ b/arch/loongarch/include/asm/string.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+ */
+#ifndef _ASM_STRING_H
+#define _ASM_STRING_H
+
+extern void *memset(void *__s, int __c, size_t __count);
+extern void *memcpy(void *__to, __const__ void *__from, size_t __n);
+extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
+
+#endif /* _ASM_STRING_H */
diff --git a/arch/loongarch/lib/clear_user.S b/arch/loongarch/lib/clear_user.S
new file mode 100644
index 000000000000..25d9be5fbb19
--- /dev/null
+++ b/arch/loongarch/lib/clear_user.S
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+#include
+
+.macro fixup_ex from, to, offset, fix
+.if \fix
+	.section .fixup, "ax"
+\to:	addi.d	a0, a1, \offset
+	jr	ra
+	.previous
+.endif
+	.section __ex_table, "a"
+	PTR	\from\()b, \to\()b
+	.previous
+.endm
+
+/*
+ * unsigned long __clear_user(void *addr, size_t size)
+ *
+ * a0: addr
+ * a1: size
+ */
+SYM_FUNC_START(__clear_user)
+	beqz	a1, 2f
+
+1:	st.b	zero, a0, 0
+	addi.d	a0, a0, 1
+	addi.d	a1, a1, -1
+	bgt	a1, zero, 1b
+
+2:	move	a0, a1
+	jr	ra
+
+	fixup_ex 1, 3, 0, 1
+SYM_FUNC_END(__clear_user)
+
+EXPORT_SYMBOL(__clear_user)
diff --git a/arch/loongarch/lib/copy_user.S b/arch/loongarch/lib/copy_user.S
new file mode 100644
index 000000000000..9ae507f851b5
--- /dev/null
+++ b/arch/loongarch/lib/copy_user.S
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+ */
+
+#include
+#include
+#include
+#include
+
+.macro fixup_ex from, to, offset, fix
+.if \fix
+	.section .fixup, "ax"
+\to:	addi.d	a0, a2, \offset
+	jr	ra
+	.previous
+.endif
+	.section __ex_table, "a"
+	PTR	\from\()b, \to\()b
+	.previous
+.endm
+
+/*
+ * unsigned long __copy_user(void *to, const void *from, size_t n)
+ *
+ * a0: to
+ * a1: from
+ * a2: n
+ */
+SYM_FUNC_START(__copy_user)
+	beqz	a2, 3f
+
+1:	ld.b	t0, a1, 0
+2:	st.b	t0, a0, 0
+	addi.d	a0, a0, 1
+	addi.d	a1, a1, 1
+	addi.d	a2, a2, -1
+	bgt	a2, zero, 1b
+
+3:	move	a0, a2
+	jr	ra
+
+	fixup_ex 1, 4, 0, 1
+	fixup_ex 2, 4, 0, 0
+SYM_FUNC_END(__copy_user)
+
+EXPORT_SYMBOL(__copy_user)
diff --git a/arch/loongarch/lib/delay.c b/arch/loongarch/lib/delay.c
new file mode 100644
index 000000000000..5d856694fcfe
--- /dev/null
+++ b/arch/loongarch/lib/delay.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+ */
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+void __delay(unsigned long cycles)
+{
+	u64 t0 = get_cycles();
+
+	while ((unsigned long)(get_cycles() - t0) < cycles)
+		cpu_relax();
+}
+EXPORT_SYMBOL(__delay);
+
+/*
+ * Division by multiplication: you don't have to worry about
+ * loss of precision.
+ *
+ * Use only for very small delays (< 1 msec). Should probably use a
+ * lookup table, really, as the multiplications take much too long with
+ * short delays. This is a "reasonable" implementation, though (and the
+ * first constant multiplication gets optimized away if the delay is
+ * a constant).
+ */
+
+void __udelay(unsigned long us)
+{
+	__delay((us * 0x000010c7ull * HZ * lpj_fine) >> 32);
+}
+EXPORT_SYMBOL(__udelay);
+
+void __ndelay(unsigned long ns)
+{
+	__delay((ns * 0x00000005ull * HZ * lpj_fine) >> 32);
+}
+EXPORT_SYMBOL(__ndelay);
diff --git a/arch/loongarch/lib/dump_tlb.c b/arch/loongarch/lib/dump_tlb.c
new file mode 100644
index 000000000000..cda2c6bc7f09
--- /dev/null
+++ b/arch/loongarch/lib/dump_tlb.c
@@ -0,0 +1,111 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+ *
+ * Derived from MIPS:
+ * Copyright (C) 1994, 1995 by Waldorf Electronics, written by Ralf Baechle.
+ * Copyright (C) 1999 by Silicon Graphics, Inc.
+ */
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+void dump_tlb_regs(void)
+{
+	const int field = 2 * sizeof(unsigned long);
+
+	pr_info("Index    : %0x\n", read_csr_tlbidx());
+	pr_info("PageSize : %0x\n", read_csr_pagesize());
+	pr_info("EntryHi  : %0*llx\n", field, read_csr_entryhi());
+	pr_info("EntryLo0 : %0*llx\n", field, read_csr_entrylo0());
+	pr_info("EntryLo1 : %0*llx\n", field, read_csr_entrylo1());
+}
+
+static void dump_tlb(int first, int last)
+{
+	unsigned long s_entryhi, entryhi, asid;
+	unsigned long long entrylo0, entrylo1, pa;
+	unsigned int index;
+	unsigned int s_index, s_asid;
+	unsigned int pagesize, c0, c1, i;
+	unsigned long asidmask = cpu_asid_mask(&current_cpu_data);
+	int pwidth = 11;
+	int vwidth = 11;
+	int asidwidth = DIV_ROUND_UP(ilog2(asidmask) + 1, 4);
+
+	s_entryhi = read_csr_entryhi();
+	s_index = read_csr_tlbidx();
+	s_asid = read_csr_asid();
+
+	for (i = first; i <= last; i++) {
+		write_csr_index(i);
+		tlb_read();
+		pagesize = read_csr_pagesize();
+		entryhi = read_csr_entryhi();
+		entrylo0 = read_csr_entrylo0();
+		entrylo1 = read_csr_entrylo1();
+		index = read_csr_tlbidx();
+		asid = read_csr_asid();
+
+		/* EHINV bit marks entire entry as invalid */
+		if (index & CSR_TLBIDX_EHINV)
+			continue;
+		/*
+		 * ASID takes effect in absence of G (global) bit.
+		 */
+		if (!((entrylo0 | entrylo1) & ENTRYLO_G) &&
+		    asid != s_asid)
+			continue;

+		/*
+		 * Only print entries in use
+		 */
+		pr_info("Index: %2d pgsize=%x ", i, (1 << pagesize));
+
+		c0 = (entrylo0 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
+		c1 = (entrylo1 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
+
+		pr_cont("va=%0*lx asid=%0*lx",
+			vwidth, (entryhi & ~0x1fffUL), asidwidth, asid & asidmask);
+
+		/* NR/NX are in awkward places, so mask them off separately */
+		pa = entrylo0 & ~(ENTRYLO_NR | ENTRYLO_NX);
+		pa = pa & PAGE_MASK;
+		pr_cont("\n\t[");
+		pr_cont("ri=%d xi=%d ",
+			(entrylo0 & ENTRYLO_NR) ? 1 : 0,
+			(entrylo0 & ENTRYLO_NX) ? 1 : 0);
+		pr_cont("pa=%0*llx c=%d d=%d v=%d g=%d plv=%lld] [",
+			pwidth, pa, c0,
+			(entrylo0 & ENTRYLO_D) ? 1 : 0,
+			(entrylo0 & ENTRYLO_V) ? 1 : 0,
+			(entrylo0 & ENTRYLO_G) ? 1 : 0,
+			(entrylo0 & ENTRYLO_PLV) >> ENTRYLO_PLV_SHIFT);
+		/* NR/NX are in awkward places, so mask them off separately */
+		pa = entrylo1 & ~(ENTRYLO_NR | ENTRYLO_NX);
+		pa = pa & PAGE_MASK;
+		pr_cont("ri=%d xi=%d ",
+			(entrylo1 & ENTRYLO_NR) ? 1 : 0,
+			(entrylo1 & ENTRYLO_NX) ? 1 : 0);
+		pr_cont("pa=%0*llx c=%d d=%d v=%d g=%d plv=%lld]\n",
+			pwidth, pa, c1,
+			(entrylo1 & ENTRYLO_D) ? 1 : 0,
+			(entrylo1 & ENTRYLO_V) ? 1 : 0,
+			(entrylo1 & ENTRYLO_G) ? 1 : 0,
+			(entrylo1 & ENTRYLO_PLV) >> ENTRYLO_PLV_SHIFT);
+	}
+	pr_info("\n");
+
+	write_csr_entryhi(s_entryhi);
+	write_csr_tlbidx(s_index);
+	write_csr_asid(s_asid);
+}
+
+void dump_tlb_all(void)
+{
+	dump_tlb(0, current_cpu_data.tlbsize - 1);
+}
-- 
2.27.0
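
A note on the fixed-point constants in lib/delay.c above: 0x10c7 and 0x5 are
ceil(2^32 / 10^6) and ceil(2^32 / 10^9), so multiplying and shifting right by
32 divides by 10^6 (udelay) or 10^9 (ndelay) while rounding the delay up,
given that HZ * lpj_fine is the calibrated number of __delay() cycles per
second. The user-space sketch below re-derives the constants; the HZ and
lpj_fine values in it are made up purely for illustration and are not taken
from the patch.

/*
 * Standalone sanity check of the __udelay()/__ndelay() constants.
 * Build with: cc -O2 check_delay_const.c -o check_delay_const
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* ceil(2^32 / 10^6) and ceil(2^32 / 10^9) */
	uint64_t udelay_const = (UINT64_C(1) << 32) / 1000000 + 1;
	uint64_t ndelay_const = (UINT64_C(1) << 32) / 1000000000 + 1;

	printf("udelay constant: %#llx (patch uses 0x10c7)\n",
	       (unsigned long long)udelay_const);
	printf("ndelay constant: %#llx (patch uses 0x5)\n",
	       (unsigned long long)ndelay_const);

	/* Hypothetical values: HZ=250, lpj_fine=4000000 (a 1 GHz counter) */
	uint64_t hz = 250, lpj_fine = 4000000, us = 10;
	uint64_t cycles = (us * udelay_const * hz * lpj_fine) >> 32;

	printf("udelay(10) waits ~%llu cycles (about 10000 at 1 GHz)\n",
	       (unsigned long long)cycles);
	return 0;
}

Rounding the constants up keeps udelay()/ndelay() from ever delaying shorter
than requested, at the cost of a slight overshoot (roughly 16 % in the
ndelay case).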
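
Likewise, the fixup_ex stubs in clear_user.S and copy_user.S encode the usual
kernel convention: if a user access faults, control lands in the stub, which
returns the number of bytes that were NOT processed, so zero means complete
success. The sketch below is only an illustration of that calling convention,
not code from the patch: read_user_buffer() is a hypothetical helper, the
__copy_user() prototype is assumed from the assembly comment above, and the
local stub stands in for the real routine so the example is self-contained.

/*
 * Illustrative only: models the return-value convention implied by the
 * fixup_ex stubs ("bytes left uncopied", 0 on full success).
 */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the assembly routine in copy_user.S (never faults here). */
static unsigned long __copy_user(void *to, const void *from, size_t n)
{
	memcpy(to, from, n);
	return 0;	/* 0 bytes left uncopied */
}

/* Hypothetical caller in the style of kernel copy_from_user() users. */
static int read_user_buffer(void *dst, const void *usrc, size_t len)
{
	unsigned long left = __copy_user(dst, usrc, len);

	return left ? -EFAULT : 0;	/* partial copy => fault */
}

int main(void)
{
	char src[] = "LoongArch";
	char dst[sizeof(src)];

	if (read_user_buffer(dst, src, sizeof(src)))
		fprintf(stderr, "copy faulted\n");
	else
		printf("copied: %s\n", dst);
	return 0;
}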