From: Evan Green
Date: Tue, 26 Sep 2023 14:57:29 -0700
In-Reply-To: <20230926150316.1129648-7-cleger@rivosinc.com>
Subject: Re: [PATCH 6/7] riscv: report misaligned accesses emulation to hwprobe
To: Clément Léger
Cc: Paul Walmsley, Palmer Dabbelt, Albert Ou, Atish Patra, Andrew Jones,
    Björn Töpel, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Ron Minnich, Daniel Maslowski

On Tue, Sep 26, 2023 at 8:03 AM Clément Léger wrote:
>
> hwprobe provides a way to report if misaligned access are emulated. In
> order to correctly populate that feature, we can check if it actually
> traps when doing a misaligned access. This can be checked using an
> exception table entry which will actually be used when a misaligned
> access is done from kernel mode.
>
> Signed-off-by: Clément Léger
> ---
>  arch/riscv/include/asm/cpufeature.h  |  6 +++
>  arch/riscv/kernel/cpufeature.c       |  6 ++-
>  arch/riscv/kernel/setup.c            |  1 +
>  arch/riscv/kernel/traps_misaligned.c | 63 +++++++++++++++++++++++++++-
>  4 files changed, 74 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
> index d0345bd659c9..c1f0ef02cd7d 100644
> --- a/arch/riscv/include/asm/cpufeature.h
> +++ b/arch/riscv/include/asm/cpufeature.h
> @@ -8,6 +8,7 @@
>
>  #include
>  #include
> +#include
>
>  /*
>   * These are probed via a device_initcall(), via either the SBI or directly
> @@ -32,4 +33,9 @@ extern struct riscv_isainfo hart_isa[NR_CPUS];
>
>  void check_unaligned_access(int cpu);
>
> +bool unaligned_ctl_available(void);
> +
> +bool check_unaligned_access_emulated(int cpu);
> +void unaligned_emulation_finish(void);
> +
>  #endif
> diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> index 1cfbba65d11a..fbbde800bc21 100644
> --- a/arch/riscv/kernel/cpufeature.c
> +++ b/arch/riscv/kernel/cpufeature.c
> @@ -568,6 +568,9 @@ void check_unaligned_access(int cpu)
>         void *src;
>         long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
>
> +       if (check_unaligned_access_emulated(cpu))

This spot (referenced below).
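To make that concrete, what I have in mind at this spot is roughly the
following (untested, and it assumes check_unaligned_access_emulated()
then only reports the result rather than writing the per-cpu variable
itself -- see my comment further down):

        if (check_unaligned_access_emulated(cpu)) {
                per_cpu(misaligned_access_speed, cpu) =
                        RISCV_HWPROBE_MISALIGNED_EMULATED;
                return;
        }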
> +               return;
> +
>         page = alloc_pages(GFP_NOWAIT, get_order(MISALIGNED_BUFFER_SIZE));
>         if (!page) {
>                 pr_warn("Can't alloc pages to measure memcpy performance");
> @@ -645,9 +648,10 @@ void check_unaligned_access(int cpu)
>         __free_pages(page, get_order(MISALIGNED_BUFFER_SIZE));
>  }
>
> -static int check_unaligned_access_boot_cpu(void)
> +static int __init check_unaligned_access_boot_cpu(void)
>  {
>         check_unaligned_access(0);
> +       unaligned_emulation_finish();
>         return 0;
>  }
>
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index e600aab116a4..3af6ad4df7cf 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -26,6 +26,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
> index b5fb1ff078e3..fa81f6952fa4 100644
> --- a/arch/riscv/kernel/traps_misaligned.c
> +++ b/arch/riscv/kernel/traps_misaligned.c
> @@ -9,11 +9,14 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
>  #include
>  #include
> +#include
> +#include
>
>  #define INSN_MATCH_LB           0x3
>  #define INSN_MASK_LB            0x707f
> @@ -396,8 +399,10 @@ union reg_data {
>         u64 data_u64;
>  };
>
> +static bool unaligned_ctl __read_mostly;
> +
>  /* sysctl hooks */
> -int unaligned_enabled __read_mostly = 1;        /* Enabled by default */
> +int unaligned_enabled __read_mostly;
>
>  int handle_misaligned_load(struct pt_regs *regs)
>  {
> @@ -412,6 +417,9 @@ int handle_misaligned_load(struct pt_regs *regs)
>         if (!unaligned_enabled)
>                 return -1;
>
> +       if (user_mode(regs) && (current->thread.align_ctl & PR_UNALIGN_SIGBUS))
> +               return -1;
> +
>         if (get_insn(regs, epc, &insn))
>                 return -1;
>
> @@ -511,6 +519,9 @@ int handle_misaligned_store(struct pt_regs *regs)
>         if (!unaligned_enabled)
>                 return -1;
>
> +       if (user_mode(regs) && (current->thread.align_ctl & PR_UNALIGN_SIGBUS))
> +               return -1;
> +
>         if (get_insn(regs, epc, &insn))
>                 return -1;
>
> @@ -585,3 +596,53 @@ int handle_misaligned_store(struct pt_regs *regs)
>
>         return 0;
>  }
> +
> +bool check_unaligned_access_emulated(int cpu)
> +{
> +       unsigned long emulated = 1, tmp_var;
> +
> +       /* Use a fixup to detect if misaligned access triggered an exception */
> +       __asm__ __volatile__ (
> +               "1:\n"
> +               "       "REG_L" %[tmp], 1(%[ptr])\n"
> +               "       li %[emulated], 0\n"
> +               "2:\n"
> +               _ASM_EXTABLE(1b, 2b)
> +               : [emulated] "+r" (emulated), [tmp] "=r" (tmp_var)
> +               : [ptr] "r" (&tmp_var)
> +               : "memory");
> +
> +       if (!emulated)
> +               return false;
> +
> +       per_cpu(misaligned_access_speed, cpu) =
> +               RISCV_HWPROBE_MISALIGNED_EMULATED;

For tidiness, can we move the assignment of this per-cpu variable into
check_unaligned_access(), at the spot I referenced above? That way,
people looking to see how this variable is set don't have to hunt
through multiple locations.

> +
> +       return true;
> +}
> +
> +void __init unaligned_emulation_finish(void)
> +{
> +       int cpu;
> +
> +       /*
> +        * We can only support PR_UNALIGN controls if all CPUs have misaligned
> +        * accesses emulated since tasks requesting such control can run on any
> +        * CPU.
> +        */
> +       for_each_possible_cpu(cpu) {
> +               if (per_cpu(misaligned_access_speed, cpu) !=
> +                   RISCV_HWPROBE_MISALIGNED_EMULATED) {
> +                       goto out;
> +               }
> +       }
> +       unaligned_ctl = true;

This doesn't handle the case where a CPU that doesn't match the others
is hotplugged later. You may want to add a patch that fails the
onlining of that new CPU if unaligned_ctl is true and
new_cpu.misaligned_access_speed != EMULATED.
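Something along these lines, maybe (completely untested; the dynamic
cpuhp state and the function name are just placeholders, pick whatever
fits the rest of the series):

        static int riscv_unaligned_online_cpu(unsigned int cpu)
        {
                /*
                 * AP hotplug callbacks run on the CPU coming online, so
                 * this probes the right hart. Refuse CPUs that would
                 * break PR_UNALIGN for already-running tasks.
                 */
                if (unaligned_ctl_available() &&
                    !check_unaligned_access_emulated(cpu))
                        return -EINVAL;

                return 0;
        }

registered with something like:

        cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "riscv/unaligned:online",
                          riscv_unaligned_online_cpu, NULL);

so that onlining such a CPU fails cleanly instead of silently leaving a
mixed system.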
-Evan