From: Jinjie Ruan
Subject: [PATCH v5.15 12/15] arm64: armv8_deprecated: move emulation functions
Date: Wed, 11 Oct 2023 10:06:52 +0000
Message-ID: <20231011100655.979626-13-ruanjinjie@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231011100655.979626-1-ruanjinjie@huawei.com>
References: <20231011100655.979626-1-ruanjinjie@huawei.com>

From: Mark Rutland

commit 25eeac0cfe7c97ade1be07340e11e7143aab57a6 upstream.

Subsequent patches will rework the logic in armv8_deprecated.c.

In preparation for subsequent changes, this patch moves the emulation
logic earlier in the file, and moves the infrastructure later in the
file. This will make subsequent diffs simpler and easier to read.

This is purely a move. There should be no functional change as a result
of this patch.
Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: James Morse
Cc: Joey Gouly
Cc: Peter Zijlstra
Cc: Will Deacon
Link: https://lore.kernel.org/r/20221019144123.612388-8-mark.rutland@arm.com
Signed-off-by: Will Deacon
Signed-off-by: Jinjie Ruan
---
 arch/arm64/kernel/armv8_deprecated.c | 394 +++++++++++++--------------
 1 file changed, 197 insertions(+), 197 deletions(-)

diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 494946942b7c..e52ae01b4805 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -52,203 +52,6 @@ struct insn_emulation {
         int max;
 };
 
-static LIST_HEAD(insn_emulation);
-static int nr_insn_emulated __initdata;
-static DEFINE_RAW_SPINLOCK(insn_emulation_lock);
-static DEFINE_MUTEX(insn_emulation_mutex);
-
-static void register_emulation_hooks(struct insn_emulation *insn)
-{
-        struct undef_hook *hook;
-
-        BUG_ON(!insn->hooks);
-
-        for (hook = insn->hooks; hook->instr_mask; hook++)
-                register_undef_hook(hook);
-
-        pr_notice("Registered %s emulation handler\n", insn->name);
-}
-
-static void remove_emulation_hooks(struct insn_emulation *insn)
-{
-        struct undef_hook *hook;
-
-        BUG_ON(!insn->hooks);
-
-        for (hook = insn->hooks; hook->instr_mask; hook++)
-                unregister_undef_hook(hook);
-
-        pr_notice("Removed %s emulation handler\n", insn->name);
-}
-
-static void enable_insn_hw_mode(void *data)
-{
-        struct insn_emulation *insn = (struct insn_emulation *)data;
-        if (insn->set_hw_mode)
-                insn->set_hw_mode(true);
-}
-
-static void disable_insn_hw_mode(void *data)
-{
-        struct insn_emulation *insn = (struct insn_emulation *)data;
-        if (insn->set_hw_mode)
-                insn->set_hw_mode(false);
-}
-
-/* Run set_hw_mode(mode) on all active CPUs */
-static int run_all_cpu_set_hw_mode(struct insn_emulation *insn, bool enable)
-{
-        if (!insn->set_hw_mode)
-                return -EINVAL;
-        if (enable)
-                on_each_cpu(enable_insn_hw_mode, (void *)insn, true);
-        else
-                on_each_cpu(disable_insn_hw_mode, (void *)insn, true);
-        return 0;
-}
-
-/*
- * Run set_hw_mode for all insns on a starting CPU.
- * Returns:
- *  0        - If all the hooks ran successfully.
- *  -EINVAL  - At least one hook is not supported by the CPU.
- */
-static int run_all_insn_set_hw_mode(unsigned int cpu)
-{
-        int rc = 0;
-        unsigned long flags;
-        struct insn_emulation *insn;
-
-        raw_spin_lock_irqsave(&insn_emulation_lock, flags);
-        list_for_each_entry(insn, &insn_emulation, node) {
-                bool enable = (insn->current_mode == INSN_HW);
-                if (insn->set_hw_mode && insn->set_hw_mode(enable)) {
-                        pr_warn("CPU[%u] cannot support the emulation of %s",
-                                cpu, insn->name);
-                        rc = -EINVAL;
-                }
-        }
-        raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
-        return rc;
-}
-
-static int update_insn_emulation_mode(struct insn_emulation *insn,
-                                      enum insn_emulation_mode prev)
-{
-        int ret = 0;
-
-        switch (prev) {
-        case INSN_UNDEF: /* Nothing to be done */
-                break;
-        case INSN_EMULATE:
-                remove_emulation_hooks(insn);
-                break;
-        case INSN_HW:
-                if (!run_all_cpu_set_hw_mode(insn, false))
-                        pr_notice("Disabled %s support\n", insn->name);
-                break;
-        }
-
-        switch (insn->current_mode) {
-        case INSN_UNDEF:
-                break;
-        case INSN_EMULATE:
-                register_emulation_hooks(insn);
-                break;
-        case INSN_HW:
-                ret = run_all_cpu_set_hw_mode(insn, true);
-                if (!ret)
-                        pr_notice("Enabled %s support\n", insn->name);
-                break;
-        }
-
-        return ret;
-}
-
-static void __init register_insn_emulation(struct insn_emulation *insn)
-{
-        unsigned long flags;
-
-        insn->min = INSN_UNDEF;
-
-        switch (insn->status) {
-        case INSN_DEPRECATED:
-                insn->current_mode = INSN_EMULATE;
-                /* Disable the HW mode if it was turned on at early boot time */
-                run_all_cpu_set_hw_mode(insn, false);
-                insn->max = INSN_HW;
-                break;
-        case INSN_OBSOLETE:
-                insn->current_mode = INSN_UNDEF;
-                insn->max = INSN_EMULATE;
-                break;
-        }
-
-        raw_spin_lock_irqsave(&insn_emulation_lock, flags);
-        list_add(&insn->node, &insn_emulation);
-        nr_insn_emulated++;
-        raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
-
-        /* Register any handlers if required */
-        update_insn_emulation_mode(insn, INSN_UNDEF);
-}
-
-static int emulation_proc_handler(struct ctl_table *table, int write,
-                                  void *buffer, size_t *lenp,
-                                  loff_t *ppos)
-{
-        int ret = 0;
-        struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
-        enum insn_emulation_mode prev_mode = insn->current_mode;
-
-        mutex_lock(&insn_emulation_mutex);
-        ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
-
-        if (ret || !write || prev_mode == insn->current_mode)
-                goto ret;
-
-        ret = update_insn_emulation_mode(insn, prev_mode);
-        if (ret) {
-                /* Mode change failed, revert to previous mode. */
-                insn->current_mode = prev_mode;
-                update_insn_emulation_mode(insn, INSN_UNDEF);
-        }
-ret:
-        mutex_unlock(&insn_emulation_mutex);
-        return ret;
-}
-
-static void __init register_insn_emulation_sysctl(void)
-{
-        unsigned long flags;
-        int i = 0;
-        struct insn_emulation *insn;
-        struct ctl_table *insns_sysctl, *sysctl;
-
-        insns_sysctl = kcalloc(nr_insn_emulated + 1, sizeof(*sysctl),
-                               GFP_KERNEL);
-        if (!insns_sysctl)
-                return;
-
-        raw_spin_lock_irqsave(&insn_emulation_lock, flags);
-        list_for_each_entry(insn, &insn_emulation, node) {
-                sysctl = &insns_sysctl[i];
-
-                sysctl->mode = 0644;
-                sysctl->maxlen = sizeof(int);
-
-                sysctl->procname = insn->name;
-                sysctl->data = &insn->current_mode;
-                sysctl->extra1 = &insn->min;
-                sysctl->extra2 = &insn->max;
-                sysctl->proc_handler = emulation_proc_handler;
-                i++;
-        }
-        raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
-
-        register_sysctl("abi", insns_sysctl);
-}
-
 /*
  * Implement emulation of the SWP/SWPB instructions using load-exclusive and
  * store-exclusive.
@@ -608,6 +411,203 @@ static struct insn_emulation insn_setend = {
         .set_hw_mode = setend_set_hw_mode,
 };
 
+static LIST_HEAD(insn_emulation);
+static int nr_insn_emulated __initdata;
+static DEFINE_RAW_SPINLOCK(insn_emulation_lock);
+static DEFINE_MUTEX(insn_emulation_mutex);
+
+static void register_emulation_hooks(struct insn_emulation *insn)
+{
+        struct undef_hook *hook;
+
+        BUG_ON(!insn->hooks);
+
+        for (hook = insn->hooks; hook->instr_mask; hook++)
+                register_undef_hook(hook);
+
+        pr_notice("Registered %s emulation handler\n", insn->name);
+}
+
+static void remove_emulation_hooks(struct insn_emulation *insn)
+{
+        struct undef_hook *hook;
+
+        BUG_ON(!insn->hooks);
+
+        for (hook = insn->hooks; hook->instr_mask; hook++)
+                unregister_undef_hook(hook);
+
+        pr_notice("Removed %s emulation handler\n", insn->name);
+}
+
+static void enable_insn_hw_mode(void *data)
+{
+        struct insn_emulation *insn = (struct insn_emulation *)data;
+        if (insn->set_hw_mode)
+                insn->set_hw_mode(true);
+}
+
+static void disable_insn_hw_mode(void *data)
+{
+        struct insn_emulation *insn = (struct insn_emulation *)data;
+        if (insn->set_hw_mode)
+                insn->set_hw_mode(false);
+}
+
+/* Run set_hw_mode(mode) on all active CPUs */
+static int run_all_cpu_set_hw_mode(struct insn_emulation *insn, bool enable)
+{
+        if (!insn->set_hw_mode)
+                return -EINVAL;
+        if (enable)
+                on_each_cpu(enable_insn_hw_mode, (void *)insn, true);
+        else
+                on_each_cpu(disable_insn_hw_mode, (void *)insn, true);
+        return 0;
+}
+
+/*
+ * Run set_hw_mode for all insns on a starting CPU.
+ * Returns:
+ *  0        - If all the hooks ran successfully.
+ *  -EINVAL  - At least one hook is not supported by the CPU.
+ */
+static int run_all_insn_set_hw_mode(unsigned int cpu)
+{
+        int rc = 0;
+        unsigned long flags;
+        struct insn_emulation *insn;
+
+        raw_spin_lock_irqsave(&insn_emulation_lock, flags);
+        list_for_each_entry(insn, &insn_emulation, node) {
+                bool enable = (insn->current_mode == INSN_HW);
+                if (insn->set_hw_mode && insn->set_hw_mode(enable)) {
+                        pr_warn("CPU[%u] cannot support the emulation of %s",
+                                cpu, insn->name);
+                        rc = -EINVAL;
+                }
+        }
+        raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
+        return rc;
+}
+
+static int update_insn_emulation_mode(struct insn_emulation *insn,
+                                      enum insn_emulation_mode prev)
+{
+        int ret = 0;
+
+        switch (prev) {
+        case INSN_UNDEF: /* Nothing to be done */
+                break;
+        case INSN_EMULATE:
+                remove_emulation_hooks(insn);
+                break;
+        case INSN_HW:
+                if (!run_all_cpu_set_hw_mode(insn, false))
+                        pr_notice("Disabled %s support\n", insn->name);
+                break;
+        }
+
+        switch (insn->current_mode) {
+        case INSN_UNDEF:
+                break;
+        case INSN_EMULATE:
+                register_emulation_hooks(insn);
+                break;
+        case INSN_HW:
+                ret = run_all_cpu_set_hw_mode(insn, true);
+                if (!ret)
+                        pr_notice("Enabled %s support\n", insn->name);
+                break;
+        }
+
+        return ret;
+}
+
+static void __init register_insn_emulation(struct insn_emulation *insn)
+{
+        unsigned long flags;
+
+        insn->min = INSN_UNDEF;
+
+        switch (insn->status) {
+        case INSN_DEPRECATED:
+                insn->current_mode = INSN_EMULATE;
+                /* Disable the HW mode if it was turned on at early boot time */
+                run_all_cpu_set_hw_mode(insn, false);
+                insn->max = INSN_HW;
+                break;
+        case INSN_OBSOLETE:
+                insn->current_mode = INSN_UNDEF;
+                insn->max = INSN_EMULATE;
+                break;
+        }
+
+        raw_spin_lock_irqsave(&insn_emulation_lock, flags);
+        list_add(&insn->node, &insn_emulation);
+        nr_insn_emulated++;
+        raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
+
+        /* Register any handlers if required */
+        update_insn_emulation_mode(insn, INSN_UNDEF);
+}
+
+static int emulation_proc_handler(struct ctl_table *table, int write,
+                                  void *buffer, size_t *lenp,
+                                  loff_t *ppos)
+{
+        int ret = 0;
+        struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
+        enum insn_emulation_mode prev_mode = insn->current_mode;
+
+        mutex_lock(&insn_emulation_mutex);
+        ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+        if (ret || !write || prev_mode == insn->current_mode)
+                goto ret;
+
+        ret = update_insn_emulation_mode(insn, prev_mode);
+        if (ret) {
+                /* Mode change failed, revert to previous mode. */
+                insn->current_mode = prev_mode;
+                update_insn_emulation_mode(insn, INSN_UNDEF);
+        }
+ret:
+        mutex_unlock(&insn_emulation_mutex);
+        return ret;
+}
+
+static void __init register_insn_emulation_sysctl(void)
+{
+        unsigned long flags;
+        int i = 0;
+        struct insn_emulation *insn;
+        struct ctl_table *insns_sysctl, *sysctl;
+
+        insns_sysctl = kcalloc(nr_insn_emulated + 1, sizeof(*sysctl),
+                               GFP_KERNEL);
+        if (!insns_sysctl)
+                return;
+
+        raw_spin_lock_irqsave(&insn_emulation_lock, flags);
+        list_for_each_entry(insn, &insn_emulation, node) {
+                sysctl = &insns_sysctl[i];
+
+                sysctl->mode = 0644;
+                sysctl->maxlen = sizeof(int);
+
+                sysctl->procname = insn->name;
+                sysctl->data = &insn->current_mode;
+                sysctl->extra1 = &insn->min;
+                sysctl->extra2 = &insn->max;
+                sysctl->proc_handler = emulation_proc_handler;
+                i++;
+        }
+        raw_spin_unlock_irqrestore(&insn_emulation_lock, flags);
+
+        register_sysctl("abi", insns_sysctl);
+}
+
 /*
  * Invoked as core_initcall, which guarantees that the instruction
  * emulation is ready for userspace.
-- 
2.34.1