From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ashok Raj, Borislav Petkov, Thomas Gleixner, Tom Lendacky, Arjan Van De Ven
Subject: [PATCH 4.15 118/168] x86/microcode: Synchronize late microcode loading
Date: Wed, 11 Apr 2018 00:24:20 +0200
Message-Id: <20180410212805.557307914@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180410212800.144079021@linuxfoundation.org>
References: <20180410212800.144079021@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.15-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Ashok Raj

commit a5321aec6412b20b5ad15db2d6b916c05349dbff upstream.

Original idea by Ashok, completely rewritten by Borislav.

Before you read any further: the early loading method is still the
preferred one and you should always do that. The following patch is
improving the late loading mechanism for long running jobs and cloud use
cases.

Gather all cores and serialize the microcode update on them by doing it
one-by-one to make the late update process as reliable as possible and
avoid potential issues caused by the microcode update.

[ Borislav: Rewrite completely. ]

Co-developed-by: Borislav Petkov
Signed-off-by: Ashok Raj
Signed-off-by: Borislav Petkov
Signed-off-by: Thomas Gleixner
Tested-by: Tom Lendacky
Tested-by: Ashok Raj
Reviewed-by: Tom Lendacky
Cc: Arjan Van De Ven
Link: https://lkml.kernel.org/r/20180228102846.13447-8-bp@alien8.de
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/kernel/cpu/microcode/core.c |  118 +++++++++++++++++++++++++++--------
 1 file changed, 92 insertions(+), 26 deletions(-)

--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -22,13 +22,16 @@
 #define pr_fmt(fmt) "microcode: " fmt
 
 #include <linux/platform_device.h>
+#include <linux/stop_machine.h>
 #include <linux/syscore_ops.h>
 #include <linux/miscdevice.h>
 #include <linux/capability.h>
 #include <linux/firmware.h>
 #include <linux/kernel.h>
+#include <linux/delay.h>
 #include <linux/mutex.h>
 #include <linux/cpu.h>
+#include <linux/nmi.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
 
@@ -64,6 +67,11 @@ LIST_HEAD(microcode_cache);
  */
 static DEFINE_MUTEX(microcode_mutex);
 
+/*
+ * Serialize late loading so that CPUs get updated one-by-one.
+ */
+static DEFINE_SPINLOCK(update_lock);
+
 struct ucode_cpu_info ucode_cpu_info[NR_CPUS];
 
 struct cpu_info_ctx {
@@ -486,6 +494,19 @@ static void __exit microcode_dev_exit(void)
 /* fake device for request_firmware */
 static struct platform_device *microcode_pdev;
 
+/*
+ * Late loading dance. Why the heavy-handed stomp_machine effort?
+ *
+ * - HT siblings must be idle and not execute other code while the other sibling
+ *   is loading microcode in order to avoid any negative interactions caused by
+ *   the loading.
+ *
+ * - In addition, microcode update on the cores must be serialized until this
+ *   requirement can be relaxed in the future. Right now, this is conservative
+ *   and good.
+ */
+#define SPINUNIT 100 /* 100 nsec */
+
 static int check_online_cpus(void)
 {
         if (num_online_cpus() == num_present_cpus())
@@ -496,23 +517,85 @@ static int check_online_cpus(void)
         return -EINVAL;
 }
 
-static enum ucode_state reload_for_cpu(int cpu)
+static atomic_t late_cpus;
+
+/*
+ * Returns:
+ * < 0 - on error
+ *   0 - no update done
+ *   1 - microcode was updated
+ */
+static int __reload_late(void *info)
 {
-        struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+        unsigned int timeout = NSEC_PER_SEC;
+        int all_cpus = num_online_cpus();
+        int cpu = smp_processor_id();
+        enum ucode_state err;
+        int ret = 0;
+
+        atomic_dec(&late_cpus);
+
+        /*
+         * Wait for all CPUs to arrive. A load will not be attempted unless all
+         * CPUs show up.
+         * */
+        while (atomic_read(&late_cpus)) {
+                if (timeout < SPINUNIT) {
+                        pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n",
+                                atomic_read(&late_cpus));
+                        return -1;
+                }
+
+                ndelay(SPINUNIT);
+                timeout -= SPINUNIT;
+
+                touch_nmi_watchdog();
+        }
+
+        spin_lock(&update_lock);
+        apply_microcode_local(&err);
+        spin_unlock(&update_lock);
+
+        if (err > UCODE_NFOUND) {
+                pr_warn("Error reloading microcode on CPU %d\n", cpu);
+                ret = -1;
+        } else if (err == UCODE_UPDATED) {
+                ret = 1;
+        }
 
-        if (!uci->valid)
-                return UCODE_OK;
+        atomic_inc(&late_cpus);
 
-        return apply_microcode_on_target(cpu);
+        while (atomic_read(&late_cpus) != all_cpus)
+                cpu_relax();
+
+        return ret;
+}
+
+/*
+ * Reload microcode late on all CPUs. Wait for a sec until they
+ * all gather together.
+ */
+static int microcode_reload_late(void)
+{
+        int ret;
+
+        atomic_set(&late_cpus, num_online_cpus());
+
+        ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
+        if (ret < 0)
+                return ret;
+        else if (ret > 0)
+                microcode_check();
+
+        return ret;
 }
 
 static ssize_t reload_store(struct device *dev,
                             struct device_attribute *attr,
                             const char *buf, size_t size)
 {
-        int cpu, bsp = boot_cpu_data.cpu_index;
         enum ucode_state tmp_ret = UCODE_OK;
-        bool do_callback = false;
+        int bsp = boot_cpu_data.cpu_index;
         unsigned long val;
         ssize_t ret = 0;
 
@@ -534,30 +617,13 @@ static ssize_t reload_store(struct device *dev,
                 goto put;
 
         mutex_lock(&microcode_mutex);
-
-        for_each_online_cpu(cpu) {
-                tmp_ret = reload_for_cpu(cpu);
-                if (tmp_ret > UCODE_NFOUND) {
-                        pr_warn("Error reloading microcode on CPU %d\n", cpu);
-
-                        /* set retval for the first encountered reload error */
-                        if (!ret)
-                                ret = -EINVAL;
-                }
-
-                if (tmp_ret == UCODE_UPDATED)
-                        do_callback = true;
-        }
-
-        if (!ret && do_callback)
-                microcode_check();
-
+        ret = microcode_reload_late();
         mutex_unlock(&microcode_mutex);
 
 put:
         put_online_cpus();
 
-        if (!ret)
+        if (ret >= 0)
                 ret = size;
 
         return ret;
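
A minimal userspace sketch of the rendezvous pattern used above, for readers
who want to see the synchronization idea in isolation: threads stand in for
CPUs, a pthread mutex stands in for the update_lock spinlock, and
fake_apply_microcode() is a made-up stand-in for apply_microcode_local().
This is illustration only, not kernel code, and it leaves out the timeout and
NMI-watchdog handling that __reload_late() needs.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_THREADS 4

/* Counterpart of late_cpus: counts threads that have not yet arrived. */
static atomic_int late_threads = NR_THREADS;

/* Counterpart of the update_lock spinlock: one updater at a time. */
static pthread_mutex_t update_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical stand-in for apply_microcode_local(). */
static void fake_apply_microcode(int cpu)
{
        printf("thread %d: applying update\n", cpu);
}

static void *reload_late(void *arg)
{
        int cpu = (int)(long)arg;

        /* Check in, then spin until every thread has arrived. */
        atomic_fetch_sub(&late_threads, 1);
        while (atomic_load(&late_threads) != 0)
                ;       /* the kernel version also has a timeout here */

        /* Serialize the update itself. */
        pthread_mutex_lock(&update_lock);
        fake_apply_microcode(cpu);
        pthread_mutex_unlock(&update_lock);

        /* Check out and wait until the whole set is done before leaving. */
        atomic_fetch_add(&late_threads, 1);
        while (atomic_load(&late_threads) != NR_THREADS)
                ;       /* mirrors the final cpu_relax() loop */

        return NULL;
}

int main(void)
{
        pthread_t tid[NR_THREADS];

        for (long i = 0; i < NR_THREADS; i++)
                pthread_create(&tid[i], NULL, reload_late, (void *)i);
        for (int i = 0; i < NR_THREADS; i++)
                pthread_join(tid[i], NULL);

        return 0;
}

On a real system the kernel path is reached by writing 1 to
/sys/devices/system/cpu/microcode/reload, which lands in reload_store() and,
with this patch, in microcode_reload_late().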