Message-Id: <5C9B92EA020000780022227B@prv1-mh.provo.novell.com>
Date: Wed, 27 Mar 2019 09:12:42 -0600
From: "Jan Beulich"
To: "Stefano Stabellini", "Boris Ostrovsky", "Juergen Gross"
Cc: "xen-devel",
Subject: [PATCH] x86/Xen: streamline (and fix) PV CPU enumeration
X-Mailing-List: linux-kernel@vger.kernel.org

This started out with me noticing that "dom0_max_vcpus=" with a value
larger than the number of physical CPUs reported through ACPI tables
would not bring up the "excess" vCPU-s.
Noticing that xen_fill_possible_map() gets called way too early, whereas
xen_filter_cpu_maps() gets called too late (after per-CPU areas were
already set up), and further observing that each of the functions serves
only one of Dom0 or DomU, it seemed better to simplify this. Use the
.get_smp_config hook instead, uniformly for Dom0 and DomU.
xen_fill_possible_map() can be dropped altogether, while
xen_filter_cpu_maps() gets re-purposed but not otherwise changed.

Signed-off-by: Jan Beulich
---
 arch/x86/xen/enlighten_pv.c |  4 ----
 arch/x86/xen/smp_pv.c       | 26 ++++++--------------------
 2 files changed, 6 insertions(+), 24 deletions(-)

--- 5.1-rc2/arch/x86/xen/enlighten_pv.c
+++ 5.1-rc2-xen-x86-Dom0-more-vCPUs/arch/x86/xen/enlighten_pv.c
@@ -1381,10 +1381,6 @@ asmlinkage __visible void __init xen_sta
 	xen_acpi_sleep_register();
 
-	/* Avoid searching for BIOS MP tables */
-	x86_init.mpparse.find_smp_config = x86_init_noop;
-	x86_init.mpparse.get_smp_config = x86_init_uint_noop;
-
 	xen_boot_params_init_edd();
 }
--- 5.1-rc2/arch/x86/xen/smp_pv.c
+++ 5.1-rc2-xen-x86-Dom0-more-vCPUs/arch/x86/xen/smp_pv.c
@@ -146,28 +146,12 @@ int xen_smp_intr_init_pv(unsigned int cp
 	return rc;
 }
 
-static void __init xen_fill_possible_map(void)
-{
-	int i, rc;
-
-	if (xen_initial_domain())
-		return;
-
-	for (i = 0; i < nr_cpu_ids; i++) {
-		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
-		if (rc >= 0) {
-			num_processors++;
-			set_cpu_possible(i, true);
-		}
-	}
-}
-
-static void __init xen_filter_cpu_maps(void)
+static void __init _get_smp_config(unsigned int early)
 {
 	int i, rc;
 	unsigned int subtract = 0;
 
-	if (!xen_initial_domain())
+	if (early)
 		return;
 
 	num_processors = 0;
@@ -217,7 +201,6 @@ static void __init xen_pv_smp_prepare_bo
 	loadsegment(es, __USER_DS);
 #endif
 
-	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 
 	/*
@@ -503,5 +486,8 @@ static const struct smp_ops xen_smp_ops
 void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
-	xen_fill_possible_map();
+
+	/* Avoid searching for BIOS MP tables */
+	x86_init.mpparse.find_smp_config = x86_init_noop;
+	x86_init.mpparse.get_smp_config = _get_smp_config;
 }