From: Alexey Makhalov
To: linux-kernel@vger.kernel.org, virtualization@lists.linux.dev, bp@alien8.de, hpa@zytor.com, dave.hansen@linux.intel.com, mingo@redhat.com, tglx@linutronix.de
Cc: x86@kernel.org, netdev@vger.kernel.org, richardcochran@gmail.com, linux-input@vger.kernel.org, dmitry.torokhov@gmail.com, zackr@vmware.com, linux-graphics-maintainer@vmware.com, pv-drivers@vmware.com, namit@vmware.com, timothym@vmware.com, akaher@vmware.com, jsipek@vmware.com, dri-devel@lists.freedesktop.org, daniel@ffwll.ch, airlied@gmail.com, tzimmermann@suse.de, mripard@kernel.org, maarten.lankhorst@linux.intel.com, horms@kernel.org, kirill.shutemov@linux.intel.com
Subject: [PATCH v4 2/6] x86/vmware: Introduce VMware hypercall API
Date: Thu, 28 Dec 2023 11:24:17 -0800
Message-Id: <20231228192421.29894-3-alexey.makhalov@broadcom.com>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20231228192421.29894-1-alexey.makhalov@broadcom.com>
References: <20231228192421.29894-1-alexey.makhalov@broadcom.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Alexey Makhalov

Introduce the vmware_hypercall family of functions. It is a common
implementation to be used by the VMware guest code and virtual device
drivers in an architecture-independent manner.

The API consists of the vmware_hypercallX and vmware_hypercall_hb_{out,in}
sets of functions, by analogy with the KVM hypercall API. The
architecture-specific implementation is hidden inside. This will simplify
future enhancements to VMware hypercalls, such as SEV-ES and TDX related
changes, without the need to modify callers in device driver code.

The current implementation extends an idea from commit bac7b4e84323
("x86/vmware: Update platform detection code for VMCALL/VMMCALL
hypercalls") to have a slow but safe path in VMWARE_HYPERCALL early
during boot, when alternatives are not yet applied. This logic was
inherited from VMWARE_CMD in the commit mentioned above. The default
alternative code was size-optimized to reduce excessive NOP alignment
once alternatives are applied. Total default code size is 26 bytes; in
the worst case (a 3-byte alternative) the remaining 23 bytes are aligned
with only 3 long NOP instructions.

Signed-off-by: Alexey Makhalov
Reviewed-by: Nadav Amit
Reviewed-by: Jeff Sipek
---
 arch/x86/include/asm/vmware.h | 289 +++++++++++++++++++++++++++-------
 arch/x86/kernel/cpu/vmware.c  |  35 ++--
 2 files changed, 245 insertions(+), 79 deletions(-)

diff --git a/arch/x86/include/asm/vmware.h b/arch/x86/include/asm/vmware.h
index de2533337611..84a31f579a30 100644
--- a/arch/x86/include/asm/vmware.h
+++ b/arch/x86/include/asm/vmware.h
@@ -7,14 +7,37 @@
 #include
 /*
- * The hypercall definitions differ in the low word of the %edx argument
+ * VMware hypercall ABI.
+ *
+ * - Low bandwidth (LB) hypercalls (I/O port based, vmcall and vmmcall)
+ * have up to 6 input and 6 output arguments passed and returned using
+ * registers: %eax (arg0), %ebx (arg1), %ecx (arg2), %edx (arg3),
+ * %esi (arg4), %edi (arg5).
+ * The following input arguments must be initialized by the caller:
+ * arg0 - VMWARE_HYPERVISOR_MAGIC
+ * arg2 - Hypercall command
+ * arg3 bits [15:0] - Port number, LB and direction flags
+ *
+ * - High bandwidth (HB) hypercalls are I/O port based only. They have
+ * up to 7 input and 7 output arguments passed and returned using
+ * registers: %eax (arg0), %ebx (arg1), %ecx (arg2), %edx (arg3),
+ * %esi (arg4), %edi (arg5), %ebp (arg6).
+ * The following input arguments must be initialized by the caller:
+ * arg0 - VMWARE_HYPERVISOR_MAGIC
+ * arg1 - Hypercall command
+ * arg3 bits [15:0] - Port number, HB and direction flags
+ *
+ * For compatibility purposes, x86_64 systems use only lower 32 bits
+ * for input and output arguments.
+ *
+ * The hypercall definitions differ in the low word of the %edx (arg3)
  * in the following way: the old I/O port based interface uses the port
  * number to distinguish between high- and low bandwidth versions, and
  * uses IN/OUT instructions to define transfer direction.
  *
  * The new vmcall interface instead uses a set of flags to select
  * bandwidth mode and transfer direction. The flags should be loaded
- * into %dx by any user and are automatically replaced by the port
+ * into arg3 by any user and are automatically replaced by the port
  * number if the I/O port method is used.
  */
@@ -37,69 +60,219 @@
 extern u8 vmware_hypercall_mode;
-/* The low bandwidth call. The low word of edx is presumed clear. */
-#define VMWARE_HYPERCALL					\
-	ALTERNATIVE_2("movw $" __stringify(VMWARE_HYPERVISOR_PORT) ", %%dx; " \
-		      "inl (%%dx), %%eax",			\
-		      "vmcall", X86_FEATURE_VMCALL,		\
-		      "vmmcall", X86_FEATURE_VMW_VMMCALL)
-
 /*
- * The high bandwidth out call. The low word of edx is presumed to have the
- * HB and OUT bits set.
+ * The low bandwidth call. The low word of %edx is presumed to have OUT bit
+ * set. The high word of %edx may contain input data from the caller.
  */
-#define VMWARE_HYPERCALL_HB_OUT					\
-	ALTERNATIVE_2("movw $" __stringify(VMWARE_HYPERVISOR_PORT_HB) ", %%dx; " \
-		      "rep outsb",				\
+#define VMWARE_HYPERCALL					\
+	ALTERNATIVE_3("cmpb $"					\
+			__stringify(CPUID_VMWARE_FEATURES_ECX_VMMCALL)	\
+			", %[mode]\n\t"				\
+		      "jg 2f\n\t"				\
+		      "je 1f\n\t"				\
+		      "movw %[port], %%dx\n\t"			\
+		      "inl (%%dx), %%eax\n\t"			\
+		      "jmp 3f\n\t"				\
+		      "1: vmmcall\n\t"				\
+		      "jmp 3f\n\t"				\
+		      "2: vmcall\n\t"				\
+		      "3:\n\t",					\
+		      "movw %[port], %%dx\n\t"			\
+		      "inl (%%dx), %%eax", X86_FEATURE_HYPERVISOR,	\
 		      "vmcall", X86_FEATURE_VMCALL,		\
 		      "vmmcall", X86_FEATURE_VMW_VMMCALL)
+static inline
+unsigned long vmware_hypercall1(unsigned long cmd, unsigned long in1)
+{
+	unsigned long out0;
+
+	asm_inline volatile (VMWARE_HYPERCALL
+		: "=a" (out0)
+		: [port] "i" (VMWARE_HYPERVISOR_PORT),
+		  [mode] "m" (vmware_hypercall_mode),
+		  "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (in1),
+		  "c" (cmd),
+		  "d" (0)
+		: "cc", "memory");
+	return out0;
+}
+
+static inline
+unsigned long vmware_hypercall3(unsigned long cmd, unsigned long in1,
+				uint32_t *out1, uint32_t *out2)
+{
+	unsigned long out0;
+
+	asm_inline volatile (VMWARE_HYPERCALL
+		: "=a" (out0), "=b" (*out1), "=c" (*out2)
+		: [port] "i" (VMWARE_HYPERVISOR_PORT),
+		  [mode] "m" (vmware_hypercall_mode),
+		  "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (in1),
+		  "c" (cmd),
+		  "d" (0)
+		: "cc", "memory");
+	return out0;
+}
+
+static inline
+unsigned long vmware_hypercall4(unsigned long cmd, unsigned long in1,
+				uint32_t *out1, uint32_t *out2,
+				uint32_t *out3)
+{
+	unsigned long out0;
+
+	asm_inline volatile (VMWARE_HYPERCALL
+		: "=a" (out0), "=b" (*out1), "=c" (*out2), "=d" (*out3)
+		: [port] "i" (VMWARE_HYPERVISOR_PORT),
+		  [mode] "m" (vmware_hypercall_mode),
+		  "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (in1),
+		  "c" (cmd),
+		  "d" (0)
+		: "cc", "memory");
+	return out0;
+}
+
+static inline
+unsigned long vmware_hypercall5(unsigned long cmd, unsigned long in1,
+				unsigned long in3, unsigned long in4,
+				unsigned long in5, uint32_t *out2)
+{
+	unsigned long out0;
+
+	asm_inline volatile (VMWARE_HYPERCALL
+		: "=a" (out0), "=c" (*out2)
+		: [port] "i" (VMWARE_HYPERVISOR_PORT),
+		  [mode] "m" (vmware_hypercall_mode),
+		  "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (in1),
+		  "c" (cmd),
+		  "d" (in3),
+		  "S" (in4),
+		  "D" (in5)
+		: "cc", "memory");
+	return out0;
+}
+
+static inline
+unsigned long vmware_hypercall6(unsigned long cmd, unsigned long in1,
+				unsigned long in3, uint32_t *out2,
+				uint32_t *out3, uint32_t *out4,
+				uint32_t *out5)
+{
+	unsigned long out0;
+
+	asm_inline volatile (VMWARE_HYPERCALL
+		: "=a" (out0), "=c" (*out2), "=d" (*out3), "=S" (*out4),
+		  "=D" (*out5)
+		: [port] "i" (VMWARE_HYPERVISOR_PORT),
+		  [mode] "m" (vmware_hypercall_mode),
+		  "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (in1),
+		  "c" (cmd),
+		  "d" (in3)
+		: "cc", "memory");
+	return out0;
+}
+
+static inline
+unsigned long vmware_hypercall7(unsigned long cmd, unsigned long in1,
+				unsigned long in3, unsigned long in4,
+				unsigned long in5, uint32_t *out1,
+				uint32_t *out2, uint32_t *out3)
+{
+	unsigned long out0;
+
+	asm_inline volatile (VMWARE_HYPERCALL
+		: "=a" (out0), "=b" (*out1), "=c" (*out2), "=d" (*out3)
+		: [port] "i" (VMWARE_HYPERVISOR_PORT),
+		  [mode] "m" (vmware_hypercall_mode),
+		  "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (in1),
+		  "c" (cmd),
+		  "d" (in3),
+		  "S" (in4),
+		  "D" (in5)
+		: "cc", "memory");
+	return out0;
+}
+
+
+#ifdef CONFIG_X86_64
+#define VMW_BP_REG "%%rbp"
+#define VMW_BP_CONSTRAINT "r"
+#else
+#define VMW_BP_REG "%%ebp"
+#define VMW_BP_CONSTRAINT "m"
+#endif
+
 /*
- * The high bandwidth in call. The low word of edx is presumed to have the
- * HB bit set.
+ * High bandwidth calls are not supported on encrypted memory guests.
+ * The caller should check cc_platform_has(CC_ATTR_MEM_ENCRYPT) and use
+ * the low bandwidth hypercall if memory encryption is set.
+ * This assumption simplifies the HB hypercall implementation to just an
+ * I/O port based approach without alternative patching.
  */
-#define VMWARE_HYPERCALL_HB_IN					\
-	ALTERNATIVE_2("movw $" __stringify(VMWARE_HYPERVISOR_PORT_HB) ", %%dx; " \
-		      "rep insb",				\
-		      "vmcall", X86_FEATURE_VMCALL,		\
-		      "vmmcall", X86_FEATURE_VMW_VMMCALL)
+static inline
+unsigned long vmware_hypercall_hb_out(unsigned long cmd, unsigned long in2,
+				      unsigned long in3, unsigned long in4,
+				      unsigned long in5, unsigned long in6,
+				      uint32_t *out1)
+{
+	unsigned long out0;
+
+	asm_inline volatile (
+		UNWIND_HINT_SAVE
+		"push " VMW_BP_REG "\n\t"
+		UNWIND_HINT_UNDEFINED
+		"mov %[in6], " VMW_BP_REG "\n\t"
+		"rep outsb\n\t"
+		"pop " VMW_BP_REG "\n\t"
+		UNWIND_HINT_RESTORE
+		: "=a" (out0), "=b" (*out1)
+		: "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (cmd),
+		  "c" (in2),
+		  "d" (in3 | VMWARE_HYPERVISOR_PORT_HB),
+		  "S" (in4),
+		  "D" (in5),
+		  [in6] VMW_BP_CONSTRAINT (in6)
+		: "cc", "memory");
+	return out0;
+}
+
+static inline
+unsigned long vmware_hypercall_hb_in(unsigned long cmd, unsigned long in2,
+				     unsigned long in3, unsigned long in4,
+				     unsigned long in5, unsigned long in6,
+				     uint32_t *out1)
+{
+	unsigned long out0;
-#define VMWARE_PORT(cmd, eax, ebx, ecx, edx)			\
-	__asm__("inl (%%dx), %%eax" :				\
-		"=a"(eax), "=c"(ecx), "=d"(edx), "=b"(ebx) :	\
-		"a"(VMWARE_HYPERVISOR_MAGIC),			\
-		"c"(VMWARE_CMD_##cmd),				\
-		"d"(VMWARE_HYPERVISOR_PORT), "b"(UINT_MAX) :	\
-		"memory")
-
-#define VMWARE_VMCALL(cmd, eax, ebx, ecx, edx)			\
-	__asm__("vmcall" :					\
-		"=a"(eax), "=c"(ecx), "=d"(edx), "=b"(ebx) :	\
-		"a"(VMWARE_HYPERVISOR_MAGIC),			\
-		"c"(VMWARE_CMD_##cmd),				\
-		"d"(0), "b"(UINT_MAX) :				\
-		"memory")
-
-#define VMWARE_VMMCALL(cmd, eax, ebx, ecx, edx)			\
-	__asm__("vmmcall" :					\
-		"=a"(eax), "=c"(ecx), "=d"(edx), "=b"(ebx) :	\
-		"a"(VMWARE_HYPERVISOR_MAGIC),			\
-		"c"(VMWARE_CMD_##cmd),				\
-		"d"(0), "b"(UINT_MAX) :				\
-		"memory")
-
-#define VMWARE_CMD(cmd, eax, ebx, ecx, edx) do {		\
-		switch (vmware_hypercall_mode) {		\
-		case CPUID_VMWARE_FEATURES_ECX_VMCALL:		\
-			VMWARE_VMCALL(cmd, eax, ebx, ecx, edx);	\
-			break;					\
-		case CPUID_VMWARE_FEATURES_ECX_VMMCALL:		\
-			VMWARE_VMMCALL(cmd, eax, ebx, ecx, edx); \
-			break;					\
-		default:					\
-			VMWARE_PORT(cmd, eax, ebx, ecx, edx);	\
-			break;					\
-		}						\
-	} while (0)
+	asm_inline volatile (
+		UNWIND_HINT_SAVE
+		"push " VMW_BP_REG "\n\t"
+		UNWIND_HINT_UNDEFINED
+		"mov %[in6], " VMW_BP_REG "\n\t"
+		"rep insb\n\t"
+		"pop " VMW_BP_REG "\n\t"
+		UNWIND_HINT_RESTORE
+		: "=a" (out0), "=b" (*out1)
+		: "a" (VMWARE_HYPERVISOR_MAGIC),
+		  "b" (cmd),
+		  "c" (in2),
+		  "d" (in3 | VMWARE_HYPERVISOR_PORT_HB),
+		  "S" (in4),
+		  "D" (in5),
+		  [in6] VMW_BP_CONSTRAINT (in6)
+		: "cc", "memory");
+	return out0;
+}
+#undef VMW_BP_REG
+#undef VMW_BP_CONSTRAINT
+#undef VMWARE_HYPERCALL
 
 #endif
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index 4db8e1daa4a1..3aa1adaed18f 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -67,9 +67,10 @@ EXPORT_SYMBOL_GPL(vmware_hypercall_mode);
 
 static inline int __vmware_platform(void)
 {
-	uint32_t eax, ebx, ecx, edx;
-	VMWARE_CMD(GETVERSION, eax, ebx, ecx, edx);
-	return eax != (uint32_t)-1 && ebx == VMWARE_HYPERVISOR_MAGIC;
+	uint32_t eax, ebx, ecx;
+
+	eax = vmware_hypercall3(VMWARE_CMD_GETVERSION, 0, &ebx, &ecx);
+	return eax != UINT_MAX && ebx == VMWARE_HYPERVISOR_MAGIC;
 }
 
 static unsigned long vmware_get_tsc_khz(void)
@@ -121,21 +122,12 @@ static void __init vmware_cyc2ns_setup(void)
 	pr_info("using clock offset of %llu ns\n", d->cyc2ns_offset);
 }
 
-static int vmware_cmd_stealclock(uint32_t arg1, uint32_t arg2)
+static int vmware_cmd_stealclock(uint32_t addr_hi, uint32_t addr_lo)
 {
-	uint32_t result, info;
-
-	asm volatile (VMWARE_HYPERCALL :
-		"=a"(result),
-		"=c"(info) :
-		"a"(VMWARE_HYPERVISOR_MAGIC),
-		"b"(0),
-		"c"(VMWARE_CMD_STEALCLOCK),
-		"d"(0),
-		"S"(arg1),
-		"D"(arg2) :
-		"memory");
-	return result;
+	uint32_t info;
+
+	return vmware_hypercall5(VMWARE_CMD_STEALCLOCK, 0, 0, addr_hi, addr_lo,
+				 &info);
 }
 
 static bool stealclock_enable(phys_addr_t pa)
@@ -344,10 +336,10 @@ static void __init vmware_set_capabilities(void)
 
 static void __init vmware_platform_setup(void)
 {
-	uint32_t eax, ebx, ecx, edx;
+	uint32_t eax, ebx, ecx;
 	uint64_t lpj, tsc_khz;
 
-	VMWARE_CMD(GETHZ, eax, ebx, ecx, edx);
+	eax = vmware_hypercall3(VMWARE_CMD_GETHZ, UINT_MAX, &ebx, &ecx);
 
 	if (ebx != UINT_MAX) {
 		lpj = tsc_khz = eax | (((uint64_t)ebx) << 32);
@@ -429,8 +421,9 @@ static uint32_t __init vmware_platform(void)
 /* Checks if hypervisor supports x2apic without VT-D interrupt remapping. */
 static bool __init vmware_legacy_x2apic_available(void)
 {
-	uint32_t eax, ebx, ecx, edx;
-	VMWARE_CMD(GETVCPU_INFO, eax, ebx, ecx, edx);
+	uint32_t eax;
+
+	eax = vmware_hypercall1(VMWARE_CMD_GETVCPU_INFO, 0);
 	return !(eax & BIT(VCPU_RESERVED)) &&
 		(eax & BIT(VCPU_LEGACY_X2APIC));
 }
-- 
2.39.0
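
Usage sketch (illustrative only, not part of the patch): the snippet below mirrors the __vmware_platform() conversion above to show how a caller is expected to use the new low bandwidth API. The caller name example_vmware_check() is made up, and VMWARE_CMD_GETVERSION is the command constant used by vmware.c in this series, reused here purely for illustration.

/*
 * Minimal sketch, assuming <asm/vmware.h> exposes vmware_hypercall3() as
 * introduced by this patch. Not a definitive in-tree user.
 */
#include <linux/kernel.h>
#include <asm/vmware.h>

static bool example_vmware_check(void)
{
	uint32_t ebx, ecx;
	unsigned long eax;

	/* Low bandwidth call: magic, command and port/flag setup happen inside the helper. */
	eax = vmware_hypercall3(VMWARE_CMD_GETVERSION, 0, &ebx, &ecx);

	/* As in __vmware_platform(): all-ones in arg0 means failure, arg1 echoes the magic. */
	return eax != UINT_MAX && ebx == VMWARE_HYPERVISOR_MAGIC;
}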