Hello,
The main change in this version was to rebase on top of the <asm/mem_encrypt.h>
cleanup series I just posted:
https://lore.kernel.org/linuxppc-dev/[email protected]/
In addition to the patches above, this patch series applies on top of v4 of
Claudio Carvalho's "kvmppc: Paravirtualize KVM to support ultravisor"
series:
https://lore.kernel.org/linuxppc-dev/[email protected]/
I only actually need the following two patches from his series:
[PATCH v4 1/8] KVM: PPC: Ultravisor: Introduce the MSR_S bit
[PATCH v4 3/8] KVM: PPC: Ultravisor: Add generic ultravisor call handler
Everything is available in branch ultravisor-secure-vm at this repo:
https://github.com/bauermann/linux.git
Original cover letter below, and changelog at the bottom:
This series enables Secure Virtual Machines (SVMs) on powerpc. SVMs use the
Protected Execution Facility (PEF) and request to be migrated to secure
memory during prom_init(), so by default all of their memory is inaccessible
to the hypervisor. There is an Ultravisor call that the VM can use to
request certain pages to be made accessible to (or shared with) the
hypervisor.
The objective of these patches is to have the guest perform this request
for buffers that need to be accessed by the hypervisor such as the LPPACAs,
the SWIOTLB memory and the Debug Trace Log.
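For a rough picture of that pattern, here is a minimal sketch (illustrative only; it
assumes the is_secure_guest() and uv_share_page() helpers introduced later in this
series, and the buffer name is made up):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/svm.h>
#include <asm/ultravisor.h>

/* Hypothetical buffer that the hypervisor must be able to access. */
static void *hv_visible_buf;

static int make_buffer_visible_to_hv(void)
{
	hv_visible_buf = (void *)__get_free_page(GFP_KERNEL);
	if (!hv_visible_buf)
		return -ENOMEM;

	/* Sharing is only needed when running as a secure guest. */
	if (is_secure_guest())
		uv_share_page(PHYS_PFN(__pa(hv_visible_buf)), 1);

	return 0;
}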
Patch 2 ("powerpc: Add support for adding an ESM blob to the zImage
wrapper") is posted as RFC because we are still finalizing the details on
how the ESM blob will be passed along with the kernel. All other patches are
(hopefully) in upstreamable shape and don't depend on this patch.
Unfortunately this series still doesn't enable the use of virtio devices in
the secure guest. This support depends on a discussion that is currently
ongoing with the virtio community:
https://lore.kernel.org/linuxppc-dev/[email protected]/
I was able to test it using Claudio's patches in the host kernel, booting
normally using an initramfs for the root filesystem.
This is the command used to start up the guest with QEMU 4.0:
qemu-system-ppc64 \
-nodefaults \
-cpu host \
-machine pseries,accel=kvm,kvm-type=HV,cap-htm=off,cap-cfpc=broken,cap-sbbc=broken,cap-ibs=broken \
-display none \
-serial mon:stdio \
-smp 1 \
-m 4G \
-kernel /root/bauermann/vmlinux \
-initrd /root/bauermann/fs_small.cpio \
-append 'debug'
Changelog
Since v1:
- Patch "powerpc/pseries: Introduce option to build secure virtual machines"
- Dropped redundant "default n" from CONFIG_PPC_SVM. Suggested by Christoph
Hellwig.
- Patch "powerpc: Add support for adding an ESM blob to the zImage wrapper"
- Renamed prom_rtas_os_term_hcall() to prom_rtas_hcall(). Suggested by Alexey
Kardashevskiy.
- In prom_rtas_hcall(), changed prom_printf() calls to prom_debug(), and
use H_RTAS constant instead of raw value.
- Changed enter_secure_mode() to new ABI passing ucall number in r3.
Also changed it to accept kbase argument instead of ESM blob address.
- Changed setup_secure_guest() to only make the ESM ultracall if svm=1 was
passed on the kernel command line.
- Patch "powerpc/pseries/svm: Unshare all pages before kexecing a new kernel"
- New patch from Ram Pai.
- Patch "powerpc/pseries/svm: Force SWIOTLB for secure guests"
- No need to define sme_me_mask, sme_active() and sev_active() anymore.
- Add definitions for mem_encrypt_active() and force_dma_unencrypted().
- Select ARCH_HAS_FORCE_DMA_UNENCRYPTED in CONFIG_PPC_SVM.
Since the RFC from August:
- Patch "powerpc/pseries: Introduce option to build secure virtual machines"
- New patch.
- Patch "powerpc: Add support for adding an ESM blob to the zImage wrapper"
- Patch from Benjamin Herrenschmidt, first posted here:
https://lore.kernel.org/linuxppc-dev/[email protected]/
- Made minor adjustments to some comments. Code is unchanged.
- Patch "powerpc/prom_init: Add the ESM call to prom_init"
- New patch from Ram Pai and Michael Anderson.
- Patch "powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE"
- New patch from Ram Pai.
- Patch "powerpc/pseries: Add and use LPPACA_SIZE constant"
- Moved LPPACA_SIZE macro inside the CONFIG_PPC_PSERIES #ifdef.
- Put sizeof() operand left of comparison operator in BUILD_BUG_ON()
macro to appease a checkpatch warning.
- Patch "powerpc/pseries/svm: Use shared memory for LPPACA structures"
- Moved definition of is_secure_guest() helper to this patch.
- Changed shared_lppaca and shared_lppaca_size from globals to static
variables inside alloc_shared_lppaca().
- Changed shared_lppaca to hold virtual address instead of physical
address.
- Patch "powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL)"
- Add get_dtl_cache_ctor() macro. Suggested by Ram Pai.
- Patch "powerpc/pseries/svm: Export guest SVM status to user space via sysfs"
- New patch from Ryan Grimm.
- Patch "powerpc/pseries/svm: Disable doorbells in SVM guests"
- New patch from Sukadev Bhattiprolu.
- Patch "powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests"
- New patch.
- Patch "powerpc/pseries/svm: Force SWIOTLB for secure guests"
- New patch with code that was previously in other patches.
- Patch "powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs"
- New patch from Ryan Grimm.
- Patch "powerpc/pseries/svm: Detect Secure Virtual Machine (SVM) platform"
- Dropped this patch by moving its code to other patches.
- Patch "powerpc/svm: Select CONFIG_DMA_DIRECT_OPS and CONFIG_SWIOTLB"
- No need to select CONFIG_DMA_DIRECT_OPS anymore. The CONFIG_SWIOTLB
change was moved to another patch and this patch was dropped.
- Patch "powerpc/pseries/svm: Add memory conversion (shared/secure) helper functions"
- Dropped patch since the helper functions were unnecessary wrappers
around uv_share_page() and uv_unshare_page().
- Patch "powerpc/svm: Convert SWIOTLB buffers to shared memory"
- Squashed into patch "powerpc/pseries/svm: Force SWIOTLB for secure
guests"
- Patch "powerpc/svm: Don't release SWIOTLB buffers on secure guests"
- Squashed into patch "powerpc/pseries/svm: Force SWIOTLB for secure
guests"
- Patch "powerpc/svm: Use SWIOTLB DMA API for all virtio devices"
- Dropped patch. Enablement of virtio will use a different approach.
- Patch "powerpc/svm: Force the use of bounce buffers"
- Squashed into patch "powerpc/pseries/svm: Force SWIOTLB for secure
guests"
- Added comment explaining why it's necessary to force use of SWIOTLB.
Suggested by Christoph Hellwig.
- Patch "powerpc/svm: Increase SWIOTLB buffer size"
- Dropped patch.
Anshuman Khandual (3):
powerpc/pseries/svm: Use shared memory for LPPACA structures
powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL)
powerpc/pseries/svm: Force SWIOTLB for secure guests
Benjamin Herrenschmidt (1):
powerpc: Add support for adding an ESM blob to the zImage wrapper
Ram Pai (3):
powerpc/prom_init: Add the ESM call to prom_init
powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE
powerpc/pseries/svm: Unshare all pages before kexecing a new kernel
Ryan Grimm (2):
powerpc/pseries/svm: Export guest SVM status to user space via sysfs
powerpc/configs: Enable secure guest support in pseries and ppc64
defconfigs
Sukadev Bhattiprolu (1):
powerpc/pseries/svm: Disable doorbells in SVM guests
Thiago Jung Bauermann (3):
powerpc/pseries: Introduce option to build secure virtual machines
powerpc/pseries: Add and use LPPACA_SIZE constant
powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests
.../admin-guide/kernel-parameters.txt | 5 +
arch/powerpc/boot/main.c | 41 ++++++++
arch/powerpc/boot/ops.h | 2 +
arch/powerpc/boot/wrapper | 24 ++++-
arch/powerpc/boot/zImage.lds.S | 8 ++
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
arch/powerpc/include/asm/mem_encrypt.h | 26 +++++
arch/powerpc/include/asm/svm.h | 31 ++++++
arch/powerpc/include/asm/ultravisor-api.h | 4 +
arch/powerpc/include/asm/ultravisor.h | 23 ++++-
arch/powerpc/kernel/Makefile | 4 +-
arch/powerpc/kernel/machine_kexec_64.c | 8 ++
arch/powerpc/kernel/paca.c | 52 +++++++++-
arch/powerpc/kernel/prom_init.c | 99 +++++++++++++++++++
arch/powerpc/kernel/sysfs.c | 29 ++++++
arch/powerpc/platforms/pseries/Kconfig | 14 +++
arch/powerpc/platforms/pseries/Makefile | 1 +
arch/powerpc/platforms/pseries/iommu.c | 6 +-
arch/powerpc/platforms/pseries/setup.c | 5 +-
arch/powerpc/platforms/pseries/smp.c | 3 +-
arch/powerpc/platforms/pseries/svm.c | 85 ++++++++++++++++
22 files changed, 459 insertions(+), 13 deletions(-)
create mode 100644 arch/powerpc/include/asm/mem_encrypt.h
create mode 100644 arch/powerpc/include/asm/svm.h
create mode 100644 arch/powerpc/platforms/pseries/svm.c
Introduce CONFIG_PPC_SVM to control support for secure guests, and include the
Ultravisor-related helpers when it is selected.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/include/asm/ultravisor.h | 2 +-
arch/powerpc/kernel/Makefile | 4 +++-
arch/powerpc/platforms/pseries/Kconfig | 11 +++++++++++
3 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index a5e4516c8ddb..f5dc5af739b8 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -28,7 +28,7 @@ extern int early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
* This call supports up to 6 arguments and 4 return arguments. Use
* UCALL_BUFSIZE to size the return argument buffer.
*/
-#if defined(CONFIG_PPC_POWERNV)
+#if defined(CONFIG_PPC_POWERNV) || defined(CONFIG_PPC_SVM)
long ucall(unsigned long opcode, unsigned long *retbuf, ...);
#else
static long ucall(unsigned long opcode, unsigned long *retbuf, ...)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 929f79d3e6a9..ea671f03eba2 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -155,7 +155,9 @@ endif
obj-$(CONFIG_EPAPR_PARAVIRT) += epapr_paravirt.o epapr_hcalls.o
obj-$(CONFIG_KVM_GUEST) += kvm.o kvm_emul.o
-obj-$(CONFIG_PPC_POWERNV) += ultravisor.o ucall.o
+ifneq ($(CONFIG_PPC_POWERNV)$(CONFIG_PPC_SVM),)
+obj-y += ultravisor.o ucall.o
+endif
# Disable GCOV, KCOV & sanitizers in odd or sensitive code
GCOV_PROFILE_prom_init.o := n
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index f7b484f55553..d09deb05bb66 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -145,3 +145,14 @@ config PAPR_SCM
tristate "Support for the PAPR Storage Class Memory interface"
help
Enable access to hypervisor provided storage class memory.
+
+config PPC_SVM
+ bool "Secure virtual machine (SVM) support for POWER"
+ depends on PPC_PSERIES
+ help
+ There are certain POWER platforms which support secure guests using
+ the Protected Execution Facility, with the help of an Ultravisor
+ executing below the hypervisor layer. This enables support for
+ those guests.
+
+ If unsure, say "N".
From: Benjamin Herrenschmidt <[email protected]>
For secure VMs, the signing tool will create a ticket called the "ESM blob"
for the Enter Secure Mode ultravisor call with the signatures of the kernel
and initrd among other things.
This adds support to the wrapper script for adding that blob via the "-e"
option to the zImage.pseries.
It also adds code to the zImage wrapper itself to retrieve and, if necessary,
relocate the blob, and pass its address to Linux via the device-tree, to be
consumed later by prom_init.
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
[ bauerman: Minor adjustments to some comments. ]
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/boot/main.c | 41 ++++++++++++++++++++++++++++++++++
arch/powerpc/boot/ops.h | 2 ++
arch/powerpc/boot/wrapper | 24 +++++++++++++++++---
arch/powerpc/boot/zImage.lds.S | 8 +++++++
4 files changed, 72 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/boot/main.c b/arch/powerpc/boot/main.c
index 78aaf4ffd7ab..ca612efd3e81 100644
--- a/arch/powerpc/boot/main.c
+++ b/arch/powerpc/boot/main.c
@@ -150,6 +150,46 @@ static struct addr_range prep_initrd(struct addr_range vmlinux, void *chosen,
return (struct addr_range){(void *)initrd_addr, initrd_size};
}
+#ifdef __powerpc64__
+static void prep_esm_blob(struct addr_range vmlinux, void *chosen)
+{
+ unsigned long esm_blob_addr, esm_blob_size;
+
+ /* Do we have an ESM (Enter Secure Mode) blob? */
+ if (_esm_blob_end <= _esm_blob_start)
+ return;
+
+ printf("Attached ESM blob at 0x%p-0x%p\n\r",
+ _esm_blob_start, _esm_blob_end);
+ esm_blob_addr = (unsigned long)_esm_blob_start;
+ esm_blob_size = _esm_blob_end - _esm_blob_start;
+
+ /*
+ * If the ESM blob is too low it will be clobbered when the
+ * kernel relocates to its final location. In this case,
+ * allocate a safer place and move it.
+ */
+ if (esm_blob_addr < vmlinux.size) {
+ void *old_addr = (void *)esm_blob_addr;
+
+ printf("Allocating 0x%lx bytes for esm_blob ...\n\r",
+ esm_blob_size);
+ esm_blob_addr = (unsigned long)malloc(esm_blob_size);
+ if (!esm_blob_addr)
+ fatal("Can't allocate memory for ESM blob !\n\r");
+ printf("Relocating ESM blob 0x%lx <- 0x%p (0x%lx bytes)\n\r",
+ esm_blob_addr, old_addr, esm_blob_size);
+ memmove((void *)esm_blob_addr, old_addr, esm_blob_size);
+ }
+
+ /* Tell the kernel ESM blob address via device tree. */
+ setprop_val(chosen, "linux,esm-blob-start", (u32)(esm_blob_addr));
+ setprop_val(chosen, "linux,esm-blob-end", (u32)(esm_blob_addr + esm_blob_size));
+}
+#else
+static inline void prep_esm_blob(struct addr_range vmlinux, void *chosen) { }
+#endif
+
/* A buffer that may be edited by tools operating on a zImage binary so as to
* edit the command line passed to vmlinux (by setting /chosen/bootargs).
* The buffer is put in it's own section so that tools may locate it easier.
@@ -218,6 +258,7 @@ void start(void)
vmlinux = prep_kernel();
initrd = prep_initrd(vmlinux, chosen,
loader_info.initrd_addr, loader_info.initrd_size);
+ prep_esm_blob(vmlinux, chosen);
prep_cmdline(chosen);
printf("Finalizing device tree...");
diff --git a/arch/powerpc/boot/ops.h b/arch/powerpc/boot/ops.h
index cd043726ed88..e0606766480f 100644
--- a/arch/powerpc/boot/ops.h
+++ b/arch/powerpc/boot/ops.h
@@ -251,6 +251,8 @@ extern char _initrd_start[];
extern char _initrd_end[];
extern char _dtb_start[];
extern char _dtb_end[];
+extern char _esm_blob_start[];
+extern char _esm_blob_end[];
static inline __attribute__((const))
int __ilog2_u32(u32 n)
diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 4ed18d63d892..ddb62b5b5b7a 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -14,6 +14,7 @@
# -i initrd specify initrd file
# -d devtree specify device-tree blob
# -s tree.dts specify device-tree source file (needs dtc installed)
+# -e esm_blob specify ESM blob for secure images
# -c cache $kernel.strip.gz (use if present & newer, else make)
# -C prefix specify command prefix for cross-building tools
# (strip, objcopy, ld)
@@ -38,6 +39,7 @@ platform=of
initrd=
dtb=
dts=
+esm_blob=
cacheit=
binary=
compression=.gz
@@ -61,9 +63,9 @@ tmpdir=.
usage() {
echo 'Usage: wrapper [-o output] [-p platform] [-i initrd]' >&2
- echo ' [-d devtree] [-s tree.dts] [-c] [-C cross-prefix]' >&2
- echo ' [-D datadir] [-W workingdir] [-Z (gz|xz|none)]' >&2
- echo ' [--no-compression] [vmlinux]' >&2
+ echo ' [-d devtree] [-s tree.dts] [-e esm_blob]' >&2
+ echo ' [-c] [-C cross-prefix] [-D datadir] [-W workingdir]' >&2
+ echo ' [-Z (gz|xz|none)] [--no-compression] [vmlinux]' >&2
exit 1
}
@@ -106,6 +108,11 @@ while [ "$#" -gt 0 ]; do
[ "$#" -gt 0 ] || usage
dtb="$1"
;;
+ -e)
+ shift
+ [ "$#" -gt 0 ] || usage
+ esm_blob="$1"
+ ;;
-s)
shift
[ "$#" -gt 0 ] || usage
@@ -219,9 +226,16 @@ objflags=-S
tmp=$tmpdir/zImage.$$.o
ksection=.kernel:vmlinux.strip
isection=.kernel:initrd
+esection=.kernel:esm_blob
link_address='0x400000'
make_space=y
+
+if [ -n "$esm_blob" -a "$platform" != "pseries" ]; then
+	echo "ESM blob not supported on non-pseries platforms" >&2
+ exit 1
+fi
+
case "$platform" in
of)
platformo="$object/of.o $object/epapr.o"
@@ -478,6 +492,10 @@ if [ -n "$dtb" ]; then
fi
fi
+if [ -n "$esm_blob" ]; then
+ addsec $tmp "$esm_blob" $esection
+fi
+
if [ "$platform" != "miboot" ]; then
if [ -n "$link_address" ] ; then
text_start="-Ttext $link_address"
diff --git a/arch/powerpc/boot/zImage.lds.S b/arch/powerpc/boot/zImage.lds.S
index 4ac1e36edfe7..a21f3a76e06f 100644
--- a/arch/powerpc/boot/zImage.lds.S
+++ b/arch/powerpc/boot/zImage.lds.S
@@ -68,6 +68,14 @@ SECTIONS
_initrd_end = .;
}
+ . = ALIGN(4096);
+ .kernel:esm_blob :
+ {
+ _esm_blob_start = .;
+ *(.kernel:esm_blob)
+ _esm_blob_end = .;
+ }
+
#ifdef CONFIG_PPC64_BOOT_WRAPPER
. = ALIGN(256);
.got :
From: Ram Pai <[email protected]>
Make the Enter-Secure-Mode (ESM) ultravisor call to switch the VM to secure
mode. Add "svm=" command line option to turn on switching to secure mode.
Signed-off-by: Ram Pai <[email protected]>
[ andmike: Generate an RTAS os-term hcall when the ESM ucall fails. ]
Signed-off-by: Michael Anderson <[email protected]>
[ bauerman: Cleaned up the code a bit. ]
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
.../admin-guide/kernel-parameters.txt | 5 +
arch/powerpc/include/asm/ultravisor-api.h | 1 +
arch/powerpc/kernel/prom_init.c | 99 +++++++++++++++++++
3 files changed, 105 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 7b15abf7db21..c611891b5992 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4585,6 +4585,11 @@
/sys/power/pm_test). Only available when CONFIG_PM_DEBUG
is set. Default value is 5.
+ svm= [PPC]
+ Format: { on | off | y | n | 1 | 0 }
+ This parameter controls use of the Protected
+ Execution Facility on pSeries.
+
swapaccount=[0|1]
[KNL] Enable accounting of swap in memory resource
controller if no parameter or 1 is given or disable
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index c8180427fa01..fe9a0d8d7673 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -19,6 +19,7 @@
/* opcodes */
#define UV_WRITE_PATE 0xF104
+#define UV_ESM 0xF110
#define UV_RETURN 0xF11C
#define UV_REGISTER_MEM_SLOT 0xF120
#define UV_UNREGISTER_MEM_SLOT 0xF124
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index a3fb90bb5a39..6389a992451b 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -44,6 +44,7 @@
#include <asm/sections.h>
#include <asm/machdep.h>
#include <asm/asm-prototypes.h>
+#include <asm/ultravisor-api.h>
#include <linux/linux_logo.h>
@@ -175,6 +176,10 @@ static bool __prombss prom_radix_disable;
static bool __prombss prom_xive_disable;
#endif
+#ifdef CONFIG_PPC_SVM
+static bool __prombss prom_svm_enable;
+#endif
+
struct platform_support {
bool hash_mmu;
bool radix_mmu;
@@ -816,6 +821,17 @@ static void __init early_cmdline_parse(void)
prom_debug("XIVE disabled from cmdline\n");
}
#endif /* CONFIG_PPC_PSERIES */
+
+#ifdef CONFIG_PPC_SVM
+ opt = prom_strstr(prom_cmd_line, "svm=");
+ if (opt) {
+ bool val;
+
+ opt += sizeof("svm=") - 1;
+ if (!prom_strtobool(opt, &val))
+ prom_svm_enable = val;
+ }
+#endif /* CONFIG_PPC_SVM */
}
#ifdef CONFIG_PPC_PSERIES
@@ -1716,6 +1732,43 @@ static void __init prom_close_stdin(void)
}
}
+#ifdef CONFIG_PPC_SVM
+static int prom_rtas_hcall(uint64_t args)
+{
+ register uint64_t arg1 asm("r3") = H_RTAS;
+ register uint64_t arg2 asm("r4") = args;
+
+ asm volatile("sc 1\n" : "=r" (arg1) :
+ "r" (arg1),
+ "r" (arg2) :);
+ return arg1;
+}
+
+static struct rtas_args __prombss os_term_args;
+
+static void __init prom_rtas_os_term(char *str)
+{
+ phandle rtas_node;
+ __be32 val;
+ u32 token;
+
+ prom_debug("%s: start...\n", __func__);
+ rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
+ prom_debug("rtas_node: %x\n", rtas_node);
+ if (!PHANDLE_VALID(rtas_node))
+ return;
+
+ val = 0;
+ prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
+ token = be32_to_cpu(val);
+ prom_debug("ibm,os-term: %x\n", token);
+ if (token == 0)
+ prom_panic("Could not get token for ibm,os-term\n");
+ os_term_args.token = cpu_to_be32(token);
+ prom_rtas_hcall((uint64_t)&os_term_args);
+}
+#endif /* CONFIG_PPC_SVM */
+
/*
* Allocate room for and instantiate RTAS
*/
@@ -3172,6 +3225,49 @@ static void unreloc_toc(void)
#endif
#endif
+#ifdef CONFIG_PPC_SVM
+/*
+ * Perform the Enter Secure Mode ultracall.
+ */
+static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
+{
+ register uint64_t func asm("r3") = UV_ESM;
+ register uint64_t arg1 asm("r4") = (uint64_t)kbase;
+ register uint64_t arg2 asm("r5") = (uint64_t)fdt;
+
+ asm volatile("sc 2\n"
+ : "=r"(func)
+ : "0"(func), "r"(arg1), "r"(arg2)
+ :);
+
+ return (int)func;
+}
+
+/*
+ * Call the Ultravisor to transfer us to secure memory if we have an ESM blob.
+ */
+static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
+{
+ int ret;
+
+ if (!prom_svm_enable)
+ return;
+
+ /* Switch to secure mode. */
+ prom_printf("Switching to secure mode.\n");
+
+ ret = enter_secure_mode(kbase, fdt);
+ if (ret != U_SUCCESS) {
+ prom_printf("Returned %d from switching to secure mode.\n", ret);
+ prom_rtas_os_term("Switch to secure mode failed.\n");
+ }
+}
+#else
+static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
+{
+}
+#endif /* CONFIG_PPC_SVM */
+
/*
* We enter here early on, when the Open Firmware prom is still
* handling exceptions and the MMU hash table for us.
@@ -3370,6 +3466,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
unreloc_toc();
#endif
+ /* Move to secure memory if we're supposed to be secure guests. */
+ setup_secure_guest(kbase, hdr);
+
__start(hdr, kbase, 0, 0, 0, 0, 0);
return 0;
From: Ram Pai <[email protected]>
These functions are used when the guest wants to grant the hypervisor
access to certain pages.
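As an illustration only (the real callers appear in later patches of this series;
the wrapper names below are hypothetical), a caller might use them like this for a
physically contiguous, page-aligned buffer:

#include <linux/mm.h>
#include <asm/ultravisor.h>

/* Hypothetical wrappers around the new ucalls for a page-aligned buffer. */
static void share_buffer_with_hv(void *buf, size_t size)
{
	uv_share_page(PHYS_PFN(__pa(buf)), PAGE_ALIGN(size) >> PAGE_SHIFT);
}

static void unshare_buffer_from_hv(void *buf, size_t size)
{
	uv_unshare_page(PHYS_PFN(__pa(buf)), PAGE_ALIGN(size) >> PAGE_SHIFT);
}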
Signed-off-by: Ram Pai <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/include/asm/ultravisor-api.h | 2 ++
arch/powerpc/include/asm/ultravisor.h | 15 +++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index fe9a0d8d7673..c7513bbadf57 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -25,6 +25,8 @@
#define UV_UNREGISTER_MEM_SLOT 0xF124
#define UV_PAGE_IN 0xF128
#define UV_PAGE_OUT 0xF12C
+#define UV_SHARE_PAGE 0xF130
+#define UV_UNSHARE_PAGE 0xF134
#define UV_PAGE_INVAL 0xF138
#define UV_SVM_TERMINATE 0xF13C
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index f5dc5af739b8..f7418b663a0e 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -91,6 +91,21 @@ static inline int uv_svm_terminate(u64 lpid)
return ucall(UV_SVM_TERMINATE, retbuf, lpid);
}
+
+static inline int uv_share_page(u64 pfn, u64 npages)
+{
+ unsigned long retbuf[UCALL_BUFSIZE];
+
+ return ucall(UV_SHARE_PAGE, retbuf, pfn, npages);
+}
+
+static inline int uv_unshare_page(u64 pfn, u64 npages)
+{
+ unsigned long retbuf[UCALL_BUFSIZE];
+
+ return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_ULTRAVISOR_H */
From: Anshuman Khandual <[email protected]>
LPPACA structures need to be shared with the host. Hence they need to be in
shared memory. Instead of allocating individual chunks of memory for a
given structure from memblock, a contiguous chunk of memory is allocated
and then converted into shared memory. Subsequent allocation requests will
come from the contiguous chunk, which will always be shared memory for all
structures.
While we are able to use a kmem_cache constructor for the Debug Trace Log,
LPPACAs are allocated very early in the boot process (before SLUB is
available) so we need to use a simpler scheme here.
Introduce helper is_secure_guest() which uses the S bit of the MSR to tell
whether we're running as a secure guest.
Signed-off-by: Anshuman Khandual <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/include/asm/svm.h | 26 ++++++++++++++++++++
arch/powerpc/kernel/paca.c | 43 +++++++++++++++++++++++++++++++++-
2 files changed, 68 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
new file mode 100644
index 000000000000..fef3740f46a6
--- /dev/null
+++ b/arch/powerpc/include/asm/svm.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * SVM helper functions
+ *
+ * Copyright 2019 Anshuman Khandual, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SVM_H
+#define _ASM_POWERPC_SVM_H
+
+#ifdef CONFIG_PPC_SVM
+
+static inline bool is_secure_guest(void)
+{
+ return mfmsr() & MSR_S;
+}
+
+#else /* CONFIG_PPC_SVM */
+
+static inline bool is_secure_guest(void)
+{
+ return false;
+}
+
+#endif /* CONFIG_PPC_SVM */
+#endif /* _ASM_POWERPC_SVM_H */
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 854105db5cff..a9622f4b45bb 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -18,6 +18,8 @@
#include <asm/sections.h>
#include <asm/pgtable.h>
#include <asm/kexec.h>
+#include <asm/svm.h>
+#include <asm/ultravisor.h>
#include "setup.h"
@@ -58,6 +60,41 @@ static void *__init alloc_paca_data(unsigned long size, unsigned long align,
#define LPPACA_SIZE 0x400
+static void *__init alloc_shared_lppaca(unsigned long size, unsigned long align,
+ unsigned long limit, int cpu)
+{
+ size_t shared_lppaca_total_size = PAGE_ALIGN(nr_cpu_ids * LPPACA_SIZE);
+ static unsigned long shared_lppaca_size;
+ static void *shared_lppaca;
+ void *ptr;
+
+ if (!shared_lppaca) {
+ memblock_set_bottom_up(true);
+
+ shared_lppaca =
+ memblock_alloc_try_nid(shared_lppaca_total_size,
+ PAGE_SIZE, MEMBLOCK_LOW_LIMIT,
+ limit, NUMA_NO_NODE);
+ if (!shared_lppaca)
+ panic("cannot allocate shared data");
+
+ memblock_set_bottom_up(false);
+ uv_share_page(PHYS_PFN(__pa(shared_lppaca)),
+ shared_lppaca_total_size >> PAGE_SHIFT);
+ }
+
+ ptr = shared_lppaca + shared_lppaca_size;
+ shared_lppaca_size += size;
+
+ /*
+ * This is very early in boot, so no harm done if the kernel crashes at
+ * this point.
+ */
+ BUG_ON(shared_lppaca_size >= shared_lppaca_total_size);
+
+ return ptr;
+}
+
/*
* See asm/lppaca.h for more detail.
*
@@ -87,7 +124,11 @@ static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
if (early_cpu_has_feature(CPU_FTR_HVMODE))
return NULL;
- lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
+ if (is_secure_guest())
+ lp = alloc_shared_lppaca(LPPACA_SIZE, 0x400, limit, cpu);
+ else
+ lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
+
init_lppaca(lp);
return lp;
From: Anshuman Khandual <[email protected]>
Secure guests need to share the DTL buffers with the hypervisor. To that
end, use a kmem_cache constructor which converts the underlying buddy
allocated SLUB cache pages into shared memory.
Signed-off-by: Anshuman Khandual <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/include/asm/svm.h | 5 ++++
arch/powerpc/platforms/pseries/Makefile | 1 +
arch/powerpc/platforms/pseries/setup.c | 5 +++-
arch/powerpc/platforms/pseries/svm.c | 40 +++++++++++++++++++++++++
4 files changed, 50 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
index fef3740f46a6..f253116c31fc 100644
--- a/arch/powerpc/include/asm/svm.h
+++ b/arch/powerpc/include/asm/svm.h
@@ -15,6 +15,9 @@ static inline bool is_secure_guest(void)
return mfmsr() & MSR_S;
}
+void dtl_cache_ctor(void *addr);
+#define get_dtl_cache_ctor() (is_secure_guest() ? dtl_cache_ctor : NULL)
+
#else /* CONFIG_PPC_SVM */
static inline bool is_secure_guest(void)
@@ -22,5 +25,7 @@ static inline bool is_secure_guest(void)
return false;
}
+#define get_dtl_cache_ctor() NULL
+
#endif /* CONFIG_PPC_SVM */
#endif /* _ASM_POWERPC_SVM_H */
diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
index ab3d59aeacca..a420ef4c9d8e 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -26,6 +26,7 @@ obj-$(CONFIG_IBMVIO) += vio.o
obj-$(CONFIG_IBMEBUS) += ibmebus.o
obj-$(CONFIG_PAPR_SCM) += papr_scm.o
obj-$(CONFIG_PPC_SPLPAR) += vphn.o
+obj-$(CONFIG_PPC_SVM) += svm.o
ifdef CONFIG_PPC_PSERIES
obj-$(CONFIG_SUSPEND) += suspend.o
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index cb418d2bb1ac..3009b5bd11d2 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -73,6 +73,7 @@
#include <asm/security_features.h>
#include <asm/asm-const.h>
#include <asm/swiotlb.h>
+#include <asm/svm.h>
#include "pseries.h"
#include "../../../../drivers/pci/pci.h"
@@ -301,8 +302,10 @@ static inline int alloc_dispatch_logs(void)
static int alloc_dispatch_log_kmem_cache(void)
{
+ void (*ctor)(void *) = get_dtl_cache_ctor();
+
dtl_cache = kmem_cache_create("dtl", DISPATCH_LOG_BYTES,
- DISPATCH_LOG_BYTES, 0, NULL);
+ DISPATCH_LOG_BYTES, 0, ctor);
if (!dtl_cache) {
pr_warn("Failed to create dispatch trace log buffer cache\n");
pr_warn("Stolen time statistics will be unreliable\n");
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
new file mode 100644
index 000000000000..c508196f7c83
--- /dev/null
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Secure VM platform
+ *
+ * Copyright 2019 IBM Corporation
+ * Author: Anshuman Khandual <[email protected]>
+ */
+
+#include <linux/mm.h>
+#include <asm/ultravisor.h>
+
+/* There's one dispatch log per CPU. */
+#define NR_DTL_PAGE (DISPATCH_LOG_BYTES * CONFIG_NR_CPUS / PAGE_SIZE)
+
+static struct page *dtl_page_store[NR_DTL_PAGE];
+static long dtl_nr_pages;
+
+static bool is_dtl_page_shared(struct page *page)
+{
+ long i;
+
+ for (i = 0; i < dtl_nr_pages; i++)
+ if (dtl_page_store[i] == page)
+ return true;
+
+ return false;
+}
+
+void dtl_cache_ctor(void *addr)
+{
+ unsigned long pfn = PHYS_PFN(__pa(addr));
+ struct page *page = pfn_to_page(pfn);
+
+ if (!is_dtl_page_shared(page)) {
+ dtl_page_store[dtl_nr_pages] = page;
+ dtl_nr_pages++;
+ WARN_ON(dtl_nr_pages >= NR_DTL_PAGE);
+ uv_share_page(pfn, 1);
+ }
+}
From: Ram Pai <[email protected]>
A new kernel deserves a clean slate. Any pages shared with the hypervisor
are unshared before invoking the new kernel. However, there are exceptions:
if the new kernel is invoked to dump the current kernel, or if there is an
explicit request to preserve the state of the current kernel, unsharing
of pages is skipped.
NOTE: While testing crashkernel, make sure at least 256M is reserved for
the crash kernel. Otherwise SWIOTLB allocation will fail and the crash
kernel will fail to boot.
Signed-off-by: Ram Pai <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/include/asm/ultravisor-api.h | 1 +
arch/powerpc/include/asm/ultravisor.h | 6 ++++++
arch/powerpc/kernel/machine_kexec_64.c | 8 ++++++++
3 files changed, 15 insertions(+)
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index c7513bbadf57..ab4f756cb91c 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -29,5 +29,6 @@
#define UV_UNSHARE_PAGE 0xF134
#define UV_PAGE_INVAL 0xF138
#define UV_SVM_TERMINATE 0xF13C
+#define UV_UNSHARE_ALL_PAGES 0xF140
#endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index f7418b663a0e..80d4beaf61b8 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -106,6 +106,12 @@ static inline int uv_unshare_page(u64 pfn, u64 npages)
return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
}
+static inline int uv_unshare_all_pages(void)
+{
+ unsigned long retbuf[UCALL_BUFSIZE];
+
+ return ucall(UV_UNSHARE_ALL_PAGES, retbuf);
+}
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 75692c327ba0..b3d87d32e8f7 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -31,6 +31,7 @@
#include <asm/smp.h>
#include <asm/hw_breakpoint.h>
#include <asm/asm-prototypes.h>
+#include <asm/ultravisor.h>
int default_machine_kexec_prepare(struct kimage *image)
{
@@ -329,6 +330,13 @@ void default_machine_kexec(struct kimage *image)
#ifdef CONFIG_PPC_PSERIES
kexec_paca.lppaca_ptr = NULL;
#endif
+
+ if (is_secure_guest() && !(image->preserve_context ||
+ image->type == KEXEC_TYPE_CRASH)) {
+ uv_unshare_all_pages();
+ printk("kexec: Unshared all shared pages.\n");
+ }
+
paca_ptrs[kexec_paca.paca_index] = &kexec_paca;
setup_paca(&kexec_paca);
From: Sukadev Bhattiprolu <[email protected]>
Normally, the HV emulates some instructions like MSGSNDP, MSGCLRP
from a KVM guest. To emulate the instructions, it must first read
the instruction from the guest's memory and decode its parameters.
However, for a secure guest (aka SVM), the page containing the
instruction is in secure memory and the HV cannot access it directly.
It would need the Ultravisor (UV) to facilitate accessing the
instruction and its parameters, but the UV currently does not have
support for such accesses.
Until the UV has such support, disable doorbells in SVMs. This might
incur a performance hit but that is yet to be quantified.
With this patch applied (needed only in SVMs, not in the HV) we are
able to launch SVM guests with multi-core support, e.g.:
qemu -smp sockets=2,cores=2,threads=2.
Fix suggested by Benjamin Herrenschmidt. Thanks to input from
Paul Mackerras, Ram Pai and Michael Anderson.
Signed-off-by: Sukadev Bhattiprolu <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/platforms/pseries/smp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index 3df46123cce3..95a5c24a1544 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -45,6 +45,7 @@
#include <asm/dbell.h>
#include <asm/plpar_wrappers.h>
#include <asm/code-patching.h>
+#include <asm/svm.h>
#include "pseries.h"
#include "offline_states.h"
@@ -225,7 +226,7 @@ static __init void pSeries_smp_probe_xics(void)
{
xics_smp_probe();
- if (cpu_has_feature(CPU_FTR_DBELL))
+ if (cpu_has_feature(CPU_FTR_DBELL) && !is_secure_guest())
smp_ops->cause_ipi = smp_pseries_cause_ipi;
else
smp_ops->cause_ipi = icp_ops->cause_ipi;
Helps document what the hard-coded number means.
Also take the opportunity to fix an #endif comment.
Suggested-by: Alexey Kardashevskiy <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/kernel/paca.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 9cc91d03ab62..854105db5cff 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -56,6 +56,8 @@ static void *__init alloc_paca_data(unsigned long size, unsigned long align,
#ifdef CONFIG_PPC_PSERIES
+#define LPPACA_SIZE 0x400
+
/*
* See asm/lppaca.h for more detail.
*
@@ -69,7 +71,7 @@ static inline void init_lppaca(struct lppaca *lppaca)
*lppaca = (struct lppaca) {
.desc = cpu_to_be32(0xd397d781), /* "LpPa" */
- .size = cpu_to_be16(0x400),
+ .size = cpu_to_be16(LPPACA_SIZE),
.fpregs_in_use = 1,
.slb_count = cpu_to_be16(64),
.vmxregs_in_use = 0,
@@ -79,19 +81,18 @@ static inline void init_lppaca(struct lppaca *lppaca)
static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
{
struct lppaca *lp;
- size_t size = 0x400;
- BUILD_BUG_ON(size < sizeof(struct lppaca));
+ BUILD_BUG_ON(sizeof(struct lppaca) > LPPACA_SIZE);
if (early_cpu_has_feature(CPU_FTR_HVMODE))
return NULL;
- lp = alloc_paca_data(size, 0x400, limit, cpu);
+ lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
init_lppaca(lp);
return lp;
}
-#endif /* CONFIG_PPC_BOOK3S */
+#endif /* CONFIG_PPC_PSERIES */
#ifdef CONFIG_PPC_BOOK3S_64
Secure guest memory is inaccessible to devices, so regular DMA isn't
possible.
In that case, set devices' dma_map_ops to NULL so that the generic
DMA code path will use SWIOTLB and DMA to/from bounce buffers.
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/platforms/pseries/iommu.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 03bbb299320e..7d9550edb700 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -50,6 +50,7 @@
#include <asm/udbg.h>
#include <asm/mmzone.h>
#include <asm/plpar_wrappers.h>
+#include <asm/svm.h>
#include "pseries.h"
@@ -1332,7 +1333,10 @@ void iommu_init_early_pSeries(void)
of_reconfig_notifier_register(&iommu_reconfig_nb);
register_memory_notifier(&iommu_mem_nb);
- set_pci_dma_ops(&dma_iommu_ops);
+ if (is_secure_guest())
+ set_pci_dma_ops(NULL);
+ else
+ set_pci_dma_ops(&dma_iommu_ops);
}
static int __init disable_multitce(char *str)
From: Ryan Grimm <[email protected]>
User space might want to know it's running in a secure VM. It can't do
an mfmsr because mfmsr is a privileged instruction.
The solution here is to create a cpu attribute:
/sys/devices/system/cpu/svm
which will read 0 or 1 based on the S bit of the guest's CPU 0.
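For illustration, a small user space sketch (not part of the patch) that consumes
the attribute:

#include <stdio.h>

int main(void)
{
	unsigned int svm = 0;
	FILE *f = fopen("/sys/devices/system/cpu/svm", "r");

	if (f) {
		/* The attribute reads back "0" or "1". */
		if (fscanf(f, "%u", &svm) != 1)
			svm = 0;
		fclose(f);
	}

	printf("secure VM: %s\n", svm ? "yes" : "no");
	return 0;
}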
Signed-off-by: Ryan Grimm <[email protected]>
Reviewed-by: Ram Pai <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/kernel/sysfs.c | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index e2147d7c9e72..f7100ab77d29 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -19,6 +19,7 @@
#include <asm/smp.h>
#include <asm/pmc.h>
#include <asm/firmware.h>
+#include <asm/svm.h>
#include "cacheinfo.h"
#include "setup.h"
@@ -715,6 +716,32 @@ static struct device_attribute pa6t_attrs[] = {
#endif /* HAS_PPC_PMC_PA6T */
#endif /* HAS_PPC_PMC_CLASSIC */
+#ifdef CONFIG_PPC_SVM
+static void get_svm(void *val)
+{
+ u32 *value = val;
+
+ *value = is_secure_guest();
+}
+
+static ssize_t show_svm(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ u32 val;
+ smp_call_function_single(0, get_svm, &val, 1);
+ return sprintf(buf, "%u\n", val);
+}
+static DEVICE_ATTR(svm, 0444, show_svm, NULL);
+
+static void create_svm_file(void)
+{
+ device_create_file(cpu_subsys.dev_root, &dev_attr_svm);
+}
+#else
+static void create_svm_file(void)
+{
+}
+#endif /* CONFIG_PPC_SVM */
+
static int register_cpu_online(unsigned int cpu)
{
struct cpu *c = &per_cpu(cpu_devices, cpu);
@@ -1058,6 +1085,8 @@ static int __init topology_init(void)
sysfs_create_dscr_default();
#endif /* CONFIG_PPC64 */
+ create_svm_file();
+
return 0;
}
subsys_initcall(topology_init);
From: Anshuman Khandual <[email protected]>
SWIOTLB checks the range of incoming CPU addresses to be bounced and, if the
device can access them through its DMA window without bouncing, it just skips
the bounce. But for cases like secure guests on the powerpc platform, all
addresses need to be bounced into the shared pool of memory because the host
cannot access them otherwise. Hence the need to bounce is not related to the
device's DMA window, and the use of bounce buffers is forced by setting
swiotlb_force.
Also, connect the shared memory conversion functions into the
ARCH_HAS_MEM_ENCRYPT hooks and call swiotlb_update_mem_attributes() to
convert SWIOTLB's memory pool to shared memory.
Signed-off-by: Anshuman Khandual <[email protected]>
[ bauerman: Use ARCH_HAS_MEM_ENCRYPT hooks to share swiotlb memory pool. ]
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/include/asm/mem_encrypt.h | 26 +++++++++++++++
arch/powerpc/platforms/pseries/Kconfig | 3 ++
arch/powerpc/platforms/pseries/svm.c | 45 ++++++++++++++++++++++++++
3 files changed, 74 insertions(+)
diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
new file mode 100644
index 000000000000..d8d5a7fcf298
--- /dev/null
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * SVM helper functions
+ *
+ * Copyright 2019 IBM Corporation
+ */
+
+#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
+#define _ASM_POWERPC_MEM_ENCRYPT_H
+
+#include <asm/svm.h>
+
+static inline bool mem_encrypt_active(void)
+{
+ return is_secure_guest();
+}
+
+static inline bool force_dma_unencrypted(struct device *dev)
+{
+ return is_secure_guest();
+}
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index d09deb05bb66..9e35cddddf73 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -149,6 +149,9 @@ config PAPR_SCM
config PPC_SVM
bool "Secure virtual machine (SVM) support for POWER"
depends on PPC_PSERIES
+ select SWIOTLB
+ select ARCH_HAS_MEM_ENCRYPT
+ select ARCH_HAS_FORCE_DMA_UNENCRYPTED
help
There are certain POWER platforms which support secure guests using
the Protected Execution Facility, with the help of an Ultravisor
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index c508196f7c83..618622d636d5 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -7,8 +7,53 @@
*/
#include <linux/mm.h>
+#include <asm/machdep.h>
+#include <asm/svm.h>
+#include <asm/swiotlb.h>
#include <asm/ultravisor.h>
+static int __init init_svm(void)
+{
+ if (!is_secure_guest())
+ return 0;
+
+ /* Don't release the SWIOTLB buffer. */
+ ppc_swiotlb_enable = 1;
+
+ /*
+ * Since the guest memory is inaccessible to the host, devices always
+ * need to use the SWIOTLB buffer for DMA even if dma_capable() says
+ * otherwise.
+ */
+ swiotlb_force = SWIOTLB_FORCE;
+
+ /* Share the SWIOTLB buffer with the host. */
+ swiotlb_update_mem_attributes();
+
+ return 0;
+}
+machine_early_initcall(pseries, init_svm);
+
+int set_memory_encrypted(unsigned long addr, int numpages)
+{
+ if (!PAGE_ALIGNED(addr))
+ return -EINVAL;
+
+ uv_unshare_page(PHYS_PFN(__pa(addr)), numpages);
+
+ return 0;
+}
+
+int set_memory_decrypted(unsigned long addr, int numpages)
+{
+ if (!PAGE_ALIGNED(addr))
+ return -EINVAL;
+
+ uv_share_page(PHYS_PFN(__pa(addr)), numpages);
+
+ return 0;
+}
+
/* There's one dispatch log per CPU. */
#define NR_DTL_PAGE (DISPATCH_LOG_BYTES * CONFIG_NR_CPUS / PAGE_SIZE)
From: Ryan Grimm <[email protected]>
Enables running as a secure guest on platforms with an Ultravisor.
Signed-off-by: Ryan Grimm <[email protected]>
Signed-off-by: Ram Pai <[email protected]>
Signed-off-by: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index c63b019ba753..0b04c5ca9e13 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -31,6 +31,7 @@ CONFIG_DTL=y
CONFIG_SCANLOG=m
CONFIG_PPC_SMLPAR=y
CONFIG_IBMEBUS=y
+CONFIG_PPC_SVM=y
CONFIG_PPC_MAPLE=y
CONFIG_PPC_PASEMI=y
CONFIG_PPC_PASEMI_IOMMU=y
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index 38abc9c1770a..26126b4d4de3 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -42,6 +42,7 @@ CONFIG_DTL=y
CONFIG_SCANLOG=m
CONFIG_PPC_SMLPAR=y
CONFIG_IBMEBUS=y
+CONFIG_PPC_SVM=y
# CONFIG_PPC_PMAC is not set
CONFIG_RTAS_FLASH=m
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
> From: Ram Pai <[email protected]>
>
> Make the Enter-Secure-Mode (ESM) ultravisor call to switch the VM to secure
> mode. Add "svm=" command line option to turn on switching to secure mode.
>
> Signed-off-by: Ram Pai <[email protected]>
> [ andmike: Generate an RTAS os-term hcall when the ESM ucall fails. ]
> Signed-off-by: Michael Anderson <[email protected]>
> [ bauerman: Cleaned up the code a bit. ]
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
> ---
> .../admin-guide/kernel-parameters.txt | 5 +
> arch/powerpc/include/asm/ultravisor-api.h | 1 +
> arch/powerpc/kernel/prom_init.c | 99 +++++++++++++++++++
> 3 files changed, 105 insertions(+)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 7b15abf7db21..c611891b5992 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4585,6 +4585,11 @@
> /sys/power/pm_test). Only available when CONFIG_PM_DEBUG
> is set. Default value is 5.
>
> + svm= [PPC]
> + Format: { on | off | y | n | 1 | 0 }
> + This parameter controls use of the Protected
> + Execution Facility on pSeries.
> +
> swapaccount=[0|1]
> [KNL] Enable accounting of swap in memory resource
> controller if no parameter or 1 is given or disable
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
> index c8180427fa01..fe9a0d8d7673 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -19,6 +19,7 @@
>
> /* opcodes */
> #define UV_WRITE_PATE 0xF104
> +#define UV_ESM 0xF110
> #define UV_RETURN 0xF11C
> #define UV_REGISTER_MEM_SLOT 0xF120
> #define UV_UNREGISTER_MEM_SLOT 0xF124
> diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
> index a3fb90bb5a39..6389a992451b 100644
> --- a/arch/powerpc/kernel/prom_init.c
> +++ b/arch/powerpc/kernel/prom_init.c
> @@ -44,6 +44,7 @@
> #include <asm/sections.h>
> #include <asm/machdep.h>
> #include <asm/asm-prototypes.h>
> +#include <asm/ultravisor-api.h>
>
> #include <linux/linux_logo.h>
>
> @@ -175,6 +176,10 @@ static bool __prombss prom_radix_disable;
> static bool __prombss prom_xive_disable;
> #endif
>
> +#ifdef CONFIG_PPC_SVM
> +static bool __prombss prom_svm_enable;
> +#endif
> +
> struct platform_support {
> bool hash_mmu;
> bool radix_mmu;
> @@ -816,6 +821,17 @@ static void __init early_cmdline_parse(void)
> prom_debug("XIVE disabled from cmdline\n");
> }
> #endif /* CONFIG_PPC_PSERIES */
> +
> +#ifdef CONFIG_PPC_SVM
> + opt = prom_strstr(prom_cmd_line, "svm=");
> + if (opt) {
> + bool val;
> +
> + opt += sizeof("svm=") - 1;
> + if (!prom_strtobool(opt, &val))
> + prom_svm_enable = val;
> + }
> +#endif /* CONFIG_PPC_SVM */
> }
>
> #ifdef CONFIG_PPC_PSERIES
> @@ -1716,6 +1732,43 @@ static void __init prom_close_stdin(void)
> }
> }
>
> +#ifdef CONFIG_PPC_SVM
> +static int prom_rtas_hcall(uint64_t args)
> +{
> + register uint64_t arg1 asm("r3") = H_RTAS;
> + register uint64_t arg2 asm("r4") = args;
> +
> + asm volatile("sc 1\n" : "=r" (arg1) :
> + "r" (arg1),
> + "r" (arg2) :);
> + return arg1;
> +}
> +
> +static struct rtas_args __prombss os_term_args;
> +
> +static void __init prom_rtas_os_term(char *str)
> +{
> + phandle rtas_node;
> + __be32 val;
> + u32 token;
> +
> + prom_debug("%s: start...\n", __func__);
> + rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
> + prom_debug("rtas_node: %x\n", rtas_node);
> + if (!PHANDLE_VALID(rtas_node))
> + return;
> +
> + val = 0;
> + prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
> + token = be32_to_cpu(val);
> + prom_debug("ibm,os-term: %x\n", token);
> + if (token == 0)
> + prom_panic("Could not get token for ibm,os-term\n");
> + os_term_args.token = cpu_to_be32(token);
> + prom_rtas_hcall((uint64_t)&os_term_args);
> +}
> +#endif /* CONFIG_PPC_SVM */
> +
> /*
> * Allocate room for and instantiate RTAS
> */
> @@ -3172,6 +3225,49 @@ static void unreloc_toc(void)
> #endif
> #endif
>
> +#ifdef CONFIG_PPC_SVM
> +/*
> + * Perform the Enter Secure Mode ultracall.
> + */
> +static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
> +{
> + register uint64_t func asm("r3") = UV_ESM;
> + register uint64_t arg1 asm("r4") = (uint64_t)kbase;
> + register uint64_t arg2 asm("r5") = (uint64_t)fdt;
What does UV do with kbase and fdt precisely? Few words in the commit
log will do.
> +
> + asm volatile("sc 2\n"
> + : "=r"(func)
> + : "0"(func), "r"(arg1), "r"(arg2)
> + :);
> +
> + return (int)func;
And why "func"? Is it "function"? Weird name. Thanks,
> +}
> +
> +/*
> + * Call the Ultravisor to transfer us to secure memory if we have an ESM blob.
> + */
> +static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
> +{
> + int ret;
> +
> + if (!prom_svm_enable)
> + return;
> +
> + /* Switch to secure mode. */
> + prom_printf("Switching to secure mode.\n");
> +
> + ret = enter_secure_mode(kbase, fdt);
> + if (ret != U_SUCCESS) {
> + prom_printf("Returned %d from switching to secure mode.\n", ret);
> + prom_rtas_os_term("Switch to secure mode failed.\n");
> + }
> +}
> +#else
> +static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
> +{
> +}
> +#endif /* CONFIG_PPC_SVM */
> +
> /*
> * We enter here early on, when the Open Firmware prom is still
> * handling exceptions and the MMU hash table for us.
> @@ -3370,6 +3466,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
> unreloc_toc();
> #endif
>
> + /* Move to secure memory if we're supposed to be secure guests. */
> + setup_secure_guest(kbase, hdr);
> +
> __start(hdr, kbase, 0, 0, 0, 0, 0);
>
> return 0;
>
--
Alexey
On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
> From: Ram Pai <[email protected]>
>
> These functions are used when the guest wants to grant the hypervisor
> access to certain pages.
>
> Signed-off-by: Ram Pai <[email protected]>
> Signed-off-by: Thiago Jung Bauermann <[email protected]>
> ---
> arch/powerpc/include/asm/ultravisor-api.h | 2 ++
> arch/powerpc/include/asm/ultravisor.h | 15 +++++++++++++++
> 2 files changed, 17 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
> index fe9a0d8d7673..c7513bbadf57 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -25,6 +25,8 @@
> #define UV_UNREGISTER_MEM_SLOT 0xF124
> #define UV_PAGE_IN 0xF128
> #define UV_PAGE_OUT 0xF12C
> +#define UV_SHARE_PAGE 0xF130
> +#define UV_UNSHARE_PAGE 0xF134
> #define UV_PAGE_INVAL 0xF138
> #define UV_SVM_TERMINATE 0xF13C
>
> diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
> index f5dc5af739b8..f7418b663a0e 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -91,6 +91,21 @@ static inline int uv_svm_terminate(u64 lpid)
>
> return ucall(UV_SVM_TERMINATE, retbuf, lpid);
> }
> +
> +static inline int uv_share_page(u64 pfn, u64 npages)
> +{
> + unsigned long retbuf[UCALL_BUFSIZE];
> +
> + return ucall(UV_SHARE_PAGE, retbuf, pfn, npages);
What is in that retbuf? Can you pass NULL instead?
> +}
> +
> +static inline int uv_unshare_page(u64 pfn, u64 npages)
> +{
> + unsigned long retbuf[UCALL_BUFSIZE];
> +
> + return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
> +}
> +
> #endif /* !__ASSEMBLY__ */
>
> #endif /* _ASM_POWERPC_ULTRAVISOR_H */
>
--
Alexey
(Sorry to hijack your reply).
On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
> >From: Ram Pai <[email protected]>
> >+static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
> >+{
> >+ register uint64_t func asm("r3") = UV_ESM;
> >+ register uint64_t arg1 asm("r4") = (uint64_t)kbase;
> >+ register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>
> What does UV do with kbase and fdt precisely? Few words in the commit
> log will do.
>
> >+
> >+ asm volatile("sc 2\n"
> >+ : "=r"(func)
> >+ : "0"(func), "r"(arg1), "r"(arg2)
> >+ :);
> >+
> >+ return (int)func;
>
> And why "func"? Is it "function"? Weird name. Thanks,
Maybe the three vars should just be called "r3", "r4", and "r5" --
r3 is used as return value as well, so "func" isn't a great name for it.
Some other comments about this inline asm:
The "\n" makes the generated asm look funny and has no other function.
Instead of using backreferences you can use a "+" constraint, "inout".
Empty clobber list is strange.
Casts to the return type, like most other casts, are an invitation to
bugs and not actually useful.
So this can be written
static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
{
register uint64_t r3 asm("r3") = UV_ESM;
register uint64_t r4 asm("r4") = kbase;
register uint64_t r4 asm("r5") = fdt;
asm volatile("sc 2" : "+r"(r3) : "r"(r4), "r"(r5));
return r3;
}
(and it probably should use u64 instead of both uint64_t and unsigned long?)
Segher
Hello Alexey,
Thanks for your review!
Alexey Kardashevskiy <[email protected]> writes:
> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>> From: Ram Pai <[email protected]>
>>
>> These functions are used when the guest wants to grant the hypervisor
>> access to certain pages.
>>
>> Signed-off-by: Ram Pai <[email protected]>
>> Signed-off-by: Thiago Jung Bauermann <[email protected]>
>> ---
>> arch/powerpc/include/asm/ultravisor-api.h | 2 ++
>> arch/powerpc/include/asm/ultravisor.h | 15 +++++++++++++++
>> 2 files changed, 17 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
>> index fe9a0d8d7673..c7513bbadf57 100644
>> --- a/arch/powerpc/include/asm/ultravisor-api.h
>> +++ b/arch/powerpc/include/asm/ultravisor-api.h
>> @@ -25,6 +25,8 @@
>> #define UV_UNREGISTER_MEM_SLOT 0xF124
>> #define UV_PAGE_IN 0xF128
>> #define UV_PAGE_OUT 0xF12C
>> +#define UV_SHARE_PAGE 0xF130
>> +#define UV_UNSHARE_PAGE 0xF134
>> #define UV_PAGE_INVAL 0xF138
>> #define UV_SVM_TERMINATE 0xF13C
>> diff --git a/arch/powerpc/include/asm/ultravisor.h
>> b/arch/powerpc/include/asm/ultravisor.h
>> index f5dc5af739b8..f7418b663a0e 100644
>> --- a/arch/powerpc/include/asm/ultravisor.h
>> +++ b/arch/powerpc/include/asm/ultravisor.h
>> @@ -91,6 +91,21 @@ static inline int uv_svm_terminate(u64 lpid)
>> return ucall(UV_SVM_TERMINATE, retbuf, lpid);
>> }
>> +
>> +static inline int uv_share_page(u64 pfn, u64 npages)
>> +{
>> + unsigned long retbuf[UCALL_BUFSIZE];
>> +
>> + return ucall(UV_SHARE_PAGE, retbuf, pfn, npages);
>
>
> What is in that retbuf? Can you pass NULL instead?
I think so, that buffer isn't used actually. Claudio is working on a
ucall_norets() which doesn't take the buffer and I can switch to that.
--
Thiago Jung Bauermann
IBM Linux Technology Center
Hello Segher,
Thanks for your review and suggestions!
Segher Boessenkool <[email protected]> writes:
> (Sorry to hijack your reply).
>
> On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
>> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>> >From: Ram Pai <[email protected]>
>> >+static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>> >+{
>> >+ register uint64_t func asm("r3") = UV_ESM;
>> >+ register uint64_t arg1 asm("r4") = (uint64_t)kbase;
>> >+ register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>>
>> What does UV do with kbase and fdt precisely? Few words in the commit
>> log will do.
>>
>> >+
>> >+ asm volatile("sc 2\n"
>> >+ : "=r"(func)
>> >+ : "0"(func), "r"(arg1), "r"(arg2)
>> >+ :);
>> >+
>> >+ return (int)func;
>>
>> And why "func"? Is it "function"? Weird name. Thanks,
Yes, I believe func is for function. Perhaps ucall would be clearer
if the variable wasn't reused for the return value as Segher points out.
> Maybe the three vars should just be called "r3", "r4", and "r5" --
> r3 is used as return value as well, so "func" isn't a great name for it.
Yes, that does seem simpler.
> Some other comments about this inline asm:
>
> The "\n" makes the generated asm look funny and has no other function.
> Instead of using backreferences you can use a "+" constraint, "inout".
> Empty clobber list is strange.
> Casts to the return type, like most other casts, are an invitation to
> bugs and not actually useful.
>
> So this can be written
>
> static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
> {
> register uint64_t r3 asm("r3") = UV_ESM;
> register uint64_t r4 asm("r4") = kbase;
> register uint64_t r5 asm("r5") = fdt;
>
> asm volatile("sc 2" : "+r"(r3) : "r"(r4), "r"(r5));
>
> return r3;
> }
I'll adopt your version, it is cleaner indeed. Thanks for providing it!
> (and it probably should use u64 instead of both uint64_t and unsigned long?)
Almost all of prom_init.c uses unsigned long, with u64 in just a few
places. uint64_t isn't used anywhere else in the file. I'll switch to
unsigned long everywhere, since this feature is only for 64 bit.
--
Thiago Jung Bauermann
IBM Linux Technology Center
On 19/07/2019 07:28, Thiago Jung Bauermann wrote:
>
> Hello Segher,
>
> Thanks for your review and suggestions!
>
> Segher Boessenkool <[email protected]> writes:
>
>> (Sorry to hijack your reply).
>>
>> On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
>>> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>>>> From: Ram Pai <[email protected]>
>>>> +static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>>>> +{
>>>> + register uint64_t func asm("r3") = UV_ESM;
>>>> + register uint64_t arg1 asm("r4") = (uint64_t)kbase;
>>>> + register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>>>
>>> What does UV do with kbase and fdt precisely? Few words in the commit
>>> log will do.
What about this one? :)
>>>
>>>> +
>>>> + asm volatile("sc 2\n"
>>>> + : "=r"(func)
>>>> + : "0"(func), "r"(arg1), "r"(arg2)
>>>> + :);
>>>> +
>>>> + return (int)func;
>>>
>>> And why "func"? Is it "function"? Weird name. Thanks,
>
> Yes, I believe func is for function. Perhaps ucall would be clearer
> if the variable wasn't reused for the return value as Segher points out.
>
>> Maybe the three vars should just be called "r3", "r4", and "r5" --
>> r3 is used as return value as well, so "func" isn't a great name for it.
>
> Yes, that does seem simpler.
>
>> Some other comments about this inline asm:
>>
>> The "\n" makes the generated asm look funny and has no other function.
>> Instead of using backreferences you can use a "+" constraint, "inout".
>> Empty clobber list is strange.
>> Casts to the return type, like most other casts, are an invitation to
>> bugs and not actually useful.
>>
>> So this can be written
>>
>> static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>> {
>> register uint64_t r3 asm("r3") = UV_ESM;
>> register uint64_t r4 asm("r4") = kbase;
>> register uint64_t r5 asm("r5") = fdt;
>>
>> asm volatile("sc 2" : "+r"(r3) : "r"(r4), "r"(r5));
>>
>> return r3;
>> }
>
> I'll adopt your version, it is cleaner indeed. Thanks for providing it!
>
>> (and it probably should use u64 instead of both uint64_t and unsigned long?)
>
> Almost all of prom_init.c uses unsigned long, with u64 in just a few
> places. uint64_t isn't used anywhere else in the file. I'll switch to
> unsigned long everywhere, since this feature is only for 64 bit.
>
--
Alexey
Alexey Kardashevskiy <[email protected]> writes:
> On 19/07/2019 07:28, Thiago Jung Bauermann wrote:
>>
>> Hello Segher,
>>
>> Thanks for your review and suggestions!
>>
>> Segher Boessenkool <[email protected]> writes:
>>
>>> (Sorry to hijack your reply).
>>>
>>> On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
>>>> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>>>>> From: Ram Pai <[email protected]>
>>>>> +static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>>>>> +{
>>>>> + register uint64_t func asm("r3") = UV_ESM;
>>>>> + register uint64_t arg1 asm("r4") = (uint64_t)kbase;
>>>>> + register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>>>>
>>>> What does UV do with kbase and fdt precisely? Few words in the commit
>>>> log will do.
>
>
> What about this one? :)
Sorry, I don't have an elaborate answer yet. The non-elaborate answer is
that the ultravisor uses the kbase and fdt as part of integrity checking
of the secure guest.
--
Thiago Jung Bauermann
IBM Linux Technology Center