From: Zengruan Ye
Subject: [PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
Date: Thu, 26 Dec 2019 21:58:32 +0800
Message-ID: <20191226135833.1052-6-yezengruan@huawei.com>
In-Reply-To: <20191226135833.1052-1-yezengruan@huawei.com>
References: <20191226135833.1052-1-yezengruan@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This is to fix some lock holder preemption issues.
Some lock implementations spin in a loop before attempting to acquire
the lock itself. The kernel already provides the interface
bool vcpu_is_preempted(int cpu): it takes a CPU number as its parameter
and returns true if the vCPU running on that CPU has been preempted.
The kernel can then break out of such spin loops based on the return
value of vcpu_is_preempted(). Since the kernel already uses this
interface, let's support it on arm64.

Signed-off-by: Zengruan Ye
---
 arch/arm64/include/asm/paravirt.h      | 12 ++++++++++++
 arch/arm64/include/asm/spinlock.h      |  7 +++++++
 arch/arm64/kernel/Makefile             |  2 +-
 arch/arm64/kernel/paravirt-spinlocks.c | 13 +++++++++++++
 arch/arm64/kernel/paravirt.c           |  4 +++-
 5 files changed, 36 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/kernel/paravirt-spinlocks.c

diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index cf3a0fd7c1a7..7b1c81b544bb 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -11,8 +11,13 @@ struct pv_time_ops {
 	unsigned long long (*steal_clock)(int cpu);
 };
 
+struct pv_lock_ops {
+	bool (*vcpu_is_preempted)(int cpu);
+};
+
 struct paravirt_patch_template {
 	struct pv_time_ops time;
+	struct pv_lock_ops lock;
 };
 
 extern struct paravirt_patch_template pv_ops;
@@ -24,6 +29,13 @@ static inline u64 paravirt_steal_clock(int cpu)
 
 int __init pv_time_init(void);
 
+__visible bool __native_vcpu_is_preempted(int cpu);
+
+static inline bool pv_vcpu_is_preempted(int cpu)
+{
+	return pv_ops.lock.vcpu_is_preempted(cpu);
+}
+
 #else
 
 #define pv_time_init() do {} while (0)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index b093b287babf..45ff1b2949a6 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -7,8 +7,15 @@
 #include
 #include
+#include
 
 /* See include/linux/spinlock.h */
 #define smp_mb__after_spinlock()	smp_mb()
 
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(long cpu)
+{
+	return pv_vcpu_is_preempted(cpu);
+}
+
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index fc6488660f64..b23cdae433a4 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -50,7 +50,7 @@ obj-$(CONFIG_ARMV8_DEPRECATED)	+= armv8_deprecated.o
 obj-$(CONFIG_ACPI)			+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)			+= acpi_numa.o
 obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL)	+= acpi_parking_protocol.o
-obj-$(CONFIG_PARAVIRT)			+= paravirt.o
+obj-$(CONFIG_PARAVIRT)			+= paravirt.o paravirt-spinlocks.o
 obj-$(CONFIG_RANDOMIZE_BASE)		+= kaslr.o
 obj-$(CONFIG_HIBERNATION)		+= hibernate.o hibernate-asm.o
 obj-$(CONFIG_KEXEC_CORE)		+= machine_kexec.o relocate_kernel.o	\
diff --git a/arch/arm64/kernel/paravirt-spinlocks.c b/arch/arm64/kernel/paravirt-spinlocks.c
new file mode 100644
index 000000000000..718aa773d45c
--- /dev/null
+++ b/arch/arm64/kernel/paravirt-spinlocks.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright(c) 2019 Huawei Technologies Co., Ltd
+ * Author: Zengruan Ye
+ */
+
+#include
+#include
+
+__visible bool __native_vcpu_is_preempted(int cpu)
+{
+	return false;
+}
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
index 1ef702b0be2d..d8f1ba8c22ce 100644
--- a/arch/arm64/kernel/paravirt.c
+++ b/arch/arm64/kernel/paravirt.c
@@ -26,7 +26,9 @@
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
 
-struct paravirt_patch_template pv_ops;
+struct paravirt_patch_template pv_ops = {
+	.lock.vcpu_is_preempted = __native_vcpu_is_preempted,
+};
 EXPORT_SYMBOL_GPL(pv_ops);
 
 struct pv_time_stolen_time_region {
-- 
2.19.1