Hi Thomas and all,
This patch set aims to improve IRQ throughput on Intel Xeon by making use of
posted interrupts.
I presented this topic at the LPC2023 IOMMU/VFIO/PCI MC:
https://lpc.events/event/17/sessions/172/#20231115
Background
==========
On modern x86 server SoCs, interrupt remapping (IR) is required and turned
on by default to support x2APIC. The IOMMU/VT-d supports two interrupt
remapping modes:
- Remappable (host)
- Posted (guest only so far)
With remappable mode, the device-MSI-to-CPU path is a hardware flow with no
system software touch points. It roughly goes as follows:
1. Devices issue interrupt requests with writes to 0xFEEx_xxxx
2. The system agent accepts and remaps/translates the IRQ
3. Upon receiving the translation response, the system agent notifies the
destination CPU with the translated MSI
4. CPU's local APIC accepts interrupts into its IRR/ISR registers
5. Interrupt delivered through IDT (MSI vector)
The above process can be inefficient under high IRQ rates. The notifications in
step #3 are often unnecessary when the destination CPU is already overwhelmed
with handling bursts of IRQs. On some architectures, such as Intel Xeon, step #3
is also expensive and requires strong ordering w.r.t. DMA. As a result, slower
IRQ rates can become a limiting factor for DMA I/O performance.
For example, on Intel Xeon Sapphire Rapids SoC, as more NVMe disks are attached
to the same socket, FIO (libaio engine) 4K block random read performance
per-disk drops quickly.
# of disks          2        4        8
-------------------------------------
IOPS (million)    1.991    1.136    0.834

(NVMe Gen 5 Samsung PM174x)
With posted mode enabled in interrupt remapping, the interrupt flow is divided
into two parts: posting (storing pending IRQ vector information in memory) and
CPU notification.
The above remappable IRQ flow becomes the following (steps 1 and 2 unchanged):
3. The system agent notifies the destination CPU with a notification vector
   - The IOMMU suppresses the CPU notification when one is already outstanding
   - The IOMMU atomically swaps/stores the IRQ status into the memory-resident
     posted interrupt descriptor (PID)
4. The CPU's local APIC accepts the notification interrupt into its IRR/ISR
   registers
5. The interrupt is delivered through the IDT (notification vector handler)
   System SW allows new notifications by clearing the outstanding notification
   (ON) bit in the PID.
(The above flow is not in Linux today since we only use posted mode for VMs.)
Note that the system software can now suppress CPU notifications at runtime as
needed. This allows the system software to coalesce the expensive CPU
notifications and in turn, improve IRQ throughput and DMA performance.
Consider the following scenario when MSIs arrive at a CPU in high-frequency
bursts:
Time ----------------------------------------------------------------------->
      ^   ^   ^         ^   ^   ^   ^         ^         ^
MSIs  A   B   C         D   E   F   G         H         I

RI    N   N'  N'        N   N'  N'  N'        N         N

PI    N                 N                     N         N
RI: remappable interrupt; PI: posted interrupt;
N: interrupt notification, N': superfluous interrupt notification
With remappable interrupts (row titled RI), every MSI generates a notification
event to the CPU.
With posted interrupts enabled by this patch set (row titled PI), CPU
notifications are coalesced during IRQ bursts: the superfluous notifications
(N') are eliminated in the flow above. We refer to this mechanism as Coalesced
Interrupt Delivery (CID).
Posted interrupts have existed for a long time; they have been used for
virtualization, where MSIs from directly assigned devices can be delivered to
the guest kernel without VMM intervention. On Intel x86 platforms, posted
interrupts can be used on the host as well; only the host physical address of
the posted interrupt descriptor (PID) is used.
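For reference, the PID is a 64-byte, cache-line-aligned structure. A sketch of
its layout is below; the field names match the patches in this series (after
the bitfield removal), and the comments are for orientation only:

struct pi_desc {
	union {
		u32 pir[8];	/* 256-bit posted interrupt requested vectors */
		u64 pir64[4];	/* 64-bit view for atomic xchg() during demux */
	};
	union {
		struct {
			u16 notif_ctrl;	/* ON (outstanding) and SN (suppress) bits */
			u8  nv;		/* notification vector */
			u8  rsvd_2;
			u32 ndst;	/* notification destination APIC ID */
		};
		u64 control;
	};
	u32 rsvd[6];		/* pad to 64 bytes */
} __aligned(64);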
This patch set enables a new usage of posted interrupts on existing (and new)
hardware for host kernel device MSIs. It is referred to as Posted MSIs
throughout this patch set.
Performance (with this patch set):
==================================
Test #1. NVMe FIO
FIO with the libaio engine (million IOPS per disk), Gen 5 NVMe Samsung PM174x
disks on a single socket, Intel Xeon Sapphire Rapids. Random read with 4k block
size. NVMe IRQ affinity is managed by the kernel with one vector per CPU.
#disks    Before    After    %Gain
---------------------------------------------
8          0.834    1.943    132%
4          1.136    2.023     78%
Other observations:
- Increased block sizes show diminishing benefits, e.g. with 4 NVMe disks on
one x16 PCIe slot, the combined IOPS look like:

Block Size    Baseline    PostedMSI
-------------------------------------
4K            6475        8778
8K            5727        5896
16K           2864        2900
32K           1546        1520
128K           397         398
- Submission/completion latency (usec) also improved, but at the 4K block size
only

FIO-reported SLAT
---------------------------------------
Block Size    Baseline    PostedMSI
4K            2177        2282
8K            4416        3967
16K           2950        3053
32K           3453        3505
128K          5911        5801

FIO-reported CLAT
---------------------------------------
Block Size    Baseline    PostedMSI
4K             313         230
8K             352         343
16K            711         702
32K           1320        1343
128K          5146        5137
Test #2. Intel Data Streaming Accelerator (DSA)
Two dedicated workqueues from two PCI root complex integrated endpoint
(RCIEP) devices, with the IRQ affinity of the two interrupts pinned to a
single CPU.

                              Before    After    %Gain
-------------------------------------
DSA memfill (mil IRQs/sec)    5.157     8.987    74%
DMA throughput shows similar improvements.
At lower IRQ rates (< 1 million/second), neither performance benefits nor
regressions have been observed so far.
No-harm tests were also performed to ensure there is no performance regression
on workloads that do not have high interrupt rates. These tests include:
- kernel compile time
- file copy
- FIO NVMe random writes
Implementation choices:
======================
- Transparent to device drivers
- System-wide option instead of per-device or per-IRQ opt-in, i.e. once enabled
  all device MSIs are posted (see the example after this list). The benefit is
  that we only need to change the IR irq_chip and domain layer; there is no
  change to PCI MSI.
  Exceptions are: IOAPIC, HPET, and VT-d's own IRQs
- Limit the number of polling/demuxing loops per CPU notification event
- Only change Intel-IR in the IRQ domain hierarchy VECTOR->INTEL-IR->PCI-MSI
- x86 Intel only so far; this could be extended to other architectures with
  posted interrupt support (ARM and AMD) in a future RFC
- Bare metal only; there is no posted-interrupt-capable virtual IOMMU yet
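For example, with the opt-in option added in patch 11 ("iommu/vt-d: Make
posted MSI an opt-in cmdline option"), posted MSIs would be enabled with an
extra interrupt remapping flag on the kernel command line (the exact spelling
of the flag below is an assumption, inferred from the patch title):

    intremap=posted_msi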
Changes and implications (moving from remappable to posted mode)
================================================================
1. All MSI vectors are multiplexed into a single notification vector for each
CPU. MSI vectors are then de-multiplexed by SW; there is no IDT delivery for
MSIs.
2. The following features are lost compared to remappable mode (AFAIK, none of
them matter for device MSIs):
- Control of delivery mode, e.g. NMI for MSIs
- No logical destinations; the posted interrupt destination is the x2APIC
  physical APIC ID
- No per-vector stack, since all MSI vectors are multiplexed into one
Runtime changes
===============
The IRQ runtime behavior has changed with this patch set. Here is a pseudo
trace comparison for 3 MSIs of different vectors arriving in a burst on the
same CPU; a system vector interrupt (e.g. timer) arrives randomly.
BEFORE:
interrupt(MSI)
    irq_enter()
    handler() /* EOI */
    irq_exit()
        process_softirq()
interrupt(timer)
interrupt(MSI)
    irq_enter()
    handler() /* EOI */
    irq_exit()
        process_softirq()
interrupt(MSI)
    irq_enter()
    handler() /* EOI */
    irq_exit()
        process_softirq()
AFTER:
interrupt /* Posted MSI notification vector */
    irq_enter()
        atomic_xchg(PIR)
        handler()
        handler()
        handler()
        pi_clear_on()
    apic_eoi()
    irq_exit()
interrupt(timer)
    process_softirq()
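For illustration, the AFTER trace corresponds roughly to the following sketch
of the notification handler. handle_pending_pir() is a hypothetical helper
standing in for the real demux code added in the "Install posted MSI
notification handler" patch:

/*
 * Sketch only: demultiplex device MSIs posted in this CPU's PID.
 * handle_pending_pir() stands for a helper that xchg()s each 64-bit
 * chunk of the 256-bit PIR and runs the handler of every set vector
 * bit; assume it returns true if any pending vector was found.
 */
static void posted_msi_demux(struct pi_desc *pid)
{
	int i;

	/* Cap the loops so system vectors and softirqs are not starved */
	for (i = 0; i < MAX_POSTED_MSI_COALESCING_LOOP; i++) {
		if (!handle_pending_pir(pid))
			break;
	}

	/*
	 * Allow new CPU notifications, then scan the PIR one last time
	 * to close the race with vectors posted in the meantime.
	 */
	pi_clear_on(pid);
	handle_pending_pir(pid);
	apic_eoi();
}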
With posted MSIs (as pointed out by Thomas Gleixner), both high-priority
interrupts (system interrupt vectors) and softIRQs are blocked during the MSI
vector demux loop. Some of them can be timing sensitive.
Here are the options I have attempted or am still working on:
1. Use a self-IPI to invoke the MSI vector handler, but that took away the
majority of the performance benefits.
2. Limit the number of demuxing loops; this is implemented in this patch set.
Note that today we already allow one low-priority MSI to block system
interrupts. A system vector can preempt MSI vectors without waiting for EOI,
but we have IRQs disabled in the ISR.
Performance data (on DSA with memfill) also shows that coalescing more than 3
loops yields diminishing benefits; therefore, the maximum number of coalescing
loops is set to 3 in this patch set.
MaxLoop    IRQ/sec    Bandwidth (Mbps)
-------------------------------------------------------------------------
2          6157107    25219
3          6226611    25504
4          6557081    26857
5          6629683    27155
6          6662425    27289
3. Limit the time that system interrupts can be blocked (WIP).
In addition, posted MSIs use atomic xchg from both the CPU and the IOMMU.
Compared to remappable mode, there may be additional cache line ownership
contention over the PID. However, we have not observed performance regressions
at lower IRQ rates. At high interrupt rates, posted mode always wins.
Testing:
========
The following tests have been performed and continue to be evaluated.
- IRQ affinity change, migration
- CPU offlining
- Multi-vector coalescing
- VM device assignment via VFIO
- General no-harm tests at low IRQ rates; performance regressions have not
  been observed for low-IRQ-rate workloads
With the patch set, a new entry is added to /proc/interrupts:
cat /proc/interrupts | grep PMN
PMN: 13868907 Posted MSI notification event
No change to the device MSI accounting.
A new INTEL-IR-POST irq_chip is visible in the IRQ debugfs, e.g.

domain:  IR-PCI-MSIX-0000:6f:01.0-12
 hwirq:   0x8
 chip:    IR-PCI-MSIX-0000:6f:01.0
  flags:   0x430
            IRQCHIP_SKIP_SET_WAKE
            IRQCHIP_ONESHOT_SAFE
 parent:
    domain:  INTEL-IR-12-13
     hwirq:   0x90000
     chip:    INTEL-IR-POST /* For posted MSIs */
      flags:   0x0
     parent:
        domain:  VECTOR
         hwirq:   0x65
         chip:    APIC
Acknowledgment
==============
- Rajesh Sankaran and Ashok Raj for the original idea
- Thomas Gleixner for reviewing and guiding the upstream direction of the PoC
  patches, and for correcting my many misunderstandings of the IRQ subsystem
- Jie J Yan (Jeff), Sebastien Lemarie, and Dan Liang for performance evaluation
  with NVMe and network workloads
- Bernice Zhang and Scott Morris for functional validation
- Michael Prinke for helping me understand how the VT-d HW works
- Sanjay Kumar for providing the DSA IRQ test suite
Changelogs:
V2:
- Code change logs are in individual patches.
- Use "Originally-by" and "Suggested-by" tags to clarify
credits/responsibilities.
- More performance evaluation done with FIO:
  4K random read test, four Samsung PM174x NVMe drives on a single x16 PCIe
  Gen 5 slot, CPU frequency fixed at 2.7 GHz (P1, highest non-turbo).
                   IOPS*    CPU%    sys%    user%    Ints/sec    IOPS/CPU    LAT**
AIO (before)        6231    55.5    39.7    15.8     5714721     112.27      328
AIO (after)         8936    71.5    51.5    20.0     7397543     124.98      229
IOURING (before)    6880    43.7    30.3    13.4     6512402     157.44      149
IOURING (after)     8688    58.3    41.3    17.0     7625158     149.02      118
IOURING POLLEDQ    13100   100.0    85.1    14.9        8000     131.00      156

*  x1000, 4 drives combined
** 95th percentile latency, usec
This patch set improves IOPS and IRQ throughput, and reduces latency for
non-polled queues.
V1 (since RFC)
- Removed mention of wishful features: IRQ preemption, separate and full MSI
  vector space
- Refined MSI handler de-multiplexing loop based on suggestions from
Peter and Thomas. Reduced xchg() usage and code duplication
- Assign the new posted IR irq_chip only to device MSI/x, avoid changing
IO-APIC code
- Extract and use common code for preventing lost interrupt during
affinity change
- Added more test results to the cover letter
Thanks,
Jacob
Jacob Pan (13):
x86/irq: Move posted interrupt descriptor out of vmx code
x86/irq: Unionize PID.PIR for 64bit access w/o casting
x86/irq: Remove bitfields in posted interrupt descriptor
x86/irq: Add a Kconfig option for posted MSI
x86/irq: Reserve a per CPU IDT vector for posted MSIs
x86/irq: Set up per host CPU posted interrupt descriptors
x86/irq: Factor out calling ISR from common_interrupt
x86/irq: Install posted MSI notification handler
x86/irq: Factor out common code for checking pending interrupts
x86/irq: Extend checks for pending vectors to posted interrupts
iommu/vt-d: Make posted MSI an opt-in cmdline option
iommu/vt-d: Add an irq_chip for posted MSIs
iommu/vt-d: Enable posted mode for device MSIs
.../admin-guide/kernel-parameters.txt | 1 +
arch/x86/Kconfig | 11 ++
arch/x86/include/asm/apic.h | 12 ++
arch/x86/include/asm/hardirq.h | 6 +
arch/x86/include/asm/idtentry.h | 3 +
arch/x86/include/asm/irq_remapping.h | 11 ++
arch/x86/include/asm/irq_vectors.h | 9 +-
arch/x86/include/asm/posted_intr.h | 108 ++++++++++++
arch/x86/kernel/apic/vector.c | 5 +-
arch/x86/kernel/cpu/common.c | 3 +
arch/x86/kernel/idt.c | 3 +
arch/x86/kernel/irq.c | 164 ++++++++++++++++--
arch/x86/kvm/vmx/posted_intr.c | 4 +-
arch/x86/kvm/vmx/posted_intr.h | 93 +---------
arch/x86/kvm/vmx/vmx.c | 3 +-
arch/x86/kvm/vmx/vmx.h | 2 +-
drivers/iommu/intel/irq_remapping.c | 115 +++++++++++-
drivers/iommu/irq_remapping.c | 13 +-
tools/arch/x86/include/asm/irq_vectors.h | 9 +-
19 files changed, 457 insertions(+), 118 deletions(-)
create mode 100644 arch/x86/include/asm/posted_intr.h
--
2.25.1
With the posted MSI feature enabled on the CPU side, IOMMU interrupt
remapping table entries (IRTEs) for device MSI/x can be allocated,
activated, and programmed in posted mode. This means that IRTEs are
linked with the PIDs of their target CPUs.
Handlers for the posted MSI notification vector de-multiplex and invoke
device MSI handlers. CPU notifications are coalesced if interrupts
arrive at a high frequency.
The following are excluded:
- legacy devices IOAPIC and HPET (may be needed for booting, not a source
  of high MSI rates)
- VT-d's own IRQs (not remappable)
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Jacob Pan <[email protected]>
---
v2: Fold in helper function for retrieving PID address
v1: Added a warning if the effective affinity mask is not set up
---
drivers/iommu/intel/irq_remapping.c | 69 +++++++++++++++++++++++++++--
1 file changed, 65 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
index fa719936b44e..ac5f9e83943b 100644
--- a/drivers/iommu/intel/irq_remapping.c
+++ b/drivers/iommu/intel/irq_remapping.c
@@ -19,6 +19,7 @@
#include <asm/cpu.h>
#include <asm/irq_remapping.h>
#include <asm/pci-direct.h>
+#include <asm/posted_intr.h>
#include "iommu.h"
#include "../irq_remapping.h"
@@ -49,6 +50,7 @@ struct irq_2_iommu {
u16 sub_handle;
u8 irte_mask;
enum irq_mode mode;
+ bool posted_msi;
};
struct intel_ir_data {
@@ -1118,6 +1120,14 @@ static void prepare_irte(struct irte *irte, int vector, unsigned int dest)
irte->redir_hint = 1;
}
+static void prepare_irte_posted(struct irte *irte)
+{
+ memset(irte, 0, sizeof(*irte));
+
+ irte->present = 1;
+ irte->p_pst = 1;
+}
+
struct irq_remap_ops intel_irq_remap_ops = {
.prepare = intel_prepare_irq_remapping,
.enable = intel_enable_irq_remapping,
@@ -1126,6 +1136,47 @@ struct irq_remap_ops intel_irq_remap_ops = {
.enable_faulting = enable_drhd_fault_handling,
};
+#ifdef CONFIG_X86_POSTED_MSI
+
+static phys_addr_t get_pi_desc_addr(struct irq_data *irqd)
+{
+ int cpu = cpumask_first(irq_data_get_effective_affinity_mask(irqd));
+
+ if (WARN_ON(cpu >= nr_cpu_ids))
+ return 0;
+
+ return __pa(per_cpu_ptr(&posted_interrupt_desc, cpu));
+}
+
+static void intel_ir_reconfigure_irte_posted(struct irq_data *irqd)
+{
+ struct intel_ir_data *ir_data = irqd->chip_data;
+ struct irte *irte = &ir_data->irte_entry;
+ struct irte irte_pi;
+ u64 pid_addr;
+
+ pid_addr = get_pi_desc_addr(irqd);
+
+ if (!pid_addr) {
+ pr_warn("Failed to setup IRQ %d for posted mode\n", irqd->irq);
+ return;
+ }
+
+ memset(&irte_pi, 0, sizeof(irte_pi));
+
+ /* The shared IRTE is already set up as posted during alloc_irte */
+ dmar_copy_shared_irte(&irte_pi, irte);
+
+ irte_pi.pda_l = (pid_addr >> (32 - PDA_LOW_BIT)) & ~(-1UL << PDA_LOW_BIT);
+ irte_pi.pda_h = (pid_addr >> 32) & ~(-1UL << PDA_HIGH_BIT);
+
+ modify_irte(&ir_data->irq_2_iommu, &irte_pi);
+}
+
+#else
+static inline void intel_ir_reconfigure_irte_posted(struct irq_data *irqd) {}
+#endif
+
static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force)
{
struct intel_ir_data *ir_data = irqd->chip_data;
@@ -1139,8 +1190,9 @@ static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force)
irte->vector = cfg->vector;
irte->dest_id = IRTE_DEST(cfg->dest_apicid);
- /* Update the hardware only if the interrupt is in remapped mode. */
- if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)
+ if (ir_data->irq_2_iommu.posted_msi)
+ intel_ir_reconfigure_irte_posted(irqd);
+ else if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)
modify_irte(&ir_data->irq_2_iommu, irte);
}
@@ -1194,7 +1246,7 @@ static int intel_ir_set_vcpu_affinity(struct irq_data *data, void *info)
struct intel_ir_data *ir_data = data->chip_data;
struct vcpu_data *vcpu_pi_info = info;
- /* stop posting interrupts, back to remapping mode */
+ /* stop posting interrupts, back to the default mode */
if (!vcpu_pi_info) {
modify_irte(&ir_data->irq_2_iommu, &ir_data->irte_entry);
} else {
@@ -1320,6 +1372,11 @@ static void intel_irq_remapping_prepare_irte(struct intel_ir_data *data,
break;
case X86_IRQ_ALLOC_TYPE_PCI_MSI:
case X86_IRQ_ALLOC_TYPE_PCI_MSIX:
+ if (posted_msi_supported()) {
+ prepare_irte_posted(irte);
+ data->irq_2_iommu.posted_msi = 1;
+ }
+
set_msi_sid(irte,
pci_real_dma_dev(msi_desc_to_pci_dev(info->desc)));
break;
@@ -1407,7 +1464,11 @@ static int intel_irq_remapping_alloc(struct irq_domain *domain,
irq_data->hwirq = (index << 16) + i;
irq_data->chip_data = ird;
- irq_data->chip = &intel_ir_chip;
+ if (posted_msi_supported() &&
+ ((info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI) || (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSIX)))
+ irq_data->chip = &intel_ir_chip_post_msi;
+ else
+ irq_data->chip = &intel_ir_chip;
intel_irq_remapping_prepare_irte(ird, irq_cfg, info, index, i);
irq_set_status_flags(virq + i, IRQ_MOVE_PCNTXT);
}
--
2.25.1
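A note on the PDA encoding in intel_ir_reconfigure_irte_posted() above:
assuming PDA_LOW_BIT is 26 and PDA_HIGH_BIT is 32 (the values defined in
asm/irq_remapping.h), and given that the PID is 64-byte aligned so address
bits 5:0 are zero, the two expressions reduce to:

	/* Equivalent form of the PDA split (sketch, not part of the patch) */
	irte_pi.pda_l = (pid_addr >> 6)  & GENMASK(25, 0);  /* addr bits 31:6  */
	irte_pi.pda_h = (pid_addr >> 32) & GENMASK(31, 0);  /* addr bits 63:32 */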
To support posted MSIs, create a posted interrupt descriptor (PID) for each
host CPU. Later on, when IRQ CPU affinity is set up, the IOMMU's interrupt
remapping table entry (IRTE) will point to the physical address of the
matching CPU's PID.
Each PID is initialized with the owner CPU's physical APICID as the
destination.
Originally-by: Thomas Gleixner <[email protected]>
Signed-off-by: Jacob Pan <[email protected]>
---
v2: Fix xAPIC destination ID assignment; Oliver Sang reported a failure on a
system with the x2APIC opt-out BIOS option.
---
arch/x86/include/asm/hardirq.h | 3 +++
arch/x86/include/asm/posted_intr.h | 7 +++++++
arch/x86/kernel/cpu/common.c | 3 +++
arch/x86/kernel/irq.c | 23 +++++++++++++++++++++++
4 files changed, 36 insertions(+)
diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index fbc7722b87d1..01ea52859462 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -48,6 +48,9 @@ typedef struct {
DECLARE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
+#ifdef CONFIG_X86_POSTED_MSI
+DECLARE_PER_CPU_ALIGNED(struct pi_desc, posted_interrupt_desc);
+#endif
#define __ARCH_IRQ_STAT
#define inc_irq_stat(member) this_cpu_inc(irq_stat.member)
diff --git a/arch/x86/include/asm/posted_intr.h b/arch/x86/include/asm/posted_intr.h
index c682c41d4e44..4e5eea2d20e0 100644
--- a/arch/x86/include/asm/posted_intr.h
+++ b/arch/x86/include/asm/posted_intr.h
@@ -80,4 +80,11 @@ static inline bool pi_test_sn(struct pi_desc *pi_desc)
return test_bit(POSTED_INTR_SN, (unsigned long *)&pi_desc->control);
}
+#ifdef CONFIG_X86_POSTED_MSI
+extern void intel_posted_msi_init(void);
+
+#else
+static inline void intel_posted_msi_init(void) {};
+
+#endif /* X86_POSTED_MSI */
#endif /* _X86_POSTED_INTR_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 5c1e6d6be267..1a5fb657e1b6 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -68,6 +68,7 @@
#include <asm/traps.h>
#include <asm/sev.h>
#include <asm/tdx.h>
+#include <asm/posted_intr.h>
#include "cpu.h"
@@ -2219,6 +2220,8 @@ void cpu_init(void)
barrier();
x2apic_setup();
+
+ intel_posted_msi_init();
}
mmgrab(&init_mm);
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 35fde0107901..f39f6147104c 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -22,6 +22,8 @@
#include <asm/desc.h>
#include <asm/traps.h>
#include <asm/thermal.h>
+#include <asm/posted_intr.h>
+#include <asm/irq_remapping.h>
#define CREATE_TRACE_POINTS
#include <asm/trace/irq_vectors.h>
@@ -334,6 +336,27 @@ DEFINE_IDTENTRY_SYSVEC_SIMPLE(sysvec_kvm_posted_intr_nested_ipi)
}
#endif
+#ifdef CONFIG_X86_POSTED_MSI
+
+/* Posted Interrupt Descriptors for coalesced MSIs to be posted */
+DEFINE_PER_CPU_ALIGNED(struct pi_desc, posted_interrupt_desc);
+
+void intel_posted_msi_init(void)
+{
+ u32 destination;
+ u32 apic_id;
+
+ this_cpu_write(posted_interrupt_desc.nv, POSTED_MSI_NOTIFICATION_VECTOR);
+
+ /*
+ * APIC destination ID is stored in bits 8:15 while in xAPIC mode.
+ * VT-d spec. CH 9.11
+ */
+ apic_id = this_cpu_read(x86_cpu_to_apicid);
+ destination = x2apic_enabled() ? apic_id : apic_id << 8;
+ this_cpu_write(posted_interrupt_desc.ndst, destination);
+}
+#endif /* X86_POSTED_MSI */
#ifdef CONFIG_HOTPLUG_CPU
/* A cpu has been removed from cpu_online_mask. Reset irq affinities. */
--
2.25.1
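As a worked example of the destination encoding above: for APIC ID 5, NDST
becomes 0x5 in x2APIC mode but 0x500 (5 << 8) in xAPIC mode, since xAPIC keeps
the destination ID in bits 15:8.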
The mixture of bitfields and types is weird and really not intuitive; remove
the bitfields and use typed data exclusively.
Link: https://lore.kernel.org/all/20240404101735.402feec8@jacob-builder/T/#mf66e34a82a48f4d8e2926b5581eff59a122de53a
Suggested-by: Sean Christopherson <[email protected]>
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Jacob Pan <[email protected]>
---
v2:
- Replace bitfields, no more mix.
---
arch/x86/include/asm/posted_intr.h | 10 +---------
arch/x86/kvm/vmx/posted_intr.c | 4 ++--
arch/x86/kvm/vmx/vmx.c | 2 +-
3 files changed, 4 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/posted_intr.h b/arch/x86/include/asm/posted_intr.h
index acf237b2882e..c682c41d4e44 100644
--- a/arch/x86/include/asm/posted_intr.h
+++ b/arch/x86/include/asm/posted_intr.h
@@ -15,17 +15,9 @@ struct pi_desc {
};
union {
struct {
- /* bit 256 - Outstanding Notification */
- u16 on : 1,
- /* bit 257 - Suppress Notification */
- sn : 1,
- /* bit 271:258 - Reserved */
- rsvd_1 : 14;
- /* bit 279:272 - Notification Vector */
+ u16 notif_ctrl; /* Suppress and outstanding bits */
u8 nv;
- /* bit 287:280 - Reserved */
u8 rsvd_2;
- /* bit 319:288 - Notification Destination */
u32 ndst;
};
u64 control;
diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index af662312fd07..592dbb765675 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -107,7 +107,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
* handle task migration (@cpu != vcpu->cpu).
*/
new.ndst = dest;
- new.sn = 0;
+ new.notif_ctrl &= ~POSTED_INTR_SN;
/*
* Restore the notification vector; in the blocking case, the
@@ -157,7 +157,7 @@ static void pi_enable_wakeup_handler(struct kvm_vcpu *vcpu)
&per_cpu(wakeup_vcpus_on_cpu, vcpu->cpu));
raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
- WARN(pi_desc->sn, "PI descriptor SN field set before blocking");
+ WARN(pi_desc->notif_ctrl & POSTED_INTR_SN, "PI descriptor SN field set before blocking");
old.control = READ_ONCE(pi_desc->control);
do {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d94bb069bac9..50580bbfba5d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4843,7 +4843,7 @@ static void __vmx_vcpu_reset(struct kvm_vcpu *vcpu)
* or POSTED_INTR_WAKEUP_VECTOR.
*/
vmx->pi_desc.nv = POSTED_INTR_VECTOR;
- vmx->pi_desc.sn = 1;
+ vmx->pi_desc.notif_ctrl |= POSTED_INTR_SN;
}
static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
--
2.25.1
> From: Jacob Pan <[email protected]>
> Sent: Saturday, April 6, 2024 6:31 AM
>
> +#ifdef CONFIG_X86_POSTED_MSI
> +
> +/* Posted Interrupt Descriptors for coalesced MSIs to be posted */
> +DEFINE_PER_CPU_ALIGNED(struct pi_desc, posted_interrupt_desc);
'posted_msi_desc' to be more accurate?
Hi Kevin,
On Fri, 12 Apr 2024 09:16:25 +0000, "Tian, Kevin" <[email protected]>
wrote:
> > From: Jacob Pan <[email protected]>
> > Sent: Saturday, April 6, 2024 6:31 AM
> >
> > +#ifdef CONFIG_X86_POSTED_MSI
> > +
> > +/* Posted Interrupt Descriptors for coalesced MSIs to be posted */
> > +DEFINE_PER_CPU_ALIGNED(struct pi_desc, posted_interrupt_desc);
>
> 'posted_msi_desc' to be more accurate?
makes sense, will do.
Thanks,
Jacob
"KVM" here would be nice too.
On Fri, Apr 05, 2024, Jacob Pan wrote:
> The mixture of bitfields and types is weird and really not intuitive; remove
> the bitfields and use typed data exclusively.
>
> Link: https://lore.kernel.org/all/20240404101735.402feec8@jacob-builder/T/#mf66e34a82a48f4d8e2926b5581eff59a122de53a
> Suggested-by: Sean Christopherson <[email protected]>
> Suggested-by: Thomas Gleixner <[email protected]>
> Signed-off-by: Jacob Pan <[email protected]>
>
> ---
> v2:
> - Replace bitfields, no more mix.
> ---
> arch/x86/include/asm/posted_intr.h | 10 +---------
> arch/x86/kvm/vmx/posted_intr.c | 4 ++--
> arch/x86/kvm/vmx/vmx.c | 2 +-
> 3 files changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/posted_intr.h b/arch/x86/include/asm/posted_intr.h
> index acf237b2882e..c682c41d4e44 100644
> --- a/arch/x86/include/asm/posted_intr.h
> +++ b/arch/x86/include/asm/posted_intr.h
> @@ -15,17 +15,9 @@ struct pi_desc {
> };
> union {
> struct {
> - /* bit 256 - Outstanding Notification */
> - u16 on : 1,
> - /* bit 257 - Suppress Notification */
> - sn : 1,
> - /* bit 271:258 - Reserved */
> - rsvd_1 : 14;
> - /* bit 279:272 - Notification Vector */
> + u16 notif_ctrl; /* Suppress and outstanding bits */
> u8 nv;
> - /* bit 287:280 - Reserved */
> u8 rsvd_2;
> - /* bit 319:288 - Notification Destination */
> u32 ndst;
> };
> u64 control;
> diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
> index af662312fd07..592dbb765675 100644
> --- a/arch/x86/kvm/vmx/posted_intr.c
> +++ b/arch/x86/kvm/vmx/posted_intr.c
> @@ -107,7 +107,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
> * handle task migration (@cpu != vcpu->cpu).
> */
> new.ndst = dest;
> - new.sn = 0;
> + new.notif_ctrl &= ~POSTED_INTR_SN;
At the risk of creating confusion, would it make sense to add double-underscore,
non-atomic versions of the set/clear helpers for ON and SN?
I can't tell if that's a net positive versus open coding clear() and set() here
and below.
> /*
> * Restore the notification vector; in the blocking case, the
> @@ -157,7 +157,7 @@ static void pi_enable_wakeup_handler(struct kvm_vcpu *vcpu)
> &per_cpu(wakeup_vcpus_on_cpu, vcpu->cpu));
> raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
>
> - WARN(pi_desc->sn, "PI descriptor SN field set before blocking");
> + WARN(pi_desc->notif_ctrl & POSTED_INTR_SN, "PI descriptor SN field set before blocking");
This can use pi_test_sn(), as test_bit() isn't atomic, i.e. doesn't incur a LOCK.
>
> old.control = READ_ONCE(pi_desc->control);
> do {
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index d94bb069bac9..50580bbfba5d 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -4843,7 +4843,7 @@ static void __vmx_vcpu_reset(struct kvm_vcpu *vcpu)
> * or POSTED_INTR_WAKEUP_VECTOR.
> */
> vmx->pi_desc.nv = POSTED_INTR_VECTOR;
> - vmx->pi_desc.sn = 1;
> + vmx->pi_desc.notif_ctrl |= POSTED_INTR_SN;
Hi Sean,
On Tue, 16 Apr 2024 17:39:42 -0700, Sean Christopherson <[email protected]>
wrote:
> "KVM" here would be nice too.
>
> On Fri, Apr 05, 2024, Jacob Pan wrote:
> > The mixture of bitfields and types is weird and really not intuitive; remove
> > the bitfields and use typed data exclusively.
> >
> > Link: https://lore.kernel.org/all/20240404101735.402feec8@jacob-builder/T/#mf66e34a82a48f4d8e2926b5581eff59a122de53a
> > Suggested-by: Sean Christopherson <[email protected]>
> > Suggested-by: Thomas Gleixner <[email protected]>
> > Signed-off-by: Jacob Pan <[email protected]>
> >
> > ---
> > v2:
> > - Replace bitfields, no more mix.
> > ---
> > arch/x86/include/asm/posted_intr.h | 10 +---------
> > arch/x86/kvm/vmx/posted_intr.c | 4 ++--
> > arch/x86/kvm/vmx/vmx.c | 2 +-
> > 3 files changed, 4 insertions(+), 12 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/posted_intr.h b/arch/x86/include/asm/posted_intr.h
> > index acf237b2882e..c682c41d4e44 100644
> > --- a/arch/x86/include/asm/posted_intr.h
> > +++ b/arch/x86/include/asm/posted_intr.h
> > @@ -15,17 +15,9 @@ struct pi_desc {
> > };
> > union {
> > struct {
> > - /* bit 256 - Outstanding Notification */
> > - u16 on : 1,
> > - /* bit 257 - Suppress Notification */
> > - sn : 1,
> > - /* bit 271:258 - Reserved */
> > - rsvd_1 : 14;
> > - /* bit 279:272 - Notification Vector */
> > + u16 notif_ctrl; /* Suppress and outstanding bits */
> > u8 nv;
> > - /* bit 287:280 - Reserved */
> > u8 rsvd_2;
> > - /* bit 319:288 - Notification Destination */
> > u32 ndst;
> > };
> > u64 control;
> > diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
> > index af662312fd07..592dbb765675 100644
> > --- a/arch/x86/kvm/vmx/posted_intr.c
> > +++ b/arch/x86/kvm/vmx/posted_intr.c
> > @@ -107,7 +107,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
> > * handle task migration (@cpu != vcpu->cpu).
> > */
> > new.ndst = dest;
> > - new.sn = 0;
> > + new.notif_ctrl &= ~POSTED_INTR_SN;
>
> At the risk of creating confusion, would it make sense to add
> double-underscore, non-atomic versions of the set/clear helpers for ON
> and SN?
>
> I can't tell if that's a net positive versus open coding clear() and
> set() here and below.
IMHO, we can add non-atomic helpers when we have more than one user for
each operation.
I do have a stupid bug here, it should be:
- new.notif_ctrl &= ~POSTED_INTR_SN;
+ new.notif_ctrl &= ~BIT(POSTED_INTR_SN);
Same as below.
Thanks to Oliver (LKP KVM self test). I didn't catch that in my VFIO device
assignment test.
>
> > /*
> > * Restore the notification vector; in the blocking case, the
> > @@ -157,7 +157,7 @@ static void pi_enable_wakeup_handler(struct kvm_vcpu *vcpu)
> > &per_cpu(wakeup_vcpus_on_cpu, vcpu->cpu));
> > raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, vcpu->cpu));
> > - WARN(pi_desc->sn, "PI descriptor SN field set before blocking");
> > + WARN(pi_desc->notif_ctrl & POSTED_INTR_SN, "PI descriptor SN field set before blocking");
>
> This can use pi_test_sn(), as test_bit() isn't atomic, i.e. doesn't incur
> a LOCK.
make sense. will do.
> >
> > old.control = READ_ONCE(pi_desc->control);
> > do {
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index d94bb069bac9..50580bbfba5d 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -4843,7 +4843,7 @@ static void __vmx_vcpu_reset(struct kvm_vcpu *vcpu)
> > * or POSTED_INTR_WAKEUP_VECTOR.
> > */
> > vmx->pi_desc.nv = POSTED_INTR_VECTOR;
> > - vmx->pi_desc.sn = 1;
> > + vmx->pi_desc.notif_ctrl |= POSTED_INTR_SN;
Thanks,
Jacob
On Wed, Apr 17 2024 at 11:01, Jacob Pan wrote:
> On Tue, 16 Apr 2024 17:39:42 -0700, Sean Christopherson <[email protected]>
> wrote:
>> > diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
>> > index af662312fd07..592dbb765675 100644
>> > --- a/arch/x86/kvm/vmx/posted_intr.c
>> > +++ b/arch/x86/kvm/vmx/posted_intr.c
>> > @@ -107,7 +107,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
>> > * handle task migration (@cpu != vcpu->cpu).
>> > */
>> > new.ndst = dest;
>> > - new.sn = 0;
>> > + new.notif_ctrl &= ~POSTED_INTR_SN;
>>
>> At the risk of creating confusion, would it make sense to add
>> double-underscore, non-atomic versions of the set/clear helpers for ON
>> and SN?
>>
>> I can't tell if that's a net positive versus open coding clear() and
>> set() here and below.
> IMHO, we can add non-atomic helpers when we have more than one user for
> each operation.
>
> I do have a stupid bug here, it should be:
> - new.notif_ctrl &= ~POSTED_INTR_SN;
> + new.notif_ctrl &= ~BIT(POSTED_INTR_SN);
That's a perfect reason to use a proper helper.
Hi Thomas,
On Thu, 18 Apr 2024 19:30:52 +0200, Thomas Gleixner <[email protected]>
wrote:
> On Wed, Apr 17 2024 at 11:01, Jacob Pan wrote:
> > On Tue, 16 Apr 2024 17:39:42 -0700, Sean Christopherson
> > <[email protected]> wrote:
> >> > diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
> >> > index af662312fd07..592dbb765675 100644
> >> > --- a/arch/x86/kvm/vmx/posted_intr.c
> >> > +++ b/arch/x86/kvm/vmx/posted_intr.c
> >> > @@ -107,7 +107,7 @@ void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu)
> >> > * handle task migration (@cpu != vcpu->cpu).
> >> > */
> >> > new.ndst = dest;
> >> > - new.sn = 0;
> >> > + new.notif_ctrl &= ~POSTED_INTR_SN;
> >>
> >> At the risk of creating confusion, would it make sense to add
> >> double-underscore, non-atomic versions of the set/clear helpers for ON
> >> and SN?
> >>
> >> I can't tell if that's a net positive versus open coding clear() and
> >> set() here and below.
> > IMHO, we can add non-atomic helpers when we have more than one user for
> > each operation.
> >
> > I do have a stupid bug here, it should be:
> > - new.notif_ctrl &= ~POSTED_INTR_SN;
> > + new.notif_ctrl &= ~BIT(POSTED_INTR_SN);
>
> That's a perfect reason to use a proper helper.
I just proved myself wrong :) Will do.
Thanks,
Jacob
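For reference, a sketch of what the agreed-upon helper could look like in the
next revision; the double-underscore, non-atomic naming follows Sean's
suggestion above and is hypothetical until actually posted:

/* Non-atomic SN helpers for a PID copy that is not yet visible to HW */
static inline void __pi_set_sn(struct pi_desc *pi_desc)
{
	pi_desc->notif_ctrl |= BIT(POSTED_INTR_SN);
}

static inline void __pi_clear_sn(struct pi_desc *pi_desc)
{
	pi_desc->notif_ctrl &= ~BIT(POSTED_INTR_SN);
}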