Subject: Re: [PATCH v2 2/5] KVM: x86: Add kvm_register_and_fire_irq_mask_notifier()
To: eric.auger@redhat.com, Sean Christopherson, Paolo Bonzini, kvm@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", linux-kernel@vger.kernel.org, Alex Williamson, Rong L Liu,
 Zhenyu Wang, Tomasz Nowicki, Grzegorz Jaszczyk, upstream@semihalf.com,
 Dmitry Torokhov, Marc Zyngier
References: <20220805193919.1470653-1-dmy@semihalf.com>
 <20220805193919.1470653-3-dmy@semihalf.com>
 <3fe7398d-6496-717c-c0f0-f7af3c69cdd0@redhat.com>
From: Dmytro Maluka
Date: Wed, 10 Aug 2022 01:56:30 +0200
In-Reply-To: <3fe7398d-6496-717c-c0f0-f7af3c69cdd0@redhat.com>

Hi Eric,

On 8/9/22 10:43 PM, Eric Auger wrote:
> Hi Dmytro,
>
> On 8/5/22 21:39, Dmytro Maluka wrote:
>> In order to implement postponing resamplefd notification until an
>> interrupt is unmasked, we need not only to track changes of the
>> interrupt mask state (which is already possible with
>> kvm_register_irq_mask_notifier()) but also to know its initial
>> mask state before any mask notifier has fired.
>>
>> Moreover, we need to do this initial check of the IRQ mask state in a
>> race-free way, to ensure that we will not miss any further mask or
>> unmask events after we check the initial mask state.
>>
>> So implement kvm_register_and_fire_irq_mask_notifier() which atomically
>> registers an IRQ mask notifier and calls it with the current mask value
>> of the IRQ. It does that using the same locking order as when calling
>> notifier normally via kvm_fire_mask_notifiers(), to prevent deadlocks.
>>
>> Its implementation needs to be arch-specific since it relies on
>> arch-specific synchronization (e.g. ioapic->lock and pic->lock on x86,
>> or a per-IRQ lock on ARM vGIC) for serializing our initial reading of
>> the IRQ mask state with a pending change of this mask state.
>>
>> For now implement it for x86 only, and for other archs add a weak dummy
>> implementation which doesn't really call the notifier (as other archs
>> don't currently implement calling notifiers normally via
>> kvm_fire_mask_notifiers() either, i.e. registering mask notifiers has no
>> effect on those archs anyway).
>>
>> Signed-off-by: Dmytro Maluka
>> Link: https://lore.kernel.org/lkml/c7b7860e-ae3a-7b98-e97e-28a62470c470@semihalf.com/
>> ---
>>  arch/x86/include/asm/kvm_host.h |  1 +
>>  arch/x86/kvm/i8259.c            |  6 ++++
>>  arch/x86/kvm/ioapic.c           |  6 ++++
>>  arch/x86/kvm/ioapic.h           |  1 +
>>  arch/x86/kvm/irq_comm.c         | 57 +++++++++++++++++++++++++++++++++
>>  include/linux/kvm_host.h        |  4 +++
>>  virt/kvm/eventfd.c              | 31 ++++++++++++++++--
>>  7 files changed, 104 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index dc76617f11c1..cf0571ed2968 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -1834,6 +1834,7 @@ static inline int __kvm_irq_line_state(unsigned long *irq_state,
>>
>>  int kvm_pic_set_irq(struct kvm_pic *pic, int irq, int irq_source_id, int level);
>>  void kvm_pic_clear_all(struct kvm_pic *pic, int irq_source_id);
>> +bool kvm_pic_irq_is_masked(struct kvm_pic *s, int irq);
>>
>>  void kvm_inject_nmi(struct kvm_vcpu *vcpu);
>>
>> diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
>> index e1bb6218bb96..1eb3127f6047 100644
>> --- a/arch/x86/kvm/i8259.c
>> +++ b/arch/x86/kvm/i8259.c
>> @@ -211,6 +211,12 @@ void kvm_pic_clear_all(struct kvm_pic *s, int irq_source_id)
>>          pic_unlock(s);
>>  }
>>
>> +/* Called with s->lock held. */
>> +bool kvm_pic_irq_is_masked(struct kvm_pic *s, int irq)
>> +{
>> +        return !!(s->pics[irq >> 3].imr & (1 << irq));
>> +}
>> +
>>  /*
>>   * acknowledge interrupt 'irq'
>>   */
>> diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
>> index 765943d7cfa5..fab11de1f885 100644
>> --- a/arch/x86/kvm/ioapic.c
>> +++ b/arch/x86/kvm/ioapic.c
>> @@ -478,6 +478,12 @@ void kvm_ioapic_clear_all(struct kvm_ioapic *ioapic, int irq_source_id)
>>          spin_unlock(&ioapic->lock);
>>  }
>>
>> +/* Called with ioapic->lock held. */
>> +bool kvm_ioapic_irq_is_masked(struct kvm_ioapic *ioapic, int irq)
>> +{
>> +        return !!ioapic->redirtbl[irq].fields.mask;
>> +}
>> +
>>  static void kvm_ioapic_eoi_inject_work(struct work_struct *work)
>>  {
>>          int i;
>> diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
>> index 539333ac4b38..fe1f51319992 100644
>> --- a/arch/x86/kvm/ioapic.h
>> +++ b/arch/x86/kvm/ioapic.h
>> @@ -114,6 +114,7 @@ void kvm_ioapic_destroy(struct kvm *kvm);
>>  int kvm_ioapic_set_irq(struct kvm_ioapic *ioapic, int irq, int irq_source_id,
>>                         int level, bool line_status);
>>  void kvm_ioapic_clear_all(struct kvm_ioapic *ioapic, int irq_source_id);
>> +bool kvm_ioapic_irq_is_masked(struct kvm_ioapic *ioapic, int irq);
>>  void kvm_get_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
>>  void kvm_set_ioapic(struct kvm *kvm, struct kvm_ioapic_state *state);
>>  void kvm_ioapic_scan_entry(struct kvm_vcpu *vcpu,
>> diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
>> index f27e4c9c403e..4bd4218821a2 100644
>> --- a/arch/x86/kvm/irq_comm.c
>> +++ b/arch/x86/kvm/irq_comm.c
>> @@ -234,6 +234,63 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
>>          mutex_unlock(&kvm->irq_lock);
>>  }
>>
>> +void kvm_register_and_fire_irq_mask_notifier(struct kvm *kvm, int irq,
>> +                                             struct kvm_irq_mask_notifier *kimn)
>> +{
>> +        struct kvm_pic *pic = kvm->arch.vpic;
>> +        struct kvm_ioapic *ioapic = kvm->arch.vioapic;
>> +        struct kvm_kernel_irq_routing_entry entries[KVM_NR_IRQCHIPS];
>> +        struct kvm_kernel_irq_routing_entry *pic_e = NULL, *ioapic_e = NULL;
>> +        int idx, i, n;
>> +        bool masked;
>> +
>> +        mutex_lock(&kvm->irq_lock);
>> +
>> +        /*
>> +         * Not possible to detect if the guest uses the PIC or the
>> +         * IOAPIC. So assume the interrupt to be unmasked iff it is
>> +         * unmasked in at least one of both.
>> +         */
>> +        idx = srcu_read_lock(&kvm->irq_srcu);
>> +        n = kvm_irq_map_gsi(kvm, entries, irq);
>> +        srcu_read_unlock(&kvm->irq_srcu, idx);
>> +
>> +        for (i = 0; i < n; i++) {
>> +                if (entries[i].type != KVM_IRQ_ROUTING_IRQCHIP)
>> +                        continue;
>> +
>> +                switch (entries[i].irqchip.irqchip) {
>> +                case KVM_IRQCHIP_PIC_MASTER:
>> +                case KVM_IRQCHIP_PIC_SLAVE:
>> +                        pic_e = &entries[i];
>> +                        break;
>> +                case KVM_IRQCHIP_IOAPIC:
>> +                        ioapic_e = &entries[i];
>> +                        break;
>> +                default:
>> +                        break;
>> +                }
>> +        }
>> +
>> +        if (pic_e)
>> +                spin_lock(&pic->lock);
>> +        if (ioapic_e)
>> +                spin_lock(&ioapic->lock);
>> +
>> +        __kvm_register_irq_mask_notifier(kvm, irq, kimn);
>> +
>> +        masked = (!pic_e || kvm_pic_irq_is_masked(pic, pic_e->irqchip.pin)) &&
>> +                 (!ioapic_e || kvm_ioapic_irq_is_masked(ioapic, ioapic_e->irqchip.pin));
> Looks a bit cryptic to me. Don't you want pic_e && masked on pic ||
> ioapic_e && masked on ioapic?

That would be quite different: it would be "masked on at least one of
both", while I want "masked on both (if both are used)".
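
To make the intended semantics concrete, here is the same logic pulled out
into a standalone snippet (plain C, illustration only; the helper name
irq_masked is made up, this is not something I'm proposing to add):

#include <stdbool.h>
#include <stdio.h>

/*
 * An irqchip the IRQ is not routed through is "don't care", so the IRQ
 * counts as masked only if it is masked on every chip it is routed through.
 */
static bool irq_masked(bool has_pic, bool pic_masked,
                       bool has_ioapic, bool ioapic_masked)
{
        return (!has_pic || pic_masked) && (!has_ioapic || ioapic_masked);
}

int main(void)
{
        /* Routed through both, masked on the PIC but unmasked on the IOAPIC:
         * unmasked on at least one chip, so treated as unmasked. */
        printf("%d\n", irq_masked(true, true, true, false));   /* prints 0 */

        /* Routed through the IOAPIC only and masked there: treated as masked. */
        printf("%d\n", irq_masked(false, false, true, true));  /* prints 1 */

        return 0;
}

which is exactly what the comment above the routing loop describes
("unmasked iff it is unmasked in at least one of both").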
>
>> +        kimn->func(kimn, masked);
>> +
>> +        if (ioapic_e)
>> +                spin_unlock(&ioapic->lock);
>> +        if (pic_e)
>> +                spin_unlock(&pic->lock);
>> +
>> +        mutex_unlock(&kvm->irq_lock);
>> +}
>> +
>>  bool kvm_arch_can_set_irq_routing(struct kvm *kvm)
>>  {
>>          return irqchip_in_kernel(kvm);
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index dd5f14e31996..55233eb18eb4 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -1608,8 +1608,12 @@ void kvm_register_irq_ack_notifier(struct kvm *kvm,
>>                                     struct kvm_irq_ack_notifier *kian);
>>  void kvm_unregister_irq_ack_notifier(struct kvm *kvm,
>>                                       struct kvm_irq_ack_notifier *kian);
>> +void __kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
>> +                                      struct kvm_irq_mask_notifier *kimn);
>>  void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
>>                                      struct kvm_irq_mask_notifier *kimn);
>> +void kvm_register_and_fire_irq_mask_notifier(struct kvm *kvm, int irq,
>> +                                             struct kvm_irq_mask_notifier *kimn);
>>  void kvm_unregister_irq_mask_notifier(struct kvm *kvm, int irq,
>>                                        struct kvm_irq_mask_notifier *kimn);
>>  void kvm_fire_mask_notifiers(struct kvm *kvm, unsigned irqchip, unsigned pin,
>> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
>> index 39403d9fbdcc..3007d956b626 100644
>> --- a/virt/kvm/eventfd.c
>> +++ b/virt/kvm/eventfd.c
>> @@ -519,15 +519,42 @@ void kvm_unregister_irq_ack_notifier(struct kvm *kvm,
>>          kvm_arch_post_irq_ack_notifier_list_update(kvm);
>>  }
>>
>> +void __kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
>> +                                      struct kvm_irq_mask_notifier *kimn)
>> +{
>> +        kimn->irq = irq;
>> +        hlist_add_head_rcu(&kimn->link, &kvm->irq_mask_notifier_list);
>> +}
>> +
>>  void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
>>                                      struct kvm_irq_mask_notifier *kimn)
>>  {
>>          mutex_lock(&kvm->irq_lock);
>> -        kimn->irq = irq;
>> -        hlist_add_head_rcu(&kimn->link, &kvm->irq_mask_notifier_list);
>> +        __kvm_register_irq_mask_notifier(kvm, irq, kimn);
>>          mutex_unlock(&kvm->irq_lock);
>>  }
>>
>> +/*
>> + * kvm_register_and_fire_irq_mask_notifier() registers the notifier and
>> + * immediately calls it with the current mask value of the IRQ. It does
>> + * that atomically, so that we will find out the initial mask state of
>> + * the IRQ and will not miss any further mask or unmask events. It does
>> + * that using the same locking order as when calling notifier normally
>> + * via kvm_fire_mask_notifiers(), to prevent deadlocks.
> you may document somewhere that it must be called before
> kvm_register_irq_ack_notifier()

Actually I think it would still be ok to call it after
kvm_register_irq_ack_notifier(), not necessarily before. We could then
miss a mask notification between kvm_register_irq_ack_notifier() and
kvm_register_and_fire_irq_mask_notifier(), but that is fine, since
kvm_register_and_fire_irq_mask_notifier() would then immediately send a
new notification with the up-to-date mask value.

>
>> + *
>> + * Implementation is arch-specific since it relies on arch-specific
>> + * (irqchip-specific) synchronization. Below is a weak dummy
>> + * implementation for archs not implementing it yet, as those archs
>> + * don't implement calling notifiers normally via
>> + * kvm_fire_mask_notifiers() either, i.e. registering mask notifiers
>> + * has no effect on those archs anyway.
> I would advise you to put Marc in the loop for the whole series (adding
> him in CC).

Ok.
>
> Thanks
>
> Eric
>> + */
>> +void __weak kvm_register_and_fire_irq_mask_notifier(struct kvm *kvm, int irq,
>> +                                                    struct kvm_irq_mask_notifier *kimn)
>> +{
>> +        kvm_register_irq_mask_notifier(kvm, irq, kimn);
>> +}
>> +
>>  void kvm_unregister_irq_mask_notifier(struct kvm *kvm, int irq,
>>                                        struct kvm_irq_mask_notifier *kimn)
>>  {
>