2022-02-08 01:10:39

by Janosch Frank

Subject: Re: [PATCH v7 10/17] KVM: s390: pv: add mmu_notifier

On 2/4/22 16:53, Claudio Imbrenda wrote:
> Add an mmu_notifier for protected VMs. The callback function is
> triggered when the mm is torn down, and will attempt to convert all
> protected vCPUs to non-protected. This allows the mm teardown to use
> the destroy page UVC instead of export.
>
> Signed-off-by: Claudio Imbrenda <[email protected]>
> ---
> arch/s390/include/asm/kvm_host.h | 2 ++
> arch/s390/kvm/pv.c | 20 ++++++++++++++++++++
> 2 files changed, 22 insertions(+)
>
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index a22c9266ea05..1bccb8561ba9 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -19,6 +19,7 @@
> #include <linux/kvm.h>
> #include <linux/seqlock.h>
> #include <linux/module.h>
> +#include <linux/mmu_notifier.h>
> #include <asm/debug.h>
> #include <asm/cpu.h>
> #include <asm/fpu/api.h>
> @@ -921,6 +922,7 @@ struct kvm_s390_pv {
> u64 guest_len;
> unsigned long stor_base;
> void *stor_var;
> + struct mmu_notifier mmu_notifier;
> };
>
> struct kvm_arch{
> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> index f1e812a45acb..d3b8fd9b5b3e 100644
> --- a/arch/s390/kvm/pv.c
> +++ b/arch/s390/kvm/pv.c
> @@ -193,6 +193,21 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> return -EIO;
> }
>
> +static void kvm_s390_pv_mmu_notifier_release(struct mmu_notifier *subscription,
> + struct mm_struct *mm)
> +{
> + struct kvm *kvm = container_of(subscription, struct kvm, arch.pv.mmu_notifier);

Are we sure that the kvm pointer is still valid at this point?

> + u16 dummy;
> +
> + mutex_lock(&kvm->lock);

Against what are we locking here?

> + kvm_s390_cpus_from_pv(kvm, &dummy, &dummy);

I'd guess that we can't really have a second kvm_s390_cpus_from_pv()
call in flight, right? If the mm is being torn down there would be no
code left that can execute the IOCTL.

> + mutex_unlock(&kvm->lock);
> +}
> +
> +static const struct mmu_notifier_ops kvm_s390_pv_mmu_notifier_ops = {
> + .release = kvm_s390_pv_mmu_notifier_release,
> +};
> +
> int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> {
> struct uv_cb_cgc uvcb = {
> @@ -234,6 +249,11 @@ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> return -EIO;
> }
> kvm->arch.gmap->guest_handle = uvcb.guest_handle;
> + /* Add the notifier only once. No races because we hold kvm->lock */
> + if (kvm->arch.pv.mmu_notifier.ops != &kvm_s390_pv_mmu_notifier_ops) {
> + kvm->arch.pv.mmu_notifier.ops = &kvm_s390_pv_mmu_notifier_ops;
> + mmu_notifier_register(&kvm->arch.pv.mmu_notifier, kvm->mm);
> + }
> return 0;
> }
>
>



2022-02-09 09:52:31

by Claudio Imbrenda

Subject: Re: [PATCH v7 10/17] KVM: s390: pv: add mmu_notifier

On Mon, 7 Feb 2022 12:04:56 +0100
Janosch Frank <[email protected]> wrote:

> On 2/4/22 16:53, Claudio Imbrenda wrote:
> > Add an mmu_notifier for protected VMs. The callback function is
> > triggered when the mm is torn down, and will attempt to convert all
> > protected vCPUs to non-protected. This allows the mm teardown to use
> > the destroy page UVC instead of export.
> >
> > Signed-off-by: Claudio Imbrenda <[email protected]>
> > ---
> > arch/s390/include/asm/kvm_host.h | 2 ++
> > arch/s390/kvm/pv.c | 20 ++++++++++++++++++++
> > 2 files changed, 22 insertions(+)
> >
> > diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> > index a22c9266ea05..1bccb8561ba9 100644
> > --- a/arch/s390/include/asm/kvm_host.h
> > +++ b/arch/s390/include/asm/kvm_host.h
> > @@ -19,6 +19,7 @@
> > #include <linux/kvm.h>
> > #include <linux/seqlock.h>
> > #include <linux/module.h>
> > +#include <linux/mmu_notifier.h>
> > #include <asm/debug.h>
> > #include <asm/cpu.h>
> > #include <asm/fpu/api.h>
> > @@ -921,6 +922,7 @@ struct kvm_s390_pv {
> > u64 guest_len;
> > unsigned long stor_base;
> > void *stor_var;
> > + struct mmu_notifier mmu_notifier;
> > };
> >
> > struct kvm_arch{
> > diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> > index f1e812a45acb..d3b8fd9b5b3e 100644
> > --- a/arch/s390/kvm/pv.c
> > +++ b/arch/s390/kvm/pv.c
> > @@ -193,6 +193,21 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> > return -EIO;
> > }
> >
> > +static void kvm_s390_pv_mmu_notifier_release(struct mmu_notifier *subscription,
> > + struct mm_struct *mm)
> > +{
> > + struct kvm *kvm = container_of(subscription, struct kvm, arch.pv.mmu_notifier);
>
> Are we sure that the kvm pointer is still valid at this point?

It should be, because KVM is torn down after the mm, which is the whole
reason why the notifier is needed.

On the other hand, I realized that I should have unregistered the
notifier somewhere, which I forgot to do. The best place would be KVM
teardown, which would then also guarantee that the notifier can only be
called with a valid struct kvm.
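
Something along these lines is what I have in mind (just a sketch: the
helper name and its call site are made up, and the guard simply mirrors
the one used at registration time):

	/* Illustrative helper in arch/s390/kvm/pv.c, called from arch/KVM teardown */
	void kvm_s390_pv_unregister_mmu_notifier(struct kvm *kvm)
	{
		/* only unregister if kvm_s390_pv_init_vm() actually registered it */
		if (kvm->arch.pv.mmu_notifier.ops == &kvm_s390_pv_mmu_notifier_ops)
			mmu_notifier_unregister(&kvm->arch.pv.mmu_notifier, kvm->mm);
	}

As far as I can tell, mmu_notifier_unregister() also invokes the release
callback if it has not run yet and waits for any callback in flight, so
after that point the callback can no longer race with a freed struct kvm.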

>
> > + u16 dummy;
> > +
> > + mutex_lock(&kvm->lock);
>
> Against what are we locking here?
>
> > + kvm_s390_cpus_from_pv(kvm, &dummy, &dummy);
>
> I'd guess that we can't really have a second kvm_s390_cpus_from_pv()
> call in flight, right? If the mm is being torn down there would be no
> code left that can execute the IOCTL.

Yeah, that makes sense.

>
> > + mutex_unlock(&kvm->lock);
> > +}
> > +
> > +static const struct mmu_notifier_ops kvm_s390_pv_mmu_notifier_ops = {
> > + .release = kvm_s390_pv_mmu_notifier_release,
> > +};
> > +
> > int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> > {
> > struct uv_cb_cgc uvcb = {
> > @@ -234,6 +249,11 @@ int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> > return -EIO;
> > }
> > kvm->arch.gmap->guest_handle = uvcb.guest_handle;
> > + /* Add the notifier only once. No races because we hold kvm->lock */
> > + if (kvm->arch.pv.mmu_notifier.ops != &kvm_s390_pv_mmu_notifier_ops) {
> > + kvm->arch.pv.mmu_notifier.ops = &kvm_s390_pv_mmu_notifier_ops;
> > + mmu_notifier_register(&kvm->arch.pv.mmu_notifier, kvm->mm);
> > + }
> > return 0;
> > }
> >
> >
>