Date: Mon, 6 Aug 2018 16:13:55 +0200
From: Christoffer Dall
To: Jia He
Cc: Marc Zyngier, Catalin Marinas, Eric Auger, Ard Biesheuvel,
	Andre Przywara, Greg Kroah-Hartman, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, Jia He
Subject: Re: [PATCH 2/2] KVM: arm/arm64: vgic: no need to call spin_lock_irqsave/restore when irq is disabled
Message-ID: <20180806141355.GF5985@e113682-lin.lund.arm.com>
References: <1533304624-43250-1-git-send-email-jia.he@hxt-semitech.com>
	<1533304624-43250-2-git-send-email-jia.he@hxt-semitech.com>
In-Reply-To: <1533304624-43250-2-git-send-email-jia.he@hxt-semitech.com>
On Fri, Aug 03, 2018 at 09:57:04PM +0800, Jia He wrote:
> Because kvm_vgic_sync_hwstate is currently called in a context in which
> interrupts are disabled (local_irq_disable/enable), there is no need to
> call spin_lock_irqsave/restore in vgic_fold_lr_state and vgic_prune_ap_list.
>
> This patch replaces them with the plain spin_lock/unlock (no-irq) versions.
>
> Signed-off-by: Jia He
> ---
>  virt/kvm/arm/vgic/vgic-v2.c |  7 ++++---
>  virt/kvm/arm/vgic/vgic-v3.c |  7 ++++---
>  virt/kvm/arm/vgic/vgic.c    | 13 +++++++------
>  3 files changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
> index a5f2e44..487f5f2 100644
> --- a/virt/kvm/arm/vgic/vgic-v2.c
> +++ b/virt/kvm/arm/vgic/vgic-v2.c
> @@ -62,7 +62,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
>  	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
>  	struct vgic_v2_cpu_if *cpuif = &vgic_cpu->vgic_v2;
>  	int lr;
> -	unsigned long flags;
> +
> +	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
>
>  	cpuif->vgic_hcr &= ~GICH_HCR_UIE;
>
> @@ -83,7 +84,7 @@
>
>  		irq = vgic_get_irq(vcpu->kvm, vcpu, intid);
>
> -		spin_lock_irqsave(&irq->irq_lock, flags);
> +		spin_lock(&irq->irq_lock);
>
>  		/* Always preserve the active bit */
>  		irq->active = !!(val & GICH_LR_ACTIVE_BIT);
> @@ -126,7 +127,7 @@
>  			vgic_irq_set_phys_active(irq, false);
>  		}
>
> -		spin_unlock_irqrestore(&irq->irq_lock, flags);
> +		spin_unlock(&irq->irq_lock);
>  		vgic_put_irq(vcpu->kvm, irq);
>  	}
>
> diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
> index cdce653..b66b513 100644
> --- a/virt/kvm/arm/vgic/vgic-v3.c
> +++ b/virt/kvm/arm/vgic/vgic-v3.c
> @@ -46,7 +46,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
>  	struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3;
>  	u32 model = vcpu->kvm->arch.vgic.vgic_model;
>  	int lr;
> -	unsigned long flags;
> +
> +	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
>
>  	cpuif->vgic_hcr &= ~ICH_HCR_UIE;
>
> @@ -75,7 +76,7 @@
>  		if (!irq)	/* An LPI could have been unmapped. */
>  			continue;
>
> -		spin_lock_irqsave(&irq->irq_lock, flags);
> +		spin_lock(&irq->irq_lock);
>
>  		/* Always preserve the active bit */
>  		irq->active = !!(val & ICH_LR_ACTIVE_BIT);
> @@ -118,7 +119,7 @@
>  			vgic_irq_set_phys_active(irq, false);
>  		}
>
> -		spin_unlock_irqrestore(&irq->irq_lock, flags);
> +		spin_unlock(&irq->irq_lock);
>  		vgic_put_irq(vcpu->kvm, irq);
>  	}
>
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index c22cea6..7cfdfbc 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -593,10 +593,11 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
>  {
>  	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
>  	struct vgic_irq *irq, *tmp;
> -	unsigned long flags;
> +
> +	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
>
>  retry:
> -	spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
> +	spin_lock(&vgic_cpu->ap_list_lock);
>
>  	list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {
>  		struct kvm_vcpu *target_vcpu, *vcpuA, *vcpuB;
> @@ -637,7 +638,7 @@
>  		/* This interrupt looks like it has to be migrated. */
>
>  		spin_unlock(&irq->irq_lock);
> -		spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
> +		spin_unlock(&vgic_cpu->ap_list_lock);
>
>  		/*
>  		 * Ensure locking order by always locking the smallest
> @@ -651,7 +652,7 @@
>  			vcpuB = vcpu;
>  		}
>
> -		spin_lock_irqsave(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
> +		spin_lock(&vcpuA->arch.vgic_cpu.ap_list_lock);
>  		spin_lock_nested(&vcpuB->arch.vgic_cpu.ap_list_lock,
>  				 SINGLE_DEPTH_NESTING);
>  		spin_lock(&irq->irq_lock);
> @@ -676,7 +677,7 @@
>
>  		spin_unlock(&irq->irq_lock);
>  		spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock);
> -		spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
> +		spin_unlock(&vcpuA->arch.vgic_cpu.ap_list_lock);
>
>  		if (target_vcpu_needs_kick) {
>  			kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu);
> @@ -686,7 +687,7 @@
>  		goto retry;
>  	}
>
> -	spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
> +	spin_unlock(&vgic_cpu->ap_list_lock);
>  }
>
>  static inline void vgic_fold_lr_state(struct kvm_vcpu *vcpu)
> --
> 1.8.3.1
>

Acked-by: Christoffer Dall