Subject: Re: [RFC PATCH v1 1/4] irqchip/gic-v4.1: Plumb get_irqchip_state VLPI callback
To: Marc Zyngier
CC: Shenming Lu, James Morse, Julien Thierry, Suzuki K Poulose,
 Eric Auger, Christoffer Dall, Alex Williamson, Kirti Wankhede,
 Cornelia Huck, Neo Jia
References: <20201123065410.1915-1-lushenming@huawei.com>
 <20201123065410.1915-2-lushenming@huawei.com>
 <869dbc36-c510-fd00-407a-b05e068537c8@huawei.com>
 <875z5p6ayp.wl-maz@kernel.org>
From: luojiaxing
Message-ID: <316fe41d-f004-f004-4f31-6fe6e7ff64b7@huawei.com>
Date: Tue, 1 Dec 2020 17:38:12 +0800
In-Reply-To: <875z5p6ayp.wl-maz@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/11/28 18:18, Marc Zyngier wrote:
> On Sat, 28 Nov 2020 07:19:48 +0000,
> luojiaxing wrote:
>> Hi, Shenming,
>>
>> I have a few questions about this patch.
>>
>> It is a bit late to ask, but I would like to do so before you send
>> the next version.
>>
>> On 2020/11/23 14:54, Shenming Lu wrote:
>>> From: Zenghui Yu
>>>
>>> Up to now, the irq_get_irqchip_state() callback of its_irq_chip
>>> has been left unimplemented, since there was no architectural way
>>> to get a VLPI's pending state before GICv4.1. With v4.1, however,
>>> there is one for VLPIs.
>>
>> I checked the callers of irq_get_irqchip_state() and found no
>> scenario related to VLPIs.
>>
>> For example, synchronize_irq() passes IRQCHIP_STATE_ACTIVE, so with
>> your patch it simply returns; the other callers are for vSGIs,
>> GICD_ISPENDR, GICD_ICPENDR and so on.
>
> You do realise that LPIs have no active state, right?

Yes, I know.

> And that LPIs have a radically different programming interface to the
> rest of the GIC?

It turns out my mail client filtered out the other two patches of the
series, so I was reading this patch in isolation, which is why it
looked strange to me. I have my answer now.

>> The only caller I am not sure about is vgic_get_phys_line_level():
>> is it your purpose to fill in this callback, or are there scenarios
>> I don't know about that use it?
>
> LPIs only offer edge signalling, so the concept of "line level" means
> absolutely nothing.
>
>>> With GICv4.1, after unmapping the vPE, which cleans and invalidates
>>> any caching of the VPT, we can get the VLPI's pending state by
>>> peeking at the VPT. So we implement the irq_get_irqchip_state()
>>> callback of its_irq_chip to do it.
>>>
>>> Signed-off-by: Zenghui Yu
>>> Signed-off-by: Shenming Lu
>>> ---
>>>  drivers/irqchip/irq-gic-v3-its.c | 38 ++++++++++++++++++++++++++++++++
>>>  1 file changed, 38 insertions(+)
>>>
>>> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
>>> index 0fec31931e11..287003cacac7 100644
>>> --- a/drivers/irqchip/irq-gic-v3-its.c
>>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>>> @@ -1695,6 +1695,43 @@ static void its_irq_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
>>>  	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(d), msg);
>>>  }
>>>
>>> +static bool its_peek_vpt(struct its_vpe *vpe, irq_hw_number_t hwirq)
>>> +{
>>> +	int mask = hwirq % BITS_PER_BYTE;
>>> +	void *va;
>>> +	u8 *pt;
>>> +
>>> +	va = page_address(vpe->vpt_page);
>>> +	pt = va + hwirq / BITS_PER_BYTE;
>>> +
>>> +	return !!(*pt & (1U << mask));
>>
>> How can you be sure that this pending state is the latest? Is it
>> possible that the pending state is still cached in the GICR and has
>> not yet been synchronized back to memory?
>
> That's a consequence of the vPE having been unmapped:
>
> "A VMAPP with {V,Alloc}=={0,1} cleans and invalidates any caching of
> the Virtual Pending Table and Virtual Configuration Table associated
> with the vPEID held in the GIC."

Yes. In addition to that, if a vPE is scheduled out of the PE, the
cache clearing and write-back to the VPT are also performed, I think.

However, I found this comment a little confusing to read at first,
because VMAPP is not the only thing that causes the cache clearing. I
did not see why VMAPP was mentioned here until I checked the other two
patches ("KVM: arm64: GICv4.1: Try to save hw pending state in
save_pending_tables"). So I think it may be better to add some
background description here.

Thanks
Jiaxing

>
> An implementation that wouldn't follow this simple rule would simply
> be totally broken, and unsupported.
>
>         M.