From: Zenghui Yu <yuzenghui@huawei.com>
Cc: Zenghui Yu <yuzenghui@huawei.com>
Subject: [PATCH v2 3/6] irqchip/gic-v4.1: Ensure L2 vPE table is allocated at RD level
Date: Thu, 6 Feb 2020 15:57:08 +0800
Message-ID: <20200206075711.1275-4-yuzenghui@huawei.com>
In-Reply-To: <20200206075711.1275-1-yuzenghui@huawei.com>
References: <20200206075711.1275-1-yuzenghui@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

In GICv4, we will ensure that level2 vPE table memory is allocated for
the specified vpe_id on all v4 ITSs, in its_alloc_vpe_table(). This
still works well for the typical GICv4.1 implementation, where the new
vPE table is shared between the ITSs and the RDs.

To make it explicit, let us introduce allocate_vpe_l2_table() to make
sure that the L2 tables are allocated on all v4.1 RDs. It will likely
not need to allocate any memory, because the vPE table is shared and
the L2 table is already allocated at ITS level, except for the case
where the ITS doesn't share anything (say SVPET == 0, practically
unlikely but architecturally allowed).

The implementation of allocate_vpe_l2_table() is mostly copied from
its_alloc_table_entry().

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
---
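A quick standalone sketch of the sizing arithmetic used below, in case
it helps review. This is not part of the patch; the GICR_VPROPBASER
field values are invented for illustration only:

	#include <stdio.h>

	int main(void)
	{
		/* Example values, as if decoded from GICR_VPROPBASER */
		unsigned int esz = 2;		/* ENTRY_SIZE + 1, in 64-bit words */
		unsigned int psz = 4096;	/* PAGE_SIZE decoded to bytes (4K) */
		unsigned int npg = 2;		/* SIZE + 1: pages backing the table */
		unsigned int entry_bytes = esz * 8;

		/* Flat table: vpe_id must be below npg * psz / (esz * 8) */
		printf("flat table supports %u vPEs\n", npg * psz / entry_bytes);

		/*
		 * Indirect table: each 8-byte L1 entry points to one L2 page
		 * holding psz / entry_bytes vPE entries, so the L1 index is
		 * id / (psz / entry_bytes). The patch computes this with a
		 * shift by ilog2(), as the divisor is a power of two.
		 */
		unsigned int vpes_per_l2 = psz / entry_bytes;
		unsigned int id = 1000;	/* an arbitrary example vpe_id */

		printf("vpe_id %u -> L1 index %u (L1 has %u entries)\n",
		       id, id / vpes_per_l2, npg * psz / 8);
		return 0;
	}

With these example values the flat table caps out at 512 vPEs, while
the indirect layout gives 1024 L1 entries of 256 vPEs each, and vpe_id
1000 lands in L1 entry 3. In the patch this check runs once per RD,
from the for_each_possible_cpu() loop added to its_alloc_vpe_table().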
 drivers/irqchip/irq-gic-v3-its.c | 80 ++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 0f1fe56ce0af..ae4e7b355b46 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -2443,6 +2443,72 @@ static u64 inherit_vpe_l1_table_from_rd(cpumask_t **mask)
 	return 0;
 }
 
+static bool allocate_vpe_l2_table(int cpu, u32 id)
+{
+	void __iomem *base = gic_data_rdist_cpu(cpu)->rd_base;
+	u64 val, gpsz, npg;
+	unsigned int psz, esz, idx;
+	struct page *page;
+	__le64 *table;
+
+	if (!gic_rdists->has_rvpeid)
+		return true;
+
+	val = gits_read_vpropbaser(base + SZ_128K + GICR_VPROPBASER);
+
+	esz = FIELD_GET(GICR_VPROPBASER_4_1_ENTRY_SIZE, val) + 1;
+	gpsz = FIELD_GET(GICR_VPROPBASER_4_1_PAGE_SIZE, val);
+	npg = FIELD_GET(GICR_VPROPBASER_4_1_SIZE, val) + 1;
+
+	switch (gpsz) {
+	default:
+		WARN_ON(1);
+		/* fall through */
+	case GIC_PAGE_SIZE_4K:
+		psz = SZ_4K;
+		break;
+	case GIC_PAGE_SIZE_16K:
+		psz = SZ_16K;
+		break;
+	case GIC_PAGE_SIZE_64K:
+		psz = SZ_64K;
+		break;
+	}
+
+	/* Don't allow vpe_id that exceeds single, flat table limit */
+	if (!(val & GICR_VPROPBASER_4_1_INDIRECT))
+		return (id < (npg * psz / (esz * SZ_8)));
+
+	/* Compute 1st level table index & check if that exceeds table limit */
+	idx = id >> ilog2(psz / (esz * SZ_8));
+	if (idx >= (npg * psz / GITS_LVL1_ENTRY_SIZE))
+		return false;
+
+	table = gic_data_rdist_cpu(cpu)->vpe_l1_base;
+
+	/* Allocate memory for 2nd level table */
+	if (!table[idx]) {
+		page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(psz));
+		if (!page)
+			return false;
+
+		/* Flush Lvl2 table to PoC if hw doesn't support coherency */
+		if (!(val & GICR_VPROPBASER_SHAREABILITY_MASK))
+			gic_flush_dcache_to_poc(page_address(page), psz);
+
+		table[idx] = cpu_to_le64(page_to_phys(page) | GITS_BASER_VALID);
+
+		/* Flush Lvl1 entry to PoC if hw doesn't support coherency */
+		if (!(val & GICR_VPROPBASER_SHAREABILITY_MASK))
+			gic_flush_dcache_to_poc(table + idx, GITS_LVL1_ENTRY_SIZE);
+
+		/* Ensure updated table contents are visible to RD hardware */
+		dsb(sy);
+	}
+
+	return true;
+}
+
 static int allocate_vpe_l1_table(void)
 {
 	void __iomem *vlpi_base = gic_data_rdist_vlpi_base();
@@ -2957,6 +3023,7 @@ static bool its_alloc_device_table(struct its_node *its, u32 dev_id)
 static bool its_alloc_vpe_table(u32 vpe_id)
 {
 	struct its_node *its;
+	int cpu;
 
 	/*
 	 * Make sure the L2 tables are allocated on *all* v4 ITSs. We
@@ -2979,6 +3046,19 @@ static bool its_alloc_vpe_table(u32 vpe_id)
 		return false;
 	}
 
+	/* Non v4.1? No need to iterate RDs and go back early. */
+	if (!gic_rdists->has_rvpeid)
+		return true;
+
+	/*
+	 * Make sure the L2 tables are allocated for all copies of
+	 * the L1 table on *all* v4.1 RDs.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (!allocate_vpe_l2_table(cpu, vpe_id))
+			return false;
+	}
+
 	return true;
 }
 
-- 
2.19.1