Date: Tue, 1 Dec 2020 13:46:07 +0000
From: Will Deacon
To: "wangyanan (Y)"
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	Marc Zyngier, Catalin Marinas, James Morse, Julien Thierry,
	Suzuki K Poulose, Gavin Shan, Quentin Perret,
	wanghaibin.wang@huawei.com, yezengruan@huawei.com,
	zhukeqian1@huawei.com, yuzenghui@huawei.com,
	jiangkunkun@huawei.com, wangjingyi11@huawei.com,
	lushenming@huawei.com
Subject: Re: [RFC PATCH 2/3] KVM: arm64: Fix handling of merging tables into a block entry
Message-ID: <20201201134606.GB26973@willie-the-truck>
References: <20201130121847.91808-1-wangyanan55@huawei.com>
 <20201130121847.91808-3-wangyanan55@huawei.com>
 <20201130133421.GB24837@willie-the-truck>
 <67e9e393-1836-eca7-4235-6f4a19fed652@huawei.com>
 <20201130160119.GA25051@willie-the-truck>
 <868a4403-10d3-80f3-4ae1-a490813c55e2@huawei.com>
In-Reply-To: <868a4403-10d3-80f3-4ae1-a490813c55e2@huawei.com>

On Tue, Dec 01, 2020 at 10:30:41AM +0800, wangyanan (Y) wrote:
> On 2020/12/1 0:01, Will Deacon wrote:
> > On Mon, Nov 30, 2020 at 11:24:19PM +0800, wangyanan (Y) wrote:
> > > On 2020/11/30 21:34, Will Deacon wrote:
> > > > On Mon, Nov 30, 2020 at 08:18:46PM +0800, Yanan Wang wrote:
> > > > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > > > index 696b6aa83faf..fec8dc9f2baa 100644
> > > > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > > > @@ -500,6 +500,9 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
> > > > >  	return 0;
> > > > >  }
> > > > >
> > > > > +static void stage2_flush_dcache(void *addr, u64 size);
> > > > > +static bool stage2_pte_cacheable(kvm_pte_t pte);
> > > > > +
> > > > >  static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > > > >  				struct stage2_map_data *data)
> > > > >  {
> > > > > @@ -507,9 +510,17 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > > > >  	struct page *page = virt_to_page(ptep);
> > > > >
> > > > >  	if (data->anchor) {
> > > > > -		if (kvm_pte_valid(pte))
> > > > > +		if (kvm_pte_valid(pte)) {
> > > > > +			kvm_set_invalid_pte(ptep);
> > > > > +			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
> > > > > +				     addr, level);
> > > > >  			put_page(page);
> > > >
> > > > This doesn't make sense to me: the page-table pages we're walking when the
> > > > anchor is set are not accessible to the hardware walker because we unhooked
> > > > the entire sub-table in stage2_map_walk_table_pre(), which has the necessary
> > > > TLB invalidation.
> > > >
> > > > Are you seeing a problem in practice here?
> > >
> > > Yes, I did indeed find a problem in practice.
> > >
> > > When the migration was cancelled, a TLB conflict abort was found in the
> > > guest.
> > >
> > > This problem was fixed before the rework of the page table code; you can
> > > have a look at the following two links:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3c3736cd32bf5197aed1410ae826d2d254a5b277
> > >
> > > https://lists.cs.columbia.edu/pipermail/kvmarm/2019-March/035031.html
> >
> > Ok, let's go through this, because I still don't see the bug. Please correct
> > me if you spot any mistakes:
> >
> >   1. We have a block mapping for X => Y
> >   2. Dirty logging is enabled, so the block mapping is write-protected and
> >      ends up being split into page mappings
> >   3. Dirty logging is disabled due to a failed migration.
> >
> > --- At this point, I think we agree that the state of the MMU is alright ---
> >
> >   4. We take a stage-2 fault and want to reinstall the block mapping:
> >
> >      a. kvm_pgtable_stage2_map() is invoked to install the block mapping
> >      b. stage2_map_walk_table_pre() finds a table where we would like to
> >         install the block:
> >
> >         i.   The anchor is set to point at this entry
> >         ii.  The entry is made invalid
> >         iii. We invalidate the TLB for the input address. This is
> >              TLBI IPAS2SE1IS without level hint and then TLBI VMALLE1IS.
> >
> >         *** At this point, the page-table pointed to by the old table entry
> >             is not reachable to the hardware walker ***
> >
> >      c. stage2_map_walk_leaf() is called for each leaf entry in the
> >         now-unreachable subtree, dropping page-references for each valid
> >         entry it finds.
> >      d. stage2_map_walk_table_post() is eventually called for the entry
> >         which we cleared back in b.ii, so we install the new block mapping.
> >
> > You are proposing to add additional TLB invalidation to (c), but I don't
> > think that is necessary, thanks to the invalidation already performed in
> > b.iii. What am I missing here?
>
> The point is at b.iii, where the TLBI is not enough. There are many page
> mappings that we need to merge into a block mapping.
>
> We invalidate the TLB for the input address without a level hint at b.iii,
> but this operation just flushes the TLB for one page mapping; there are
> still TLB entries for the other page mappings in the cache, and the MMU
> hardware walker can still hit these entries next time.

Ah, yes, I see. Thanks. I hadn't considered the case where there are table
entries beneath the anchor. So how about the diff below?

Will

--->8

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0271b4a3b9fe..12526d8c7ae4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -493,7 +493,7 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 		return 0;
 
 	kvm_set_invalid_pte(ptep);
-	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
+	/* TLB invalidation is deferred until the _post handler */
 	data->anchor = ptep;
 	return 0;
 }
@@ -547,11 +547,21 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
 				      struct stage2_map_data *data)
 {
 	int ret = 0;
+	kvm_pte_t pte = *ptep;
 
 	if (!data->anchor)
 		return 0;
 
-	free_page((unsigned long)kvm_pte_follow(*ptep));
+	kvm_set_invalid_pte(ptep);
+
+	/*
+	 * Invalidate the whole stage-2, as we may have numerous leaf
+	 * entries below us which would otherwise need invalidating
+	 * individually.
+	 */
+	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
+
+	free_page((unsigned long)kvm_pte_follow(pte));
 	put_page(virt_to_page(ptep));
 
 	if (data->anchor == ptep) {