Date: Mon, 7 Dec 2020 11:10:14 +0000
From: Will Deacon
To: Mark Rutland
Cc: Quentin Perret, "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)",
	"open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE",
	kernel-team@android.com, Android KVM, Catalin Marinas, open list,
	Rob Herring, Fuad Tabba, Marc Zyngier, Frank Rowand,
	"open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
Subject: Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection
Message-ID: <20201207111013.GA4379@willie-the-truck>
References: <20201117181607.1761516-1-qperret@google.com>
	<20201117181607.1761516-17-qperret@google.com>
	<20201207102002.GA3825@willie-the-truck>
	<20201207110528.GA18365@C02TD0UTHF1T.local>
In-Reply-To: <20201207110528.GA18365@C02TD0UTHF1T.local>

On Mon, Dec 07, 2020 at 11:05:45AM +0000, Mark Rutland wrote:
> On Mon, Dec 07, 2020 at 10:20:03AM +0000, Will Deacon wrote:
> > On Fri, Dec 04, 2020 at 06:01:52PM +0000, Quentin Perret wrote:
> > > On Thursday 03 Dec 2020 at 12:57:33 (+0000), Fuad Tabba wrote:
> > > >
> > > > > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > > > > +	/* Turn the MMU off */
> > > > > +	pre_disable_mmu_workaround
> > > > > +	mrs	x2, sctlr_el2
> > > > > +	bic	x3, x2, #SCTLR_ELx_M
> > > > > +	msr	sctlr_el2, x3
> > > > > +	isb
> > > > > +
> > > > > +	tlbi	alle2
> > > > > +
> > > > > +	/* Install the new pgtables */
> > > > > +	ldr	x3, [x0, #NVHE_INIT_PGD_PA]
> > > > > +	phys_to_ttbr x4, x3
> > > > > +alternative_if ARM64_HAS_CNP
> > > > > +	orr	x4, x4, #TTBR_CNP_BIT
> > > > > +alternative_else_nop_endif
> > > > > +	msr	ttbr0_el2, x4
> > > > > +
> > > > > +	/* Set the new stack pointer */
> > > > > +	ldr	x0, [x0, #NVHE_INIT_STACK_HYP_VA]
> > > > > +	mov	sp, x0
> > > > > +
> > > > > +	/* And turn the MMU back on! */
> > > > > +	dsb	nsh
> > > > > +	isb
> > > > > +	msr	sctlr_el2, x2
> > > > > +	isb
> > > > > +	ret	x1
> > > > > +SYM_FUNC_END(__kvm_init_switch_pgd)
> > > > > +
> > > >
> > > > Should the instruction cache be flushed here (ic iallu), to discard
> > > > speculatively fetched instructions?
> > >
> > > Hmm, Will? Thoughts?
> >
> > The I-cache is physically tagged, so not sure what invalidation would
> > achieve here. Fuad -- what do you think could go wrong specifically?
> While the MMU is off, instruction fetches can be made from the PoC
> rather than the PoU, so where instructions have been modified/copied and
> not cleaned to the PoC, it's possible to fetch stale copies into the
> I-caches. The physical tag doesn't prevent that.

Oh yeah, we even have a comment about that in
idmap_kpti_install_ng_mappings(). Maybe we should wrap disable_mmu and
enable_mmu in some macros so we don't have to trip over this every time
(and this would mean we could get rid of pre_disable_mmu_workaround too).

> In the regular CPU boot paths, __enable_mmu() has an IC IALLU after
> enabling the MMU to ensure that we get rid of anything stale (e.g. so
> secondaries don't miss ftrace patching, which is only cleaned to the
> PoU).
>
> That might not be a problem here, if things are suitably padded and
> never dynamically patched, but if so it's probably worth a comment.

It's fragile enough that we should just do the invalidation.

Will
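
[For readers following along: a rough sketch of the kind of wrapper macros
suggested above. The macro names, operands and exact barrier placement here
are hypothetical illustrations of the idea, not the implementation that was
merged into the kernel.]

```asm
/*
 * HYPOTHETICAL SKETCH ONLY -- not the actual kernel macros.
 * Idea: pair MMU disable/re-enable at EL2 so the I-cache invalidation
 * (and pre_disable_mmu_workaround) can never be forgotten by callers.
 */
	.macro disable_mmu_el2, saved, tmp
	pre_disable_mmu_workaround
	mrs	\saved, sctlr_el2
	bic	\tmp, \saved, #SCTLR_ELx_M
	msr	sctlr_el2, \tmp
	isb
	.endm

	.macro enable_mmu_el2, saved
	dsb	nsh
	isb
	msr	sctlr_el2, \saved
	isb
	/*
	 * Instructions may have been fetched from the PoC while the MMU
	 * was off; discard anything stale, as __enable_mmu() does in the
	 * regular CPU boot path.
	 */
	ic	iallu
	dsb	nsh
	isb
	.endm
```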