Subject: Re: [PATCH v2 0/2] MTE support for KVM guest
To: Andrew Jones
Cc: Peter Maydell, Juan Quintela, Catalin Marinas, Richard Henderson,
 qemu-devel@nongnu.org, "Dr. David Alan Gilbert",
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 Marc Zyngier, Thomas Gleixner, Will Deacon, Dave Martin,
 linux-kernel@vger.kernel.org
From: Steven Price
Date: Thu, 10 Sep 2020 10:21:04 +0100
Message-ID: <37663bb6-d3a7-6f53-d0cd-88777633a2b2@arm.com>
In-Reply-To: <20200910062958.o55apuvdxmf3uiqb@kamzik.brq.redhat.com>
References: <20200904160018.29481-1-steven.price@arm.com>
 <20200909152540.ylnrljd6aelxoxrf@kamzik.brq.redhat.com>
 <857566df-1b98-84f7-9268-d092722dc749@arm.com>
 <20200910062958.o55apuvdxmf3uiqb@kamzik.brq.redhat.com>

On 10/09/2020 07:29, Andrew Jones wrote:
> On Wed, Sep 09, 2020 at 05:04:15PM +0100, Steven Price wrote:
>> On 09/09/2020 16:25, Andrew Jones wrote:
>>> On Fri, Sep 04, 2020 at 05:00:16PM +0100, Steven Price wrote:
>>>> 2. Automatically promotes (normal host) memory given to the guest to be
>>>> tag enabled (sets PG_mte_tagged), if any VCPU has MTE enabled. The
>>>> tags are cleared if the memory wasn't previously MTE enabled.
>>>
>>> Shouldn't this be up to the guest? Or is this required in order for the
>>> guest to use tagging at all? Something like making the guest IPAs memtag
>>> capable, but if the guest doesn't enable tagging then there is no guest
>>> impact? In any case, shouldn't userspace be the one that adds PROT_MTE
>>> to the memory regions it wants the guest to be able to use tagging with,
>>> rather than KVM adding the attribute page by page?
>>
>> I think I've probably explained this badly.
>>
>> The guest can choose how to populate the stage 1 mapping - so it can
>> choose which parts of memory are accessed tagged or not.
>> However, the hypervisor cannot restrict this in stage 2 (except by e.g.
>> making the memory uncached, but that's obviously not great - however
>> devices forwarded to the guest can be handled like this).
>>
>> Because the hypervisor cannot restrict the guest's access to the tags,
>> the hypervisor must assume that all memory given to the guest could have
>> the tags accessed. So it must (a) clear any stale data from the tags,
>> and (b) ensure that the tags are preserved (e.g. when swapping pages
>> out).
>>
>
> Yes, this is how I understood it.

Ok, I've obviously misunderstood your comment instead ;)

>> Because of the above the current series automatically sets PG_mte_tagged
>> on the pages. Note that this doesn't change the mappings that the VMM
>> has (a non-PROT_MTE mapping will still not have access to the tags).
>
> But if userspace created the memslots with memory already set with
> PROT_MTE, then this wouldn't be necessary, right? And, as long as
> there's still a way to access the memory with tag checking disabled,
> then it shouldn't be a problem.

Yes, so one option would be to attempt to validate that the VMM has
provided memory pages with the PG_mte_tagged bit set (e.g. by mapping with
PROT_MTE). The tricky part here is that we support KVM_CAP_SYNC_MMU, which
means that the VMM can change the memory backing at any time - so we could
end up in user_mem_abort() discovering that a page doesn't have
PG_mte_tagged set, at which point there's no nice way of handling it
(other than silently upgrading the page), so the VM is dead.

So since enforcing that PG_mte_tagged is set isn't easy, and provides a
hard-to-debug foot gun to the VMM, I decided the better option was to let
the kernel set the bit automatically.

>>> If userspace needs to write to guest memory then it should be due to
>>> a device DMA or other specific hardware emulation. Those accesses can
>>> be done with tag checking disabled.
>>
>> Yes, the question is can the VMM (sensibly) wrap the accesses with a
>> disable/re-enable tag checking sequence. The alternative at the moment
>> is to maintain a separate (untagged) mapping for the purpose, which
>> might present its own problems.
>
> Hmm, so there's no easy way to disable tag checking when necessary? If we
> don't map the guest ram with PROT_MTE and continue setting the attribute
> in KVM, as this series does, then we don't need to worry about tag
> checking when accessing the memory, but then we can't access the tags for
> migration.

There's a "TCO" (Tag Check Override) bit in PSTATE which allows disabling
tag checking, so if it's reasonable to wrap accesses to the memory you can
simply set the TCO bit, perform the memory access and then unset TCO. That
would mean a single mapping with MTE enabled would work fine. What I don't
have a clue about is whether it's practical in the VMM to wrap guest
accesses like this.

>>
>>>>
>>>> If it's not practical to either disable tag checking in the VMM or
>>>> maintain multiple mappings then the alternatives I'm aware of are:
>>>>
>>>> * Provide a KVM-specific method to extract the tags from guest memory.
>>>>   This might also have benefits in terms of providing an easy way to
>>>>   read bulk tag data from guest memory (since the LDGM instruction
>>>>   isn't available at EL0).
>>>
>>> Maybe we need a new version of KVM_GET_DIRTY_LOG that also provides
>>> the tags for all addresses of each dirty page.
>>
>> Certainly possible, although it seems to conflate two operations: "get
>> list of dirty pages" and "get tags from page". It would also require a
>> lot of return space (size of slot / 32).
>>
>
> It would require num-set-bits * host-page-size / 16 / 2, right?

Yes, where the worst case is all bits set, which is size/32 (each 16-byte
granule carries a 4-bit tag, so two granules' tags pack into one byte).
Since you don't know at the time of the call how many bits are going to be
set, I'm not sure how you would design an API which doesn't require
preallocating for the worst case.
>>>> * Provide support for user space setting the TCMA0 or TCMA1 bits in
>>>>   TCR_EL1. These would allow the VMM to generate pointers which are
>>>>   not tag checked.
>>>
>>> So this is necessary to allow the VMM to keep tag checking enabled for
>>> itself, plus map guest memory as PROT_MTE, and write to that memory
>>> when needed?
>>
>> This is certainly one option. The architecture provides two "magic" tag
>> values (all-0s and all-1s) which can be configured using TCMAx to be
>> treated differently. The VMM could therefore construct pointers to
>> otherwise tagged memory which would be treated as untagged.
>>
>> However, Catalin's user space series doesn't at the moment expose this
>> functionality.
>>
>
> So if I understand correctly this would allow us to map the guest memory
> with PROT_MTE and still access the memory when needed. If so, then this
> sounds interesting.

Yes - you could derive a pointer which didn't perform tag checking. Note
that this also requires the rest of user space to play along (i.e.
understand that the tag value is reserved). I believe for user space we
have to use the all-0s value, which means that a standard pointer
(top byte is 0) would be unchecked.

Steve