Date: Wed, 1 May 2024 15:56:33 +0100
Subject: Re: [PATCH v2 17/43] arm64: RME: Allow VMM to set RIPAS
To: Jean-Philippe Brucker, Steven Price
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, Catalin Marinas,
 Marc Zyngier, Will Deacon, James Morse, Oliver Upton, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev, Ganapatrao Kulkarni
References: <20240412084056.1733704-1-steven.price@arm.com>
 <20240412084309.1733783-1-steven.price@arm.com>
 <20240412084309.1733783-18-steven.price@arm.com>
 <20240501142712.GB484338@myrica>
From: Suzuki K Poulose
In-Reply-To: <20240501142712.GB484338@myrica>

On 01/05/2024 15:27, Jean-Philippe Brucker wrote:
> On Fri, Apr 12, 2024 at 09:42:43AM +0100, Steven Price wrote:
>> +static inline bool realm_is_addr_protected(struct realm *realm,
>> +                                           unsigned long addr)
>> +{
>> +        unsigned int ia_bits = realm->ia_bits;
>> +
>> +        return !(addr & ~(BIT(ia_bits - 1) - 1));
>
> Is it enough to return !(addr & BIT(realm->ia_bits - 1))?

I thought about that too. But if we are dealing with an IPA that is
above BIT(realm->ia_bits), we don't want to treat it as a protected
address. That could only happen if the Realm is buggy (or the VMM has
tricked it), so the existing check looks safer.

>
>> +static void realm_unmap_range_shared(struct kvm *kvm,
>> +                                     int level,
>> +                                     unsigned long start,
>> +                                     unsigned long end)
>> +{
>> +        struct realm *realm = &kvm->arch.realm;
>> +        unsigned long rd = virt_to_phys(realm->rd);
>> +        ssize_t map_size = rme_rtt_level_mapsize(level);
>> +        unsigned long next_addr, addr;
>> +        unsigned long shared_bit = BIT(realm->ia_bits - 1);
>> +
>> +        if (WARN_ON(level > RME_RTT_MAX_LEVEL))
>> +                return;
>> +
>> +        start |= shared_bit;
>> +        end |= shared_bit;
>> +
>> +        for (addr = start; addr < end; addr = next_addr) {
>> +                unsigned long align_addr = ALIGN(addr, map_size);
>> +                int ret;
>> +
>> +                next_addr = ALIGN(addr + 1, map_size);
>> +
>> +                if (align_addr != addr || next_addr > end) {
>> +                        /* Need to recurse deeper */
>> +                        if (addr < align_addr)
>> +                                next_addr = align_addr;
>> +                        realm_unmap_range_shared(kvm, level + 1, addr,
>> +                                                 min(next_addr, end));
>> +                        continue;
>> +                }
>> +
>> +                ret = rmi_rtt_unmap_unprotected(rd, addr, level, &next_addr);
>> +                switch (RMI_RETURN_STATUS(ret)) {
>> +                case RMI_SUCCESS:
>> +                        break;
>> +                case RMI_ERROR_RTT:
>> +                        if (next_addr == addr) {
>> +                                next_addr = ALIGN(addr + 1, map_size);
>> +                                realm_unmap_range_shared(kvm, level + 1, addr,
>> +                                                         next_addr);
>> +                        }
>> +                        break;
>> +                default:
>> +                        WARN_ON(1);
>
> In this case we also need to return, because RMM returns with next_addr ==
> 0, causing an infinite loop. At the moment a VMM can trigger this easily
> by creating guest memfd before creating a RD, see below.

That's a good point. I agree.

>
>> +                }
>> +        }
>> +}
>> +
>> +static void realm_unmap_range_private(struct kvm *kvm,
>> +                                      unsigned long start,
>> +                                      unsigned long end)
>> +{
>> +        struct realm *realm = &kvm->arch.realm;
>> +        ssize_t map_size = RME_PAGE_SIZE;
>> +        unsigned long next_addr, addr;
>> +
>> +        for (addr = start; addr < end; addr = next_addr) {
>> +                int ret;
>> +
>> +                next_addr = ALIGN(addr + 1, map_size);
>> +
>> +                ret = realm_destroy_protected(realm, addr, &next_addr);
>> +
>> +                if (WARN_ON(ret))
>> +                        break;
>> +        }
>> +}
>> +
>> +static void realm_unmap_range(struct kvm *kvm,
>> +                              unsigned long start,
>> +                              unsigned long end,
>> +                              bool unmap_private)
>> +{
>
> Should this check for a valid kvm->arch.realm.rd, or a valid realm state?
> I'm not sure what the best place is but none of the RMM calls will succeed
> if the RD is NULL, causing some WARNs.
>
> I can trigger this with set_memory_attributes() ioctls before creating a
> RD for example.
>

True, this could be triggered by a buggy VMM in other ways, and we could
easily gate it on the Realm state >= NEW (rough sketch below).

Suzuki
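
For illustration, a minimal sketch of the gate mentioned above. The
kvm_realm_state()/REALM_STATE_NEW names are assumptions made for the sake
of the example, not quoted from the patch, and the rest of the function
body is elided:

        static void realm_unmap_range(struct kvm *kvm, unsigned long start,
                                      unsigned long end, bool unmap_private)
        {
                /*
                 * Assumed guard: do nothing until the RD has been created,
                 * so that none of the RMI calls further down are issued
                 * against a NULL rd.
                 */
                if (kvm_realm_state(kvm) < REALM_STATE_NEW)
                        return;

                /* ... existing shared/private unmap calls ... */
        }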