Date: Mon, 10 Jun 2024 18:27:22 +0100
From: Catalin Marinas
To: Steven Price
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, Suzuki K Poulose,
	Marc Zyngier, Will Deacon, James Morse, Oliver Upton,
	Zenghui Yu, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Joey Gouly, Alexandru Elisei,
	Christoffer Dall, Fuad Tabba, linux-coco@lists.linux.dev,
	Ganapatrao Kulkarni
Subject: Re: [PATCH v3 09/14] arm64: Enable memory encrypt for Realms
References: <20240605093006.145492-1-steven.price@arm.com>
	<20240605093006.145492-10-steven.price@arm.com>
In-Reply-To: <20240605093006.145492-10-steven.price@arm.com>

On Wed, Jun 05, 2024 at 10:30:01AM +0100, Steven Price wrote:
> +static int __set_memory_encrypted(unsigned long addr,
> +				  int numpages,
> +				  bool encrypt)
> +{
> +	unsigned long set_prot = 0, clear_prot = 0;
> +	phys_addr_t start, end;
> +	int ret;
> +
> +	if (!is_realm_world())
> +		return 0;
> +
> +	if (!__is_lm_address(addr))
> +		return -EINVAL;
> +
> +	start = __virt_to_phys(addr);
> +	end = start + numpages * PAGE_SIZE;
> +
> +	/*
> +	 * Break the mapping before we make any changes to avoid stale TLB
> +	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
> +	 */
> +	ret = __change_memory_common(addr, PAGE_SIZE * numpages,
> +				     __pgprot(0),
> +				     __pgprot(PTE_VALID));
> +
> +	if (encrypt) {
> +		clear_prot = PROT_NS_SHARED;
> +		ret = rsi_set_memory_range_protected(start, end);
> +	} else {
> +		set_prot = PROT_NS_SHARED;
> +		ret = rsi_set_memory_range_shared(start, end);
> +	}
> +
> +	if (ret)
> +		return ret;
> +
> +	set_prot |= PTE_VALID;
> +
> +	return __change_memory_common(addr, PAGE_SIZE * numpages,
> +				     __pgprot(set_prot),
> +				     __pgprot(clear_prot));
> +}

This works: it does break-before-make and also rejects vmalloc() ranges
(for the time being). One aspect I don't like is that the TLBI ends up
being done twice; it's sufficient to do it when the pte is first made
invalid. We could infer this in __change_memory_common() when set_mask
contains PTE_VALID. The call sites are restricted to this file, so a
comment would be enough to document the convention. An alternative
would be to add a bool flush argument to this function.

-- 
Catalin