Date: Wed, 21 Jul 2021 11:16:02 +0100
Message-ID: <87bl6w2crh.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Will Deacon <will@kernel.org>, Suleiman Souhlal <suleiman@google.com>,
	Joel Fernandes <joelaf@google.com>, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [PATCHv2 2/4] arm64: add guest pvstate support
References: <20210709043713.887098-1-senozhatsky@chromium.org>
	<20210709043713.887098-3-senozhatsky@chromium.org>
	<877dhv35ea.wl-maz@kernel.org>
	<87im142i0b.wl-maz@kernel.org>

On Wed, 21 Jul 2021 09:47:52 +0100,
Sergey Senozhatsky wrote:
> 
> On (21/07/21 09:22), Marc Zyngier wrote:
> > On Wed, 21 Jul 2021 03:05:25 +0100,
> > Sergey Senozhatsky wrote:
> > > 
> > > On (21/07/12 16:42), Marc Zyngier wrote:
> > > > > 
> > > > > PV-vcpu-state is a per-CPU struct, which, for the time being,
> > > > > holds boolean `preempted' vCPU state. During the startup,
> > > > > given that host supports PV-state, each guest vCPU sends
> > > > > a pointer to its per-CPU variable to the host as a payload
> > > > 
> > > > What is the expected memory type for this memory region? What is its
> > > > life cycle? Where is it allocated from?
> > > 
> > > Guest per-CPU area, which physical addresses is shared with the
> > > host.
> > 
> > Again: what are the memory types you expect this to be used with?
> 
> I heard your questions, I'm trying to figure out the answers now.
> 
> As of memory type - I presume you are talking about coherent vs
> non-coherent memory.

No. I'm talking about cacheable vs non-cacheable. The ARM architecture
is always coherent for memory that is inner-shareable, which applies to
any system running Linux. On the other hand, there is no architected
cache snooping when using non-cacheable accesses.

> Can guest per-CPU memory be non-coherent? Guest never writes
> anything to the region of memory it shares with the host, it only
> reads what the host writes to it. All reads and writes are done from
> CPU (no devices DMA access, etc).
> 
> Do we need any cache flushes/syncs in this case?

If you expect the guest to have non-cacheable mappings (or to run with
its MMU off at any point, which amounts to the same thing) *and* still
be able to access the shared page, then *someone* will have to perform
CMOs to make these writes visible to the PoC (unless you have FWB).

Needless to say, this would kill any sort of performance gain this
feature could hypothetically bring. Defining the scope for the access
would help mitigating this, even if that's just a sentence saying "the
shared page *must* be accessed from a cacheable mapping".

> 
> > When will the hypervisor ever stop accessing this?
> 
> KVM always access it for the vcpus that are getting scheduled out or
> scheduled in on the host side.

I was more hinting at whether there was a way to disable this at
runtime. Think of a guest using kexec, for example, where you really
don't want the hypervisor to start messing with memory that has been
since reallocated by the guest.

> > How does it work across reset?
> 
> I need to figure out what happens during reset/migration in the first
> place.

Yup.

	M.

-- 
Without deviation from the norm, progress is not possible.
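
[For context, the guest-side registration being discussed above can be
pictured with the rough sketch below. This is not the code from the
patch series: the structure layout, the ARM_SMCCC_HV_PV_STATE_INIT
function ID, and pv_state_register() are placeholders, and the
per-vCPU hook-up (e.g. a CPU hotplug callback) is omitted. The point is
only that the per-CPU variable lives in the guest's linear map, so the
guest reaches it through a cacheable mapping and hands just its
physical address to the host.]

#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/types.h>

/* Placeholder function ID, not an allocated SMCCC number. */
#define ARM_SMCCC_HV_PV_STATE_INIT					\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64,	\
			   ARM_SMCCC_OWNER_VENDOR_HYP, 0x21)

/* Hypothetical layout: one byte of state, padded to a cache line. */
struct vcpu_state {
	u8	preempted;	/* written by the host, read by the guest */
	u8	pad[63];
};

static DEFINE_PER_CPU(struct vcpu_state, vcpu_state) __aligned(64);

static int pv_state_register(void)
{
	/*
	 * The per-CPU variable sits in the linear map, so the guest
	 * accesses it through a cacheable mapping; only its physical
	 * address is passed to the host.
	 */
	phys_addr_t pa = per_cpu_ptr_to_phys(this_cpu_ptr(&vcpu_state));
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_STATE_INIT, pa, &res);
	return res.a0 == SMCCC_RET_SUCCESS ? 0 : -EINVAL;
}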