Date: Wed, 24 Aug 2022 12:21:50 -0400
From: Peter Xu
To: Marc Zyngier
Cc: Gavin Shan, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kselftest@vger.kernel.org, pbonzini@redhat.com, corbet@lwn.net,
    james.morse@arm.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
    oliver.upton@linux.dev, catalin.marinas@arm.com, will@kernel.org,
    shuah@kernel.org, seanjc@google.com, dmatlack@google.com, bgardon@google.com,
    ricarkol@google.com, zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: Re: [PATCH v1 1/5] KVM: arm64: Enable ring-based dirty memory tracking
In-Reply-To: <87y1vdr98o.wl-maz@kernel.org>

On Wed, Aug 24, 2022 at 03:45:11PM +0100, Marc Zyngier wrote:
> On Wed, 24 Aug 2022 00:19:04 +0100,
> Peter Xu wrote:
> >
> > On Tue, Aug 23, 2022 at 11:47:03PM +0100, Marc Zyngier wrote:
> > > On Tue, 23 Aug 2022 22:20:32 +0100,
> > > Peter Xu wrote:
> > > >
> > > > On Tue, Aug 23, 2022 at
> > > > 08:17:03PM +0100, Marc Zyngier wrote:
> > > > > I don't think we really need this check on the hot path. All we need
> > > > > is to make the request sticky until userspace gets their act together
> > > > > and consumes elements in the ring. Something like:
> > > > >
> > > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > > index 986cee6fbc7f..e8ed5e1af159 100644
> > > > > --- a/arch/arm64/kvm/arm.c
> > > > > +++ b/arch/arm64/kvm/arm.c
> > > > > @@ -747,6 +747,14 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
> > > > >
> > > > >  		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
> > > > >  			return kvm_vcpu_suspend(vcpu);
> > > > > +
> > > > > +		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
> > > > > +		    kvm_dirty_ring_soft_full(vcpu)) {
> > > > > +			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
> > > > > +			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> > > > > +			trace_kvm_dirty_ring_exit(vcpu);
> > > > > +			return 0;
> > > > > +		}
> > > > >  	}
> > > > >
> > > > >  	return 1;
> > > >
> > > > Right, this seems working. We can also use kvm_test_request() here.
> > > >
> > > > > However, I'm a bit concerned by the reset side of things. It iterates
> > > > > over the vcpus and expects the view of each ring to be consistent,
> > > > > even if userspace is hacking at it from another CPU. For example, I
> > > > > can't see what guarantees that the kernel observes the writes from
> > > > > userspace in the order they are being performed (the documentation
> > > > > provides no requirements other than "it must collect the dirty GFNs
> > > > > in sequence", which doesn't mean much from an ordering perspective).
> > > > >
> > > > > I can see that working on a strongly ordered architecture, but on
> > > > > something as relaxed as ARM, the CPUs may^Wwill aggressively reorder
> > > > > stuff that isn't explicitly ordered.
> > > > > I have the feeling that a CAS
> > > > > operation on both sides would be enough, but someone who actually
> > > > > understands how this works should have a look...
> > > >
> > > > I definitely don't think I 100% understand all the ordering things since
> > > > they're complicated.. but my understanding is that the reset procedure
> > > > didn't need memory barrier (unlike pushing, where we have explicit wmb),
> > > > because we assumed the userapp is not hostile so logically it should only
> > > > modify the flags which is a 32bit field, assuming atomicity guaranteed.
> > >
> > > Atomicity doesn't guarantee ordering, unfortunately.
> >
> > Right, sorry to be misleading. By "atomicity" I was trying to say that
> > the kernel will always see a consistent update of the fields.
> >
> > The ordering should also be guaranteed, because things must happen in
> > the sequence below:
> >
> >   (1) kernel publishes dirty GFN data (slot, offset)
> >   (2) kernel publishes dirty GFN flag (set to DIRTY)
> >   (3) user sees DIRTY, collects (slot, offset)
> >   (4) user sets it to RESET
> >   (5) kernel reads RESET
>
> Maybe. Maybe not. The reset could well be sitting in the CPU write
> buffer for as long as it wants and not be seen by the kernel if the
> read occurs on another CPU. And that's the crucial bit: single-CPU is
> fine, but cross-CPU isn't. Unfortunately, the userspace API is per-CPU
> on collection, and global on reset (this seems like a bad decision,
> but it is too late to fix this).

Regarding the last statement, that's something I questioned too and
discussed with Paolo, even though at the time it wasn't an outcome of
discussing memory ordering issues. IIUC the initial design was trying to
avoid a TLB flush flood when the vcpu count is large: each per-ring RESET,
even for one page, needs all vcpus to flush, so O(N^2) flushes are needed,
while with a global RESET it's O(N). So it's a trade-off, and indeed until
now I'm not sure which one is better.
E.g., with per-ring reset we can have locality in userspace too; the vcpu
thread might be able to recycle without holding global locks. Regarding
safety, I hope I covered that below in my previous reply.

> > So the ordering of single-entry is guaranteed in that when (5) happens it
> > must be after stablized (1+2).
>
> > > Take the following example: CPU0 is changing a bunch of flags for GFNs
> > > A, B, C, D that exist in the ring in that order, and CPU1 performs an
> > > ioctl to reset the page state.
> > >
> > > CPU0:
> > >   write_flag(A, KVM_DIRTY_GFN_F_RESET)
> > >   write_flag(B, KVM_DIRTY_GFN_F_RESET)
> > >   write_flag(C, KVM_DIRTY_GFN_F_RESET)
> > >   write_flag(D, KVM_DIRTY_GFN_F_RESET)
> > >   [...]
> > >
> > > CPU1:
> > >   ioctl(KVM_RESET_DIRTY_RINGS)
> > >
> > > Since CPU0 writes do not have any ordering, CPU1 can observe the
> > > writes in a sequence that have nothing to do with program order, and
> > > could for example observe that GFN A and D have been reset, but not B
> > > and C. This in turn breaks the logic in the reset code (B, C, and D
> > > don't get reset), despite userspace having followed the spec to the
> > > letter. If each was a store-release (which is the case on x86), it
> > > wouldn't be a problem, but nothing calls it in the documentation.
> > >
> > > Maybe that's not a big deal if it is expected that each CPU will issue
> > > a KVM_RESET_DIRTY_RINGS itself, ensuring that it observe its own
> > > writes. But expecting this to work across CPUs without any barrier is
> > > wishful thinking.
> >
> > I see what you meant...
> >
> > Firstly I'm actually curious whether that'll really happen if the gfns are
> > collected in something like a for loop:
> >
> >   for (i = 0; i < N; i++)
> >       collect_dirty_gfn(ring, i);
> >
> > Because since all the gfns to be read will depend on the variable "i",
> > IIUC no reordering should happen, but I'm not really sure, so more of a
> > pure question.
>
> 'i' has no influence on the write ordering.
> Each write targets a different address, and there are no inter-write
> dependencies (this concept doesn't exist other than for writes to the
> same address), so they can be reordered at will.
>
> If you want a proof of this, head to http://diy.inria.fr/www/ and run
> the MP.litmus test (which conveniently gives you a reduction of this
> problem) on both the x86 and AArch64 models. You will see that the
> reordering isn't allowed on x86, but definitely allowed on arm64.
>
> > Besides, the other thing to mention is that I think it is fine the RESET
> > ioctl didn't recycle all the gfns got set to reset state. Taking above
> > example of GFNs A-D, if when reaching the RESET ioctl only A & D's flags
> > are updated, the ioctl will recycle gfn A but stop at gfn B assuming B-D
> > are not reset. But IMHO it's okay because it means we reset partial of the
> > gfns not all of them, and it's safe to do so. It means the next ring full
> > event can come earlier because we recycled less, but that's functionally
> > safe to me.
>
> It may be safe, but it isn't what the userspace API promises.

The document says:

  After processing one or more entries in the ring buffer, userspace calls
  the VM ioctl KVM_RESET_DIRTY_RINGS to notify the kernel about it, so that
  the kernel will reprotect those collected GFNs. Therefore, the ioctl must
  be called *before* reading the content of the dirty pages.

I'd say it's not an explicit promise, but I agree with you that the
behavior is at least unclear.

Since we have a global recycle mechanism, most likely the app (e.g. the
current QEMU impl) will use the same thread to collect/reset dirty GFNs and
trigger the RESET ioctl(). In that case it's safe, IIUC, because there are
no cross-core operations.

QEMU even guarantees this by checking it (kvm_dirty_ring_reap_locked):

  if (total) {
      ret = kvm_vm_ioctl(s, KVM_RESET_DIRTY_RINGS);
      assert(ret == total);
  }

I think the assert() should never trigger, as mentioned above.
But ideally maybe it should just loop until the number of cleared gfns
matches the total.

> In other words, without further straightening of the API, this doesn't
> work as expected on relaxed memory architectures. So before this gets
> enabled on arm64, this whole ordering issue must be addressed.

How about adding some more documentation for KVM_RESET_DIRTY_RINGS on the
possibility of recycling only part of the pages, especially when the
collection and the ioctl() aren't done from the same thread? Any
suggestions will be greatly welcomed.

Thanks,

--
Peter Xu