Date: Fri, 6 Dec 2019 16:29:04 -0800
From: Sean Christopherson
To: Paolo Bonzini
Cc: Peter Xu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, "Dr. David Alan Gilbert", Vitaly Kuznetsov
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
Message-ID: <20191207002904.GA29396@linux.intel.com>
References: <20191129213505.18472-1-peterx@redhat.com>
 <20191129213505.18472-5-peterx@redhat.com>
 <20191202201036.GJ4063@linux.intel.com>
 <20191202211640.GF31681@xz-x1>
 <20191202215049.GB8120@linux.intel.com>
 <20191203184600.GB19877@linux.intel.com>
 <374f18f1-0592-9b70-adbb-0a72cc77d426@redhat.com>
In-Reply-To: <374f18f1-0592-9b70-adbb-0a72cc77d426@redhat.com>

On Wed, Dec 04, 2019 at 11:05:47AM +0100, Paolo Bonzini wrote:
> On 03/12/19 19:46, Sean Christopherson wrote:
> > Rather than reserve entries, what if vCPUs reserved an entire ring?  Create
> > a pool of N=nr_vcpus rings that are shared by all vCPUs.  To mark pages
> > dirty, a vCPU claims a ring, pushes the pages into the ring, and then
> > returns the ring to the pool.  If pushing pages hits the soft limit, a
> > request is made to drain the ring, and the ring is not returned to the
> > pool until it has been drained.
> >
> > Except for acquiring a ring, which likely can be heavily optimized, that'd
> > allow parallel processing (#1), and would provide a facsimile of #2, as
> > pushing more pages onto a ring would naturally increase the likelihood of
> > triggering a drain.  And it might be interesting to see the effect of
> > using different methods of ring selection, e.g. pure round robin, LRU,
> > last used on the current vCPU, etc.
>
> If you are creating nr_vcpus rings, and draining is done on the vCPU
> thread that has filled the ring, why not create nr_vcpus+1?  The current
> code would then be exactly the same as pre-claiming a ring per vCPU and
> never releasing it, and using a spinlock to claim the per-VM ring.

Because I really don't like kvm_get_running_vcpu() :-)

Binding the rings to vCPUs also makes for an inflexible API: the amount of
memory required for the rings scales linearly with the number of vCPUs, and
there may be a use case for having M:N vCPUs:rings.

That being said, I'm pretty clueless when it comes to implementing and
tuning the userspace side of this type of stuff, so feel free to ignore my
thoughts on the API.
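
For illustration, the ring-pool idea sketched above might look something
like the following.  This is purely a hypothetical sketch: none of these
structs or helpers exist in KVM, and ring sizing, the actual drain path,
and kicking userspace are all hand-waved.

#include <linux/spinlock.h>
#include <linux/types.h>

struct dirty_ring {
	spinlock_t lock;	/* held while a vCPU is pushing entries */
	u32 soft_limit;		/* past this point, ask for a drain */
	u32 nr_entries;		/* entries currently in the ring */
	u64 *gfns;		/* dirty GFNs, allocated elsewhere */
	bool draining;		/* kept out of the pool until drained */
};

struct dirty_ring_pool {
	int nr_rings;		/* e.g. nr_vcpus (or nr_vcpus + 1) */
	struct dirty_ring *rings;
};

/* Claim any free ring; round robin, LRU, last-used, etc. could slot in here. */
static struct dirty_ring *dirty_ring_claim(struct dirty_ring_pool *pool)
{
	int i;

	for (;;) {
		for (i = 0; i < pool->nr_rings; i++) {
			struct dirty_ring *ring = &pool->rings[i];

			if (!READ_ONCE(ring->draining) &&
			    spin_trylock(&ring->lock))
				return ring;
		}
		cpu_relax();
	}
}

static void dirty_ring_push(struct dirty_ring *ring, u64 gfn)
{
	ring->gfns[ring->nr_entries++] = gfn;

	/*
	 * Soft limit hit: flag the ring so it stays out of the pool; a real
	 * version would also request a drain at this point.
	 */
	if (ring->nr_entries >= ring->soft_limit)
		WRITE_ONCE(ring->draining, true);
}

/* Return the ring to the pool (a drained ring would also clear ->draining). */
static void dirty_ring_release(struct dirty_ring *ring)
{
	spin_unlock(&ring->lock);
}

With a pool like this, the nr_vcpus+1 variant described above would simply
be a pool where each vCPU pre-claims one ring and never releases it, with
the extra ring claimed under the spinlock for the per-VM (non-vCPU) path.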