Date: Wed, 25 Oct 2023 12:25:11 +0800
From: Xu Yilun
To: Sean Christopherson
Cc: Paolo Bonzini, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Al Viro, David Matlack
Subject: Re: [PATCH 2/3] KVM: Always flush async #PF workqueue when vCPU is being destroyed
References: <20231018204624.1905300-1-seanjc@google.com>
	<20231018204624.1905300-3-seanjc@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 24, 2023 at 08:49:24AM -0700, Sean Christopherson wrote:
> On Tue, Oct 24, 2023, Xu Yilun wrote:
> > On Wed, Oct 18, 2023 at 01:46:23PM -0700, Sean Christopherson wrote:
> > > @@ -126,7 +124,19 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
> > >  				list_first_entry(&vcpu->async_pf.done,
> > >  						 typeof(*work), link);
> > >  		list_del(&work->link);
> > > +
> > > +		spin_unlock(&vcpu->async_pf.lock);
> > > +
> > > +		/*
> > > +		 * The async #PF is "done", but KVM must wait for the work item
> > > +		 * itself, i.e. async_pf_execute(), to run to completion.  If
> > > +		 * KVM is a module, KVM must ensure *no* code owned by the KVM
> > > +		 * (the module) can be run after the last call to module_put(),
> > > +		 * i.e. after the last reference to the last vCPU's file is put.
> > > +		 */
> > > +		flush_work(&work->work);
> > 
> > I see the flush_work() is inside the check:
> > 
> > 	while (!list_empty(&vcpu->async_pf.done))
> > 
> > Is it possible that all async_pf are already completed but the work item,
> > i.e. async_pf_execute(), has not completed before this check?  E.g. the
> > work is scheduled out after kvm_arch_async_page_present_queued() and
> > all APF_READY requests have been handled.  In this case the work
> > synchronization will be skipped...
> 
> Good gravy.  Yes, I assumed KVM wouldn't be so crazy as to delete the work
> before it completed, but I obviously didn't see this comment in
> async_pf_execute():
> 
> 	/*
> 	 * apf may be freed by kvm_check_async_pf_completion() after
> 	 * this point
> 	 */
> 
> The most straightforward fix I see is to also flush the work in
> kvm_check_async_pf_completion(), and then delete the comment.  The downside
> is that there's a small chance a vCPU could be delayed waiting for the work
> to complete, but that's a very, very small chance, and likely a very small
> delay.  kvm_arch_async_page_present_queued() unconditionally makes a new
> request, i.e. will effectively delay entering the guest, so the remaining
> work is really just:
> 
> 	trace_kvm_async_pf_completed(addr, cr2_or_gpa);
> 
> 	__kvm_vcpu_wake_up(vcpu);
> 
> 	mmput(mm);
> 
> since mmput() can't drop the last reference to the page tables if the vCPU
> is still alive.

OK, seems the impact is minor.  I'm good with it.
Thanks,
Yilun

> I think I'll spin off the async #PF fix to a separate series.  There are
> other tangentially related cleanups that can be done, e.g. there's no
> reason to pin the page tables while work is queued; async_pf_execute() can
> do mmget_not_zero() and then bail if the process is dying.  Then there's
> no need to do mmput() when canceling work.