Date: Sat, 20 Jan 2024 23:16:49 +0800
From: Xu Yilun
To: Sean Christopherson
Cc: Paolo Bonzini, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack
Subject: Re: [PATCH 3/4] KVM: Get reference to VM's address space in the async #PF worker
References: <20240110011533.503302-1-seanjc@google.com> <20240110011533.503302-4-seanjc@google.com>
In-Reply-To: <20240110011533.503302-4-seanjc@google.com>

On Tue, Jan 09, 2024 at 05:15:32PM -0800, Sean Christopherson wrote:
> Get a reference to the target VM's address space in async_pf_execute()
> instead of gifting a reference from kvm_setup_async_pf().  Keeping the
> address space alive just to service an async #PF is counter-productive,
> i.e. if the process is exiting and all vCPUs are dead, then NOT doing
> get_user_pages_remote() and freeing the address space asap is desirable.
>
> Handling the mm reference entirely within async_pf_execute() also
> simplifies the async #PF flows as a whole, e.g. it's not immediately
> obvious when the worker task vs. the vCPU task is responsible for putting
> the gifted mm reference.
>
> Signed-off-by: Sean Christopherson
> ---
>  include/linux/kvm_host.h |  1 -
>  virt/kvm/async_pf.c      | 32 ++++++++++++++++++--------------
>  2 files changed, 18 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 7e7fd25b09b3..bbfefd7e612f 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -238,7 +238,6 @@ struct kvm_async_pf {
>  	struct list_head link;
>  	struct list_head queue;
>  	struct kvm_vcpu *vcpu;
> -	struct mm_struct *mm;
>  	gpa_t cr2_or_gpa;
>  	unsigned long addr;
>  	struct kvm_arch_async_pf arch;
> diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
> index d5dc50318aa6..c3f4f351a2ae 100644
> --- a/virt/kvm/async_pf.c
> +++ b/virt/kvm/async_pf.c
> @@ -46,8 +46,8 @@ static void async_pf_execute(struct work_struct *work)
>  {
>  	struct kvm_async_pf *apf =
>  		container_of(work, struct kvm_async_pf, work);
> -	struct mm_struct *mm = apf->mm;
>  	struct kvm_vcpu *vcpu = apf->vcpu;
> +	struct mm_struct *mm = vcpu->kvm->mm;
>  	unsigned long addr = apf->addr;
>  	gpa_t cr2_or_gpa = apf->cr2_or_gpa;
>  	int locked = 1;
> @@ -56,16 +56,24 @@ static void async_pf_execute(struct work_struct *work)
>  	might_sleep();
>
>  	/*
> -	 * This work is run asynchronously to the task which owns
> -	 * mm and might be done in another context, so we must
> -	 * access remotely.
> +	 * Attempt to pin the VM's host address space, and simply skip gup() if
> +	 * acquiring a pin fail, i.e. if the process is exiting.  Note, KVM
> +	 * holds a reference to its associated mm_struct until the very end of
> +	 * kvm_destroy_vm(), i.e. the struct itself won't be freed before this
> +	 * work item is fully processed.
>  	 */
> -	mmap_read_lock(mm);
> -	get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, &locked);
> -	if (locked)
> -		mmap_read_unlock(mm);
> -	mmput(mm);
> +	if (mmget_not_zero(mm)) {
> +		mmap_read_lock(mm);
> +		get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, &locked);
> +		if (locked)
> +			mmap_read_unlock(mm);
> +		mmput(mm);
> +	}
>
> +	/*
> +	 * Notify and kick the vCPU even if faulting in the page failed, e.g.

How about when the process is exiting? Could we just skip the
following?

Thanks,
Yilun

> +	 * so that the vCPU can retry the fault synchronously.
> +	 */
>  	if (IS_ENABLED(CONFIG_KVM_ASYNC_PF_SYNC))
>  		kvm_arch_async_page_present(vcpu, apf);
>
> @@ -129,10 +137,8 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
>  #ifdef CONFIG_KVM_ASYNC_PF_SYNC
>  		flush_work(&work->work);
>  #else
> -		if (cancel_work_sync(&work->work)) {
> -			mmput(work->mm);
> +		if (cancel_work_sync(&work->work))
>  			kmem_cache_free(async_pf_cache, work);
> -		}
>  #endif
>  		spin_lock(&vcpu->async_pf.lock);
>  	}
> @@ -211,8 +217,6 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  	work->cr2_or_gpa = cr2_or_gpa;
>  	work->addr = hva;
>  	work->arch = *arch;
> -	work->mm = current->mm;
> -	mmget(work->mm);
>
>  	INIT_WORK(&work->work, async_pf_execute);
>
> --
> 2.43.0.472.g3155946c3a-goog
>
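
To make the question above concrete, here is a rough, untested sketch of one
way "skip the following" could look in async_pf_execute(). The "exiting" flag
is hypothetical and not part of the posted patch, and whether skipping the
notification (and what that means for the rest of the completion handling) is
actually safe is exactly what is being asked:

	static void async_pf_execute(struct work_struct *work)
	{
		struct kvm_async_pf *apf = container_of(work, struct kvm_async_pf, work);
		struct kvm_vcpu *vcpu = apf->vcpu;
		struct mm_struct *mm = vcpu->kvm->mm;
		unsigned long addr = apf->addr;
		int locked = 1;
		bool exiting = true;	/* hypothetical flag, not in the posted patch */

		might_sleep();

		/* Pin the mm only if the owning process is still alive. */
		if (mmget_not_zero(mm)) {
			exiting = false;
			mmap_read_lock(mm);
			get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, &locked);
			if (locked)
				mmap_read_unlock(mm);
			mmput(mm);
		}

		/*
		 * Yilun's question: when the process is exiting, could the
		 * "page present" notification below be skipped entirely?
		 */
		if (!exiting && IS_ENABLED(CONFIG_KVM_ASYNC_PF_SYNC))
			kvm_arch_async_page_present(vcpu, apf);

		/*
		 * The rest of the worker (moving the work item to the done
		 * list, waking the vCPU, dropping the vCPU reference) is
		 * unchanged from the posted patch and omitted here.
		 */
	}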