Sender: Tejun Heo
Date: Wed, 19 Jan 2022 08:30:43 -1000
From: Tejun Heo
To: Paolo Bonzini
Cc: Vipin Sharma, Michal Koutný, seanjc@google.com, lizefan.x@bytedance.com,
    hannes@cmpxchg.org, dmatlack@google.com, jiangshanlai@gmail.com,
    kvm@vger.kernel.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] KVM: Move VM's worker kthreads back to the original
    cgroups before exiting.
References: <20211222225350.1912249-1-vipinsh@google.com>
    <20220105180420.GC6464@blackbody.suse.cz>
    <7a0bc562-9f25-392d-5c05-9dbcd350d002@redhat.com>
In-Reply-To: <7a0bc562-9f25-392d-5c05-9dbcd350d002@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 19, 2022 at 07:02:53PM +0100, Paolo Bonzini wrote:
> On 1/18/22 21:39, Tejun Heo wrote:
> > So, these are normally driven by the !populated events. That's how
> > everyone else is doing it. If you want to tie the kvm workers'
> > lifetimes to the kvm process, wouldn't it be cleaner to do so from the
> > kvm side? i.e. let kvm process exit wait for the workers to be cleaned
> > up.
>
> It does. For example, kvm_mmu_post_init_vm's call to
> kvm_vm_create_worker_thread is matched with the call to kthread_stop in
> kvm_mmu_pre_destroy_vm.
>
> According to Vipin, the problem is that there's a small amount of time
> between the return from kthread_stop and the point where the cgroup can
> be removed. My understanding of the race is the following:

Okay, this is because kthread_stop() piggybacks on vfork_done to wait for
the task's exit instead of using the usual exit notification, so it only
waits till exit_mm(), which is, uhh, weird.

So, migrating is one option, I guess, albeit a rather ugly one. It'd be
nicer if we could make the kthread_stop() wait more regular, but I couldn't
find a good existing place, and routing it through the usual parent
signaling might be too complicated. Anyone have better ideas?

Thanks.

-- 
tejun
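[For readers following the thread: the vfork_done piggybacking Tejun
describes is visible in the kernel source. A condensed, annotated sketch,
paraphrased from kernel/kthread.c and kernel/fork.c around v5.16 -- this is
not a compilable unit, fields are elided, and details vary by kernel
version:]

```c
/* kernel/kthread.c: a kthread publishes its "exited" completion through
 * the vfork_done pointer normally used by vfork(). */
static int kthread(void *_create)
{
	struct kthread *self = ...;

	init_completion(&self->exited);
	current->vfork_done = &self->exited;	/* <-- the piggyback */
	...
}

int kthread_stop(struct task_struct *k)
{
	struct kthread *kthread = to_kthread(k);

	set_bit(KTHREAD_SHOULD_STOP, &kthread->flags);
	wake_up_process(k);
	/* This completion fires from complete_vfork_done(), reached via
	 * mm_release() in exit_mm() -- i.e. before do_exit() gets to
	 * cgroup_exit(). kthread_stop() can therefore return while the
	 * dying task still counts toward its cgroup's "populated" state,
	 * so an immediate rmdir of that cgroup fails with -EBUSY. */
	wait_for_completion(&kthread->exited);
	...
}
```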
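[The "!populated events" approach Tejun mentions -- waiting for the cgroup
to report itself empty before removing it -- can be sketched from
userspace against the cgroup v2 interface. A minimal shell sketch; the
cgroup path is hypothetical, and a real implementation would poll()/inotify
on cgroup.events rather than sleeping in a loop:]

```shell
# Succeeds (exit 0) while the cgroup still has live member processes,
# according to the "populated" line of its cgroup.events file (cgroup v2).
cgroup_populated() {
    grep -q '^populated 1$' "$1/cgroup.events"
}

# Sketch: remove the cgroup only once it reports itself empty.
remove_when_empty() {
    while cgroup_populated "$1"; do
        sleep 0.1   # real code would block on poll()/inotify instead
    done
    rmdir "$1"
}

# remove_when_empty /sys/fs/cgroup/vm-workers   # hypothetical cgroup path
```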