Date: Thu, 24 Nov 2022 18:00:24 -0500
From: Waiman Long
To: Haifeng Xu
Cc: lizefan.x@bytedance.com, tj@kernel.org, hannes@cmpxchg.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] cgroup/cpuset: Optimize update_tasks_nodemask()
Message-ID: <1a997ea7-bb63-1710-14d6-c3b88a22bdb3@redhat.com>
In-Reply-To: <21e73dad-c6d0-21ea-dcdf-355b71c8537b@shopee.com>
References: <20221123082157.71326-1-haifeng.xu@shopee.com>
 <2ac6f207-e08a-2a7f-01ae-dfaf15eefaf6@redhat.com>
 <4de8821b-e0c0-bf63-4d76-b0ce208cce3b@shopee.com>
 <21e73dad-c6d0-21ea-dcdf-355b71c8537b@shopee.com>

On 11/24/22 02:49, Haifeng Xu wrote:
>
> On 2022/11/24 12:24, Waiman Long wrote:
>> On 11/23/22 22:33, Haifeng Xu wrote:
>>> On 2022/11/24 04:23, Waiman Long wrote:
>>>> On 11/23/22 03:21, haifeng.xu wrote:
>>>>> When changing 'cpuset.mems' under some cgroup, the system can hang
>>>>> for a long time. From the dmesg, many processes or threads are
>>>>> stuck in fork/exit. The reason is shown below.
>>>>>
>>>>> thread A:
>>>>> cpuset_write_resmask /* takes cpuset_rwsem */
>>>>>     ...
>>>>>       update_tasks_nodemask
>>>>>         mpol_rebind_mm /* waits for mmap_lock */
>>>>>
>>>>> thread B:
>>>>> worker_thread
>>>>>     ...
>>>>>       cpuset_migrate_mm_workfn
>>>>>         do_migrate_pages /* takes mmap_lock */
>>>>>
>>>>> thread C:
>>>>> cgroup_procs_write /* takes cgroup_mutex and
>>>>> cgroup_threadgroup_rwsem */
>>>>>     ...
>>>>>       cpuset_can_attach
>>>>>         percpu_down_write /* waits for cpuset_rwsem */
>>>>>
>>>>> Once the nodemasks of the cpuset are updated, thread A wakes up
>>>>> thread B to migrate the mm. But while thread A iterates through
>>>>> all tasks, including child threads and the group leader, it has
>>>>> to wait for the mmap_lock, which has already been taken by thread
>>>>> B. Unfortunately, thread C wants to migrate tasks into the cgroup
>>>>> at this moment; it must wait for thread A to release cpuset_rwsem.
>>>>> If thread B spends much time migrating the mm, the fork/exit
>>>>> paths, which acquire cgroup_threadgroup_rwsem, also have to wait
>>>>> for a long time.
>>>>>
>>>>> There is no need to migrate the mm of child threads, since it is
>>>>> shared with the group leader. Just iterate through the group
>>>>> leader only.
>>>>>
>>>>> Signed-off-by: haifeng.xu
>>>>> ---
>>>>>   kernel/cgroup/cpuset.c | 3 +++
>>>>>   1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>>> index 589827ccda8b..43cbd09546d0 100644
>>>>> --- a/kernel/cgroup/cpuset.c
>>>>> +++ b/kernel/cgroup/cpuset.c
>>>>> @@ -1968,6 +1968,9 @@ static void update_tasks_nodemask(struct cpuset *cs)
>>>>>             cpuset_change_task_nodemask(task, &newmems);
>>>>>
>>>>> +        if (!thread_group_leader(task))
>>>>> +            continue;
>>>>> +
>>>>>           mm = get_task_mm(task);
>>>>>           if (!mm)
>>>>>               continue;
>>>> Could you try the attached test patch to see if it can fix your
>>>> problem? Something along the line of this patch will be more
>>>> acceptable.
>>>>
>>>> Thanks,
>>>> Longman
>>>>
>>> Hi, Longman.
>>> Thanks for your patch, but there are still some problems.
>>>
>>> 1)
>>>    (group leader, node: 0,1)
>>>           cgroup0
>>>           /     \
>>>          /       \
>>>      cgroup1   cgroup2
>>>     (threads)  (threads)
>>>
>>> If we set node 0 in cgroup1 and node 1 in cgroup2, both of them
>>> will update the mm, and the nodemask of the mm depends on which one
>>> set its node last.
>> Yes, that is the existing behavior. It was not that well defined in
>> the past, so it is somewhat ambiguous as to what we need to do about
>> it.
>>
> The test patch works if the child threads are in the same cpuset as
> the group leader, which matches the logic of my patch. But if they
> are in different cpusets, the test patch will fail because the
> contention on mmap_lock still exists, much as in the original logic.

That is true. I am thinking about adding a nodemask to mm_struct so
that we can figure out whether we need to propagate the changes down
to all the VMAs and do the migration. That will enable us to avoid
doing wasteful work. The current nodemask handling isn't that
efficient, especially for distros that have a relatively large
NODES_SHIFT value. Some work may also be needed in this area.
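To make that a bit more concrete, here is a rough, untested sketch of
the idea. The mm_struct field ("mem_nodes") is hypothetical, purely
for illustration; it is not in the current kernel:

/*
 * Hypothetical field: cache the nodemask this mm was last rebound to,
 * so that update_tasks_nodemask() can skip the mpol_rebind_mm() walk
 * and the migration work when nothing has actually changed (e.g. for
 * the second and later threads sharing the same mm).
 */
struct mm_struct {
	/* ... existing fields ... */
	nodemask_t mem_nodes;	/* last nodemask this mm was bound to */
};

static void update_tasks_nodemask(struct cpuset *cs)
{
	struct css_task_iter it;
	struct task_struct *task;
	nodemask_t newmems;

	/* ... compute newmems as the current code does ... */

	css_task_iter_start(&cs->css, 0, &it);
	while ((task = css_task_iter_next(&it))) {
		struct mm_struct *mm;

		cpuset_change_task_nodemask(task, &newmems);

		mm = get_task_mm(task);
		if (!mm)
			continue;

		/* Already rebound, e.g. by a sibling thread? Skip it. */
		if (nodes_equal(mm->mem_nodes, newmems)) {
			mmput(mm);
			continue;
		}
		mm->mem_nodes = newmems;

		mpol_rebind_mm(mm, &newmems);
		if (is_memory_migrate(cs)) {
			/* cpuset_migrate_mm() takes over the mm reference */
			cpuset_migrate_mm(mm, &cs->old_mems_allowed,
					  &newmems);
		} else {
			mmput(mm);
		}
	}
	css_task_iter_end(&it);
}

Only the first task that reaches a given mm would then pay for the VMA
walk and the page migration; later threads sharing that mm skip it no
matter which cpuset they sit in.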
>> BTW, cgroup1 has a memory_migrate flag which will force page
>> migration if set. I guess you may have it set in your case, as it
>> will introduce a lot more delay since page migration takes time.
>> That is probably the reason why you are seeing a long delay. So one
>> possible solution is to turn this flag off. Cgroup v2 doesn't have
>> this flag.
>>
> Do you mean 'CS_MEMORY_MIGRATE'? This flag can be turned off in
> cgroup v1, but it is set by default in cgroup v2 (cpuset_css_alloc)
> and cannot be changed.

You are right. Cgroup v2 has CS_MEMORY_MIGRATE enabled by default and
it can't be turned off.
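For reference, this is roughly where that happens (quoted from memory,
so the exact code in the tree may differ slightly):

/*
 * Approximate excerpt of cpuset_css_alloc() in kernel/cgroup/cpuset.c:
 * on the default (v2) hierarchy, CS_MEMORY_MIGRATE is set at cpuset
 * creation time and there is no control file to clear it.
 */
static struct cgroup_subsys_state *
cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
{
	struct cpuset *cs;

	/* ... allocation and other flag initialization elided ... */

	/* Set CS_MEMORY_MIGRATE for default hierarchy */
	if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
		__set_bit(CS_MEMORY_MIGRATE, &cs->flags);

	return &cs->css;
}

On v1, by contrast, the flag stays writable through the
cpuset.memory_migrate control file.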
>
>>> 2)
>>>    (process, node: 0,1)
>>>           cgroup0
>>>           /     \
>>>          /       \
>>>      cgroup1   cgroup2
>>>     (node: 0)  (node: 1)
>>>
>>> If we migrate a thread from cgroup0 to cgroup1 or cgroup2,
>>> cpuset_attach won't update the mm. So the nodemask of the thread,
>>> including mems_allowed and the mempolicy (updated in
>>> cpuset_change_task_nodemask), differs from the vm_policy in the vma
>>> (updated in mpol_rebind_mm).
>> Yes, that can be the case.
>>
>>> In a word, if threads are in different cpusets with different
>>> nodemasks, it will cause inconsistent memory behavior.
>> So do you have a suggestion of what we need to do going forward?
> Should we prevent a thread from migrating to a cgroup whose nodemask
> differs from that of the cgroup containing its group leader?
>
> In addition, the group leader and the child threads should be in the
> same cgroup tree, and the cgroup containing the group leader must be
> at a higher level than the cgroups containing the child threads, so
> that update_nodemask will work.
>
> Or should we just disable thread migration in cpuset? That is easy
> to achieve, but it would affect CPU binding.

As said above, my current inclination is to add a nodemask to
mm_struct and revise the way the nodemask is being handled. That will
take some time.

Cheers,
Longman