From: "Rafael J. Wysocki"
Date: Mon, 7 Jun 2021 13:45:43 +0200
Subject: Re: [PATCH] PM: sleep: Replace read_lock/unlock(tasklist_lock) with rcu_read_lock/unlock()
To: qiang.zhang@windriver.com
Cc: Rafael Wysocki, Len Brown, Pavel Machek, "Paul E. McKenney", Linux PM, Linux Kernel Mailing List
In-Reply-To: <20210607065743.1596-1-qiang.zhang@windriver.com>
References: <20210607065743.1596-1-qiang.zhang@windriver.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jun 7, 2021 at 8:57 AM <qiang.zhang@windriver.com> wrote:
>
> From: Zqiang
>
> Using rcu_read_lock/unlock() instead of read_lock/unlock(tasklist_lock)
> allows the task list to be traversed in parallel with any list additions
> or removals, improving concurrency.
>
> Signed-off-by: Zqiang

This changes the reader side only AFAICS, but what about the writer
side?  What exactly is there to ensure that the updates of the list
will remain safe after this change?

> ---
>  kernel/power/process.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/power/process.c b/kernel/power/process.c
> index 50cc63534486..0f8dee9ee097 100644
> --- a/kernel/power/process.c
> +++ b/kernel/power/process.c
> @@ -48,7 +48,7 @@ static int try_to_freeze_tasks(bool user_only)
>
>          while (true) {
>                  todo = 0;
> -                read_lock(&tasklist_lock);
> +                rcu_read_lock();
>                  for_each_process_thread(g, p) {
>                          if (p == current || !freeze_task(p))
>                                  continue;
> @@ -56,7 +56,7 @@ static int try_to_freeze_tasks(bool user_only)
>                          if (!freezer_should_skip(p))
>                                  todo++;
>                  }
> -                read_unlock(&tasklist_lock);
> +                rcu_read_unlock();
>
>                  if (!user_only) {
>                          wq_busy = freeze_workqueues_busy();
> @@ -97,13 +97,13 @@ static int try_to_freeze_tasks(bool user_only)
>                  show_workqueue_state();
>
>                  if (!wakeup || pm_debug_messages_on) {
> -                        read_lock(&tasklist_lock);
> +                        rcu_read_lock();
>                          for_each_process_thread(g, p) {
>                                  if (p != current && !freezer_should_skip(p)
>                                      && freezing(p) && !frozen(p))
>                                          sched_show_task(p);
>                          }
> -                        read_unlock(&tasklist_lock);
> +                        rcu_read_unlock();
>                  }
>          } else {
>                  pr_cont("(elapsed %d.%03d seconds) ", elapsed_msecs / 1000,
> @@ -206,13 +206,13 @@ void thaw_processes(void)
>
>          cpuset_wait_for_hotplug();
>
> -        read_lock(&tasklist_lock);
> +        rcu_read_lock();
>          for_each_process_thread(g, p) {
>                  /* No other threads should have PF_SUSPEND_TASK set */
>                  WARN_ON((p != curr) && (p->flags & PF_SUSPEND_TASK));
>                  __thaw_task(p);
>          }
> -        read_unlock(&tasklist_lock);
> +        rcu_read_unlock();
>
>          WARN_ON(!(curr->flags & PF_SUSPEND_TASK));
>          curr->flags &= ~PF_SUSPEND_TASK;
> @@ -233,12 +233,12 @@ void thaw_kernel_threads(void)
>
>          thaw_workqueues();
>
> -        read_lock(&tasklist_lock);
> +        rcu_read_lock();
>          for_each_process_thread(g, p) {
>                  if (p->flags & PF_KTHREAD)
>                          __thaw_task(p);
>          }
> -        read_unlock(&tasklist_lock);
> +        rcu_read_unlock();
>
>          schedule();
>          pr_cont("done.\n");
> --
> 2.17.1
>
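
A side note on the reader/writer split being questioned above. What an
RCU-style reader relies on from the writer side is a publication
contract: initialize the object fully, then link it in with release
ordering, so that any reader able to see the pointer also sees the
object's contents. Below is a minimal userspace sketch of that contract
in C11 atomics; the names (node, head, publish, walk) are illustrative
only, not kernel code. The release store stands in for the kernel's
rcu_assign_pointer()/list_add_rcu(), and the acquire loads stand in for
rcu_dereference().

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
        int pid;                        /* payload, set before publishing */
        _Atomic(struct node *) next;
};

static _Atomic(struct node *) head;

/* Writer side: fully initialize the node, then publish it. */
static void publish(int pid)
{
        struct node *n = malloc(sizeof(*n));

        n->pid = pid;                   /* init BEFORE linking in */
        atomic_init(&n->next,
                    atomic_load_explicit(&head, memory_order_relaxed));
        /* Release store: a reader that observes n also observes n->pid. */
        atomic_store_explicit(&head, n, memory_order_release);
}

/* Reader side: lockless traversal, safe against a concurrent publish(). */
static void walk(void)
{
        for (struct node *n = atomic_load_explicit(&head, memory_order_acquire);
             n != NULL;
             n = atomic_load_explicit(&n->next, memory_order_acquire))
                printf("pid %d\n", n->pid);
}

int main(void)
{
        publish(1);     /* in real use these would run in other threads, */
        publish(2);     /* concurrently with walk()                      */
        walk();
        return 0;
}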
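
The other half of "will the updates remain safe" is reclamation. A
read_lock(&tasklist_lock) reader excludes writers, so nothing it can see
is ever freed mid-walk; an RCU reader instead needs removed objects to
stay live until a grace period has elapsed. The toy below (pthreads plus
C11 atomics, single writer) makes that dependency explicit; reader_active
and synchronize() are crude stand-ins for rcu_read_lock() and
synchronize_rcu(), not the kernel's implementation.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

#define NR_READERS 4

struct node {
        int pid;
        _Atomic(struct node *) next;
};

static _Atomic(struct node *) head;
static _Atomic int reader_active[NR_READERS];

static void *reader(void *arg)
{
        int id = (int)(intptr_t)arg;

        for (int i = 0; i < 100000; i++) {
                atomic_store(&reader_active[id], 1);    /* ~rcu_read_lock()   */
                for (struct node *n = atomic_load(&head); n != NULL;
                     n = atomic_load(&n->next))
                        (void)n->pid;   /* node guaranteed live here */
                atomic_store(&reader_active[id], 0);    /* ~rcu_read_unlock() */
        }
        return NULL;
}

/*
 * ~synchronize_rcu(): wait until every reader has been observed outside
 * its read section at least once.  Readers that begin after the unlink
 * below can no longer reach the removed node at all.
 */
static void synchronize(void)
{
        for (int i = 0; i < NR_READERS; i++)
                while (atomic_load(&reader_active[i]))
                        ;               /* spin */
}

int main(void)
{
        pthread_t tid[NR_READERS];
        struct node *a = malloc(sizeof(*a)), *b = malloc(sizeof(*b));

        b->pid = 2; atomic_init(&b->next, NULL);
        a->pid = 1; atomic_init(&a->next, b);
        atomic_store(&head, a);

        for (intptr_t i = 0; i < NR_READERS; i++)
                pthread_create(&tid[i], NULL, reader, (void *)i);

        atomic_store(&head, b);         /* unlink a; new readers start at b */
        synchronize();                  /* wait out readers that may hold a */
        free(a);                        /* kernel analogue: free the object
                                           only via an RCU-deferred path    */

        for (int i = 0; i < NR_READERS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}

For the kernel's task list specifically, the linkage is RCU-aware (tasks
are added with the list_add_rcu() family and task_structs are freed via
an RCU-deferred path), so existence-safety looks plausible; the harder
part of the question is semantic: under tasklist_lock no task can be
forked or reaped while the freezer's loop runs, whereas under RCU the
walk can race with fork/exit and may miss a task created during the
traversal.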