From: "Rafael J. Wysocki"
Wysocki" To: Youngmin Nam , Greg KH Cc: rafael@kernel.org, len.brown@intel.com, pavel@ucw.cz, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org, d7271.choe@samsung.com, janghyuck.kim@samsung.com, hyesoo.yu@samsung.com Subject: Re: [BUG] mutex deadlock of dpm_resume() in low memory situation Date: Wed, 27 Dec 2023 19:39:20 +0100 Message-ID: <5754861.DvuYhMxLoT@kreacher> In-Reply-To: <2023122701-mortify-deed-4e66@gregkh> References: <2023122701-mortify-deed-4e66@gregkh> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: 7Bit Content-Type: text/plain; charset="UTF-8" X-CLIENT-IP: 195.136.19.94 X-CLIENT-HOSTNAME: 195.136.19.94 X-VADE-SPAMSTATE: clean X-VADE-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedvkedrvddvledguddujecutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfjqffogffrnfdpggftiffpkfenuceurghilhhouhhtmecuudehtdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhephffvvefufffkjghfggfgtgesthfuredttddtjeenucfhrhhomhepfdftrghfrggvlhculfdrucghhihsohgtkhhifdcuoehrjhifsehrjhifhihsohgtkhhirdhnvghtqeenucggtffrrghtthgvrhhnpedvffeuiedtgfdvtddugeeujedtffetteegfeekffdvfedttddtuefhgeefvdejhfenucfkphepudelhedrudefiedrudelrdelgeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepihhnvghtpeduleehrddufeeirdduledrleegpdhhvghlohepkhhrvggrtghhvghrrdhlohgtrghlnhgvthdpmhgrihhlfhhrohhmpedftfgrfhgrvghlucflrdcuhgihshhotghkihdfuceorhhjfiesrhhjfiihshhotghkihdrnhgvtheqpdhnsggprhgtphhtthhopedutddprhgtphhtthhopeihohhunhhgmhhinhdrnhgrmhesshgrmhhsuhhnghdrtghomhdprhgtphhtthhopehgrhgvghhkhheslhhinhhugihfohhunhgurghtihhonhdrohhrghdprhgtphhtthhopehrrghfrggvlheskhgvrhhnvghlrdhorhhgpdhrtghpthhtoheplhgvnhdrsghrohifnhesihhnthgvlhdrtghomhdprhgtphhtthhopehprghvvghlsehutgifrdgt iidprhgtphhtthhopehlihhnuhigqdhpmhesvhhgvghrrdhkvghrnhgvlhdrohhrgh X-DCC--Metrics: v370.home.net.pl 1024; Body=10 Fuz1=10 Fuz2=10 On Wednesday, December 27, 2023 5:08:40 PM CET Greg KH wrote: > On Wed, Dec 27, 2023 at 05:42:50PM +0900, Youngmin Nam wrote: > > Could you look into this issue ? > > Can you submit a patch that resolves the issue for you, as you have a > way to actually test this out? That would be the quickest way to get it > resolved, and to help confirm that this is even an issue at all. Something like the appended patch should be sufficient to address this AFAICS. I haven't tested it yet (will do so shortly), so all of the usual disclaimers apply. I think that it can be split into 2 patches, but for easier testing here it goes in one piece. Fixes: f2a424f6c613 ("PM / core: Introduce dpm_async_fn() helper") Signed-off-by: Rafael J. 
Fixes: f2a424f6c613 ("PM / core: Introduce dpm_async_fn() helper")
Signed-off-by: Rafael J. Wysocki
---
 drivers/base/power/main.c |   12 ++++--
 include/linux/async.h     |    2 +
 kernel/async.c            |   85 ++++++++++++++++++++++++++++++++++------------
 3 files changed, 73 insertions(+), 26 deletions(-)

Index: linux-pm/kernel/async.c
===================================================================
--- linux-pm.orig/kernel/async.c
+++ linux-pm/kernel/async.c
@@ -145,6 +145,39 @@ static void async_run_entry_fn(struct wo
 	wake_up(&async_done);
 }
 
+static async_cookie_t __async_schedule_node_domain(async_func_t func,
+						   void *data, int node,
+						   struct async_domain *domain,
+						   struct async_entry *entry)
+{
+	async_cookie_t newcookie;
+	unsigned long flags;
+
+	INIT_LIST_HEAD(&entry->domain_list);
+	INIT_LIST_HEAD(&entry->global_list);
+	INIT_WORK(&entry->work, async_run_entry_fn);
+	entry->func = func;
+	entry->data = data;
+	entry->domain = domain;
+
+	spin_lock_irqsave(&async_lock, flags);
+
+	/* allocate cookie and queue */
+	newcookie = entry->cookie = next_cookie++;
+
+	list_add_tail(&entry->domain_list, &domain->pending);
+	if (domain->registered)
+		list_add_tail(&entry->global_list, &async_global_pending);
+
+	atomic_inc(&entry_count);
+	spin_unlock_irqrestore(&async_lock, flags);
+
+	/* schedule for execution */
+	queue_work_node(node, system_unbound_wq, &entry->work);
+
+	return newcookie;
+}
+
 /**
  * async_schedule_node_domain - NUMA specific version of async_schedule_domain
  * @func: function to execute asynchronously
@@ -186,29 +219,8 @@ async_cookie_t async_schedule_node_domai
 		func(data, newcookie);
 		return newcookie;
 	}
-	INIT_LIST_HEAD(&entry->domain_list);
-	INIT_LIST_HEAD(&entry->global_list);
-	INIT_WORK(&entry->work, async_run_entry_fn);
-	entry->func = func;
-	entry->data = data;
-	entry->domain = domain;
-
-	spin_lock_irqsave(&async_lock, flags);
-
-	/* allocate cookie and queue */
-	newcookie = entry->cookie = next_cookie++;
-
-	list_add_tail(&entry->domain_list, &domain->pending);
-	if (domain->registered)
-		list_add_tail(&entry->global_list, &async_global_pending);
-
-	atomic_inc(&entry_count);
-	spin_unlock_irqrestore(&async_lock, flags);
-
-	/* schedule for execution */
-	queue_work_node(node, system_unbound_wq, &entry->work);
 
-	return newcookie;
+	return __async_schedule_node_domain(func, data, node, domain, entry);
 }
 EXPORT_SYMBOL_GPL(async_schedule_node_domain);
 
@@ -232,6 +244,35 @@ async_cookie_t async_schedule_node(async
 EXPORT_SYMBOL_GPL(async_schedule_node);
 
 /**
+ * async_schedule_dev_nocall - A simplified variant of async_schedule_dev()
+ * @func: function to execute asynchronously
+ * @dev: device argument to be passed to function
+ *
+ * @dev is used as both the argument for the function and to provide NUMA
+ * context for where to run the function.
+ *
+ * If the asynchronous execution of @func is scheduled successfully, return
+ * true. Otherwise, do nothing and return false, unlike async_schedule_dev()
+ * that will run the function synchronously then.
+ */
+bool async_schedule_dev_nocall(async_func_t func, struct device *dev)
+{
+	struct async_entry *entry;
+
+	entry = kzalloc(sizeof(struct async_entry), GFP_KERNEL);
+
+	/* Give up if there is no memory or too much work. */
+	if (!entry || atomic_read(&entry_count) > MAX_WORK) {
+		kfree(entry);
+		return false;
+	}
+
+	__async_schedule_node_domain(func, dev, dev_to_node(dev),
+				     &async_dfl_domain, entry);
+	return true;
+}
+
+/**
  * async_synchronize_full - synchronize all asynchronous function calls
  *
  * This function waits until all asynchronous function calls have been done.
Index: linux-pm/include/linux/async.h
===================================================================
--- linux-pm.orig/include/linux/async.h
+++ linux-pm/include/linux/async.h
@@ -90,6 +90,8 @@ async_schedule_dev(async_func_t func, st
 	return async_schedule_node(func, dev, dev_to_node(dev));
 }
 
+bool async_schedule_dev_nocall(async_func_t func, struct device *dev);
+
 /**
  * async_schedule_dev_domain - A device specific version of async_schedule_domain
  * @func: function to execute asynchronously
Index: linux-pm/drivers/base/power/main.c
===================================================================
--- linux-pm.orig/drivers/base/power/main.c
+++ linux-pm/drivers/base/power/main.c
@@ -668,11 +668,15 @@ static bool dpm_async_fn(struct device *
 {
 	reinit_completion(&dev->power.completion);
 
-	if (is_async(dev)) {
-		get_device(dev);
-		async_schedule_dev(func, dev);
+	if (!is_async(dev))
+		return false;
+
+	get_device(dev);
+
+	if (async_schedule_dev_nocall(func, dev))
 		return true;
-	}
+
+	put_device(dev);
 
 	return false;
 }
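For easier review, dpm_async_fn() reassembled from the hunk above ends up
looking like this (the function signature is truncated in the hunk header,
so I'm completing it from memory):

static bool dpm_async_fn(struct device *dev, async_func_t func)
{
	reinit_completion(&dev->power.completion);

	if (!is_async(dev))
		return false;

	get_device(dev);

	if (async_schedule_dev_nocall(func, dev))
		return true;

	put_device(dev);

	return false;
}

That is, when the async entry cannot be allocated, the reference taken for
the async callback is dropped again and false is returned, so the callback
is never executed synchronously from within async_schedule_dev() any more.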