From: Vincent Whitchurch
Date: Thu, 15 Jun 2023 15:11:57 +0200
Subject: [PATCH] genirq: Fix nested thread vs synchronize_hardirq() deadlock
Message-ID: <20230613-genirq-nested-v1-1-289dc15b7669@axis.com>
To: Thomas Gleixner
CC: Vincent Whitchurch
X-Mailer: b4 0.12.2
X-Mailing-List: linux-kernel@vger.kernel.org

There is a possibility of deadlock if synchronize_hardirq() is called
while a nested threaded interrupt is active. The following scenario was
observed on a uniprocessor PREEMPT_NONE system:

 Thread 1                       Thread 2

 handle_nested_irq()
  Set INPROGRESS
  Call ->thread_fn()
  thread_fn goes to sleep

                                free_irq()
                                 __synchronize_hardirq()
                                 Busy-loop forever waiting for
                                 INPROGRESS to be cleared

Since the purpose of the INPROGRESS flag seems to be limited to hard IRQ
handlers, remove its usage in the nested threaded interrupt case and
instead reuse the threads_active/wait_for_threads mechanism to wait for
nested threaded interrupts to complete.
Signed-off-by: Vincent Whitchurch
---
 kernel/irq/chip.c   | 5 +++--
 kernel/irq/manage.c | 4 ++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 49e7bc871fece..3e4b4c6de8195 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -476,7 +476,7 @@ void handle_nested_irq(unsigned int irq)
 	}
 
 	kstat_incr_irqs_this_cpu(desc);
-	irqd_set(&desc->irq_data, IRQD_IRQ_INPROGRESS);
+	atomic_inc(&desc->threads_active);
 	raw_spin_unlock_irq(&desc->lock);
 
 	action_ret = IRQ_NONE;
@@ -487,7 +487,8 @@ void handle_nested_irq(unsigned int irq)
 		note_interrupt(desc, action_ret);
 
 	raw_spin_lock_irq(&desc->lock);
-	irqd_clear(&desc->irq_data, IRQD_IRQ_INPROGRESS);
+	if (atomic_dec_and_test(&desc->threads_active))
+		wake_up(&desc->wait_for_threads);
 
 out_unlock:
 	raw_spin_unlock_irq(&desc->lock);
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index d2742af0f0fd8..58dcc9df6d72c 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1977,6 +1977,10 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
 		}
 	}
 
+	/* Wait for any remaining nested threaded interrupts. */
+	wait_event(desc->wait_for_threads,
+		   !atomic_read(&desc->threads_active));
+
 	/* Last action releases resources */
 	if (!desc->action) {
 		/*

---
base-commit: 858fd168a95c5b9669aac8db6c14a9aeab446375
change-id: 20230613-genirq-nested-625612a6fa05

Best regards,
-- 
Vincent Whitchurch