From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Felix Kuehling, Philip Yang, Alex Deucher, Sasha Levin,
	Felix.Kuehling@amd.com, christian.koenig@amd.com, Xinhui.Pan@amd.com,
	airlied@gmail.com, daniel@ffwll.ch, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org
Subject: [PATCH AUTOSEL 6.7 25/39] drm/amdkfd: Fix lock dependency warning
Date: Sun, 28 Jan 2024 11:10:45 -0500
Message-ID: <20240128161130.200783-25-sashal@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240128161130.200783-1-sashal@kernel.org>
References: <20240128161130.200783-1-sashal@kernel.org>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
X-stable-base: Linux 6.7.2
Content-Transfer-Encoding: 8bit

From: Felix Kuehling

[ Upstream commit 47bf0f83fc86df1bf42b385a91aadb910137c5c9 ]

 ======================================================
 WARNING: possible circular locking dependency detected
 6.5.0-kfd-fkuehlin #276 Not tainted
 ------------------------------------------------------
 kworker/8:2/2676 is trying to acquire lock:
 ffff9435aae95c88 ((work_completion)(&svm_bo->eviction_work)){+.+.}-{0:0}, at: __flush_work+0x52/0x550

 but task is already holding lock:
 ffff9435cd8e1720 (&svms->lock){+.+.}-{3:3}, at: svm_range_deferred_list_work+0xe8/0x340 [amdgpu]

 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #2 (&svms->lock){+.+.}-{3:3}:
        __mutex_lock+0x97/0xd30
        kfd_ioctl_alloc_memory_of_gpu+0x6d/0x3c0 [amdgpu]
        kfd_ioctl+0x1b2/0x5d0 [amdgpu]
        __x64_sys_ioctl+0x86/0xc0
        do_syscall_64+0x39/0x80
        entry_SYSCALL_64_after_hwframe+0x63/0xcd

 -> #1 (&mm->mmap_lock){++++}-{3:3}:
        down_read+0x42/0x160
        svm_range_evict_svm_bo_worker+0x8b/0x340 [amdgpu]
        process_one_work+0x27a/0x540
        worker_thread+0x53/0x3e0
        kthread+0xeb/0x120
        ret_from_fork+0x31/0x50
        ret_from_fork_asm+0x11/0x20

 -> #0 ((work_completion)(&svm_bo->eviction_work)){+.+.}-{0:0}:
        __lock_acquire+0x1426/0x2200
        lock_acquire+0xc1/0x2b0
        __flush_work+0x80/0x550
        __cancel_work_timer+0x109/0x190
        svm_range_bo_release+0xdc/0x1c0 [amdgpu]
        svm_range_free+0x175/0x180 [amdgpu]
        svm_range_deferred_list_work+0x15d/0x340 [amdgpu]
        process_one_work+0x27a/0x540
        worker_thread+0x53/0x3e0
        kthread+0xeb/0x120
        ret_from_fork+0x31/0x50
        ret_from_fork_asm+0x11/0x20

 other info that might help us debug this:

 Chain exists of:
   (work_completion)(&svm_bo->eviction_work) --> &mm->mmap_lock --> &svms->lock

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&svms->lock);
                                lock(&mm->mmap_lock);
                                lock(&svms->lock);
   lock((work_completion)(&svm_bo->eviction_work));

I believe this cannot really lead to a deadlock in practice, because
svm_range_evict_svm_bo_worker only takes the mmap_read_lock if the BO
refcount is non-0. That means it's impossible that svm_range_bo_release
is running concurrently. However, there is no good way to annotate this.

To avoid the problem, take a BO reference in
svm_range_schedule_evict_svm_bo instead of in the worker. That way it's
impossible for a BO to get freed while eviction work is pending and the
cancel_work_sync call in svm_range_bo_release can be eliminated.

v2: Use svm_bo_ref_unless_zero and explained why that's safe. Also
removed redundant checks that are already done in
amdkfd_fence_enable_signaling.

Signed-off-by: Felix Kuehling
Reviewed-by: Philip Yang
Signed-off-by: Alex Deucher
Signed-off-by: Sasha Levin
---
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index a15bfb5223e8..92d8b1513e57 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -400,14 +400,9 @@ static void svm_range_bo_release(struct kref *kref)
 		spin_lock(&svm_bo->list_lock);
 	}
 	spin_unlock(&svm_bo->list_lock);
-	if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base)) {
-		/* We're not in the eviction worker.
-		 * Signal the fence and synchronize with any
-		 * pending eviction work.
-		 */
+	if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base))
+		/* We're not in the eviction worker. Signal the fence. */
 		dma_fence_signal(&svm_bo->eviction_fence->base);
-		cancel_work_sync(&svm_bo->eviction_work);
-	}
 	dma_fence_put(&svm_bo->eviction_fence->base);
 	amdgpu_bo_unref(&svm_bo->bo);
 	kfree(svm_bo);
@@ -3447,13 +3442,14 @@ svm_range_trigger_migration(struct mm_struct *mm, struct svm_range *prange,
 int svm_range_schedule_evict_svm_bo(struct amdgpu_amdkfd_fence *fence)
 {
-	if (!fence)
-		return -EINVAL;
-
-	if (dma_fence_is_signaled(&fence->base))
-		return 0;
-
-	if (fence->svm_bo) {
+	/* Dereferencing fence->svm_bo is safe here because the fence hasn't
+	 * signaled yet and we're under the protection of the fence->lock.
+	 * After the fence is signaled in svm_range_bo_release, we cannot get
+	 * here any more.
+	 *
+	 * Reference is dropped in svm_range_evict_svm_bo_worker.
+	 */
+	if (svm_bo_ref_unless_zero(fence->svm_bo)) {
 		WRITE_ONCE(fence->svm_bo->evicting, 1);
 		schedule_work(&fence->svm_bo->eviction_work);
 	}
@@ -3468,8 +3464,6 @@ static void svm_range_evict_svm_bo_worker(struct work_struct *work)
 	int r = 0;

 	svm_bo = container_of(work, struct svm_range_bo, eviction_work);
-	if (!svm_bo_ref_unless_zero(svm_bo))
-		return; /* svm_bo was freed while eviction was pending */

 	if (mmget_not_zero(svm_bo->eviction_fence->mm)) {
 		mm = svm_bo->eviction_fence->mm;
-- 
2.43.0