From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Philip Yang, Ruili Ji,
 Felix Kuehling, Alex Deucher, Sasha Levin
Subject: [PATCH 5.17 031/343] drm/amdkfd: Ensure mm remain valid in svm deferred_list work
Date: Tue, 12 Apr 2022 08:27:29 +0200
Message-Id: <20220412062952.004344561@linuxfoundation.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220412062951.095765152@linuxfoundation.org>
References: <20220412062951.095765152@linuxfoundation.org>
User-Agent: quilt/0.66

From: Philip Yang

[ Upstream commit 367c9b0f1b8750a704070e7ae85234d591290434 ]

The svm deferred_list work should continue to handle the
deferred_range_list, which may be split into child ranges, to avoid
leaking child ranges, and it should remove the ranges' mmu interval
notifiers to avoid leaking mm mm_count references.

So take an mm reference when adding a range to the deferred list, to
ensure the mm is valid in the scheduled deferred_list_work, and drop
the mm reference after the range is handled.

Signed-off-by: Philip Yang
Reported-by: Ruili Ji
Reviewed-by: Felix Kuehling
Signed-off-by: Alex Deucher
Signed-off-by: Sasha Levin
---
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 62 ++++++++++++++++------------
 1 file changed, 36 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index f2805ba74c80..225affcddbc1 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -1985,10 +1985,9 @@ svm_range_update_notifier_and_interval_tree(struct mm_struct *mm,
 }
 
 static void
-svm_range_handle_list_op(struct svm_range_list *svms, struct svm_range *prange)
+svm_range_handle_list_op(struct svm_range_list *svms, struct svm_range *prange,
+			 struct mm_struct *mm)
 {
-	struct mm_struct *mm = prange->work_item.mm;
-
 	switch (prange->work_item.op) {
 	case SVM_OP_NULL:
 		pr_debug("NULL OP 0x%p prange 0x%p [0x%lx 0x%lx]\n",
@@ -2071,34 +2070,41 @@ static void svm_range_deferred_list_work(struct work_struct *work)
 	pr_debug("enter svms 0x%p\n", svms);
 
 	p = container_of(svms, struct kfd_process, svms);
-	/* Avoid mm is gone when inserting mmu notifier */
-	mm = get_task_mm(p->lead_thread);
-	if (!mm) {
-		pr_debug("svms 0x%p process mm gone\n", svms);
-		return;
-	}
-retry:
-	mmap_write_lock(mm);
-
-	/* Checking for the need to drain retry faults must be inside
-	 * mmap write lock to serialize with munmap notifiers.
-	 */
-	if (unlikely(atomic_read(&svms->drain_pagefaults))) {
-		mmap_write_unlock(mm);
-		svm_range_drain_retry_fault(svms);
-		goto retry;
-	}
 
 	spin_lock(&svms->deferred_list_lock);
 	while (!list_empty(&svms->deferred_range_list)) {
 		prange = list_first_entry(&svms->deferred_range_list,
 					  struct svm_range, deferred_list);
-		list_del_init(&prange->deferred_list);
 		spin_unlock(&svms->deferred_list_lock);
 
 		pr_debug("prange 0x%p [0x%lx 0x%lx] op %d\n", prange,
 			 prange->start, prange->last, prange->work_item.op);
 
+		mm = prange->work_item.mm;
+retry:
+		mmap_write_lock(mm);
+
+		/* Checking for the need to drain retry faults must be inside
+		 * mmap write lock to serialize with munmap notifiers.
+		 */
+		if (unlikely(atomic_read(&svms->drain_pagefaults))) {
+			mmap_write_unlock(mm);
+			svm_range_drain_retry_fault(svms);
+			goto retry;
+		}
+
+		/* Remove from deferred_list must be inside mmap write lock, for
+		 * two race cases:
+		 * 1. unmap_from_cpu may change work_item.op and add the range
+		 *    to deferred_list again, cause use after free bug.
+		 * 2. svm_range_list_lock_and_flush_work may hold mmap write
+		 *    lock and continue because deferred_list is empty, but
+		 *    deferred_list work is actually waiting for mmap lock.
+		 */
+		spin_lock(&svms->deferred_list_lock);
+		list_del_init(&prange->deferred_list);
+		spin_unlock(&svms->deferred_list_lock);
+
 		mutex_lock(&svms->lock);
 		mutex_lock(&prange->migrate_mutex);
 		while (!list_empty(&prange->child_list)) {
@@ -2109,19 +2115,20 @@ static void svm_range_deferred_list_work(struct work_struct *work)
 			pr_debug("child prange 0x%p op %d\n", pchild,
 				 pchild->work_item.op);
 			list_del_init(&pchild->child_list);
-			svm_range_handle_list_op(svms, pchild);
+			svm_range_handle_list_op(svms, pchild, mm);
 		}
 		mutex_unlock(&prange->migrate_mutex);
 
-		svm_range_handle_list_op(svms, prange);
+		svm_range_handle_list_op(svms, prange, mm);
 		mutex_unlock(&svms->lock);
+		mmap_write_unlock(mm);
+
+		/* Pairs with mmget in svm_range_add_list_work */
+		mmput(mm);
 
 		spin_lock(&svms->deferred_list_lock);
 	}
 	spin_unlock(&svms->deferred_list_lock);
-
-	mmap_write_unlock(mm);
-	mmput(mm);
 
 	pr_debug("exit svms 0x%p\n", svms);
 }
@@ -2139,6 +2146,9 @@ svm_range_add_list_work(struct svm_range_list *svms, struct svm_range *prange,
 			prange->work_item.op = op;
 	} else {
 		prange->work_item.op = op;
+
+		/* Pairs with mmput in deferred_list_work */
+		mmget(mm);
 		prange->work_item.mm = mm;
 		list_add_tail(&prange->deferred_list,
 			      &prange->svms->deferred_range_list);
-- 
2.35.1
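
The lifetime rule the patch enforces is the usual "the reference
travels with the work item" idiom: take a reference on the mm when the
range is queued, and drop it only after the deferred work has handled
the range. Below is a minimal userspace C model of that idiom, a
sketch only: mm_model, mm_get, mm_put, add_list_work and
deferred_list_work are hypothetical stand-ins, not the kernel API.
Compile with gcc -pthread. The enqueuer pins the object before
publishing the work item, so the worker can never touch a freed mm.

/* Userspace model of the mmget()/mmput() pairing in this patch.
 * All names are hypothetical stand-ins for illustration only.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct mm_model {			/* stand-in for struct mm_struct */
	atomic_int users;		/* stand-in for mm->mm_users */
};

struct work_item {			/* stand-in for prange->work_item */
	struct mm_model *mm;
};

static void mm_get(struct mm_model *mm)	/* stand-in for mmget() */
{
	atomic_fetch_add(&mm->users, 1);
}

static void mm_put(struct mm_model *mm)	/* stand-in for mmput() */
{
	/* free when the last reference is dropped */
	if (atomic_fetch_sub(&mm->users, 1) == 1) {
		printf("last reference dropped, freeing mm\n");
		free(mm);
	}
}

/* Enqueue side: takes the reference that the worker will later drop,
 * just as svm_range_add_list_work() pairs mmget() with the mmput() in
 * svm_range_deferred_list_work().
 */
static void add_list_work(struct work_item *item, struct mm_model *mm)
{
	mm_get(mm);
	item->mm = mm;
}

/* Worker side: mm is guaranteed valid because the enqueuer took a
 * reference; drop it once the item has been handled.
 */
static void *deferred_list_work(void *arg)
{
	struct work_item *item = arg;

	printf("handling work item, users=%d\n",
	       atomic_load(&item->mm->users));
	mm_put(item->mm);		/* pairs with mm_get() above */
	return NULL;
}

int main(void)
{
	struct mm_model *mm = malloc(sizeof(*mm));
	struct work_item item;
	pthread_t worker;

	if (!mm)
		return 1;
	atomic_init(&mm->users, 1);	/* creator's reference */
	add_list_work(&item, mm);	/* worker's reference */
	pthread_create(&worker, NULL, deferred_list_work, &item);
	pthread_join(worker, NULL);
	mm_put(mm);			/* drop creator's reference */
	return 0;
}

The design point is visible in the diff itself: the old code looked
the mm up inside the worker with get_task_mm(p->lead_thread) and
bailed out if it was already gone, which leaked any queued child
ranges and their mmu interval notifiers. Taking the reference at
svm_range_add_list_work() time instead guarantees the worker always
runs to completion for every queued range.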