From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Christian König, James Zhu, Alex Deucher
Subject: [PATCH 5.11 072/601] drm/amdgpu: fix concurrent VM flushes on Vega/Navi v2
Date: Wed, 12 May 2021 16:42:29 +0200
Message-Id: <20210512144830.196897523@linuxfoundation.org>
In-Reply-To: <20210512144827.811958675@linuxfoundation.org>
References: <20210512144827.811958675@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Christian König

commit 20a5f5a98e1bb3d40acd97e89299e8c2d22784be upstream.

Starting with Vega the hardware supports concurrent flushes of VMIDs,
which can be used to implement per-process VMID allocation.

But concurrent flushes are mutually exclusive with back-to-back VMID
allocations; fix this to avoid a VMID being used in two ways at the
same time.

v2: don't set ring to NULL

Signed-off-by: Christian König
Reviewed-by: James Zhu
Tested-by: James Zhu
Signed-off-by: Alex Deucher
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 19 +++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  |  6 ++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  1 +
 3 files changed, 18 insertions(+), 8 deletions(-)

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -215,7 +215,11 @@ static int amdgpu_vmid_grab_idle(struct
 	/* Check if we have an idle VMID */
 	i = 0;
 	list_for_each_entry((*idle), &id_mgr->ids_lru, list) {
-		fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, ring);
+		/* Don't use per engine and per process VMID at the same time */
+		struct amdgpu_ring *r = adev->vm_manager.concurrent_flush ?
+			NULL : ring;
+
+		fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, r);
 		if (!fences[i])
 			break;
 		++i;
@@ -281,7 +285,7 @@ static int amdgpu_vmid_grab_reserved(str
 	if (updates && (*id)->flushed_updates &&
 	    updates->context == (*id)->flushed_updates->context &&
 	    !dma_fence_is_later(updates, (*id)->flushed_updates))
-		updates = NULL;
+		updates = NULL;
 
 	if ((*id)->owner != vm->immediate.fence_context ||
 	    job->vm_pd_addr != (*id)->pd_gpu_addr ||
@@ -290,6 +294,10 @@ static int amdgpu_vmid_grab_reserved(str
 	    !dma_fence_is_signaled((*id)->last_flush))) {
 		struct dma_fence *tmp;
 
+		/* Don't use per engine and per process VMID at the same time */
+		if (adev->vm_manager.concurrent_flush)
+			ring = NULL;
+
 		/* to prevent one context starved by another context */
 		(*id)->pd_gpu_addr = 0;
 		tmp = amdgpu_sync_peek_fence(&(*id)->active, ring);
@@ -365,12 +373,7 @@ static int amdgpu_vmid_grab_used(struct
 	if (updates && (!flushed || dma_fence_is_later(updates, flushed)))
 		needs_flush = true;
 
-	/* Concurrent flushes are only possible starting with Vega10 and
-	 * are broken on Navi10 and Navi14.
-	 */
-	if (needs_flush && (adev->asic_type < CHIP_VEGA10 ||
-			    adev->asic_type == CHIP_NAVI10 ||
-			    adev->asic_type == CHIP_NAVI14))
+	if (needs_flush && !adev->vm_manager.concurrent_flush)
 		continue;
 
 	/* Good, we can use this VMID. Remember this submission as
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -3145,6 +3145,12 @@ void amdgpu_vm_manager_init(struct amdgp
 {
 	unsigned i;
 
+	/* Concurrent flushes are only possible starting with Vega10 and
+	 * are broken on Navi10 and Navi14.
+	 */
+	adev->vm_manager.concurrent_flush = !(adev->asic_type < CHIP_VEGA10 ||
+					      adev->asic_type == CHIP_NAVI10 ||
+					      adev->asic_type == CHIP_NAVI14);
 	amdgpu_vmid_mgr_init(adev);
 
 	adev->vm_manager.fence_context =
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -331,6 +331,7 @@ struct amdgpu_vm_manager {
 	/* Handling of VMIDs */
 	struct amdgpu_vmid_mgr	id_mgr[AMDGPU_MAX_VMHUBS];
 	unsigned int		first_kfd_vmid;
+	bool			concurrent_flush;
 
 	/* Handling of VM fences */
 	u64					fence_context;