Date: Sat, 25 Jan 2020 12:52:23 +0100
From: Stephen Kitt <steve@sk2.org>
To: Felix Kuehling, Alex Deucher, Christian König, David Zhou,
 David Airlie, Daniel Vetter, Sumit Semwal
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, steve@sk2.org
Subject: Re: [PATCH] amdgpu: use dma-resv API instead of manual kmalloc
In-Reply-To: <20200125114624.2093235-1-steve@sk2.org>
References: <20200125114624.2093235-1-steve@sk2.org>

And of course I forgot this is an internal API, so this doesn't work
without some other stuff which isn't ready. Please ignore...

Regards,

Stephen

On 25/01/2020 12:46, Stephen Kitt wrote:
> Instead of hand-coding the dma_resv_list allocation, use
> dma_resv_list_alloc(), which hides the shared_max handling. While
> we're at it, since we only need shared_count fences, allocate
> shared_count fences rather than shared_max.
>
> (This is the only place in the kernel, outside of dma-resv.c, which
> touches shared_max. This suggests we'd probably be better off without
> it!)
>
> Signed-off-by: Stephen Kitt <steve@sk2.org>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> index 888209eb8cec..aec595752200 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> @@ -234,12 +234,11 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
>  	if (!old)
>  		return 0;
>  
> -	new = kmalloc(offsetof(typeof(*new), shared[old->shared_max]),
> -		      GFP_KERNEL);
> +	new = dma_resv_list_alloc(old->shared_count);
>  	if (!new)
>  		return -ENOMEM;
>  
> -	/* Go through all the shared fences in the resevation object and sort
> +	/* Go through all the shared fences in the reservation object and sort
>  	 * the interesting ones to the end of the list.
>  	 */
>  	for (i = 0, j = old->shared_count, k = 0; i < old->shared_count; ++i) {
> @@ -253,7 +252,6 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
>  		else
>  			RCU_INIT_POINTER(new->shared[k++], f);
>  	}
> -	new->shared_max = old->shared_max;
>  	new->shared_count = k;
>  
>  	/* Install the new fence list, seqcount provides the barriers */