From: Oded Gabbay
Date: Thu, 28 Oct 2021 15:00:54 +0300
Subject: Re: Habanalabs Open-Source TPC LLVM compiler and SynapseAI Core library
To: Daniel Vetter
Cc: Greg Kroah-Hartman, Linus Torvalds, Dave Airlie, Jason Gunthorpe,
    "Linux-Kernel@Vger. Kernel. Org"
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Oct 28, 2021 at 10:38 AM Daniel Vetter wrote:
>
> On Wed, Oct 27, 2021 at 8:53 AM Oded Gabbay wrote:
> >
> > On Fri, Sep 10, 2021 at 10:58 AM Greg Kroah-Hartman wrote:
> > >
> > > On Fri, Sep 10, 2021 at 10:26:56AM +0300, Oded Gabbay wrote:
> > > > Hi Greg,
> > > >
> > > > Following our conversations a couple of months ago, I'm happy to
> > > > tell you that Habanalabs has open-sourced its TPC (Tensor
> > > > Processing Core) LLVM compiler, which is a fork of the LLVM
> > > > open-source project.
> > > >
> > > > The project can be found on the Habanalabs GitHub website at:
> > > > https://github.com/HabanaAI/tpc_llvm
> > > >
> > > > There is a companion guide on how to write TPC kernels at:
> > > > https://docs.habana.ai/en/latest/TPC_User_Guide/TPC_User_Guide.html
> > >
> > > That's great news, thanks for pushing for this and releasing it all!
> > >
> > > greg k-h
> >
> > Hi Greg,
> > I would like to update that yesterday AWS launched new EC2 instances
> > powered by the Gaudi accelerators. They are now generally available,
> > and anyone can launch an instance with those devices.
> > Therefore, one can now take the upstream driver, hl-thunk, the TPC
> > LLVM compiler and SynapseAI Core, and execute compute kernels on the
> > Gaudi devices. I have verified this to be working with the driver in
> > kernel 5.15-rc6.
>
> Nice!
>
> Now that the llvm part is open, any plans to upstream that?

Years ago, AFAIK, there were internal discussions about doing that, and
the decision was to pursue that goal somewhere in the future. I'm not
sure how far in the future they were talking about...
Having said that, I'm not at all involved on the compiler front, so I
might have outdated information.
If you want, I can connect you with the compiler group leader so you can
discuss it with him.

Oded

> When amd upstreamed their backend, there was the hope that llvm would
> grow some competent support for gpu-style accelerator ISAs, but since
> amd's is still the only backend that has ever been merged, for years
> now it's been stuck in a chicken-and-egg situation: upstream llvm
> complains about why the amd backend has all these special
> requirements, and other accel backends (at least the gpu-style simd
> ones) have no good path into upstream llvm since a lot of the
> infrastructure and understanding isn't there.
>
> Getting a 2nd accel backend into upstream llvm would be a huge step
> towards fixing this mess. As far as I know, the only other open accel
> backend based on llvm is intel's igc (for intel gpus), and that one is
> such a massive fork, and has been out of upstream llvm for so long,
> that it's not going to land anytime soon, if ever (in its current form
> at least).
>
> Once we do have an accel backend in upstream llvm we can finally start
> building a real stack here, so whoever is first will gain quite some
> advantage, I think.
>
> Cheers, Daniel
>
> > We are still missing the networking parts, but I hope to start
> > upstreaming them in the coming months.
> >
> > Thanks,
> > Oded
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
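
[Editorial footnote, not part of the original mail: as a concrete
illustration of the stack discussed above (the upstream habanalabs
driver, hl-thunk, the TPC LLVM compiler and SynapseAI Core), here is a
minimal sketch of the very first step on the kernel side: checking that
the upstream driver has bound a Gaudi device and exposed its character
node. The node path /dev/hl0 is an assumption made for illustration
only; real work submission goes through hl-thunk and the SynapseAI Core
runtime mentioned in the thread, not raw file I/O.]

/*
 * Minimal sketch, not taken from the thread: probe for a device node
 * exposed by the upstream habanalabs driver. The path /dev/hl0 is an
 * assumed name for the first accelerator; submission of actual compute
 * kernels is done through hl-thunk / SynapseAI Core and is not shown.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *node = "/dev/hl0";	/* assumed device node name */
	int fd = open(node, O_RDWR);

	if (fd < 0) {
		perror(node);		/* driver not loaded or no device */
		return 1;
	}

	printf("%s is present, habanalabs driver is bound\n", node);
	close(fd);
	return 0;
}

[Built with any C compiler on the Gaudi host, this only confirms the
driver/device pairing the thread refers to; everything beyond that
(memory mapping, command submission) lives in the userspace stack.]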