From: Daniel Vetter
Date: Mon, 17 May 2021 11:12:44 +0200
Subject: Re: [PATCH v3 00/14] Driver of Intel(R) Gaussian & Neural Accelerator
To: Greg Kroah-Hartman
Cc: Arnd Bergmann, Dave Airlie, Maciej Kwapulinski, Jonathan Corbet,
    Derek Kiernan, Dragan Cvetic, Andy Shevchenko,
    Linux Kernel Mailing List, "open list:DOCUMENTATION", DRI Development
References: <20210513110040.2268-1-maciej.kwapulinski@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, May 17, 2021 at 10:55 AM Greg Kroah-Hartman wrote:
>
> On Mon, May 17, 2021 at 10:49:09AM +0200, Daniel Vetter wrote:
> > On Mon, May 17, 2021 at 10:00 AM Greg Kroah-Hartman wrote:
> > >
> > > On Mon, May 17, 2021 at 09:40:53AM +0200, Daniel Vetter wrote:
> > > > On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
> > > > > On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman wrote:
> > > > > > On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
> > > > > > > Dear kernel maintainers,
> > > > > > >
> > > > > > > This submission is a kernel driver to support Intel(R) Gaussian & Neural
> > > > > > > Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor
> > > > > > > available on multiple Intel platforms. AI developers and users can offload
> > > > > > > continuous inference workloads to an Intel(R) GNA device in order to free
> > > > > > > processor resources and save power. Noise reduction and speech recognition
> > > > > > > are examples of the workloads Intel(R) GNA handles, though its usage is not
> > > > > > > limited to those two.
> > > > > >
> > > > > > How does this compare with the "nnpi" driver being proposed here:
> > > > > > https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
> > > > > >
> > > > > > Please work with those developers to share code, userspace API and
> > > > > > tools. Having the community review two totally different APIs and
> > > > > > drivers for the same type of functionality from the same company is
> > > > > > totally wasteful of our time and energy.
> > > > >
> > > > > Agreed, but I think we should go further than this and work towards a
> > > > > subsystem across companies for machine learning and neural network
> > > > > accelerators, for both inferencing and training.
> > > >
> > > > We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu,
> > > > or think of the G as in General, not Graphics.
> > > >
> > > > > We have support for Intel habanalabs hardware in drivers/misc, and there are
> > > > > countless hardware solutions out of tree that would hopefully go the same
> > > > > way with an upstream submission and open source user space, including
> > > > >
> > > > > - Intel/Mobileye EyeQ
> > > > > - Intel/Movidius Keembay
> > > > > - Nvidia NVDLA
> > > > > - Gyrfalcon Lightspeeur
> > > > > - Apple Neural Engine
> > > > > - Google TPU
> > > > > - Arm Ethos
> > > > >
> > > > > plus many more that are somewhat less likely to gain fully open source
> > > > > driver stacks.
> > > >
> > > > We also had this entire discussion 2 years ago with habanalabs. The
> > > > hang-up is that drivers/gpu folks require fully open source userspace,
> > > > including the compiler and anything else you need to actually use the
> > > > chip. Greg doesn't, he's happy if all he has is the runtime library
> > > > with some tests.
> > > >
> > > > I guess we're really going to beat this horse into pulp ... oh well.
> > >
> > > All you need is a library; what you write on top of that is always
> > > application-specific, so how can I ask for "more"?
> >
> > This is like accepting a new CPU port where all you require is that the
> > libc port is open source, but the CPU compiler is totally fine as a blob
> > (doable with LLVM now being supported). It makes no sense at all, at
> > least to people who have worked with accelerators like this before.
> >
> > We are not requiring that applications are open. We're only requiring
> > that at least one of the compilers you need (no need to open the fully
> > optimized one with all the magic sauce) to create any kind of
> > application is open, because without that you can't use the device, you
> > can't analyze the stack, and you have no idea at all what exactly it is
> > you're merging. With these devices, the uapi visible in include/uapi is
> > the smallest part of the interface exposed to userspace.
>
> Ok, sorry, I was not aware that the habanalabs compiler was not
> available to all under an open source license. All I was trying to
> enforce was that the library to use the kernel API was open so that
> anyone could use it. Trying to enforce compiler requirements like this
> might feel like a bit of a reach, as the CPU on the hardware really
> doesn't fall under the license of the operating system running on this
> CPU over here :)

Experience says that if you don't, you can forget about supporting your
drivers/subsystem long-term. At best you're stuck with a per-device
fragmented mess that vendors might or might not support. This has
nothing to do with GPL licensing; it's about making sure you can do
proper engineering, support and review of the driver stack.
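(To make the include/uapi point above a bit more concrete, here's a
purely made-up sketch of what such a uapi typically boils down to; none
of these names come from GNA, habanalabs or any other real driver. The
kernel only ever sees an opaque blob of commands, and everything
interesting about that blob is defined by the userspace compiler that
produced it, which is why reviewing the header alone tells you almost
nothing about what you're merging.)

  /*
   * Hypothetical accelerator uapi, for illustration only.
   * The only operation is "run this compiled blob".
   */
  #include <linux/ioctl.h>
  #include <linux/types.h>

  struct xpu_exec {
          __u64 blob_ptr;   /* user pointer to a compiled command stream */
          __u32 blob_size;  /* size of that blob in bytes */
          __u32 flags;      /* meaning defined by the runtime library */
  };

  /* Submit a compiled workload; the kernel treats the blob as opaque. */
  #define XPU_IOCTL_EXEC _IOW('X', 0x01, struct xpu_exec)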
At least in the GPU world we're already making it rather clear that
running blobby userspace is fine with us (as long as it uses the exact
same uapi as the truly open stack; no exceptions/hacks/abuse are
supported). And yes, vendors don't like it. But they also don't like
that they have to open source their kernel drivers, or the runtime
library. Lots of background chats over the years, and a very clear line
in the sand, help to get there, and also make sure that the vendors who
got here don't return to the old closed source ways they love so much.

Anyway, we had all these discussions 2 years ago and nothing has
changed (well, on the GPU side we did meanwhile get ARM officially on
board with a fully open stack, paid for by them; other discussions are
still ongoing). I just wanted to reiterate that if we really cared
about having a proper accel subsystem, there are people who've been
doing this for decades.

-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch