From: Brendan Higgins
Date: Tue, 22 Aug 2017 23:12:43 -0700
Subject: Re: [RFC v1 0/4] ipmi_bmc: framework for IPMI on BMCs
To: Patrick Williams
Cc: Corey Minyard, Benjamin Fair, Cédric Le Goater, Joel Stanley,
    Andrew Jeffery, openipmi-developer@lists.sourceforge.net,
    OpenBMC Maillist, Linux Kernel Mailing List

On Mon, Aug 14, 2017 at 3:28 PM, Brendan Higgins wrote:
> On Mon, Aug 14, 2017 at 7:03 PM, Patrick Williams wrote:
>> On Mon, Aug 07, 2017 at 08:52:57PM -0700, Brendan Higgins wrote:
>>> Currently, OpenBMC handles all IPMI message routing and handling in
>>> userland; the existing drivers simply provide a file interface for the
>>> hardware on the device. In this patchset, we propose a common file
>>> interface to be shared by all IPMI hardware interfaces, but also a
>>> framework for implementing handlers at the kernel level, similar to how
>>> the existing OpenIPMI framework supports both kernel users as well as a
>>> misc device file interface.
>>
>> Brendan,
>>
>> Can you expand on why this is a good thing from an OpenBMC perspective?
>
> Sure, so in addition to the individual handlers, this does introduce a
> common file system interface for BMC-side IPMI hardware interfaces. I
> think that is pretty straightforward.
>
> Corey and I are still exploring the handlers. My original intention was
> not to replace any of the handlers implemented in userspace. My motivating
> use case is for some OEM commands that would be easier to implement inside
> of the kernel.
>
> I was hoping to send out an overview of that, but the internet in my hotel
> sucks, so I will do it the next time I get decent internet access. :-P

I was able to get this out on Monday on the OpenBMC mailing lists:
https://lists.ozlabs.org/pipermail/openbmc/2017-August/008861.html

>
> In any case, Corey raised some interesting points on the subject; the most
> recent round I have not responded to yet.
>
>> We have a pretty significant set of IPMI providers that run in the
>> userspace daemon(s) and I can't picture more than a very small subset
>> even being possible to run in kernel space without userspace assistance.
>
> Like I said, I have an example of some OEM commands. Also, as I have said,
> my intention is not to replace any of the userland stuff. That being said,
> I am not sure the approach we have taken so far is the best when it comes
> to some of the new protocols we are looking at, like IPMB and MCTP. Having
> some consistency in where we draw these interface boundaries would be
> nice; so maybe that means rethinking some of that. I don't know, but it
> sounds like Corey has already tried some of this stuff out on his own
> BMC-side implementation.
>
> Regardless, I think there is a lot of interesting conversation to be had.
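
To make the file-interface side of this concrete, below is a rough userspace
sketch of what servicing IPMI requests through a BMC-side character device
looks like, which is the kind of interface the proposed framework would make
common across hardware back ends. This is only a sketch: the device path and
the raw message framing (netfn/lun byte, command byte, completion code) are
illustrative assumptions, not the ABI proposed in this series.

/*
 * Minimal sketch of a userspace IPMI responder on a BMC. It assumes a
 * character device that hands us raw request bytes and accepts raw
 * response bytes; the device path and framing are illustrative
 * assumptions only, not the ABI proposed in this series.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device node exposed by a BMC-side IPMI driver. */
	int fd = open("/dev/ipmi-bmc-host", O_RDWR);
	uint8_t req[256], rsp[3];
	ssize_t len;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Assumed request framing: [netfn << 2 | lun] [cmd] [data...] */
	while ((len = read(fd, req, sizeof(req))) >= 2) {
		rsp[0] = req[0] | 0x04;	/* response netfn = request netfn + 1 */
		rsp[1] = req[1];	/* same command */
		rsp[2] = 0xc1;		/* completion code: invalid command */
		if (write(fd, rsp, sizeof(rsp)) < 0)
			perror("write");
	}

	close(fd);
	return 0;
}

The in-kernel handler framework discussed above would let kernel code claim,
say, a single OEM netfn/cmd pair before a message ever reaches a loop like
this, while everything else keeps flowing to the userspace daemons unchanged.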
>
>> We also already have an implementation of a RMCP+ daemon that can, and
>> does, share most of its providers with the host-side daemon.
>
> That's great. Like I said, my original intention was not to rewrite any of
> that. Corey had a good point about this in my thread with him. I made a
> proposal of what to do there.
>
>>
>> --
>> Patrick Williams
>
> By the way, Corey suggested that we have a BoF session at the Linux
> Plumbers Conference, so I set one up:
> https://linuxplumbersconf.org/2017/ocw/proposals/4723
> I highly encourage anyone who is interested in this discussion to attend.
>
> Thanks!