On Thu, 2012-09-27 at 15:41 +0530, viresh kumar wrote:
> On Thu, Sep 27, 2012 at 3:31 PM, Vinod Koul <[email protected]> wrote:
> > Let me try again.
> >
> > what does it take to do a platform and a PCI driver for this:
> > 1. Make DMA h/w access (read/write) platform-independent, which you have
> >    already done.
> > 2. Device registration: create two probes, or use a common probe.
> > You smartly chose the second one, BUT by creating another device.
> > If you look closely at the probe, then I would say it would be easy to
> > create a library for the probe which can be used across both the PCI and
> > platform driver probes. The probe library which initializes the driver and
> > registers with dmaengine needs a device struct, and the resources can be
> > provided by each probe.
>
> Or in other words... create three files
> - dw_dmac.c
> - dw_dmac-pltfm.c
> - dw_dmac-pci.c...
>
> Don't do anything specific to platform or pci in dw_dmac.c...
> Keep the pltfm and pci files as small as possible, and keep as much of the
> common part as possible in dw_dmac.c...
>
> Something similar is done in drivers/mmc/host/sdhci*...
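(To make the suggestion concrete, the split would look roughly like the
sketch below; struct dw_dma_chip, dw_dma_probe() and the file layout are
illustrative only, not an existing API.)

        /* dw_dmac.h -- shared by the core and the glue drivers (sketch) */
        struct dw_dma_chip {
                struct device   *dev;
                void __iomem    *regs;
                int             irq;
        };

        int dw_dma_probe(struct dw_dma_chip *chip);
        int dw_dma_remove(struct dw_dma_chip *chip);

        /* dw_dmac-pltfm.c -- thin platform glue, only gathers resources */
        #include <linux/module.h>
        #include <linux/platform_device.h>
        #include <linux/slab.h>
        #include <linux/io.h>
        #include "dw_dmac.h"

        static int dw_pltfm_probe(struct platform_device *pdev)
        {
                struct resource *mem;
                struct dw_dma_chip *chip;

                chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
                if (!chip)
                        return -ENOMEM;

                mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
                if (!mem)
                        return -EINVAL;

                chip->dev  = &pdev->dev;
                chip->regs = devm_ioremap(&pdev->dev, mem->start,
                                          resource_size(mem));
                chip->irq  = platform_get_irq(pdev, 0);
                if (!chip->regs || chip->irq < 0)
                        return -EINVAL;

                platform_set_drvdata(pdev, chip);

                /* all of the real work lives in dw_dmac.c */
                return dw_dma_probe(chip);
        }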
This approach has significant differences from the one proposed before.
First of all, it will export IP-block-related functions to the kernel
namespace. I think it is not a good idea to pollute the kernel further.
Moreover, the API dependencies restrict how the drivers can be built and
used. For example, you can't build the platform driver in and leave the
core part as a module. And there is at least one device, Intel Medfield,
where the DMA controller is exposed as a PCI device on a fake PCI bus. In
that case the initialization sequence matters, and the easier way is to
provide an independent driver for the platform device.
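For Medfield the easier arrangement is roughly the following (a simplified,
hypothetical sketch; the "dw_dmac" device name and the resource wiring are
illustrative): the PCI glue never calls into the core at all, it only
creates a child platform device which the existing platform driver then
binds to, so the two modules have no symbol-level dependency on each other.

        #include <linux/kernel.h>
        #include <linux/string.h>
        #include <linux/module.h>
        #include <linux/pci.h>
        #include <linux/platform_device.h>
        #include <linux/err.h>

        static int dw_pci_probe(struct pci_dev *pdev,
                                const struct pci_device_id *id)
        {
                struct platform_device *pd;
                struct resource res[2];
                int ret;

                ret = pcim_enable_device(pdev);
                if (ret)
                        return ret;

                /* forward the PCI BAR and IRQ to a child platform device */
                memset(res, 0, sizeof(res));
                res[0].start = pci_resource_start(pdev, 0);
                res[0].end   = pci_resource_end(pdev, 0);
                res[0].flags = IORESOURCE_MEM;
                res[1].start = pdev->irq;
                res[1].end   = pdev->irq;
                res[1].flags = IORESOURCE_IRQ;

                pd = platform_device_register_resndata(&pdev->dev, "dw_dmac",
                                                       -1, res, ARRAY_SIZE(res),
                                                       NULL, 0);
                if (IS_ERR(pd))
                        return PTR_ERR(pd);

                pci_set_drvdata(pdev, pd);
                return 0;
        }

        static void dw_pci_remove(struct pci_dev *pdev)
        {
                /* removing the child device triggers the platform ->remove() */
                platform_device_unregister(pci_get_drvdata(pdev));
        }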
--
Andy Shevchenko <[email protected]>
Intel Finland Oy
On 8 October 2012 15:47, Andy Shevchenko
<[email protected]> wrote:
> This approach has significant differences from the one proposed before.
I am afraid I didn't get your mail completely. Still, I will try based on my
understanding.
> First of all, it will export IP-block-related functions to the kernel
> namespace. I think it is not a good idea to pollute the kernel further.
So, a few routines which need to be called from probe, suspend/resume and
exit would be made non-static... Is this what you wanted to say? Hopefully,
most of the routines would still be declared static in the core file. So
IMHO, the simplicity and clarity that the new approach gives outweigh this
aspect.
> Moreover, the API dependencies restrict how the drivers can be built and
> used. For example, you can't build the platform driver in and leave the
> core part as a module.
Obviously... It should be done this way only. What if the core driver isn't
inserted and the platform's probe has already been called? That's why the
depends-on should not allow you to make the core part a module on its own...
I couldn't get the issue completely. What's the problem with this approach?
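The kind of Kconfig wiring meant here would be roughly (symbol names are
illustrative only):

        config DW_DMAC_CORE
                tristate "Synopsys DesignWare AHB DMA core"

        config DW_DMAC
                tristate "Synopsys DesignWare AHB DMA platform driver"
                # 'depends on' a tristate: if the core is =m, the glue can be
                # at most =m as well, so a built-in glue without the core is
                # never a legal configuration.
                depends on DW_DMAC_CORE

        config DW_DMAC_PCI
                tristate "Synopsys DesignWare AHB DMA PCI driver"
                depends on DW_DMAC_CORE && PCI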
> And there is at least one device, Intel Medfield, where the DMA controller
> is exposed as a PCI device on a fake PCI bus. In that case the
> initialization sequence matters, and the easier way is to provide an
> independent driver for the platform device.
Couldn't get this one :(
--
viresh
On Mon, 2012-10-08 at 16:19 +0530, Viresh Kumar wrote:
> On 8 October 2012 15:47, Andy Shevchenko
> <[email protected]> wrote:
> > This approach has significant differences from the one proposed before.
>
> I am afraid I didn't get your mail completely. Still, I will try based on my
> understanding.
Ok, let me try again.
> > First of all, it will export IP-block-related functions to the kernel
> > namespace. I think it is not a good idea to pollute the kernel further.
>
> So, a few routines which need to be called from probe, suspend/resume and
> exit would be made non-static... Is this what you wanted to say? Hopefully,
> most of the routines would still be declared static in the core file. So
> IMHO, the simplicity and clarity that the new approach gives outweigh this
> aspect.
The probe (core part), remove, shutdown, suspend, and resume will be
exported: practically every generic function that will be used by drivers
outside of the core. Why do we need them in the kernel namespace?
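In other words the core would have to grow something like the following
(a sketch with illustrative names; struct dw_dma_chip stands in for whatever
context structure the core ends up taking):

        #include <linux/module.h>

        struct dw_dma_chip;     /* as in the earlier sketch */

        int dw_dma_probe(struct dw_dma_chip *chip)
        {
                /* ... full controller setup, dmaengine registration ... */
                return 0;
        }
        EXPORT_SYMBOL_GPL(dw_dma_probe);

        int dw_dma_suspend(struct dw_dma_chip *chip)
        {
                /* ... */
                return 0;
        }
        EXPORT_SYMBOL_GPL(dw_dma_suspend);

        /* ...and the same for remove, shutdown and resume. */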
> > Moreover, the API dependencies restrict how the drivers can be built and
> > used. For example, you can't build the platform driver in and leave the
> > core part as a module.
> Obviously... It should be done this way only.
I cannot agree. I don't see any reason why this is the only way.
> What if the core driver isn't inserted
> and the platform's probe has already been called?
Nothing will happen in my case until the user loads all the necessary bits.
> That's why the depends-on should not
> allow you to make the core part a module on its own...
> I couldn't get the issue completely. What's the problem with this approach?
Why do we need to do this if we can avoid it? I see nothing to prevent us
from building the parts as modules, built-in, or in a mixed style.
In other words, one approach provides a weak dependency between the pieces
of code and the other a hard dependency.
--
Andy Shevchenko <[email protected]>
Intel Finland Oy
On 8 October 2012 18:16, Andy Shevchenko
<[email protected]> wrote:
> On Mon, 2012-10-08 at 16:19 +0530, Viresh Kumar wrote:
>> > First of all, it will export IP-block-related functions to the kernel
>> > namespace. I think it is not a good idea to pollute the kernel further.
>>
>> So, a few routines which need to be called from probe, suspend/resume and
>> exit would be made non-static... Is this what you wanted to say? Hopefully,
>> most of the routines would still be declared static in the core file. So
>> IMHO, the simplicity and clarity that the new approach gives outweigh this
>> aspect.
> The probe (core part), remove, shutdown, suspend, and resume will be
> exported: practically every generic function that will be used by drivers
> outside of the core. Why do we need them in the kernel namespace?
And these are generic kernel framework routines, not routines that expose
the hardware. Even then, exporting them isn't that bad.
I do agree we must have as few routines in the kernel namespace as possible,
but not at the price of over-complicating stuff which isn't required.
>> What if the core driver isn't inserted
>> and the platform's probe has already been called?
> Nothing will happen in my case until the user loads all the necessary bits.
>
>> That's why the depends-on should not
>> allow you to make the core part a module on its own...
>> I couldn't get the issue completely. What's the problem with this approach?
>
> Why do we need to do this if we can avoid it? I see nothing to prevent us
> from building the parts as modules, built-in, or in a mixed style.
> In other words, one approach provides a weak dependency between the pieces
> of code and the other a hard dependency.
I agree that some parts of your approach might have a few advantages, but it
is actually adding more complexity without much need for it. Logically
speaking, we never had two devices for the same DMA controller. We are adding
them just to have PCI over platform, which would mean the system becomes more
and more complex.
So, at run time...
- there will be two device-driver binding loops: once for PCI and then for
  platform
- in suspend/resume, two devices will go into suspend instead of one
- there might be other frameworks in the kernel which work on a
  per-struct-device basis; they will get affected too
- you have a larger image size in the PCI case, as you compile the platform
  part too
Just try to think from this perspective... we don't have two hardware devices
in the system. Ideally speaking, there should be one struct device associated
with one hardware device.
@Arnd/Vinod: Can you guys throw some more light here on the
advantages/disadvantages of both approaches?
--
viresh
On Mon, 2012-10-08 at 18:57 +0530, Viresh Kumar wrote:
> I agree that some parts of your approach might have a few advantages, but it
> is actually adding more complexity without much need for it. Logically
> speaking, we never had two devices for the same DMA controller. We are adding
> them just to have PCI over platform, which would mean the system becomes more
> and more complex.
>
> So, at run time...
> - there will be two device-driver binding loops: once for PCI and then for
>   platform
> - in suspend/resume, two devices will go into suspend instead of one
> - there might be other frameworks in the kernel which work on a
>   per-struct-device basis; they will get affected too
> - you have a larger image size in the PCI case, as you compile the platform
>   part too
>
> Just try to think from this perspective... we don't have two hardware devices
> in the system. Ideally speaking, there should be one struct device associated
> with one hardware device.
>
> @Arnd/Vinod: Can you guys throw some more light here on the
> advantages/disadvantages of both approaches?
I am worried about those two. Runtime PM handlers in the case of PCI devices
make life a lot easier; I am not sure what support will be there for
platform devices on other systems.
Also, the next version of this h/w on our systems brings subtle changes,
so having them separate seemed a better idea to me.
--
Vinod Koul
Intel Corp.
On Mon, 2012-10-08 at 18:57 +0530, Viresh Kumar wrote:
> On 8 October 2012 18:16, Andy Shevchenko
> <[email protected]> wrote:
> > On Mon, 2012-10-08 at 16:19 +0530, Viresh Kumar wrote:
> >> > First of all, it will export IP-block-related functions to the kernel
> >> > namespace. I think it is not a good idea to pollute the kernel further.
> >>
> >> So, a few routines which need to be called from probe, suspend/resume and
> >> exit would be made non-static... Is this what you wanted to say? Hopefully,
> >> most of the routines would still be declared static in the core file. So
> >> IMHO, the simplicity and clarity that the new approach gives outweigh this
> >> aspect.
> > The probe (core part), remove, shutdown, suspend, and resume will be
> > exported: practically every generic function that will be used by drivers
> > outside of the core. Why do we need them in the kernel namespace?
>
> And these are generic kernel framework routines, not routines that expose
> the hardware. Even then, exporting them isn't that bad.
>
> I do agree we must have as few routines in the kernel namespace as possible,
> but not at the price of over-complicating stuff which isn't required.
I didn't see the complexity you are talking about.
> >> That's why the depends-on should not
> >> allow you to make the core part a module on its own...
> >> I couldn't get the issue completely. What's the problem with this approach?
> >
> > Why do we need to do this if we can avoid it? I see nothing to prevent us
> > from building the parts as modules, built-in, or in a mixed style.
> > In other words, one approach provides a weak dependency between the pieces
> > of code and the other a hard dependency.
>
> I agree that some parts of your approach might have a few advantages, but it
> is actually adding more complexity without much need for it.
Sorry, I still don't get where the complexity is and in what part.
> Logically speaking, we never had two devices for the same
> DMA controller.
What do you mean by this? Do we ever have the same IP block on different
buses at the same time? I think it's potentially possible.
> We are adding them just to have PCI over platform, which
> would mean the system becomes more and more complex.
> So, at run time...
> - there will be two device-driver binding loops: once for PCI and then for
>   platform
Is this a real problem? We have dozens of drivers on modern hardware; some
of them, by the way, use the proposed approach.
> - in suspend/resume, two devices will go into suspend instead of one
True. But it keeps open the possibility of the same device being attached
to different buses simultaneously.
> - there might be other frameworks in the kernel which work on a
>   per-struct-device basis; they will get affected too
I'm wondering if there are any that affect us. I think you might know if
there is any example of it (in history?).
> - you have a larger image size in the PCI case, as you compile the platform
>   part too
How much? I didn't check this, but I believe it will consume one page at
most.
> Just try to think from this perspective... we don't have two hardware devices
> in the system. Ideally speaking, there should be one struct device associated
> with one hardware device.
I'm sorry, but I'm still not fully convinced by those arguments. To me it
looks like both approaches have good and bad aspects at a similar level.
Actually, sdhci does not seem like a good example to me. In the mmc code they
already have a robust architecture (there is a host controller, there is a
core part, there is a card part and so on...) and plenty of ->ops
structures. We have a simpler driver.
--
Andy Shevchenko <[email protected]>
Intel Finland Oy
On 30 October 2012 20:35, Andy Shevchenko
<[email protected]> wrote:
> On Mon, 2012-10-08 at 18:57 +0530, Viresh Kumar wrote:
>> I do agree we must have as few routines in the kernel namespace as possible,
>> but not at the price of over-complicating stuff which isn't required.
>
> I didn't see the complexity you are talking about.
The complexity isn't in the sense that we need a genius to understand the
code :) but in creating a device layer which doesn't exist in hardware.
>> Logically speaking, we never had two devices for the same
>> DMA controller.
>
> What do you mean by this? Do we ever have the same IP block on different
> buses at the same time? I think it's potentially possible.
One device is one instance of an IP. With your approach we will have two
device structures for a single physical IP.
>> We are adding them just to have PCI over platform, which
>> would mean the system becomes more and more complex.
>
>> So, at run time...
>> - there will be two device-driver binding loops: once for PCI and then for
>>   platform
>
> Is this a real problem? We have dozens of drivers on modern hardware; some
> of them, by the way, use the proposed approach.
I am repeating this (I probably said it earlier; if not, sorry): if something
already exists in the kernel, it doesn't mean that it is perfect and that
others should blindly follow it.
The loops aren't a problem as such, just unnecessary activity.
>> - in suspend/resume, two devices will go into suspend instead of one
>
> True. But it keeps open the possibility of the same device being attached
> to different buses simultaneously.
I don't see why a single physical block would be connected to AMBA and PCI
at the same time. And obviously that's not the case right now, so better to
leave that out of the current argument :)
>> - there might be other frameworks in the kernel which work on a
>>   per-struct-device basis; they will get affected too
>
> I'm wondering if there are any that affect us. I think you might know if
> there is any example of it (in history?).
There are many frameworks which work per device: regulator, clk, RTPM, etc.
They will see a hardware hierarchy which doesn't exist.
>> - you have a larger image size in the PCI case, as you compile the platform
>>   part too
>
> How much? I didn't check this, but I believe it will consume one page at
> most.
Whatever amount it is. We can't waste it.
> Actually, sdhci does not seem like a good example to me. In the mmc code they
> already have a robust architecture (there is a host controller, there is a
> core part, there is a card part and so on...) and plenty of ->ops
> structures. We have a simpler driver.
I am not talking about SD/MMC as a whole. SDHCI targets a specific hardware
configuration (manufactured by multiple vendors). It is very similar to our case
here.
I don't agree with your approach, so it's a NACK from me.
However, if Arnd/Vinod or you can give me some strong points about
your approach, I can consider it again :)
--
viresh
On Tue, Oct 30, 2012 at 5:19 PM, Viresh Kumar <[email protected]> wrote:
> On 30 October 2012 20:35, Andy Shevchenko
> <[email protected]> wrote:
>> On Mon, 2012-10-08 at 18:57 +0530, Viresh Kumar wrote:
> I don't agree with your approach, so it's a NACK from me.
Point taken.
> However, if Arnd/Vinod or you can give me some strong points about
> your approach, I can consider it again :)
Fair enough.
--
With Best Regards,
Andy Shevchenko