Subject: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

this message came up on debian-arm and i figured that it is worthwhile
endeavouring to get across to people why device tree cannot and will
not ever be the solution it was believed to be, in the ARM world.

[just a quick note to david who asked this question on the debian-arm
mailing list: any chance you could use replies with plaintext in
future? converting from HTML to text proved rather awkward and
burdensome, requiring considerable editing. the generally-accepted
formatting rules for international technical mailing lists are
plaintext only and 7-bit characters]

On Sun, May 5, 2013 at 11:14 AM, David Goodenough
<[email protected]> wrote:
> On Sunday 05 May 2013, Luke Kenneth Casson Leighton wrote:

>> > And I have a question: as the Debian installer takes the arch armhf in
>> > charge, do you think a standard install' from a netboot image will work
>> > ?
>>
>
>> this has been on my list for a loooong time. as with *all* debian
>> installer images however you are hampered by the fact that there is no
>> BIOS - at all - on ARM devices - and therefore it is impossible to
>> have a "one size fits all" debian installer.

> I wonder if the device tree is the answer here. If the box comes with
> a DT or one is available on the web then the installer could read it and
> know what to install. That and the armmp kernel should solve the problem.

you'd think so, and it's a very good question, to which the answer
could have been and was predicted to be "not a snowball in hell's
chance", even before work started on device tree, and turns out to
*be* "not a snowball in hell's chance" which i believe people are now
beginning to learn, based on the ultra-low adoption rate of device
tree in the ARM world (side-note: [*0]).

in the past, i've written at some length as to why this is the case,
however the weighting given to my opinions on linux kernel strategic
decision-making is negligible, so as one associate of mine once
wisely said, "you just gotta let the train wreck happen".

device tree was designed to take the burden off of the linux kernel
due to proliferation of platform-specific hard-coding of support for
peripherals. however it was designed ***WITHOUT*** its advocates
having a full grasp of the sheer overwhelming diversity of the
platforms.

specifically i am referring to linus torvalds' complete lack of
understanding of the ARM linux kernel world, as his primary experience
is with x86. in his mind, and the minds of those people who do not
understand how ARM-based boxes are built and linux brought up on them,
*surely* it cannot be all that complicated, *surely* it cannot be as
bad as it is, right?

what they're completely missing is the following:

* the x86 world revolves around standards such as ACPI, BIOSes and
general-purpose dynamic buses.
* ACPI normalises every single piece of hardware from the perspective
of most low-level peripherals.
* the BIOS also helps in that normalisation. DOS INT33 is the
classic one i remember.
* the general-purpose dynamic buses include:
- USB and its speed variants (self-describing peripherals)
- PCI and its derivatives (self-describing peripherals)
- SATA and its speed variants (self-describing peripherals)

exceptions to the above include i2c (unusual, and taken care of by
i2c-sensors, which uses good heuristics to "probe" devices from
userspace) and the ISA bus and its derivatives such as Compact Flash
and IDE. even PCMCIA got sufficient advances to auto-identify devices
from userspace at runtime.
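
(for the curious, that userspace heuristic probing boils down to
something like this - a minimal sketch, not the actual sensors-detect
code; the bus number and address range are assumptions:)

/* minimal sketch of heuristic i2c probing from userspace via /dev/i2c-N;
 * not the real sensors-detect logic, just the general idea. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
        int addr, fd = open("/dev/i2c-0", O_RDWR);  /* bus number assumed */

        if (fd < 0)
                return 1;

        for (addr = 0x03; addr <= 0x77; addr++) {
                unsigned char byte;

                if (ioctl(fd, I2C_SLAVE, addr) < 0)
                        continue;
                /* a one-byte read only succeeds if something ACKs this address */
                if (read(fd, &byte, 1) == 1)
                        printf("possible device at 0x%02x\n", addr);
        }
        close(fd);
        return 0;
}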

so as a general rule, supporting a new x86-based piece of hardware is
a piece of piss. get datasheet or reverse-engineer, drop it in, it's
got BIOS, ACPI, USB, PCIe, SATA, wow big deal, job done. also as a
general rule, hardware that conforms to x86-motherboard-like layouts -
such as the various powerpc architectures - is along the same lines.

so here, device tree is a real easy thing to add, and to some extent a
"nice-to-have". i.e. it's not really essential to have device tree on
top of something where 99% of the peripherals can describe themselves
dynamically over their bus architecture when they're plugged in!

now let's look at the ARM world.

* is there a BIOS? no. so all the boot-up procedures including
ultra-low-level stuff like DDR3 RAM timings initialisation, which is
normally the job of the BIOS - must be taken care of BY YOU (usually
in u-boot) and it must be done SPECIFICALLY CUSTOMISED EACH AND EVERY
SINGLE TIME FOR EVERY SINGLE SPECIFIC HARDWARE COMBINATION.

* is there ACPI present? no. so anything related to power
management, fans (if there are any), temperature detection (if there
is any), all of that must be taken care of BY YOU.

* what about the devices? here's where it becomes absolute hell on
earth as far as attempting to "streamline" the linux kernel into a
"one size fits all" monolithic package.

the classic example i give here is the HTC Universal, which was a
device that, after 3 years of dedicated reverse-engineering, finally
had fully-working hardware with the exception of write to its on-board
NAND. the reason for the complexity is in the hardware design, where
not even 110 GPIO pins of the PXA270 were enough to cover all of the
peripherals, so they had to use a custom ASIC with an additional 64
GPIO pins. it turned out that *that* wasn't enough either, so in
desperation the designers used the 16 GPIO pins of the Ericsson 3G
Radio ROM, in order to do basic things like switch on the camera flash
LED.

the point is: each device that's designed using an ARM processor is
COMPLETELY AND UTTERLY DIFFERENT from any other device in the world.

when i say "completely and utterly different", i am not just talking
about the processor, i am not just talking about the GPIO, or even the
buses: i'm talking about the sensors, the power-up mechanisms, the
startup procedures - everything. one device uses GPIO pin 32 for
powering up and resetting a USB hub peripheral, yet for another device
that exact same GPIO pin is used not even as a GPIO but as an
alternate multiplexed function e.g. as RS232 TX pin!

additionally, there are complexities in the bring-up procedure for
devices, where a hardware revision has made a mistake (or made too
many cost savings), and by the skin of their teeth the kernel
developers work out a bring-up procedure. the example i give here is
the one of the HTC Blueangel, where the PXA processor's GPIO was used
(directly) to power the device. unfortunately, there simply wasn't
enough current. but that's ok! why? because what they did was:

* bring up the 3.3v GPIO (power to the GSM chip)
* bring up the 2nd 3.3v GPIO
* pull up the GPIO pin connected to the GSM chip's "reset" line
* wait 5 milliseconds
* **PULL EVERYTHING BACK DOWN AGAIN**
* wait 1 millisecond
* bring up the 1st 3.3v GPIO (again)
* wait 10 milliseconds
* bring up the 2nd 3.3v GPIO (again)
* wait 5 milliseconds
* pull up the "RESET" GPIO
* wait 10 milliseconds
* pull the "RESET" GPIO down
* ***AGAIN*** do the reset GPIO.

this procedure was clearly designed to put enough power into the
capacitors of the on-board GSM chip so that it could start up (and
crash) then try again (crash again), and finally have enough power to
not drain itself beyond its capacity.
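
(transcribed into board-support code, that dance looks roughly like
this - a sketch only, with invented gpio numbers; the real
htc-blueangel board file differs:)

/* sketch of the blueangel-style GSM bring-up dance as board code.
 * gpio numbers are invented for illustration, and the gpios are
 * assumed to have been requested elsewhere in the board file. */
#include <linux/gpio.h>
#include <linux/delay.h>

#define GPIO_GSM_3V3_A   40   /* hypothetical */
#define GPIO_GSM_3V3_B   41   /* hypothetical */
#define GPIO_GSM_RESET   42   /* hypothetical */

static void blueangel_gsm_power_up(void)
{
        gpio_set_value(GPIO_GSM_3V3_A, 1);
        gpio_set_value(GPIO_GSM_3V3_B, 1);
        gpio_set_value(GPIO_GSM_RESET, 1);
        mdelay(5);

        /* pull everything back down again */
        gpio_set_value(GPIO_GSM_RESET, 0);
        gpio_set_value(GPIO_GSM_3V3_B, 0);
        gpio_set_value(GPIO_GSM_3V3_A, 0);
        mdelay(1);

        gpio_set_value(GPIO_GSM_3V3_A, 1);
        mdelay(10);
        gpio_set_value(GPIO_GSM_3V3_B, 1);
        mdelay(5);

        gpio_set_value(GPIO_GSM_RESET, 1);
        mdelay(10);
        gpio_set_value(GPIO_GSM_RESET, 0);

        /* and do the reset pulse once more */
        gpio_set_value(GPIO_GSM_RESET, 1);
        mdelay(10);
        gpio_set_value(GPIO_GSM_RESET, 0);
}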

... the pointed question is: how the bloody hell are you going to
represent *that* in "device tree"??? and why - even if it was
possible to do - should you burden other platforms with such an insane
boot-up procedure even if they *did* use the exact same chipset?

... but these devices, because they are in a huge market with
ever-changing prices and are simply overwhelmed with choice for
low-level I2C, I2S devices etc, each made in different countries, each
with their NDAs, simply don't use the same peripheral chips. and even
if they did, they certainly don't use them in the same way!

again, the example that i give here is of the Philips UDA1381 which
was quite a common sound IC used in Compaq iPAQ PDAs (designed by
HTC). so, of course, when HTC did the Himalaya, they used the same
sound IC.

.... did they power it up in the exact same way across both devices?

no.

did they even use the same *interfaces* across both devices?

no.

why not?

because the UDA1381 can be used *either* in I2S mode *or* in SPI mode,
and one [completely independent] team used one mode, and the other
team used the other.

so when it came to looking at the existing uda1381.c source code, and
trying to share that code across both platforms, could i do that?

no.
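
(to illustrate what supporting both control buses would have required:
the kind of structure that *can* be shared is a bus-agnostic codec core
with thin i2c/spi glue over regmap. a rough sketch of that pattern -
emphatically not the actual uda1381 driver, and all names are invented:)

/* sketch of the "one codec core, two bus front-ends" pattern via regmap.
 * names here are hypothetical; this is not the actual uda1381 driver. */
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/spi/spi.h>
#include <linux/regmap.h>
#include <linux/err.h>

static const struct regmap_config mycodec_regmap_cfg = {
        .reg_bits = 8,
        .val_bits = 16,
};

/* common core: everything in here is bus-agnostic, using only
 * regmap_read/regmap_write - it never knows whether it sits on i2c or spi */
static int mycodec_core_probe(struct device *dev, struct regmap *map)
{
        return 0;
}

static int mycodec_i2c_probe(struct i2c_client *i2c,
                             const struct i2c_device_id *id)
{
        struct regmap *map = devm_regmap_init_i2c(i2c, &mycodec_regmap_cfg);

        if (IS_ERR(map))
                return PTR_ERR(map);
        return mycodec_core_probe(&i2c->dev, map);
}

static int mycodec_spi_probe(struct spi_device *spi)
{
        struct regmap *map = devm_regmap_init_spi(spi, &mycodec_regmap_cfg);

        if (IS_ERR(map))
                return PTR_ERR(map);
        return mycodec_core_probe(&spi->dev, map);
}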

then - as if that wasn't enough - you also have the diversity amongst
the ARM chips themselves. if you look for example at the history of
the development of the S3C6410, then the S5PC100 and the S5PC110, it
just doesn't make any sense... *until* you are aware that my associate
was the global director of R&D at samsung and he instigated the
procedure of having two overlapping teams, one who does development
and the other does testing; they swap every 8-9 months.

basically what happened was that the S3C6410 used the FIMD 3D GPU, and
so did the 800mhz S5PC100. by the time the S5PC110 came out (which was
nothing more than a jump to 1ghz) it had been flipped over to a
*completely* different 3D engine! changing the 3D GPU mid-flow in a
CPU series!

and that's just within *one* of the fabless semiconductor companies,
and you have to bear in mind that there are *several hundred* ARM
licensees. when this topic was last raised, someone mentioned that
ARM attempted to standardise on dynamic peripheral publication on the
internal AXI/AHB bus, so that things like device tree or udev could
read it. what happened? some company failed to implement the
standard properly, and it was game over right there.
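
(as far as i recall, the mechanism in question was along the lines of
the AMBA/PrimeCell peripheral-id and component-id words at the top of
each peripheral's 4KB window - roughly what drivers/amba/bus.c reads
when matching drivers. a sketch of the idea, not the actual code:)

/* sketch: identifying a self-describing AMBA/PrimeCell peripheral by
 * reading the id words at the top of its 4KB region.  the caller
 * supplies an ioremapped base. */
#include <linux/io.h>
#include <linux/types.h>

static int identify_primecell(void __iomem *base)
{
        u32 pid = 0, cid = 0;
        int i;

        for (i = 0; i < 4; i++) {
                pid |= (readl(base + 0xfe0 + 4 * i) & 0xff) << (i * 8);
                cid |= (readl(base + 0xff0 + 4 * i) & 0xff) << (i * 8);
        }

        if (cid != 0xb105f00d)          /* PrimeCell component-id magic */
                return -1;              /* not (properly) self-describing */

        /* pid encodes part number, designer and revision: enough to
         * match a driver without any per-board table */
        return pid & 0xfff;             /* part number */
}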


are you beginning to see the sheer scope of the problem, here? can
you see now why russell is so completely overwhelmed? are you
beginning to get a picture as to why device tree can never solve the
problem?

the best that device tree does in the ARM world is add an extra burden
for device development, because there is so little that can actually
be shared between disparate hardware platforms - so much so that it is
utterly hopeless and pointless for a time-pressured product designer
to even consider going down that route when they *know* it's not going
to be of benefit for them.

you also have to bear in mind that the SoC vendors don't really talk
to each other. you also have to bear in mind that they are usually
overwhelmed by the ignorance of the factories and OEMs that use their
SoCs - a situation that's not helped in many cases by their failure to
provide adequate documentation [but if you're selling 50 million SoCs
a year through android and the SoC is, at $7.50, a small part of the
BOM, why would you care about or even answer requests for adequate
documentation??] - so it's often the SoC vendors that have to write
the linux kernel source code themselves. MStar Semi take this to its
logical GPL-violating extreme by even preventing and prohibiting
*everyone* from gaining access to even the *product* designs. if they
like your idea, they will design it for you - in total secrecy - from
start to finish. and if not, you f*** off.

so at worst, device tree becomes a burden on the product designers
when using ARM processors, because of the sheer overwhelming
diversity. at best... no, there is no "best". device tree just moves
the problem from the linux kernel source code into the device tree
specifications.

possible solutions:

* can a BIOS help? no, because you will never get disparate ARM SoC
licensees to agree to use it. their SoCs are all highly-specialised.
and, why would they add extra layers of complexity and function calls
in a memory-constrained, CPU-constrained and power-constrained
specialist environment?

* can ACPI help? no, because power management is done in a completely
different way, and low-level peripherals are typically connected
directly to the chip and managed directly. ACPI is an x86 solution.

* can splitting CPUs into northbridge architectural designs help? no,
because again that's an x86 solution. and it usually adds 2 watts to
the power budget to drive the 128-bit or 256-bit-wide bus between the
two chips. which is insane, considering that some ARM SoCs are 0.1 to
1 watt. Not Gonna Happen. plus, it doesn't help you with the
existing mess.

* can hardware standardisation help? yes [*1] but that has its own
challenges, as well as additional costs which need to bring some sort
of strategic benefit before companies [or users] will adopt it.

* can chip-level standardisation help? no - ARM tried that already,
one SoC vendor screwed it up (failed to implement the tables properly)
and that was the end of the standard. plus, it doesn't help with the
existing SoCs nor the NDA situation wrt all the sensors and
peripherals nor the proliferation and localisation and
contact-relationships associated with these hard-wired sensors and
peripherals.

* what about splitting up the linux kernel into "core" and
"peripheral"? that might do it. something along the lines of OSKit -
except done right, and along lines that are acceptable to and
initiated by the key linux kernel developers. the "core" could be as
little as 2% of the entire linux kernel code-base: lib/*, arch/*,
kernel/* and so on. the rest (peripherals) done as git submodules.
whilst it doesn't *actually* solve the problem, it reduces and
clarifies the development scope on each side of the fence,
so-to-speak, and, critically, separates out the discussions and focus
for each part. perhaps even go further and have forks of the "core"
which deal *exclusively* with each architecture.

i don't know. i don't know - honestly - what the solution is here,
and perhaps there really isn't one - certainly not at the software
level, not with the sheer overwhelming hardware diversity. linus
"ordered" the ARM representatives at that 2007 conference to "go away,
get themselves organised, come back with only one representative" and
that really says it all - the level of naivete and lack of
understanding of the scope, and the fact that each ARM processor
*really is* a completely different processor. they might as well have
completely different instruction sets, and it's just damn lucky that
they don't.

so i'm attempting to solve this from a different angle: the hardware
level. drastic and i mean *drastic* simplification of the permitted
interfaces, thus allowing hardware to become a series of dynamic buses
plus a series of about 10 device tree descriptions.

so, in answer to your question, david: *yes* device-tree could help,
if the hardware was simple enough and was standardised, and that's
what i'm working towards, but the EOMA standards are the *only* ones
which are simple enough and have taken this problem properly into
consideration. if you don't use hardware standards then the answer
is, sadly i feel, that device tree has absolutely no place in the ARM
world and will only cause harm rather than good [*2]

thoughts on a postcard....

l.

[*0] allwinner use "script.fex". it's an alternative to device-tree
and it's a hell of a lot better. INI config file format.
standardised across *all* hardware that uses the A10. it even allows
setting of the pin mux functions. _and_ there is a GUI application
which helps ODMs to customise the linux kernel for their clients...
WITHOUT RECOMPILING THE LINUX KERNEL AT ALL. the designers of
devicetree could learn a lot from what allwinner have achieved.

[*1] http://elinux.org/Embedded_Open_Modular_Architecture/EOMA-68

[*2] that's not quite true. if there were multiple ARM-based systems
that conformed to x86 hardware standards such as ITX motherboard
layout, those ARM systems would actually work and benefit from
device-tree. if aarch64 starts to get put regularly into x86-style
hardware form-factors, that would work. but right now, and for the
foreseeable future, any ARM SoC that's used for "embedded" single-board
purposes, flat-out forget it. and even when aarch64 comes out, there
will still be plenty of cheap 32-bit ARM systems out there.


2013-05-06 04:15:38

by Robert Hancock

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On 05/05/2013 06:27 AM, Luke Kenneth Casson Leighton wrote:

<snip>

> and that's just within *one* of the fabless semiconductor companies,
> and you have to bear in mind that there are *several hundred* ARM
> licensees. when this topic was last raised, someone mentioned that
> ARM attempted to standardise on dynamic peripheral publication on the
> internal AXI/AHB bus, so that things like device tree or udev could
> read it. what happened? some company failed to implement the
> standard properly, and it was game over right there.

Admittedly without knowing much background on the situation, that seems
like a bit of a cop-out. In the PC world there are a bunch of devices
that don't follow standards properly (ACPI, PCI, etc.) and we add quirks
or workarounds and move on with life - people don't decide to abandon
the standard because of it.

>
>
> are you beginning to see the sheer scope of the problem, here? can
> you see now why russell is so completely overwhelmed? are you
> beginning to get a picture as to why device tree can never solve the
> problem?

I think part of the answer has to come from the source of all of these
problems: there seems to be this culture in the ARM world (and, well,
the embedded world generally) where the HW designers don't care what
kind of mess they cause the people who have to write and maintain device
drivers and kernels that run on the devices. In the PC world designers
can't really do many crazy things as the people doing drivers will tell
them "What is this crap? There's no way we can make this work properly
in Windows". In the embedded world the attitude is more like "Hey, it's
Linux, it's open, we know you can put in a bunch of crazy hacks to make
this mess we created work reasonably". So the designers have no reason
to make things behave in a standardized and/or sane manner.

Obviously this is a longer-term solution and won't help with existing
devices, but in the long run device designers may need to realize the
kind of mess they're creating for the poor software people and try to
achieve some more standardization and device discoverability. Given the
market dominance of Linux in many parts of the embedded world, one
thinks this should be achievable.


Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Mon, May 6, 2013 at 5:09 AM, Robert Hancock <[email protected]> wrote:

>> and that's just within *one* of the fabless semiconductor companies,
>> and you have to bear in mind that there are *several hundred* ARM
>> licensees. when this topic was last raised, someone mentioned that
>> ARM attempted to standardise on dynamic peripheral publication on the
>> internal AXI/AHB bus, so that things like device tree or udev could
>> read it. what happened? some company failed to implement the
>> standard properly, and it was game over right there.
>
>
> Admittedly without knowing much background on the situation, that seems like
> a bit of a cop-out.

i don't know much of the background either - i may have the details /
reasons wrong. it's
probably more along the lines of "as a general solution, because so
few people adopted it plus because those who did adopt it got it wrong
plus because it was too little too late everybody gave up"

> In the PC world there are a bunch of devices that don't
> follow standards properly (ACPI, PCI, etc.) and we add quirks or workarounds
> and move on with life - people don't decide to abandon the standard because
> of it.

in this case i believe it could be more to do with it being added
some 15 years *after* there had been over 500 ARM licensees who had
already created huge numbers of CPUs, each of which is highly
specialised but happens to share an instruction set.

>
>>
>>
>> are you beginning to see the sheer scope of the problem, here? can
>> you see now why russell is so completely overwhelmed? are you
>> beginning to get a picture as to why device tree can never solve the
>> problem?
>
>
> I think part of the answer has to come from the source of all of these
> problems: there seems to be this culture in the ARM world (and, well, the
> embedded world generally) where the HW designers don't care what kind of
> mess they cause the people who have to write and maintain device drivers and
> kernels that run on the devices.

in a word... yes. i think you summed it up nicely - that there's
simply too much diversity for linux to take into account all on its
own. and u-boot. and the pre-boot loaders [spl, part of u-boot].

but the question you have to ask is: why should the HW designers even
care? they're creating an embedded specialist system, they picked the
most cost-effective and most available solution to them - why _should_
they care?

and the answer is: they don't have to. tough luck. get over it, mr
software engineer. hardware cost reductions take priority.

> In the PC world designers can't really do
> many crazy things as the people doing drivers will tell them "What is this
> crap? There's no way we can make this work properly in Windows". In the
> embedded world the attitude is more like "Hey, it's Linux, it's open, we
> know you can put in a bunch of crazy hacks to make this mess we created work
> reasonably". So the designers have no reason to make things behave in a
> standardized and/or sane manner.
>
> Obviously this is a longer-term solution

what is? what solution? you mean device tree? i'm still waiting for
someone to put a comprehensive case together which says that device
tree is a solution *at all*.

yes, sure, as someone on #arm-netbooks pointed out, device tree has
helped them, personally, when it comes to understanding linux kernel
source code and for porting of drivers to other hardware, because the
GPIO to control the signals is separated out from the source code.
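
(concretely, that separation looks something like this on the driver
side - a minimal sketch with a hypothetical property name:)

/* sketch: the driver asks its DT node for a named gpio instead of
 * hard-coding a pin number.  the property name is hypothetical. */
#include <linux/errno.h>
#include <linux/gpio.h>
#include <linux/of.h>
#include <linux/of_gpio.h>

static int example_get_reset_gpio(struct device_node *np)
{
        int gpio = of_get_named_gpio(np, "reset-gpios", 0);

        if (!gpio_is_valid(gpio))
                return -ENODEV;

        /* board A and board B can wire reset to different pins without
         * touching this driver - only their .dts files differ */
        return gpio;
}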

but that only helps in cases where that one specific piece of hardware
is re-used: we're talking THOUSANDS if not TENS of thousands of
disparate pieces of hardware [small sensors with only 3 pins, all the
way up to GPIO extender ICs with hundreds], where the majority of
those device drivers never see the light of day due to GPL
violations, burdensome patch submission procedures and so on.

and in such cases, where the chances of code re-use are zero to none,
what benefit does device tree offer wrt code re-use? none!

> and won't help with existing
> devices, but in the long run device designers may need to realize the kind
> of mess they're creating for the poor software people and try to achieve
> some more standardization and device discoverability.

the economics of market forces don't work that way.
profit-maximising companies are pathologically and *LEGALLY* bound to
enact their articles of incorporation. so you'd need to show them that
it would hurt their profits to continue the way that they are going.

fortunately, the linux kernel isn't bound by the same corporate
rules, so if there *was* a solution, it would be possible to apply
that solution and thus move everyone forward, kicking and screaming
over a long period and turn things around.

> Given the market
> dominance of Linux in many parts of the embedded world, one thinks this
> should be achievable.

working against that is the fact that there are only a few SoC
companies with the expertise, they work in secret *without* consulting
the linux kernel developers [as we well know], and the ODMs and
factories simply run with whatever-the-SoC-vendors-put-their-way.

in fact, the factories usually have zero expertise: as far as they're
concerned they might as well be making socks or jumpers rather than
laptops or tablets and in some cases the owners of the factories
really *do* have their staff making socks, jumpers or handbags as well
as laptops or tablets [*1]

anyway, my point is that the problem which device tree set out to
solve - that of the diversity in the ARM world - is an almost
unsolvable problem in software. i won't say "completely unsolvable".

device tree solves *some* problems, and generally makes things nice
in certain cases, but it doesn't *actually* solve the problem it was
originally designed to solve.

i'm still waiting to hear from someone - anyone - who recognises
this, and i'm also still waiting to hear from people who may have an
alternative solution or process which actually and truly helps solve
the problem. preferably on-list so that it can be discussed and
properly reviewed.

l.

[*1] one day the factory owner woke up [probably around christmas five
years ago, when the UK's retail sector cartels Next, Top Shop etc.
complained about Asda's George range of cheap clothing prices and
demanded a monopolies review, causing huge shipping containers to sit
outside the UK in international waters for months] and went "my
cash is all in clothing! i must diversify! i know, i'll buy some PCB
printing equipment and some plastics moulding stamps, and make
electronics appliances!". this tells you everything you need to know
about e.g. the problems that the KDE Plasma Vivaldi Spark Tablet has
been having, as well as why there is a 98% GPL violations rate on
low-cost tablet hardware.

2013-05-06 08:40:07

by Olliver Schinagl

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

Note: I'm not qualified nor important or anything really to be part of
this discussion or the mud-slinging it may turn into, but I do find some
flaws in the reasoning here that, if not pointed out, may get grossly
overlooked.

On 06-05-13 06:09, Robert Hancock wrote:
> On 05/05/2013 06:27 AM, Luke Kenneth Casson Leighton wrote:
>> this message came up on debian-arm and i figured that it is worthwhile
>> endeavouring to get across to people why device tree cannot and will
>> not ever be the solution it was believed to be, in the ARM world.
>>
>> [just a quick note to david who asked this question on the debian-arm
>> mailing list: any chance you could use replies with plaintext in
>> future? converting from HTML to text proved rather awkward and
>> burdensome, requiring considerable editing. the generally-accepted
>> formatting rules for international technical mailing lists are
>> plaintext only and 7-bit characters]
>>
>> On Sun, May 5, 2013 at 11:14 AM, David Goodenough
>> <[email protected]> wrote:
>>> On Sunday 05 May 2013, Luke Kenneth Casson Leighton wrote:

<snip>

>> * is there a BIOS? no. so all the boot-up procedures including
>> ultra-low-level stuff like DDR3 RAM timings initialisation, which is
>> normally the job of the BIOS - must be taken care of BY YOU (usually
>> in u-boot) and it must be done SPECIFICALLY CUSTOMISED EACH AND EVERY
>> SINGLE TIME FOR EVERY SINGLE SPECIFIC HARDWARE COMBINATION.
Isn't DDR init on ARM done by SPL/U-Boot? I've become quite accustomed
to this for the A10. Right now we have a dedicated u-boot SPL for each
memory configuration, as the values are hardcoded. A long-term plan for
this platform is to possibly parse the DT, or read those settings from
somewhere, and configure them dynamically, but in the end memory init is
done by SPL, and to me it makes perfect sense to do that there.

So yes, every single ARM SoC/platform will need its own dedicated
SPL/U-Boot. Kinda like a BIOS? But if you want to boot from LAN (I think
that's what this discussion was about?) you need U-Boot loaded by SPL
anyway. Can you boot a generic linux install (say from CD) on ARM?
Usually no: the on-board boot loader only knows a very specific boot
path, often flash, MMC etc. Those need to be able to bring up the
memory too (SPL), so you'll need some specific glue for your platform
anyhow. I'm not sure DT was supposed to solve that problem? If that
were the case, was DT meant to replace the BIOS too?
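
To make that concrete, the per-board hard-coding looks roughly like
this today (a sketch only; all names and values below are invented, and
the real sunxi/U-Boot SPL code is organised differently):

/* sketch of why each board currently needs its own SPL build: the DRAM
 * timings are a hard-coded per-board table. */
struct dram_timing {
        unsigned int clk_mhz;
        unsigned int cas_latency;
        unsigned int t_rcd;
        unsigned int t_rp;
};

/* hypothetical controller-programming routine, provided elsewhere */
void dram_controller_configure(const struct dram_timing *t);

/* one of these per board, baked into the SPL binary at build time */
static const struct dram_timing board_a_timing = {
        .clk_mhz = 408, .cas_latency = 6, .t_rcd = 6, .t_rp = 6,
};

void spl_dram_init(void)
{
        /* a DT-driven SPL would look these values up at runtime instead */
        dram_controller_configure(&board_a_timing);
}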

>>
>> * is there ACPI present? no. so anything related to power
>> management, fans (if there are any), temperature detection (if there
>> is any), all of that must be taken care of BY YOU.
Again, I only know about 1 specific SoC, but for the A10 you have/need
an external Power Management IC - basically a poor man's ACPI, if you
must. If you don't have this luxury, yes, you'll need a special driver.
But how is that different from not having DT? You still need to write
'something' for this - a driver etc.
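
For what it's worth, the 'something' on the consumer side can be as
small as this (a sketch; the supply name is an assumption, and the
PMIC-specific driver lives elsewhere):

/* sketch: a peripheral driver consuming PMIC power through the
 * regulator framework; the supply name "vcc" is an assumption. */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

static int example_power_on(struct device *dev)
{
        struct regulator *vcc = devm_regulator_get(dev, "vcc");

        if (IS_ERR(vcc))
                return PTR_ERR(vcc);

        /* which PMIC provides "vcc", and over which bus, is described
         * per board (e.g. in DT), not in this driver */
        return regulator_enable(vcc);
}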
>>
>> * what about the devices? here's where it becomes absolute hell on
>> earth as far as attempting to "streamline" the linux kernel into a
>> "one size fits all" monolithic package.
Well that's where DT tries to help, doesn't it.
>>
>> the classic example i give here is the HTC Universal, which was a
>> device that, after 3 years of dedicated reverse-engineering, finally
>> had fully-working hardware with the exception of write to its on-board
>> NAND. the reason for the complexity is in the hardware design, where
>> not even 110 GPIO pins of the PXA270 were enough to cover all of the
>> peripherals, so they had to use a custom ASIC with an additional 64
>> GPIO pins. it turned out that *that* wasn't enough either, so in
>> desperation the designers used the 16 GPIO pins of the Ericsson 3G
>> Radio ROM, in order to do basic things like switch on the camera flash
>> LED.
So, no offence, but you have some shoddily engineered device that can't
fit into this DT solution, and thus DT must be broken? Though with
proper drivers and a proper PINCTRL setup this may actually even work :p
>>
>> the point is: each device that's designed using an ARM processor is
>> COMPLETELY AND UTTERLY DIFFERENT from any other device in the world.
>>
>> when i say "completely and utterly different", i am not just talking
>> about the processor, i am not just talking about the GPIO, or even the
>> buses: i'm talking about the sensors, the power-up mechanisms, the
>> startup procedures - everything. one device uses GPIO pin 32 for
>> powering up and resetting a USB hub peripheral, yet for another device
>> that exact same GPIO pin is used not even as a GPIO but as an
>> alternate multiplexed function e.g. as RS232 TX pin!
I think PINCTRL tries to solve this, in combination with DT?
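Roughly like this on the driver side (a sketch; the per-board pin
groups themselves would live in DT):

/* sketch: a driver claiming its "default" pin state through pinctrl,
 * so the gpio-vs-uart-tx muxing decision lives in the per-board DT. */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/pinctrl/consumer.h>

static int example_claim_pins(struct device *dev)
{
        struct pinctrl *p = devm_pinctrl_get_select_default(dev);

        return IS_ERR(p) ? PTR_ERR(p) : 0;
}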
>>
>> additionally, there are complexities in the bring-up procedure for
>> devices, where a hardware revision has made a mistake (or made too
>> many cost savings), and by the skin of their teeth the kernel
>> developers work out a bring-up procedure. the example i give here is
>> the one of the HTC Blueangel, where the PXA processor's GPIO was used
>> (directly) to power the device. unfortunately, there simply wasn't
>> enough current. but that's ok! why? because what they did was:
>>
>> * bring up the 3.3v GPIO (power to the GSM chip)
>> * bring up the 2nd 3.3v GPIO
>> * pull the GPIO pin connected to the GSM "reset" chip
>> * wait 5 milliseconds
>> * **PULL EVERYTHING BACK DOWN AGAIN**
>> * wait 1 millisecond
>> * bring up the 1st 3.3v GPIO (again)
>> * wait 10 milliseconds
>> * bring up the 2nd 3.3v GPIO (again)
>> * wait 5 milliseconds
>> * pull up the "RESET" GPIO
>> * wait 10 milliseconds
>> * pull the "RESET" GPIO down
>> * ***AGAIN*** do the reset GPIO.
>>
>> this procedure was clearly designed to put enough power into the
>> capacitors of the on-board GSM chip so that it could start up (and
>> crash) then try again (crash again), and finally have enough power to
>> not drain itself beyond its capacity.
So again, a horribly shoddy hardware design.
I can only see this as: 'one kernel to rule them all' won't apply here,
and some extra hacks will be required, simply because there are hacks
required on the hardware side.
>>
>> ... the pointed question is: how the bloody hell are you going to
>> represent *that* in "device tree"??? and why - even if it was
>> possible to do - should you burden other platforms with such an insane
>> boot-up procedure even if they *did* use the exact same chipset?
You are not supposed to; again, it's a horribly hackish device and thus
can only be served by ugly hacks.
>>
>> ... but these devices, because they are in a huge market with
>> ever-changing prices and are simply overwhelmed with choice for
>> low-level I2C, I2S devices etc, each made in different countries, each
>> with their NDAs, simply don't use the same peripheral chips. and even
>> if they did, they certainly don't use them in the same way!
>>
>> again, the example that i give here is of the Philips UDA1381 which
>> was quite a common sound IC used in Compaq iPAQ PDAs (designed by
>> HTC). so, of course, when HTC did the Himalaya, they used the same
>> sound IC.
>>
>> .... did they power it up in the exact same way across both devices?
>>
>> no.
>>
>> did they even use the same *interfaces* across both devices?
>>
>> no.
>>
>> why not?
>>
>> because the UDA1381 can be used *either* in I2S mode *or* in SPI mode,
>> and one [completely independent] team used one mode, and the other
>> team used the other.
Afaik there are several ICs that work that way, and there are drivers
written for them that way. I haven't seen this being applied in DT yet,
but I'm sure it can reasonably easily be adapted to DT.
>>
>> so when it came to looking at the existing uda1381.c source code, and
>> trying to share that code across both platforms, could i do that?
>>
>> no.
Why not? And if not, is it because the driver is written badly? If it
needs a rewrite because it was written without taking into account that
it can interface either in SPI mode or in I2C mode, then that will have
to be done.

<snip>

>> are you beginning to see the sheer scope of the problem, here? can
>> you see now why russell is so completely overwhelmed? are you
>> beginning to get a picture as to why device tree can never solve the
>> problem?
>
> I think part of the answer has to come from the source of all of these
> problems: there seems to be this culture in the ARM world (and, well,
> the embedded world generally) where the HW designers don't care what
> kind of mess they cause the people who have to write and maintain device
> drivers and kernels that run on the devices. In the PC world designers
> can't really do many crazy things as the people doing drivers will tell
> them "What is this crap? There's no way we can make this work properly
> in Windows". In the embedded world the attitude is more like "Hey, it's
> Linux, it's open, we know you can put in a bunch of crazy hacks to make
> this mess we created work reasonably". So the designers have no reason
> to make things behave in a standardized and/or sane manner.
This will level itself out in the end, I suppose, once a proper
infrastructure is in place (working DT, reasonably well adopted,
drivers rewritten/fixed etc). Once that is all in place, engineers will
hopefully think twice. They have two options: either adapt their design
(within reason and cost) to more closely match the 'one kernel to rule
them all' approach and reap its benefits, or apply hacks like the HTC
example above and then be responsible for hacking around in the code to
get it to work. Their choice eventually.

>
> Obviously this is a longer-term solution and won't help with existing
> devices, but in the long run device designers may need to realize the
> kind of mess they're creating for the poor software people and try to
> achieve some more standardization and device discoverability. Given the
> market dominance of Linux in many parts of the embedded world, one
> thinks this should be achievable.
There we go: long term, I don't think DT is half as bad, and in time
we'll see whether it was really bad or not bad at all.
>
>>
>> the best that device tree does in the ARM world is add an extra burden
>> for device development, because there is so little that can actually
>> be shared between disparate hardware platforms - so much so that it is
>> utterly hopeless and pointless for a time-pressured product designer
>> to even consider going down that route when they *know* it's not going
>> to be of benefit for them.
That's like saying: why bother using the Linux kernel when it's
pointless to use, and they might as well use a much simpler in-house
designed kernel, or just a bare-metal system. If they can see the
benefits of using the Linux kernel, then surely they must see the
benefit of possibly (not forcibly - by choice) using DT. Personally, I
think in the long run DT will be the better choice. Being able to use
the same or a similar kernel for different products is still a win in
my opinion. And again, if this isn't important (now - who knows about
later) they have the choice to hack things around and do as they
please, with all the pros and cons of that later.

>>
>> you also have to bear in mind that the SoC vendors don't really talk
>> to each other. you also have to bear in mind that they are usually
>> overwhelmed by the ignorance of the factories and OEMs that use their
>> SoCs - a situation that's not helped in many cases by their failure to
>> provide adequate documentation [but if you're selling 50 million SoCs
>> a year through android and the SoC is, at $7.50, a small part of the
>> BOM, why would you care about or even answer requests for adequate
>> documentation??] - so it's often the SoC vendors that have to write
>> the linux kernel source code themselves. MStar Semi take this to its
>> logical GPL-violating extreme by even preventing and prohibiting
>> *everyone* from gaining access to even the *product* designs. if they
>> like your idea, they will design it for you - in total secrecy - from
>> start to finish. and if not, you f*** off.
Besides the obvious violation here (does gpl-violations.org know about
this?), that doesn't make it right. Yes, they violate the GPL, don't
provide docs etc. But think of this long term: once we have better ARM
SoC support with DT, why would they bother doing all the work, over and
over (in secret) again? Wouldn't it also be much easier for them to
follow mainline?

Simple example: I see few touch screen drivers in the kernel now. I
also see several 'drivers' for them out in the wild. Some are reference
drivers with few adaptations, some are rewrites, and most hack in
support for their chip. I'm sure that's partially the reason why they do
a lot in-house, to get things working.

Now if said touch screen driver is in mainline, with flexible DT
support, all you have to do to get this TS driver working is add a
definition for it to your DT and it 'just works (tm)'. Why would they
then bother redoing/hacking things together? Isn't it cheaper for them
to follow mainline in that sense?
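
Something like this on the mainline driver side (a sketch with an
invented compatible string, not any real driver):

/* sketch of the mainline-driver side of "just add a DT node": the
 * driver advertises the compatible strings it binds to; the per-board
 * .dts then merely instantiates one.  names here are invented. */
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/of.h>

static const struct of_device_id example_ts_of_match[] = {
        { .compatible = "vendor,example-ts" },          /* hypothetical */
        { }
};
MODULE_DEVICE_TABLE(of, example_ts_of_match);

static int example_ts_probe(struct i2c_client *client,
                            const struct i2c_device_id *id)
{
        /* interrupt line, reset gpio, axis inversion etc. all come from
         * the DT node rather than from a board file */
        return 0;
}

static const struct i2c_device_id example_ts_id[] = {
        { "example-ts", 0 },
        { }
};

static struct i2c_driver example_ts_driver = {
        .driver = {
                .name           = "example-ts",
                .of_match_table = example_ts_of_match,
        },
        .probe          = example_ts_probe,
        .id_table       = example_ts_id,
};
module_i2c_driver(example_ts_driver);

MODULE_LICENSE("GPL");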

>>
>> [*0] allwinner use "script.fex". it's an alternative to device-tree
>> and it's a hell of a lot better. INI config file format.
>> standardised across *all* hardware that uses the A10.
IMO fex is just a basic parallel development to DT. Is it better? Is
it similar? I would think so. I mean, it's a definition file that gets
loaded at boot and parsed by drivers - basically what DT does as well,
albeit in a more complicated (and future-proof?) way.

>> _and_ there is a GUI application
>> which helps ODMs to customise the linux kernel for their clients...
As for the GUI application, who says that's not possible for DT? It
just needs someone to write it, I guess :) Besides, that still doesn't
mean anything. Lazy engineers who don't know what they are doing will
write crappy stuff, be it DT or 'fex'. I've seen completely wrong fex
files myself (tablets with SATA ports enabled even though there is no
SATA port, to name 'an' example). Maybe because DT is a little harder,
it'll force them to think before implementing.

>> WITHOUT RECOMPILING THE LINUX KERNEL AT ALL.
>> it even allows
>> setting of the pin mux functions. the designers of
>> devicetree could learn a lot from what allwinner have achieved.
And you don't need to recompile the kernel at all when using DT, do
you? That would totally defeat its purpose.



Again, I know very little about all this, but I do feel DT is at least
one step (maybe out of several) in the right direction.

2013-05-06 09:04:20

by James Dutton

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

The real problem with any new system is that the hardware is designed
first, and then it is a challenge for the software developer to get the
software to boot on the new hardware.
The nirvana here would be to take the original hardware circuit
diagram, and process it to automatically create a config file.
The config file would then be used by the software to configure itself
to boot the new hardware.
I think the device tree config file is going some way to help here.
X86 is lucky in that there are config standards out there, and people
are actually using them. PCI, USB, ACPI.
ARM is different and does not have this yet.
Also, so long as there is some way to uniquely identify the hardware,
for example with a model number, quirks can be written into the
software to handle special cases, and the config file can identify which
quirks need to be used.
As more and more hardware manufacturers report their quirks and device
drivers to the mainline kernel, the closer we will get to an automated
process for booting new hardware.
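
As a sketch of what that model-number-plus-quirks idea already looks like
in kernel terms, using the real of_machine_is_compatible() helper (the
board names and quirk flags below are invented for illustration):

/*
 * Sketch of per-board quirks selected at runtime from the device tree's
 * root "compatible" string, via the real of_machine_is_compatible()
 * helper.  The board names and quirk flags are invented for illustration.
 */
#include <linux/of.h>

#define QUIRK_BROKEN_SDIO_IRQ   (1 << 0)
#define QUIRK_NEEDS_USB_DELAY   (1 << 1)

struct board_quirk {
        const char   *compatible;  /* matches the root node's compatible */
        unsigned int  quirks;
};

static const struct board_quirk board_quirks[] = {
        { "vendor,tablet-v1", QUIRK_BROKEN_SDIO_IRQ },
        { "vendor,phone-v2",  QUIRK_NEEDS_USB_DELAY },
        { /* sentinel */ }
};

static unsigned int lookup_board_quirks(void)
{
        const struct board_quirk *q;

        for (q = board_quirks; q->compatible; q++)
                if (of_machine_is_compatible(q->compatible))
                        return q->quirks;
        return 0;
}
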
There are already efforts around ARM multi-platform, where a single
kernel can boot on multiple ARM platforms.
I suppose that ARM multi-platform will never cover all ARM CPUs, but
the more it covers, the easier and cheaper it will be to work with new
hardware on ARM.
Say, if a manufacturer has 20 different models of mobile phone out
there, all based on ARM.
Currently, they need a different kernel for each one. If they could
use the same kernel across all 20 models, that would reduce their
development costs considerably.



On 6 May 2013 07:53, Luke Kenneth Casson Leighton <[email protected]> wrote:
> On Mon, May 6, 2013 at 5:09 AM, Robert Hancock <[email protected]> wrote:
>
>>> and that's just within *one* of the fabless semiconductor companies,
>>> and you have to bear in mind that there are *several hundred* ARM
>>> licensees. when this topic was last raised, someone mentioned that
>>> ARM attempted to standardise on dynamic peripheral publication on the
>>> internal AXI/AHB bus, so that things like device tree or udev could
>>> read it. what happened? some company failed to implement the
>>> standard properly, and it was game over right there.
>>
>>
>> Admittedly without knowing much background on the situation, that seems like
>> a bit of a cop-out.
>
> i don't either - i may have the details / reasons wrong. it's
> probably more along the lines of "as a general solution, because so
> few people adopted it plus because those who did adopt it got it wrong
> plus because it was too little too late everybody gave up"
>
>> In the PC world there are a bunch of devices that don't
>> follow standards properly (ACPI, PCI, etc.) and we add quirks or workarounds
>> and move on with life - people don't decide to abandon the standard because
>> of it.
>
> in this case i believe it could be more to do with it being added
> some 15 years *after* there had been over 500+ ARM licensees who had
> already created huge numbers of CPUs, each of which is highly
> specialised but happens to share an instruction set.
>
>>
>>>
>>>
>>> are you beginning to see the sheer scope of the problem, here? can
>>> you see now why russell is so completely overwhelmed? are you
>>> beginning to get a picture as to why device tree can never solve the
>>> problem?
>>
>>
>> I think part of the answer has to come from the source of all of these
>> problems: there seems to be this culture in the ARM world (and, well, the
>> embedded world generally) where the HW designers don't care what kind of
>> mess they cause the people who have to write and maintain device drivers and
>> kernels that run on the devices.
>
> in a word... yes. i think you summed it up nicely - that there's
> simply too much diversity for linux to take into account all on its
> own. and u-boot. and the pre-boot loaders [spl, part of u-boot].
>
> but the question you have to ask is: why should the HW designers even
> care? they're creating an embedded specialist system, they picked the
> most cost-effective and most available solution to them - why _should_
> they care?
>
> and the answer is: they don't have to. tough luck. get over it, mr
> software engineer. hardware cost reductions take priority.
>
>> In the PC world designers can't really do
>> many crazy things as the people doing drivers will tell them "What is this
>> crap? There's no way we can make this work properly in Windows". In the
>> embedded world the attitude is more like "Hey, it's Linux, it's open, we
>> know you can put in a bunch of crazy hacks to make this mess we created work
>> reasonably". So the designers have no reason to make things behave in a
>> standardized and/or sane manner.
>>
>> Obviously this is a longer-term solution
>
> what is? what solution? you mean device tree? i'm still waiting for
> someone to put a comprehensive case together which says that device
> tree is a solution *at all*.
>
> yes, sure, as someone on #arm-netbooks pointed out, device tree has
> helped them, personally, when it comes to understanding linux kernel
> source code and for porting of drivers to other hardware, because the
> GPIO to control the signals is separated out from the source code.
>
> but that only helps in cases where that one specific piece of hardware
> is re-used: we're talking THOUSANDS if not TENS of thousands of
> disparate pieces of hardware [small sensors with only 3 pins, all the
> way up to GPIO extender ICs with hundreds], where the majority of
> those device drivers never see the light of day due either to GPL
> violations, burdensome patch submission procedures and so on.
>
> and in such cases, where the chances of code re-use are zero to none,
> what benefit does device tree offer wrt code re-use? none!
>
>> and won't help with existing
>> devices, but in the long run device designers may need to realize the kind
>> of mess they're creating for the poor software people and try to achieve
>> some more standardization and device discoverability.
>
> the economics of market forces don't work that way.
> profit-maximising companies are pathologically and *LEGALLY* bound to
> enact the articles of incorporation. so you'd need to show them that
> it would hurt their profits to continue the way that they are going.
>
> fortunately, the linux kernel isn't bound by the same corporate
> rules, so if there *was* a solution, it would be possible to apply
> that solution and thus move everyone forward, kicking and screaming
> over a long period and turn things around.
>
>> Given the market
>> dominance of Linux in many parts of the embedded world, one thinks this
>> should be achievable.
>
> working against that is the fact that there are only a few SoC
> companies with the expertise, they work in secret *without* consulting
> the linux kernel developers [as we well know], and the ODMs and
> factories simply run with whatever-the-SoC-vendors-put-their-way.
>
> in fact, the factories usually have zero expertise: as far as they're
> concerned they might as well be making socks or jumpers rather than
> laptops or tablets and in some cases the owners of the factories
> really *do* have their staff making socks, jumpers or handbags as well
> as laptops or tablets [*1]
>
> anyway, my point is that the problem which device tree set out to
> solve - that of the diversity in the ARM world - is an almost
> unsolvable problem in software. i won't say "completely unsolvable".
>
> device tree solves *some* problems, and generally makes things nice
> in certain cases, but it doesn't *actually* solve the problem it was
> originally designed to solve.
>
> i'm still waiting to hear from someone - anyone - who recognises
> this, and i'm also still waiting to hear from people who may have an
> alternative solution or process which actually and truly helps solve
> the problem. preferably on-list so that it can be discussed and
> properly reviewed.
>
> l.
>
> [*1] one day the factory owner woke up, probably around christmas five
> years ago when the UK's retail sector cartels Next, Top Shop etc.
> complained about Asda's George range of cheap clothing prices and
> demanded a monopolies review, causing huge shipping containers to sit
> outside of the UK in international waters for months] and went "my
> cash is all in clothing! i must diversify! i know, i'll buy some PCB
> printing equipment and some plastics moulding stamps, and make
> electronics appliances!". this tells you everything you need to know
> about e.g. the problems that the KDE Plasma Vivaldi Spark Tablet has
> been having, as well as why there is a 98% GPL violations rate on
> low-cost tablet hardware.

2013-05-06 10:08:48

by Alexander Holler

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

Am 06.05.2013 08:53, schrieb Luke Kenneth Casson Leighton:

> but the question you have to ask is: why should the HW designers even
> care? they're creating an embedded specialist system, they picked the
> most cost-effective and most available solution to them - why _should_
> they care?
>
> and the answer is: they don't have to. tough luck. get over it, mr
> software engineer. hardware cost reductions take priority.

So why do you post this on lkml at all? It sounds like your HW is able to
run without SW, or, if it still needs SW, the necessary SW is freely
available right around the corner, doesn't need modifications and
therefore doesn't need a share of the budget.

Do you develop cables or something similar?

Regards,

Alexander Holler

2013-05-06 11:47:55

by luke.leighton

Subject: Re: [Arm-netbook] device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Mon, May 6, 2013 at 9:22 AM, Oliver Schinagl <[email protected]> wrote:
> Note, I'm not qualified nor important or anything really to be part of
> this discussion or mud slinging this may turn into, but I do fine some
> flaws in the reasoning here that If not pointed out, may get grossly
> overlooked.

allo oliver - did a quick read, didn't see anything remotely
resembling mud :) which is a pity because i am looking forward to
making my next house out of compressed earth with a 3% concrete mix.

but seriously: the only thing i'd say is it's a pity in some ways you
replied to this message rather than to the reply that robert wrote
[but i'd trimmed that], because i made a summary of the whole original
message based on robert's prompting and insights, and also invited
people to come up with some potential alternative solutions.

.... and to do that, the problem has to be properly recognised, which
unfortunately takes quite a lot of thought/reading/observation to
take into account and express. i don't necessarily have the best
experience to do that, which is why i asked people if they could help,
and in that regard your review is really, really appreciated.

ok, so let's have a look...

> On 06-05-13 06:09, Robert Hancock wrote:
>> On 05/05/2013 06:27 AM, Luke Kenneth Casson Leighton wrote:
> So yes, every single ARM SoC/platform will need its own dedicated
> SPL/U-boot. Kinda like a bios?

kinda like a BIOS, yes. except the difference is that a BIOS (and
ACPI) stays around (ROM, TSRs), whereas SPL and u-boot merely do the
board bring-up and once that's done you're on your own [side-note: so
there is considerable code duplication between u-boot and the linux
kernel, and u-boot typically does bit-level direct manipulation of GPIO
as it's simpler]
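
to make the "bit-level direct manipulation" point concrete, board
bring-up code in SPL / u-boot commonly looks something like the sketch
below - note that the register address, offsets and bit positions are
entirely made up for illustration, they are not those of any real SoC:

/*
 * Illustration of the raw register poking SPL / u-boot board files do
 * during bring-up.  GPIO_BASE, the offsets and the bit positions are
 * invented for illustration - they belong to no real SoC.
 */
#include <linux/types.h>
#include <asm/io.h>

#define GPIO_BASE       0x12340000UL            /* hypothetical */
#define GPIO_CFG_REG    (GPIO_BASE + 0x00)
#define GPIO_DATA_REG   (GPIO_BASE + 0x10)

static void board_enable_lcd_power(void)
{
        u32 val;

        /* set pin 3 to output mode (2 config bits per pin, made up) */
        val = readl(GPIO_CFG_REG);
        val &= ~(0x3 << (3 * 2));
        val |=  (0x1 << (3 * 2));
        writel(val, GPIO_CFG_REG);

        /* drive the pin high to switch on the LCD supply rail */
        val = readl(GPIO_DATA_REG);
        val |= (1 << 3);
        writel(val, GPIO_DATA_REG);
}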

so, whereas a BIOS (and ACPI) ease the pain of bringing up a system
(actually multiple systems that conform to the same BIOS and ACPI
standards), and help normalise it (them) to a very large extent (bugs
in BIOSes and ACPI notwithstanding), in the ARM world the solutions
used actually *increase* the number of man-hours required to bring up
any one board!

> But if you want to boot from LAN (I think
> that's what this discussion was about?) you need U-boot loaded by SPL
> anyway. Can you boot a generic linux install (say from CD) on arm?

if u-boot has cut across sufficient parts of the linux kernel device
driver infrastructure then yes! we have a call on the arm-netbooks
and linux-sunxi mailing lists for example for addition of USB-OTG to
SPL. that means going over to the linux kernel source code, *again*
duplicating that code, and adding it to the SPL in the u-boot sources.

> Usually no, the onboard boot loader only knows a very specific boot
> path, often flash, mmc etc etc.

yes. amazingly, the iMX6 supports a ton more boot options than i've
ever seen in any other ARM SoC: SATA boot, UEFI partitions and loads
more.

it's yet another example unfortunately of the insane level of
diversity being faced by and imposed on the linux kernel
infrastructure.

>Those need to be able to bring up the
> memory too (SPL) so you'll need some specific glue for your platform
> anyhow.

yes. this is going to be interesting for when standard DIMMs become
commonplace in the aarch64 world, if companies consider making standard
ITX motherboards.

> I'm not sure if DT was supposed to solve that problem?

mmm.... it would help, i feel, because the RAM timings would be part
of the DT. however, it also *wouldn't* help, because this is incredibly
low-level: you'd have to have SPL (which is often extremely limited in
size, typically 32k, i.e. the same size as the CPU's 1st-level cache,
and no, that's not a coincidence) understand DT.

so it *might* be good, but it might be a very poor match. have to see.

in all other cases, where the RAM is hard-wired: DT is not much use.
the RAM timings have to be hard-wired, they're done by SPL, SPL is
often written with a disproportionately large amount of assembly code,
etc. etc. i'm waffling but you get the point?

> If that where the case, was DT to replace the BIOS too?

i'm sure that was partly the intention, but ARM systems don't *have*
a standardised BIOS, because, i feel, there is too much diversity and
fragmentation - for very very good and sound business reasons as well
as pure good-old-fashioned FUBARness, for any kind of standardisation
to make a dent in that mess.

>>>
>>> * is there ACPI present? no. so anything related to power
>>> management, fans (if there are any), temperature detection (if there
>>> is any), all of that must be taken care of BY YOU.

> Again, I only know about 1 specific SoC, but for the A10, you have/need
> an external Power Manamgent IC, basically, a poor man's ACPI if you
> must.

yes. exactly! this is a _great_ example. if you've seen the
offerings from X-Powers and their competitors (MAXIM for example, and
then ingenic recommend another company), you'll know that the actual
PMIC is *customised* for a particular SoC!!!

which is completely insane.

so, for example, allwinner (who i believe also own X-Powers) created
the AXP221 for their new SoC. Samsung, back when the Odroid came out
(with the S5PC100), contacted MAXIM and asked them to create a special
custom PMIC. MOQ for that customised PMIC: 50,000 units, and special
privileges were required before you could gain access to it.

and for the MK802 they just used 3 LDOs [and as a result that little
USB-HDMI computer quite badly overheats]

and the burden of power management is then placed onto the linux kernel.

in the case of the iMX6, i don't know if you've read freescale's app
notes, but they go to some lengths to describe how to optimise power
consumption. the Reference Board has some ODT (on-die termination)
resistors that can be electronically controlled with GPIO pins. you
can adjust the DDR RAM speed (dynamically!!!) and when you do so, it's
best to change the ODT resistance, and if you do so you save 200mA (or
something - can't remember the details).

normally these things are taken care of at the BIOS level... and the
burden of responsibility is placed onto the linux kernel +
device-tree.
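
as a (heavily simplified, hypothetical) sketch of the kind of burden that
ends up in the kernel when there's no firmware layer to hide it: a bit of
board code that relaxes an ODT-control GPIO when the DDR rate is dropped,
roughly in the spirit of the freescale app note described above. the GPIO
number is a placeholder; the legacy gpio_* calls are the real kernel API:

/*
 * Heavily simplified sketch of board-level power tweaking landing in the
 * kernel: relax a (hypothetical) ODT-control GPIO when the DDR clock is
 * dropped.  The GPIO number is a placeholder; the legacy gpio_* calls
 * are the real kernel API.
 */
#include <linux/gpio.h>
#include <linux/types.h>

#define ODT_CTRL_GPIO   42      /* placeholder: board-specific pin */

static int board_odt_init(void)
{
        int ret;

        ret = gpio_request(ODT_CTRL_GPIO, "ddr-odt-ctrl");
        if (ret)
                return ret;
        /* start with full termination for full-speed operation */
        return gpio_direction_output(ODT_CTRL_GPIO, 1);
}

/* called whenever the DDR frequency is changed at runtime */
static void board_ddr_rate_changed(bool low_speed)
{
        /*
         * at the lower DDR rate the termination can be relaxed, saving
         * on the order of a couple of hundred mA per the app note
         */
        gpio_set_value(ODT_CTRL_GPIO, low_speed ? 0 : 1);
}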

i'm pointing these things out because i don't believe people are
fully aware of the sheer overwhelming technical scope being faced, let
alone the business cases [or business FUBARs] surrounding ARM-based
product development.

> If you don't have this luxury, yes, you'll need a special driver.
> But how is that different from not having DT? You still need to write
> 'something' for this? A driver etc?

yes. exactly. does DT help solve the above problem? no not really,
because it's heavy customisation at quite a low level. my point is
not that you won't need a special driver, my point is - the focus of
this discussion is - the question being asked is - "does the addition
of DT *actually* help solve these issues?"

and the other [unfortunately pointed] question is, "were the people
who designed DT actually aware of these issues when they designed it?"
because i certainly didn't see any public discussions taking place, nor
any online articles discussing it, nor any draft or preliminary
documentation, nor any invitations to comment [RFCs].

and this is a *major* piece of strategic decision-making we're talking
about.

>>> the classic example i give here is the HTC Universal, which was a
>>> device that, after 3 years of dedicated reverse-engineering, finally
> So, nofi, you have some shitty engineerd device,

noo... beautifully engineered device. a micro laptop with 5
speakers, 2 microphones, flip-over screen. if you'd ever owned one
you would have gone "wow, why is this running wince??" and would have
immediately got onto #htc-linux to help :)

> that can't fit into this DT solution,

you've misunderstood: it could. there's nothing to stop anyone from
adding in DT, is there? but the point is: in doing so, does it
*actually* help at all?

> and thus DT must be broken?

DT *itself* is not broken. it's a good solution. but... what
problem does it actually solve, and what problem is *actually* being
faced?

i'm going to emphasise this again and again until i get it through to
people, or until someone gives me a satisfactory answer. device tree
solves *a* problem, but there is a logical disconnect - an assumption
- that it solves the much *larger* problem of helping with the massive
diversity of hardware in the ARM world.

i mention the HTC universal as a kind of insane example (sample of
one) of the kind of thing that's faced, here. i won't damage peoples'
brains including my own by listing every single bit of ARM kit out
there.

> Though with proper drivers
> and proper PINCTRL setup this may actually even work :p

yes. that chip was called the ASIC3 and it was used in half a dozen
products. as such, those half-a-dozen-or-so products would have
benefitted from devicetree. the userspace communication over RS232, on
the other hand - sending AT commands in order to activate any one of
the extra 16 GPIO pins - would not. not because it's userspace, but
because nobody else was faced with that kind of insane level of GPIO
(almost 200 pins) such that they actually *ran out* on the available
hardware and had to resort to that kind of desperate trick.
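
for the curious, the userspace trick looked roughly like the sketch
below - the serial device path and the AT command string are
placeholders, *not* the real HTC ones, but the termios plumbing is
standard POSIX:

/*
 * Rough userspace sketch of the "GPIO over AT commands" trick: open the
 * serial port to the phone module and send a command to flip a pin.
 * The device path and the AT command string are placeholders, not the
 * real HTC Universal ones; the termios plumbing is standard POSIX.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
        const char *cmd = "AT@GPIO=5,1\r";      /* hypothetical command */
        struct termios tio;
        int fd;

        fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);  /* placeholder port */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        tcgetattr(fd, &tio);
        cfmakeraw(&tio);
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        write(fd, cmd, strlen(cmd));
        close(fd);
        return 0;
}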

so even if you _did_ write a device tree aware driver for that part,
how much code re-use would it see? ABSOLUTELY NONE.

and that's the key, key CRITICAL point that i'm making, here. the
point of device-tree is to encourage code re-use. but if the hardware
is massively diverse, there's NO OPPORTUNITY FOR REUSE.

therefore, logically, device tree does not help!

it's as simple as that!

no - really, it's as simple as that.

and it's something that i don't think people really thought about
before embarking on implementing device tree.


>>> this procedure was clearly designed to put enough power into the
>>> capacitors of the on-board GSM chip so that it could start up (and
>>> crash) then try again (crash again), and finally have enough power to
>>> not drain itself beyond its capacity.
> So again horribly shitty designed solution.

no not at all. *iteratively-designed* solution, where they were told
"you can't have another go, you're out of time, MAKE it work".

it's another example where the unique hardware-software solution will
never be repeated (hopefully...) and as such, device tree is
completely ineffective at achieving the goal it was designed to achieve,
because there is zero chance - ever - of any code re-use.


>>> because the UDA1381 can be used *either* in I2S mode *or* in SPI mode,
>>> and one [completely independent] team used one mode, and the other
>>> team used the other.
> Afaik, there's several IC's that work that way, and there's drivers for
> them in that way. I haven't seen this being applied in DT, but i'm sure
> this can reasonably easy be adapted into DT.

yes it could. but *will* it? my point is slightly different here
from the other two examples. unfortunately it's necessary to
speculate here (as DT wasn't around in 2004), but can you see that if
there were two independent teams even in the same company working on
drivers simultaneously, you'd end up with two *different*
implementations of the same driver?

corporate companies work in secret. they don't work together - they
don't collaborate - they *compete*. HTC, kicking and screaming,
releases GPL source code *NINETY DAYS* after a product first hits the
shelf. that means that the software was being done OVER A YEAR ago.
in secret.

the amount of overlap time is enormous.

will device tree help here, especially given that companies like HTC
are on the bleeding edge, use completely new ICs, and are
often literally the first to write the linux kernel device drivers for
that hardware?

of course not.

now multiply that up by samsung, motorola and every other
manufacturer, all working in secret, all competing, all not
communicating, throwing out code over the fence (often kicking and
screaming and *definitely* late), zero consultation with the linux
kernel developers who have to clean up their mess, work out the
duplications and so on.

will device tree help in this circumstance?

if there happens to be any common ground in the design of the
products - shared hardware - it'll help *after the fact*, to
rationalise things, but that's a burden that's now on the linux kernel
developers. more fool them for being taken advantage of, as unpaid
slave labour by the large corporate organisations, i feel.

... you see what i'm getting at?

>> I think part of the answer has to come from the source of all of these
>> problems: there seems to be this culture in the ARM world (and, well,
>> the embedded world generally) where the HW designers don't care what
>> kind of mess they cause the people who have to write and maintain device
>> drivers and kernels that run on the devices.
>> [...]
>> this mess we created work reasonably". So the designers have no reason
>> to make things behave in a standardized and/or sane manner.

> This will level itself out in the end I suppose.

naaahhh nah nah, oliver: you can't "suppose" :)

> Once a proper
> infrastructure is in place (working DT, reasonably well adopted etc,
> drivers rewritten/fixed etc). Once that all is in place, engineers will
> hopefully think twice. They have two options, either adapt their design
> (within reason and cost)

yes. exactly. and cost *is* the driving factor. i remember a
friend of mine saying that philips used to argue over 0.001 pence
from suppliers of plastic keys. that's 5 decimal places down on a GBP
(ok, make it a dollar: 5 decimal places down on a dollar).

in quantity five hundred million and above, that 0.001 pence shaved
off across say 100 keys on a keyboard represents £500,000. that's
potentially the entire profit margin on a mass-volume product.

in mass-volume, cost *is* the driving factor. if there's a choice
between an insane low-cost solution which has the software engineers
tearing their hair out and wanting to take a bath every day for a
year, or a nice clean one that's even $0.10 more expensive, guess
which one they'll get *ORDERED* to implement?

my associate when working for samsung as the head of R&D made it
clear that anyone who came up with a 4-layer board was told to go away
and to make it work as a 2-layer board. the cost of the extra layers
would often mean the difference between profit and failure.

> to more closely match the 'one kernel to rule
> them all' approach, and reap its benefits, or apply hacks like the HTC
> example above and are then responsible for hacking around the code to
> get it to work. Their choice eventually.

no. you're not getting it, oliver. _really_ not getting it. the
hardware cost is *everything*. the software is a one-off expense that
can be amortised over the lifetime of the product, and becomes a
negligible amount over time. software is done *once*; hardware
production costs are on *every* unit.

> There we go, long term, I don't think DT is half as bad and In time,
> we'll see if it was really bad or not to bad at all.

the question is not whether it's bad, the question is: does it solve
the problem for which it was designed? or, more specifically, was the
problem even *discussed* in public prior to the solution being
enacted?

the answer to both questions is definitely "no".

ok - i believe i've repeated this point enough, and it's taken a
considerable chunk of my day, i don't know about anyone else -
apologies oliver i have to cut this short.

l.

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

james, hi - top-posting or not you make some valid points, and i don't
believe you're subscribed to arm-netbooks so i'm going to take a
liberty and reply briefly inline but keep most of what you've written
intact, apologies to debian-arm and lkml.

On Mon, May 6, 2013 at 10:04 AM, James Courtier-Dutton
<[email protected]> wrote:
> The real problem with any new system, is the hardware is designed and
> then it is a challenge for the software developer to get the software
> to boot on the new hardware.
> The nirvana here would be to take the original hardware circuit
> diagram, and process it to automatically create a config file.
> The config file would then be used by the software to configure itself
> to boot the new hardware.
> I think the device tree config file is going some way to help here.
> X86 is lucky in that there are config standards out there, and people
> are actually using them. PCI, USB, ACPI.
> ARM is different

insanely so. only the instruction set is guaranteed to be common...
oh wait, it's not, is it? insanely so. this was the mistake that
linus made at that conference in 2007, by telling the arm
representatives to "go away and come back with only one person to
future conferences".

> and does not have this yet.

... and won't. many SoCs don't have - and won't have - USB3 because it
uses too much power [to drive the high-speed signal lanes]. the
percentage of SoCs that have PCIe is extremely small, and those that
do typically have only a 1x lane (iMX6). samsung and marvell's
higher-power offerings have 2x and 4x PCIe.

ACPI? flat-out forget it! really. sorry, brief answer there.
running out of time here, apologies. see previous reply (to oliver)

> Also, so long as there is some way to uniquely identify the hardware
> with, for example a model number, quirks can be written into the
> software to handle special cases, and the config file identify which
> quirks need to be used.
> As more and more hardware manufactures report their quirks and device
> drivers to the mainline kernel, the closer we will get to an automated
> process to boot new hardware.

again, see previous reply to oliver (overlapped) which points out
that this is an "after-the-fact" burden on the linux kernel developers

> There are already efforts around the ARM multi-platform where a single
> kernel can boot multiple ARM CPUs.

... but it can't do all of them, can it?

> I suppose that ARM multi-platform will never cover all ARM CPUs, but
> the more it covers, the easier and cheaper it will be to work with new
> hardware and ARM.

no. no, no no and wrong. absolutely dead wrong. you're completely
misunderstanding the economics of large and mass-volume product
development. the hardware cost is
EEEEEEVVVEEERRRRRRYYYTTHHHHIIINNNNGGGG.

the software development cost because it is a one-off (non-recurring
expense) is completely and utterly irrelevant. again, see reply to
oliver's message.


> Say, if a manufacturer has 20 different models of mobile phone out
> there, all based on ARM.
> Currently, they need a different kernel for each one. If they could
> use the same kernel across all 20 models, that would reduce their
> development costs considerably.

let's do some maths. let's say that the cost of re-using a DT-based
kernel is $50k, but the custom development (not using DT) is $250,000.

let's say that you have to use a freescale $20 processor (iMX6 Dual)
with that, because linaro is being paid $1m per year to help freescale
to make devicetree work, and it gets them in on that
one-kernel-across-20-models thing.

so they work out the BOM, against the projected expenditure over say
2 years they expect to sell say 100 million phones.

iMX6 Dual phone's expenditure (based on processor and software NREs
alone): $20 * 100 million + $50k = $2.00005 billion.

even with the $250k of custom (non-DT) development, the cost is only
$2.00025 billion - which, i know, i said that sort of sum might be the
profit margin, but hey, we're still looking at the 4th decimal place here.

now let's compare that to e.g. the allwinner A20, which is coming in
at the *same* functionality, less power than the iMX6 Dual, and the
price i've learned recently could well be the same as the A10 in
mass-volume (but don't quote me on that).

so we work out a BOM on 100 million phones:

A20 (Dual-Core Cortex A7 remember) based on processor and software
NREs alone: $7.50 * 100 million + $50k = $0.75005 billion.
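
the arithmetic, spelled out (same illustrative figures as above, nothing
more):

/* the illustrative BOM arithmetic above, spelled out */
#include <stdio.h>

int main(void)
{
        double units     = 100e6;       /* 100 million phones        */
        double sw_dt     = 50e3;        /* one-off DT-reuse NRE      */
        double sw_custom = 250e3;       /* one-off custom-kernel NRE */
        double imx6      = 20.00;       /* per-unit SoC price        */
        double a20       = 7.50;        /* per-unit SoC price        */

        printf("iMX6 + DT kernel:     $%.5f bn\n", (imx6 * units + sw_dt) / 1e9);
        printf("iMX6 + custom kernel: $%.5f bn\n", (imx6 * units + sw_custom) / 1e9);
        printf("A20  + DT kernel:     $%.5f bn\n", (a20  * units + sw_dt) / 1e9);
        return 0;
}

/* prints 2.00005, 2.00025 and 0.75005: the software line only ever
   moves the 4th decimal place, the SoC price moves the 1st. */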

do you see the point, james? the cost of the software development is
utterly, utterly, utterly irrelevant.

the cost of the hardware *is everything*.

ok, that's not quite true. if the amount of *time* taken on the
software development is too great, then it puts your entire hardware
development NREs at risk because you're so far behind the curve that
nobody will buy the product even when it's out.

but the amount of time taken on software development is *not* the
same as the *cost* of the software development.

ok. now i really have to give it a rest. another 25 mins gone. whoops :)

l.

2013-05-06 20:09:46

by Rob Landley

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On 05/06/2013 07:08:44 AM, Luke Kenneth Casson Leighton wrote:
> > I suppose that ARM multi-platform will never cover all ARM CPUs, but
> > the more it covers, the easier and cheaper it will be to work with new
> > hardware and ARM.
>
> no. no, no no and wrong. absolutely dead wrong. you're completely
> misunderstanding the economics of large and mass-volume product
> development. the hardware cost is
> EEEEEEVVVEEERRRRRRYYYTTHHHHIIINNNNGGGG.

And economies of scale are everything to hardware cost. Unit volume
amortizes the development (and often licensing) costs down; in the long
run, whoever has the highest unit volume has the cheapest product. Being
able to reuse off-the-shelf components is nice, but being able to
repurpose existing high-volume smartphone packages semi-verbatim is
nicer.

Also, your cheap little one-off product tends to have a lifespan
measured in months. Especially since the most common southeast asian
business model used to be something like "develop thing through shell
company, fill inventory channel with product, launder profits through
series of shell companies that patent infringement suits can't burrow
through in a reasonable amount of time, dissolve company and shell
companies before product ever winds up on store shelves leaving nobody
to sue, re-hire the same set of engineers at new shell company, rinse,
repeat."

(Has this changed recently? I haven't really been paying attention
since smartphones started replacing the PC the way the PC replaced
minicomputers and mainframes, so the billion seats of Android are more
interesting than the rest of the embedded space combined. I did a talk
about that at ELC if you're bored:
http://www.youtube.com/watch?v=SGmtP5Lg_t0 )

> the software development cost because it is a one-off (non-recurring
> expense) is completely and utterly irrelevant. again, see reply to
> oliver's message.

Which is why this hardware tends to ship with crappy, unusable,
unsupported software. Because actually programming the sucker is an
afterthought, and the company that created it won't be _around_ long
enough to support it, because if it did it would be around long enough
to be sued for "we patented breathing, pay up".

The reason we should care about this business model when the vendors
don't is...?

> > Say, if a manufacturer has 20 different models of mobile phone out
> > there, all based on ARM.
> > Currently, they need a different kernel for each one. If they could
> > use the same kernel across all 20 models, that would reduce their
> > development costs considerably.
>
> let's do some maths. let's say that the cost of re-using a DT-based
> kernel is $50k, but the custom development (not using DT) is $250,000.

If you use a DT-based kernel you can upgrade to new vanilla releases
for 5 years, and if you don't you probably never upgrade to a new
version ever.

Except the type of company you're describing won't be around long
enough to provide an upgrade. They'll just tell you to buy a new one
next year, from the same engineers working at new shell company du jour
(which has already dissolved by the time product hits the shelves; this
stuff can get outsourced and rebadged with other people's branding to
disguise the churn but the short-term thinking is there for a _reason_).

> let's say that you have to use a freescale $20 processor (iMX6 Dual)
> with that, because linaro is being paid $1m per year to help freescale
> to make devicetree work, and it gets them in on that
> one-kernel-across-20-models thing.
>
> so they work out the BOM, against the projected expenditure over say
> 2 years they expect to sell say 100 million phones.

You realize that nobody except Samsung and Apple is currently making
money in the smartphone space, right?

http://www.slate.com/blogs/moneybox/2013/04/02/smartphone_profit_shares_apple_and_samsung_have_the_whole_pie.html

And that they're doing better because neither one is as stupid as the
companies you're describing?

http://www.bbc.co.uk/news/business-22036876

Yes, you can install Linux on cheap plastic pieces of nonstandard crap
that have already ceased production before you can buy one. It's about
as interesting as hollowing out a Furby and making it run Linux.

> do you see the point, james? the cost of the software development is
> utterly, utterly, utterly irrelevant.

Which means that nothing we do matters to them anyway, they will never
listen to us, we have no reason to listen to them, and they can
basically piss off and stop bothering us? (Which was pretty much what
Linus asked?)

Meanwhile, we pay attention to the companies that have a future, and
not the modern gold rush iteration. (Before the smartphone we had the
digital watch boom, the calculator boom, the incompatible 8-bit
microcomputer boom, the dot-com pets.com/drkoop.com era... this is not
a new thing, and unix has lived through all of it.)

Don't get me wrong: I'm happy to provide them with good tools. But
making their needs a primary design consideration when it comes to
sustainability and upgrade paths is wrong. A company that lives or dies
based on half a cent in component selection is NOT worried about an
upgrade path. It's making something disposable, and the company itself
is disposable.

> but the amount of time taken on software development is *not* the
> same as the *cost* of the software development.

And neither is the same as the quality or sustainability of the
resulting software. But if the product line will be discontinued
three months after its introduction, who cares about being able to
maintain anything?

Rob-

2013-05-06 20:38:57

by Lennart Sorensen

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Mon, May 06, 2013 at 03:01:58PM -0500, Rob Landley wrote:
> And economies of scale are everything to hardware cost. Unit volume
> amortizes the development (and often licensing) costs down, in the
> long run who has the highest unit volume has the cheapest product.
> Being able to reuse off the shelf components is nice, but being able
> to repurpose existing high-volume smartphone packages semi-verbatim
> is nicer.
>
> Also, your cheap little one-off product tends to have a lifespan
> measured in months. Especially since the most common southeast asian
> business model used to be something like "develop thing through
> shell company, fill inventory channel with product, launder profits
> through series of shell companies that patent infringement suits
> can't burrow through in a reasonable amount of time, dissolve
> company and shell companies before product ever winds up on store
> shelves leaving nobody to sue, re-hire the same set of engineers at
> new shell company, rinse, repeat.'
>
> (Has this changed recently? I haven't really been paying attention
> since smartphones started replacing the PC the way the PC replaced
> minicomputers and mainframes, so the billion seats of Android are
> more interesting than the rest of the embedded space combined. I did
> a talk about that at ELC if you're bored:
> http://www.youtube.com/watch?v=SGmtP5Lg_t0 )

Probably still true.

> Which is why this hardware tends to ship with crappy, unusable,
> unsupported software. Because actually programming the sucker is an
> afterthought, and the company that created it won't be _around_ long
> enough to support it, because if it did it would be around long
> enough to be sued for "we patented breathing, pay up".
>
> The reason we should care about this business model when the vendors
> don't is...?

I am getting the impression that we should ignore the cell phones, given
they seem to be thoroughly ignoring their customers and everyone else
anyhow. If we then focus on the devices that perhaps do care to be around
for a while and be supported, we might actually have a manageable problem.
Who knows, maybe at some point the cell phone makers will smarten up,
realize there is a market in having happy long-term customers, and join in.

> If you use a DT-based kernel you can upgrade to new vanilla releases
> for 5 years, and if you don't you probably never upgrade to a new
> version ever.

Sure looks that way for the majority of devices.

> Except the type of company you're describing won't be around long
> enough to provide an upgrade. They'll just tell you to buy a new one
> next year, from the same engineers working at new shell company du
> jour (which has already dissolved by the time product hits the
> shelves; this stuff can get outsourced and rebadged with other
> people's branding to disguise the churn but the short-term thinking
> is there for a _reason_).
>
> You realize that nobody except Samsung and Apple is currently making
> money in the smartphone space, right?
>
> http://www.slate.com/blogs/moneybox/2013/04/02/smartphone_profit_shares_apple_and_samsung_have_the_whole_pie.html
>
> And that they're doing better because neither one is as stupid as
> the companies you're describing?
>
> http://www.bbc.co.uk/news/business-22036876
>
> Yes, you can install Linux on cheap plastic pieces of nonstandard
> crap that have already ceased production before you can buy one.
> It's about as interesting as hollowing out a Furby and making it run
> Linux.

It does look like a better model, although they do have to work at doing
things well, unlike the makers of cheap junk.

> Which means that nothing we do matters to them anyway, they will
> never listen to us, we have no reason to listen to them, and they
> can basically piss off and stop bothering us? (Which was pretty much
> what Linus asked?)
>
> Meanwhile, we pay attention to the companies that have a future, and
> not the modern gold rush iteration. (Before the smartphone we had
> the digital watch boom, the calculator boom, the incompatible 8-bit
> microcomputer boom, the dot-com pets.com/drkoop.com era... this is
> not a new thing, and unix has lived through all of it.)
>
> Don't get me wrong: I'm happy to provide them with good tools. But
> making their needs a primary design consideration when it comes to
> sustainability and upgrade paths is wrong. A company that lives or
> dies based on half a cent in component selection is NOT worried
> about an upgrade path. It's making something disposable, and the
> company itself is disposable.
>
> And neither is the same as the quality or sustainability of the
> resulting software. But if the product line will be discontinued
> three months after its introduction, who cares about being able to
> maintain anything?

Sounds good to me. Those that think devicetree will never work can go
do whatever they want with the hardware they have from the makers of
cheap junk with no expected lifetime in the market. The rest of us can
go focus on making something efficient and usable long term.

--
Len Sorensen

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Mon, May 6, 2013 at 9:01 PM, Rob Landley <[email protected]> wrote:
> On 05/06/2013 07:08:44 AM, Luke Kenneth Casson Leighton wrote:
>>
>> > I suppose that ARM multi-platform will never cover all ARM CPUs, but
>> > the more it covers, the easier and cheaper it will be to work with new
>> > hardware and ARM.
>>
>> no. no, no no and wrong. absolutely dead wrong. you're completely
>> misunderstanding the economics of large and mass-volume product
>> development. the hardware cost is
>> EEEEEEVVVEEERRRRRRYYYTTHHHHIIINNNNGGGG.
>
>
> And economies of scale are everything to hardware cost. Unit volume
> amortizes the development (and often licensing) costs down, in the long run
> who has the highest unit volume has the cheapest product. Being able to
> reuse off the shelf components is nice, but being able to repurpose existing
> high-volume smartphone packages semi-verbatim is nicer.

yes. ok, it's about time i mentioned the EOMA initiative, with the
first standard being EOMA-68 that will see mass-production. the main
thrust of the argument is: if the main gubbins (CPU, NAND, RAM) is on
a user-removable hot-swappable CPU Card, and the interfaces are
*MANDATORY* and comprise (mostly) self-describing buses (USB, SATA,
Ethernet) and use device-tree for the remainder (24-pin RGB/TTL, I2C
and 8 pins of GPIO) then two things happen:

1) the CPU Cards can be mass-produced for the [often very very short]
lifetime of the product, *BUT*, importantly, those CPU Cards can be
re-used across a seriously-large product range covering laptops,
tablets, MIDs, desktop PCs, SoHo servers, keyboard computers, LCD
monitors (yes - upgradeable LCD monitors that can be turned into
all-in-one PCs), LCD TVs - the list is almost endless and the only
things *not* covered are small smartphones [a market taken by samsung
and apple as you later point out rob], and high-end systems [a market
covered by intel and AMD anyway].

2) the "products" [i won't list them again] can *LITERALLY* be made
*WITHOUT MODIFICATION* except cosmetic or replacing end-of-life
components until the cows come home. i fully expect EOMA-68 laptops
for example to be made *literally* without a single modification for
at least five maybe even eight years [as long as its parts don't go
end-of-life].

the relevance here for the linux kernel mailing list is that the
effort required to create a new product combination becomes an N
***PLUS*** M development effort [plus a tiny porting overhead to link
into EOMA68], rather than an N *times* M one, where N is "number of CPU
Cards" and M is "number of products".

in this scenario, device tree is actually critical to the success of
the EOMA initiatives, because the description of every peripheral which
is not self-describing (the 8 GPIO pins, the LCD panel's timings, size
etc.) is to be stored in an on-board I2C EEPROM at a known location,
in device tree format.

at boot time, the 9 [or so] device tree files will need to be read
from the product's EEPROM. only then will the CPU Card know, for
example, what the size of the LCD is, or what GPIO 0 does.
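
a very rough sketch of that boot-time step, assuming the EEPROM is
exposed through the usual at24/sysfs route - the sysfs path and the
offset are placeholders, *not* the real EOMA-68 layout; only the
0xd00dfeed FDT magic check is the genuine article:

/*
 * Very rough sketch: read a devicetree blob out of the product's I2C
 * EEPROM and sanity-check the FDT magic before handing it on.  The
 * sysfs path and the offset are placeholders - the real EOMA-68 layout
 * is whatever the spec says, not this.
 */
#include <stdint.h>
#include <stdio.h>

#define EEPROM_PATH  "/sys/bus/i2c/devices/1-0050/eeprom"   /* placeholder */
#define DTB_OFFSET   0x100                                   /* placeholder */
#define FDT_MAGIC    0xd00dfeedU    /* magic of a flattened device tree */

int main(void)
{
        unsigned char hdr[8];
        uint32_t magic;
        FILE *f = fopen(EEPROM_PATH, "rb");

        if (!f) {
                perror("open eeprom");
                return 1;
        }
        if (fseek(f, DTB_OFFSET, SEEK_SET) != 0 ||
            fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr)) {
                fclose(f);
                return 1;
        }
        fclose(f);

        /* the FDT header is stored big-endian */
        magic = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];
        if (magic != FDT_MAGIC) {
                fprintf(stderr, "no devicetree blob at offset 0x%x\n",
                        DTB_OFFSET);
                return 1;
        }
        printf("found a devicetree blob in the product EEPROM\n");
        return 0;
}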

... actually the device drivers are going to have to be capable of
reconfiguring even *after* boot time because a CPU Card could be
suspended, removed from one product, reinserted into another one and
told to wake up.... and find that its hardware is completely
different!! :)

but that's another story.

> Also, your cheap little one-off product tends to have a lifespan measured in
> months. Especially since the most common southeast asian business model used
> to be something like "develop thing through shell company, fill inventory
> channel with product, launder profits through series of shell companies that
> patent infringement suits can't burrow through in a reasonable amount of
> time, dissolve company and shell companies before product ever winds up on
> store shelves leaving nobody to sue, re-hire the same set of engineers at
> new shell company, rinse, repeat.'

yeah. look up the zenithink ZT-180 tablet for a classic example.
you're _really_ helping to make the case for the EOMA initiatives, rob
:)

> (Has this changed recently? I haven't really been paying attention since
> smartphones started replacing the PC the way the PC replaced minicomputers
> and mainframes, so the billion seats of Android are more interesting than
> the rest of the embedded space combined. I did a talk about that at ELC if
> you're bored:
> http://www.youtube.com/watch?v=SGmtP5Lg_t0 )
>
>
>> the software development cost because it is a one-off (non-recurring
>> expense) is completely and utterly irrelevant. again, see reply to
>> oliver's message.
>
>
> Which is why this hardware tends to ship with crappy, unusable, unsupported
> software. Because actually programming the sucker is an afterthought, and
> the company that created it won't be _around_ long enough to support it,
> because if it did it would be around long enough to be sued for "we patented
> breathing, pay up".

yes. so this is why there is an open invitation to free software
developers to help out with the EOMA initiative, which is to do it
*right*, from the start - horse firmly in front of the cart this time.

> The reason we should care about this business model when the vendors don't
> is...?

exactly.

>> > Say, if a manufacturer has 20 different models of mobile phone out
>> > there, all based on ARM.
>> > Currently, they need a different kernel for each one. If they could
>> > use the same kernel across all 20 models, that would reduce their
>> > development costs considerably.
>>
>> let's do some maths. let's say that the cost of re-using a DT-based
>> kernel is $50k, but the custom development (not using DT) is $250,000.
>
>
> If you use a DT-based kernel you can upgrade to new vanilla releases for 5
> years, and if you don't you probably never upgrade to a new version ever.

i'm not sure i follow the relevance here.

> Except the type of company you're describing won't be around long enough to
> provide an upgrade.

true! :)

> They'll just tell you to buy a new one next year, from
> the same engineers working at new shell company du jour (which has already
> dissolved by the time product hits the shelves; this stuff can get
> outsourced and rebadged with other people's branding to disguise the churn
> but the short-term thinking is there for a _reason_).

most people would call you a cynic for saying this, but unfortunately
i've actually encountered exactly the situation you describe, as well
as meeting several people who have, *after* paying a deposit based on
a promise that they would receive the GPL Source Code if only they
paid cash up-front for 1k to 20k units, learned that the fucking
fucking fuckers didn't actually *have* the source code in the fucking
first place: they bought a GPL-violating binary-only deal from some
fucker-of-an-ODM who pulled the wool over the eyes of the factory. i
_did_ try to warn a couple of people who actually paid up...

>
>> let's say that you have to use a freescale $20 processor (iMX6 Dual)
>> with that, because linaro is being paid $1m per year to help freescale
>> to make devicetree work, and it gets them in on that
>> one-kernel-across-20-models thing.
>>
>> so they work out the BOM, against the projected expenditure over say
>> 2 years they expect to sell say 100 million phones.
>
>
> You realize that nobody except Samsung and Apple is currently making money
> in the smartphone space, right?

ok, ok - substitute "tablet" or "laptop" or "media centre" for
"smartphone". actually it doesn't matter what the product is, really.
the economics are the same: by the time you get to over 100 million
units, the software development costs are somewhere around the 4th
decimal place.


> Yes, you can install Linux on cheap plastic pieces of nonstandard crap that
> have already ceased production before you can buy one. It's about as
> interesting as hollowing out a Furby and making it run Linux.

tell me about it. now you know what drove me to come up with the
Rhombus Tech initiative. been there, rob, and decided i didn't like
being fucked about, and decided to do something about it.

>
>> do you see the point, james? the cost of the software development is
>> utterly, utterly, utterly irrelevant.
>
>
> Which means that nothing we do matters to them anyway, they will never
> listen to us, we have no reason to listen to them, and they can basically
> piss off and stop bothering us?

well, i'm listening. through some _really_ random and extremely
lucky - very very jammy - coincidences, i have access to some very
very large factories in china. we've been talking to them for some
time, and because of the sheer overwhelming scales that they're
dealing with, they reaaaaally like the advantages that 1) and 2) bring
to them [above, right at the top of this message].

mind you, it took us 18 months to explain it to them, but when we
finally managed, they were really fired up.

and this is the opportunity that i'm acting as the gateway for *you*
- free software developers - to gain access to, to make a difference
and finally stop having to fuck around cleaning up after the mess made
by the pathological profit-maximising corporations who get up our
noses year on year.

> Meanwhile, we pay attention to the companies that have a future, and not the
> modern gold rush iteration. (Before the smartphone we had the digital watch
> boom, the calculator boom, the incompatible 8-bit microcomputer boom, the
> dot-com pets.com/drkoop.com era... this is not a new thing, and unix has
> lived through all of it.)

i'll be sticking around and keeping an eye on the EOMA initiative for
the next decade, see how far it gets. that kind of long-term
commitment

> Don't get me wrong: I'm happy to provide them with good tools. But making
> their needs a primary design consideration when it comes to sustainability
> and upgrade paths is wrong.

indeed.

>A company that lives or dies based on half a
> cent in component selection is NOT worried about an upgrade path. It's
> making something disposable, and the company itself is disposable.

whereas the EOMA initiative is at the complete opposite end of the
spectrum. with products based around the EOMA standards, although
there is a cost overhead of e.g. around $6 in parts for EOMA-68, there
is a whopping great saving of 30 to 40% to the customer when compared
to other products *if* the end-user is prepared to swap / share CPU
Cards between two products. if they share the CPU Card between three
products then the saving to them is even greater.

not only that but rather than throw away an entire product just
because a CPU Card is obsolete [to them] the end-user can either
re-purpose the CPU Card in a slower product, or sell it on e-bay, or
re-use it in a freedombox.... whatever they like.

what they *don't* have to do is put the entire product in landfill.

etc. etc. i could go on about this at some length but i've already
done so lots of times.

>> but the amount of time taken on software development is *not* the
>> same as the *cost* of the software development.
>
>
> And neither is the same as the quality or sustainability of the resulting
> software. But if the product line will be discontinued three months after
> its introduction, who cares about being able to maintain anything?

exactly. so in this case, with EOMA-68, even if a CPU has a 3-month
lifecycle, it's a 3-month lifecycle on *only* the CPU Card (not the
entire product range), and in those 3 months that CPU Card will have
sold 10 times more than if it had been used in only one single-board
product.

so to a factory making EOMA-68 CPU Cards with that 3-month-lifecycle
CPU, it's still worth doing, and still worth doing well.

so. to summarise: have i made it clear, rob, that only by doing
things like EOMA - which is basically about creating mandatory
standards with device-tree in each product's EEPROM - does device-tree
actually become *truly* useful? if not, please do say so, because
this is really important to get the message over to people.

l.

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Mon, May 6, 2013 at 9:31 PM, Lennart Sorensen
<[email protected]> wrote:

>> And neither is the same as the quality or sustainability of the
>> resulting software. But if the product line will be be discontinued
>> three months after its introduction, who cares about being able to
>> maintain anything?
>
> Sounds good to me. Those that think devicetree will never work can go
> do whatever they want with the hardware they have from the makers of
> cheap junk with no expected lifetime in the market. The rest of us can
> go focus on making something efficient and usable long term.

ahh, lennart, it's a pity we were writing at the same time. i'd be
interested to hear your honest thoughts and opinions on my reply to
rob.

l.

2013-05-07 06:51:22

by Kim Enkovaara

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Mon, 6 May 2013, Lennart Sorensen wrote:

> I am getting the impression that we should ignore the cell phones given
> they seem to be thoroughly ignoring their customers and everyone else
> anyhow. If we then focus on the devices that perhaps do care to be around
> for a while and supported, we might actually have a manageable problem.
> Who knows, maybe at some point the cell phone makers will smarten up and
> realize there is a market in having happy long term customers and join in.

ARM-based SoC chips are quickly coming to the industrial, automotive and
telecom markets. In those markets maintainability is important, the
volumes are not always that great, and software engineers are a limited
resource. There will even be some chips where the peripherals in the SoC
are the same and just the core can be selected - ARM vs. PPC, for
example - and the PPC side already uses DT.

Regards,
--Kim

2013-05-08 03:45:07

by Rob Landley

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On 05/06/2013 03:55:11 PM, Luke Kenneth Casson Leighton wrote:
> On Mon, May 6, 2013 at 9:01 PM, Rob Landley <[email protected]> wrote:
> > You realize that nobody except Samsung and Apple is currently making
> > money in the smartphone space, right?
>
> ok, ok - substitute "tablet" or "laptop" or "media centre" for
> "smartphone" . actually it doesn't matter what the product is, really.
> the economics are the same: by the time you get to over 100 million
> units, the software development costs are somewhere around the 4th
> decimal place.

Actually it does. (That was the whole point of the video I posted a
link to.)

mainframe -> minicomputer -> microcomputer -> smartphone

We've seen this dance before. The new thing will coalesce into a
de-facto standard. (The interesting tablets are big phones, not small
PCs. The "surface" is this generation's microvax.) All gets back to
economies of scale again...

> > Yes, you can install Linux on cheap plastic pieces of nonstandard crap
> > that have already ceased production before you can buy one. It's about as
> > interesting as hollowing out a Furby and making it run Linux.
>
> tell me about it. now you know what drove me to come up with the
> Rhombus Tech initiative. been there, rob, and decided i didn't like
> being fucked about, and decided to do something about it.

I'm attempting to hijack android and convince it to evolve into
something usable (as I described in the ELC talk, starting around the 30
second mark), but the day job's leaving me spread a touch thin this month...

> >> do you see the point, james? the cost of the software development is
> >> utterly, utterly, utterly irrelevant.
> >
> >
> > Which means that nothing we do matters to them anyway, they will never
> > listen to us, we have no reason to listen to them, and they can
> > basically piss off and stop bothering us?
>
> well, i'm listening. through some _really_ random and extremely
> lucky - very very jammy - coincidences, i have access to some very
> very large factories in china. we've been talking to them for some
> time, and because of the sheer overwhelming scales that they're
> dealing with, they reaaaaally like the advantages that 1) and 2) bring
> to them [above, right at the top of this message].
>
> mind you, it took us 18 months to explain it to them, but when we
> finally managed, they were really fired up.

Link above is the video of my speech trying to explain what I think's
coming (and how I hope to take advantage of it). Video and outline:

http://www.youtube.com/watch?v=SGmtP5Lg_t0
http://landley.net/talks/celf-2013.txt

Only the first 30 seconds are about "what is toybox". The rest is _why_
is toybox, I.E. attempting to steer the PC to smartphone transition so
we have a shot at a non-locked-down general-purpose computing device.

> and this is the opportunity that i'm acting as the gateway for *you*
> - free software developers - to gain access to, to make a difference
> and finally stop having to fuck around cleaning up after the mess made
> by the pathological profit-maximising corporations who get up our
> noses year on year.

Eh, pathological short term profit-maximizing loses out long-term to
sustainable initiatives. We're not always sure what the

> > Meanwhile, we pay attention to the companies that have a future,
> and not the
> > modern gold rush iteration. (Before the smartphone we had the
> digital watch
> boom, the calculator boom, the incompatible 8-bit microcomputer
> boom, the
> > dot-com pets.com/drkoop.com era... this is not a new thing, and
> unix has
> > lived through all of it.)
>
> i'll be sticking around and keeping an eye on the EOMA initiative for
> the next decade, see how far it gets. that kind of long-term
> commitment
>
> > Don't get me wrong: I'm happy to provide them with good tools. But
> making
> > their needs a primary design consideration when it comes to
> sustainability
> > and upgrade paths is wrong.
>
> indeed.
>
> >A company that lives or dies based on half a
> > cent in component selection is NOT worried about an upgrade path.
> It's
> > making something disposable, and the company itself is disposable.
>
> whereas the EOMA initiative is at the complete opposite end of the
> spectrum. and products based around the EOMA standards, although
> there is a cost overhead of e.g. around $6 in parts for EOMA-68, there
> is a whopping great saving of 30 to 40% to the customer when compared
> to other products *if* your end-user is prepared to swap / share CPU
> Cards between two products. if they share the CPU Card between three
> products then the saving to them is even greater.

In theory, Moore's Law says that buys you... 9 months?
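
For reference, the back-of-envelope arithmetic behind that nine-month
figure, assuming the textbook 18-month halving period for cost per
transistor (an assumption, not something stated in this thread) and
taking the lower 30% end of the quoted saving:

    1 - 2^{-t/T} = s   \implies   t = -T \log_2(1 - s)
    t = -18 \cdot \log_2(0.70) \approx 9.3 \text{ months}    (T = 18 months, s = 0.30)

In other words, roughly nine months of ordinary price erosion delivers
the same 30%.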

(At the low end I'm never quite sure where the fixed costs come to
dominate. Moore's law was just about price/performance ratio, not about
absolute price. We haven't gone down to disposable devices because at a
certain point the battery and case cost more than the electronics...)

But as I mentioned in the video, smartphones have to be good _phones_
to tap into the billion-unit niche.

> not only that but rather than throw away an entire product just
> because a CPU Card is obsolete [to them] the end-user can either
> re-purpose the CPU Card in a slower product, or sell it on e-bay, or
> re-use it in a freedombox.... whatever they like.

A phone is a mass-produced consumer electronics device. Is "I can rip
the guts out of my DVD player and re-use it" a commercially interesting
statement?

> what they *don't* have to do is put the entire product in landfill.
>
> etc. etc. i could go on about this at some length but i've already
> done so lots of times.

Link?

> >> but the amount of time taken on software development is *not* the
> >> same as the *cost* of the software development.
> >
> >
> > And neither is the same as the quality or sustainability of the
> resulting
> > software. But if the product line will be discontinued three
> months after
> > its introduction, who cares about being able to maintain anything?
>
> exactly. so in this case, with EOMA-68, even if a CPU has a 3 month
> lifecycle, it's a 3 month lifecycle on *only* the CPU Card (not the
> entire product range), and in that 3 months that CPU Card sold 10
> times more than if it was used in only one single-board product.
>
> so to a factory making EOMA-68 CPU Cards with that 3-month-lifecycle
> CPU, it's still worth doing, and still worth doing well.
>
> so. to summarise: have i made it clear, rob, that only by doing
> things like EOMA - which is basically about creating mandatory
> standards with device-tree in each product's EEPROM - does device-tree
> actually become *truly* useful? if not, please do say so, because
> this is really important to get the message over to people.

20 years ago all the bespoke 8-bit machines were replaced by commodity
PCs. Rather a lot of the bespoke embedded systems are going to be
replaced by repurposed smartphone packages. But a smartphone package
has to be a good phone in addition to whatever else it does, or else it
won't tap into the economies of scale of this billion-unit niche.

Everything I had to say on this topic was in the ELC talk. That was on
the _software_ side, not on the hardware side, but it might provide a
useful framework...

Rob

P.S. Well, not _everything_. I never mentioned that Apple Airport was
obviously Steve Jobs' solution to the display portion of the
"smartphone as workstation" problem, I didn't hammer very hard on LLVM
being primarily sponsored by Apple...-

by Luke Kenneth Casson Leighton

Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Wed, May 8, 2013 at 4:44 AM, Rob Landley <[email protected]> wrote:

>> whereas the EOMA initiative is at the complete opposite end of the
>> spectrum. and products based around the EOMA standards, although
>> there is a cost overhead of e.g. around $6 in parts for EOMA-68, there
>> is a whopping great saving of 30 to 40% to the customer when compared
>> to other products *if* your end-user is prepared to swap / share CPU
>> Cards between two products. if they share the CPU Card between three
>> products then the saving to them is even greater.
>
>
> In theory, Moore's Law says that buys you... 9 months?

and 6 months into that 9 months you bring out the next CPU Card, and
the next, and the next, and the next, and the next.

there's a hell of a lot of history already behind the EOMA
initiatives. i'm winding this discussion down, btw - the point's been
made, and i'm inviting linux kernel developers who may not have been
aware of the initiative to get involved.

many people i know are absolutely fed up with always playing
catch-up: if that's the case then this is your opportunity to make a
difference.


>> not only that but rather than throw away an entire product just
>> because a CPU Card is obsolete [to them] the end-user can either
>> re-purpose the CPU Card in a slower product, or sell it on e-bay, or
>> re-use it in a freedombox.... whatever they like.
>
>
> A phone is a mass-produced consumer electronics device. Is "I can rip the
> guts out of my DVD player and re-use it" a commercially interesting
> statement?

you've missed the point. EOMA-68 CPU Cards are separately-sold
mass-volume *interchangeable* products, i.e. being packaged in legacy
PCMCIA housings they have the exact same advantages as PCMCIA, except
now it's the *CPU* that's interchangeable between products.

nobody in their right mind swaps the DVD electronics, they just buy
another DVD player. including the mechanical part and the built-in
PSU, and the GPL-violating software running on it.

>> what they *don't* have to do is put the entire product in landfill.
>>
>> etc. etc. i could go on about this at some length but i've already
>> done so lots of times.
>
>
> Link?

links. here is a small non-exhaustive list.

http://www.c2mtl.com/eye50/ideas/the-rhombus-tech-eoma-68-initiative/
http://rhombus-tech.net/articles/eoma68_in_education/
http://lkcl.net/articles/tiny.computers.txt
http://rhombus-tech.net/
http://linux-sunxi.org/EOMA68-A10
http://elinux.org/Embedded_Open_Modular_Architecture/EOMA-68
http://elinux.org/Embedded_Open_Modular_Architecture
http://rhombus-tech.net/community_ideas/kde_tablet/news/
http://rhombus-tech.net/allwinner_a10/news/
http://rhombus-tech.net/allwinner/a31/news/
http://rhombus-tech.net/freescale/iMX6/news/
http://rhombus-tech.net/jz4760/news/
http://lists.phcomp.co.uk/pipermail/arm-netbook/2013-April/007168.html
http://lkcl.net/articles/eoma.txt
http://hardware.slashdot.org/story/12/09/07/2322207/rhombus-tech-a10-eoma-68-cpu-card-schematics-completed
http://hardware.slashdot.org/comments.pl?sid=3102545&cid=41270525
http://slashdot.org/comments.pl?sid=3643131&cid=43435993
http://slashdot.org/comments.pl?sid=3643131&cid=43435805
http://slashdot.org/comments.pl?sid=3643131&cid=43435635
http://slashdot.org/comments.pl?sid=3643131&cid=43435507

that's probably enough.

2013-05-08 09:43:37

by joem

[permalink] [raw]
Subject: Re: [Arm-netbook] device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On Sun, 2013-05-05 at 13:27 +0100, Luke Kenneth Casson Leighton wrote:

> when i say "completely and utterly different", i am not just talking
> about the processor, i am not just talking about the GPIO, or even the
> buses: i'm talking about the sensors, the power-up mechanisms, the
> startup procedures - everything. one device uses GPIO pin 32 for
> powering up and resetting a USB hub peripheral, yet for another device
> that exact same GPIO pin is used not even as a GPIO but as an
> alternate multiplexed function e.g. as RS232 TX pin!


Wrong approach Luke.

The guys in PIC world will be laughing at all this since they have
no such difficulties.

Only one file changes between CPUs in the myriads of PICs out there.

Every register and every bit field is defined.

*Unions take care of multiplexed function*
(and #define of exact CPU model + product + wake up modes etc
takes care of fine tuning including power-up mechanisms).
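
To make that concrete, here is a minimal sketch of the header style
being described: a union exposing one multiplexed pin under each of its
functions, with the exact chip model selected by #define. Every name,
bit layout and address below is invented for illustration - none of it
comes from a real PIC or CMSIS header:

#include <stdint.h>

/* One port-control register for a multiplexed pin.  The same 32 bits
 * can be viewed either through the GPIO function or through the UART
 * function; the MODE field selects what the pin actually does. */
typedef union {
    uint32_t raw;                 /* whole-register access            */
    struct {                      /* GPIO view of the pin             */
        uint32_t MODE : 2;        /* 0 = GPIO, 1 = UART TX, ...       */
        uint32_t DIR  : 1;        /* 0 = input, 1 = output            */
        uint32_t OUT  : 1;        /* output level in GPIO mode        */
        uint32_t      : 28;       /* reserved                         */
    } gpio;
    struct {                      /* UART-TX view of the same pin     */
        uint32_t MODE  : 2;
        uint32_t       : 2;       /* reserved                         */
        uint32_t TX_EN : 1;       /* transmitter enable               */
        uint32_t       : 27;      /* reserved                         */
    } uart;
} PIN32_CTRL_t;

/* #define of the exact CPU model picks the right base address. */
#if defined(SOC_MODEL_A)
#define PIN32_CTRL (*(volatile PIN32_CTRL_t *)0x40020080u)
#else /* SOC_MODEL_B */
#define PIN32_CTRL (*(volatile PIN32_CTRL_t *)0x48001080u)
#endif

static inline void pin32_select_uart_tx(void)
{
    PIN32_CTRL.uart.MODE  = 1;    /* hand pin 32 to the UART          */
    PIN32_CTRL.uart.TX_EN = 1;
}

(Real vendor headers usually also provide raw mask/shift macros, since
C bit-field layout is compiler-dependent; the point is that one such
header per chip is the only thing that changes.)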


ARM realized their mistake and introduced the 'CMSIS' library.
It is available for NXP ARM chips.
But the engineers who wrote it are too stupid.
They named the registers, but bit fields and multiplexed pins etc.
are not defined, or are just too stupid for words - they even change
similar register names like

RS232-UART_Tx_REGISTER to RS232-UART-TX-REGISTER

from one chip model to the next, bringing endless joy and happiness to
themselves as they aimlessly shoot themselves in both feet with one
bullet and make their library unusable from chip to chip. Doh!
(OK, just exaggerating, but it's not far from the truth.)
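
To show what that kind of renaming costs downstream, here is the sort
of compatibility shim it forces on anyone whose code spans more than
one chip. CHIP_FOO, CHIP_BAR, the header names and the underscored
register spellings are all hypothetical stand-ins, not taken from any
real CMSIS release:

/* Per-chip shim papering over gratuitous register renames. */
#if defined(CHIP_FOO)
#include "chip_foo.h"              /* defines RS232_UART_Tx_REGISTER  */
#define UART1_TX_REG RS232_UART_Tx_REGISTER
#elif defined(CHIP_BAR)
#include "chip_bar.h"              /* defines RS232_UART_TX_REGISTER  */
#define UART1_TX_REG RS232_UART_TX_REGISTER
#else
#error "unknown chip - add its register spelling here"
#endif

/* Portable driver code then uses UART1_TX_REG on every chip. */

Multiply that by every register that got renamed, and by every project
that targets more than one chip, and the cost of the inconsistency
becomes obvious.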

The correct approach is to be critical of ARM's lack of
a proper 'CMSIS' library and encourage them to hire just 4 open source
engineers to write a proper 'CMSIS' library, one chip at a time,
until a couple of chips get done, and then fan out and cover some of
the bigger SoCs and families of chips. It's just too late to
go back and recover from all their mistakes. I'm sure that once 4
engineers publish their work on github each day, developers at the SoC
companies will rapidly begin filling in the missing details for their
custom chips, because it is to their benefit to release one header file
that describes their chip completely, to help port software quickly and
sell more chips more quickly.

Who else could do this work? The SoC makers? Open source developers?
Distro makers? Not one fat chance!!

It is ARM's responsibility to make sure UART1 means UART1 in all
CPUs, and not to make flat-footed excuses about it.

This problem will never go away until they (ARM) do something about it.


2013-05-09 00:25:19

by Rob Landley

[permalink] [raw]
Subject: Re: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

On 05/08/2013 03:19:23 AM, Luke Kenneth Casson Leighton wrote:
> On Wed, May 8, 2013 at 4:44 AM, Rob Landley <[email protected]> wrote:
>
> >> whereas the EOMA initiative is at the complete opposite end of the
> >> spectrum. and products based around the EOMA standards, although
> >> there is a cost overhead of e.g. around $6 in parts for EOMA-68,
> there
> >> is a whopping great saving of 30 to 40% to the customer when
> compared
> >> to other products *if* your end-user is prepared to swap / share
> CPU
> >> Cards between two products. if they share the CPU Card between
> three
> >> products then the saving to them is even greater.

It's only "whopping great" if it allows them to lower the absolute cost
of the product. If it just buys you a 30% cost savings in a niche with
an 18 month half life of hardware depreciation, sheer inventory
management can get you that much.

The big limit of Moore's law is that these days you should be able to buy
4 megs of memory for a nickel, and you can't. The overhead of slicing it
that small dwarfs the cost of the actual component; the RATIO changes, but
the cheapest netbook is still $200 or so. They don't _make_ disks that
store less than a gigabyte anymore.

I should be able to put together the equivalent of Linus's original
1991 PC for under a dollar, and I can't. (Not unless I manage to
manufacture several million of them.) I've been awaiting disposable
computing for years, but it's not here yet.

If your savings of 30% to the customer just means their router has 2
gigs of ram instead of 1 gig of ram but is the same price as the one on
the next shelf because the fixed costs dominate and you added $6 to
that...

> > In theory, Moore's Law says that buys you... 9 months?
>
> and 6 months in to that 9 months you bring out the next CPU Card, and
> the next, and the next, and the next, and the next.
>
> there's a hell of a lot of history already behind the EOMA
> initiatives.

At what point does that history become a downside? (We've had 20 "year
of the Linux on the Desktop" announcements. Nobody pays any attention
to new ones, too much crying wolf. Anyway, I explained in the video why
that's a systemic problem in our development model, tangent...)

If I want a cheap plastic Linux system I can buy a raspberry-pi ($35)
or pandaboard-black ($45 and the HDMI driver isn't a binary-only blob).
Do these systems participate in your EOMA thing? Would they benefit
from it if they did? If so, given your history, why don't they?

> > A phone is a mass-produced consumer electronics device. Is "I can
> rip the
> > guts out of my DVD player and re-use it" a commercially interesting
> > statement?
>
> you've missed the point.

Agreed. That's why I keep asking, trying to figure out what the point
is.

> EOMA-68 CPU Cards are separately-sold
> mass-volume *interchangeable* products, i.e. being packaged in legacy
> PCMCIA housings they have the exact same advantages as PCMCIA, except
> now it's the *CPU* that's interchangeable between products.

More mass-volume than phones?

I'm trying to think of the last time I got a new netbook with legacy
PCMCIA in it. It's been more than 5 years...

> nobody in their right mind swaps the DVD electronics, they just buy
> another DVD player. including the mechanical part and the built-in
> PSU, and the GPL-violating software running on it.

Yes, that was sort of my point. What's _different_? And how is this not
http://xkcd.com/927 ?

> >> what they *don't* have to do is put the entire product in
> landfill.
> >>
> >> etc. etc. i could go on about this at some length but i've already
> >> done so lots of times.
> >
> >
> > Link?
>
> links.

Links !> link. Links isn't even = to link, actually. It's one of them
marketing 101 things. (See "paradox of choice", "elevator pitch",
"marketing hook"...)

You know how when you start a fire you have a spark, kindling, and THEN
logs? You escalate? Or when somebody writes a book the first sentence
gets them to read the first page, the first page gets them to read the
first chapter, the first chapter gets them invested in the story so
they read the rest of the book...

In marketing you earn five seconds of attention, use it to earn thirty
seconds of attention, use that to earn five minutes of attention...

You had five minutes worth of my interest, but that was not a five
minute list of links.

> http://www.c2mtl.com/eye50/ideas/the-rhombus-tech-eoma-68-initiative/
> http://rhombus-tech.net/articles/eoma68_in_education/

Ok, obvious from those first two links that rhombus-tech is behind it.
(Never heard of 'em, which is a bad sign given this "history" you speak
of.) So, rhombus-tech.net... And there is a FAQ! First question says
your goal is to create synergy.

Yeah, I'm not too big into synergy, I generally stick with the diet
Monster and Rockstar flavors, occasional Full Throttle citrus when I've
got some diet mountain dew to mix it with (but that's not diet, and the
sugar does bad things to my sinuses).

I don't think I'm the target audience here.

Rob-

2013-05-10 01:02:41

by Yuhong Bao

[permalink] [raw]
Subject: RE: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

----------------------------------------
> From: [email protected]
> To: [email protected]; [email protected]
> CC: [email protected]; [email protected]; [email protected]; [email protected]
> Subject: RE: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]
> Date: Thu, 9 May 2013 17:56:33 -0700
>
> > the economics of market forces don't work that way.
> > profit-maximising companies are pathologically and *LEGALLY* bound to
> > enact the articles of incorporation. so you'd need to show them that
> > it would hurt their profits to continue the way that they are going.
> I think legally bound is a myth, but that is off-topic for lkml of course.
And obviously it doesn't matter whether it is actually required by law if they behave that way in practice.

Yuhong Bao -

2013-05-10 01:02:46

by Yuhong Bao

[permalink] [raw]
Subject: RE: device tree not the answer in the ARM world [was: Re: running Debian on a Cubieboard]

> the economics of market forces don't work that way.
> profit-maximising companies are pathologically and *LEGALLY* bound to
> enact the articles of incorporation. so you'd need to show them that
> it would hurt their profits to continue the way that they are going.
I think legally bound is a myth, but that is off-topic for lkml of course.

Yuhong Bao -