Dear all,
A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per-cpu
latencies. We had a discussion about this patchset because it reverses
the modifications Deepthi made some months ago [2], and we may want to
provide a different implementation.
The Linaro Connect [3] event gave us the opportunity to meet people
involved in power management and the cpuidle area for different SoCs.
With the Tegra3 and big.LITTLE architectures, supporting per-cpu
latencies in cpuidle is vital.
Also, the SoC vendors would like the ability to tune their cpu
latencies through the device tree.
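For illustration, a driver could fill its idle states from the device
tree at probe time. This is only a sketch: the property names
("exit-latency-us", "min-residency-us") are invented here, not an
agreed binding:

    /* Hypothetical: fill one idle state from a device tree node. */
    static int soc_idle_parse_state(struct device_node *np,
                                    struct cpuidle_state *state)
    {
            u32 latency, residency;

            /* both property names are assumptions for this sketch */
            if (of_property_read_u32(np, "exit-latency-us", &latency) ||
                of_property_read_u32(np, "min-residency-us", &residency))
                    return -EINVAL;

            state->exit_latency = latency;          /* in us */
            state->target_residency = residency;    /* in us */

            return 0;
    }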
We agreed on the following steps:
1. factor out / clean up the cpuidle code as much as possible
2. better sharing of code amongst SoC idle drivers by moving common bits
to core code
3. make the cpuidle_state structure contain only data (a rough sketch
follows below)
4. add an API to register latencies per cpu
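To make step 3 concrete, here is a rough sketch (not an agreed
interface) of what a data-only state description could look like once
the enter() callback has moved out of the state structure and into the
driver:

    /* Sketch only: the state reduced to pure data. Field names follow
     * the existing structure; the callback is deliberately gone. */
    struct cpuidle_state {
            char            name[16];
            char            desc[32];
            unsigned int    flags;
            unsigned int    exit_latency;      /* worst case, in us */
            unsigned int    target_residency;  /* break-even, in us */
            unsigned int    power_usage;       /* in mW */
            /* no enter() here: behaviour lives in the driver */
    };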
These four steps impact all the architectures. I began the factoring
out / cleanup [4], which has been accepted upstream, and I proposed
some modifications [5] but received very few answers.
Patch reviews are very slow and done at the last minute, right at the
merge window, which makes upstreaming code very difficult. This is not
a reproach, it is just how things are, and I would like to propose a
solution.
I propose to host a cpuidle-next tree where all these modifications
will live and against which people can send patches, preventing
last-minute conflicts; perhaps Len will agree to pull from this tree.
In the meantime, the tree will be part of linux-next, so the patches
will be more widely tested and can be fixed earlier.
Thanks
-- Daniel
[1] http://lwn.net/Articles/491257/
[2] http://lwn.net/Articles/464808/
[3] http://summit.linaro.org/
[4]
http://www.mail-archive.com/[email protected]/msg67033.html,
http://www.spinics.net/lists/linux-pm/msg27330.html,
http://comments.gmane.org/gmane.linux.ports.arm.omap/76311,
http://www.digipedia.pl/usenet/thread/18885/11795/
[5] https://lkml.org/lkml/2012/6/8/375
--
<http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs
Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
> We agreed on the following steps:
>
> 1. factor out / clean up the cpuidle code as much as possible
> 2. better sharing of code amongst SoC idle drivers by moving common bits
> to core code
> 3. make the cpuidle_state structure contain only data
> 4. add an API to register latencies per cpu
On huge systems, especially servers, doing cpuidle registration on a
per-cpu basis creates a big overhead; that is why global registration
was introduced in the first place.
Why not make it a configurable option? Architectures with uniform
cpuidle state parameters can continue to use global registration, while
the others use an API to register latencies per cpu, as proposed (a
sketch of this split follows below). We can definitely work out the
best way to implement it.
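A sketch of what that split could look like from a driver's
initialization path; cpuidle_register_latencies(),
soc_has_uniform_latencies() and struct cpuidle_latencies are
hypothetical names, not existing kernel API:

    static struct cpuidle_latencies soc_latencies[NR_CPUS]; /* hypothetical */

    static int __init soc_cpuidle_init(void)
    {
            int cpu, ret;

            /* global registration: one shared state table, as today */
            ret = cpuidle_register_driver(&soc_idle_driver);
            if (ret)
                    return ret;

            /* only non-uniform systems pay the per-cpu cost */
            if (!soc_has_uniform_latencies())
                    for_each_possible_cpu(cpu)
                            cpuidle_register_latencies(cpu,
                                                       &soc_latencies[cpu]);

            return 0;
    }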
Cheers,
Deepthi
On 06/18/2012 01:54 PM, Deepthi Dharwar wrote:
> On huge systems, especially servers, doing cpuidle registration on a
> per-cpu basis creates a big overhead; that is why global registration
> was introduced in the first place.
>
> Why not make it a configurable option? Architectures with uniform
> cpuidle state parameters can continue to use global registration,
> while the others use an API to register latencies per cpu, as
> proposed. We can definitely work out the best way to implement it.
Absolutely, and this is one reason I think adding a function:
cpuidle_register_latencies(int cpu, struct cpuidle_latencies);
makes sense if it is used only for cpus with different latencies.
The other architectures will be left untouched.
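Neither struct cpuidle_latencies nor the function above exists yet; a
minimal sketch of what the proposal could mean (a pointer is used here
rather than passing the structure by value):

    /* Sketch of the proposed, not yet existing, interface. */
    struct cpuidle_latencies {
            unsigned int exit_latency[CPUIDLE_STATE_MAX];      /* us */
            unsigned int target_residency[CPUIDLE_STATE_MAX];  /* us */
    };

    int cpuidle_register_latencies(int cpu, struct cpuidle_latencies *lat);

On a big.LITTLE system, for example, the LITTLE cluster could register
smaller exit latencies than the big one while both share the same state
definitions.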
IMHO, before adding more functionality to cpuidle, we should clean up
and consolidate the code. For example, there is a dependency between
acpi_idle and intel_idle which could be resolved with notifiers, there
is Intel-specific code in cpuidle.c and cpuidle.h, and cpu_relax was
introduced into cpuidle even though it belongs to x86 rather than the
cpuidle core, etc.
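As an illustration of the notifier idea, the two drivers could talk
through a generic chain instead of knowing about each other directly.
All names below are hypothetical:

    static BLOCKING_NOTIFIER_HEAD(idle_driver_notifier);

    /* acpi_idle would subscribe and step aside when a more specific
     * driver (e.g. intel_idle) announces itself */
    int idle_driver_notifier_register(struct notifier_block *nb)
    {
            return blocking_notifier_chain_register(&idle_driver_notifier,
                                                    nb);
    }

    /* intel_idle would call this once it has claimed the cpus */
    void idle_driver_announce(const char *name)
    {
            blocking_notifier_call_chain(&idle_driver_notifier, 0,
                                         (void *)name);
    }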
Cleaning up the code will help move the different bits from the
arch-specific code into the core and reduce the impact of core
modifications. That should let a common pattern emerge and will
facilitate future modifications (per-cpu latencies being one of them).
That will be a lot of changes, which is why I proposed putting in
place a cpuidle-next tree to consolidate all the cpuidle modifications
people want to see upstream and provide better testing.
On Mon, Jun 18, 2012 at 02:35:42PM +0200, Daniel Lezcano wrote:
> > That will be a lot of changes, which is why I proposed putting in
> > place a cpuidle-next tree to consolidate all the cpuidle
> > modifications people want to see upstream and provide better testing.
Sounds like a good idea. Do you have something like that already?
Thanks,
Peter.
On 06/18/2012 02:53 PM, Peter De Schrijver wrote:
> Sounds like a good idea. Do you have something like that already?
Yes but I need to cleanup the tree before.
http://git.linaro.org/gitweb?p=people/dlezcano/linux-next.git;a=summary
Hi Daniel,
On Mon, Jun 18, 2012 at 2:55 PM, Daniel Lezcano
<[email protected]> wrote:
> On 06/18/2012 02:53 PM, Peter De Schrijver wrote:
>> On Mon, Jun 18, 2012 at 02:35:42PM +0200, Daniel Lezcano wrote:
>>> On 06/18/2012 01:54 PM, Deepthi Dharwar wrote:
>>>> On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
>>>>> We agreed on the following steps:
>>>>>
>>>>> 1. factor out / clean up the cpuidle code as much as possible
>>>>> 2. better sharing of code amongst SoC idle drivers by moving common bits
>>>>> to core code
>>>>> 3. make the cpuidle_state structure contain only data
>>>>> 4. add an API to register latencies per cpu
That makes sense, especially if you can refactor _and_ add new
functionality at the same time.
>>> Absolutely, and this is one reason I think adding a function:
>>>
>>> cpuidle_register_latencies(int cpu, struct cpuidle_latencies);
>>>
>>> makes sense if it is used only for cpus with different latencies.
>>> The other architectures will be left untouched.
Do you mean keeping the parameters in the cpuidle_driver struct and
not calling the new API? That looks great.
>>> That will be a lot of changes, which is why I proposed putting in
>>> place a cpuidle-next tree to consolidate all the cpuidle
>>> modifications people want to see upstream and provide better testing.
Nice! The new tree needs to stay as close as possible to mainline,
though. Do you have plans for that?
Do not hesitate to ask for help on OMAPs!
Regards,
Jean
On 06/18/2012 03:06 PM, Jean Pihet wrote:
> That makes sense, especially if you can refactor _and_ add new
> functionality at the same time.
Yes :)
> Do you mean keeping the parameters in the cpuidle_driver struct and
> not calling the new API?
Yes, right.
> That looks great.
> Nice! The new tree needs to stay as close as possible to mainline,
> though. Do you have plans for that?
Yes. AFAIU, when I ask for cpuidle-next to be included in linux-next,
I have to base the tree on top of Linus's tree, and it will be pulled
every day. That will allow conflicts and bogus commits to be detected
early, especially across the numerous x86 architecture variants and
cpuidle combinations.
For the moment I have local commits in my tree, and I am waiting for
feedback from the lists about the RFC I sent for some cpuidle core
changes.
I will create a clean new tree cpuidle-next.
> Do not hesitate to ask for help on OMAPs!
Cool thanks, will do :)
-- Daniel
Daniel,
On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
> We agreed on the following steps:
>
> 1. factor out / clean up the cpuidle code as much as possible
> 2. better sharing of code amongst SoC idle drivers by moving common bits
> to core code
> 3. make the cpuidle_state structure contain only data
> 4. add an API to register latencies per cpu
Another thing we discussed is bringing the CPU cluster/package notion
into the core idle code. Coupled idle brings that idea to some extent,
but it can be further extended and abstracted. Atm, most of the work
is done in the back-end cpuidle drivers, and it could easily be
abstracted away if the "cluster idle" notion were supported in the
core layer.
Per-CPU __and__ per-operating-point (OPP) latency is something which
could also be added to the list (a rough sketch follows below). From
the discussion, I remember it matters for a few SoCs and could be
beneficial.
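A hypothetical sketch of what per-CPU, per-OPP latencies could look
like; the table, MAX_OPPS and opp_index() are invented for
illustration, only the cpufreq transition notifier is existing
infrastructure:

    /* one latency entry per (cpu, operating point) */
    struct idle_latency {
            unsigned int exit_latency;       /* us */
            unsigned int target_residency;   /* us */
    };

    static struct idle_latency latency_table[NR_CPUS][MAX_OPPS];
    static unsigned int current_opp[NR_CPUS];

    /* re-select the row when cpufreq changes the operating point */
    static int opp_latency_notify(struct notifier_block *nb,
                                  unsigned long event, void *data)
    {
            struct cpufreq_freqs *freqs = data;

            if (event == CPUFREQ_POSTCHANGE)
                    current_opp[freqs->cpu] = opp_index(freqs->new);

            return NOTIFY_OK;
    }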
Regards
Santosh
On Mon, Jun 18, 2012 at 1:40 AM, Daniel Lezcano
<[email protected]> wrote:
> I propose to host a cpuidle-next tree where all these modifications
> will live and against which people can send patches, preventing
> last-minute conflicts; perhaps Len will agree to pull from this tree.
> In the meantime, the tree will be part of linux-next, so the patches
> will be more widely tested and can be fixed earlier.
My coupled cpuidle patches were acked and temporarily in Len's
next/Linus pull branch, but were later dropped when the first pull
request to Linus was rejected. I asked Len to either put the coupled
cpuidle patches into his next branch, or let me host them so people
could base SoC branches off of them and let Len pull them later, but
got no response. If you do start a cpuidle for-next branch, can you
pull my coupled-cpuidle branch:
The following changes since commit 76e10d158efb6d4516018846f60c2ab5501900bc:
Linux 3.4 (2012-05-20 15:29:13 -0700)
are available in the git repository at:
https://android.googlesource.com/kernel/common.git coupled-cpuidle
Colin Cross (4):
cpuidle: refactor out cpuidle_enter_state
cpuidle: fix error handling in __cpuidle_register_device
cpuidle: add support for states that affect multiple cpus
cpuidle: coupled: add parallel barrier function
drivers/cpuidle/Kconfig | 3 +
drivers/cpuidle/Makefile | 1 +
drivers/cpuidle/coupled.c | 715 +++++++++++++++++++++++++++++++++++++++++++++
drivers/cpuidle/cpuidle.c | 68 ++++-
drivers/cpuidle/cpuidle.h | 32 ++
include/linux/cpuidle.h | 11 +
6 files changed, 813 insertions(+), 17 deletions(-)
create mode 100644 drivers/cpuidle/coupled.c
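For reference, this is roughly how an SoC driver would use the coupled
states from this series; the shape follows the patch descriptions, and
the names of the example state and device variable are assumed:

    static int soc_powerdown_enter(struct cpuidle_device *dev,
                                   struct cpuidle_driver *drv, int index);

    static struct cpuidle_driver soc_idle_driver = {
            .name        = "soc_idle",
            .states = {
                    [0] = { /* ... ordinary per-cpu WFI state ... */ },
                    [1] = {
                            .enter            = soc_powerdown_enter,
                            .exit_latency     = 5000,   /* us, example */
                            .target_residency = 10000,  /* us, example */
                            .flags            = CPUIDLE_FLAG_TIME_VALID |
                                                CPUIDLE_FLAG_COUPLED,
                            .name             = "powerdown",
                            .desc             = "cluster power down",
                    },
            },
            .state_count = 2,
    };

    /* each cpu's device names the cpus that must enter together; the
     * core then runs ->enter on all of them at once */
    cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);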
On 06/18/2012 08:15 PM, Colin Cross wrote:
> On Mon, Jun 18, 2012 at 1:40 AM, Daniel Lezcano
> <[email protected]> wrote:
> My coupled cpuidle patches were acked and temporarily in Len's
> next/Linus pull branch, but were later dropped when the first pull
> request to Linus was rejected. I asked Len to either put the coupled
> cpuidle patches into his next branch, or let me host them so people
> could base SoC branches off of them and let Len pull them later, but
> got no response. If you do start a cpuidle for-next branch, can you
> pull my coupled-cpuidle branch:
No problem.
Thanks
-- Daniel
On 06/18/2012 08:15 PM, Colin Cross wrote:
Done.
http://git.linaro.org/gitweb?p=people/dlezcano/cpuidle-next.git;a=shortlog;h=refs/heads/cpuidle-next
On Mon, Jun 18, 2012 at 7:00 PM, a0393909 <[email protected]> wrote:
> Another thing we discussed is bringing the CPU cluster/package notion
> into the core idle code. Coupled idle brings that idea to some extent,
> but it can be further extended and abstracted. Atm, most of the work
> is done in the back-end cpuidle drivers, and it could easily be
> abstracted away if the "cluster idle" notion were supported in the
> core layer.
>
Are you considering "cluster idle" as one of the topics?
Regards
Santosh
On 06/25/2012 02:58 PM, Shilimkar, Santosh wrote:
> Are you considering "cluster idle" as one of the topics?
Yes, absolutely. ATM, I am looking at refactoring the cpuidle code and
cleaning up wherever possible.
On Mon, Jun 25, 2012 at 6:40 PM, Daniel Lezcano
<[email protected]> wrote:
> On 06/25/2012 02:58 PM, Shilimkar, Santosh wrote:
>> Are you considering "cluster idle" as one of the topics?
>
> Yes, absolutely. ATM, I am looking at refactoring the cpuidle code and
> cleaning up wherever possible.
>
Cool !!
regards
Santosh
Hi Stephen,
last week we discussed putting in place a tree grouping the cpuidle
modifications [1]. Is it possible to add the tree?
git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
It contains for the moment Colin Cross's cpuidle coupled states.
Thanks in advance
-- Daniel
[1] https://lkml.org/lkml/2012/6/18/113
Hi Daniel,
On Mon, 25 Jun 2012 15:27:03 +0200 Daniel Lezcano <[email protected]> wrote:
>
> we discussed last week to put in place a tree grouping the cpuidle
> modifications [1]. Is it possible to add the tree ?
>
> git://git.linaro.org/people/dlezcano/cpuidle-next.git #cpuidle-next
>
> It contains for the moment Colin Cross's cpuidle coupled states.
Added from today.
Thanks for adding your subsystem tree as a participant of linux-next. As
you may know, this is not a judgment of your code. The purpose of
linux-next is for integration testing and to lower the impact of
conflicts between subsystems in the next merge window.
You will need to ensure that the patches/commits in your tree/series have
been:
* submitted under GPL v2 (or later) and include the Contributor's
Signed-off-by,
* posted to the relevant mailing list,
* reviewed by you (or another maintainer of your subsystem tree),
* successfully unit tested, and
* destined for the current or next Linux merge window.
Basically, this should be just what you would send to Linus (or ask him
to fetch). It is allowed to be rebased if you deem it necessary.
--
Cheers,
Stephen Rothwell
[email protected]
Legal Stuff:
By participating in linux-next, your subsystem tree contributions are
public and will be included in the linux-next trees. You may be sent
e-mail messages indicating errors or other issues when the
patches/commits from your subsystem tree are merged and tested in
linux-next. These messages may also be cross-posted to the linux-next
mailing list, the linux-kernel mailing list, etc. The linux-next tree
project and IBM (my employer) make no warranties regarding the linux-next
project, the testing procedures, the results, the e-mails, etc. If you
don't agree to these ground rules, let me know and I'll remove your tree
from participation in linux-next.