the lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
during the __mem_cgroup_stat_add_safe function.
therefore, a raw_spinlock_t should be used.
Signed-off-by: Tim Blechmann <[email protected]>
---
mm/memcontrol.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 517f945..8661732 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -95,9 +95,9 @@ enum mem_cgroup_zstat_index {
struct mem_cgroup_per_zone {
/*
- * spin_lock to protect the per cgroup LRU
+ * raw_spin_lock to protect the per cgroup LRU
*/
- spinlock_t lru_lock;
+ raw_spinlock_t lru_lock;
struct list_head active_list;
struct list_head inactive_list;
unsigned long count[NR_MEM_CGROUP_ZSTAT];
--
1.5.6.3
Hi Tim,
On Mon, 8 Dec 2008, Tim Blechmann wrote:
> the lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> during the __mem_cgroup_stat_add_safe function.
> therefore, a raw_spinlock_t should be used.
What is the reason that this must avoid preemption? Is there another
way to solve this? I'd rather not just add a raw spinlock if we can
help it.
-- Steve
>
> Signed-off-by: Tim Blechmann <[email protected]>
> ---
> mm/memcontrol.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 517f945..8661732 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -95,9 +95,9 @@ enum mem_cgroup_zstat_index {
>
> struct mem_cgroup_per_zone {
> /*
> - * spin_lock to protect the per cgroup LRU
> + * raw_spin_lock to protect the per cgroup LRU
> */
> - spinlock_t lru_lock;
> + raw_spinlock_t lru_lock;
> struct list_head active_list;
> struct list_head inactive_list;
> unsigned long count[NR_MEM_CGROUP_ZSTAT];
> --
> 1.5.6.3
>
>
>
> > the lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> > during the __mem_cgroup_stat_add_safe function.
> > therefore, a raw_spinlock_t should be used.
>
> What is the reason that this must avoid preemption?
it guards a call to smp_processor_id() in __mem_cgroup_stat_add_safe().
see http://article.gmane.org/gmane.linux.rt.user/3690
> Is there another
> way to solve this? I'd rather not just add a raw spinlock if we can
> help it.
not sure, maybe one can disable preemption for that specific function?
tim
--
[email protected]
http://tim.klingt.org
Art is either a complaint or do something else
John Cage quoting Jasper Johns
On Mon, 8 Dec 2008, Tim Blechmann wrote:
> > > the lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> > > during the __mem_cgroup_stat_add_safe function.
> > > therefore, a raw_spinlock_t should be used.
> >
> > What is the reason that this must avoid preemption?
>
> it guards a call to smp_processor_id() in __mem_cgroup_stat_add_safe().
> see http://article.gmane.org/gmane.linux.rt.user/3690
We could simply create a
preempt_disable_rt();
that will only disable preemption when CONFIG_PREEMPT_RT is set.
and then we could add
int cpu;
preempt_disable_rt();
cpu = smp_processor_id();
stat->cpustat[cpu].count[idx] += val;
preempt_enable_rt();
Or something similar.
-- Steve
>
> > Is there another
> > way to solve this? I'd rather not just add a raw spinlock if we can
> > help it.
>
> not sure, maybe one can disable preemption for that specific function?
>
> tim
>
> --
> [email protected]
> http://tim.klingt.org
>
> Art is either a complaint or do something else
> John Cage quoting Jasper Johns
>
CC: Balbir Singh <[email protected]>
CC: KAMEZAWA Hiroyuki <[email protected]>
Tim Blechmann wrote:
> the lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> during the __mem_cgroup_stat_add_safe function.
> therefore, a raw_spinlock_t should be used.
>
FYI, this lru_lock no longer exists in -mm tree. The following patch
removes that lock:
http://marc.info/?l=linux-mm&m=122665814801979&w=2
> Signed-off-by: Tim Blechmann <[email protected]>
> ---
> mm/memcontrol.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 517f945..8661732 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -95,9 +95,9 @@ enum mem_cgroup_zstat_index {
>
> struct mem_cgroup_per_zone {
> /*
> - * spin_lock to protect the per cgroup LRU
> + * raw_spin_lock to protect the per cgroup LRU
> */
> - spinlock_t lru_lock;
> + raw_spinlock_t lru_lock;
> struct list_head active_list;
> struct list_head inactive_list;
> unsigned long count[NR_MEM_CGROUP_ZSTAT];
On Tue, 09 Dec 2008 09:13:12 +0800
Li Zefan <[email protected]> wrote:
> CC: Balbir Singh <[email protected]>
> CC: KAMEZAWA Hiroyuki <[email protected]>
>
> Tim Blechmann wrote:
> > the lru_lock of struct mem_cgroup_per_zone is used to avoid preemption
> > during the __mem_cgroup_stat_add_safe function.
> > therefore, a raw_spinlock_t should be used.
> >
>
> FYI, this lru_lock no longer exists in -mm tree. The following patch
> removes that lock:
>
> http://marc.info/?l=linux-mm&m=122665814801979&w=2
>
> > Signed-off-by: Tim Blechmann <[email protected]>
> > ---
> > mm/memcontrol.c | 4 ++--
> > 1 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 517f945..8661732 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -95,9 +95,9 @@ enum mem_cgroup_zstat_index {
> >
> > struct mem_cgroup_per_zone {
> > /*
> > - * spin_lock to protect the per cgroup LRU
> > + * raw_spin_lock to protect the per cgroup LRU
> > */
> > - spinlock_t lru_lock;
> > + raw_spinlock_t lru_lock;
> > struct list_head active_list;
> > struct list_head inactive_list;
> > unsigned long count[NR_MEM_CGROUP_ZSTAT];
yes.. I removed that. Please see mmotm and subscribe to linux-mm
if you are interested in the memory resource controller.
Anyway, thanks.
-Kame
in some cases, spinlocks are used to avoid preemption. this does not
work correctly when CONFIG_PREEMPT_RT is enabled.
therefore two new macros are introduced:
preempt_disable_rt/preempt_enable_rt behave like
preempt_disable/preempt_enable if CONFIG_PREEMPT_RT is enabled, and as
no-ops otherwise.
Signed-off-by: Tim Blechmann <[email protected]>
---
include/linux/preempt.h | 9 +++++++++
1 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 72b1a10..2526170 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -93,6 +93,15 @@ do { \
#endif
+#ifdef CONFIG_PREEMPT_RT
+#define preempt_disable_rt preempt_disable
+#define preempt_enable_rt preempt_enable
+#else
+#define preempt_disable_rt do { } while (0)
+#define preempt_enable_rt do { } while (0)
+#endif
+
+
#ifdef CONFIG_PREEMPT_NOTIFIERS
struct preempt_notifier;
--
1.5.6.3