2010-01-11 21:27:20

by John Kacur

Subject: [PATCH 00/26] Convert locks that can't sleep in -rt to raw_spinlock

Thomas:

Now that your changes that free up the raw_spinlock name are upstream, I
have forward-ported the preempt-rt patches that convert locks to
atomic_spinlocks (rt tree only) to the new scheme. For other readers, the
background is described in these LWN articles:

http://lwn.net/Articles/365863/
http://lwn.net/Articles/366608/
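
For readers new to the distinction: on preempt-rt a plain spinlock_t turns
into a sleeping lock, so locks taken from hard interrupt or other atomic
contexts must stay truly spinning. The conversion in these patches is
mechanical; here is a minimal sketch of the pattern (the lock and function
names are made up for illustration):

static DEFINE_RAW_SPINLOCK(example_lock);	/* must spin, even on -rt */

static void example_update(void)
{
	unsigned long flags;

	/* raw_spin_* always busy-waits; spin_* may sleep on preempt-rt */
	raw_spin_lock_irqsave(&example_lock, flags);
	/* ... critical section, safe in hard irq context ... */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}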

The patches below are the result of that effort.
Please queue them up for 2.6.34 upstream, and please pull them into preempt-rt.

You can pull them from
git://git.kernel.org/pub/scm/linux/kernel/git/jkacur/jk-2.6.git
jk/v2.6.33-rc3-raw-spinlocks

Thanks

John Kacur (25):
xtime_lock: Convert atomic_seqlock to raw_seqlock, fix up all users
x86: Convert tlbstate_lock to raw_spinlock
sched: Convert thread_group_cputimer lock to raw_spinlock
x86: Convert ioapic_lock and vector_lock to raw_spinlocks
x86: Convert i8259A_lock to raw_spinlock
x86: Convert pci_config_lock to raw_spinlock
i8253: Convert i8253_lock to raw_spinlock
x86: Convert set_atomicity_lock to raw_spinlock
ACPI: Convert c3_lock to raw_spinlock
rtmutex: Convert wait_lock and pi_lock to raw_spinlock
printk: Convert lock to raw_spinlock
genirq: Convert locks to raw_spinlocks
trace: Convert various locks to raw_spinlock
clocksource: Convert watchdog_lock to raw_spinlock
timer_stats: Convert to raw_spinlocks
x86: kvm: Convert i8254/i8259 locks to raw_spinlock
x86 - nmi: Convert nmi_lock to raw_spinlock
cgroups: Convert cgroups release_list_lock to raw_spinlock
proportions: Convert spinlocks to raw_spinlocks.
percpu_counter: Convert to raw_spinlock
oprofile: Convert to raw_spinlock
vgacon: Convert vga console lock to raw_spinlock
pci-access: Convert pci_lock to raw_spinlock
kprobes: Convert to raw_spinlocks
softlockup: Convert to raw_spinlocks

Thomas Gleixner (1):
seqlock: Create raw_seqlock

arch/alpha/kernel/time.c | 4 +-
arch/arm/kernel/time.c | 12 ++--
arch/arm/oprofile/common.c | 4 +-
arch/arm/oprofile/op_model_mpcore.c | 4 +-
arch/blackfin/kernel/time.c | 4 +-
arch/cris/kernel/time.c | 4 +-
arch/frv/kernel/time.c | 4 +-
arch/h8300/kernel/time.c | 4 +-
arch/ia64/kernel/time.c | 4 +-
arch/ia64/xen/time.c | 4 +-
arch/m32r/kernel/time.c | 4 +-
arch/m68knommu/kernel/time.c | 4 +-
arch/mips/include/asm/i8253.h | 2 +-
arch/mips/kernel/i8253.c | 14 ++--
arch/mn10300/kernel/time.c | 4 +-
arch/parisc/kernel/time.c | 8 +-
arch/powerpc/kernel/time.c | 4 +-
arch/sparc/kernel/pcic.c | 8 +-
arch/sparc/kernel/time_32.c | 12 ++--
arch/x86/include/asm/i8253.h | 2 +-
arch/x86/include/asm/i8259.h | 2 +-
arch/x86/include/asm/pci_x86.h | 2 +-
arch/x86/kernel/apic/io_apic.c | 106 +++++++++++++++++-----------------
arch/x86/kernel/apic/nmi.c | 6 +-
arch/x86/kernel/apm_32.c | 4 +-
arch/x86/kernel/cpu/mtrr/generic.c | 6 +-
arch/x86/kernel/i8253.c | 14 ++--
arch/x86/kernel/i8259.c | 30 +++++-----
arch/x86/kernel/time.c | 4 +-
arch/x86/kernel/visws_quirks.c | 6 +-
arch/x86/kvm/i8254.c | 10 ++--
arch/x86/kvm/i8254.h | 2 +-
arch/x86/kvm/i8259.c | 30 +++++-----
arch/x86/kvm/irq.h | 2 +-
arch/x86/kvm/x86.c | 8 +-
arch/x86/mm/tlb.c | 8 +-
arch/x86/oprofile/nmi_int.c | 4 +-
arch/x86/pci/common.c | 2 +-
arch/x86/pci/direct.c | 16 +++---
arch/x86/pci/mmconfig_32.c | 8 +-
arch/x86/pci/numaq_32.c | 8 +-
arch/x86/pci/pcbios.c | 8 +-
arch/xtensa/kernel/time.c | 4 +-
drivers/acpi/processor_idle.c | 10 ++--
drivers/block/hd.c | 4 +-
drivers/input/gameport/gameport.c | 4 +-
drivers/input/joystick/analog.c | 4 +-
drivers/input/misc/pcspkr.c | 6 +-
drivers/oprofile/event_buffer.c | 4 +-
drivers/oprofile/oprofilefs.c | 6 +-
drivers/pci/access.c | 34 ++++++------
drivers/video/console/vgacon.c | 42 +++++++-------
include/linux/init_task.h | 2 +-
include/linux/kprobes.h | 2 +-
include/linux/oprofile.h | 2 +-
include/linux/percpu_counter.h | 2 +-
include/linux/proportions.h | 6 +-
include/linux/ratelimit.h | 4 +-
include/linux/rtmutex.h | 2 +-
include/linux/sched.h | 4 +-
include/linux/seqlock.h | 86 +++++++++++++++++++++++++++-
include/linux/time.h | 2 +-
kernel/cgroup.c | 18 +++---
kernel/hrtimer.c | 8 +-
kernel/kprobes.c | 34 ++++++------
kernel/posix-cpu-timers.c | 8 +-
kernel/printk.c | 42 +++++++-------
kernel/sched_stats.h | 12 ++--
kernel/softlockup.c | 6 +-
kernel/time.c | 8 +-
kernel/time/clocksource.c | 26 ++++----
kernel/time/ntp.c | 8 +-
kernel/time/tick-common.c | 8 +-
kernel/time/tick-sched.c | 12 ++--
kernel/time/timekeeping.c | 50 ++++++++--------
kernel/time/timer_stats.c | 6 +-
kernel/trace/ring_buffer.c | 52 +++++++++---------
kernel/trace/trace.c | 10 ++--
kernel/trace/trace_irqsoff.c | 6 +-
lib/percpu_counter.c | 18 +++---
lib/proportions.c | 12 ++--
lib/ratelimit.c | 4 +-
sound/drivers/pcsp/pcsp.h | 2 +-
sound/drivers/pcsp/pcsp_input.c | 4 +-
sound/drivers/pcsp/pcsp_lib.c | 12 ++--
85 files changed, 535 insertions(+), 457 deletions(-)


2010-01-11 21:27:44

by John Kacur

Subject: [PATCH 14/26] trace: Convert various locks to raw_spinlock

Convert locks that cannot sleep in preempt-rt to raw_spinlocks.

See also: 87654a70523a8c5baadcbbc07d80cbae8f912837

Signed-off-by: John Kacur <[email protected]>
---
kernel/trace/ring_buffer.c | 52 +++++++++++++++++++++---------------------
kernel/trace/trace.c | 10 ++++----
kernel/trace/trace_irqsoff.c | 6 ++--
3 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 2326b04..ffaddc5 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -422,7 +422,7 @@ int ring_buffer_print_page_header(struct trace_seq *s)
struct ring_buffer_per_cpu {
int cpu;
struct ring_buffer *buffer;
- spinlock_t reader_lock; /* serialize readers */
+ raw_spinlock_t reader_lock; /* serialize readers */
arch_spinlock_t lock;
struct lock_class_key lock_key;
struct list_head *pages;
@@ -996,7 +996,7 @@ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)

cpu_buffer->cpu = cpu;
cpu_buffer->buffer = buffer;
- spin_lock_init(&cpu_buffer->reader_lock);
+ raw_spin_lock_init(&cpu_buffer->reader_lock);
lockdep_set_class(&cpu_buffer->reader_lock, buffer->reader_lock_key);
cpu_buffer->lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;

@@ -1193,7 +1193,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
struct list_head *p;
unsigned i;

- spin_lock_irq(&cpu_buffer->reader_lock);
+ raw_spin_lock_irq(&cpu_buffer->reader_lock);
rb_head_page_deactivate(cpu_buffer);

for (i = 0; i < nr_pages; i++) {
@@ -1210,7 +1210,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
rb_reset_cpu(cpu_buffer);
rb_check_pages(cpu_buffer);

- spin_unlock_irq(&cpu_buffer->reader_lock);
+ raw_spin_unlock_irq(&cpu_buffer->reader_lock);
}

static void
@@ -1221,7 +1221,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
struct list_head *p;
unsigned i;

- spin_lock_irq(&cpu_buffer->reader_lock);
+ raw_spin_lock_irq(&cpu_buffer->reader_lock);
rb_head_page_deactivate(cpu_buffer);

for (i = 0; i < nr_pages; i++) {
@@ -1235,7 +1235,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
rb_reset_cpu(cpu_buffer);
rb_check_pages(cpu_buffer);

- spin_unlock_irq(&cpu_buffer->reader_lock);
+ raw_spin_unlock_irq(&cpu_buffer->reader_lock);
}

/**
@@ -2735,9 +2735,9 @@ void ring_buffer_iter_reset(struct ring_buffer_iter *iter)

cpu_buffer = iter->cpu_buffer;

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
rb_iter_reset(iter);
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
}
EXPORT_SYMBOL_GPL(ring_buffer_iter_reset);

@@ -3157,12 +3157,12 @@ ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts)
again:
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);
event = rb_buffer_peek(cpu_buffer, ts);
if (event && event->type_len == RINGBUF_TYPE_PADDING)
rb_advance_reader(cpu_buffer);
if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

if (event && event->type_len == RINGBUF_TYPE_PADDING)
@@ -3187,9 +3187,9 @@ ring_buffer_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
unsigned long flags;

again:
- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
event = rb_iter_peek(iter, ts);
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

if (event && event->type_len == RINGBUF_TYPE_PADDING)
goto again;
@@ -3225,14 +3225,14 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts)
cpu_buffer = buffer->buffers[cpu];
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);

event = rb_buffer_peek(cpu_buffer, ts);
if (event)
rb_advance_reader(cpu_buffer);

if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

out:
@@ -3278,11 +3278,11 @@ ring_buffer_read_start(struct ring_buffer *buffer, int cpu)
atomic_inc(&cpu_buffer->record_disabled);
synchronize_sched();

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
arch_spin_lock(&cpu_buffer->lock);
rb_iter_reset(iter);
arch_spin_unlock(&cpu_buffer->lock);
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

return iter;
}
@@ -3319,7 +3319,7 @@ ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts)
struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
unsigned long flags;

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
again:
event = rb_iter_peek(iter, ts);
if (!event)
@@ -3330,7 +3330,7 @@ ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts)

rb_advance_iter(iter);
out:
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

return event;
}
@@ -3396,7 +3396,7 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)

atomic_inc(&cpu_buffer->record_disabled);

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);

if (RB_WARN_ON(cpu_buffer, local_read(&cpu_buffer->committing)))
goto out;
@@ -3408,7 +3408,7 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)
arch_spin_unlock(&cpu_buffer->lock);

out:
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

atomic_dec(&cpu_buffer->record_disabled);
}
@@ -3446,10 +3446,10 @@ int ring_buffer_empty(struct ring_buffer *buffer)
cpu_buffer = buffer->buffers[cpu];
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);
ret = rb_per_cpu_empty(cpu_buffer);
if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

if (!ret)
@@ -3480,10 +3480,10 @@ int ring_buffer_empty_cpu(struct ring_buffer *buffer, int cpu)
cpu_buffer = buffer->buffers[cpu];
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);
ret = rb_per_cpu_empty(cpu_buffer);
if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

return ret;
@@ -3678,7 +3678,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer,
if (!bpage)
goto out;

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);

reader = rb_get_reader_page(cpu_buffer);
if (!reader)
@@ -3753,7 +3753,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer,
ret = read;

out_unlock:
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

out:
return ret;
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 0df1b0f..0c6bbcb 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -258,7 +258,7 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
TRACE_ITER_GRAPH_TIME;

static int trace_stop_count;
-static DEFINE_SPINLOCK(tracing_start_lock);
+static DEFINE_RAW_SPINLOCK(tracing_start_lock);

/**
* trace_wake_up - wake up tasks waiting for trace input
@@ -847,7 +847,7 @@ void tracing_start(void)
if (tracing_disabled)
return;

- spin_lock_irqsave(&tracing_start_lock, flags);
+ raw_spin_lock_irqsave(&tracing_start_lock, flags);
if (--trace_stop_count) {
if (trace_stop_count < 0) {
/* Someone screwed up their debugging */
@@ -868,7 +868,7 @@ void tracing_start(void)

ftrace_start();
out:
- spin_unlock_irqrestore(&tracing_start_lock, flags);
+ raw_spin_unlock_irqrestore(&tracing_start_lock, flags);
}

/**
@@ -883,7 +883,7 @@ void tracing_stop(void)
unsigned long flags;

ftrace_stop();
- spin_lock_irqsave(&tracing_start_lock, flags);
+ raw_spin_lock_irqsave(&tracing_start_lock, flags);
if (trace_stop_count++)
goto out;

@@ -896,7 +896,7 @@ void tracing_stop(void)
ring_buffer_record_disable(buffer);

out:
- spin_unlock_irqrestore(&tracing_start_lock, flags);
+ raw_spin_unlock_irqrestore(&tracing_start_lock, flags);
}

void trace_stop_cmdline_recording(void);
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 2974bc7..60ba58e 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -23,7 +23,7 @@ static int tracer_enabled __read_mostly;

static DEFINE_PER_CPU(int, tracing_cpu);

-static DEFINE_SPINLOCK(max_trace_lock);
+static DEFINE_RAW_SPINLOCK(max_trace_lock);

enum {
TRACER_IRQS_OFF = (1 << 1),
@@ -144,7 +144,7 @@ check_critical_timing(struct trace_array *tr,
if (!report_latency(delta))
goto out;

- spin_lock_irqsave(&max_trace_lock, flags);
+ raw_spin_lock_irqsave(&max_trace_lock, flags);

/* check if we are still the max latency */
if (!report_latency(delta))
@@ -167,7 +167,7 @@ check_critical_timing(struct trace_array *tr,
max_sequence++;

out_unlock:
- spin_unlock_irqrestore(&max_trace_lock, flags);
+ raw_spin_unlock_irqrestore(&max_trace_lock, flags);

out:
data->critical_sequence = max_sequence;
--
1.6.5.2

2010-01-11 21:27:23

by John Kacur

Subject: [PATCH 02/26] seqlock: Create raw_seqlock

From: Thomas Gleixner <[email protected]>

raw_seqlock_t will be used to annotate seqlocks which cannot be
converted to sleeping locks in preempt-rt.

Signed-off-by: Thomas Gleixner <[email protected]>

The original patch 09e46c7a86b2e81f97bd93f588b62c2d36cff58e used atomic_locks;
I converted it to raw_locks.
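
Usage mirrors the existing seqlock API. A minimal sketch using only the
primitives added below (the lock and the protected variable are made up
for illustration):

static DEFINE_RAW_SEQLOCK(example_seqlock);
static u64 example_value;

static void example_write(u64 v)
{
	write_raw_seqlock(&example_seqlock);	/* spins, never sleeps on -rt */
	example_value = v;
	write_raw_sequnlock(&example_seqlock);
}

static u64 example_read(void)
{
	unsigned seq;
	u64 v;

	do {
		seq = read_raw_seqbegin(&example_seqlock);
		v = example_value;
	} while (read_raw_seqretry(&example_seqlock, seq));

	return v;
}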

Signed-off-by: John Kacur <[email protected]>
---
include/linux/seqlock.h | 86 ++++++++++++++++++++++++++++++++++++++++++++--
1 files changed, 82 insertions(+), 4 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 632205c..6f2685c 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -31,6 +31,11 @@

typedef struct {
unsigned sequence;
+ raw_spinlock_t lock;
+} raw_seqlock_t;
+
+typedef struct {
+ unsigned sequence;
spinlock_t lock;
} seqlock_t;

@@ -38,11 +43,23 @@ typedef struct {
* These macros triggered gcc-3.x compile-time problems. We think these are
* OK now. Be cautious.
*/
+#define __RAW_SEQLOCK_UNLOCKED(lockname) \
+ { 0, __RAW_SPIN_LOCK_UNLOCKED(lockname) }
+
+#define seqlock_raw_init(x) \
+ do { \
+ (x)->sequence = 0; \
+ raw_spin_lock_init(&(x)->lock); \
+ } while (0)
+
+#define DEFINE_RAW_SEQLOCK(x) \
+ raw_seqlock_t x = __RAW_SEQLOCK_UNLOCKED(x)
+
#define __SEQLOCK_UNLOCKED(lockname) \
- { 0, __SPIN_LOCK_UNLOCKED(lockname) }
+ { 0, __SPIN_LOCK_UNLOCKED(lockname) }

#define SEQLOCK_UNLOCKED \
- __SEQLOCK_UNLOCKED(old_style_seqlock_init)
+ __SEQLOCK_UNLOCKED(old_style_seqlock_init)

#define seqlock_init(x) \
do { \
@@ -51,12 +68,19 @@ typedef struct {
} while (0)

#define DEFINE_SEQLOCK(x) \
- seqlock_t x = __SEQLOCK_UNLOCKED(x)
+ seqlock_t x = __SEQLOCK_UNLOCKED(x)

/* Lock out other writers and update the count.
* Acts like a normal spin_lock/unlock.
* Don't need preempt_disable() because that is in the spin_lock already.
*/
+static inline void write_raw_seqlock(raw_seqlock_t *sl)
+{
+ raw_spin_lock(&sl->lock);
+ ++sl->sequence;
+ smp_wmb();
+}
+
static inline void write_seqlock(seqlock_t *sl)
{
spin_lock(&sl->lock);
@@ -64,6 +88,13 @@ static inline void write_seqlock(seqlock_t *sl)
smp_wmb();
}

+static inline void write_raw_sequnlock(raw_seqlock_t *sl)
+{
+ smp_wmb();
+ sl->sequence++;
+ raw_spin_unlock(&sl->lock);
+}
+
static inline void write_sequnlock(seqlock_t *sl)
{
smp_wmb();
@@ -83,6 +114,21 @@ static inline int write_tryseqlock(seqlock_t *sl)
}

/* Start of read calculation -- fetch last complete writer token */
+static __always_inline unsigned read_raw_seqbegin(const raw_seqlock_t *sl)
+{
+ unsigned ret;
+
+repeat:
+ ret = sl->sequence;
+ smp_rmb();
+ if (unlikely(ret & 1)) {
+ cpu_relax();
+ goto repeat;
+ }
+
+ return ret;
+}
+
static __always_inline unsigned read_seqbegin(const seqlock_t *sl)
{
unsigned ret;
@@ -103,6 +149,14 @@ repeat:
*
* If sequence value changed then writer changed data while in section.
*/
+static __always_inline int
+read_raw_seqretry(const raw_seqlock_t *sl, unsigned start)
+{
+ smp_rmb();
+
+ return (sl->sequence != start);
+}
+
static __always_inline int read_seqretry(const seqlock_t *sl, unsigned start)
{
smp_rmb();
@@ -170,12 +224,36 @@ static inline void write_seqcount_end(seqcount_t *s)
/*
* Possible sw/hw IRQ protected versions of the interfaces.
*/
+#define write_raw_seqlock_irqsave(lock, flags) \
+ do { local_irq_save(flags); write_raw_seqlock(lock); } while (0)
+#define write_raw_seqlock_irq(lock) \
+ do { local_irq_disable(); write_raw_seqlock(lock); } while (0)
+#define write_raw_seqlock_bh(lock) \
+ do { local_bh_disable(); write_raw_seqlock(lock); } while (0)
+
+#define write_raw_sequnlock_irqrestore(lock, flags) \
+ do { write_raw_sequnlock(lock); local_irq_restore(flags); } while(0)
+#define write_raw_sequnlock_irq(lock) \
+ do { write_raw_sequnlock(lock); local_irq_enable(); } while(0)
+#define write_raw_sequnlock_bh(lock) \
+ do { write_raw_sequnlock(lock); local_bh_enable(); } while(0)
+
+#define read_raw_seqbegin_irqsave(lock, flags) \
+ ({ local_irq_save(flags); read_raw_seqbegin(lock); })
+
+#define read_raw_seqretry_irqrestore(lock, iv, flags) \
+ ({ \
+ int ret = read_raw_seqretry(lock, iv); \
+ local_irq_restore(flags); \
+ ret; \
+ })
+
#define write_seqlock_irqsave(lock, flags) \
do { local_irq_save(flags); write_seqlock(lock); } while (0)
#define write_seqlock_irq(lock) \
do { local_irq_disable(); write_seqlock(lock); } while (0)
#define write_seqlock_bh(lock) \
- do { local_bh_disable(); write_seqlock(lock); } while (0)
+ do { local_bh_disable(); write_seqlock(lock); } while (0)

#define write_sequnlock_irqrestore(lock, flags) \
do { write_sequnlock(lock); local_irq_restore(flags); } while(0)
--
1.6.5.2

2010-01-11 21:27:29

by John Kacur

Subject: [PATCH 11/26] rtmutex: Convert wait_lock and pi_lock to raw_spinlock

Convert locks that cannot sleep in preempt-rt to raw_spinlocks.

See d227cf76b6f51e3245f1b3f47720c3d7df4b68b0, which is missing from
d209d74d52ab39dc071656533cac095294f70de7.

Signed-off-by: John Kacur <[email protected]>
---
include/linux/rtmutex.h | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 281d8fd..ac76f7b 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -98,7 +98,7 @@ extern void rt_mutex_unlock(struct rt_mutex *lock);

#ifdef CONFIG_RT_MUTEXES
# define INIT_RT_MUTEXES(tsk) \
- .pi_waiters = PLIST_HEAD_INIT(tsk.pi_waiters, tsk.pi_lock), \
+ .pi_waiters = PLIST_HEAD_INIT_RAW(tsk.pi_waiters, tsk.pi_lock), \
INIT_RT_MUTEX_DEBUG(tsk)
#else
# define INIT_RT_MUTEXES(tsk)
--
1.6.5.2

2010-01-11 21:27:34

by John Kacur

Subject: [PATCH 15/26] clocksource: Convert watchdog_lock to raw_spinlock

Convert locks that cannot sleep in preempt-rt to raw_spinlocks.

See also fea886ed3f18a93ab76fbeed13f1f73c97bd8982

Signed-off-by: John Kacur <[email protected]>
---
kernel/time/clocksource.c | 26 +++++++++++++-------------
1 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index e85c234..ffcb48f 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -183,7 +183,7 @@ static LIST_HEAD(watchdog_list);
static struct clocksource *watchdog;
static struct timer_list watchdog_timer;
static DECLARE_WORK(watchdog_work, clocksource_watchdog_work);
-static DEFINE_SPINLOCK(watchdog_lock);
+static DEFINE_RAW_SPINLOCK(watchdog_lock);
static cycle_t watchdog_last;
static int watchdog_running;

@@ -233,13 +233,13 @@ void clocksource_mark_unstable(struct clocksource *cs)
{
unsigned long flags;

- spin_lock_irqsave(&watchdog_lock, flags);
+ raw_spin_lock_irqsave(&watchdog_lock, flags);
if (!(cs->flags & CLOCK_SOURCE_UNSTABLE)) {
if (list_empty(&cs->wd_list))
list_add(&cs->wd_list, &watchdog_list);
__clocksource_unstable(cs);
}
- spin_unlock_irqrestore(&watchdog_lock, flags);
+ raw_spin_unlock_irqrestore(&watchdog_lock, flags);
}

static void clocksource_watchdog(unsigned long data)
@@ -249,7 +249,7 @@ static void clocksource_watchdog(unsigned long data)
int64_t wd_nsec, cs_nsec;
int next_cpu;

- spin_lock(&watchdog_lock);
+ raw_spin_lock(&watchdog_lock);
if (!watchdog_running)
goto out;

@@ -308,7 +308,7 @@ static void clocksource_watchdog(unsigned long data)
watchdog_timer.expires += WATCHDOG_INTERVAL;
add_timer_on(&watchdog_timer, next_cpu);
out:
- spin_unlock(&watchdog_lock);
+ raw_spin_unlock(&watchdog_lock);
}

static inline void clocksource_start_watchdog(void)
@@ -343,16 +343,16 @@ static void clocksource_resume_watchdog(void)
{
unsigned long flags;

- spin_lock_irqsave(&watchdog_lock, flags);
+ raw_spin_lock_irqsave(&watchdog_lock, flags);
clocksource_reset_watchdog();
- spin_unlock_irqrestore(&watchdog_lock, flags);
+ raw_spin_unlock_irqrestore(&watchdog_lock, flags);
}

static void clocksource_enqueue_watchdog(struct clocksource *cs)
{
unsigned long flags;

- spin_lock_irqsave(&watchdog_lock, flags);
+ raw_spin_lock_irqsave(&watchdog_lock, flags);
if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {
/* cs is a clocksource to be watched. */
list_add(&cs->wd_list, &watchdog_list);
@@ -370,7 +370,7 @@ static void clocksource_enqueue_watchdog(struct clocksource *cs)
}
/* Check if the watchdog timer needs to be started. */
clocksource_start_watchdog();
- spin_unlock_irqrestore(&watchdog_lock, flags);
+ raw_spin_unlock_irqrestore(&watchdog_lock, flags);
}

static void clocksource_dequeue_watchdog(struct clocksource *cs)
@@ -378,7 +378,7 @@ static void clocksource_dequeue_watchdog(struct clocksource *cs)
struct clocksource *tmp;
unsigned long flags;

- spin_lock_irqsave(&watchdog_lock, flags);
+ raw_spin_lock_irqsave(&watchdog_lock, flags);
if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {
/* cs is a watched clocksource. */
list_del_init(&cs->wd_list);
@@ -397,7 +397,7 @@ static void clocksource_dequeue_watchdog(struct clocksource *cs)
cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
/* Check if the watchdog timer needs to be stopped. */
clocksource_stop_watchdog();
- spin_unlock_irqrestore(&watchdog_lock, flags);
+ raw_spin_unlock_irqrestore(&watchdog_lock, flags);
}

static int clocksource_watchdog_kthread(void *data)
@@ -407,7 +407,7 @@ static int clocksource_watchdog_kthread(void *data)
LIST_HEAD(unstable);

mutex_lock(&clocksource_mutex);
- spin_lock_irqsave(&watchdog_lock, flags);
+ raw_spin_lock_irqsave(&watchdog_lock, flags);
list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list)
if (cs->flags & CLOCK_SOURCE_UNSTABLE) {
list_del_init(&cs->wd_list);
@@ -415,7 +415,7 @@ static int clocksource_watchdog_kthread(void *data)
}
/* Check if the watchdog timer needs to be stopped. */
clocksource_stop_watchdog();
- spin_unlock_irqrestore(&watchdog_lock, flags);
+ raw_spin_unlock_irqrestore(&watchdog_lock, flags);

/* Needs to be done outside of watchdog lock */
list_for_each_entry_safe(cs, tmp, &unstable, wd_list) {
--
1.6.5.2

2010-01-11 21:27:50

by John Kacur

Subject: [PATCH 22/26] oprofile: Convert to raw_spinlock

Convert locks which cannot be sleeping locks in preempt-rt to raw_spinlocks.

See also 0ef0a8e3224c0edfce698beee0ecfc9fca07e9a8

Signed-off-by: John Kacur <[email protected]>
---
arch/arm/oprofile/common.c | 4 ++--
arch/x86/oprofile/nmi_int.c | 4 ++--
drivers/oprofile/event_buffer.c | 4 ++--
drivers/oprofile/oprofilefs.c | 6 +++---
include/linux/oprofile.h | 2 +-
5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm/oprofile/common.c b/arch/arm/oprofile/common.c
index 3fcd752..54a182b 100644
--- a/arch/arm/oprofile/common.c
+++ b/arch/arm/oprofile/common.c
@@ -48,9 +48,9 @@ static int op_arm_setup(void)
{
int ret;

- spin_lock(&oprofilefs_lock);
+ raw_spin_lock(&oprofilefs_lock);
ret = op_arm_model->setup_ctrs();
- spin_unlock(&oprofilefs_lock);
+ raw_spin_unlock(&oprofilefs_lock);
return ret;
}

diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
index cb88b1a..b5796ce 100644
--- a/arch/x86/oprofile/nmi_int.c
+++ b/arch/x86/oprofile/nmi_int.c
@@ -321,10 +321,10 @@ static void nmi_cpu_setup(void *dummy)
int cpu = smp_processor_id();
struct op_msrs *msrs = &per_cpu(cpu_msrs, cpu);
nmi_cpu_save_registers(msrs);
- spin_lock(&oprofilefs_lock);
+ raw_spin_lock(&oprofilefs_lock);
model->setup_ctrs(model, msrs);
nmi_cpu_setup_mux(cpu, msrs);
- spin_unlock(&oprofilefs_lock);
+ raw_spin_unlock(&oprofilefs_lock);
per_cpu(saved_lvtpc, cpu) = apic_read(APIC_LVTPC);
apic_write(APIC_LVTPC, APIC_DM_NMI);
}
diff --git a/drivers/oprofile/event_buffer.c b/drivers/oprofile/event_buffer.c
index 5df60a6..9a60ccc 100644
--- a/drivers/oprofile/event_buffer.c
+++ b/drivers/oprofile/event_buffer.c
@@ -82,10 +82,10 @@ int alloc_event_buffer(void)
{
unsigned long flags;

- spin_lock_irqsave(&oprofilefs_lock, flags);
+ raw_spin_lock_irqsave(&oprofilefs_lock, flags);
buffer_size = oprofile_buffer_size;
buffer_watershed = oprofile_buffer_watershed;
- spin_unlock_irqrestore(&oprofilefs_lock, flags);
+ raw_spin_unlock_irqrestore(&oprofilefs_lock, flags);

if (buffer_watershed >= buffer_size)
return -EINVAL;
diff --git a/drivers/oprofile/oprofilefs.c b/drivers/oprofile/oprofilefs.c
index 2766a6d..049ab37 100644
--- a/drivers/oprofile/oprofilefs.c
+++ b/drivers/oprofile/oprofilefs.c
@@ -21,7 +21,7 @@

#define OPROFILEFS_MAGIC 0x6f70726f

-DEFINE_SPINLOCK(oprofilefs_lock);
+DEFINE_RAW_SPINLOCK(oprofilefs_lock);

static struct inode *oprofilefs_get_inode(struct super_block *sb, int mode)
{
@@ -75,9 +75,9 @@ int oprofilefs_ulong_from_user(unsigned long *val, char const __user *buf, size_
if (copy_from_user(tmpbuf, buf, count))
return -EFAULT;

- spin_lock_irqsave(&oprofilefs_lock, flags);
+ raw_spin_lock_irqsave(&oprofilefs_lock, flags);
*val = simple_strtoul(tmpbuf, NULL, 0);
- spin_unlock_irqrestore(&oprofilefs_lock, flags);
+ raw_spin_unlock_irqrestore(&oprofilefs_lock, flags);
return 0;
}

diff --git a/include/linux/oprofile.h b/include/linux/oprofile.h
index 5171639..7d4ed4d 100644
--- a/include/linux/oprofile.h
+++ b/include/linux/oprofile.h
@@ -156,7 +156,7 @@ ssize_t oprofilefs_ulong_to_user(unsigned long val, char __user * buf, size_t co
int oprofilefs_ulong_from_user(unsigned long * val, char const __user * buf, size_t count);

/** lock for read/write safety */
-extern spinlock_t oprofilefs_lock;
+extern raw_spinlock_t oprofilefs_lock;

/**
* Add the contents of a circular buffer to the event buffer.
--
1.6.5.2

2010-01-11 21:28:33

by John Kacur

Subject: [PATCH 25/26] kprobes: Convert to raw_spinlocks

Convert locks which cannot be sleeping locks in preempt-rt to raw_spinlocks.

See also dc23e836d8d25fe5aa4057d54dae2094fbc614f6

Signed-off-by: John Kacur <[email protected]>
---
include/linux/kprobes.h | 2 +-
kernel/kprobes.c | 34 +++++++++++++++++-----------------
2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 1b672f7..620df87 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -170,7 +170,7 @@ struct kretprobe {
int nmissed;
size_t data_size;
struct hlist_head free_instances;
- spinlock_t lock;
+ raw_spinlock_t lock;
};

struct kretprobe_instance {
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index b7df302..40547e6 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -73,10 +73,10 @@ static bool kprobes_all_disarmed;
static DEFINE_MUTEX(kprobe_mutex); /* Protects kprobe_table */
static DEFINE_PER_CPU(struct kprobe *, kprobe_instance) = NULL;
static struct {
- spinlock_t lock ____cacheline_aligned_in_smp;
+ raw_spinlock_t lock ____cacheline_aligned_in_smp;
} kretprobe_table_locks[KPROBE_TABLE_SIZE];

-static spinlock_t *kretprobe_table_lock_ptr(unsigned long hash)
+static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash)
{
return &(kretprobe_table_locks[hash].lock);
}
@@ -410,9 +410,9 @@ void __kprobes recycle_rp_inst(struct kretprobe_instance *ri,
hlist_del(&ri->hlist);
INIT_HLIST_NODE(&ri->hlist);
if (likely(rp)) {
- spin_lock(&rp->lock);
+ raw_spin_lock(&rp->lock);
hlist_add_head(&ri->hlist, &rp->free_instances);
- spin_unlock(&rp->lock);
+ raw_spin_unlock(&rp->lock);
} else
/* Unregistering */
hlist_add_head(&ri->hlist, head);
@@ -422,34 +422,34 @@ void __kprobes kretprobe_hash_lock(struct task_struct *tsk,
struct hlist_head **head, unsigned long *flags)
{
unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
- spinlock_t *hlist_lock;
+ raw_spinlock_t *hlist_lock;

*head = &kretprobe_inst_table[hash];
hlist_lock = kretprobe_table_lock_ptr(hash);
- spin_lock_irqsave(hlist_lock, *flags);
+ raw_spin_lock_irqsave(hlist_lock, *flags);
}

static void __kprobes kretprobe_table_lock(unsigned long hash,
unsigned long *flags)
{
- spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
- spin_lock_irqsave(hlist_lock, *flags);
+ raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
+ raw_spin_lock_irqsave(hlist_lock, *flags);
}

void __kprobes kretprobe_hash_unlock(struct task_struct *tsk,
unsigned long *flags)
{
unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);
- spinlock_t *hlist_lock;
+ raw_spinlock_t *hlist_lock;

hlist_lock = kretprobe_table_lock_ptr(hash);
- spin_unlock_irqrestore(hlist_lock, *flags);
+ raw_spin_unlock_irqrestore(hlist_lock, *flags);
}

void __kprobes kretprobe_table_unlock(unsigned long hash, unsigned long *flags)
{
- spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
- spin_unlock_irqrestore(hlist_lock, *flags);
+ raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
+ raw_spin_unlock_irqrestore(hlist_lock, *flags);
}

/*
@@ -982,12 +982,12 @@ static int __kprobes pre_handler_kretprobe(struct kprobe *p,

/*TODO: consider to only swap the RA after the last pre_handler fired */
hash = hash_ptr(current, KPROBE_HASH_BITS);
- spin_lock_irqsave(&rp->lock, flags);
+ raw_spin_lock_irqsave(&rp->lock, flags);
if (!hlist_empty(&rp->free_instances)) {
ri = hlist_entry(rp->free_instances.first,
struct kretprobe_instance, hlist);
hlist_del(&ri->hlist);
- spin_unlock_irqrestore(&rp->lock, flags);
+ raw_spin_unlock_irqrestore(&rp->lock, flags);

ri->rp = rp;
ri->task = current;
@@ -1004,7 +1004,7 @@ static int __kprobes pre_handler_kretprobe(struct kprobe *p,
kretprobe_table_unlock(hash, &flags);
} else {
rp->nmissed++;
- spin_unlock_irqrestore(&rp->lock, flags);
+ raw_spin_unlock_irqrestore(&rp->lock, flags);
}
return 0;
}
@@ -1040,7 +1040,7 @@ int __kprobes register_kretprobe(struct kretprobe *rp)
rp->maxactive = num_possible_cpus();
#endif
}
- spin_lock_init(&rp->lock);
+ raw_spin_lock_init(&rp->lock);
INIT_HLIST_HEAD(&rp->free_instances);
for (i = 0; i < rp->maxactive; i++) {
inst = kmalloc(sizeof(struct kretprobe_instance) +
@@ -1227,7 +1227,7 @@ static int __init init_kprobes(void)
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
INIT_HLIST_HEAD(&kprobe_table[i]);
INIT_HLIST_HEAD(&kretprobe_inst_table[i]);
- spin_lock_init(&(kretprobe_table_locks[i].lock));
+ raw_spin_lock_init(&(kretprobe_table_locks[i].lock));
}

/*
--
1.6.5.2

2010-01-11 21:27:52

by John Kacur

Subject: [PATCH 23/26] vgacon: Convert vga console lock to raw_spinlock

Convert locks which cannot be sleeping locks in preempt-rt to raw_spinlocks.

See also 87f163a2ab9601820c2a6b7d14f81ab3d7d2b642

Signed-off-by: John Kacur <[email protected]>
---
drivers/video/console/vgacon.c | 42 ++++++++++++++++++++--------------------
1 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
index cc4bbbe..7f29a2e 100644
--- a/drivers/video/console/vgacon.c
+++ b/drivers/video/console/vgacon.c
@@ -51,7 +51,7 @@
#include <video/vga.h>
#include <asm/io.h>

-static DEFINE_SPINLOCK(vga_lock);
+static DEFINE_RAW_SPINLOCK(vga_lock);
static int cursor_size_lastfrom;
static int cursor_size_lastto;
static u32 vgacon_xres;
@@ -158,7 +158,7 @@ static inline void write_vga(unsigned char reg, unsigned int val)
* ddprintk might set the console position from interrupt
* handlers, thus the write has to be IRQ-atomic.
*/
- spin_lock_irqsave(&vga_lock, flags);
+ raw_spin_lock_irqsave(&vga_lock, flags);

#ifndef SLOW_VGA
v1 = reg + (val & 0xff00);
@@ -171,7 +171,7 @@ static inline void write_vga(unsigned char reg, unsigned int val)
outb_p(reg + 1, vga_video_port_reg);
outb_p(val & 0xff, vga_video_port_val);
#endif
- spin_unlock_irqrestore(&vga_lock, flags);
+ raw_spin_unlock_irqrestore(&vga_lock, flags);
}

static inline void vga_set_mem_top(struct vc_data *c)
@@ -668,7 +668,7 @@ static void vgacon_set_cursor_size(int xpos, int from, int to)
cursor_size_lastfrom = from;
cursor_size_lastto = to;

- spin_lock_irqsave(&vga_lock, flags);
+ raw_spin_lock_irqsave(&vga_lock, flags);
if (vga_video_type >= VIDEO_TYPE_VGAC) {
outb_p(VGA_CRTC_CURSOR_START, vga_video_port_reg);
curs = inb_p(vga_video_port_val);
@@ -686,7 +686,7 @@ static void vgacon_set_cursor_size(int xpos, int from, int to)
outb_p(curs, vga_video_port_val);
outb_p(VGA_CRTC_CURSOR_END, vga_video_port_reg);
outb_p(cure, vga_video_port_val);
- spin_unlock_irqrestore(&vga_lock, flags);
+ raw_spin_unlock_irqrestore(&vga_lock, flags);
}

static void vgacon_cursor(struct vc_data *c, int mode)
@@ -761,7 +761,7 @@ static int vgacon_doresize(struct vc_data *c,
unsigned int scanlines = height * c->vc_font.height;
u8 scanlines_lo = 0, r7 = 0, vsync_end = 0, mode, max_scan;

- spin_lock_irqsave(&vga_lock, flags);
+ raw_spin_lock_irqsave(&vga_lock, flags);

vgacon_xres = width * VGA_FONTWIDTH;
vgacon_yres = height * c->vc_font.height;
@@ -812,7 +812,7 @@ static int vgacon_doresize(struct vc_data *c,
outb_p(vsync_end, vga_video_port_val);
}

- spin_unlock_irqrestore(&vga_lock, flags);
+ raw_spin_unlock_irqrestore(&vga_lock, flags);
return 0;
}

@@ -895,11 +895,11 @@ static void vga_vesa_blank(struct vgastate *state, int mode)
{
/* save original values of VGA controller registers */
if (!vga_vesa_blanked) {
- spin_lock_irq(&vga_lock);
+ raw_spin_lock_irq(&vga_lock);
vga_state.SeqCtrlIndex = vga_r(state->vgabase, VGA_SEQ_I);
vga_state.CrtCtrlIndex = inb_p(vga_video_port_reg);
vga_state.CrtMiscIO = vga_r(state->vgabase, VGA_MIS_R);
- spin_unlock_irq(&vga_lock);
+ raw_spin_unlock_irq(&vga_lock);

outb_p(0x00, vga_video_port_reg); /* HorizontalTotal */
vga_state.HorizontalTotal = inb_p(vga_video_port_val);
@@ -922,7 +922,7 @@ static void vga_vesa_blank(struct vgastate *state, int mode)

/* assure that video is enabled */
/* "0x20" is VIDEO_ENABLE_bit in register 01 of sequencer */
- spin_lock_irq(&vga_lock);
+ raw_spin_lock_irq(&vga_lock);
vga_wseq(state->vgabase, VGA_SEQ_CLOCK_MODE, vga_state.ClockingMode | 0x20);

/* test for vertical retrace in process.... */
@@ -958,13 +958,13 @@ static void vga_vesa_blank(struct vgastate *state, int mode)
/* restore both index registers */
vga_w(state->vgabase, VGA_SEQ_I, vga_state.SeqCtrlIndex);
outb_p(vga_state.CrtCtrlIndex, vga_video_port_reg);
- spin_unlock_irq(&vga_lock);
+ raw_spin_unlock_irq(&vga_lock);
}

static void vga_vesa_unblank(struct vgastate *state)
{
/* restore original values of VGA controller registers */
- spin_lock_irq(&vga_lock);
+ raw_spin_lock_irq(&vga_lock);
vga_w(state->vgabase, VGA_MIS_W, vga_state.CrtMiscIO);

outb_p(0x00, vga_video_port_reg); /* HorizontalTotal */
@@ -989,7 +989,7 @@ static void vga_vesa_unblank(struct vgastate *state)
/* restore index/control registers */
vga_w(state->vgabase, VGA_SEQ_I, vga_state.SeqCtrlIndex);
outb_p(vga_state.CrtCtrlIndex, vga_video_port_reg);
- spin_unlock_irq(&vga_lock);
+ raw_spin_unlock_irq(&vga_lock);
}

static void vga_pal_blank(struct vgastate *state)
@@ -1109,7 +1109,7 @@ static int vgacon_do_font_op(struct vgastate *state,char *arg,int set,int ch512)
#endif

unlock_kernel();
- spin_lock_irq(&vga_lock);
+ raw_spin_lock_irq(&vga_lock);
/* First, the Sequencer */
vga_wseq(state->vgabase, VGA_SEQ_RESET, 0x1);
/* CPU writes only to map 2 */
@@ -1125,7 +1125,7 @@ static int vgacon_do_font_op(struct vgastate *state,char *arg,int set,int ch512)
vga_wgfx(state->vgabase, VGA_GFX_MODE, 0x00);
/* map start at A000:0000 */
vga_wgfx(state->vgabase, VGA_GFX_MISC, 0x00);
- spin_unlock_irq(&vga_lock);
+ raw_spin_unlock_irq(&vga_lock);

if (arg) {
if (set)
@@ -1152,7 +1152,7 @@ static int vgacon_do_font_op(struct vgastate *state,char *arg,int set,int ch512)
}
}

- spin_lock_irq(&vga_lock);
+ raw_spin_lock_irq(&vga_lock);
/* First, the sequencer, Synchronous reset */
vga_wseq(state->vgabase, VGA_SEQ_RESET, 0x01);
/* CPU writes to maps 0 and 1 */
@@ -1191,7 +1191,7 @@ static int vgacon_do_font_op(struct vgastate *state,char *arg,int set,int ch512)
inb_p(video_port_status);
vga_wattr(state->vgabase, VGA_AR_ENABLE_DISPLAY, 0);
}
- spin_unlock_irq(&vga_lock);
+ raw_spin_unlock_irq(&vga_lock);
lock_kernel();
return 0;
}
@@ -1217,26 +1217,26 @@ static int vgacon_adjust_height(struct vc_data *vc, unsigned fontheight)
registers; they are write-only on EGA, but it appears that they
are all don't care bits on EGA, so I guess it doesn't matter. */

- spin_lock_irq(&vga_lock);
+ raw_spin_lock_irq(&vga_lock);
outb_p(0x07, vga_video_port_reg); /* CRTC overflow register */
ovr = inb_p(vga_video_port_val);
outb_p(0x09, vga_video_port_reg); /* Font size register */
fsr = inb_p(vga_video_port_val);
- spin_unlock_irq(&vga_lock);
+ raw_spin_unlock_irq(&vga_lock);

vde = maxscan & 0xff; /* Vertical display end reg */
ovr = (ovr & 0xbd) + /* Overflow register */
((maxscan & 0x100) >> 7) + ((maxscan & 0x200) >> 3);
fsr = (fsr & 0xe0) + (fontheight - 1); /* Font size register */

- spin_lock_irq(&vga_lock);
+ raw_spin_lock_irq(&vga_lock);
outb_p(0x07, vga_video_port_reg); /* CRTC overflow register */
outb_p(ovr, vga_video_port_val);
outb_p(0x09, vga_video_port_reg); /* Font size */
outb_p(fsr, vga_video_port_val);
outb_p(0x12, vga_video_port_reg); /* Vertical display limit */
outb_p(vde, vga_video_port_val);
- spin_unlock_irq(&vga_lock);
+ raw_spin_unlock_irq(&vga_lock);
vga_video_font_height = fontheight;

for (i = 0; i < MAX_NR_CONSOLES; i++) {
--
1.6.5.2

2010-01-11 21:28:28

by John Kacur

Subject: [PATCH 26/26] softlockup: Convert to raw_spinlocks

Convert locks which cannot be sleeping locks in preempt-rt to raw_spinlocks.

See also b20de918527a9c1558b3e8a02f935cf4cb53e3ba

Signed-off-by: John Kacur <[email protected]>
---
kernel/softlockup.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/softlockup.c b/kernel/softlockup.c
index d225790..8e63d20 100644
--- a/kernel/softlockup.c
+++ b/kernel/softlockup.c
@@ -20,7 +20,7 @@

#include <asm/irq_regs.h>

-static DEFINE_SPINLOCK(print_lock);
+static DEFINE_RAW_SPINLOCK(print_lock);

static DEFINE_PER_CPU(unsigned long, softlockup_touch_ts); /* touch timestamp */
static DEFINE_PER_CPU(unsigned long, softlockup_print_ts); /* print timestamp */
@@ -149,7 +149,7 @@ void softlockup_tick(void)

per_cpu(softlockup_print_ts, this_cpu) = touch_ts;

- spin_lock(&print_lock);
+ raw_spin_lock(&print_lock);
printk(KERN_ERR "BUG: soft lockup - CPU#%d stuck for %lus! [%s:%d]\n",
this_cpu, now - touch_ts,
current->comm, task_pid_nr(current));
@@ -159,7 +159,7 @@ void softlockup_tick(void)
show_regs(regs);
else
dump_stack();
- spin_unlock(&print_lock);
+ raw_spin_unlock(&print_lock);

if (softlockup_panic)
panic("softlockup: hung tasks");
--
1.6.5.2

2010-01-11 21:29:06

by John Kacur

Subject: [PATCH 20/26] proportions: Convert spinlocks to raw_spinlocks.

Convert locks which cannot be sleeping locks in preempt-rt to
raw_spinlocks.

See also: 0fc7741cfd53c5c5ca710e075e05808e1bf9be71

Signed-off-by: John Kacur <[email protected]>
---
include/linux/proportions.h | 6 +++---
lib/proportions.c | 12 ++++++------
2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/proportions.h b/include/linux/proportions.h
index cf793bb..ef35bb7 100644
--- a/include/linux/proportions.h
+++ b/include/linux/proportions.h
@@ -58,7 +58,7 @@ struct prop_local_percpu {
*/
int shift;
unsigned long period;
- spinlock_t lock; /* protect the snapshot state */
+ raw_spinlock_t lock; /* protect the snapshot state */
};

int prop_local_init_percpu(struct prop_local_percpu *pl);
@@ -106,11 +106,11 @@ struct prop_local_single {
*/
unsigned long period;
int shift;
- spinlock_t lock; /* protect the snapshot state */
+ raw_spinlock_t lock; /* protect the snapshot state */
};

#define INIT_PROP_LOCAL_SINGLE(name) \
-{ .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
+{ .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \
}

int prop_local_init_single(struct prop_local_single *pl);
diff --git a/lib/proportions.c b/lib/proportions.c
index d50746a..05df848 100644
--- a/lib/proportions.c
+++ b/lib/proportions.c
@@ -190,7 +190,7 @@ prop_adjust_shift(int *pl_shift, unsigned long *pl_period, int new_shift)

int prop_local_init_percpu(struct prop_local_percpu *pl)
{
- spin_lock_init(&pl->lock);
+ raw_spin_lock_init(&pl->lock);
pl->shift = 0;
pl->period = 0;
return percpu_counter_init(&pl->events, 0);
@@ -226,7 +226,7 @@ void prop_norm_percpu(struct prop_global *pg, struct prop_local_percpu *pl)
if (pl->period == global_period)
return;

- spin_lock_irqsave(&pl->lock, flags);
+ raw_spin_lock_irqsave(&pl->lock, flags);
prop_adjust_shift(&pl->shift, &pl->period, pg->shift);

/*
@@ -247,7 +247,7 @@ void prop_norm_percpu(struct prop_global *pg, struct prop_local_percpu *pl)
percpu_counter_set(&pl->events, 0);

pl->period = global_period;
- spin_unlock_irqrestore(&pl->lock, flags);
+ raw_spin_unlock_irqrestore(&pl->lock, flags);
}

/*
@@ -324,7 +324,7 @@ void prop_fraction_percpu(struct prop_descriptor *pd,

int prop_local_init_single(struct prop_local_single *pl)
{
- spin_lock_init(&pl->lock);
+ raw_spin_lock_init(&pl->lock);
pl->shift = 0;
pl->period = 0;
pl->events = 0;
@@ -356,7 +356,7 @@ void prop_norm_single(struct prop_global *pg, struct prop_local_single *pl)
if (pl->period == global_period)
return;

- spin_lock_irqsave(&pl->lock, flags);
+ raw_spin_lock_irqsave(&pl->lock, flags);
prop_adjust_shift(&pl->shift, &pl->period, pg->shift);
/*
* For each missed period, we half the local counter.
@@ -367,7 +367,7 @@ void prop_norm_single(struct prop_global *pg, struct prop_local_single *pl)
else
pl->events = 0;
pl->period = global_period;
- spin_unlock_irqrestore(&pl->lock, flags);
+ raw_spin_unlock_irqrestore(&pl->lock, flags);
}

/*
--
1.6.5.2

2010-01-11 21:28:57

by John Kacur

Subject: [PATCH 21/26] percpu_counter: Convert to raw_spinlock

Convert locks which cannot be sleeping locks in preempt-rt to raw_spinlocks.

See also 609368d881acb7c5bcf5560cb4098a202f2e0ba6

Signed-off-by: John Kacur <[email protected]>
---
include/linux/percpu_counter.h | 2 +-
lib/percpu_counter.c | 18 +++++++++---------
2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index a7684a5..7823c33 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -16,7 +16,7 @@
#ifdef CONFIG_SMP

struct percpu_counter {
- spinlock_t lock;
+ raw_spinlock_t lock;
s64 count;
#ifdef CONFIG_HOTPLUG_CPU
struct list_head list; /* All percpu_counters are on a list */
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index aeaa6d7..10fb740 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -16,13 +16,13 @@ void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
{
int cpu;

- spin_lock(&fbc->lock);
+ raw_spin_lock(&fbc->lock);
for_each_possible_cpu(cpu) {
s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
*pcount = 0;
}
fbc->count = amount;
- spin_unlock(&fbc->lock);
+ raw_spin_unlock(&fbc->lock);
}
EXPORT_SYMBOL(percpu_counter_set);

@@ -35,10 +35,10 @@ void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
pcount = per_cpu_ptr(fbc->counters, cpu);
count = *pcount + amount;
if (count >= batch || count <= -batch) {
- spin_lock(&fbc->lock);
+ raw_spin_lock(&fbc->lock);
fbc->count += count;
*pcount = 0;
- spin_unlock(&fbc->lock);
+ raw_spin_unlock(&fbc->lock);
} else {
*pcount = count;
}
@@ -55,13 +55,13 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
s64 ret;
int cpu;

- spin_lock(&fbc->lock);
+ raw_spin_lock(&fbc->lock);
ret = fbc->count;
for_each_online_cpu(cpu) {
s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
ret += *pcount;
}
- spin_unlock(&fbc->lock);
+ raw_spin_unlock(&fbc->lock);
return ret;
}
EXPORT_SYMBOL(__percpu_counter_sum);
@@ -69,7 +69,7 @@ EXPORT_SYMBOL(__percpu_counter_sum);
int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
struct lock_class_key *key)
{
- spin_lock_init(&fbc->lock);
+ raw_spin_lock_init(&fbc->lock);
lockdep_set_class(&fbc->lock, key);
fbc->count = amount;
fbc->counters = alloc_percpu(s32);
@@ -126,11 +126,11 @@ static int __cpuinit percpu_counter_hotcpu_callback(struct notifier_block *nb,
s32 *pcount;
unsigned long flags;

- spin_lock_irqsave(&fbc->lock, flags);
+ raw_spin_lock_irqsave(&fbc->lock, flags);
pcount = per_cpu_ptr(fbc->counters, cpu);
fbc->count += *pcount;
*pcount = 0;
- spin_unlock_irqrestore(&fbc->lock, flags);
+ raw_spin_unlock_irqrestore(&fbc->lock, flags);
}
mutex_unlock(&percpu_counters_lock);
#endif
--
1.6.5.2

2010-01-11 21:28:05

by John Kacur

Subject: [PATCH 24/26] pci-access: Convert pci_lock to raw_spinlock

Convert locks which cannot be sleeping locks in preempt-rt to raw_spinlocks.

See also b68d8890994c099f089ab36eb55302fdc1884612
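
One subtlety this conversion preserves: pci_wait_ucfg() sleeps in
schedule() while waiting for user config space access to be unblocked,
and a raw spinlock must never be held across a sleep, so the lock is
dropped and retaken around it (condensed from the hunk below):

	do {
		set_current_state(TASK_UNINTERRUPTIBLE);
		raw_spin_unlock_irq(&pci_lock);	/* never sleep holding a raw lock */
		schedule();
		raw_spin_lock_irq(&pci_lock);
	} while (dev->block_ucfg_access);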

Signed-off-by: John Kacur <[email protected]>
---
drivers/pci/access.c | 34 +++++++++++++++++-----------------
1 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index db23200..b54f7a4 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -12,7 +12,7 @@
* configuration space.
*/

-static DEFINE_SPINLOCK(pci_lock);
+static DEFINE_RAW_SPINLOCK(pci_lock);

/*
* Wrappers for all PCI configuration access functions. They just check
@@ -32,10 +32,10 @@ int pci_bus_read_config_##size \
unsigned long flags; \
u32 data = 0; \
if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \
- spin_lock_irqsave(&pci_lock, flags); \
+ raw_spin_lock_irqsave(&pci_lock, flags); \
res = bus->ops->read(bus, devfn, pos, len, &data); \
*value = (type)data; \
- spin_unlock_irqrestore(&pci_lock, flags); \
+ raw_spin_unlock_irqrestore(&pci_lock, flags); \
return res; \
}

@@ -46,9 +46,9 @@ int pci_bus_write_config_##size \
int res; \
unsigned long flags; \
if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \
- spin_lock_irqsave(&pci_lock, flags); \
+ raw_spin_lock_irqsave(&pci_lock, flags); \
res = bus->ops->write(bus, devfn, pos, len, value); \
- spin_unlock_irqrestore(&pci_lock, flags); \
+ raw_spin_unlock_irqrestore(&pci_lock, flags); \
return res; \
}

@@ -78,10 +78,10 @@ struct pci_ops *pci_bus_set_ops(struct pci_bus *bus, struct pci_ops *ops)
struct pci_ops *old_ops;
unsigned long flags;

- spin_lock_irqsave(&pci_lock, flags);
+ raw_spin_lock_irqsave(&pci_lock, flags);
old_ops = bus->ops;
bus->ops = ops;
- spin_unlock_irqrestore(&pci_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_lock, flags);
return old_ops;
}
EXPORT_SYMBOL(pci_bus_set_ops);
@@ -135,9 +135,9 @@ static noinline void pci_wait_ucfg(struct pci_dev *dev)
__add_wait_queue(&pci_ucfg_wait, &wait);
do {
set_current_state(TASK_UNINTERRUPTIBLE);
- spin_unlock_irq(&pci_lock);
+ raw_spin_unlock_irq(&pci_lock);
schedule();
- spin_lock_irq(&pci_lock);
+ raw_spin_lock_irq(&pci_lock);
} while (dev->block_ucfg_access);
__remove_wait_queue(&pci_ucfg_wait, &wait);
}
@@ -149,11 +149,11 @@ int pci_user_read_config_##size \
int ret = 0; \
u32 data = -1; \
if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \
- spin_lock_irq(&pci_lock); \
+ raw_spin_lock_irq(&pci_lock); \
if (unlikely(dev->block_ucfg_access)) pci_wait_ucfg(dev); \
ret = dev->bus->ops->read(dev->bus, dev->devfn, \
pos, sizeof(type), &data); \
- spin_unlock_irq(&pci_lock); \
+ raw_spin_unlock_irq(&pci_lock); \
*val = (type)data; \
return ret; \
}
@@ -164,11 +164,11 @@ int pci_user_write_config_##size \
{ \
int ret = -EIO; \
if (PCI_##size##_BAD) return PCIBIOS_BAD_REGISTER_NUMBER; \
- spin_lock_irq(&pci_lock); \
+ raw_spin_lock_irq(&pci_lock); \
if (unlikely(dev->block_ucfg_access)) pci_wait_ucfg(dev); \
ret = dev->bus->ops->write(dev->bus, dev->devfn, \
pos, sizeof(type), val); \
- spin_unlock_irq(&pci_lock); \
+ raw_spin_unlock_irq(&pci_lock); \
return ret; \
}

@@ -395,10 +395,10 @@ void pci_block_user_cfg_access(struct pci_dev *dev)
unsigned long flags;
int was_blocked;

- spin_lock_irqsave(&pci_lock, flags);
+ raw_spin_lock_irqsave(&pci_lock, flags);
was_blocked = dev->block_ucfg_access;
dev->block_ucfg_access = 1;
- spin_unlock_irqrestore(&pci_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_lock, flags);

/* If we BUG() inside the pci_lock, we're guaranteed to hose
* the machine */
@@ -416,7 +416,7 @@ void pci_unblock_user_cfg_access(struct pci_dev *dev)
{
unsigned long flags;

- spin_lock_irqsave(&pci_lock, flags);
+ raw_spin_lock_irqsave(&pci_lock, flags);

/* This indicates a problem in the caller, but we don't need
* to kill them, unlike a double-block above. */
@@ -424,6 +424,6 @@ void pci_unblock_user_cfg_access(struct pci_dev *dev)

dev->block_ucfg_access = 0;
wake_up_all(&pci_ucfg_wait);
- spin_unlock_irqrestore(&pci_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_lock, flags);
}
EXPORT_SYMBOL_GPL(pci_unblock_user_cfg_access);
--
1.6.5.2

2010-01-11 21:27:41

by John Kacur

Subject: [PATCH 17/26] x86: kvm: Convert i8254/i8259 locks to raw_spinlock

Convert locks which cannot sleep in preempt-rt to raw_spinlocks.

See also 4b064e4f9ec7a5befcab91b9b1f2b0314c5a051a

Signed-off-by: John Kacur <[email protected]>
---
arch/x86/kvm/i8254.c | 10 +++++-----
arch/x86/kvm/i8254.h | 2 +-
arch/x86/kvm/i8259.c | 30 +++++++++++++++---------------
arch/x86/kvm/irq.h | 2 +-
arch/x86/kvm/x86.c | 8 ++++----
5 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 296aba4..23265ee 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -242,11 +242,11 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
{
struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state,
irq_ack_notifier);
- spin_lock(&ps->inject_lock);
+ raw_spin_lock(&ps->inject_lock);
if (atomic_dec_return(&ps->pit_timer.pending) < 0)
atomic_inc(&ps->pit_timer.pending);
ps->irq_ack = 1;
- spin_unlock(&ps->inject_lock);
+ raw_spin_unlock(&ps->inject_lock);
}

void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu)
@@ -621,7 +621,7 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)

mutex_init(&pit->pit_state.lock);
mutex_lock(&pit->pit_state.lock);
- spin_lock_init(&pit->pit_state.inject_lock);
+ raw_spin_lock_init(&pit->pit_state.inject_lock);

kvm->arch.vpit = pit;
pit->kvm = kvm;
@@ -720,12 +720,12 @@ void kvm_inject_pit_timer_irqs(struct kvm_vcpu *vcpu)
/* Try to inject pending interrupts when
* last one has been acked.
*/
- spin_lock(&ps->inject_lock);
+ raw_spin_lock(&ps->inject_lock);
if (atomic_read(&ps->pit_timer.pending) && ps->irq_ack) {
ps->irq_ack = 0;
inject = 1;
}
- spin_unlock(&ps->inject_lock);
+ raw_spin_unlock(&ps->inject_lock);
if (inject)
__inject_pit_timer_intr(kvm);
}
diff --git a/arch/x86/kvm/i8254.h b/arch/x86/kvm/i8254.h
index d4c1c7f..900d6b0 100644
--- a/arch/x86/kvm/i8254.h
+++ b/arch/x86/kvm/i8254.h
@@ -27,7 +27,7 @@ struct kvm_kpit_state {
u32 speaker_data_on;
struct mutex lock;
struct kvm_pit *pit;
- spinlock_t inject_lock;
+ raw_spinlock_t inject_lock;
unsigned long irq_ack;
struct kvm_irq_ack_notifier irq_ack_notifier;
};
diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index d057c0c..3271405 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -44,18 +44,18 @@ static void pic_clear_isr(struct kvm_kpic_state *s, int irq)
* Other interrupt may be delivered to PIC while lock is dropped but
* it should be safe since PIC state is already updated at this stage.
*/
- spin_unlock(&s->pics_state->lock);
+ raw_spin_unlock(&s->pics_state->lock);
kvm_notify_acked_irq(s->pics_state->kvm, SELECT_PIC(irq), irq);
- spin_lock(&s->pics_state->lock);
+ raw_spin_lock(&s->pics_state->lock);
}

void kvm_pic_clear_isr_ack(struct kvm *kvm)
{
struct kvm_pic *s = pic_irqchip(kvm);
- spin_lock(&s->lock);
+ raw_spin_lock(&s->lock);
s->pics[0].isr_ack = 0xff;
s->pics[1].isr_ack = 0xff;
- spin_unlock(&s->lock);
+ raw_spin_unlock(&s->lock);
}

/*
@@ -156,9 +156,9 @@ static void pic_update_irq(struct kvm_pic *s)

void kvm_pic_update_irq(struct kvm_pic *s)
{
- spin_lock(&s->lock);
+ raw_spin_lock(&s->lock);
pic_update_irq(s);
- spin_unlock(&s->lock);
+ raw_spin_unlock(&s->lock);
}

int kvm_pic_set_irq(void *opaque, int irq, int level)
@@ -166,14 +166,14 @@ int kvm_pic_set_irq(void *opaque, int irq, int level)
struct kvm_pic *s = opaque;
int ret = -1;

- spin_lock(&s->lock);
+ raw_spin_lock(&s->lock);
if (irq >= 0 && irq < PIC_NUM_PINS) {
ret = pic_set_irq1(&s->pics[irq >> 3], irq & 7, level);
pic_update_irq(s);
trace_kvm_pic_set_irq(irq >> 3, irq & 7, s->pics[irq >> 3].elcr,
s->pics[irq >> 3].imr, ret == 0);
}
- spin_unlock(&s->lock);
+ raw_spin_unlock(&s->lock);

return ret;
}
@@ -203,7 +203,7 @@ int kvm_pic_read_irq(struct kvm *kvm)
int irq, irq2, intno;
struct kvm_pic *s = pic_irqchip(kvm);

- spin_lock(&s->lock);
+ raw_spin_lock(&s->lock);
irq = pic_get_irq(&s->pics[0]);
if (irq >= 0) {
pic_intack(&s->pics[0], irq);
@@ -228,7 +228,7 @@ int kvm_pic_read_irq(struct kvm *kvm)
intno = s->pics[0].irq_base + irq;
}
pic_update_irq(s);
- spin_unlock(&s->lock);
+ raw_spin_unlock(&s->lock);

return intno;
}
@@ -442,7 +442,7 @@ static int picdev_write(struct kvm_io_device *this,
printk(KERN_ERR "PIC: non byte write\n");
return 0;
}
- spin_lock(&s->lock);
+ raw_spin_lock(&s->lock);
switch (addr) {
case 0x20:
case 0x21:
@@ -455,7 +455,7 @@ static int picdev_write(struct kvm_io_device *this,
elcr_ioport_write(&s->pics[addr & 1], addr, data);
break;
}
- spin_unlock(&s->lock);
+ raw_spin_unlock(&s->lock);
return 0;
}

@@ -472,7 +472,7 @@ static int picdev_read(struct kvm_io_device *this,
printk(KERN_ERR "PIC: non byte read\n");
return 0;
}
- spin_lock(&s->lock);
+ raw_spin_lock(&s->lock);
switch (addr) {
case 0x20:
case 0x21:
@@ -486,7 +486,7 @@ static int picdev_read(struct kvm_io_device *this,
break;
}
*(unsigned char *)val = data;
- spin_unlock(&s->lock);
+ raw_spin_unlock(&s->lock);
return 0;
}

@@ -520,7 +520,7 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm)
s = kzalloc(sizeof(struct kvm_pic), GFP_KERNEL);
if (!s)
return NULL;
- spin_lock_init(&s->lock);
+ raw_spin_lock_init(&s->lock);
s->kvm = kvm;
s->pics[0].elcr_mask = 0xf8;
s->pics[1].elcr_mask = 0xde;
diff --git a/arch/x86/kvm/irq.h b/arch/x86/kvm/irq.h
index be399e2..b994083 100644
--- a/arch/x86/kvm/irq.h
+++ b/arch/x86/kvm/irq.h
@@ -62,7 +62,7 @@ struct kvm_kpic_state {
};

struct kvm_pic {
- spinlock_t lock;
+ raw_spinlock_t lock;
unsigned pending_acks;
struct kvm *kvm;
struct kvm_kpic_state pics[2]; /* 0 is master pic, 1 is slave pic */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6651dbf..920d5c0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2274,18 +2274,18 @@ static int kvm_vm_ioctl_set_irqchip(struct kvm *kvm, struct kvm_irqchip *chip)
r = 0;
switch (chip->chip_id) {
case KVM_IRQCHIP_PIC_MASTER:
- spin_lock(&pic_irqchip(kvm)->lock);
+ raw_spin_lock(&pic_irqchip(kvm)->lock);
memcpy(&pic_irqchip(kvm)->pics[0],
&chip->chip.pic,
sizeof(struct kvm_pic_state));
- spin_unlock(&pic_irqchip(kvm)->lock);
+ raw_spin_unlock(&pic_irqchip(kvm)->lock);
break;
case KVM_IRQCHIP_PIC_SLAVE:
- spin_lock(&pic_irqchip(kvm)->lock);
+ raw_spin_lock(&pic_irqchip(kvm)->lock);
memcpy(&pic_irqchip(kvm)->pics[1],
&chip->chip.pic,
sizeof(struct kvm_pic_state));
- spin_unlock(&pic_irqchip(kvm)->lock);
+ raw_spin_unlock(&pic_irqchip(kvm)->lock);
break;
case KVM_IRQCHIP_IOAPIC:
r = kvm_set_ioapic(kvm, &chip->chip.ioapic);
--
1.6.5.2

2010-01-11 21:30:27

by John Kacur

[permalink] [raw]
Subject: [PATCH 12/26] printk: Convert lock to raw_spinlock

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: 0835f1f82a88e366d8dd20874c94133915dcccdb
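
For reference, the core pattern every conversion in this series applies,
sketched here with a hypothetical lock name (not code from this patch): on
PREEMPT_RT a spinlock_t becomes a sleeping rtmutex, so a lock taken in
atomic context has to remain a true spinning lock, i.e. a raw_spinlock_t
used through the raw_spin_* API.

    static DEFINE_RAW_SPINLOCK(example_lock);

    static void example_update(void)
    {
            unsigned long flags;

            /* safe in any context, including hard irq and irqs-off regions */
            raw_spin_lock_irqsave(&example_lock, flags);
            /* ... modify state shared with atomic context ... */
            raw_spin_unlock_irqrestore(&example_lock, flags);
    }

The renaming applies to the trylock variant as well, hence the
raw_spin_trylock_irqsave() in ___ratelimit() below.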

Signed-off-by: John Kacur <[email protected]>
---
include/linux/ratelimit.h | 4 ++--
kernel/printk.c | 42 +++++++++++++++++++++---------------------
lib/ratelimit.c | 4 ++--
3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/include/linux/ratelimit.h b/include/linux/ratelimit.h
index 668cf1b..7596e38 100644
--- a/include/linux/ratelimit.h
+++ b/include/linux/ratelimit.h
@@ -8,7 +8,7 @@
#define DEFAULT_RATELIMIT_BURST 10

struct ratelimit_state {
- spinlock_t lock; /* protect the state */
+ raw_spinlock_t lock; /* protect the state */

int interval;
int burst;
@@ -20,7 +20,7 @@ struct ratelimit_state {
#define DEFINE_RATELIMIT_STATE(name, interval_init, burst_init) \
\
struct ratelimit_state name = { \
- .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
+ .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \
.interval = interval_init, \
.burst = burst_init, \
}
diff --git a/kernel/printk.c b/kernel/printk.c
index 17463ca..4ebb6e0 100644
--- a/kernel/printk.c
+++ b/kernel/printk.c
@@ -102,7 +102,7 @@ static int console_locked, console_suspended;
* It is also used in interesting ways to provide interlocking in
* release_console_sem().
*/
-static DEFINE_SPINLOCK(logbuf_lock);
+static DEFINE_RAW_SPINLOCK(logbuf_lock);

#define LOG_BUF_MASK (log_buf_len-1)
#define LOG_BUF(idx) (log_buf[(idx) & LOG_BUF_MASK])
@@ -181,7 +181,7 @@ static int __init log_buf_len_setup(char *str)
goto out;
}

- spin_lock_irqsave(&logbuf_lock, flags);
+ raw_spin_lock_irqsave(&logbuf_lock, flags);
log_buf_len = size;
log_buf = new_log_buf;

@@ -195,7 +195,7 @@ static int __init log_buf_len_setup(char *str)
log_start -= offset;
con_start -= offset;
log_end -= offset;
- spin_unlock_irqrestore(&logbuf_lock, flags);
+ raw_spin_unlock_irqrestore(&logbuf_lock, flags);

printk(KERN_NOTICE "log_buf_len: %d\n", log_buf_len);
}
@@ -305,18 +305,18 @@ int do_syslog(int type, char __user *buf, int len)
if (error)
goto out;
i = 0;
- spin_lock_irq(&logbuf_lock);
+ raw_spin_lock_irq(&logbuf_lock);
while (!error && (log_start != log_end) && i < len) {
c = LOG_BUF(log_start);
log_start++;
- spin_unlock_irq(&logbuf_lock);
+ raw_spin_unlock_irq(&logbuf_lock);
error = __put_user(c,buf);
buf++;
i++;
cond_resched();
- spin_lock_irq(&logbuf_lock);
+ raw_spin_lock_irq(&logbuf_lock);
}
- spin_unlock_irq(&logbuf_lock);
+ raw_spin_unlock_irq(&logbuf_lock);
if (!error)
error = i;
break;
@@ -337,7 +337,7 @@ int do_syslog(int type, char __user *buf, int len)
count = len;
if (count > log_buf_len)
count = log_buf_len;
- spin_lock_irq(&logbuf_lock);
+ raw_spin_lock_irq(&logbuf_lock);
if (count > logged_chars)
count = logged_chars;
if (do_clear)
@@ -354,12 +354,12 @@ int do_syslog(int type, char __user *buf, int len)
if (j + log_buf_len < log_end)
break;
c = LOG_BUF(j);
- spin_unlock_irq(&logbuf_lock);
+ raw_spin_unlock_irq(&logbuf_lock);
error = __put_user(c,&buf[count-1-i]);
cond_resched();
- spin_lock_irq(&logbuf_lock);
+ raw_spin_lock_irq(&logbuf_lock);
}
- spin_unlock_irq(&logbuf_lock);
+ raw_spin_unlock_irq(&logbuf_lock);
if (error)
break;
error = i;
@@ -542,7 +542,7 @@ static void zap_locks(void)
oops_timestamp = jiffies;

/* If a crash is occurring, make sure we can't deadlock */
- spin_lock_init(&logbuf_lock);
+ raw_spin_lock_init(&logbuf_lock);
/* And make sure that we print immediately */
init_MUTEX(&console_sem);
}
@@ -646,7 +646,7 @@ static int acquire_console_semaphore_for_printk(unsigned int cpu)
}
}
printk_cpu = UINT_MAX;
- spin_unlock(&logbuf_lock);
+ raw_spin_unlock(&logbuf_lock);
return retval;
}
static const char recursion_bug_msg [] =
@@ -704,7 +704,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
}

lockdep_off();
- spin_lock(&logbuf_lock);
+ raw_spin_lock(&logbuf_lock);
printk_cpu = this_cpu;

if (recursion_bug) {
@@ -1053,14 +1053,14 @@ void release_console_sem(void)
console_may_schedule = 0;

for ( ; ; ) {
- spin_lock_irqsave(&logbuf_lock, flags);
+ raw_spin_lock_irqsave(&logbuf_lock, flags);
wake_klogd |= log_start - log_end;
if (con_start == log_end)
break; /* Nothing to print */
_con_start = con_start;
_log_end = log_end;
con_start = log_end; /* Flush */
- spin_unlock(&logbuf_lock);
+ raw_spin_unlock(&logbuf_lock);
stop_critical_timings(); /* don't trace print latency */
call_console_drivers(_con_start, _log_end);
start_critical_timings();
@@ -1068,7 +1068,7 @@ void release_console_sem(void)
}
console_locked = 0;
up(&console_sem);
- spin_unlock_irqrestore(&logbuf_lock, flags);
+ raw_spin_unlock_irqrestore(&logbuf_lock, flags);
if (wake_klogd)
wake_up_klogd();
}
@@ -1286,9 +1286,9 @@ void register_console(struct console *newcon)
* release_console_sem() will print out the buffered messages
* for us.
*/
- spin_lock_irqsave(&logbuf_lock, flags);
+ raw_spin_lock_irqsave(&logbuf_lock, flags);
con_start = log_start;
- spin_unlock_irqrestore(&logbuf_lock, flags);
+ raw_spin_unlock_irqrestore(&logbuf_lock, flags);
}
release_console_sem();

@@ -1496,10 +1496,10 @@ void kmsg_dump(enum kmsg_dump_reason reason)
/* Theoretically, the log could move on after we do this, but
there's not a lot we can do about that. The new messages
will overwrite the start of what we dump. */
- spin_lock_irqsave(&logbuf_lock, flags);
+ raw_spin_lock_irqsave(&logbuf_lock, flags);
end = log_end & LOG_BUF_MASK;
chars = logged_chars;
- spin_unlock_irqrestore(&logbuf_lock, flags);
+ raw_spin_unlock_irqrestore(&logbuf_lock, flags);

if (logged_chars > end) {
s1 = log_buf + log_buf_len - logged_chars + end;
diff --git a/lib/ratelimit.c b/lib/ratelimit.c
index 09f5ce1..39588b3 100644
--- a/lib/ratelimit.c
+++ b/lib/ratelimit.c
@@ -34,7 +34,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
* in addition to the one that will be printed by
* the entity that is holding the lock already:
*/
- if (!spin_trylock_irqsave(&rs->lock, flags))
+ if (!raw_spin_trylock_irqsave(&rs->lock, flags))
return 1;

if (!rs->begin)
@@ -55,7 +55,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
rs->missed++;
ret = 0;
}
- spin_unlock_irqrestore(&rs->lock, flags);
+ raw_spin_unlock_irqrestore(&rs->lock, flags);

return ret;
}
--
1.6.5.2

2010-01-11 21:29:33

by John Kacur

[permalink] [raw]
Subject: [PATCH 19/26] cgroups: Convert cgroups release_list_lock to raw_spinlock

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: 58814bae5de64d5291b813ea0a52192e4fa714ad
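
A raw_spinlock also disables preemption on -rt, so anything that can sleep
has to move outside the critical section; cgroup_release_agent() below
already drops the lock around its GFP_KERNEL allocations. The general
shape, sketched with hypothetical names:

    raw_spin_lock(&example_list_lock);
    while (!list_empty(&example_list)) {
            struct example *e;

            e = list_first_entry(&example_list, struct example, node);
            list_del_init(&e->node);
            raw_spin_unlock(&example_list_lock);
            process_may_sleep(e);           /* hypothetical sleeping helper */
            raw_spin_lock(&example_list_lock);
    }
    raw_spin_unlock(&example_list_lock);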

Signed-off-by: John Kacur <[email protected]>
---
kernel/cgroup.c | 18 +++++++++---------
1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 0249f4b..32a80b2 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -204,7 +204,7 @@ list_for_each_entry(_root, &roots, root_list)
/* the list of cgroups eligible for automatic release. Protected by
* release_list_lock */
static LIST_HEAD(release_list);
-static DEFINE_SPINLOCK(release_list_lock);
+static DEFINE_RAW_SPINLOCK(release_list_lock);
static void cgroup_release_agent(struct work_struct *work);
static DECLARE_WORK(release_agent_work, cgroup_release_agent);
static void check_for_release(struct cgroup *cgrp);
@@ -3151,11 +3151,11 @@ again:
finish_wait(&cgroup_rmdir_waitq, &wait);
clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);

- spin_lock(&release_list_lock);
+ raw_spin_lock(&release_list_lock);
set_bit(CGRP_REMOVED, &cgrp->flags);
if (!list_empty(&cgrp->release_list))
list_del(&cgrp->release_list);
- spin_unlock(&release_list_lock);
+ raw_spin_unlock(&release_list_lock);

cgroup_lock_hierarchy(cgrp->root);
/* delete this cgroup from parent->children */
@@ -3691,13 +3691,13 @@ static void check_for_release(struct cgroup *cgrp)
* already queued for a userspace notification, queue
* it now */
int need_schedule_work = 0;
- spin_lock(&release_list_lock);
+ raw_spin_lock(&release_list_lock);
if (!cgroup_is_removed(cgrp) &&
list_empty(&cgrp->release_list)) {
list_add(&cgrp->release_list, &release_list);
need_schedule_work = 1;
}
- spin_unlock(&release_list_lock);
+ raw_spin_unlock(&release_list_lock);
if (need_schedule_work)
schedule_work(&release_agent_work);
}
@@ -3747,7 +3747,7 @@ static void cgroup_release_agent(struct work_struct *work)
{
BUG_ON(work != &release_agent_work);
mutex_lock(&cgroup_mutex);
- spin_lock(&release_list_lock);
+ raw_spin_lock(&release_list_lock);
while (!list_empty(&release_list)) {
char *argv[3], *envp[3];
int i;
@@ -3756,7 +3756,7 @@ static void cgroup_release_agent(struct work_struct *work)
struct cgroup,
release_list);
list_del_init(&cgrp->release_list);
- spin_unlock(&release_list_lock);
+ raw_spin_unlock(&release_list_lock);
pathbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!pathbuf)
goto continue_free;
@@ -3786,9 +3786,9 @@ static void cgroup_release_agent(struct work_struct *work)
continue_free:
kfree(pathbuf);
kfree(agentbuf);
- spin_lock(&release_list_lock);
+ raw_spin_lock(&release_list_lock);
}
- spin_unlock(&release_list_lock);
+ raw_spin_unlock(&release_list_lock);
mutex_unlock(&cgroup_mutex);
}

--
1.6.5.2

2010-01-11 21:29:35

by John Kacur

[permalink] [raw]
Subject: [PATCH 18/26] x86 - nmi: Convert nmi_lock to raw_spinlock

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: a76dd8f3ffcbb1bb446a9ca059b77f40fe100bf1
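
The rule of thumb here: a lock taken from NMI context must be a
raw_spinlock_t, since an NMI handler can never schedule and so can never
block on an rtmutex-backed spinlock_t. Sketched with a hypothetical caller
around the lock this patch converts:

    /* hypothetical caller, running in NMI context */
    static void example_nmi_backtrace(struct pt_regs *regs)
    {
            static DEFINE_RAW_SPINLOCK(lock);  /* function-local is fine */

            raw_spin_lock(&lock);
            show_regs(regs);
            dump_stack();
            raw_spin_unlock(&lock);
    }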

Signed-off-by: John Kacur <[email protected]>
---
arch/x86/kernel/apic/nmi.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/apic/nmi.c b/arch/x86/kernel/apic/nmi.c
index 0159a69..51bc6a6 100644
--- a/arch/x86/kernel/apic/nmi.c
+++ b/arch/x86/kernel/apic/nmi.c
@@ -416,13 +416,13 @@ nmi_watchdog_tick(struct pt_regs *regs, unsigned reason)

/* We can be called before check_nmi_watchdog, hence NULL check. */
if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
- static DEFINE_SPINLOCK(lock); /* Serialise the printks */
+ static DEFINE_RAW_SPINLOCK(lock); /* Serialise the printks */

- spin_lock(&lock);
+ raw_spin_lock(&lock);
printk(KERN_WARNING "NMI backtrace for cpu %d\n", cpu);
show_regs(regs);
dump_stack();
- spin_unlock(&lock);
+ raw_spin_unlock(&lock);
cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));

rc = 1;
--
1.6.5.2

2010-01-11 21:30:05

by John Kacur

[permalink] [raw]
Subject: [PATCH 16/26] timer_stats: Convert to raw_spinlocks

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: 54852508231ef28058a88480b2f9ab9b859b0e38
Completes the conversion started in ecb49d1a639acbacfc3771cae5ec07bed5df3847.
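
The forcing function is the -rt nesting rule: a sleeping spinlock_t may not
be acquired while a raw_spinlock_t is held. The per-CPU lookup locks went
raw in the earlier commit, and tstat_lookup() runs with one of them held,
so table_lock has to follow. Sketched with a hypothetical outer lock name:

    raw_spin_lock(&percpu_lookup_lock);     /* already raw */
    raw_spin_lock(&table_lock);             /* must be raw to nest here */
    /* ... hash table lookup/insert ... */
    raw_spin_unlock(&table_lock);
    raw_spin_unlock(&percpu_lookup_lock);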

Signed-off-by: John Kacur <[email protected]>
---
kernel/time/timer_stats.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/time/timer_stats.c b/kernel/time/timer_stats.c
index 2f3b585..30cb955 100644
--- a/kernel/time/timer_stats.c
+++ b/kernel/time/timer_stats.c
@@ -81,7 +81,7 @@ struct entry {
/*
* Spinlock protecting the tables - not taken during lookup:
*/
-static DEFINE_SPINLOCK(table_lock);
+static DEFINE_RAW_SPINLOCK(table_lock);

/*
* Per-CPU lookup locks for fast hash lookup:
@@ -188,7 +188,7 @@ static struct entry *tstat_lookup(struct entry *entry, char *comm)
prev = NULL;
curr = *head;

- spin_lock(&table_lock);
+ raw_spin_lock(&table_lock);
/*
* Make sure we have not raced with another CPU:
*/
@@ -215,7 +215,7 @@ static struct entry *tstat_lookup(struct entry *entry, char *comm)
*head = curr;
}
out_unlock:
- spin_unlock(&table_lock);
+ raw_spin_unlock(&table_lock);

return curr;
}
--
1.6.5.2

2010-01-11 21:30:35

by John Kacur

[permalink] [raw]
Subject: [PATCH 13/26] genirq: Convert locks to raw_spinlocks

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: fd2bde5dd1689cc8ede833604cc19d1c835faf61

Signed-off-by: John Kacur <[email protected]>
---
arch/arm/oprofile/op_model_mpcore.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/oprofile/op_model_mpcore.c b/arch/arm/oprofile/op_model_mpcore.c
index 4ce0f98..cc125c2 100644
--- a/arch/arm/oprofile/op_model_mpcore.c
+++ b/arch/arm/oprofile/op_model_mpcore.c
@@ -263,10 +263,10 @@ static void em_route_irq(int irq, unsigned int cpu)
struct irq_desc *desc = irq_desc + irq;
const struct cpumask *mask = cpumask_of(cpu);

- spin_lock_irq(&desc->lock);
+ raw_spin_lock_irq(&desc->lock);
cpumask_copy(desc->affinity, mask);
desc->chip->set_affinity(irq, mask);
- spin_unlock_irq(&desc->lock);
+ raw_spin_unlock_irq(&desc->lock);
}

static int em_setup(void)
--
1.6.5.2

2010-01-11 21:30:57

by John Kacur

[permalink] [raw]
Subject: [PATCH 10/26] ACPI: Convert c3_lock to raw_spinlock

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: 50fee3b3ad7cd773f0b50739bb0ed9ac43561c7d
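
c3_lock is taken on the cpuidle entry path with interrupts already
disabled, where sleeping is not an option, hence the raw conversion.
Condensed from the hunk below:

    raw_spin_lock(&c3_lock);
    c3_cpu_count++;
    /* disable bus-master arbitration once every CPU is in C3 */
    if (c3_cpu_count == num_online_cpus())
            acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);
    raw_spin_unlock(&c3_lock);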

Signed-off-by: John Kacur <[email protected]>
---
drivers/acpi/processor_idle.c | 10 +++++-----
1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index d1676b1..42b30d0 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -929,7 +929,7 @@ static int acpi_idle_enter_simple(struct cpuidle_device *dev,
}

static int c3_cpu_count;
-static DEFINE_SPINLOCK(c3_lock);
+static DEFINE_RAW_SPINLOCK(c3_lock);

/**
* acpi_idle_enter_bm - enters C3 with proper BM handling
@@ -1004,12 +1004,12 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev,
* without doing anything.
*/
if (pr->flags.bm_check && pr->flags.bm_control) {
- spin_lock(&c3_lock);
+ raw_spin_lock(&c3_lock);
c3_cpu_count++;
/* Disable bus master arbitration when all CPUs are in C3 */
if (c3_cpu_count == num_online_cpus())
acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);
- spin_unlock(&c3_lock);
+ raw_spin_unlock(&c3_lock);
} else if (!pr->flags.bm_check) {
ACPI_FLUSH_CPU_CACHE();
}
@@ -1018,10 +1018,10 @@ static int acpi_idle_enter_bm(struct cpuidle_device *dev,

/* Re-enable bus master arbitration */
if (pr->flags.bm_check && pr->flags.bm_control) {
- spin_lock(&c3_lock);
+ raw_spin_lock(&c3_lock);
acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
c3_cpu_count--;
- spin_unlock(&c3_lock);
+ raw_spin_unlock(&c3_lock);
}
kt2 = ktime_get_real();
idle_time = ktime_to_us(ktime_sub(kt2, kt1));
--
1.6.5.2

2010-01-11 21:30:58

by John Kacur

[permalink] [raw]
Subject: [PATCH 07/26] x86: Convert pci_config_lock to raw_spinlock

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: bde31e2e149219765bb3a4bb3c568fa9623b5521
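
Because pci_config_lock is shared across several translation units, the
conversion touches both the extern declaration and the single definition;
condensed from the hunks below:

    /* arch/x86/include/asm/pci_x86.h */
    extern raw_spinlock_t pci_config_lock;

    /* arch/x86/pci/common.c */
    DEFINE_RAW_SPINLOCK(pci_config_lock);

All config-space accessors then switch to raw_spin_lock_irqsave(), since
this lock has to stay interrupt-safe.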

Signed-off-by: John Kacur <[email protected]>
---
arch/x86/include/asm/pci_x86.h | 2 +-
arch/x86/pci/common.c | 2 +-
arch/x86/pci/direct.c | 16 ++++++++--------
arch/x86/pci/mmconfig_32.c | 8 ++++----
arch/x86/pci/numaq_32.c | 8 ++++----
arch/x86/pci/pcbios.c | 8 ++++----
6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h
index b4bf9a9..567d009 100644
--- a/arch/x86/include/asm/pci_x86.h
+++ b/arch/x86/include/asm/pci_x86.h
@@ -83,7 +83,7 @@ struct irq_routing_table {
extern unsigned int pcibios_irq_mask;

extern int pcibios_scanned;
-extern spinlock_t pci_config_lock;
+extern raw_spinlock_t pci_config_lock;

extern int (*pcibios_enable_irq)(struct pci_dev *dev);
extern void (*pcibios_disable_irq)(struct pci_dev *dev);
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index d2552c6..b79d322 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -81,7 +81,7 @@ int pcibios_scanned;
* This interrupt-safe spinlock protects all accesses to PCI
* configuration space.
*/
-DEFINE_SPINLOCK(pci_config_lock);
+DEFINE_RAW_SPINLOCK(pci_config_lock);

static int __devinit can_skip_ioresource_align(const struct dmi_system_id *d)
{
diff --git a/arch/x86/pci/direct.c b/arch/x86/pci/direct.c
index 347d882..bd33620 100644
--- a/arch/x86/pci/direct.c
+++ b/arch/x86/pci/direct.c
@@ -27,7 +27,7 @@ static int pci_conf1_read(unsigned int seg, unsigned int bus,
return -EINVAL;
}

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

outl(PCI_CONF1_ADDRESS(bus, devfn, reg), 0xCF8);

@@ -43,7 +43,7 @@ static int pci_conf1_read(unsigned int seg, unsigned int bus,
break;
}

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
@@ -56,7 +56,7 @@ static int pci_conf1_write(unsigned int seg, unsigned int bus,
if ((bus > 255) || (devfn > 255) || (reg > 4095))
return -EINVAL;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

outl(PCI_CONF1_ADDRESS(bus, devfn, reg), 0xCF8);

@@ -72,7 +72,7 @@ static int pci_conf1_write(unsigned int seg, unsigned int bus,
break;
}

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
@@ -108,7 +108,7 @@ static int pci_conf2_read(unsigned int seg, unsigned int bus,
if (dev & 0x10)
return PCIBIOS_DEVICE_NOT_FOUND;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

outb((u8)(0xF0 | (fn << 1)), 0xCF8);
outb((u8)bus, 0xCFA);
@@ -127,7 +127,7 @@ static int pci_conf2_read(unsigned int seg, unsigned int bus,

outb(0, 0xCF8);

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
@@ -147,7 +147,7 @@ static int pci_conf2_write(unsigned int seg, unsigned int bus,
if (dev & 0x10)
return PCIBIOS_DEVICE_NOT_FOUND;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

outb((u8)(0xF0 | (fn << 1)), 0xCF8);
outb((u8)bus, 0xCFA);
@@ -166,7 +166,7 @@ static int pci_conf2_write(unsigned int seg, unsigned int bus,

outb(0, 0xCF8);

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
diff --git a/arch/x86/pci/mmconfig_32.c b/arch/x86/pci/mmconfig_32.c
index 90d5fd4..a3d9c54 100644
--- a/arch/x86/pci/mmconfig_32.c
+++ b/arch/x86/pci/mmconfig_32.c
@@ -64,7 +64,7 @@ err: *value = -1;
if (!base)
goto err;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

pci_exp_set_dev_base(base, bus, devfn);

@@ -79,7 +79,7 @@ err: *value = -1;
*value = mmio_config_readl(mmcfg_virt_addr + reg);
break;
}
- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
@@ -97,7 +97,7 @@ static int pci_mmcfg_write(unsigned int seg, unsigned int bus,
if (!base)
return -EINVAL;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

pci_exp_set_dev_base(base, bus, devfn);

@@ -112,7 +112,7 @@ static int pci_mmcfg_write(unsigned int seg, unsigned int bus,
mmio_config_writel(mmcfg_virt_addr + reg, value);
break;
}
- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
diff --git a/arch/x86/pci/numaq_32.c b/arch/x86/pci/numaq_32.c
index 8eb295e..2dad4dc 100644
--- a/arch/x86/pci/numaq_32.c
+++ b/arch/x86/pci/numaq_32.c
@@ -41,7 +41,7 @@ static int pci_conf1_mq_read(unsigned int seg, unsigned int bus,
if (!value || (bus >= MAX_MP_BUSSES) || (devfn > 255) || (reg > 255))
return -EINVAL;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

write_cf8(bus, devfn, reg);

@@ -66,7 +66,7 @@ static int pci_conf1_mq_read(unsigned int seg, unsigned int bus,
break;
}

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
@@ -80,7 +80,7 @@ static int pci_conf1_mq_write(unsigned int seg, unsigned int bus,
if ((bus >= MAX_MP_BUSSES) || (devfn > 255) || (reg > 255))
return -EINVAL;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

write_cf8(bus, devfn, reg);

@@ -105,7 +105,7 @@ static int pci_conf1_mq_write(unsigned int seg, unsigned int bus,
break;
}

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return 0;
}
diff --git a/arch/x86/pci/pcbios.c b/arch/x86/pci/pcbios.c
index 1c975cc..2daa521 100644
--- a/arch/x86/pci/pcbios.c
+++ b/arch/x86/pci/pcbios.c
@@ -161,7 +161,7 @@ static int pci_bios_read(unsigned int seg, unsigned int bus,
if (!value || (bus > 255) || (devfn > 255) || (reg > 255))
return -EINVAL;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

switch (len) {
case 1:
@@ -212,7 +212,7 @@ static int pci_bios_read(unsigned int seg, unsigned int bus,
break;
}

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return (int)((result & 0xff00) >> 8);
}
@@ -227,7 +227,7 @@ static int pci_bios_write(unsigned int seg, unsigned int bus,
if ((bus > 255) || (devfn > 255) || (reg > 255))
return -EINVAL;

- spin_lock_irqsave(&pci_config_lock, flags);
+ raw_spin_lock_irqsave(&pci_config_lock, flags);

switch (len) {
case 1:
@@ -268,7 +268,7 @@ static int pci_bios_write(unsigned int seg, unsigned int bus,
break;
}

- spin_unlock_irqrestore(&pci_config_lock, flags);
+ raw_spin_unlock_irqrestore(&pci_config_lock, flags);

return (int)((result & 0xff00) >> 8);
}
--
1.6.5.2

2010-01-11 21:31:30

by John Kacur

[permalink] [raw]
Subject: [PATCH 08/26] i8253: Convert i8253_lock to raw_spinlock

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: 48c7e18d4d6933e2c03a30c2859d37795c3efc7b
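
The PIT lock is taken from two kinds of context, condensed from the hunks
below: clockevent callbacks already run with interrupts disabled, so a
plain raw_spin_lock() is enough there, while callers in driver or process
context must disable interrupts themselves:

    /* clockevent callback: irqs already off */
    raw_spin_lock(&i8253_lock);
    outb_p(delta & 0xff, PIT_CH0);          /* LSB */
    outb(delta >> 8, PIT_CH0);              /* MSB */
    raw_spin_unlock(&i8253_lock);

    /* driver context: disable irqs around the counter read */
    raw_spin_lock_irqsave(&i8253_lock, flags);
    outb_p(0x00, 0x43);                     /* latch counter 0 */
    count = inb_p(0x40);
    count |= inb_p(0x40) << 8;
    raw_spin_unlock_irqrestore(&i8253_lock, flags);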

Signed-off-by: John Kacur <[email protected]>
---
arch/mips/include/asm/i8253.h | 2 +-
arch/mips/kernel/i8253.c | 14 +++++++-------
arch/x86/include/asm/i8253.h | 2 +-
arch/x86/kernel/apm_32.c | 4 ++--
arch/x86/kernel/i8253.c | 14 +++++++-------
drivers/block/hd.c | 4 ++--
drivers/input/gameport/gameport.c | 4 ++--
drivers/input/joystick/analog.c | 4 ++--
drivers/input/misc/pcspkr.c | 6 +++---
sound/drivers/pcsp/pcsp.h | 2 +-
sound/drivers/pcsp/pcsp_input.c | 4 ++--
sound/drivers/pcsp/pcsp_lib.c | 12 ++++++------
12 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/arch/mips/include/asm/i8253.h b/arch/mips/include/asm/i8253.h
index 032ca73..48bb823 100644
--- a/arch/mips/include/asm/i8253.h
+++ b/arch/mips/include/asm/i8253.h
@@ -12,7 +12,7 @@
#define PIT_CH0 0x40
#define PIT_CH2 0x42

-extern spinlock_t i8253_lock;
+extern raw_spinlock_t i8253_lock;

extern void setup_pit_timer(void);

diff --git a/arch/mips/kernel/i8253.c b/arch/mips/kernel/i8253.c
index ed5c441..9479406 100644
--- a/arch/mips/kernel/i8253.c
+++ b/arch/mips/kernel/i8253.c
@@ -15,7 +15,7 @@
#include <asm/io.h>
#include <asm/time.h>

-DEFINE_SPINLOCK(i8253_lock);
+DEFINE_RAW_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock);

/*
@@ -26,7 +26,7 @@ EXPORT_SYMBOL(i8253_lock);
static void init_pit_timer(enum clock_event_mode mode,
struct clock_event_device *evt)
{
- spin_lock(&i8253_lock);
+ raw_spin_lock(&i8253_lock);

switch(mode) {
case CLOCK_EVT_MODE_PERIODIC:
@@ -55,7 +55,7 @@ static void init_pit_timer(enum clock_event_mode mode,
/* Nothing to do here */
break;
}
- spin_unlock(&i8253_lock);
+ raw_spin_unlock(&i8253_lock);
}

/*
@@ -65,10 +65,10 @@ static void init_pit_timer(enum clock_event_mode mode,
*/
static int pit_next_event(unsigned long delta, struct clock_event_device *evt)
{
- spin_lock(&i8253_lock);
+ raw_spin_lock(&i8253_lock);
outb_p(delta & 0xff , PIT_CH0); /* LSB */
outb(delta >> 8 , PIT_CH0); /* MSB */
- spin_unlock(&i8253_lock);
+ raw_spin_unlock(&i8253_lock);

return 0;
}
@@ -137,7 +137,7 @@ static cycle_t pit_read(struct clocksource *cs)
static int old_count;
static u32 old_jifs;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);
/*
* Although our caller may have the read side of xtime_lock,
* this is now a seqlock, and we are cheating in this routine
@@ -183,7 +183,7 @@ static cycle_t pit_read(struct clocksource *cs)
old_count = count;
old_jifs = jifs;

- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);

count = (LATCH - 1) - count;

diff --git a/arch/x86/include/asm/i8253.h b/arch/x86/include/asm/i8253.h
index 1edbf89..fc1f579 100644
--- a/arch/x86/include/asm/i8253.h
+++ b/arch/x86/include/asm/i8253.h
@@ -6,7 +6,7 @@
#define PIT_CH0 0x40
#define PIT_CH2 0x42

-extern spinlock_t i8253_lock;
+extern raw_spinlock_t i8253_lock;

extern struct clock_event_device *global_clock_event;

diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
index b5b6b23..a920e00 100644
--- a/arch/x86/kernel/apm_32.c
+++ b/arch/x86/kernel/apm_32.c
@@ -1224,7 +1224,7 @@ static void reinit_timer(void)
#ifdef INIT_TIMER_AFTER_SUSPEND
unsigned long flags;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);
/* set the clock to HZ */
outb_pit(0x34, PIT_MODE); /* binary, mode 2, LSB/MSB, ch 0 */
udelay(10);
@@ -1232,7 +1232,7 @@ static void reinit_timer(void)
udelay(10);
outb_pit(LATCH >> 8, PIT_CH0); /* MSB */
udelay(10);
- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);
#endif
}

diff --git a/arch/x86/kernel/i8253.c b/arch/x86/kernel/i8253.c
index 23c1679..2dfd315 100644
--- a/arch/x86/kernel/i8253.c
+++ b/arch/x86/kernel/i8253.c
@@ -16,7 +16,7 @@
#include <asm/hpet.h>
#include <asm/smp.h>

-DEFINE_SPINLOCK(i8253_lock);
+DEFINE_RAW_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock);

/*
@@ -33,7 +33,7 @@ struct clock_event_device *global_clock_event;
static void init_pit_timer(enum clock_event_mode mode,
struct clock_event_device *evt)
{
- spin_lock(&i8253_lock);
+ raw_spin_lock(&i8253_lock);

switch (mode) {
case CLOCK_EVT_MODE_PERIODIC:
@@ -62,7 +62,7 @@ static void init_pit_timer(enum clock_event_mode mode,
/* Nothing to do here */
break;
}
- spin_unlock(&i8253_lock);
+ raw_spin_unlock(&i8253_lock);
}

/*
@@ -72,10 +72,10 @@ static void init_pit_timer(enum clock_event_mode mode,
*/
static int pit_next_event(unsigned long delta, struct clock_event_device *evt)
{
- spin_lock(&i8253_lock);
+ raw_spin_lock(&i8253_lock);
outb_pit(delta & 0xff , PIT_CH0); /* LSB */
outb_pit(delta >> 8 , PIT_CH0); /* MSB */
- spin_unlock(&i8253_lock);
+ raw_spin_unlock(&i8253_lock);

return 0;
}
@@ -130,7 +130,7 @@ static cycle_t pit_read(struct clocksource *cs)
int count;
u32 jifs;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);
/*
* Although our caller may have the read side of xtime_lock,
* this is now a seqlock, and we are cheating in this routine
@@ -176,7 +176,7 @@ static cycle_t pit_read(struct clocksource *cs)
old_count = count;
old_jifs = jifs;

- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);

count = (LATCH - 1) - count;

diff --git a/drivers/block/hd.c b/drivers/block/hd.c
index d5cdce0..d6efe0c 100644
--- a/drivers/block/hd.c
+++ b/drivers/block/hd.c
@@ -165,12 +165,12 @@ unsigned long read_timer(void)
unsigned long t, flags;
int i;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);
t = jiffies * 11932;
outb_p(0, 0x43);
i = inb_p(0x40);
i |= inb(0x40) << 8;
- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);
return(t - i);
}
#endif
diff --git a/drivers/input/gameport/gameport.c b/drivers/input/gameport/gameport.c
index ac11be0..c04ae8d 100644
--- a/drivers/input/gameport/gameport.c
+++ b/drivers/input/gameport/gameport.c
@@ -57,11 +57,11 @@ static unsigned int get_time_pit(void)
unsigned long flags;
unsigned int count;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);
outb_p(0x00, 0x43);
count = inb_p(0x40);
count |= inb_p(0x40) << 8;
- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);

return count;
}
diff --git a/drivers/input/joystick/analog.c b/drivers/input/joystick/analog.c
index 1c0b529..4afe0a3 100644
--- a/drivers/input/joystick/analog.c
+++ b/drivers/input/joystick/analog.c
@@ -146,11 +146,11 @@ static unsigned int get_time_pit(void)
unsigned long flags;
unsigned int count;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);
outb_p(0x00, 0x43);
count = inb_p(0x40);
count |= inb_p(0x40) << 8;
- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);

return count;
}
diff --git a/drivers/input/misc/pcspkr.c b/drivers/input/misc/pcspkr.c
index ea4e1fd..f080dd3 100644
--- a/drivers/input/misc/pcspkr.c
+++ b/drivers/input/misc/pcspkr.c
@@ -30,7 +30,7 @@ MODULE_ALIAS("platform:pcspkr");
#include <asm/i8253.h>
#else
#include <asm/8253pit.h>
-static DEFINE_SPINLOCK(i8253_lock);
+static DEFINE_RAW_SPINLOCK(i8253_lock);
#endif

static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int code, int value)
@@ -50,7 +50,7 @@ static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int c
if (value > 20 && value < 32767)
count = PIT_TICK_RATE / value;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);

if (count) {
/* set command for counter 2, 2 byte write */
@@ -65,7 +65,7 @@ static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int c
outb(inb_p(0x61) & 0xFC, 0x61);
}

- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);

return 0;
}
diff --git a/sound/drivers/pcsp/pcsp.h b/sound/drivers/pcsp/pcsp.h
index 1e12307..4ff6c8c 100644
--- a/sound/drivers/pcsp/pcsp.h
+++ b/sound/drivers/pcsp/pcsp.h
@@ -16,7 +16,7 @@
#include <asm/i8253.h>
#else
#include <asm/8253pit.h>
-static DEFINE_SPINLOCK(i8253_lock);
+static DEFINE_RAW_SPINLOCK(i8253_lock);
#endif

#define PCSP_SOUND_VERSION 0x400 /* read 4.00 */
diff --git a/sound/drivers/pcsp/pcsp_input.c b/sound/drivers/pcsp/pcsp_input.c
index 0444cde..b5e2b54 100644
--- a/sound/drivers/pcsp/pcsp_input.c
+++ b/sound/drivers/pcsp/pcsp_input.c
@@ -21,7 +21,7 @@ static void pcspkr_do_sound(unsigned int count)
{
unsigned long flags;

- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);

if (count) {
/* set command for counter 2, 2 byte write */
@@ -36,7 +36,7 @@ static void pcspkr_do_sound(unsigned int count)
outb(inb_p(0x61) & 0xFC, 0x61);
}

- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);
}

void pcspkr_stop_sound(void)
diff --git a/sound/drivers/pcsp/pcsp_lib.c b/sound/drivers/pcsp/pcsp_lib.c
index e1145ac..f6a2e72 100644
--- a/sound/drivers/pcsp/pcsp_lib.c
+++ b/sound/drivers/pcsp/pcsp_lib.c
@@ -65,7 +65,7 @@ static u64 pcsp_timer_update(struct snd_pcsp *chip)
timer_cnt = val * CUR_DIV() / 256;

if (timer_cnt && chip->enable) {
- spin_lock_irqsave(&i8253_lock, flags);
+ raw_spin_lock_irqsave(&i8253_lock, flags);
if (!nforce_wa) {
outb_p(chip->val61, 0x61);
outb_p(timer_cnt, 0x42);
@@ -74,7 +74,7 @@ static u64 pcsp_timer_update(struct snd_pcsp *chip)
outb(chip->val61 ^ 2, 0x61);
chip->thalf = 1;
}
- spin_unlock_irqrestore(&i8253_lock, flags);
+ raw_spin_unlock_irqrestore(&i8253_lock, flags);
}

chip->ns_rem = PCSP_PERIOD_NS();
@@ -158,10 +158,10 @@ static int pcsp_start_playing(struct snd_pcsp *chip)
return -EIO;
}

- spin_lock(&i8253_lock);
+ raw_spin_lock(&i8253_lock);
chip->val61 = inb(0x61) | 0x03;
outb_p(0x92, 0x43); /* binary, mode 1, LSB only, ch 2 */
- spin_unlock(&i8253_lock);
+ raw_spin_unlock(&i8253_lock);
atomic_set(&chip->timer_active, 1);
chip->thalf = 0;

@@ -178,11 +178,11 @@ static void pcsp_stop_playing(struct snd_pcsp *chip)
return;

atomic_set(&chip->timer_active, 0);
- spin_lock(&i8253_lock);
+ raw_spin_lock(&i8253_lock);
/* restore the timer */
outb_p(0xb6, 0x43); /* binary, mode 3, LSB/MSB, ch 2 */
outb(chip->val61 & 0xFC, 0x61);
- spin_unlock(&i8253_lock);
+ raw_spin_unlock(&i8253_lock);
}

/*
--
1.6.5.2

2010-01-11 21:31:28

by John Kacur

[permalink] [raw]
Subject: [PATCH 09/26] x86: Convert set_atomicity_lock to raw_spinlock

Convert locks that must not sleep under preempt-rt to raw_spinlocks.

See also: 11f7e656720b244c062223529e9a8aafeb8d6076
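
The sparse annotations carry over unchanged; the raw API is a drop-in
replacement here. Condensed from the hunks below:

    static void prepare_set(void) __acquires(set_atomicity_lock)
    {
            raw_spin_lock(&set_atomicity_lock);
            /* enter no-fill (CD=1, NW=0) cache mode, flush caches, ... */
    }

    static void post_set(void) __releases(set_atomicity_lock)
    {
            /* ... restore MTRR state and CR4 ... */
            raw_spin_unlock(&set_atomicity_lock);
    }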

Signed-off-by: John Kacur <[email protected]>
---
arch/x86/kernel/cpu/mtrr/generic.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 55da0c5..46f40f0 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -570,7 +570,7 @@ static unsigned long set_mtrr_state(void)


static unsigned long cr4;
-static DEFINE_SPINLOCK(set_atomicity_lock);
+static DEFINE_RAW_SPINLOCK(set_atomicity_lock);

/*
* Since we are disabling the cache don't allow any interrupts,
@@ -590,7 +590,7 @@ static void prepare_set(void) __acquires(set_atomicity_lock)
* changes to the way the kernel boots
*/

- spin_lock(&set_atomicity_lock);
+ raw_spin_lock(&set_atomicity_lock);

/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
cr0 = read_cr0() | X86_CR0_CD;
@@ -627,7 +627,7 @@ static void post_set(void) __releases(set_atomicity_lock)
/* Restore value of CR4 */
if (cpu_has_pge)
write_cr4(cr4);
- spin_unlock(&set_atomicity_lock);
+ raw_spin_unlock(&set_atomicity_lock);
}

static void generic_set_all(void)
--
1.6.5.2

2010-01-11 21:27:19

by John Kacur

[permalink] [raw]
Subject: [PATCH 01/26] xtime_lock: Convert atomic_seqlock to raw_seqlock, fix up all users

Rewritten from 5a950072e4c1036abcdb35610d053e49bdde55c9
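
Unlike the rest of the series this converts a seqlock, using the
raw_seqlock primitives introduced earlier in the series. The write and
read sides, condensed from the hunks below:

    /* writer, e.g. the timer tick */
    write_raw_seqlock(&xtime_lock);
    do_timer(1);
    write_raw_sequnlock(&xtime_lock);

    /* reader: lockless retry loop */
    unsigned long seq;

    do {
            seq = read_raw_seqbegin(&xtime_lock);
            /* ... snapshot xtime / wall_to_monotonic ... */
    } while (read_raw_seqretry(&xtime_lock, seq));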

Signed-off-by: John Kacur <[email protected]>
---
arch/alpha/kernel/time.c | 4 +-
arch/arm/kernel/time.c | 12 +++++-----
arch/blackfin/kernel/time.c | 4 +-
arch/cris/kernel/time.c | 4 +-
arch/frv/kernel/time.c | 4 +-
arch/h8300/kernel/time.c | 4 +-
arch/ia64/kernel/time.c | 4 +-
arch/ia64/xen/time.c | 4 +-
arch/m32r/kernel/time.c | 4 +-
arch/m68knommu/kernel/time.c | 4 +-
arch/mn10300/kernel/time.c | 4 +-
arch/parisc/kernel/time.c | 8 +++---
arch/powerpc/kernel/time.c | 4 +-
arch/sparc/kernel/pcic.c | 8 +++---
arch/sparc/kernel/time_32.c | 12 +++++-----
arch/xtensa/kernel/time.c | 4 +-
include/linux/time.h | 2 +-
kernel/hrtimer.c | 8 +++---
kernel/time.c | 8 +++---
kernel/time/ntp.c | 8 +++---
kernel/time/tick-common.c | 8 +++---
kernel/time/tick-sched.c | 12 +++++-----
kernel/time/timekeeping.c | 50 +++++++++++++++++++++---------------------
23 files changed, 92 insertions(+), 92 deletions(-)

diff --git a/arch/alpha/kernel/time.c b/arch/alpha/kernel/time.c
index 5d08266..760dd1b 100644
--- a/arch/alpha/kernel/time.c
+++ b/arch/alpha/kernel/time.c
@@ -106,7 +106,7 @@ irqreturn_t timer_interrupt(int irq, void *dev)
profile_tick(CPU_PROFILING);
#endif

- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

/*
* Calculate how many ticks have passed since the last update,
@@ -136,7 +136,7 @@ irqreturn_t timer_interrupt(int irq, void *dev)
state.last_rtc_update = xtime.tv_sec - (tmp ? 600 : 0);
}

- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);

#ifndef CONFIG_SMP
while (nticks--)
diff --git a/arch/arm/kernel/time.c b/arch/arm/kernel/time.c
index d38cdf2..3654ecf 100644
--- a/arch/arm/kernel/time.c
+++ b/arch/arm/kernel/time.c
@@ -245,11 +245,11 @@ void do_gettimeofday(struct timeval *tv)
unsigned long usec, sec;

do {
- seq = read_seqbegin_irqsave(&xtime_lock, flags);
+ seq = read_raw_seqbegin_irqsave(&xtime_lock, flags);
usec = system_timer->offset();
sec = xtime.tv_sec;
usec += xtime.tv_nsec / 1000;
- } while (read_seqretry_irqrestore(&xtime_lock, seq, flags));
+ } while (read_raw_seqretry_irqrestore(&xtime_lock, seq, flags));

/* usec may have gone up a lot: be safe */
while (usec >= 1000000) {
@@ -271,7 +271,7 @@ int do_settimeofday(struct timespec *tv)
if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
return -EINVAL;

- write_seqlock_irq(&xtime_lock);
+ write_raw_seqlock_irq(&xtime_lock);
/*
* This is revolting. We need to set "xtime" correctly. However, the
* value in this location is the value at the most recent update of
@@ -287,7 +287,7 @@ int do_settimeofday(struct timespec *tv)
set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);

ntp_clear();
- write_sequnlock_irq(&xtime_lock);
+ write_raw_sequnlock_irq(&xtime_lock);
clock_was_set();
return 0;
}
@@ -337,9 +337,9 @@ void timer_tick(void)
profile_tick(CPU_PROFILING);
do_leds();
do_set_rtc();
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
do_timer(1);
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
#endif
diff --git a/arch/blackfin/kernel/time.c b/arch/blackfin/kernel/time.c
index 13c1ee3..8ded01f 100644
--- a/arch/blackfin/kernel/time.c
+++ b/arch/blackfin/kernel/time.c
@@ -129,7 +129,7 @@ irqreturn_t timer_interrupt(int irq, void *dummy)
/* last time the cmos clock got updated */
static long last_rtc_update;

- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
do_timer(1);

/*
@@ -149,7 +149,7 @@ irqreturn_t timer_interrupt(int irq, void *dummy)
/* Do it again in 60s. */
last_rtc_update = xtime.tv_sec - 600;
}
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);

#ifdef CONFIG_IPIPE
update_root_process_times(get_irq_regs());
diff --git a/arch/cris/kernel/time.c b/arch/cris/kernel/time.c
index 074fe7d..58d2a1a 100644
--- a/arch/cris/kernel/time.c
+++ b/arch/cris/kernel/time.c
@@ -87,7 +87,7 @@ int do_settimeofday(struct timespec *tv)
if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
return -EINVAL;

- write_seqlock_irq(&xtime_lock);
+ write_raw_seqlock_irq(&xtime_lock);
/*
* This is revolting. We need to set "xtime" correctly. However, the
* value in this location is the value at the most recent update of
@@ -103,7 +103,7 @@ int do_settimeofday(struct timespec *tv)
set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);

ntp_clear();
- write_sequnlock_irq(&xtime_lock);
+ write_raw_sequnlock_irq(&xtime_lock);
clock_was_set();
return 0;
}
diff --git a/arch/frv/kernel/time.c b/arch/frv/kernel/time.c
index fb0ce75..82943ba 100644
--- a/arch/frv/kernel/time.c
+++ b/arch/frv/kernel/time.c
@@ -70,7 +70,7 @@ static irqreturn_t timer_interrupt(int irq, void *dummy)
* the irq version of write_lock because as just said we have irq
* locally disabled. -arca
*/
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

do_timer(1);

@@ -96,7 +96,7 @@ static irqreturn_t timer_interrupt(int irq, void *dummy)
__set_LEDS(n);
#endif /* CONFIG_HEARTBEAT */

- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);

update_process_times(user_mode(get_irq_regs()));

diff --git a/arch/h8300/kernel/time.c b/arch/h8300/kernel/time.c
index 7f2d6cf..d08012c 100644
--- a/arch/h8300/kernel/time.c
+++ b/arch/h8300/kernel/time.c
@@ -35,9 +35,9 @@ void h8300_timer_tick(void)
{
if (current->pid)
profile_tick(CPU_PROFILING);
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
do_timer(1);
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
update_process_times(user_mode(get_irq_regs()));
}

diff --git a/arch/ia64/kernel/time.c b/arch/ia64/kernel/time.c
index a35c661..bf9daaa 100644
--- a/arch/ia64/kernel/time.c
+++ b/arch/ia64/kernel/time.c
@@ -197,10 +197,10 @@ timer_interrupt (int irq, void *dev_id)
* another CPU. We need to avoid to SMP race by acquiring the
* xtime_lock.
*/
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
do_timer(1);
local_cpu_data->itm_next = new_itm;
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
} else
local_cpu_data->itm_next = new_itm;

diff --git a/arch/ia64/xen/time.c b/arch/ia64/xen/time.c
index c1c5445..f681845 100644
--- a/arch/ia64/xen/time.c
+++ b/arch/ia64/xen/time.c
@@ -140,10 +140,10 @@ consider_steal_time(unsigned long new_itm)
delta_itm += local_cpu_data->itm_delta * (stolen + blocked);

if (cpu == time_keeper_id) {
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
do_timer(stolen + blocked);
local_cpu_data->itm_next = delta_itm + new_itm;
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
} else {
local_cpu_data->itm_next = delta_itm + new_itm;
}
diff --git a/arch/m32r/kernel/time.c b/arch/m32r/kernel/time.c
index 9cedcef..47632ca 100644
--- a/arch/m32r/kernel/time.c
+++ b/arch/m32r/kernel/time.c
@@ -143,7 +143,7 @@ static irqreturn_t timer_interrupt(int irq, void *dev_id)
* CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
* called as close as possible to 500 ms before the new second starts.
*/
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
if (ntp_synced()
&& xtime.tv_sec > last_rtc_update + 660
&& (xtime.tv_nsec / 1000) >= 500000 - ((unsigned)TICK_SIZE) / 2
@@ -154,7 +154,7 @@ static irqreturn_t timer_interrupt(int irq, void *dev_id)
else /* do it again in 60 s */
last_rtc_update = xtime.tv_sec - 600;
}
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
/* As we return to user mode fire off the other CPU schedulers..
this is basically because we don't yet share IRQ's around.
This message is rigged to be safe on the 386 - basically it's
diff --git a/arch/m68knommu/kernel/time.c b/arch/m68knommu/kernel/time.c
index a90acf5..f8eb60f 100644
--- a/arch/m68knommu/kernel/time.c
+++ b/arch/m68knommu/kernel/time.c
@@ -44,11 +44,11 @@ irqreturn_t arch_timer_interrupt(int irq, void *dummy)
if (current->pid)
profile_tick(CPU_PROFILING);

- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

do_timer(1);

- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);

#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
diff --git a/arch/mn10300/kernel/time.c b/arch/mn10300/kernel/time.c
index 395caf0..82e6bb8 100644
--- a/arch/mn10300/kernel/time.c
+++ b/arch/mn10300/kernel/time.c
@@ -99,7 +99,7 @@ static irqreturn_t timer_interrupt(int irq, void *dev_id)
{
unsigned tsc, elapse;

- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

while (tsc = get_cycles(),
elapse = mn10300_last_tsc - tsc, /* time elapsed since last
@@ -114,7 +114,7 @@ static irqreturn_t timer_interrupt(int irq, void *dev_id)
check_rtc_time();
}

- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);

update_process_times(user_mode(get_irq_regs()));

diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
index a79c6f9..52c8cf7 100644
--- a/arch/parisc/kernel/time.c
+++ b/arch/parisc/kernel/time.c
@@ -163,9 +163,9 @@ irqreturn_t __irq_entry timer_interrupt(int irq, void *dev_id)
}

if (cpu == 0) {
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
do_timer(ticks_elapsed);
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
}

return IRQ_HANDLED;
@@ -268,12 +268,12 @@ void __init time_init(void)
if (pdc_tod_read(&tod_data) == 0) {
unsigned long flags;

- write_seqlock_irqsave(&xtime_lock, flags);
+ write_raw_seqlock_irqsave(&xtime_lock, flags);
xtime.tv_sec = tod_data.tod_sec;
xtime.tv_nsec = tod_data.tod_usec * 1000;
set_normalized_timespec(&wall_to_monotonic,
-xtime.tv_sec, -xtime.tv_nsec);
- write_sequnlock_irqrestore(&xtime_lock, flags);
+ write_raw_sequnlock_irqrestore(&xtime_lock, flags);
} else {
printk(KERN_ERR "Error reading tod clock\n");
xtime.tv_sec = 0;
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 9ba2cc8..b24bfa3 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -1040,7 +1040,7 @@ void __init time_init(void)
/* Save the current timebase to pretty up CONFIG_PRINTK_TIME */
boot_tb = get_tb_or_rtc();

- write_seqlock_irqsave(&xtime_lock, flags);
+ write_raw_seqlock_irqsave(&xtime_lock, flags);

/* If platform provided a timezone (pmac), we correct the time */
if (timezone_offset) {
@@ -1054,7 +1054,7 @@ void __init time_init(void)
vdso_data->stamp_xsec = (u64) xtime.tv_sec * XSEC_PER_SEC;
vdso_data->tb_to_xs = tb_to_xs;

- write_sequnlock_irqrestore(&xtime_lock, flags);
+ write_raw_sequnlock_irqrestore(&xtime_lock, flags);

/* Start the decrementer on CPUs that have manual control
* such as BookE
diff --git a/arch/sparc/kernel/pcic.c b/arch/sparc/kernel/pcic.c
index 85e7037..ad60d3b 100644
--- a/arch/sparc/kernel/pcic.c
+++ b/arch/sparc/kernel/pcic.c
@@ -703,10 +703,10 @@ static void pcic_clear_clock_irq(void)

static irqreturn_t pcic_timer_handler (int irq, void *h)
{
- write_seqlock(&xtime_lock); /* Dummy, to show that we remember */
+ write_raw_seqlock(&xtime_lock); /* Dummy, to show that we remember */
pcic_clear_clock_irq();
do_timer(1);
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
#endif
@@ -766,7 +766,7 @@ static void pci_do_gettimeofday(struct timeval *tv)
unsigned long max_ntp_tick = tick_usec - tickadj;

do {
- seq = read_seqbegin_irqsave(&xtime_lock, flags);
+ seq = read_raw_seqbegin_irqsave(&xtime_lock, flags);
usec = do_gettimeoffset();

/*
@@ -779,7 +779,7 @@ static void pci_do_gettimeofday(struct timeval *tv)

sec = xtime.tv_sec;
usec += (xtime.tv_nsec / 1000);
- } while (read_seqretry_irqrestore(&xtime_lock, seq, flags));
+ } while (read_raw_seqretry_irqrestore(&xtime_lock, seq, flags));

while (usec >= 1000000) {
usec -= 1000000;
diff --git a/arch/sparc/kernel/time_32.c b/arch/sparc/kernel/time_32.c
index 5b2f595..c581970 100644
--- a/arch/sparc/kernel/time_32.c
+++ b/arch/sparc/kernel/time_32.c
@@ -93,7 +93,7 @@ static irqreturn_t timer_interrupt(int dummy, void *dev_id)
#endif

/* Protect counter clear so that do_gettimeoffset works */
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

clear_clock_irq();

@@ -109,7 +109,7 @@ static irqreturn_t timer_interrupt(int dummy, void *dev_id)
else
last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
}
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);

#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
@@ -248,7 +248,7 @@ void do_gettimeofday(struct timeval *tv)
unsigned long max_ntp_tick = tick_usec - tickadj;

do {
- seq = read_seqbegin_irqsave(&xtime_lock, flags);
+ seq = read_raw_seqbegin_irqsave(&xtime_lock, flags);
usec = do_gettimeoffset();

/*
@@ -261,7 +261,7 @@ void do_gettimeofday(struct timeval *tv)

sec = xtime.tv_sec;
usec += (xtime.tv_nsec / 1000);
- } while (read_seqretry_irqrestore(&xtime_lock, seq, flags));
+ } while (read_raw_seqretry_irqrestore(&xtime_lock, seq, flags));

while (usec >= 1000000) {
usec -= 1000000;
@@ -278,9 +278,9 @@ int do_settimeofday(struct timespec *tv)
{
int ret;

- write_seqlock_irq(&xtime_lock);
+ write_raw_seqlock_irq(&xtime_lock);
ret = bus_do_settimeofday(tv);
- write_sequnlock_irq(&xtime_lock);
+ write_raw_sequnlock_irq(&xtime_lock);
clock_was_set();
return ret;
}
diff --git a/arch/xtensa/kernel/time.c b/arch/xtensa/kernel/time.c
index 19f7df3..e8184d5 100644
--- a/arch/xtensa/kernel/time.c
+++ b/arch/xtensa/kernel/time.c
@@ -101,7 +101,7 @@ again:
update_process_times(user_mode(get_irq_regs()));
#endif

- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

do_timer(1); /* Linux handler in kernel/timer.c */

@@ -110,7 +110,7 @@ again:
next += CCOUNT_PER_JIFFY;
set_linux_timer(next);

- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
}

/* Allow platform to do something useful (Wdog). */
diff --git a/include/linux/time.h b/include/linux/time.h
index 6e026e4..49278a9 100644
--- a/include/linux/time.h
+++ b/include/linux/time.h
@@ -99,7 +99,7 @@ static inline struct timespec timespec_sub(struct timespec lhs,

extern struct timespec xtime;
extern struct timespec wall_to_monotonic;
-extern seqlock_t xtime_lock;
+extern raw_seqlock_t xtime_lock;

extern void read_persistent_clock(struct timespec *ts);
extern void read_boot_clock(struct timespec *ts);
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index 0086628..54cf84f 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -88,10 +88,10 @@ static void hrtimer_get_softirq_time(struct hrtimer_cpu_base *base)
unsigned long seq;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
xts = current_kernel_time();
tom = wall_to_monotonic;
- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

xtim = timespec_to_ktime(xts);
tomono = timespec_to_ktime(tom);
@@ -619,11 +619,11 @@ static void retrigger_next_event(void *arg)
return;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
set_normalized_timespec(&realtime_offset,
-wall_to_monotonic.tv_sec,
-wall_to_monotonic.tv_nsec);
- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

base = &__get_cpu_var(hrtimer_bases);

diff --git a/kernel/time.c b/kernel/time.c
index 8047980..adbe583 100644
--- a/kernel/time.c
+++ b/kernel/time.c
@@ -133,11 +133,11 @@ SYSCALL_DEFINE2(gettimeofday, struct timeval __user *, tv,
*/
static inline void warp_clock(void)
{
- write_seqlock_irq(&xtime_lock);
+ write_raw_seqlock_irq(&xtime_lock);
wall_to_monotonic.tv_sec -= sys_tz.tz_minuteswest * 60;
xtime.tv_sec += sys_tz.tz_minuteswest * 60;
update_xtime_cache(0);
- write_sequnlock_irq(&xtime_lock);
+ write_raw_sequnlock_irq(&xtime_lock);
clock_was_set();
}

@@ -699,9 +699,9 @@ u64 get_jiffies_64(void)
u64 ret;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
ret = jiffies_64;
- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));
return ret;
}
EXPORT_SYMBOL(get_jiffies_64);
diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
index 4800f93..ed2aec1 100644
--- a/kernel/time/ntp.c
+++ b/kernel/time/ntp.c
@@ -188,7 +188,7 @@ static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
{
enum hrtimer_restart res = HRTIMER_NORESTART;

- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

switch (time_state) {
case TIME_OK:
@@ -218,7 +218,7 @@ static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
break;
}

- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);

return res;
}
@@ -476,7 +476,7 @@ int do_adjtimex(struct timex *txc)

getnstimeofday(&ts);

- write_seqlock_irq(&xtime_lock);
+ write_raw_seqlock_irq(&xtime_lock);

if (txc->modes & ADJ_ADJTIME) {
long save_adjust = time_adjust;
@@ -524,7 +524,7 @@ int do_adjtimex(struct timex *txc)
txc->errcnt = 0;
txc->stbcnt = 0;

- write_sequnlock_irq(&xtime_lock);
+ write_raw_sequnlock_irq(&xtime_lock);

txc->time.tv_sec = ts.tv_sec;
txc->time.tv_usec = ts.tv_nsec;
diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index b6b898d..01165a7 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -60,13 +60,13 @@ int tick_is_oneshot_available(void)
static void tick_periodic(int cpu)
{
if (tick_do_timer_cpu == cpu) {
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

/* Keep track of the next tick event */
tick_next_period = ktime_add(tick_next_period, tick_period);

do_timer(1);
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
}

update_process_times(user_mode(get_irq_regs()));
@@ -127,9 +127,9 @@ void tick_setup_periodic(struct clock_event_device *dev, int broadcast)
ktime_t next;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
next = tick_next_period;
- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

clockevents_set_mode(dev, CLOCK_EVT_MODE_ONESHOT);

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index f992762..bc625d9 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -57,7 +57,7 @@ static void tick_do_update_jiffies64(ktime_t now)
return;

/* Reevalute with xtime_lock held */
- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);

delta = ktime_sub(now, last_jiffies_update);
if (delta.tv64 >= tick_period.tv64) {
@@ -80,7 +80,7 @@ static void tick_do_update_jiffies64(ktime_t now)
/* Keep the tick_next_period variable up to date */
tick_next_period = ktime_add(last_jiffies_update, tick_period);
}
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
}

/*
@@ -90,12 +90,12 @@ static ktime_t tick_init_jiffy_update(void)
{
ktime_t period;

- write_seqlock(&xtime_lock);
+ write_raw_seqlock(&xtime_lock);
/* Did we start the jiffies update yet ? */
if (last_jiffies_update.tv64 == 0)
last_jiffies_update = tick_next_period;
period = last_jiffies_update;
- write_sequnlock(&xtime_lock);
+ write_raw_sequnlock(&xtime_lock);
return period;
}

@@ -265,11 +265,11 @@ void tick_nohz_stop_sched_tick(int inidle)
ts->idle_calls++;
/* Read jiffies and the time when jiffies were updated last */
do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
last_update = last_jiffies_update;
last_jiffies = jiffies;
time_delta = timekeeping_max_deferment();
- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

if (rcu_needs_cpu(cpu) || printk_needs_cpu(cpu) ||
arch_needs_cpu(cpu)) {
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 7faaa32..005217e 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -135,7 +135,7 @@ static inline s64 timekeeping_get_ns_raw(void)
* This read-write spinlock protects us from races in SMP while
* playing with xtime.
*/
-__cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
+__cacheline_aligned_in_smp DEFINE_RAW_SEQLOCK(xtime_lock);


/*
@@ -226,7 +226,7 @@ void getnstimeofday(struct timespec *ts)
WARN_ON(timekeeping_suspended);

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);

*ts = xtime;
nsecs = timekeeping_get_ns();
@@ -234,7 +234,7 @@ void getnstimeofday(struct timespec *ts)
/* If arch requires, add in gettimeoffset() */
nsecs += arch_gettimeoffset();

- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

timespec_add_ns(ts, nsecs);
}
@@ -249,12 +249,12 @@ ktime_t ktime_get(void)
WARN_ON(timekeeping_suspended);

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
secs = xtime.tv_sec + wall_to_monotonic.tv_sec;
nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec;
nsecs += timekeeping_get_ns();

- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));
/*
* Use ktime_set/ktime_add_ns to create a proper ktime on
* 32-bit architectures without CONFIG_KTIME_SCALAR.
@@ -280,12 +280,12 @@ void ktime_get_ts(struct timespec *ts)
WARN_ON(timekeeping_suspended);

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
*ts = xtime;
tomono = wall_to_monotonic;
nsecs = timekeeping_get_ns();

- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec,
ts->tv_nsec + tomono.tv_nsec + nsecs);
@@ -322,7 +322,7 @@ int do_settimeofday(struct timespec *tv)
if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
return -EINVAL;

- write_seqlock_irqsave(&xtime_lock, flags);
+ write_raw_seqlock_irqsave(&xtime_lock, flags);

timekeeping_forward_now();

@@ -339,7 +339,7 @@ int do_settimeofday(struct timespec *tv)

update_vsyscall(&xtime, timekeeper.clock, timekeeper.mult);

- write_sequnlock_irqrestore(&xtime_lock, flags);
+ write_raw_sequnlock_irqrestore(&xtime_lock, flags);

/* signal hrtimers about time change */
clock_was_set();
@@ -418,11 +418,11 @@ void ktime_get_ts(struct timespec *ts)
unsigned long seq;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
getnstimeofday(ts);
tomono = wall_to_monotonic;

- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec,
ts->tv_nsec + tomono.tv_nsec);
@@ -458,11 +458,11 @@ void getrawmonotonic(struct timespec *ts)
s64 nsecs;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);
nsecs = timekeeping_get_ns_raw();
*ts = raw_time;

- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

timespec_add_ns(ts, nsecs);
}
@@ -478,11 +478,11 @@ int timekeeping_valid_for_hres(void)
int ret;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);

ret = timekeeper.clock->flags & CLOCK_SOURCE_VALID_FOR_HRES;

- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

return ret;
}
@@ -540,7 +540,7 @@ void __init timekeeping_init(void)
read_persistent_clock(&now);
read_boot_clock(&boot);

- write_seqlock_irqsave(&xtime_lock, flags);
+ write_raw_seqlock_irqsave(&xtime_lock, flags);

ntp_init();

@@ -562,7 +562,7 @@ void __init timekeeping_init(void)
update_xtime_cache(0);
total_sleep_time.tv_sec = 0;
total_sleep_time.tv_nsec = 0;
- write_sequnlock_irqrestore(&xtime_lock, flags);
+ write_raw_sequnlock_irqrestore(&xtime_lock, flags);
}

/* time in seconds when suspend began */
@@ -585,7 +585,7 @@ static int timekeeping_resume(struct sys_device *dev)

clocksource_resume();

- write_seqlock_irqsave(&xtime_lock, flags);
+ write_raw_seqlock_irqsave(&xtime_lock, flags);

if (timespec_compare(&ts, &timekeeping_suspend_time) > 0) {
ts = timespec_sub(ts, timekeeping_suspend_time);
@@ -598,7 +598,7 @@ static int timekeeping_resume(struct sys_device *dev)
timekeeper.clock->cycle_last = timekeeper.clock->read(timekeeper.clock);
timekeeper.ntp_error = 0;
timekeeping_suspended = 0;
- write_sequnlock_irqrestore(&xtime_lock, flags);
+ write_raw_sequnlock_irqrestore(&xtime_lock, flags);

touch_softlockup_watchdog();

@@ -616,10 +616,10 @@ static int timekeeping_suspend(struct sys_device *dev, pm_message_t state)

read_persistent_clock(&timekeeping_suspend_time);

- write_seqlock_irqsave(&xtime_lock, flags);
+ write_raw_seqlock_irqsave(&xtime_lock, flags);
timekeeping_forward_now();
timekeeping_suspended = 1;
- write_sequnlock_irqrestore(&xtime_lock, flags);
+ write_raw_sequnlock_irqrestore(&xtime_lock, flags);

clockevents_notify(CLOCK_EVT_NOTIFY_SUSPEND, NULL);

@@ -907,10 +907,10 @@ struct timespec current_kernel_time(void)
unsigned long seq;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);

now = xtime_cache;
- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

return now;
}
@@ -922,11 +922,11 @@ struct timespec get_monotonic_coarse(void)
unsigned long seq;

do {
- seq = read_seqbegin(&xtime_lock);
+ seq = read_raw_seqbegin(&xtime_lock);

now = xtime_cache;
mono = wall_to_monotonic;
- } while (read_seqretry(&xtime_lock, seq));
+ } while (read_raw_seqretry(&xtime_lock, seq));

set_normalized_timespec(&now, now.tv_sec + mono.tv_sec,
now.tv_nsec + mono.tv_nsec);
--
1.6.5.2
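
For reference, the read/write pattern that every hunk above converts looks
like this in isolation. A minimal sketch, not code from the patch: sample_lock
and shared_ts are hypothetical names, while the raw_* calls are the API
introduced by the raw_seqlock patch in this series.

        static DEFINE_RAW_SEQLOCK(sample_lock);         /* hypothetical lock */
        static struct timespec shared_ts;               /* hypothetical data */

        /* Reader side: lockless; retry if a writer interleaved. */
        static void sample_read(struct timespec *ts)
        {
                unsigned long seq;

                do {
                        seq = read_raw_seqbegin(&sample_lock);
                        *ts = shared_ts;
                } while (read_raw_seqretry(&sample_lock, seq));
        }

        /* Writer side: exclusive, IRQs off; never preemptible, even on -rt. */
        static void sample_write(const struct timespec *ts)
        {
                unsigned long flags;

                write_raw_seqlock_irqsave(&sample_lock, flags);
                shared_ts = *ts;
                write_raw_sequnlock_irqrestore(&sample_lock, flags);
        }

The writer side is the point of the conversion: xtime_lock writers run from
timer interrupt context, which cannot block, so the lock must not turn into
a sleeping lock under preempt-rt.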

2010-01-11 21:32:46

by John Kacur

[permalink] [raw]
Subject: [PATCH 05/26] x86: Convert ioapic_lock and vector_lock to raw_spinlocks

Convert locks which cannot sleep in preempt-rt to raw_spinlocks.

See also: f32843e644be4d636101117c377a59e0646796e4

Signed-off-by: John Kacur <[email protected]>
---
arch/x86/kernel/apic/io_apic.c | 102 ++++++++++++++++++++--------------------
1 files changed, 51 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index de00c46..7719fa0 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -73,8 +73,8 @@
*/
int sis_apic_bug = -1;

-static DEFINE_SPINLOCK(ioapic_lock);
-static DEFINE_SPINLOCK(vector_lock);
+static DEFINE_RAW_SPINLOCK(ioapic_lock);
+static DEFINE_RAW_SPINLOCK(vector_lock);

/*
* # of IRQ routing registers
@@ -406,7 +406,7 @@ static bool io_apic_level_ack_pending(struct irq_cfg *cfg)
struct irq_pin_list *entry;
unsigned long flags;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
for_each_irq_pin(entry, cfg->irq_2_pin) {
unsigned int reg;
int pin;
@@ -415,11 +415,11 @@ static bool io_apic_level_ack_pending(struct irq_cfg *cfg)
reg = io_apic_read(entry->apic, 0x10 + pin*2);
/* Is the remote IRR bit set? */
if (reg & IO_APIC_REDIR_REMOTE_IRR) {
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
return true;
}
}
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

return false;
}
@@ -433,10 +433,10 @@ static struct IO_APIC_route_entry ioapic_read_entry(int apic, int pin)
{
union entry_union eu;
unsigned long flags;
- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
eu.w1 = io_apic_read(apic, 0x10 + 2 * pin);
eu.w2 = io_apic_read(apic, 0x11 + 2 * pin);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
return eu.entry;
}

@@ -459,9 +459,9 @@ __ioapic_write_entry(int apic, int pin, struct IO_APIC_route_entry e)
void ioapic_write_entry(int apic, int pin, struct IO_APIC_route_entry e)
{
unsigned long flags;
- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
__ioapic_write_entry(apic, pin, e);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
}

/*
@@ -474,10 +474,10 @@ static void ioapic_mask_entry(int apic, int pin)
unsigned long flags;
union entry_union eu = { .entry.mask = 1 };

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x10 + 2*pin, eu.w1);
io_apic_write(apic, 0x11 + 2*pin, eu.w2);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
}

/*
@@ -604,9 +604,9 @@ static void mask_IO_APIC_irq_desc(struct irq_desc *desc)

BUG_ON(!cfg);

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
__mask_IO_APIC_irq(cfg);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
}

static void unmask_IO_APIC_irq_desc(struct irq_desc *desc)
@@ -614,9 +614,9 @@ static void unmask_IO_APIC_irq_desc(struct irq_desc *desc)
struct irq_cfg *cfg = desc->chip_data;
unsigned long flags;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
__unmask_IO_APIC_irq(cfg);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
}

static void mask_IO_APIC_irq(unsigned int irq)
@@ -1140,12 +1140,12 @@ void lock_vector_lock(void)
/* Used to the online set of cpus does not change
* during assign_irq_vector.
*/
- spin_lock(&vector_lock);
+ raw_spin_lock(&vector_lock);
}

void unlock_vector_lock(void)
{
- spin_unlock(&vector_lock);
+ raw_spin_unlock(&vector_lock);
}

static int
@@ -1232,9 +1232,9 @@ int assign_irq_vector(int irq, struct irq_cfg *cfg, const struct cpumask *mask)
int err;
unsigned long flags;

- spin_lock_irqsave(&vector_lock, flags);
+ raw_spin_lock_irqsave(&vector_lock, flags);
err = __assign_irq_vector(irq, cfg, mask);
- spin_unlock_irqrestore(&vector_lock, flags);
+ raw_spin_unlock_irqrestore(&vector_lock, flags);
return err;
}

@@ -1601,14 +1601,14 @@ __apicdebuginit(void) print_IO_APIC(void)

for (apic = 0; apic < nr_ioapics; apic++) {

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_00.raw = io_apic_read(apic, 0);
reg_01.raw = io_apic_read(apic, 1);
if (reg_01.bits.version >= 0x10)
reg_02.raw = io_apic_read(apic, 2);
if (reg_01.bits.version >= 0x20)
reg_03.raw = io_apic_read(apic, 3);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

printk("\n");
printk(KERN_DEBUG "IO APIC #%d......\n", mp_ioapics[apic].apicid);
@@ -1903,9 +1903,9 @@ void __init enable_IO_APIC(void)
* The number of IO-APIC IRQ registers (== #pins):
*/
for (apic = 0; apic < nr_ioapics; apic++) {
- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_01.raw = io_apic_read(apic, 1);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
nr_ioapic_registers[apic] = reg_01.bits.entries+1;
}

@@ -2045,9 +2045,9 @@ void __init setup_ioapic_ids_from_mpc(void)
for (apic_id = 0; apic_id < nr_ioapics; apic_id++) {

/* Read the register 0 value */
- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_00.raw = io_apic_read(apic_id, 0);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

old_id = mp_ioapics[apic_id].apicid;

@@ -2106,16 +2106,16 @@ void __init setup_ioapic_ids_from_mpc(void)
mp_ioapics[apic_id].apicid);

reg_00.bits.ID = mp_ioapics[apic_id].apicid;
- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic_id, 0, reg_00.raw);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

/*
* Sanity check
*/
- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_00.raw = io_apic_read(apic_id, 0);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
if (reg_00.bits.ID != mp_ioapics[apic_id].apicid)
printk("could not set ID!\n");
else
@@ -2198,7 +2198,7 @@ static unsigned int startup_ioapic_irq(unsigned int irq)
unsigned long flags;
struct irq_cfg *cfg;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
if (irq < nr_legacy_irqs) {
disable_8259A_irq(irq);
if (i8259A_irq_pending(irq))
@@ -2206,7 +2206,7 @@ static unsigned int startup_ioapic_irq(unsigned int irq)
}
cfg = irq_cfg(irq);
__unmask_IO_APIC_irq(cfg);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

return was_pending;
}
@@ -2217,9 +2217,9 @@ static int ioapic_retrigger_irq(unsigned int irq)
struct irq_cfg *cfg = irq_cfg(irq);
unsigned long flags;

- spin_lock_irqsave(&vector_lock, flags);
+ raw_spin_lock_irqsave(&vector_lock, flags);
apic->send_IPI_mask(cpumask_of(cpumask_first(cfg->domain)), cfg->vector);
- spin_unlock_irqrestore(&vector_lock, flags);
+ raw_spin_unlock_irqrestore(&vector_lock, flags);

return 1;
}
@@ -2312,14 +2312,14 @@ set_ioapic_affinity_irq_desc(struct irq_desc *desc, const struct cpumask *mask)
irq = desc->irq;
cfg = desc->chip_data;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
ret = set_desc_affinity(desc, mask, &dest);
if (!ret) {
/* Only the high 8 bits are valid. */
dest = SET_APIC_LOGICAL_ID(dest);
__target_IO_APIC_irq(irq, dest, cfg);
}
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

return ret;
}
@@ -2547,9 +2547,9 @@ static void eoi_ioapic_irq(struct irq_desc *desc)
irq = desc->irq;
cfg = desc->chip_data;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
__eoi_ioapic_irq(irq, cfg);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
}

static void ack_apic_level(unsigned int irq)
@@ -3131,13 +3131,13 @@ static int ioapic_resume(struct sys_device *dev)
data = container_of(dev, struct sysfs_ioapic_data, dev);
entry = data->entry;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_00.raw = io_apic_read(dev->id, 0);
if (reg_00.bits.ID != mp_ioapics[dev->id].apicid) {
reg_00.bits.ID = mp_ioapics[dev->id].apicid;
io_apic_write(dev->id, 0, reg_00.raw);
}
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);
for (i = 0; i < nr_ioapic_registers[dev->id]; i++)
ioapic_write_entry(dev->id, i, entry[i]);

@@ -3200,7 +3200,7 @@ unsigned int create_irq_nr(unsigned int irq_want, int node)
if (irq_want < nr_irqs_gsi)
irq_want = nr_irqs_gsi;

- spin_lock_irqsave(&vector_lock, flags);
+ raw_spin_lock_irqsave(&vector_lock, flags);
for (new = irq_want; new < nr_irqs; new++) {
desc_new = irq_to_desc_alloc_node(new, node);
if (!desc_new) {
@@ -3219,7 +3219,7 @@ unsigned int create_irq_nr(unsigned int irq_want, int node)
irq = new;
break;
}
- spin_unlock_irqrestore(&vector_lock, flags);
+ raw_spin_unlock_irqrestore(&vector_lock, flags);

if (irq > 0) {
dynamic_irq_init(irq);
@@ -3259,9 +3259,9 @@ void destroy_irq(unsigned int irq)
desc->chip_data = cfg;

free_irte(irq);
- spin_lock_irqsave(&vector_lock, flags);
+ raw_spin_lock_irqsave(&vector_lock, flags);
__clear_irq_vector(irq, cfg);
- spin_unlock_irqrestore(&vector_lock, flags);
+ raw_spin_unlock_irqrestore(&vector_lock, flags);
}

/*
@@ -3798,9 +3798,9 @@ int __init io_apic_get_redir_entries (int ioapic)
union IO_APIC_reg_01 reg_01;
unsigned long flags;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_01.raw = io_apic_read(ioapic, 1);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

return reg_01.bits.entries;
}
@@ -3962,9 +3962,9 @@ int __init io_apic_get_unique_id(int ioapic, int apic_id)
if (physids_empty(apic_id_map))
apic->ioapic_phys_id_map(&phys_cpu_present_map, &apic_id_map);

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_00.raw = io_apic_read(ioapic, 0);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

if (apic_id >= get_physical_broadcast()) {
printk(KERN_WARNING "IOAPIC[%d]: Invalid apic_id %d, trying "
@@ -3998,10 +3998,10 @@ int __init io_apic_get_unique_id(int ioapic, int apic_id)
if (reg_00.bits.ID != apic_id) {
reg_00.bits.ID = apic_id;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(ioapic, 0, reg_00.raw);
reg_00.raw = io_apic_read(ioapic, 0);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

/* Sanity check */
if (reg_00.bits.ID != apic_id) {
@@ -4022,9 +4022,9 @@ int __init io_apic_get_version(int ioapic)
union IO_APIC_reg_01 reg_01;
unsigned long flags;

- spin_lock_irqsave(&ioapic_lock, flags);
+ raw_spin_lock_irqsave(&ioapic_lock, flags);
reg_01.raw = io_apic_read(ioapic, 1);
- spin_unlock_irqrestore(&ioapic_lock, flags);
+ raw_spin_unlock_irqrestore(&ioapic_lock, flags);

return reg_01.bits.version;
}
--
1.6.5.2
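
The conversion above is mechanical, but the pattern it preserves is worth
spelling out: ioapic_lock serializes accesses to the IO-APIC's indexed
register window and is taken on interrupt-disabled paths such as the IRQ
ack and affinity code, so it cannot become a sleeping lock on -rt. A minimal
sketch of the access pattern (demo_lock and demo_read are hypothetical
names; io_apic_read is the helper used in the hunks above):

        static DEFINE_RAW_SPINLOCK(demo_lock);          /* hypothetical lock */

        static unsigned int demo_read(int apic, int reg)
        {
                unsigned long flags;
                unsigned int val;

                /*
                 * The indexed access must not be interleaved with another
                 * CPU or preempted mid-sequence, hence IRQs off and a
                 * spinning lock.
                 */
                raw_spin_lock_irqsave(&demo_lock, flags);
                val = io_apic_read(apic, reg);
                raw_spin_unlock_irqrestore(&demo_lock, flags);

                return val;
        }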

2010-01-11 21:32:08

by John Kacur

[permalink] [raw]
Subject: [PATCH 04/26] sched: Convert thread_group_cputimer lock to raw_spinlock

Convert locks which cannot sleep in preempt-rt to raw_spinlocks.

See also a3f22fd7ae186a29b413ad959184f9b4c1d32173

Signed-off-by: John Kacur <[email protected]>
---
include/linux/init_task.h | 2 +-
include/linux/sched.h | 4 ++--
kernel/posix-cpu-timers.c | 8 ++++----
kernel/sched_stats.h | 12 ++++++------
4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index abec69b..e93b8cd 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -27,7 +27,7 @@ extern struct fs_struct init_fs;
.cputimer = { \
.cputime = INIT_CPUTIME, \
.running = 0, \
- .lock = __SPIN_LOCK_UNLOCKED(sig.cputimer.lock), \
+ .lock = __RAW_SPIN_LOCK_UNLOCKED(sig.cputimer.lock), \
}, \
}

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8d4991b..ed5029e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -552,7 +552,7 @@ struct task_cputime {
struct thread_group_cputimer {
struct task_cputime cputime;
int running;
- spinlock_t lock;
+ raw_spinlock_t lock;
};

/*
@@ -2447,7 +2447,7 @@ void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times);
static inline void thread_group_cputime_init(struct signal_struct *sig)
{
sig->cputimer.cputime = INIT_CPUTIME;
- spin_lock_init(&sig->cputimer.lock);
+ raw_spin_lock_init(&sig->cputimer.lock);
sig->cputimer.running = 0;
}

diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
index 438ff45..359cc24 100644
--- a/kernel/posix-cpu-timers.c
+++ b/kernel/posix-cpu-timers.c
@@ -280,7 +280,7 @@ void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times)
struct task_cputime sum;
unsigned long flags;

- spin_lock_irqsave(&cputimer->lock, flags);
+ raw_spin_lock_irqsave(&cputimer->lock, flags);
if (!cputimer->running) {
cputimer->running = 1;
/*
@@ -293,7 +293,7 @@ void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times)
update_gt_cputime(&cputimer->cputime, &sum);
}
*times = cputimer->cputime;
- spin_unlock_irqrestore(&cputimer->lock, flags);
+ raw_spin_unlock_irqrestore(&cputimer->lock, flags);
}

/*
@@ -1068,9 +1068,9 @@ static void stop_process_timers(struct task_struct *tsk)
if (!cputimer->running)
return;

- spin_lock_irqsave(&cputimer->lock, flags);
+ raw_spin_lock_irqsave(&cputimer->lock, flags);
cputimer->running = 0;
- spin_unlock_irqrestore(&cputimer->lock, flags);
+ raw_spin_unlock_irqrestore(&cputimer->lock, flags);
}

static u32 onecputick;
diff --git a/kernel/sched_stats.h b/kernel/sched_stats.h
index 32d2bd4..9ecca2f 100644
--- a/kernel/sched_stats.h
+++ b/kernel/sched_stats.h
@@ -306,10 +306,10 @@ static inline void account_group_user_time(struct task_struct *tsk,
if (!cputimer->running)
return;

- spin_lock(&cputimer->lock);
+ raw_spin_lock(&cputimer->lock);
cputimer->cputime.utime =
cputime_add(cputimer->cputime.utime, cputime);
- spin_unlock(&cputimer->lock);
+ raw_spin_unlock(&cputimer->lock);
}

/**
@@ -336,10 +336,10 @@ static inline void account_group_system_time(struct task_struct *tsk,
if (!cputimer->running)
return;

- spin_lock(&cputimer->lock);
+ raw_spin_lock(&cputimer->lock);
cputimer->cputime.stime =
cputime_add(cputimer->cputime.stime, cputime);
- spin_unlock(&cputimer->lock);
+ raw_spin_unlock(&cputimer->lock);
}

/**
@@ -369,7 +369,7 @@ static inline void account_group_exec_runtime(struct task_struct *tsk,
if (!cputimer->running)
return;

- spin_lock(&cputimer->lock);
+ raw_spin_lock(&cputimer->lock);
cputimer->cputime.sum_exec_runtime += ns;
- spin_unlock(&cputimer->lock);
+ raw_spin_unlock(&cputimer->lock);
}
--
1.6.5.2
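
Two initialization styles recur in this patch and throughout the series,
and both have raw_ counterparts with unchanged semantics on mainline. A
minimal sketch (struct demo_timer and its fields are hypothetical names
modeled on the hunks above):

        struct demo_timer {
                int             running;
                raw_spinlock_t  lock;
        };

        /* Static initialization, as in the init_task.h hunk: */
        static struct demo_timer demo = {
                .running        = 0,
                .lock           = __RAW_SPIN_LOCK_UNLOCKED(demo.lock),
        };

        /* Runtime initialization, as in the sched.h hunk: */
        static void demo_timer_init(struct demo_timer *t)
        {
                t->running = 0;
                raw_spin_lock_init(&t->lock);
        }

The cputimer lock itself is taken from tick-time accounting paths that run
with interrupts disabled, which is why it cannot become a sleeping lock
on -rt.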

2010-01-11 21:32:32

by John Kacur

[permalink] [raw]
Subject: [PATCH 06/26] x86: Convert i8259A_lock to raw_spinlock

Convert locks which cannot sleep in preempt-rt to raw_spinlocks.

See also: 62dff70985cfec2c958f27c76f66ac2dfbfa3cef

Signed-off-by: John Kacur <[email protected]>
---
arch/x86/include/asm/i8259.h | 2 +-
arch/x86/kernel/apic/io_apic.c | 4 ++--
arch/x86/kernel/i8259.c | 30 +++++++++++++++---------------
arch/x86/kernel/time.c | 4 ++--
arch/x86/kernel/visws_quirks.c | 6 +++---
5 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
index 58d7091..7ec65b1 100644
--- a/arch/x86/include/asm/i8259.h
+++ b/arch/x86/include/asm/i8259.h
@@ -24,7 +24,7 @@ extern unsigned int cached_irq_mask;
#define SLAVE_ICW4_DEFAULT 0x01
#define PIC_ICW4_AEOI 2

-extern spinlock_t i8259A_lock;
+extern raw_spinlock_t i8259A_lock;

extern void init_8259A(int auto_eoi);
extern void enable_8259A_irq(unsigned int irq);
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 7719fa0..06757c8 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1830,7 +1830,7 @@ __apicdebuginit(void) print_PIC(void)

printk(KERN_DEBUG "\nprinting PIC contents\n");

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);

v = inb(0xa1) << 8 | inb(0x21);
printk(KERN_DEBUG "... PIC IMR: %04x\n", v);
@@ -1844,7 +1844,7 @@ __apicdebuginit(void) print_PIC(void)
outb(0x0a,0xa0);
outb(0x0a,0x20);

- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);

printk(KERN_DEBUG "... PIC ISR: %04x\n", v);

diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
index df89102..8c93a84 100644
--- a/arch/x86/kernel/i8259.c
+++ b/arch/x86/kernel/i8259.c
@@ -32,7 +32,7 @@
*/

static int i8259A_auto_eoi;
-DEFINE_SPINLOCK(i8259A_lock);
+DEFINE_RAW_SPINLOCK(i8259A_lock);
static void mask_and_ack_8259A(unsigned int);

struct irq_chip i8259A_chip = {
@@ -68,13 +68,13 @@ void disable_8259A_irq(unsigned int irq)
unsigned int mask = 1 << irq;
unsigned long flags;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);
cached_irq_mask |= mask;
if (irq & 8)
outb(cached_slave_mask, PIC_SLAVE_IMR);
else
outb(cached_master_mask, PIC_MASTER_IMR);
- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
}

void enable_8259A_irq(unsigned int irq)
@@ -82,13 +82,13 @@ void enable_8259A_irq(unsigned int irq)
unsigned int mask = ~(1 << irq);
unsigned long flags;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);
cached_irq_mask &= mask;
if (irq & 8)
outb(cached_slave_mask, PIC_SLAVE_IMR);
else
outb(cached_master_mask, PIC_MASTER_IMR);
- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
}

int i8259A_irq_pending(unsigned int irq)
@@ -97,12 +97,12 @@ int i8259A_irq_pending(unsigned int irq)
unsigned long flags;
int ret;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);
if (irq < 8)
ret = inb(PIC_MASTER_CMD) & mask;
else
ret = inb(PIC_SLAVE_CMD) & (mask >> 8);
- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);

return ret;
}
@@ -150,7 +150,7 @@ static void mask_and_ack_8259A(unsigned int irq)
unsigned int irqmask = 1 << irq;
unsigned long flags;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);
/*
* Lightweight spurious IRQ detection. We do not want
* to overdo spurious IRQ handling - it's usually a sign
@@ -183,7 +183,7 @@ handle_real_irq:
outb(cached_master_mask, PIC_MASTER_IMR);
outb(0x60+irq, PIC_MASTER_CMD); /* 'Specific EOI to master */
}
- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
return;

spurious_8259A_irq:
@@ -285,24 +285,24 @@ void mask_8259A(void)
{
unsigned long flags;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);

outb(0xff, PIC_MASTER_IMR); /* mask all of 8259A-1 */
outb(0xff, PIC_SLAVE_IMR); /* mask all of 8259A-2 */

- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
}

void unmask_8259A(void)
{
unsigned long flags;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);

outb(cached_master_mask, PIC_MASTER_IMR); /* restore master IRQ mask */
outb(cached_slave_mask, PIC_SLAVE_IMR); /* restore slave IRQ mask */

- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
}

void init_8259A(int auto_eoi)
@@ -311,7 +311,7 @@ void init_8259A(int auto_eoi)

i8259A_auto_eoi = auto_eoi;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);

outb(0xff, PIC_MASTER_IMR); /* mask all of 8259A-1 */
outb(0xff, PIC_SLAVE_IMR); /* mask all of 8259A-2 */
@@ -356,5 +356,5 @@ void init_8259A(int auto_eoi)
outb(cached_master_mask, PIC_MASTER_IMR); /* restore master IRQ mask */
outb(cached_slave_mask, PIC_SLAVE_IMR); /* restore slave IRQ mask */

- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
}
diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
index be25734..fb5cc5e 100644
--- a/arch/x86/kernel/time.c
+++ b/arch/x86/kernel/time.c
@@ -70,11 +70,11 @@ static irqreturn_t timer_interrupt(int irq, void *dev_id)
* manually to deassert NMI lines for the watchdog if run
* on an 82489DX-based system.
*/
- spin_lock(&i8259A_lock);
+ raw_spin_lock(&i8259A_lock);
outb(0x0c, PIC_MASTER_OCW3);
/* Ack the IRQ; AEOI will end it automatically. */
inb(PIC_MASTER_POLL);
- spin_unlock(&i8259A_lock);
+ raw_spin_unlock(&i8259A_lock);
}

global_clock_event->event_handler(global_clock_event);
diff --git a/arch/x86/kernel/visws_quirks.c b/arch/x86/kernel/visws_quirks.c
index 34a279a..ab38ce0 100644
--- a/arch/x86/kernel/visws_quirks.c
+++ b/arch/x86/kernel/visws_quirks.c
@@ -559,7 +559,7 @@ static irqreturn_t piix4_master_intr(int irq, void *dev_id)
struct irq_desc *desc;
unsigned long flags;

- spin_lock_irqsave(&i8259A_lock, flags);
+ raw_spin_lock_irqsave(&i8259A_lock, flags);

/* Find out what's interrupting in the PIIX4 master 8259 */
outb(0x0c, 0x20); /* OCW3 Poll command */
@@ -596,7 +596,7 @@ static irqreturn_t piix4_master_intr(int irq, void *dev_id)
outb(0x60 + realirq, 0x20);
}

- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);

desc = irq_to_desc(realirq);

@@ -614,7 +614,7 @@ static irqreturn_t piix4_master_intr(int irq, void *dev_id)
return IRQ_HANDLED;

out_unlock:
- spin_unlock_irqrestore(&i8259A_lock, flags);
+ raw_spin_unlock_irqrestore(&i8259A_lock, flags);
return IRQ_NONE;
}

--
1.6.5.2
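
This patch also shows the pattern for a lock shared across compilation
units: only the type of the declaration and the definition changes, and
every user keeps the same call shape. A minimal sketch with hypothetical
names (the outb() line mirrors the time.c hunk above):

        /* In a shared header: */
        extern raw_spinlock_t demo_lock;

        /* In exactly one .c file: */
        DEFINE_RAW_SPINLOCK(demo_lock);

        /* In any user, e.g. an interrupt handler poking the PIC: */
        raw_spin_lock(&demo_lock);
        outb(0x0c, PIC_MASTER_OCW3);    /* OCW3 poll command */
        raw_spin_unlock(&demo_lock);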

2010-01-11 21:33:08

by John Kacur

[permalink] [raw]
Subject: [PATCH 03/26] x86: Convert tlbstate_lock to raw_spinlock

Convert locks which cannot sleep in preempt-rt to raw_spinlocks.

See also: a74dbe8ff33a5ff1fbfdea84c4f8e2da56c7ebe2

Signed-off-by: John Kacur <[email protected]>
---
arch/x86/mm/tlb.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 65b58e4..426f3a1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -41,7 +41,7 @@ union smp_flush_state {
struct {
struct mm_struct *flush_mm;
unsigned long flush_va;
- spinlock_t tlbstate_lock;
+ raw_spinlock_t tlbstate_lock;
DECLARE_BITMAP(flush_cpumask, NR_CPUS);
};
char pad[INTERNODE_CACHE_BYTES];
@@ -181,7 +181,7 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
* num_online_cpus() <= NUM_INVALIDATE_TLB_VECTORS, but it is
* probably not worth checking this for a cache-hot lock.
*/
- spin_lock(&f->tlbstate_lock);
+ raw_spin_lock(&f->tlbstate_lock);

f->flush_mm = mm;
f->flush_va = va;
@@ -199,7 +199,7 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,

f->flush_mm = NULL;
f->flush_va = 0;
- spin_unlock(&f->tlbstate_lock);
+ raw_spin_unlock(&f->tlbstate_lock);
}

void native_flush_tlb_others(const struct cpumask *cpumask,
@@ -223,7 +223,7 @@ static int __cpuinit init_smp_flush(void)
int i;

for (i = 0; i < ARRAY_SIZE(flush_state); i++)
- spin_lock_init(&flush_state[i].tlbstate_lock);
+ raw_spin_lock_init(&flush_state[i].tlbstate_lock);

return 0;
}
--
1.6.5.2
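
Note the layout the lock lives in: each flush state, lock included, is
padded to a full internode cacheline so the per-vector locks never share a
line. A minimal sketch of that arrangement with hypothetical names (DEMO_NR
is made up; INTERNODE_CACHE_BYTES is the constant used in the hunk above):

        #define DEMO_NR 8                       /* hypothetical count */

        union demo_state {
                struct {
                        raw_spinlock_t  lock;
                        /* per-entry data lives here too */
                };
                char pad[INTERNODE_CACHE_BYTES]; /* avoid false sharing */
        };

        static union demo_state demo_state[DEMO_NR];

        static int __init demo_init(void)
        {
                int i;

                for (i = 0; i < DEMO_NR; i++)
                        raw_spin_lock_init(&demo_state[i].lock);
                return 0;
        }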

2010-01-11 21:50:18

by Paul Menage

[permalink] [raw]
Subject: Re: [PATCH 19/26] cgroups: Convert cgroups release_list_lock to raw_spinlock

Does this patch take the lock out of the scope of lockdep? Or is
raw_spinlock still high-level enough to support lockdep?

Paul

On Mon, Jan 11, 2010 at 1:26 PM, John Kacur <[email protected]> wrote:
> Convert locks which cannot sleep in preempt-rt to raw_spinlocks
>
> See also 58814bae5de64d5291b813ea0a52192e4fa714ad
>
> Signed-off-by: John Kacur <[email protected]>
> ---
>  kernel/cgroup.c |   18 +++++++++---------
>  1 files changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 0249f4b..32a80b2 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -204,7 +204,7 @@ list_for_each_entry(_root, &roots, root_list)
>  /* the list of cgroups eligible for automatic release. Protected by
>  * release_list_lock */
>  static LIST_HEAD(release_list);
> -static DEFINE_SPINLOCK(release_list_lock);
> +static DEFINE_RAW_SPINLOCK(release_list_lock);
>  static void cgroup_release_agent(struct work_struct *work);
>  static DECLARE_WORK(release_agent_work, cgroup_release_agent);
>  static void check_for_release(struct cgroup *cgrp);
> @@ -3151,11 +3151,11 @@ again:
>        finish_wait(&cgroup_rmdir_waitq, &wait);
>        clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
>
> -       spin_lock(&release_list_lock);
> +       raw_spin_lock(&release_list_lock);
>        set_bit(CGRP_REMOVED, &cgrp->flags);
>        if (!list_empty(&cgrp->release_list))
>                list_del(&cgrp->release_list);
> -       spin_unlock(&release_list_lock);
> +       raw_spin_unlock(&release_list_lock);
>
>        cgroup_lock_hierarchy(cgrp->root);
>        /* delete this cgroup from parent->children */
> @@ -3691,13 +3691,13 @@ static void check_for_release(struct cgroup *cgrp)
>                 * already queued for a userspace notification, queue
>                 * it now */
>                int need_schedule_work = 0;
> -               spin_lock(&release_list_lock);
> +               raw_spin_lock(&release_list_lock);
>                if (!cgroup_is_removed(cgrp) &&
>                    list_empty(&cgrp->release_list)) {
>                        list_add(&cgrp->release_list, &release_list);
>                        need_schedule_work = 1;
>                }
> -               spin_unlock(&release_list_lock);
> +               raw_spin_unlock(&release_list_lock);
>                if (need_schedule_work)
>                        schedule_work(&release_agent_work);
>        }
> @@ -3747,7 +3747,7 @@ static void cgroup_release_agent(struct work_struct *work)
>  {
>        BUG_ON(work != &release_agent_work);
>        mutex_lock(&cgroup_mutex);
> -       spin_lock(&release_list_lock);
> +       raw_spin_lock(&release_list_lock);
>        while (!list_empty(&release_list)) {
>                char *argv[3], *envp[3];
>                int i;
> @@ -3756,7 +3756,7 @@ static void cgroup_release_agent(struct work_struct *work)
>                                                    struct cgroup,
>                                                    release_list);
>                list_del_init(&cgrp->release_list);
> -               spin_unlock(&release_list_lock);
> +               raw_spin_unlock(&release_list_lock);
>                pathbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
>                if (!pathbuf)
>                        goto continue_free;
> @@ -3786,9 +3786,9 @@ static void cgroup_release_agent(struct work_struct *work)
>  continue_free:
>                kfree(pathbuf);
>                kfree(agentbuf);
> -               spin_lock(&release_list_lock);
> +               raw_spin_lock(&release_list_lock);
>        }
> -       spin_unlock(&release_list_lock);
> +       raw_spin_unlock(&release_list_lock);
>        mutex_unlock(&cgroup_mutex);
>  }
>
> --
> 1.6.5.2
>

2010-01-11 22:11:27

by John Kacur

[permalink] [raw]
Subject: Re: [PATCH 19/26] cgroups: Convert cgroups release_list_lock to raw_spinlock


----- "Paul Menage" <[email protected]> wrote:

> Does this patch take the lock out of the scope of lockdep? Or is
> raw_spinlock still high-level enough to support lockdep?

lockdep should work as before - in fact everything should work as before.
This is pretty much a no-op until preempt-rt changes are pushed upstream.
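
A simplified sketch of the mainline (!PREEMPT_RT) type layout shows why:
spinlock_t wraps a raw_spinlock_t, so both lock flavours funnel into the
same lockdep-instrumented code. This is only the shape of the definitions,
not the exact kernel source:

        typedef struct spinlock {
                struct raw_spinlock rlock;      /* the lockdep map lives here */
        } spinlock_t;

        #define spin_lock(lock)         raw_spin_lock(&(lock)->rlock)
        #define spin_unlock(lock)       raw_spin_unlock(&(lock)->rlock)
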
>
> Paul
>
> On Mon, Jan 11, 2010 at 1:26 PM, John Kacur <[email protected]> wrote:
> > Convert locks which cannot sleep in preempt-rt to raw_spinlocks
> >
> > See also 58814bae5de64d5291b813ea0a52192e4fa714ad
> >
> > Signed-off-by: John Kacur <[email protected]>
> > ---
> >  kernel/cgroup.c |   18 +++++++++---------
> >  1 files changed, 9 insertions(+), 9 deletions(-)
> >
> > diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> > index 0249f4b..32a80b2 100644
> > --- a/kernel/cgroup.c
> > +++ b/kernel/cgroup.c
> > @@ -204,7 +204,7 @@ list_for_each_entry(_root, &roots, root_list)
> >  /* the list of cgroups eligible for automatic release. Protected by
> >  * release_list_lock */
> >  static LIST_HEAD(release_list);
> > -static DEFINE_SPINLOCK(release_list_lock);
> > +static DEFINE_RAW_SPINLOCK(release_list_lock);
> >  static void cgroup_release_agent(struct work_struct *work);
> >  static DECLARE_WORK(release_agent_work, cgroup_release_agent);
> >  static void check_for_release(struct cgroup *cgrp);
> > @@ -3151,11 +3151,11 @@ again:
> >        finish_wait(&cgroup_rmdir_waitq, &wait);
> >        clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
> >
> > -       spin_lock(&release_list_lock);
> > +       raw_spin_lock(&release_list_lock);
> >        set_bit(CGRP_REMOVED, &cgrp->flags);
> >        if (!list_empty(&cgrp->release_list))
> >                list_del(&cgrp->release_list);
> > -       spin_unlock(&release_list_lock);
> > +       raw_spin_unlock(&release_list_lock);
> >
> >        cgroup_lock_hierarchy(cgrp->root);
> >        /* delete this cgroup from parent->children */
> > @@ -3691,13 +3691,13 @@ static void check_for_release(struct cgroup *cgrp)
> >                 * already queued for a userspace notification, queue
> >                 * it now */
> >                int need_schedule_work = 0;
> > -               spin_lock(&release_list_lock);
> > +               raw_spin_lock(&release_list_lock);
> >                if (!cgroup_is_removed(cgrp) &&
> >                    list_empty(&cgrp->release_list)) {
> >                        list_add(&cgrp->release_list, &release_list);
> >                        need_schedule_work = 1;
> >                }
> > -               spin_unlock(&release_list_lock);
> > +               raw_spin_unlock(&release_list_lock);
> >                if (need_schedule_work)
> >                        schedule_work(&release_agent_work);
> >        }
> > @@ -3747,7 +3747,7 @@ static void cgroup_release_agent(struct work_struct *work)
> >  {
> >        BUG_ON(work != &release_agent_work);
> >        mutex_lock(&cgroup_mutex);
> > -       spin_lock(&release_list_lock);
> > +       raw_spin_lock(&release_list_lock);
> >        while (!list_empty(&release_list)) {
> >                char *argv[3], *envp[3];
> >                int i;
> > @@ -3756,7 +3756,7 @@ static void cgroup_release_agent(struct work_struct *work)
> >                                                    struct cgroup,
> >                                                    release_list);
> >                list_del_init(&cgrp->release_list);
> > -               spin_unlock(&release_list_lock);
> > +               raw_spin_unlock(&release_list_lock);
> >                pathbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
> >                if (!pathbuf)
> >                        goto continue_free;
> > @@ -3786,9 +3786,9 @@ static void cgroup_release_agent(struct work_struct *work)
> >  continue_free:
> >                kfree(pathbuf);
> >                kfree(agentbuf);
> > -               spin_lock(&release_list_lock);
> > +               raw_spin_lock(&release_list_lock);
> >        }
> > -       spin_unlock(&release_list_lock);
> > +       raw_spin_unlock(&release_list_lock);
> >        mutex_unlock(&cgroup_mutex);
> >  }
> >
> > --
> > 1.6.5.2
> >

2010-01-12 03:24:35

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: [PATCH 00/26] Convert locks that can't sleep in -rt to raw_spinlock

On Mon, Jan 11, 2010 at 10:26:30PM +0100, John Kacur wrote:
> Thomas:
>
> Now that your changes that free up the raw_spinlock name are upstream.
> (described below for other readers)
>
> http://lwn.net/Articles/365863/
> http://lwn.net/Articles/366608/
>
> I wanted to forward port the preempt-rt patches that convert locks to
> atomic_spinlocks (rt tree only) to the new scheme.
>
> The patches below are a result of that effort.
> Please queue these up for 2.6.34 upstream, and please pull for preempt-rt
>
> You can pull them from
> git://git.kernel.org/pub/scm/linux/kernel/git/jkacur/jk-2.6.git
> jk/v2.6.33-rc3-raw-spinlocks
>
> Thanks
>
> John Kacur (25):
> xtime_lock: Convert atomic_seqlock to raw_seqlock, fix up all users
> x86: Convert tlbstate_lock to raw_spinlock
> sched: Convert thread_group_cputimer lock to raw_spinlock
> x86: Convert ioapic_lock and vector_lock to raw_spinlocks
> x86: Convert i8259A_lock to raw_spinlock
> x86: Convert pci_config_lock to raw_spinlock
> i8253: Convert i8253_lock to raw_spinlock
> x86: Convert set_atomicity_lock to raw_spinlock
> ACPI: Convert c3_lock to raw_spinlock
> rtmutex: Convert wait_lock and pi_lock to raw_spinlock
> printk: Convert lock to raw_spinlock
> genirq: Convert locks to raw_spinlocks
> trace: Convert various locks to raw_spinlock
> clocksource: Convert watchdog_lock to raw_spinlock
> timer_stats: Convert to raw_spinlocks
> x86: kvm: Convert i8254/i8259 locks to raw_spinlock
> x86 - nmi: Convert nmi_lock to raw_spinlock
> cgroups: Convert cgroups release_list_lock to raw_spinlock
> proportions: Convert spinlocks to raw_spinlocks.
> percpu_counter: Convert to raw_spinlock
> oprofile: Convert to raw_spinlock
> vgacon: Convert vga console lock to raw_spinlock
> pci-access: Convert pci_lock to raw_spinlock
> kprobes: Convert to raw_spinlocks
> softlockup: Convert to raw_spinlocks
>
> Thomas Gleixner (1):
> seqlock: Create raw_seqlock
>
> arch/alpha/kernel/time.c | 4 +-
> arch/arm/kernel/time.c | 12 ++--
> arch/arm/oprofile/common.c | 4 +-
> arch/arm/oprofile/op_model_mpcore.c | 4 +-
> arch/blackfin/kernel/time.c | 4 +-
> arch/cris/kernel/time.c | 4 +-
> arch/frv/kernel/time.c | 4 +-
> arch/h8300/kernel/time.c | 4 +-
> arch/ia64/kernel/time.c | 4 +-
> arch/ia64/xen/time.c | 4 +-
> arch/m32r/kernel/time.c | 4 +-
> arch/m68knommu/kernel/time.c | 4 +-
> arch/mips/include/asm/i8253.h | 2 +-
> arch/mips/kernel/i8253.c | 14 ++--
> arch/mn10300/kernel/time.c | 4 +-
> arch/parisc/kernel/time.c | 8 +-
> arch/powerpc/kernel/time.c | 4 +-
> arch/sparc/kernel/pcic.c | 8 +-
> arch/sparc/kernel/time_32.c | 12 ++--
> arch/x86/include/asm/i8253.h | 2 +-
> arch/x86/include/asm/i8259.h | 2 +-
> arch/x86/include/asm/pci_x86.h | 2 +-
> arch/x86/kernel/apic/io_apic.c | 106 +++++++++++++++++-----------------
> arch/x86/kernel/apic/nmi.c | 6 +-
> arch/x86/kernel/apm_32.c | 4 +-
> arch/x86/kernel/cpu/mtrr/generic.c | 6 +-
> arch/x86/kernel/i8253.c | 14 ++--
> arch/x86/kernel/i8259.c | 30 +++++-----
> arch/x86/kernel/time.c | 4 +-
> arch/x86/kernel/visws_quirks.c | 6 +-
> arch/x86/kvm/i8254.c | 10 ++--
> arch/x86/kvm/i8254.h | 2 +-
> arch/x86/kvm/i8259.c | 30 +++++-----
> arch/x86/kvm/irq.h | 2 +-
> arch/x86/kvm/x86.c | 8 +-
> arch/x86/mm/tlb.c | 8 +-
> arch/x86/oprofile/nmi_int.c | 4 +-
> arch/x86/pci/common.c | 2 +-
> arch/x86/pci/direct.c | 16 +++---
> arch/x86/pci/mmconfig_32.c | 8 +-
> arch/x86/pci/numaq_32.c | 8 +-
> arch/x86/pci/pcbios.c | 8 +-
> arch/xtensa/kernel/time.c | 4 +-
> drivers/acpi/processor_idle.c | 10 ++--
> drivers/block/hd.c | 4 +-
> drivers/input/gameport/gameport.c | 4 +-
> drivers/input/joystick/analog.c | 4 +-
> drivers/input/misc/pcspkr.c | 6 +-
> drivers/oprofile/event_buffer.c | 4 +-
> drivers/oprofile/oprofilefs.c | 6 +-
> drivers/pci/access.c | 34 ++++++------
> drivers/video/console/vgacon.c | 42 +++++++-------
> include/linux/init_task.h | 2 +-
> include/linux/kprobes.h | 2 +-
> include/linux/oprofile.h | 2 +-
> include/linux/percpu_counter.h | 2 +-
> include/linux/proportions.h | 6 +-
> include/linux/ratelimit.h | 4 +-
> include/linux/rtmutex.h | 2 +-
> include/linux/sched.h | 4 +-
> include/linux/seqlock.h | 86 +++++++++++++++++++++++++++-
> include/linux/time.h | 2 +-
> kernel/cgroup.c | 18 +++---
> kernel/hrtimer.c | 8 +-
> kernel/kprobes.c | 34 ++++++------
> kernel/posix-cpu-timers.c | 8 +-
> kernel/printk.c | 42 +++++++-------
> kernel/sched_stats.h | 12 ++--
> kernel/softlockup.c | 6 +-
> kernel/time.c | 8 +-
> kernel/time/clocksource.c | 26 ++++----
> kernel/time/ntp.c | 8 +-
> kernel/time/tick-common.c | 8 +-
> kernel/time/tick-sched.c | 12 ++--
> kernel/time/timekeeping.c | 50 ++++++++--------
> kernel/time/timer_stats.c | 6 +-
> kernel/trace/ring_buffer.c | 52 +++++++++---------
> kernel/trace/trace.c | 10 ++--
> kernel/trace/trace_irqsoff.c | 6 +-
> lib/percpu_counter.c | 18 +++---
> lib/proportions.c | 12 ++--
> lib/ratelimit.c | 4 +-
> sound/drivers/pcsp/pcsp.h | 2 +-
> sound/drivers/pcsp/pcsp_input.c | 4 +-
> sound/drivers/pcsp/pcsp_lib.c | 12 ++--
> 85 files changed, 535 insertions(+), 457 deletions(-)



Looking at this whole patchset, I have the feeling the
changelogs don't tell us much about why we do that.

I mean, I understand the general purpose of this patchset,
but taken individually, some of the patches leave me stuck
on existential questions.

Could you at least put a one-line, individual explanation
in each changelog that tells us why the spinlock in question
cannot sleep in preempt-rt? And maybe a comment in the code?
There are places where it is pretty obvious, such as the rq
lock. But some others...

Or maybe I'm just too ignorant, and these can't-sleep-in-rt
places are simply damn too obvious to deserve any individual
words... :-)

Thanks.

2010-01-12 03:39:37

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: [PATCH 14/26] trace: Convert various locks to raw_spinlock

On Mon, Jan 11, 2010 at 10:26:44PM +0100, John Kacur wrote:
> Convert locks that cannot sleep in preempt-rt to raw_spinlocks.
>
> See also: 87654a70523a8c5baadcbbc07d80cbae8f912837
>
> Signed-off-by: John Kacur <[email protected]>
> ---
> kernel/trace/ring_buffer.c | 52 +++++++++++++++++++++---------------------
> kernel/trace/trace.c | 10 ++++----
> kernel/trace/trace_irqsoff.c | 6 ++--
> 3 files changed, 34 insertions(+), 34 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 2326b04..ffaddc5 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -422,7 +422,7 @@ int ring_buffer_print_page_header(struct trace_seq *s)
> struct ring_buffer_per_cpu {
> int cpu;
> struct ring_buffer *buffer;
> - spinlock_t reader_lock; /* serialize readers */
> + raw_spinlock_t reader_lock; /* serialize readers */



Why this one? This is a reader lock, not taken in any tracing fast-path
places. Why should it never sleep in rt? It's taken by a reader of the ring
buffer, which doesn't seem to me to be involved in any critical work.

I may be wrong though; better to wait for Steve to correct me
if needed.

In any case, the changelog needs more details about the individual
purpose of this patch.


> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 0df1b0f..0c6bbcb 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -258,7 +258,7 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
> TRACE_ITER_GRAPH_TIME;
>
> static int trace_stop_count;
> -static DEFINE_SPINLOCK(tracing_start_lock);
> +static DEFINE_RAW_SPINLOCK(tracing_start_lock);


Same here. I don't understand why this one should never sleep in -rt.
This is not a critical lock. It is taken in rare and non-critical
places, mostly on the reader side when we open/release a trace
file, and also in tracing selftests.



> diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> index 2974bc7..60ba58e 100644
> --- a/kernel/trace/trace_irqsoff.c
> +++ b/kernel/trace/trace_irqsoff.c
> @@ -23,7 +23,7 @@ static int tracer_enabled __read_mostly;
>
> static DEFINE_PER_CPU(int, tracing_cpu);
>
> -static DEFINE_SPINLOCK(max_trace_lock);
> +static DEFINE_RAW_SPINLOCK(max_trace_lock);


But this one, yeah, does seem necessary, as it is involved
in the irqsoff tracing fast path.

This needs a comment though.

Thanks.

2010-01-12 03:49:53

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: [PATCH 00/26] Convert locks that can't sleep in -rt to raw_spinlock

On Tue, Jan 12, 2010 at 04:24:29AM +0100, Frederic Weisbecker wrote:
> On Mon, Jan 11, 2010 at 10:26:30PM +0100, John Kacur wrote:
> > Thomas:
> >
> > Now that your changes that free up the raw_spinlock name are upstream.
> > (described below for other readers)
> >
> > http://lwn.net/Articles/365863/
> > http://lwn.net/Articles/366608/
> >
> > I wanted to forward port the preempt-rt patches that convert locks to
> > atomic_spinlocks (rt tree only) to the new scheme.
> >
> > The patches below are a result of that effort.
> > Please queue these up for 2.6.34 upstream, and please pull for preempt-rt
> >
> > You can pull them from
> > git://git.kernel.org/pub/scm/linux/kernel/git/jkacur/jk-2.6.git
> > jk/v2.6.33-rc3-raw-spinlocks
> >
> > Thanks
> >
> > John Kacur (25):
> > xtime_lock: Convert atomic_seqlock to raw_seqlock, fix up all users
> > x86: Convert tlbstate_lock to raw_spinlock
> > sched: Convert thread_group_cputimer lock to raw_spinlock
> > x86: Convert ioapic_lock and vector_lock to raw_spinlocks
> > x86: Convert i8259A_lock to raw_spinlock
> > x86: Convert pci_config_lock to raw_spinlock
> > i8253: Convert i8253_lock to raw_spinlock
> > x86: Convert set_atomicity_lock to raw_spinlock
> > ACPI: Convert c3_lock to raw_spinlock
> > rtmutex: Convert wait_lock and pi_lock to raw_spinlock
> > printk: Convert lock to raw_spinlock
> > genirq: Convert locks to raw_spinlocks
> > trace: Convert various locks to raw_spinlock
> > clocksource: Convert watchdog_lock to raw_spinlock
> > timer_stats: Convert to raw_spinlocks
> > x86: kvm: Convert i8254/i8259 locks to raw_spinlock
> > x86 - nmi: Convert nmi_lock to raw_spinlock
> > cgroups: Convert cgroups release_list_lock to raw_spinlock
> > proportions: Convert spinlocks to raw_spinlocks.
> > percpu_counter: Convert to raw_spinlock
> > oprofile: Convert to raw_spinlock
> > vgacon: Convert vga console lock to raw_spinlock
> > pci-access: Convert pci_lock to raw_spinlock
> > kprobes: Convert to raw_spinlocks
> > softlockup: Convert to raw_spinlocks
> >
> > Thomas Gleixner (1):
> > seqlock: Create raw_seqlock
> >
> > arch/alpha/kernel/time.c | 4 +-
> > arch/arm/kernel/time.c | 12 ++--
> > arch/arm/oprofile/common.c | 4 +-
> > arch/arm/oprofile/op_model_mpcore.c | 4 +-
> > arch/blackfin/kernel/time.c | 4 +-
> > arch/cris/kernel/time.c | 4 +-
> > arch/frv/kernel/time.c | 4 +-
> > arch/h8300/kernel/time.c | 4 +-
> > arch/ia64/kernel/time.c | 4 +-
> > arch/ia64/xen/time.c | 4 +-
> > arch/m32r/kernel/time.c | 4 +-
> > arch/m68knommu/kernel/time.c | 4 +-
> > arch/mips/include/asm/i8253.h | 2 +-
> > arch/mips/kernel/i8253.c | 14 ++--
> > arch/mn10300/kernel/time.c | 4 +-
> > arch/parisc/kernel/time.c | 8 +-
> > arch/powerpc/kernel/time.c | 4 +-
> > arch/sparc/kernel/pcic.c | 8 +-
> > arch/sparc/kernel/time_32.c | 12 ++--
> > arch/x86/include/asm/i8253.h | 2 +-
> > arch/x86/include/asm/i8259.h | 2 +-
> > arch/x86/include/asm/pci_x86.h | 2 +-
> > arch/x86/kernel/apic/io_apic.c | 106 +++++++++++++++++-----------------
> > arch/x86/kernel/apic/nmi.c | 6 +-
> > arch/x86/kernel/apm_32.c | 4 +-
> > arch/x86/kernel/cpu/mtrr/generic.c | 6 +-
> > arch/x86/kernel/i8253.c | 14 ++--
> > arch/x86/kernel/i8259.c | 30 +++++-----
> > arch/x86/kernel/time.c | 4 +-
> > arch/x86/kernel/visws_quirks.c | 6 +-
> > arch/x86/kvm/i8254.c | 10 ++--
> > arch/x86/kvm/i8254.h | 2 +-
> > arch/x86/kvm/i8259.c | 30 +++++-----
> > arch/x86/kvm/irq.h | 2 +-
> > arch/x86/kvm/x86.c | 8 +-
> > arch/x86/mm/tlb.c | 8 +-
> > arch/x86/oprofile/nmi_int.c | 4 +-
> > arch/x86/pci/common.c | 2 +-
> > arch/x86/pci/direct.c | 16 +++---
> > arch/x86/pci/mmconfig_32.c | 8 +-
> > arch/x86/pci/numaq_32.c | 8 +-
> > arch/x86/pci/pcbios.c | 8 +-
> > arch/xtensa/kernel/time.c | 4 +-
> > drivers/acpi/processor_idle.c | 10 ++--
> > drivers/block/hd.c | 4 +-
> > drivers/input/gameport/gameport.c | 4 +-
> > drivers/input/joystick/analog.c | 4 +-
> > drivers/input/misc/pcspkr.c | 6 +-
> > drivers/oprofile/event_buffer.c | 4 +-
> > drivers/oprofile/oprofilefs.c | 6 +-
> > drivers/pci/access.c | 34 ++++++------
> > drivers/video/console/vgacon.c | 42 +++++++-------
> > include/linux/init_task.h | 2 +-
> > include/linux/kprobes.h | 2 +-
> > include/linux/oprofile.h | 2 +-
> > include/linux/percpu_counter.h | 2 +-
> > include/linux/proportions.h | 6 +-
> > include/linux/ratelimit.h | 4 +-
> > include/linux/rtmutex.h | 2 +-
> > include/linux/sched.h | 4 +-
> > include/linux/seqlock.h | 86 +++++++++++++++++++++++++++-
> > include/linux/time.h | 2 +-
> > kernel/cgroup.c | 18 +++---
> > kernel/hrtimer.c | 8 +-
> > kernel/kprobes.c | 34 ++++++------
> > kernel/posix-cpu-timers.c | 8 +-
> > kernel/printk.c | 42 +++++++-------
> > kernel/sched_stats.h | 12 ++--
> > kernel/softlockup.c | 6 +-
> > kernel/time.c | 8 +-
> > kernel/time/clocksource.c | 26 ++++----
> > kernel/time/ntp.c | 8 +-
> > kernel/time/tick-common.c | 8 +-
> > kernel/time/tick-sched.c | 12 ++--
> > kernel/time/timekeeping.c | 50 ++++++++--------
> > kernel/time/timer_stats.c | 6 +-
> > kernel/trace/ring_buffer.c | 52 +++++++++---------
> > kernel/trace/trace.c | 10 ++--
> > kernel/trace/trace_irqsoff.c | 6 +-
> > lib/percpu_counter.c | 18 +++---
> > lib/proportions.c | 12 ++--
> > lib/ratelimit.c | 4 +-
> > sound/drivers/pcsp/pcsp.h | 2 +-
> > sound/drivers/pcsp/pcsp_input.c | 4 +-
> > sound/drivers/pcsp/pcsp_lib.c | 12 ++--
> > 85 files changed, 535 insertions(+), 457 deletions(-)
>
>
>
> Looking at this whole patchset, I have the feeling the
> changelogs don't tell us much about why we do that.
>
> I mean, I understand the general purpose of this patchset,
> but taken individually, some of the patches leave me stuck
> on existential questions.
>
> Could you at least put a one-line, individual explanation
> in each changelog that tells us why the spinlock in question
> cannot sleep in preempt-rt? And maybe a comment in the code?
> There are places where it is pretty obvious, such as the rq
> lock. But some others...
>
> Or maybe I'm just too ignorant, and these can't-sleep-in-rt
> places are simply damn too obvious to deserve any individual
> words... :-)
>
> Thanks.
>


And please add people working on these files in Cc.

2010-01-12 03:54:51

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: [PATCH 25/26] kprobes: Convert to raw_spinlocks

On Mon, Jan 11, 2010 at 10:26:55PM +0100, John Kacur wrote:
> Convert locks which cannot be sleeping locks in preempt-rt to raw_spinlocks.
>
> See also dc23e836d8d25fe5aa4057d54dae2094fbc614f6
>
> Signed-off-by: John Kacur <[email protected]>
> ---
> include/linux/kprobes.h | 2 +-
> kernel/kprobes.c | 34 +++++++++++++++++-----------------
> 2 files changed, 18 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index 1b672f7..620df87 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -170,7 +170,7 @@ struct kretprobe {
> int nmissed;
> size_t data_size;
> struct hlist_head free_instances;
> - spinlock_t lock;
> + raw_spinlock_t lock;
> };


Indeed, this lock seems to be taken when a probe triggers, which
can happen in about every places/context.

Please add a comment to explain this though.

(Adding Masami in Cc).

2010-01-13 15:28:28

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH 14/26] trace: Convert various locks to raw_spinlock

On Tue, Jan 12, 2010 at 04:39:32AM +0100, Frederic Weisbecker wrote:
> On Mon, Jan 11, 2010 at 10:26:44PM +0100, John Kacur wrote:
> > Convert locks that cannot sleep in preempt-rt to raw_spinlocks.
> >
> > See also: 87654a70523a8c5baadcbbc07d80cbae8f912837
> >
> > Signed-off-by: John Kacur <[email protected]>
> > ---
> > kernel/trace/ring_buffer.c | 52 +++++++++++++++++++++---------------------
> > kernel/trace/trace.c | 10 ++++----
> > kernel/trace/trace_irqsoff.c | 6 ++--
> > 3 files changed, 34 insertions(+), 34 deletions(-)
> >
> > diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> > index 2326b04..ffaddc5 100644
> > --- a/kernel/trace/ring_buffer.c
> > +++ b/kernel/trace/ring_buffer.c
> > @@ -422,7 +422,7 @@ int ring_buffer_print_page_header(struct trace_seq *s)
> > struct ring_buffer_per_cpu {
> > int cpu;
> > struct ring_buffer *buffer;
> > - spinlock_t reader_lock; /* serialize readers */
> > + raw_spinlock_t reader_lock; /* serialize readers */
>
>
>
> Why this one? This is a reader lock, not taken in any tracing fast-path
> places. Why should it never sleep in rt? It's taken by a reader of the ring
> buffer, which doesn't seem to me to be involved in any critical work.
>
> I may be wrong though; better to wait for Steve to correct me
> if needed.
>
> In any case, the changelog needs more details about the individual
> purpose of this patch.

At first I would agree, but looking at where the lock is taken, the one place
that worries me is in ftrace_dump(). It can be called with interrupts
disabled, and if so, it will take this lock. If we let this lock convert
to a mutex, then it can cause issues when ftrace_dump() takes the lock with
interrupts disabled.
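
To make the hazard concrete, this is the hypothetical -rt call chain if
reader_lock were allowed to become a sleeping lock (a sketch, not actual
output):

        /*
         * ftrace_dump()
         *   local_irq_save(flags);                 IRQs hard-disabled
         *   spin_lock(&cpu_buffer->reader_lock);   an rtmutex on -rt
         *     rt_mutex_lock()                      may call schedule()
         *       -> "BUG: scheduling while atomic"
         *
         * Keeping reader_lock a raw_spinlock_t avoids the sleep entirely.
         */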


>
>
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 0df1b0f..0c6bbcb 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -258,7 +258,7 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
> > TRACE_ITER_GRAPH_TIME;
> >
> > static int trace_stop_count;
> > -static DEFINE_SPINLOCK(tracing_start_lock);
> > +static DEFINE_RAW_SPINLOCK(tracing_start_lock);
>
>
> Same here. I don't understand why this one should never sleep in -rt.
> This is not a critical lock. It is taken in rare and non-critical
> places, mostly on the reader side when we open/release a trace
> file, and also in tracing selftests.

This one I don't see as an issue in changing to a mutex in -rt.
So I agree with Frederic on this.


>
>
>
> > diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> > index 2974bc7..60ba58e 100644
> > --- a/kernel/trace/trace_irqsoff.c
> > +++ b/kernel/trace/trace_irqsoff.c
> > @@ -23,7 +23,7 @@ static int tracer_enabled __read_mostly;
> >
> > static DEFINE_PER_CPU(int, tracing_cpu);
> >
> > -static DEFINE_SPINLOCK(max_trace_lock);
> > +static DEFINE_RAW_SPINLOCK(max_trace_lock);
>
>
> But this one, yeah, does seem necessary, as it is involved
> in the irqsoff tracing fast path.
>
> This needs a comment though.

Yeah, you can simply say: this lock is taken by the interrupts-off latency
tracer and will always be taken with interrupts disabled.

-- Steve

2010-01-13 15:37:00

by John Kacur

[permalink] [raw]
Subject: Re: [PATCH 14/26] trace: Convert various locks to raw_spinlock



On Wed, 13 Jan 2010, Steven Rostedt wrote:

> On Tue, Jan 12, 2010 at 04:39:32AM +0100, Frederic Weisbecker wrote:
> > On Mon, Jan 11, 2010 at 10:26:44PM +0100, John Kacur wrote:
> > > Convert locks that cannot sleep in preempt-rt to raw_spinlocks.
> > >
> > > See also: 87654a70523a8c5baadcbbc07d80cbae8f912837
> > >
> > > Signed-off-by: John Kacur <[email protected]>
> > > ---
> > > kernel/trace/ring_buffer.c | 52 +++++++++++++++++++++---------------------
> > > kernel/trace/trace.c | 10 ++++----
> > > kernel/trace/trace_irqsoff.c | 6 ++--
> > > 3 files changed, 34 insertions(+), 34 deletions(-)
> > >
> > > diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> > > index 2326b04..ffaddc5 100644
> > > --- a/kernel/trace/ring_buffer.c
> > > +++ b/kernel/trace/ring_buffer.c
> > > @@ -422,7 +422,7 @@ int ring_buffer_print_page_header(struct trace_seq *s)
> > > struct ring_buffer_per_cpu {
> > > int cpu;
> > > struct ring_buffer *buffer;
> > > - spinlock_t reader_lock; /* serialize readers */
> > > + raw_spinlock_t reader_lock; /* serialize readers */
> >
> >
> >
> > Why this one? This is a reader lock, not taken in any tracing fast-path
> > places. Why should it never sleep in rt? It's taken by a reader of the ring
> > buffer, which doesn't seem to me to be involved in any critical work.
> >
> > I may be wrong though; better to wait for Steve to correct me
> > if needed.
> >
> > In any case, the changelog needs more details about the individual
> > purpose of this patch.
>
> At first I would agree, but looking at where the lock is taken, the one place
> that worries me is in ftrace_dump(). It can be called with interrupts
> disabled, and if so, it will take this lock. If we let this lock convert
> to a mutex, then it can cause issues when ftrace_dump() takes the lock with
> interrupts disabled.
>
>
> >
> >
> > > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > > index 0df1b0f..0c6bbcb 100644
> > > --- a/kernel/trace/trace.c
> > > +++ b/kernel/trace/trace.c
> > > @@ -258,7 +258,7 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
> > > TRACE_ITER_GRAPH_TIME;
> > >
> > > static int trace_stop_count;
> > > -static DEFINE_SPINLOCK(tracing_start_lock);
> > > +static DEFINE_RAW_SPINLOCK(tracing_start_lock);
> >
> >
> > Same here. I don't understand why this one should never sleep in -rt.
> > This is not a critical lock. It is taken in rare and non-critical
> > places, mostly on the reader side when we open/release a trace
> > file, and also in tracing selftests.
>
> This one I don't see as an issue in changing to a mutex in -rt.
> So I agree with Frederic on this.
>
>
> >
> >
> >
> > > diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> > > index 2974bc7..60ba58e 100644
> > > --- a/kernel/trace/trace_irqsoff.c
> > > +++ b/kernel/trace/trace_irqsoff.c
> > > @@ -23,7 +23,7 @@ static int tracer_enabled __read_mostly;
> > >
> > > static DEFINE_PER_CPU(int, tracing_cpu);
> > >
> > > -static DEFINE_SPINLOCK(max_trace_lock);
> > > +static DEFINE_RAW_SPINLOCK(max_trace_lock);
> >
> >
> > But this one, yeah, does seem necessary, as it is involved
> > in the irqsoff tracing fast path.
> >
> > This needs a comment though.
>
> Yeah, you can simply say: this lock is taken by the interrupts-off latency
> tracer and will always be taken with interrupts disabled.
>
> -- Steve
>
>
Thanks for the review, Steve and Frederic - I'll spin a new patch that
doesn't convert tracing_start_lock.

However, let's give it some good testing in preempt-rt, because we may
have forgotten over time why we converted these locks.

John
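
The concern above comes down to lock semantics under PREEMPT_RT: there,
spinlock_t is backed by a sleeping rt_mutex, while raw_spinlock_t keeps
the mainline busy-wait behaviour. Here is a minimal sketch of the hazard
Steve describes, i.e. taking a lock on a path that may run with
interrupts disabled, such as ftrace_dump(). The lock and function names
are illustrative only, not taken from the patches:

#include <linux/spinlock.h>

/* Stays a true spinning lock even on PREEMPT_RT. */
static DEFINE_RAW_SPINLOCK(demo_reader_lock);

static void demo_dump_path(void)
{
	unsigned long flags;

	/* A dump path may already run with interrupts disabled. */
	local_irq_save(flags);

	/*
	 * A plain spin_lock() here would map to a sleeping rt_mutex on
	 * PREEMPT_RT, so the task could schedule with interrupts off,
	 * which is exactly the bug the conversion avoids. raw_spin_lock()
	 * keeps non-sleeping semantics on both mainline and -rt.
	 */
	raw_spin_lock(&demo_reader_lock);
	/* ... copy out buffered trace data ... */
	raw_spin_unlock(&demo_reader_lock);

	local_irq_restore(flags);
}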

2010-01-13 15:41:58

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH 14/26] trace: Convert various locks to raw_spinlock

On Wed, 2010-01-13 at 16:36 +0100, John Kacur wrote:
>
> On Wed, 13 Jan 2010, Steven Rostedt wrote:

> >
> Thanks for the review, Steve and Frederic - I'll spin a new patch that
> doesn't convert tracing_start_lock.
>
> However, let's give it some good testing in preempt-rt, because we may
> have forgotten over time why we converted these locks.

Well, I think we just never unconverted them :-) I think we took the
paranoid approach and converted all locks in the trace directory to raw.
In fact, I think I even had a script to do it.

-- Steve

2010-01-13 18:09:52

by John Kacur

[permalink] [raw]
Subject: Re: [PATCH 14/26] trace: Convert various locks to raw_spinlock



On Wed, 13 Jan 2010, Steven Rostedt wrote:

> On Wed, 2010-01-13 at 16:36 +0100, John Kacur wrote:
> >
> > On Wed, 13 Jan 2010, Steven Rostedt wrote:
>
> > >
> > Thanks for the review, Steve and Frederic - I'll spin a new patch that
> > doesn't convert tracing_start_lock.
> >
> > However, let's give it some good testing in preempt-rt, because we may
> > have forgotten over time why we converted these locks.
>
> Well, I think we just never unconverted them :-) I think we took the
> paranoid approach and converted all locks in the trace directory to raw.
> In fact, I think I even had a script to do it.
>
> -- Steve

Okay, here is the new version of the patch.
Frederic and Steve, can I have Acked-bys before I push to git?

Thanks

From ee03cc607493e58f34bf93afa2b8a23da5510927 Mon Sep 17 00:00:00 2001
From: John Kacur <[email protected]>
Date: Sun, 10 Jan 2010 03:09:48 +0100
Subject: [PATCH] trace: Convert reader_lock and max_trace_lock to raw_spinlocks

Convert reader_lock and max_trace_lock to raw_spinlock.
These locks cannot sleep in preempt-rt because they are taken with
interrupts disabled. This should have zero impact on mainline.

See also: 87654a70523a8c5baadcbbc07d80cbae8f912837

Signed-off-by: John Kacur <[email protected]>
---
kernel/trace/ring_buffer.c | 53 +++++++++++++++++++++--------------------
kernel/trace/trace_irqsoff.c | 10 +++++--
2 files changed, 34 insertions(+), 29 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 2326b04..56b073c 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -422,7 +422,8 @@ int ring_buffer_print_page_header(struct trace_seq *s)
struct ring_buffer_per_cpu {
int cpu;
struct ring_buffer *buffer;
- spinlock_t reader_lock; /* serialize readers */
+ /* Can be called with interrupts disabled via ftrace_dump */
+ raw_spinlock_t reader_lock; /* serialize readers */
arch_spinlock_t lock;
struct lock_class_key lock_key;
struct list_head *pages;
@@ -996,7 +997,7 @@ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)

cpu_buffer->cpu = cpu;
cpu_buffer->buffer = buffer;
- spin_lock_init(&cpu_buffer->reader_lock);
+ raw_spin_lock_init(&cpu_buffer->reader_lock);
lockdep_set_class(&cpu_buffer->reader_lock, buffer->reader_lock_key);
cpu_buffer->lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;

@@ -1193,7 +1194,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
struct list_head *p;
unsigned i;

- spin_lock_irq(&cpu_buffer->reader_lock);
+ raw_spin_lock_irq(&cpu_buffer->reader_lock);
rb_head_page_deactivate(cpu_buffer);

for (i = 0; i < nr_pages; i++) {
@@ -1210,7 +1211,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
rb_reset_cpu(cpu_buffer);
rb_check_pages(cpu_buffer);

- spin_unlock_irq(&cpu_buffer->reader_lock);
+ raw_spin_unlock_irq(&cpu_buffer->reader_lock);
}

static void
@@ -1221,7 +1222,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
struct list_head *p;
unsigned i;

- spin_lock_irq(&cpu_buffer->reader_lock);
+ raw_spin_lock_irq(&cpu_buffer->reader_lock);
rb_head_page_deactivate(cpu_buffer);

for (i = 0; i < nr_pages; i++) {
@@ -1235,7 +1236,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
rb_reset_cpu(cpu_buffer);
rb_check_pages(cpu_buffer);

- spin_unlock_irq(&cpu_buffer->reader_lock);
+ raw_spin_unlock_irq(&cpu_buffer->reader_lock);
}

/**
@@ -2735,9 +2736,9 @@ void ring_buffer_iter_reset(struct ring_buffer_iter *iter)

cpu_buffer = iter->cpu_buffer;

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
rb_iter_reset(iter);
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
}
EXPORT_SYMBOL_GPL(ring_buffer_iter_reset);

@@ -3157,12 +3158,12 @@ ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts)
again:
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);
event = rb_buffer_peek(cpu_buffer, ts);
if (event && event->type_len == RINGBUF_TYPE_PADDING)
rb_advance_reader(cpu_buffer);
if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

if (event && event->type_len == RINGBUF_TYPE_PADDING)
@@ -3187,9 +3188,9 @@ ring_buffer_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
unsigned long flags;

again:
- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
event = rb_iter_peek(iter, ts);
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

if (event && event->type_len == RINGBUF_TYPE_PADDING)
goto again;
@@ -3225,14 +3226,14 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts)
cpu_buffer = buffer->buffers[cpu];
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);

event = rb_buffer_peek(cpu_buffer, ts);
if (event)
rb_advance_reader(cpu_buffer);

if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

out:
@@ -3278,11 +3279,11 @@ ring_buffer_read_start(struct ring_buffer *buffer, int cpu)
atomic_inc(&cpu_buffer->record_disabled);
synchronize_sched();

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
arch_spin_lock(&cpu_buffer->lock);
rb_iter_reset(iter);
arch_spin_unlock(&cpu_buffer->lock);
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

return iter;
}
@@ -3319,7 +3320,7 @@ ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts)
struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
unsigned long flags;

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
again:
event = rb_iter_peek(iter, ts);
if (!event)
@@ -3330,7 +3331,7 @@ ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts)

rb_advance_iter(iter);
out:
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

return event;
}
@@ -3396,7 +3397,7 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)

atomic_inc(&cpu_buffer->record_disabled);

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);

if (RB_WARN_ON(cpu_buffer, local_read(&cpu_buffer->committing)))
goto out;
@@ -3408,7 +3409,7 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)
arch_spin_unlock(&cpu_buffer->lock);

out:
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

atomic_dec(&cpu_buffer->record_disabled);
}
@@ -3446,10 +3447,10 @@ int ring_buffer_empty(struct ring_buffer *buffer)
cpu_buffer = buffer->buffers[cpu];
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);
ret = rb_per_cpu_empty(cpu_buffer);
if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

if (!ret)
@@ -3480,10 +3481,10 @@ int ring_buffer_empty_cpu(struct ring_buffer *buffer, int cpu)
cpu_buffer = buffer->buffers[cpu];
local_irq_save(flags);
if (dolock)
- spin_lock(&cpu_buffer->reader_lock);
+ raw_spin_lock(&cpu_buffer->reader_lock);
ret = rb_per_cpu_empty(cpu_buffer);
if (dolock)
- spin_unlock(&cpu_buffer->reader_lock);
+ raw_spin_unlock(&cpu_buffer->reader_lock);
local_irq_restore(flags);

return ret;
@@ -3678,7 +3679,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer,
if (!bpage)
goto out;

- spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);

reader = rb_get_reader_page(cpu_buffer);
if (!reader)
@@ -3753,7 +3754,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer,
ret = read;

out_unlock:
- spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

out:
return ret;
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 2974bc7..7125f71 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -23,7 +23,11 @@ static int tracer_enabled __read_mostly;

static DEFINE_PER_CPU(int, tracing_cpu);

-static DEFINE_SPINLOCK(max_trace_lock);
+/*
+ * This lock is always taken with interrupts disabled,
+ * and therefore cannot sleep in preempt-rt.
+ */
+static DEFINE_RAW_SPINLOCK(max_trace_lock);

enum {
TRACER_IRQS_OFF = (1 << 1),
@@ -144,7 +148,7 @@ check_critical_timing(struct trace_array *tr,
if (!report_latency(delta))
goto out;

- spin_lock_irqsave(&max_trace_lock, flags);
+ raw_spin_lock_irqsave(&max_trace_lock, flags);

/* check if we are still the max latency */
if (!report_latency(delta))
@@ -167,7 +171,7 @@ check_critical_timing(struct trace_array *tr,
max_sequence++;

out_unlock:
- spin_unlock_irqrestore(&max_trace_lock, flags);
+ raw_spin_unlock_irqrestore(&max_trace_lock, flags);

out:
data->critical_sequence = max_sequence;
--
1.6.5.2
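
A note on the conditional locking visible in the ring_buffer hunks above:
readers disable interrupts unconditionally but take reader_lock only when
a "dolock" flag says it is safe; that flag is computed outside the hunks
shown here. A distilled sketch of the converted pattern, using
illustrative names rather than the real ring_buffer fields:

#include <linux/spinlock.h>

/* Illustrative stand-in for the converted per-cpu reader_lock. */
static DEFINE_RAW_SPINLOCK(demo_lock);

static void demo_peek(int dolock)
{
	unsigned long flags;

	/* Interrupts go off unconditionally ... */
	local_irq_save(flags);

	/*
	 * ... but the lock is taken only when the caller has determined
	 * it is safe; the real ring_buffer code computes this outside
	 * the hunks shown above.
	 */
	if (dolock)
		raw_spin_lock(&demo_lock);

	/* ... peek at or consume the next event ... */

	if (dolock)
		raw_spin_unlock(&demo_lock);
	local_irq_restore(flags);
}

On mainline, spin_lock() is a thin wrapper around the same raw spinlock
implementation, which is why the changelog can promise zero impact there;
only PREEMPT_RT changes what spinlock_t means.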

2010-01-17 13:00:52

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: [PATCH 14/26] trace: Convert various locks to raw_spinlock

On Wed, Jan 13, 2010 at 07:09:22PM +0100, John Kacur wrote:
>
>
> On Wed, 13 Jan 2010, Steven Rostedt wrote:
>
> > On Wed, 2010-01-13 at 16:36 +0100, John Kacur wrote:
> > >
> > > On Wed, 13 Jan 2010, Steven Rostedt wrote:
> >
> > > >
> > > Thanks for the review, Steve and Frederic - I'll spin a new patch that
> > > doesn't convert tracing_start_lock.
> > >
> > > However, let's give it some good testing in preempt-rt, because we may
> > > have forgotten over time why we converted these locks.
> >
> > Well, I think we just never unconverted them :-) I think we took the
> > paranoid approach and converted all locks in the trace directory to raw.
> > In fact, I think I even had a script to do it.
> >
> > -- Steve
>
> Okay, here is the new version of the patch.
> Frederic and Steve, can I have Acked-bys before I push to git?
>
> Thanks
>
> From ee03cc607493e58f34bf93afa2b8a23da5510927 Mon Sep 17 00:00:00 2001
> From: John Kacur <[email protected]>
> Date: Sun, 10 Jan 2010 03:09:48 +0100
> Subject: [PATCH] trace: Convert reader_lock and max_trace_lock to raw_spinlocks
>
> Convert reader_lock and max_trace_lock to raw_spinlock.
> These locks cannot sleep in preempt-rt because they are taken with
> interrupts disabled. This should have zero impact on mainline.
>
> See also: 87654a70523a8c5baadcbbc07d80cbae8f912837
>
> Signed-off-by: John Kacur <[email protected]>



Acked-by: Frederic Weisbecker <[email protected]>