This patch replaces the flush_map() function in arch/i386/mm/pageattr.c with flush_tlb_all() calls, as flush_map() tries to do the same thing but is missing the preempt_disable() and preempt_enable() calls.
To minimize future inconsistency I think this patch should be applied...
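For reference, the pattern in question looks like this (the same lines are visible in the pageattr.c hunk of the patch quoted later in this thread). With kernel preemption enabled, the task can be preempted and migrated between the cross-call and the local call, so the CPU the IPIs were sent from is never flushed:

/* current code in arch/i386/mm/pageattr.c (see the pageattr.c hunk below) */
static inline void flush_map(void)
{
#ifdef CONFIG_SMP
	smp_call_function(flush_kernel_map, NULL, 1, 1);
#endif
	flush_kernel_map(NULL);		/* may already be running on a different CPU */
}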
Best regards
Thomas Schlichter
Thomas Schlichter <[email protected]> wrote:
>
> This patch replaces the flush_map() function in arch/i386/mm/pageattr.c
> with flush_tlb_all() calls, as flush_map() tries to do the same thing but
> is missing the preempt_disable() and preempt_enable() calls.
>
flush_kernel_map(), which flush_map() calls, is doing a wbinvd:
> -static void flush_kernel_map(void *dummy)
> -{
> - /* Could use CLFLUSH here if the CPU supports it (Hammer,P4) */
> - if (boot_cpu_data.x86_model >= 4)
> - asm volatile("wbinvd":::"memory");
It appears that by replacing flush_map() with flush_tlb_all(), you are no
longer doing this?
On Thu, Feb 20, 2003 at 21:36, Dave Jones wrote:
> This looks bogus. You're killing the wbinvd() in flush_kernel_map() which
> is needed.
I must admit I don't know the wbinvd() instruction exactly, but as the comment
says:
/* Could use CLFLUSH here if the CPU supports it (Hammer,P4) */
I thought it was not NEEDED, just a COULD...
Thomas
On Thu, Feb 20, 2003 at 09:00:05PM +0100, Thomas Schlichter wrote:
> This patch replaces the flush_map() function in arch/i386/mm/pageattr.c with flush_tlb_all() calls, as flush_map() tries to do the same thing but is missing the preempt_disable() and preempt_enable() calls.
>
> To minimize future inconsistency I think this patch should be applied...
This looks bogus. You're killing the wbinvd() in flush_kernel_map() which
is needed.
Dave
--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs
On Thu, Feb 20, 2003 at 09:30:55PM +0100, Thomas Schlichter wrote:
> > This looks bogus. You're killing the wbinvd() in flush_kernel_map() which
> > is needed.
> I must admit I don't know the wbinvd() instruction exactly, but as the comment
> says:
> /* Could use CLFLUSH here if the CPU supports it (Hammer,P4) */
>
> I thought it was not NEEDED, just a COULD...
It's hinting at a possible optimisation, not saying that it is unneeded.
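To spell out what the comment is hinting at (a sketch only; the helper name, feature test and clflush-size field used here are assumed, not something from this thread or patch):

/* sketch of the hinted optimisation -- illustrative names only */
static void flush_one_page(void *addr)
{
	int i;

	if (cpu_has_clflush) {
		/* flush just this page's cachelines ... */
		for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
			asm volatile("clflush %0" : "+m" (*((char *)addr + i)));
	} else {
		/* ... instead of writing back and invalidating the whole cache */
		asm volatile("wbinvd" ::: "memory");
	}
}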
Dave
--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs
On Thu, Feb 20, 2003 21:50, Dave Jones wrote:
> It's hinting at a possible optimisation, not saying that it is unneeded.
OK, sorry, then I just misunderstood the comment...
So here is a minimal-change patch that should solve the preempt issue in
flush_map().
Instead of just adding a preempt_disable() before and a preempt_enable() after
the flush_kernel_map() calls, I changed the order so that the preemption point
is not between them...
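The patch is not quoted here, but the reordering amounts to roughly this (a sketch):

/* sketch of the reordering described above: local flush first, then the IPIs */
static inline void flush_map(void)
{
	flush_kernel_map(NULL);			/* flush the CPU we are on right now */
#ifdef CONFIG_SMP
	smp_call_function(flush_kernel_map, NULL, 1, 1);
#endif
}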
Thomas
On Fri, 21 Feb 2003, Thomas Schlichter wrote:
>
> So here is a minimal change patch that should solve the preempt issue in
> flush_map().
>
> Instead of just doing a preempt_disable() before and a preempt_enable()
> after
> the flush_kernel_map() calls I just changed the order so that the preempt
> point is not between them...
No. All that does is make sure that the cpu you start out on is
flushed, once or twice, and the cpu you end up on may be missed.
Use preempt_disable and preempt_enable.
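That is, roughly (a sketch of what the suggested fix amounts to):

/* sketch: no preemption point between sending the IPIs and the local flush */
static inline void flush_map(void)
{
	preempt_disable();
#ifdef CONFIG_SMP
	smp_call_function(flush_kernel_map, NULL, 1, 1);
#endif
	flush_kernel_map(NULL);
	preempt_enable();
}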
Hugh
On Fri, 21 Feb 2003, Hugh Dickins wrote:
> No. All that does is make sure that the cpu you start out on is
> flushed, once or twice, and the cpu you end up on may be missed.
> Use preempt_disable and preempt_enable.
Oh, you are right! I think I am totally stupid this morning...!
Now finally I hope this is the correct patch...
Thomas Schlichter
On Fri, Feb 21, 2003 at 01:42:12PM +0100, Thomas Schlichter wrote:
> > No. All that does is make sure that the cpu you start out on is
> > flushed, once or twice, and the cpu you end up on may be missed.
> > Use preempt_disable and preempt_enable.
>
> Oh, you are right! I think I am totally stupid this morning...!
> Now finally I hope this is the correct patch...
That would appear to do what you want, but it's an ugly construct to
be repeating everywhere that wants to call a function on all CPUs.
It would probably clean things up a lot if we had a function to do..
static inline void on_each_cpu(void (*func)(void *))
{
#ifdef CONFIG_SMP
	preempt_disable();
	smp_call_function(func, NULL, 1, 1);
	func(NULL);
	preempt_enable();
#else
	func(NULL);
#endif
}
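A caller like flush_map() would then collapse to something like this (a sketch, using the prototype above):

static inline void flush_map(void)
{
	/* #ifdef, IPI and local call all handled in one place */
	on_each_cpu(flush_kernel_map);
}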
Bluesmoke and agpgart could both use this to clean up some mess,
and no doubt there are others.
Comments?
Dave
--
| Dave Jones. http://www.codemonkey.org.uk
Yes, you are right. I was just looking into this preempt problem where a
flush_tlb* was done, but there are many other places where this problem
occurs, too... Nearly everywhere smp_call_function() is used!
I found a function in the file mm/slab.c called smp_call_function_all_cpus()
which tries to do the thing we want, but I think not even this function is
preempt-safe...!
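From memory the pattern there is roughly the following (the exact 2.5 code may differ), and it has the same hole:

/* rough reconstruction of mm/slab.c's helper -- the exact code may differ */
static void smp_call_function_all_cpus(void (*func)(void *arg), void *arg)
{
	local_irq_disable();
	func(arg);			/* run locally ... */
	local_irq_enable();
					/* <-- preemption/migration possible here */
	if (smp_call_function(func, arg, 1, 1))
		BUG();			/* ... then IPI everyone but "this" CPU */
}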
But here I think I had better stop typing; I just wanted to help a bit with
this issue, and I don't think I have the time and knowledge to solve it
completely in the good way mentioned above...
Perhaps even the semantics of smp_call_function() could be changed
to call the function on every CPU? Just an idea...
Best regards
Thomas Schlichter
On Fri, 21 Feb 2003 15:20, Dave Jones wrote:
> On Fri, Feb 21, 2003 at 01:42:12PM +0100, Thomas Schlichter wrote:
> > > No. All that does is make sure that the cpu you start out on is
> > > flushed, once or twice, and the cpu you end up on may be missed.
> > > Use preempt_disable and preempt_enable.
> >
> > Oh, you are right! I think I am totally stupid this morning...!
> > Now finally I hope this is the correct patch...
>
> That would appear to do what you want, but it's an ugly construct to
> be repeating everywhere that wants to call a function on all CPUs.
> It would probably clean things up a lot if we had a function to do..
>
> static inline void on_each_cpu(void (*func)(void *))
> {
> #ifdef CONFIG_SMP
> 	preempt_disable();
> 	smp_call_function(func, NULL, 1, 1);
> 	func(NULL);
> 	preempt_enable();
> #else
> 	func(NULL);
> #endif
> }
>
> Bluesmoke and agpgart could both use this to clean up some mess,
> and no doubt there are others.
>
> Comments?
>
> Dave
On Fri, Feb 21, 2003 at 03:25:01PM +0100, Thomas Schlichter wrote:
> I found a function in the file mm/slab.c called smp_call_function_all_cpus()
> which tries to do the thing we want, but I think not even this function is
> preempt-safe...!
That certainly looks, erm, odd.
> Perhaps even the semantics of smp_call_function() could be changed
> to call the function on every CPU? Just an idea...
Some places that use it actually rely on it not executing on the local CPU.
It would also break backwards compatibility for no good reason.
Dave
--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs
On Fri, 21 Feb 2003, Dave Jones wrote:
> It would probably clean things up a lot if we had a function to do..
>
> static inline void on_each_cpu(void (*func)(void *))
> {
> #ifdef CONFIG_SMP
> 	preempt_disable();
> 	smp_call_function(func, NULL, 1, 1);
> 	func(NULL);
> 	preempt_enable();
> #else
> 	func(NULL);
> #endif
> }
Of course that's much, much better. But I think it would be rather better as
static inline void on_each_cpu(void (*func) (void *info), void *info)
passing info to func instead of assuming NULL. inline? Maybe.
Hugh
On Fri, Feb 21, 2003 at 02:52:38PM +0000, Hugh Dickins wrote:
> Of course that's much, much better. But I think it would be rather better as
> static inline void on_each_cpu(void (*func) (void *info), void *info)
> passing info to func instead of assuming NULL. inline? Maybe.
Sounds good to me. As for inline: this thing will be generating lots
of IPIs on SMP, so it's hardly going to be performance-critical, and
we could leave it out of line IMO.
Dave
--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs
On Fri, Feb 21, 2003 at 02:20:39PM +0000, Dave Jones wrote:
> That would appear to do what you want, but its an ugly construct to
> be repeating everywhere that wants to call a function on all CPUs.
> It would probably clean things up a lot if we had a function to do..
Ok, here's a first stab at an implementation. Compiles, but is untested..
Fixes up a few preemption races Thomas highlighted, and converts
a few smp_call_function() users over to on_each_cpu(), which
saves quite a bit of code.
Dave
diff -Nru a/arch/i386/kernel/cpuid.c b/arch/i386/kernel/cpuid.c
--- a/arch/i386/kernel/cpuid.c Fri Feb 21 17:16:00 2003
+++ b/arch/i386/kernel/cpuid.c Fri Feb 21 17:16:00 2003
@@ -64,6 +64,7 @@
{
struct cpuid_command cmd;
+ preempt_disable();
if ( cpu == smp_processor_id() ) {
cpuid(reg, &data[0], &data[1], &data[2], &data[3]);
} else {
@@ -73,6 +74,7 @@
smp_call_function(cpuid_smp_cpuid, &cmd, 1, 1);
}
+ preempt_enable();
}
#else /* ! CONFIG_SMP */
diff -Nru a/arch/i386/kernel/microcode.c b/arch/i386/kernel/microcode.c
--- a/arch/i386/kernel/microcode.c Fri Feb 21 17:16:00 2003
+++ b/arch/i386/kernel/microcode.c Fri Feb 21 17:16:00 2003
@@ -183,11 +183,8 @@
int i, error = 0, err;
struct microcode *m;
- if (smp_call_function(do_update_one, NULL, 1, 1) != 0) {
- printk(KERN_ERR "microcode: IPI timeout, giving up\n");
+ if (on_each_cpu(do_update_one, NULL)==-1)
return -EIO;
- }
- do_update_one(NULL);
for (i=0; i<NR_CPUS; i++) {
err = update_req[i].err;
diff -Nru a/arch/i386/kernel/msr.c b/arch/i386/kernel/msr.c
--- a/arch/i386/kernel/msr.c Fri Feb 21 17:16:00 2003
+++ b/arch/i386/kernel/msr.c Fri Feb 21 17:16:00 2003
@@ -115,9 +115,13 @@
static inline int do_wrmsr(int cpu, u32 reg, u32 eax, u32 edx)
{
struct msr_command cmd;
+ int ret;
+ preempt_disable();
if ( cpu == smp_processor_id() ) {
- return wrmsr_eio(reg, eax, edx);
+ ret = wrmsr_eio(reg, eax, edx);
+ preempt_enable();
+ return ret;
} else {
cmd.cpu = cpu;
cmd.reg = reg;
@@ -125,6 +129,7 @@
cmd.data[1] = edx;
smp_call_function(msr_smp_wrmsr, &cmd, 1, 1);
+ preempt_enable();
return cmd.err;
}
}
diff -Nru a/arch/i386/kernel/smp.c b/arch/i386/kernel/smp.c
--- a/arch/i386/kernel/smp.c Fri Feb 21 17:16:00 2003
+++ b/arch/i386/kernel/smp.c Fri Feb 21 17:16:00 2003
@@ -547,6 +547,41 @@
return 0;
}
+/*
+ * [SUMMARY] Run a function on every CPU.
+ * <func> The function to run. This must be fast and non-blocking.
+ * <info> An arbitrary pointer to pass to the function.
+ * <nonatomic> currently unused.
+ * [RETURNS] 0 on success, else a negative status code. Does not return until
+ * remote CPUs are nearly ready to execute <<func>> or are or have executed.
+ *
+ * You must not call this function with disabled interrupts or from a
+ * hardware interrupt handler or from a bottom half handler.
+ */
+int on_each_cpu(void (*func) (void *info), void *info)
+{
+#ifdef CONFIG_SMP
+ preempt_disable();
+
+ if (num_online_cpus() == 1)
+ goto only_one;
+
+ if (smp_call_function(func, info, 1, 1) != 0) {
+ printk (KERN_ERR "%p: IPI timeout, giving up\n",
+ __builtin_return_address(0));
+ preempt_enable();
+ return -1;
+ }
+
+only_one:
+ func(info);
+ preempt_enable();
+#else
+ func(info);
+#endif
+ return 0;
+}
+
static void stop_this_cpu (void * dummy)
{
/*
diff -Nru a/arch/i386/kernel/sysenter.c b/arch/i386/kernel/sysenter.c
--- a/arch/i386/kernel/sysenter.c Fri Feb 21 17:16:00 2003
+++ b/arch/i386/kernel/sysenter.c Fri Feb 21 17:16:00 2003
@@ -17,6 +17,7 @@
#include <asm/msr.h>
#include <asm/pgtable.h>
#include <asm/unistd.h>
+#include <asm/smp.h>
extern asmlinkage void sysenter_entry(void);
@@ -97,8 +98,7 @@
return 0;
memcpy((void *) page, sysent, sizeof(sysent));
- enable_sep_cpu(NULL);
- smp_call_function(enable_sep_cpu, NULL, 1, 1);
+ on_each_cpu (enable_sep_cpu, NULL);
return 0;
}
diff -Nru a/arch/i386/mm/pageattr.c b/arch/i386/mm/pageattr.c
--- a/arch/i386/mm/pageattr.c Fri Feb 21 17:16:00 2003
+++ b/arch/i386/mm/pageattr.c Fri Feb 21 17:16:00 2003
@@ -131,10 +131,7 @@
static inline void flush_map(void)
{
-#ifdef CONFIG_SMP
- smp_call_function(flush_kernel_map, NULL, 1, 1);
-#endif
- flush_kernel_map(NULL);
+ on_each_cpu(flush_kernel_map, NULL);
}
struct deferred_page {
diff -Nru a/arch/x86_64/mm/pageattr.c b/arch/x86_64/mm/pageattr.c
--- a/arch/x86_64/mm/pageattr.c Fri Feb 21 17:16:00 2003
+++ b/arch/x86_64/mm/pageattr.c Fri Feb 21 17:16:00 2003
@@ -123,10 +123,7 @@
static inline void flush_map(unsigned long address)
{
-#ifdef CONFIG_SMP
- smp_call_function(flush_kernel_map, (void *)address, 1, 1);
-#endif
- flush_kernel_map((void *)address);
+ on_each_cpu(flush_kernel_map,(void *)address);
}
struct deferred_page {
diff -Nru a/drivers/char/agp/agp.h b/drivers/char/agp/agp.h
--- a/drivers/char/agp/agp.h Fri Feb 21 17:16:00 2003
+++ b/drivers/char/agp/agp.h Fri Feb 21 17:16:00 2003
@@ -34,24 +34,10 @@
#define PFX "agpgart: "
-#ifdef CONFIG_SMP
-static void ipi_handler(void *null)
-{
- flush_agp_cache();
-}
-
-static void __attribute__((unused)) global_cache_flush(void)
-{
- if (smp_call_function(ipi_handler, NULL, 1, 1) != 0)
- panic(PFX "timed out waiting for the other CPUs!\n");
- flush_agp_cache();
-}
-#else
static inline void global_cache_flush(void)
{
- flush_agp_cache();
+ on_each_cpu(flush_agp_cache, NULL);
}
-#endif /* !CONFIG_SMP */
enum aper_size_type {
U8_APER_SIZE,
diff -Nru a/include/asm-i386/agp.h b/include/asm-i386/agp.h
--- a/include/asm-i386/agp.h Fri Feb 21 17:16:00 2003
+++ b/include/asm-i386/agp.h Fri Feb 21 17:16:00 2003
@@ -18,6 +18,8 @@
/* Could use CLFLUSH here if the cpu supports it. But then it would
need to be called for each cacheline of the whole page so it may not be
worth it. Would need a page for it. */
-#define flush_agp_cache() asm volatile("wbinvd":::"memory")
-
+static void flush_agp_cache(void *info)
+{
+ __asm__ __volatile__ ("wbinvd": : :"memory");
+}
#endif
diff -Nru a/include/asm-i386/smp.h b/include/asm-i386/smp.h
--- a/include/asm-i386/smp.h Fri Feb 21 17:16:00 2003
+++ b/include/asm-i386/smp.h Fri Feb 21 17:16:00 2003
@@ -46,6 +46,8 @@
extern void (*mtrr_hook) (void);
extern void zap_low_mappings (void);
+extern int on_each_cpu(void (*func) (void *info), void *info);
+
#define MAX_APICID 256
/*
diff -Nru a/include/asm-x86_64/agp.h b/include/asm-x86_64/agp.h
--- a/include/asm-x86_64/agp.h Fri Feb 21 17:16:00 2003
+++ b/include/asm-x86_64/agp.h Fri Feb 21 17:16:00 2003
@@ -18,6 +18,8 @@
/* Could use CLFLUSH here if the cpu supports it. But then it would
need to be called for each cacheline of the whole page so it may not be
worth it. Would need a page for it. */
-#define flush_agp_cache() asm volatile("wbinvd":::"memory")
-
+static void flush_agp_cache(void *info)
+{
+ __asm__ __volatile__ ("wbinvd": : :"memory");
+}
#endif
diff -Nru a/include/asm-x86_64/smp.h b/include/asm-x86_64/smp.h
--- a/include/asm-x86_64/smp.h Fri Feb 21 17:16:00 2003
+++ b/include/asm-x86_64/smp.h Fri Feb 21 17:16:00 2003
@@ -46,6 +46,8 @@
extern void (*mtrr_hook) (void);
extern void zap_low_mappings(void);
+extern int on_each_cpu(void (*func) (void *info), void *info);
+
#define SMP_TRAMPOLINE_BASE 0x6000
/*
--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs
On Fri, Feb 21, 2003 at 18:36, Dave Jones wrote:
> Ok, here's a first stab at an implementation. Compiles, but is untested..
> Fixes up a few preemption races Thomas highlighted, and converts
> a few smp_call_function() users over to on_each_cpu(), which
> saves quite a bit of code.
Your patch was really fine; I just modified it a bit and fixed some more
preempt issues with the smp_call_function() calls. It compiles and works
with no problems so far... I hope I did not make any big mistakes... ;-)
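As an example of the kind of additional fix meant here (a sketch only, not the actual follow-up patch), the read-side counterpart of the do_wrmsr() hunk above would get the same treatment:

/* sketch only -- names mirror the do_wrmsr() hunk in the patch above */
static inline int do_rdmsr(int cpu, u32 reg, u32 *eax, u32 *edx)
{
	struct msr_command cmd;
	int ret;

	preempt_disable();
	if (cpu == smp_processor_id()) {
		ret = rdmsr_eio(reg, eax, edx);
		preempt_enable();
		return ret;
	}
	cmd.cpu = cpu;
	cmd.reg = reg;
	smp_call_function(msr_smp_rdmsr, &cmd, 1, 1);
	*eax = cmd.data[0];
	*edx = cmd.data[1];
	preempt_enable();
	return cmd.err;
}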
I hope you like this one, too...
Thomas