[PATCH] x86: disable_mtrr_trim only needed for x86_64
mtrr_trim_uncached_memory is only used on x86_64,
so disable_mtrr_trim is not needed for x86_32
Signed-off-by: Yinghai Lu <[email protected]>
diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
index 46763e3..608c88a 100644
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -624,6 +624,7 @@ static struct sysdev_driver mtrr_sysdev_driver = {
.resume = mtrr_restore,
};
+#ifdef CONFIG_X86_64
static int disable_mtrr_trim;
static int __init disable_mtrr_trim_setup(char *str)
@@ -633,7 +634,6 @@ static int __init disable_mtrr_trim_setup(char *str)
}
early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
-#ifdef CONFIG_X86_64
/**
* mtrr_trim_uncached_memory - trim RAM not covered by MTRRs
*
Yinghai Lu wrote:
> [PATCH] x86: disable_mtrr_trim only needed for x86_64
>
> mtrr_trim_uncached_memory is only used on x86_64,
>
> so disable_mtrr_trim is not needed for x86_32
>
> Signed-off-by: Yinghai Lu <[email protected]>
That seems like a bug, and if so this patch goes the wrong direction.
(If the 32-bit code has another solution for the same problem, they
should be unified.)
The trimming of uncachable memory affects both 32- and 64-bit kernels;
it's the same hardware, and even 32-bit kernels (with PAE) can access
memory above 4 GB.
-hpa
On Jan 19, 2008 9:37 PM, H. Peter Anvin <[email protected]> wrote:
> Yinghai Lu wrote:
> > [PATCH] x86: disable_mtrr_trim only needed for x86_64
> >
> > mtrr_trim_uncached_memory is only used on x86_64,
> >
> > so disable_mtrr_trim is not needed for x86_32
>
> That seems like a bug, and if so this patch goes the wrong direction.
> (If the 32-bit code has another solution for the same problem, they
> should be unified.)
>
> The trimming of uncachable memory affects both 32- and 64-bit kernels;
> it's the same hardware, and even 32-bit kernels (with PAE) can access
> memory above 4 GB.
>
The trim fix for x86_64 updates end_pfn and uses that as max_pfn and
max_low_pfn in setup_arch, but i386's setup_arch only uses max_low_pfn.
To make you happy, the simple way is to update the e820 table, so the
code can be unified between i386 and x86_64:
update the e820 table by checking the MTRRs, like the gart hole check.
Is that OK?
YH
[PATCH] x86_64: update e820 instead of updating end_pfn
When MTRR does not cover the whole e820 table, the RAM needs to be trimmed by updating e820,
so we can reuse some code for x86_32.
This also needs early_identify_cpu added for x86_32, and mtrr_bp_init moved earlier.
Signed-off-by: Yinghai Lu <[email protected]>
diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
index 46763e3..28ae79a 100644
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -644,16 +644,17 @@ early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
* it from the kernel's allocation pools, warning the user with an obnoxious
* message.
*/
-void __init mtrr_trim_uncached_memory(void)
+int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
{
unsigned long i, base, size, highest_addr = 0, def, dummy;
mtrr_type type;
+ u64 trim_start, trim_size;
/* Make sure we only trim uncachable memory on Intel machines */
rdmsr(MTRRdefType_MSR, def, dummy);
def &= 0xff;
if (!is_cpu(INTEL) || disable_mtrr_trim || def != MTRR_TYPE_UNCACHABLE)
- return;
+ return 0;
/* Find highest cached pfn */
for (i = 0; i < num_var_ranges; i++) {
@@ -673,8 +674,18 @@ void __init mtrr_trim_uncached_memory(void)
"memory, trimmed %ld pages\n", end_pfn -
(highest_addr >> PAGE_SHIFT));
printk(KERN_WARNING "***************\n");
- end_pfn = highest_addr >> PAGE_SHIFT;
+
+ printk(KERN_INFO "update e820 for mtrr\n");
+ trim_start = highest_addr;
+ trim_size = end_pfn;
+ trim_size <<= PAGE_SHIFT;
+ trim_size -= trim_start;
+ add_memory_region(trim_start, trim_size, E820_RESERVED);
+ update_e820();
+ return 1;
}
+
+ return 0;
}
#endif
diff --git a/arch/x86/kernel/setup_64.c b/arch/x86/kernel/setup_64.c
index 2ad8bdd..5194c2a 100644
--- a/arch/x86/kernel/setup_64.c
+++ b/arch/x86/kernel/setup_64.c
@@ -324,9 +324,12 @@ void __init setup_arch(char **cmdline_p)
* we are rounding upwards:
*/
end_pfn = e820_end_of_ram();
- /* Trim memory not covered by WB MTRRs */
+ /* update e820 for memory not covered by WB MTRRs */
mtrr_bp_init();
- mtrr_trim_uncached_memory();
+ if (mtrr_trim_uncached_memory(end_pfn)) {
+ e820_register_active_regions(0, 0, -1UL);
+ end_pfn = e820_end_of_ram();
+ }
num_physpages = end_pfn;
diff --git a/include/asm-x86/mtrr.h b/include/asm-x86/mtrr.h
index 96876ec..e552503 100644
--- a/include/asm-x86/mtrr.h
+++ b/include/asm-x86/mtrr.h
@@ -97,7 +97,7 @@ extern int mtrr_del_page (int reg, unsigned long base, unsigned long size);
extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
extern void mtrr_ap_init(void);
extern void mtrr_bp_init(void);
-extern void mtrr_trim_uncached_memory(void);
+extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
# else
#define mtrr_save_fixed_ranges(arg) do {} while (0)
#define mtrr_save_state() do {} while (0)
@@ -121,8 +121,9 @@ static __inline__ int mtrr_del_page (int reg, unsigned long base,
{
return -ENODEV;
}
-static __inline__ void mtrr_trim_uncached_memory(void)
+static __inline__ int mtrr_trim_uncached_memory(unsigned long end_pfn)
{
+ return 0;
}
static __inline__ void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi) {;}
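The trimming arithmetic in the mtrr/main.c hunk above can be checked in isolation. Below is a sketch in userspace C, not the kernel code: the helper names are made up for illustration, and PAGE_SHIFT is assumed to be 12 (4 KB pages) as on x86. It only models how the patch derives the E820_RESERVED range from the highest WB-covered address and end_pfn.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4 KB pages, as on x86 */

/* Everything above the highest WB-covered address, up to end_pfn,
 * becomes the reserved range [trim_start, trim_start + trim_size). */
static uint64_t trim_start(uint64_t highest_addr)
{
	return highest_addr;
}

static uint64_t trim_size(uint64_t highest_addr, uint64_t end_pfn)
{
	uint64_t end = end_pfn << PAGE_SHIFT;	/* top of RAM in bytes */

	return end > highest_addr ? end - highest_addr : 0;
}
```

For example, with 4 GB of RAM (end_pfn = 0x100000) and WB MTRR coverage up to 3 GB, the top 1 GB would be reserved.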
* Yinghai Lu <[email protected]> wrote:
> [PATCH] x86_64: update e820 instead of updating end_pfn
>
> When MTRR does not cover the whole e820 table, the RAM needs to be
> trimmed by updating e820
ok, i like this approach even more - applied. Would you like to have a
shot at adding mtrr_trim_uncached_memory() to 32-bit too? It is affected
by the same problem (if not more).
Ingo
On Sunday 20 January 2008 10:20:27 Ingo Molnar wrote:
> * Yinghai Lu <[email protected]> wrote:
> > [PATCH] x86_64: update e820 instead of updating end_pfn
> >
> > When MTRR does not cover the whole e820 table, the RAM needs to be
> > trimmed by updating e820
>
> ok, i like this approach even more - applied. Would you like to have a
> shot at adding mtrr_trim_uncached_memory() to 32-bit too? It is affected
> by the same problem (if not more).
What about the AMD fix I posted? You ignored that one for now but it is far
more critical because it makes these patches break any modern AMD machine
with >3GB.
-Andi
On Jan 20, 2008 1:20 AM, Ingo Molnar <[email protected]> wrote:
>
> * Yinghai Lu <[email protected]> wrote:
>
> > [PATCH] x86_64: update e820 instead of updating end_pfn
> >
> > When MTRR does not cover the whole e820 table, the RAM needs to be
> > trimmed by updating e820
>
> ok, i like this approach even more - applied. Would you like to have a
> shot at adding mtrr_trim_uncached_memory() to 32-bit too? It is affected
> by the same problem (if not more).
For 32-bit, it could use the same mtrr_trim_uncached_memory,
but we need to move the mtrr_bp_init call into setup_arch.
YH
[PATCH] x86_32: trim memory by updating e820
This needs to be applied after the x86_64 "mtrr fix e820" v2 patch.
When MTRR does not cover the whole e820 table, the RAM needs to be trimmed by updating e820,
reusing some code from x86_64.
This also needs early_identify_cpu added for x86_32, and mtrr_bp_init moved earlier.
Compile-tested only; needs someone to test it.
Index: linux-2.6/arch/x86/kernel/cpu/common.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/cpu/common.c
+++ linux-2.6/arch/x86/kernel/cpu/common.c
@@ -396,11 +396,9 @@ __setup("serialnumber", x86_serial_nr_se
/*
* This does the hard work of actually picking apart the CPU stuff...
*/
-void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+void __cpuinit early_identify_cpu(struct cpuinfo_x86 *c)
{
- int i;
- c->loops_per_jiffy = loops_per_jiffy;
c->x86_cache_size = -1;
c->x86_vendor = X86_VENDOR_UNKNOWN;
c->cpuid_level = -1; /* CPUID not detected */
@@ -424,7 +422,14 @@ void __cpuinit identify_cpu(struct cpuin
if (this_cpu->c_identify)
this_cpu->c_identify(c);
+}
+void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+{
+ int i;
+
+ c->loops_per_jiffy = loops_per_jiffy;
+ early_identify_cpu(c);
/*
* Vendor-specific initialization. In this section we
* canonicalize the feature flags, meaning if there are
@@ -485,7 +490,6 @@ void __init identify_boot_cpu(void)
identify_cpu(&boot_cpu_data);
sysenter_setup();
enable_sep_cpu();
- mtrr_bp_init();
}
void __cpuinit identify_secondary_cpu(struct cpuinfo_x86 *c)
Index: linux-2.6/arch/x86/kernel/setup_32.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/setup_32.c
+++ linux-2.6/arch/x86/kernel/setup_32.c
@@ -49,6 +49,7 @@
#include <video/edid.h>
+#include <asm/mtrr.h>
#include <asm/apic.h>
#include <asm/e820.h>
#include <asm/mpspec.h>
@@ -747,6 +748,7 @@ void __init setup_arch(char **cmdline_p)
bss_resource.start = virt_to_phys(&__bss_start);
bss_resource.end = virt_to_phys(&__bss_stop)-1;
+ early_identify_cpu(&boot_cpu_data);
parse_early_param();
if (user_defined_memmap) {
@@ -762,6 +764,12 @@ void __init setup_arch(char **cmdline_p)
max_low_pfn = setup_memory();
+ /* update e820 for memory not covered by WB MTRRs */
+ mtrr_bp_init();
+ if (mtrr_trim_uncached_memory(max_pfn)) {
+ max_low_pfn = setup_memory();
+ }
+
#ifdef CONFIG_VMI
/*
* Must be after max_low_pfn is determined, and before kernel
Index: linux-2.6/arch/x86/kernel/cpu/mtrr/main.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/cpu/mtrr/main.c
+++ linux-2.6/arch/x86/kernel/cpu/mtrr/main.c
@@ -624,7 +624,6 @@ static struct sysdev_driver mtrr_sysdev_
.resume = mtrr_restore,
};
-#ifdef CONFIG_X86_64
static int disable_mtrr_trim;
static int __init disable_mtrr_trim_setup(char *str)
@@ -726,7 +725,6 @@ int __init mtrr_trim_uncached_memory(uns
return 0;
}
-#endif
/**
* mtrr_bp_init - initialize mtrrs on the boot CPU
Index: linux-2.6/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.orig/Documentation/kernel-parameters.txt
+++ linux-2.6/Documentation/kernel-parameters.txt
@@ -575,7 +575,7 @@ and is between 256 and 4096 characters.
See drivers/char/README.epca and
Documentation/digiepca.txt.
- disable_mtrr_trim [X86-64, Intel only]
+ disable_mtrr_trim [X86, Intel and AMD only]
By default the kernel will trim any uncacheable
memory out of your available memory pool based on
MTRR settings. This parameter disables that behavior,
Index: linux-2.6/include/asm-x86/processor.h
===================================================================
--- linux-2.6.orig/include/asm-x86/processor.h
+++ linux-2.6/include/asm-x86/processor.h
@@ -131,6 +131,7 @@ DECLARE_PER_CPU(struct cpuinfo_x86, cpu_
void cpu_detect(struct cpuinfo_x86 *c);
+extern void early_identify_cpu(struct cpuinfo_x86 *);
extern void identify_cpu(struct cpuinfo_x86 *);
extern void identify_boot_cpu(void);
extern void identify_secondary_cpu(struct cpuinfo_x86 *);
Index: linux-2.6/arch/x86/kernel/e820_32.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/e820_32.c
+++ linux-2.6/arch/x86/kernel/e820_32.c
@@ -827,3 +827,14 @@ static int __init parse_memmap(char *arg
return 0;
}
early_param("memmap", parse_memmap);
+void __init update_e820(void)
+{
+ u8 nr_map;
+
+ nr_map = e820.nr_map;
+ if (sanitize_e820_map(e820.map, &nr_map))
+ return;
+ e820.nr_map = nr_map;
+ printk(KERN_INFO "modified physical RAM map:\n");
+ print_memory_map("modified");
+}
Index: linux-2.6/include/asm-x86/e820_32.h
===================================================================
--- linux-2.6.orig/include/asm-x86/e820_32.h
+++ linux-2.6/include/asm-x86/e820_32.h
@@ -19,6 +19,7 @@
#ifndef __ASSEMBLY__
extern struct e820map e820;
+extern void update_e820(void);
extern int e820_all_mapped(u64 start, u64 end, unsigned type);
extern int e820_any_mapped(u64 start, u64 end, unsigned type);
@@ -29,6 +30,8 @@ extern int is_memory_all_valid(u64 start
extern int is_memory_all_reserved(u64 start, u64 end);
extern void find_max_pfn(void);
extern void register_bootmem_low_pages(unsigned long max_low_pfn);
+extern void add_memory_region(unsigned long long start,
+ unsigned long long size, int type);
extern void e820_register_memory(void);
extern void limit_regions(unsigned long long size);
extern void print_memory_map(char *who);
[PATCH] x86_64: update e820 instead of updating end_pfn v2
This needs to be applied after Andi's patch for the AMD TOM2 WB check.
When MTRR does not cover the whole e820 table, the RAM needs to be trimmed by updating e820,
so we can reuse some code for x86_32.
This also needs early_identify_cpu added for x86_32, and mtrr_bp_init moved earlier.
Signed-off-by: Yinghai Lu <[email protected]>
Index: linux-2.6/arch/x86/kernel/cpu/mtrr/main.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/cpu/mtrr/main.c
+++ linux-2.6/arch/x86/kernel/cpu/mtrr/main.c
@@ -624,6 +624,7 @@ static struct sysdev_driver mtrr_sysdev_
.resume = mtrr_restore,
};
+#ifdef CONFIG_X86_64
static int disable_mtrr_trim;
static int __init disable_mtrr_trim_setup(char *str)
@@ -633,17 +634,16 @@ static int __init disable_mtrr_trim_setu
}
early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
-#ifdef CONFIG_X86_64
-
/*
* Newer AMD K8s and later CPUs have a special magic MSR way to force WB
* for memory >4GB. Check for that here.
* Note this won't check if the MTRRs < 4GB where the magic bit doesn't
* apply to are wrong, but so far we don't know of any such case in the wild.
*/
+#define Tom2Enabled (1U << 21)
#define Tom2ForceMemTypeWB (1U << 22)
-static __init int amd_special_default_mtrr(void)
+static __init int amd_special_default_mtrr(unsigned long end_pfn)
{
u32 l, h;
@@ -661,8 +661,9 @@ static __init int amd_special_default_mt
* Memory between 4GB and top of mem is forced WB by this magic bit.
* Reserved before K8RevF, but should be zero there.
*/
- if (l & Tom2ForceMemTypeWB)
- return 1;
+ if (l & Tom2Enabled)
+ if (l & Tom2ForceMemTypeWB)
+ return 1;
return 0;
}
@@ -676,10 +677,11 @@ static __init int amd_special_default_mt
* memory off the end by adjusting end_pfn, removing it from the kernel's
* allocation pools, warning the user with an obnoxious message.
*/
-void __init mtrr_trim_uncached_memory(void)
+int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
{
unsigned long i, base, size, highest_addr = 0, def, dummy;
mtrr_type type;
+ u64 trim_start, trim_size;
/*
* Make sure we only trim uncachable memory on machines that
@@ -688,7 +690,7 @@ void __init mtrr_trim_uncached_memory(vo
rdmsr(MTRRdefType_MSR, def, dummy);
def &= 0xff;
if (!is_cpu(INTEL) || disable_mtrr_trim || def != MTRR_TYPE_UNCACHABLE)
- return;
+ return 0;
/* Find highest cached pfn */
for (i = 0; i < num_var_ranges; i++) {
@@ -701,8 +703,8 @@ void __init mtrr_trim_uncached_memory(vo
highest_addr = base + size;
}
- if (amd_special_default_mtrr())
- return;
+ if (amd_special_default_mtrr(end_pfn))
+ return 0;
if ((highest_addr >> PAGE_SHIFT) < end_pfn) {
printk(KERN_WARNING "***************\n");
@@ -711,8 +713,18 @@ void __init mtrr_trim_uncached_memory(vo
"memory, trimmed %ld pages\n", end_pfn -
(highest_addr >> PAGE_SHIFT));
printk(KERN_WARNING "***************\n");
- end_pfn = highest_addr >> PAGE_SHIFT;
+
+ printk(KERN_INFO "update e820 for mtrr\n");
+ trim_start = highest_addr;
+ trim_size = end_pfn;
+ trim_size <<= PAGE_SHIFT;
+ trim_size -= trim_start;
+ add_memory_region(trim_start, trim_size, E820_RESERVED);
+ update_e820();
+ return 1;
}
+
+ return 0;
}
#endif
Index: linux-2.6/arch/x86/kernel/setup_64.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/setup_64.c
+++ linux-2.6/arch/x86/kernel/setup_64.c
@@ -324,9 +324,12 @@ void __init setup_arch(char **cmdline_p)
* we are rounding upwards:
*/
end_pfn = e820_end_of_ram();
- /* Trim memory not covered by WB MTRRs */
+ /* update e820 for memory not covered by WB MTRRs */
mtrr_bp_init();
- mtrr_trim_uncached_memory();
+ if (mtrr_trim_uncached_memory(end_pfn)) {
+ e820_register_active_regions(0, 0, -1UL);
+ end_pfn = e820_end_of_ram();
+ }
num_physpages = end_pfn;
Index: linux-2.6/include/asm-x86/mtrr.h
===================================================================
--- linux-2.6.orig/include/asm-x86/mtrr.h
+++ linux-2.6/include/asm-x86/mtrr.h
@@ -97,7 +97,7 @@ extern int mtrr_del_page (int reg, unsig
extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
extern void mtrr_ap_init(void);
extern void mtrr_bp_init(void);
-extern void mtrr_trim_uncached_memory(void);
+extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
# else
#define mtrr_save_fixed_ranges(arg) do {} while (0)
#define mtrr_save_state() do {} while (0)
@@ -121,8 +121,9 @@ static __inline__ int mtrr_del_page (int
{
return -ENODEV;
}
-static __inline__ void mtrr_trim_uncached_memory(void)
+static __inline__ int mtrr_trim_uncached_memory(unsigned long end_pfn)
{
+ return 0;
}
static __inline__ void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi) {;}
> {
> u32 l, h;
>
> @@ -661,8 +661,9 @@ static __init int amd_special_default_mt
> * Memory between 4GB and top of mem is forced WB by this magic bit.
> * Reserved before K8RevF, but should be zero there.
> */
> - if (l & Tom2ForceMemTypeWB)
> - return 1;
> + if (l & Tom2Enabled)
> + if (l & Tom2ForceMemTypeWB)
> + return 1;
That change is good agreed, but I would suggest to put it into a separate
patch with a description
-Andi
On Mon, 2008-01-21 at 06:58 +0100, Andi Kleen wrote:
> > {
> > u32 l, h;
> >
> > @@ -661,8 +661,9 @@ static __init int amd_special_default_mt
> > * Memory between 4GB and top of mem is forced WB by this magic bit.
> > * Reserved before K8RevF, but should be zero there.
> > */
> > - if (l & Tom2ForceMemTypeWB)
> > - return 1;
> > + if (l & Tom2Enabled)
> > + if (l & Tom2ForceMemTypeWB)
> > + return 1;
>
> That change is good agreed, but I would suggest to put it into a separate
> patch with a description
>
Perhaps like this:
if (l & (Tom2Enabled|Tom2ForceMemTypeWB))
return 1;
Harvey
> > That change is good agreed, but I would suggest to put it into a separate
> > patch with a description
> >
>
> Perhaps like this:
>
> if (l & (Tom2Enabled|Tom2ForceMemTypeWB))
> return 1;
That's not equivalent.
-Andi
Andi Kleen wrote:
>>> That change is good agreed, but I would suggest to put it into a separate
>>> patch with a description
>>>
>> Perhaps like this:
>>
>> if (l & (Tom2Enabled|Tom2ForceMemTypeWB))
>> return 1;
>
> That's not equivalent.
>
> -Andi
>
The equivalence is:
if ((l & Tom2Enabled) && (l & Tom2ForceMemTypeWB))
return 1;
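The non-equivalence is easy to demonstrate outside the kernel. In this sketch, any_bit() models Harvey's single-mask test and both_bits() the intended check; the function names are made up for illustration, only the bit definitions come from the patch.

```c
#include <assert.h>
#include <stdint.h>

#define Tom2Enabled        (1U << 21)
#define Tom2ForceMemTypeWB (1U << 22)

/* Harvey's version: true when EITHER bit is set. */
static int any_bit(uint32_t l)
{
	return (l & (Tom2Enabled | Tom2ForceMemTypeWB)) != 0;
}

/* Intended check: true only when BOTH bits are set. */
static int both_bits(uint32_t l)
{
	uint32_t mask = Tom2Enabled | Tom2ForceMemTypeWB;

	return (l & mask) == mask;
}
```

With only Tom2Enabled set, the two disagree: forced-WB behavior must not be assumed while Tom2ForceMemTypeWB is clear.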
[PATCH] x86_64: check if Tom2 is enabled
This needs to be applied after Andi's AMD special TOM2 WB check patch.
amd_special_default_mtrr needs to check whether TOM2 is enabled.
Signed-off-by: Yinghai Lu <[email protected]>
diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
index 289ba1a..26eed57 100644
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -641,6 +641,7 @@ early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
* Note this won't check if the MTRRs < 4GB where the magic bit doesn't
* apply to are wrong, but so far we don't know of any such case in the wild.
*/
+#define Tom2Enabled (1U << 21)
#define Tom2ForceMemTypeWB (1U << 22)
static __init int amd_special_default_mtrr(void)
@@ -661,7 +662,8 @@ static __init int amd_special_default_mtrr(void)
* Memory between 4GB and top of mem is forced WB by this magic bit.
* Reserved before K8RevF, but should be zero there.
*/
- if (l & Tom2ForceMemTypeWB)
+ if ((l & (Tom2Enabled | Tom2ForceMemTypeWB)) ==
+ (Tom2Enabled | Tom2ForceMemTypeWB))
return 1;
return 0;
}
[Yinghai Lu - Sun, Jan 20, 2008 at 10:57:46PM -0800]
| [PATCH] x86_64: check if Tom2 is enabled
|
| This needs to be applied after Andi's AMD special TOM2 WB check patch.
|
| amd_special_default_mtrr needs to check whether TOM2 is enabled.
|
| Signed-off-by: Yinghai Lu <[email protected]>
|
| diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
| index 289ba1a..26eed57 100644
| --- a/arch/x86/kernel/cpu/mtrr/main.c
| +++ b/arch/x86/kernel/cpu/mtrr/main.c
| @@ -641,6 +641,7 @@ early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
| * Note this won't check if the MTRRs < 4GB where the magic bit doesn't
| * apply to are wrong, but so far we don't know of any such case in the wild.
| */
| +#define Tom2Enabled (1U << 21)
| #define Tom2ForceMemTypeWB (1U << 22)
|
| static __init int amd_special_default_mtrr(void)
| @@ -661,7 +662,8 @@ static __init int amd_special_default_mtrr(void)
| * Memory between 4GB and top of mem is forced WB by this magic bit.
| * Reserved before K8RevF, but should be zero there.
| */
| - if (l & Tom2ForceMemTypeWB)
| + if ((l & (Tom2Enabled | Tom2ForceMemTypeWB)) ==
| + (Tom2Enabled | Tom2ForceMemTypeWB))
| return 1;
| return 0;
| }
is it possible to change 'l' and 'h' to 'low' and 'high'?
'cause 'l' does look like '1' (one) number...
- Cyrill -
Cyrill Gorcunov wrote:
>
> is it possible to change 'l' and 'h' to 'low' and 'high'?
> 'cause 'l' does look like '1' (one) number...
>
I think you should use a different font. Otherwise we're soon in a
position where we can't abbreviate anything that starts with L and keep
using lower case throughout. That doesn't seem like a valid tradeoff to me.
There are plenty of fonts which have good visual distinction between l
and 1 and O and 0.
-hpa
[H. Peter Anvin - Mon, Jan 21, 2008 at 09:39:16AM -0800]
> Cyrill Gorcunov wrote:
>> is it possible to change 'l' and 'h' to 'low' and 'high'?
>> 'cause 'l' does look like '1' (one) number...
>
> I think you should use a different font. Otherwise we're soon in a
> position where we can't abbreviate anything that starts with L and keep
> using lower case throughout. That doesn't seem like a valid tradeoff to
> me.
>
> There are plenty of fonts which have good visual distinction between l and
> 1 and O and 0.
>
> -hpa
>
Hi Peter,
of course you're right!!! but... actually only a few symbol pairs are
affected by this: 1-l, O-0 (maybe a few more, dunno), and only for
one-letter identifiers, so such a situation is easy to avoid.
Anyway - as you wish ;)
- Cyrill -
> is it possible to change 'l' and 'h' to 'low' and 'high'?
> 'cause 'l' does look like '1' (one) number...
It would be fine for me for someone to implement safe_rdtscll() and get rid
of l and h everywhere. IMHO all the l and h accesses of MSRs are just harder
to read and error prone over the ll 64bit variants.
But I didn't want to add it just for this.
-Andi
[Andi Kleen - Mon, Jan 21, 2008 at 07:03:27PM +0100]
|
| > is it possible to change 'l' and 'h' to 'low' and 'high'?
| > 'cause 'l' does look like '1' (one) number...
|
| It would be fine for me for someone to implement safe_rdtscll() and get rid
| of l and h everywhere. IMHO all the l and h accesses of MSRs are just harder
| to read and error prone over the ll 64bit variants.
|
| But I didn't want to add it just for this.
|
| -Andi
|
clear enough, thanks
- Cyrill -
Cyrill Gorcunov wrote:
> [Andi Kleen - Mon, Jan 21, 2008 at 07:03:27PM +0100]
> |
> | > is it possible to change 'l' and 'h' to 'low' and 'high'?
> | > 'cause 'l' does look like '1' (one) number...
> |
> | It would be fine for me for someone to implement safe_rdtscll() and get rid
> | of l and h everywhere. IMHO all the l and h accesses of MSRs are just harder
> | to read and error prone over the ll 64bit variants.
> |
> | But I didn't want to add it just for this.
> |
>
> clear enough, thanks
>
Actually, I think it depends on the specific MSR - some use the halves
for different data, whereas others treat it as a large 64-bit object.
Ironically enough, the way the MSR interfaces were carried into the
64-bit world makes the situation worse on 64 bits; edx:eax is the common
way to represent a 64-bit value on 32 bits.
-hpa
>
> Actually, I think it depends on the specific MSR - some use the halves
> for different data, whereas others treat it as a large 64-bit object.
Even if there are different fields in there it is still much cleaner
and simpler if there is only a single number to manipulate.
> Ironically enough, the way the MSR interfaces were carried into the
> 64-bit world makes the situation worse on 64 bits; edx:eax is the common
> way to represent a 64-bit value on 32 bits.
It's merely an implementation detail of the instruction. But even on 32-bit
there is about zero reason to expose that to C code. rdmsr/wrmsr et al. should
have been defined as a 64-bit-only interface in Linux from day zero.
-Andi
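A 64-bit-only interface of the kind Andi argues for simply treats edx:eax as one value. A minimal sketch of the combine/split helpers such an interface builds on (the names are illustrative, not the actual kernel API; msr_val is what an rdmsrl-style accessor does with the two halves):

```c
#include <assert.h>
#include <stdint.h>

/* Combine the two 32-bit halves an rdmsr-style interface returns into
 * the single 64-bit value a 64-bit interface would hand back. */
static uint64_t msr_val(uint32_t low, uint32_t high)
{
	return ((uint64_t)high << 32) | low;
}

/* Splitting back out, for the MSRs that do use the halves separately. */
static uint32_t msr_low(uint64_t val)
{
	return (uint32_t)val;
}

static uint32_t msr_high(uint64_t val)
{
	return (uint32_t)(val >> 32);
}
```

Field extraction (such as the TOM2 bits above) works the same on the combined value, so a single number is strictly easier to manipulate even when an MSR carries distinct fields in each half.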