Commit graph

2624 commits

Author SHA1 Message Date
Joerg Roedel 2842e5bf31 x86: move GART TLB flushing options to generic code
The GART currently implements the iommu=[no]fullflush command line
parameters which influence its IO/TLB flushing strategy. This patch
makes these parameters generic so that they can be used by the AMD IOMMU
too.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:06 +02:00
Joerg Roedel 270cab2426 AMD IOMMU: move TLB flushing to the map/unmap helper functions
This patch moves the invocation of the flushing functions to the
map/unmap helpers because it is common code in all dma_ops-relevant
mapping/unmapping code.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:04 +02:00
Joerg Roedel dbcc112e3b AMD IOMMU: check for invalid device pointers
Currently AMD IOMMU code triggers a BUG_ON if NULL is passed as the
device. This is inconsistent with other IOMMU implementations.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:03 +02:00
Yinghai Lu 279b0bbba2 x86: fix arch/x86/kernel/cpu/mtrr/main.c warning
fix this warning reported by Andrew Morton:

> arch/x86/kernel/cpu/mtrr/main.c: In function 'mtrr_bp_init':
> arch/x86/kernel/cpu/mtrr/main.c:1170: warning: 'extra_remove_base' may be used uninitialized in this function

the warning is bogus, but the logic that prevents uninitialized use
is a bit convoluted, so simplify it all.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 09:16:06 +02:00
Ingo Molnar 5e51900be6 Merge commit 'v2.6.27-rc6' into x86/cleanups 2008-09-19 09:15:50 +02:00
Joerg Roedel 7e4f88da7b AMD IOMMU: protect completion wait loop with iommu lock
The unlocked polling of the ComWaitInt bit in the IOMMU completion wait
path is racy. Protect it with the iommu lock.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-18 09:25:44 +02:00
Joerg Roedel ee2fa7435b AMD IOMMU: set iommu sync flag after command queuing
The iommu->need_sync flag must be set after the command is queued to
avoid race conditions.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-18 09:25:04 +02:00
Arjan van de Ven 90f7d25c6b x86: print DMI information in the oops trace
in order to diagnose hard, system-specific issues, it's useful to
have the system name in the oops (as provided by DMI)

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-17 11:53:03 +02:00
H. Peter Anvin ba0593bf55 x86: completely disable NOPL on 32 bits
Completely disable NOPL on 32 bits.  It turns out that Microsoft
Virtual PC is so broken it can't even reliably *fail* in the presence
of NOPL.

This leaves the infrastructure in place but disables it
unconditionally.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-16 09:33:57 -07:00
FUJITA Tomonori f6a32a36ab x86: gart alloc_coherent does virtual mappings only when necessary
gart alloc_coherent needs to do virtual mappings only when an
allocated buffer is not DMA-capable for a device.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 16:43:58 +02:00
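
A minimal sketch of the DMA-capability test this and the following GART/Calgary commits build on: a buffer needs no remapping when its bus address range already fits under the device's dma_mask. The helper name mirrors the is_buffer_dma_capable conversions further down; the exact kernel signature and boundary condition are assumptions here.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;                  /* stand-in for the kernel type */

/* DMA-capable: the last byte of the buffer is still addressable under the mask. */
static bool is_buffer_dma_capable(uint64_t mask, dma_addr_t addr, size_t size)
{
        return addr + size - 1 <= mask;
}

int main(void)
{
        /* example: a buffer at 5 GB is out of reach for a 32-bit-only device */
        return is_buffer_dma_capable(UINT32_MAX, 5ULL << 30, 4096) ? 1 : 0;
}
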
FUJITA Tomonori f10ac8a232 x86: avoid unnecessary low zone allocation in Calgary's alloc_coherent
x86's common alloc_coherent (dma_alloc_coherent in dma-mapping.h) sets
up the gfp flag according to the device dma_mask but Calgary doesn't
need it because of virtual mappings. This patch avoids unnecessary low
zone allocation.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 16:43:58 +02:00
FUJITA Tomonori bee44f294e x86: make GART to respect device's dma_mask about virtual mappings
Currently, the GART IOMMU ignores the device's dma_mask when it does virtual
mappings. So it could give a device a virtual address that the device
can't access.

This patch fixes the above problem.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 16:42:37 +02:00
Ingo Molnar a9853dd6d2 x86: cpuid, fix typo
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:46:58 +02:00
Yinghai Lu afae865613 x86: move transmeta cap read to early_init_transmeta()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:09:14 +02:00
Yinghai Lu aef93c8bd5 x86: identify_cpu_without_cpuid v2
Krzysztof found some old Cyrix CPUs where an MTRR-alike CPU feature was
not detected properly.

This one is based on Krzysztof's patch, and we call ->c_identify() in
early_identify_cpu.

We need to call ->c_identify() for CPUs without cpuid even earlier ...

v2: Krzysztof pointed out the need to give Cyrix another chance at cpuid
    checking after ->c_identify() enables cpuid for it

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:09:13 +02:00
Jeremy Fitzhardinge 0ad5bce740 x86: fix possible x86_64 and EFI regression
Russ Anderson reported a boot crash with EFI and latest mainline:

 BIOS-e820: 00000000fffa0000 - 00000000fffac000 (reserved)
Pid: 0, comm: swapper Not tainted 2.6.27-rc5-00100-gec0c15a-dirty #5

Call Trace:
 [<ffffffff80849195>] early_idt_handler+0x55/0x69
 [<ffffffff80313e52>] __memcpy+0x12/0xa4
 [<ffffffff80859015>] efi_init+0xce/0x932
 [<ffffffff80869c83>] setup_early_serial8250_console+0x2d/0x36a
 [<ffffffff80238688>] __insert_resource+0x18/0xc8
 [<ffffffff8084f6de>] setup_arch+0x3a7/0x632
 [<ffffffff808499ed>] start_kernel+0x91/0x367
 [<ffffffff80849393>] x86_64_start_kernel+0xe3/0xe7
 [<ffffffff808492b0>] x86_64_start_kernel+0x0/0xe7

 RIP 0x10

Such a crash is possible if the CPU in this system is a 64-bit
processor which doesn't support NX (i.e., old Intel P4-based 64-bit
processors).

Certainly, if we support such processors, then we should start with
_PAGE_NX initially clear in __supported_pte_mask, and then set it once
we've established that the processor does indeed support NX.  That will
prevent early_ioremap - or anything else - from trying to set it.

The simple fix is to call check_efer() earlier.

Reported-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-12 11:40:57 +02:00
Julia Lawall f461a1d80c arch/x86/kernel/kdebugfs.c: introduce missing kfree
Error handling code following a kmalloc should free the allocated data.
Note that at the point of the change, node has not yet been stored in d, so
it is not affected by the existing cleanup code.

The semantic match that finds the problem is as follows:
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@r exists@
local idexpression x;
statement S;
expression E;
identifier f,l;
position p1,p2;
expression *ptr != NULL;
@@

(
if ((x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...)) == NULL) S
|
x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...);
...
if (x == NULL) S
)
<... when != x
     when != if (...) { <+...x...+> }
x->f = E
...>
(
 return \(0\|<+...x...+>\|ptr\);
|
 return@p2 ...;
)

@script:python@
p1 << r.p1;
p2 << r.p2;
@@

print "* file: %s kmalloc %s return %s" % (p1[0].file,p1[0].line,p2[0].line)
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:03:49 +02:00
Sheng Yang e38e05a858 x86: extended "flags" to show virtualization HW feature in /proc/cpuinfo
The hardware virtualization technology evolves very fast. But currently
it's hard to tell if your CPU supports a certain kind of HW technology
without digging into the source code.

The patch adds a new category in "flags" under /proc/cpuinfo. Now "flags"
can indicate the (important) HW virtualization features the CPU supports
as well.

The current implementation just covers the Intel VMX side.

Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:00:56 +02:00
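
A hedged user-space illustration of the kind of probe behind such a flag: Intel VMX support is advertised in CPUID leaf 1, ECX bit 5. This only illustrates the CPUID interface, not the kernel's implementation.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                return 1;
        /* CPUID.1:ECX bit 5 = VMX */
        printf("vmx: %s\n", (ecx & (1u << 5)) ? "yes" : "no");
        return 0;
}
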
Ingo Molnar 59c37bf892 Merge commit 'v2.6.27-rc6' into x86/unify-cpu-detect
Conflicts:
	arch/x86/kernel/cpu/amd.c
	arch/x86/kernel/cpu/common.c
	arch/x86/kernel/cpu/common_64.c
	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:00:45 +02:00
FUJITA Tomonori 49fbf4e9f9 x86: convert pci-nommu to use is_buffer_dma_capable helper function
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 11:33:44 +02:00
FUJITA Tomonori ac4ff656c0 x86: convert gart to use is_buffer_dma_capable helper function
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 11:33:44 +02:00
Ingo Molnar e92b4fdacc Merge commit 'v2.6.27-rc6' into x86/iommu 2008-09-10 11:32:52 +02:00
Yinghai Lu ec70cae869 x86: centaur_64.c remove duplicated setting of CONSTANT_TSC
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:06 +02:00
Yinghai Lu 4052704d92 x86: intel.c put workaround for old cpus together
consolidate the code some more.

No change in functionality intended.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:06 +02:00
Yinghai Lu 879d792b66 x86: let intel 64-bit use intel.c
now that arch/x86/kernel/cpu/intel_64.c and
arch/x86/kernel/cpu/intel.c are equal, drop
arch/x86/kernel/cpu/intel_64.c and fix up
the glue.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:05 +02:00
Yinghai Lu 58602c1681 x86: make intel_64.c the same as intel.c
No change in functionality intended - this only adds the 32-bit side.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:04 +02:00
Yinghai Lu 185f3b9da2 x86: make intel.c have 64-bit support code
prepare for unification.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:03 +02:00
Ingo Molnar 81faaae457 Merge branch 'x86/pebs' into x86/unify-cpu-detect
Conflicts:
	arch/x86/Kconfig.cpu
	include/asm-x86/ds.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:20:51 +02:00
Linus Torvalds 93811d94f7 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: fix memmap=exactmap boot argument
  x86: disable static NOPLs on 32 bits
  xen: fix 2.6.27-rc5 xen balloon driver warnings
2008-09-09 12:23:41 -07:00
Prarit Bhargava d6be118a97 x86: fix memmap=exactmap boot argument
When using kdump, modifying the e820 map yields strange results.

For example starting with

 BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000100 - 0000000000093400 (usable)
 BIOS-e820: 0000000000093400 - 00000000000a0000 (reserved)
 BIOS-e820: 0000000000100000 - 000000003fee0000 (usable)
 BIOS-e820: 000000003fee0000 - 000000003fef3000 (ACPI data)
 BIOS-e820: 000000003fef3000 - 000000003ff80000 (ACPI NVS)
 BIOS-e820: 000000003ff80000 - 0000000040000000 (reserved)
 BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
 BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
 BIOS-e820: 00000000ff000000 - 0000000100000000 (reserved)

and booting with args

memmap=exactmap memmap=640K@0K memmap=5228K@16384K memmap=125188K@22252K memmap=76K#1047424K memmap=564K#1047500K

resulted in:

 user-defined physical RAM map:
 user: 0000000000000000 - 0000000000093400 (usable)
 user: 0000000000093400 - 00000000000a0000 (reserved)
 user: 0000000000100000 - 000000003fee0000 (usable)
 user: 000000003fee0000 - 000000003fef3000 (ACPI data)
 user: 000000003fef3000 - 000000003ff80000 (ACPI NVS)
 user: 000000003ff80000 - 0000000040000000 (reserved)
 user: 00000000e0000000 - 00000000f0000000 (reserved)
 user: 00000000fec00000 - 00000000fec10000 (reserved)
 user: 00000000fee00000 - 00000000fee01000 (reserved)
 user: 00000000ff000000 - 0000000100000000 (reserved)

But should have resulted in:

 user-defined physical RAM map:
 user: 0000000000000000 - 00000000000a0000 (usable)
 user: 0000000001000000 - 000000000151b000 (usable)
 user: 00000000015bb000 - 0000000008ffc000 (usable)
 user: 000000003fee0000 - 000000003ff80000 (ACPI data)

This is happening because of an improper usage of strcmp() in the
e820 parsing code.  The strcmp() always returns !0, so the value of
e820.nr_map is never reset and an incorrect user-defined map is returned.

This patch fixes the problem.

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-09 11:54:53 -07:00
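
Purely illustrative of the strcmp() pitfall described above (hypothetical variable names, not the actual e820 parser): strcmp() returns 0 on a match, so testing its raw return value inverts the intended logic and the reset branch never runs.

#include <stdio.h>
#include <string.h>

int main(void)
{
        const char *arg = "exactmap";
        int nr_map = 10;                         /* stand-in for e820.nr_map */

        if (strcmp(arg, "exactmap"))             /* buggy: true only when it does NOT match */
                nr_map = 0;
        printf("buggy check:   nr_map = %d\n", nr_map);    /* still 10 */

        if (strcmp(arg, "exactmap") == 0)        /* correct: 0 means the strings match */
                nr_map = 0;
        printf("correct check: nr_map = %d\n", nr_map);    /* now 0 */
        return 0;
}
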
Manfred Spraul e545a6140b kernel/cpu.c: create a CPU_STARTING cpu_chain notifier
Right now, there is no notifier that is called on a new cpu, before the new
cpu begins processing interrupts/softirqs.
Various kernel functions would need that notification, e.g. kvm works around
by calling smp_call_function_single(), rcu polls cpu_online_map.

The patch adds a CPU_STARTING notification. It also adds a helper function
that sends the message to all cpu_chain handlers.

Tested on x86-64.
All other archs are untested. Especially on sparc, I'm not sure if I got
it right.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 19:25:24 +02:00
FUJITA Tomonori 823e7e8c6e x86: dma_alloc_coherent sets gfp flags properly
Non-real IOMMU implementations (which don't do virtual mappings,
e.g. swiotlb, pci-nommu, etc.) need to use proper gfp flags and
dma_mask to allocate pages in their own dma_alloc_coherent()
(the allocated page needs to be suitable for the device's coherent_dma_mask).

This patch makes dma_alloc_coherent do this job so that IOMMUs don't
need to take care of it any more.

Real IOMMU implementations can simply ignore the gfp flags.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:50:07 +02:00
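
A minimal sketch of the zone selection this commit centralizes: pick an allocation zone from the device's coherent_dma_mask. DMA_BIT_MASK and the zone names below are local stand-ins for the kernel's macro and GFP_DMA/GFP_DMA32 flags; the exact thresholds in the kernel helper are assumptions.

#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

enum zone { ZONE_NORMAL, ZONE_DMA32, ZONE_DMA };     /* stand-ins for gfp zone flags */

static enum zone zone_for_coherent_mask(uint64_t coherent_dma_mask)
{
        if (coherent_dma_mask <= DMA_BIT_MASK(24))
                return ZONE_DMA;          /* ISA-style 24-bit devices */
        if (coherent_dma_mask <= DMA_BIT_MASK(32))
                return ZONE_DMA32;        /* 32-bit-only devices on a 64-bit machine */
        return ZONE_NORMAL;               /* the device can reach all of memory */
}

int main(void)
{
        printf("32-bit mask -> zone %d\n", zone_for_coherent_mask(DMA_BIT_MASK(32)));
        return 0;
}
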
FUJITA Tomonori 8a53ad675f x86: fix nommu_alloc_coherent allocation with NULL device argument
We need to use __GFP_DMA for NULL device argument (fallback_dev) with
pci-nommu. It's a hack for ISA (and some old code) so we need to use
GFP_DMA.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:50:06 +02:00
FUJITA Tomonori de9f521fb7 x86: move pci-nommu's dma_mask check to common code
The check to see if dev->dma_mask is NULL in pci-nommu is more
appropriate for dma_alloc_coherent().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:50:06 +02:00
Yinghai Lu f69feff720 x86: little clean up of intel.c/intel_64.c
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:46:03 +02:00
Yinghai Lu ff73152ced x86: make 64 bit to use amd.c
arch/x86/kernel/cpu/amd.c is now 100% identical to
arch/x86/kernel/cpu/amd_64.c, so use amd.c on 64-bit too
and fix up the namespace impact.

Simplify the Kconfig glue as well.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:06 +02:00
Yinghai Lu 2a02505055 x86: make amd_64 have 32 bit code
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:03 +02:00
Yinghai Lu 6c62aa4a3c x86: make amd.c have 64bit support code
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:02 +02:00
Yinghai Lu 8d71a2ea0a x86: merge header in amd_64.c
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:01 +02:00
Yinghai Lu 2b86473604 x86: add srat_detect_node for amd64
separate that from amd_detect_cmp()

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:00 +02:00
Yinghai Lu c58606ad55 x86: remove duplicated force_mwait
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:31:59 +02:00
Yinghai Lu 11fdd252bb x86: cpu make amd.c more like amd_64.c v2
1. make 32bit have early_init_amd_mc and amd_detect_cmp
2. separate init_amd_k5/k6/k7 ...

v2: fix compiling for !CONFIG_SMP

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:31:58 +02:00
Linus Torvalds 64f996f670 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: cpu_init(): fix memory leak when using CPU hotplug
  x86: pda_init(): fix memory leak when using CPU hotplug
  x86, xen: Use native_pte_flags instead of native_pte_val for .pte_flags
  x86: move mtrr cpu cap setting early in early_init_xxxx
  x86: delay early cpu initialization until cpuid is done
  x86: use X86_FEATURE_NOPL in alternatives
  x86: add NOPL as a synthetic CPU feature bit
  x86: boot: stub out unimplemented CPU feature words
2008-09-06 19:36:23 -07:00
Linus Torvalds f532522565 Merge branch 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  clocksource, acpi_pm.c: check for monotonicity
  clocksource, acpi_pm.c: use proper read function also in errata mode
  ntp: fix calculation of the next jiffie to trigger RTC sync
  x86: HPET: read back compare register before reading counter
  x86: HPET fix moronic 32/64bit thinko
  clockevents: broadcast fixup possible waiters
  HPET: make minimum reprogramming delta useful
  clockevents: prevent endless loop lockup
  clockevents: prevent multiple init/shutdown
  clockevents: enforce reprogram in oneshot setup
  clockevents: prevent endless loop in periodic broadcast handler
  clockevents: prevent clockevent event_handler ending up handler_noop
2008-09-06 19:33:26 -07:00
Ingo Molnar 5df4551551 x86, tsc calibration: fix
my brown paperbag day ...

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 23:55:40 +02:00
Andreas Herrmann 23952a96ae x86: cpu_init(): fix memory leak when using CPU hotplug
Exception stacks are allocated each time a CPU is set online.
But the allocated space is never freed. Thus with one CPU hotplug
offline/online cycle there is a memory leak of 24K (6 pages) for
a CPU.

Fix is to allocate exception stacks only once -- when the CPU is
set online for the first time.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 20:48:16 +02:00
Andreas Herrmann d04ec773d7 x86: pda_init(): fix memory leak when using CPU hotplug
pda->irqstackptr is allocated whenever a CPU is set online.
But it is never freed. This results in a memory leak of 16K
for each CPU offline/online cycle.

Fix is to allocate pda->irqstackptr only once.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 20:48:02 +02:00
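
A hedged sketch of the allocate-once pattern both leak fixes above apply: keep the allocation across CPU offline/online cycles and only allocate while the pointer is still NULL. The names are illustrative, not the kernel's.

#include <stdlib.h>

#define IRQSTACK_SIZE (16 * 1024)

static char *irqstackptr;                 /* stand-in for the per-CPU pointer */

static void cpu_online_setup(void)
{
        if (!irqstackptr)                 /* first time this CPU comes online */
                irqstackptr = malloc(IRQSTACK_SIZE);
        /* later online events reuse the existing allocation: no leak */
}

int main(void)
{
        cpu_online_setup();               /* allocates once */
        cpu_online_setup();               /* reuses, does not leak */
        return 0;
}
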
Jan Beulich 2d9cd6c27f x86-64: add two __cpuinit annotations
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 19:50:41 +02:00
Yinghai Lu dd786dd12c x86: move mtrr cpu cap setting early in early_init_xxxx
Krzysztof Helt found that MTRR is not detected on K6-2.

root cause:
	we moved mtrr_bp_init() early for mtrr trimming,
and in early_detect we only read the CPU capability from cpuid,
so some CPUs don't have that bit in cpuid.

So we need to add early_init_xxxx to preset those bits before mtrr_bp_init
for those earlier CPUs.

this patch is for v2.6.27

Reported-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 17:50:55 +02:00
Krzysztof Helt 12cf105cd6 x86: delay early cpu initialization until cpuid is done
Move early cpu initialization to after the early cpu capability read, so
the early cpu initialization can fix up cpu caps.

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 17:50:38 +02:00
Yinghai Lu e322423471 x86, cpu init: call early_init_xxx in init_xxx
so we:

 1. could set some caps on the APs
 2. restore some caps after the memset in identify_cpu for the boot CPU

especially for CONSTANT_TSC this matters, as:

before this patch:
 flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow rep_good nopl pni monitor cx16 lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs

after this patch:
 flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl pni monitor cx16 lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs

so constant_tsc is back...

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:09:14 +02:00
Yinghai Lu 1b05d60d60 x86: remove duplicated get_model_name() calling
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:09:12 +02:00
Thomas Gleixner 72d43d9bc9 x86: HPET: read back compare register before reading counter
After fixing the u32 thinko I still had occasional hiccups on ATI chipsets
with small deltas. There seems to be a delay between writing the compare
register and the transfer to the internal register which triggers the
interrupt. Reading back the value makes sure that it hit the internal
match register before we compare against the counter value.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-06 07:21:17 +02:00
Thomas Gleixner f7676254f1 x86: HPET fix moronic 32/64bit thinko
We use the HPET only in 32bit mode because:
1) some HPETs are 32bit only
2) on i386 there is no way to read/write the HPET atomically 64 bits wide

The HPET code unification done by the "moron of the year" did
not take into account that unsigned long is different on 32 and
64 bit.

This thinko results in a possible endless loop in the clockevents
code, when the return comparison fails due to the 64bit/32bit
unawareness. 

unsigned long cnt = (u32) hpet_read() + delta can wrap over 32 bits,
but the final compare will fail and return -ETIME, causing endless
loops.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-06 07:21:17 +02:00
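
A self-contained illustration of the wrap problem described above (not the HPET code itself): adding a delta near the top of the 32-bit counter range wraps when the sum is kept in 32 bits, but not in a 64-bit unsigned long, so a comparison written for one width misbehaves on the other.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t counter = 0xfffffff0u;              /* the HPET counter is used 32 bits wide */
        uint32_t delta   = 0x20;

        uint32_t cmp32 = counter + delta;            /* wraps to 0x00000010 */
        uint64_t cmp64 = (uint64_t)counter + delta;  /* 0x100000010, never reached by a 32-bit counter */

        printf("32-bit compare value: 0x%08x\n", cmp32);
        printf("64-bit compare value: 0x%llx\n", (unsigned long long)cmp64);
        return 0;
}
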
H. Peter Anvin f31d731e44 x86: use X86_FEATURE_NOPL in alternatives
Use X86_FEATURE_NOPL to determine if it is safe to use P6 NOPs in
alternatives.  Also, replace the table and loop with a simple if statement.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-05 16:14:01 -07:00
H. Peter Anvin b6734c35af x86: add NOPL as a synthetic CPU feature bit
The long NOPs ("NOPL") are supposed to be detected by family >= 6.
Unfortunately, several non-Intel x86 implementations, both hardware
and software, don't obey this dictum.  Instead, probe for NOPL
directly by executing a NOPL instruction and see if we get #UD.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-05 16:13:52 -07:00
Linus Torvalds 1c402c8cd1 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: add io delay quirk for Presario F700
2008-09-05 14:36:21 -07:00
David Woodhouse e51af66308 x86: blacklist DMAR on Intel G31/G33 chipsets
Some BIOSes (the Intel DG33BU, for example) wrongly claim to have DMAR
when they don't. Avoid the resulting crashes when it doesn't work as
expected.

I'd still be grateful if someone could test it on a DG33BU with the old
BIOS though, since I've killed mine. I tested the DMI version, but not
this one.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 20:20:25 +02:00
Joerg Roedel cf169702ba x86, gart: add detection of AMD family 0x11 northbridges
This patch adds the detection of the northbridges in the AMD family 0x11
processors. It also fixes the magic numbers there while changing this code.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 19:11:44 +02:00
Ingo Molnar 28c3cfd5fb Merge branch 'linus' into x86/tracehook 2008-09-05 17:53:05 +02:00
FUJITA Tomonori 551b4545bf x86: gart alloc_coherent doesn't need to check NULL device argument
asm/dma-mapping.h guarantees that gart alloc_coherent doesn't get a NULL
device argument.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 12:48:13 +02:00
Thomas Gleixner 7cfb043533 HPET: make minimum reprogramming delta useful
The minimum reprogramming delta was hardcoded in HPET ticks,
which is stupid as it does not work with faster running HPETs.
The C1E idle patches made this prominent on AMD/RS690 chipsets,
where the HPET runs at 25MHz. Set it to 5us, which seems to be
a reasonable value and fixes the problems on the bug reporters'
machines. We have a further sanity check now in the clock events,
which increases the delta when it is not sufficient.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br>
Tested-by: Dmitry Nezhevenko <dion@inhex.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 11:11:54 +02:00
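
A small arithmetic sketch of why a time-based minimum delta is the right unit: the same 5us corresponds to very different tick counts depending on the HPET frequency (the 25MHz figure matches the AMD/RS690 case above; 14.318MHz is a common HPET rate and an assumption here).

#include <stdint.h>
#include <stdio.h>

static uint64_t min_delta_ticks(uint64_t hpet_freq_hz, uint64_t min_delta_us)
{
        return hpet_freq_hz * min_delta_us / 1000000;
}

int main(void)
{
        printf("14.318 MHz HPET: %llu ticks\n",
               (unsigned long long)min_delta_ticks(14318180, 5));   /* ~71 ticks */
        printf("25 MHz HPET:     %llu ticks\n",
               (unsigned long long)min_delta_ticks(25000000, 5));   /* 125 ticks */
        return 0;
}
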
Yinghai Lu bd220a24a9 x86: move nonx_setup etc from common.c to init_64.c
like 32-bit puts it in init_32.c

Signed-off-by: Yinghai <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 10:23:47 +02:00
Yinghai Lu f5017cfa35 x86: use cpu/common.c on 64 bit
Use cpu/common.c on both 64-bit and 32-bit and remove cpu/common_64.c.

We started out with this linecount:

  816  arch/x86/kernel/cpu/common_64.c
  805  arch/x86/kernel/cpu/common.c

and the resulting common.c is 1197 lines long, so there's already
424 lines of code eliminated in this phase of the unification.

Signed-off-by: Yinghai <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:57 +02:00
Ingo Molnar 143b604a2d x86: cpu/common*.c, merge whitespaces
Merge leftover whitespaces, to make arch/x86/kernel/cpu/common_64.c
exactly identical to arch/x86/kernel/cpu/common.c.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:56 +02:00
Yinghai Lu 102bbe3ab8 x86: cpu/common*.c, merge identify_cpu()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:56 +02:00
Yinghai Lu b89d3b3e2c x86: cpu/common*.c, merge generic_identify()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:55 +02:00
Yinghai Lu 56f0d033be x86: cpu/common*.c: merge print_cpu_info()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:54 +02:00
Yinghai Lu 6627d24230 x86: cpu/common*.c, merge early_identify_cpu()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:54 +02:00
Yinghai Lu 5122c890ba x86: cpu/common.c: merge get_cpu_cap()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:53 +02:00
Yinghai Lu 1cd78776c7 x86: cpu/common*.c, merge detect_ht()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:52 +02:00
Yinghai Lu 140fc72709 x86: cpu/common*.c, merge display_cacheinfo()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:51 +02:00
Yinghai Lu b9e67f0042 x86: cpu/common.c, merge default_init()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:50 +02:00
Yinghai Lu fab334c1d5 x86: cpu/common*.c, merge switch_to_new_gdt()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:50 +02:00
Yinghai Lu 1ba76586f7 x86: cpu/common*.c have same cpu_init(), with copying and #ifdef
hard to merge by lines... (as here we have material differences between
32-bit and 64-bit mode) - will try to do it later.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:49 +02:00
Yinghai Lu d5494d4f51 x86: cpu/common*.c, make 32-bit have 64-bit only functions
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:48 +02:00
Yinghai Lu ba51dced0b x86: cpu/common.c, let 64-bit code have 32-bit only functions
No effect on 64-bit.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:47 +02:00
Yinghai Lu 950ad7ff6e x86: same gdt_page with macro
Move the 32-bit and 64-bit gdt_page definitions next to each
other, separated with an #ifdef.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:47 +02:00
Yinghai Lu f0fc4aff1f x86: make header file the same in arch/x86/kernel/cpu/common_xx.c
Make the files more similar in preparation to unification, no
code changed.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:46 +02:00
Yinghai Lu 97e4db7c87 x86: make detect_ht depend on CONFIG_X86_HT
64-bit has X86_HT set too, so use that instead of SMP.

This also removes a include/asm-x86/processor.h ifdef.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:45 +02:00
Ingo Molnar 0c8c708a7e Merge branch 'x86/core' into x86/unify-cpu-detect 2008-09-05 09:27:23 +02:00
Ingo Molnar d3d0ba7b8f Merge commit '63cc8c75156462d4b42cbdd76c293b7eee7ddbfe':
"percpu: introduce DEFINE_PER_CPU_PAGE_ALIGNED() macro"

into x86/core

Conflicts:
	arch/x86/kernel/cpu/common.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:24:30 +02:00
Ingo Molnar 9042763808 Merge branch 'x86/x2apic' into x86/core
Conflicts:
	arch/x86/kernel/cpu/common_64.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:21:21 +02:00
Ingo Molnar 446d27338d Merge branch 'x86/cpu' into x86/core 2008-09-05 09:19:50 +02:00
Ingo Molnar accf0fa697 Merge branch 'x86/xsave' into x86/core 2008-09-05 09:18:39 +02:00
Ingo Molnar 4156e9a8ef x86: quick TSC calibration, improve
- make sure the final TSC timestamp is reliable too

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 23:21:57 +02:00
Linus Torvalds 6ac40ed041 x86: quick TSC calibration
Introduce a fast TSC-calibration method on sane hardware.

It only uses 17920 PIT timer ticks to calibrate the TSC, plus 256 ticks on
each side to make sure the TSC values were very close to the tick, so the
whole calibration takes 15ms. Yet, despite only taking 15ms,
we can actually give pretty stringent guarantees of accuracy:

 - the code requires that we hit each 256-counter block at least 50 times,
   so the TSC error is basically at *MOST* just a few PIT cycles off in
   any direction. In practice, it's going to be about one microsecond
   off (which is how long it takes to read the counter)

 - so over 17920 PIT cycles, we can pretty much guarantee that the
   calibration error is less than one half of a percent.

My testing bears this out: on my machine, the quick-calibration reports
2934.085kHz, while the slow one reports 2933.415.

Yes, the slower calibration is still more precise. For me, the slow
calibration is stable to within about one hundredth of a percent, so it's
(at a guess) roughly an order-and-a-half of magnitude more precise. The
longer you wait, the more precise you can be.

However, the nice thing about the fast TSC PIT synchronization is that
it's pretty much _guaranteed_ to give that 0.5% precision, and fail
gracefully (and very quickly) if it doesn't get it. And it really is
fairly simple (even if there's a lot of _details_ there, and I didn't get
all of those right on the first try or even the second ;)

The patch says "110 insertions", but 63 of those new lines are actually
comments.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 arch/x86/kernel/tsc.c |  111 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 110 insertions(+), 1 deletions(-)
2008-09-04 22:54:50 +02:00
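
A quick arithmetic check of the numbers quoted above, assuming the standard PIT input clock of 1193182 Hz; the calibration results are the values reported in the message.

#include <stdio.h>

int main(void)
{
        double pit_hz   = 1193182.0;
        double calib_ms = 17920.0 / pit_hz * 1000.0;       /* ~15.0 ms, matching the claim */
        double fast = 2934.085, slow = 2933.415;           /* reported fast vs slow results */
        double diff_pct = (fast - slow) / slow * 100.0;    /* ~0.023%, well inside the 0.5% bound */

        printf("calibration window: %.1f ms\n", calib_ms);
        printf("fast vs slow difference: %.3f%%\n", diff_pct);
        return 0;
}
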
Yinghai Lu 0a488a53d7 x86: move 32bit related functions together
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:47 +02:00
Yinghai Lu 01b2e16a7a x86: make get_mode_name of 64bit the same as 32bit
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:46 +02:00
Yinghai Lu a0854a46c5 x86: make 32bit support show_msr like 64 bit
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:46 +02:00
Yinghai Lu 10a434fcb2 x86: remove cpu_vendor_dev
1. add c_x86_vendor into cpu_dev
2. change cpu_devs to static
3. check c_x86_vendor before putting that cpu_dev into the array
4. remove alignment for 64bit
5. order the sequence in cpu_devs according to link sequence...
   so intel can be put first, then amd...

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:45 +02:00
Yinghai Lu 9d31d35b5f x86: order functions in cpu/common.c and cpu/common_64.c v2
v2: make 64 bit get c->x86_cache_alignment = c->x86_clflush_size

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:44 +02:00
Yinghai Lu 3da99c9776 x86: make (early)_identify_cpu more the same between 32bit and 64 bit
 1. add extended_cpuid_level for 32bit
 2. add generic_identify for 64bit
 3. add early_identify_cpu for 32bit
 4. early_identify_cpu not be called by identify_cpu
 5. remove early in get_cpu_vendor for 32bit
 6. add get_cpu_cap
 7. add cpu_detect for 64bit

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:44 +02:00
Krzysztof Helt 5031088dbc x86: delay early cpu initialization until cpuid is done
Move early cpu initialization to after the early cpu capability read, so
the early cpu initialization can fix up cpu caps.

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:43 +02:00
Yinghai Lu 5fef55fddb x86: move mtrr cpu cap setting early in early_init_xxxx
Krzysztof Helt found that MTRR is not detected on K6-2.

root cause:
	we moved mtrr_bp_init() early for mtrr trimming,
and in early_detect we only read the CPU capability from cpuid,
so some CPUs don't have that bit in cpuid.

So we need to add early_init_xxxx to preset those bits before mtrr_bp_init
for those earlier CPUs.

this patch is for v2.6.27

Reported-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:43 +02:00
Ingo Molnar 62b3f98188 Merge branch 'x86/debug' into x86/cpu 2008-09-04 21:08:09 +02:00
Yinghai Lu fac8f1e4f9 x86: split e820 reserved entries record to late, v7
try to insert_resource a second time, by expanding the resource...

for the case where an e820 reserved entry is partially overlapped with a BAR resource...

hope it will never happen

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:04:25 +02:00
Andi Kleen dc44e65943 x86: capitalize function call interrupts consistently
Impact: aesthetic

Capitalize function call interrupts consistently.

All other descriptions in /proc/interrupts are capitalized except
for "function call interrupts". Capitalize it too for consistency.

While that's technically a published ABI I think the risk of anyone
relying on that text to stay the same is negligible.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-04 10:51:36 -07:00
H. Peter Anvin aa3341a168 Merge branch 'x86/cpu' into x86/x2apic
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 09:21:21 -07:00
H. Peter Anvin fe47784ba5 Merge branch 'x86/cpu' into x86/xsave
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 09:04:45 -07:00
Ingo Molnar a5444d15b6 x86: split e820 reserved entries record to late v4
this one replaces:

| commit a2bd7274b4
| Author: Yinghai Lu <yhlu.kernel@gmail.com>
| Date:   Mon Aug 25 00:56:08 2008 -0700
|
|    x86: fix HPET regression in 2.6.26 versus 2.6.25, check hpet against BAR, v3

v2: insert e820 reserve resources before pnp_system_init
v3: fix merging problem in tip/x86/core
v4: address Linus's review about comments and condition in _late()

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:39:25 -07:00
Yinghai Lu 58f7c98850 x86: split e820 reserved entries record to late v2
so BAR resources, or even PnP, can register first.

v2: insert e820 reserve resources before pnp_system_init

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:37:57 -07:00
Thomas Gleixner a977c40095 x86: TSC make the calibration loop smarter
The last changes made the calibration loop 250ms long, which is far
too much. Try to do it more cleverly.

Experiments have shown that using a 10ms delay for the PIT based calibration
gives us a good enough value. If we have a reference (HPET/PMTIMER) and the
result of the PIT and the reference is close enough, then we can break out of
the calibration loop on a match right away and use the reference value.

Otherwise we just loop 3 times and decide then, which value to take.

One caveat is that for virtualized environments the PIT calibration often does
not work at all and I found out that 10us is a bit too short as well for the
reference to give a sane result. The solution here is to make the last loop
longer when the first two PIT calibrations failed.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:35 +02:00
Thomas Gleixner 827014be05 x86: TSC: use one set of reference variables
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:34 +02:00
Thomas Gleixner d683ef7afe x86: TSC: separate hpet/pmtimer calculation out
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:33 +02:00
Thomas Gleixner cce3e05724 x86: TSC: define the PIT latch value separate
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:33 +02:00
H. Peter Anvin 0ccd8c39bc Merge branch 'linus' into x86/core 2008-09-04 08:09:09 -07:00
Yinghai Lu 1625324d22 x86: move dir es7000 to es7000_32.c
to be aligned with numaq, summit.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:08:53 -07:00
H. Peter Anvin 7203781c98 Merge branch 'x86/cpu' into x86/core
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 08:08:42 -07:00
Ingo Molnar 42390cdec5 Merge branch 'linus' into x86/x2apic
Conflicts:
	arch/x86/kernel/cpu/cyrix.c
	include/asm-x86/cpufeature.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 13:02:35 +02:00
Alok N Kataria de014d6176 x86: Change warning message in TSC calibration.
When calibration against PIT fails, the warning that we print is misleading.
In a virtualized environment the VM may get descheduled during calibration,
or the check in PIT calibration may fail due to other virtualization
overheads.

The warning message explicitly assumes that calibration failed due to SMIs,
which may not be the case. Change that to something proper.

Signed-off-by: Alok N Kataria <akataria@vmware.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-03 20:10:37 -07:00
Chuck Ebbert e6a5652fd1 x86: add io delay quirk for Presario F700
Manually adding "io_delay=0xed" fixes system lockups in ioapic
mode on this machine.

System Information
	Manufacturer: Hewlett-Packard
	Product Name: Presario F700 (KA695EA#ABF)

Base Board Information
	Manufacturer: Quanta
	Product Name: 30D3

Reference:
https://bugzilla.redhat.com/show_bug.cgi?id=459546

Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-03 16:42:51 -07:00
Linus Torvalds ec0c15afb4 Split up PIT part of TSC calibration from native_calibrate_tsc
The TSC calibration function is still very complicated, but this makes
it at least a little bit less so by moving the PIT part out into a
helper function of its own.

Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-03 07:30:13 -07:00
Thomas Gleixner fbb16e2438 [x86] Fix TSC calibration issues
Larry Finger reported at http://lkml.org/lkml/2008/9/1/90:
An ancient laptop of mine started throwing errors from b43legacy when
I started using 2.6.27 on it. This has been bisected to commit bfc0f59
"x86: merge tsc calibration".

The unification of the TSC code adopted mostly the 64bit code, which
prefers PMTIMER/HPET over the PIT calibration.

Larry's system has an AMD K6 CPU. Such systems are known to have
PMTIMER incarnations which run at double speed. This results in a
miscalibration of the TSC by factor 0.5. So the resulting calibrated
CPU/TSC speed is half of the real CPU speed, which means that the TSC
based delay loop will run half the time it should run. That might
explain why the b43legacy driver went berserk.

On the other hand we know about systems, where the PIT based
calibration results in random crap due to heavy SMI/SMM
disturbance. On those systems the PMTIMER/HPET based calibration logic
with SMI detection shows better results.

According to Alok also virtualized systems suffer from the PIT
calibration method.

The solution is to use a more wreckage-aware approach than the current
either/or decision.

1) reimplement the retry loop which was dropped from the 32bit code
during the merge. It repeats the calibration and selects the lowest
frequency value as this is probably the closest estimate to the real
frequency

2) Monitor the delta of the TSC values in the delay loop which waits
for the PIT counter to reach zero. If the maximum value is
significantly different from the minimum, then we have a pretty safe
indicator that the loop was disturbed by an SMI.

3) keep the pmtimer/hpet reference as a backup solution for systems
where the SMI disturbance is a permanent point of failure for PIT
based calibration

4) do the loop iteration for both methods, record the lowest value and
decide after all iterations finished.

5) Set a clear preference to PIT based calibration when the result
makes sense.

The implementation does the reference calibration based on
HPET/PMTIMER around the delay, which is necessary for the PIT anyway,
but keeps separate TSC values to ensure the "independency" of the
resulting calibration values.

Tested on various 32bit/64bit machines including Geode 266Mhz, AMD K6
(affected machine with a double speed pmtimer which I grabbed out of
the dump), Pentium class machines and AMD/Intel 64 bit boxen.

Bisected-by:  Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-02 20:35:56 -07:00
Joe Korty 2c7e9fd4c6 x86: make poll_idle behave more like the other idle methods
Make poll_idle() behave more like the other idle methods.

Currently, poll_idle() returns immediately.  The other
idle methods all wait indefinitely for some condition
to come true before returning.  poll_idle should emulate
these other methods and also wait for a return condition,
in this case, for need_resched() to become 'true'.

Without this delay the idle loop spends all of its time
in the outer loop that calls poll_idle.  This outer loop,
these days, does real work, some of it under rcu locks.
That work should only be done when idle is entered and
when idle exits, not continuously while idle is spinning.

Signed-off-by: Joe Korty <joe.korty@ccur.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-28 11:29:48 +02:00
H. Peter Anvin 7414aa41a6 x86: generate names for /proc/cpuinfo from <asm/cpufeature.h>
We have had a number of cases where <asm/cpufeature.h> (and its
predecessors) have diverged substantially from the names list in
/proc/cpuinfo.  This patch generates the latter from the former.

It retains the option for explicitly overriding the strings, but by
making that require a separate action it should at least be less
likely to happen.

It would be good to do a future pass and rename strings that are
gratuitously different in the kernel (/proc/cpuinfo is a userspace
interface and must remain constant.)

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-27 19:23:22 -07:00
H. Peter Anvin b30a72a7ed Merge branch 'x86/urgent' into x86/cpu
Conflicts:

	arch/x86/kernel/cpu/cyrix.c
2008-08-27 19:17:07 -07:00
Suresh Siddha 11c231a962 x86: use x2apic id reported by cpuid during topology discovery, fix
v2: Fix for !SMP build

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-27 09:02:19 +02:00
Hiroshi Shimamoto a817260874 x86: acpi: move acpi_mcfg_64bit_base_addr into CONFIG_PCI_MMCONFIG
acpi_mcfg_64bit_base_addr is used when CONFIG_PCI_MMCONFIG is enabled.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-27 08:25:06 +02:00
H. Peter Anvin 94d4ac2f4a Merge branch 'x86/urgent' into x86/cleanups 2008-08-25 22:45:37 -07:00
H. Peter Anvin 9ea2b82ed6 x86: cpuid: correct return value on partial operations
Return the correct return value when the CPUID driver partially
completes a request (we should return the number of bytes actually
read or written, instead of the error code.)

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:46:12 -07:00
H. Peter Anvin 85f1cb6015 x86: msr: correct return value on partial operations
Return the correct return value when the MSR driver partially
completes a request (we should return the number of bytes actually
read or written, instead of the error code.)

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:46:12 -07:00
H. Peter Anvin 4b46ca701b x86: cpuid: propagate error from smp_call_function_single()
Propagate error (-ENXIO) from smp_call_function_single() in the CPUID
driver.  This can happen when a CPU is unplugged while the CPUID
driver is open.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:45:48 -07:00
H. Peter Anvin c6f31932d0 x86: msr: propagate errors from smp_call_function_single()
Propagate error (-ENXIO) from smp_call_function_single().  These
errors can happen when a CPU is unplugged while the MSR driver is
open.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:45:48 -07:00
Peter Zijlstra 52a8968ce9 x86: fix cpufreq + sched_clock() regression
I noticed that my sched_clock() was slow on a number of machines, so I
started looking at cpufreq.

The below seems to fix the problem for me.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 14:39:19 +02:00
Ingo Molnar f58899bb02 Merge branch 'linus' into x86/urgent 2008-08-25 14:39:12 +02:00
Avi Kivity c7ffa6c262 x86: default to reboot via ACPI
Triple-fault and keyboard reset may assert INIT instead of RESET; however
INIT is blocked when Intel VT is enabled.  This leads to a partially reset
machine when invoking emergency_restart via sysrq-b: the processor is still
working but other parts of the system are dead.

Default to rebooting via ACPI, which correctly asserts RESET and reboots the
machine.

This is safe since we will fall back to keyboard reset and triple fault if
acpi is not enabled or if the reset is not successful.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 12:31:32 +02:00
Ingo Molnar ea1c9de45e Merge branch 'x86/urgent' into x86/cleanups 2008-08-25 11:10:42 +02:00
Linus Torvalds 060700b571 x86: do not enable TSC notifier if we don't need it
Impact: crash on non-TSC-equipped CPUs

Don't enable the TSC notifier if we *either*:

1. don't have a TSC, or
2. have a CPU with constant TSC.

In either of those cases, the notifier is either damaging (1) or useless (2).

From: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-24 17:16:28 -07:00
Rafael J. Wysocki 8735728ef8 x86 MCE: Fix CPU hotplug problem with multiple multicore AMD CPUs
During CPU hot-remove the sysfs directory created by
threshold_create_bank(), defined in
arch/x86/kernel/cpu/mcheck/mce_amd_64.c, has to be removed before
its parent directory, created by mce_create_device(), defined in
arch/x86/kernel/cpu/mcheck/mce_64.c .  Moreover, when the CPU in
question is hotplugged again, obviously the latter has to be created
before the former.  At present, the right ordering is not enforced,
because all of these operations are carried out by CPU hotplug
notifiers which are not appropriately ordered with respect to each
other.  This leads to serious problems on systems with two or more
multicore AMD CPUs, among other things during suspend and hibernation.

Fix the problem by placing threshold bank CPU hotplug callbacks in
mce_cpu_callback(), so that they are invoked at the right places,
if defined.  Additionally, use kobject_del() to remove the sysfs
directory associated with the kobject created by
kobject_create_and_add() in threshold_create_bank(), to prevent the
kernel from crashing during CPU hotplug operations on systems with
two or more multicore AMD CPUs.

This patch fixes bug #11337.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Andi Kleen <andi@firstfloor.org>
Tested-by: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-23 17:49:19 +02:00
Suresh Siddha e17941b0c1 x86: use x2apic id reported by cpuid during topology discovery
Use the x2apic id reported by cpuid during topology discovery, instead of the
apic id configured in the APIC. For most systems, the x2apic id
reported by cpuid leaf 0xb will be the same as the physical apic id reported
by the APIC_ID register of the APIC. We follow the suggested guidelines
and use the apic id reported by cpuid.

No change to non-generic UV platforms, will use the apic id reported in the
APIC_ID register as the cpuid reported apic id's may not be unique.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-23 17:47:11 +02:00
Suresh Siddha bbb65d2d36 x86: use cpuid vector 0xb when available for detecting cpu topology
cpuid leaf 0xb provides extended topology enumeration. This interface provides
the 32-bit x2APIC id of the logical processor and it also provides a new
mechanism to detect SMT and core siblings (which provides increased
addressability).

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-23 17:47:10 +02:00
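
A hedged user-space illustration of the cpuid leaf 0xb interface these commits start using: for each subleaf, EDX returns the 32-bit x2APIC id, ECX bits 15:8 the level type (1 = SMT, 2 = core), and EAX bits 4:0 the shift to the next topology level. This only illustrates the CPUID interface, not the kernel's detection code.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx, level;

        __cpuid(0, eax, ebx, ecx, edx);
        if (eax < 0xb) {
                puts("cpuid leaf 0xb not supported");
                return 0;
        }
        for (level = 0; ; level++) {
                __cpuid_count(0xb, level, eax, ebx, ecx, edx);
                if (((ecx >> 8) & 0xff) == 0)     /* level type 0: end of enumeration */
                        break;
                printf("level %u: type %u, shift %u, x2apic id %u\n",
                       level, (ecx >> 8) & 0xff, eax & 0x1f, edx);
        }
        return 0;
}
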
Ingo Molnar 87ce786ae5 Merge branch 'x86/cpu' into x86/x2apic 2008-08-23 17:46:59 +02:00
Linus Torvalds 358c323c17 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: work around MTRR mask setting, v2
  x86: fix section mismatch warning - uv_cpu_init
  x86: fix VMI for early params
  x86: fix two modpost warnings in mm/init_64.c
  x86: fix 1:1 mapping init on 64-bit (memory hotplug case)
  x86: work around MTRR mask setting
  x86: PAT Update validate_pat_support for intel CPUs
  devmem, x86: PAT Change /dev/mem mmap with O_SYNC to use UC_MINUS
  x86: PAT proper tracking of set_memory_uc and friends
  x86: fix BUG: unable to handle kernel paging request (numaq_tsc_disable)
  x86: export pv_lock_ops non-GPL
  x86, mmiotrace: silence section mismatch warning - leave_uniprocessor
  x86: use WARN() in arch/x86/kernel
  x86: use WARN() in arch/x86/mm/ioremap.c
  werror: fix pci calgary
  x86: fix oprofile + hibernation badness
  x86, SGI UV: hardcode the TLB flush interrupt system vector
  x86: fix Xorg startup/shutdown slowdown with PAT
  x86: fix "kernel won't boot on a Cyrix MediaGXm (Geode)"
  x86 iommu: remove unneeded parenthesis
2008-08-22 08:23:53 -07:00
Ingo Molnar 9754a5b840 x86: work around MTRR mask setting, v2
improve the debug printout:

- make it actually display something
- print it only once

would be nice to have a WARN_ONCE() facility, to feed such things to
kerneloops.org.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 14:12:31 +02:00
Marcin Slusarz c4bd1fdab0 x86: fix section mismatch warning - uv_cpu_init
WARNING: vmlinux.o(.cpuinit.text+0x3cc4): Section mismatch in reference from the function uv_cpu_init() to the function .init.text:uv_system_init()
The function __cpuinit uv_cpu_init() references
a function __init uv_system_init().
If uv_system_init is only used by uv_cpu_init then
annotate uv_system_init with a matching annotation.

uv_system_init was meant to be called only once, so do it from a codepath
(native_smp_prepare_cpus) which is called once, right before activation
of other cpus (smp_init).

Note: old code relied on uv_node_to_blade being initialized to 0,
but it's not initialized from anywhere.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Acked-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 14:12:20 +02:00
Yinghai Lu b05f78f5c7 x86_64: printout msr -v2
commandline show_msr=1 for bsp, show_msr=32 for all 32 cpus.

[ mingo@elte.hu: added documentation ]

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 10:43:21 +02:00
FUJITA Tomonori 421076e2be x86: dma_*_coherent rework patchset v2, fix
The alloc_coherent dma_ops callback was added to GART; however, it doesn't
return a size-aligned address wrt dma_alloc_coherent, as
DMA-mapping.txt defines. This patch fixes it.

This patch also removes unused gart_map_simple
(dma_mapping_ops->map_simple has gone).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 10:18:26 +02:00
Ingo Molnar 0d8136ea50 Merge branch 'x86/gart' into x86/iommu 2008-08-22 09:03:43 +02:00
FUJITA Tomonori 766af9fa81 dma-mapping.h, x86: remove last user of dma_mapping_ops->map_simple
pci-dma.c doesn't use map_simple hook any more so we can remove it
from struct dma_mapping_ops now.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:43:25 +02:00
Joerg Roedel 2cd54961ca x86, AMD IOMMU: remove obsolete FIXME comment
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:34:52 +02:00
Joerg Roedel 6c505ce393 x86: move dma_*_coherent functions to include file
All the x86 DMA-API functions are defined in asm/dma-mapping.h. This patch
also moves the dma_*_coherent functions to this header file because they are
now small enough to do so.
This is done as a separate patch because it also includes some renaming and
restructuring of the dma-mapping.h file.

Signed-off-by: Joerg Roedel <joerg.roede@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:34:51 +02:00
Joerg Roedel c647c3bb2d x86: cleanup dma_*_coherent functions
All dma_ops implementations support the alloc_coherent and free_coherent
callbacks now. This allows a big simplification of the dma_alloc_coherent
function, which is done with this patch. The dma_free_coherent function is also
cleaned up and now calls the free_coherent callback of the dma_ops
implementation.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:34:50 +02:00
Joerg Roedel a3a76532e0 x86: add free_coherent dma_ops callback to NOMMU driver
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:34:49 +02:00
Joerg Roedel c5e835f964 x86: add alloc_coherent dma_ops callback to NOMMU driver
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:34:48 +02:00
Joerg Roedel e4ad68b651 x86: add free_coherent dma_ops callback to Calgary IOMMU driver
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:34:47 +02:00
Joerg Roedel 43a5a5a09b x86: add free_coherent dma_ops callback to GART driver
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:34:46 +02:00
Joerg Roedel 94581094e7 x86: add alloc_coherent dma_ops callback to GART driver
[ v2 - x86: make gart_alloc_coherent return zeroed memory

  FUJITA Tomonori pointed out that the dma_alloc_coherent function
  should return memory set to zero. This patch adds this to the GART
  implementation too. ]

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:33:55 +02:00
Alok Kataria 3a6ddd5f18 x86: fix VMI for early params
While fixing a different bug I moved the call to vmi_init to before
early params could be parsed.

This broke the VMI-specific command line parameters.
Fix that by moving vmi initialization to after the kernel has had a chance to
parse early parameters.

Signed-off-by: Alok N Kataria <akataria@vmware.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 08:01:54 +02:00
Yinghai Lu 38cc1c3df7 x86: work around MTRR mask setting
Joshua Hoblitt reported that only 3 GB of his 16 GB of RAM is
usable. Booting with mtrr_show showed us the BIOS-initialized
MTRR settings - which are all wrong.

So the root cause is that the BIOS has not set the mask correctly:

>               [    0.429971]  MSR00000200: 00000000d0000000
>               [    0.433305]  MSR00000201: 0000000ff0000800
> should be ==> [    0.433305]  MSR00000201: 0000003ff0000800
>
>               [    0.436638]  MSR00000202: 00000000e0000000
>               [    0.439971]  MSR00000203: 0000000fe0000800
> should be ==> [    0.439971]  MSR00000203: 0000003fe0000800
>
>               [    0.443304]  MSR00000204: 0000000000000006
>               [    0.446637]  MSR00000205: 0000000c00000800
> should be ==> [    0.446637]  MSR00000205: 0000003c00000800
>
>               [    0.449970]  MSR00000206: 0000000400000006
>               [    0.453303]  MSR00000207: 0000000fe0000800
> should be ==> [    0.453303]  MSR00000207: 0000003fe0000800
>
>               [    0.456636]  MSR00000208: 0000000420000006
>               [    0.459970]  MSR00000209: 0000000ff0000800
> should be ==> [    0.459970]  MSR00000209: 0000003ff0000800

So detect this borkage and add the prefix 111.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-22 05:49:35 +02:00
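
A small arithmetic check of the expected mask values quoted above, assuming a 38-bit physical address width (which is what the corrected values imply) and the architectural PhysMask layout where bit 11 is the valid bit.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int      phys_bits = 38;                  /* assumed width for this machine */
        uint64_t size      = 0x10000000ULL;       /* 256 MB region, as in MSR00000200 */
        uint64_t mask      = ~(size - 1) & ((1ULL << phys_bits) - 1);

        /* -> 0000003ff0000800, matching the "should be" value for MSR00000201 */
        printf("expected PhysMask: %016llx\n", (unsigned long long)(mask | 0x800));
        return 0;
}
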