Commit graph

13 commits

Author SHA1 Message Date
Anton Blanchard fd9648dff6 [PATCH] ppc64: Add ptrace data breakpoint support
Add hardware data breakpoint support.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-12 17:19:12 +10:00
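
For illustration, a minimal userspace sketch of arming a data write watchpoint on a
traced child through an interface like this one; the ptrace request number and the
DABR flag bits below are assumptions for illustration, not taken from the patch:

    #include <sys/ptrace.h>
    #include <sys/types.h>

    #ifndef PTRACE_SET_DEBUGREG
    #define PTRACE_SET_DEBUGREG 26          /* assumed ppc64 request number */
    #endif

    #define DABR_TRANSLATION 0x4UL          /* assumed: match translated addresses */
    #define DABR_DATA_WRITE  0x2UL          /* assumed: break on data writes */

    /* Arm a hardware write watchpoint on a traced child: the DABR holds the
     * doubleword-aligned effective address plus match-condition bits. */
    long set_watchpoint(pid_t child, unsigned long addr)
    {
        unsigned long dabr = (addr & ~7UL) | DABR_TRANSLATION | DABR_DATA_WRITE;

        return ptrace(PTRACE_SET_DEBUGREG, child, 0, dabr);
    }
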
Prasanna S Panchamukhi bb144a85c7 [PATCH] Kprobes: prevent possible race conditions ppc64 changes
This patch contains the ppc64 architecture-specific changes needed to prevent
possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:58:00 -07:00
Jake Moilanen 04ed65190a [PATCH] oprofile PVR 970MP
Here's the 970MP's PVR (processor version register) entry for oprofile.

Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-30 13:38:19 +10:00
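
As background, the PVR identifies the processor model, and oprofile (via the CPU
table) matches it against a mask/value entry.  A generic sketch of that matching
scheme follows; the field names, table contents and PVR value are assumptions for
illustration, not the actual table entry:

    struct pvr_entry {
        unsigned int mask;              /* PVR bits that must match      */
        unsigned int value;             /* expected value under the mask */
        const char  *oprofile_type;     /* e.g. "ppc64/970MP" (assumed)  */
    };

    static const struct pvr_entry pvr_table[] = {
        { 0xffff0000, 0x00440000, "ppc64/970MP" },  /* assumed 970MP PVR */
    };

    const struct pvr_entry *identify_cpu(unsigned int pvr)
    {
        unsigned int i;

        for (i = 0; i < sizeof(pvr_table) / sizeof(pvr_table[0]); i++)
            if ((pvr & pvr_table[i].mask) == pvr_table[i].value)
                return &pvr_table[i];
        return 0;                       /* unknown model */
    }
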
David Gibson e28f7faf05 [PATCH] Four level pagetables for ppc64
Implement 4-level pagetables for ppc64

This patch implements full four-level page tables for ppc64, thereby
extending the usable user address range to 44 bits (16T).

The patch uses a full page for the tables at the bottom and top level,
and a quarter page for the intermediate levels.  It uses full 64-bit
pointers at every level, thus also increasing the addressable range of
physical memory.  This patch also tweaks the VSID allocation to allow
a matching range for user addresses (this halves the number of available
contexts) and adds some #if and BUILD_BUG sanity checks.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:31 +10:00
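
To make the 44-bit figure concrete, here is a back-of-the-envelope sketch of how the
index widths could add up under the layout described above; the names and shift
values are illustrative, not the kernel's macros:

    /* Assuming a 4 KB base page and 8-byte table entries: a full-page table
     * holds 512 entries (9 bits of index), a quarter-page table 128 entries
     * (7 bits).  12 + 9 + 7 + 7 + 9 = 44 bits, i.e. 16 TB of user space. */

    #define PAGE_SHIFT 12               /* 4 KB base pages            */
    #define PTE_BITS   9                /* bottom level: full page    */
    #define PMD_BITS   7                /* middle level: quarter page */
    #define PUD_BITS   7                /* middle level: quarter page */
    #define PGD_BITS   9                /* top level: full page       */

    static inline unsigned long pte_index(unsigned long ea)
    {
        return (ea >> PAGE_SHIFT) & ((1UL << PTE_BITS) - 1);
    }

    static inline unsigned long pmd_index(unsigned long ea)
    {
        return (ea >> (PAGE_SHIFT + PTE_BITS)) & ((1UL << PMD_BITS) - 1);
    }

    static inline unsigned long pud_index(unsigned long ea)
    {
        return (ea >> (PAGE_SHIFT + PTE_BITS + PMD_BITS)) & ((1UL << PUD_BITS) - 1);
    }

    static inline unsigned long pgd_index(unsigned long ea)
    {
        return (ea >> (PAGE_SHIFT + PTE_BITS + PMD_BITS + PUD_BITS)) &
               ((1UL << PGD_BITS) - 1);
    }
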
Anton Blanchard 8dc4fd87f2 [PATCH] ppc64: Turn runlatch on in exception entry
Enable the runlatch at the start of each exception.  Unfortunately we are out
of space in the 0x300 handler, so I added it a bit later.

The SPR write is fairly expensive; perhaps we should cache the runlatch state
in the paca and avoid the write when possible.

We don't need to turn the runlatch off here; we do that in the idle loop.  Better
to take the hit in the idle loop than on each exception exit.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-07 18:23:37 -07:00
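
A small sketch of the optimisation suggested above (not what this patch implements):
keep the runlatch state cached in the per-CPU paca and only do the SPR write when the
state actually changes.  The structure, field and helper below are hypothetical.

    struct paca_sketch {
        unsigned char run_latch;        /* hypothetical cached state     */
        /* ... other per-CPU fields ... */
    };

    static void ctrl_set_runlatch(void)
    {
        /* In the kernel this would be a read-modify-write of the CTRL SPR
         * (mfspr/mtspr); stubbed out to keep the sketch freestanding. */
    }

    void runlatch_on_cached(struct paca_sketch *paca)
    {
        if (paca->run_latch)
            return;                     /* already on: skip the SPR write */

        paca->run_latch = 1;
        ctrl_set_runlatch();
    }
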
Anton Blanchard a2f7a9ce2a [PATCH] ppc64: Fix runlatch code to work on pseries machines
Not all ppc64 CPUs have the CTRL SPR, so we need a cputable feature for it.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-07 18:23:37 -07:00
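
A sketch of what a cputable-gated helper could look like: the CTRL SPR is only
touched when the current CPU's feature word has the corresponding bit set.  The
feature-bit value and the stubbed SPR access are assumptions for illustration.

    #define CPU_FTR_CTRL (1UL << 4)     /* hypothetical feature-bit value */

    static unsigned long cur_cpu_features;     /* filled in from the cputable */

    static int cpu_has_feature(unsigned long feature)
    {
        return (cur_cpu_features & feature) != 0;
    }

    void ppc64_runlatch_on(void)
    {
        if (!cpu_has_feature(CPU_FTR_CTRL))
            return;                     /* this CPU has no CTRL SPR */

        /* Read CTRL, set the runlatch bit, write it back (mfspr/mtspr);
         * elided here to keep the sketch freestanding. */
    }
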
Arnd Bergmann fef1c772fa [PATCH] ppc64: add BPA platform type
This adds basic support for running on BPA machines.
So far, that means only the IBM workstation; it will
not run on others without a little more generalization.

It should be possible to configure a kernel with
CONFIG_PPC_BPA in any combination with the other
multiplatform targets.

Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-06-23 09:43:37 +10:00
David Gibson 20cee16ced [PATCH] ppc64: Abolish ioremap_mm
Currently ppc64 has two mm_structs for the kernel, init_mm and also
ioremap_mm.  The latter really isn't necessary: this patch abolishes it,
instead restricting vmallocs to the lower 1TB of the init_mm's range and
placing io mappings in the upper 1TB.  This simplifies the code in a number
of places and eliminates an unnecessary set of pagetables.  It also tweaks
the unmap/free path a little, allowing us to remove the unmap_im_area() set
of page table walkers, replacing them with unmap_vm_area().

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:26 -07:00
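
A toy sketch of the split described above: one kernel region whose lower terabyte
holds vmalloc mappings and whose upper terabyte holds io mappings.  The base address
and names are illustrative, not the actual ppc64 layout.

    #define REGION_BASE   0xd000000000000000UL      /* assumed region start */
    #define ONE_TB        (1UL << 40)

    #define VMALLOC_START (REGION_BASE)
    #define VMALLOC_END   (REGION_BASE + ONE_TB)        /* lower 1 TB: vmalloc */
    #define IMALLOC_START (REGION_BASE + ONE_TB)        /* upper 1 TB: io maps */
    #define IMALLOC_END   (REGION_BASE + 2 * ONE_TB)

    static inline int in_vmalloc_region(unsigned long ea)
    {
        return ea >= VMALLOC_START && ea < VMALLOC_END;
    }

    static inline int in_io_region(unsigned long ea)
    {
        return ea >= IMALLOC_START && ea < IMALLOC_END;
    }
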
Anton Blanchard 6dc2f0c7df [PATCH] ppc64: cleanup iseries runlight support
The iSeries has a bar graph on the front panel that shows how busy it is.
The operating system sets and clears a bit in the CTRL register to control
it.

Instead of going to the complexity of using a thread info bit, just set and
clear it in the idle loop.

Also create two helper functions, ppc64_runlatch_on and ppc64_runlatch_off.

Finally, don't use the short form of the SPR defines.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-02 15:12:30 -07:00
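
A schematic of the resulting idle-loop pattern using the two helpers named above;
the loop body and the stubbed waits are illustrative, not the actual iSeries idle
code.

    /* Clear the runlatch (dimming the front-panel activity display) while
     * idle, set it again before doing real work.  The helpers are stubbed;
     * in the kernel they set or clear the runlatch bit in the CTRL SPR. */

    static void ppc64_runlatch_on(void)  { /* set CTRL runlatch (elided)   */ }
    static void ppc64_runlatch_off(void) { /* clear CTRL runlatch (elided) */ }

    static int  work_is_pending(void) { return 1; }     /* hypothetical test */
    static void low_power_wait(void)  { }               /* hypothetical wait */

    void idle_loop_sketch(void)
    {
        for (;;) {
            ppc64_runlatch_off();       /* nothing to do: show idle */

            while (!work_is_pending())
                low_power_wait();

            ppc64_runlatch_on();        /* back to useful work */
            /* ... run the pending work, then loop ... */
        }
    }
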
Anton Blanchard 79f1248962 [PATCH] ppc64: cleanup SPR definitions
There are a bunch of irrelevant SPR definitions in asm/processor.h.  Cut
them down a bit, and add a DABR_TRANSLATION define which will be used
shortly.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-02 15:12:30 -07:00
Hugh Dickins ee39b37b23 [PATCH] freepgt: remove MM_VM_SIZE(mm)
There's only one usage of MM_VM_SIZE(mm) left, and it's a troublesome macro
because mm doesn't contain the (32-bit emulation?) info needed.  But it too is
only needed because we ignore the end from the vma list.

We could make flush_pgtables return that end, or unmap_vmas.  Choose the
latter, since it's a natural fit with unmap_mapping_range_vma needing to know
its restart addr.  This does make more than minimal change, but if unmap_vmas
had returned the end before, this is how we'd have done it, rather than
storing the break_addr in zap_details.

unmap_vmas used to return a count of vmas scanned, but that's just debug info
which hasn't been useful in a while; and if we want the map_count-is-zero-on-exit
check back, it can easily come from the final remove_vm_struct loop.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-19 13:29:15 -07:00
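
A heavily simplified sketch of the interface change being described: the unmap
walker reports the end address it reached, instead of a count of vmas, so callers no
longer need MM_VM_SIZE() or a break_addr stashed in zap_details.  The types and
names below are stand-ins, not the kernel's.

    struct vma_sketch {
        unsigned long      vm_start;
        unsigned long      vm_end;
        struct vma_sketch *vm_next;
    };

    /* The point is the return value: the address the walk stopped at,
     * rather than how many vmas were scanned. */
    unsigned long unmap_vmas_sketch(struct vma_sketch *vma,
                                    unsigned long start_addr,
                                    unsigned long end_addr)
    {
        unsigned long end = start_addr;

        for (; vma && vma->vm_start < end_addr; vma = vma->vm_next) {
            unsigned long to = vma->vm_end < end_addr ? vma->vm_end : end_addr;

            /* ... zap the pages in [max(vma->vm_start, start_addr), to) ... */
            end = to;
        }
        return end;     /* callers restart from here instead of break_addr */
    }
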
Olof Johansson e63f8f439d [PATCH] ppc64: no prefetch for NULL pointers
For prefetches of NULL (as when walking a short linked list), PPC64 will in
some cases take a performance hit.  The hardware needs to do the TLB walk,
and said walk will always miss, which means (up to) two L2 misses as
penalty.  This seems to hurt overall performance, so for NULL pointers, skip
the prefetch altogether.

Signed-off-by: Olof Johansson <olof@austin.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16 15:24:38 -07:00
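
The resulting primitive amounts to a cheap NULL test in front of the cache-touch
hint; a sketch of the idea (the dcbt asm is the usual ppc64 data-cache-touch
instruction, but treat the exact form as illustrative rather than quoted from the
patch):

    /* Prefetching a NULL pointer forces a hardware page-table walk that is
     * guaranteed to miss, so test for NULL before issuing the cache touch. */

    static inline void prefetch_sketch(const void *p)
    {
        if (!p)                         /* cheap test beats a certain TLB miss */
            return;

    #ifdef __powerpc64__
        __asm__ __volatile__("dcbt 0,%0" : : "r" (p));
    #else
        (void)p;                        /* elsewhere: just illustrating the guard */
    #endif
    }
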
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00