Commit graph

609 commits

Andreas Färber
182735efaf cpu: Make first_cpu and next_cpu CPUState
Move next_cpu from CPU_COMMON to CPUState.
Move first_cpu variable to qom/cpu.h.

gdbstub needs to use CPUState::env_ptr for now.
cpu_copy() no longer needs to save and restore next_cpu.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
[AF: Rebased, simplified cpu_copy()]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:32:54 +02:00
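
As an illustration of the list layout described in the commit above, here is a minimal standalone sketch; the types are simplified stand-ins, not QEMU's actual CPUState definition:

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for CPUState: the commit moves the next_cpu
     * link and the global first_cpu head into the common CPU type, so
     * generic code can walk all CPUs without knowing CPUArchState. */
    typedef struct CPUState {
        int cpu_index;
        struct CPUState *next_cpu;   /* singly linked list of all CPUs */
    } CPUState;

    static CPUState *first_cpu;      /* list head, now architecture-independent */

    static void cpu_list_add(CPUState *cpu)
    {
        CPUState **p = &first_cpu;
        while (*p) {
            p = &(*p)->next_cpu;     /* append at the tail */
        }
        *p = cpu;
    }

    int main(void)
    {
        for (int i = 0; i < 3; i++) {
            CPUState *cpu = calloc(1, sizeof(*cpu));
            cpu->cpu_index = i;
            cpu_list_add(cpu);
        }
        for (CPUState *cpu = first_cpu; cpu; cpu = cpu->next_cpu) {
            printf("cpu %d\n", cpu->cpu_index);
        }
        return 0;
    }
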
Andreas Färber
4917cf4432 cpu: Replace cpu_single_env with CPUState current_cpu
Move it to qom/cpu.h.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-09 21:20:28 +02:00
Paolo Bonzini
c7086b4a23 exec: change some APIs to take AddressSpaceDispatch
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:50 +02:00
Paolo Bonzini
6092666ebd exec: remove cur_map
cur_map is not used anymore; instead, each AddressSpaceDispatch
has its own nodes/sections pair.  The priorities of the
MemoryListeners, and in the future RCU, guarantee that the
nodes/sections are not freed while they are still in use.

(In fact, next_map itself is not needed except to free the data on the
next update).

To avoid incorrect use, replace cur_map with a temporary copy that
is only valid while the topology is being updated.  If you use it,
the name prev_map makes it clear that you're doing something weird.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:50 +02:00
Paolo Bonzini
0475d94fff exec: put memory map in AddressSpaceDispatch
After this patch, AddressSpaceDispatch holds a consistent tuple of
(phys_map, nodes, sections).  This will be important when updates
of the topology will run concurrently with reads.

cur_map is not used anymore except for freeing it at the end of the
topology update.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:49 +02:00
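
A rough sketch of the consistent tuple described above; the struct layout is illustrative only, not QEMU's actual definition:

    #include <stdint.h>

    /* Illustrative stand-ins for the pieces named in the commit message. */
    typedef struct PhysPageEntry {
        uint32_t ptr;        /* index of the next node, or of a section at a leaf */
        uint32_t is_leaf;
    } PhysPageEntry;

    typedef PhysPageEntry Node[64];            /* one radix-tree level (sketch) */

    typedef struct MemoryRegionSection {
        void *mr;                              /* would be a MemoryRegion * */
        uint64_t offset_within_address_space;
    } MemoryRegionSection;

    /* After the patch, each dispatch structure carries its own tuple, so
     * a reader never mixes a phys_map from one topology update with the
     * nodes/sections arrays of another. */
    typedef struct AddressSpaceDispatch {
        PhysPageEntry phys_map;                /* root of the radix tree */
        Node *nodes;                           /* backing array for tree nodes */
        MemoryRegionSection *sections;         /* leaves index into this array */
        unsigned nodes_nb;
        unsigned sections_nb;
    } AddressSpaceDispatch;
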
Paolo Bonzini
0075270317 exec: separate current radix tree from the one being built
The same treatment previously given to phys_node_map and phys_sections
is now applied to the dispatch field of AddressSpace.  Topology updates
use as->next_dispatch while accesses use as->dispatch.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:49 +02:00
Paolo Bonzini
89ae337acb exec: move listener from AddressSpaceDispatch to AddressSpace
This will help us keep two copies of AddressSpaceDispatch during the
recreation of the radix tree (one being built, and one that is complete
and will be protected by RCU).  We do not want to have to unregister and
re-register the listener.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:49 +02:00
Paolo Bonzini
9affd6fc0e exec: separate current memory map from the one being built
Currently, phys_node_map and phys_sections are shared by all
AddressSpaceDispatch structures.  When the memory topology is
updated, every AddressSpaceDispatch rebuilds its dispatch tables
sequentially on them.  In order to prepare for RCU access, leave
the old memory map alive while the next one is being accessed.

When rebuilding, the new dispatch tables will build and look up
next_map; after all dispatch tables are rebuilt, we can switch
to next_* and free the previous table.

Based on a patch from Liu Ping Fan.

Signed-off-by: Liu Ping Fan <qemulist@gmail.com>
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:49 +02:00
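
A sketch of the build-then-switch pattern outlined above; the helper names and the PhysPageMap layout are assumptions made for illustration:

    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for the shared map of nodes and sections. */
    typedef struct PhysPageMap {
        void *nodes;
        void *sections;
        unsigned nodes_nb;
        unsigned sections_nb;
    } PhysPageMap;

    static PhysPageMap cur_map;      /* what readers currently use */
    static PhysPageMap next_map;     /* what the topology update builds */

    /* Hypothetical rebuild step: dispatch tables are repopulated against
     * next_map while readers keep using cur_map untouched. */
    static void rebuild_all_dispatch_tables(PhysPageMap *map)
    {
        (void)map;                   /* the real code rebuilds the radix trees here */
    }

    static void phys_page_map_free(PhysPageMap *map)
    {
        free(map->nodes);
        free(map->sections);
        memset(map, 0, sizeof(*map));
    }

    /* Only after every dispatch table has been rebuilt do we switch to
     * the new map and free the previous one, as the message describes. */
    void memory_topology_commit(void)
    {
        PhysPageMap prev = cur_map;

        rebuild_all_dispatch_tables(&next_map);
        cur_map = next_map;
        memset(&next_map, 0, sizeof(next_map));
        phys_page_map_free(&prev);
    }
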
Liu Ping Fan
b41aac4f0d exec: change well-known physical sections to macros
Sections like phys_section_unassigned always have a fixed index
in phys_sections.  Declare them as macros so that we can use them
when there is more than one phys_sections array.

Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
Signed-off-by: Liu Ping Fan <qemulist@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:49 +02:00
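
An illustrative sketch of what the commit describes; the first name follows the section named in the message, but the other names and all numeric values are assumptions:

    /* With more than one phys_sections array in existence, a pointer into
     * a particular array no longer identifies a well-known section, but a
     * fixed index does, so the well-known entries become macros. */
    #define PHYS_SECTION_UNASSIGNED 0
    #define PHYS_SECTION_NOTDIRTY   1
    #define PHYS_SECTION_ROM        2
    #define PHYS_SECTION_WATCH      3

    /* Any map then resolves the same index against its own array:
     *     MemoryRegionSection *s = &map->sections[PHYS_SECTION_UNASSIGNED];
     */
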
Paolo Bonzini
d3e71559a8 memory: ref/unref memory across address_space_map/unmap
The iothread mutex might be released between map and unmap, so the
mapped region might disappear.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:46 +02:00
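
A minimal model of the map/unmap pattern above: take a reference when mapping and drop it on unmap, so that releasing the iothread mutex in between cannot make the region disappear. The plain counter below stands in for QEMU's real reference counting:

    #include <stdlib.h>

    /* Toy memory region with a plain counter standing in for the real
     * memory_region_ref()/memory_region_unref() machinery. */
    typedef struct MemoryRegion {
        int refcount;
        void *host_ptr;
    } MemoryRegion;

    static void memory_region_ref(MemoryRegion *mr)
    {
        mr->refcount++;
    }

    static void memory_region_unref(MemoryRegion *mr)
    {
        if (--mr->refcount == 0) {
            free(mr);
        }
    }

    /* map takes a reference: even if the iothread mutex is dropped and
     * the region is removed from the topology, the mapping stays valid. */
    void *address_space_map_sketch(MemoryRegion *mr)
    {
        memory_region_ref(mr);
        return mr->host_ptr;
    }

    /* unmap drops the reference taken by map. */
    void address_space_unmap_sketch(MemoryRegion *mr)
    {
        memory_region_unref(mr);
    }
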
Paolo Bonzini
e3127ae0cd exec: reorganize address_space_map
First of all, rename "todo" to "done".

Second, clearly separate the case of done == 0 from the case of done != 0.
This will help handling reference counting in the next patch.

Third, this test:

             if (memory_region_get_ram_addr(mr) + xlat != raddr + todo) {

does not guarantee that the memory region is the same across two iterations
of the while loop.  For example, you could have two blocks:

A) size 640 K, mapped at physical address 0, ram_addr_t 0
B) size 64 K, mapped at physical address 0xa0000, ram_addr_t 0xa0000

then mapping 1 M starting at physical address zero will erroneously treat
B as the continuation of block A.  qemu_ram_ptr_length ensures that no
invalid memory is accessed, but it is still a pointless complication of
the algorithm.  The patch makes the logic clearer with an explicit test
that the memory region is the same.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:46 +02:00
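
The 640K/64K example above can be reproduced with a few lines; the types below are reduced stand-ins that only model the comparison, not the real address_space_map loop:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Reduced stand-in for a RAM-backed MemoryRegion: just the fields
     * needed to reproduce the continuity test from the commit message. */
    typedef struct MemoryRegion {
        const char *name;
        uint64_t ram_addr;       /* where the block sits in ram_addr_t space */
    } MemoryRegion;

    int main(void)
    {
        MemoryRegion a = { "A (640K at phys 0)",      0x00000 };
        MemoryRegion b = { "B (64K at phys 0xa0000)", 0xa0000 };

        MemoryRegion *first = &a, *next = &b;
        uint64_t raddr = first->ram_addr;    /* start of the mapping so far */
        uint64_t todo = 0xa0000;             /* bytes already covered by A */
        uint64_t xlat = 0;                   /* offset of the access inside B */

        /* Old test: passes, because B happens to start at ram_addr 0xa0000,
         * exactly where A's range ends, so B looks like a continuation of A. */
        bool old_test = (next->ram_addr + xlat == raddr + todo);

        /* New test: simply require that we are still in the same region. */
        bool new_test = (next == first);

        printf("old test treats A+B as contiguous: %d, new test: %d\n",
               old_test, new_test);
        return 0;
    }
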
Paolo Bonzini
1b5ec23467 memory: return MemoryRegion from qemu_ram_addr_from_host
It will be needed in the next patch.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:46 +02:00
Paolo Bonzini
7443b43758 exec: move qemu_ram_addr_from_host_nofail to cputlb.c
After the next patch it would not be used elsewhere anyway.  Also,
the _nofail and the standard versions of this function return different
things, which is confusing.  Removing the function from the public headers
limits the confusion.

Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:45 +02:00
Paolo Bonzini
23887b79df exec: check MRU in qemu_ram_addr_from_host
This function is not used outside the iothread mutex, so it
can use ram_list.mru_block.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:45 +02:00
Paolo Bonzini
dfde4e6e1a memory: add ref/unref calls
Add ref/unref calls at the following places:

- places where memory regions are stashed by a listener and
  used outside the BQL (including in Xen or KVM).

- memory_region_find callsites

- creation of aliases and containers (only the aliased/contained
  region gets a reference to avoid loops)

- around calls to del_subregion/add_subregion, where the region
  could disappear after the first call

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:45 +02:00
Paolo Bonzini
b7e95164d1 exec: simplify destruction of the phys map
Do not bother visiting the radix tree when an address space is destroyed.
After the previous patch, this has become a pointless exercise.  When
called from address_space_destroy_dispatch, all you're doing is zeroing
out a structure that will be freed as soon as you come back.  When called
from mem_begin, the radix tree's array will be zeroed anyway when
phys_page_set_level calls phys_map_node_alloc.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:45 +02:00
Paolo Bonzini
058bc4b57f memory: destroy phys_sections one by one
phys_sections_clear is invoked after the dispatch tree has been
destroyed.  This leaves a window where phys_sections_nb > 0 but the
subpages are not valid anymore, which is a recipe for use-after-free
bugs.

Move the destruction of subpages into phys_sections_clear.  We will
still destroy the subpages when an address space is cleaned up,
because address_space_destroy will clear as->root and commit the
change before it calls address_space_destroy_dispatch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:44 +02:00
Paolo Bonzini
2c9b15cab1 memory: add owner argument to initialization functions
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:44 +02:00
Jan Kiszka
b40acf99be ioport: Switch dispatching to memory core layer
The current ioport dispatcher is a complex beast, mostly due to the
need to deal with old portio interface users. But we can overcome it
without converting all portio users by embedding the required base
address of a MemoryRegionPortio access into that data structure. That
removes the need to have the additional MemoryRegionIORange structure
in the loop on every access.

To handle old portio memory ops, we simply install dispatching handlers
for portio memory regions when registering them with the memory core.
This removes the need for the old_portio field.

We can drop the additional aliasing of ioport regions and also the
special address space listener. cpu_in and cpu_out now simply call
address_space_read/write. And we can concentrate portio handling in a
single source file.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-07-04 17:42:44 +02:00
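
A very rough sketch of the direction described above: port I/O becomes an ordinary access to an I/O address space, so cpu_out reduces to a write through the dispatch path. Everything below is a simplified stand-in, not QEMU's actual ioport or memory-core API:

    #include <stdint.h>
    #include <stdio.h>

    /* Tiny model of an I/O address space that dispatches to handlers. */
    typedef struct PortRegion {
        uint16_t base, size;
        void (*write)(uint16_t port, uint8_t val);
    } PortRegion;

    static void serial_write(uint16_t port, uint8_t val)
    {
        printf("serial port 0x%x <- 0x%02x\n", port, val);
    }

    static PortRegion io_space[] = {
        { 0x3f8, 8, serial_write },
    };

    /* Stand-in for the memory core's write path on the I/O address space. */
    static void io_space_write(uint16_t addr, uint8_t val)
    {
        for (unsigned i = 0; i < sizeof(io_space) / sizeof(io_space[0]); i++) {
            PortRegion *r = &io_space[i];
            if (addr >= r->base && addr < (uint16_t)(r->base + r->size)) {
                r->write(addr, val);
                return;
            }
        }
    }

    /* After the commit, cpu_out no longer needs its own dispatcher:
     * it simply forwards the access to the memory core. */
    void cpu_outb(uint16_t addr, uint8_t val)
    {
        io_space_write(addr, val);
    }

    int main(void)
    {
        cpu_outb(0x3f8, 'A');
        return 0;
    }
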
Andreas Färber
878096eeb2 cpu: Turn cpu_dump_{state,statistics}() into CPUState hooks
Make cpustats monitor command available unconditionally.

Prepares for changing kvm_handle_internal_error() and kvm_cpu_exec()
arguments to CPUState.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28 13:25:12 +02:00
Andreas Färber
60a3e17a46 cpu: Change cpu_exit() argument to CPUState
It no longer depends on CPUArchState, so move it to qom/cpu.c.

Prepares for changing GDBState::c_cpu to CPUState.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28 13:25:12 +02:00
Andreas Färber
1a1562f5ea cpu: Introduce VMSTATE_CPU() macro for CPUState
To be used to embed common CPU state into CPU subclasses.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-06-28 13:25:11 +02:00
Peter Maydell
ec3f8c9913 linux-user: Fix compilation failure
Fix compilation failures for linux-user targets following recent
migration-related commits bd2fa51fcd and 43487c67.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1372362818-4740-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2013-06-27 15:38:35 -05:00
Michael R. Hines
bd2fa51fcd rdma: introduce qemu_ram_foreach_block()
This is used during RDMA initialization in order to
transmit a description of all the RAM blocks to the
peer for later dynamic chunk registration purposes.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-06-27 02:38:36 +02:00
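
A sketch of the foreach helper described above, with a caller-supplied callback per RAM block; the callback signature is an assumption made for illustration, not necessarily the one introduced by the patch:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t ram_addr_t;

    /* Assumed callback shape: host pointer, guest ram_addr offset, length,
     * plus an opaque pointer for the caller's own state. */
    typedef void (*RAMBlockIterFunc)(void *host_addr, ram_addr_t offset,
                                     ram_addr_t length, void *opaque);

    typedef struct RAMBlock {
        void *host;
        ram_addr_t offset;
        ram_addr_t length;
        struct RAMBlock *next;
    } RAMBlock;

    static RAMBlock *ram_blocks;     /* stand-in for the global RAM block list */

    /* Walk every RAM block and hand it to the callback, e.g. so the RDMA
     * code can describe all of guest memory to the peer up front. */
    void qemu_ram_foreach_block_sketch(RAMBlockIterFunc func, void *opaque)
    {
        for (RAMBlock *block = ram_blocks; block; block = block->next) {
            func(block->host, block->offset, block->length, opaque);
        }
    }

    static void print_block(void *host, ram_addr_t offset, ram_addr_t len,
                            void *opaque)
    {
        (void)host;
        (void)opaque;
        printf("block at 0x%llx, %llu bytes\n",
               (unsigned long long)offset, (unsigned long long)len);
    }

    int main(void)
    {
        qemu_ram_foreach_block_sketch(print_block, NULL);
        return 0;
    }
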
Alexey Kardashevskiy
7dca8043f3 memory: give name to every AddressSpace
The "info mtree" command in QEMU console prints only "memory" and "I/O"
address spaces while there are actually a lot more other AddressSpace
structs created by PCI and VIO devices. Those devices do not normally
have names and therefore not present in "info mtree" output.

The patch fixes this.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:39:52 +02:00
Paolo Bonzini
df32fd1c9f dma: eliminate DMAContext
The DMAContext is a simple pointer to an AddressSpace that is now always
already available.  Make everyone hold the address space directly,
and clean up the DMA API to use the AddressSpace directly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:39:52 +02:00
Paolo Bonzini
24addbc76d dma: eliminate old-style IOMMU support
The translate function in the DMAContext is now always NULL.
Remove every reference to it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
Avi Kivity
3095115744 memory: iommu support
Add a new memory region type that translates addresses it is given,
then forwards them to a target address space.  This is similar to
an alias, except that the mapping is more flexible than a linear
translation and trucation, and also less efficient since the
translation happens at runtime.

The implementation uses an AddressSpace mapping the target region to
avoid hierarchical dispatch all the way to the resolved region; only
iommu regions are looked up dynamically.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
[Modified to put translation in address_space_translate; assume
 IOMMUs are not reachable from TCG. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
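
A simplified model of the translating region described above: a translate hook maps an input address to a (target address space, translated address, mask, permissions) tuple that the core then uses for the actual access. The struct layouts are illustrative, not QEMU's exact IOMMU API:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t hwaddr;
    struct AddressSpace;                     /* target of the translation */

    /* Illustrative result of one translation step. */
    typedef struct IOMMUTLBEntry {
        struct AddressSpace *target_as;
        hwaddr iova;                         /* input address, page-aligned */
        hwaddr translated_addr;              /* output address, page-aligned */
        hwaddr addr_mask;                    /* size of the mapping minus 1 */
        bool read_ok, write_ok;
    } IOMMUTLBEntry;

    /* Unlike an alias, the mapping is computed at run time by this hook
     * rather than being a fixed linear offset, which is the flexibility
     * (and the cost) the commit message mentions. */
    typedef struct IOMMURegionOps {
        IOMMUTLBEntry (*translate)(void *opaque, hwaddr addr);
    } IOMMURegionOps;

    /* Toy translator: shift everything up by 1 MiB, 4 KiB pages. */
    static IOMMUTLBEntry toy_translate(void *opaque, hwaddr addr)
    {
        (void)opaque;
        IOMMUTLBEntry ret = {
            .target_as = NULL,               /* would point at the target AS */
            .iova = addr & ~0xfffULL,
            .translated_addr = (addr & ~0xfffULL) + 0x100000,
            .addr_mask = 0xfff,
            .read_ok = true,
            .write_ok = true,
        };
        return ret;
    }

    int main(void)
    {
        IOMMURegionOps ops = { .translate = toy_translate };
        IOMMUTLBEntry e = ops.translate(NULL, 0x2345);
        printf("0x2345 -> 0x%llx (mask 0x%llx)\n",
               (unsigned long long)(e.translated_addr | (0x2345 & e.addr_mask)),
               (unsigned long long)e.addr_mask);
        return 0;
    }
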
Paolo Bonzini
052e87b073 memory: make section size a 128-bit integer
So far, the size of all regions passed to listeners could fit in 64 bits,
because artificial regions (containers and aliases) are eliminated by
the memory core, leaving only device regions, which have reasonable sizes.

An IOMMU however cannot be eliminated by the memory core, and may have
an artificial size, hence we may need 65 bits to represent its size.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
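
A small sketch of why 64 bits are not enough: a section covering the whole 2^64-byte space has size 2^64, which needs a 65th bit. A two-word integer in the spirit of Int128 can hold it; the layout below is assumed for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Two-word integer, layout assumed for this sketch:
     * value = hi * 2^64 + lo. */
    typedef struct Int128 {
        uint64_t lo;
        int64_t  hi;
    } Int128;

    static Int128 int128_make64(uint64_t v)
    {
        return (Int128){ v, 0 };
    }

    /* 2^64, the size of a section spanning the whole address space, does
     * not fit in a uint64_t but is simply {lo = 0, hi = 1} here. */
    static Int128 int128_2_64(void)
    {
        return (Int128){ 0, 1 };
    }

    int main(void)
    {
        Int128 small = int128_make64(0xa0000);   /* an ordinary device region */
        Int128 whole = int128_2_64();            /* a full-address-space IOMMU */
        printf("small = {hi=%lld, lo=0x%llx}, whole = {hi=%lld, lo=0x%llx}\n",
               (long long)small.hi, (unsigned long long)small.lo,
               (long long)whole.hi, (unsigned long long)whole.lo);
        return 0;
    }
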
Paolo Bonzini
733d5ef527 exec: reorganize mem_add to match Int128 version
When adding support for 2^64-byte sections, we will have to change
the structure of mem_add to avoid failures in int128_get64.
Reorganize the code now before introducing Int128.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:47 +02:00
Paolo Bonzini
99b9cc0679 Revert "memory: limit sections in the radix tree to the actual address space size"
This reverts commit 86a8623692.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Paolo Bonzini
5c8a00ce18 exec: return MemoryRegion from address_space_translate
Only address_space_translate_for_iotlb needs to return the section.
Every caller of address_space_translate now uses only section->mr,
so return it directly.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
acc9d80b26 exec: Implement subpage_read/write via address_space_rw
This will allow adding support for unaligned memory regions: the subpage
container region can activate unaligned support unconditionally because
the read/write handler will now ensure that accesses are split as
required by calling address_space_rw. We can furthermore drop the
special handling of RAM subpages; address_space_rw takes care of this
already.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
90260c6c09 exec: Resolve subpages in one step except for IOTLB fills
Except for the case of setting the IOTLB entry in TCG mode, we can avoid
the subpage dispatching handlers and do the resolution directly on
address_space_lookup_region. An IOTLB entry describes a full page, not
only the region that the first access to a sub-divided page may return.

This patch therefore introduces a special translation function,
address_space_translate_for_iotlb, that avoids the subpage resolutions.
In contrast, callers of the existing address_space_translate service
will now always receive the terminal memory region section. This will be
important for breaking the BQL and for enabling unaligned memory regions.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
f52cc46742 exec: Allow unaligned address_space_rw
This will be needed for some corner cases with para-virtual I/O ports.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Paolo Bonzini
1db8abb102 memory: move private types to exec.c
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Jan Kiszka
9f029603ab memory: Introduce address_space_lookup_region
This introduces a wrapper for phys_page_find (before we complicate
address_space_translate with IOMMU translation).  This function will
also encapsulate locking and reference counting when we introduce
BQL-free dispatching.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
Peter Maydell
3752a03648 exec.c: address_space_translate: handle access to addr 0 of 2^64 sized region
The memory API allows a MemoryRegion's size to be 2^64, as a special
case (otherwise the size always fits in a 64 bit integer). This meant
that attempts to access address zero in a 2^64 sized region would
assert in address_space_translate():

  #3  0x00007ffff3e4d192 in __GI___assert_fail (assertion=0x555555a43f32
    "!a.hi", file=0x555555a43ef0 "include/qemu/int128.h", line=18,
    function=0x555555a4439f "int128_get64") at assert.c:103
  #4  0x0000555555877642 in int128_get64 (a=...)
    at include/qemu/int128.h:18
  #5  0x00005555558782f2 in address_space_translate (as=0x55555668d140,
    addr=0, xlat=0x7fffafac9918, plen=0x7fffafac9920, is_write=false)
    at exec.c:221

Fix this by doing the 'min' operation in 128 bit arithmetic
rather than 64 bit arithmetic (we know the result of the 'min'
definitely fits in 64 bits because one of the inputs did).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-06-20 16:32:46 +02:00
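
A sketch of the idea behind the fix, using a toy two-word integer: take the min while both operands are still 128-bit values, and only narrow to 64 bits afterwards, which is safe because at least one operand already fit in 64 bits. The helpers below are simplified stand-ins for the real int128_* routines:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy 128-bit value (stand-in): value = hi * 2^64 + lo. */
    typedef struct Int128 {
        uint64_t lo;
        int64_t  hi;
    } Int128;

    static Int128 int128_make64(uint64_t v)
    {
        return (Int128){ v, 0 };
    }

    static int int128_le(Int128 a, Int128 b)
    {
        return a.hi < b.hi || (a.hi == b.hi && a.lo <= b.lo);
    }

    static Int128 int128_min(Int128 a, Int128 b)
    {
        return int128_le(a, b) ? a : b;
    }

    static uint64_t int128_get64(Int128 a)
    {
        assert(a.hi == 0);       /* the assertion the reported bug tripped */
        return a.lo;
    }

    int main(void)
    {
        Int128 region_size = (Int128){ 0, 1 };       /* 2^64-byte region */
        Int128 remaining   = int128_make64(4096);    /* caller's access length */

        /* Doing the min in 64-bit arithmetic would call int128_get64() on
         * region_size and assert.  Min first, narrow afterwards: */
        uint64_t len = int128_get64(int128_min(remaining, region_size));
        printf("clamped length: %llu\n", (unsigned long long)len);
        return 0;
    }
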
Paolo Bonzini
fd8aaa767a memory: add return value to address_space_rw/read/write
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:34 +02:00
Paolo Bonzini
791af8c861 memory: propagate errors on I/O dispatch
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:32 +02:00
Paolo Bonzini
a649b9168c exec: just use io_mem_read/io_mem_write for 8-byte I/O accesses
The memory API is able to split it into two 4-byte accesses.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:29 +02:00
Paolo Bonzini
968a5627c8 memory: correctly handle endian-swapped 64-bit accesses
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:26 +02:00
Paolo Bonzini
51644ab70b memory: add address_space_access_valid
The old-style IOMMU lets you check whether an access is valid in a
given DMAContext.  There is no equivalent for AddressSpace in the
memory API; implement it with a lookup of the dispatch tree.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:16 +02:00
Paolo Bonzini
c353e4cc08 exec: implement .valid.accepts for subpages
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:14 +02:00
Paolo Bonzini
82f2563fc8 exec: introduce memory_access_size
This will be used by address_space_access_valid too.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:08 +02:00
Paolo Bonzini
2bbfa05d20 exec: introduce memory_access_is_direct
After the previous patches, this is a common test for all read/write
functions.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:04 +02:00
Paolo Bonzini
d17d45e95f exec: expect mr->ops to be initialized for ROM
There is no need to use the special phys_section_rom section.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:27:01 +02:00
Paolo Bonzini
d197063fcf memory: move unassigned_mem_ops to memory.c
reservation_ops is already doing the same thing.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:56 +02:00
Paolo Bonzini
149f54b53b memory: add address_space_translate
Using phys_page_find to translate an AddressSpace to a MemoryRegionSection
is unwieldy.  It requires passing the page index rather than the address,
and later memory_region_section_addr has to be called.  Replace
memory_region_section_addr with a function that does all of it: call
phys_page_find, compute the offset within the region, and check how
big the current mapping is.  This way, a large flat region can be written
with a single lookup rather than a page at a time.

address_space_translate will also provide a single point where IOMMU
forwarding is implemented.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:50 +02:00
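
A sketch of the calling pattern this enables: translate once per contiguous mapping and consume as much as the translation covers, instead of going a page at a time. The translate function below is a hypothetical stub so that only the loop structure is shown:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t hwaddr;
    typedef struct MemoryRegion {
        const char *name;
    } MemoryRegion;

    /* Hypothetical stub: resolve addr to a region, the offset inside it
     * (xlat) and how much of the request this mapping can satisfy (plen). */
    static MemoryRegion *translate_sketch(hwaddr addr, hwaddr *xlat,
                                          hwaddr *plen)
    {
        static MemoryRegion ram = { "ram" };

        *xlat = addr;                 /* toy identity mapping */
        if (*plen > 0x10000) {        /* pretend mappings are at most 64 KiB */
            *plen = 0x10000;
        }
        return &ram;
    }

    /* A large flat area needs only one lookup per mapping, rather than
     * one lookup per page as with the old phys_page_find interface. */
    void access_sketch(hwaddr addr, uint64_t len)
    {
        while (len > 0) {
            hwaddr xlat, l = len;
            MemoryRegion *mr = translate_sketch(addr, &xlat, &l);
            printf("%llu bytes into %s at offset 0x%llx\n",
                   (unsigned long long)l, mr->name, (unsigned long long)xlat);
            addr += l;
            len -= l;
        }
    }

    int main(void)
    {
        access_sketch(0x1000, 0x20000);   /* 128 KiB: two lookups here */
        return 0;
    }
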
Paolo Bonzini
b018ddf633 memory: dispatch unassigned accesses based on .valid.accepts
This provides the basics for detecting accesses to unassigned memory
as soon as they happen, and also for a simple implementation of
address_space_access_valid.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-05-29 16:26:47 +02:00
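
A sketch of the mechanism: the unassigned region's ops reject every access through a .valid.accepts-style hook, so bad accesses are flagged at dispatch time. The MemoryRegionOps layout here is a reduced stand-in:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t hwaddr;

    /* Reduced stand-in for MemoryRegionOps: only the validity hook. */
    typedef struct MemoryRegionOps {
        struct {
            bool (*accepts)(void *opaque, hwaddr addr, unsigned size,
                            bool is_write);
        } valid;
    } MemoryRegionOps;

    /* Unassigned memory accepts nothing, so every access to it can be
     * detected as soon as it is dispatched. */
    static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
                                       unsigned size, bool is_write)
    {
        (void)opaque;
        (void)addr;
        (void)size;
        (void)is_write;
        return false;
    }

    static const MemoryRegionOps unassigned_mem_ops = {
        .valid.accepts = unassigned_mem_accepts,
    };

    int main(void)
    {
        bool ok = unassigned_mem_ops.valid.accepts(NULL, 0x1234, 4, true);
        printf("access %s\n", ok ? "accepted" : "rejected");
        return 0;
    }
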