Commit graph

143 commits

Author SHA1 Message Date
Peter Crosthwaite
5f12a788c0 translate: move real_host_page setting to -common
Move the size and mask globals for the "real" host page size to
translate-common. This is to allow system-level code to use
REAL_HOST_PAGE_ALIGN and friends in builds which hide translate-all
behind arch-obj.

Cc: dgilbert@redhat.com
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <b437638691f044bc690a7f03b1240c8b0f34ab57.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-16 17:33:33 +02:00
Peter Crosthwaite
9b68a7754a translate-all: Move tcg_handle_interrupt() to -common
Move this function to common code. It has no arch-specific
dependencies. This prepares for multi-arch support, where the
translate-all interface needs to be virtualised: one less thing to
virtualise.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <44a7c73604ed2552af47ed02b047b6a772b683e0.1441614289.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-16 17:33:33 +02:00
Peter Maydell
a2aa09e181 * Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
 * cutils: qemu_strto* wrappers
 * iohandler.c simplification
 * Many other fixes and misc patches.
 
 And some MTTCG work (with Emilio's fixes squashed):
 * Signal-free TCG kick
 * Removing spinlock in favor of QemuMutex
 * User-mode emulation multi-threading fixes/docs

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* Support for jemalloc
* qemu_mutex_lock_iothread "No such process" fix
* cutils: qemu_strto* wrappers
* iohandler.c simplification
* Many other fixes and misc patches.

And some MTTCG work (with Emilio's fixes squashed):
* Signal-free TCG kick
* Removing spinlock in favor of QemuMutex
* User-mode emulation multi-threading fixes/docs

# gpg: Signature made Thu 10 Sep 2015 09:03:07 BST using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"

* remotes/bonzini/tags/for-upstream: (44 commits)
  cutils: work around platform differences in strto{l,ul,ll,ull}
  cpu-exec: fix lock hierarchy for user-mode emulation
  exec: make mmap_lock/mmap_unlock globally available
  tcg: comment on which functions have to be called with mmap_lock held
  tcg: add memory barriers in page_find_alloc accesses
  remove unused spinlock.
  replace spinlock by QemuMutex.
  cpus: remove tcg_halt_cond and tcg_cpu_thread globals
  cpus: protect work list with work_mutex
  scripts/dump-guest-memory.py: fix after RAMBlock change
  configure: Add support for jemalloc
  add macro file for coccinelle
  configure: factor out adding disas configure
  vhost-scsi: fix wrong vhost-scsi firmware path
  checkpatch: remove tests that are not relevant outside the kernel
  checkpatch: adapt some tests to QEMU
  CODING_STYLE: update mixed declaration rules
  qmp: Add example usage of strto*l() qemu wrapper
  cutils: Add qemu_strtoull() wrapper
  cutils: Add qemu_strtoll() wrapper
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-09-14 16:13:16 +01:00
Markus Armbruster
012aef0734 maint: avoid useless "if (foo) free(foo)" pattern
My Coccinelle semantic patch finds a few more, because it also fixes up
the equally pointless conditional

    if (foo) {
        free(foo);
        foo = NULL;
    }
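
For illustration only (not part of the patch): free(NULL) is defined to
be a no-op, so the guard is redundant and the whole thing collapses to
the unconditional form, roughly:

    free(foo);
    foo = NULL;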

Result (feel free to squash it into your patch):

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-09-11 10:21:38 +03:00
Paolo Bonzini
9fd1a94888 cpu-exec: fix lock hierarchy for user-mode emulation
tb_lock has to be taken inside the mmap_lock (example:
tb_invalidate_phys_range is called by target_mmap), but
tb_link_page is taking the mmap_lock and it is called
with the tb_lock held.

To fix this, take the mmap_lock in tb_find_slow, not
in tb_link_page.
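
Purely as an illustrative sketch (lock helper names approximated; the
TB lock is still an open-coded mutex at this point), the resulting
acquisition order in tb_find_slow is roughly:

    mmap_lock();                  /* outer lock, now taken here          */
    tb_lock();                    /* inner lock protecting TB structures */
    tb = tb_gen_code(cpu, pc, cs_base, flags, 0);  /* may reach tb_link_page() */
    tb_unlock();
    mmap_unlock();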

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:56 +02:00
Paolo Bonzini
8fd19e6cfd exec: make mmap_lock/mmap_unlock globally available
There is some iffy lock hierarchy going on in translate-all.c.  To
fix it, we need to take the mmap_lock in cpu-exec.c.  Make the
functions globally available.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:56 +02:00
Paolo Bonzini
756920876f tcg: comment on which functions have to be called with mmap_lock held
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:55 +02:00
Paolo Bonzini
6940fab84b tcg: add memory barriers in page_find_alloc accesses
page_find is reading the radix tree outside all locks, so it has to
use the RCU primitives.  It does not need RCU critical sections
because the PageDescs are never removed, so there is never a need
to wait for the end of code sections that use a PageDesc.
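
As a minimal sketch of the pattern (simplified; the real code is in
page_find_alloc and uses QEMU's atomic_rcu_read/atomic_rcu_set helpers,
index computation elided):

    void **lp = l1_map + index;
    void *p = atomic_rcu_read(lp);            /* read with consume barrier */
    if (p == NULL && alloc) {
        p = g_new0(void *, V_L2_SIZE);        /* fresh, zeroed level       */
        atomic_rcu_set(lp, p);                /* publish with write barrier */
    }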

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:55 +02:00
KONRAD Frederic
677ef6230b replace spinlock by QemuMutex.
spinlock is only used in two cases:
  * cpu-exec.c: to protect TranslationBlock
  * mem_helper.c: for lock helper in target-i386 (which seems broken).

It's a pthread_mutex_t in user-mode, so we can use QemuMutex directly,
with an #ifdef.  The #ifdef will be removed when multithreaded TCG
needs the mutex as well.
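
For reference, a minimal sketch of the replacement primitive (the
qemu/thread.h API; the variable name is illustrative):

    QemuMutex tb_lock;                /* was: spinlock_t */

    qemu_mutex_init(&tb_lock);
    qemu_mutex_lock(&tb_lock);
    /* ... touch TranslationBlock state ... */
    qemu_mutex_unlock(&tb_lock);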

Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Message-Id: <1439220437-23957-5-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
[Merge Emilio G. Cota's patch to remove volatile. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:55 +02:00
Emilio G. Cota
d1142fb83e translate-all: remove obsolete comment about l1_map
l1_map is based on physical addresses in full-system mode, as pointed
out in an earlier comment. Said comment also mentions that virtual
addresses are only used in l1_map in user-only mode.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1440375847-17603-11-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-09 15:34:54 +02:00
Laurent Vivier
4cbea59869 linux-user: remove --enable-guest-base/--disable-guest-base
All tcg host architectures now support the guest base, and as there is
no real performance loss, it can always be enabled.

In any case, guest base use can still be disabled at run time by
setting guest base to 0.

CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY), so
it would normally be replaced by CONFIG_USER_ONLY, but as some other
parts of the code already use !CONFIG_SOFTMMU I have chosen to use
!CONFIG_SOFTMMU instead.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2015-08-24 11:14:17 -07:00
Paolo Bonzini
414b15c909 exec: drop cpu_can_do_io, just read cpu->can_do_io
After commit 626cf8f (icount: set can_do_io outside TB execution,
2014-12-08), can_do_io is set to 1 if not executing code.  It is
no longer necessary to make this assumption in cpu_can_do_io.

It is also possible to remove the use_icount test, simply by
never setting cpu->can_do_io to 0 unless use_icount is true.

With these changes cpu_can_do_io boils down to a read of
cpu->can_do_io.
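
Schematically, at a typical call site:

    /* before */  if (!cpu_can_do_io(cpu)) { ... }
    /* after  */  if (!cpu->can_do_io)     { ... }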

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-14 23:40:32 +02:00
Sergey Fedorov
02d57ea115 cpu-exec: Do not invalidate original TB in cpu_exec_nocache()
Instead of invalidating the original TB in cpu_exec_nocache()
prematurely, just save a link to it in the temporarily generated TB. If
cpu_io_recompile() is raised subsequently from the temporary TB,
invalidate the original one as well. That allows reusing the original TB
each time cpu_exec_nocache() is called to handle an expired instruction
counter in icount mode.
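
Schematically (the back-pointer field and surrounding code are
approximated, not the exact patch):

    /* temporary TB generated with CF_NOCACHE remembers its origin */
    tb = tb_gen_code(cpu, pc, cs_base, flags, max_cycles | CF_NOCACHE);
    tb->orig_tb = orig_tb;

    /* only if cpu_io_recompile() later fires on the temporary TB: */
    if (tb->orig_tb) {
        tb_phys_invalidate(tb->orig_tb, -1);
    }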

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-Id: <1435656909-29116-1-git-send-email-serge.fdrv@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-06 12:04:08 +02:00
Peter Crosthwaite
bbd77c180d translate-all: Change tb_flush() env argument to cpu
All of the core-code usages of this API have the cpu pointer handy, so
pass it in. There are only 3 architecture-specific usages (2 of which
are commented out), which can just use ENV_GET_CPU() locally to get the
cpu pointer. This reduces core code usage of the CPU env, which brings
us closer to common-obj'ing these core files.

Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
2015-07-09 15:20:40 +02:00
Peter Crosthwaite
4e51361d79 cpu-all: complete "real" host page size API
Currently the "host" page size alignment API is really aligning to both
host and target page sizes. There is qemu_real_host_page_size, which
can be used for the actual host page size, but it's missing a mask and
an ALIGN macro as provided for qemu_host_page_size. Complete the API.
This allows system-level code that cares about the host page size to
use a consistent alignment interface without needlessly aligning to
the target page size. This also reduces system-level code dependency
on the CPU-specific TARGET_PAGE_SIZE.
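
The completed interface mirrors the existing qemu_host_page_size family
and looks roughly like this (sketch; the exact types and definitions
live in cpu-all.h):

    extern uintptr_t qemu_real_host_page_size;
    extern uintptr_t qemu_real_host_page_mask;    /* assumed ~(size - 1) */

    #define REAL_HOST_PAGE_ALIGN(addr) \
        (((addr) + qemu_real_host_page_size - 1) & qemu_real_host_page_mask)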

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2015-07-06 12:15:12 -06:00
Peter Crosthwaite
e1b89321ba include/exec: Move tb hash functions out
This is one of very few things in exec-all with a genuine CPU
architecture dependency. Move these hashing helpers to a new
header to trim exec-all.h down to a near architecture-agnostic
header.

The defs are only used by cpu-exec and translate-all, which are both
arch-objs, so the new tb-hash.h has no core code usage.

Reviewed-by: Richard Henderson <rth@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <9d048b96f7cfa64a4d9c0b88e0dd2877fac51d41.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-26 16:00:51 +02:00
Aurelien Jarno
8d302e7675 translate-all: fix watchpoints if retranslation not possible
The tb_check_watchpoint function currently assumes that all memory
access is done either directly through the TCG code or through a
helper which knows its return address. This is obviously wrong, as the
helpers use cpu_ldxx/stxx_data functions to access the memory.

Instead of aborting in that case, don't try to retranslate the code, but
assume that the CPU state (and especially the program counter) has been
saved before calling the helper. Then invalidate the TB based on this
address.
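
Roughly, the fallback in tb_check_watchpoint becomes (condensed sketch;
the helpers named here are existing QEMU functions):

    tb = tb_find_pc(cpu->mem_io_pc);
    if (tb) {
        /* access came from TCG code: retranslate precisely */
        cpu_restore_state_from_tb(cpu, tb, cpu->mem_io_pc);
        tb_phys_invalidate(tb, -1);
    } else {
        /* came from a helper: trust the already-saved CPU state */
        cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
        addr = get_page_addr_code(env, pc);
        tb_invalidate_phys_range(addr, addr + 1);
    }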

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
2015-06-17 12:40:52 +02:00
Paolo Bonzini
fc377bcf61 translate-all: make less of tb_invalidate_phys_page_range depend on is_cpu_write_access
is_cpu_write_access is only set if tb_invalidate_phys_page_range is called
from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write.
However:

- the code bitmap can be built directly in tb_invalidate_phys_page_fast
  (unconditionally, since is_cpu_write_access would always be passed as 1);

- the virtual address is not needed to mark the page as "not containing
  code" (dirty code bitmap = 1), so we can also remove that use of
  is_cpu_write_access.  For calls of tb_invalidate_phys_page_range
  that do not come from notdirty_mem_write, the next call to
  notdirty_mem_write will notice that the page does not contain code
  anymore, and will fix up the TLB entry.

The parameter needs to remain in order to guard accesses to cpu->mem_io_pc.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05 17:09:59 +02:00
Paolo Bonzini
9564f52da7 cputlb: remove useless arguments to tlb_unprotect_code_phys, rename
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.

The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05 17:09:59 +02:00
Paolo Bonzini
358653391b translate-all: remove unnecessary argument to tb_invalidate_phys_range
The is_cpu_write_access argument is always 0, remove it.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05 17:09:59 +02:00
Paolo Bonzini
41063e1e7a exec: move rcu_read_lock/unlock to address_space_translate callers
Once address_space_translate is called outside the BQL, the returned
MemoryRegion might disappear as soon as the RCU read-side critical section
ends.  Avoid this by moving the critical section to the callers.
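
The caller-side pattern after this change is roughly:

    rcu_read_lock();
    mr = address_space_translate(as, addr, &xlat, &len, is_write);
    /* ... mr stays valid only inside the read-side section ... */
    rcu_read_unlock();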

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1426684909-95030-3-git-send-email-pbonzini@redhat.com>
2015-04-30 16:55:32 +02:00
Emilio G. Cota
510a647fa2 translate-all: use bitmap helpers for PageDesc's bitmap
Here we have an open-coded byte-based bitmap implementation.
Get rid of it since there's a ulong-based implementation to be
used by all code.
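
Sketch of the replacement, using the generic helpers from
qemu/bitmap.h and qemu/bitops.h:

    unsigned long *code_bitmap = bitmap_new(TARGET_PAGE_SIZE); /* zeroed */
    bitmap_set(code_bitmap, start, length);   /* mark a range of offsets */
    if (test_bit(offset, code_bitmap)) {
        /* this offset is covered by translated code */
    }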

Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-28 22:14:14 +02:00
Emilio G. Cota
e3a0abfda7 translate-all: use glib for all page descriptor allocations
Since commit

  b7b5233a "bsd-user/mmap.c: Don't try to override g_malloc/g_free"

the exception we make here for usermode has been unnecessary.
Get rid of it.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1428610053-26148-1-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-27 18:24:17 +02:00
Emilio G. Cota
9c04146ad4 target-i386: remove superfluous TARGET_HAS_SMC macro
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-04-04 09:45:59 +03:00
Markus Armbruster
8b98ade31e translate-all: Use g_try_malloc() for dynamic translator buffer
The USE_MMAP code can fail, and the caller handles the failure
already.  Let the !USE_MMAP code fail as well, for consistency.
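
Sketch of the !USE_MMAP path after the change (variable names
approximated):

    buf = g_try_malloc(code_gen_buffer_size);
    if (buf == NULL) {
        return NULL;    /* caller already copes with allocation failure */
    }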

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-02-10 09:27:21 +03:00
Peter Maydell
ec53b45bcd exec.c: Drop TARGET_HAS_ICE define and checks
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1420484960-32365-1-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:32 +00:00
SeokYeon Hwang
2d8ac5eb7a translate-all: Mark map_exec() with the 'unused' attribute
Mark map_exec() with the 'unused' attribute to avoid '-Wunused-function'
warnings on clang 3.4 or later. This means we don't need to mark it
'inline', which is what we were previously using to suppress the warning
(a trick which only works with gcc, not clang).
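
The change boils down to something like this sketch:

    /* 'unused' silences clang's -Wunused-function when nothing calls it */
    static void __attribute__((unused)) map_exec(void *addr, long size)
    {
        /* body unchanged: mark [addr, addr + size) executable */
    }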

Signed-off-by: SeokYeon Hwang <syeon.hwang@samsung.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
[PMM: tweaked commit message a little]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2015-01-15 10:44:13 +03:00
Paolo Bonzini
bd79255d25 translate: check cflags instead of use_icount global
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-01-03 09:22:10 +01:00
Paolo Bonzini
0266359e57 cpu-exec: add a new CF_USE_ICOUNT cflag
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-23 10:14:53 +01:00
Peter Maydell
86b182ac0e Xtensa updates for 2.3:
- fix cross-page opcode handling;
 - move window overflow exception generation decision to translation phase;
 - don't generate dead code after privilege, window overflow or coprocessor
   exception;
 - add monitor command 'info opcount' for dumping TCG opcode counters.

Merge remote-tracking branch 'remotes/xtensa/tags/20141217-xtensa' into staging

Xtensa updates for 2.3:

- fix cross-page opcode handling;
- move window overflow exception generation decision to translation phase;
- don't generate dead code after privilege, window overflow or coprocessor
  exception;
- add monitor command 'info opcount' for dumping TCG opcode counters.

# gpg: Signature made Wed 17 Dec 2014 02:57:01 GMT using RSA key ID F83FA044
# gpg: Good signature from "Max Filippov <max.filippov@cogentembedded.com>"
# gpg:                 aka "Max Filippov <jcmvbkbc@gmail.com>"

* remotes/xtensa/tags/20141217-xtensa:
  target-xtensa: don't generate dead code
  target-xtensa: record available window in TB flags
  target-xtensa: test cross-page opcode
  target-xtensa: fix translation for opcodes crossing page boundary
  tcg: add separate monitor command to dump opcode counters

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-12-17 17:31:26 +00:00
Max Filippov
246ae24d7d tcg: add separate monitor command to dump opcode counters
Currently 'info jit' outputs half of the information to the monitor and
the rest to the qemu log. Dumping opcode counts to the monitor as part
of the 'info jit' command doesn't sound useful. Add a new monitor
command 'info opcount' that only dumps the opcode counters.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2014-12-17 05:49:32 +03:00
Maciej W. Rozycki
c357747981 target-mips: Correct MIPS16/microMIPS branch size calculation
Correct the MIPS16/microMIPS branch size calculation in the PC
adjustment needed:

- to set the value of CP0.ErrorEPC at the entry to the reset exception,

- for the purpose of branch reexecution in the context of device I/O.

Follow the approach taken in `exception_resume_pc' for ordinary, Debug
and NMI exceptions.

MIPS16 and microMIPS branches can be 2 or 4 bytes in size and that has
to be reflected in the calculation.  Original MIPS ISA branches, which
is where this code originates from, are always 4 bytes long, just like
all original MIPS ISA instructions.
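
The PC adjustment then becomes, schematically (flag and field names
approximated):

    if (env->hflags & MIPS_HFLAG_BMASK) {
        /* branch is 2 bytes on MIPS16/microMIPS, 4 bytes otherwise */
        bad_pc -= (env->hflags & MIPS_HFLAG_B16 ? 2 : 4);
    }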

Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
2014-12-16 12:45:19 +00:00
Pavel Dovgalyuk
d8a499f17e cpu-exec: invalidate nocache translation if they are interrupted
In this case, QEMU might longjmp out of cpu-exec.c and miss the final
cleanup in cpu_exec_nocache.  Do this manually through a new compile
flag.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Mikhail Ilyin
1a1c4db9b2 translate-all.c: memory walker initial address miscalculation
The initial base address is miscalculated in walk_memory_regions().
It has to be shifted TARGET_PAGE_BITS more. Holder variables are
extended to target_ulong size; otherwise they don't fit for MIPS N32
(a 32-bit ABI with a 64-bit address space) and qemu won't compile.
The issue led to incorrect debug output of memory maps and a
malformed core dump file.

Signed-off-by: Mikhail Ilyin <m.ilin@samsung.com>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
2014-10-06 21:53:35 +03:00
Alex Bennée
6db8b53866 trace: add some tcg tracing support
This adds a couple of tcg-specific trace-events which are useful for
tracing execution through tcg-generated blocks. It's been tested with
lttng user-space tracing but is generic enough for all systems. The tcg
events are:

  * translate_block - when a subject block is translated
  * exec_tb - when a translated block is entered
  * exec_tb_exit - when we exit the translated code
  * exec_tb_nocache - special case translations

Of course we can only trace the entrance to the first block of a chain
as each block will jump directly to the next when it can. See the -d
nochain patch to allow more complete tracing at the expense of
performance.
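
Illustratively, the generated trace_* helpers are called from the
obvious places (event arguments approximated; the authoritative
definitions are in trace-events):

    trace_translate_block(tb, pc, tb->tc_ptr);  /* in tb_gen_code()     */
    trace_exec_tb(tb, pc);                      /* before entering a TB */
    trace_exec_tb_nocache(tb, pc);              /* for the nocache path */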

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-08-12 14:26:12 +01:00
Stefan Weil
5d831be272 Fix new typos (found by codespell)
* accomodate -> accommodate
* aquiring -> acquiring
* beacuse -> because
* loosing -> losing
* prefering -> preferring
* threshhold -> threshold

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-06-24 20:01:24 +04:00
Paolo Bonzini
38183310be memory: move preallocation code out of exec.c
So that backends can use it.

Since we need the page size for efficiency, move code to compute it
out of translate-all.c and into util/oslib-win32.c.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2014-06-19 18:44:19 +03:00
Richard Henderson
483c76e140 tcg-mips: Constrain the code_gen_buffer to be within one 256mb segment
This assures us use of J for exit_tb and goto_tb, and JAL for calling
into the generated bswap helpers.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:45:00 -07:00
Richard Henderson
479eb12108 tcg-mips: Layout executable and code_gen_buffer
Choosing good addresses for them means we can use JAL for helper calls.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-24 08:44:44 -07:00
Richard Henderson
1813e1758d tcg: Define tcg_insn_unit for code pointers
To be defined by the tcg backend based on the elemental unit of the ISA.
During the transition, allow TCG_TARGET_INSN_UNIT_SIZE to be undefined,
which allows us to default tcg_insn_unit to the current uint8_t.
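
The transitional default could look like this sketch (the real ladder
in tcg.h also covers 2- and 8-byte units):

    #ifndef TCG_TARGET_INSN_UNIT_SIZE
    #define TCG_TARGET_INSN_UNIT_SIZE 1
    #endif
    #if TCG_TARGET_INSN_UNIT_SIZE == 1
    typedef uint8_t tcg_insn_unit;
    #elif TCG_TARGET_INSN_UNIT_SIZE == 4
    typedef uint32_t tcg_insn_unit;
    #endif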

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-05-12 10:03:04 -07:00
Andrei Warkentin
cd7ccc8351 page_check_range: don't bail out early after unprotecting page
When checking a page range, if we found that a page was made read-only
by QEMU because it contained translated code, we were incorrectly
returning immediately after unprotecting that page, rather than
continuing to check the entire range. As a result we might fail to
unprotect pages later in the range, or might incorrectly return a
"success" result even if later pages were not writable.

In particular, this could cause segfaults in a case where
signals are delivered back to back on a target architecture
which uses trampoline code in the stack frame (as AArch64
currently does). The second signal causes a segfault because
the frame cannot be written to (it was protected because
we translated and executed the restorer trampoline, and the
unprotect logic did not unprotect the whole range).
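
The shape of the fix in page_check_range is roughly (condensed sketch
of the loop):

    for (addr = start; addr < end; addr += TARGET_PAGE_SIZE) {
        if (!(page_get_flags(addr) & PAGE_WRITE)) {
            if (!page_unprotect(addr, 0, NULL)) {
                return -1;      /* genuinely not writable */
            }
            /* the old code returned 0 here, skipping the rest */
        }
    }
    return 0;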

Signed-off-by: Andrei Warkentin <andrey.warkentin@gmail.com>
[PMM: expanded commit message a bit]
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-04 18:16:03 +01:00
Andreas Färber
a47dddd734 exec: Change cpu_abort() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:52:28 +01:00
Andreas Färber
baea4fae7b cputlb: Change tlb_unprotect_code_phys() argument to CPUState
Note that the argument is unused.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
0ea8cb8895 cpu-exec: Change cpu_resume_from_signal() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
611d4f996f translate-all: Change tb_flush_jmp_cache() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
648f034c6c translate-all: Change tb_gen_code() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
90b40a696a translate-all: Change cpu_io_recompile() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
239c51a54f translate-all: Change tb_check_watchpoint() argument to CPUState
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:48 +01:00
Andreas Färber
74f10515d1 translate-all: Change cpu_restore_state_from_tb() argument to CPUState
And normalize the argument order.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:47 +01:00
Andreas Färber
3f38f309b2 translate-all: Change cpu_restore_state() argument to CPUState
This lets us drop some local variables in tlb_fill() functions.

Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13 19:20:47 +01:00