The i586-mingw32msvc-gcc 4.4.4 cross compiler from Debian Squeeze does not support
__sync_val_compare_and_swap by default.
Using -march=i686 fixes that and should also result in better code.
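For reference, a minimal sketch of the kind of code affected (hedged; the
function and variable names are illustrative, not taken from the patch):

    #include <stdint.h>

    /* With -march=i586, i586-mingw32msvc-gcc 4.4.4 does not expand this
     * builtin inline and instead references an out-of-line helper that is
     * never provided, so linking fails; -march=i686 turns it into an
     * inline cmpxchg instruction. */
    static uint32_t cas_u32(volatile uint32_t *p, uint32_t old_val,
                            uint32_t new_val)
    {
        return __sync_val_compare_and_swap(p, old_val, new_val);
    }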
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The type cast must use tcg_target_long instead of long.
This makes a difference for hosts where sizeof(long) != sizeof(void *).
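A hedged illustration of why the cast width matters (tcg_target_long is
approximated with intptr_t purely for this sketch; in QEMU it comes from
tcg/tcg.h):

    #include <stdint.h>

    /* Stand-in for TCG's host-register-sized integer type; in QEMU it is
     * defined according to TCG_TARGET_REG_BITS. */
    typedef intptr_t tcg_target_long;

    static tcg_target_long addr_of(void *p)
    {
        /* Correct: tcg_target_long is as wide as a host pointer. */
        return (tcg_target_long)p;
        /* Wrong on LLP64 hosts such as 64-bit Windows, where
         * sizeof(long) == 4 but sizeof(void *) == 8:
         *     return (long)p;    // silently drops the upper 32 bits
         */
    }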
Cc: Anthony Green <green@moxielogic.com>
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Anthony Green <green@moxielogic.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Define and use I440FX_PCI_DEVICE() instead of using DO_UPCAST().
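A sketch of the usual QOM cast-macro pattern being adopted here; the struct
name and type-name string are assumptions, OBJECT_CHECK() is the standard
QOM helper:

    /* Checked QOM downcast: verifies the object's class at run time,
     * unlike DO_UPCAST(), which relies purely on struct layout. */
    #define TYPE_I440FX_PCI_DEVICE "i440FX"
    #define I440FX_PCI_DEVICE(obj) \
        OBJECT_CHECK(PCII440FXState, (obj), TYPE_I440FX_PCI_DEVICE)

    /* Usage:
     *     PCII440FXState *d = I440FX_PCI_DEVICE(dev);
     * instead of:
     *     PCII440FXState *d = DO_UPCAST(PCII440FXState, dev, pci_dev);
     */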
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1361580039-4459-2-git-send-email-dwmw2@infradead.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Linux uses the lowest enslaved MAC address as the MAC address of
the bridge. Set the MAC address to a high value so that it does not
affect the MAC address of the bridge.
Changing the MAC address of the bridge could cause a few seconds
of network downtime.
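A hedged sketch of the approach; the exact octets and the ioctl plumbing
are illustrative, not copied from the patch:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <net/if.h>

    /* Give the tap interface a MAC that sorts above any real NIC's MAC
     * (first octet 0xFE, locally administered unicast), so the Linux
     * bridge, which adopts the numerically lowest enslaved MAC, will
     * never switch to it. */
    static int set_high_mac(int sock, const char *ifname)
    {
        static const unsigned char mac[6] = {
            0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF
        };
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_hwaddr.sa_family = 1;               /* ARPHRD_ETHER */
        memcpy(ifr.ifr_hwaddr.sa_data, mac, sizeof(mac));
        return ioctl(sock, SIOCSIFHWADDR, &ifr);
    }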
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1363971468-21154-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The qdev field is no longer needed, just drop it.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-7-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Now that virtio-balloon-pci has been switched to the new API, we can use
QOM casts.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-6-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
This removes the old init and exit functions, as they are no longer needed.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-5-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Here virtio-balloon-ccw is modified for the new API. The virtio-balloon-ccw
device extends virtio-ccw-device as before. It creates and connects a
virtio-balloon during init. The properties are not modified.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-4-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Here virtio-balloon-pci is modified for the new API. The virtio-balloon-pci
device extends virtio-pci. It creates and connects a virtio-balloon during
init. The properties are not changed.
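A rough sketch of the pattern this describes; the proxy struct, cast macro
and function names follow the series' conventions but are assumptions here,
while object_initialize() and object_property_add_child() are the standard
QOM calls of that era:

    /* The PCI proxy embeds the backend device and wires it up at init time. */
    typedef struct VirtIOBalloonPCI {
        VirtIOPCIProxy parent_obj;
        VirtIOBalloon  vdev;
    } VirtIOBalloonPCI;

    static void virtio_balloon_pci_instance_init(Object *obj)
    {
        VirtIOBalloonPCI *dev = VIRTIO_BALLOON_PCI(obj);

        /* Create the virtio-balloon backend as a child of the proxy; it is
         * then plugged onto the proxy's virtio-bus when the device realizes. */
        object_initialize(&dev->vdev, TYPE_VIRTIO_BALLOON);
        object_property_add_child(obj, "virtio-backend",
                                  OBJECT(&dev->vdev), NULL);
    }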
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-3-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Create virtio-balloon, which extends virtio-device so that it can be
connected to the virtio-bus.
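A minimal sketch of the corresponding type registration (standard QOM
boilerplate; the class_init name is an assumption):

    static const TypeInfo virtio_balloon_info = {
        .name          = TYPE_VIRTIO_BALLOON,
        /* Deriving from virtio-device is what allows the balloon to be
         * plugged onto a virtio-bus exposed by a transport (pci, ccw, ...). */
        .parent        = TYPE_VIRTIO_DEVICE,
        .instance_size = sizeof(VirtIOBalloon),
        .class_init    = virtio_balloon_class_init,
    };

    static void virtio_register_types(void)
    {
        type_register_static(&virtio_balloon_info);
    }

    type_init(virtio_register_types)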
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1364377755-15508-2-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
# By Kevin Wolf (22) and Peter Lieven (1)
# Via Stefan Hajnoczi
* stefanha/block: (23 commits)
block: Fix direct use of protocols as driver for bdrv_open()
qcow2: Gather clusters in a looping loop
qcow2: Move cluster gathering to a non-looping loop
qcow2: Allow requests with multiple l2metas
qcow2: Use byte granularity in qcow2_alloc_cluster_offset()
qcow2: Prepare handle_alloc/copied() for byte granularity
qcow2: handle_copied(): Implement non-zero host_offset
qcow2: handle_copied(): Get rid of keep_clusters parameter
qcow2: handle_copied(): Get rid of nb_clusters parameter
qcow2: Factor out handle_copied()
qcow2: Clean up handle_alloc()
qcow2: Finalise interface of handle_alloc()
qcow2: handle_alloc(): Get rid of keep_clusters parameter
qcow2: handle_alloc(): Get rid of nb_clusters parameter
qcow2: Factor out handle_alloc()
qcow2: Decouple cluster allocation from cluster reuse code
qcow2: Change handle_dependency to byte granularity
qcow2: Improve check for overlapping allocations
qcow2: Handle dependencies earlier
qcow2: Remove bogus unlock of s->lock
...
# By Lluís Vilanova (7) and others
# Via Stefan Hajnoczi
* stefanha/tracing:
vl: add runstate_set tracepoint
.gitignore: rename trace/generated-tracers.dtrace
.gitignore: add trace/generated-events.[ch]
trace: rebuild generated-events.o when configuration changes
trace: [stderr] Port to generic event information and new control interface
trace: [simple] Port to generic event information and new control interface
trace: [default] Port to generic event information and new control interface
trace: [monitor] Use new event control interface
trace: Provide a detailed event control interface
trace: Provide a generic tracing event descriptor
trace: [tracetool] Explicitly identify public backends
This patch adds a tracepoint so we can follow RunState transitions. It will
be useful for investigating problems that occur during special events such
as live migration, shutdown, suspend, and so on.
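A hedged sketch of what this amounts to; the trace-events line and the body
of runstate_set() are simplified approximations, not the actual hunks:

    /* trace-events entry (tracetool input), roughly:
     *     runstate_set(int new_state) "new state %d"
     *
     * and the generated trace_runstate_set() helper is called on every
     * transition in vl.c: */
    void runstate_set(RunState new_state)
    {
        /* ...existing transition validity checks elided... */
        trace_runstate_set(new_state);
        current_run_state = new_state;
    }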
Signed-off-by: Kazuya Saito <saito.kazuya@jp.fujitsu.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
For a while the file was called trace/generated-tracers-dtrace.dtrace
but today it's called trace/generated-tracers.dtrace.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Make sure to rebuild generated-events.o when ./configure options change.
This prevents linker errors when a stale generated-events.o gets linked
with code compiled against fresh headers. For example, try building
with ./configure --enable-trace-backend=stderr followed by ./configure
--enable-trace-backend=dtrace.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The backend is forced to dump event numbers using 64 bits, as TraceEventID is
an enum.
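A small illustration of the widening this forces (hedged; the real backend
records events in its own format, this only shows the 64-bit cast):

    #include <inttypes.h>
    #include <stdio.h>

    /* TraceEventID is an enum from the generated trace headers, so its
     * width is implementation-defined; widening to uint64_t gives the
     * backend one fixed output size. */
    static void dump_event_id(TraceEventID id)
    {
        fprintf(stderr, "trace event %" PRIu64 "\n", (uint64_t)id);
    }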
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This interface decouples obtaining events from interacting with them.
Events can be obtained through three different methods (a rough sketch of
the corresponding lookup calls follows the list):
* identifier
* name
* simple wildcard pattern
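The lookup calls could plausibly look like this; the exact names and
signatures in trace/control.h are assumptions:

    TraceEvent *trace_event_id(TraceEventID id);      /* by identifier  */
    TraceEvent *trace_event_name(const char *name);   /* by exact name  */
    bool trace_event_pattern(const char *pat,
                             TraceEvent **ev);        /* iterate over matches
                                                         of a simple wildcard
                                                         pattern */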
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Uses tracetool to generate a backend-independent tracing event description
(struct TraceEvent).
The values for this structure are generated with the non-public "events"
backend ("events-c" frontend).
The generation of the defines to check if an event is statically enabled is also
moved to the "events" backend ("events-h" frontend).
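A hedged sketch of what such a descriptor could look like; the exact field
set is an assumption:

    typedef struct TraceEvent {
        TraceEventID id;     /* stable numeric identifier               */
        const char *name;    /* event name as written in trace-events   */
        const bool sstate;   /* statically enabled at build time?       */
        bool dstate;         /* dynamically enabled at run time?        */
    } TraceEvent;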
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Public backends are those printed by "--list-backends" and thus considered valid
by the configure script.
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
bdrv_open_common() implements direct use of protocols by copying the
pre-opened BlockDriverStates to bs using bdrv_swap(). It did however
first set some fields in bs, which end up in file after the swap. When
bdrv_open() destroys file, it appears to be open even though it isn't,
so qemu could segfault while trying to close it.
Reorder the operations to return immediately in such cases so that file
is correctly detected as closed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Instead of just checking once in exactly this order if there are
dependencies, non-COW clusters and new allocations, this starts looping
around these. This way we can, for example, gather non-COW clusters after
new allocations as long as the host cluster offsets stay contiguous.
Once handle_dependencies() is extended so that COW areas of in-flight
allocations can be overwritten, this allows us to continue gathering
other clusters (we wouldn't be able to do that without this change
because we would have missed a possible second dependency in one of the
next clusters).
This means that in the typical sequential write case, we can combine the
COW overwrite of one cluster with the allocation of the next cluster as
soon as something like Delayed COW actually gets implemented. It is only
by avoiding splitting requests this way that Delayed COW actually starts
improving performance noticeably.
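Very loosely, the new shape is something like the following; this is a
hedged condensation with error handling omitted, and the real signatures,
return conventions and exit conditions of handle_dependencies(),
handle_copied() and handle_alloc() differ:

    while (remaining > 0) {
        uint64_t cur_bytes = remaining;

        /* Stop (or, later, reuse) if this piece overlaps the COW area of
         * an in-flight allocation. */
        handle_dependencies(bs, guest_offset, &cur_bytes, m);

        /* Gather already-allocated clusters that need no COW, as long as
         * their host offsets stay contiguous with what we have so far... */
        handle_copied(bs, guest_offset, &host_offset, &cur_bytes, m);

        /* ...and only then allocate new clusters for what is still missing. */
        handle_alloc(bs, guest_offset, &host_offset, &cur_bytes, m);

        guest_offset += cur_bytes;
        remaining    -= cur_bytes;
    }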
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This patch is mainly to separate the indentation change from the
semantic changes. All that really changes here is that everything moves
into a while loop, all 'goto done' become 'break' and at the end of the
loop a new 'break' is inserted.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Instead of expecting a single l2meta, have a list of them. This allows
us to still have a single I/O request for the guest data, even though
multiple l2meta may be needed in order to describe both a COW overwrite
and a new cluster allocation (typical sequential write case).
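One plausible shape for this, as a hedged sketch (the real series may link
the metas differently):

    typedef struct QCowL2Meta QCowL2Meta;

    /* Each COW/allocation region of one guest write gets its own
     * QCowL2Meta; chaining them lets a single request carry several. */
    struct QCowL2Meta {
        /* ...fields describing one COW/allocation region... */
        QCowL2Meta *next;    /* next meta belonging to the same request */
    };

    /* The write path can then walk the chain once the data is written:
     *     for (m = l2meta; m != NULL; m = m->next) {
     *         qcow2_alloc_cluster_link_l2(bs, m);
     *     }
     */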
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This gets rid of nb_clusters and keep_clusters and the associated
complicated calculations. Just advance the number of bytes that have
been processed and everything is fine.
This patch advances the variables even after the last operation, even
though they aren't used afterwards, to make things look more uniform.
A later patch will turn the whole thing into a loop and then
it actually starts making sense.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This makes handle_alloc() and handle_copied() return byte-granularity
host offsets instead of always returning the cluster start. This is
required so that qcow2_alloc_cluster_offset() can stop aligning
everything to cluster boundaries.
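In other words (a hedged illustration, not an actual hunk from the patch):

    /* Previously the helpers rounded down to the cluster start; with byte
     * granularity the sub-cluster part of the guest offset is carried over
     * into the returned host offset (cluster_size is a power of two): */
    *host_offset = cluster_start + (guest_offset & (s->cluster_size - 1));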
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Look only for clusters that start at a given physical offset.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now *bytes is used to return the length of the area that can be written
to without performing an allocation or COW.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
handle_copied() uses its bytes parameter now to determine how many
clusters it should try to find.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Things can be simplified a bit now. No semantic changes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The interface works completely on a byte granularity now and duplicated
parameters are removed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
handle_alloc() is now called with the offset at which the actual new
allocation starts instead of the offset at which the whole write request
starts, part of which may already have been processed.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
We already communicate the same information in *bytes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This moves some code that prepares the allocation of new clusters to
where the actual allocation happens. This is the minimum required to be
able to move it to a separate function in the next patch.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This is a more precise description of what really constitutes a
dependency. The behaviour doesn't change at this point because the COW
area of the old request is still aligned to cluster boundaries and
therefore an overlap is detected whenever the requests touch any part of
the same cluster.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The old code detected an overlapping allocation even when the
allocations didn't actually overlap, but were only adjacent.
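The distinction, as a hedged sketch (names are illustrative; QEMU has a
similar range helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* Two ranges [a, a+al) and [b, b+bl) genuinely overlap only if each
     * one starts strictly before the other ends; a check using <= here
     * also matches ranges that are merely adjacent, which is what the
     * old code effectively did. */
    static bool ranges_overlap(uint64_t a, uint64_t al,
                               uint64_t b, uint64_t bl)
    {
        return a < b + bl && b < a + al;
    }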
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Handling overlapping allocations isn't just a detail of cluster
allocation. It is rather one of three ways to get the host cluster
offset for a write request:
1. If a request overlaps an in-flight allocation, the cluster offset
can be taken from there (this is what handle_dependencies will evolve
into) or the request must just wait until the allocation has
completed. Accessing the L2 table is not valid in this case, as it has
outdated information.
2. Outside overlapping areas, check the clusters that can be written to
as they are, with no COW involved.
3. If a COW is required, allocate new clusters.
Changing the code to reflect this doesn't change the behaviour because
overlaps cannot exist for clusters that are kept in step 2. It does
however make it easier for later patches to work on clusters that belong
to an allocation that is still in flight.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The unlock wakes up the next coroutine, but the currently running
coroutine will lock it again before it yields, so this doesn't make a
lot of sense.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This should be based on the virtual disk size, not on the size of the
image.
Interesting observation: With some VM state stored in the image file,
percentages higher than 100% are possible, even though snapshots
themselves are ignored. This is a qcow2 bug to be fixed another day: The
VM state should be discarded in the active L2 tables after completing
the snapshot creation.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
commit 4d454574 "qemu-option: move standard option definitions
out of qemu-config.c" broke support for command-line option
groups that were registered during bdrv_init(). In particular,
support for -iscsi options has been broken since that commit.
Fix this by moving the bdrv_init_with_whitelist() call before command
line argument parsing.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that the core takes care of fe_open tracking, we no longer need this hack.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-12-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
When migrating a host with a spice agent running, the mouse becomes
non-operational after the migration because the agent state is
inconsistent between the guest and the client.
After migration, the spicevmc backend on the destination has never been
notified of the (non-zero) guest_connected state. Virtio-serial holds this
state information and migrates it; this patch properly propagates it
to virtio-console and through that to interested chardev backends.
rhbz #725965
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Message-id: 1364292483-16564-11-git-send-email-hdegoede@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>