
Merge tag 'migration-20231031-pull-request' of https://gitlab.com/juan.quintela/qemu into staging

Migration Pull request (20231031)

Hi

This is a repeat of the Migration PULL for 20231020.
- I removed vmstate_register (big problems with s390x)
- I added yet more counters (juan)

CI: https://gitlab.com/juan.quintela/qemu/-/pipelines/1055797950

Please apply.

Thanks, Juan.

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEGJn/jt6/WMzuA0uC9IfvGFhy1yMFAmVAwmYACgkQ9IfvGFhy
# 1yPJ9g//f8Up+5Az0DmJMWwRe+08vLa3ZRCSh5aCRJguFVfMZSVxRNuoikQ/C/Gz
# 1ePB+Q8H0NcP86FF7pifhtLU0uE9L4At4Z+vOQP1+n67p7aush050kKQxyDYIfO2
# 3tO2HkfHvC/R3S5FtqQtE1Y0/MpHdj1vgV9bNidPorA6EZ01KEEfWw3soptuD14I
# LPvXA8BG5mOvB7R55MymTAej3ZDmOUQlZotsE2KmlkOfzYoqTtApkLtW03/WH8b8
# fAYJ0ghYpesRTO1rF61n1peLMUr+/HRLqGJmhLDSEZZlB5tnUYeiLR9dRJ1/1+o2
# zNjLr6X2hnia6Kb0UibRoAcyyy8lSLp79Zt5nhDneuTSQxeYhNh6EecxAzKvd/02
# vfE/reOEkZn7KzYH/MvlD5P6XmwrT5aV9cqmyC/8BkNnipHAtJ2Av1H4ONdnahuK
# hOhLRAGE7SINtgo8jdauQNor1QAsIX19nvYk9p7ta5VAysrDSbuD+9Yq7HtUErlP
# 585z5BPGfaP2GwIXPNJNcqXwPh0InInGASqEWmYSlu8GF3Ic0KNWWrC5bwSn7tHL
# I7qaMrCHxvWGYx6cRzzp08EqCcbOQCixrPyk8g6o3SgXHrTGKthzjPG5bLe+QXpv
# P2gblC7Fo3sUo89IwVjsRMO3nU9wBfb9skE7iZM06SILO7QD3u8=
# =r1DI
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 31 Oct 2023 18:01:26 JST
# gpg:                using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [full]
# gpg:                 aka "Juan Quintela <quintela@trasno.org>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03  4B82 F487 EF18 5872 D723

* tag 'migration-20231031-pull-request' of https://gitlab.com/juan.quintela/qemu: (38 commits)
  qemu-file: Make qemu_fflush() return errors
  migration: Remove transferred atomic counter
  migration: Use migration_transferred_bytes()
  qemu-file: Simplify qemu_file_get_error()
  migration: migration_rate_limit_reset() don't need the QEMUFile
  migration: migration_transferred_bytes() don't need the QEMUFile
  qemu-file: Remove _noflush from qemu_file_transferred_noflush()
  qemu_file: Remove unused qemu_file_transferred()
  migration: Use the number of transferred bytes directly
  qemu_file: total_transferred is not used anymore
  qemu_file: Use a stat64 for qemu_file_transferred
  qemu-file: Don't increment qemu_file_transferred at qemu_file_fill_buffer
  migration: Stop migration immediately in RDMA error paths
  migration: Deprecate old compression method
  migration: Deprecate block migration
  migration: migrate 'blk' command option is deprecated.
  migration: migrate 'inc' command option is deprecated.
  qemu-iotests: Filter warnings about block migration being deprecated
  migration: set file error on subsection loading
  migration: rename vmstate_save_needed->vmstate_section_needed
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Stefan Hajnoczi 2023-11-01 06:56:53 +09:00
commit f13b978cc7
23 changed files with 880 additions and 276 deletions

View file

@ -469,3 +469,38 @@ Migration
``skipped`` field in Migration stats has been deprecated. It hasn't
been used for more than 10 years.
``inc`` migrate command option (since 8.2)
''''''''''''''''''''''''''''''''''''''''''
Use blockdev-mirror with NBD instead.
As an intermediate step, the ``inc`` functionality can be achieved by
setting the ``block-incremental`` migration parameter to ``true``.
But this parameter is also deprecated.
``blk`` migrate command option (since 8.2)
''''''''''''''''''''''''''''''''''''''''''
Use blockdev-mirror with NBD instead.
As an intermediate step, the ``blk`` functionality can be achieved by
setting the ``block`` migration capability to ``true``. But this
capability is also deprecated.
block migration (since 8.2)
'''''''''''''''''''''''''''
Block migration is too inflexible. It needs to migrate all block
devices or none.
Please see "QMP invocation for live storage migration with
``blockdev-mirror`` + NBD" in docs/interop/live-block-operations.rst
for a detailed explanation.
old compression method (since 8.2)
''''''''''''''''''''''''''''''''''
The old compression method fails too often and has too many races. We
are going to remove it if nobody fixes it. For starters, the
migration-test compression tests are disabled because they fail
randomly. If you need compression, use the multifd compression methods.

View file

@ -28,6 +28,8 @@ the guest to be stopped. Typically the time that the guest is
unresponsive during live migration is in the low hundreds of milliseconds
(notice that this depends on a lot of things).
.. contents::
Transports
==========
@ -917,3 +919,521 @@ versioned machine types to cut down on the combinations that will need
support. This is also useful when newer versions of firmware outgrow
the padding.
Backwards compatibility
=======================
How backwards compatibility works
---------------------------------
When we do migration, we have two QEMU processes: the source and the
target. There are two cases: they are either the same version or
different versions. The easy case is when they are the same version.
The difficult one is when they are different versions.
There are two things that are different, but they have very similar
names and sometimes get confused:
- QEMU version
- machine type version
Let's start with a practical example:
- qemu-system-x86_64 (v5.2), from now on qemu-5.2.
- qemu-system-x86_64 (v5.1), from now on qemu-5.1.
Related to this are the "latest" machine types defined on each of
them:
- pc-q35-5.2 (the newest one in qemu-5.2), from now on pc-5.2
- pc-q35-5.1 (the newest one in qemu-5.1), from now on pc-5.1
First of all, migration is only supposed to work if you use the same
machine type on both source and destination. The QEMU hardware
configuration also needs to be the same on source and destination.
Most aspects of the backend configuration can be changed at will,
except for a few cases where the backend features influence frontend
device feature exposure. But that is not relevant for this section.
I am going to list the combinations that we can have. Let's start
with the trivial ones, where QEMU is the same on source and
destination:
1 - qemu-5.2 -M pc-5.2 -> migrates to -> qemu-5.2 -M pc-5.2
This is the latest QEMU with the latest machine type.
This has to work, and if it doesn't work it is a bug.
2 - qemu-5.1 -M pc-5.1 -> migrates to -> qemu-5.1 -M pc-5.1
Exactly the same case as the previous one, but for 5.1.
Nothing to see here either.
These are the easiest ones; we will not talk about them any further in
this section.
Now we start with the more interesting cases. Consider the case where
we have the same QEMU version on both sides (qemu-5.2), but instead of
using the latest machine type for that version (pc-5.2), we are using
one from an older QEMU version, in this case pc-5.1.
3 - qemu-5.2 -M pc-5.1 -> migrates to -> qemu-5.2 -M pc-5.1
It needs to use the definition of pc-5.1 and the devices as they
were configured on 5.1, but this should be easy in the sense that
both sides are the same QEMU and both sides have exactly the same
idea of what the pc-5.1 machine is.
4 - qemu-5.1 -M pc-5.2 -> migrates to -> qemu-5.1 -M pc-5.2
This combination is not possible as qemu-5.1 doesn't understand the
pc-5.2 machine type. So nothing to worry about here.
Now come the interesting ones, when the two QEMU processes are
different versions. Notice also that the machine type needs to be
pc-5.1, because of the limitation that qemu-5.1 doesn't know about
pc-5.2. So the possible cases are:
5 - qemu-5.2 -M pc-5.1 -> migrates to -> qemu-5.1 -M pc-5.1
This migration is known as newer to older. While developing 5.2 we
need to take care not to break migration to qemu-5.1. Notice that we
can't update qemu-5.1 to understand whatever qemu-5.2 decides to
change, so it is up to the qemu-5.2 side to make the relevant changes.
6 - qemu-5.1 -M pc-5.1 -> migrates to -> qemu-5.2 -M pc-5.1
This migration is known as older to newer. We need to make sure
that we are able to receive migrations from qemu-5.1. The problem is
similar to the previous one.
If qemu-5.1 and qemu-5.2 were the same, there would not be any
compatibility problems. But the reason that we create qemu-5.2 is to
get new features, devices, defaults, etc.
If we get a device that has a new feature, or change a default value,
we have a problem when we try to migrate between different QEMU
versions.
So we need a way to tell qemu-5.2 that when we are using machine type
pc-5.1, it needs to **not** use the feature, to be able to migrate to
real qemu-5.1.
The equivalent applies when migrating from qemu-5.1 to qemu-5.2:
qemu-5.2 has to expect that it is not going to get data for the new
feature, because qemu-5.1 doesn't know about it.
How do we tell QEMU about these device feature changes? Through the
hw_compat_X_Y arrays in hw/core/machine.c.
If we change a default value, we need to put the old value back in
that array, and the device, during initialization, needs to look at
that array to see what value it should use for that feature. What we
put in that array is the value of a device property.
To create a property for a device, we need to use one of the
DEFINE_PROP_*() macros. See include/hw/qdev-properties.h to find the
macros that exist. With them, we set the default value for that
property, and that is what the latest released version is going to
use. If we want a different value for a previous version, we can
change it in the hw_compat_X_Y arrays.
hw_compat_X_Y is an array of entries, each with the format:
- name_device
- name_property
- value
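For illustration only (``widget-device``, ``WidgetState`` and the field
names below are made up for this sketch, not a real QEMU device),
pairing a new property default with its compat override could look
like::

    /* Device code: the property carries the new default used by the
     * latest machine types */
    static Property widget_properties[] = {
        DEFINE_PROP_UINT16("num-queues", WidgetState, conf.num_queues, 4),
        DEFINE_PROP_END_OF_LIST(),
    };

    /* hw/core/machine.c: older machine types get the old value back */
    GlobalProperty hw_compat_5_1[] = {
        ...
        { "widget-device", "num-queues", "1" },
        ...
    };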
Let's see a practical example.
In qemu-5.2 virtio-blk-device got multiqueue support. This is a
change that is not backward compatible. In qemu-5.1 it has one
queue; in qemu-5.2 it has the same number of queues as the number of
CPUs in the system.
When we are doing migration, if we migrate from a device that has 4
queues to a device that has only one queue, we don't know where to
put the extra information for the other 3 queues, and the migration
fails.
There is a similar problem when we migrate from qemu-5.1, which has
only one queue, to qemu-5.2: we only send information for one queue,
but the destination has 4, so 3 queues are not properly initialized
and anything can happen.
So, how can we address this problem? Easy: just convince qemu-5.2
that when it is running pc-5.1, it needs to set the number of queues
for virtio-blk devices to 1.
That way we fix cases 5 and 6.
5 - qemu-5.2 -M pc-5.1 -> migrates to -> qemu-5.1 -M pc-5.1
qemu-5.2 -M pc-5.1 sets number of queues to be 1.
qemu-5.1 -M pc-5.1 expects number of queues to be 1.
correct. migration works.
6 - qemu-5.1 -M pc-5.1 -> migrates to -> qemu-5.2 -M pc-5.1
qemu-5.1 -M pc-5.1 sets number of queues to be 1.
qemu-5.2 -M pc-5.1 expects number of queues to be 1.
correct. migration works.
And now the other interesting case, case 3. In this case we have:
3 - qemu-5.2 -M pc-5.1 -> migrates to -> qemu-5.2 -M pc-5.1
Here we have the same QEMU on both sides. So it doesn't matter a
lot whether we have set the number of queues to 1 or not, because
both sides are the same.
WRONG!
Think about what happens if we do one of these double migrations:
A -> migrates -> B -> migrates -> C
where:
A: qemu-5.1 -M pc-5.1
B: qemu-5.2 -M pc-5.1
C: qemu-5.2 -M pc-5.1
Migration A -> B is case 6, so the number of queues needs to be 1.
Migration B -> C is case 3, so we don't care. But actually we
do care, because the guest was not started in qemu-5.2; it came
migrated from qemu-5.1. So to be on the safe side, we need to
always use one queue when we are using pc-5.1.
Now, how was this done in reality? The following commit shows how it
was done::
commit 9445e1e15e66c19e42bea942ba810db28052cd05
Author: Stefan Hajnoczi <stefanha@redhat.com>
Date: Tue Aug 18 15:33:47 2020 +0100
virtio-blk-pci: default num_queues to -smp N
The relevant parts for migration are::
@@ -1281,7 +1284,8 @@ static Property virtio_blk_properties[] = {
#endif
DEFINE_PROP_BIT("request-merging", VirtIOBlock, conf.request_merging, 0,
true),
- DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
+ DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues,
+ VIRTIO_BLK_AUTO_NUM_QUEUES),
DEFINE_PROP_UINT16("queue-size", VirtIOBlock, conf.queue_size, 256),
It changes the default value of num_queues. But it fixes it for old
machine types to have the right value::
@@ -31,6 +31,7 @@
GlobalProperty hw_compat_5_1[] = {
...
+ { "virtio-blk-device", "num-queues", "1"},
...
};
A device with different features on both sides
-----------------------------------------------
Let's assume that we are using the same QEMU binary on both sides,
just to make things easier. But we have a device that has
different features on each side of the migration. That can be
because the devices are different, because the kernel drivers of the
two devices have different features, or whatever.
How can we get this to work with migration? The way to do that is
"theoretically" easy: take the features that the device has on the
source of the migration and the features that the device has on the
target of the migration, compute the intersection of the features of
both sides, and that is how you should launch QEMU.
Notice that this is not completely related to QEMU. The most
important thing here is that this should be handled by the managing
application that launches QEMU. If QEMU is configured correctly, the
migration will succeed.
That said, actually doing it is complicated. Almost all devices are
bad at being launched with only some features enabled. With one big
exception: CPUs.
You can read the documentation for QEMU x86 cpu models here:
https://qemu-project.gitlab.io/qemu/system/qemu-cpu-models.html
Note that when they talk about migration, they recommend choosing the
newest CPU model that is supported by all the hosts involved.
Let's say that we have:
Host A:
Device X has the feature Y
Host B:
Device X does not have feature Y
If we try to migrate from host A to host B without any care, the
migration will fail, because when it tries to load feature Y on the
destination, it will find that the hardware is not there.
Doing this would be the equivalent of doing the following with CPUs:
Host A:
$ qemu-system-x86_64 -cpu host
Host B:
$ qemu-system-x86_64 -cpu host
When the two hosts have different CPU features, this is guaranteed to
fail, especially if Host B has fewer features than Host A. If Host A
has fewer features than Host B, sometimes it works. The important word
in the last sentence is "sometimes".
So, forgetting about CPU models and continuing with the -cpu host
example, let's say that the difference between the CPUs is that Host A
and Host B have the following features:
Features:   'pcid'  'stibp'  'taa-no'
Host A:        X       X
Host B:                         X
If we want to migrate between them, the way to configure both QEMU
CPUs will be:
Host A:
$ qemu-system-x86_64 -cpu host,pcid=off,stibp=off
Host B:
$ qemu-system-x86_64 -cpu host,taa-no=off
And you would be able to migrate between them. It is the
responsibility of the management application or of the user to make
sure that the configuration is correct. QEMU doesn't know how to look
at this kind of feature in general.
Notice that we don't recommend using -cpu host for migration. It is
used in this example because it makes the example simpler.
Other devices have coarser control over individual features. If they
want to be able to migrate between hosts that expose different
features, the device needs a way to configure which ones it is going
to use, as sketched below.
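Reusing the hypothetical widget device from above (again, these names
are invented for illustration), such a knob can be exposed as a qdev
property that the management application sets, and the device only
advertises the optional feature when it is enabled::

    static Property widget_properties[] = {
        /* Let management decide whether optional feature Y is used */
        DEFINE_PROP_BOOL("feature-y", WidgetState, feature_y, true),
        DEFINE_PROP_END_OF_LIST(),
    };

    static void widget_realize(DeviceState *dev, Error **errp)
    {
        WidgetState *s = WIDGET(dev);

        if (s->feature_y) {
            /* expose/enable the optional feature towards the guest */
        }
    }

With something like this, both sides of the migration can be launched
with ``feature-y=off`` whenever one of the hosts cannot provide the
feature.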
In this section we have assumed that we are using the same QEMU
binary on both sides of the migration. If we use different QEMU
versions, then we also need to take into account all the other
differences and the examples become even more complicated.
How to mitigate when we have a backward compatibility error
-----------------------------------------------------------
We break migration for old machine types continuously during
development, but as soon as we find that there is a problem, we fix
it. The problem is what happens when we detect, after we have done a
release, that something has gone wrong.
Let's see how this works with one example.
After the release of qemu-8.0 we found a problem when migrating
the machine type pc-7.2:
- $ qemu-7.2 -M pc-7.2 -> qemu-7.2 -M pc-7.2
This migration works
- $ qemu-8.0 -M pc-7.2 -> qemu-8.0 -M pc-7.2
This migration works
- $ qemu-8.0 -M pc-7.2 -> qemu-7.2 -M pc-7.2
This migration fails
- $ qemu-7.2 -M pc-7.2 -> qemu-8.0 -M pc-7.2
This migration fails
So clearly something fails when migrating between qemu-7.2 and
qemu-8.0 with machine type pc-7.2. The error messages and git bisect
pointed to this commit.
In qemu-8.0 we got this commit::
commit 010746ae1db7f52700cb2e2c46eb94f299cfa0d2
Author: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Date: Thu Mar 2 13:37:02 2023 +0000
hw/pci/aer: Implement PCI_ERR_UNCOR_MASK register
The relevant bits of the commit for our example are these::
--- a/hw/pci/pcie_aer.c
+++ b/hw/pci/pcie_aer.c
@@ -112,6 +112,10 @@ int pcie_aer_init(PCIDevice *dev,
pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
PCI_ERR_UNC_SUPPORTED);
+ pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
+ PCI_ERR_UNC_MASK_DEFAULT);
+ pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
+ PCI_ERR_UNC_SUPPORTED);
pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
PCI_ERR_UNC_SEVERITY_DEFAULT);
The patch changes how we configure PCI space for AER. But QEMU fails
when the PCI space configuration is different between source and
destination.
The following commit shows how this got fixed::
commit 5ed3dabe57dd9f4c007404345e5f5bf0e347317f
Author: Leonardo Bras <leobras@redhat.com>
Date: Tue May 2 21:27:02 2023 -0300
hw/pci: Disable PCI_ERR_UNCOR_MASK register for machine type < 8.0
[...]
The relevant parts of the fix in QEMU are as follows:
First, we create a new property for the device to be able to configure
the old behaviour or the new behaviour::
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 8a87ccc8b0..5153ad63d6 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -79,6 +79,8 @@ static Property pci_props[] = {
DEFINE_PROP_STRING("failover_pair_id", PCIDevice,
failover_pair_id),
DEFINE_PROP_UINT32("acpi-index", PCIDevice, acpi_index, 0),
+ DEFINE_PROP_BIT("x-pcie-err-unc-mask", PCIDevice, cap_present,
+ QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
DEFINE_PROP_END_OF_LIST()
};
Notice that we enable the feature for new machine types.
Now we see how the fix is done. This is going to depend on what kind
of breakage happens, but in this case it is quite simple::
diff --git a/hw/pci/pcie_aer.c b/hw/pci/pcie_aer.c
index 103667c368..374d593ead 100644
--- a/hw/pci/pcie_aer.c
+++ b/hw/pci/pcie_aer.c
@@ -112,10 +112,13 @@ int pcie_aer_init(PCIDevice *dev, uint8_t cap_ver,
uint16_t offset,
pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
PCI_ERR_UNC_SUPPORTED);
- pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
- PCI_ERR_UNC_MASK_DEFAULT);
- pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
- PCI_ERR_UNC_SUPPORTED);
+
+ if (dev->cap_present & QEMU_PCIE_ERR_UNC_MASK) {
+ pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
+ PCI_ERR_UNC_MASK_DEFAULT);
+ pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
+ PCI_ERR_UNC_SUPPORTED);
+ }
pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
PCI_ERR_UNC_SEVERITY_DEFAULT);
I.e. if the property bit is enabled, we configure it as we did for
qemu-8.0; if the property bit is not set, we configure it as it was in
qemu-7.2.
All that is missing now is to disable the feature for old machine
types::
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 47a34841a5..07f763eb2e 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -48,6 +48,7 @@ GlobalProperty hw_compat_7_2[] = {
{ "e1000e", "migrate-timadj", "off" },
{ "virtio-mem", "x-early-migration", "false" },
{ "migration", "x-preempt-pre-7-2", "true" },
+ { TYPE_PCI_DEVICE, "x-pcie-err-unc-mask", "off" },
};
const size_t hw_compat_7_2_len = G_N_ELEMENTS(hw_compat_7_2);
And now, when qemu-8.0.1 is released with this fix, all combinations
are going to work as expected:
- $ qemu-7.2 -M pc-7.2 -> qemu-7.2 -M pc-7.2 (works)
- $ qemu-8.0.1 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2 (works)
- $ qemu-8.0.1 -M pc-7.2 -> qemu-7.2 -M pc-7.2 (works)
- $ qemu-7.2 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2 (works)
So normality has been restored and everything is OK, no?
Not really: now our matrix is much bigger. We started with the easy
cases; migration from the same version to the same version always
works:
- $ qemu-7.2 -M pc-7.2 -> qemu-7.2 -M pc-7.2
- $ qemu-8.0 -M pc-7.2 -> qemu-8.0 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2
Now the interesting ones, when the QEMU process versions are
different. The first pair fails, and we can do nothing about it: both
versions are released and we can't change anything.
- $ qemu-7.2 -M pc-7.2 -> qemu-8.0 -M pc-7.2
- $ qemu-8.0 -M pc-7.2 -> qemu-7.2 -M pc-7.2
These two are the ones that work. The whole point of making the
change in the qemu-8.0.1 release was to fix this issue:
- $ qemu-7.2 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2 -> qemu-7.2 -M pc-7.2
But now we find that qemu-8.0 can migrate neither to qemu-7.2 nor to
qemu-8.0.1:
- $ qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2 -> qemu-8.0 -M pc-7.2
So, if we start a pc-7.2 machine in qemu-8.0, we can't migrate it to
anything except another qemu-8.0.
Can we do better?
Yes. If we know that we are going to do this migration:
- $ qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2
We can launch the appropriate devices with::
--device...,x-pcie-err-unc-mask=on
And now we can receive a migration from 8.0. From then on, we can
keep doing such migrations to newer QEMU versions as long as we
remember to enable that property for pc-7.2. Notice that we really
need to remember; it is not enough to know that the source of the
migration is qemu-8.0. Think of this example:
$ qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2 -> qemu-8.2 -M pc-7.2
In the second migration, the source is not qemu-8.0, but we still have
that "problem" and need that property enabled. Notice that we need to
keep this mark/property until the machine is restarted. A normal
reboot is not enough (it doesn't reload QEMU); we need the machine to
be powered off and started again on a fixed QEMU. From then on we can
use the proper machine definition without the workaround property.

View file

@ -1202,7 +1202,7 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
void *opaque, JSONWriter *vmdesc,
int version_id, Error **errp);
bool vmstate_save_needed(const VMStateDescription *vmsd, void *opaque);
bool vmstate_section_needed(const VMStateDescription *vmsd, void *opaque);
#define VMSTATE_INSTANCE_ID_ANY -1

View file

@ -731,6 +731,9 @@ static int block_save_setup(QEMUFile *f, void *opaque)
trace_migration_block_save("setup", block_mig_state.submitted,
block_mig_state.transferred);
warn_report("block migration is deprecated;"
" use blockdev-mirror with NBD instead");
ret = init_blk_migration(f);
if (ret < 0) {
return ret;
@ -752,7 +755,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
static int block_save_iterate(QEMUFile *f, void *opaque)
{
int ret;
uint64_t last_bytes = qemu_file_transferred_noflush(f);
uint64_t last_bytes = qemu_file_transferred(f);
trace_migration_block_save("iterate", block_mig_state.submitted,
block_mig_state.transferred);
@ -804,7 +807,7 @@ static int block_save_iterate(QEMUFile *f, void *opaque)
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
uint64_t delta_bytes = qemu_file_transferred_noflush(f) - last_bytes;
uint64_t delta_bytes = qemu_file_transferred(f) - last_bytes;
return (delta_bytes > 0);
}

View file

@ -314,9 +314,7 @@ static void colo_send_message(QEMUFile *f, COLOMessage msg,
return;
}
qemu_put_be32(f, msg);
qemu_fflush(f);
ret = qemu_file_get_error(f);
ret = qemu_fflush(f);
if (ret < 0) {
error_setg_errno(errp, -ret, "Can't send COLO message");
}
@ -335,9 +333,7 @@ static void colo_send_message_value(QEMUFile *f, COLOMessage msg,
return;
}
qemu_put_be64(f, value);
qemu_fflush(f);
ret = qemu_file_get_error(f);
ret = qemu_fflush(f);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to send value for message:%s",
COLOMessage_str(msg));
@ -483,8 +479,7 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
}
qemu_put_buffer(s->to_dst_file, bioc->data, bioc->usage);
qemu_fflush(s->to_dst_file);
ret = qemu_file_get_error(s->to_dst_file);
ret = qemu_fflush(s->to_dst_file);
if (ret < 0) {
goto out;
}

View file

@ -745,6 +745,16 @@ void hmp_migrate(Monitor *mon, const QDict *qdict)
const char *uri = qdict_get_str(qdict, "uri");
Error *err = NULL;
if (inc) {
warn_report("option '-i' is deprecated;"
" use blockdev-mirror with NBD instead");
}
if (blk) {
warn_report("option '-b' is deprecated;"
" use blockdev-mirror with NBD instead");
}
qmp_migrate(uri, !!blk, blk, !!inc, inc,
false, false, true, resume, &err);
if (hmp_handle_error(mon, err)) {

View file

@ -30,7 +30,7 @@ bool migration_rate_exceeded(QEMUFile *f)
}
uint64_t rate_limit_start = stat64_get(&mig_stats.rate_limit_start);
uint64_t rate_limit_current = migration_transferred_bytes(f);
uint64_t rate_limit_current = migration_transferred_bytes();
uint64_t rate_limit_used = rate_limit_current - rate_limit_start;
if (rate_limit_max > 0 && rate_limit_used > rate_limit_max) {
@ -54,16 +54,16 @@ void migration_rate_set(uint64_t limit)
stat64_set(&mig_stats.rate_limit_max, limit / XFER_LIMIT_RATIO);
}
void migration_rate_reset(QEMUFile *f)
void migration_rate_reset(void)
{
stat64_set(&mig_stats.rate_limit_start, migration_transferred_bytes(f));
stat64_set(&mig_stats.rate_limit_start, migration_transferred_bytes());
}
uint64_t migration_transferred_bytes(QEMUFile *f)
uint64_t migration_transferred_bytes(void)
{
uint64_t multifd = stat64_get(&mig_stats.multifd_bytes);
uint64_t rdma = stat64_get(&mig_stats.rdma_bytes);
uint64_t qemu_file = qemu_file_transferred(f);
uint64_t qemu_file = stat64_get(&mig_stats.qemu_file_transferred);
trace_migration_transferred_bytes(qemu_file, multifd, rdma);
return qemu_file + multifd + rdma;

View file

@ -81,6 +81,10 @@ typedef struct {
* Number of bytes sent during precopy stage.
*/
Stat64 precopy_bytes;
/*
* Number of bytes transferred with QEMUFile.
*/
Stat64 qemu_file_transferred;
/*
* Amount of transferred data at the start of current cycle.
*/
@ -93,10 +97,6 @@ typedef struct {
* Number of bytes sent through RDMA.
*/
Stat64 rdma_bytes;
/*
* Total number of bytes transferred.
*/
Stat64 transferred;
/*
* Number of pages transferred that were full of zeros.
*/
@ -116,10 +116,8 @@ uint64_t migration_rate_get(void);
* migration_rate_reset: Reset the rate limit counter.
*
* This is called when we know we start a new transfer cycle.
*
* @f: QEMUFile used for main migration channel
*/
void migration_rate_reset(QEMUFile *f);
void migration_rate_reset(void);
/**
* migration_rate_set: Set the maximum amount that can be transferred.
@ -133,11 +131,9 @@ void migration_rate_set(uint64_t new_rate);
/**
* migration_transferred_bytes: Return number of bytes transferred
*
* @f: QEMUFile used for main migration channel
*
* Returns how many bytes have we transferred since the beginning of
* the migration. It accounts for bytes sent through any migration
* channel, multifd, qemu_file, rdma, ....
*/
uint64_t migration_transferred_bytes(QEMUFile *f);
uint64_t migration_transferred_bytes(void);
#endif

View file

@ -305,12 +305,7 @@ static int migrate_send_rp_message(MigrationIncomingState *mis,
qemu_put_be16(mis->to_src_file, (unsigned int)message_type);
qemu_put_be16(mis->to_src_file, len);
qemu_put_buffer(mis->to_src_file, data, len);
qemu_fflush(mis->to_src_file);
/* It's possible that qemu file got error during sending */
ret = qemu_file_get_error(mis->to_src_file);
return ret;
return qemu_fflush(mis->to_src_file);
}
/* Request one page from the source VM at the given start address.
@ -942,7 +937,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
size_t page_size = qemu_target_page_size();
info->ram = g_malloc0(sizeof(*info->ram));
info->ram->transferred = stat64_get(&mig_stats.transferred);
info->ram->transferred = migration_transferred_bytes();
info->ram->total = ram_bytes_total();
info->ram->duplicate = stat64_get(&mig_stats.zero_pages);
/* legacy value. It is not used anymore */
@ -1620,6 +1615,16 @@ static bool migrate_prepare(MigrationState *s, bool blk, bool blk_inc,
{
Error *local_err = NULL;
if (blk_inc) {
warn_report("parameter 'inc' is deprecated;"
" use blockdev-mirror with NBD instead");
}
if (blk) {
warn_report("parameter 'blk' is deprecated;"
" use blockdev-mirror with NBD instead");
}
if (resume) {
if (s->state != MIGRATION_STATUS_POSTCOPY_PAUSED) {
error_setg(errp, "Cannot resume if there is no "
@ -2694,7 +2699,7 @@ static MigThrError migration_detect_error(MigrationState *s)
static void migration_calculate_complete(MigrationState *s)
{
uint64_t bytes = migration_transferred_bytes(s->to_dst_file);
uint64_t bytes = migration_transferred_bytes();
int64_t end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
int64_t transfer_time;
@ -2720,7 +2725,7 @@ static void update_iteration_initial_status(MigrationState *s)
* wrong speed calculation.
*/
s->iteration_start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
s->iteration_initial_bytes = migration_transferred_bytes(s->to_dst_file);
s->iteration_initial_bytes = migration_transferred_bytes();
s->iteration_initial_pages = ram_get_total_transferred_pages();
}
@ -2739,7 +2744,7 @@ static void migration_update_counters(MigrationState *s,
}
switchover_bw = migrate_avail_switchover_bandwidth();
current_bytes = migration_transferred_bytes(s->to_dst_file);
current_bytes = migration_transferred_bytes();
transferred = current_bytes - s->iteration_initial_bytes;
time_spent = current_time - s->iteration_start_time;
bandwidth = (double)transferred / time_spent;
@ -2775,7 +2780,7 @@ static void migration_update_counters(MigrationState *s,
stat64_get(&mig_stats.dirty_bytes_last_sync) / expected_bw_per_ms;
}
migration_rate_reset(s->to_dst_file);
migration_rate_reset();
update_iteration_initial_status(s);

View file

@ -188,7 +188,6 @@ static int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp)
return -1;
}
stat64_add(&mig_stats.multifd_bytes, size);
stat64_add(&mig_stats.transferred, size);
return 0;
}
@ -733,8 +732,6 @@ static void *multifd_send_thread(void *opaque)
stat64_add(&mig_stats.multifd_bytes,
p->next_packet_size + p->packet_len);
stat64_add(&mig_stats.transferred,
p->next_packet_size + p->packet_len);
p->next_packet_size = 0;
qemu_mutex_lock(&p->mutex);
p->pending_job--;

View file

@ -12,6 +12,7 @@
*/
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "exec/target_page.h"
#include "qapi/clone-visitor.h"
#include "qapi/error.h"
@ -473,10 +474,19 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
if (new_caps[MIGRATION_CAPABILITY_BLOCK]) {
error_setg(errp, "QEMU compiled without old-style (blk/-b, inc/-i) "
"block migration");
error_append_hint(errp, "Use drive_mirror+NBD instead.\n");
error_append_hint(errp, "Use blockdev-mirror with NBD instead.\n");
return false;
}
#endif
if (new_caps[MIGRATION_CAPABILITY_BLOCK]) {
warn_report("block migration is deprecated;"
" use blockdev-mirror with NBD instead");
}
if (new_caps[MIGRATION_CAPABILITY_COMPRESS]) {
warn_report("old compression method is deprecated;"
" use multifd compression methods instead");
}
#ifndef CONFIG_REPLICATION
if (new_caps[MIGRATION_CAPABILITY_X_COLO]) {
@ -618,6 +628,20 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
}
}
if (new_caps[MIGRATION_CAPABILITY_MULTIFD]) {
if (new_caps[MIGRATION_CAPABILITY_XBZRLE]) {
error_setg(errp, "Multifd is not compatible with xbzrle");
return false;
}
}
if (new_caps[MIGRATION_CAPABILITY_COMPRESS]) {
if (new_caps[MIGRATION_CAPABILITY_XBZRLE]) {
error_setg(errp, "Compression is not compatible with xbzrle");
return false;
}
}
return true;
}
@ -1316,18 +1340,26 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
/* TODO use QAPI_CLONE() instead of duplicating it inline */
if (params->has_compress_level) {
warn_report("old compression is deprecated;"
" use multifd compression methods instead");
s->parameters.compress_level = params->compress_level;
}
if (params->has_compress_threads) {
warn_report("old compression is deprecated;"
" use multifd compression methods instead");
s->parameters.compress_threads = params->compress_threads;
}
if (params->has_compress_wait_thread) {
warn_report("old compression is deprecated;"
" use multifd compression methods instead");
s->parameters.compress_wait_thread = params->compress_wait_thread;
}
if (params->has_decompress_threads) {
warn_report("old compression is deprecated;"
" use multifd compression methods instead");
s->parameters.decompress_threads = params->decompress_threads;
}
@ -1386,6 +1418,8 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
}
if (params->has_block_incremental) {
warn_report("block migration is deprecated;"
" use blockdev-mirror with NBD instead");
s->parameters.block_incremental = params->block_incremental;
}
if (params->has_multifd_channels) {

View file

@ -41,9 +41,6 @@ struct QEMUFile {
QIOChannel *ioc;
bool is_writable;
/* The sum of bytes transferred on the wire */
uint64_t total_transferred;
int buf_index;
int buf_size; /* 0 when writing */
uint8_t buf[IO_BUF_SIZE];
@ -207,7 +204,7 @@ void qemu_file_set_error_obj(QEMUFile *f, int ret, Error *err)
*/
int qemu_file_get_error(QEMUFile *f)
{
return qemu_file_get_error_obj(f, NULL);
return f->last_error;
}
/*
@ -265,14 +262,14 @@ static void qemu_iovec_release_ram(QEMUFile *f)
* This will flush all pending data. If data was only partially flushed, it
* will set an error state.
*/
void qemu_fflush(QEMUFile *f)
int qemu_fflush(QEMUFile *f)
{
if (!qemu_file_is_writable(f)) {
return;
return f->last_error;
}
if (qemu_file_get_error(f)) {
return;
if (f->last_error) {
return f->last_error;
}
if (f->iovcnt > 0) {
Error *local_error = NULL;
@ -282,7 +279,7 @@ void qemu_fflush(QEMUFile *f)
qemu_file_set_error_obj(f, -EIO, local_error);
} else {
uint64_t size = iov_size(f->iov, f->iovcnt);
f->total_transferred += size;
stat64_add(&mig_stats.qemu_file_transferred, size);
}
qemu_iovec_release_ram(f);
@ -290,6 +287,7 @@ void qemu_fflush(QEMUFile *f)
f->buf_index = 0;
f->iovcnt = 0;
return f->last_error;
}
/*
@ -337,7 +335,6 @@ static ssize_t coroutine_mixed_fn qemu_fill_buffer(QEMUFile *f)
if (len > 0) {
f->buf_size += len;
f->total_transferred += len;
} else if (len == 0) {
qemu_file_set_error_obj(f, -EIO, local_error);
} else {
@ -357,22 +354,12 @@ static ssize_t coroutine_mixed_fn qemu_fill_buffer(QEMUFile *f)
*/
int qemu_fclose(QEMUFile *f)
{
int ret, ret2;
qemu_fflush(f);
ret = qemu_file_get_error(f);
ret2 = qio_channel_close(f->ioc, NULL);
int ret = qemu_fflush(f);
int ret2 = qio_channel_close(f->ioc, NULL);
if (ret >= 0) {
ret = ret2;
}
g_clear_pointer(&f->ioc, object_unref);
/* If any error was spotted before closing, we should report it
* instead of the close() return value.
*/
if (f->last_error) {
ret = f->last_error;
}
error_free(f->last_error_obj);
g_free(f);
trace_qemu_file_fclose();
@ -622,11 +609,13 @@ int coroutine_mixed_fn qemu_get_byte(QEMUFile *f)
return result;
}
uint64_t qemu_file_transferred_noflush(QEMUFile *f)
uint64_t qemu_file_transferred(QEMUFile *f)
{
uint64_t ret = f->total_transferred;
uint64_t ret = stat64_get(&mig_stats.qemu_file_transferred);
int i;
g_assert(qemu_file_is_writable(f));
for (i = 0; i < f->iovcnt; i++) {
ret += f->iov[i].iov_len;
}
@ -634,12 +623,6 @@ uint64_t qemu_file_transferred_noflush(QEMUFile *f)
return ret;
}
uint64_t qemu_file_transferred(QEMUFile *f)
{
qemu_fflush(f);
return f->total_transferred;
}
void qemu_put_be16(QEMUFile *f, unsigned int v)
{
qemu_put_byte(f, v >> 8);

View file

@ -36,31 +36,12 @@ int qemu_fclose(QEMUFile *f);
/*
* qemu_file_transferred:
*
* Report the total number of bytes transferred with
* this file.
*
* For writable files, any pending buffers will be
* flushed, so the reported value will be equal to
* the number of bytes transferred on the wire.
*
* For readable files, the reported value will be
* equal to the number of bytes transferred on the
* wire.
*
* Returns: the total bytes transferred
*/
uint64_t qemu_file_transferred(QEMUFile *f);
/*
* qemu_file_transferred_noflush:
*
* As qemu_file_transferred except for writable files, where no flush
* is performed and the reported amount will include the size of any
* queued buffers, on top of the amount actually transferred.
* No flush is performed and the reported amount will include the size
* of any queued buffers, on top of the amount actually transferred.
*
* Returns: the total bytes transferred and queued
*/
uint64_t qemu_file_transferred_noflush(QEMUFile *f);
uint64_t qemu_file_transferred(QEMUFile *f);
/*
* put_buffer without copying the buffer.
@ -90,7 +71,7 @@ void qemu_file_set_error_obj(QEMUFile *f, int ret, Error *err);
void qemu_file_set_error(QEMUFile *f, int ret);
int qemu_file_shutdown(QEMUFile *f);
QEMUFile *qemu_file_get_return_path(QEMUFile *f);
void qemu_fflush(QEMUFile *f);
int qemu_fflush(QEMUFile *f);
void qemu_file_set_blocking(QEMUFile *f, bool block);
int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);

View file

@ -41,7 +41,20 @@
#include "ram.h"
#include "migration-stats.h"
CompressionStats compression_counters;
static struct {
int64_t pages;
int64_t busy;
double busy_rate;
int64_t compressed_size;
double compression_rate;
/* compression statistics since the beginning of the period */
/* amount of count that no free thread to compress data */
uint64_t compress_thread_busy_prev;
/* amount bytes after compression */
uint64_t compressed_size_prev;
/* amount of compressed pages */
uint64_t compress_pages_prev;
} compression_counters;
static CompressParam *comp_param;
static QemuThread *compress_threads;
@ -228,10 +241,14 @@ static inline void compress_reset_result(CompressParam *param)
param->offset = 0;
}
void flush_compressed_data(int (send_queued_data(CompressParam *)))
void compress_flush_data(void)
{
int thread_count = migrate_compress_threads();
if (!migrate_compress()) {
return;
}
qemu_mutex_lock(&comp_done_lock);
for (int i = 0; i < thread_count; i++) {
while (!comp_param[i].done) {
@ -244,7 +261,7 @@ void flush_compressed_data(int (send_queued_data(CompressParam *)))
qemu_mutex_lock(&comp_param[i].mutex);
if (!comp_param[i].quit) {
CompressParam *param = &comp_param[i];
send_queued_data(param);
compress_send_queued_data(param);
assert(qemu_file_buffer_empty(param->file));
compress_reset_result(param);
}
@ -260,43 +277,47 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
param->trigger = true;
}
int compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset,
int (send_queued_data(CompressParam *)))
/*
* Return true when it compresses a page
*/
bool compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset,
int (send_queued_data(CompressParam *)))
{
int thread_count, pages = -1;
int thread_count;
bool wait = migrate_compress_wait_thread();
thread_count = migrate_compress_threads();
qemu_mutex_lock(&comp_done_lock);
retry:
for (int i = 0; i < thread_count; i++) {
if (comp_param[i].done) {
CompressParam *param = &comp_param[i];
qemu_mutex_lock(&param->mutex);
param->done = false;
send_queued_data(param);
assert(qemu_file_buffer_empty(param->file));
compress_reset_result(param);
set_compress_params(param, block, offset);
qemu_cond_signal(&param->cond);
qemu_mutex_unlock(&param->mutex);
pages = 1;
break;
while (true) {
for (int i = 0; i < thread_count; i++) {
if (comp_param[i].done) {
CompressParam *param = &comp_param[i];
qemu_mutex_lock(&param->mutex);
param->done = false;
send_queued_data(param);
assert(qemu_file_buffer_empty(param->file));
compress_reset_result(param);
set_compress_params(param, block, offset);
qemu_cond_signal(&param->cond);
qemu_mutex_unlock(&param->mutex);
qemu_mutex_unlock(&comp_done_lock);
return true;
}
}
}
/*
* wait for the free thread if the user specifies 'compress-wait-thread',
* otherwise we will post the page out in the main thread as normal page.
*/
if (pages < 0 && wait) {
if (!wait) {
qemu_mutex_unlock(&comp_done_lock);
compression_counters.busy++;
return false;
}
/*
* wait for a free thread if the user specifies
* 'compress-wait-thread', otherwise we will post the page out
* in the main thread as normal page.
*/
qemu_cond_wait(&comp_done_cond, &comp_done_lock);
goto retry;
}
qemu_mutex_unlock(&comp_done_lock);
return pages;
}
/* return the size after decompression, or negative value on error */
@ -495,7 +516,7 @@ void populate_compress(MigrationInfo *info)
info->compression->compression_rate = compression_counters.compression_rate;
}
uint64_t ram_compressed_pages(void)
uint64_t compress_ram_pages(void)
{
return compression_counters.pages;
}
@ -514,3 +535,30 @@ void update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
compression_counters.pages++;
}
void compress_update_rates(uint64_t page_count)
{
if (!migrate_compress()) {
return;
}
compression_counters.busy_rate = (double)(compression_counters.busy -
compression_counters.compress_thread_busy_prev) / page_count;
compression_counters.compress_thread_busy_prev =
compression_counters.busy;
double compressed_size = compression_counters.compressed_size -
compression_counters.compressed_size_prev;
if (compressed_size) {
double uncompressed_size = (compression_counters.pages -
compression_counters.compress_pages_prev) *
qemu_target_page_size();
/* Compression-Ratio = Uncompressed-size / Compressed-size */
compression_counters.compression_rate =
uncompressed_size / compressed_size;
compression_counters.compress_pages_prev =
compression_counters.pages;
compression_counters.compressed_size_prev =
compression_counters.compressed_size;
}
}

View file

@ -59,9 +59,8 @@ typedef struct CompressParam CompressParam;
void compress_threads_save_cleanup(void);
int compress_threads_save_setup(void);
void flush_compressed_data(int (send_queued_data(CompressParam *)));
int compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset,
int (send_queued_data(CompressParam *)));
bool compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset,
int (send_queued_data(CompressParam *)));
int wait_for_decompress_done(void);
void compress_threads_load_cleanup(void);
@ -69,7 +68,10 @@ int compress_threads_load_setup(QEMUFile *f);
void decompress_data_with_multi_threads(QEMUFile *f, void *host, int len);
void populate_compress(MigrationInfo *info);
uint64_t ram_compressed_pages(void);
uint64_t compress_ram_pages(void);
void update_compress_thread_counts(const CompressParam *param, int bytes_xmit);
void compress_update_rates(uint64_t page_count);
int compress_send_queued_data(CompressParam *param);
void compress_flush_data(void);
#endif

View file

@ -305,17 +305,15 @@ int64_t ramblock_recv_bitmap_send(QEMUFile *file,
qemu_put_be64(file, size);
qemu_put_buffer(file, (const uint8_t *)le_bitmap, size);
g_free(le_bitmap);
/*
* Mark as an end, in case the middle part is screwed up due to
* some "mysterious" reason.
*/
qemu_put_be64(file, RAMBLOCK_RECV_BITMAP_ENDING);
qemu_fflush(file);
g_free(le_bitmap);
if (qemu_file_get_error(file)) {
return qemu_file_get_error(file);
int ret = qemu_fflush(file);
if (ret) {
return ret;
}
return size + sizeof(size);
@ -369,13 +367,6 @@ struct RAMState {
bool xbzrle_started;
/* Are we on the last stage of migration */
bool last_stage;
/* compression statistics since the beginning of the period */
/* amount of count that no free thread to compress data */
uint64_t compress_thread_busy_prev;
/* amount bytes after compression */
uint64_t compressed_size_prev;
/* amount of compressed pages */
uint64_t compress_pages_prev;
/* total handled target pages at the beginning of period */
uint64_t target_page_count_prev;
@ -455,7 +446,6 @@ void ram_transferred_add(uint64_t bytes)
} else {
stat64_add(&mig_stats.downtime_bytes, bytes);
}
stat64_add(&mig_stats.transferred, bytes);
}
struct MigrationOps {
@ -564,7 +554,7 @@ void mig_throttle_counter_reset(void)
rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
rs->num_dirty_pages_period = 0;
rs->bytes_xfer_prev = stat64_get(&mig_stats.transferred);
rs->bytes_xfer_prev = migration_transferred_bytes();
}
/**
@ -939,13 +929,12 @@ uint64_t ram_get_total_transferred_pages(void)
{
return stat64_get(&mig_stats.normal_pages) +
stat64_get(&mig_stats.zero_pages) +
ram_compressed_pages() + xbzrle_counters.pages;
compress_ram_pages() + xbzrle_counters.pages;
}
static void migration_update_rates(RAMState *rs, int64_t end_time)
{
uint64_t page_count = rs->target_page_count - rs->target_page_count_prev;
double compressed_size;
/* calculate period counters */
stat64_set(&mig_stats.dirty_pages_rate,
@ -973,26 +962,7 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
rs->xbzrle_pages_prev = xbzrle_counters.pages;
rs->xbzrle_bytes_prev = xbzrle_counters.bytes;
}
if (migrate_compress()) {
compression_counters.busy_rate = (double)(compression_counters.busy -
rs->compress_thread_busy_prev) / page_count;
rs->compress_thread_busy_prev = compression_counters.busy;
compressed_size = compression_counters.compressed_size -
rs->compressed_size_prev;
if (compressed_size) {
double uncompressed_size = (compression_counters.pages -
rs->compress_pages_prev) * TARGET_PAGE_SIZE;
/* Compression-Ratio = Uncompressed-size / Compressed-size */
compression_counters.compression_rate =
uncompressed_size / compressed_size;
rs->compress_pages_prev = compression_counters.pages;
rs->compressed_size_prev = compression_counters.compressed_size;
}
}
compress_update_rates(page_count);
}
/*
@ -1030,7 +1000,7 @@ static void migration_trigger_throttle(RAMState *rs)
{
uint64_t threshold = migrate_throttle_trigger_threshold();
uint64_t bytes_xfer_period =
stat64_get(&mig_stats.transferred) - rs->bytes_xfer_prev;
migration_transferred_bytes() - rs->bytes_xfer_prev;
uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
@ -1100,7 +1070,7 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage)
/* reset period counters */
rs->time_last_bitmap_sync = end_time;
rs->num_dirty_pages_period = 0;
rs->bytes_xfer_prev = stat64_get(&mig_stats.transferred);
rs->bytes_xfer_prev = migration_transferred_bytes();
}
if (migrate_events()) {
uint64_t generation = stat64_get(&mig_stats.dirty_sync_count);
@ -1291,9 +1261,7 @@ static int ram_save_multifd_page(QEMUFile *file, RAMBlock *block,
return 1;
}
static bool save_page_use_compression(RAMState *rs);
static int send_queued_data(CompressParam *param)
int compress_send_queued_data(CompressParam *param)
{
PageSearchStatus *pss = &ram_state->pss[RAM_CHANNEL_PRECOPY];
MigrationState *ms = migrate_get_current();
@ -1329,15 +1297,6 @@ static int send_queued_data(CompressParam *param)
return len;
}
static void ram_flush_compressed_data(RAMState *rs)
{
if (!save_page_use_compression(rs)) {
return;
}
flush_compressed_data(send_queued_data);
}
#define PAGE_ALL_CLEAN 0
#define PAGE_TRY_AGAIN 1
#define PAGE_DIRTY_FOUND 2
@ -1393,7 +1352,7 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
* Also If xbzrle is on, stop using the data compression at this
* point. In theory, xbzrle can do better than compression.
*/
ram_flush_compressed_data(rs);
compress_flush_data();
/* Hit the end of the list */
pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
@ -2042,24 +2001,6 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
return 0;
}
static bool save_page_use_compression(RAMState *rs)
{
if (!migrate_compress()) {
return false;
}
/*
* If xbzrle is enabled (e.g., after first round of migration), stop
* using the data compression. In theory, xbzrle can do better than
* compression.
*/
if (rs->xbzrle_started) {
return false;
}
return true;
}
/*
* try to compress the page before posting it out, return true if the page
* has been properly handled by compression, otherwise needs other
@ -2068,7 +2009,7 @@ static bool save_page_use_compression(RAMState *rs)
static bool save_compress_page(RAMState *rs, PageSearchStatus *pss,
ram_addr_t offset)
{
if (!save_page_use_compression(rs)) {
if (!migrate_compress()) {
return false;
}
@ -2083,17 +2024,12 @@ static bool save_compress_page(RAMState *rs, PageSearchStatus *pss,
* much CPU resource.
*/
if (pss->block != pss->last_sent_block) {
ram_flush_compressed_data(rs);
compress_flush_data();
return false;
}
if (compress_page_with_multi_thread(pss->block, offset,
send_queued_data) > 0) {
return true;
}
compression_counters.busy++;
return false;
return compress_page_with_multi_thread(pss->block, offset,
compress_send_queued_data);
}
/**
@ -3034,11 +2970,13 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
ret = rdma_registration_start(f, RAM_CONTROL_SETUP);
if (ret < 0) {
qemu_file_set_error(f, ret);
return ret;
}
ret = rdma_registration_stop(f, RAM_CONTROL_SETUP);
if (ret < 0) {
qemu_file_set_error(f, ret);
return ret;
}
migration_ops = g_malloc0(sizeof(MigrationOps));
@ -3056,9 +2994,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
return 0;
return qemu_fflush(f);
}
/**
@ -3104,6 +3040,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
ret = rdma_registration_start(f, RAM_CONTROL_ROUND);
if (ret < 0) {
qemu_file_set_error(f, ret);
goto out;
}
t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
@ -3135,7 +3072,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
* page is sent in one chunk.
*/
if (migrate_postcopy_ram()) {
ram_flush_compressed_data(rs);
compress_flush_data();
}
/*
@ -3177,10 +3114,8 @@ out:
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
ram_transferred_add(8);
ret = qemu_file_get_error(f);
ret = qemu_fflush(f);
}
if (ret < 0) {
return ret;
@ -3215,6 +3150,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
ret = rdma_registration_start(f, RAM_CONTROL_FINISH);
if (ret < 0) {
qemu_file_set_error(f, ret);
return ret;
}
/* try transferring iterative blocks of memory */
@ -3230,24 +3166,21 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
break;
}
if (pages < 0) {
ret = pages;
break;
qemu_mutex_unlock(&rs->bitmap_mutex);
return pages;
}
}
qemu_mutex_unlock(&rs->bitmap_mutex);
ram_flush_compressed_data(rs);
compress_flush_data();
int ret = rdma_registration_stop(f, RAM_CONTROL_FINISH);
ret = rdma_registration_stop(f, RAM_CONTROL_FINISH);
if (ret < 0) {
qemu_file_set_error(f, ret);
return ret;
}
}
if (ret < 0) {
return ret;
}
ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
if (ret < 0) {
return ret;
@ -3257,9 +3190,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
return 0;
return qemu_fflush(f);
}
static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
@ -3446,7 +3377,7 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
}
/**
* ram_handle_compressed: handle the zero page case
* ram_handle_zero: handle the zero page case
*
* If a page (or a whole RDMA chunk) has been
* determined to be zero, then zap it.
@ -3455,10 +3386,10 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
* @ch: what the page is filled from. We only support zero
* @size: size of the zero page
*/
void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
void ram_handle_zero(void *host, uint64_t size)
{
if (ch != 0 || !buffer_is_zero(host, size)) {
memset(host, ch, size);
if (!buffer_is_zero(host, size)) {
memset(host, 0, size);
}
}
@ -3715,16 +3646,18 @@ int ram_load_postcopy(QEMUFile *f, int channel)
switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
case RAM_SAVE_FLAG_ZERO:
ch = qemu_get_byte(f);
if (ch != 0) {
error_report("Found a zero page with value %d", ch);
ret = -EINVAL;
break;
}
/*
* Can skip to set page_buffer when
* this is a zero page and (block->page_size == TARGET_PAGE_SIZE).
*/
if (ch || !matches_target_page_size) {
if (!matches_target_page_size) {
memset(page_buffer, ch, TARGET_PAGE_SIZE);
}
if (ch) {
tmp_page->all_zero = false;
}
break;
case RAM_SAVE_FLAG_PAGE:
@ -4030,7 +3963,12 @@ static int ram_load_precopy(QEMUFile *f)
case RAM_SAVE_FLAG_ZERO:
ch = qemu_get_byte(f);
ram_handle_compressed(host, ch, TARGET_PAGE_SIZE);
if (ch != 0) {
error_report("Found a zero page with value %d", ch);
ret = -EINVAL;
break;
}
ram_handle_zero(host, TARGET_PAGE_SIZE);
break;
case RAM_SAVE_FLAG_PAGE:

View file

@ -34,7 +34,6 @@
#include "io/channel.h"
extern XBZRLECacheStats xbzrle_counters;
extern CompressionStats compression_counters;
/* Should be holding either ram_list.mutex, or the RCU lock. */
#define RAMBLOCK_FOREACH_NOT_IGNORED(block) \
@ -60,7 +59,7 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length);
int ram_postcopy_incoming_init(MigrationIncomingState *mis);
int ram_load_postcopy(QEMUFile *f, int channel);
void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
void ram_handle_zero(void *host, uint64_t size);
void ram_transferred_add(uint64_t bytes);
void ram_release_page(const char *rbname, uint64_t offset);

View file

@ -3592,8 +3592,12 @@ int rdma_registration_handle(QEMUFile *f)
host_addr = block->local_host_addr +
(comp->offset - block->offset);
ram_handle_compressed(host_addr, comp->value, comp->length);
if (comp->value) {
error_report("rdma: Zero page with non-zero (%d) value",
comp->value);
goto err;
}
ram_handle_zero(host_addr, comp->length);
break;
case RDMA_CONTROL_REGISTER_FINISHED:
@ -3849,9 +3853,7 @@ int rdma_registration_start(QEMUFile *f, uint64_t flags)
trace_rdma_registration_start(flags);
qemu_put_be64(f, RAM_SAVE_FLAG_HOOK);
qemu_fflush(f);
return 0;
return qemu_fflush(f);
}
/*

View file

@ -927,9 +927,9 @@ static int vmstate_load(QEMUFile *f, SaveStateEntry *se)
static void vmstate_save_old_style(QEMUFile *f, SaveStateEntry *se,
JSONWriter *vmdesc)
{
uint64_t old_offset = qemu_file_transferred_noflush(f);
uint64_t old_offset = qemu_file_transferred(f);
se->ops->save_state(f, se->opaque);
uint64_t size = qemu_file_transferred_noflush(f) - old_offset;
uint64_t size = qemu_file_transferred(f) - old_offset;
if (vmdesc) {
json_writer_int64(vmdesc, "size", size);
@ -985,7 +985,7 @@ static int vmstate_save(QEMUFile *f, SaveStateEntry *se, JSONWriter *vmdesc)
if ((!se->ops || !se->ops->save_state) && !se->vmsd) {
return 0;
}
if (se->vmsd && !vmstate_save_needed(se->vmsd, se->opaque)) {
if (se->vmsd && !vmstate_section_needed(se->vmsd, se->opaque)) {
trace_savevm_section_skip(se->idstr, se->section_id);
return 0;
}
@ -1583,8 +1583,7 @@ int qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only,
}
flush:
qemu_fflush(f);
return 0;
return qemu_fflush(f);
}
/* Give an estimate of the amount left to be transferred,
@ -3053,7 +3052,7 @@ bool save_snapshot(const char *name, bool overwrite, const char *vmstate,
goto the_end;
}
ret = qemu_savevm_state(f, errp);
vm_state_size = qemu_file_transferred_noflush(f);
vm_state_size = qemu_file_transferred(f);
ret2 = qemu_fclose(f);
if (ret < 0) {
goto the_end;

View file

@ -179,6 +179,7 @@ int vmstate_load_state(QEMUFile *f, const VMStateDescription *vmsd,
assert(field->flags == VMS_END);
ret = vmstate_subsection_load(f, vmsd, opaque);
if (ret != 0) {
qemu_file_set_error(f, ret);
return ret;
}
if (vmsd->post_load) {
@ -324,7 +325,7 @@ static void vmsd_desc_field_end(const VMStateDescription *vmsd,
}
bool vmstate_save_needed(const VMStateDescription *vmsd, void *opaque)
bool vmstate_section_needed(const VMStateDescription *vmsd, void *opaque)
{
if (vmsd->needed && !vmsd->needed(opaque)) {
/* optional section not needed */
@@ -386,7 +387,7 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
void *curr_elem = first_elem + size * i;
vmsd_desc_field_start(vmsd, vmdesc_loop, field, i, n_elems);
old_offset = qemu_file_transferred_noflush(f);
old_offset = qemu_file_transferred(f);
if (field->flags & VMS_ARRAY_OF_POINTER) {
assert(curr_elem);
curr_elem = *(void **)curr_elem;
@@ -416,7 +417,7 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
return ret;
}
written_bytes = qemu_file_transferred_noflush(f) - old_offset;
written_bytes = qemu_file_transferred(f) - old_offset;
vmsd_desc_field_end(vmsd, vmdesc_loop, field, written_bytes, i);
/* Compressed arrays only care about the first element */
@@ -522,7 +523,7 @@ static int vmstate_subsection_save(QEMUFile *f, const VMStateDescription *vmsd,
trace_vmstate_subsection_save_top(vmsd->name);
while (sub && *sub) {
if (vmstate_save_needed(*sub, opaque)) {
if (vmstate_section_needed(*sub, opaque)) {
const VMStateDescription *vmsdsub = *sub;
uint8_t len;


@@ -269,11 +269,19 @@
# average memory load of the virtual CPU indirectly. Note that
# zero means guest doesn't dirty memory. (Since 8.1)
#
# Features:
#
# @deprecated: Member @disk is deprecated because block migration is.
# Member @compression is deprecated because it is unreliable and
# untested. It is recommended to use multifd migration, which
# offers an alternative compression implementation that is
# reliable and tested.
#
# Since: 0.14
##
{ 'struct': 'MigrationInfo',
'data': {'*status': 'MigrationStatus', '*ram': 'MigrationStats',
'*disk': 'MigrationStats',
'*disk': { 'type': 'MigrationStats', 'features': [ 'deprecated' ] },
'*vfio': 'VfioStats',
'*xbzrle-cache': 'XBZRLECacheStats',
'*total-time': 'int',
@@ -285,7 +293,7 @@
'*blocked-reasons': ['str'],
'*postcopy-blocktime': 'uint32',
'*postcopy-vcpu-blocktime': ['uint32'],
'*compression': 'CompressionStats',
'*compression': { 'type': 'CompressionStats', 'features': [ 'deprecated' ] },
'*socket-address': ['SocketAddress'],
'*dirty-limit-throttle-time-per-round': 'uint64',
'*dirty-limit-ring-full-time': 'uint64'} }
@@ -525,16 +533,24 @@
#
# Features:
#
# @deprecated: Member @block is deprecated. Use blockdev-mirror with
# NBD instead. Member @compression is deprecated because it is
# unreliable and untested. It is recommended to use multifd
# migration, which offers an alternative compression
# implementation that is reliable and tested.
#
# @unstable: Members @x-colo and @x-ignore-shared are experimental.
#
# Since: 1.2
##
{ 'enum': 'MigrationCapability',
'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks',
'compress', 'events', 'postcopy-ram',
{ 'name': 'compress', 'features': [ 'deprecated' ] },
'events', 'postcopy-ram',
{ 'name': 'x-colo', 'features': [ 'unstable' ] },
'release-ram',
'block', 'return-path', 'pause-before-switchover', 'multifd',
{ 'name': 'block', 'features': [ 'deprecated' ] },
'return-path', 'pause-before-switchover', 'multifd',
'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
{ 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
'validate-uuid', 'background-snapshot',
@@ -835,6 +851,11 @@
#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
# blockdev-mirror with NBD instead. Members @compress-level,
# @compress-threads, @decompress-threads and @compress-wait-thread
# are deprecated because @compression is deprecated.
#
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
# are experimental.
#
@@ -843,14 +864,17 @@
{ 'enum': 'MigrationParameter',
'data': ['announce-initial', 'announce-max',
'announce-rounds', 'announce-step',
'compress-level', 'compress-threads', 'decompress-threads',
'compress-wait-thread', 'throttle-trigger-threshold',
{ 'name': 'compress-level', 'features': [ 'deprecated' ] },
{ 'name': 'compress-threads', 'features': [ 'deprecated' ] },
{ 'name': 'decompress-threads', 'features': [ 'deprecated' ] },
{ 'name': 'compress-wait-thread', 'features': [ 'deprecated' ] },
'throttle-trigger-threshold',
'cpu-throttle-initial', 'cpu-throttle-increment',
'cpu-throttle-tailslow',
'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth',
'avail-switchover-bandwidth', 'downtime-limit',
{ 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] },
'block-incremental',
{ 'name': 'block-incremental', 'features': [ 'deprecated' ] },
'multifd-channels',
'xbzrle-cache-size', 'max-postcopy-bandwidth',
'max-cpu-throttle', 'multifd-compression',
@@ -1011,6 +1035,11 @@
#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
# blockdev-mirror with NBD instead. Members @compress-level,
# @compress-threads, @decompress-threads and @compress-wait-thread
# are deprecated because @compression is deprecated.
#
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
# are experimental.
#
@@ -1024,10 +1053,14 @@
'*announce-max': 'size',
'*announce-rounds': 'size',
'*announce-step': 'size',
'*compress-level': 'uint8',
'*compress-threads': 'uint8',
'*compress-wait-thread': 'bool',
'*decompress-threads': 'uint8',
'*compress-level': { 'type': 'uint8',
'features': [ 'deprecated' ] },
'*compress-threads': { 'type': 'uint8',
'features': [ 'deprecated' ] },
'*compress-wait-thread': { 'type': 'bool',
'features': [ 'deprecated' ] },
'*decompress-threads': { 'type': 'uint8',
'features': [ 'deprecated' ] },
'*throttle-trigger-threshold': 'uint8',
'*cpu-throttle-initial': 'uint8',
'*cpu-throttle-increment': 'uint8',
@@ -1040,7 +1073,8 @@
'*downtime-limit': 'uint64',
'*x-checkpoint-delay': { 'type': 'uint32',
'features': [ 'unstable' ] },
'*block-incremental': 'bool',
'*block-incremental': { 'type': 'bool',
'features': [ 'deprecated' ] },
'*multifd-channels': 'uint8',
'*xbzrle-cache-size': 'size',
'*max-postcopy-bandwidth': 'size',
@@ -1063,7 +1097,7 @@
# Example:
#
# -> { "execute": "migrate-set-parameters" ,
# "arguments": { "compress-level": 1 } }
# "arguments": { "multifd-channels": 5 } }
# <- { "return": {} }
##
{ 'command': 'migrate-set-parameters', 'boxed': true,
@@ -1225,6 +1259,11 @@
#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
# blockdev-mirror with NBD instead. Members @compress-level,
# @compress-threads, @decompress-threads and @compress-wait-thread
# are deprecated because @compression is deprecated.
#
# @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
# are experimental.
#
@@ -1235,10 +1274,14 @@
'*announce-max': 'size',
'*announce-rounds': 'size',
'*announce-step': 'size',
'*compress-level': 'uint8',
'*compress-threads': 'uint8',
'*compress-wait-thread': 'bool',
'*decompress-threads': 'uint8',
'*compress-level': { 'type': 'uint8',
'features': [ 'deprecated' ] },
'*compress-threads': { 'type': 'uint8',
'features': [ 'deprecated' ] },
'*compress-wait-thread': { 'type': 'bool',
'features': [ 'deprecated' ] },
'*decompress-threads': { 'type': 'uint8',
'features': [ 'deprecated' ] },
'*throttle-trigger-threshold': 'uint8',
'*cpu-throttle-initial': 'uint8',
'*cpu-throttle-increment': 'uint8',
@@ -1251,7 +1294,8 @@
'*downtime-limit': 'uint64',
'*x-checkpoint-delay': { 'type': 'uint32',
'features': [ 'unstable' ] },
'*block-incremental': 'bool',
'*block-incremental': { 'type': 'bool',
'features': [ 'deprecated' ] },
'*multifd-channels': 'uint8',
'*xbzrle-cache-size': 'size',
'*max-postcopy-bandwidth': 'size',
@@ -1277,10 +1321,8 @@
#
# -> { "execute": "query-migrate-parameters" }
# <- { "return": {
# "decompress-threads": 2,
# "multifd-channels": 2,
# "cpu-throttle-increment": 10,
# "compress-threads": 8,
# "compress-level": 1,
# "cpu-throttle-initial": 20,
# "max-bandwidth": 33554432,
# "downtime-limit": 300
@@ -1524,6 +1566,11 @@
#
# @resume: resume one paused migration, default "off". (since 3.0)
#
# Features:
#
# @deprecated: Members @inc and @blk are deprecated. Use
# blockdev-mirror with NBD instead.
#
# Returns: nothing on success
#
# Since: 0.14
@@ -1545,7 +1592,9 @@
# <- { "return": {} }
##
{ 'command': 'migrate',
'data': {'uri': 'str', '*blk': 'bool', '*inc': 'bool',
'data': {'uri': 'str',
'*blk': { 'type': 'bool', 'features': [ 'deprecated' ] },
'*inc': { 'type': 'bool', 'features': [ 'deprecated' ] },
'*detach': 'bool', '*resume': 'bool' } }
##

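Since the schema text above steers users away from the deprecated @compress capability and compress-* parameters and towards multifd, a minimal QMP sequence for that replacement path (the parameter values here are purely illustrative) would be:

    -> { "execute": "migrate-set-capabilities",
         "arguments": { "capabilities": [
             { "capability": "multifd", "state": true } ] } }
    <- { "return": {} }
    -> { "execute": "migrate-set-parameters",
         "arguments": { "multifd-channels": 4,
                        "multifd-compression": "zlib" } }
    <- { "return": {} }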

@@ -90,7 +90,7 @@ echo
reply="$(_send_qemu_cmd $src \
"{ 'execute': 'migrate',
'arguments': { 'uri': 'unix:${MIG_SOCKET}', 'blk': true } }" \
'return\|error')"
'return\|error' | _filter_migration_block_deprecated)"
echo "$reply"
if echo "$reply" | grep "compiled without old-style" > /dev/null; then
_notrun "migrate -b support not compiled in"


@@ -359,5 +359,12 @@ _filter_qcow2_compression_type_bit()
-e 's/\(incompatible_features.*\), 3\(,.*\)/\1\2/'
}
# filter warnings caused by the block migration deprecation
_filter_migration_block_deprecated()
{
gsed -e '/warning: parameter .blk. is deprecated; use blockdev-mirror with NBD instead/d' \
-e '/warning: block migration is deprecated; use blockdev-mirror with NBD instead/d'
}
# make sure this script returns success
true