Mirror of https://gitlab.com/qemu-project/qemu
Synced 2024-10-14 15:02:54 +00:00
Rename "QEMU global mutex" to "BQL" in comments and docs
The term "QEMU global mutex" is identical to the more widely used Big
QEMU Lock ("BQL"). Update the code comments and documentation to use
"BQL" instead of "QEMU global mutex".

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
Message-id: 20240102153529.486531-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This commit is contained in:
parent
a4a411fbaf
commit
0b2675c473
@@ -226,10 +226,9 @@ instruction. This could be a future optimisation.
 Emulated hardware state
 -----------------------
 
-Currently thanks to KVM work any access to IO memory is automatically
-protected by the global iothread mutex, also known as the BQL (Big
-QEMU Lock). Any IO region that doesn't use global mutex is expected to
-do its own locking.
+Currently thanks to KVM work any access to IO memory is automatically protected
+by the BQL (Big QEMU Lock). Any IO region that doesn't use the BQL is expected
+to do its own locking.
 
 However IO memory isn't the only way emulated hardware state can be
 modified. Some architectures have model specific registers that
@@ -5,7 +5,7 @@ the COPYING file in the top-level directory.
 
 This document explains the IOThread feature and how to write code that runs
-outside the QEMU global mutex.
+outside the BQL.
 
 The main loop and IOThreads
 ---------------------------
 
@@ -29,13 +29,13 @@ scalability bottleneck on hosts with many CPUs. Work can be spread across
 several IOThreads instead of just one main loop. When set up correctly this
 can improve I/O latency and reduce jitter seen by the guest.
 
-The main loop is also deeply associated with the QEMU global mutex, which is a
-scalability bottleneck in itself. vCPU threads and the main loop use the QEMU
-global mutex to serialize execution of QEMU code. This mutex is necessary
-because a lot of QEMU's code historically was not thread-safe.
+The main loop is also deeply associated with the BQL, which is a
+scalability bottleneck in itself. vCPU threads and the main loop use the BQL
+to serialize execution of QEMU code. This mutex is necessary because a lot of
+QEMU's code historically was not thread-safe.
 
 The fact that all I/O processing is done in a single main loop and that the
-QEMU global mutex is contended by all vCPU threads and the main loop explain
+BQL is contended by all vCPU threads and the main loop explain
 why it is desirable to place work into IOThreads.
 
 The experimental virtio-blk data-plane implementation has been benchmarked and
@@ -66,7 +66,7 @@ There are several old APIs that use the main loop AioContext:
 
 Since they implicitly work on the main loop they cannot be used in code that
 runs in an IOThread. They might cause a crash or deadlock if called from an
-IOThread since the QEMU global mutex is not held.
+IOThread since the BQL is not held.
 
 Instead, use the AioContext functions directly (see include/block/aio.h):
 * aio_set_fd_handler() - monitor a file descriptor
@@ -594,7 +594,7 @@ blocking the guest and other background operations.
 Coroutine safety can be hard to prove, similar to thread safety. Common
 pitfalls are:
 
-- The global mutex isn't held across ``qemu_coroutine_yield()``, so
+- The BQL isn't held across ``qemu_coroutine_yield()``, so
   operations that used to assume that they execute atomically may have
   to be more careful to protect against changes in the global state.
 
@@ -184,7 +184,7 @@ modes.
 Reading and writing requests are created by CPU thread of QEMU. Later these
 requests proceed to block layer which creates "bottom halves". Bottom
 halves consist of callback and its parameters. They are processed when
-main loop locks the global mutex. These locks are not synchronized with
+main loop locks the BQL. These locks are not synchronized with
 replaying process because main loop also processes the events that do not
 affect the virtual machine state (like user interaction with monitor).
 
@@ -84,7 +84,7 @@ apply_vq_mapping(IOThreadVirtQueueMappingList *iothread_vq_mapping_list,
     }
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
                                   VirtIOBlockDataPlane **dataplane,
                                   Error **errp)
@@ -148,7 +148,7 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     return true;
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
 {
     VirtIOBlock *vblk;
|
@ -179,7 +179,7 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
|
||||||
g_free(s);
|
g_free(s);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Context: QEMU global mutex held */
|
/* Context: BQL held */
|
||||||
int virtio_blk_data_plane_start(VirtIODevice *vdev)
|
int virtio_blk_data_plane_start(VirtIODevice *vdev)
|
||||||
{
|
{
|
||||||
VirtIOBlock *vblk = VIRTIO_BLK(vdev);
|
VirtIOBlock *vblk = VIRTIO_BLK(vdev);
|
||||||
|
@@ -310,7 +310,7 @@ static void virtio_blk_data_plane_stop_vq_bh(void *opaque)
     virtio_queue_host_notifier_read(host_notifier);
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_blk_data_plane_stop(VirtIODevice *vdev)
 {
     VirtIOBlock *vblk = VIRTIO_BLK(vdev);
@@ -1539,7 +1539,7 @@ static void virtio_blk_resize(void *opaque)
     VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
 
     /*
-     * virtio_notify_config() needs to acquire the global mutex,
+     * virtio_notify_config() needs to acquire the BQL,
      * so it can't be called from an iothread. Instead, schedule
      * it to be run in the main context BH.
      */
@@ -20,7 +20,7 @@
 #include "scsi/constants.h"
 #include "hw/virtio/virtio-bus.h"
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_scsi_dataplane_setup(VirtIOSCSI *s, Error **errp)
 {
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
@@ -93,7 +93,7 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
     }
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 int virtio_scsi_dataplane_start(VirtIODevice *vdev)
 {
     int i;
@@ -185,7 +185,7 @@ fail_guest_notifiers:
     return -ENOSYS;
 }
 
-/* Context: QEMU global mutex held */
+/* Context: BQL held */
 void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
 {
     BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
@@ -54,7 +54,7 @@ typedef struct BlockJob {
 
     /**
      * Speed that was set with @block_job_set_speed.
-     * Always modified and read under QEMU global mutex (GLOBAL_STATE_CODE).
+     * Always modified and read under the BQL (GLOBAL_STATE_CODE).
      */
    int64_t speed;
 
@@ -66,7 +66,7 @@ typedef struct BlockJob {
 
     /**
      * Block other operations when block job is running.
-     * Always modified and read under QEMU global mutex (GLOBAL_STATE_CODE).
+     * Always modified and read under the BQL (GLOBAL_STATE_CODE).
      */
     Error *blocker;
 
@@ -89,7 +89,7 @@ typedef struct BlockJob {
 
     /**
      * BlockDriverStates that are involved in this block job.
-     * Always modified and read under QEMU global mutex (GLOBAL_STATE_CODE).
+     * Always modified and read under the BQL (GLOBAL_STATE_CODE).
      */
     GSList *nodes;
 } BlockJob;
@@ -149,7 +149,7 @@ typedef void (*QIOTaskWorker)(QIOTask *task,
  * lookups) to be easily run non-blocking. Reporting the
  * results in the main thread context means that the caller
  * typically does not need to be concerned about thread
- * safety wrt the QEMU global mutex.
+ * safety wrt the BQL.
  *
  * For example, the socket_listen() method will block the caller
  * while DNS lookups take place if given a name, instead of IP
@@ -22,7 +22,7 @@
 * rather than callbacks, for operations that need to give up control while
 * waiting for events to complete.
 *
- * These functions are re-entrant and may be used outside the global mutex.
+ * These functions are re-entrant and may be used outside the BQL.
 *
 * Functions that execute in coroutine context cannot be called
 * directly from normal functions. Use @coroutine_fn to mark such
@@ -26,7 +26,7 @@
 * rather than callbacks, for operations that need to give up control while
 * waiting for events to complete.
 *
- * These functions are re-entrant and may be used outside the global mutex.
+ * These functions are re-entrant and may be used outside the BQL.
 *
 * Functions that execute in coroutine context cannot be called
 * directly from normal functions. Use @coroutine_fn to mark such
@@ -219,7 +219,7 @@ static void tap_send(void *opaque)
 
     /*
      * When the host keeps receiving more packets while tap_send() is
-     * running we can hog the QEMU global mutex. Limit the number of
+     * running we can hog the BQL. Limit the number of
      * packets that are processed per tap_send() callback to prevent
      * stalling the guest.
      */