Commit graph

5 commits

Emilio G. Cota
cb764d0665 qsp: track BQL callers explicitly
The BQL is acquired via qemu_mutex_lock_iothread(), which makes
the profiler assign the associated wait time (i.e. most of the
BQL wait time) entirely to that function. This discards the original
call site information, which is precisely what is needed to
diagnose BQL contention. Fix it by tracking the callers explicitly.
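
A minimal standalone sketch of the pattern, assuming the public name
becomes a macro that forwards __FILE__/__LINE__ to an _impl function.
Apart from the qemu_mutex_lock_iothread() name, everything here is
illustrative rather than QEMU's actual code:

#include <stdio.h>

/* Stand-in for the profiled lock path: it accounts the wait to
 * whatever file:line it is handed. */
static void bql_lock_profiled(const char *file, int line)
{
    printf("BQL wait accounted to %s:%d\n", file, line);
}

/* Before: the wrapper passes its *own* location, so every BQL wait
 * is attributed to the wrapper itself. */
static void lock_iothread_old(void)
{
    bql_lock_profiled(__FILE__, __LINE__);   /* always the same call site */
}

/* After: the public name is a macro that captures the caller's
 * location and forwards it to an _impl function. */
static void qemu_mutex_lock_iothread_impl(const char *file, int line)
{
    bql_lock_profiled(file, line);           /* caller's call site */
}
#define qemu_mutex_lock_iothread() \
    qemu_mutex_lock_iothread_impl(__FILE__, __LINE__)

int main(void)
{
    lock_iothread_old();          /* reported against the wrapper */
    qemu_mutex_lock_iothread();   /* reported against this line in main() */
    return 0;
}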

Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-08-23 18:46:25 +02:00
Emilio G. Cota
d557de4a0e qsp: support call site coalescing
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-08-23 18:46:25 +02:00
Emilio G. Cota
996e8d9a45 qsp: add qsp_reset
I first implemented this by deleting all entries in the global
hash table. But doing that safely slows down profiling, since
we'd need to introduce rcu_read_lock/unlock in the fast path.

What's implemented here instead avoids touching the thread-local
data in the global hash table. It does so by taking a snapshot of
the current state, so that subsequent reports present the delta
with respect to that snapshot.
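
A minimal standalone sketch of the snapshot/delta idea; the structures
and names (EntrySketch, qsp_reset_sketch, ...) are illustrative, not
the actual qsp data structures:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct EntrySketch {
    uint64_t n_acqs;   /* number of acquisitions               */
    uint64_t ns;       /* cumulative wait time, in nanoseconds */
} EntrySketch;

/* One live entry and its snapshot; the real code keeps these in hash
 * tables keyed per object and call site. */
static EntrySketch live;
static EntrySketch snap;

static void qsp_reset_sketch(void)
{
    /* Copy, don't clear: writers keep updating 'live' and their
     * fast path needs no extra synchronization. */
    snap = live;
}

static void qsp_report_sketch(void)
{
    /* Report the delta with respect to the last snapshot. */
    printf("acqs=%" PRIu64 " wait=%" PRIu64 " ns\n",
           live.n_acqs - snap.n_acqs, live.ns - snap.ns);
}

int main(void)
{
    live.n_acqs = 10; live.ns = 500;
    qsp_report_sketch();   /* 10 acquisitions, 500 ns */
    qsp_reset_sketch();
    live.n_acqs = 12; live.ns = 620;
    qsp_report_sketch();   /* 2 acquisitions, 120 ns since the reset */
    return 0;
}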

Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-08-23 18:46:25 +02:00
Emilio G. Cota
0a22777c71 qsp: add sort_by option to qsp_report
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-08-23 18:46:25 +02:00
Emilio G. Cota
fe9959a275 qsp: QEMU's Synchronization Profiler
The goal of this module is to profile synchronization primitives (i.e.
mutexes, recursive mutexes and condition variables) so that scalability
issues can be quickly diagnosed.

Sync primitives are profiled by QSP based on the vaddr of the object
accessed as well as the call site (file:line_nr). That means the same
object acquired from two different call sites is tracked in two separate
entries, which might be reported together or separately (see the
subsequent commit on call site coalescing).
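
A minimal sketch of such a key, with illustrative names (QSPKeySketch,
qsp_key_hash_sketch) and a placeholder hash mix rather than the hash
qsp actually uses:

#include <stdint.h>
#include <stdio.h>

typedef struct QSPKeySketch {
    const void *obj;    /* vaddr of the mutex/condvar being profiled */
    const char *file;   /* call site file, from __FILE__             */
    int line;           /* call site line, from __LINE__             */
} QSPKeySketch;

/* Mix all three fields into one hash, so the same object locked from
 * two different call sites lands in two different entries. */
static uint32_t qsp_key_hash_sketch(const QSPKeySketch *k)
{
    uint64_t h = (uint64_t)(uintptr_t)k->obj;

    h ^= (uint64_t)(uintptr_t)k->file + 0x9e3779b97f4a7c15ULL
         + (h << 6) + (h >> 2);
    h ^= (uint64_t)k->line + 0x9e3779b97f4a7c15ULL
         + (h << 6) + (h >> 2);
    return (uint32_t)(h ^ (h >> 32));
}

int main(void)
{
    int dummy_mutex;
    QSPKeySketch a = { &dummy_mutex, "cpus.c", 100 };
    QSPKeySketch b = { &dummy_mutex, "exec.c", 200 };

    /* same object, two call sites -> two distinct profiler entries */
    printf("%08x %08x\n", qsp_key_hash_sketch(&a), qsp_key_hash_sketch(&b));
    return 0;
}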

Some perf numbers:

Host: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Command: taskset -c 0 tests/atomic_add-bench -d 5 -m

- Before: 54.80 Mops/s
- After:  54.75 Mops/s

That is, a negligible slowdown due to the now indirect call to
qemu_mutex_lock. Note that using a branch instead of an indirect
call leads to a more severe slowdown (53.65 Mops/s, i.e. a 2% slowdown).
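
A minimal standalone sketch of the indirection those numbers refer to,
using pthread and illustrative names (sketch_mutex_lock, mutex_lock_func)
rather than QEMU's actual API:

#include <pthread.h>
#include <stdio.h>

typedef void (*mutex_lock_fn)(pthread_mutex_t *m, const char *file, int line);

static void mutex_lock_plain(pthread_mutex_t *m, const char *file, int line)
{
    (void)file; (void)line;          /* no profiling: ignore the call site */
    pthread_mutex_lock(m);
}

static void mutex_lock_profiled(pthread_mutex_t *m, const char *file, int line)
{
    /* the real version would time the acquisition and record it
     * under the (m, file:line) entry */
    printf("profiling lock at %s:%d\n", file, line);
    pthread_mutex_lock(m);
}

/* default: no profiling; the real fast path would read this pointer
 * atomically */
static mutex_lock_fn mutex_lock_func = mutex_lock_plain;

#define sketch_mutex_lock(m) mutex_lock_func((m), __FILE__, __LINE__)

static void enable_profiling(void)
{
    mutex_lock_func = mutex_lock_profiled;   /* e.g. when -p is passed */
}

int main(void)
{
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    sketch_mutex_lock(&m);        /* indirect call, plain lock */
    pthread_mutex_unlock(&m);

    enable_profiling();
    sketch_mutex_lock(&m);        /* now goes through the profiled variant */
    pthread_mutex_unlock(&m);
    return 0;
}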

Enabling the profiler (with -p, added in this series) is more interesting:

- No profiling: 54.75 Mops/s
- W/ profiling: 12.53 Mops/s

That is, a 4.36X slowdown.

We can break down this slowdown by removing the get_clock calls or
the entry lookup:

- No profiling:     54.75 Mops/s
- W/o get_clock:    25.37 Mops/s
- W/o entry lookup: 19.30 Mops/s
- W/ profiling:     12.53 Mops/s

Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-08-23 18:46:25 +02:00