Mirror of git://source.winehq.org/git/wine.git
Synced 2024-11-01 15:17:59 +00:00
Commit 2a44723d4d
There is no evidence that the extra overhead should matter, and this allows us to be consistent, and to potentially change timeGetTime() without having to worry about quartz.

On Windows, timeGetTime() has identical resolution to the interrupt time [i.e. the "InterruptTime" member of the shared user data, or QueryInterruptTime()]. Like those sources, it approximately measures the time since boot. However, the values are not identical; timeGetTime() lags behind QueryInterruptTime() by anywhere from 1 to 12 ms (regardless of timer period) on my Windows 10 virtual machine. The actual lag is consistent within a process but varies between runs. I have not been able to account for this lag: it is not the suspend bias, nor is it an attempt to match the tick count more closely. In short, timeGetTime() seems to be idiosyncratic to winmm.

Since quartz has been shown to follow winmm exactly on Windows, let's follow it on Wine as well.

Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=53005
Signed-off-by: Zebediah Figura <zfigura@codeweavers.com>
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
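The lag described above can be observed directly. The following is a minimal sketch (not part of the commit) of how one might measure the offset between timeGetTime() and QueryInterruptTime() on Windows 10 or later; it assumes a Windows build environment linked against winmm.lib.

```c
/* Sketch: compare timeGetTime() against QueryInterruptTime().
 * Windows 10+ only; link with winmm.lib.
 * QueryInterruptTime() reports 100 ns units; timeGetTime() reports ms. */
#include <windows.h>
#include <realtimeapiset.h>
#include <timeapi.h>
#include <stdio.h>

int main(void)
{
    ULONGLONG interrupt_time;
    DWORD mm_time;
    int i;

    /* The lag reportedly persists regardless of the timer period,
     * but raising the resolution makes the comparison tighter. */
    timeBeginPeriod(1);

    for (i = 0; i < 5; ++i)
    {
        QueryInterruptTime(&interrupt_time);
        mm_time = timeGetTime();

        /* Convert interrupt time from 100 ns units to milliseconds. */
        printf("interrupt time %I64u ms, timeGetTime %lu ms, lag %I64d ms\n",
               interrupt_time / 10000, (unsigned long)mm_time,
               (LONGLONG)(interrupt_time / 10000) - (LONGLONG)mm_time);
        Sleep(100);
    }

    timeEndPeriod(1);
    return 0;
}
```

On a machine exhibiting the behavior described in the commit message, the printed lag should stay constant within one run of the program but differ between runs.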
tests/
acmwrapper.c
avidec.c
control_tlb.idl
dsoundrender.c
filesource.c
filtergraph.c
filtermapper.c
main.c
Makefile.in
memallocator.c
passthrough.c
quartz.rc
quartz.rgs
quartz.spec
quartz_private.h
quartz_strmif.idl
regsvr.c
systemclock.c
videorenderer.c
vmr9.c
window.c