serenity/Userland/Libraries/LibAudio/ConnectionToServer.cpp

/*
 * Copyright (c) 2018-2020, Andreas Kling <kling@serenityos.org>
 * Copyright (c) 2022, kleines Filmröllchen <filmroellchen@serenityos.org>
 *
 * SPDX-License-Identifier: BSD-2-Clause
 */
#include <AK/Atomic.h>
#include <AK/Format.h>
#include <AK/OwnPtr.h>
#include <AK/Time.h>
#include <AK/Types.h>
#include <LibAudio/ConnectionToServer.h>
#include <LibAudio/UserSampleQueue.h>
#include <LibCore/Event.h>
#include <LibThreading/Mutex.h>
#include <time.h>
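
// Audio data flows to the server through a single-producer circular queue
// (AudioQueue) that lives in shared memory: this client creates the queue,
// hands its file descriptor to AudioServer via set_buffer(), and then writes
// samples into it while the server reads them out. Pausing playback is a
// plain IPC call; the server simply stops reading until playback resumes.
// Compared to sending one anonymous buffer per enqueue, this avoids per-chunk
// file descriptor passing and keeps IPC traffic low enough for small
// real-time buffers.
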
namespace Audio {

ConnectionToServer::ConnectionToServer(NonnullOwnPtr<Core::Stream::LocalSocket> socket)
    : IPC::ConnectionToServer<AudioClientEndpoint, AudioServerEndpoint>(*this, move(socket))
    , m_buffer(make<AudioQueue>(MUST(AudioQueue::try_create())))
    , m_user_queue(make<UserSampleQueue>())
    , m_background_audio_enqueuer(Threading::Thread::construct([this]() {
        // All the background thread does is run an event loop.
        Core::EventLoop enqueuer_loop;
        m_enqueuer_loop = &enqueuer_loop;
        enqueuer_loop.exec();
        // Guard the loop pointer teardown; die() may run concurrently.
        m_enqueuer_loop_destruction.lock();
        m_enqueuer_loop = nullptr;
        m_enqueuer_loop_destruction.unlock();
        return (intptr_t) nullptr;
    }))
{
    // Connections start out paused; playback is started once audio has been enqueued.
    async_pause_playback();
    // Hand the shared queue over to the server so it can map and read it.
    set_buffer(*m_buffer);
}

ConnectionToServer::~ConnectionToServer()
{
    die();
}

void ConnectionToServer::die()
{
    // We're sometimes getting here after the other thread has already exited
    // and its event loop no longer exists.
    m_enqueuer_loop_destruction.lock();
    if (m_enqueuer_loop != nullptr) {
        m_enqueuer_loop->wake();
        m_enqueuer_loop->quit(0);
    }
    m_enqueuer_loop_destruction.unlock();
    (void)m_background_audio_enqueuer->join();
}

ErrorOr<void> ConnectionToServer::async_enqueue(FixedArray<Sample>&& samples)
{
    // Lazily start the enqueuer thread on the first enqueue.
    if (!m_background_audio_enqueuer->is_started())
        m_background_audio_enqueuer->start();
    update_good_sleep_time();
    m_user_queue->append(move(samples));
    // Wake the background thread to make sure it starts enqueuing audio.
    // wake_once() will not queue a duplicate wake event, which avoids a race
    // where the thread goes back to sleep just as we append samples and then
    // never wakes up again.
    m_enqueuer_loop->wake_once(*this, 0);
    async_start_playback();
    return {};
}
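
// A minimal usage sketch (hypothetical caller code, not part of this file),
// assuming `loader` is an Audio::Loader: non-realtime clients can feed
// decoded chunks straight into async_enqueue(), which appends them to
// m_user_queue for the background thread to drain.
//
//     auto client = TRY(Audio::ConnectionToServer::try_create());
//     auto samples = TRY(loader->get_more_samples());
//     TRY(client->async_enqueue(move(samples)));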

void ConnectionToServer::clear_client_buffer()
{
    m_user_queue->clear();
}

void ConnectionToServer::update_good_sleep_time()
{
    // Compute how long one shared-memory buffer takes to play, so the
    // blocking enqueue in custom_event() can sleep instead of busy-waiting.
    auto sample_rate = static_cast<double>(get_sample_rate());
    auto buffer_play_time_ns = 1'000'000'000.0 / (sample_rate / static_cast<double>(AUDIO_BUFFER_SIZE));
    // A factor of 1 should be good for now.
    m_good_sleep_time = Time::from_nanoseconds(static_cast<unsigned>(buffer_play_time_ns)).to_timespec();
}
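
// For example, if AUDIO_BUFFER_SIZE were 1024 samples (an illustrative value
// only) at a 44100 Hz sample rate, one buffer plays for
// 1024 / 44100 s ~= 23.2 ms, which then becomes the sleep interval used in
// custom_event() below.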

// Non-realtime audio writing loop: runs on the background enqueuer thread and
// drains m_user_queue into the shared AudioQueue in fixed-size chunks.
void ConnectionToServer::custom_event(Core::CustomEvent&)
{
    Array<Sample, AUDIO_BUFFER_SIZE> next_chunk;
    while (true) {
        if (m_user_queue->is_empty()) {
            dbgln("Reached end of provided audio data, going to sleep");
            break;
        }
        auto available_samples = min(AUDIO_BUFFER_SIZE, m_user_queue->size());
        for (size_t i = 0; i < available_samples; ++i)
            next_chunk[i] = (*m_user_queue)[i];
        m_user_queue->discard_samples(available_samples);

        // FIXME: Could we receive interrupts in a good non-IPC way instead?
auto result = m_buffer->try_blocking_enqueue(next_chunk, [this]() {
nanosleep(&m_good_sleep_time, nullptr);
});
if (result.is_error())
dbgln("Error while writing samples to shared buffer: {}", result.error());
}
}
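
// Non-blocking single-buffer enqueue for real-time clients: when the shared
// queue is full, the queue status is returned as the error, so the caller can
// decide whether to retry on the next cycle or drop the chunk.
// A minimal usage sketch (render_next_chunk() and handle_full_queue() are
// hypothetical caller-side helpers):
//
//     Array<Sample, AUDIO_BUFFER_SIZE> chunk = render_next_chunk();
//     if (auto result = connection->realtime_enqueue(chunk); result.is_error())
//         handle_full_queue(result.error());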
ErrorOr<void, AudioQueue::QueueStatus> ConnectionToServer::realtime_enqueue(Array<Sample, AUDIO_BUFFER_SIZE> samples)
{
return m_buffer->try_enqueue(samples);
}
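
// The server advances the queue tail once per dequeued buffer, so a weakly
// consistent read of the tail index times the fixed buffer size approximates
// the number of samples played so far.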
unsigned ConnectionToServer::total_played_samples() const
{
return m_buffer->weak_tail() * AUDIO_BUFFER_SIZE;
}
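
// Note: This only counts samples still waiting in the client-side user queue;
// chunks already handed off to the shared queue are not included.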
unsigned ConnectionToServer::remaining_samples()
{
return static_cast<unsigned>(m_user_queue->remaining_samples());
}
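
// Number of occupied buffer slots in the shared queue, i.e. its fixed
// capacity minus a weakly consistent read of the remaining free slots.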
size_t ConnectionToServer::remaining_buffers() const
{
return m_buffer->size() - m_buffer->weak_remaining_capacity();
}
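
// The IPC handlers below forward server-side state changes (main mix mute,
// main mix volume, per-client volume) to the optional user-installed
// callbacks.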
void ConnectionToServer::main_mix_muted_state_changed(bool muted)
{
if (on_main_mix_muted_state_change)
on_main_mix_muted_state_change(muted);
}

void ConnectionToServer::main_mix_volume_changed(double volume)
{
if (on_main_mix_volume_change)
on_main_mix_volume_change(volume);
}

void ConnectionToServer::client_volume_changed(double volume)
{
if (on_client_volume_change)
on_client_volume_change(volume);
}

}