Commit graph

347 commits

Author SHA1 Message Date
Andreas Kling b72f067f0d Kernel+Userland: Remove unused "effective priority" from threads
This has been merged with the regular Thread::priority field after
the recent changes to the scheduler.
2021-01-28 08:25:53 +01:00
Tom ccaeb47401 Kernel: Don't hold scheduler lock while setting up blocker in Thread::block
This fixes a deadlock when one processor is trying to block while another is
trying to unblock the same thread.
2021-01-27 22:48:41 +01:00
Tom ac3927086f Kernel: Keep a list of threads per Process
This allows us to iterate only the threads of the process.
2021-01-27 22:48:41 +01:00
Tom 03a9ee79fa Kernel: Implement thread priority queues
Rather than walking all Thread instances and putting them into
a vector to be sorted by priority, queue them into priority sorted
linked lists as soon as they become ready to be executed.
2021-01-27 22:48:41 +01:00
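A minimal userspace sketch of the idea behind the commit above (per-priority ready queues instead of sorting all threads on every scheduler pass), using standard containers rather than the kernel's intrusive lists; all names are illustrative:

    #include <array>
    #include <deque>

    struct Thread { int id; };

    constexpr int max_priority = 99;

    // One FIFO bucket per priority level. Threads are enqueued as soon as they
    // become ready, so picking the next thread is a scan for the highest
    // non-empty bucket instead of sorting every Thread instance.
    std::array<std::deque<Thread*>, max_priority + 1> ready_queues;

    void enqueue_ready(Thread* thread, int priority)
    {
        ready_queues[priority].push_back(thread);
    }

    Thread* pick_next()
    {
        for (int priority = max_priority; priority >= 0; --priority) {
            auto& queue = ready_queues[priority];
            if (!queue.empty()) {
                Thread* thread = queue.front();
                queue.pop_front();
                return thread;
            }
        }
        return nullptr; // nothing is ready to run
    }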
Tom 21d288a10e Kernel: Make Thread::current smp-safe
Change Thread::current to be a static function and read using the fs
register, which eliminates a window between Processor::current()
returning and calling a function on it, which can trigger preemption
and a move to a different processor, which then causes operating
on the wrong object.
2021-01-27 21:12:24 +01:00
Tom f88a8b16d7 Kernel: Make entering and leaving critical sections atomic
We also need to store m_in_critical in the Thread upon switching,
and we need to restore it. This solves a problem where threads
moving between different processors could end up with an unexpected
value.
2021-01-27 21:12:24 +01:00
Tom 33cdc1d2f1 Kernel: Use new Thread::previous_mode to track ticks 2021-01-27 21:12:24 +01:00
Tom 0bd558081e Kernel: Track previous mode when entering/exiting traps
This allows us to determine what the previous mode (user or kernel)
was, e.g. in the timer interrupt. This is used e.g. to determine
whether a signal handler should be set up.

Fixes #5096
2021-01-27 21:12:24 +01:00
asynts 7cf0c7cc0d Meta: Split debug defines into multiple headers.
The following script was used to make these changes:

    #!/bin/bash
    set -e

    tmp=$(mktemp -d)

    echo "tmp=$tmp"

    find Kernel \( -name '*.cpp' -o -name '*.h' \) | sort > $tmp/Kernel.files
    find . \( -path ./Toolchain -prune -o -path ./Build -prune -o -path ./Kernel -prune \) -o \( -name '*.cpp' -o -name '*.h' \) -print | sort > $tmp/EverythingExceptKernel.files

    cat $tmp/Kernel.files | xargs grep -Eho '[A-Z0-9_]+_DEBUG' | sort | uniq > $tmp/Kernel.macros
    cat $tmp/EverythingExceptKernel.files | xargs grep -Eho '[A-Z0-9_]+_DEBUG' | sort | uniq > $tmp/EverythingExceptKernel.macros

    comm -23 $tmp/Kernel.macros $tmp/EverythingExceptKernel.macros > $tmp/Kernel.unique
    comm -1 $tmp/Kernel.macros $tmp/EverythingExceptKernel.macros > $tmp/EverythingExceptKernel.unique

    cat $tmp/Kernel.unique | awk '{ print "#cmakedefine01 "$1 }' > $tmp/Kernel.header
    cat $tmp/EverythingExceptKernel.unique | awk '{ print "#cmakedefine01 "$1 }' > $tmp/EverythingExceptKernel.header

    for macro in $(cat $tmp/Kernel.unique)
    do
        cat $tmp/Kernel.files | xargs grep -l $macro >> $tmp/Kernel.new-includes ||:
    done
    cat $tmp/Kernel.new-includes | sort > $tmp/Kernel.new-includes.sorted

    for macro in $(cat $tmp/EverythingExceptKernel.unique)
    do
        cat $tmp/Kernel.files | xargs grep -l $macro >> $tmp/Kernel.old-includes ||:
    done
    cat $tmp/Kernel.old-includes | sort > $tmp/Kernel.old-includes.sorted

    comm -23 $tmp/Kernel.new-includes.sorted $tmp/Kernel.old-includes.sorted > $tmp/Kernel.includes.new
    comm -13 $tmp/Kernel.new-includes.sorted $tmp/Kernel.old-includes.sorted > $tmp/Kernel.includes.old
    comm -12 $tmp/Kernel.new-includes.sorted $tmp/Kernel.old-includes.sorted > $tmp/Kernel.includes.mixed

    for file in $(cat $tmp/Kernel.includes.new)
    do
        sed -i -E 's/#include <AK\/Debug\.h>/#include <Kernel\/Debug\.h>/' $file
    done

    for file in $(cat $tmp/Kernel.includes.mixed)
    do
        echo "mixed include in $file, requires manual editing."
    done
2021-01-26 21:20:00 +01:00
asynts 1a3a0836c0 Everywhere: Use CMake to generate AK/Debug.h.
This was done with the help of several scripts; I dump them here to
easily find them later:

    awk '/#ifdef/ { print "#cmakedefine01 "$2 }' AK/Debug.h.in

    for debug_macro in $(awk '/#ifdef/ { print $2 }' AK/Debug.h.in)
    do
        find . \( -name '*.cpp' -o -name '*.h' -o -name '*.in' \) -not -path './Toolchain/*' -not -path './Build/*' -exec sed -i -E 's/#ifdef '$debug_macro'/#if '$debug_macro'/' {} \;
    done

    # Remember to remove WRAPPER_GENERATOR_DEBUG from the list.
    awk '/#cmake/ { print "set("$2" ON)" }' AK/Debug.h.in
2021-01-25 09:47:36 +01:00
Andreas Kling 928ee2c791 Kernel: Don't let signals unblock threads while handling a page fault
It was possible to signal a process while it was paging in an inode
backed VM object. This would cause the inode read to fail with EINTR, and
the page fault handler would assert.

Solve this by simply not unblocking threads due to signals if they are
currently busy handling a page fault. This is probably not the best way
to solve this issue, so I've added a FIXME to that effect.
2021-01-21 00:14:56 +01:00
Andreas Kling 19d3f8cab7 Kernel+LibC: Turn errno codes into a strongly typed enum
..and allow implicit creation of KResult and KResultOr from ErrnoCode.
This means that kernel functions that return those types can finally
do "return EINVAL;" and it will just work.

There's a handful of functions that still deal with signed integers
that should be converted to return KResults.
2021-01-20 23:20:02 +01:00
Tom 1d621ab172 Kernel: Some futex improvements
This adds support for FUTEX_WAKE_OP, FUTEX_WAIT_BITSET, FUTEX_WAKE_BITSET,
FUTEX_REQUEUE, and FUTEX_CMP_REQUEUE, as well as global and private
futexes and absolute/relative timeouts against the appropriate clock. This
also changes the implementation so that kernel resources are only used when
a thread is blocked on a futex.

Global futexes are implemented as offsets in VMObjects, so that different
processes can share a futex against the same VMObject despite potentially
being mapped at different virtual addresses.
2021-01-17 20:30:31 +01:00
Tom b17a889320 Kernel: Add safe atomic functions
This allows us to perform atomic operations on potentially unsafe
user space pointers.
2021-01-17 20:30:31 +01:00
Andreas Kling 01c2480eb3 Kernel+LibC+WindowServer: Remove unused thread/process boost mechanism
The priority boosting mechanism has been broken for a very long time.
Let's remove it from the codebase and we can bring it back the day
someone feels like implementing it in a working way. :^)
2021-01-16 14:52:04 +01:00
Lenny Maiorani e6f907a155 AK: Simplify constructors and conversions from nullptr_t
Problem:
- Many constructors are defined as `{}` rather than using the ` =
  default` compiler-provided constructor.
- Some types provide an implicit conversion operator from `nullptr_t`
  instead of requiring the caller to default construct. This violates
  the C++ Core Guidelines suggestion to declare single-argument
  constructors explicit
  (https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#c46-by-default-declare-single-argument-constructors-explicit).

Solution:
- Change default constructors to use the compiler-provided default
  constructor.
- Remove implicit conversion operators from `nullptr_t` and change
  usage to enforce type consistency without conversion.
2021-01-12 09:11:45 +01:00
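The two rules above in one small self-contained example (the class is made up for illustration, not an actual AK type):

    class Handle {
    public:
        Handle() = default;                    // compiler-provided default instead of `Handle() {}`
        explicit Handle(int fd) : m_fd(fd) {}  // explicit: no accidental `Handle h = 3;`

        // No `Handle(std::nullptr_t)` constructor; callers default-construct
        // instead of writing `Handle h = nullptr;`.

    private:
        int m_fd { -1 };
    };

    int main()
    {
        Handle empty; // was: Handle empty = nullptr;
        Handle h(42);
        (void)empty;
        (void)h;
    }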
asynts 4e8fd0216b Everywhere: Replace a bundle of dbg with dbgln.
These changes are arbitrarily divided into multiple commits to make it
easier to find potentially introduced bugs with git bisect.
2021-01-09 21:11:09 +01:00
Tom 901ef3f1c8 Kernel: Specify default memory order for some non-synchronizing Atomics 2021-01-04 19:13:52 +01:00
Tom a0c91719d8 Kernel: Restore thread count if thread cannot be fully created 2021-01-01 23:43:44 +01:00
Tom 476f17b3f1 Kernel: Merge PurgeableVMObject into AnonymousVMObject
This implements memory commitments and lazy-allocation of committed
memory.
2021-01-01 23:43:44 +01:00
Tom 72440d90fe Kernel: Fix BlockCondition::unblock return value
BlockCondition::unblock should return true if it unblocked at
least one thread, not if iterating the blockers had been stopped.
This is a regression introduced by 49a76164c.

Fixes #4670
2020-12-31 10:52:58 +01:00
Tom 49a76164c8 Kernel: Consolidate the various BlockCondition::unblock variants
The unblock_all variant used to ASSERT if a blocker didn't unblock,
but it wasn't clear from the name that it would do that. Because
the BlockCondition already asserts that no blockers are left at
destruction time, it would still catch blockers that haven't been
unblocked for whatever reason.

Fixes #4496
2020-12-30 13:23:17 +01:00
Brian Gianforcaro 815d39886f Kernel: Tag more methods and types as [[nodiscard]]
Tag methods where not observing the return value is an obvious error
with [[nodiscard]] to catch potential future bugs.
2020-12-27 11:09:30 +01:00
Andreas Kling cb2c8f71f4 AK: Remove custom %b format string specifier
This was a non-standard specifier alias for %02x. This patch replaces
all uses of it with new-style formatting functions instead.
2020-12-25 17:04:28 +01:00
Andreas Kling 89d3b09638 Kernel: Allocate new main thread stack before committing to exec
If the allocation fails (e.g. ENOMEM) we want to simply return an error
from sys$execve() and continue executing the current executable.

This patch also moves make_userspace_stack_for_main_thread() out of the
Thread class since it had nothing in particular to do with Thread.
2020-12-25 16:22:01 +01:00
Andreas Kling 40e9edd798 LibELF: Move AuxiliaryValue into the ELF namespace 2020-12-25 14:48:30 +01:00
Tom 5f51d85184 Kernel: Improve time keeping and dramatically reduce interrupt load
This implements a number of changes related to time:
* If a HPET is present, it is now used only as a system timer, unless
  the Local APIC timer is used (in which case the HPET timer will not
  trigger any interrupts at all).
* If a HPET is present, the current time can now be as accurate as the
  chip can be, independently of the system timer. We now query the
  HPET main counter for the current time in CPU #0's system timer
  interrupt, and use that as a baseline. If a high-precision time is
  queried, that baseline is used in combination with querying the HPET
  timer directly, which should give a much more accurate timestamp at
  the expense of more overhead. For faster timestamps, the coarser
  value based on the last interrupt will be returned. This also means
  that any missed interrupts should not cause the time to drift.
* The default system interrupt rate is reduced to about 250 per second.
* Fix calculation of Thread CPU usage by using the amount of ticks they
  used rather than the number of times a context switch happened.
* Implement CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE and use it
  for most cases where precise timestamps are not needed.
2020-12-21 18:26:12 +01:00
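From userspace, the coarse clocks are used like any other POSIX clock; a minimal sketch, assuming CLOCK_MONOTONIC_COARSE is defined as this commit describes:

    #include <cstdio>
    #include <time.h>

    int main()
    {
        struct timespec ts;
        // Cheaper than CLOCK_MONOTONIC: returns the value cached at the last
        // timer interrupt instead of reading the HPET counter directly.
        if (clock_gettime(CLOCK_MONOTONIC_COARSE, &ts) == 0)
            printf("coarse monotonic: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }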
Tom c4176b0da1 Kernel: Fix Lock race causing infinite spinning between two threads
We need to account for how many shared lock instances the current
thread owns, so that we can properly release such references when
yielding execution.

We also need to release the process lock when donating.
2020-12-16 23:38:17 +01:00
Tom 1042762deb Kernel: Fix block recursion
Since the process lock is using the Lock class, re-locking the process
lock may cause another call to Thread::block. This caused some problems
with multiple blockers attempting to be used at the same time. To solve
this problem, remember if the process lock was held, and if it was then
relock after we're done with the blockers, just before returning.
2020-12-12 21:28:12 +01:00
Tom c455fc2030 Kernel: Change wait blocking to Process-only blocking
This prevents zombies created by multi-threaded applications and brings
our model back closer to what other OSes do.

This also means that SIGSTOP needs to halt all threads, and SIGCONT needs
to resume those threads.
2020-12-12 21:28:12 +01:00
Tom 4bbee00650 Kernel: disown should unblock any potential waiters
This is necessary because if a process changes the state to Stopped
or resumes from that state, a wait entry is created in the parent
process. So, if a child process does this before disown is called,
we need to clear those entries to avoid leaking references/zombies
that won't be cleaned up until the former parent exits.

This also should solve an even more unlikely corner case where another
thread is waiting on a pid that is being disowned by another thread.
2020-12-12 21:28:12 +01:00
Tom da5cc34ebb Kernel: Fix some issues related to fixes and block conditions
Fix some problems with join blocks where the joining thread's block
condition was added twice, which led to a crash when trying to
unblock that condition a second time.

Deferred block condition evaluation by File objects was also not
properly keeping the File object alive, which led to some random
crashes and corruption problems.

Other problems were caused by the fact that the Queued state didn't
handle signals/interruptions consistently. To solve these issues we
remove this state entirely, along with Thread::wait_on and change
the WaitQueue into a BlockCondition instead.

Also, deliver signals even if there isn't going to be a context switch
to another thread.

Fixes #4336 and #4330
2020-12-12 21:28:12 +01:00
Tom 12cf6f8650 Kernel: Add CLOCK_REALTIME support to the TimerQueue
This allows us to use blocking timeouts with either monotonic or
real time for all blockers. Which means that clock_nanosleep()
now also supports CLOCK_REALTIME.

Also, switch alarm() to use CLOCK_REALTIME as per specification.
2020-12-02 13:02:04 +01:00
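A small example of the newly possible combination (real-time clock with an absolute deadline), using only standard POSIX calls:

    #include <time.h>

    // Sleep until an absolute point on the real-time clock, two seconds from
    // "now". With CLOCK_REALTIME handled by the TimerQueue, this no longer has
    // to be approximated with a relative monotonic sleep.
    int sleep_until_two_seconds_from_now()
    {
        struct timespec deadline;
        if (clock_gettime(CLOCK_REALTIME, &deadline) != 0)
            return -1;
        deadline.tv_sec += 2;
        return clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &deadline, nullptr);
    }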
Tom 601a688b6f Kernel: TimerQueue::cancel_timer needs to wait if timer is executing
We need to be able to guarantee that a timer won't be executing after
TimerQueue::cancel_timer returns. In the case of multiple processors
this means that we may need to wait while the timer handler finishes
execution on another core.

This also fixes a problem in Thread::block and Thread::wait_on where
theoretically the timer could execute after the function returned
and the Thread disappeared.
2020-12-02 13:02:04 +01:00
Tom 78f1b5e359 Kernel: Fix some problems with Thread::wait_on and Lock
This changes the Thread::wait_on function to not enable interrupts
upon leaving, which caused some problems with page fault handlers
and in other situations. It may now be called from critical
sections, with interrupts enabled or disabled, and returns to the
same state.

This also requires some fixes to Lock. To aid debugging, a new
define LOCK_DEBUG is added that enables checking for Lock leaks
upon finalization of a Thread.
2020-12-01 09:48:34 +01:00
Tom 046d6855f5 Kernel: Move block condition evaluation out of the Scheduler
This makes the Scheduler a lot leaner by not having to evaluate
block conditions every time it is invoked. Instead evaluate them as
the states change, and unblock threads at that point.

This also implements some more waitid/waitpid/wait features and
behavior. For example, WUNTRACED and WNOWAIT are now supported. And
wait will now not return EINTR when SIGCHLD is delivered at the
same time.
2020-11-30 13:17:02 +01:00
Tom 6a620562cc Kernel: Allow passing a thread argument for new kernel threads
This adds the ability to pass a pointer to a kernel thread/process.
Also add the ability to use a closure as the thread function, which
allows passing information to a kernel thread more easily.
2020-11-30 13:17:02 +01:00
Tom 6cb640eeba Kernel: Move some time related code from Scheduler into TimeManagement
Use the TimerQueue to expire blocking operations, which is one less thing
the Scheduler needs to check on every iteration.

Also, add a BlockTimeout class that will automatically handle relative or
absolute timeouts as well as overriding timeouts (e.g. socket timeouts)
more consistently.

Also, rework the TimerQueue class to be able to fire events from
any processor, which requires Timer to be RefCounted. Also allow
creating id-less timers for use by blocking operations.
2020-11-30 13:17:02 +01:00
Andreas Kling 94ff04b536 Kernel: Make CLOCK_MONOTONIC respect the system tick frequency
The time returned by sys$clock_gettime() was not aligned with the delay
calculations in sys$clock_nanosleep(). This patch fixes that by taking
the system's ticks_per_second value into account in both functions.

This patch also removes the need for Thread::sleep_until() and uses
Thread::sleep() for both absolute and relative sleeps.

This was causing the nesalizer emulator port to sleep for a negative
amount of time at the end of each frame, making it run way too fast.
2020-11-22 17:20:58 +01:00
Tom 1e2e3eed62 Kernel: Fix a few deadlocks with Thread::m_lock and g_scheduler_lock
g_scheduler_lock cannot safely be acquired after Thread::m_lock
because another processor may already hold g_scheduler_lock and wait
for the same Thread::m_lock.
2020-10-26 08:57:25 +01:00
Tom 838d9fa251 Kernel: Make Thread refcounted
Similar to Process, we need to make Thread refcounted. This will solve
problems that will appear once we schedule threads on more than one
processor. This allows us to hold onto threads without necessarily
holding the scheduler lock for the entire duration.
2020-09-27 19:46:04 +02:00
Tom 69a9c78783 Kernel: Allow killing queued threads
We need to dequeue and wake threads that are waiting if the process
terminates.

Fixes #3603 without the HackStudio fixes in #3606.
2020-09-26 20:03:16 +02:00
Tom 1727b2d7cd Kernel: Fix thread joining issues
The thread joining logic hadn't been updated to account for the subtle
differences introduced by software context switching. This fixes several
race conditions related to thread destruction and joining, as well as
finalization which did not properly account for detached state and the
fact that threads can be joined after termination as long as they're not
detached.

Fixes #3596
2020-09-26 13:03:13 +02:00
Luke 68b361bd21 Kernel: Return ENOMEM in more places
There are plenty of places in the kernel that aren't
checking if they actually got their allocation.

This fixes some of them, but definitely not all.

Fixes #3390
Fixes #3391

Also, let's make find_one_free_page() return nullptr
if it doesn't get a free index. This stops the kernel
crashing when out of memory and allows memory purging
to take place again.

Fixes #3487
2020-09-16 20:38:19 +02:00
Tom c8d9f1b9c9 Kernel: Make copy_to/from_user safe and remove unnecessary checks
Since the CPU already does almost all necessary validation steps
for us, we don't really need to attempt to do this. Doing it
ourselves doesn't really work very reliably, because we'd have to
account for other processors modifying virtual memory, and for pages
that cannot be allocated due to insufficient resources.

So change the copy_to/from_user (and associated helper functions)
to use the new safe_memcpy, which will return whether it succeeded
or not. The only manual validation step needed (which the CPU
can't perform for us) is making sure the pointers provided by user
mode aren't pointing to kernel mappings.

To make it easier to read/write from/to either kernel or user mode
data add the UserOrKernelBuffer helper class, which will internally
either use copy_from/to_user or directly memcpy, or pass the data
through directly using a temporary buffer on the stack.

Last but not least we need to keep syscall params trivial as we
need to copy them from/to user mode using copy_from/to_user.
2020-09-13 21:19:15 +02:00
Ben Wiederhake 0d79e57c4d Kernel: Fix various forward declarations
I decided to modify MappedROM.h because all other entries in Forward.h
are also classes, and this is visually more pleasing.

Other than that, it just doesn't make any difference which way we resolve
the conflicts.
2020-09-12 13:46:15 +02:00
Tom 92bfe40954 Kernel: Keep signal state in sync
In c3d231616c we added the atomic variable
m_have_any_unmasked_pending_signals tracking the state of pending signals.
Add helper functions that automatically update this variable as needed.
2020-09-09 12:43:56 +02:00
Tom c3d231616c Kernel: Fix crash when delivering signal to barely created thread
We need to wait until a thread is fully set up and ready for running
before attempting to deliver a signal. Otherwise we may not have a
user stack yet.

Also, remove the Skip0SchedulerPasses and Skip1SchedulerPass thread
states that we don't really need anymore with software context switching.

Fixes the kernel crash reported in #3419
2020-09-07 16:49:19 +02:00
Muhammad Zahalqa 64ea64fca5 Kernel: Remove an unimplemented function (#3210) 2020-08-19 11:24:40 +02:00
Nico Weber 430b265cd4 AK: Rename KB, MB, GB to KiB, MiB, GiB
The SI prefixes "k", "M", "G" mean "10^3", "10^6", "10^9".
The IEC prefixes "Ki", "Mi", "Gi" mean "2^10", "2^20", "2^30".

Let's use the correct name, at least in code.

Only changes the name of the constants, no other behavior change.
2020-08-16 16:33:28 +02:00
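The renamed constants boil down to something like the following sketch (the exact definitions live in AK and may differ):

    // IEC binary prefixes: powers of two, not powers of ten.
    constexpr unsigned long long KiB = 1024ULL;
    constexpr unsigned long long MiB = 1024ULL * KiB;
    constexpr unsigned long long GiB = 1024ULL * MiB;

    static_assert(4 * KiB == 4096);
    static_assert(1 * GiB == 1073741824ULL);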
Tom 72960fedc6 Kernel: Briefly resume stopped threads when being killed
We need to briefly put Stopped threads back into Running state
so that the kernel stacks can get cleaned up when they're being
killed.

Fixes #3130
2020-08-15 00:15:00 +02:00
Tom 49d5232f33 Kernel: Always return from Thread::wait_on
We need to always return from Thread::wait_on, even when a thread
is being killed. This is necessary so that the kernel call stack
can clean up and release references held by it. Then, right before
transitioning back to user mode, we check if the thread is
supposed to die, and at that point change the thread state to
Dying to prevent further scheduling of this thread.

This addresses some possible resource leaks similar to #3073
2020-08-11 14:54:36 +02:00
Ben Wiederhake bee08a4b9f Kernel: More PID/TID typing 2020-08-10 11:51:45 +02:00
Ben Wiederhake f5744a6f2f Kernel: PID/TID typing
This compiles, and contains exactly the same bugs as before.
The regex 'FIXME: PID/' should reveal all markers that I left behind, including:
- Incomplete conversion
- Issues or things that look fishy
- Actual bugs that will go wrong during runtime
2020-08-10 11:51:45 +02:00
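The gist of the typing is a distinct wrapper type so a process ID and a thread ID can no longer be swapped silently; a simplified sketch, not the kernel's actual implementation:

    #include <cstdint>

    // Minimal "strong typedef": same representation as the underlying integer,
    // but ProcessID and ThreadID are distinct types that don't implicitly
    // convert into each other or into plain ints.
    template<typename Tag>
    class DistinctID {
    public:
        constexpr explicit DistinctID(int32_t value) : m_value(value) {}
        constexpr int32_t value() const { return m_value; }
        constexpr bool operator==(DistinctID other) const { return m_value == other.m_value; }
    private:
        int32_t m_value { 0 };
    };

    using ProcessID = DistinctID<struct ProcessIDTag>;
    using ThreadID = DistinctID<struct ThreadIDTag>;

    void kill_process(ProcessID) { }

    // kill_process(ThreadID(1)); // no longer compiles: the IDs are not interchangeable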
Tom 41d2a0e9f7 Kernel: Dequeue dying threads from WaitQueue
If a thread is waiting but getting killed, we need to dequeue
the thread from the WaitQueue so that a potential wake before
finalization doesn't happen.
2020-08-06 10:02:55 +02:00
Tom f4a5c9b6c2 Kernel: Consolidate timeout logic
Allow passing in an optional timeout to Thread::block and move
the timeout check out of Thread::Blocker. This way all Blockers
implicitly support timeouts and don't need to implement it
themselves. Do however allow them to override timeouts (e.g.
for sockets).
2020-08-03 18:23:00 +02:00
Tom c813bb7355 Kernel: Fix a few Thread::block related races
We need to have a Thread lock to protect threading related
operations, such as Thread::m_blocker which is used in
Thread::block.

Also, if a Thread::Blocker indicates that it should be
unblocking immediately, don't actually block the Thread
and instead return immediately in Thread::block.
2020-08-03 15:59:11 +02:00
Tom f011c420c1 Kernel: Fix signal delivery when no syscall is made
This fixes a regression introduced by the new software context
switching where the Kernel would not deliver a signal unless the
process is making system calls. This is because the TSS no longer
updates the CS value, so the scheduler never considered delivery
as the process always appeared to be in kernel mode. With software
context switching we can just set up the signal trampoline at
any time and when the processor returns back to user mode it'll
get executed. This should fix e.g. killing programs that are
stuck in some tight loop that doesn't make any system calls and
is only pre-empted by the timer interrupt.

Fixes #2958
2020-08-02 20:50:29 +02:00
Tom 538b985487 Kernel: Remove ProcessInspectionHandle and make Process RefCounted
By making the Process class RefCounted we don't really need
ProcessInspectionHandle anymore. This also fixes some race
conditions where a Process may be deleted while still being
used by ProcFS.

Also make sure to acquire the Process' lock when accessing
regions.

Last but not least, there's no reason why a thread can't be
scheduled while being inspected, though in practice it won't
happen anyway because the scheduler lock is held at the same
time.
2020-08-02 17:15:11 +02:00
Ben Wiederhake d8c8820ee9 Kernel: Allow Thread::sleep for more than 388 days
Because Thread::sleep is an internal interface, it's easy to check that there
are only a few callers: Process::sys$sleep, usleep, and nanosleep are happy
with this increased size, because now they support the entire range of their
arguments (assuming small-ish values for ticks_per_second()).
SyncTask doesn't care.

Note that the old behavior wasn't "cap out at 388 days", which would have been
reasonable. Instead, the code resulted in unsigned overflow, meaning that a
very long sleep would "on average" end after about 194 days, sometimes much
quicker.
2020-07-25 20:21:25 +02:00
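A quick check of the 388-day figure, assuming the roughly 128 scheduler ticks per second that the number implies (the exact tick rate is an assumption here):

    #include <cstdint>

    // A u32 tick counter at ~128 ticks/second wraps after about 388 days;
    // requesting a longer sleep overflowed the 32-bit tick math.
    constexpr uint64_t ticks_per_second = 128;
    constexpr uint64_t max_u32_ticks = 4294967295ULL;
    constexpr uint64_t max_sleep_seconds = max_u32_ticks / ticks_per_second; // ~33.5 million seconds
    constexpr uint64_t max_sleep_days = max_sleep_seconds / 86400;

    static_assert(max_sleep_days == 388);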
Tom 419703a1f2 Kernel: Fix checking BlockResult
We now have BlockResult::WokeNormally and BlockResult::NotBlocked,
both of which indicate no error. We can no longer just check for
BlockResult::WokeNormally and assume anything else must be an
interruption.
2020-07-07 15:46:58 +02:00
Andrew Kaster f96b827990 Kernel+LibELF: Expose ELF Auxiliary Vector to Userspace
The AT_* entries are placed after the environment variables, so that
they can be found by iterating until the end of the envp array, and then
going even further beyond :^)
2020-07-07 10:38:54 +02:00
Tom 9725bda63e Kernel: Enhance WaitQueue to remember pending wakes
If WaitQueue::wake_all, WaitQueue::wake_one, or WaitQueue::wake_n
is called but nobody is currently waiting, we should remember that
fact and prevent someone from waiting after such a request. This
solves a race condition where the Finalizer thread is notified
to finalize a thread, but it is not (yet) waiting on this queue.

Fixes #2693
2020-07-06 10:00:24 +02:00
Tom 2a82a25fec Kernel: Various context switch fixes
These changes solve a number of problems with the software
context switching:

* The scheduler lock really should be held throughout context switches
* Transitioning from the initial (idle) thread to another needs to
  hold the scheduler lock
* Transitioning from a dying thread to another also needs to hold
  the scheduler lock
* Dying threads cannot necessarily be finalized if they haven't
  been switched out yet, so flag them as active while a processor
  is running them (the Running state may be switched to Dying while
  the thread is still actually running)
2020-07-06 10:00:24 +02:00
Tom 788b2d64c6 Kernel: Require a reason to be passed to Thread::wait_on
The Lock class still permits no reason, but for everything else
require a reason to be passed to Thread::wait_on. This makes it
easier to diagnose why a Thread is in Queued state.
2020-07-06 10:00:24 +02:00
Tom bb84fad0bf Kernel: Fix retrieving frame pointer from a thread
If we're trying to walk the stack for another thread, we can
no longer retrieve the EBP register from Thread::m_tss. Instead,
we need to look at the top of the kernel stack, because all threads
not currently running were last in kernel mode. Context switches
now always trigger a brief switch to kernel mode, and Thread::m_tss
is now only used to save ESP and EIP.

Fixes #2678
2020-07-03 21:16:56 +02:00
Tom e373e5f007 Kernel: Fix signal delivery
When delivering urgent signals to the current thread
we need to check if we should be unblocked, and if not
we need to yield to another process.

We also need to make sure that we suppress context switches
during Process::exec() so that we don't clobber the registers
that it sets up (eip mainly) by a context switch. To be able
to do that we add the concept of a critical section, which are
similar to Process::m_in_irq but different in that they can be
requested at any time. Calls to Scheduler::yield and
Scheduler::donate_to will return instantly without triggering
a context switch, but the processor will then asynchronously
trigger a context switch once the critical section is left.
2020-07-03 19:32:34 +02:00
Andreas Kling 47f5b24cc8 Kernel: Remove no-longer-used GDT selector from Thread
Now that we use software context switching, each thread no longer has
its own GDT entry (yay!) so we can get rid of this Thread member. :^)
2020-07-02 21:50:42 +02:00
Tom 16783bd14d Kernel: Turn Thread::current and Process::current into functions
This allows us to query the current thread and process on a
per-processor basis.
2020-07-01 12:07:01 +02:00
Tom d99901660d Kernel/LibCore: Expose processor id where a thread last ran 2020-07-01 12:07:01 +02:00
Tom fb41d89384 Kernel: Implement software context switching and Processor structure
Moving certain globals into a new Processor structure for
each CPU allows us to eventually run an instance of the
scheduler on each CPU.
2020-07-01 12:07:01 +02:00
Nico Weber d23e655c83 LibC: Implement pselect
pselect() is similar to select(), but it takes its timeout
as timespec instead of as timeval, and it takes an additional
sigmask parameter.

Change the sys$select parameters to match pselect() and implement
select() in terms of pselect().
2020-06-22 16:00:20 +02:00
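A sketch of "select() in terms of pselect()" as described above (illustrative, not the actual LibC source; the real select() may also update the remaining timeout):

    #include <sys/select.h>
    #include <time.h>

    // Forward select() to pselect(): convert the timeval timeout to a timespec
    // and pass a null signal mask.
    int select_via_pselect(int nfds, fd_set* readfds, fd_set* writefds, fd_set* exceptfds, struct timeval* timeout)
    {
        struct timespec ts;
        struct timespec* ts_ptr = nullptr;
        if (timeout) {
            ts.tv_sec = timeout->tv_sec;
            ts.tv_nsec = timeout->tv_usec * 1000;
            ts_ptr = &ts;
        }
        return pselect(nfds, readfds, writefds, exceptfds, ts_ptr, nullptr);
    }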
Sergey Bugaev d2b500fbcb AK+Kernel: Help the compiler inline a bunch of trivial methods
If these methods get inlined, the compiler is able to statically eliminate most
of the assertions. Alas, it doesn't realize this, and believes inlining them to
be too expensive. So give it a strong hint that it's not the case.

This *decreases* the kernel binary size.
2020-05-20 14:11:13 +02:00
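The "strong hint" is an always-inline attribute on otherwise ordinary small methods; roughly like this sketch (AK wraps the attribute in its own macro, so the exact spelling may differ):

    // GCC/Clang spelling of a forced-inline hint.
    #define FORCE_INLINE [[gnu::always_inline]] inline

    class RefCountedThing {
    public:
        FORCE_INLINE void ref()
        {
            // Trivial body: once this is inlined at the call site, the compiler
            // can often prove any assertions inside it and drop them entirely.
            ++m_ref_count;
        }

    private:
        unsigned m_ref_count { 1 };
    };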
Brian Gianforcaro faf15e3721 Kernel: Add timeout support to Thread::wait_on
This change plumbs a new optional timeout option to wait_on.
The timeout is enabled by enqueing a timer on the timer queue
while we are waiting. We can then see if we were woken up or
timed out by checking if we are still on the wait queue or not.
2020-04-26 21:31:52 +02:00
Andreas Kling bed0e6d250 Kernel: Make Process and Thread non-copyable and non-movable 2020-04-22 12:36:35 +02:00
Itamar 9e51e295cf ptrace: Add PT_SETREGS
PT_SETREGS sets the registers of the traced thread. It can only be
used when the tracee is stopped.

Also, refactor ptrace.
The implementation was getting long and cluttered the already large
Process.cpp file.

This commit moves the bulk of the implementation to Kernel/Ptrace.cpp,
and factors out peek & poke to separate methods of the Process class.
2020-04-13 00:53:22 +02:00
Itamar 77f671b462 CPU: Handle breakpoint trap
Also, start working on the debugger app.
2020-04-13 00:53:22 +02:00
Andreas Kling b7ff3b5ad1 Kernel: Include the current instruction pointer in profile samples
We were missing the innermost instruction pointer when sampling.
This makes the instruction-level profile info a lot cooler! :^)
2020-04-11 21:04:45 +02:00
Itamar 6b74d38aab Kernel: Add 'ptrace' syscall
This commit adds a basic implementation of
the ptrace syscall, which allows one process
(the tracer) to control another process (the tracee).

While a process is being traced, it is stopped whenever a signal is
received (other than SIGCONT).

The tracer can start tracing another thread with PT_ATTACH,
which causes the tracee to stop.

From there, the tracer can use PT_CONTINUE
to continue the execution of the tracee,
or use other request codes (which haven't been implemented yet)
to modify the state of the tracee.

Additional request codes are PT_SYSCALL, which causes the tracee to
continue execution but stop at the next entry or exit from a syscall,
and PT_GETREGS which fetches the last saved register set of the tracee
(can be used to inspect syscall arguments and return value).

A special request code is PT_TRACE_ME, which is issued by the tracee
and causes it to stop when it calls execve and wait for the
tracer to attach.
2020-03-28 18:27:18 +01:00
Andreas Kling 7d862dd5fc AK: Reduce header dependency graph of String.h
String.h no longer pulls in StringView.h. We do this by moving a bunch
of String functions out-of-line.
2020-03-23 13:48:44 +01:00
Andreas Kling b1058b33fb AK: Add global FlatPtr typedef. It's u32 or u64, based on sizeof(void*)
Use this instead of uintptr_t throughout the codebase. This makes it
possible to pass a FlatPtr to something that has u32 and u64 overloads.
2020-03-08 13:06:51 +01:00
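In standard C++ terms the typedef is roughly equivalent to:

    #include <cstdint>
    #include <type_traits>

    // A pointer-sized unsigned integer: u32 on 32-bit targets, u64 on 64-bit.
    using FlatPtr = std::conditional_t<sizeof(void*) == 8, uint64_t, uint32_t>;

    static_assert(sizeof(FlatPtr) == sizeof(void*));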
Andreas Kling 2839bb0be1 Kernel: Restore the previous thread state on SIGCONT after SIGSTOP
When stopping a thread with the SIGSTOP signal, we now store the thread
state in Thread::m_stop_state. That state is then restored on SIGCONT.
This fixes an issue where previously-blocked threads would unblock
upon resume. Now they simply resume in the Blocked state, and it's up
to the regular unblocking mechanism to unblock them.

Fixes #1326.
2020-03-01 15:14:17 +01:00
Andreas Kling 9aa234cc47 Kernel: Reset FPU state on exec() 2020-02-18 13:44:27 +01:00
Andreas Kling 48f7c28a5c Kernel: Replace "current" with Thread::current and Process::current
Suggested by Sergey. The currently running Thread and Process are now
Thread::current and Process::current respectively. :^)
2020-02-17 15:04:27 +01:00
Andreas Kling 16818322c5 Kernel: Reduce header dependencies of Process and Thread 2020-02-16 02:01:42 +01:00
Andreas Kling e28809a996 Kernel: Add forward declaration header 2020-02-16 01:50:32 +01:00
Andreas Kling a356e48150 Kernel: Move all code into the Kernel namespace 2020-02-16 01:27:42 +01:00
Andreas Kling 0341ddc5eb Kernel: Rename RegisterDump => RegisterState 2020-02-16 00:15:37 +01:00
Andreas Kling ea8d386146 Kernel: Update Thread::raw_backtrace() signature to use uintptr_t 2020-02-02 19:00:38 +01:00
Andreas Kling 5163c5cc63 Kernel: Expose the signal that stopped a thread via sys$waitpid() 2020-01-27 20:47:10 +01:00
Andreas Kling 137a45dff2 Kernel: read()/write() should respect timeouts when used on a socket
Move timeout management to the ReadBlocker and WriteBlocker classes.
Also get rid of the specialized ReceiveBlocker since it no longer does
anything that ReadBlocker can't do.
2020-01-26 17:54:23 +01:00
Andreas Kling e901a3695a Kernel: Use the templated copy_to/from_user() in more places
These ensure that the "to" and "from" pointers have the same type,
and also that we copy the correct number of bytes.
2020-01-20 13:41:21 +01:00
Andreas Kling 94ca55cefd Meta: Add license header to source files
As suggested by Joshua, this commit adds the 2-clause BSD license as a
comment block to the top of every source file.

For the first pass, I've just added myself for simplicity. I encourage
everyone to add themselves as copyright holders of any file they've
added or modified in some significant way. If I've added myself in
error somewhere, feel free to replace it with the appropriate copyright
holder instead.

Going forward, all new source files should include a license header.
2020-01-18 09:45:54 +01:00
Andreas Kling 41376d4662 Kernel: Fix Lock racing to the WaitQueue
There was a time window between releasing Lock::m_lock and calling into
the lock's WaitQueue where someone else could take m_lock and bring two
threads into a deadlock situation.

Fix this issue by holding Lock::m_lock until interrupts are disabled by
either Thread::wait_on() or WaitQueue::wake_one().
2020-01-12 19:04:16 +01:00
Andreas Kling 8c5cd97b45 Kernel: Fix kernel null deref on process crash during join_thread()
The join_thread() syscall is not supposed to be interruptible by
signals, but it was. And since the process death mechanism piggybacked
on signal interrupts, it was possible to interrupt a pthread_join() by
killing the process that was doing it, leading to confusion due to some
assumptions being made by Thread::finalize() for threads that have a
pending joiner.

This patch fixes the issue by making "interrupted by death" a distinct
block result separate from "interrupted by signal". Then we handle that
state in join_thread() and tidy things up so that thread finalization
doesn't get confused by the pending joiner being gone.

Test: Tests/Kernel/null-deref-crash-during-pthread_join.cpp
2020-01-10 19:23:45 +01:00
Andreas Kling e23f05a157 Kernel: Remove unused variable Thread::m_userspace_stack_region 2020-01-09 12:31:18 +01:00
Andreas Kling fd740829d1 Kernel: Switch to eagerly restoring x86 FPU state on context switch
Lazy FPU restore is well known to be vulnerable to timing attacks,
and eager restore is a lot simpler anyway, so let's just do it eagerly.
2020-01-01 16:54:21 +01:00
Andreas Kling a69734bf2e Kernel: Also add a process boosting mechanism
Let's also have set_process_boost() for giving all threads in a process
the same boost.
2019-12-30 20:10:00 +01:00
Andreas Kling 610f3ad12f Kernel: Add a basic thread boosting mechanism
This patch introduces a syscall:

    int set_thread_boost(int tid, int amount)

You can use this to add a permanent boost value to the effective thread
priority of any thread with your UID (or any thread in the system if
you are the superuser.)

This is quite crude, but opens up some interesting opportunities. :^)
2019-12-30 19:23:13 +01:00
Andreas Kling 50677bf806 Kernel: Refactor scheduler to use dynamic thread priorities
Threads now have numeric priorities with a base priority in the 1-99
range.

Whenever a runnable thread is *not* scheduled, its effective priority
is incremented by 1. This is tracked in Thread::m_extra_priority.
The effective priority of a thread is m_priority + m_extra_priority.

When a runnable thread *is* scheduled, its m_extra_priority is reset to
zero and the effective priority returns to base.

This means that lower-priority threads will always eventually get
scheduled to run, once their effective priority becomes high enough to
exceed the base priority of the threads "above" them.

The previous values for ThreadPriority (Low, Normal and High) are now
replaced as follows:

    Low -> 10
    Normal -> 30
    High -> 50

In other words, it will take 20 ticks for a "Low" priority thread to
get to "Normal" effective priority, and another 20 to reach "High".

This is not perfect, and I've used some quite naive data structures,
but I think the mechanism will allow us to build various new and
interesting optimizations, and we can figure out better data structures
later on. :^)
2019-12-30 18:46:17 +01:00
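The mechanism in the commit above, reduced to a few lines (illustrative only; the field names follow the commit message):

    struct SchedulableThread {
        int m_priority { 30 };      // base priority, 1-99 (Normal == 30)
        int m_extra_priority { 0 }; // grows while the thread is passed over
    };

    int effective_priority(const SchedulableThread& thread)
    {
        return thread.m_priority + thread.m_extra_priority;
    }

    void after_scheduler_pass(SchedulableThread& thread, bool was_scheduled)
    {
        if (was_scheduled)
            thread.m_extra_priority = 0; // back to base once it actually runs
        else
            ++thread.m_extra_priority;   // starved threads slowly climb in priority
    }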
Andreas Kling abdd5aa08a Kernel: Separate runnable thread queues by priority
This patch introduces three separate thread queues, one for each thread
priority available to userspace (Low, Normal and High.)

Each queue operates in a round-robin fashion, but we now always prefer
to schedule the highest priority thread that currently wants to run.

There are tons of tweaks and improvements that we can and should make
to this mechanism, but I think this is a step in the right direction.

This makes WindowServer significantly more responsive while one of its
clients is burning CPU. :^)
2019-12-27 00:52:30 +01:00
Andreas Kling f4978b2be1 Kernel: Use IntrusiveList to make WaitQueue allocation-free :^) 2019-12-22 12:38:01 +01:00
Andreas Kling 3012b224f0 Kernel: Fix intermittent assertion failure in sys$exec()
While setting up the main thread stack for a new process, we'd incur
some zero-fill page faults. This was to be expected, since we allocate
a huge stack but lazily populate it with physical pages.

The problem is that page fault handlers may enable interrupts in order
to grab a VMObject lock (or to page in from an inode.)

During exec(), a process is reorganizing itself and will be in a very
unrunnable state if the scheduler should interrupt it and then later
ask it to run again. Which is exactly what happens if the process gets
pre-empted while the new stack's zero-fill page fault grabs the lock.

This patch fixes the issue by creating new main thread stacks before
disabling interrupts and going into the critical part of exec().
2019-12-18 23:03:23 +01:00
Andreas Kling 7a64f55c0f Kernel: Fix get_register_dump_from_stack() after IRQ entry changes
I had to change the layout of RegisterDump a little bit to make the new
IRQ entry points work. This broke get_register_dump_from_stack() which
was expecting the RegisterDump to be badly aligned due to a goofy extra
16 bits which are no longer there.
2019-12-15 17:58:53 +01:00
Andreas Kling b32e961a84 Kernel: Implement a simple process time profiler
The kernel now supports basic profiling of all the threads in a process
by calling profiling_enable(pid_t). You finish the profiling by calling
profiling_disable(pid_t).

This all works by recording thread stacks when the timer interrupt
fires and the current thread is in a process being profiled.
Note that symbolication is deferred until profiling_disable() to avoid
adding more noise than necessary to the profile.

A simple "/bin/profile" command is included here that can be used to
start/stop profiling like so:

    $ profile 10 on
    ... wait ...
    $ profile 10 off

After a profile has been recorded, it can be fetched in /proc/profile

There are various limits (or "bugs") on this mechanism at the moment:

- Only one process can be profiled at a time.
- We allocate 8MB for the samples, if you use more space, things will
  not work, and probably break a bit.
- Things will probably fall apart if the profiled process dies during
  profiling, or while extracting /proc/profile
2019-12-11 20:36:56 +01:00
Andrew Kaster 9058962712 Kernel: Allow setting thread names
The main thread of each kernel/user process will take the name of
the process. Extra threads will get a fancy new name
"ProcessName[<tid>]".

Thread backtraces now list the thread name in addition to tid.

Add the thread name to /proc/all (should it get its own proc
file?).

Add two new syscalls, set_thread_name and get_thread_name.
2019-12-08 14:09:29 +01:00
Andreas Kling 8bb98aa31b Kernel: Use a WaitQueue to implement finalizer wakeup
This gets rid of the special "Lurking" thread state and replaces it
with a generic WaitQueue :^)
2019-12-01 19:17:17 +01:00
Andreas Kling 5a45376180 Kernel+SystemMonitor: Log amounts of I/O per thread
This patch adds these I/O counters to each thread:

- (Inode) file read bytes
- (Inode) file write bytes
- Unix socket read bytes
- Unix socket write bytes
- IPv4 socket read bytes
- IPv4 socket write bytes

These are then exposed in /proc/all and seen in SystemMonitor.
2019-12-01 17:40:27 +01:00
Andreas Kling 5859e16e53 Kernel: Use a dedicated thread state for wait-queued threads
Instead of using the generic block mechanism, wait-queued threads now
go into the special Queued state.

This fixes an issue where signal dispatch would unblock a wait-queued
thread (because signal dispatch unblocks blocked threads) and cause
confusion since the thread only expected to be awoken by the queue.
2019-12-01 16:02:58 +01:00
Andreas Kling 8b129476b1 Kernel: Use a WaitQueue in PATAChannel
Instead of waking up repeatedly to check if a disk operation has
finished, use a WaitQueue and wake it up in the IRQ handler.

This simplifies the device driver a bit, and makes it more responsive
as well :^)
2019-12-01 12:54:38 +01:00
Andreas Kling 9ed272ce98 Kernel: Disable interrupts while setting up a thread blocker
There was a race window between instantiating a WaitQueueBlocker and
setting the thread state to Blocked. If a thread was preempted between
those steps, someone else might try to wake the wait queue and find an
unblocked thread in a wait queue, which is not sane.
2019-12-01 12:47:33 +01:00
Andreas Kling f067730f6b Kernel: Add a WaitQueue for Thread queueing/waking and use it for Lock
The kernel's Lock class now uses a proper wait queue internally instead
of just having everyone wake up regularly to try to acquire the lock.

We also keep the donation mechanism, so that whenever someone tries to
take the lock and fails, that thread donates the remainder of its
timeslice to the current lock holder.

After unlocking a Lock, the unlocking thread calls WaitQueue::wake_one,
which unblocks the next thread in queue.
2019-12-01 12:07:43 +01:00
Andreas Kling 5b8cf2ee23 Kernel: Make syscall counters and page fault counters per-thread
Now that we show individual threads in SystemMonitor and "top",
it's also very nice to have individual counters for the threads. :^)
2019-11-26 21:37:38 +01:00
Andrew Kaster 618aebdd8a Kernel+LibPthread: pthread_create handles pthread_attr_t
Add an initial implementation of pthread attributes for:
  * detach state (joinable, detached)
  * schedule params (just priority)
  * guard page size (as skeleton) (requires kernel support maybe?)
  * stack size and user-provided stack location (4 or 8 MB only, must be aligned)

Add some tests too, to the thread test program.

Also, LibC: Move pthread declarations to sys/types.h, where they belong.
2019-11-18 09:04:32 +01:00
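From userspace, the newly supported attributes are used through the standard pthread calls, for example:

    #include <cstdio>
    #include <pthread.h>

    static void* worker(void*)
    {
        puts("hello from a thread with custom attributes");
        return nullptr;
    }

    int main()
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
        pthread_attr_setstacksize(&attr, 8 * 1024 * 1024); // 8 MB stack

        pthread_t tid;
        if (pthread_create(&tid, &attr, worker, nullptr) != 0)
            return 1;
        pthread_attr_destroy(&attr);
        return pthread_join(tid, nullptr);
    }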
Andreas Kling e34ed04d1e Kernel+LibPthread+LibC: Create secondary thread stacks in userspace
Have pthread_create() allocate a stack and pass it to the kernel,
instead of this work happening in the kernel. The more of this we can
do in userspace, the better.

This patch also unexposes the raw create_thread() and exit_thread()
syscalls since they are now only used by LibPthread anyway.
2019-11-17 17:29:20 +01:00
Andreas Kling 73d6a69b3f Kernel: Release the big process lock while yielding in sys$yield()
Otherwise, a thread calling sched_yield() will prevent other threads
in that process from entering the kernel.
2019-11-16 12:18:59 +01:00
Andreas Kling cb5021419e Kernel: Move Thread::m_joinee_exit_value into the JoinBlocker
There's no need for this to be a permanent Thread member. Just use a
reference in the JoinBlocker instead.
2019-11-14 21:04:34 +01:00
Andreas Kling 69efa3f630 Kernel+LibPthread: Implement pthread_join()
It's now possible to block until another thread in the same process has
exited. We can also retrieve its exit value, which is whatever value it
passed to pthread_exit(). :^)
2019-11-14 20:58:23 +01:00
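Retrieving another thread's exit value then looks the same as on other POSIX systems; a minimal example:

    #include <cstdio>
    #include <pthread.h>

    static void* worker(void*)
    {
        pthread_exit((void*)42); // this value is handed back to the joiner
    }

    int main()
    {
        pthread_t tid;
        if (pthread_create(&tid, nullptr, worker, nullptr) != 0)
            return 1;

        void* exit_value = nullptr;
        pthread_join(tid, &exit_value); // blocks until the thread has exited
        printf("worker exited with %ld\n", (long)exit_value);
        return 0;
    }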
Sergey Bugaev 1e1ddce9d8 Kernel: Unwind kernel stacks before dying
While executing in the kernel, a thread can acquire various resources
that need cleanup, such as locks and references to RefCounted objects.
This cleanup normally happens on the exit path, such as in destructors
for various RAII guards. But we weren't calling those exit paths when
killing threads that have been executing in the kernel, such as threads
blocked on reading or sleeping, thus causing leaks.

This commit changes how killing threads works. Now, instead of killing
a thread directly, one is supposed to call thread->set_should_die(),
which will unblock it and make it unwind the stack if it is blocked
in the kernel. Then, just before returning to the userspace, the thread
will automatically die.
2019-11-14 20:05:58 +01:00
Andreas Kling 083c5f8b89 Kernel: Rework Process::Priority into ThreadPriority
Scheduling priority is now set at the thread level instead of at the
process level.

This is a step towards allowing processes to set different priorities
for threads. There's no userspace API for that yet, since only the main
thread's priority is affected by sched_setparam().
2019-11-06 16:30:06 +01:00
Drew Stratford 5efbb4ae95 Kernel: Fix bug in Thread::dispatch_signal().
dispatch_signal() expected a RegisterDump on the kernel stack. However
in certain cases, like just after a clone, this was not the case and
dispatch_signal() would instead write to an incorrect user stack pointer.

We now use the thread's TSS in situations where the RegisterDump may not
be valid, fixing the issue.
2019-11-04 10:12:59 +01:00
Drew Stratford 44f22c99ef Thread.cpp: add method get_RegisterDump_from_stack().
This refactors some of the RegisterDump code from dispatch_signal
into a stand-alone function, allowing for better reuse.
2019-11-04 10:12:59 +01:00
Andreas Kling cc68654a44 Kernel+LibC: Implement clock_gettime() and clock_nanosleep()
Only the CLOCK_MONOTONIC clock is supported at the moment, and it only
has millisecond precision. :^)
2019-11-02 19:34:06 +01:00
Andreas Kling 904c871727 Kernel: Allow userspace stacks to grow up to 4 MB by default
Make userspace stacks lazily allocated and allow them to grow up to
4 megabytes. This avoids a lot of silly crashes we were running into
with software expecting much larger stacks. :^)
2019-10-31 13:57:07 +01:00
Andrew Kaster 98c86e5109 Kernel: Move E2BIG calculation from Thread to Process
Thread::make_userspace_stack_for_main_thread is only ever called from
Process::do_exec, after all the fun ELF loading and TSS setup has
occurred.

The calculations in there that check if the combined argv + envp
size will exceed the default stack size are not used in the rest of
the stack setup. So, it should be safe to move this to the beginning
of do_exec and bail early with -E2BIG, just like the man pages say.

Additionally, advertise this limit in limits.h to be a good POSIX.1
citizen. :)
2019-10-23 07:45:41 +02:00
Andreas Kling 40beb4c5c0 Kernel: Don't leak an FPU state buffer for every spawned thread
We were leaking 512 bytes of kmalloc memory for every new thread.
This patch fixes that, and also makes sure to zero out the FPU state
buffer after allocating it, and finally also makes the LogStream
operator<< for Thread look a little bit nicer. :^)
2019-10-13 14:36:55 +02:00
Drew Stratford c136fd3fe2 Kernel: Send SIGSEGV on seg-fault
Now programs can catch the SIGSEGV signal when they segfault.

This commit also introduced the send_urgent_signal_to_self method,
which is needed to send signals to a thread when handling exceptions
caused by the same thread.
2019-10-07 16:39:47 +02:00
Andreas Kling 7f9a33dba1 Kernel: Make Region single-owner instead of ref-counted
This simplifies the ownership model and makes Region easier to reason
about. Userspace Regions are now primarily kept by Process::m_regions.

Kernel Regions are kept in various OwnPtr<Region>s.

Regions now only ever get unmapped when they are destroyed.
2019-09-27 14:25:42 +02:00
Drew Stratford b65bedd610 Kernel: Change m_blockers to m_blocker.
Because of the way signals now work, there should
not be more than one blocker per thread. This
changes the Blocker and Thread classes to reflect
that.
2019-09-09 08:35:43 +02:00
Drew Stratford e529042895 Kernel: Remove redundant kernel/user signal stacks.
Due to the changes in signal handling m_kernel_stack_for_signal_handler_region
and m_signal_stack_user_region are no longer necessary, and so, have been
removed. I've also removed the similarly redundant m_tss_to_resume_kernel.
2019-09-09 08:35:43 +02:00
Andreas Kling ec6bceaa08 Kernel: Support thread-local storage
This patch adds support for TLS according to the x86 System V ABI.
Each thread gets a thread-specific memory region, and the GS segment
register always points _to a pointer_ to the thread-specific memory.

In other words, to access thread-local variables, userspace programs
start by dereferencing the pointer at [gs:0].

The Process keeps a master copy of the TLS segment that new threads
should use, and when a new thread is created, they get a copy of it.
It's basically whatever the PT_TLS program header in the ELF says.
2019-09-07 15:55:36 +02:00
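On i386, "dereferencing the pointer at [gs:0]" looks roughly like this from userspace (GCC/Clang inline assembly, x86-32 only; shown for illustration):

    #include <cstdint>

    // GS points at a slot whose first word holds the address of this thread's
    // TLS block, so a single load through %gs:0 yields the per-thread base
    // pointer that thread-local accesses are made relative to.
    static inline uintptr_t current_thread_pointer()
    {
        uintptr_t thread_pointer;
        asm volatile("movl %%gs:0, %0" : "=r"(thread_pointer));
        return thread_pointer;
    }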
Andreas Kling 73fdbba59c AK: Rename <AK/AKString.h> to <AK/String.h>
This was a workaround to be able to build on case-insensitive file
systems where it might get confused about <string.h> vs <String.h>.

Let's just not support building that way, so String.h can have an
objectively nicer name. :^)
2019-09-06 15:36:54 +02:00
Drew Stratford 81d0f96f20 Kernel: Use user stack for signal handlers.
This commit drastically changes how signals are handled.

In the case that an unblocked thread is signaled it works much
in the same way as previously. However, when a blocking syscall
is interrupted, we set up the signal trampoline on the user
stack, complete the blocking syscall, return down the kernel
stack and then jump to the handler. This means that from the
kernel stack's perspective, we only ever get one system call deep.

The signal trampoline has also been changed in order to properly
store the return value from system calls. This is necessary due
to the new way we exit from signaled system calls.
2019-09-05 16:37:09 +02:00
Drew Stratford 259a1d56b0 Thread: added member m_kernel_stack_top.
This value stores the top of a thread's kernel stack.
2019-09-05 16:37:09 +02:00
Andreas Kling e3f3c980bf IntrusiveList: Make Iterator::operator* return a T&
This makes iteration a little more pleasant :^)
2019-08-17 11:25:32 +02:00
Andreas Kling 1f9b8f0e7c Kernel: Don't create Function objects in the scheduling code
Each Function is a heap allocation, so let's make an effort to avoid
doing that during scheduling. Because of header dependencies, I had to
put the runnables iteration helpers in Thread.h, which is a bit meh but
at least this cuts out all the kmalloc() traffic in pick_next().
2019-08-07 20:43:54 +02:00
Andreas Kling 83fdad25ed Kernel: For signal-killed threads, dump backtrace from finalizer thread
Instead of dumping the dying thread's backtrace in the signal handling
code, wait until we're finalizing the thread. Since signalling happens
during scheduling, the less work we do there the better.

Basically the less that happens during a scheduler pass the better. :^)
2019-08-06 19:45:08 +02:00
Andreas Kling 5e01ebfc56 Kernel: Clean up thread stacks when a thread dies
We were forgetting where we put the userspace thread stacks, so we added a
member called Thread::m_userspace_thread_stack to keep track of them.

Then, in ~Thread(), we now deallocate the userspace, kernel and signal
stacks (if present.)

Out of curiosity, the "init_stage2" process doesn't have a kernel stack
which I found surprising. :^)
2019-08-01 20:17:12 +02:00
Andreas Kling 4316fa8123 Kernel: Dump backtrace to debugger for DefaultSignalAction::DumpCore.
This makes assertion failures generate backtraces again. Sorry to everyone
who suffered from the lack of backtraces lately. :^)

We share code with the /proc/PID/stack implementation. You can now get the
current backtrace for a Thread via Thread::backtrace(), and all the traces
for a Process via Process::backtrace().
2019-07-25 21:02:19 +02:00
Robin Burchell 342f7a6b0f Move runnable/non-runnable list control entirely over to Scheduler
This way, we can change how the scheduler works without having to change Thread too.
2019-07-22 09:42:39 +02:00
Robin Burchell dea7f937bf Scheduler: Allow reentry into block()
With the presence of signal handlers, it is possible that a thread might
be blocked multiple times. Picture for instance a signal handler using
read(), or wait() while the thread is already blocked elsewhere before
the handler is invoked.

To fix this, we turn m_blocker into a chain of handlers. Each block()
call now prepends to the list, and unblocking will only consider the
most recent (first) blocker in the chain.

Fixes #309
2019-07-21 12:42:22 +02:00
Robin Burchell d48c73b10a Thread: Cleanup m_blocker handling
The only two places we set m_blocker now are Thread::set_state(), and
Thread::block(). set_state is mostly just an issue of clarity: we don't
want to end up with state() != Blocked with an m_blocker, because that's
weird. It's also possible: if we yield, someone else may set_state() us.

We also now set_state() and set m_blocker under lock in block(), rather
than unlocking which might allow someone else to mess with our internals
while we're in the process of trying to block.

This seems to fix sending STOP & CONT causing a panic.

My guess as to what was happening is this:

    thread A blocks in select(): Blocking & m_blocker != nullptr
    thread B sends SIGSTOP: Stopped & m_blocker != nullptr
    thread B sends SIGCONT: we continue execution. Runnable & m_blocker != nullptr
    thread A tries to block in select() again:
        * sets m_blocker
        * unlocks (in block_helper)
        * someone else tries to unblock us? maybe from the old m_blocker? unclear -- clears m_blocker
        * sets Blocked (while unlocked!)

So, thread A is left with state Blocked & m_blocker == nullptr, leading
to the scheduler assert (m_blocker != nullptr) failing.

Long story short, let's do all our data management with the lock _held_.
2019-07-20 19:31:52 +02:00
Robin Burchell 96de90ceef Net: Merge Thread::wait_for_connect into LocalSocket (as the only place that uses it)
Also do this more like other blockers, don't call yield ourselves, as
block will do that for us.
2019-07-20 12:15:24 +02:00
Robin Burchell 833d444cd8 Thread: Return a result from block() indicating why the block terminated
And use this to return EINTR in various places; some of which we were
not handling properly before.

This might expose a few bugs in userspace, but should be more compatible
with other POSIX systems, and is certainly a little cleaner.
2019-07-20 12:15:24 +02:00
Robin Burchell 4547a301c4 Thread: Fix a regression introduced in 80a6df9022
Accidentally forgot to check the state parameter, which made this rather useless.

Bug found and cause identified by Andreas.
2019-07-19 16:06:51 +02:00
Robin Burchell 53262cd08b AK: Introduce IntrusiveList
And use it in the scheduler.

IntrusiveList is similar to InlineLinkedList, except that rather than
making assertions about the type (and requiring inheritance), it
provides an IntrusiveListNode type that can be used to put an instance
into many different lists at once.

As a proof of concept, port the scheduler over to use it. The only
downside here is that the "list" global needs to know the position of
the IntrusiveListNode member, so we have to position things a little
awkwardly to make that happen. We also move the runnable lists to
Thread, to avoid having to publicize the node.
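
For readers unfamiliar with the pattern, here is a stripped-down sketch
of the intrusive-list idea, including the "needs to know the position of
the node member" downside; AK's real IntrusiveList is templated and more
general than this.

    #include <cstddef>
    #include <cstdio>

    struct IntrusiveListNode {
        IntrusiveListNode* prev { nullptr };
        IntrusiveListNode* next { nullptr };
    };

    struct Thread {
        int tid { 0 };
        IntrusiveListNode runnable_list_node; // storage lives inside the Thread itself
    };

    struct RunnableList {
        IntrusiveListNode head; // sentinel node

        RunnableList() { head.prev = head.next = &head; }

        void append(Thread& thread)
        {
            auto& node = thread.runnable_list_node;
            node.prev = head.prev;
            node.next = &head;
            head.prev->next = &node;
            head.prev = &node;
        }

        // To get from a node back to its Thread, the list has to know where
        // the node member lives inside Thread -- the downside mentioned above.
        static Thread& thread_from_node(IntrusiveListNode& node)
        {
            return *reinterpret_cast<Thread*>(
                reinterpret_cast<char*>(&node) - offsetof(Thread, runnable_list_node));
        }
    };

    int main()
    {
        Thread a { 1 };
        Thread b { 2 };
        RunnableList runnable;
        runnable.append(a);
        runnable.append(b);
        for (auto* node = runnable.head.next; node != &runnable.head; node = node->next)
            std::printf("tid %d is runnable\n", RunnableList::thread_from_node(*node).tid);
        return 0;
    }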
2019-07-19 15:42:30 +02:00
Andreas Kling 218069f421 Kernel: Make the Thread::FileDescriptionBlocker constructor protected.
Nobody should ever construct one of these directly.
2019-07-19 13:32:56 +02:00
Andreas Kling 705cd2491c Kernel: Some small refinements to the thread blockers.
Committing some things my hands did while browsing through this code.

- Mark all leaf classes "final".
- FileDescriptionBlocker now stores a NonnullRefPtr<FileDescription>.
- FileDescriptionBlocker::blocked_description() now returns a reference.
- ConditionBlocker takes a Function&&.
2019-07-19 13:19:47 +02:00
Robin Burchell 80a6df9022 Thread: More composition in for_each :) 2019-07-19 13:19:02 +02:00
Robin Burchell e74dce65e6 Thread: Normalize all for_each constructs to use IterationDecision
This way a caller can abort the for_each early if they want.
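
A small sketch of the pattern; the Thread type and for_each_thread()
below are simplified stand-ins, but the IterationDecision usage is the
idea being normalized here.

    #include <cstdio>
    #include <vector>

    enum class IterationDecision {
        Continue,
        Break,
    };

    struct Thread {
        int tid { 0 };
    };

    template<typename Callback>
    void for_each_thread(std::vector<Thread>& threads, Callback callback)
    {
        for (auto& thread : threads) {
            if (callback(thread) == IterationDecision::Break)
                return; // the caller asked to abort the iteration early
        }
    }

    int main()
    {
        std::vector<Thread> threads { { 1 }, { 2 }, { 3 } };
        for_each_thread(threads, [](Thread& thread) {
            std::printf("visiting tid %d\n", thread.tid);
            if (thread.tid == 2)
                return IterationDecision::Break; // found what we wanted, stop here
            return IterationDecision::Continue;
        });
        return 0;
    }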
2019-07-19 13:19:02 +02:00
Robin Burchell cd76b691fb Kernel: Remove memory allocations from the new Blocker API 2019-07-19 11:03:22 +02:00
Robin Burchell 99c5377653 Kernel: Remove old block(State) API
The new API should always be used :)
2019-07-19 11:03:22 +02:00
Robin Burchell 762333ba95 Kernel: Restore state strings for block states
"Blocking" is not terribly informative, but now that everything is
ported over, we can force the blocker to provide us with a reason.

This does mean that to_string(State) needed to become a member, but
that's OK.
2019-07-19 11:03:22 +02:00
Robin Burchell b13f1699fc Kernel: Rename Condition state to Blocked now we only have one blocking mechanism :) 2019-07-19 11:03:22 +02:00
Robin Burchell d2ca91c024 Kernel: Convert BlockedSignal and BlockedLurking to the new Blocker mechanism
The last two of the old block states are gone :)
2019-07-19 11:03:22 +02:00
Robin Burchell 750dbe986d Kernel: Avoid allocations for Select vectors by using inline capacity
Good tip by Andreas :)
2019-07-19 11:03:22 +02:00
Robin Burchell 52743f9eec Kernel: Rename ThreadBlocker classes to avoid stutter
Thread::ThreadBlockerFoo is a lot less nice to read than Thread::FooBlocker
2019-07-19 11:03:22 +02:00
Robin Burchell 782e4ee6e1 Kernel: Port wait to ThreadBlocker 2019-07-19 11:03:22 +02:00
Robin Burchell 4f9ae9b970 Kernel: Port select to ThreadBlocker 2019-07-19 11:03:22 +02:00
Robin Burchell 32fcfb79e9 Kernel: Port sleep to ThreadBlocker 2019-07-19 11:03:22 +02:00
Robin Burchell 0c8813e6d9 Kernel: Introduce ThreadBlocker as a way to make unblocking neater :)
And port all the descriptor-based blocks over to it as a proof of concept.
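
A hedged user-space model of the pattern: each kind of block becomes an
object that can answer "may this thread wake up yet?". The class and
method names below are illustrative, not the kernel's actual layout.

    #include <cstdio>

    class Blocker {
    public:
        virtual ~Blocker() = default;
        virtual bool should_unblock() const = 0;
        virtual const char* state_string() const = 0;
    };

    class SleepBlocker final : public Blocker {
    public:
        explicit SleepBlocker(int wakeup_tick)
            : m_wakeup_tick(wakeup_tick)
        {
        }

        bool should_unblock() const override { return s_current_tick >= m_wakeup_tick; }
        const char* state_string() const override { return "Sleeping"; }

        static int s_current_tick;

    private:
        int m_wakeup_tick { 0 };
    };

    int SleepBlocker::s_current_tick = 0;

    int main()
    {
        SleepBlocker blocker(3);
        // Stand-in for the scheduler's per-tick pass over blocked threads.
        while (!blocker.should_unblock()) {
            std::printf("tick %d: still %s\n", SleepBlocker::s_current_tick, blocker.state_string());
            ++SleepBlocker::s_current_tick;
        }
        std::printf("tick %d: unblocked\n", SleepBlocker::s_current_tick);
        return 0;
    }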
2019-07-19 11:03:22 +02:00
Robin Burchell 4da2521606 Scheduler: Move thread unblocking out of a lambda to help make things more readable
We also use a switch to explicitly make sure we handle all cases properly.
2019-07-18 16:14:09 +02:00
Robin Burchell f2fdac789c Kernel: Add a new block state for accept() on a blocking socket
Rather than asserting, which really ruins everyone's day.
2019-07-18 10:56:49 +02:00
Andreas Kling b2e502e533 Kernel: Add Thread::block_until(Condition).
Replace the class-based snooze alarm mechanism with a per-thread callback.
This makes it easy to block the current thread on an arbitrary condition:

    void SomeDevice::wait_for_irq() {
        m_interrupted = false;
        current->block_until([this] { return m_interrupted; });
    }
    void SomeDevice::handle_irq() {
        m_interrupted = true;
    }

Use this in the SB16 driver, and in NetworkTask :^)
2019-07-14 14:54:54 +02:00
Andreas Kling 3073ea7d84 Kernel: Add support for the WSTOPPED flag to the waitpid() syscall.
This makes waitpid() return when a child process is stopped via a signal.
Use this in Shell to catch stopped children and return control to the
command line. :^)
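
A hedged usage sketch of that behaviour: portable POSIX spells the
waitpid() flag WUNTRACED, which is what the runnable example below uses,
while this commit adds the WSTOPPED spelling to Serenity's waitpid().

    #include <cstdio>
    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main()
    {
        pid_t pid = fork();
        if (pid == 0) {
            raise(SIGSTOP); // child stops itself, as if the user hit ^Z
            _exit(0);
        }

        int status = 0;
        waitpid(pid, &status, WUNTRACED); // returns when the child stops, not just when it exits
        if (WIFSTOPPED(status))
            std::printf("child %d stopped by signal %d; a shell would return to its prompt here\n",
                static_cast<int>(pid), WSTOPSIG(status));

        kill(pid, SIGCONT); // let the child run to completion
        waitpid(pid, &status, 0);
        return 0;
    }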

Fixes #298.
2019-07-14 11:35:49 +02:00
Andreas Kling 54e79a4640 Kernel: Make it easier to add Thread block states in the future. 2019-07-13 20:14:39 +02:00
Andreas Kling eca5c2bdf8 Kernel: Move VirtualAddress.h into VM/ 2019-07-09 15:04:45 +02:00
Andreas Kling 4d904340b4 Kernel: Don't interrupt blocked syscalls to dispatch ignored signals.
This was just causing syscalls to return EINTR for no reason.
2019-07-08 18:59:48 +02:00
Andreas Kling 27f699ef0c AK: Rename the common integer typedefs to make it obvious what they are.
These types can be picked up by including <AK/Types.h>:

* u8, u16, u32, u64 (unsigned)
* i8, i16, i32, i64 (signed)
2019-07-03 21:20:13 +02:00
Andreas Kling 550b0b062b AK: Rename RetainPtr.h => RefPtr.h, Retained.h => NonnullRefPtr.h. 2019-06-21 18:45:59 +02:00
Andreas Kling 90b1354688 AK: Rename RetainPtr => RefPtr and Retained => NonnullRefPtr. 2019-06-21 18:37:47 +02:00
Andreas Kling c1bbd40b9e Kernel: Rename "descriptor" to "description" where appropriate.
Now that FileDescription is called that, variables of that type should not
be called "descriptor". This is kinda wordy but we'll get used to it.
2019-06-13 22:03:04 +02:00
Andreas Kling 736092a087 Kernel: Move i386.{cpp,h} => Arch/i386/CPU.{cpp,h}
There's a ton of work that would need to be done before we could spin up on
another architecture, but let's at least try to separate things out a bit.
2019-06-07 20:02:01 +02:00
Andreas Kling 39d1a9ae66 Meta: Tweak .clang-format to not wrap braces after enums. 2019-06-07 17:13:23 +02:00
Andreas Kling e42c3b4fd7 Kernel: Rename LinearAddress => VirtualAddress. 2019-06-07 12:56:50 +02:00
Andreas Kling 08cd75ac4b Kernel: Rename FileDescriptor to FileDescription.
After reading a bunch of POSIX specs, I've learned that a file descriptor
is the number that refers to a file description, not the description itself.
So this patch renames FileDescriptor to FileDescription, and Process now has
FileDescription* file_description(int fd).
2019-06-07 09:36:51 +02:00
Robin Burchell 0dc9af5f7e Add clang-format file
Also run it across the whole tree to get everything using the One True Style.
We don't yet run this in an automated fashion as it's a little slow, but
there is a snippet to do so in makeall.sh.
2019-05-28 17:31:20 +02:00
Andreas Kling 64a4f3df69 Kernel: Add a Thread::set_thread_list() helper to keep logic in one place. 2019-05-18 20:28:04 +02:00
Andreas Kling 8c7d5abdc4 Kernel: Refactor thread scheduling a bit, breaking it into multiple lists.
There are now two thread lists, one for runnable threads and one for non-
runnable threads. Thread::set_state() is responsible for moving threads
between the lists.

Each thread also has a back-pointer to the list it's currently in.
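
A simplified user-space model of the two-list scheme; the kernel uses
linked lists and has more states than shown here.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    enum class State {
        Runnable,
        Blocked,
    };

    struct Thread {
        int tid { 0 };
        State state { State::Runnable };
    };

    static std::vector<Thread*> g_runnable_threads;
    static std::vector<Thread*> g_nonrunnable_threads;

    static std::vector<Thread*>& list_for(State state)
    {
        return state == State::Runnable ? g_runnable_threads : g_nonrunnable_threads;
    }

    // Moving between lists happens in exactly one place: the state change.
    static void set_state(Thread& thread, State new_state)
    {
        if (thread.state == new_state)
            return;
        auto& old_list = list_for(thread.state);
        old_list.erase(std::remove(old_list.begin(), old_list.end(), &thread), old_list.end());
        thread.state = new_state;
        list_for(new_state).push_back(&thread);
    }

    int main()
    {
        Thread thread { 1 };
        g_runnable_threads.push_back(&thread);
        set_state(thread, State::Blocked);
        std::printf("runnable: %zu, non-runnable: %zu\n",
            g_runnable_threads.size(), g_nonrunnable_threads.size());
        return 0;
    }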
2019-05-18 20:28:04 +02:00
Andreas Kling 45ff3a7e6a Kernel: Make Thread::kernel_stack_base() work for kernel processes. 2019-05-17 03:43:51 +02:00
Andreas Kling 9e2116ff6b Kernel: Signal stacks are lazily allocated so don't crash in getter. 2019-05-14 12:17:59 +02:00
Andreas Kling 486c675850 Kernel: Allocate kernel signal stacks using the region allocator as well. 2019-05-14 12:06:09 +02:00
Andreas Kling c8a216b107 Kernel: Allocate kernel stacks for threads using the region allocator.
This patch moves away from using kmalloc memory for thread kernel stacks.
This reduces pressure on kmalloc (16 KB per thread adds up fast) and
prevents kernel stack overflow from scribbling all over random unrelated
kernel memory.
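
As a user-space analogy for why a dedicated region is safer than a
kmalloc block, the sketch below gives a stack its own mapping plus an
explicit guard page, so an overflow faults immediately instead of
silently corrupting a neighbour. The guard page is this sketch's
addition, not something claimed by the commit.

    #include <cstddef>
    #include <cstdio>
    #include <sys/mman.h>
    #include <unistd.h>

    int main()
    {
        long page_size = sysconf(_SC_PAGESIZE);
        size_t stack_size = 16 * 1024;
        size_t total = stack_size + page_size; // one extra page as a guard

        char* base = static_cast<char*>(mmap(nullptr, total, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
        if (base == MAP_FAILED)
            return 1;

        // Make the lowest page inaccessible; stacks grow down towards it.
        mprotect(base, page_size, PROT_NONE);

        char* stack_top = base + total;
        std::printf("stack: %p..%p (guard page at %p)\n",
            static_cast<void*>(base + page_size), static_cast<void*>(stack_top),
            static_cast<void*>(base));

        munmap(base, total);
        return 0;
    }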
2019-05-14 11:51:00 +02:00
Andreas Kling 03da7046bd Kernel: Prepare Socket for becoming a File.
Make the Socket functions take a FileDescriptor& rather than a socket role
throughout the code. Also change threads to block on a FileDescriptor,
rather than either an fd index or a Socket.
2019-05-03 20:15:54 +02:00
Andreas Kling 0a0d739e98 Kernel: Make FIFO inherit from File. 2019-04-29 04:55:54 +02:00
Andreas Kling 5f63f8120c Kernel: Remove "restorer" field from SignalActionData.
I was originally implementing signals by looking at some man page about
sigaction() to see how it works. It seems like the restorer thingy is
system-specific and not required by POSIX, so let's get rid of it.
2019-04-20 19:32:14 +02:00
Andreas Kling 5562ab3f5a Kernel: Remove some more unnecessary Thread members. 2019-04-20 19:29:48 +02:00
Andreas Kling b2ebf6c798 Kernel: Shrink Thread by making kernel resume TSS heap-allocated. 2019-04-20 19:23:45 +02:00
Andreas Kling c59f8cd663 Kernel: Scheduler donations need to verify that the beneficiary is valid.
Add a Thread::is_thread(void*) helper that we can use to check that the
incoming donation beneficiary is a valid thread. The O(n) here is a bit sad
and we should eventually rethink the process/thread table data structures.
2019-04-17 12:41:51 +02:00
Andreas Kling 4132f645ee Kernel: Merge TSS.h into i386.h. 2019-04-14 04:39:56 +02:00
Andreas Kling 29d0412a06 Kernel: Remove system.h and make the uptime global a qword. 2019-04-14 01:29:14 +02:00
Andreas Kling a58d7fd8bb Kernel: Get rid of Kernel/types.h, separate LinearAddress/PhysicalAddress. 2019-04-06 14:29:29 +02:00
Andreas Kling e9f2cc3595 Kernel: Save/restore the SSE context on context switch. 2019-03-27 15:27:45 +01:00
Andreas Kling 239c0bd6a6 Kernel: Make block() and yield() automatically call Scheduler::yield().
This exposed some places where we were accidentally doing a double yield().
2019-03-24 01:52:10 +01:00
Andreas Kling fa7f532c35 Kernel: Add a Thread::all_threads() helper. 2019-03-23 23:50:49 +01:00
Andreas Kling e561ab1b0b Kernel+LibC: Add a simple create_thread() syscall.
It takes two parameters: a function pointer for the entry function,
and a void* argument to be passed to that function on the new thread.
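
Since create_thread() only exists on Serenity, here is a hedged stand-in
showing the same shape of call using POSIX pthread_create(): an entry
function plus an opaque void* argument delivered to it on the new thread.

    #include <cstdio>
    #include <pthread.h>

    static void* say_hello(void* argument)
    {
        std::printf("hello from a second thread: %s\n", static_cast<const char*>(argument));
        return nullptr;
    }

    int main()
    {
        static char message[] = "argument passed via the void*";
        pthread_t thread;
        if (pthread_create(&thread, nullptr, say_hello, message) != 0)
            return 1;
        pthread_join(thread, nullptr);
        return 0;
    }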
2019-03-23 22:59:08 +01:00
Andreas Kling 60d25f0f4a Kernel: Introduce threads, and refactor everything in support of it.
The scheduler now operates on threads, rather than on processes.
Each process has a main thread, and can have any number of additional
threads. The process exits when the main thread exits.

This patch doesn't actually spawn any additional threads, it merely
does all the plumbing needed to make it possible. :^)
2019-03-23 22:03:17 +01:00