Commit graph

6172 commits

Author SHA1 Message Date
Poul-Henning Kamp b0fc6220b8 Remove BIO_SETATTR from non-GEOM part of kernel as well. 2003-04-03 19:22:32 +00:00
Jeff Roberson a8949de20e - Keep separate statistics and run queues for different scheduling classes.
- Treat each class specially in kseq_{choose,add,rem}.  Let the rest of the
   code be less aware of scheduling classes.
 - Skip the interactivity calculation for non-TIMESHARE ksegrps.
 - Move slice and runq selection into kseq_add().  Uninline it now that it's
   big.
2003-04-03 00:29:28 +00:00
Peter Wemm cc66ebe2a9 Commit a partial lazy thread switch mechanism for i386. It isn't as lazy
as it could be and could do with some more cleanup.  Currently it's under
options LAZY_SWITCH.  What this does is avoid %cr3 reloads for short
context switches that do not involve another user process, i.e., we can
take an interrupt, switch to a kthread, and return to the user without
explicitly flushing the TLB.  However, this isn't as exciting as it could
be: the interrupt overhead is still high and too much still blocks on
Giant.  There are some debug sysctls, for stats and for an on/off switch.

The main problem with doing this has been "what if the process that you're
running on exits while we're borrowing its address space?" - in this case
we use an IPI to give it a kick when we're about to reclaim the pmap.

It's not compiled in unless you add the LAZY_SWITCH option.  I want to fix a
few more things and get some more feedback before turning it on by default.

This is NOT a replacement for Bosko's lazy interrupt stuff.  This was more
meant for the kthread case, while his was for interrupts.  Mine helps a
little for interrupts, but his helps a lot more.

The stats are enabled with options SWTCH_OPTIM_STATS - this has been a
pseudo-option for years, I just added a bunch of stuff to it.

One non-trivial change was to select a new thread before calling
cpu_switch() in the first place.  This allows us to catch the silly
case of doing a cpu_switch() to the current process.  This happens
uncomfortably often.  This simplifies a bit of the asm code in cpu_switch
(no longer have to call choosethread() in the middle).  This has been
implemented on i386 and (thanks to jake) sparc64.  The others will come
soon.  This is actually separate from the lazy switch stuff.

Glanced at by:  jake, jhb
2003-04-02 23:53:30 +00:00
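
A rough sketch of the decision being described, assuming hypothetical helper and field names (the committed i386 code lives in the assembler cpu_switch path, not in C like this):

    /*
     * Illustrative sketch of the lazy-switch decision; names and
     * fields are approximations, not the committed i386 code.
     */
    static u_int lazy_switches;     /* a SWTCH_OPTIM_STATS-style counter */

    static void
    sketch_choose_cr3(struct thread *newtd, struct pmap *curpmap)
    {
            struct pmap *newpm = vmspace_pmap(newtd->td_proc->p_vmspace);

            if (newpm == kernel_pmap || newpm == curpmap) {
                    /*
                     * Borrow the current address space.  Kernel
                     * mappings are present in every pmap, so a
                     * kthread can run without a TLB flush.
                     */
                    lazy_switches++;
            } else {
                    /* Another user process: %cr3 must be reloaded. */
                    load_cr3(vtophys(newpm->pm_pdir));
            }
    }
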
John Baldwin 6751370f6f Lock the process before sending it a SIGIO. Not doing so effectively
implements panic(9) when INVARIANTS is enabled.
2003-04-02 21:54:51 +00:00
Jeffrey Hsu c31548c820 Need to hold the same SMP lock for (knote) list traversal as for
list manipulation.  This lock also protects read-modify-write operations
on the pipe_state field.
2003-04-02 15:24:50 +00:00
Jeff Roberson 5053d272c2 - Make the interactivity calculator decay faster.
- Make the pcpu estimator update faster.
2003-04-02 08:22:33 +00:00
Jeff Roberson 98c9b132d1 - I meant divide by two and not shift by two in SCHED_PRI_NHALF. 2003-04-02 08:21:24 +00:00
Jake Burkholder cef57e7624 - Make casuptr return the old value of the location we're trying to update,
and change the umtx code to expect this.

Reviewed by:	jeff
2003-04-02 08:02:27 +00:00
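
For clarity, a minimal sketch of the new casuptr() contract; the real routine is atomic and machine-dependent, so this only illustrates the return-value semantics:

    /*
     * Sketch of the return-value contract only; the real casuptr()
     * is atomic and operates on user addresses.  The swap succeeded
     * iff the return value equals 'old'.
     */
    intptr_t
    cas_sketch(volatile intptr_t *p, intptr_t old, intptr_t new)
    {
            intptr_t cur = *p;

            if (cur == old)
                    *p = new;
            return (cur);
    }
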
Jeff Roberson 245f3abfd5 - Add support for KSEs with 0 slice values on the run queue. If we try
to select a KSE with a slice of 0 we will update its slice and insert it
   onto the next queue.
 - Pass the KSE instead of the ksegrp into sched_slice().  This more
   accurately reflects the behavior of the code.  Slices are granted to kses.
 - Add a function kseq_nice_min() which finds the smallest nice value
   assigned to the kseg of any KSE on the queue.
 - Rewrite the logic in sched_slice().  Add a large comment describing the
   new slice selection scheme.  To summarize, slices are assigned based on
   the nice value.  Priorities are still calculated based on the nice and
   interactivity of a process.  Slice sizes of 0 may be granted for KSEs
   whose nice value is 20 or further away from the lowest nice on the run
   queue.  Other nice values are scaled across the range [min, min+20].
   This fixes ULE's bad behavior with positively niced processes.
2003-04-02 06:46:43 +00:00
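
A hedged illustration of that slice-selection scheme, with invented constants and a made-up helper standing in for the real sched_slice() logic:

    /*
     * Illustrative only.  Scale the slice linearly across
     * [min_nice, min_nice + 20]: the KSE closest to the lowest
     * nice on the queue gets the largest slice, and anything 20
     * or more away gets 0 and will be resliced when chosen.
     */
    #define SKETCH_SLICE_MAX        100     /* hypothetical ticks */
    #define SKETCH_NICE_RANGE       20

    static int
    sketch_slice(int nice, int min_nice)
    {
            int dist = nice - min_nice;

            if (dist >= SKETCH_NICE_RANGE)
                    return (0);
            return (SKETCH_SLICE_MAX -
                (SKETCH_SLICE_MAX * dist) / SKETCH_NICE_RANGE);
    }
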
Jake Burkholder fc2fca74d8 - Fix UC_COPY_SIZE. Adding up the size of structure fields doesn't take
alignment into account.
- Return EJUSTRETURN from set_context on success to avoid clobbering the
  first 2 out registers with td_retval on sparc64.
2003-04-01 23:25:18 +00:00
Poul-Henning Kamp 817509273e #include <geom/geom_disk.h> 2003-04-01 19:00:38 +00:00
Poul-Henning Kamp af6ca7f4a9 Introduce bioq_flush() function. 2003-04-01 12:49:40 +00:00
Jeff Roberson c9dfa2e08b - p will be unused in cursig() if INVARIANTS is not defined. Access it
through td->td_proc to avoid the unused variable.

Spotted by:	Maxim Konovalov <maxim@macomnet.ru>
2003-04-01 09:07:36 +00:00
Jeff Roberson 4518589564 - Regen. 2003-04-01 02:34:21 +00:00
Jeff Roberson 8446303e01 - thr_exit() should no longer be called with Giant held. 2003-04-01 02:32:53 +00:00
Jeff Roberson f27bf63b8a - Mark the various thr syscalls as MP safe. Previously there was a bug if
this was not done, since thr_exit() unwinds Giant.
2003-04-01 02:32:07 +00:00
Jeff Roberson 2c10d16a4b - Borrow the KSE single threading code for exec and exit. We use the check
if (p->p_numthreads > 1) and not a flag because action is only necessary
   if there are other threads.  The rest of the system has no need to
   identify thr threaded processes.
 - In kern_thread.c use thr_exit1() instead of thread_exit() if P_THREADED
   is not set.
2003-04-01 01:26:20 +00:00
Jeff Roberson 8af830c374 - Regen for umtx. 2003-04-01 01:22:18 +00:00
Jeff Roberson 6eeb9653aa - Include umtx.h in files generated by makesyscalls.sh
- Add system calls for umtx.
2003-04-01 01:12:24 +00:00
Jeff Roberson 69404b5090 - Add an API for doing SMP-safe locks in userland.
- umtx_lock() is defined as an inline in umtx.h.  It tries to do an
   uncontested acquire of a lock, falling back to the _umtx_lock()
   system call if that fails.
 - umtx_unlock() is also an inline which falls back to _umtx_unlock() if the
   uncontested unlock fails.
 - Locks are keyed off of the thr_id_t of the currently running thread which
   is currently just the pointer to the 'struct thread' in kernel.
 - _umtx_lock() uses the proc pointer to synchronize access to blocked thread
   queues which are stored in the first blocked thread.
2003-04-01 01:10:42 +00:00
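
A minimal sketch of that fast-path/slow-path split, with field and constant names only approximating the real umtx.h:

    /*
     * Userland fast path: one atomic compare-and-set from
     * UMTX_UNOWNED to our thread id takes the lock uncontested;
     * only on failure do we enter the kernel via _umtx_lock(),
     * which queues us on the first blocked thread.
     */
    static __inline int
    umtx_lock_sketch(struct umtx *umtx, thr_id_t id)
    {
            if (atomic_cmpset_acq_ptr(&umtx->u_owner,
                (void *)UMTX_UNOWNED, (void *)id))
                    return (0);
            return (_umtx_lock(umtx));
    }

If the compare-and-set succeeds, the lock was taken without entering the kernel; the system call is paid only under contention.
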
Jeff Roberson 90e38817b7 - We now have to include umtx.h and ucontext.h in the system call related
headers.
2003-04-01 00:35:12 +00:00
Jeff Roberson d4a63cb9c8 - Regen for thr related system calls. 2003-04-01 00:34:29 +00:00
Jeff Roberson 8d5377e538 - Add the four thr related system calls. 2003-04-01 00:31:37 +00:00
Jeff Roberson 89bb1cef1d - Add two files to support the thr threading interface.
- sys/thr.h contains the userspace-visible API that is intended only for
   use in threading library packages.
 - kern/kern_thr.c contains thr system calls and other thr specific code.
2003-04-01 00:30:30 +00:00
Jeff Roberson 722547925e - Regen for the sig*wait* system calls. 2003-03-31 23:33:45 +00:00
Jeff Roberson a447cd8b28 - Define sigwait, sigtimedwait, and sigwaitinfo in terms of
kern_sigtimedwait() which is capable of supporting all of their semantics.
 - These should be POSIX compliant but more careful review is needed before
   we announce this.
2003-03-31 23:30:41 +00:00
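
As an illustration of the pattern, a hedged sketch of one wrapper (prototypes simplified; the real kern_sigtimedwait() signature may differ):

    /*
     * Illustrative wrapper only.  kern_sigtimedwait() does the
     * work; the three variants differ in whether they return the
     * siginfo and honor a timeout.
     */
    int
    sigwait_sketch(struct thread *td, const sigset_t *set, int *sig)
    {
            siginfo_t info;
            int error;

            /* sigwait: no timeout, only the signal number wanted. */
            error = kern_sigtimedwait(td, *set, &info, NULL);
            if (error == 0)
                    *sig = info.si_signo;
            return (error);
    }
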
Jeff Roberson 4093529dee - Move p->p_sigmask to td->td_sigmask. Signal masks will be per thread with
a follow on commit to kern_sig.c
 - signotify() now operates on a thread since unmasked pending signals are
   stored in the thread.
 - PS_NEEDSIGCHK moves to TDF_NEEDSIGCHK.
2003-03-31 22:49:17 +00:00
Julian Elischer 0d49bb4b30 Do NOT return from a non-interruptible cv_wait, falsely
claiming to have timed out.  I don't know what I was thinking...
2003-03-31 22:41:47 +00:00
Jeff Roberson da33176f39 - Mark signals which may be delivered to any thread in the process with
SA_PROC.  Signals without this flag should be directed to a particular
   thread if this is possible.
2003-03-31 22:12:09 +00:00
Jeff Roberson 1bf4700bff - Change trapsignal() to accept a thread and not a proc.
- Change all consumers to pass in a thread.

Right now this does not cause any functional changes but it will be important
later when signals can be delivered to specific threads.
2003-03-31 22:02:38 +00:00
Alan Cox 7be80f55ba Recent changes to uipc_cow.c have eliminated the need for some sf_buf-
related variables to be global.  Make them either local to sf_buf_init() or
static.
2003-03-31 06:25:42 +00:00
Poul-Henning Kamp d2a0822e9d Retire the "busy" field in bioqueues; it has served its purpose. 2003-03-30 10:16:31 +00:00
Poul-Henning Kamp d086f85ac4 Preparation commit before I start on the bioqueue lockdown:
Collect all the bits of bioqueue handling in subr_disk.c; vfs_bio.c is big
enough as it is and disksort already lives in subr_disk.c.
2003-03-30 08:51:23 +00:00
Jeff Roberson abb0e6da6b - We are not guaranteed that read-ahead blocks are not already in memory.
Check for B_DELWRI as well as B_CACHED before issuing I/O on a buffer.  This
   is especially important since we are changing the b_iocmd.
2003-03-30 02:57:32 +00:00
Alan Cox 9f6d45b1a4 Pass the vm_page's address to sf_buf_alloc(); map the vm_page as part
of sf_buf_alloc() instead of expecting sf_buf_alloc()'s caller to map it.

The ultimate reason for this change is to enable two optimizations:
(1) that there never be more than one sf_buf mapping a vm_page at a time
and (2) 64-bit architectures can transparently use their 1-1 virtual
to physical mapping (e.g., "K0SEG") avoiding the overhead of pmap_qenter()
and pmap_qremove().
2003-03-29 06:14:14 +00:00
Mike Silbersack 55e9f80d76 Add the m_defrag routine, as discussed on committers@. This
incarnation should address the concerns of all in the discussion,
and keeps statistics that show how much it is used.

MFC after:	2 weeks
2003-03-29 05:48:36 +00:00
John Baldwin 16088e4a88 Check for the PS_NEEDSIGCHK flag in the right flags field. 2003-03-28 18:08:57 +00:00
Mike Silbersack df8c7fc96e Allow m_dup_pkthdr to accept mbufs with attached clusters as
targets.

Submitted by:	bmilekic
2003-03-28 05:57:48 +00:00
Ian Dowse 6205bf3107 Add a checksum to the kernel message buffer, and update it every
time a character is written. Use this at boot time to reject the
existing buffer contents if they are corrupt. This fixes a problem
seen on some hardware (especially laptops) where the message buffer
gets partially corrupted during a short power cycle or reset, but
the msgbuf structure is left intact so it gets reused, resulting
in random junk and control characters appearing in dmesg and
/var/log/messages.

PR:		kern/28497
2003-03-28 02:50:10 +00:00
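
A hedged sketch of the mechanism, with field names that do not match the committed struct msgbuf:

    /*
     * Illustrative only.  The running checksum is kept in step with
     * every character written, so stale or corrupted contents fail
     * verification at boot and are discarded.
     */
    struct sketch_msgbuf {
            char    *msg_ptr;       /* the circular buffer */
            int     msg_size;
            u_int   msg_cksum;      /* running sum of all bytes */
    };

    static void
    sketch_addchar(struct sketch_msgbuf *mbp, int pos, char c)
    {
            /* Replace the old byte's contribution with the new one. */
            mbp->msg_cksum += (u_char)c - (u_char)mbp->msg_ptr[pos];
            mbp->msg_ptr[pos] = c;
    }

    static int
    sketch_is_valid(struct sketch_msgbuf *mbp)
    {
            u_int sum = 0;
            int i;

            for (i = 0; i < mbp->msg_size; i++)
                    sum += (u_char)mbp->msg_ptr[i];
            return (sum == mbp->msg_cksum);
    }
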
Tor Egge 5bbb806004 Add support for reading directly from file to userland buffer when the
O_DIRECT descriptor status flag is set and both offset and length are
multiples of the physical media sector size.
2003-03-26 23:40:42 +00:00
Tor Egge 6b08046175 Adjust the number of vnodes scanned by vlrureclaim() according to the
size of the vnode list.
2003-03-26 22:15:58 +00:00
Robert Watson f2538508f6 Permit debug.malloc.failure_rate to be specified using a tunable so
that the feature can be enabled during the boot process.  Note the
continued limitation that FreeBSD fails so rapidly with this setting
enabled that it's hard to narrow down particular failures for
correction; we really need per-malloc type failure rates.
2003-03-26 20:44:29 +00:00
Robert Watson eae870cdb4 Add a new kernel option, MALLOC_MAKE_FAILURES, which compiles
in a debugging feature causing M_NOWAIT allocations to fail at
a specified rate.  This can be useful for detecting poor
handling of M_NOWAIT: the most frequent problems I've bumped
into are unconditional dereference of the pointer even though
it's NULL, and hangs as a result of a lost event where memory
for the event couldn't be allocated.  Two sysctls are added:

debug.malloc.failure_rate

  How often to generate a failure: if set to 0 (default), this
  feature is disabled.  Otherwise, the frequency of failures --
  I've been using 10 (one in ten mallocs fails), but other
  popular settings might be much lower or much higher.

debug.malloc.failure_count

  Number of times a coerced malloc failure has occurred as a
  result of this feature.  Useful for tracking what might have
  happened and whether failures are being generated.

Useful possible additions: tying failure rate to malloc type,
printfs indicating the thread that experienced the coerced
failure.

Reviewed by:	jeffr, jhb
2003-03-26 20:18:40 +00:00
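
A hedged sketch of how such a knob can hook the M_NOWAIT path (names invented; the real hooks are in kern_malloc.c, and real_malloc() here is a hypothetical stand-in for the normal allocation path):

    /*
     * Illustrative only.  Every 'malloc_failure_rate'-th M_NOWAIT
     * allocation fails as if memory were exhausted, and the
     * injected failure is counted for the operator.
     */
    static int      malloc_failure_rate;    /* 0 = disabled */
    static int      malloc_failure_count;
    static int      malloc_nowait_count;

    void *
    sketch_malloc(size_t size, int flags)
    {
            if ((flags & M_NOWAIT) && malloc_failure_rate != 0) {
                    malloc_nowait_count++;
                    if ((malloc_nowait_count % malloc_failure_rate) == 0) {
                            malloc_failure_count++;
                            return (NULL);
                    }
            }
            return (real_malloc(size, flags));      /* hypothetical */
    }
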
Tor Egge 128a0bb7e9 fp->f_offset doesn't need any protection when it isn't accessed. 2003-03-26 19:21:12 +00:00
Robert Watson 5e7ce4785f Modify the mac_init_ipq() MAC Framework entry point to accept an
additional flags argument to indicate blocking disposition, and
pass in M_NOWAIT from the IP reassembly code to indicate that
blocking is not OK when labeling a new IP fragment reassembly
queue.  This should eliminate some of the WITNESS warnings that
have started popping up since fine-grained IP stack locking
started going in; if memory allocation fails, the creation of
the fragment queue will be aborted.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2003-03-26 15:12:03 +00:00
John Baldwin 7908a1d477 Remove extraneous check. We are never going to return from copyin/copyout
on thread A's stack while actually being thread B.
2003-03-25 20:13:24 +00:00
Matthew N. Dodd c844066969 Give print_child a default method. 2003-03-25 04:32:52 +00:00
Jake Burkholder 227f9a1c58 - Add vm_paddr_t, a physical address type. This is required for systems
where physical addresses are larger than virtual addresses, such as i386s
  with PAE.
- Use this to represent physical addresses in the MI vm system and in the
  i386 pmap code.  This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
  detection code, and due to kvtop returning vm_paddr_t instead of u_long.

Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.

Sponsored by:	DARPA, Network Associates Laboratories
Discussed with:	re, phk (cdevsw change)
2003-03-25 00:07:06 +00:00
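
Purely for illustration, the kind of divergence the new type allows; hypothetical, since as noted above vm_paddr_t was still identical to vm_offset_t everywhere at this point:

    /*
     * Hypothetical future definition: with PAE, a 32-bit i386
     * kernel can address up to 64GB of physical memory, so the
     * physical-address type must be wider than the virtual one.
     */
    #ifdef PAE
    typedef uint64_t        vm_paddr_t;     /* 36-bit physical addresses */
    #else
    typedef vm_offset_t     vm_paddr_t;     /* same width as virtual */
    #endif
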
John Baldwin 75b8b3b25c Replace the at_fork, at_exec, and at_exit functions with the slightly more
flexible process_fork, process_exec, and process_exit eventhandlers.  This
reduces code duplication and also means that I don't have to go duplicate
the eventhandler locking three more times for each of at_fork, at_exec, and
at_exit.

Reviewed by:	phk, jake, almost complete silence on arch@
2003-03-24 21:15:35 +00:00
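
For reference, a hedged sketch of consuming one of these events with the existing EVENTHANDLER_REGISTER macro; the hook itself is a placeholder:

    /*
     * Illustrative consumer only.  EVENTHANDLER_REGISTER and the
     * process_exit event are real; the hook body is made up.
     */
    static void
    sketch_exit_hook(void *arg, struct proc *p)
    {
            printf("pid %d is exiting\n", p->p_pid);
    }

    static eventhandler_tag sketch_exit_tag;

    static void
    sketch_register(void)
    {
            sketch_exit_tag = EVENTHANDLER_REGISTER(process_exit,
                sketch_exit_hook, NULL, EVENTHANDLER_PRI_ANY);
    }
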
John Baldwin 959d22329a - Remove witness_dead and just use witness_watch instead. If witness_watch
is set to 0, it now has the same effect as setting witness_dead used to
  have.
- Added a sysctl handler that allows root to change witness_watch from a
  non-zero value to zero to disable witness at runtime.  Note that you
  can't turn witness back on once it is off.  You can only turn it off as
  a one-way switch.
- Added a comment describing the possible values of witness_watch.
2003-03-24 21:03:53 +00:00