SVNews r327413

NOTE: This service is experimental and subject to change! Use at your own risk!

2017-12-31 05:06:35 - r327413 by mjg (Mateusz Guzik)

Complete list of files affected by revision r327413:


  MODIFY   /stable/11
  MODIFY   /stable/11/sys/kern/kern_mutex.c
  MODIFY   /stable/11/sys/kern/kern_rwlock.c
  MODIFY   /stable/11/sys/kern/kern_sx.c
  MODIFY   /stable/11/sys/sys/lock.h
  MODIFY   /stable/11/sys/sys/mutex.h
  MODIFY   /stable/11/sys/sys/rwlock.h
  MODIFY   /stable/11/sys/sys/sx.h

Commit message:

MFC r320561,r323236,r324041,r324314,r324609,r324613,r324778,r324780,r324787,
  r324803,r324836,r325469,r325706,r325917,r325918,r325919,r325920,r325921,
  r325922,r325925,r325963,r326106,r326107,r326110,r326111,r326112,r326194,
  r326195,r326196,r326197,r326198,r326199,r326200,r326237:

  rwlock: perform the typically false td_rw_rlocks check later

  Check if the lock is available first instead.
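  A minimal userspace sketch of the reordering (the lock-word encoding, the
  names and the fallback handling are assumptions, not the kernel code): try
  the common "lock is free" case first and only look at the per-thread
  read-lock counter on the failure path.

      #include <stdatomic.h>
      #include <stdbool.h>

      #define RW_UNLOCKED   0x1UL     /* hypothetical "free" encoding */
      #define RW_ONE_READER 0x11UL    /* hypothetical "one reader" encoding */

      struct thread { int td_rw_rlocks; };   /* read locks held by this thread */
      struct rwlock { _Atomic unsigned long rw_lock; };

      bool
      rw_rlock_fast(struct rwlock *rw, struct thread *td)
      {
          unsigned long v = RW_UNLOCKED;

          /* Common case first: the lock is free, grab it. */
          if (atomic_compare_exchange_strong(&rw->rw_lock, &v, RW_ONE_READER)) {
              td->td_rw_rlocks++;
              return true;
          }
          /*
           * Only now perform the typically false td_rw_rlocks check;
           * recursion and contention are left to the slow path.
           */
          if (td->td_rw_rlocks != 0) {
              /* recursive read acquisition would be handled here */
          }
          return false;   /* fall back to the slow path */
      }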

=============

  Sprinkle __read_frequently on a few obvious places.

  Note that some of the annotated variables should probably change their types
  to something smaller, preferably bit-sized.
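
  A hedged illustration of the pattern (the fallback macro definition and the
  example globals are assumptions for a standalone build): frequently-read,
  rarely-written variables get grouped into their own section so they do not
  share cache lines with data that is written to.

      /* Fallback definition so this sketch builds outside the kernel tree. */
      #ifndef __read_frequently
      #define __read_frequently __attribute__((__section__(".data.read_frequently")))
      #endif

      /* Hypothetical examples of hot, rarely-written globals. */
      int debug_checks_on __read_frequently = 0;
      int spin_budget     __read_frequently = 128;

      int
      debug_checks_enabled(void)
      {
          return debug_checks_on;
      }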

=============

  mtx: drop the tid argument from _mtx_lock_sleep

  tid must be equal to curthread, and the target routine was already reading
  curthread anyway, so deriving it there costs nothing. Not passing it as a
  parameter allows for slightly shorter code in the callers.
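
  A before/after sketch of the calling convention (userspace model; the names
  and the curthread stand-in are assumptions): the slow path derives the
  thread pointer itself, so every call site passes one argument less.

      #include <stdint.h>

      struct thread { int td_dummy; };
      struct mtx { uintptr_t mtx_lock; };

      /* Stand-in for the kernel's curthread accessor. */
      static struct thread curthread_storage;
      #define curthread (&curthread_storage)

      /* Before: every caller loaded curthread and passed it down. */
      static void
      mtx_lock_sleep_old(struct mtx *m, uintptr_t tid)
      {
          (void)m; (void)tid;
          /* ... contended-path work keyed on tid ... */
      }

      /* After: the routine reads curthread itself. */
      static void
      mtx_lock_sleep_new(struct mtx *m)
      {
          uintptr_t tid = (uintptr_t)curthread;   /* same value the caller would pass */

          (void)m; (void)tid;
          /* ... contended-path work keyed on tid ... */
      }

      void
      mtx_lock_example(struct mtx *m)
      {
          mtx_lock_sleep_old(m, (uintptr_t)curthread);  /* old: extra argument */
          mtx_lock_sleep_new(m);                        /* new: shorter call site */
      }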

=============

  locks: partially tidy up waiting on readers

  Spin first instead of instantly re-reading, and don't re-read after
  spinning is finished - the state is already known.

  Note the code is subject to significant changes later.

=============

  locks: take the number of readers into account when waiting

  Previous code would always spin once before checking the lock. But a lock
  with e.g. 6 readers is not going to become free in the duration of one spin
  even if they start draining immediately.

  Conservatively perform one spin for each reader.

  Note that the total number of allowed spins is still extremely small and is
  subject to change later.
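
  A rough userspace model of the waiting policy (the lock-word encoding, the
  cap and the cpu_spinwait() stand-in are assumptions): wait roughly in
  proportion to the number of readers still holding the lock, then re-check
  once.

      #include <stdatomic.h>

      #define RW_READERS_SHIFT 4
      #define RW_READERS(v)    ((v) >> RW_READERS_SHIFT)  /* hypothetical encoding */
      #define SPIN_CAP         16                         /* total budget stays tiny */

      static inline void
      cpu_spinwait(void)
      {
          /* stand-in for a pause/yield instruction */
      }

      void
      wait_for_readers(_Atomic unsigned long *lockp)
      {
          unsigned long v = atomic_load_explicit(lockp, memory_order_relaxed);
          unsigned long n = RW_READERS(v);

          if (n > SPIN_CAP)
              n = SPIN_CAP;
          /* One short spin per reader instead of exactly one overall. */
          while (n-- > 0)
              cpu_spinwait();
          /* The caller re-checks the lock state once, after the bounded wait. */
      }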

=============

  mtx: change MTX_UNOWNED from 4 to 0

  The value is spread all over the kernel and zeroing a register is
  cheaper/shorter than setting it up to an arbitrary value.

  Reduces amd64 GENERIC-NODEBUG .text size by 0.4%.
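
  A small sketch of why 0 pays off (userspace model, assumed encoding): the
  unowned value shows up in every fast-path cmpxchg, and loading 0 into a
  register is a shorter xor instruction rather than a move of an arbitrary
  immediate.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      #define MTX_UNOWNED 0UL   /* previously 4; 0 lets the compiler emit xor reg,reg */

      struct mtx { _Atomic uintptr_t mtx_lock; };

      bool
      mtx_try_lock(struct mtx *m, uintptr_t tid)
      {
          uintptr_t v = MTX_UNOWNED;   /* expected "free" value, now simply zero */

          return atomic_compare_exchange_strong(&m->mtx_lock, &v, tid);
      }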

=============

  mtx: fix up owner_mtx after r324609

  Now that MTX_UNOWNED is 0, the test was always false.

=============

  mtx: clean up locking spin mutexes

  1) shorten the fast path by pushing the lockstat probe to the slow path
  2) test for kernel panic only after it turns out we will have to spin,
  in particular test only after we know we are not recursing
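
  An ordering sketch of the two points above (userspace model; the panic
  state, recursion handling and the lockstat probe are reduced to stubs): the
  fast path is just the cmpxchg, and the panic test only runs once it is
  clear the lock is contended and not recursed on.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      struct mtx { _Atomic uintptr_t mtx_lock; };

      static bool panicking;                          /* stand-in for a stopped scheduler */
      static void lockstat_acquire_probe(struct mtx *m) { (void)m; }
      static void spin_until_free(struct mtx *m, uintptr_t tid) { (void)m; (void)tid; }

      static void
      mtx_lock_spin_slow(struct mtx *m, uintptr_t tid, uintptr_t v)
      {
          if (v == tid)
              return;                     /* recursed: no spinning, no panic check */
          if (panicking)
              return;                     /* 2) only checked once we would spin */
          spin_until_free(m, tid);
          lockstat_acquire_probe(m);      /* 1) the probe lives in the slow path */
      }

      void
      mtx_lock_spin(struct mtx *m, uintptr_t tid)
      {
          uintptr_t v = 0;                /* MTX_UNOWNED */

          /* Short fast path: just the cmpxchg, no lockstat branch. */
          if (atomic_compare_exchange_strong(&m->mtx_lock, &v, tid))
              return;
          mtx_lock_spin_slow(m, tid, v);
      }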

=============

  mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes

  There is nothing panic-breaking to do in the unlock case and the lock
  case will fall back to the slow path, which does the check already.

=============

  rwlock: reduce lockstat branches in the slowpath

=============

  mtx: fix up UP build after r324778

=============

  mtx: implement thread lock fastpath

=============

  rwlock: fix up compilation without KDTRACE_HOOKS after r324787

=============

  rwlock: use fcmpset for setting RW_LOCK_WRITE_SPINNER
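
  A userspace sketch of the cmpset-to-fcmpset pattern this refers to (the
  flag value and loop shape are assumptions): the failed compare-and-swap
  hands back the freshly observed lock word, so the retry loop needs no
  separate re-read.

      #include <stdatomic.h>
      #include <stdbool.h>

      #define RW_LOCK_WRITE_SPINNER 0x4UL   /* hypothetical flag bit */

      bool
      set_write_spinner(_Atomic unsigned long *lockp)
      {
          unsigned long v = atomic_load_explicit(lockp, memory_order_relaxed);

          /*
           * Like fcmpset: on failure 'v' is updated with the current lock
           * word, so the loop retries without an explicit re-read.
           */
          while ((v & RW_LOCK_WRITE_SPINNER) == 0) {
              if (atomic_compare_exchange_weak(lockp, &v,
                  v | RW_LOCK_WRITE_SPINNER))
                  return true;
          }
          return false;   /* somebody else already set the flag */
      }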

=============

  sx: avoid branches in the slow path if lockstat is disabled
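
  A sketch of the pattern used here and in the matching rwlock change below
  (userspace model; the flag name and probe stubs are assumptions): sample an
  "is lockstat on" flag once on entry and guard all bookkeeping with that
  local, so the disabled case pays almost nothing.

      #include <stdbool.h>
      #include <stdint.h>

      /* Stand-ins for the lockstat machinery. */
      static bool lockstat_on;
      static uint64_t read_timestamp(void) { return 0; }
      static void lockstat_record(uint64_t spent) { (void)spent; }

      void
      sx_xlock_slow_sketch(void)
      {
          /* Sampled once; the disabled case takes two predictable branches total. */
          bool doing_lockprof = lockstat_on;
          uint64_t start = 0;

          if (doing_lockprof)
              start = read_timestamp();

          /*
           * ... the actual waiting/sleeping loop goes here, free of
           *     lockstat branches when profiling is disabled ...
           */

          if (doing_lockprof)
              lockstat_record(read_timestamp() - start);
      }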

=============

  rwlock: avoid branches in the slow path if lockstat is disabled

=============

  locks: pull up PMC_SOFT_CALLs out of slow path loops

=============

  mtx: unlock before traversing threads to wake up

  This shortens the lock hold time while not affecting correctness. All the
  woken up threads end up competing anyway and can lose the race against a
  completely unrelated thread getting the lock.
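
  A userspace model of the reordering (the waiter list and wakeup are
  stand-ins for the turnstile machinery): the lock word is released first and
  the queued threads are walked afterwards, so the hold time ends before the
  wakeups start.

      #include <stdatomic.h>
      #include <stdint.h>

      #define MTX_UNOWNED 0UL

      struct waiter { struct waiter *next; };
      struct mtx {
          _Atomic uintptr_t mtx_lock;
          struct waiter *waiters;   /* protected by a separate turnstile-style lock */
      };

      static void wakeup_thread(struct waiter *w) { (void)w; }

      void
      mtx_unlock_slow_sketch(struct mtx *m)
      {
          struct waiter *w, *next;

          /* Release the lock word first: the hold time ends here. */
          w = m->waiters;
          m->waiters = NULL;
          atomic_store(&m->mtx_lock, MTX_UNOWNED);

          /*
           * Now wake everybody up.  They compete for the lock and may lose
           * to an unrelated thread that slips in, which is fine either way.
           */
          for (; w != NULL; w = next) {
              next = w->next;
              wakeup_thread(w);
          }
      }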

=============

  rwlock: unlock before traversing threads to wake up

  While here perform a minor cleanup of the unlock path.

=============

  sx: perform a minor cleanup of the unlock slowpath

  No functional changes.

=============

  mtx: add missing parts of the diff in r325920

  Fixes build breakage.

=============

  locks: fix compilation issues without SMP or KDTRACE_HOOKS

=============

  locks: remove the file + line argument from internal primitives when not
  used

  The pair is of use only in debug or LOCKPROF kernels, but was passed
  (zeroed) for many locks even in production kernels.

  While here whack the tid argument from wlock hard and xlock hard.

  There is no kbi change of any sort - "external" primitives still accept the
  pair.
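
  A sketch of the interface split (the names, the debug macro and the
  placeholder body are assumptions; the real external symbols keep their
  signatures): the external primitive still accepts file/line for KBI
  stability, while the internal worker only takes them when a debug feature
  actually consumes them.

      #include <stdint.h>

      struct mtx { uintptr_t mtx_lock; };

      /* Internal worker: file/line only exist when a debug feature consumes them. */
      #if defined(LOCK_DEBUG) || defined(LOCK_PROFILING)
      static void
      mtx_lock_internal(struct mtx *m, const char *file, int line)
      {
          (void)file; (void)line;
          m->mtx_lock = 1;              /* placeholder for the real acquisition */
      }
      #define MTX_LOCK_INTERNAL(m, f, l) mtx_lock_internal((m), (f), (l))
      #else
      static void
      mtx_lock_internal(struct mtx *m)
      {
          m->mtx_lock = 1;              /* placeholder for the real acquisition */
      }
      #define MTX_LOCK_INTERNAL(m, f, l) mtx_lock_internal((m))
      #endif

      /* External primitive: signature unchanged, so no KBI change of any sort. */
      void
      mtx_lock_flags_external(struct mtx *m, int opts, const char *file, int line)
      {
          (void)opts;
          MTX_LOCK_INTERNAL(m, file, line);
      }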

=============

  locks: pass the found lock value to unlock slow path

  This avoids an explicit read later.

  While here whack the cheaply obtainable 'tid' argument.
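
  A minimal sketch of the idea (userspace model, assumed names): the failed
  fast-path cmpxchg already yields the current lock word, so that value is
  handed to the slow path instead of being read again there.

      #include <stdatomic.h>
      #include <stdint.h>

      #define MTX_UNOWNED 0UL

      struct mtx { _Atomic uintptr_t mtx_lock; };

      static void
      mtx_unlock_slow(struct mtx *m, uintptr_t v)
      {
          /*
           * 'v' is the lock word the fast path just observed, so no explicit
           * re-read is needed; a tid, if wanted, is cheap to derive locally.
           */
          (void)m; (void)v;
          /* ... wake up waiters based on the flags encoded in v ... */
      }

      void
      mtx_unlock_sketch(struct mtx *m, uintptr_t tid)
      {
          uintptr_t v = tid;

          /* Fast path: expected owner with no waiter flags set. */
          if (atomic_compare_exchange_strong(&m->mtx_lock, &v, MTX_UNOWNED))
              return;
          /* On failure 'v' now holds the found value; pass it along. */
          mtx_unlock_slow(m, v);
      }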

=============

  rwlock: don't check for curthread's read lock count in the fast path

=============

  rwlock: unbreak WITNESS builds after r326110

=============

  sx: unbreak debug after r326107

  An assertion was modified to use the found value, but it was not updated to
  handle a race where blocked threads appear after the entrance to the func.

  Move the assertion down to the area protected with sleepq lock where the
  lock is read anyway. This does not affect coverage of the assertion and
  is consistent with what rw locks are doing.

=============

  rwlock: stop re-reading the owner when going to sleep

=============

  locks: retry turnstile/sleepq loops on failed cmpset

  In order to go to sleep threads set waiter flags, but that can spuriously
  fail e.g. when a new reader arrives. Instead of unlocking everything and
  looping back, re-evaluate the new state while still holding the lock
  necessary to go to sleep.
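
  A rough model of the retry structure (userspace sketch; the sleepq routines
  and the waiters flag are stand-ins): when setting the flag fails, the
  freshly observed lock word is re-evaluated while the sleep-queue lock is
  still held, rather than unlocking everything and starting over.

      #include <stdatomic.h>
      #include <stdbool.h>

      #define LK_WAITERS 0x2UL                    /* hypothetical waiters flag */

      static void sleepq_lock(void)   { }
      static void sleepq_unlock(void) { }
      /* Stand-in: would sleep and drop the queue lock while blocked. */
      static void sleepq_block(void)  { sleepq_unlock(); }
      static bool can_proceed(unsigned long v) { return v == 0; }

      void
      lock_hard_sketch(_Atomic unsigned long *lockp)
      {
          unsigned long v;

          for (;;) {
              sleepq_lock();
              v = atomic_load(lockp);
              for (;;) {
                  if (can_proceed(v)) {
                      sleepq_unlock();
                      return;
                  }
                  /* Advertise that we are about to go to sleep. */
                  if (atomic_compare_exchange_weak(lockp, &v, v | LK_WAITERS))
                      break;
                  /*
                   * The cmpset failed, e.g. a new reader arrived.  'v' now
                   * holds the fresh state: re-evaluate it right here, still
                   * holding the sleepq lock, instead of looping back.
                   */
              }
              sleepq_block();
              /* Woken up: start over and re-examine the lock. */
          }
      }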

=============

  sx: change sunlock to wake waiters up if it locked sleepq

  sleepq is only locked if curthread is the last reader. By the time the lock
  gets acquired new readers could have arrived. The previous code would unlock
  and loop back. This results in spurious relocking of sleepq.

  This is a step towards xadd-based unlock routine.

=============

  rwlock: add __rw_try_{r,w}lock_int

=============

  rwlock: fix up compilation of the previous change

  Committed the wrong version of the patch.

=============

  Convert in-kernel thread_lock_flags calls to thread_lock when debug is
  disabled

  The flags argument is not used in this case.
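
  A small sketch of why the conversion is harmless (the macro shapes are
  assumptions, not the exact kernel definitions): with debugging disabled the
  flags argument is ignored, so a plain thread_lock(td) call is equivalent
  and shorter.

      struct thread { int td_dummy; };

      static void
      thread_lock_flags_impl(struct thread *td, int opts)
      {
          (void)td; (void)opts;       /* with debug off the flags go unused */
      }

      /* Assumed macro shapes, shown only to illustrate the equivalence. */
      #define thread_lock_flags(td, opts) thread_lock_flags_impl((td), (opts))
      #define thread_lock(td)             thread_lock_flags_impl((td), 0)

      void
      example(struct thread *td)
      {
          thread_lock_flags(td, 0);   /* this call site ... */
          thread_lock(td);            /* ... is equivalent to this shorter one */
      }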

=============

  Add the missing lockstat check for thread lock.

=============

  rw: fix runlock_hard when new readers show up

  When waiters/writer spinner flags are set no new readers can show up unless
  they already have a different rw lock read locked. The change in r326195
  failed to take that into account - in the presence of new readers it would
  spin until they all drain, which could lead to trouble if e.g. they go off
  CPU and cannot get scheduled because of this thread.

 

