1. 22 Dec, 2017 1 commit
    • blk-mq: improve heavily contended tag case · 4e5dff41
      Jens Axboe authored
      Even with a number of waitqueues, we can get into a situation where we
      are heavily contended on the waitqueue lock. I got a report on SPC-1
      where we're spending seconds doing this. Arguably the use case is
      nasty: I reproduce it with one device and 1000 threads banging on the
      device. But that doesn't mean we shouldn't handle it better.
      
      What ends up happening is that a thread will fail to get a tag, add
      itself to the waitqueue, and subsequently get woken up when a tag is
      freed - only to find itself going back to sleep on the waitqueue.
      
      Instead of waking all threads, use an exclusive wait and wake up only
      as many threads as the sbitmap wake batch count. This works well for
      me (a massive improvement for this use case) and survives basic
      testing, but I haven't fully verified it yet.
      
      An additional improvement is running the queue and checking for a new
      tag BEFORE needing to add ourselves to the waitqueue.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4e5dff41
  2. 14 Apr, 2017 1 commit
    • sbitmap: add sbitmap_get_shallow() operation · c05e6673
      Omar Sandoval authored
      This operation supports the use case of limiting the number of bits that
      can be allocated for a given operation. Rather than setting aside some
      bits at the end of the bitmap, we can set aside bits in each word of the
      bitmap. This means we can keep the allocation hints spread out and
      support sbitmap_resize() nicely at the cost of lower granularity for the
      allowed depth.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c05e6673
  3. 02 Mar, 2017 1 commit
    • kasan, sched/headers: Uninline kasan_enable/disable_current() · af8601ad
      Ingo Molnar authored
      <linux/kasan.h> is a low-level header that is included early in
      affected kernel headers, but it in turn includes <linux/sched.h>,
      which complicates the cleanup of sched.h dependencies.
      
      Yet kasan.h has almost no real need for sched.h: its only use of
      scheduler functionality is in two inline functions that are not
      called very frequently - so uninline kasan_enable_current() and
      kasan_disable_current().
      
      Also add a <linux/sched.h> dependency to a .c file that depended
      on kasan.h including it.
      
      This paves the way to remove the <linux/sched.h> include from kasan.h.
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      af8601ad
  4. 27 Jan, 2017 1 commit
  5. 18 Jan, 2017 2 commits
    • sbitmap: fix wakeup hang after sbq resize · 6c0ca7ae
      Omar Sandoval authored
      When we resize a struct sbitmap_queue, we update the wakeup batch size,
      but we don't update the wait count in the struct sbq_wait_states. If we
      resized down from a size which could use a bigger batch size, these
      counts could be too large and cause us to miss necessary wakeups. To fix
      this, update the wait counts when we resize (ensuring some careful
      memory ordering so that it's safe w.r.t. concurrent clears).
      
      This also fixes a theoretical issue where two threads could end up
      bumping the wait count up by the batch size, which could also
      potentially lead to hangs.
      Reported-by: Martin Raiber <martin@urbackup.org>
      Fixes: e3a2b3f9 ("blk-mq: allow changing of queue depth through sysfs")
      Fixes: 2971c35f ("blk-mq: bitmap tag: fix race on blk_mq_bitmap_tags::wake_cnt")
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      6c0ca7ae
    • sbitmap: use smp_mb__after_atomic() in sbq_wake_up() · f66227de
      Omar Sandoval authored
      We always do an atomic clear_bit() right before we call sbq_wake_up(),
      so we can use the cheaper smp_mb__after_atomic(). While we're here,
      document the memory barriers in here a little more thoroughly.
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      f66227de
  6. 19 Sep, 2016 1 commit
  7. 17 Sep, 2016 7 commits