1. 17 Feb, 2010 1 commit
  2. 30 Mar, 2009 1 commit
  3. 20 Feb, 2009 1 commit
  4. 04 Jan, 2009 1 commit
    • stop_machine: introduce stop_machine_create/destroy. · 9ea09af3
      Heiko Carstens authored
      Introduce stop_machine_create/destroy. With this interface subsystems
      that need a non-failing stop_machine environment can create the
      stop_machine machine threads before actually calling stop_machine.
      When the threads aren't needed anymore they can be killed with
      stop_machine_destroy again.
      When stop_machine gets called and the threads aren't present they
      will be created and destroyed automatically. This restores the old
      behaviour of stop_machine.
      This patch also converts cpu hotplug to the new interface since it
      is special: cpu_down calls __stop_machine instead of stop_machine.
      However the kstop threads will only be created when stop_machine
      gets called.
      Changing the code so that the threads would be created automatically
      on __stop_machine is currently not possible: when __stop_machine gets
      called we hold cpu_add_remove_lock, which is the same lock that
      create_rt_workqueue would take. So the workqueue needs to be created
      before the cpu hotplug code locks cpu_add_remove_lock.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
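The create/run/destroy lifecycle described in this commit can be sketched as a plain userspace analog. All names here (machine_create, machine_run, threads_alive) are illustrative stand-ins, not the kernel API; the point is the pattern: pre-create the resources so a later call cannot fail at that point, with an on-demand create/destroy fallback that preserves the old behaviour.

```c
#include <assert.h>
#include <stdbool.h>

static int refcount = 0;           /* how many users pre-created the threads */
static bool threads_alive = false;

/* Analog of stop_machine_create(): set up the threads ahead of time. */
static int machine_create(void)
{
	threads_alive = true;      /* pretend thread creation succeeded */
	refcount++;
	return 0;
}

/* Analog of stop_machine_destroy(): tear down when the last user leaves. */
static void machine_destroy(void)
{
	if (--refcount == 0)
		threads_alive = false;
}

/* Analog of stop_machine(): if nothing was pre-created, create and
 * destroy the threads around the call (the old behaviour). */
static int machine_run(int (*fn)(void))
{
	bool lazily = !threads_alive;
	int ret;

	if (lazily && machine_create() != 0)
		return -1;         /* only the lazy path can fail here */
	ret = fn();
	if (lazily)
		machine_destroy();
	return ret;
}

static int sample_work(void) { return 7; }  /* stand-in workload */
```

A caller that pre-creates via machine_create() gets a machine_run() that cannot fail on thread creation, which is the guarantee the commit describes.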
  5. 31 Dec, 2008 1 commit
  6. 16 Nov, 2008 1 commit
  7. 26 Oct, 2008 1 commit
    • Revert "Call init_workqueues before pre smp initcalls." · 4403b406
      Linus Torvalds authored
      This reverts commit a802dd0e by moving the call to init_workqueues()
      back where it belongs - after SMP has been initialized.
      It also moves stop_machine_init() - which needs workqueues - to a later
      phase using a core_initcall() instead of early_initcall().  That should
      satisfy all ordering requirements, and was apparently the reason why
      init_workqueues() had been moved so early in the first place.
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 21 Oct, 2008 2 commits
  9. 12 Aug, 2008 1 commit
  10. 28 Jul, 2008 3 commits
  11. 26 Jul, 2008 1 commit
  12. 18 Jul, 2008 1 commit
    • cpumask: Replace cpumask_of_cpu with cpumask_of_cpu_ptr · 65c01184
      Mike Travis authored
        * This patch replaces the dangerous lvalue version of cpumask_of_cpu
          with new cpumask_of_cpu_ptr macros.  These are patterned after the
          node_to_cpumask_ptr macros.
          In general terms, if there is a cpumask_of_cpu_map[] then a pointer to
          the cpumask_of_cpu_map[cpu] entry is used.  The cpumask_of_cpu_map
          is provided when there is a large NR_CPUS count, reducing
          greatly the amount of code generated and stack space used for
          cpumask_of_cpu().  The pointer to the cpumask_t value is needed for
          calling set_cpus_allowed_ptr() to reduce the amount of stack space
          needed to pass the cpumask_t value.
          If there isn't a cpumask_of_cpu_map[], then a temporary variable is
          declared and filled in with value from cpumask_of_cpu(cpu) as well as
          a pointer variable pointing to this temporary variable.  Afterwards,
          the pointer is used to reference the cpumask value.  The compiler
          will optimize out the extra dereference through the pointer as well
          as the stack space used for the pointer, resulting in identical code.
          A good example of the orthogonal usages is in net/sunrpc/svc.c:
      	case SVC_POOL_PERCPU:
      	{
      		unsigned int cpu = m->pool_to[pidx];
      		cpumask_of_cpu_ptr(cpumask, cpu);

      		*oldmask = current->cpus_allowed;
      		set_cpus_allowed_ptr(current, cpumask);
      		return 1;
      	}
      	case SVC_POOL_PERNODE:
      	{
      		unsigned int node = m->pool_to[pidx];
      		node_to_cpumask_ptr(nodecpumask, node);

      		*oldmask = current->cpus_allowed;
      		set_cpus_allowed_ptr(current, nodecpumask);
      		return 1;
      	}
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
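The space-saving idea behind cpumask_of_cpu_ptr can be illustrated with a userspace sketch. The types and helpers below (big_mask_t, mask_of_cpu, mask_test_cpu) are hypothetical stand-ins for a large-NR_CPUS cpumask_t: instead of copying the whole mask around on the stack, one temporary is declared and a pointer to it is passed to consumers, which the compiler can optimize freely.

```c
#include <assert.h>

/* Stand-in for a large cpumask_t: 64 longs, ~512 bytes on 64-bit,
 * expensive to pass by value. */
typedef struct { unsigned long bits[64]; } big_mask_t;

/* Build a mask with only `cpu` set (analog of cpumask_of_cpu()). */
static big_mask_t mask_of_cpu(int cpu)
{
	big_mask_t m = {{0}};
	m.bits[cpu / 64] = 1UL << (cpu % 64);
	return m;
}

/* Consumers take a pointer (analog of set_cpus_allowed_ptr()), so the
 * big structure is never copied again after the one temporary. */
static int mask_test_cpu(const big_mask_t *m, int cpu)
{
	return (m->bits[cpu / 64] >> (cpu % 64)) & 1UL;
}
```

With a precomputed map (the cpumask_of_cpu_map[] case in the commit), even the temporary disappears and the pointer aims directly at the table entry.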
  13. 23 Jun, 2008 1 commit
    • sched: add new API sched_setscheduler_nocheck: add a flag to control access checks · 961ccddd
      Rusty Russell authored
      Hidehiro Kawai noticed that sched_setscheduler() can fail in
      stop_machine: it calls sched_setscheduler() from insmod, which can
      have CAP_SYS_MODULE without CAP_SYS_NICE.
      Two cases could have failed, so are changed to sched_setscheduler_nocheck:
      	- CPU hotplug callback
      	- Called from various places, including modprobe()
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mm@kvack.org
      Cc: sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi OSHIMA <satoshi.oshima.fk@hitachi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
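The checked/unchecked split this commit introduces is a common kernel pattern and can be sketched in userspace C. The names below (struct task, set_policy, set_policy_nocheck) are illustrative, not the real sched_setscheduler() signatures: the checked entry point validates the caller's capability, while the _nocheck variant exists for trusted in-kernel callers such as the CPU hotplug path.

```c
#include <assert.h>

struct task {
	int policy;
	int has_cap_sys_nice;  /* stand-in for a capability check */
};

/* The actual state change, shared by both entry points. */
static int __set_policy(struct task *t, int policy)
{
	t->policy = policy;
	return 0;
}

/* Checked variant: refuses callers lacking the capability. */
static int set_policy(struct task *t, int policy)
{
	if (!t->has_cap_sys_nice)
		return -1;     /* -EPERM in the kernel */
	return __set_policy(t, policy);
}

/* Unchecked variant for kernel-internal callers acting on their own
 * behalf, where the caller's capabilities are irrelevant. */
static int set_policy_nocheck(struct task *t, int policy)
{
	return __set_policy(t, policy);
}
```

This mirrors the failure mode in the commit: an insmod with CAP_SYS_MODULE but not CAP_SYS_NICE would fail the checked path, so the in-kernel callers switch to the unchecked one.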
  14. 23 May, 2008 1 commit
    • stop_machine: make stop_machine_run more virtualization friendly · 3401a61e
      Christian Borntraeger authored
      On kvm I have seen some rare hangs in stop_machine when I used more guest
      cpus than host cpus, e.g. 32 guest cpus on 1 host cpu triggered the
      hang quite often. I could also reproduce the problem on a 4 way z/VM host with
      a 64 way guest.
      It turned out that the guest was consuming all available cpus mostly for
      spinning on scheduler locks like rq->lock. This is expected as the threads are
      calling yield all the time.
      The problem is now that the host scheduling decisions, together with the
      guest scheduling decisions and spinlocks not being fair, managed to create
      an interesting scenario similar to a livelock. (Sometimes the hang resolved
      itself after some minutes.)
      Changing stop_machine to yield the cpu to the hypervisor when yielding inside
      the guest fixed the problem for me. While I am not completely happy with this
      patch, I think it causes no harm and it really improves the situation for me.
      I used cpu_relax for yielding to the hypervisor, does that work on all
      architectures?
      p.s.: If you want to reproduce the problem, cpu hotplug and kprobes use
      stop_machine_run and both triggered the problem after some retries.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      CC: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  15. 21 Apr, 2008 1 commit
  16. 19 Apr, 2008 2 commits
  17. 06 Feb, 2008 1 commit
  18. 25 Jan, 2008 1 commit
    • cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus() · 86ef5c9a
      Gautham R Shenoy authored
      Replace all lock_cpu_hotplug/unlock_cpu_hotplug from the kernel and use
      get_online_cpus and put_online_cpus instead as it highlights the
      refcount semantics in these operations.
      The new API guarantees protection against the cpu-hotplug operation, but
      it doesn't guarantee serialized access to any of the local data
      structures. Hence the changes need to be reviewed.
      In case of pseries_add_processor/pseries_remove_processor, use
      cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
      cpu_present_map there.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
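The refcount semantics the commit highlights can be sketched in userspace. This is an illustrative model only (the real implementation blocks on a mutex/completion rather than failing, and the function bodies here are not the kernel's): readers take a reference that holds off hotplug, references may nest, and the hotplug writer needs all references dropped before it may proceed.

```c
#include <assert.h>
#include <stdbool.h>

static int online_refs = 0;               /* outstanding read-side references */
static bool hotplug_in_progress = false;

/* Read side: pin the current set of online CPUs. May nest. */
static void get_online_cpus(void)
{
	assert(!hotplug_in_progress);     /* real code would block here */
	online_refs++;
}

static void put_online_cpus(void)
{
	online_refs--;
}

/* Write side: a hotplug operation needs exclusivity against all readers. */
static bool try_cpu_hotplug(void)
{
	if (online_refs > 0)
		return false;             /* real code would wait for readers */
	hotplug_in_progress = true;
	/* ... take a CPU down or bring one up ... */
	hotplug_in_progress = false;
	return true;
}
```

The point of the rename in the commit is exactly this readability gain: get/put makes the reference-counted, nestable read side obvious, where lock/unlock suggested a plain mutex.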
  19. 16 Jul, 2007 1 commit
  20. 11 May, 2007 1 commit
  21. 08 May, 2007 1 commit
    • Use stop_machine_run in the Intel RNG driver · ee527cd3
      Prarit Bhargava authored
      Replace smp_call_function with stop_machine_run in the Intel RNG driver.
      The old code can deadlock as follows:
      CPU A has done read_lock(&lock)
      CPU B has done write_lock_irq(&lock) and is waiting for A to release the lock.
      A third CPU, CPU C, calls smp_call_function and issues the IPI.  CPU A takes CPU
      C's IPI.  CPU B is waiting with interrupts disabled and does not see the
      IPI.  CPU C is stuck waiting for CPU B to respond to the IPI.
      The solution is to use stop_machine_run instead of smp_call_function
      (smp_call_function should not be called in situations where the CPUs may be
      unable to process an IPI).
      [haruo.tomita@toshiba.co.jp: fix a typo in mod_init()]
      [haruo.tomita@toshiba.co.jp: fix memory leak]
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: Jan Beulich <jbeulich@novell.com>
      Cc: "Tomita, Haruo" <haruo.tomita@toshiba.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 29 Sep, 2006 1 commit
  23. 27 Aug, 2006 1 commit
  24. 04 Jul, 2006 1 commit
  25. 25 Jun, 2006 1 commit
  26. 10 Jan, 2006 1 commit
  27. 14 Nov, 2005 1 commit
    • [PATCH] stop_machine() vs. synchronous IPI send deadlock · 4557398f
      Kirill Korotaev authored
      This fixes deadlock of stop_machine() vs.  synchronous IPI send.  The
      problem is that stop_machine() disables interrupts before disabling
      preemption on other CPUs.  So if another CPU is preempted and then calls
      something like flush_tlb_all() it will deadlock with the CPU doing
      stop_machine(), which can't process IPIs due to disabled IRQs.
      I changed stop_machine() to do the same things exactly as it does on other
      CPUs, i.e.  it should disable preemption first on _all_ CPUs including
      itself and only after that disable IRQs.
      Signed-off-by: Kirill Korotaev <dev@sw.ru>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Andrey Savochkin" <saw@sawoct.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  28. 22 Jun, 2005 1 commit
    • [PATCH] smp_processor_id() cleanup · 39c715b7
      Ingo Molnar authored
      This patch implements a number of smp_processor_id() cleanup ideas that
      Arjan van de Ven and I came up with.
      The previous __smp_processor_id/_smp_processor_id/smp_processor_id API
      spaghetti was hard to follow both on the implementational and on the
      usage side.
      Some of the complexity arose from picking wrong names, some of the
      complexity comes from the fact that not all architectures defined these
      variants in the same way.
      In the new code, there are two externally visible symbols:
       - smp_processor_id(): debug variant.
       - raw_smp_processor_id(): nondebug variant. Replaces all existing
         uses of _smp_processor_id() and __smp_processor_id(). Defined
         by every SMP architecture in include/asm-*/smp.h.
      There is one new internal symbol, dependent on DEBUG_PREEMPT:
       - debug_smp_processor_id(): internal debug variant, mapped to
         smp_processor_id().
      Also, I moved debug_smp_processor_id() from lib/kernel_lock.c into a new
      lib/smp_processor_id.c file.  All related comments got updated and/or
      clarified.
      I have build/boot tested 8 .config combinations on x86.
      I have also build/boot tested x64 on UP/PREEMPT/DEBUG_PREEMPT.  (Other
      architectures are untested, but should work just fine.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@infradead.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
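The debug/raw split described above can be sketched in userspace. This is an illustrative model, not the kernel's implementation: raw_smp_processor_id() reads the id unconditionally, while the DEBUG_PREEMPT build routes smp_processor_id() through a debug variant that verifies the caller cannot be migrated (modeled here with a preempt count; the real kernel warns rather than asserts).

```c
#include <assert.h>

static int this_cpu = 3;       /* stand-in for the per-CPU processor id */
static int preempt_count = 0;  /* stand-in for the preemption counter */

/* Nondebug variant: no checks, usable from any context. */
static int raw_smp_processor_id(void)
{
	return this_cpu;
}

/* Debug variant: the result is only meaningful if the task cannot be
 * migrated to another CPU between the read and the use. */
static int debug_smp_processor_id(void)
{
	assert(preempt_count > 0);  /* kernel code would warn, not abort */
	return this_cpu;
}

/* In a DEBUG_PREEMPT build, smp_processor_id() maps to the debug variant;
 * otherwise it maps to the raw one. */
#define smp_processor_id() debug_smp_processor_id()
```

This is the cleanup's payoff: two clearly named entry points, with the checking logic confined to one internal symbol.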
  29. 01 May, 2005 1 commit
  30. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      Let it rip!