1. 20 Mar, 2018 1 commit
    • jump_label: Disable jump labels in __exit code · 578ae447
      Josh Poimboeuf authored
      With the following commit:
      
        33352244 ("jump_label: Explicitly disable jump labels in __init code")
      
      ... we explicitly disabled jump labels in __init code, so they could be
      detected and not warned about in the following commit:
      
        dc1dd184 ("jump_label: Warn on failed jump_label patching attempt")
      
      In-kernel __exit code has the same issue.  It's never used, so it's
      freed along with the rest of initmem.  But jump label entries in __exit
      code aren't explicitly disabled, so we get the following warning when
      enabling pr_debug() in __exit code:
      
        can't patch jump_label at dmi_sysfs_exit+0x0/0x2d
        WARNING: CPU: 0 PID: 22572 at kernel/jump_label.c:376 __jump_label_update+0x9d/0xb0
      
      Fix the warning by disabling all jump labels in initmem (which includes
      both __init and __exit code).
      Reported-and-tested-by: Li Wang <liwang@redhat.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: dc1dd184 ("jump_label: Warn on failed jump_label patching attempt")
      Link: http://lkml.kernel.org/r/7121e6e595374f06616c505b6e690e275c0054d1.1521483452.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      578ae447
  2. 14 Mar, 2018 1 commit
  3. 21 Feb, 2018 3 commits
  4. 24 Jan, 2018 1 commit
  5. 14 Nov, 2017 1 commit
    • jump_label: Invoke jump_label_test() via early_initcall() · 92ee46ef
      Jason Baron authored
      Fengguang Wu reported that running the rcuperf test during boot can cause
      the jump_label_test() to hit a WARN_ON(). The issue is that the core jump
      label code relies on kernel_text_address() to detect when it can no longer
      update branches that may be contained in __init sections. The
      kernel_text_address() in turn assumes that if the system_state variable is
      greater than or equal to SYSTEM_RUNNING then __init sections are no longer
      valid (since the assumption is that they have been freed). However, when
      rcuperf is setup to run in early boot it can call kernel_power_off() which
      sets the system_state to SYSTEM_POWER_OFF.
      
      Since rcuperf initialization is invoked via a module_init(), we can make
      the dependency of jump_label_test() needing to complete before rcuperf
      explicit by calling it via early_initcall().
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Jason Baron <jbaron@akamai.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1510609727-2238-1-git-send-email-jbaron@akamai.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      92ee46ef
  6. 19 Oct, 2017 1 commit
    • locking/static_keys: Improve uninitialized key warning · 5cdda511
      Borislav Petkov authored
      Right now it says:
      
        static_key_disable_cpuslocked used before call to jump_label_init
        ------------[ cut here ]------------
        WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:161 static_key_disable_cpuslocked+0x68/0x70
        Modules linked in:
        CPU: 0 PID: 0 Comm: swapper Not tainted 4.14.0-rc5+ #1
        Hardware name: SGI.COM C2112-4GP3/X10DRT-P-Series, BIOS 2.0a 05/09/2016
        task: ffffffff81c0e480 task.stack: ffffffff81c00000
        RIP: 0010:static_key_disable_cpuslocked+0x68/0x70
        RSP: 0000:ffffffff81c03ef0 EFLAGS: 00010096 ORIG_RAX: 0000000000000000
        RAX: 0000000000000041 RBX: ffffffff81c32680 RCX: ffffffff81c5cbf8
        RDX: 0000000000000001 RSI: 0000000000000092 RDI: 0000000000000002
        RBP: ffff88807fffd240 R08: 726f666562206465 R09: 0000000000000136
        R10: 0000000000000000 R11: 696e695f6c656261 R12: ffffffff82158900
        R13: ffffffff8215f760 R14: 0000000000000001 R15: 0000000000000008
        FS:  0000000000000000(0000) GS:ffff883f7f400000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: ffff88807ffff000 CR3: 0000000001c09000 CR4: 00000000000606b0
        Call Trace:
         static_key_disable+0x16/0x20
         start_kernel+0x15a/0x45d
         ? load_ucode_intel_bsp+0x11/0x2d
         secondary_startup_64+0xa5/0xb0
        Code: 48 c7 c7 a0 15 cf 81 e9 47 53 4b 00 48 89 df e8 5f fc ff ff eb e8 48 c7 c6 \
      	c0 97 83 81 48 c7 c7 d0 ff a2 81 31 c0 e8 c5 9d f5 ff <0f> ff eb a7 0f ff eb \
      	b0 e8 eb a2 4b 00 53 48 89 fb e8 42 0e f0
      
      but it doesn't tell me which key it is. So dump the key's name too:
      
        static_key_disable_cpuslocked(): static key 'virt_spin_lock_key' used before call to jump_label_init()
      
      And that makes pinpointing which key is causing that a lot easier.
      
       include/linux/jump_label.h           |   14 +++++++-------
       include/linux/jump_label_ratelimit.h |    6 +++---
       kernel/jump_label.c                  |   14 +++++++-------
       3 files changed, 17 insertions(+), 17 deletions(-)
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20171018152428.ffjgak4o25f7ept6@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5cdda511
  7. 10 Aug, 2017 5 commits
  8. 26 May, 2017 1 commit
    • jump_label: Reorder hotplug lock and jump_label_lock · f2545b2d
      Thomas Gleixner authored
      The conversion of the hotplug locking to a percpu rwsem unearthed lock
      ordering issues all over the place.
      
      The jump_label code has two issues:
      
       1) Nested get_online_cpus() invocations
      
       2) Ordering problems vs. the cpus rwsem and the jump_label_mutex
      
      To cure these, the following lock order has been established:
      
         cpus_rwsem -> jump_label_lock -> text_mutex
      
      Even if not all architectures need protection against CPU hotplug, taking
      cpus_rwsem before jump_label_lock is now mandatory in code paths which
      actually modify code and therefore need text_mutex protection.
      
      Move the get_online_cpus() invocations into the core jump label code and
      establish the proper lock order where required.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: "David S. Miller" <davem@davemloft.net>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Link: http://lkml.kernel.org/r/20170524081549.025830817@linutronix.de
      f2545b2d
  9. 15 Feb, 2017 1 commit
    • jump_label: Reduce the size of struct static_key · 3821fd35
      Jason Baron authored
      The static_key->next field goes mostly unused. The field is used for
      associating module uses with a static key. Most uses of struct static_key
      define a static key in the core kernel and make use of it entirely within
      the core kernel, or define the static key in a module and make use of it
      only from within that module. In fact, of the ~3,000 static keys defined,
      I found only about five that did not fit this pattern.
      
      Thus, we can remove the static_key->next field entirely and overload
      the static_key->entries field. That is, when all the static_key uses
      are contained within the same module, static_key->entries continues
      to point to those uses. However, if the static_key uses are not contained
      within the module where the static_key is defined, then we allocate a
      struct static_key_mod, store a pointer to the uses within that
      struct static_key_mod, and have the static key point at the static_key_mod.
      This does incur some extra memory usage when a static_key is used in a
      module that does not define it, but since there are only a handful of such
      cases there is a net savings.
      
      In order to identify if the static_key->entries pointer contains a
      struct static_key_mod or a struct jump_entry pointer, bit 1 of
      static_key->entries is set to 1 if it points to a struct static_key_mod and
      is 0 if it points to a struct jump_entry. We were already using bit 0 in a
      similar way to store the initial value of the static_key. This does mean
      that allocations of struct static_key_mod and the struct jump_entry
      tables need to be at least 4-byte aligned in memory. As far as I can
      tell, all arches meet this criterion.
      
      For my .config, the patch increased the text by 778 bytes, but reduced
      the data + bss size by 14912, for a net savings of 14,134 bytes.
      
         text	   data	    bss	    dec	    hex	filename
      8092427	5016512	 790528	13899467	 d416cb	vmlinux.pre
      8093205	5001600	 790528	13885333	 d3df95	vmlinux.post
      
      Link: http://lkml.kernel.org/r/1486154544-4321-1-git-send-email-jbaron@akamai.com
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      3821fd35
  10. 12 Jan, 2017 1 commit
  11. 04 Aug, 2016 2 commits
  12. 01 Aug, 2016 1 commit
  13. 07 Jul, 2016 1 commit
  14. 24 Jun, 2016 1 commit
    • locking/static_key: Fix concurrent static_key_slow_inc() · 4c5ea0a9
      Paolo Bonzini authored
      The following scenario is possible:
      
          CPU 1                                   CPU 2
          static_key_slow_inc()
           atomic_inc_not_zero()
            -> key.enabled == 0, no increment
           jump_label_lock()
           atomic_inc_return()
            -> key.enabled == 1 now
                                                  static_key_slow_inc()
                                                   atomic_inc_not_zero()
                                                    -> key.enabled == 1, inc to 2
                                                   return
                                                  ** static key is wrong!
           jump_label_update()
           jump_label_unlock()
      
      Testing the static key at the point marked by (**) will follow the
      wrong path for jumps that have not been patched yet.  This can
      actually happen when creating many KVM virtual machines with userspace
      LAPIC emulation; just run several copies of the following program:
      
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>
      
          int main(void)
          {
              for (;;) {
                  int kvmfd = open("/dev/kvm", O_RDONLY);
                  int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);
                  close(ioctl(vmfd, KVM_CREATE_VCPU, 1));
                  close(vmfd);
                  close(kvmfd);
              }
              return 0;
          }
      
      Every KVM_CREATE_VCPU ioctl will attempt a static_key_slow_inc() call.
      The static key's purpose is to skip NULL pointer checks and indeed one
      of the processes eventually dereferences NULL.
      
      As explained in the commit that introduced the bug:
      
        706249c2 ("locking/static_keys: Rework update logic")
      
      jump_label_update() needs key.enabled to be true.  The solution adopted
      here is to temporarily make key.enabled == -1, and to go down the
      slow path when key.enabled <= 0.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org> # v4.3+
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 706249c2 ("locking/static_keys: Rework update logic")
      Link: http://lkml.kernel.org/r/1466527937-69798-1-git-send-email-pbonzini@redhat.com
      [ Small stylistic edits to the changelog and the code. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4c5ea0a9
  15. 23 Nov, 2015 1 commit
    • treewide: Remove old email address · 90eec103
      Peter Zijlstra authored
      There were still a number of references to my old Red Hat email
      address in the kernel source. Remove these while keeping the
      Red Hat copyright notices intact.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90eec103
  16. 03 Aug, 2015 6 commits
    • locking/static_keys: Add selftest · 1987c947
      Peter Zijlstra authored
      Add a little selftest that validates all combinations.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1987c947
    • locking/static_keys: Add a new static_key interface · 11276d53
      Peter Zijlstra authored
      There are various problems and short-comings with the current
      static_key interface:
      
       - static_key_{true,false}() read like a branch depending on the key
         value, instead of the actual likely/unlikely branch depending on
         init value.
      
       - static_key_{true,false}() are, as stated above, tied to the
         static_key init values STATIC_KEY_INIT_{TRUE,FALSE}.
      
       - we're limited to the 2 (out of 4) possible options that compile to
         a default NOP because that's what our arch_static_branch() assembly
         emits.
      
      So provide a new static_key interface:
      
        DEFINE_STATIC_KEY_TRUE(name);
        DEFINE_STATIC_KEY_FALSE(name);
      
      These define a key of either type with an initial true/false
      value.
      
      Then allow:
      
         static_branch_likely()
         static_branch_unlikely()
      
      to take a key of either type and emit the right instruction for the
      case.
      
      This means adding a second arch_static_branch_jump() assembly helper
      which emits a JMP per default.
      
      In order to determine the right instruction for the right state,
      encode the branch type in the LSB of jump_entry::key.
      
      This is the final step in removing the naming confusion that has led to
      a stream of avoidable bugs such as:
      
        a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
      
      ... but it also allows new static key combinations that will give us
      performance enhancements in the subsequent patches.
      
      Tested-by: Rabin Vincent <rabin@rab.in> # arm
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> # ppc
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # s390
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      11276d53
    • locking/static_keys: Rework update logic · 706249c2
      Peter Zijlstra authored
      Instead of spreading the branch_default logic all over the place,
      concentrate it into the one jump_label_type() function.
      
      This does mean we need to actually increment/decrement the enabled
      count _before_ calling the update path, otherwise jump_label_type()
      will not see the right state.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      706249c2
    • jump_label: Add jump_entry_key() helper · 7dcfd915
      Peter Zijlstra authored
      Avoid some casting with a helper, also prepares the way for
      overloading the LSB of jump_entry::key.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7dcfd915
    • jump_label, locking/static_keys: Rename JUMP_LABEL_TYPE_* and related helpers... · a1efb01f
      Peter Zijlstra authored
      jump_label, locking/static_keys: Rename JUMP_LABEL_TYPE_* and related helpers to the static_key* pattern
      
      Rename the JUMP_LABEL_TYPE_* macros to be JUMP_TYPE_* and move the
      inline helpers into kernel/jump_label.c, since that's the only place
      they're ever used.
      
      Also rename the helpers where it's all about static keys.
      
      This is the second step in removing the naming confusion that has led to
      a stream of avoidable bugs such as:
      
        a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a1efb01f
    • jump_label: Rename JUMP_LABEL_{EN,DIS}ABLE to JUMP_LABEL_{JMP,NOP} · 76b235c6
      Peter Zijlstra authored
      Since we've already stepped away from "ENABLE means JMP" and "DISABLE
      means NOP" with the branch_default bits, and are going to make it even
      worse, rename these to make it all clearer.
      
      This way we don't mix multiple levels of logic attributes, but have a
      plain 'physical' name for what the current instruction patching status
      of a jump label is.
      
      This is a first step in removing the naming confusion that has led to
      a stream of avoidable bugs such as:
      
        a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      [ Beefed up the changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      76b235c6
  17. 27 May, 2015 1 commit
    • module, jump_label: Fix module locking · bed831f9
      Peter Zijlstra authored
      As per the module core lockdep annotations in the coming patch:
      
      [   18.034047] ---[ end trace 9294429076a9c673 ]---
      [   18.047760] Hardware name: Intel Corporation S2600GZ/S2600GZ, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
      [   18.059228]  ffffffff817d8676 ffff880036683c38 ffffffff8157e98b 0000000000000001
      [   18.067541]  0000000000000000 ffff880036683c78 ffffffff8105fbc7 ffff880036683c68
      [   18.075851]  ffffffffa0046b08 0000000000000000 ffffffffa0046d00 ffffffffa0046cc8
      [   18.084173] Call Trace:
      [   18.086906]  [<ffffffff8157e98b>] dump_stack+0x4f/0x7b
      [   18.092649]  [<ffffffff8105fbc7>] warn_slowpath_common+0x97/0xe0
      [   18.099361]  [<ffffffff8105fc2a>] warn_slowpath_null+0x1a/0x20
      [   18.105880]  [<ffffffff810ee502>] __module_address+0x1d2/0x1e0
      [   18.112400]  [<ffffffff81161153>] jump_label_module_notify+0x143/0x1e0
      [   18.119710]  [<ffffffff810814bf>] notifier_call_chain+0x4f/0x70
      [   18.126326]  [<ffffffff8108160e>] __blocking_notifier_call_chain+0x5e/0x90
      [   18.134009]  [<ffffffff81081656>] blocking_notifier_call_chain+0x16/0x20
      [   18.141490]  [<ffffffff810f0f00>] load_module+0x1b50/0x2660
      [   18.147720]  [<ffffffff810f1ade>] SyS_init_module+0xce/0x100
      [   18.154045]  [<ffffffff81587429>] system_call_fastpath+0x12/0x17
      [   18.160748] ---[ end trace 9294429076a9c674 ]---
      
      The jump label code is not doing it right; fix this.
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jason Baron <jbaron@akamai.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      bed831f9
  18. 19 Oct, 2013 1 commit
    • static_key: WARN on usage before jump_label_init was called · c4b2c0c5
      Hannes Frederic Sowa authored
      Usage of the static key primitives to toggle a branch must not be used
      before jump_label_init() is called from init/main.c. jump_label_init
      reorganizes and wires up the jump_entries so usage before that could
      have unforeseen consequences.
      
      The following primitives are now checked for correct use:
      * static_key_slow_inc
      * static_key_slow_dec
      * static_key_slow_dec_deferred
      * jump_label_rate_limit
      
      The x86 architecture already checks this by testing whether the default_nop
      was already replaced with an optimal nop or with a branch instruction, and
      panics in that case. Other architectures don't check for this.
      
      Because we need to relax this check for the x86 arch (to allow code to
      transition from default_nop to the enabled state), and because other
      architectures did not check for this at all, this patch introduces the
      check on the static_key primitives in a non-arch-dependent manner.
      
      All checked functions are considered slow-path so the additional check
      does no harm to performance.
      
      The warnings are best observed with earlyprintk.
      
      Based on a patch from Andi Kleen.
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c4b2c0c5
  19. 09 Aug, 2013 1 commit
  20. 06 Aug, 2012 1 commit
  21. 28 Feb, 2012 1 commit
  22. 24 Feb, 2012 1 commit
    • static keys: Introduce 'struct static_key', static_key_true()/false() and... · c5905afb
      Ingo Molnar authored
      static keys: Introduce 'struct static_key', static_key_true()/false() and static_key_slow_[inc|dec]()
      
      So here's a boot tested patch on top of Jason's series that does
      all the cleanups I talked about and turns jump labels into a
      more intuitive to use facility. It should also address the
      various misconceptions and confusions that surround jump labels.
      
      Typical usage scenarios:
      
              #include <linux/static_key.h>
      
              struct static_key key = STATIC_KEY_INIT_TRUE;
      
              if (static_key_false(&key))
                      do unlikely code
              else
                      do likely code
      
      Or:
      
              if (static_key_true(&key))
                      do likely code
              else
                      do unlikely code
      
      The static key is modified via:
      
              static_key_slow_inc(&key);
              ...
              static_key_slow_dec(&key);
      
      The 'slow' prefix makes it abundantly clear that this is an
      expensive operation.
      
      I've updated all in-kernel code to use this everywhere. Note
      that I (intentionally) have not pushed the rename blindly
      through to the lowest levels: the actual jump-label
      patching arch facility should be named like that, so we want to
      decouple jump labels from the static-key facility a bit.
      
      On non-jump-label enabled architectures static keys default to
      likely()/unlikely() branches.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: a.p.zijlstra@chello.nl
      Cc: mathieu.desnoyers@efficios.com
      Cc: davem@davemloft.net
      Cc: ddaney.cavm@gmail.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20120222085809.GA26397@elte.hu
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c5905afb
  23. 22 Feb, 2012 2 commits
  24. 27 Dec, 2011 1 commit
  25. 06 Dec, 2011 3 commits