1. 05 Jan, 2018 1 commit
  2. 22 Dec, 2017 1 commit
  3. 03 Nov, 2017 1 commit
  4. 02 Mar, 2017 1 commit
  5. 28 Nov, 2016 1 commit
  6. 21 Oct, 2016 1 commit
  7. 20 Oct, 2015 1 commit
    • crypto: api - Only abort operations on fatal signal · 3fc89adb
      Herbert Xu authored
      Currently a number of Crypto API operations may fail when a signal
      occurs.  This causes nasty problems as the callers of those operations
      are often not in a good position to restart the operation.
      
      In fact there is currently no need for those operations to be
      interrupted by user signals at all.  All we need is for them to
      be killable.
      
      This patch replaces the relevant calls to signal_pending with
      fatal_signal_pending, and wait_for_completion_interruptible with
      wait_for_completion_killable, respectively.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
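      A minimal sketch of the resulting pattern (a hypothetical helper, not the
      patched kernel code itself): waits become killable, so only a fatal signal
      such as SIGKILL can interrupt them.

        #include <linux/completion.h>

        /* Hypothetical wait helper illustrating the change. */
        static int example_wait_for_op(struct completion *done)
        {
                /*
                 * wait_for_completion_killable() returns 0 on completion and a
                 * negative error only if a fatal signal arrives, whereas the
                 * _interruptible variant bailed out on any signal.  Polling
                 * sites likewise check fatal_signal_pending(current) instead
                 * of signal_pending(current).
                 */
                return wait_for_completion_killable(done);
        }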
  8. 31 Mar, 2015 1 commit
    • crypto: api - prevent helper ciphers from being used · 06ca7f68
      Stephan Mueller authored
      Several hardware-related cipher implementations are structured as
      follows: a "helper" cipher implementation is registered with the
      kernel crypto API.
      
      Such helper ciphers are never intended to be called by normal users. In
      some cases, calling them via the normal crypto API may even cause
      failures, including kernel crashes.  In the normal case, the "wrapping"
      ciphers that use the helpers ensure that these helpers are invoked
      such that they cannot cause any calamity.
      
      Considering the AF_ALG user space interface, unprivileged users can
      call all ciphers registered with the crypto API, including these
      helper ciphers that are not intended to be called directly.  That
      means user space may invoke these helper ciphers via AF_ALG and
      cause undefined states or side effects.
      
      To avoid any potential side effects with such helpers, the patch
      prevents the helpers from being called directly.  A new cipher type
      flag is added: CRYPTO_ALG_INTERNAL.  This flag is used to mark
      helper ciphers, and such a cipher can only be used if the caller
      invokes it with CRYPTO_ALG_INTERNAL set in both the type and
      mask fields.
      Signed-off-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
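      A sketch of how the flag is meant to be used; the helper name below is
      hypothetical, and the shash calls stand in for whatever wrapper/helper pair
      a driver actually registers.

        #include <crypto/hash.h>
        #include <linux/crypto.h>
        #include <linux/err.h>

        static int example_use_internal_helper(void)
        {
                struct crypto_shash *helper;

                /*
                 * A wrapping cipher that legitimately needs the helper asks for
                 * it with CRYPTO_ALG_INTERNAL set in both type and mask.
                 */
                helper = crypto_alloc_shash("__example-helper", /* hypothetical name */
                                            CRYPTO_ALG_INTERNAL,
                                            CRYPTO_ALG_INTERNAL);
                if (IS_ERR(helper))
                        return PTR_ERR(helper);

                /*
                 * A plain lookup such as crypto_alloc_shash("__example-helper", 0, 0),
                 * e.g. one triggered from user space via AF_ALG, no longer reaches
                 * the internal-only implementation.
                 */
                crypto_free_shash(helper);
                return 0;
        }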
  9. 24 Nov, 2014 1 commit
  10. 08 Sep, 2013 1 commit
  11. 20 Aug, 2013 1 commit
  12. 25 Jun, 2013 1 commit
    • crypto: algboss - Hold ref count on larval · 939e1779
      Herbert Xu authored
      On Thu, Jun 20, 2013 at 10:00:21AM +0200, Daniel Borkmann wrote:
      > After having fixed a NULL pointer dereference in SCTP 1abd165e ("net:
      > sctp: fix NULL pointer dereference in socket destruction"), I ran into
      > the following NULL pointer dereference in the crypto subsystem with
      > the same reproducer, easily hit each time:
      > 
      > BUG: unable to handle kernel NULL pointer dereference at (null)
      > IP: [<ffffffff81070321>] __wake_up_common+0x31/0x90
      > PGD 0
      > Oops: 0000 [#1] SMP
      > Modules linked in: padlock_sha(F-) sha256_generic(F) sctp(F) libcrc32c(F) [..]
      > CPU: 6 PID: 3326 Comm: cryptomgr_probe Tainted: GF            3.10.0-rc5+ #1
      > Hardware name: Dell Inc. PowerEdge T410/0H19HD, BIOS 1.6.3 02/01/2011
      > task: ffff88007b6cf4e0 ti: ffff88007b7cc000 task.ti: ffff88007b7cc000
      > RIP: 0010:[<ffffffff81070321>]  [<ffffffff81070321>] __wake_up_common+0x31/0x90
      > RSP: 0018:ffff88007b7cde08  EFLAGS: 00010082
      > RAX: ffffffffffffffe8 RBX: ffff88003756c130 RCX: 0000000000000000
      > RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff88003756c130
      > RBP: ffff88007b7cde48 R08: 0000000000000000 R09: ffff88012b173200
      > R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000282
      > R13: ffff88003756c138 R14: 0000000000000000 R15: 0000000000000000
      > FS:  0000000000000000(0000) GS:ffff88012fc60000(0000) knlGS:0000000000000000
      > CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      > CR2: 0000000000000000 CR3: 0000000001a0b000 CR4: 00000000000007e0
      > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      > Stack:
      >  ffff88007b7cde28 0000000300000000 ffff88007b7cde28 ffff88003756c130
      >  0000000000000282 ffff88003756c128 ffffffff81227670 0000000000000000
      >  ffff88007b7cde78 ffffffff810722b7 ffff88007cdcf000 ffffffff81a90540
      > Call Trace:
      >  [<ffffffff81227670>] ? crypto_alloc_pcomp+0x20/0x20
      >  [<ffffffff810722b7>] complete_all+0x47/0x60
      >  [<ffffffff81227708>] cryptomgr_probe+0x98/0xc0
      >  [<ffffffff81227670>] ? crypto_alloc_pcomp+0x20/0x20
      >  [<ffffffff8106760e>] kthread+0xce/0xe0
      >  [<ffffffff81067540>] ? kthread_freezable_should_stop+0x70/0x70
      >  [<ffffffff815450dc>] ret_from_fork+0x7c/0xb0
      >  [<ffffffff81067540>] ? kthread_freezable_should_stop+0x70/0x70
      > Code: 41 56 41 55 41 54 53 48 83 ec 18 66 66 66 66 90 89 75 cc 89 55 c8
      >       4c 8d 6f 08 48 8b 57 08 41 89 cf 4d 89 c6 48 8d 42 e
      > RIP  [<ffffffff81070321>] __wake_up_common+0x31/0x90
      >  RSP <ffff88007b7cde08>
      > CR2: 0000000000000000
      > ---[ end trace b495b19270a4d37e ]---
      > 
      > My assumption is that the following is happening: the minimal SCTP
      > tool runs under ``echo 1 > /proc/sys/net/sctp/auth_enable'', hence
      > it's making use of crypto_alloc_hash() via sctp_auth_init_hmacs().
      > It forks itself, heavily allocates, binds, listens and waits in
      > accept on sctp sockets, and then randomly kills some of them (no
      > need for an actual client in this case to hit this). Then, again,
      > allocating, binding, etc, and then killing child processes.
      > 
      > The problem that might be happening here is that cryptomgr requests
      > the module to probe/load through cryptomgr_schedule_probe(), but
      > before the thread handler cryptomgr_probe() returns, we return from
      > the wait_for_completion_interruptible() function and probably already
      > have cleared up larval, thus we run into a NULL pointer dereference
      > when in cryptomgr_probe() complete_all() is being called.
      > 
      > If we wait with wait_for_completion() instead, this panic will not
      > occur anymore. This is valid, because in case a signal is pending,
      > cryptomgr_probe() returns from probing anyway with properly calling
      > complete_all().
      
      The use of wait_for_completion_interruptible is intentional so that
      we don't lock up the thread if a bug causes us to never wake up.
      
      This bug is caused by the helper thread using the larval without
      holding a reference count on it.  If the helper thread completes
      after the original thread that requested the help has gone away and
      destroyed the larval, then we get the crash above.
      
      So the fix is to hold a reference count on the larval.
      
      Cc: <stable@vger.kernel.org> # 3.6+
      Reported-by: Daniel Borkmann <dborkman@redhat.com>
      Tested-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
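      The gist of the fix, sketched with the refcount helpers from
      crypto/internal.h (simplified, not the literal diff): the scheduling path
      pins the larval before handing it to the probe thread, and the probe thread
      drops the reference once it has called complete_all().

        #include "internal.h"   /* struct crypto_larval, crypto_mod_get/put() */

        /* Hypothetical helper: pin the larval for the asynchronous probe. */
        static struct crypto_larval *example_grab_larval(struct crypto_larval *larval)
        {
                if (!crypto_mod_get(&larval->alg))
                        return NULL;
                return larval;
        }

        /*
         * In the probe thread, once the work is done:
         *
         *      complete_all(&larval->completion);
         *      crypto_mod_put(&larval->alg);
         */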
  13. 16 Feb, 2010 1 commit
  14. 14 Jul, 2009 2 commits
  15. 08 Jul, 2009 1 commit
  16. 02 Jun, 2009 2 commits
  17. 21 Apr, 2009 1 commit
  18. 26 Feb, 2009 1 commit
    • crypto: api - Fix module load deadlock with fallback algorithms · a760a665
      Herbert Xu authored
      With the mandatory algorithm testing at registration, we have
      now created a deadlock with algorithms requiring fallbacks.
      This can happen if the module containing the algorithm that requires
      a fallback is loaded before the fallback module.  The system will then
      try to test the new algorithm, find that it needs to load a fallback,
      and then try to load that.
      
      As both algorithms share the same module alias, it can attempt
      to load the original algorithm again and block indefinitely.
      
      As algorithms requiring fallbacks are a special case, we can fix
      this by giving them a different module alias than the rest.  Then
      it's just a matter of using the right aliases according to what
      algorithms we're trying to find.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
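      Roughly what the split looks like in practice; the "-all" alias suffix and
      the lookup condition below are recalled from the code of that era and are
      shown as an illustration rather than the verbatim diff.

        #include <linux/crypto.h>
        #include <linux/kmod.h>
        #include <linux/module.h>

        /*
         * Driver side: a cipher that needs a software fallback advertises a
         * suffixed alias instead of the plain algorithm name.
         */
        MODULE_ALIAS("sha256-all");

        /*
         * Core lookup side (simplified): only pull in fallback-needing drivers
         * when the caller is not itself resolving a fallback.
         */
        static void example_request_alg_module(const char *name, u32 type, u32 mask)
        {
                request_module("%s", name);

                if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) &
                      mask & CRYPTO_ALG_NEED_FALLBACK))
                        request_module("%s-all", name);
        }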
  19. 18 Feb, 2009 2 commits
    • crypto: api - Fix crypto_alloc_tfm/crypto_create_tfm return convention · 3f683d61
      Herbert Xu authored
      This is based on a report and patch by Geert Uytterhoeven.
      
      The functions crypto_alloc_tfm and crypto_create_tfm return a
      pointer that needs to be adjusted by the caller when successful
      and otherwise an error value.  This means that the caller has
      to check for the error and only perform the adjustment if the
      pointer returned is valid.
      
      Since all callers want to make the adjustment and we know how
      to adjust it ourselves, it's much easier to just return the adjusted
      pointer directly.
      
      The only caveat is that we have to return a void * instead of
      struct crypto_tfm *.  However, this isn't that bad because both
      of these functions are for internal use only (by types code like
      shash.c, not even algorithms code).
      
      This patch also moves crypto_alloc_tfm into crypto/internal.h
      (crypto_create_tfm is already there) to reflect this.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
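      In caller terms the change looks roughly like this; struct crypto_example
      and its cast helper are hypothetical stand-ins for a frontend's own type,
      and the crypto_alloc_tfm usage reflects the post-change API as recalled
      rather than a quote from the tree.

        #include <linux/err.h>
        #include <linux/kernel.h>
        #include <linux/crypto.h>
        #include "internal.h"   /* crypto_alloc_tfm() */

        struct crypto_type;

        /* Hypothetical upper-level type used for illustration. */
        struct crypto_example {
                struct crypto_tfm base;
        };

        static inline struct crypto_example *__crypto_example_cast(struct crypto_tfm *tfm)
        {
                return container_of(tfm, struct crypto_example, base);
        }

        /* Old convention: check for an error first, then adjust by hand. */
        static struct crypto_example *example_alloc_old(const char *name,
                                                        const struct crypto_type *frontend,
                                                        u32 type, u32 mask)
        {
                struct crypto_tfm *tfm = crypto_alloc_tfm(name, frontend, type, mask);

                if (IS_ERR(tfm))
                        return ERR_CAST(tfm);
                return __crypto_example_cast(tfm);
        }

        /*
         * New convention: crypto_alloc_tfm returns a void * that is either
         * already adjusted or an ERR_PTR value, so the caller returns it as is.
         */
        static struct crypto_example *example_alloc_new(const char *name,
                                                        const struct crypto_type *frontend,
                                                        u32 type, u32 mask)
        {
                return crypto_alloc_tfm(name, frontend, type, mask);
        }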
    • crypto: api - crypto_alg_mod_lookup either tested or untested · ff753308
      Herbert Xu authored
      As it stands crypto_alg_mod_lookup will search either tested or
      untested algorithms, but never both at the same time.  However,
      we need exactly that when constructing givcipher and aead objects,
      so this patch adds support for it by setting the tested bit in
      type but clearing it in mask.  This combination is currently
      unused.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
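      The type/mask combination described above, in a hypothetical caller
      (crypto_alg_mod_lookup itself lives in crypto/internal.h):

        #include <linux/crypto.h>
        #include "internal.h"   /* crypto_alg_mod_lookup() */

        static struct crypto_alg *example_lookup_any(const char *name,
                                                     u32 type, u32 mask)
        {
                /*
                 * CRYPTO_ALG_TESTED set in type but cleared in mask: the bit is
                 * never compared, so tested and untested algorithms both match,
                 * and the lookup skips its usual tested-only defaulting.
                 */
                return crypto_alg_mod_lookup(name,
                                             type | CRYPTO_ALG_TESTED,
                                             mask & ~CRYPTO_ALG_TESTED);
        }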
  20. 05 Feb, 2009 1 commit
  21. 25 Dec, 2008 2 commits
    • crypto: api - Rebirth of crypto_alloc_tfm · 7b0bac64
      Herbert Xu authored
      This patch reintroduces a completely revamped crypto_alloc_tfm.
      The biggest change is that we now take two crypto_type objects
      when allocating a tfm, a frontend and a backend.  In fact this
      simply formalises what we've been doing behind the API's back.
      
      For example, as it stands crypto_alloc_ahash may use an
      actual ahash algorithm or a crypto_hash algorithm.  Putting
      this in the API allows us to do this much more cleanly.
      
      The existing types will be converted across gradually.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
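      A sketch of the new shape; the prototype is as recalled from this point in
      the API and the wrapper is hypothetical.  The frontend crypto_type is passed
      in explicitly, while the backend is whatever crypto_type the algorithm found
      by the lookup carries.

        #include <linux/crypto.h>

        struct crypto_type;

        /* Prototype as recalled (declared in crypto/internal.h at the time): */
        struct crypto_tfm *crypto_alloc_tfm(const char *alg_name,
                                            const struct crypto_type *frontend,
                                            u32 type, u32 mask);

        /* A frontend (for instance the ahash type code) passes its own type: */
        static struct crypto_tfm *example_alloc_with_frontend(const char *alg_name,
                                                              const struct crypto_type *frontend,
                                                              u32 type, u32 mask)
        {
                return crypto_alloc_tfm(alg_name, frontend, type, mask);
        }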
    • crypto: api - Move type exit function into crypto_tfm · 4a779486
      Herbert Xu authored
      The type exit function needs to undo any allocations done by the type
      init function.  However, the type init function may differ depending
      on the upper-level type of the transform (e.g., a crypto_blkcipher
      instantiated as a crypto_ablkcipher).
      
      So we need to move the exit function out of the lower-level
      structure and into crypto_tfm itself.
      
      As it stands this is a no-op since nobody uses exit functions at
      all.  However, all cases where a lower-level type is instantiated
      as a different upper-level type (such as blkcipher as ablkcipher)
      will be converted such that they allocate the underlying transform
      and use that instead of casting (e.g., a crypto_ablkcipher cast
      into a crypto_blkcipher).  That will need to use a different exit
      function depending on the upper-level type.
      
      This patch also allows the type init/exit functions to call (or not)
      cra_init/cra_exit instead of always calling them from the top level.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
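      In effect the transform itself now carries its teardown hook; a simplified
      sketch of the free-path side (the exit member is the one this change moves
      into struct crypto_tfm):

        #include <linux/crypto.h>

        /* Simplified: undo whatever the frontend-specific type init set up. */
        static void example_exit_tfm(struct crypto_tfm *tfm)
        {
                if (tfm->exit)
                        tfm->exit(tfm);
        }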
  22. 29 Aug, 2008 2 commits
  23. 10 Jul, 2008 1 commit
  24. 21 Apr, 2008 1 commit
  25. 10 Jan, 2008 1 commit
    • [CRYPTO] skcipher: Create default givcipher instances · b9c55aa4
      Herbert Xu authored
      This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
      return algorithms that are capable of generating their own IVs through
      givencrypt and givdecrypt.  Each algorithm may specify its default IV
      generator through the geniv field.
      
      For algorithms that do not set the geniv field, the blkcipher layer will
      pick a default.  Currently it's chainiv for synchronous algorithms and
      eseqiv for asynchronous algorithms.  Note that if these wrappers do not
      work on an algorithm then that algorithm must specify its own geniv or
      it can't be used at all.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
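      The default choice reduces to a check of the async flag; a sketch of the
      rule, shown as an assumption rather than a quote of the blkcipher helper:

        #include <linux/crypto.h>

        /* Used when an algorithm leaves its geniv field unset. */
        static const char *example_default_geniv(const struct crypto_alg *alg)
        {
                return (alg->cra_flags & CRYPTO_ALG_ASYNC) ? "eseqiv" : "chainiv";
        }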
  26. 19 Oct, 2007 1 commit
  27. 11 Jul, 2007 1 commit
  28. 19 May, 2007 1 commit
  29. 06 Feb, 2007 2 commits
  30. 07 Dec, 2006 1 commit
  31. 11 Oct, 2006 1 commit
  32. 21 Sep, 2006 3 commits
    • [CRYPTO] api: Add crypto_comp and crypto_has_* · fce32d70
      Herbert Xu authored
      This patch adds the crypto_comp type to complete the compile-time checking
      conversion.  The functions crypto_has_alg and crypto_has_cipher, etc. are
      also added to replace crypto_alg_available.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
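      Typical use of the new predicates (the algorithm names are only examples):

        #include <linux/crypto.h>

        static bool example_algs_available(void)
        {
                /* Replaces the old crypto_alg_available() style of check. */
                if (!crypto_has_alg("sha256", 0, 0))
                        return false;

                /* Type-specific convenience wrapper added by the same change. */
                return crypto_has_cipher("aes", 0, 0);
        }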
    • [CRYPTO] api: Added crypto_type support · e853c3cf
      Herbert Xu authored
      This patch adds the crypto_type structure which will be used for all new
      crypto algorithm types, beginning with block ciphers.
      
      The primary purpose of this abstraction is to allow different crypto_type
      objects for crypto algorithms of the same type; in particular, there will
      be different crypto_type objects for asynchronous algorithms.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
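      A deliberately reduced, hypothetical sketch of what such a per-type
      descriptor provides; the real struct crypto_type carries more hooks and has
      changed over time.

        #include <linux/crypto.h>

        /* Hypothetical reduced form of a per-type descriptor. */
        struct example_crypto_type {
                /* size of the type-specific context to allocate with the tfm */
                unsigned int (*ctxsize)(struct crypto_alg *alg, u32 type, u32 mask);
                /* type-specific tfm initialisation */
                int (*init_tfm)(struct crypto_tfm *tfm);
        };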
    • [CRYPTO] api: Added crypto_alloc_base · 6d7d684d
      Herbert Xu authored
      Up until now all crypto transforms have been of the same type, struct
      crypto_tfm, regardless of whether they are ciphers, digests, or other
      types.  As a result of that, we check the types at run-time before
      each crypto operation.
      
      This is rather cumbersome.  We could instead use different C types for
      each crypto type to ensure that the correct types are used at compile
      time.  That is, we would have crypto_cipher/crypto_digest instead of
      just crypto_tfm.  The appropriate type would then be required for the
      actual operations such as crypto_digest_digest.
      
      Now that we have the type/mask fields when looking up algorithms, it
      is easy to request an algorithm of the precise type that the user
      wants.  However, crypto_alloc_tfm currently does not expose these new
      attributes.
      
      This patch introduces the function crypto_alloc_base which will carry
      these new parameters.  It will be renamed to crypto_alloc_tfm once
      all existing users have been converted.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
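      Usage as introduced (the algorithm name is just an example); the type/mask
      pair pins down exactly which kind of transform the lookup may return.

        #include <linux/crypto.h>
        #include <linux/err.h>

        static int example_check_aes(void)
        {
                struct crypto_tfm *tfm;

                /* Ask specifically for a cipher-type "aes" transform. */
                tfm = crypto_alloc_base("aes", CRYPTO_ALG_TYPE_CIPHER,
                                        CRYPTO_ALG_TYPE_MASK);
                if (IS_ERR(tfm))
                        return PTR_ERR(tfm);

                crypto_free_tfm(tfm);
                return 0;
        }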