1. 23 Dec, 2018 1 commit
    • crypto: skcipher - remove remnants of internal IV generators · c79b411e
      Eric Biggers authored
      Remove dead code related to internal IV generators, which are no longer
      used since they've been replaced with the "seqiv" and "echainiv"
      templates.  The removed code includes:
      - The "givcipher" (GIVCIPHER) algorithm type.  No algorithms are
        registered with this type anymore, so it's unneeded.
      - The "const char *geniv" member of aead_alg, ablkcipher_alg, and
        blkcipher_alg.  A few algorithms still set this, but it isn't used
        anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
        Just hardcode "<default>" or "<none>" in those cases.
      - The 'skcipher_givcrypt_request' structure, which is never used.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  2. 20 Apr, 2018 1 commit
  3. 04 Aug, 2017 1 commit
    • crypto: algapi - make crypto_xor() take separate dst and src arguments · 45fe93df
      Ard Biesheuvel authored
      There are quite a number of occurrences in the kernel of the pattern
        if (dst != src)
                memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
        crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);
      or
        crypto_xor(keystream, src, nbytes);
        memcpy(dst, keystream, nbytes);
      where crypto_xor() is preceded or followed by a memcpy() invocation
      that is only there because crypto_xor() uses its output parameter as
      one of the inputs. To avoid having to add new instances of this pattern
      in the arm64 code, which will be refactored to implement non-SIMD
      fallbacks, add an alternative implementation called crypto_xor_cpy(),
      taking separate input and output arguments. This removes the need for
      the separate memcpy().
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
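A minimal user-space sketch of the idea (hypothetical helpers, byte-wise rather than the kernel's word-stride implementation) shows why a separate-destination xor removes the extra memcpy():

```c
#include <stddef.h>

/* Toy stand-in for crypto_xor(): dst ^= src, in place. */
static void xor_in_place(unsigned char *dst, const unsigned char *src,
                         size_t len)
{
        while (len--)
                *dst++ ^= *src++;
}

/*
 * Toy stand-in for crypto_xor_cpy(): dst = src1 ^ src2.  The
 * destination is distinct from both inputs, so the caller no longer
 * needs a memcpy() before or after the xor.
 */
static void xor_cpy(unsigned char *dst, const unsigned char *src1,
                    const unsigned char *src2, size_t len)
{
        while (len--)
                *dst++ = *src1++ ^ *src2++;
}
```

With this, the second pattern in the message collapses to a single xor_cpy(dst, keystream, src, nbytes) call.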
  4. 09 Mar, 2017 1 commit
    • crypto: ctr - Propagate NEED_FALLBACK bit · d2c2a85c
      Marcelo Cerri authored
      When requesting a fallback algorithm, we should propagate the
      NEED_FALLBACK bit when searching for the underlying algorithm.
      This prevents drivers from allocating unnecessary fallbacks that
      are never called. For instance, currently the vmx-crypto driver will use
      the following chain of calls when calling the fallback implementation:
      p8_aes_ctr -> ctr(p8_aes) -> aes-generic
      However p8_aes will always delegate its calls to aes-generic. With this
      patch, p8_aes_ctr will be able to use ctr(aes-generic) directly as its
      fallback. The same applies to aes_s390.
      Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
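The effect of propagating the bit can be sketched with a toy lookup; everything below (toy_alg, toy_lookup, the flag value) is hypothetical illustration rather than the kernel's API. An algorithm is skipped when it carries a flag the caller's mask forbids, so adding NEED_FALLBACK to the mask steers the template past p8_aes to aes-generic:

```c
#include <stddef.h>

#define NEED_FALLBACK 0x100     /* stand-in for CRYPTO_ALG_NEED_FALLBACK */

struct toy_alg {
        const char *name;
        unsigned int flags;
};

/*
 * Return the first algorithm none of whose flags are forbidden by the
 * caller's mask (a much-simplified model of the kernel's lookup).
 */
static const char *toy_lookup(const struct toy_alg *tbl, size_t n,
                              unsigned int mask)
{
        for (size_t i = 0; i < n; i++)
                if (!(tbl[i].flags & mask))
                        return tbl[i].name;
        return NULL;
}
```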
  5. 11 Feb, 2017 1 commit
    • crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic · db91af0f
      Ard Biesheuvel authored
      Instead of unconditionally forcing 4 byte alignment for all generic
      chaining modes that rely on crypto_xor() or crypto_inc() (which may
      result in unnecessary copying of data when the underlying hardware
      can perform unaligned accesses efficiently), make those functions
      deal with unaligned input explicitly, but only if the Kconfig symbol
      HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
      the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.
      For crypto_inc(), this simply involves making the 4-byte stride
      conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
      it typically operates on 16 byte buffers.
      For crypto_xor(), an algorithm is implemented that simply runs through
      the input using the largest strides possible if unaligned accesses are
      allowed. If they are not, an optimal sequence of memory accesses is
      emitted that takes the relative alignment of the input buffers into
      account, e.g., if the relative misalignment of dst and src is 4 bytes,
      the entire xor operation will be completed using 4 byte loads and stores
      (modulo unaligned bits at the start and end). Note that all expressions
      involving misalign are simply eliminated by the compiler when
      HAVE_EFFICIENT_UNALIGNED_ACCESS is set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
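The approach can be sketched outside the kernel; the helper below is a hypothetical simplification (the kernel additionally keys the fast path on the Kconfig symbol and handles relative misalignment more cleverly), using word-sized strides only when both buffers happen to be word aligned:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of an alignment-aware xor: word strides when both pointers
 * are word aligned, byte accesses otherwise. */
static void xor_words(unsigned char *dst, const unsigned char *src,
                      size_t len)
{
        uintptr_t misalign = ((uintptr_t)dst | (uintptr_t)src) %
                             sizeof(unsigned long);

        if (misalign == 0) {
                /* aligned case: xor one machine word at a time */
                while (len >= sizeof(unsigned long)) {
                        *(unsigned long *)dst ^= *(const unsigned long *)src;
                        dst += sizeof(unsigned long);
                        src += sizeof(unsigned long);
                        len -= sizeof(unsigned long);
                }
        }
        /* leftover bytes, or unaligned buffers */
        while (len--)
                *dst++ ^= *src++;
}
```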
  6. 01 Nov, 2016 2 commits
  7. 18 Jul, 2016 1 commit
  8. 26 Nov, 2014 1 commit
  9. 24 Nov, 2014 1 commit
  10. 04 Feb, 2013 1 commit
  11. 08 Jan, 2013 1 commit
  12. 26 May, 2010 1 commit
  13. 13 Aug, 2009 1 commit
    • crypto: ctr - Use chainiv on raw counter mode · aef27136
      Herbert Xu authored
      Raw counter mode only works with chainiv, which is no longer
      the default IV generator on SMP machines.  This broke raw counter
      mode as it can no longer instantiate as a givcipher.
      This patch fixes it by always picking chainiv on raw counter
      mode.  This is based on the diagnosis and a patch by Huang.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  14. 10 Jan, 2008 6 commits
    • [CRYPTO] seqiv: Add Sequence Number IV Generator · 0a270321
      Herbert Xu authored
      This generator generates an IV based on a sequence number by xoring it
      with a salt.  This algorithm is mainly useful for CTR and similar modes.
      This patch also sets it as the default IV generator for ctr.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
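The generator is simple to model; the helper below is hypothetical (seqiv_make_iv is not a kernel function) and assumes the salt is ivsize bytes with the sequence number placed big-endian in the low-order bytes of the IV:

```c
#include <stddef.h>
#include <stdint.h>

/* IV = salt ^ big-endian sequence number: unique per request without
 * any per-request randomness. */
static void seqiv_make_iv(unsigned char *iv, const unsigned char *salt,
                          size_t ivsize, uint64_t seq)
{
        for (size_t i = 0; i < ivsize; i++) {
                /* sequence-number byte for this position, or 0 */
                unsigned char s = (i >= ivsize - sizeof(seq))
                        ? (unsigned char)(seq >> (8 * (ivsize - 1 - i)))
                        : 0;
                iv[i] = salt[i] ^ s;
        }
}
```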
    • [CRYPTO] ctr: Refactor into ctr and rfc3686 · 5311f248
      Herbert Xu authored
      As discussed previously, this patch moves the basic CTR functionality
      into a chainable algorithm called ctr.  The IPsec-specific variant of
      it is now placed on top with the name rfc3686.
      So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec
      variant will be called rfc3686(ctr(aes)).  This patch also adjusts
      gcm accordingly.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • [CRYPTO] ctr: Fix multi-page processing · 0971eb0d
      Herbert Xu authored
      When the data spans across a page boundary, CTR may incorrectly process
      a partial block in the middle because the blkcipher walking code may
      supply partial blocks in the middle as long as the total length of the
      supplied data is more than a block.  CTR is supposed to return any unused
      partial block in that case to the walker.
      This patch fixes this by doing exactly that, returning partial blocks to
      the walker unless we received less than a block-worth of data to
      start with.
      This also allows us to optimise the bulk of the processing since we no
      longer have to worry about partial blocks until the very end.
      Thanks to Tan Swee Heng for fixes and actually testing this :)
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
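The fix boils down to a small amount of arithmetic; a hypothetical helper for the "how many bytes may we process now" decision could look like:

```c
#include <stddef.h>

/*
 * Process only whole blocks and hand the remainder back to the walker,
 * unless we received less than one block to start with (the final,
 * genuinely partial block).
 */
static size_t ctr_bytes_to_process(size_t nbytes, size_t bsize)
{
        if (nbytes < bsize)                /* final partial block */
                return nbytes;
        return nbytes - (nbytes % bsize);  /* whole blocks only */
}
```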
    • [CRYPTO] ctr: Use crypto_inc and crypto_xor · 3f8214ea
      Herbert Xu authored
      This patch replaces the custom inc/xor in CTR with the generic functions.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
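crypto_inc() is easy to model; the byte-wise sketch below (the kernel version also works in larger strides where it can) increments a big-endian counter, propagating the carry from the last byte upward:

```c
#include <stddef.h>

/* Increment a big-endian counter held in a byte buffer. */
static void counter_inc(unsigned char *ctr, size_t size)
{
        for (size_t i = size; i-- > 0; )
                if (++ctr[i] != 0)      /* stop once a byte doesn't wrap */
                        break;
}
```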
    • [CRYPTO] ctr: Add countersize · 41fdab3d
      Joy Latten authored
      This patch adds countersize to CTR mode.
      The template is now ctr(algo,noncesize,ivsize,countersize).
      For example, ctr(aes,4,8,4) indicates the counterblock
      will be composed of a salt/nonce that is 4 bytes, an iv
      that is 8 bytes, and a counter that is 4 bytes.
      When noncesize + ivsize < blocksize, CTR initializes the
      last (blocksize - ivsize - noncesize) bytes of the block to
      zero.  Otherwise the counter block is composed of the IV
      (and nonce if necessary).
      If noncesize + ivsize == blocksize, this indicates that the
      user is passing in the entire counterblock.  Countersize then
      indicates the number of bytes in the counterblock to use as
      the counter for incrementing.  CTR will increment the counter
      portion by 1 and begin encryption with that value.
      Note that CTR assumes the counter portion of the block that
      will be incremented is stored in big endian.
      Signed-off-by: Joy Latten <latten@austin.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
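The layout rules above can be sketched as a hypothetical block builder (not the kernel template): nonce, then IV, then zeroes, with the counter occupying the trailing countersize bytes of the block:

```c
#include <stddef.h>
#include <string.h>

/* Build the initial counter block: nonce | iv | zero padding.  The
 * counter is the last bytes of the block and starts at zero here. */
static void ctr_init_block(unsigned char *block, size_t blocksize,
                           const unsigned char *nonce, size_t noncesize,
                           const unsigned char *iv, size_t ivsize)
{
        memcpy(block, nonce, noncesize);
        memcpy(block + noncesize, iv, ivsize);
        /* zero the remainder; the counter lives in the trailing bytes */
        memset(block + noncesize + ivsize, 0,
               blocksize - noncesize - ivsize);
}
```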
    • [CRYPTO] ctr: Add CTR (Counter) block cipher mode · 23e353c8
      Joy Latten authored
      This patch implements CTR mode for IPsec.
      It is based off of RFC 3686.
      Please note:
      1. CTR turns a block cipher into a stream cipher.
      Encryption is done in blocks; however, the last block
      may be a partial block.
      A "counter block" is encrypted, creating a keystream
      that is xor'ed with the plaintext. The counter portion
      of the counter block is incremented after each block
      of plaintext is encrypted.
      Decryption is performed in the same manner.
      2. The CTR counterblock is composed of
              nonce + IV + counter
      The size of the counterblock is equivalent to the
      blocksize of the cipher.
              sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize
      The CTR template requires the name of the cipher
      algorithm, the size of the nonce, and the size of the iv.
      So for example, ctr(aes,4,8)
      specifies the counterblock will be composed of 4 bytes
      from a nonce, 8 bytes from the iv, and 4 bytes for the counter,
      since aes has a blocksize of 16 bytes.
      3. The counter portion of the counter block is stored
      in big endian for conformance to RFC 3686.
      Signed-off-by: Joy Latten <latten@austin.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
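Putting the pieces of this description together, a self-contained toy (an insecure stand-in block cipher, 16-byte blocks, a 32-bit big-endian counter) shows the CTR data flow, including the final partial block and the fact that decryption is the identical operation:

```c
#include <stddef.h>

#define BLK 16

/* Toy "block cipher": xors the block with the key.  Insecure; it only
 * serves to show the CTR data flow. */
static void toy_blkcipher(unsigned char out[BLK],
                          const unsigned char in[BLK],
                          const unsigned char key[BLK])
{
        for (int i = 0; i < BLK; i++)
                out[i] = in[i] ^ key[i];
}

/* Big-endian increment of the trailing 4-byte counter. */
static void ctr_inc32(unsigned char block[BLK])
{
        for (int i = BLK; i-- > BLK - 4; )
                if (++block[i] != 0)
                        break;
}

/*
 * CTR: encrypt the counter block to get a keystream block, xor it with
 * the data, bump the counter.  The final block may be partial, in which
 * case only the leading keystream bytes are used.  Decryption is the
 * same operation.
 */
static void toy_ctr_crypt(unsigned char *dst, const unsigned char *src,
                          size_t len, unsigned char ctrblk[BLK],
                          const unsigned char key[BLK])
{
        unsigned char ks[BLK];

        while (len) {
                size_t n = len < BLK ? len : BLK;

                toy_blkcipher(ks, ctrblk, key);
                for (size_t i = 0; i < n; i++)
                        dst[i] = src[i] ^ ks[i];
                ctr_inc32(ctrblk);
                dst += n;
                src += n;
                len -= n;
        }
}
```

Running it on 20 bytes exercises one full block plus a 4-byte partial block; applying the routine twice with the same initial counter restores the plaintext.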