Commit 5d26a105b5 ("crypto: prefix module autoloading with "crypto-"")
changed the automatic module loading when requesting crypto algorithms
to prefix all module requests with "crypto-". This requires all crypto
modules to have a crypto-specific module alias even if their file name
would otherwise match the requested crypto algorithm.
Even though commit 5d26a105b5 added those aliases for a vast number of
modules, it missed a few. Add the required MODULE_ALIAS_CRYPTO
annotations to those files so they get loaded automatically again.
This fixes, e.g., requesting 'ecb(blowfish-generic)', which used to work
with kernels v3.18 and below.
Also change MODULE_ALIAS() lines to MODULE_ALIAS_CRYPTO(). The former
won't work for crypto modules any more.
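For sha512_generic, for example, the required annotations amount to
(a sketch; each module aliases its .cra_name and .cra_driver_name):

  MODULE_ALIAS_CRYPTO("sha512");
  MODULE_ALIAS_CRYPTO("sha512-generic");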
Fixes: 5d26a105b5 ("crypto: prefix module autoloading with "crypto-"")
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This prefixes all crypto module loading with "crypto-" so we never run
the risk of exposing module auto-loading to userspace via a crypto API,
as demonstrated by Mathias Krause:
https://lkml.org/lkml/2013/3/4/70
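Concretely, the algorithm lookup now issues (sketch of the change in
crypto/api.c):

  request_module("crypto-%s", name);

rather than request_module(name), and MODULE_ALIAS_CRYPTO() declares
both the plain and the "crypto-"-prefixed alias, so properly
annotated modules keep auto-loading.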
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Pull /dev/random updates from Ted Ts'o:
"This adds a memzero_explicit() call which is guaranteed not to be
optimized away by GCC. This is important when we are wiping
cryptographically sensitive material"
* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
crypto: memzero_explicit - make sure to clear out sensitive data
random: add and use memzero_explicit() for clearing data
Recently, in commit 13aa93c70e71 ("random: add and use memzero_explicit()
for clearing data"), we found that GCC may optimize some memset()
cases away when it detects that a stack variable is no longer used
and is going out of scope. This can happen, for example, when we are
clearing out sensitive information such as keying material or, say,
intermediate results from crypto computations.
With the help of Coccinelle, we can find and fix such occurrences in
the crypto subsystem as well. Julia Lawall provided the following
Coccinelle program:
@@
type T;
identifier x;
@@

T x;
... when exists
    when any
-memset
+memzero_explicit
  (&x,
-  0,
   ...)
... when != x
    when strict

@@
type T;
identifier x;
@@

T x[...];
... when exists
    when any
-memset
+memzero_explicit
  (x,
-  0,
   ...)
... when != x
    when strict
Therefore, make use of the drop-in replacement memzero_explicit() for
exactly such cases instead of using memset().
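A minimal sketch of the pattern being fixed (hypothetical local
buffer, kernel style):

  u8 key[SHA512_DIGEST_SIZE];

  /* ... derive and use the key ... */

  /* memset(key, 0, sizeof(key)) is a dead store once key goes out
   * of scope and may be elided by GCC: */
  memzero_explicit(key, sizeof(key));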
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Like SHA1, use get_unaligned_be*() on the raw input data.
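For sha512_generic, the resulting LOAD_OP looks roughly like:

  static inline void LOAD_OP(int I, u64 *W, const u8 *input)
  {
          W[I] = get_unaligned_be64((__u64 *)input + I);
  }

get_unaligned_be64() does the big-endian load without assuming the
input is 64-bit aligned, unlike dereferencing a cast to __be64 *.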
Reported-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
'sha512_generic' should set the driver name now that there is an
alternative sha512 provider (sha512_ssse3).
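In other words, the algorithm now advertises both names (sketch of
the relevant fields):

  .cra_name        = "sha512",
  .cra_driver_name = "sha512-generic",

so a plain "sha512" request can be served by any provider while the
generic implementation remains individually selectable.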
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Other SHA512 routines may need to use the generic routine when the
FPU is not available.
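A sketch of the intent (the exported symbol name is assumed here):

  EXPORT_SYMBOL(crypto_sha512_update);

so that, e.g., an accelerated driver can fall back to the generic
code when irq_fpu_usable() says the FPU cannot be touched.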
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Combine all shash algorithms into one array and register them with the
new crypto_[un]register_shashes() functions. This simplifies the
init/exit code.
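The pattern, roughly (both algorithms in one array):

  static struct shash_alg sha512_algs[2] = { /* sha512, sha384 */ };

  static int __init sha512_generic_mod_init(void)
  {
          return crypto_register_shashes(sha512_algs,
                                         ARRAY_SIZE(sha512_algs));
  }

  static void __exit sha512_generic_mod_fini(void)
  {
          crypto_unregister_shashes(sha512_algs,
                                    ARRAY_SIZE(sha512_algs));
  }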
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current code only increments the upper 64 bits of the SHA-512 byte
counter when the number of bytes hashed happens to hit 2^64 exactly.
This patch increments the upper 64 bits whenever the lower 64 bits
overflow.
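The fix is the usual unsigned wrap-around check (sketch of the idiom):

  sctx->count[0] += len;
  if (sctx->count[0] < len)       /* the low 64 bits wrapped */
          sctx->count[1]++;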
Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Cc: stable@kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the standard ror64() instead of a hand-written rotate.
There is no standard ror64(), so create it.
The only difference is that the shift value is an "unsigned int"
instead of uint64_t (for which there is no reason). gcc then starts
to emit native ROR instructions, which for some reason it currently
does not. This should make the code faster.
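For reference, the helper is the classic rotate idiom (as added to
linux/bitops.h):

  static inline __u64 ror64(__u64 word, unsigned int shift)
  {
          return (word >> shift) | (word << (64 - shift));
  }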
The patch survives the in-tree crypto test and a ping flood with
hmac(sha512) enabled.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unfortunately, in reducing W from 80 to 16 we ended up unrolling the
loop twice. As gcc has issues dealing with 64-bit ops on i386, this
means that we end up using even more stack space (>1K).
This patch solves the W reduction by moving LOAD_OP/BLEND_OP
into the loop itself, thus avoiding the need to duplicate it.
While the stack space still isn't great (>0.5K), it is at least in
the same ballpark as the amount of stack used for our C sha1
implementation.
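The shape of the change, roughly (a sketch, not the literal diff):

  for (i = 0; i < 80; i++) {
          u64 t1, t2;

          if (i < 16)
                  LOAD_OP(i, W, input);   /* schedule from the data */
          else
                  BLEND_OP(i, W);         /* schedule from W itself */

          /* ... one SHA-512 round using W[i & 15] ... */
  }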
Note that this patch basically reverts to the original code so
the diff looks bigger than it really is.
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous patch used the modulus operator over a power of 2
unnecessarily, which may produce suboptimal binary code. This patch
changes them to binary ANDs instead.
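Illustratively:

  W[i % 16]   /* remainder over a power of 2 */
  W[i & 15]   /* the same slot, written as a mask */

For a non-negative index both select the same element, but i is a
signed int, so the compiler has to emit extra instructions to honor
C remainder semantics unless it can prove i >= 0; the AND is a
single instruction either way.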
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For rounds 16--79, W[i] only depends on W[i - 2], W[i - 7], W[i - 15]
and W[i - 16]. Consequently, keeping the entire W[80] array on the
stack is unnecessary; only 16 values are really needed.
Using W[16] instead of W[80] greatly reduces stack usage
(~750 bytes down to ~340 bytes on x86_64).
Line by line explanation:
* BLEND_OP
  the array is "circular" now; all indexes have to be taken modulo 16
  (see the sketch just after this list). The round number is positive,
  so the remainder operation holds no surprises.
* the initial full message scheduling is trimmed to the first 16
  values, which come from the data block; the rest is calculated just
  before it is needed.
* the original loop body is an unrolled version of the new SHA512_0_15
  and SHA512_16_79 macros; unrolling was done to avoid explicit
  variable renaming. Otherwise it is the very same code after
  preprocessing.
See sha1_transform() code which does the same trick.
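A sketch of the circular BLEND_OP (the modulo-16 indexing that the
binary-AND patch above then turns into masks):

  #define BLEND_OP(I, W) \
          (W[I % 16] += s1(W[(I-2) % 16]) + W[(I-7) % 16] + \
                        s0(W[(I-15) % 16]))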
The patch survives the in-tree crypto test and the original bug
report test (ping flood with hmac(sha512)).
See FIPS 180-2 for the SHA-512 definition:
http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit f9e2bca6c2,
aka "crypto: sha512 - Move message schedule W[80] to static percpu area",
created a global message schedule area.
If sha512_update() is ever entered twice concurrently, the hash will
be silently calculated incorrectly.
Probably the easiest way to notice incorrect hashes being calculated
is to run two ping floods over AH with hmac(sha512):
#!/usr/sbin/setkey -f
flush;
spdflush;
add IP1 IP2 ah 25 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000025;
add IP2 IP1 ah 52 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000052;
spdadd IP1 IP2 any -P out ipsec ah/transport//require;
spdadd IP2 IP1 any -P in ipsec ah/transport//require;
XfrmInStateProtoError will start ticking with -EBADMSG being returned
from ah_input(). This never happens with, say, hmac(sha1).
With patch applied (on BOTH sides), XfrmInStateProtoError does not tick
with multiple bidirectional ping flood streams like it doesn't tick
with SHA-1.
After this patch, sha512_transform() will use ~750 bytes of stack on
x86_64. This is OK for simple loads; for something heavier, the stack
reduction will be done separately.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch replaces the 32-bit counters in sha512_generic with
64-bit counters. It also switches the bit count to the simpler
byte count.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch renames struct sha512_ctx and exports it as struct
sha512_state so that other sha512 implementations can use it
as the reference structure for exporting their state.
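The exported structure, as other implementations are expected to use
it (a sketch of the fields):

  struct sha512_state {
          u64 count[2];
          u64 state[SHA512_DIGEST_SIZE / 8];
          u8 buf[SHA512_BLOCK_SIZE];
  };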
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes sha512 and sha384 to the new shash interface.
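Under shash, the operations are supplied per algorithm. A sketch of
the sha512 half (sha384 differs only in name, digest size and initial
values):

  static struct shash_alg sha512 = {
          .digestsize     = SHA512_DIGEST_SIZE,
          .init           = sha512_init,
          .update         = sha512_update,
          .final          = sha512_final,
          .descsize       = sizeof(struct sha512_state),
          .base           = {
                  .cra_name       = "sha512",
                  .cra_flags      = CRYPTO_ALG_TYPE_SHASH,
                  .cra_blocksize  = SHA512_BLOCK_SIZE,
                  .cra_module     = THIS_MODULE,
          }
  };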
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The message schedule W (u64[80]) is too big for the stack. In order
for this algorithm to be used with shash, it is moved to a static
percpu area.
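That is, roughly:

  static DEFINE_PER_CPU(u64[80], msg_schedule);

with sha512_transform() picking the area up via
get_cpu_var(msg_schedule) and releasing it with put_cpu_var() once
the rounds are done.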
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Thu, Mar 27, 2008 at 03:40:36PM +0100, Bodo Eggert wrote:
> Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> wrote:
>
> > This patch cleans up the crypto code, replacing the init() and
> > fini() with <algorithm name>_init/_fini
>
> This part is OK.
>
> > or init/fini_<algorithm name> (if the
> > <algorithm name>_init/_fini exist)
>
> Having init_foo and foo_init won't be a good thing, will it? I'd start
> confusing them.
>
> What about foo_modinit instead?
Thanks for the suggestion. The init() is replaced with
<algorithm name>_mod_init() and fini() is replaced with
<algorithm name>_mod_fini().
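For this file the result looks like:

  static int __init sha512_generic_mod_init(void) { ... }
  static void __exit sha512_generic_mod_fini(void) { ... }

  module_init(sha512_generic_mod_init);
  module_exit(sha512_generic_mod_fini);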
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rename sha512 to sha512_generic and add a MODULE_ALIAS for sha512
so all sha512 implementations can be loaded automatically.
Keep the broken tabs so git recognizes this as a rename.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>