kernel-fxtec-pro1x/include/crypto/chacha20.h
Eric Biggers ad28c2ba43 crypto: chacha20 - Fix chacha20_block() keystream alignment (again)
[ Upstream commit a5e9f557098e54af44ade5d501379be18435bfbf ]

In commit 9f480faec5 ("crypto: chacha20 - Fix keystream alignment for
chacha20_block()"), I had missed that chacha20_block() can be called
directly on the buffer passed to get_random_bytes(), which can have any
alignment.  So, while my commit didn't break anything, it didn't fully
solve the alignment problems.

Revert my solution and just update chacha20_block() to use
put_unaligned_le32(), so the output buffer need not be aligned.
This is simpler, and on many CPUs it's the same speed.
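
A minimal sketch of the put_unaligned_le32() pattern described here; the helper name and loop shape are illustrative, not the verbatim kernel code:

#include <asm/unaligned.h>
#include <linux/types.h>

/* Store the 16 output words byte-by-byte in little-endian order, so
 * 'stream' may have any alignment. Illustrative only. */
static void chacha20_write_keystream(const u32 x[16], u8 *stream)
{
	int i;

	for (i = 0; i < 16; i++)
		put_unaligned_le32(x[i], stream + i * sizeof(u32));
}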

But, I kept the 'tmp' buffers in extract_crng_user() and
_get_random_bytes() 4-byte aligned, since that alignment is actually
needed for _crng_backtrack_protect() too.
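
For illustration, a declaration along the lines the paragraph above implies (an assumption, not the verbatim random.c change): a byte buffer forced to 4-byte alignment so it can also be consumed as u32 words for backtrack protection.

#include <crypto/chacha20.h>
#include <linux/compiler.h>

/* Hypothetical sketch: 'tmp' is filled by chacha20_block(), then also
 * read as u32 words, so 4-byte alignment is still required here. */
static void example_extract(u32 state[16])
{
	u8 tmp[CHACHA20_BLOCK_SIZE] __aligned(4);

	chacha20_block(state, tmp);
	/* ... copy out, then feed 'tmp' as u32 words for backtrack
	 * protection (as _crng_backtrack_protect() does) ... */
}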

Reported-by: Stephan Müller <smueller@chronox.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-20 18:47:11 +01:00

/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Common values for the ChaCha20 algorithm
 */

#ifndef _CRYPTO_CHACHA20_H
#define _CRYPTO_CHACHA20_H

#include <crypto/skcipher.h>
#include <linux/types.h>
#include <linux/crypto.h>

#define CHACHA20_IV_SIZE	16
#define CHACHA20_KEY_SIZE	32
#define CHACHA20_BLOCK_SIZE	64

struct chacha20_ctx {
	u32 key[8];
};

void chacha20_block(u32 *state, u8 *stream);
void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv);
int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
			   unsigned int keysize);
int crypto_chacha20_crypt(struct skcipher_request *req);

#endif
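
A usage sketch of the API above (the setup is assumed; 'state' must already have been initialized, e.g. via crypto_chacha20_init()). The point of the u8 * prototype is that the output buffer need not be word-aligned:

#include <crypto/chacha20.h>

/* Generate one keystream block into a deliberately misaligned buffer;
 * safe now that chacha20_block() uses put_unaligned_le32(). */
static void demo_unaligned_keystream(u32 state[16])
{
	u8 buf[CHACHA20_BLOCK_SIZE + 1];

	chacha20_block(state, buf + 1); /* buf + 1 is typically unaligned */
}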