Pull LLVM patches from Behan Webster:
"Next set of patches to support compiling the kernel with clang.
They've been soaking in linux-next since the last merge window.
More still in the works for the next merge window..."
* tag 'llvmlinux-for-v3.16' of git://git.linuxfoundation.org/llvmlinux/kernel:
arm, unwind, LLVMLinux: Enable clang to be used for unwinding the stack
ARM: LLVMLinux: Change "extern inline" to "static inline" in glue-cache.h
all: LLVMLinux: Change DWARF flag to support gcc and clang
net: netfilter: LLVMLinux: vlais-netfilter
crypto: LLVMLinux: aligned-attribute.patch
__attribute__((aligned)) applies the default alignment for the largest scalar
type for the target ABI. gcc allows it to be applied inline to a defined type,
whereas Clang only allows it to be applied to a type definition (PR11071).
Splitting the declaration into two lines makes it more readable and works with
both compilers.
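As a minimal sketch of the pattern (made-up struct, not the actual kernel diff):

/* Before (gcc-only; per the above, clang rejects this placement, PR11071):
 *
 *   struct hash_state { unsigned int digest[5]; } state __attribute__((aligned));
 *
 * After: apply the attribute to the type definition, then declare the
 * variable on its own line; both compilers accept this. */
struct hash_state {
	unsigned int digest[5];
} __attribute__((aligned));

struct hash_state state;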
Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Test vectors were taken from the existing test for
CBC(DES3_EDE). Associated data has been added to the test vectors,
and the HMACs were computed with Crypto++. The following algorithms
are covered:
(a) "authenc(hmac(sha1),cbc(des))"
(b) "authenc(hmac(sha1),cbc(des3_ede))"
(c) "authenc(hmac(sha224),cbc(des))"
(d) "authenc(hmac(sha224),cbc(des3_ede))"
(e) "authenc(hmac(sha256),cbc(des))"
(f) "authenc(hmac(sha256),cbc(des3_ede))"
(g) "authenc(hmac(sha384),cbc(des))"
(h) "authenc(hmac(sha384),cbc(des3_ede))"
(i) "authenc(hmac(sha512),cbc(des))"
(j) "authenc(hmac(sha512),cbc(des3_ede))"
Signed-off-by: Vakul Garg <vakul@freescale.com>
[NiteshNarayanLal@freescale.com: added hooks for the missing algorithms test and tested the patch]
Signed-off-by: Nitesh Lal <NiteshNarayanLal@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With DMA-API debugging enabled, testmgr triggers a "DMA-API: device driver maps memory from stack" warning when run against a crypto HW accelerator.
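The message states only the symptom; a hedged sketch of the usual remedy, moving a buffer the driver may DMA-map off the stack (size and names illustrative, not the exact diff):

/* Before: a stack buffer handed to a driver that may DMA-map it:
 *
 *   u8 result[64];
 *
 * After: heap memory, which is legitimate to DMA-map. */
u8 *result = kmalloc(64, GFP_KERNEL);

if (!result)
	return -ENOMEM;
/* ... run the test case, then ... */
kfree(result);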
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Although the existing hash walk interface has already been used
by a number of ahash crypto drivers, it turns out that none of
them were really asynchronous. They were all essentially polling
for completion.
That's why nobody has noticed until now that the walk interface
couldn't work with a real asynchronous driver since the memory
is mapped using kmap_atomic.
As we now have a use-case for a real ahash implementation on x86,
this patch creates a minimal ahash walk interface. Basically it
just calls kmap instead of kmap_atomic and does away with the
crypto_yield call. Real ahash crypto drivers don't need to yield
since by definition they won't be hogging the CPU.
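In sketch form, a driver would consume the walk like this (entry-point names per this interface; the loop body is illustrative):

struct crypto_hash_walk walk;
int nbytes;

/* Pages are mapped with kmap, so the driver may sleep while the
 * hardware processes each chunk. */
for (nbytes = crypto_ahash_walk_first(req, &walk); nbytes > 0;
     nbytes = crypto_ahash_walk_done(&walk, 0)) {
	/* feed walk.data (nbytes bytes) to the accelerator */
}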
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
CRYPTO_USER requires CAP_NET_ADMIN for all operations. Most information
provided by CRYPTO_MSG_GETALG is also accessible through /proc/crypto
and AF_ALG. CRYPTO_MSG_GETALG should not require CAP_NET_ADMIN so that
processes without CAP_NET_ADMIN can use CRYPTO_MSG_GETALG to get cipher
details, such as cipher priorities, for AF_ALG.
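A sketch of the resulting check in the crypto_user message dispatcher (placement and form illustrative, not the exact diff):

/* Everything except CRYPTO_MSG_GETALG still needs CAP_NET_ADMIN. */
if (type != CRYPTO_MSG_GETALG && !netlink_capable(skb, CAP_NET_ADMIN))
	return -EPERM;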
Signed-off-by: Matthias-Christian Ott <ott@mirix.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix leakage of memory for struct aead_request that is allocated via
aead_request_alloc() but not released via aead_request_free().
Reported by Coverity - CID 1163869.
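The pairing the fix restores, in sketch form (the test helper is hypothetical):

struct aead_request *req = aead_request_alloc(tfm, GFP_KERNEL);

if (!req)
	return -ENOMEM;

ret = run_one_test(req);	/* hypothetical test body */

aead_request_free(req);		/* this release was missing on one path */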
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a potential memory leak in the error handling of test_aead_speed(). In case
crypto_alloc_aead() fails, the function returns without going through the
centralized cleanup path. Reported by Coverity - CID 1163870.
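In sketch form, the centralized-cleanup idiom the function should follow (labels and allocations illustrative):

buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!buf)
	return;

tfm = crypto_alloc_aead(algo, 0, 0);
if (IS_ERR(tfm)) {
	pr_err("alg: aead: failed to load transform for %s\n", algo);
	goto out;	/* the bug: an early return here skipped the cleanup */
}

/* ... run the speed measurements ... */

crypto_free_aead(tfm);
out:
kfree(buf);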
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix a potential memory leak in the error handling of test_aead_speed(). In case
the size check on the associated data length parameter fails, the function goes
through the wrong exit label. Reported by Coverity - CID 1163870.
Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
By passing a netlink socket to a more privileged executable and then
fooling that executable into writing to the socket data that happens
to be a valid netlink message, it is possible to make the privileged
executable do something it did not intend to do.
To keep this from happening, replace bare capable and ns_capable calls
with netlink_capable, netlink_net_capable and netlink_ns_capable calls,
which act the same as the previous calls except that they also verify
that the opener of the socket had the desired permissions.
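The shape of the replacement (these helpers are introduced by this series):

/* Before: checks only the current process. */
if (!capable(CAP_NET_ADMIN))
	return -EPERM;

/* After: additionally verifies that whoever opened the socket had the
 * capability, so a socket passed to a privileged process cannot smuggle
 * in operations its creator was not allowed to perform. */
if (!netlink_capable(skb, CAP_NET_ADMIN))
	return -EPERM;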
Reported-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds large test vectors for the SHA algorithms to improve code
coverage in the optimized assembly implementations. Empty test vectors are
also added, as some crypto drivers appear to have special-case handling for
empty input.
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds test cases for SHA-1, SHA-224, SHA-256 and AES-CCM with an input size
that is an exact multiple of the block size. The reason is that some
implementations use a different code path for these cases.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds an x86_64 AVX2 optimization of the SHA1 transform to
the crypto subsystem. It has been tested with the 3.14.0-rc1 kernel.
On a Haswell desktop, with turbo disabled and all CPUs running
at maximum frequency, tcrypt shows an AVX2 performance improvement
over the AVX implementation ranging from 3% for 256-byte updates to
16% for 1024-byte updates.
The patch adds sha1_avx2_transform(), plus the glue, build and
configuration changes needed for the AVX2 optimization of the
SHA1 transform.
sha1-ssse3 is a single module that provides the optimized
(SSSE3/AVX/AVX2) low-level SHA1 transform functions; the transform
function is overridden with the best variant the CPU supports. In the
AVX2 case, because performance varies across data block sizes, either
the AVX or the AVX2 transform function is selected at run time,
whichever suits best, as sketched below. The Makefile change therefore
appends the necessary objects to the linkage: the patch merely adds the
AVX2 transform to the existing build mix and Kconfig support, and
leaves the rest of the configuration as is.
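A sketch of that run-time selection (names follow the series' style but are assumptions here):

/* Dispatch between the AVX and AVX2 assembly transforms based on how
 * many 64-byte blocks this update covers; AVX2 only pays off beyond a
 * small threshold. */
static void sha1_apply_transform_avx2(u32 *digest, const char *data,
				      unsigned int blocks)
{
	if (blocks >= SHA1_AVX2_BLOCK_OPTSIZE)	/* assumed threshold */
		sha1_transform_avx2(digest, data, blocks);
	else
		sha1_transform_avx(digest, data, blocks);
}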
Signed-off-by: Chandramouli Narayanan <mouli@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The crypto algorithm modules utilizing the crypto daemon could
be used early during system startup. Using module_init
does not guarantee that the daemon's work queue is initialized
when a crypto algorithm depending on crypto_wq starts. It is necessary
to initialize the crypto work queue earlier, at subsystem
init time, to make sure that it is initialized when used, as sketched
below.
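In sketch form, the change is just the initcall level; the workqueue setup shown follows crypto/crypto_wq.c:

static int __init crypto_wq_init(void)
{
	kcrypto_wq = alloc_workqueue("crypto",
				     WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 1);
	if (unlikely(!kcrypto_wq))
		return -ENOMEM;
	return 0;
}

/* was: module_init(crypto_wq_init); */
subsys_initcall(crypto_wq_init);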
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add test vectors for aead with null encryption and, respectively,
md5 and sha1 authentication.
Input data is taken from the test vectors listed in RFC 2410.
Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ahash_def_finup() can make use of the request save/restore functions,
thus make it so. This simplifies the code a little and unifies the code
paths.
Note that the same remark about free()ing the req->priv applies here: the
req->priv can only be free()'d after the original request has been restored.
Finally, squash a bug in the invocation of the completion in the ASYNC path.
In both ahash_def_finup_done{1,2}, the function areq->base.complete(X, err);
was called with X=areq->base.data. This is incorrect, as X=&areq->base
is the correct value. By analysis of the data structures, we see that areq is
of type 'struct ahash_request', areq->base is of type 'struct crypto_async_request'
and areq->base.complete is of type crypto_completion_t, which is defined in
include/linux/crypto.h as:
typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);
This is one indication that X should be &areq->base. Next, we can inspect
other code which calls the completion callback to get a rough statistical
idea of how this callback is used. We can try:
$ git grep base\.complete\( drivers/crypto/
Finally, by inspecting the ahash_request_set_callback() implementation
defined in include/crypto/hash.h, we observe that the .data entry of
'struct crypto_async_request' is intended for arbitrary data, not for
the completion argument.
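The resulting one-line fix, applied in both completion handlers:

/* Before (buggy): passes the opaque callback argument where the
 * callback expects the request itself. */
areq->base.complete(areq->base.data, err);

/* After: matches crypto_completion_t's first parameter. */
areq->base.complete(&areq->base, err);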
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add documentation for the pointer voodoo that is happening in crypto/ahash.c
in ahash_op_unaligned(). This code is quite confusing, so add a beefy chunk
of documentation.
Moreover, make sure the mangled request is completely restored after finishing
this unaligned operation. This means restoring all of .result, .base.data
and .base.complete.
Also, remove the 'crypto_completion_t complete = ...' line present in the
ahash_op_unaligned_done() function, which actually declares a function
pointer and is very confusing.
Finally, and very importantly, make sure the req->priv is free()'d only after
the original request is restored in ahash_op_unaligned_done(), as sketched
below. The req->priv data must not be free()'d before that in
ahash_op_unaligned_finish(), since we would then be accessing previously
free()'d data in ahash_op_unaligned_done() and causing corruption.
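The required ordering, in sketch form (field names follow the message; the exact save-area layout is an assumption):

struct ahash_request_priv *priv = areq->priv;

/* 1) Restore the original request from the saved state ... */
areq->result        = priv->result;
areq->base.complete = priv->complete;
areq->base.data     = priv->data;
areq->priv          = NULL;

/* 2) ... and only then release the save area; freeing it first would
 * turn the restore above into a use-after-free. */
kfree(priv);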
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds the function blkcipher_aead_walk_virt_block, which allows the caller
to use the blkcipher walk API to handle the input and output scatterlists.
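The new entry point, with a hedged sketch of its shape (mirroring blkcipher_walk_virt_block):

/* Walk the AEAD request's src/dst scatterlists in blocksize chunks,
 * giving the caller virtual addresses for each span. */
int blkcipher_aead_walk_virt_block(struct blkcipher_desc *desc,
				   struct blkcipher_walk *walk,
				   struct crypto_aead *tfm,
				   unsigned int blocksize);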
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We added a soft module dependency on the crc32c module alias
to the generic crc32c module so that other, hardware-accelerated crc32c
modules can be loaded and used in preference to the generic version, as
sketched below.
We also renamed crypto/crc32c.c, which contains the generic
crc32c crypto computation, to crypto/crc32c_generic.c, according
to convention.
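In sketch form, the mechanism comes down to two module lines in the generic driver (the exact lines in the patch may differ):

/* Keep answering modprobe requests for the plain "crc32c" name ... */
MODULE_ALIAS("crc32c");

/* ... but ask modprobe to load any other provider of "crc32c" first,
 * so a hardware-accelerated module wins when one is available. */
MODULE_SOFTDEP("pre: crc32c");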
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>