`aes_desc` and `aes_enc_desc` now auto-detect the most suitable
AES implementation for the platform.
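A minimal sketch of such runtime dispatch (illustrative only, not the actual library code; the function names are hypothetical and the feature probe uses the GCC/Clang `__builtin_cpu_supports` builtin on x86):

```c
#include <stddef.h>

typedef int (*aes_setup_fn)(const unsigned char *key, int keylen);

static int aes_setup_sw(const unsigned char *key, int keylen)
{ (void)key; (void)keylen; return 0; /* portable C implementation */ }

static int aes_setup_ni(const unsigned char *key, int keylen)
{ (void)key; (void)keylen; return 0; /* hardware-accelerated path */ }

/* Pick the best implementation available on the running CPU. */
static aes_setup_fn aes_select(void)
{
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
   if (__builtin_cpu_supports("aes")) {
      return aes_setup_ni;
   }
#endif
   return aes_setup_sw;   /* safe fallback everywhere else */
}
```

The caller always gets a working implementation; whether it is the accelerated one depends on the host.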
Signed-off-by: Steffen Jaeckel <s@jaeckel.eu>
If we allow the length to be 0, we should also prepare for the case where
the user doesn't want to provide a valid input-data pointer.
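A sketch of the relaxed contract with a hypothetical helper (the name `process` is illustrative):

```c
#include <stddef.h>
#include <string.h>

/* Zero-length input is valid even with a NULL data pointer;
   `in` is never dereferenced when inlen == 0. */
static int process(const unsigned char *in, unsigned long inlen,
                   unsigned char *out, unsigned long outlen)
{
   if (inlen == 0) return 0;                  /* nothing to do */
   if (in == NULL || out == NULL) return -1;  /* now a pointer is required */
   if (outlen < inlen) return -1;
   memcpy(out, in, inlen);
   return 0;
}
```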
Signed-off-by: Steffen Jaeckel <s@jaeckel.eu>
It's a compile-only test, but we run it anyway so we can finally get
`crypt_fsa()` included in the coverage report. It's not really useful, but
it also doesn't hurt.
Signed-off-by: Steffen Jaeckel <s@jaeckel.eu>
`NULL` as defined by the standard is not guaranteed to be of a pointer
type. To make sure that a pointer type is used in vararg APIs,
define our own version and use that one internally.
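The pitfall and the fix in isolation (a sketch; `count_ptrs` is a hypothetical variadic function, and the macro name mirrors the library's convention):

```c
#include <stdarg.h>
#include <stddef.h>

/* The standard permits `#define NULL 0`, i.e. a plain int.  Passed
   through varargs, an int may have a different size/representation
   than a pointer, so reading it back via va_arg(ap, void *) is
   undefined behavior.  An explicitly pointer-typed sentinel avoids that: */
#define LTC_NULL ((void *)0)

/* counts pointer arguments up to a NULL terminator */
static int count_ptrs(const void *first, ...)
{
   va_list ap;
   int n = 0;
   const void *p = first;
   va_start(ap, first);
   while (p != NULL) {
      n++;
      p = va_arg(ap, const void *);
   }
   va_end(ap);
   return n;
}
```

`count_ptrs(&a, &b, LTC_NULL)` is well-defined on every platform, whereas terminating with a bare `NULL` is not if `NULL` expands to the integer `0`.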
Signed-off-by: Steffen Jaeckel <s@jaeckel.eu>
* The RFC doesn't limit the context to a string.
It talks about `octets`, which means it can be any binary data.
* Move the context-preprocessing function out of tweetnacl.c
* Fix potential segfaults when Ed25519 signature verification fails and
`LTC_CLEAN_STACK` is enabled.
* Fix all the warnings.
* Update documentation.
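Since RFC 8032 defines the context as up to 255 octets rather than a C string, a binary-safe interface takes a pointer plus an explicit length. A sketch of such a check (the function name is illustrative; the 255-octet cap comes from RFC 8032):

```c
#include <stddef.h>

/* Validate an Ed25519ctx context passed as raw octets. */
static int ed25519_ctx_check(const unsigned char *ctx, unsigned long ctxlen)
{
   if (ctxlen > 255) return -1;               /* RFC 8032 limit */
   if (ctxlen > 0 && ctx == NULL) return -1;
   /* the context may contain embedded zero bytes,
      so it must never be measured with strlen() */
   return 0;
}
```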
Signed-off-by: Steffen Jaeckel <s@jaeckel.eu>
When the signature is not verified, the "mlen" variable
is set to ULONG_MAX. When LTC_CLEAN_STACK has been defined,
this results in a segmentation fault.
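The failure mode, reduced to a sketch (the names are hypothetical): `ULONG_MAX` is a sentinel meaning "no message recovered", so it must never reach a stack-wiping call as a length.

```c
#include <limits.h>
#include <string.h>

static int open_msg(unsigned char *m, unsigned long *mlen, int sig_ok)
{
   unsigned char scratch[64] = {0};
   (void)m;
   if (!sig_ok) {
      *mlen = ULONG_MAX;          /* sentinel: no message was recovered */
      return -1;                  /* wiping *mlen bytes here would read/write
                                     far out of bounds and crash */
   }
   *mlen = sizeof(scratch);
   memset(scratch, 0, *mlen);     /* LTC_CLEAN_STACK-style wipe with a
                                     real length is safe */
   return 0;
}
```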
Signed-off-by: Valerii Chubar <valerii_chubar@epam.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Previously it went undetected when `inlen` itself was so big that the
multiplication by 8 would overflow.
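A minimal sketch of such a guard (hypothetical helper name): a byte count may only be converted to a bit count if the multiplication by 8 cannot wrap.

```c
#include <limits.h>

static int bytes_to_bits(unsigned long inlen, unsigned long *bits)
{
   if (inlen > ULONG_MAX / 8) return -1;  /* inlen * 8 would overflow */
   *bits = inlen * 8;
   return 0;
}
```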
Related to #592
Signed-off-by: Steffen Jaeckel <s@jaeckel.eu>
Description:
Minor changes to help test and clarify the way utf8 strings are
decoded. This originated from my misunderstanding of the fix for
issue #507. The new test-vector uses two bytes to encode each
wide-char.
The utf8 format is described here:
https://tools.ietf.org/html/rfc3629#section-3
Testing:
$ make clean
$ make CFLAGS="-DUSE_LTM -DLTM_DESC -I../libtommath" EXTRALIBS="../libtommath/libtommath.a" test
$ ./test
You can confirm that the new utf8 test data is correct using python:
>>> s="\xD7\xA9\xD7\x9C\xD7\x95\xD7\x9D"
>>> s.decode("utf-8")
u'\u05e9\u05dc\u05d5\u05dd'
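The two-byte encoding from RFC 3629 can also be checked by hand with a small decoder (a sketch, not the library's decoder; it omits the overlong-encoding check):

```c
/* Decodes a two-byte UTF-8 sequence (110xxxxx 10yyyyyy -> xxxxxyyyyyy),
   returning the code point or -1 on malformed input. */
static int utf8_decode2(unsigned char b0, unsigned char b1)
{
   if ((b0 & 0xE0) != 0xC0) return -1;   /* lead byte must be 110xxxxx */
   if ((b1 & 0xC0) != 0x80) return -1;   /* continuation must be 10yyyyyy */
   return ((b0 & 0x1F) << 6) | (b1 & 0x3F);
}
```

For the first pair of the test vector, `utf8_decode2(0xD7, 0xA9)` yields 0x05E9, matching the `\u05e9` in the python output above.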
PR #373 did not really fix the issue of preventing a potential stack
overflow when a lot of nested sequences have to be decoded.
Instead it only threw an error after all the nested sequences had
already been successfully decoded.
This change fixes that by rejecting the input before the decoding happens.
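A sketch of the idea (hypothetical names; the depth limit shown is illustrative, not the library's actual value): check the nesting depth *before* recursing, so deeply nested input is rejected without ever descending into it.

```c
#include <stddef.h>

#define MAX_DEPTH 30   /* hypothetical limit */

static int decode_seq(const unsigned char *in, size_t inlen, int depth)
{
   if (depth > MAX_DEPTH) return -1;     /* fail before descending further */
   if (inlen == 0) return 0;
   if (in[0] == 0x30) {                  /* ASN.1 SEQUENCE tag */
      return decode_seq(in + 1, inlen - 1, depth + 1);
   }
   return 0;                             /* primitive payload: done */
}
```

With the check placed up front, each level of nesting costs one bounded stack frame at most, and malicious input fails fast instead of being fully decoded first.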