Wire asprintf into test infrastructure for formatted printf output
specifiers.
Owing to mtrace logging a large number of memory allocation calls these
tests take a considerable amount of time to complete (except for the
character conversion), ranging from 00m20s for
'tst-printf-format-as-s --direct s', through 01m10s and 03m53s for
'tst-printf-format-as-char --direct i' and
'tst-printf-format-as-double --direct f' respectively, to 19m24s for
'tst-printf-format-as-ldouble --direct f', all in standalone execution
from NFS on a RISC-V FU740@1.2GHz system and with output redirected
over a 100Mbps network via SSH. This is with the skeleton's stub
implementation of dladdr(3); execution times with the regular
dladdr(3) are up to more than twice as long.
Set timeouts for the tests accordingly: a global default for all the
asprintf tests, and individual higher settings for the double and long
double tests.
Reviewed-by: DJ Delorie <dj@redhat.com>
This is a collection of tests for formatted printf output specifiers
covering the d, i, o, u, x, and X integer conversions, the e, E, f, F,
g, and G floating-point conversions, the c character conversion, and the
s string conversion. Also the hh, h, l, and ll length modifiers are
covered with the integer conversions as is the L length modifier with
the floating-point conversions.
The -, +, space, #, and 0 flags are iterated over, as permitted by the
conversion handled, in tuples of 1..5, including tuples with repetitions
of 2, and combined with field width and/or precision, again as permitted
by the conversion. The resulting format string is then used to produce
output from respective sets of input data corresponding to the specific
conversion under test. POSIX extensions beyond ISO C are not used.
Output is produced in the form of records which include both the format
string (and width and/or precision where given in the form of separate
arguments) and the conversion result, and is verified with GNU AWK using
the format obtained from each such record against the reference value
also supplied, relying on the fact that GNU AWK has its own independent
implementation of format processing, striving to be ISO C compatible.
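For illustration only, a minimal sketch of the idea follows; the record
layout, field separator, and helper name are hypothetical and not what
the tests actually use:

  #define _GNU_SOURCE 1
  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical sketch: produce one verification record carrying the
     format, the separately-passed width and precision, the conversion
     result from asprintf, and the reference value.  */
  static int
  emit_record (int width, int prec, double value)
  {
    char *result = NULL;
    if (asprintf (&result, "%*.*f", width, prec, value) < 0)
      return -1;
    printf ("%s:%d:%d:%s:%.17g\n", "%*.*f", width, prec, result, value);
    free (result);
    return 0;
  }

  int
  main (void)
  {
    return emit_record (12, 6, 0.0625) < 0;
  }
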
In the course of implementation I have determined that in the non-bignum
mode GNU AWK uses system sprintf(3) for the floating-point conversions,
defeating the objective of doing the verification against an independent
implementation. Additionally the bignum mode (using MPFR) is required
to correctly output wider integer and floating-point data. Therefore
for the conversions affected the relevant shell scripts sanity-check AWK
and terminate with unsupported status if the bignum mode is unavailable
for floating-point data or where data is output incorrectly.
The f and F floating-point conversions are build-time options for GNU
AWK, depending on the environment, so they are probed for before being
used. Similarly the a and A floating-point conversions, however they
are currently not used, see below. Also GNU AWK does not handle the b
or B integer conversions at all at the moment, as of 5.3.0. Support for
the a, A, b, and B conversions can however be easily added following the
approach taken for the f and F conversions.
Output produced by gawk for the a and A floating-point conversions does
not match that produced by us: insufficient precision is used where one
hasn't been explicitly given, e.g. for the negated maximum finite IEEE
754 64-bit value of -1.79769313486231570814527423731704357e+308 and "%a"
format we produce -0x1.fffffffffffffp+1023 vs gawk's -0x1.000000p+1024
and a different exponent is chosen otherwise, such as with "%.a" where
we output -0x2p+1023 vs gawk's -0x1p+1024 for the same value, or "%.20a"
where -0x1.fffffffffffff0000000p+1023 is our output, but gawk produces
-0xf.ffffffffffff80000000p+1020 instead. Consequently I chose not to
include a and A conversions in testing at this time.
And last but not least there are numerous corner cases that GNU AWK does
not handle correctly, which are worked around by explicit handling in
the AWK script. These are in particular:
- extraneous leading 0 produced for the alternative form with the o
conversion, e.g. { printf "%#.2o", 1 } produces "001" rather than
"01",
- unexpected 0 produced where no characters are expected for the input
of 0 and the alternative form with the precision of 0 and the integer
hexadecimal conversions, e.g. { printf "%#.x", 0 } produces "0" rather
than "",
- missing + character in the non-bignum mode only for the input of 0
with the + flag, precision of 0 and the signed integer conversions,
e.g. { printf "%+.i", 0 } produces "" rather than "+",
- missing space character in the non-bignum mode only for the input of 0
with the space flag, precision of 0 and the signed integer
conversions, e.g. { printf "% .i", 0 } produces "" rather than " ",
- for released gawk versions of up to 4.2.1 missing - character for the
input of -NaN with the floating-point conversions, e.g. { printf "%e",
"-nan" }' produces "nan" rather than "-nan",
- for released gawk versions from 5.0.0 onwards + character output for
the input of -NaN with the floating-point conversions, e.g. { printf
"%e", "-nan" }' produces "+nan" rather than "-nan",
- for released gawk versions from 5.0.0 onwards + character output for
the input of Inf or NaN in the absence of the + or space flags with
the floating-point conversions, e.g. { printf "%e", "inf" }' produces
"+inf" rather than "inf",
- for released gawk versions of up to 4.2.1 missing + character for the
input of Inf or NaN with the + flag and the floating-point
conversions, e.g. { printf "%+e", "inf" }' produces "inf" rather than
"+inf",
- for released gawk versions of up to 4.2.1 missing space character for
the input of Inf or NaN with the space flag and the floating-point
conversions, e.g. { printf "% e", "nan" }' produces "nan" rather than
" nan",
- for released gawk versions from 5.0.0 onwards + character output for
the input of Inf or NaN with the space flag and the floating-point
conversions, e.g. { printf "% e", "inf" }' produces "+inf" rather than
" inf",
- for released gawk versions from 5.0.0 onwards the field width is
ignored for the input of Inf or NaN and the floating-point
conversions, e.g. { printf "%20e", "-inf" }' produces "-inf" rather
than " -inf",
NB for released gawk versions of up to 4.2.1 floating-point conversion
issues apply to the bignum mode only, as in the non-bignum mode system
sprintf(3) is used. As from version 5.0.0 specialized handling has been
added for [-]Inf and [-]NaN inputs and the issues listed apply to both
modes. The '--posix' flag makes gawk versions from 5.0.0 onwards avoid
the issue with the field width and with the + character being
unconditionally output for the input of Inf or NaN, however not the
remaining issues, and the 'gensub' function is not supported in the
POSIX mode either, so I did not consider going down this path worth it.
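For reference, the expected ISO C behaviour for the first two cases
listed above can be demonstrated with a trivial standalone program:

  #include <stdio.h>

  int
  main (void)
  {
    printf ("[%#.2o]\n", 1);  /* prints "[01]", not "[001]" */
    printf ("[%#.x]\n", 0);   /* prints "[]": no digits for 0 with
                                 precision 0, and no 0x prefix either */
    return 0;
  }
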
Each test completes within single-digit seconds except for the long
double one. There the F/f formats produce a large number of digits, which
appears to be computationally intensive and CPU-bound. Standalone
execution time for 'tst-printf-format-p-ldouble --direct f' is in the
range of 00m36s for POWER9@2.166GHz and 09m52s for FU740@1.2GHz and
output redirected locally to /dev/null, and 10m11s for FU740 and output
redirected over 100Mbps network via SSH to /dev/null, so the throughput
of the network adds very little (~3.2% in this case) to the processing
time. This is with IEEE 754 quad.
So I have scaled the timeout for 'tst-printf-format-skeleton-ldouble'
accordingly. Regardless, following recent practice the test has been
added to the standard rather than extended set. However, unlike most
of the remaining tests it has been split by the conversion specifier,
so as to allow better parallelization of this long-running test. As
a side effect this lets the test report the unsupported status for the
F/f conversions where applicable, so 'tst-printf-format-p-double' has
been split for consistency as well.
Only printf itself is handled at the moment, but the infrastructure
provides for all the printf family functions to be verified, changes
for which to be supplied separately. The complication around having
some tests iterating over all the relevant conversion specifiers and
others verifying conversion specifiers individually, combined with
iterating over printf family functions has hit a peculiarity in GNU
make where the use of multiple targets with a pattern rule is handled
differently from such use with an ordinary rule. Consequently it
seems impossible to bulk-define a pattern rule using '$(foreach ...)',
where each target would simply trigger the recipe according to the
pattern and matching dependencies individually (such a rule does work,
but implies all targets to be updated with a single recipe execution).
Therefore as a compromise a single single-target pattern rule has been
defined that lists all the conversion-specific scripts and all the
test executables as dependencies. Consequently tests will be rerun,
even in the absence of changes to their actual sources or scripts,
whenever an unrelated listed file has changed. Also all the formatted
printf output tests will always be built whenever any single one is to
be run. This only affects test development and not test runs in the
field, though it does change the order of execution of the individual
steps and also acts as a Makefile barrier in parallel runs. As the
execution time dominates the compilation time for these tests it is not
seen as a serious shortcoming.
As pointed out by Florian Weimer <fweimer@redhat.com> the malloc tracing
facility can take a substantial amount of time in calling dladdr(3) to
determine the caller's location. This is not needed by the verification
made with these tests, so I chose to interpose the symbol with a stub
implementation that always fails in the shared skeleton. We have total
control over the test environment, so I think it is a safe and minimal
impact approach. If anything else is ever added to the tests that
actually relies on dladdr(3) returning usable results, we can think of
a different approach at that point.
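For reference, a stub along the following lines is all the
interposition needs; this is a sketch rather than the actual skeleton
code:

  #define _GNU_SOURCE 1
  #include <dlfcn.h>

  /* Interposed dladdr(3) that always reports failure, so mtrace does
     not spend time resolving caller locations.  */
  int
  dladdr (const void *addr, Dl_info *info)
  {
    (void) addr;
    (void) info;
    return 0;
  }
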
Reviewed-by: DJ Delorie <dj@redhat.com>
The scanf family of functions like sscanf and fscanf currently
ignore nan() and nan(n-char-sequence). This happens because
__vfscanf_internal only checks for 'nan'.
This commit adds support for all valid NaN forms, i.e. nan, nan()
and nan(n-char-sequence), where n-char-sequence can be
[a-zA-Z0-9_]+, thus fixing bug 30647. Any other representation
of NaN should result in a conversion error.
New tests are also added to verify the correct parsing of NaN types for
float, double and long double formats.
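A minimal sketch of what the new behaviour allows (not the actual test
code):

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    double a, b, c;
    /* All three spellings are now accepted; anything else fails.  */
    int n = sscanf ("nan nan() nan(0x1_2)", "%lf %lf %lf", &a, &b, &c);
    printf ("matched %d\n", n);
    return !(n == 3 && isnan (a) && isnan (b) && isnan (c));
  }
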
Signed-off-by: Avinal Kumar <avinal.xlvii@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add tests of freopen adding or removing "c" (non-cancelling I/O) from
the mode string (so completing my planned tests of freopen with
different features used in the mode strings). Note that it's in the
nature of the uncertain time at which cancellation might act (possibly
during freopen, possibly during subsequent reads) that these can leak
memory or file descriptors, so these do not include leak tests.
Tested for x86_64.
The -Wp option does not work properly if the compiler is configured to
enable fortify by default, since it bypasses the compiler driver (which
defines the fortify flags in that case).
This patch is similar to the one used on Ubuntu [1].
I checked with a build for x86_64-linux-gnu, i686-linux-gnu,
aarch64-linux-gnu, s390x-linux-gnu, and riscv64-linux-gnu with
gcc-13 that enables the fortify by default.
Co-authored-by: Matthias Klose <matthias.klose@canonical.com>
[1] https://git.launchpad.net/~ubuntu-core-dev/ubuntu/+source/glibc/tree/debian/patches/ubuntu/fix-fortify-source.patch
Reviewed-by: DJ Delorie <dj@redhat.com>
Exercises fwrite's internal buffer when doing a file operation.
The new test exercises two overflow behaviors (a sketch follows the list):
1. Call fwrite multiple times making usage of fwrite's internal buffer.
The total number of bytes written is larger than fwrite's internal
buffer, forcing an automatic flush.
2. Call fwrite a single time with an amount of data that is larger than
fwrite's internal buffer.
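A rough sketch of the two behaviors; the buffer size assumed below is
illustrative only, whereas the real test works with the stream's actual
buffer size:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int
  main (void)
  {
    enum { BUFSZ = 4096 };      /* assumed stdio buffer size */
    char chunk[128];
    memset (chunk, 'x', sizeof chunk);

    FILE *fp = tmpfile ();
    if (fp == NULL)
      return 1;

    /* 1: many small writes whose total exceeds the buffer, forcing an
       automatic flush part way through.  */
    for (int i = 0; i < 2 * BUFSZ / (int) sizeof chunk; i++)
      if (fwrite (chunk, 1, sizeof chunk, fp) != sizeof chunk)
        return 1;

    /* 2: one single write larger than the buffer.  */
    char *big = malloc (2 * BUFSZ);
    if (big == NULL)
      return 1;
    memset (big, 'y', 2 * BUFSZ);
    if (fwrite (big, 1, 2 * BUFSZ, fp) != 2 * BUFSZ)
      return 1;

    free (big);
    return fclose (fp) != 0;
  }
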
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Add tests of special cases for freopen that were omitted from the more
general tests of different modes and similar issues. The special
cases in the three tests here are logically unconnected, it was simply
convenient to put these tests in one patch.
* Test freopen with a NULL path to the new file, in a chroot. Rather
than asserting that this fails (logically, failure in this case is
an implementation detail; it's not required for freopen to rely on
/proc), verify that either it fails (without memory leaks) or that
it succeeds and behaves as expected on success. There is no check
for file descriptor leaks because the machinery for that also
depends on /proc, so can't be used in a chroot.
* Test that freopen and freopen64 are genuinely different in
configurations with 32-bit off_t by checking for an EFBIG trying to
write past 2GB in a file opened with freopen in such a configuration
but no error with 64-bit off_t or when opening with freopen64.
* Test freopen of stdin, stdout and stderr.
Tested for x86_64 and x86.
As reported in bug 23675 and shown up in the recently added tests of
different cases of freopen (relevant part of the test currently
conditioned under #if 0 to avoid a failure resulting from this bug),
freopen wrongly forces the stream to unoriented even when a mode with
,ccs= is specified, though such a mode is supposed to result in a
wide-oriented stream. Move the clearing of _mode to before the actual
reopening occurs, so that the main fopen implementation can leave a
wide-oriented stream in the ,ccs= case.
Tested for x86_64.
As reported in bug 32140, freopen leaks the FILE object when it
returns NULL: there is no valid use of the FILE * pointer (including
passing to freopen again or to fclose) after such an error return, so
the underlying object should be freed. Add code to free it.
Note 1: while I think it's clear from the relevant standards that the
object should be freed and the FILE * can't be used after the call in
this case (the stream is closed, which ends the lifetime of the FILE),
it's entirely possible that some existing code does in fact try to use
the existing FILE * in some way and could be broken by this change.
(Though the most common case for freopen may be stdin / stdout /
stderr, which _IO_deallocate_file explicitly checks for and does not
deallocate.)
Note 2: the deallocation is only done in the _IO_IS_FILEBUF case.
Other kinds of streams bypass all the freopen logic handling closing
the file, meaning a call to _IO_deallocate_file would neither be safe
(the FILE might still be linked into the list of all open FILEs) nor
sufficient (other internal memory allocations associated with the file
would not have been freed). I think the validity of freopen for any
other kind of stream will need clarifying with the Austin Group, but
if it is valid in any such case (where "valid" means "not undefined
behavior so required to close the stream" rather than "required to
successfully associate the stream with the new file in cases where
fopen would work"), more significant changes would be needed to ensure
the stream gets fully closed.
Tested for x86_64.
As reported in bug 32134, freopen does not clear the flags set in
fp->_flags2 by the "e", "m" or "c" mode characters. Clear these so
that they can be set or not as appropriate from the mode string passed
to freopen. The relevant test for "e" in tst-freopen2-main.c is
enabled accordingly; "c" is expected to be covered in a separately
written test (and while tst-freopen2-main.c does include transitions
to and from "m", that's not really a semantic flag intended to result
in behaving in an observably different way).
Tested for x86_64.
freopen is rather minimally tested in libio/tst-freopen and
libio/test-freopen. Add some more thorough tests, covering different
cases for change of mode in particular. The tests are run for both
freopen and freopen64 (given that those functions have two separate
copies of much of the code, so any bug fix directly in the freopen
code would probably need applying in both places).
Note that there are two parts of the tests disabled because of bugs
discovered through running the tests, with bug numbers given in
comments. I expect to address those separately. The tests also don't
cover changes to cancellation ("c" in mode); I think that will better
be handled through a separate test. Also to handle separately:
testing on stdin / stdout / stderr; documenting lack of support for
streams opened with popen / fmemopen / open_memstream / fopencookie;
maybe also a chroot test without /proc; maybe also more thorough tests
for large file handling on 32-bit systems (freopen64).
Tested for x86_64.
There is very little test coverage for getline (only a minimal
stdio-common/tstgetln.c which doesn't verify anything about the
results of the getline calls). Add some more thorough tests
(generally using fopencookie for convenience in testing various cases
for what the input and possible errors / EOF in the file read might
look like).
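As an illustration of this approach (a simplified sketch, not the
actual test, with a cookie that feeds one line and then fails):

  #define _GNU_SOURCE 1
  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/types.h>

  /* Cookie read function: supply one line, then report a read error.  */
  static ssize_t
  cookie_read (void *cookie, char *buf, size_t size)
  {
    int *calls = cookie;
    if ((*calls)++ == 0)
      {
        const char *line = "first line\n";
        size_t len = strlen (line);
        if (len > size)
          len = size;
        memcpy (buf, line, len);
        return len;
      }
    errno = EIO;
    return -1;
  }

  int
  main (void)
  {
    int calls = 0;
    cookie_io_functions_t io = { .read = cookie_read };
    FILE *fp = fopencookie (&calls, "r", io);
    if (fp == NULL)
      return 1;

    char *line = NULL;
    size_t n = 0;
    ssize_t ret = getline (&line, &n, fp);   /* "first line\n" */
    printf ("first getline: %zd\n", ret);
    ret = getline (&line, &n, fp);           /* read error -> -1 */
    printf ("second getline: %zd (errno %d)\n", ret, errno);

    free (line);
    fclose (fp);
    return 0;
  }
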
Note the following regarding testing of error cases:
* Nothing is said in the specifications about what if anything might
be written into the buffer, and whether it might be reallocated, in
error cases. The expectation of the tests (required to avoid memory
leaks on error) is that at least on error cases, the invariant that
lineptr points to at least n bytes is maintained.
* The optional EOVERFLOW error case specified in POSIX, "The number of
bytes to be written into the buffer, including the delimiter
character (if encountered), would exceed {SSIZE_MAX}.", doesn't seem
practically testable, as any case reading so many characters (half
the address space) would also be liable to run into allocation
failure (ENOMEM) along the way.
* If a read error occurs part way through reading an input line, it
seems unclear whether a partial line should be returned by getline
(avoid input getting lost), which is what glibc does at least in the
fopencookie case used in this test, or whether getline should return
-1 (error) (so avoiding the program misbehaving by processing a
truncated line as if it were complete). (There was a short,
inconclusive discussion about this on the Austin Group list on 9-10
November 2014.)
* The POSIX specification of getline inherits errors from fgetc. I
didn't try to cover fgetc errors systematically, just one example of
such an error.
Tested for x86_64 and x86.
Macros will automatically use the correct types, without
having to fiddle with internal glibc macros. It's also
impossible to get the types wrong due to aliasing because
support_check_stat_fd and support_check_stat_path do not
depend on the struct stat* types.
The changes reveal some inconsistencies in tests.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
If a file descriptor is left unclosed and is cleaned up by _IO_cleanup
on exit, its backup buffer remains unfreed, registering as a leak in
valgrind. This is not strictly an issue since (1) the program should
ideally be closing the stream once it's not in use and (2) the program
is about to exit anyway, so keeping the backup buffer around a wee bit
longer isn't a real problem. Free it anyway to keep valgrind happy
when the streams in question are the standard ones, i.e. stdout, stdin
or stderr.
Also, the _IO_have_backup macro checks for _IO_save_base,
which is a roundabout way to check for a backup buffer instead of
directly looking for _IO_backup_base. The roundabout check breaks when
the main get area has not been used and the user pushes a char into the
backup buffer with ungetc. Fix this to use _IO_backup_base
directly.
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
When ungetc is called on an unused stream, the backup buffer is
allocated without the main get area being present. This results in
every subsequent ungetc (as the stream remains in the backup area)
checking uninitialized memory in the backup buffer when trying to put a
character back into the stream.
Avoid comparing the input character with buffer contents when in backup
to avoid this uninitialized read. The uninitialized read is harmless in
this context since the location is promptly overwritten with the input
character, thus fulfilling ungetc functionality.
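A trivial sketch of the scenario, reflecting the behaviour described
above after the change:

  #include <stdio.h>

  int
  main (void)
  {
    FILE *fp = tmpfile ();
    if (fp == NULL)
      return 1;
    /* The first ungetc on the untouched stream sets up the backup
       buffer without the main get area being present.  */
    if (ungetc ('y', fp) != 'y')
      return 1;
    /* A second ungetc while still in the backup area is where the
       comparison against uninitialized memory used to happen.  */
    if (ungetc ('x', fp) != 'x')
      return 1;
    /* The characters come back in reverse order of pushback.  */
    if (fgetc (fp) != 'x' || fgetc (fp) != 'y')
      return 1;
    fclose (fp);
    return 0;
  }
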
Also adjust wording in the manual to drop the paragraph that says glibc
cannot do multiple ungetc back to back since with this change, ungetc
can actually do this.
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Complement commit b03e4d7bd2 ("stdio: fix vfscanf with matches longer
than INT_MAX (bug 27650)") and add a test case for the issue, inspired
by the reproducer provided with the bug report.
This has been verified to succeed as from the commit referred and fail
beforehand.
As the test requires 2GiB of data to be passed around its performance
has been evaluated using a choice of systems and the execution time
determined to be respectively in the range of 9s for POWER9@2.166GHz,
24s for FU740@1.2GHz, and 40s for 74Kf@950MHz. As this is on the verge
of and beyond the default timeout it has been increased by a factor of
8. Regardless, following recent practice the test has been added to the
standard rather than extended set.
Reviewed-by: DJ Delorie <dj@redhat.com>
The conditionals for several mtrace-based tests in catgets, elf, libio,
malloc, misc, nptl, posix, and stdio-common were incorrect leading to
test failures when bootstrapping glibc without perl.
The correct conditional for mtrace-based tests requires three checks:
first checking for run-built-tests, then build-shared, and lastly that
PERL is not equal to "no" (missing perl).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This reverts commit 30a745450e.
On ppc64le, without <stdio.h>, vfscanf is used and with <stdio.h>
__isoc23_vfscanfieee128 is used. I am reverting this since it doesn't
work on all targets.
Add a test for fscanf of long double without including <stdio.h>.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
The gnulib version contains an important change (9ce573cde), which
fixes some problems with multithreading, entropy loss, and ASLR leak
info. It also fixes an issue where getrandom is not being used
for the generation of some new files (only for __GT_NOCREATE on first try).
Commit 044bf893ac removed __path_search, which is now moved to other
gnulib shared files (stdio-common/tmpdir.{c,h}). This patch
also fixes direxists to use __stat64_time64 instead of __xstat64,
and moves the include of pathmax.h under !_LIBC (since it is not used
by glibc). The license is also changed from GPL 3.0 to 2.1, with
permission from the authors (Bruno Haible and Paul Eggert).
The sync also removed the clock fallback, since clock_gettime
with CLOCK_REALTIME is expected to always succeed.
It syncs with gnulib commit 323834962817af7b115187e8c9a833437f8d20ec.
Checked on x86_64-linux-gnu.
Co-authored-by: Bruno Haible <bruno@clisp.org>
Co-authored-by: Paul Eggert <eggert@cs.ucla.edu>
Reviewed-by: Bruno Haible <bruno@clisp.org>
Complete the internal renaming from "C2X" and related names in GCC by
renaming *-c2x and *-gnu2x tests to *-c23 and *-gnu23.
Tested for x86_64, and with build-many-glibcs.py for powerpc64le.
WG14 decided to use the name C23 as the informal name of the next
revision of the C standard (notwithstanding the publication date in
2024). Update references to C2X in glibc to use the C23 name.
This is intended to update everything *except* where it involves
renaming files (the changes involving renaming tests are intended to
be done separately). In the case of the _ISOC2X_SOURCE feature test
macro - the only user-visible interface involved - support for that
macro is kept for backwards compatibility, while adding
_ISOC23_SOURCE.
Tested for x86_64.
The strlen call might trigger an invalid GOT entry if it is used before
the process is self-relocated (for instance in dl-tunables if any
error occurs).
For i386, _dl_writev with PIE requires using the old 'int $0x80'
syscall mode because the TLS register (gs) is not yet
initialized.
Checked on x86_64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
All the crypt-related functions, cryptographic algorithms, and
make requirements are removed, with the only exception of the md5
implementation, which is moved to the locale folder since it is
required by localedef for integrity protection (libc's
locale-reading code does not check these, but localedef does
generate them).
Besides the code itself, both the internal documentation and the
manual are also adjusted. This allows removing both the --enable-crypt
and --enable-nss-crypt configure options.
Checked with a build for all affected ABIs.
Co-authored-by: Zack Weinberg <zack@owlfolio.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
ISO C2x defines scanf length modifiers wN (for intN_t / int_leastN_t /
uintN_t / uint_leastN_t) and wfN (for int_fastN_t / uint_fastN_t).
Add support for those length modifiers, similar to the printf support
previously added.
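A short usage sketch, assuming a glibc and compiler with the new
support:

  #include <inttypes.h>
  #include <stdio.h>

  int
  main (void)
  {
    int32_t a;
    uint_fast16_t b;
    /* w32 pairs with int32_t / int_least32_t, wf16 with the fast types.  */
    if (sscanf ("-123 456", "%w32d %wf16u", &a, &b) != 2)
      return 1;
    printf ("%" PRId32 " %" PRIuFAST16 "\n", a, b);
    return 0;
  }
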
Tested for x86_64 and x86.
Avoid potential stack overflow from unbounded alloca. Use the existing
scratch_buffer instead.
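For reference, the general scratch_buffer pattern (glibc-internal,
declared in include/scratch_buffer.h) looks roughly like the
illustrative helper below, which is not the actual vfscanf change:

  #include <scratch_buffer.h>
  #include <string.h>

  static int
  copy_token (const char *src, size_t len)
  {
    struct scratch_buffer buf;
    scratch_buffer_init (&buf);          /* starts with an on-stack array */
    if (!scratch_buffer_set_array_size (&buf, len + 1, 1))
      return -1;                         /* heap fallback failed */
    char *tmp = buf.data;
    memcpy (tmp, src, len);
    tmp[len] = '\0';
    /* ... use tmp ... */
    scratch_buffer_free (&buf);
    return 0;
  }
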
Add testcases to exercise the code as suggested by Adhemerval Zanella Netto.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Some locales define a list of mapping pairs of alternate digits and
separators for input digits (to_inpunct). This requires scanf
to create a list of all possible inputs for the optional type
modifier 'I'.
Checked on x86_64-linux-gnu.
Reviewed-by: Joe Simmons-Talbott <josimmon@redhat.com>
Since the _FORTIFY_SOURCE feature uses some routines of Glibc, they need to
be excluded from the fortification.
On top of that:
- some tests explicitly verify that some level of fortification works
appropriately; we therefore shouldn't modify the level set for them.
- some objects need to be built with optimization disabled, which
prevents _FORTIFY_SOURCE from being used for them.
Assembler files that implement architecture specific versions of the
fortified routines were not excluded from _FORTIFY_SOURCE as there is no
C header included that would impact their behavior.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
A few tests using swprintf are passing an incorrect maxlen parameter.
This triggers an abort when _FORTIFY_SOURCE is enabled.
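For reference, the correct usage passes the buffer length in wide
characters, e.g.:

  #include <stdio.h>
  #include <wchar.h>

  int
  main (void)
  {
    wchar_t buf[32];
    /* maxlen counts wide characters, not bytes: pass the array length,
       not sizeof (buf), which would overstate the available space and
       can trip the fortified swprintf.  */
    int n = swprintf (buf, sizeof buf / sizeof buf[0], L"%d", 12345);
    printf ("%d wide characters written\n", n);
    return n < 0;
  }
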
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
ISO C2x defines scanf %b for input of binary integers (with an
optional 0b or 0B prefix). Implement such support, along with the
corresponding SCNb* macros in <inttypes.h>. Unlike the support for
binary integers with 0b or 0B prefix with scanf %i, this is supported
in all versions of scanf (independent of the standards mode used for
compilation), because there are no backwards compatibility concerns
(%b wasn't previously a supported format) the way there were for %i.
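A short usage sketch of the new conversion and macros, assuming a glibc
with this support:

  #include <inttypes.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int u;
    uint32_t v;
    /* %b accepts an optional 0b/0B prefix; SCNb32 selects the right
       length modifier for uint32_t.  */
    if (sscanf ("0b1010 1101", "%b %" SCNb32, &u, &v) != 2)
      return 1;
    printf ("%u %" PRIu32 "\n", u, v);    /* 10 13 */
    return !(u == 10 && v == 13);
  }
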
Tested for x86_64 and x86.
ISO C2x defines printf length modifiers wN (for intN_t / int_leastN_t
/ uintN_t / uint_leastN_t) and wfN (for int_fastN_t / uint_fastN_t).
Add support for those length modifiers (such a feature was previously
requested in bug 24466). scanf support is to be added separately.
GCC 13 has format checking support for these modifiers.
When used with the support for registering format specifiers, these
modifiers are translated to existing flags in struct printf_info,
rather than trying to add some way of distinguishing them without
breaking the printf_info ABI. C2x requires an error to be returned
for unsupported values of N; this is implemented for printf-family
functions, but the parse_printf_format interface doesn't support error
returns, so such an error gets discarded by that function.
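A short usage sketch, assuming a glibc with the new modifiers:

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    int32_t a = -123;
    uint_fast16_t b = 456;
    /* w32 pairs with int32_t / int_least32_t, wf16 with the fast
       types, with no PRI* macro juggling needed.  */
    printf ("%w32d %wf16u\n", a, b);
    return 0;
  }
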
Tested for x86_64 and x86.
With fortification enabled, the return value of fgets calls needs to be
checked, as it gets the __wur macro applied.
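The adjustment amounts to checking the result, e.g.:

  #include <stdio.h>

  int
  main (void)
  {
    char line[256];
    /* fgets is declared with __wur under fortification, so its result
       must be checked; NULL means EOF or a read error.  */
    if (fgets (line, sizeof line, stdin) == NULL)
      return 1;
    return fputs (line, stdout) == EOF;
  }
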
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
With fortification enabled, the return value of fread calls needs to be
checked, as it gets the __wur macro applied.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
When enabling _FORTIFY_SOURCE, some functions now lead to warnings when
their result is not checked.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>