Add a malloc micro benchmark to enable accurate testing of the
various paths in malloc and free. The benchmark does a varying
number of allocations of a given block size, then frees them again.
It tests three different scenarios: single-threaded using the main
arena, multi-threaded using a thread arena, and the main arena with
SINGLE_THREAD_P false.
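The core pattern is roughly the following (an illustrative sketch, not
the actual bench-malloc-simple.c code; the function name is a
placeholder):

  #include <stdlib.h>

  /* Allocate NUM_ALLOCS blocks of BLOCK_SIZE bytes, then free them all,
     exercising both the allocation and the free paths.  */
  static void
  do_one_round (size_t num_allocs, size_t block_size, void **ptrs)
  {
    for (size_t i = 0; i < num_allocs; i++)
      ptrs[i] = malloc (block_size);
    for (size_t i = 0; i < num_allocs; i++)
      free (ptrs[i]);
  }

Timing this loop from the main thread and from a created thread covers
the different arena scenarios described above.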
* benchtests/Makefile: Add malloc-simple benchmark.
* benchtests/bench-malloc-simple.c: New benchmark.
The ORIG_SRC argument is likely a useless relic from the original
correctness tests and is not needed in the benchmarks. Remove the
argument and use S1 to point to the source to avoid confusion.
* benchtests/bench-memmove.c (do_one_test): Remove unused
ORIG_SRC.
(do_test): Adjust.
* benchtests/bench-memmove-large.c (do_one_test): Remove unused
ORIG_SRC.
(do_test): Adjust.
The current bench-strlen compares against a slow byte-oriented strlen,
which is not useful since it is too easy to beat. Remove it and compare
against the generic C strlen implementation and a memchr-based strlen
instead.
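The memchr-based reference presumably looks something like this (a
sketch; the exact definitions added to bench-strlen.c may differ):

  #include <stdint.h>
  #include <string.h>

  /* Locate the terminating NUL with memchr and derive the length.  */
  static size_t
  memchr_strlen (const char *s)
  {
    return (const char *) memchr (s, '\0', SIZE_MAX) - s;
  }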
* benchtests/bench-strlen.c (generic_strlen): New function.
(memchr_strlen): New function.
Non-consumable data, i.e. data not related to the benchmark results
themselves, should be sent to standard error so that pipelines can work
as expected.
* benchtests/scripts/compare_bench.py (do_compare): Write to stderr if
the stat is not present.
(plot_graphs): Write to stderr if the timings field is not present.
Also send the string showing the output file name to stderr.
Allow the user to pick a statistic from the command line, defaulting to
min and mean. At the same time, if a stat does not exist, catch the
runtime exception and keep comparing the rest of the benchmarked
functions. Finally, handle division-by-zero exceptions in the same way
and keep comparing the rest of the functions, making the script more
fault tolerant and thus more useful.
* benchtests/scripts/compare_bench.py (do_compare): Catch KeyError and
ZeroDivisionError exceptions.
(compare_runs): Use the stats argument to loop through the
user-provided statistics.
(main): Include the --stats argument.
Allow other functions to be processed, making the script more fault
tolerant and thus more useful.
* benchtests/scripts/compare_bench.py (compare_runs): Continue instead of return.
This patch makes Python 3.4 or later a required tool for building
glibc, allowing awk, perl, etc. code used in the build and test process
to be converted to Python without such conversions needing makefile
conditionals or having to handle older Python versions.
This patch makes the configure test for Python check the version and
give an error if Python is missing or too old, and removes makefile
conditionals that are no longer needed. It does not itself convert
any code from another language to Python, and does not remove any
compatibility with older Python versions from existing scripts.
Tested for x86_64.
* configure.ac (PYTHON_PROG): Use AC_CHECK_PROG_VER. Set
critic_missing for versions before 3.4.
* configure: Regenerated.
* manual/install.texi (Tools for Compilation): Document
requirement for Python to build glibc.
* INSTALL: Regenerated.
* Rules [PYTHON]: Make code unconditional.
* benchtests/Makefile [PYTHON]: Likewise.
* conform/Makefile [PYTHON]: Likewise.
* manual/Makefile [PYTHON]: Likewise.
* math/Makefile [PYTHON]: Likewise.
RDTSCP waits until all previous instructions have executed and all
previous loads are globally visible before reading the counter. RDTSC
doesn't wait until all previous instructions have been executed before
reading the counter. All x86 processors since 2010 support the RDTSCP
instruction. This patch adds RDTSCP support to benchtests.
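A minimal sketch of what an RDTSCP-based timestamp read looks like (the
actual HP_TIMING_NOW macro in sysdeps/x86/hp-timing.h is written
differently):

  #include <stdint.h>

  /* Read the time-stamp counter with RDTSCP.  Unlike RDTSC, RDTSCP
     waits until all previous instructions have executed and all
     previous loads are globally visible before reading the counter.  */
  static inline uint64_t
  rdtscp_now (void)
  {
    uint32_t lo, hi, aux;
    __asm__ volatile ("rdtscp" : "=a" (lo), "=d" (hi), "=c" (aux));
    return ((uint64_t) hi << 32) | lo;
  }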
* benchtests/Makefile (CPPFLAGS-nonlib): Add -DUSE_RDTSCP if
USE_RDTSCP is defined.
* sysdeps/x86/hp-timing.h (HP_TIMING_NOW): Use RDTSCP if
USE_RDTSCP is defined.
Otherwise, we see the following runtime error when using the parameter:
File "./glibc/benchtests/scripts/compare_bench.py", line 46, in do_compare
if d > threshold:
TypeError: '>' not supported between instances of 'float' and 'str'
* benchtests/scripts/compare_bench.py (main): Set float type on the
threshold argument.
Add the workload test properties (max-throughput, latency, etc.) to
the schema to prevent benchmark output validation from failing.
* benchtests/scripts/benchout.schema.json (properties): Add
new properties.
Add the duration and iterations attributes to the workload tests to
make the JSON schema parser happy.
* benchtests/bench-skeleton.c (main): Add duration and
iterations attributes.
Drop realloc_bufs in favour of making alloc_bufs transparently
reallocate the buffers if they have been allocated before. Also
consolidate computation of the buffer lengths so that it is not
repeated on every reallocation.
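A rough sketch of the consolidated helper (the names follow the
ChangeLog below, but the details are assumptions rather than the exact
bench-string.h code):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  /* Print an error message and give up.  */
  static void
  exit_error (const char *fn)
  {
    fprintf (stderr, "%s failed: %s\n", fn, strerror (errno));
    exit (EXIT_FAILURE);
  }

  /* Map a fresh buffer of LEN bytes, dropping the previous mapping
     first so that simply calling this again reallocates the buffer.  */
  static void *
  alloc_buf (void *old, size_t old_len, size_t len)
  {
    if (old != NULL)
      munmap (old, old_len);
    void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
      exit_error ("mmap");
    return p;
  }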
* benchtests/bench-string.h (buf1_size, buf2_size): New
variables.
(init_sizes): New function.
(test_init): Use it.
(alloc_buf, exit_error): New functions.
(alloc_bufs): Use ALLOC_BUF.
(realloc_bufs): Remove.
* benchtests/bench-memcmp.c (do_test): Adjust.
* benchtests/bench-memset-large.c (do_test): Likewise.
* benchtests/bench-memset-walk.c (do_test): Likewise.
* benchtests/bench-memset.c (do_test): Likewise.
* benchtests/bench-strncmp.c (do_test): Likewise.
Python 2 does not have a FileNotFoundError, so drop it in favour of
simply printing out the last (and most informative) line of the
exception.
* benchtests/scripts/compare_strings.py: Import traceback.
(parse_file): Pretty-print error.
The argparse library is used in the compare_bench script to improve
command line argument parsing. The schema validation file is now
optional, reducing the number of required parameters by one.
* benchtests/scripts/compare_bench.py (__main__): Use the argparse
library to improve command line parsing.
(__main__): Make the schema file an optional parameter (--schema),
defaulting to benchtests/scripts/benchout.schema.json.
(main): Move the argument parsing out to __main__ and leave main only
as a caller of the comparison functions.
Improve strstr performance. Strstr tends to be slow because it uses
many calls to memchr and a slow byte loop to scan for the next match.
Performance is significantly improved by using strnlen on larger blocks
and using strchr to search for the next matching character. strcasestr
can also use strnlen to scan ahead, and memmem can use memchr to check
for the next match.
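The idea behind the strchr-based scan can be illustrated with a
simplified (and deliberately naive) sketch; the actual glibc
implementation builds on the two-way algorithm and only uses this kind
of skipping to find candidate positions:

  #include <string.h>

  /* Skip to the next possible match with strchr (typically vectorized)
     instead of advancing one byte at a time, then verify the candidate
     with strncmp.  */
  static char *
  strstr_scan_sketch (const char *hs, const char *ne)
  {
    size_t ne_len = strlen (ne);
    if (ne_len == 0)
      return (char *) hs;

    for (const char *p = hs; (p = strchr (p, ne[0])) != NULL; p++)
      if (strncmp (p, ne, ne_len) == 0)
        return (char *) p;
    return NULL;
  }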
On the GLIBC bench tests the performance gains on Cortex-A72 are:
strstr: +25%
strcasestr: +4.3%
memmem: +18%
On a 256KB dataset strstr performance improves by 67%, strcasestr by 47%.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
On x86-64, there may be multiple IFUNC implementations for a given
function, but we may only be interested in a subset of them. This
patch adds a -f/--functions argument to compare a subset of IFUNC
implementations.
* benchtests/scripts/compare_strings.py (process_results): Add
funcs argument. Compare only functions which are selected.
(main): Check if base function is among selected functions.
Pass selected functions to process_results.
(__main__): Add -f/--functions argument.
Catch runtime exceptions in case the user provided a wrong base
function, attribute(s) or input file. In any of these cases, quit
immediately with a non-zero return code.
* benchtests/scripts/compare_strings.py (process_results): Catch the
exception raised for a non-existent base_func and for a non-existent
attribute.
(parse_file): Catch the exception raised for a non-existent input
file.
A string comparison report with neither diff numbers nor a header is
easier for other tools to consume.
* benchtests/scripts/compare_strings.py: Add --no-diff and --no-header
options to skip the diff calculation and omit the header,
respectively.
(main): Process --no-diff and --no-header.
This is a minor style change to move the definition of I to its usage
scope instead of the top of the function. This is consistent with
glibc style guidelines and, more importantly, it was getting in the way
of my testing.
* benchtests/bench-memcpy-walk.c (do_test): Move declaration
of I into loop header.
* benchtests/bench-memmove-walk.c (do_test): Likewise.
Add an undef of attribute_hidden before defining it, since it may
already be defined in some cases (it must be defined because it is used
by some hp-timing configurations).
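Concretely, the header ends up doing something along these lines (a
sketch of the pattern, not the verbatim bench-timing.h contents):

  /* attribute_hidden may already be defined by an included header, so
     drop any previous definition before providing the empty one the
     benchtests need (hp-timing.h uses it, so it cannot simply be
     removed).  */
  #undef attribute_hidden
  #define attribute_hidden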
* benchtests/bench-timing.h (attribute_hidden): Undefine.
Currently the benchtests are run with internal GLIBC headers, which is incorrect.
Defining _ISOMAC in the makefile ensures the internal headers are bypassed.
Fix all tests which were relying on internal defines or includes.
* benchtests/Makefile: Define _ISOMAC.
* benchtests/bench-strcoll.c: Add missing sys/stat.h include.
* benchtests/bench-string.h: Define inhibit_loop_to_libcall macro.
* benchtests/bench-strstr.c: Define empty libc_hidden_builtin_def.
* benchtests/bench-strtok.c (oldstrtok): Use rawmemchr.
* benchtests/bench-timing.h: Define attribute_hidden.
The 0-length strncmp case is interesting for correctness but not for
performance.
* benchtests/bench-strncmp.c (test_main): Remove 0 length tests.
(do_test_limit): Likewise.
Don't reuse buffers for different strncmp implementations since the
earlier implementation will end up warming the cache for the later
one. Eventually there should be a more elegant way to do this.
* benchtests/bench-strncmp.c (do_test_limit): Reallocate buffers
for every implementation.
(do_test): Likewise.
Remove the slow paths from pow. Like several other double precision
math functions, pow is exactly rounded. This is not required of math
functions and causes major overhead, as it requires multiple fallbacks
using higher precision arithmetic if a result is close to 0.5 ULP.
Ridiculous slowdowns of up to 100000x have been reported when the
highest precision path triggers.
All GLIBC math tests pass on AArch64 and x64 (with the ULP of pow set
to 1). The worst-case error is ~0.506 ULP. A simple test over a few
hundred million values shows pow is 10% faster on average. This fixes
BZ #13932.
[BZ #13932]
* sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove.
* benchtests/pow-inputs: Update comment for slow path cases.
* manual/probes.texi (slowpow_p10): Delete removed probe.
(slowpow_p10): Likewise.
* math/Makefile: Remove halfulp.c and slowpow.c.
* sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1.
* sysdeps/generic/math_private.h (__exp1): Remove error argument.
(__halfulp): Remove.
(__slowpow): Remove.
* sysdeps/i386/fpu/halfulp.c: Delete file.
* sysdeps/i386/fpu/slowpow.c: Likewise.
* sysdeps/ia64/fpu/halfulp.c: Likewise.
* sysdeps/ia64/fpu/slowpow.c: Likewise.
* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument,
improve comments and add error analysis.
* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis.
(power1): Remove function.
(log1): Remove error argument, add error analysis.
(my_log2): Remove function.
* sysdeps/ieee754/dbl-64/halfulp.c: Delete file.
* sysdeps/ieee754/dbl-64/slowpow.c: Likewise.
* sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise.
* sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise.
* sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c.
* sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1.
* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c,
slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define.
* sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise.
* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file.
* sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
* sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
Keeping the buffers the same across test runs gives later invocations
an advantage since they access cached data. Reallocate so that all
test runs are on an equal footing.
* benchtests/bench-memcmp.c (do_test): Call realloc_bufs for
every test run.
This patch adds a BENCHSET variable to benchtests/Makefile in order to
provide the capability to run a list of subsets of benchmark tests,
e.g.:
make bench BENCHSET="bench-pthread bench-math malloc-thread"
This helps users benchmark specific glibc areas.
ChangeLog:
* benchtests/Makefile: Add BENCHSET to allow subsets of
benchmarks to be run.
* benchtests/README: Add documentation for running subsets of
benchmarks.
Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Signed-off-by: Icarus Sparry <icarus.w.sparry@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
When executing bench-math the benchmark output is invalid with this
error message:
Invalid benchmark output: 'workload-spec2006.wrf' does not match any of
the regexes: '^[_a-zA-Z0-9]*$' or Invalid benchmark output: Additional
properties are not allowed ('workload-spec2006.wrf' was unexpected)
The error was seen when running the tests workload-spec2006.wrf,
'stack=1024,guard=1' and 'stack=1024,guard=2'.
The problem is that the current regexes do not accept the hyphen, dot,
equals sign and comma in the output.
This patch changes the regex in benchout.schema.json to accept these
symbols in benchmark test names.
ChangeLog:
* benchtests/scripts/benchout.schema.json: Fix regex to accept a
wider range of test names.
Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
Benchmark workload-spec2006.wrf does not produce max, min or mean
results but instead produces throughput. This is represented in
benchtests/bench-skeleton.c. This patch adjusts benchout.schema.json to
consider bench.out from the bench-math benchmarks as valid.
ChangeLog:
* benchtests/scripts/benchout.schema.json: Add throughput as an
accepted result property and remove "max", "min" and "mean" from the
required properties, based on benchtests/bench-skeleton.c.
Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
Numbers for very small sizes (< 128B) are much noisier for non-cached
benchmarks like the walk benchmarks, so don't include them.
* benchtests/bench-memcpy-walk.c (START_SIZE): Set to 128.
* benchtests/bench-memmove-walk.c (START_SIZE): Likewise.
* benchtests/bench-memset-walk.c (START_SIZE): Likewise.
Make the walking benchmarks walk only backwards since copying both
ways is biased in favour of implementations that use non-temporal
stores for larger sizes; falkor is one of them. This also fixes bugs
in the computation of the result, which ended up unnecessarily
multiplying the length by the timing result.
* benchtests/bench-memcpy-walk.c (do_one_test): Copy only
backwards. Fix timing computation.
* benchtests/bench-memmove-walk.c (do_one_test): Likewise.
* benchtests/bench-memset-walk.c (do_one_test): Walk backwards
on memset by N at a time. Fix timing computation.
This benchmark is an attempt to eliminate cache effects from string
benchmarks. The benchmark walks both ways through a large memory area
and copies different sizes of memory and alignments one at a time
instead of looping around in the same memory area. This is a good
metric to have alongside the simple memmove benchmark (which is only
really useful for smaller sizes) especially for larger sizes where the
likelihood of the call being done only once is pretty high.
This benchmark is different from memcpy in that it also tests
overlapping copies.
* benchtests/bench-memmove-walk.c: New file.
* benchtests/Makefile (string-benchset): Add it.
This benchmark is an attempt to eliminate cache effects from string
benchmarks. The benchmark walks backward through a large memory area
and sets different sizes of memory and alignments one at a time
instead of looping around in the same memory area. This is a good
metric to have alongside the simple memset benchmark (which is only
really useful for smaller sizes) especially for larger sizes where the
likelihood of the call being done only once is pretty high.
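The walking pattern amounts to something like the following sketch
(assumed names; not the actual bench-memset-walk.c code):

  #include <string.h>

  /* Set N bytes at a time while walking backwards through a large
     buffer, so every call lands on memory that has not been touched
     yet and the cache stays cold for each iteration.  */
  static void
  memset_walk_sketch (char *buf, size_t buf_len, size_t n)
  {
    if (n == 0)
      return;
    for (size_t off = buf_len; off >= n; off -= n)
      memset (buf + off - n, 0, n);
  }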
* benchtests/bench-memset-walk.c: New file.
* benchtests/Makefile (string-benchset): Add it.
This benchmark is an attempt to eliminate cache effects from string
benchmarks. The benchmark walks both ways through a large memory area
and copies different sizes of memory and alignments one at a time
instead of looping around in the same memory area. This is a good
metric to have alongside the other memcpy benchmarks, especially for
larger sizes where the likelihood of the call being done only once is
pretty high.
* benchtests/bench-memcpy-walk.c: New file.
* benchtests/Makefile (string-benchset): Add it.
exp2f and log2f benchmark traces are just copies of the existing
expf and logf traces from wrf_r.
* benchtests/Makefile: Add exp2f and log2f benchmarks.
* benchtests/exp2f-inputs: Copy of expf-inputs.
* benchtests/log2f-inputs: Copy of logf-inputs.
Add a trace for logf. This is a reduced trace based on 2.8 billion
samples extracted from wrf_r.
* benchtests/Makefile: Add logf benchmark.
* benchtests/logf-inputs: Add reduced trace from wrf_r.
Add a trace for expf. This is a reduced trace based on 2.4 billion
samples extracted from wrf_r.
* benchtests/Makefile: Add expf benchmark.
* benchtests/expf-inputs: Add reduced trace from wrf_r.
This patch adds benchtests for the trunc and truncf functions. The
inputs listed are fairly arbitrary; I do not assert they are
representative of any particular application.
* benchtests/Makefile (bench-math): Add trunc and truncf.
(CFLAGS-bench-trunc.c): New variable.
(CFLAGS-bench-truncf.c): Likewise.
* benchtests/trunc-inputs: New file.
* benchtests/truncf-inputs: Likewise.
The compare_strings.py script unconditionally generates a graph PNG
image of the input data, which can be unnecessary and slow. Put this
behind an optional -g flag.
* benchtests/scripts/compare_strings.py: New option -g.
(draw_graph): Print a message that a graph is being generated.
(process_results): Generate graph only if -g is passed.
(main): Process option -g.