The benchtests JSON allows {function {variant}} categorization of
results whereas the pthread-locks tests had {function {variant
{subvariant}}}, which broke validation. Fix that by serializing the
subvariants as variant-subvariant. Also update the schema to
recognize the new benchmark attributes after fixing the naming
conventions.
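To illustrate the naming fix, here is a rough Python sketch (the names
'mutex', 'empty' and 'filler' are purely illustrative; the actual change
is in the benchmark's C output code):

# Hypothetical illustration: instead of nesting
# {function: {variant: {subvariant: ...}}}, emit a single
# "variant-subvariant" key so the result matches the two-level
# {function: {variant: ...}} layout the schema expects.
def flatten(subtree):
    flat = {}
    for variant, subvariants in subtree.items():
        for subvariant, timings in subvariants.items():
            flat["%s-%s" % (variant, subvariant)] = timings
    return flat

print(flatten({"mutex": {"empty": [1.0], "filler": [2.0]}}))
# {'mutex-empty': [1.0], 'mutex-filler': [2.0]}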
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
Change all of the #! lines in Python scripts that are called from
Makefiles to reference /usr/bin/python3.
All of the scripts called from Makefiles are already run with Python 3,
so let's make sure they are explicitly using Python 3 if called
manually.
Add a script for visualizing the JSON output generated by existing
glibc string microbenchmarks.
Overview:
plot_strings.py is capable of plotting benchmark results in the
following formats, which are controlled with the -p or --plot argument:
1. absolute timings (-p time): plot the timings as they are in the
input benchmark results file.
2. relative timings (-p rel): plot relative timing difference with
respect to a chosen ifunc (controlled with -b argument).
3. performance relative to max (-p max): for each varied parameter
value, plot 1/timing as the percentage of the maximum value out of
the plotted ifuncs.
4. throughput (-p thru): plot the varied parameter value divided by the timing.
For all types of graphs, there is an option to explicitly specify
the subset of ifuncs to plot using the --ifuncs parameter.
For plot types 1 and 4, one can hide or expose the exact benchmark
figures using the --values flag.
When plotting relative timing differences between ifuncs, the first
ifunc listed in the input JSON file is the baseline, unless the
baseline implementation is explicitly chosen with the --baseline
parameter. For ease of reading, the script marks the statistically
insignificant range on the graphs. The default is +-5%, but this
value can be controlled with the --threshold parameter.
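As a rough sketch of what the relative plot computes (illustrative
names only, not the script's actual code):

# Hedged sketch: percentage difference of each ifunc's timings against
# a chosen baseline; values within +-threshold percent are treated as
# statistically insignificant and only marked on the graph.
def relative_diff(timings, baseline, threshold=5.0):
    base = timings[baseline]
    rel = {name: [100.0 * (t - b) / b for t, b in zip(series, base)]
           for name, series in timings.items()}
    return rel, (-threshold, threshold)

rel, band = relative_diff({"ifunc_a": [10.0, 20.0], "ifunc_b": [11.0, 18.0]},
                          baseline="ifunc_a")
print(rel, band)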
To accommodate the heterogeneity of benchmark results files, one
can control, for example, the x-axis scale, the resolution (dpi) of
the generated figures, or the key used to access the varied parameter
value in the JSON file. The corresponding options are --logarithmic,
--resolution and --key. The --key parameter ensures that plot_strings.py
works with all files which pass JSON schema validation. The schema
can be chosen with the --schema parameter.
If a window manager is available, one can enable interactive
figure display using the --display flag.
Finally, one can use the --grid flag to enable grid lines in the
generated figures.
Implementation:
plot_strings.py traverses the JSON tree until a 'results' array
is found and generates a separate figure for each such array.
The figure is then saved to a file in one of the available formats
(controlled with the --extension parameter).
As the tree is traversed, the recursive function tracks the metadata
about the test being run, so that each figure has a unique and
meaningful title and filename.
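In outline, the traversal works roughly like this (a simplified sketch,
not the script's actual function):

# Hedged sketch: walk the JSON tree, accumulating the names seen so far;
# whenever a 'results' array is found, the accumulated path becomes the
# figure's title and filename and one figure is produced.
def walk(node, path, plot_results):
    if isinstance(node, dict):
        if node.get("name"):
            path = path + [node["name"]]
        if "results" in node:
            plot_results(" ".join(path), node["results"])
            return
        for key, child in node.items():
            if isinstance(child, (dict, list)):
                walk(child, path + [key], plot_results)
    elif isinstance(node, list):
        for child in node:
            walk(child, path, plot_results)

# Illustrative input shaped like the proposal discussed below.
tree = {"functions": {"memcpy": {"variants": [
    {"name": "powers of 2", "results": [{"length": 1, "timings": [3, 4]}]}]}}}
walk(tree, [], lambda title, res: print(title, len(res)))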
While plot_strings.py works with existing benchmarks, provisions
have been made to allow adding more structure and metadata to these
benchmarks. Currently, many benchmarks produce multiple timing values
for the same value of the varied parameter (typically 'length').
Multiple data points for the same parameter usually mean that some other
parameter was varied as well, for example, whether memmove's src and dst
buffers overlap or not (see bench-memmove-walk.c and
bench-memmove-walk.out).
Unfortunately, this information is not exposed in the benchmark output
file, so plot_strings.py has to resort to computing the geometric mean
of these multiple values. In the process, useful information about the
benchmark configuration is lost. Also, averaging the timings for
different alignments can hide useful characteristics of the benchmarked
ifuncs.
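The reduction itself is just a geometric mean, roughly:

import math

# Hedged sketch: several timings measured for the same 'length' (for
# example with different alignments) are collapsed into one point.
def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([12.5, 14.0]))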
Testing:
plot_strings.py has been tested on all existing string microbenchmarks
which produce results in JSON format. The script was tested on both
Windows 10 and Ubuntu 16.04.2 LTS. It runs on both Python 2 and
Python 3 (2.7.12 and 3.5.12 tested).
Useful commands:
1. Plot timings for all ifuncs in bench-strlen.out:
$ ./plot_strings.py bench-strlen.out
2. Display help:
$ ./plot_strings.py -h
3. Plot throughput for __memset_avx512_unaligned_erms and
__memset_avx512_unaligned. Save the generated figure in pdf format to
'results/'. Use logarithmic x-axis scale, show grid lines and expose
the performance numbers:
$ ./plot_strings.py bench.out -o results/ -lgv -e pdf -p thru \
-i __memset_avx512_unaligned_erms __memset_avx512_unaligned
4. Plot relative timings for all ifuncs in bench.out with __generic_memset
as baseline. Display percentage difference threshold of +-10%:
$ ./plot_strings.py bench.out -p rel -b __generic_memset -t 10
Discussion:
1. I would like to propose relaxing the benchout_strings.schema.json
to allow specifying either a 'results' array with 'timings' (as before)
or a 'variants' array. See the example below:
{
  "timing_type": "hp_timing",
  "functions": {
    "memcpy": {
      "bench-variant": "default",
      "ifuncs": ["generic_memcpy", "__memcpy_thunderx"],
      "variants": [
        {
          "name": "powers of 2",
          "variants": [
            {
              "name": "both aligned",
              "results": [
                {
                  "length": 1,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                },
                {
                  "length": 2,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                },
                ...
                {
                  "length": 65536,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                }]
            },
            {
              "name": "dst misaligned",
              "results": [
                {
                  "length": 1,
                  "align1": 0,
                  "align2": 0,
                  "timings": [x, y]
                },
                {
                  "length": 2,
                  "align1": 0,
                  "align2": 1,
                  "timings": [x, y]
                },
                ...
The 'variants' array consists of objects, each of which has a 'name'
attribute describing the configuration of a particular test in the
benchmark, for example how the parameter was varied or which buffer
alignment was tested. The 'name' attribute is then followed by either
another 'variants' array or a 'results' array.
The nesting of variants allows arbitrary grouping of benchmark timings
while still describing these groups. Using recursion, it is possible
to procedurally create titles and filenames for the figures being
generated.
Non-consumable data, that is, data not related to the benchmarks,
should be sent to standard error so that pipelines can work as
expected.
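The pattern, in outline (a hedged sketch, not the script's exact
messages):

import sys

# Diagnostics go to stderr so that stdout carries only the comparison
# data and can be piped into other tools.
def warn(msg):
    sys.stderr.write(msg + "\n")

warn("Skipping comparison: statistic not found")
print("strlen,1.05")   # consumable data stays on stdout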
* benchtests/scripts/compare_bench.py (do_compare): Write to stderr
when the stat is not present.
* benchtests/scripts/compare_bench.py (plot_graphs): Write to stderr
when the timings field is not present. Also send the string showing
the output filename to stderr.
Allow the user to pick a statistic from the command line, defaulting
to min and mean. If the statistic does not exist, catch the run-time
exception and keep comparing the rest of the benchmarked functions.
Likewise, catch division-by-zero exceptions and keep comparing the
rest of the functions, making the script more fault tolerant and thus
more useful.
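A rough sketch of the fault-tolerant loop described above (illustrative
data and messages, not the script's actual code):

import sys

# Iterate over the user-selected statistics and keep going when a
# function lacks one of them or has a zero baseline timing.
def compare_runs(bench1, bench2, funcs, stats=("min", "mean")):
    for func in funcs:
        for stat in stats:
            try:
                old = bench1[func][stat]
                new = bench2[func][stat]
                diff = 100.0 * (new - old) / old
            except KeyError:
                print("%s: statistic '%s' not found" % (func, stat),
                      file=sys.stderr)
                continue
            except ZeroDivisionError:
                print("%s: zero timing for '%s'" % (func, stat),
                      file=sys.stderr)
                continue
            print("%s %s: %.2f%%" % (func, stat, diff))

compare_runs({"memcpy": {"min": 10.0, "mean": 12.0}},
             {"memcpy": {"min": 9.0}}, ["memcpy"])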
* benchtests/scripts/compare_bench.py (do_compare): Catch KeyError and
ZeroDivisionError exceptions.
* benchtests/scripts/compare_bench.py (compare_runs): Use stats argument to
loop through user provided statistics.
* benchtests/scripts/compare_bench.py (main): Include the --stats argument.
Allow other functions to be processed, making the script a bit more
fault tolerant and thus more useful.
* benchtests/scripts/compare_bench.py (compare_runs): Continue instead of return.
Otherwise, we see the following runtime error when using the parameter:
File "./glibc/benchtests/scripts/compare_bench.py", line 46, in do_compare
if d > threshold:
TypeError: '>' not supported between instances of 'float' and 'str'
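In argparse terms the fix is simply (hedged sketch):

import argparse

# Without type=float the threshold arrives as a string and the later
# 'd > threshold' comparison raises TypeError on Python 3.
parser = argparse.ArgumentParser()
parser.add_argument("--threshold", type=float, default=10.0,
                    help="Percentage difference to warn about")
print(parser.parse_args(["--threshold", "25"]).threshold)   # 25.0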
* benchtests/scripts/compare_bench.py (main): Set float type on the
threshold argument.
Add the workload test properties (max-throughput, latency, etc.) to
the schema to prevent benchmark output validation from failing.
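For illustration, a hedged sketch of the kind of addition, expressed as
the equivalent Python/jsonschema structure (property names are taken
from the workload output shown later in this log; the exact schema text
may differ):

import jsonschema

# Accept the per-workload statistics emitted by the math benchmarks so
# that validation no longer rejects them.
workload_properties = {
    "reciprocal-throughput": {"type": "number"},
    "latency": {"type": "number"},
    "max-throughput": {"type": "number"},
    "min-throughput": {"type": "number"},
}
schema = {"type": "object", "properties": workload_properties}
jsonschema.validate({"latency": 200, "max-throughput": 1.0e+07}, schema)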
* benchtests/scripts/benchout.schema.json (properties): Add
new properties.
Python 2 does not have a FileNotFoundError so drop it in favour of
simply printing out the last (and most informative) line of the
exception.
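The error handling then looks roughly like this (a sketch under the
assumptions above, not the script's exact wording):

import json
import sys
import traceback

# Report only the last (and most informative) line of the traceback;
# this works on both Python 2 and 3, since Python 2 lacks
# FileNotFoundError.
def parse_file(filename):
    try:
        with open(filename) as f:
            return json.load(f)
    except Exception:
        sys.stderr.write("Failed to read %s: %s\n"
                         % (filename, traceback.format_exc().splitlines()[-1]))
        sys.exit(1)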
* benchtests/scripts/compare_strings.py: Import traceback.
(parse_file): Pretty-print error.
The argparse library is used in the compare_bench script to improve
command line argument parsing. The schema validation file is now
optional, reducing the number of required parameters by one.
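Roughly, the new command line handling looks like this (a hedged
sketch; option names follow the description above):

import argparse
import os

def parse_args():
    parser = argparse.ArgumentParser(description="Compare two benchmark runs")
    parser.add_argument("bench1", help="First bench.out file")
    parser.add_argument("bench2", help="Second bench.out file")
    # --schema is optional and defaults to the schema shipped next to
    # the script.
    parser.add_argument("--schema",
                        default=os.path.join(os.path.dirname(__file__),
                                             "benchout.schema.json"),
                        help="JSON schema to validate the inputs against")
    return parser.parse_args()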
* benchtests/scripts/compare_bench.py (__main__): Use the argparse
library to improve command line parsing.
(__main__): Make the schema file an optional parameter (--schema),
defaulting to benchtests/scripts/benchout.schema.json.
(main): Move the argument parsing out to __main__ and leave main only
as the caller of the main comparison functions.
On x86-64, there may be multiple IFUNC implementations for a given
function. But we may be only interested in a subset of them. This
patch adds -f/--functions argument to compare a subset of IFUNC
implementations.
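The selection logic is roughly (illustrative sketch, not the actual
patch):

# Restrict the comparison to the IFUNCs named with -f/--functions;
# when the option is absent, compare everything. The base function,
# if given, must be among the selected ones.
def select_functions(all_ifuncs, requested, base_func=None):
    if requested is None:
        return list(all_ifuncs)
    selected = [f for f in all_ifuncs if f in requested]
    if base_func is not None and base_func not in selected:
        raise ValueError("Baseline '%s' is not among the selected functions"
                         % base_func)
    return selected

print(select_functions(["__memcpy_avx", "__memcpy_sse2", "generic_memcpy"],
                       {"__memcpy_avx", "generic_memcpy"},
                       "generic_memcpy"))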
* benchtests/scripts/compare_strings.py (process_results): Add
funcs argument. Compare only functions which are selected.
(main): Check if base function is among selected functions.
Pass selected functions to process_results.
(__main__): Add -f/--functions argument.
Catch runtime exceptions in case the user provided a wrong base
function, attribute(s) or input file. In any of those cases, quit
immediately with a non-zero return code.
* benchtests/scripts/compare_strings.py (process_results): Catch the
exception for a non-existent base_func and catch the exception for a
non-existent attribute.
(parse_file): Catch the exception for a non-existent input file.
A string comparison report with neither diff numbers nor a header
yields output that is easier for other tools to consume.
* benchtests/scripts/compare_strings.py: Add --no-diff and --no-header
options to skip the diff calculation and omit the header, respectively.
(main): Process --no-diff and --no-header.
When executing bench-math, the benchmark output is invalid with this
error msg:
Invalid benchmark output: 'workload-spec2006.wrf' does not match any of
the regexes: '^[_a-zA-Z0-9]*$' or Invalid benchmark output: Additional
properties are not allowed ('workload-spec2006.wrf' was unexpected)
The error was seen when running the tests:
workload-spec2006.wrf, 'stack=1024,guard=1' and 'stack=1024,guard=2'.
The problem is that the current regexes do not accept the hyphen, dot,
equals sign and comma in the output.
This patch changes the regex in benchout.schema.json to accept these
symbols in benchmark test names.
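For illustration, a pattern of the relaxed kind (a hedged example; the
exact regex in benchout.schema.json may differ):

import re

# A character class extended to accept the characters used by names
# such as 'workload-spec2006.wrf' and 'stack=1024,guard=1'.
name_re = re.compile(r"^[_a-zA-Z0-9=,.-]*$")
for name in ("workload-spec2006.wrf", "stack=1024,guard=1"):
    assert name_re.match(name)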
ChangeLog:
* benchtests/scripts/benchout.schema.json: Fix regex to accept a
wider range of test names.
Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
Benchmark workload-spec2006.wrf does not produce max, min or mean
results but instead produces throughput. This is represented in
benchtests/bench-skeleton.c. This patch adjusts benchout.schema.json to
consider bench.out from the bench-math benchmarks valid.
ChangeLog:
* benchtests/scripts/benchout.schema.json: Add throughput as an
accepted result property and remove "max", "min" and "mean" from the
required properties, based on benchtests/bench-skeleton.c.
Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
The compare_strings.py script unconditionally generates a PNG graph
image of the input data, which can be unnecessary and slow. Put this
behind an optional flag -g.
* benchtests/scripts/compare_strings.py: New option -g.
(draw_graph): Print a message that a graph is being generated.
(process_results): Generate graph only if -g is passed.
(main): Process option -g.
Make the column widths for the outputs fixed so that they look a
little less messy. They will still look bad with lots of IFUNCs (like
on x86) but it's still a step forward.
* benchtests/scripts/compare_strings.py (process_results):
Better spacing for output.
Make the script more usable by adding proper command line options
along with a way to query the options. The script is capable of doing
a bunch of things right now like choosing a base for comparison,
choosing to generate graphs, etc. and they should be accessible via
command line switches.
* benchtests/scripts/compare_strings.py: Use argparse.
* benchtests/README: Document existence of compare_strings.py.
This patch further improves math function benchmarking by adding a latency
test in addition to throughput. This enables more accurate comparisons of the
math functions. The latency test works by creating a dependency on the previous
iteration: func_res = F (func_res * zero + input[i]). The multiply by zero
avoids changing the input.
It reports reciprocal throughput and latency in nanoseconds (depending on the
timing header used) and max/min throughput in iterations per second:
"workload-spec2006.wrf": {
"reciprocal-throughput": 100,
"latency": 200,
"max-throughput": 1.0e+07,
"min-throughput": 5.0e+06
}
* benchtests/bench-skeleton.c (main): Add support for
latency benchmarking.
* benchtests/scripts/bench.py: Add support for latency benchmarking.
The compare_strings.py script generates a graph for the benchmarks it
performs a comparison on and that fails if X is not available. Avoid
the error and ensure that only the graph is generated and saved as a
PNG file.
* benchtests/scripts/compare_strings.py: Avoid display error
when generating graph.
This patch allows one to provide the function name using an optional
-base option to compare all other functions against. This is useful
when pitching one implementation of a string function against
alternatives. In the absence of this option, comparisons are done
against the first ifunc in the list.
* benchtests/scripts/compare_strings.py (main): Add an
optional -base option.
(process_results): New argument base_func.
Read the memcpy results in json and print out the results in tabular
form, in addition to generating a graph of the results to compare all
of the implementations.
The format of the output is extensible enough to allow this kind of
analysis to be done on other string functions as well.
* benchtests/scripts/benchout_strings.schema.json: New file.
* benchtests/scripts/compare_strings.py: New file.
Prevent function calls that don't return anything from being optimized
out by the compiler by marking their input variables as used.
This prevents the sincos function call from being optimized out in the
benchmark.
This script is a sample implementation that uses import_bench to
construct two benchmark objects and compare them. If detailed timing
information is available (when one does `make DETAILED=1 bench`), it
writes out graphs for all functions it benchmarks and prints
significant differences in timings of the two benchmark runs. If
detailed timing information is not available, it points out
significant differences in aggregate times.
Call this script as follows:
compare_bench.py schema_file.json bench1.out bench2.out
Alternatively, if one wants to set a different threshold for warnings
(default is a 10% difference):
compare_bench.py schema_file.json bench1.out bench2.out 25
The threshold in the example above is 25%. schema_file.json is the
JSON schema (which is $srcdir/benchtests/scripts/benchout.schema.json
for the benchmark output file) and bench1.out and bench2.out are the
two benchmark output files to compare.
The key functionality here is the compress_timings function which
groups together points that are close together into a single point
that is the mean of all its representative points. Any point in such
a group is at most 1.5x the smallest point in that group. The
detailed derivation is a comment in the function.
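The grouping invariant can be sketched as follows (not the exact code
in import_bench, but it preserves the same 1.5x bound):

# Walk the sorted timings and start a new group whenever a value
# exceeds 1.5x the smallest member of the current group; each group is
# then replaced by its mean.
def compress_timings(points):
    groups = []
    for p in sorted(points):
        if groups and p <= 1.5 * groups[-1][0]:
            groups[-1].append(p)
        else:
            groups.append([p])
    return [sum(g) / len(g) for g in groups]

print(compress_timings([10, 11, 14, 16, 30, 31]))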
* benchtests/scripts/compare_bench.py: New file.
* benchtests/scripts/import_bench.py (mean): New function.
(split_list): Likewise.
(do_for_all_timings): Likewise.
(compress_timings): Likewise.
This is the beginning of a module to import and process benchmark
outputs. The module currently supports importing of a bench.out and
validating it against a schema file. In future this could grow a set
of routines that benchmark consumers may find useful to build their
own analysis tools. I have altered validate_bench to use this module
too.
* benchtests/scripts/import_bench.py: New file.
* benchtests/scripts/validate_benchout.py: Import import_bench
instead of jsonschema.
(validate_bench): Remove function.
(main): Use import_bench.
Add a new 'init' directive that specifies the name of the function to
call to do function-specific initialization. This is useful for
benchmarks that need to do a one-time initialization before the
functions are executed.
This patch adds an option to get detailed benchmark output for
functions. Invoking the benchmark with 'make DETAILED=1 bench' causes
each benchmark program to store a mean execution time for each input
it works on. This is useful to give a more comprehensive picture of
performance of functions compared to just the single mean figure.
This patch changes the output format of the main benchmark output file
(bench.out) to an extensible format. I chose JSON over XML because in
addition to being extensible, it is also not too verbose.
Additionally, it has good support in Python.
The significant change I have made in terms of functionality is to put
timing information as an attribute in JSON instead of a string and to
do that, there is a separate program that prints out a JSON snippet
mentioning the type of timing (hp_timing or clock_gettime). The mean
timing has now changed from iterations per unit to actual timing per
iteration.
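For a consumer of the new format, reading the file is now
straightforward (a minimal sketch, assuming a bench.out produced by
'make bench'):

import json

# The timing type (hp_timing or clock_gettime) is a top-level
# attribute and per-function results are regular JSON objects.
with open("bench.out") as f:
    bench = json.load(f)

print("Timing type:", bench["timing_type"])
for name, results in bench["functions"].items():
    print(name, results)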