Add the workload test properties (max-throughput, latency, etc.) to
the schema to prevent benchmark output validation from failing.
* benchtests/scripts/benchout.schema.json (properties): Add
new properties.
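These are the result fields the workload tests emit (they match the
bench.out example shown further down in this log). A minimal sketch of
how such properties could be declared, written as the Python dict that
jsonschema consumes; the exact nesting in benchout.schema.json may
differ:

  import jsonschema

  workload_result_properties = {
      "reciprocal-throughput": {"type": "number"},
      "latency": {"type": "number"},
      "max-throughput": {"type": "number"},
      "min-throughput": {"type": "number"},
  }

  # A workload result with only these fields now validates.
  jsonschema.validate({"latency": 200, "max-throughput": 1.0e+07},
                      {"type": "object",
                       "properties": workload_result_properties})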
Python 2 does not have FileNotFoundError, so drop it in favour of
simply printing out the last (and most informative) line of the
exception.
* benchtests/scripts/compare_strings.py: Import traceback.
(parse_file): Pretty-print error.
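A minimal sketch of the portable error report, assuming a hypothetical
loader function (the parse_file in compare_strings.py differs in
detail):

  import json
  import sys
  import traceback

  def parse_file(filename):
      try:
          with open(filename) as f:
              return json.load(f)
      except Exception:
          # Works on both Python 2 and 3: print only the last, most
          # informative line of the traceback, e.g.
          # "IOError: [Errno 2] No such file or directory: 'bench.out'".
          sys.stderr.write(traceback.format_exc().splitlines()[-1] + '\n')
          sys.exit(1)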
The argparse library is now used in the compare_bench script to improve
command line argument parsing. The schema validation file is now
optional, reducing the number of required parameters by one.
* benchtests/scripts/compare_bench.py (__main__): Use the argparse
library to improve command line parsing.
(__main__): Make the schema file an optional parameter (--schema),
defaulting to benchtests/scripts/benchout.schema.json.
(main): Move the argument parsing out to __main__ and leave main only
as the caller of the comparison functions.
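A sketch of the resulting parser; --schema and its default are taken
from the description above, while --threshold and the positional names
are illustrative:

  import argparse

  parser = argparse.ArgumentParser(description='Compare two benchmark outputs.')
  parser.add_argument('bench1', help='First bench.out file to compare')
  parser.add_argument('bench2', help='Second bench.out file to compare')
  parser.add_argument('--schema',
                      default='benchtests/scripts/benchout.schema.json',
                      help='JSON schema used to validate the benchmark outputs')
  parser.add_argument('--threshold', type=float, default=10.0,
                      help='Percent difference that triggers a warning')
  args = parser.parse_args(['bench1.out', 'bench2.out'])
  print(args.schema, args.threshold)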
On x86-64, there may be multiple IFUNC implementations of a given
function, but we may only be interested in a subset of them. This
patch adds a -f/--functions argument to compare just a subset of the
IFUNC implementations.
* benchtests/scripts/compare_strings.py (process_results): Add
funcs argument. Compare only functions which are selected.
(main): Check if base function is among selected functions.
Pass selected functions to process_results.
(__main__): Add -f/--functions argument.
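A simplified sketch of the selection logic; the data layout and the
option wiring are illustrative, not the script's real structures:

  import argparse
  import sys

  def process_results(timings, base_func, funcs):
      # timings: dict mapping IFUNC name -> list of measured times.
      if funcs is not None and base_func not in funcs:
          sys.exit('Base function %s is not among the selected functions'
                   % base_func)
      for name, times in timings.items():
          # Compare only the implementations the user selected.
          if funcs is not None and name not in funcs:
              continue
          print(name, sum(times) / len(times))

  parser = argparse.ArgumentParser()
  parser.add_argument('-f', '--functions', nargs='+', default=None,
                      help='Compare only the named IFUNC implementations')
  args = parser.parse_args(['-f', '__memcpy_ssse3', '__memcpy_avx_unaligned'])
  process_results({'__memcpy_ssse3': [10.0, 11.0],
                   '__memcpy_avx_unaligned': [8.0, 9.0],
                   '__memcpy_sse2': [12.0, 13.0]},
                  base_func='__memcpy_ssse3', funcs=args.functions)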
Catch runtime exceptions in case the user provides a wrong base
function, attribute or input file. In any of these cases, quit
immediately with a non-zero return code.
* benchtests/scripts/compare_strings.py (process_results): Catch
the exceptions raised for a non-existent base_func or a non-existent
attribute.
(parse_file): Catch the exception raised for a non-existent input file.
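A minimal sketch of the failure handling with simplified data
structures (the real script's structures and messages differ):

  import sys

  def process_results(results, base_func, attr):
      try:
          base_results = results[base_func]
      except KeyError:
          sys.exit('Invalid base function: %s' % base_func)
      try:
          return [r[attr] for r in base_results]
      except KeyError:
          sys.exit('Invalid attribute: %s' % attr)

  print(process_results({'memcpy': [{'mean': 1.0}]}, 'memcpy', 'mean'))
  # A wrong base function or attribute exits immediately with status 1:
  process_results({'memcpy': [{'mean': 1.0}]}, 'strlen', 'mean')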
A string comparison report with neither diff numbers nor a header is
easier for other tools to consume.
* benchtests/scripts/compare_strings.py: Add --no-diff and --no-header
options to skip the diff calculation and omit the header, respectively.
(main): Process --no-diff and --no-header.
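A sketch of how the two switches can gate the report; the column
layout is illustrative:

  import argparse

  def print_report(rows, no_diff, no_header):
      if not no_header:
          print('%-12s %12s %12s' % ('length', 'base', 'candidate'))
      for length, base, cand in rows:
          line = '%-12s %12.2f %12.2f' % (length, base, cand)
          if not no_diff:
              line += ' %11.2f%%' % ((cand - base) * 100.0 / base)
          print(line)

  parser = argparse.ArgumentParser()
  parser.add_argument('--no-diff', action='store_true',
                      help='Do not print the percent difference column')
  parser.add_argument('--no-header', action='store_true',
                      help='Do not print the column header')
  args = parser.parse_args(['--no-header'])
  print_report([(32, 10.0, 9.0)], args.no_diff, args.no_header)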
When executing bench-math, benchmark output validation fails with
error messages like:

  Invalid benchmark output: 'workload-spec2006.wrf' does not match any
  of the regexes: '^[_a-zA-Z0-9]*$'

  Invalid benchmark output: Additional properties are not allowed
  ('workload-spec2006.wrf' was unexpected)

The errors were seen when running tests such as workload-spec2006.wrf,
'stack=1024,guard=1' and 'stack=1024,guard=2'. The problem is that the
current regexes do not accept the hyphen, dot, equals sign and comma
that appear in these test names.
This patch changes the regexes in benchout.schema.json to accept these
symbols in benchmark test names.
ChangeLog:
* benchtests/scripts/benchout.schema.json: Fix regexes to accept a
wider range of test names.
Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
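The fix can be seen with a quick check in Python; the second pattern
is one that admits the extra characters, and the exact regex committed
may differ:

  import re

  old = re.compile(r'^[_a-zA-Z0-9]*$')
  new = re.compile(r'^[_a-zA-Z0-9,.=-]*$')   # also allows ',', '.', '=', '-'

  for name in ('workload-spec2006.wrf', 'stack=1024,guard=1'):
      print(name, bool(old.match(name)), bool(new.match(name)))
  # The old pattern rejects both names; the new one accepts them.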
Benchmark workload-spec2006.wrf does not produce max, min or mean
results but instead produces throughput. This is reflected in
benchtests/bench-skeleton.c. This patch adjusts benchout.schema.json so
that bench.out from the bench-math benchmarks is considered valid.
ChangeLog:
* benchtests/scripts/benchout.schema.json: Add throughput as an
accepted result property and remove "max", "min" and "mean" from the
required properties, based on benchtests/bench-skeleton.c.
Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
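A minimal jsonschema sketch of the effect: with "max", "min" and
"mean" no longer required, a throughput-only entry validates (the real
schema is more detailed):

  import jsonschema

  result_schema = {
      "type": "object",
      "properties": {
          "throughput": {"type": "number"},
          "max": {"type": "number"},
          "min": {"type": "number"},
          "mean": {"type": "number"},
      },
      # No "required" list naming "max", "min" and "mean" any more.
  }

  jsonschema.validate({"throughput": 1.0e+07}, result_schema)
  print('valid')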
The compare_strings.py script unconditionally generates a PNG graph
image of the input data, which can be unnecessary and slow. Put this
behind an optional flag -g.
* benchtests/scripts/compare_strings.py: New option -g.
(draw_graph): Print a message that a graph is being generated.
(process_results): Generate graph only if -g is passed.
(main): Process option -g.
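A sketch of gating the slow graph generation behind the flag; the
long option name and data layout are illustrative:

  import argparse

  def draw_graph(func, timings):
      print('Generating graph for %s...' % func)
      # ...plot the timings and save them as a PNG here...

  def process_results(results, gen_graph):
      for func, timings in results.items():
          if gen_graph:
              draw_graph(func, timings)
          print(func, timings)

  parser = argparse.ArgumentParser()
  parser.add_argument('-g', '--graph', action='store_true',
                      help='Generate a PNG graph of the results')
  args = parser.parse_args([])          # graph generation off by default
  process_results({'memcpy': [1.0, 2.0]}, args.graph)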
Make the column widths for the outputs fixed so that they look a
little less messy. They will still look bad with lots of IFUNCs (like
on x86) but it's still a step forward.
* benchtests/scripts/compare_strings.py (process_results):
Better spacing for output.
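A sketch of the fixed-width formatting idea:

  ifuncs = ['__memcpy_avx_unaligned', '__memcpy_ssse3', '__memcpy_sse2']
  timings = [10.25, 12.5, 14.125]

  # Fixed column widths keep the rows aligned whatever the value lengths.
  print(''.join('%24s' % name for name in ifuncs))
  print(''.join('%24.2f' % t for t in timings))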
Make the script more usable by adding proper command line options
along with a way to query the options. The script is capable of doing
a bunch of things right now like choosing a base for comparison,
choosing to generate graphs, etc. and they should be accessible via
command line switches.
* benchtests/scripts/compare_strings.py: Use argparse.
* benchtests/README: Document existence of compare_strings.py.
This patch further improves math function benchmarking by adding a latency
test in addition to throughput. This enables more accurate comparisons of the
math functions. The latency test works by creating a dependency on the previous
iteration: func_res = F (func_res * zero + input[i]). The multiply by zero
avoids changing the input.
It reports reciprocal throughput and latency in nanoseconds (depending on the
timing header used) and max/min throughput in iterations per second:
"workload-spec2006.wrf": {
"reciprocal-throughput": 100,
"latency": 200,
"max-throughput": 1.0e+07,
"min-throughput": 5.0e+06
}
* benchtests/bench-skeleton.c (main): Add support for
latency benchmarking.
* benchtests/scripts/bench.py: Add support for latency benchmarking.
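A conceptual Python sketch of the two measurement loops; the committed
code is C in benchtests/bench-skeleton.c, so this only illustrates the
dependency trick, not the real timing machinery:

  import math
  import time

  def bench(func, inputs, iters=1000):
      # Throughput: independent calls that the CPU may overlap.
      start = time.perf_counter()
      for _ in range(iters):
          for x in inputs:
              func(x)
      throughput = (time.perf_counter() - start) / (iters * len(inputs))

      # Latency: each call depends on the previous result.  Multiplying
      # the previous result by zero keeps the dependency chain without
      # changing the value actually passed to func.
      res = 0.0
      start = time.perf_counter()
      for _ in range(iters):
          for x in inputs:
              res = func(res * 0.0 + x)
      latency = (time.perf_counter() - start) / (iters * len(inputs))
      return throughput, latency

  print(bench(math.sin, [0.1, 0.5, 1.0]))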
The compare_strings.py script generates a graph for the benchmarks it
compares, and that fails if an X display is not available. Avoid the
error and ensure that the graph is simply generated and saved as a PNG
file, without requiring a display.
* benchtests/scripts/compare_strings.py: Avoid display error
when generating graph.
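One common way to do this is to select a non-interactive matplotlib
backend before pyplot is imported; the exact change in
compare_strings.py may differ:

  import matplotlib
  matplotlib.use('Agg')          # render off-screen, no X display needed
  import matplotlib.pyplot as plt

  plt.plot([1, 2, 3], [10.0, 20.0, 15.0])
  plt.savefig('benchmark.png')   # write the graph straight to a PNG file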
This patch allows one to provide the function name using an optional
-base option to compare all other functions against. This is useful
when pitching one implementation of a string function against
alternatives. In the absence of this option, comparisons are done
against the first ifunc in the list.
* benchtests/scripts/compare_strings.py (main): Add an
optional -base option.
(process_results): New argument base_func.
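A simplified sketch of the base selection; the data layout is
illustrative:

  def compare(timings, base_func=None):
      # timings: dict mapping IFUNC name -> measured time.
      names = list(timings)
      # Default to the first ifunc in the list when no -base is given.
      base = base_func if base_func is not None else names[0]
      for name in names:
          print('%s: %.2f%% of %s'
                % (name, 100.0 * timings[name] / timings[base], base))

  compare({'__memcpy_avx_unaligned': 10.0, '__memcpy_ssse3': 12.5},
          base_func='__memcpy_ssse3')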
Read the memcpy results in json and print out the results in tabular
form, in addition to generating a graph of the results to compare all
of the implementations.
The format of the output is extensible enough to allow this kind of
analysis to be done on other string functions as well.
* benchtests/scripts/benchout_strings.schema.json: New file.
* benchtests/scripts/compare_strings.py: New file.
Prevent function calls that don't return anything from being optimized
out by the compiler by marking their input variables as used.
This prevents the sincos function call from being optimized out in the
benchmark.
This script is a sample implementation that uses import_bench to
construct two benchmark objects and compare them. If detailed timing
information is available (when one does `make DETAILED=1 bench`), it
writes out graphs for all functions it benchmarks and prints
significant differences in timings of the two benchmark runs. If
detailed timing information is not available, it points out
significant differences in aggregate times.
Call this script as follows:
compare_bench.py schema_file.json bench1.out bench2.out
Alternatively, if one wants to set a different threshold for warnings
(default is a 10% difference):
compare_bench.py schema_file.json bench1.out bench2.out 25
The threshold in the example above is 25%. schema_file.json is the
JSON schema (which is $srcdir/benchtests/scripts/benchout.schema.json
for the benchmark output file) and bench1.out and bench2.out are the
two benchmark output files to compare.
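A simplified sketch of the aggregate-time comparison; the real
compare_bench.py walks the validated bench.out structures and also
handles the detailed timings and graphs:

  def compare_means(means1, means2, threshold=10.0):
      # means1, means2: dicts mapping benchmark name -> aggregate time.
      for name in sorted(means1):
          diff = (means2[name] - means1[name]) * 100.0 / means1[name]
          if abs(diff) > threshold:
              print('%s: %.2f%% %s' % (name, abs(diff),
                                       'slower' if diff > 0 else 'faster'))

  compare_means({'acos': 40.0, 'atan': 30.0},
                {'acos': 55.0, 'atan': 31.0}, threshold=10.0)
  # acos: 37.50% slower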
The key functionality here is the compress_timings function which
groups together points that are close together into a single point
that is the mean of all its representative points. Any point in such
a group is at most 1.5x the smallest point in that group. The
detailed derivation is a comment in the function.
* benchtests/scripts/compare_bench.py: New file.
* benchtests/scripts/import_bench.py (mean): New function.
(split_list): Likewise.
(do_for_all_timings): Likewise.
(compress_timings): Likewise.
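A simplified sketch of the grouping idea behind compress_timings; the
derivation and the traversal of the bench.out tree in import_bench.py
differ:

  def mean(lst):
      return sum(lst) / len(lst)

  def compress_timings(points):
      # Group sorted points so that every member of a group is at most
      # 1.5x the smallest member, then replace each group by its mean.
      groups = []
      for p in sorted(points):
          if groups and p <= 1.5 * groups[-1][0]:
              groups[-1].append(p)
          else:
              groups.append([p])
      return [mean(g) for g in groups]

  print(compress_timings([10, 11, 14, 16, 30, 31, 100]))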
This is the beginning of a module to import and process benchmark
outputs. The module currently supports importing of a bench.out and
validating it against a schema file. In future this could grow a set
of routines that benchmark consumers may find useful to build their
own analysis tools. I have altered validate_benchout.py to use this
module too.
* benchtests/scripts/import_bench.py: New file.
* benchtests/scripts/validate_benchout.py: Import import_bench
instead of jsonschema.
(validate_bench): Remove function.
(main): Use import_bench.
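A minimal sketch of the import-and-validate step such a module
centralizes; the function name is illustrative and the real
import_bench API may differ:

  import json
  import jsonschema

  def parse_bench(filename, schema_filename):
      # Parse a bench.out file and validate it against the JSON schema.
      with open(schema_filename) as schema_file:
          schema = json.load(schema_file)
      with open(filename) as bench_file:
          bench = json.load(bench_file)
      jsonschema.validate(bench, schema)
      return bench

  # bench = parse_bench('bench.out',
  #                     'benchtests/scripts/benchout.schema.json')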
Add a new 'init' directive that specifies the name of the function to
call to do function-specific initialization. This is useful for
benchmarks that need to do a one-time initialization before the
functions are executed.
This patch adds an option to get detailed benchmark output for
functions. Invoking the benchmark with 'make DETAILED=1 bench' causes
each benchmark program to store a mean execution time for each input
it works on. This is useful to give a more comprehensive picture of
performance of functions compared to just the single mean figure.
This patch changes the output format of the main benchmark output file
(bench.out) to an extensible format. I chose JSON over XML because in
addition to being extensible, it is also not too verbose.
Additionally, it has good support in Python.
The significant functional change is that timing information is now a
JSON attribute instead of a string; to support that, a separate program
prints out a JSON snippet naming the type of timing (hp_timing or
clock_gettime). The mean timing has also changed from iterations per
unit time to the actual time per iteration.
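An illustrative sketch of the kind of structure this enables; the
field names, including the one carrying the timing type, are examples
rather than the exact schema:

  import json

  bench_out = {
      "timing_type": "hp_timing",    # or "clock_gettime"
      "functions": {
          "acos": {
              "": {"duration": 1.2e9, "iterations": 3.0e7, "mean": 40.0}
          }
      }
  }
  print(json.dumps(bench_out, indent=2))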