I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
This patch fixes two exceptions in validate_benchout.py:
1) AttributeError, raised if benchout_strings.schema.json is specified, and
2) json.decoder.JSONDecodeError, raised if the benchout file is not JSON.
A sketch of the fix follows the two tracebacks below.
$ ~/glibc/benchtests/scripts/validate_benchout.py bench-memset.out \
~/glibc/benchtests/scripts/benchout_strings.schema.json
Traceback (most recent call last):
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
sys.exit(main(sys.argv[1:]))
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
bench.parse_bench(args[0], args[1])
File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 139, in parse_bench
do_for_all_timings(bench, lambda b, f, v:
File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 107, in do_for_all_timings
if 'timings' not in bench['functions'][func][k].keys():
AttributeError: 'str' object has no attribute 'keys'
$ ~/glibc/benchtests/scripts/validate_benchout.py bench-math-inlines.out \
~/glibc/benchtests/scripts/benchout_strings.schema.json
Traceback (most recent call last):
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
sys.exit(main(sys.argv[1:]))
File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
bench.parse_bench(args[0], args[1])
File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 137, in parse_bench
bench = json.load(benchfile)
File "/usr/lib/python3.6/json/__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.6/json/decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 17 (char 16)
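The shape of the fix can be sketched as follows. This is a minimal
illustration derived from the two tracebacks above, not the actual glibc
patch, and the helper names only approximate those in import_bench.py:

import json
import sys

def do_for_all_timings(bench, callback):
    """Call CALLBACK for each timing object, skipping values that are
    not dicts (e.g. string results produced under
    benchout_strings.schema.json), which avoids the AttributeError."""
    for func in bench['functions'].keys():
        for k in bench['functions'][func].keys():
            if not isinstance(bench['functions'][func][k], dict):
                continue
            if 'timings' in bench['functions'][func][k]:
                callback(bench, func, k)

def load_bench(filename):
    """Load a benchmark output file, failing gracefully when the file
    is not valid JSON instead of raising json.decoder.JSONDecodeError."""
    try:
        with open(filename, 'r') as benchfile:
            return json.load(benchfile)
    except json.decoder.JSONDecodeError:
        sys.stderr.write('Invalid JSON file: %s\n' % filename)
        sys.exit(1)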
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
Non-consumable data, i.e. data not related to benchmarks, should be sent
to standard error so that pipelines can work as expected (sketched below).
* benchtests/scripts/compare_bench.py (do_compare): Write to stderr when
the stat is not present.
* benchtests/scripts/compare_bench.py (plot_graphs): Write to stderr when
the timings field is not present. The string showing the output file name
also goes to stderr.
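The idea can be illustrated with a small sketch (the helper name is
hypothetical, not taken from the actual diff):

import sys

def diag(msg):
    # Non-consumable diagnostics go to stderr; stdout stays clean for
    # pipelines such as `compare_bench.py ... | sort -rn`.
    sys.stderr.write(msg + '\n')

diag('Skipping bench_memset: no detailed timings')
print('bench_memcpy: 12.5% difference')  # consumable result on stdout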
Allow the user to pick a statistic from the command line, defaulting to min
and mean. At the same time, if a stat does not exist, catch the run-time
exception and keep comparing the rest of the benchmarked functions. Finally,
handle division-by-zero exceptions in the same way, keeping the comparison
going for the remaining functions, which makes the script a bit more fault
tolerant and thus more useful (see the sketch below).
* benchtests/scripts/compare_bench.py (do_compare): Catch KeyError and
ZeroDivisionError exceptions.
* benchtests/scripts/compare_bench.py (compare_runs): Use the stats argument
to loop through the user-provided statistics.
* benchtests/scripts/compare_bench.py (main): Include the --stats argument.
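A sketch of the intended behavior, with names and structure simplified
from the actual script:

import sys

def do_compare(func, stat, t1, t2, threshold):
    # Compare one statistic of one function; skip it, rather than abort
    # the whole run, when the stat is missing or the baseline is zero.
    try:
        d = abs(t2[stat] - t1[stat]) * 100 / t1[stat]
    except KeyError:
        sys.stderr.write('Skipping %s: statistic %s not found\n'
                         % (func, stat))
        return
    except ZeroDivisionError:
        sys.stderr.write('Skipping %s: zero baseline for %s\n'
                         % (func, stat))
        return
    if d > threshold:
        print('%s: %.2f%% difference in %s' % (func, d, stat))

# Loop over the user-selected statistics (defaulting to min and mean).
for stat in ['min', 'mean']:
    do_compare('bench_memcpy', stat, {'min': 10.0, 'mean': 12.0},
               {'min': 13.0, 'mean': 12.5}, 10.0)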
Allow other functions to be processed, making the script a bit more fault
tolerant and thus more useful.
* benchtests/scripts/compare_bench.py (compare_runs): Continue instead of return.
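Schematically (a hypothetical loop body, not the literal diff):

def compare_runs(funcs):
    for func in funcs:
        if 'timings' not in func:
            # A 'return' here would silently skip every remaining
            # function; 'continue' lets the loop process the rest.
            continue
        print('comparing', func['name'])

compare_runs([{'name': 'bench_memcpy', 'timings': [1.0]},
              {'name': 'bench_strlen'},
              {'name': 'bench_memset', 'timings': [2.0]}])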
Otherwise, we see the following runtime error when using the threshold
parameter:
File "./glibc/benchtests/scripts/compare_bench.py", line 46, in do_compare
if d > threshold:
TypeError: '>' not supported between instances of 'float' and 'str'
* benchtests/scripts/compare_bench.py (main): Set float type on the
threshold argument (see the example below).
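For illustration, the relevant argparse behavior (the exact option
spelling in the script is an assumption here):

import argparse

parser = argparse.ArgumentParser()
# Without type=float, argparse stores the value as a string and the
# comparison "d > threshold" raises the TypeError shown above.
parser.add_argument('--threshold', default=10.0, type=float,
                    help='Percent difference threshold for warnings')
args = parser.parse_args(['--threshold', '25'])
assert isinstance(args.threshold, float)  # 25.0, not '25'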
The argparse library is used in the compare_bench script to improve command
line argument parsing. The schema validation file is now optional, reducing
the number of required parameters by one (see the sketch below).
* benchtests/scripts/compare_bench.py (__main__): Use the argparse
library to improve command line parsing.
(__main__): Make the schema file an optional parameter (--schema),
defaulting to benchtests/scripts/benchout.schema.json.
(main): Move the argument parsing out to __main__ and leave main only
as a caller of the main comparison functions.
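An approximate shape of the resulting command line interface (a sketch,
assuming the default schema path is resolved relative to the script):

import argparse
import os

def parse_args():
    parser = argparse.ArgumentParser(
        description='Compare two benchmark runs.')
    parser.add_argument('bench1', help='First benchmark output file')
    parser.add_argument('bench2', help='Second benchmark output file')
    # The schema is now optional, defaulting to benchout.schema.json
    # next to the script.
    parser.add_argument('--schema',
                        default=os.path.join(
                            os.path.dirname(os.path.realpath(__file__)),
                            'benchout.schema.json'),
                        help='JSON schema to validate the benchmark files')
    return parser.parse_args()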
This script is a sample implementation that uses import_bench to
construct two benchmark objects and compare them. If detailed timing
information is available (when one does `make DETAILED=1 bench`), it
writes out graphs for all functions it benchmarks and prints
significant differences in timings of the two benchmark runs. If
detailed timing information is not available, it points out
significant differences in aggregate times.
Call this script as follows:
compare_bench.py schema_file.json bench1.out bench2.out
Alternatively, if one wants to set a different threshold for warnings
(default is a 10% difference):
compare_bench.py schema_file.json bench1.out bench2.out 25
The threshold in the example above is 25%. schema_file.json is the
JSON schema (which is $srcdir/benchtests/scripts/benchout.schema.json
for the benchmark output file) and bench1.out and bench2.out are the
two benchmark output files to compare.
The key functionality here is the compress_timings function, which
groups points that are close to one another into a single point that
is the mean of all its representative points. Any point in such a
group is at most 1.5x the smallest point in that group. The detailed
derivation is a comment in the function.
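The grouping rule can be shown with a simplified rendition of the idea
(not the exact import_bench.py code):

def mean(lst):
    # Arithmetic mean of a list of numbers.
    return sum(lst) / len(lst)

def compress_timings(points):
    # Sort the timings, split them into groups in which every point is
    # at most 1.5x the smallest point of its group, then collapse each
    # group to the mean of its members.
    if not points:
        return []
    points = sorted(points)
    groups = [[points[0]]]
    for p in points[1:]:
        if p <= 1.5 * groups[-1][0]:
            groups[-1].append(p)
        else:
            groups.append([p])
    return [mean(g) for g in groups]

print(compress_timings([10.0, 11.0, 14.0, 40.0, 42.0]))
# [11.666666666666666, 41.0]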
* benchtests/scripts/compare_bench.py: New file.
* benchtests/scripts/import_bench.py (mean): New function.
(split_list): Likewise.
(do_for_all_timings): Likewise.
(compress_timings): Likewise.