Commit Graph

44 Commits

Nisha Menon
51a121eb36 compare_strings.py : Add --gmean flag
To calculate the geometric mean of string benchmark results.

Signed-off-by: Nisha Poyarekar <nisha.s.menon@gmail.com>
2023-04-04 13:51:45 -05:00
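
A minimal sketch of the statistic the --gmean flag above computes; this is not the script's actual code and the sample timings are illustrative:

  import math

  def geometric_mean(timings):
      """Geometric mean of a list of positive timings."""
      return math.exp(sum(math.log(t) for t in timings) / len(timings))

  print(geometric_mean([10.0, 20.0, 40.0]))  # ~20.0
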
Joseph Myers
6d7e8eda9b Update copyright dates with scripts/update-copyrights 2023-01-06 21:14:39 +00:00
Su Lifan
edddffc9df benchtests: make compare_strings.py accept string as attribute value
Commit ac759b1fbf added the attribute "overlap" to bench-memmove-walk,
whose value is a string. This makes compare_strings.py fail, since
benchout_strings.schema.json requires attribute values to be numbers.

This patch relaxes that constraint.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-03-08 19:42:52 +05:30
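
For illustration, the kind of relaxation described above can be expressed with the jsonschema library; the schema fragment below is hypothetical and is not the actual contents of benchout_strings.schema.json:

  import jsonschema

  # Hypothetical fragment: an attribute value may now be a number or a string.
  attribute_schema = {"type": ["number", "string"]}

  jsonschema.validate(1024, attribute_schema)       # accepted: number
  jsonschema.validate("walk", attribute_schema)     # accepted: string
  try:
      jsonschema.validate([1, 2], attribute_schema)
  except jsonschema.ValidationError as err:
      print(err.message)                            # still rejected: a list
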
Paul Eggert
581c785bf3 Update copyright dates with scripts/update-copyrights
I used these shell commands:

../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")

and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.

I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah.  I don't
know why I run into these diagnostics whereas others evidently do not.

remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
2022-01-01 11:40:24 -08:00
Naohiro Tamura
cb5088cfd3 benchtests: Fix validate_benchout.py exceptions
This patch fixes two exceptions in validate_benchout.py:
1) AttributeError
   if benchout_strings.schema.json is specified, and
2) json.decoder.JSONDecodeError
   if the benchout file is not JSON.

$ ~/glibc/benchtests/scripts/validate_benchout.py bench-memset.out \
~/glibc/benchtests/scripts/benchout_strings.schema.json
Traceback (most recent call last):
  File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
    bench.parse_bench(args[0], args[1])
  File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 139, in parse_bench
    do_for_all_timings(bench, lambda b, f, v:
  File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 107, in do_for_all_timings
    if 'timings' not in bench['functions'][func][k].keys():
AttributeError: 'str' object has no attribute 'keys'

$ ~/glibc/benchtests/scripts/validate_benchout.py bench-math-inlines.out \
~/glibc/benchtests/scripts/benchout_strings.schema.json
Traceback (most recent call last):
  File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 86, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/home/naohirot/glibc/benchtests/scripts/validate_benchout.py", line 69, in main
    bench.parse_bench(args[0], args[1])
  File "/home/naohirot/glibc/benchtests/scripts/import_bench.py", line 137, in parse_bench
    bench = json.load(benchfile)
  File "/usr/lib/python3.6/json/__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.6/json/decoder.py", line 342, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 17 (char 16)

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2021-09-16 09:19:55 +05:30
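
A hedged sketch of how the two failures above can be turned into clean error messages; the helper names here are hypothetical and the real fix lives in validate_benchout.py and import_bench.py:

  import json
  import sys

  def load_bench(path):
      """Exit with a message instead of a raw JSONDecodeError traceback."""
      try:
          with open(path) as benchfile:
              return json.load(benchfile)
      except json.decoder.JSONDecodeError as err:
          sys.stderr.write('%s is not valid JSON: %s\n' % (path, err))
          sys.exit(1)

  def has_timings(variant):
      """Guard against string-valued attributes before calling .keys()."""
      return isinstance(variant, dict) and 'timings' in variant
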
Naohiro Tamura
3886eaff9d benchtests: Enable scripts/plot_strings.py to read stdin
This patch enables scripts/plot_strings.py to read a benchmark result
file from stdin.
To keep backward compatibility, that is, to keep accepting multiple
benchmark result files as arguments, an empty argument list does not
mean stdin; only '-' does.
Therefore the nargs parameter of the ArgumentParser.add_argument()
method is kept as '+' rather than changed to '?'.

ex:
  $ jq '.' bench-memset.out | plot_strings.py -
  $ jq '.' bench-memset.out | plot_strings.py - bench-memset-large.out
  $ plot_strings.py bench-memset.out bench-memset-large.out

error ex:
  $ jq '.' bench-memset.out | plot_strings.py

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2021-09-13 09:04:21 +05:30
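
A sketch of the argument handling described above, assuming standard argparse behaviour; the argument name and placeholder body are illustrative, not the script's real code:

  import argparse
  import sys

  parser = argparse.ArgumentParser()
  # nargs='+' still requires at least one argument; '-' selects stdin explicitly.
  parser.add_argument('bench_results', nargs='+',
                      help="benchmark result files, or '-' for stdin")
  args = parser.parse_args()

  for name in args.bench_results:
      infile = sys.stdin if name == '-' else open(name)
      print(infile.read()[:40])  # placeholder for the real plotting code
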
Siddhesh Poyarekar
a373aa25c7 benchtests: Fix pthread-locks test to produce valid json
The benchtests json allows {function {variant}} categorization of
results whereas the pthread-locks tests had {function {variant
{subvariant}}}, which broke validation.  Fix that by serializing the
subvariants as variant-subvariant.  Also update the schema to
recognize the new benchmark attributes after fixing the naming
conventions.
2021-04-18 12:56:29 +05:30
Paul Eggert
2b778ceb40 Update copyright dates with scripts/update-copyrights
I used these shell commands:

../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")

and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
2021-01-02 12:17:34 -08:00
Alistair Francis
4f88b38097 Convert Python scripts to Python 3
Change all of the #! lines in Python scripts that are called from
Makefiles to reference /usr/bin/python3.

All of the scripts called from Makefiles are already run with Python 3,
so let's make sure they are explicitly using Python 3 if called
manually.
2020-03-03 15:52:09 -08:00
Joseph Myers
d614a75396 Update copyright dates with scripts/update-copyrights. 2020-01-01 00:14:33 +00:00
Krzysztof Koch
15740788d7 Add new script for plotting string benchmark JSON output
Add a script for visualizing the JSON output generated by existing
glibc string microbenchmarks.

Overview:
plot_strings.py is capable of plotting benchmark results in the
following formats, which are controlled with the -p or --plot argument:
1. absolute timings (-p time): plot the timings as they are in the
input benchmark results file.
2. relative timings (-p rel): plot relative timing difference with
respect to a chosen ifunc (controlled with -b argument).
3. performance relative to max (-p max): for each varied parameter
value, plot 1/timing as the percentage of the maximum value out of
the plotted ifuncs.
4. throughput (-p thru): plot varied parameter value over timing

For all types of graphs, there is an option to explicitly specify
the subset of ifuncs to plot using the --ifuncs parameter.

For plot types 1. and 4. one can hide/expose exact benchmark figures
using the --values flag.

When plotting relative timing differences between ifuncs, the first
ifunc listed in the input JSON file is the baseline, unless the
baseline implementation is explicitly chosen with the --baseline
parameter. For the ease of reading, the script marks the statistically
insignificant range on the graphs. The default is +-5% but this
value can be controlled with the --threshold parameter.
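
One plausible reading of the -p rel computation and the +-5% band, written out as a small sketch; the exact formula used by plot_strings.py is not quoted here:

  def relative_diff(timing, baseline):
      """Timing difference against the baseline ifunc, in percent."""
      return (timing - baseline) / baseline * 100.0

  threshold = 5.0                       # default band, overridable with --threshold
  diff = relative_diff(210.0, 200.0)    # 5.0%
  significant = abs(diff) > threshold   # False: inside the +-5% band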

To accommodate the heterogeneity of benchmark results files, one can
control, for example, the x-axis scale, the resolution (dpi) of the
generated figures, or the key used to access the varied parameter value
in the JSON file. The corresponding options are --logarithmic,
--resolution and --key. The --key parameter ensures that plot_strings.py
works with all files which pass JSON schema validation. The schema
can be chosen with the --schema parameter.

If a window manager is available, one can enable interactive
figure display using the --display flag.

Finally, one can use the --grid flag to enable grid lines in the
generated figures.

Implementation:
plot_strings.py traverses the JSON tree until a 'results' array
is found and generates a separate figure for each such array.
The figure is then saved to a file in one of the available formats
(controlled with the --extension parameter).

As the tree is traversed, the recursive function tracks the metadata
about the test being run, so that each figure has a unique and
meaningful title and filename.
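
A minimal sketch of that traversal; make_figure and the metadata handling are placeholders, not the script's real function names:

  def make_figure(results, title):
      """Placeholder for the real figure-drawing code."""
      print('figure:', title, '-', len(results), 'points')

  def plot_all(node, metadata=()):
      """Walk the JSON tree; every 'results' array found becomes one figure."""
      if not isinstance(node, dict):
          return
      if 'results' in node:
          make_figure(node['results'], ' '.join(metadata))  # metadata gathered on the way down
          return
      for key, child in node.items():
          plot_all(child, metadata + (str(key),))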

While plot_strings.py works with existing benchmarks, provisions
have been made to allow adding more structure and metadata to these
benchmarks. Currently, many benchmarks produce multiple timing values
for the same value of the varied parameter (typically 'length').
Multiple data points for the same parameter usually mean that some other
parameter was varied as well, for example, whether memmove's src and dst
buffers overlap (see bench-memmove-walk.c and
bench-memmove-walk.out).

Unfortunately, this information is not exposed in the benchmark output
file, so plot_strings.py has to resort to computing the geometric mean
of these multiple values. In the process, useful information about the
benchmark configuration is lost. Also, averaging the timings for
different alignments can hide useful characteristics of the benchmarked
ifuncs.

Testing:
plot_strings.py has been tested on all existing string microbenchmarks
which produce results in JSON format. The script was tested on both
Windows 10 and Ubuntu 16.04.2 LTS. It runs on both Python 2 and 3
(2.7.12 and 3.5.12 tested).

Useful commands:
1. Plot timings for all ifuncs in bench-strlen.out:
$ ./plot_strings.py bench-strlen.out

2. Display help:
$ ./plot_strings.py -h

3. Plot throughput for __memset_avx512_unaligned_erms and
__memset_avx512_unaligned. Save the generated figure in pdf format to
'results/'. Use logarithmic x-axis scale, show grid lines and expose
the performance numbers:
$ ./plot_strings.py bench.out -o results/ -lgv -e pdf -p thru \
-i __memset_avx512_unaligned_erms __memset_avx512_unaligned

4. Plot relative timings for all ifuncs in bench.out with __generic_memset
as baseline. Display percentage difference threshold of +-10%:
$ ./plot_strings.py bench.out -p rel  -b __generic_memset -t 10

Discussion:
1. I would like to propose relaxing the benchout_strings.schema.json
to allow specifying either a 'results' array with 'timings' (as before)
or a 'variants' array. See below example:

{
 "timing_type": "hp_timing",
 "functions": {
  "memcpy": {
   "bench-variant": "default",
   "ifuncs": ["generic_memcpy", "__memcpy_thunderx"],
   "variants": [
    {
     "name": "powers of 2",
     "variants": [
      {
       "name": "both aligned",
       "results": [
        {
         "length": 1,
         "align1": 0,
         "align2": 0,
         "timings": [x, y]
        },
        {
         "length": 2,
         "align1": 0,
         "align2": 0,
         "timings": [x, y]
        },
...
        {
         "length": 65536,
         "align1": 0,
         "align2": 0,
         "timings": [x, y]
        }]
      },
      {
       "name": "dst misaligned",
       "results": [
        {
         "length": 1,
         "align1": 0,
         "align2": 0,
         "timings": [x, y]
        },
        {
         "length": 2,
         "align1": 0,
         "align2": 1,
         "timings": [x, y]
        },
...

The 'variants' array consists of objects such that each object has a 'name'
attribute to describe the configuration of a particular test in the
benchmark. This can be a description, for example, of how the parameter
was varied or of which buffer alignment was tested. The 'name' attribute
is then followed by another 'variants' array or a 'results' array.

The nesting of variants allows arbitrary grouping of benchmark timings,
while allowing description of these groups. Using recursion, it is
possible to procedurally create titles and filenames for the figures
being generated.
2019-11-13 14:18:52 +00:00
Paul Eggert
5a82c74822 Prefer https to http for gnu.org and fsf.org URLs
Also, change sources.redhat.com to sourceware.org.
This patch was automatically generated by running the following shell
script, which uses GNU sed, and which avoids modifying files imported
from upstream:

sed -ri '
  s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
  s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
' \
  $(find $(git ls-files) -prune -type f \
      ! -name '*.po' \
      ! -name 'ChangeLog*' \
      ! -path COPYING ! -path COPYING.LIB \
      ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
      ! -path manual/texinfo.tex ! -path scripts/config.guess \
      ! -path scripts/config.sub ! -path scripts/install-sh \
      ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
      ! -path INSTALL ! -path  locale/programs/charmap-kw.h \
      ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
      ! '(' -name configure \
            -execdir test -f configure.ac -o -f configure.in ';' ')' \
      ! '(' -name preconfigure \
            -execdir test -f preconfigure.ac ';' ')' \
      -print)

and then by running 'make dist-prepare' to regenerate files built
from the altered files, and then executing the following to cleanup:

  chmod a+x sysdeps/unix/sysv/linux/riscv/configure
  # Omit irrelevant whitespace and comment-only changes,
  # perhaps from a slightly-different Autoconf version.
  git checkout -f \
    sysdeps/csky/configure \
    sysdeps/hppa/configure \
    sysdeps/riscv/configure \
    sysdeps/unix/sysv/linux/csky/configure
  # Omit changes that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
  git checkout -f \
    sysdeps/powerpc/powerpc64/ppc-mcount.S \
    sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
  # Omit change that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
  git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S
2019-09-07 02:43:31 -07:00
Joseph Myers
04277e02d7 Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2019-01-01 00:11:28 +00:00
Leonardo Sandoval
de099757b6 benchtests: send non-consumable data to stderr
Non-consumable data, that is, data not related to benchmarks, should be
sent to standard error so that pipelines can work as expected.

	* benchtests/scripts/compare_bench.py (do_compare): write to stderr in case
    stat is not present.
	* benchtests/scripts/compare_bench.py (plot_graphs): write to stderr in case
    timings field is not present. Also string showing the output filename goes
    into the stderr.
2018-12-12 11:05:22 -06:00
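
A one-line illustration of the rule above (the file name is made up): keep stdout for the comparison data so pipelines keep working, and send everything else to stderr:

  import sys

  print('Writing out graph bench-memcpy.png', file=sys.stderr)
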
Leonardo Sandoval
1990185f5f benchtests: include --stats parameter
Allows the user to pick statistics, defaulting to min and mean, from the
command line. At the same time, if a stat does not exist, catch the
run-time exception and keep comparing the rest of the benchmarked
functions. Finally, take care of division-by-zero exceptions and, as
before, keep comparing the rest of the functions, making the script a bit
more fault tolerant and thus more useful.

	* benchtests/scripts/compare_bench.py (do_compare): Catch KeyError and
    ZeroDivisionError exceptions.
	* benchtests/scripts/compare_bench.py (compare_runs): Use stats argument to
    loop through user provided statistics.
	* benchtests/scripts/compare_bench.py (main): Include the --stats argument.
2018-12-12 11:05:22 -06:00
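
A hedged sketch of the fault tolerance described above; the data layout and function name are assumptions, not compare_bench.py's actual structure:

  import sys

  def percent_diff(bench1, bench2, func, stat):
      """Skip, rather than abort on, missing stats and zero timings."""
      try:
          old = bench1[func][stat]
          new = bench2[func][stat]
          return (new - old) / old * 100.0
      except KeyError:
          print('Stat %s not found for %s, skipping' % (stat, func), file=sys.stderr)
      except ZeroDivisionError:
          print('Zero timing for %s, skipping' % func, file=sys.stderr)
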
Leonardo Sandoval
587426d499 benchtests: keep comparing even if function timings do not match
Allows other functions to be processed, making the script a bit more
fault tolerant and thus more useful.

	* benchtests/scripts/compare_bench.py (compare_runs): Continue instead of return.
2018-12-12 11:05:22 -06:00
Leonardo Sandoval
c892ae04f4 benchtests: Set float type on --threshold argument
Otherwise, we see the following runtime error when using the parameter:

  File "./glibc/benchtests/scripts/compare_bench.py", line 46, in do_compare
    if d > threshold:
TypeError: '>' not supported between instances of 'float' and 'str'

	* benchtests/scripts/compare_bench.py (main): set float type on
	threshold argument.
2018-10-08 09:11:30 -05:00
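
The essence of the fix as a standalone snippet; the default value and help text are assumptions:

  import argparse

  parser = argparse.ArgumentParser()
  # Without type=float, args.threshold is a str and 'd > threshold' raises TypeError.
  parser.add_argument('--threshold', type=float, default=10.0,
                      help='percentage difference to consider significant')
  args = parser.parse_args(['--threshold', '25'])
  assert isinstance(args.threshold, float)
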
Siddhesh Poyarekar
8cac1f2635 [benchtests] Add workload test properties to schema
Add the workload test properties (max-throughput, latency, etc.) to
the schema to prevent benchmark output validation from failing.

	* benchtests/scripts/benchout.schema.json (properties): Add
	new properties.
2018-08-11 18:55:09 +05:30
Siddhesh Poyarekar
d67d634bef [benchtests] Fix compare_strings.py for python2
Python 2 does not have a FileNotFoundError so drop it in favour of
simply printing out the last (and most informative) line of the
exception.

	* benchtests/scripts/compare_strings.py: Import traceback.
	(parse_file): Pretty-print error.
2018-08-03 00:26:45 +05:30
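
A sketch of the approach that runs on both Python 2 and 3; the file name is illustrative:

  import traceback

  try:
      with open('bench-memcpy.out') as f:
          data = f.read()
  except IOError:
      # Python 2 has no FileNotFoundError; print only the last, most
      # informative line of the traceback instead.
      print(traceback.format_exc().splitlines()[-1])
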
Leonardo Sandoval
1cf4ae7fe6 benchtests: improve argument parsing through argparse library
The argparse library is used in the compare_bench script to improve command
line argument parsing. The 'schema validation file' is now optional, reducing
the number of required parameters by one.

	* benchtests/scripts/compare_bench.py (__main__): Use the argparse
	library to improve command line parsing.
	(__main__): Make the schema file an optional parameter (--schema),
	defaulting to benchtests/scripts/benchout.schema.json.
	(main): Move the argument parsing out to __main__ and leave main
	only as the caller of the comparison functions.
2018-07-19 14:53:37 -05:00
H.J. Lu
cb8f6affed benchtests: Add -f/--functions argument
On x86-64, there may be multiple IFUNC implementations for a given
function.  But we may only be interested in a subset of them.  This
patch adds -f/--functions argument to compare a subset of IFUNC
implementations.

	* benchtests/scripts/compare_strings.py (process_results): Add
	funcs argument.  Compare only functions which are selected.
	(main): Check if base function is among selected functions.
	Pass selected functions to process_results.
	(__main__): Add -f/--functions argument.
2018-06-12 09:10:42 -07:00
Leonardo Sandoval
a650b05ebe benchtests: Catch exceptions in input arguments
Catch runtime exceptions in case the user provides a wrong base
function, attribute(s) or input file. In any of these cases, quit
immediately with a non-zero return code.

	* benchtests/scripts/compare_string.py: (process_results) Catch
	exception in non-existent base_func and catch exception in
	non-existent attribute.
	(parse_file) Catch exception in non-existent input file.
2018-06-01 16:32:43 -05:00
Leonardo Sandoval
195abbf4cd benchtests: Add --no-diff and --no-header options
Having a string comparison report with neither diff numbers nor header
yields a more useful output to be consumed by other tools.

	* benchtests/scripts/compare_string.py: Add --no-diff and --no-header
	options to avoid diff calculation and omit header, respectively.
	(main): process --no-diff and --no-header
2018-06-01 16:32:43 -05:00
Joseph Myers
688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Victor Rodriguez
d5090db30e benchtests: Expand range of tests names in schema.json
When executing bench-math, the benchmark output is reported as invalid with
this error message:

    Invalid benchmark output: 'workload-spec2006.wrf' does not match any of
    the regexes: '^[_a-zA-Z0-9]*$' or Invalid benchmark output: Additional
    properties are not allowed ('workload-spec2006.wrf' was unexpected)

The error was seen when running the tests workload-spec2006.wrf,
'stack=1024,guard=1' and 'stack=1024,guard=2'.
The problem is that the current regexes do not accept the hyphen, dot,
equals sign and comma in the output.

This patch changes the regex in benchout.schema.json to accept these
symbols in benchmark test names.

ChangeLog:

        * benchtests/scripts/benchout.schema.json: Fix regex to accept a
        wider range of test names.

Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-28 19:52:57 +05:30
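
For illustration, the old pattern from the error message above versus a relaxed one; the relaxed pattern here is only an example, the real one is in benchout.schema.json:

  import re

  old = re.compile(r'^[_a-zA-Z0-9]*$')
  new = re.compile(r'^[_a-zA-Z0-9.,=-]*$')   # illustrative relaxation

  for name in ('workload-spec2006.wrf', 'stack=1024,guard=1'):
      print(name, bool(old.match(name)), bool(new.match(name)))
  # both names fail the old regex and pass the relaxed one
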
Victor Rodriguez
0595e36034 benchtests: Adjust valid and accepted properties
Benchmark workload-spec2006.wrf does not produce max, min or mean
results but instead produces throughput. This is represented in
benchtests/bench-skeleton.c. This patch adjusts benchout.schema.json to
consider bench.out from the bench-math benchmarks as valid.

ChangeLog:

	* benchtests/scripts/benchout.schema.json: Add throughput as an accepted
	result property and remove "max", "min" and "mean" from the required
	properties, based on benchtests/bench-skeleton.c.

Signed-off-by: Victor Rodriguez <victor.rodriguez.bahena@intel.com>
Reviewed-By: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-28 19:49:59 +05:30
Siddhesh Poyarekar
140647ea6f benchtests: New -g option to generate graphs in compare_strings.py
The compare_strings.py script unconditionally generates a PNG graph
image of the input data, which can be unnecessary and slow.  Put this
behind an optional -g flag.

	* benchtests/scripts/compare_strings.py: New option -g.
	(draw_graph): Print a message that a graph is being generated.
	(process_results): Generate graph only if -g is passed.
	(main): Process option -g.
2017-09-16 15:24:00 +05:30
Siddhesh Poyarekar
5a6547b7b9 benchtests: Make compare_strings.py output a bit prettier
Make the output column widths fixed so that the results look a little
less messy.  They will still look bad with lots of IFUNCs (like on x86),
but it's still a step forward.

	* benchtests/scripts/compare_strings.py (process_results):
	Better spacing for output.
2017-09-16 15:23:12 +05:30
Siddhesh Poyarekar
06b1de2378 benchtests: Use argparse to parse arguments
Make the script more usable by adding proper command line options
along with a way to query the options.  The script is capable of doing
a bunch of things right now like choosing a base for comparison,
choosing to generate graphs, etc. and they should be accessible via
command line switches.

	* benchtests/scripts/compare_strings.py: Use argparse.
	* benchtests/README: Document existence of compare_strings.py.
2017-09-16 11:47:32 +05:30
Wilco Dijkstra
d4505b895f Add math benchmark latency test
This patch further improves math function benchmarking by adding a latency
test in addition to throughput.  This enables more accurate comparisons of the
math functions. The latency test works by creating a dependency on the previous
iteration: func_res = F (func_res * zero + input[i]). The multiply by zero
avoids changing the input.

It reports reciprocal throughput and latency in nanoseconds (depending on the
timing header used) and max/min throughput in iterations per second:

   "workload-spec2006.wrf": {
    "reciprocal-throughput": 100,
    "latency": 200,
    "max-throughput": 1.0e+07,
    "min-throughput": 5.0e+06
   }

	* benchtests/bench-skeleton.c (main): Add support for
	latency benchmarking.
	* benchtests/scripts/bench.py: Add support for latency benchmarking.
2017-08-17 16:27:20 +01:00
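
A Python sketch of the dependency trick described above; the real loop is the C code in benchtests/bench-skeleton.c, so this is only an illustration of the idea:

  import math

  def latency_loop(F, inputs):
      """Each call depends on the previous result, so calls cannot overlap."""
      zero = 0.0
      func_res = 0.0
      for x in inputs:
          func_res = F(func_res * zero + x)  # multiplying by zero keeps the input unchanged
      return func_res

  def throughput_loop(F, inputs):
      """Independent calls, which the CPU is free to overlap."""
      for x in inputs:
          F(x)

  latency_loop(math.sin, [0.1, 0.2, 0.3])
  throughput_loop(math.sin, [0.1, 0.2, 0.3])
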
Siddhesh Poyarekar
dd3e86ad7c benchtests: Avoid a display error when running in text terminal
The compare_strings.py script generates a graph for the benchmarks it
performs a comparison on, and that fails if X is not available.  Avoid
the error and ensure that only the graph is generated and saved as a
PNG file.

	* benchtests/scripts/compare_strings.py: Avoid display error
	when generating graph.
2017-08-08 00:56:10 +05:30
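
A common way to achieve this with matplotlib, assumed to match the spirit of the fix: select a non-interactive backend before pyplot is imported, so no X display is needed; the file name is illustrative:

  import matplotlib
  matplotlib.use('Agg')            # must happen before importing pyplot
  import matplotlib.pyplot as plt

  plt.plot([1, 2, 3], [10, 20, 15])
  plt.savefig('bench-memcpy.png')  # only the PNG is written, nothing is displayed
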
Siddhesh Poyarekar
b115e819af benchtests: Allow selecting baseline for compare_string.py
This patch allows one to provide, via an optional -base option, the
function name against which all other functions are compared.  This is
useful when pitting one implementation of a string function against
alternatives.  In the absence of this option, comparisons are done
against the first ifunc in the list.

	* benchtests/scripts/compare_strings.py (main): Add an
	optional -base option.
	(process_results): New argument base_func.
2017-08-08 00:55:12 +05:30
Siddhesh Poyarekar
25d5247277 benchtests: New script to parse memcpy results
Read the memcpy results in json and print out the results in tabular
form, in addition to generating a graph of the results to compare all
of the implementations.

The format of the output is extensible enough to allow this kind of
analysis to be done on other string functions as well.

	* benchtests/scripts/benchout_strings.schema.json: New file.
	* benchtests/scripts/compare_strings.py: New file.
2017-06-22 23:44:51 +05:30
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Joseph Myers
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Siddhesh Poyarekar
4916acd87b benchtests: Mark output variables as used
Prevent function calls that don't return anything from being optimized
out by the compiler by marking their input variables as used.

This prevents the sincos function call from being optimized out in the
benchmark.
2015-11-17 16:01:15 +05:30
Siddhesh Poyarekar
0cd2828695 benchtest: script to compare two benchmarks
This script is a sample implementation that uses import_bench to
construct two benchmark objects and compare them.  If detailed timing
information is available (when one does `make DETAILED=1 bench`), it
writes out graphs for all functions it benchmarks and prints
significant differences in timings of the two benchmark runs.  If
detailed timing information is not available, it points out
significant differences in aggregate times.

Call this script as follows:

  compare_bench.py schema_file.json bench1.out bench2.out

Alternatively, if one wants to set a different threshold for warnings
(default is a 10% difference):

  compare_bench.py schema_file.json bench1.out bench2.out 25

The threshold in the example above is 25%.  schema_file.json is the
JSON schema (which is $srcdir/benchtests/scripts/benchout.schema.json
for the benchmark output file) and bench1.out and bench2.out are the
two benchmark output files to compare.

The key functionality here is the compress_timings function which
groups together points that are close together into a single point
that is the mean of all its representative points.  Any point in such
a group is at most 1.5x the smallest point in that group.  The
detailed derivation is a comment in the function.

	* benchtests/scripts/compare_bench.py: New file.
	* benchtests/scripts/import_bench.py (mean): New function.
	(split_list): Likewise.
	(do_for_all_timings): Likewise.
	(compress_timings): Likewise.
2015-06-01 23:14:11 +05:30
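
A hedged sketch of the grouping rule described above (every member of a group stays within 1.5x of the group's smallest point, and each group is replaced by its mean); the real derivation and code are in import_bench.py:

  def compress_timings(points):
      """Group close-by points and replace each group by its mean."""
      compressed, group = [], []
      for p in sorted(points):
          if group and p > 1.5 * group[0]:
              compressed.append(sum(group) / len(group))
              group = []
          group.append(p)
      if group:
          compressed.append(sum(group) / len(group))
      return compressed

  print(compress_timings([10, 11, 12, 30, 31, 100]))  # [11.0, 30.5, 100.0]
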
Siddhesh Poyarekar
0994b9b6f6 New module to import and process benchmark output
This is the beginning of a module to import and process benchmark
outputs.  The module currently supports importing a bench.out file and
validating it against a schema file.  In the future this could grow a set
of routines that benchmark consumers may find useful for building their
own analysis tools.  I have altered validate_bench to use this module
too.

	* benchtests/scripts/import_bench.py: New file.
	* benchtests/scripts/validate_benchout.py: Import import_bench
	instead of jsonschema.
	(validate_bench): Remove function.
	(main): Use import_bench.
2015-06-01 23:13:29 +05:30
Joseph Myers
b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Siddhesh Poyarekar
42b1161e8c Validate bench.out against a JSON schema
This patch adds a JSON schema for the benchmark output file and also
adds a script that validates the generated output against the schema.
2014-06-11 14:16:29 +05:30
Siddhesh Poyarekar
15eaf6ffe3 benchtests: Add new directive for benchmark initialization hook
Add a new 'init' directive that specifies the name of the function to
call to do function-specific initialization.  This is useful for
benchmarks that need to do a one-time initialization before the
functions are executed.
2014-05-26 12:37:29 +05:30
Siddhesh Poyarekar
5673750800 Detailed benchmark outputs for functions
This patch adds an option to get detailed benchmark output for
functions.  Invoking the benchmark with 'make DETAILED=1 bench' causes
each benchmark program to store a mean execution time for each input
it works on.  This is useful to give a more comprehensive picture of
performance of functions compared to just the single mean figure.
2014-03-29 09:40:19 +05:30
Siddhesh Poyarekar
cb5e4aada7 Make bench.out in json format
This patch changes the output format of the main benchmark output file
(bench.out) to an extensible format.  I chose JSON over XML because in
addition to being extensible, it is also not too verbose.
Additionally, it has good support in Python.

The significant change I have made in terms of functionality is to put
the timing information in JSON as an attribute instead of a string.  To
do that, there is a separate program that prints out a JSON snippet
mentioning the type of timing (hp_timing or clock_gettime).  The mean
timing has now changed from iterations per unit to actual timing per
iteration.
2014-03-29 09:37:44 +05:30
Siddhesh Poyarekar
27c673b8de benchtests: Move bench.py to benchtests/scripts/
It makes much more sense to have all benchmarking-related scripts in a
single place away from everything else.
2014-03-24 21:16:36 +05:30