Added README describing the directory.

Josh Haberman 2016-04-27 18:34:33 -07:00
parent 2e83110230
commit 30a2f70eb3
2 changed files with 33 additions and 3 deletions

benchmarks/README.md

@@ -0,0 +1,28 @@
# Protocol Buffers Benchmarks
This directory contains benchmarking schemas and data sets that you
can use to test a variety of performance scenarios against your
protobuf language runtime.
The schema for the datasets is described in `benchmarks.proto`.
Generate the data sets like so:
```
$ make
$ ./generate-datasets
Wrote dataset: dataset.google_message1_proto3.pb
Wrote dataset: dataset.google_message1_proto2.pb
Wrote dataset: dataset.google_message2.pb
$
```
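As a rough illustration of what is inside one of these files, here is a stdlib-only sketch that decodes the dataset's top-level fields by hand. The field numbers for `name` (1) and `message_name` (2) come from the `benchmarks.proto` diff below; treating the payloads as field 3 is an assumption, and a real harness would use protoc-generated classes rather than hand-decoding.

```python
# Hand-decode a BenchmarkDataset-style file using only the stdlib.
# Field numbers: name=1, message_name=2 (from benchmarks.proto);
# payload=3 is assumed for illustration.

def read_varint(buf, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def parse_dataset(buf):
    """Extract name, message_name, and payload bytes from serialized data."""
    name = message_name = None
    payloads = []
    pos = 0
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field, wire_type = key >> 3, key & 7
        if wire_type != 2:  # every field here is length-delimited
            raise ValueError("unexpected wire type %d" % wire_type)
        length, pos = read_varint(buf, pos)
        value = buf[pos:pos + length]
        pos += length
        if field == 1:
            name = value.decode("utf-8")
        elif field == 2:
            message_name = value.decode("utf-8")
        elif field == 3:  # assumed field number for the payloads
            payloads.append(value)
    return name, message_name, payloads

# Hand-encoded example: name="demo", message_name="M", one 3-byte payload.
sample = (b"\x0a\x04demo"            # field 1, length 4
          b"\x12\x01M"               # field 2, length 1
          b"\x1a\x03\x01\x02\x03")   # field 3, length 3
print(parse_dataset(sample))  # → ('demo', 'M', [b'\x01\x02\x03'])
```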
Each data set will be written to its own file. A benchmark harness will
likely want to run several benchmarks against each data set (parse,
serialize, possibly JSON, possibly using different APIs, etc.).
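One way to structure such a run is a small timing loop over a dataset's payloads. This is only a sketch: the `parse` and `serialize` callables stand in for whatever your protobuf runtime provides, and `json` is used below purely so the example runs on its own.

```python
# Sketch of a per-dataset timing loop. `parse`/`serialize` are placeholders
# for the runtime under test; json stands in to keep the example runnable.
import json
import timeit

def benchmark(payloads, parse, serialize, number=1000):
    """Return mean seconds per parse and per serialize across payloads."""
    parsed = [parse(p) for p in payloads]  # warm up and cache parsed forms
    parse_t = timeit.timeit(lambda: [parse(p) for p in payloads],
                            number=number)
    ser_t = timeit.timeit(lambda: [serialize(m) for m in parsed],
                          number=number)
    n = number * len(payloads)
    return parse_t / n, ser_t / n

payloads = ['{"a": 1}', '{"b": [2, 3]}']
parse_s, ser_s = benchmark(payloads, json.loads, json.dumps, number=100)
print("parse: %.9fs  serialize: %.9fs" % (parse_s, ser_s))
```

A real harness would load the payloads from a `dataset.*.pb` file and instantiate the message type named by `message_name`.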
We would like to add more data sets. In general we will favor data sets
that make the overall suite diverse without being too large or having
too many similar tests. Ideally everyone can run through the entire
suite without the test run getting too long.


@@ -38,10 +38,12 @@ message BenchmarkDataset {
   string name = 1;

   // Fully-qualified name of the protobuf message for this dataset.
-  // It will be one of the messages defined benchmark_messages.proto.
+  // It will be one of the messages defined benchmark_messages_proto2.proto
+  // or benchmark_messages_proto3.proto.
+  //
   // Implementations that do not support reflection can implement this with
-  // an explicit "if/else" chain that lists every possible message defined
-  // in this file.
+  // an explicit "if/else" chain that lists every known message defined
+  // in those files.
   string message_name = 2;

   // The payload(s) for this dataset. They should be parsed or serialized