# Protocol Buffers Benchmarks
This directory contains benchmarking schemas and data sets that you
can use to test a variety of performance scenarios against your
protobuf language runtime. If you are looking for performance
numbers for the officially supported languages, see [here](
https://github.com/protocolbuffers/protobuf/blob/master/docs/performance.md).
## Prerequisites
First, follow the instructions in the root directory's README to build
protobuf for your language. Then:
### CPP
You need to install [cmake](https://cmake.org/) before building the benchmark.

We use [google/benchmark](https://github.com/google/benchmark) as the
benchmark tool for testing C++. It is built automatically when you build the
C++ benchmark.

The C++ protobuf performance can be improved by linking with the
[tcmalloc library](https://gperftools.github.io/gperftools/tcmalloc.html). To
use tcmalloc, you need to build
[gperftools](https://github.com/gperftools/gperftools) to generate the
libtcmalloc.so library.
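
A typical source build of gperftools looks roughly like this (a sketch only,
assuming `git`, the GNU autotools, and a C++ compiler are already installed;
exact steps may vary for your platform):

```shell
# Sketch: build gperftools from source to obtain libtcmalloc.so.
git clone https://github.com/gperftools/gperftools.git
cd gperftools
./autogen.sh
./configure
make -j"$(nproc)"
# The library is produced under .libs/libtcmalloc.so; optionally install
# it system-wide with: sudo make install
```
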
### Java
We use Maven to build the Java benchmarks, just as we build the Java
protobuf itself; no other tools need to be installed. We use
[google/caliper](https://github.com/google/caliper) as the benchmark tool,
which Maven includes automatically.

### Python
We use the Python C++ API for testing the generated C++ version of Python
protobuf, which is also a prerequisite for the Python protobuf C++
implementation. You need to install the correct version of the Python C++
extension package before running the C++-backed versions of the Python
benchmarks. For example, on Ubuntu:
```
$ sudo apt-get install python-dev
$ sudo apt-get install python3-dev
```
You also need to make sure `pkg-config` is installed.

### Go
Go protobufs are maintained at [github.com/golang/protobuf](
http://github.com/golang/protobuf). If you have not already done so, you need
to install the Go toolchain and the protoc-gen-go plugin for protoc.

To install protoc-gen-go, run:
```
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ export PATH=$PATH:$(go env GOPATH)/bin
```

The first command installs `protoc-gen-go` into the `bin` directory in your
local `GOPATH`. The second command adds that `bin` directory to your `PATH` so
that `protoc` can locate the plugin later.

### PHP
The PHP benchmark has the same requirements as PHP protobuf itself. The
benchmark will automatically include PHP protobuf's source and build the C
extension if required.

### Node.js
The Node.js benchmark needs [node](https://nodejs.org/en/) (newer than v6) and
the [npm](https://www.npmjs.com/) package manager installed. The benchmark
uses the [benchmark](https://www.npmjs.com/package/benchmark) framework, which
does not need to be installed manually. Another prerequisite is
[protobuf js](https://github.com/protocolbuffers/protobuf/tree/master/js),
which also does not need to be installed manually.

### C#
The C# benchmark code is built as part of the main Google.Protobuf
solution. It requires the .NET Core SDK, and depends on
[BenchmarkDotNet](https://github.com/dotnet/BenchmarkDotNet), which
will be downloaded automatically.
### Big data
There's some optional big testing data which is not included in the directory
initially; run the following command to download the testing data:
```
$ ./download_data.sh
```
After doing this, the big data file will be automatically generated in the
benchmark directory.
## Run instructions
To run all the benchmark datasets:

### Java:
```
$ make java
```
### CPP:
```
$ make cpp
```
For linking with tcmalloc:
```
$ env LD_PRELOAD={directory to libtcmalloc.so} make cpp
```
### Python:
We have three versions of the Python protobuf implementation: pure Python,
C++ reflection, and C++ generated code. To run each version's benchmark:
#### Pure Python:
```
$ make python-pure-python
```
#### CPP reflection:
```
$ make python-cpp-reflection
```
#### CPP generated code:
```
$ make python-cpp-generated-code
```
### Go
```
$ make go
```
### PHP
We have two versions of the PHP protobuf implementation: pure PHP, and PHP
with the C extension. To run each version's benchmark:
#### Pure PHP
```
$ make php
```
#### PHP with C extension
```
$ make php_c
```
### Node.js
```
$ make js
```
To run a specific dataset or run with specific options:
### Java:
```
$ make java-benchmark
$ ./java-benchmark $(specific generated dataset file name) [$(caliper options)]
```
### CPP:
```
$ make cpp-benchmark
$ ./cpp-benchmark $(specific generated dataset file name) [$(benchmark options)]
```
### Python:
For the Python benchmarks we have `--json` for outputting the result in JSON.
#### Pure Python:
```
$ make python-pure-python-benchmark
$ ./python-pure-python-benchmark [--json] $(specific generated dataset file name)
```
#### CPP reflection:
```
$ make python-cpp-reflection-benchmark
$ ./python-cpp-reflection-benchmark [--json] $(specific generated dataset file name)
```
#### CPP generated code:
```
$ make python-cpp-generated-code-benchmark
$ ./python-cpp-generated-code-benchmark [--json] $(specific generated dataset file name)
```
### Go:
```
$ make go-benchmark
$ ./go-benchmark $(specific generated dataset file name) [go testing options]
```
### PHP
#### Pure PHP
```
$ make php-benchmark
$ ./php-benchmark $(specific generated dataset file name)
```
#### PHP with C extension
```
$ make php-c-benchmark
$ ./php-c-benchmark $(specific generated dataset file name)
```
### Node.js
```
$ make js-benchmark
$ ./js-benchmark $(specific generated dataset file name)
```
### C#
From `csharp/src/Google.Protobuf.Benchmarks`, run:
```
$ dotnet run -c Release
```
We intend to add support for this within the makefile in due course.
## Benchmark datasets
Each data set is in the format of `benchmarks.proto`:

1. `name` is the benchmark dataset's name.
2. `message_name` is the full name of the benchmark's message type (including
   package and message name).
3. `payload` is the list of raw data.

The schema for the datasets is described in `benchmarks.proto`.
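
As a rough illustration of that layout, here is a plain Python dataclass used
as a stand-in for the generated dataset message (field names follow the list
above; the class name and values are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative stand-in for the dataset message described above; the real
# message type is generated from benchmarks.proto.
@dataclass
class BenchmarkDataset:
    name: str                 # the benchmark dataset's name
    message_name: str         # full message type name, including the package
    payload: List[bytes] = field(default_factory=list)  # list of raw data

dataset = BenchmarkDataset(
    name="example_dataset",                    # hypothetical values
    message_name="benchmarks.ExampleMessage",
    payload=[b"\x08\x01", b"\x08\x02"],
)
print(len(dataset.payload))  # number of raw payloads in the dataset
```
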
Benchmarks will likely want to run several tests against each data set (parse,
serialize, possibly JSON, possibly using different APIs, etc.).
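
The per-dataset measurement loop can be sketched roughly like this (pure
Python; `parse` and `serialize` here are hypothetical stand-ins for a
generated message's parse/serialize calls, not the real API):

```python
import time

# Hypothetical stand-ins for a generated message's API.
def parse(data: bytes) -> bytes:
    return data  # a real benchmark would build a message object here

def serialize(msg: bytes) -> bytes:
    return msg   # a real benchmark would re-encode the message here

payloads = [b"\x08\x01", b"\x08\x02"]  # raw payloads from a dataset

start = time.perf_counter()
for raw in payloads:
    round_tripped = serialize(parse(raw))
elapsed = time.perf_counter() - start

print(round_tripped == payloads[-1])  # the round trip preserves the bytes
```
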
We would like to add more data sets. In general we will favor data sets
that make the overall suite diverse without being too large or having
too many similar tests. Ideally everyone can run through the entire
suite without the test run getting too long.