Initial checkin.

temporal 2008-07-10 02:12:20 +00:00
commit 40ee551715
269 changed files with 96422 additions and 0 deletions

3
CHANGES.txt Normal file

@ -0,0 +1,3 @@
2008-07-07 version 2.0.0:
* First public release.

35
CONTRIBUTORS.txt Normal file

@ -0,0 +1,35 @@
This file contains a list of people who have made large contributions
to the public version of Protocol Buffers.
Original Protocol Buffers design and implementation:
Sanjay Ghemawat <sanjay@google.com>
Jeff Dean <jeff@google.com>
Daniel Dulitz <daniel@google.com>
Craig Silverstein
Paul Haahr <haahr@google.com>
Corey Anderson <corin@google.com>
(and many others)
Proto2 C++ and Java primary author:
Kenton Varda <kenton@google.com>
Proto2 Python primary authors:
Will Robinson <robinson@google.com>
Petar Petrov <petar@google.com>
Large code contributions:
Joseph Schorr <jschorr@google.com>
Wenbo Zhu <wenboz@google.com>
Large quantity of code reviews:
Scott Bruce <sbruce@google.com>
Frank Yellin
Neal Norwitz <nnorwitz@google.com>
Jeffrey Yasskin <jyasskin@google.com>
Ambrose Feinstein <ambrose@google.com>
Documentation:
Lisa Carey <lcarey@google.com>
Maven packaging:
Gregory Kick <gak@google.com>

202
COPYING.txt Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

237
INSTALL.txt Normal file

@ -0,0 +1,237 @@
This file contains detailed but generic information on building and
installing the C++ part of this project. For shorter instructions,
as well as instructions for compiling and installing the Java or
Python parts, see README.
======================================================================
Copyright 1994, 1995, 1996, 1999, 2000, 2001, 2002 Free Software
Foundation, Inc.
This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.
Basic Installation
==================
These are generic installation instructions.
The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions. Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').
It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring. (Caching is
disabled by default to prevent problems with accidental use of stale
cache files.)
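For example, to turn the cache on for the current and subsequent runs:
$ ./configure -C
Re-running `./configure -C' later will then reuse the results saved in
`config.cache'.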
If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release. If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.
The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'. You only need
`configure.ac' if you want to change it or regenerate `configure' using
a newer version of `autoconf'.
The simplest way to compile this package is:
1. `cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system. If you're
using `csh' on an old version of System V, you might need to type
`sh ./configure' instead to prevent `csh' from trying to execute
`configure' itself.
Running `configure' takes a while. While running, it prints some
messages telling which features it is checking for.
2. Type `make' to compile the package.
3. Optionally, type `make check' to run any self-tests that come with
the package.
4. Type `make install' to install the programs and any data files and
documentation.
5. You can remove the program binaries and object files from the
source code directory by typing `make clean'. To also remove the
files that `configure' created (so you can compile the package for
a different kind of computer), type `make distclean'. There is
also a `make maintainer-clean' target, but that is intended mainly
for the package's developers. If you use it, you may have to get
all sorts of other programs in order to regenerate files that came
with the distribution.
Compilers and Options
=====================
Some systems require unusual options for compilation or linking that
the `configure' script does not know about. Run `./configure --help'
for details on some of the pertinent environment variables.
You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here
is an example:
./configure CC=c89 CFLAGS=-O2 LIBS=-lposix
*Note Defining Variables::, for more details.
Compiling For Multiple Architectures
====================================
You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you must use a version of `make' that
supports the `VPATH' variable, such as GNU `make'. `cd' to the
directory where you want the object files and executables to go and run
the `configure' script. `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.
If you have to use a `make' that does not support the `VPATH'
variable, you have to compile the package for one architecture at a
time in the source code directory. After you have installed the
package for one architecture, use `make distclean' before reconfiguring
for another architecture.
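As a concrete sketch, an out-of-tree build with GNU make looks like the
following (the object directory name is arbitrary):
$ mkdir obj
$ cd obj
$ ../configure
$ make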
Installation Names
==================
By default, `make install' will install the package's files in
`/usr/local/bin', `/usr/local/man', etc. You can specify an
installation prefix other than `/usr/local' by giving `configure' the
option `--prefix=PATH'.
You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
give `configure' the option `--exec-prefix=PATH', the package will use
PATH as the prefix for installing programs and libraries.
Documentation and other data files will still use the regular prefix.
In addition, if you use an unusual directory layout you can give
options like `--bindir=PATH' to specify different values for particular
kinds of files. Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.
If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
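For instance, to keep an installation entirely under your home directory
(the prefix shown is only an example):
$ ./configure --prefix=$HOME/protobuf
$ make
$ make install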
Optional Features
=================
Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System). The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.
For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.
Specifying the System Type
==========================
There may be some features `configure' cannot figure out
automatically, but needs to determine from the type of machine the package
will run on. Usually, assuming the package is built to be run on the
_same_ architectures, `configure' can figure that out, but if it prints
a message saying it cannot guess the machine type, give it the
`--build=TYPE' option. TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:
CPU-COMPANY-SYSTEM
where SYSTEM can have one of these forms:
OS KERNEL-OS
See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.
If you are _building_ compiler tools for cross-compiling, you should
use the `--target=TYPE' option to select the type of system they will
produce code for.
If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.
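A hypothetical cross-build, using purely illustrative system triples, might
therefore be configured as:
$ ./configure --build=i686-pc-linux-gnu --host=arm-unknown-linux-gnu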
Sharing Defaults
================
If you want to set default values for `configure' scripts to share,
you can create a site shell script called `config.site' that gives
default values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists. Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.
Defining Variables
==================
Variables not defined in a site shell script can be set in the
environment passed to `configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'. For example:
./configure CC=/usr/local2/bin/gcc
will cause the specified gcc to be used as the C compiler (unless it is
overridden in the site shell script).
`configure' Invocation
======================
`configure' recognizes the following options to control how it
operates.
`--help'
`-h'
Print a summary of the options to `configure', and exit.
`--version'
`-V'
Print the version of Autoconf used to generate the `configure'
script, and exit.
`--cache-file=FILE'
Enable the cache: use and save the results of the tests in FILE,
traditionally `config.cache'. FILE defaults to `/dev/null' to
disable caching.
`--config-cache'
`-C'
Alias for `--cache-file=config.cache'.
`--quiet'
`--silent'
`-q'
Do not print messages saying which checks are being made. To
suppress all normal output, redirect it to `/dev/null' (any error
messages will still be shown).
`--srcdir=DIR'
Look for the package's source code in directory DIR. Usually
`configure' can determine that directory automatically.
`configure' also accepts some other, not widely useful, options. Run
`configure --help' for more details.

117
Makefile.am Normal file

@ -0,0 +1,117 @@
## Process this file with automake to produce Makefile.in
ACLOCAL_AMFLAGS = -I m4
SUBDIRS = src
EXTRA_DIST = \
autogen.sh \
generate_descriptor_proto.sh \
README.txt \
INSTALL.txt \
COPYING.txt \
CONTRIBUTORS.txt \
CHANGES.txt \
editors/README.txt \
editors/proto.vim \
vsprojects/config.h \
vsprojects/extract_includes.bat \
vsprojects/libprotobuf.vcproj \
vsprojects/libprotoc.vcproj \
vsprojects/protobuf.sln \
vsprojects/protoc.vcproj \
vsprojects/readme.txt \
vsprojects/tests.vcproj \
vsprojects/convert2008to2005.sh \
examples/README.txt \
examples/Makefile \
examples/addressbook.proto \
examples/add_person.cc \
examples/list_people.cc \
examples/AddPerson.java \
examples/ListPeople.java \
examples/add_person.py \
examples/list_people.py \
java/src/main/java/com/google/protobuf/AbstractMessage.java \
java/src/main/java/com/google/protobuf/ByteString.java \
java/src/main/java/com/google/protobuf/CodedInputStream.java \
java/src/main/java/com/google/protobuf/CodedOutputStream.java \
java/src/main/java/com/google/protobuf/Descriptors.java \
java/src/main/java/com/google/protobuf/DynamicMessage.java \
java/src/main/java/com/google/protobuf/ExtensionRegistry.java \
java/src/main/java/com/google/protobuf/FieldSet.java \
java/src/main/java/com/google/protobuf/GeneratedMessage.java \
java/src/main/java/com/google/protobuf/InvalidProtocolBufferException.java \
java/src/main/java/com/google/protobuf/Message.java \
java/src/main/java/com/google/protobuf/RpcCallback.java \
java/src/main/java/com/google/protobuf/RpcChannel.java \
java/src/main/java/com/google/protobuf/RpcController.java \
java/src/main/java/com/google/protobuf/RpcUtil.java \
java/src/main/java/com/google/protobuf/Service.java \
java/src/main/java/com/google/protobuf/TextFormat.java \
java/src/main/java/com/google/protobuf/UninitializedMessageException.java \
java/src/main/java/com/google/protobuf/UnknownFieldSet.java \
java/src/main/java/com/google/protobuf/WireFormat.java \
java/src/test/java/com/google/protobuf/AbstractMessageTest.java \
java/src/test/java/com/google/protobuf/CodedInputStreamTest.java \
java/src/test/java/com/google/protobuf/CodedOutputStreamTest.java \
java/src/test/java/com/google/protobuf/DescriptorsTest.java \
java/src/test/java/com/google/protobuf/DynamicMessageTest.java \
java/src/test/java/com/google/protobuf/GeneratedMessageTest.java \
java/src/test/java/com/google/protobuf/MessageTest.java \
java/src/test/java/com/google/protobuf/ServiceTest.java \
java/src/test/java/com/google/protobuf/TestUtil.java \
java/src/test/java/com/google/protobuf/TextFormatTest.java \
java/src/test/java/com/google/protobuf/UnknownFieldSetTest.java \
java/src/test/java/com/google/protobuf/WireFormatTest.java \
java/src/test/java/com/google/protobuf/multiple_files_test.proto \
java/pom.xml \
java/README.txt \
python/google/protobuf/internal/generator_test.py \
python/google/protobuf/internal/decoder.py \
python/google/protobuf/internal/decoder_test.py \
python/google/protobuf/internal/descriptor_test.py \
python/google/protobuf/internal/encoder.py \
python/google/protobuf/internal/encoder_test.py \
python/google/protobuf/internal/input_stream.py \
python/google/protobuf/internal/input_stream_test.py \
python/google/protobuf/internal/message_listener.py \
python/google/protobuf/internal/more_extensions.proto \
python/google/protobuf/internal/more_messages.proto \
python/google/protobuf/internal/output_stream.py \
python/google/protobuf/internal/output_stream_test.py \
python/google/protobuf/internal/reflection_test.py \
python/google/protobuf/internal/service_reflection_test.py \
python/google/protobuf/internal/test_util.py \
python/google/protobuf/internal/text_format_test.py \
python/google/protobuf/internal/wire_format.py \
python/google/protobuf/internal/wire_format_test.py \
python/google/protobuf/internal/__init__.py \
python/google/protobuf/descriptor.py \
python/google/protobuf/message.py \
python/google/protobuf/reflection.py \
python/google/protobuf/service.py \
python/google/protobuf/service_reflection.py \
python/google/protobuf/text_format.py \
python/google/protobuf/__init__.py \
python/google/__init__.py \
python/ez_setup.py \
python/setup.py \
python/mox.py \
python/stubout.py \
python/README.txt
# Deletes all the files generated by autogen.sh.
MAINTAINERCLEANFILES = \
aclocal.m4 \
config.guess \
config.sub \
configure \
depcomp \
install-sh \
ltmain.sh \
Makefile.in \
missing \
mkinstalldirs \
config.h.in \
stamp.h.in

57
README.txt Normal file

@ -0,0 +1,57 @@
Protocol Buffers - Google's data interchange format
Copyright 2008 Google Inc.
http://code.google.com/apis/protocolbuffers/
BETA WARNING
============
This package is a beta. This means that the API may change in an
incompatible way in the future. It's unlikely that any big changes
will be made, but we can make no promises. Expect a non-beta release
sometime in August 2008.
C++ Installation
================
To build and install the C++ Protocol Buffer runtime and the Protocol
Buffer compiler (protoc) execute the following:
$ ./configure
$ make
$ make check
$ make install
If "make check" fails, you can still install, but it is likely that
some features of this library will not work correctly on your system.
Proceed at your own risk.
"make install" may require superuser privileges.
For advanced usage information on configure and make, see INSTALL.
** Note for Solaris users **
Solaris 10 x86 has a bug that will make linking fail, complaining
about libstdc++.la being invalid. We have included a work-around
in this package. To use the work-around, run configure as follows:
./configure LDFLAGS=-L$PWD/src/solaris
See src/solaris/libstdc++.la for more info on this bug.
Java and Python Installation
============================
The Java and Python runtime libraries for Protocol Buffers are located
in the java and python directories. See the README file in each
directory for more information on how to compile and install them.
Note that both of them require you to first install the Protocol
Buffer compiler (protoc), which is part of the C++ package.
Usage
=====
The complete documentation for Protocol Buffers is available via the
web at:
http://code.google.com/apis/protocolbuffers/

27
autogen.sh Executable file

@ -0,0 +1,27 @@
#!/bin/sh
# Run this script to generate the configure script and other files that will
# be included in the distribution. These files are not checked in because they
# are automatically generated.
# Check that we're being run from the right directory.
if test ! -e src/google/protobuf/stubs/common.h; then
cat >&2 << __EOF__
Could not find source code. Make sure you are running this script from the
root of the distribution tree.
__EOF__
exit 1
fi
set -ex
rm -rf autom4te.cache
aclocal-1.9 --force -I m4
libtoolize -c -f
autoheader -f -W all
automake-1.9 --foreign -a -c -f -W all
autoconf -f -W all,no-obsolete
rm -rf autom4te.cache config.h.in~
exit 0

34
configure.ac Normal file

@ -0,0 +1,34 @@
## Process this file with autoconf to produce configure.
## In general, the safest way to proceed is to run ./autogen.sh
AC_PREREQ(2.59)
# Note: If you change the version, you must also update it in:
# * java/pom.xml
# * python/setup.py
# * src/google/protobuf/stubs/common.h
AC_INIT(protobuf, 2.0.1-SNAPSHOT, protobuf@googlegroups.com)
AC_CONFIG_SRCDIR(src/google/protobuf/message.cc)
AM_CONFIG_HEADER(config.h)
AM_INIT_AUTOMAKE
# Checks for programs.
AC_PROG_CC
AC_PROG_CXX
AC_PROG_LIBTOOL
AM_CONDITIONAL(GCC, test "$GCC" = yes) # let the Makefile know if we're gcc
# Checks for header files.
AC_HEADER_STDC
AC_CHECK_HEADERS([fcntl.h inttypes.h limits.h stdlib.h unistd.h])
# Checks for library functions.
AC_FUNC_MEMCMP
AC_FUNC_STRTOD
AC_CHECK_FUNCS([ftruncate memset mkdir strchr strerror strtol])
ACX_PTHREAD
AC_CXX_STL_HASH
AC_OUTPUT( Makefile src/Makefile )

5
editors/README.txt Normal file

@ -0,0 +1,5 @@
This directory contains syntax highlighting and configuration files for editors
to properly display Protocol Buffer files.
See each file's header comment for directions on how to use it with the
appropriate editor.

83
editors/proto.vim Normal file

@ -0,0 +1,83 @@
" Protocol Buffers - Google's data interchange format
" Copyright 2008 Google Inc.
"
" Licensed under the Apache License, Version 2.0 (the "License");
" you may not use this file except in compliance with the License.
" You may obtain a copy of the License at
"
" http:"www.apache.org/licenses/LICENSE-2.0
"
" Unless required by applicable law or agreed to in writing, software
" distributed under the License is distributed on an "AS IS" BASIS,
" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
" See the License for the specific language governing permissions and
" limitations under the License.
" This is the Vim syntax file for Google Protocol Buffers.
"
" Usage:
"
" 1. cp proto.vim ~/.vim/syntax/
" 2. Add the following to ~/.vimrc:
"
" augroup filetype
" au! BufRead,BufNewFile *.proto setfiletype proto
" augroup end
if version < 600
syntax clear
elseif exists("b:current_syntax")
finish
endif
syn case match
syn keyword pbSyntax syntax import option
syn keyword pbStructure package message group
syn keyword pbRepeat optional required repeated
syn keyword pbDefault default
syn keyword pbExtend extend extensions to max
syn keyword pbRPC service rpc returns
syn keyword pbType int32 int64 uint32 uint64 sint32 sint64
syn keyword pbType fixed32 fixed64 sfixed32 sfixed64
syn keyword pbType float double bool string bytes
syn keyword pbTypedef enum
syn keyword pbBool true false
syn match pbInt /-\?\<\d\+\>/
syn match pbInt /\<0[xX]\x\+\>/
syn match pbFloat /\<-\?\d*\(\.\d*\)\?/
" TODO: .proto also supports C-style block comments;
" see /usr/share/vim/vim70/syntax/c.vim for how it's done.
syn match pbComment /\/\/.*$/
syn region pbString start=/"/ skip=/\\"/ end=/"/
syn region pbString start=/'/ skip=/\\'/ end=/'/
if version >= 508 || !exists("did_proto_syn_inits")
if version < 508
let did_proto_syn_inits = 1
command -nargs=+ HiLink hi link <args>
else
command -nargs=+ HiLink hi def link <args>
endif
HiLink pbSyntax Include
HiLink pbStructure Structure
HiLink pbRepeat Repeat
HiLink pbDefault Keyword
HiLink pbExtend Keyword
HiLink pbRPC Keyword
HiLink pbType Type
HiLink pbTypedef Typedef
HiLink pbBool Boolean
HiLink pbInt Number
HiLink pbFloat Float
HiLink pbComment Comment
HiLink pbString String
delcommand HiLink
endif
let b:current_syntax = "proto"

89
examples/AddPerson.java Normal file

@ -0,0 +1,89 @@
// See README.txt for information and build instructions.
import com.example.tutorial.AddressBookProtos.AddressBook;
import com.example.tutorial.AddressBookProtos.Person;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.InputStreamReader;
import java.io.IOException;
import java.io.PrintStream;
class AddPerson {
// This function fills in a Person message based on user input.
static Person PromptForAddress(BufferedReader stdin,
PrintStream stdout) throws IOException {
Person.Builder person = Person.newBuilder();
stdout.print("Enter person ID: ");
person.setId(Integer.valueOf(stdin.readLine()));
stdout.print("Enter name: ");
person.setName(stdin.readLine());
stdout.print("Enter email address (blank for none): ");
String email = stdin.readLine();
if (email.length() > 0) {
person.setEmail(email);
}
while (true) {
stdout.print("Enter a phone number (or leave blank to finish): ");
String number = stdin.readLine();
if (number.length() == 0) {
break;
}
Person.PhoneNumber.Builder phoneNumber =
Person.PhoneNumber.newBuilder().setNumber(number);
stdout.print("Is this a mobile, home, or work phone? ");
String type = stdin.readLine();
if (type.equals("mobile")) {
phoneNumber.setType(Person.PhoneType.MOBILE);
} else if (type.equals("home")) {
phoneNumber.setType(Person.PhoneType.HOME);
} else if (type.equals("work")) {
phoneNumber.setType(Person.PhoneType.WORK);
} else {
stdout.println("Unknown phone type. Using default.");
}
person.addPhone(phoneNumber);
}
return person.build();
}
// Main function: Reads the entire address book from a file,
// adds one person based on user input, then writes it back out to the same
// file.
public static void main(String[] args) throws Exception {
if (args.length != 1) {
System.err.println("Usage: AddPerson ADDRESS_BOOK_FILE");
System.exit(-1);
}
AddressBook.Builder addressBook = AddressBook.newBuilder();
// Read the existing address book.
try {
FileInputStream input = new FileInputStream(args[0]);
addressBook.mergeFrom(input);
input.close();
} catch (FileNotFoundException e) {
System.out.println(args[0] + ": File not found. Creating a new file.");
}
// Add an address.
addressBook.addPerson(
PromptForAddress(new BufferedReader(new InputStreamReader(System.in)),
System.out));
// Write the new address book back to disk.
FileOutputStream output = new FileOutputStream(args[0]);
addressBook.build().writeTo(output);
output.close();
}
}

50
examples/ListPeople.java Normal file

@ -0,0 +1,50 @@
// See README.txt for information and build instructions.
import com.example.tutorial.AddressBookProtos.AddressBook;
import com.example.tutorial.AddressBookProtos.Person;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.PrintStream;
class ListPeople {
// Iterates through all people in the AddressBook and prints info about them.
static void Print(AddressBook addressBook) {
for (Person person: addressBook.getPersonList()) {
System.out.println("Person ID: " + person.getId());
System.out.println(" Name: " + person.getName());
if (person.hasEmail()) {
System.out.println(" E-mail address: " + person.getEmail());
}
for (Person.PhoneNumber phoneNumber : person.getPhoneList()) {
switch (phoneNumber.getType()) {
case MOBILE:
System.out.print(" Mobile phone #: ");
break;
case HOME:
System.out.print(" Home phone #: ");
break;
case WORK:
System.out.print(" Work phone #: ");
break;
}
System.out.println(phoneNumber.getNumber());
}
}
}
// Main function: Reads the entire address book from a file and prints all
// the information inside.
public static void main(String[] args) throws Exception {
if (args.length != 1) {
System.err.println("Usage: ListPeople ADDRESS_BOOK_FILE");
System.exit(-1);
}
// Read the existing address book.
AddressBook addressBook =
AddressBook.parseFrom(new FileInputStream(args[0]));
Print(addressBook);
}
}

56
examples/Makefile Normal file

@ -0,0 +1,56 @@
# See README.txt.
.PHONY: all cpp java python clean
all: cpp java python
cpp: add_person_cpp list_people_cpp
java: add_person_java list_people_java
python: add_person_python list_people_python
clean:
rm -f add_person_cpp list_people_cpp add_person_java list_people_java add_person_python list_people_python
rm -f javac_middleman AddPerson*.class ListPeople*.class com/example/tutorial/*.class
rm -f protoc_middleman addressbook.pb.cc addressbook.pb.h addressbook_pb2.py com/example/tutorial/AddressBookProtos.java
rm -f *.pyc
rmdir com/example/tutorial 2>/dev/null || true
rmdir com/example 2>/dev/null || true
rmdir com 2>/dev/null || true
protoc_middleman: addressbook.proto
protoc --cpp_out=. --java_out=. --python_out=. addressbook.proto
@touch protoc_middleman
add_person_cpp: add_person.cc protoc_middleman
c++ add_person.cc addressbook.pb.cc -lprotobuf -o add_person_cpp
list_people_cpp: list_people.cc protoc_middleman
c++ list_people.cc addressbook.pb.cc -lprotobuf -o list_people_cpp
javac_middleman: AddPerson.java ListPeople.java protoc_middleman
javac AddPerson.java ListPeople.java com/example/tutorial/AddressBookProtos.java
@touch javac_middleman
add_person_java: javac_middleman
@echo "Writing shortcut script add_person_java..."
@echo '#! /bin/sh' > add_person_java
@echo 'java -classpath .:$$CLASSPATH AddPerson "$$@"' >> add_person_java
@chmod +x add_person_java
list_people_java: javac_middleman
@echo "Writing shortcut script list_people_java..."
@echo '#! /bin/sh' > list_people_java
@echo 'java -classpath .:$$CLASSPATH ListPeople "$$@"' >> list_people_java
@chmod +x list_people_java
add_person_python: add_person.py protoc_middleman
@echo "Writing shortcut script add_person_python..."
@echo '#! /bin/sh' > add_person_python
@echo './add_person.py "$$@"' >> add_person_python
@chmod +x add_person_python
list_people_python: list_people.py protoc_middleman
@echo "Writing shortcut script list_people_python..."
@echo '#! /bin/sh' > list_people_python
@echo './list_people.py "$$@"' >> list_people_python
@chmod +x list_people_python

25
examples/README.txt Normal file

@ -0,0 +1,25 @@
This directory contains example code that uses Protocol Buffers to manage an
address book. Two programs are provided, each with three different
implementations, one written in each of C++, Java, and Python. The add_person
example adds a new person to an address book, prompting the user to input
the person's information. The list_people example lists people already in the
address book. The examples use the exact same format in all three languages,
so you can, for example, use add_person_java to create an address book and then
use list_people_python to read it.
You must install the protobuf package before you can build these.
To build all the examples (on a unix-like system), simply run "make". This
creates the following executable files in the current directory:
add_person_cpp list_people_cpp
add_person_java list_people_java
add_person_python list_people_python
If you only want to compile examples in one language, use "make cpp",
"make java", or "make python".
All of these programs simply take an address book file as their parameter.
The add_person programs will create the file if it doesn't already exist.
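For example, a session with the C++ versions might look like this (the
address book file name is arbitrary):
$ make cpp
$ ./add_person_cpp addressbook.data
$ ./list_people_cpp addressbook.data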
These examples are part of the Protocol Buffers tutorial, located at:
http://code.google.com/apis/protocolbuffers/docs/tutorials.html

92
examples/add_person.cc Normal file

@ -0,0 +1,92 @@
// See README.txt for information and build instructions.
#include <iostream>
#include <fstream>
#include <string>
#include "addressbook.pb.h"
using namespace std;
// This function fills in a Person message based on user input.
void PromptForAddress(tutorial::Person* person) {
cout << "Enter person ID number: ";
int id;
cin >> id;
person->set_id(id);
cin.ignore(256, '\n');
cout << "Enter name: ";
getline(cin, *person->mutable_name());
cout << "Enter email address (blank for none): ";
string email;
getline(cin, email);
if (!email.empty()) {
person->set_email(email);
}
while (true) {
cout << "Enter a phone number (or leave blank to finish): ";
string number;
getline(cin, number);
if (number.empty()) {
break;
}
tutorial::Person::PhoneNumber* phone_number = person->add_phone();
phone_number->set_number(number);
cout << "Is this a mobile, home, or work phone? ";
string type;
getline(cin, type);
if (type == "mobile") {
phone_number->set_type(tutorial::Person::MOBILE);
} else if (type == "home") {
phone_number->set_type(tutorial::Person::HOME);
} else if (type == "work") {
phone_number->set_type(tutorial::Person::WORK);
} else {
cout << "Unknown phone type. Using default." << endl;
}
}
}
// Main function: Reads the entire address book from a file,
// adds one person based on user input, then writes it back out to the same
// file.
int main(int argc, char* argv[]) {
// Verify that the version of the library that we linked against is
// compatible with the version of the headers we compiled against.
GOOGLE_PROTOBUF_VERIFY_VERSION;
if (argc != 2) {
cerr << "Usage: " << argv[0] << " ADDRESS_BOOK_FILE" << endl;
return -1;
}
tutorial::AddressBook address_book;
{
// Read the existing address book.
fstream input(argv[1], ios::in | ios::binary);
if (!input) {
cout << argv[1] << ": File not found. Creating a new file." << endl;
} else if (!address_book.ParseFromIstream(&input)) {
cerr << "Failed to parse address book." << endl;
return -1;
}
}
// Add an address.
PromptForAddress(address_book.add_person());
{
// Write the new address book back to disk.
fstream output(argv[1], ios::out | ios::trunc | ios::binary);
if (!address_book.SerializeToOstream(&output)) {
cerr << "Failed to write address book." << endl;
return -1;
}
}
return 0;
}

58
examples/add_person.py Executable file

@ -0,0 +1,58 @@
#! /usr/bin/python
# See README.txt for information and build instructions.
import addressbook_pb2
import sys
# This function fills in a Person message based on user input.
def PromptForAddress(person):
person.id = int(raw_input("Enter person ID number: "))
person.name = raw_input("Enter name: ")
email = raw_input("Enter email address (blank for none): ")
if email != "":
person.email = email
while True:
number = raw_input("Enter a phone number (or leave blank to finish): ")
if number == "":
break
phone_number = person.phone.add()
phone_number.number = number
type = raw_input("Is this a mobile, home, or work phone? ")
if type == "mobile":
phone_number.type = addressbook_pb2.Person.MOBILE
elif type == "home":
phone_number.type = addressbook_pb2.Person.HOME
elif type == "work":
phone_number.type = addressbook_pb2.Person.WORK
else:
print "Unknown phone type; leaving as default value."
# Main procedure: Reads the entire address book from a file,
# adds one person based on user input, then writes it back out to the same
# file.
if len(sys.argv) != 2:
print "Usage:", sys.argv[0], "ADDRESS_BOOK_FILE"
sys.exit(-1)
address_book = addressbook_pb2.AddressBook()
# Read the existing address book.
try:
f = open(sys.argv[1], "rb")
address_book.ParseFromString(f.read())
f.close()
except IOError:
print sys.argv[1] + ": File not found. Creating a new file."
# Add an address.
PromptForAddress(address_book.person.add())
# Write the new address book back to disk.
f = open(sys.argv[1], "wb")
f.write(address_book.SerializeToString())
f.close()

30
examples/addressbook.proto Normal file

@ -0,0 +1,30 @@
// See README.txt for information and build instructions.
package tutorial;
option java_package = "com.example.tutorial";
option java_outer_classname = "AddressBookProtos";
message Person {
required string name = 1;
required int32 id = 2; // Unique ID number for this person.
optional string email = 3;
enum PhoneType {
MOBILE = 0;
HOME = 1;
WORK = 2;
}
message PhoneNumber {
required string number = 1;
optional PhoneType type = 2 [default = HOME];
}
repeated PhoneNumber phone = 4;
}
// Our address book file is just one of these.
message AddressBook {
repeated Person person = 1;
}
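// For reference, the examples' Makefile generates the C++, Java, and Python
// code from this file with a single protoc invocation, which you can also run
// by hand:
//   protoc --cpp_out=. --java_out=. --python_out=. addressbook.proto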

65
examples/list_people.cc Normal file

@ -0,0 +1,65 @@
// See README.txt for information and build instructions.
#include <iostream>
#include <fstream>
#include <string>
#include "addressbook.pb.h"
using namespace std;
// Iterates through all people in the AddressBook and prints info about them.
void ListPeople(const tutorial::AddressBook& address_book) {
for (int i = 0; i < address_book.person_size(); i++) {
const tutorial::Person& person = address_book.person(i);
cout << "Person ID: " << person.id() << endl;
cout << " Name: " << person.name() << endl;
if (person.has_email()) {
cout << " E-mail address: " << person.email() << endl;
}
for (int j = 0; j < person.phone_size(); j++) {
const tutorial::Person::PhoneNumber& phone_number = person.phone(j);
switch (phone_number.type()) {
case tutorial::Person::MOBILE:
cout << " Mobile phone #: ";
break;
case tutorial::Person::HOME:
cout << " Home phone #: ";
break;
case tutorial::Person::WORK:
cout << " Work phone #: ";
break;
}
cout << phone_number.number() << endl;
}
}
}
// Main function: Reads the entire address book from a file and prints all
// the information inside.
int main(int argc, char* argv[]) {
// Verify that the version of the library that we linked against is
// compatible with the version of the headers we compiled against.
GOOGLE_PROTOBUF_VERIFY_VERSION;
if (argc != 2) {
cerr << "Usage: " << argv[0] << " ADDRESS_BOOK_FILE" << endl;
return -1;
}
tutorial::AddressBook address_book;
{
// Read the existing address book.
fstream input(argv[1], ios::in | ios::binary);
if (!address_book.ParseFromIstream(&input)) {
cerr << "Failed to parse address book." << endl;
return -1;
}
}
ListPeople(address_book);
return 0;
}

38
examples/list_people.py Executable file

@ -0,0 +1,38 @@
#! /usr/bin/python
# See README.txt for information and build instructions.
import addressbook_pb2
import sys
# Iterates through all people in the AddressBook and prints info about them.
def ListPeople(address_book):
for person in address_book.person:
print "Person ID:", person.id
print " Name:", person.name
if person.HasField('email'):
print " E-mail address:", person.email
for phone_number in person.phone:
if phone_number.type == addressbook_pb2.Person.MOBILE:
print " Mobile phone #: ",
elif phone_number.type == addressbook_pb2.Person.HOME:
print " Home phone #: ",
elif phone_number.type == addressbook_pb2.Person.WORK:
print " Work phone #: ",
print phone_number.number
# Main procedure: Reads the entire address book from a file and prints all
# the information inside.
if len(sys.argv) != 2:
print "Usage:", sys.argv[0], "ADDRESS_BOOK_FILE"
sys.exit(-1)
address_book = addressbook_pb2.AddressBook()
# Read the existing address book.
f = open(sys.argv[1], "rb")
address_book.ParseFromString(f.read())
f.close()
ListPeople(address_book)

35
generate_descriptor_proto.sh Executable file

@ -0,0 +1,35 @@
#!/bin/sh
# Run this script to regenerate descriptor.pb.{h,cc} after the protocol
# compiler changes. Since these files are compiled into the protocol compiler
# itself, they cannot be generated automatically by a make rule. "make check"
# will fail if these files do not match what the protocol compiler would
# generate.
# Note that this will always need to be run once after running
# extract_from_google3.sh. That script initially copies descriptor.pb.{h,cc}
# over from the google3 code and fixes it up to compile outside of google3, but
# it cannot fix the encoded descriptor embedded in descriptor.pb.cc. So, once
# the protocol compiler has been built with the slightly-broken
# descriptor.pb.cc, the files must be regenerated and the compiler must be
# built again.
if test ! -e src/google/protobuf/stubs/common.h; then
cat >&2 << __EOF__
Could not find source code. Make sure you are running this script from the
root of the distribution tree.
__EOF__
exit 1
fi
if test ! -e src/Makefile; then
cat >&2 << __EOF__
Could not find src/Makefile. You must run ./configure (and perhaps
./autogen.sh) first.
__EOF__
exit 1
fi
pushd src
make protoc && ./protoc --cpp_out=dllexport_decl=LIBPROTOBUF_EXPORT:. google/protobuf/descriptor.proto
popd

46
java/README.txt Normal file

@ -0,0 +1,46 @@
Protocol Buffers - Google's data interchange format
Copyright 2008 Google Inc.
This directory contains the Java Protocol Buffers runtime library.
Installation
============
1) Install Apache Maven if you don't have it:
http://maven.apache.org/
2) Build the C++ code, or obtain a binary distribution of protoc. If
you install a binary distribution, make sure that it is the same
version as this package. If in doubt, run:
$ protoc --version
You will need to place the protoc executable in ../src. (If you
built it yourself, it should already be there.)
3) Run the tests:
$ mvn test
If some tests fail, this library may not work correctly on your
system. Continue at your own risk.
4) Install the library into your Maven repository:
$ mvn install
5) If you do not use Maven to manage your own build, you can build a
.jar file to use:
$ mvn package
The .jar will be placed in the "target" directory.
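If you go this route, the resulting .jar (named by Maven's usual
artifactId-version convention) just needs to be on your classpath when
compiling code generated by protoc; for example, with a hypothetical
generated source file:
$ javac -classpath target/protobuf-java-2.0.1-SNAPSHOT.jar YourGeneratedProto.java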
Usage
=====
The complete documentation for Protocol Buffers is available via the
web at:
http://code.google.com/apis/protocolbuffers/

107
java/pom.xml Normal file

@ -0,0 +1,107 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<!-- This is commented until the parent can be deployed to a repository. -->
<!--
<parent>
<groupId>com</groupId>
<artifactId>google</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
-->
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>2.0.1-SNAPSHOT</version>
<name>Protocol Buffer Java API</name>
<packaging>jar</packaging>
<inceptionYear>2008</inceptionYear>
<url>http://code.google.com/p/protobuf</url>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.4</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.easymock</groupId>
<artifactId>easymock</artifactId>
<version>2.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.easymock</groupId>
<artifactId>easymockclassextension</artifactId>
<version>2.2.1</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
</configuration>
</plugin>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<includes>
<include>**/*Test.java</include>
</includes>
</configuration>
</plugin>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>generate-sources</id>
<phase>generate-sources</phase>
<configuration>
<tasks>
<mkdir dir="target/generated-sources" />
<exec executable="../src/protoc">
<arg value="--java_out=target/generated-sources" />
<arg value="--proto_path=../src" />
<arg value="../src/google/protobuf/descriptor.proto" />
</exec>
</tasks>
<sourceRoot>target/generated-sources</sourceRoot>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>generate-test-sources</id>
<phase>generate-test-sources</phase>
<configuration>
<tasks>
<mkdir dir="target/generated-test-sources" />
<exec executable="../src/protoc">
<arg value="--java_out=target/generated-test-sources" />
<arg value="--proto_path=../src" />
<arg value="--proto_path=src/test/java" />
<arg value="../src/google/protobuf/unittest.proto" />
<arg value="../src/google/protobuf/unittest_import.proto" />
<arg value="../src/google/protobuf/unittest_mset.proto" />
<arg value="src/test/java/com/google/protobuf/multiple_files_test.proto" />
<arg
value="../src/google/protobuf/unittest_optimize_for.proto" />
</exec>
</tasks>
<testSourceRoot>target/generated-test-sources</testSourceRoot>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

343
java/src/main/java/com/google/protobuf/AbstractMessage.java Normal file

@ -0,0 +1,343 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import com.google.protobuf.Descriptors.FieldDescriptor;
import java.io.InputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.List;
import java.util.Map;
/**
* A partial implementation of the {@link Message} interface which implements
* as many methods of that interface as possible in terms of other methods.
*
* @author kenton@google.com Kenton Varda
*/
public abstract class AbstractMessage implements Message {
@SuppressWarnings("unchecked")
public boolean isInitialized() {
// Check that all required fields are present.
for (FieldDescriptor field : getDescriptorForType().getFields()) {
if (field.isRequired()) {
if (!hasField(field)) {
return false;
}
}
}
// Check that embedded messages are initialized.
for (Map.Entry<FieldDescriptor, Object> entry : getAllFields().entrySet()) {
FieldDescriptor field = entry.getKey();
if (field.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
if (field.isRepeated()) {
for (Message element : (List<Message>) entry.getValue()) {
if (!element.isInitialized()) {
return false;
}
}
} else {
if (!((Message) entry.getValue()).isInitialized()) {
return false;
}
}
}
}
return true;
}
public final String toString() {
return TextFormat.printToString(this);
}
public void writeTo(CodedOutputStream output) throws IOException {
for (Map.Entry<FieldDescriptor, Object> entry : getAllFields().entrySet()) {
FieldDescriptor field = entry.getKey();
if (field.isRepeated()) {
for (Object element : (List) entry.getValue()) {
output.writeField(field.getType(), field.getNumber(), element);
}
} else {
output.writeField(field.getType(), field.getNumber(), entry.getValue());
}
}
UnknownFieldSet unknownFields = getUnknownFields();
if (getDescriptorForType().getOptions().getMessageSetWireFormat()) {
unknownFields.writeAsMessageSetTo(output);
} else {
unknownFields.writeTo(output);
}
}
public ByteString toByteString() {
try {
ByteString.CodedBuilder out =
ByteString.newCodedBuilder(getSerializedSize());
writeTo(out.getCodedOutput());
return out.build();
} catch (IOException e) {
throw new RuntimeException(
"Serializing to a ByteString threw an IOException (should " +
"never happen).", e);
}
}
public byte[] toByteArray() {
try {
byte[] result = new byte[getSerializedSize()];
CodedOutputStream output = CodedOutputStream.newInstance(result);
writeTo(output);
output.checkNoSpaceLeft();
return result;
} catch (IOException e) {
throw new RuntimeException(
"Serializing to a byte array threw an IOException " +
"(should never happen).", e);
}
}
public void writeTo(OutputStream output) throws IOException {
CodedOutputStream codedOutput = CodedOutputStream.newInstance(output);
writeTo(codedOutput);
codedOutput.flush();
}
private int memoizedSize = -1;
public int getSerializedSize() {
int size = memoizedSize;
if (size != -1) return size;
size = 0;
for (Map.Entry<FieldDescriptor, Object> entry : getAllFields().entrySet()) {
FieldDescriptor field = entry.getKey();
if (field.isRepeated()) {
for (Object element : (List) entry.getValue()) {
size += CodedOutputStream.computeFieldSize(
field.getType(), field.getNumber(), element);
}
} else {
size += CodedOutputStream.computeFieldSize(
field.getType(), field.getNumber(), entry.getValue());
}
}
UnknownFieldSet unknownFields = getUnknownFields();
if (getDescriptorForType().getOptions().getMessageSetWireFormat()) {
size += unknownFields.getSerializedSizeAsMessageSet();
} else {
size += unknownFields.getSerializedSize();
}
memoizedSize = size;
return size;
}
@Override
public boolean equals(Object other) {
if (other == this) {
return true;
}
if (!(other instanceof Message)) {
return false;
}
Message otherMessage = (Message) other;
if (getDescriptorForType() != otherMessage.getDescriptorForType()) {
return false;
}
return getAllFields().equals(otherMessage.getAllFields());
}
@Override
public int hashCode() {
int hash = 41;
hash = (19 * hash) + getDescriptorForType().hashCode();
hash = (53 * hash) + getAllFields().hashCode();
return hash;
}
// =================================================================
/**
* A partial implementation of the {@link Message.Builder} interface which
* implements as many methods of that interface as possible in terms of
* other methods.
*/
@SuppressWarnings("unchecked")
public static abstract class Builder<BuilderType extends Builder>
implements Message.Builder {
// The compiler produces an error if this is not declared explicitly.
public abstract BuilderType clone();
public BuilderType clear() {
for (Map.Entry<FieldDescriptor, Object> entry :
getAllFields().entrySet()) {
clearField(entry.getKey());
}
return (BuilderType) this;
}
public BuilderType mergeFrom(Message other) {
if (other.getDescriptorForType() != getDescriptorForType()) {
throw new IllegalArgumentException(
"mergeFrom(Message) can only merge messages of the same type.");
}
// Note: We don't attempt to verify that other's fields have valid
// types. Doing so would be a losing battle. We'd have to verify
// all sub-messages as well, and we'd have to make copies of all of
// them to ensure that they don't change after verification (since
// the Message interface itself cannot enforce immutability of
// implementations).
// TODO(kenton): Provide a function somewhere called makeDeepCopy()
// which allows people to make secure deep copies of messages.
for (Map.Entry<FieldDescriptor, Object> entry :
other.getAllFields().entrySet()) {
FieldDescriptor field = entry.getKey();
if (field.isRepeated()) {
for (Object element : (List)entry.getValue()) {
addRepeatedField(field, element);
}
} else if (field.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
Message existingValue = (Message)getField(field);
if (existingValue == existingValue.getDefaultInstanceForType()) {
setField(field, entry.getValue());
} else {
setField(field,
existingValue.newBuilderForType()
.mergeFrom(existingValue)
.mergeFrom((Message)entry.getValue())
.build());
}
} else {
setField(field, entry.getValue());
}
}
return (BuilderType) this;
}
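    // Illustrative sketch, not part of the original source: the merge
    // semantics above, assuming a hypothetical generated type "Foo" with a
    // repeated string field "names" and an optional int32 field "id".
    //
    //   Foo a = Foo.newBuilder().addNames("x").setId(1).build();
    //   Foo b = Foo.newBuilder().addNames("y").setId(2).build();
    //   Foo merged = Foo.newBuilder().mergeFrom(a).mergeFrom(b).build();
    //   // merged.getNamesList() -> ["x", "y"]  (repeated fields are appended)
    //   // merged.getId()        -> 2           (scalars take the later value)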
public BuilderType mergeFrom(CodedInputStream input) throws IOException {
return mergeFrom(input, ExtensionRegistry.getEmptyRegistry());
}
public BuilderType mergeFrom(CodedInputStream input,
ExtensionRegistry extensionRegistry)
throws IOException {
UnknownFieldSet.Builder unknownFields =
UnknownFieldSet.newBuilder(getUnknownFields());
FieldSet.mergeFrom(input, unknownFields, extensionRegistry, this);
setUnknownFields(unknownFields.build());
return (BuilderType) this;
}
public BuilderType mergeUnknownFields(UnknownFieldSet unknownFields) {
setUnknownFields(
UnknownFieldSet.newBuilder(getUnknownFields())
.mergeFrom(unknownFields)
.build());
return (BuilderType) this;
}
public BuilderType mergeFrom(ByteString data)
throws InvalidProtocolBufferException {
try {
CodedInputStream input = data.newCodedInput();
mergeFrom(input);
input.checkLastTagWas(0);
return (BuilderType) this;
} catch (InvalidProtocolBufferException e) {
throw e;
} catch (IOException e) {
throw new RuntimeException(
"Reading from a ByteString threw an IOException (should " +
"never happen).", e);
}
}
public BuilderType mergeFrom(ByteString data,
ExtensionRegistry extensionRegistry)
throws InvalidProtocolBufferException {
try {
CodedInputStream input = data.newCodedInput();
mergeFrom(input, extensionRegistry);
input.checkLastTagWas(0);
return (BuilderType) this;
} catch (InvalidProtocolBufferException e) {
throw e;
} catch (IOException e) {
throw new RuntimeException(
"Reading from a ByteString threw an IOException (should " +
"never happen).", e);
}
}
public BuilderType mergeFrom(byte[] data)
throws InvalidProtocolBufferException {
try {
CodedInputStream input = CodedInputStream.newInstance(data);
mergeFrom(input);
input.checkLastTagWas(0);
return (BuilderType) this;
} catch (InvalidProtocolBufferException e) {
throw e;
} catch (IOException e) {
throw new RuntimeException(
"Reading from a byte array threw an IOException (should " +
"never happen).", e);
}
}
public BuilderType mergeFrom(
byte[] data, ExtensionRegistry extensionRegistry)
throws InvalidProtocolBufferException {
try {
CodedInputStream input = CodedInputStream.newInstance(data);
mergeFrom(input, extensionRegistry);
input.checkLastTagWas(0);
return (BuilderType) this;
} catch (InvalidProtocolBufferException e) {
throw e;
} catch (IOException e) {
throw new RuntimeException(
"Reading from a byte array threw an IOException (should " +
"never happen).", e);
}
}
public BuilderType mergeFrom(InputStream input) throws IOException {
CodedInputStream codedInput = CodedInputStream.newInstance(input);
mergeFrom(codedInput);
codedInput.checkLastTagWas(0);
return (BuilderType) this;
}
public BuilderType mergeFrom(InputStream input,
ExtensionRegistry extensionRegistry)
throws IOException {
CodedInputStream codedInput = CodedInputStream.newInstance(input);
mergeFrom(codedInput, extensionRegistry);
codedInput.checkLastTagWas(0);
return (BuilderType) this;
}
}
}

View File

@ -0,0 +1,318 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import java.io.InputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.UnsupportedEncodingException;
/**
* Immutable array of bytes.
*
* @author crazybob@google.com Bob Lee
* @author kenton@google.com Kenton Varda
*/
public final class ByteString {
private final byte[] bytes;
private ByteString(byte[] bytes) {
this.bytes = bytes;
}
/**
* Gets the byte at the given index.
*
* @throws ArrayIndexOutOfBoundsException {@code index} is < 0 or >= size
*/
public byte byteAt(int index) {
return bytes[index];
}
/**
* Gets the number of bytes.
*/
public int size() {
return this.bytes.length;
}
/**
* Returns {@code true} if the size is {@code 0}, {@code false} otherwise.
*/
public boolean isEmpty() {
return this.bytes.length == 0;
}
// =================================================================
// byte[] -> ByteString
/**
* Empty ByteString.
*/
public static final ByteString EMPTY = new ByteString(new byte[0]);
/**
* Copies the given bytes into a {@code ByteString}.
*/
public static ByteString copyFrom(byte[] bytes, int offset, int size) {
byte[] copy = new byte[size];
System.arraycopy(bytes, offset, copy, 0, size);
return new ByteString(copy);
}
/**
* Copies the given bytes into a {@code ByteString}.
*/
public static ByteString copyFrom(byte[] bytes) {
return copyFrom(bytes, 0, bytes.length);
}
/**
* Encodes {@code text} into a sequence of bytes using the named charset
* and returns the result as a {@code ByteString}.
*/
public static ByteString copyFrom(String text, String charsetName)
throws UnsupportedEncodingException {
return new ByteString(text.getBytes(charsetName));
}
/**
* Encodes {@code text} into a sequence of UTF-8 bytes and returns the
* result as a {@code ByteString}.
*/
public static ByteString copyFromUtf8(String text) {
try {
return new ByteString(text.getBytes("UTF-8"));
} catch (UnsupportedEncodingException e) {
throw new RuntimeException("UTF-8 not supported?", e);
}
}
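  // Illustrative sketch, not part of the original source: a typical UTF-8
  // round trip through an immutable ByteString.
  //
  //   ByteString bytes = ByteString.copyFromUtf8("hello");
  //   int size = bytes.size();               // 5
  //   String text = bytes.toStringUtf8();    // "hello"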
// =================================================================
// ByteString -> byte[]
/**
* Copies bytes into a buffer at the given offset.
*
* @param target buffer to copy into
* @param offset in the target buffer
*/
public void copyTo(byte[] target, int offset) {
System.arraycopy(bytes, 0, target, offset, bytes.length);
}
/**
* Copies bytes into a buffer.
*
* @param target buffer to copy into
* @param sourceOffset offset within these bytes
* @param targetOffset offset within the target buffer
* @param size number of bytes to copy
*/
public void copyTo(byte[] target, int sourceOffset, int targetOffset,
int size) {
System.arraycopy(bytes, sourceOffset, target, targetOffset, size);
}
/**
* Copies bytes to a {@code byte[]}.
*/
public byte[] toByteArray() {
int size = this.bytes.length;
byte[] copy = new byte[size];
System.arraycopy(this.bytes, 0, copy, 0, size);
return copy;
}
/**
* Constructs a new {@code String} by decoding the bytes using the
* specified charset.
*/
public String toString(String charsetName)
throws UnsupportedEncodingException {
return new String(this.bytes, charsetName);
}
/**
* Constructs a new {@code String} by decoding the bytes as UTF-8.
*/
public String toStringUtf8() {
try {
return new String(this.bytes, "UTF-8");
} catch (UnsupportedEncodingException e) {
throw new RuntimeException("UTF-8 not supported?", e);
}
}
// =================================================================
// equals() and hashCode()
@Override
public boolean equals(Object o) {
if (o == this) {
return true;
}
if (!(o instanceof ByteString)) {
return false;
}
ByteString other = (ByteString) o;
int size = this.bytes.length;
if (size != other.bytes.length) {
return false;
}
byte[] bytes = this.bytes;
byte[] otherBytes = other.bytes;
for (int i = 0; i < size; i++) {
if (bytes[i] != otherBytes[i]) {
return false;
}
}
return true;
}
volatile int hash = 0;
@Override
public int hashCode() {
int h = this.hash;
if (h == 0) {
byte[] bytes = this.bytes;
int size = this.bytes.length;
h = size;
for (int i = 0; i < size; i++) {
h = h * 31 + bytes[i];
}
if (h == 0) {
h = 1;
}
this.hash = h;
}
return h;
}
// =================================================================
// Input stream
/**
* Creates an {@code InputStream} which can be used to read the bytes.
*/
public InputStream newInput() {
return new ByteArrayInputStream(bytes);
}
/**
* Creates a {@link CodedInputStream} which can be used to read the bytes.
* Using this is more efficient than creating a {@link CodedInputStream}
* wrapping the result of {@link #newInput()}.
*/
public CodedInputStream newCodedInput() {
// We trust CodedInputStream not to modify the bytes, or to give anyone
// else access to them.
return CodedInputStream.newInstance(bytes);
}
// =================================================================
// Output stream
/**
* Creates a new {@link Output} with the given initial capacity.
*/
public static Output newOutput(int initialCapacity) {
return new Output(new ByteArrayOutputStream(initialCapacity));
}
/**
* Creates a new {@link Output}.
*/
public static Output newOutput() {
return newOutput(32);
}
/**
* Outputs to a {@code ByteString} instance. Call {@link #toByteString()} to
* create the {@code ByteString} instance.
*/
public static final class Output extends FilterOutputStream {
private final ByteArrayOutputStream bout;
/**
* Constructs a new output with the given initial capacity.
*/
private Output(ByteArrayOutputStream bout) {
super(bout);
this.bout = bout;
}
/**
* Creates a {@code ByteString} instance from this {@code Output}.
*/
public ByteString toByteString() {
byte[] byteArray = bout.toByteArray();
return new ByteString(byteArray);
}
}
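  // Illustrative sketch, not part of the original source: building a
  // ByteString incrementally through the Output stream (the inherited
  // OutputStream methods may throw IOException).
  //
  //   ByteString.Output out = ByteString.newOutput();
  //   out.write(new byte[] { 1, 2, 3 });
  //   ByteString result = out.toByteString();   // contains the three bytes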
/**
* Constructs a new ByteString builder, which allows you to efficiently
* construct a {@code ByteString} by writing to a {@link CodedOutputStream}.
* Using this is much more efficient than calling {@code newOutput()} and
* wrapping that in a {@code CodedOutputStream}.
*
* <p>This is package-private because it's a somewhat confusing interface.
* Users can call {@link Message#toByteString()} instead of calling this
* directly.
*
* @param size The target byte size of the {@code ByteString}. You must
* write exactly this many bytes before building the result.
*/
static CodedBuilder newCodedBuilder(int size) {
return new CodedBuilder(size);
}
/** See {@link ByteString#newCodedBuilder(int)}. */
static final class CodedBuilder {
private final CodedOutputStream output;
private final byte[] buffer;
private CodedBuilder(int size) {
buffer = new byte[size];
output = CodedOutputStream.newInstance(buffer);
}
public ByteString build() {
output.checkNoSpaceLeft();
// We can be confident that the CodedOutputStream will not modify the
// underlying bytes anymore because it already wrote all of them. So,
// no need to make a copy.
return new ByteString(buffer);
}
public CodedOutputStream getCodedOutput() {
return output;
}
}
}

View File

@ -0,0 +1,766 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
/**
* Reads and decodes protocol message fields.
*
* This class contains two kinds of methods: methods that read specific
* protocol message constructs and field types (e.g. {@link #readTag()} and
* {@link #readInt32()}) and methods that read low-level values (e.g.
* {@link #readRawVarint32()} and {@link #readRawBytes}). If you are reading
* encoded protocol messages, you should use the former methods, but if you are
* reading some other format of your own design, use the latter.
*
* @author kenton@google.com Kenton Varda
*/
public final class CodedInputStream {
/**
* Create a new CodedInputStream wrapping the given InputStream.
*/
public static CodedInputStream newInstance(InputStream input) {
return new CodedInputStream(input);
}
/**
* Create a new CodedInputStream wrapping the given byte array.
*/
public static CodedInputStream newInstance(byte[] buf) {
return new CodedInputStream(buf);
}
// -----------------------------------------------------------------
/**
* Attempt to read a field tag, returning zero if we have reached EOF.
* Protocol message parsers use this to read tags, since a protocol message
* may legally end wherever a tag occurs, and zero is not a valid tag number.
*/
public int readTag() throws IOException {
if (bufferPos == bufferSize && !refillBuffer(false)) {
lastTag = 0;
return 0;
}
lastTag = readRawVarint32();
if (lastTag == 0) {
// If we actually read zero, that's not a valid tag.
throw InvalidProtocolBufferException.invalidTag();
}
return lastTag;
}
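  // Illustrative sketch, not part of the original source: the canonical
  // parsing loop built on readTag(), here simply skipping every field.
  // "someInputStream" stands in for any java.io.InputStream.
  //
  //   CodedInputStream in = CodedInputStream.newInstance(someInputStream);
  //   int tag;
  //   while ((tag = in.readTag()) != 0) {
  //     if (!in.skipField(tag)) {
  //       break;   // hit an end-group tag
  //     }
  //   }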
/**
* Verifies that the last call to readTag() returned the given tag value.
* This is used to verify that a nested group ended with the correct
* end tag.
*
* @throws InvalidProtocolBufferException {@code value} does not match the
* last tag.
*/
public void checkLastTagWas(int value) throws InvalidProtocolBufferException {
if (lastTag != value) {
throw InvalidProtocolBufferException.invalidEndTag();
}
}
/**
* Reads and discards a single field, given its tag value.
*
* @return {@code false} if the tag is an endgroup tag, in which case
* nothing is skipped. Otherwise, returns {@code true}.
*/
public boolean skipField(int tag) throws IOException {
switch (WireFormat.getTagWireType(tag)) {
case WireFormat.WIRETYPE_VARINT:
readInt32();
return true;
case WireFormat.WIRETYPE_FIXED64:
readRawLittleEndian64();
return true;
case WireFormat.WIRETYPE_LENGTH_DELIMITED:
skipRawBytes(readRawVarint32());
return true;
case WireFormat.WIRETYPE_START_GROUP:
skipMessage();
checkLastTagWas(
WireFormat.makeTag(WireFormat.getTagFieldNumber(tag),
WireFormat.WIRETYPE_END_GROUP));
return true;
case WireFormat.WIRETYPE_END_GROUP:
return false;
case WireFormat.WIRETYPE_FIXED32:
readRawLittleEndian32();
return true;
default:
throw InvalidProtocolBufferException.invalidWireType();
}
}
/**
* Reads and discards an entire message. This will read either until EOF
* or until an endgroup tag, whichever comes first.
*/
public void skipMessage() throws IOException {
while (true) {
int tag = readTag();
if (tag == 0 || !skipField(tag)) return;
}
}
// -----------------------------------------------------------------
/** Read a {@code double} field value from the stream. */
public double readDouble() throws IOException {
return Double.longBitsToDouble(readRawLittleEndian64());
}
/** Read a {@code float} field value from the stream. */
public float readFloat() throws IOException {
return Float.intBitsToFloat(readRawLittleEndian32());
}
/** Read a {@code uint64} field value from the stream. */
public long readUInt64() throws IOException {
return readRawVarint64();
}
/** Read an {@code int64} field value from the stream. */
public long readInt64() throws IOException {
return readRawVarint64();
}
/** Read an {@code int32} field value from the stream. */
public int readInt32() throws IOException {
return readRawVarint32();
}
/** Read a {@code fixed64} field value from the stream. */
public long readFixed64() throws IOException {
return readRawLittleEndian64();
}
/** Read a {@code fixed32} field value from the stream. */
public int readFixed32() throws IOException {
return readRawLittleEndian32();
}
/** Read a {@code bool} field value from the stream. */
public boolean readBool() throws IOException {
return readRawVarint32() != 0;
}
/** Read a {@code string} field value from the stream. */
public String readString() throws IOException {
int size = readRawVarint32();
if (size < bufferSize - bufferPos && size > 0) {
// Fast path: We already have the bytes in a contiguous buffer, so
// just copy directly from it.
String result = new String(buffer, bufferPos, size, "UTF-8");
bufferPos += size;
return result;
} else {
// Slow path: Build a byte array first then copy it.
return new String(readRawBytes(size), "UTF-8");
}
}
/** Read a {@code group} field value from the stream. */
public void readGroup(int fieldNumber, Message.Builder builder,
ExtensionRegistry extensionRegistry)
throws IOException {
if (recursionDepth >= recursionLimit) {
throw InvalidProtocolBufferException.recursionLimitExceeded();
}
++recursionDepth;
builder.mergeFrom(this, extensionRegistry);
checkLastTagWas(
WireFormat.makeTag(fieldNumber, WireFormat.WIRETYPE_END_GROUP));
--recursionDepth;
}
/**
* Reads a {@code group} field value from the stream and merges it into the
* given {@link UnknownFieldSet}.
*/
public void readUnknownGroup(int fieldNumber, UnknownFieldSet.Builder builder)
throws IOException {
if (recursionDepth >= recursionLimit) {
throw InvalidProtocolBufferException.recursionLimitExceeded();
}
++recursionDepth;
builder.mergeFrom(this);
checkLastTagWas(
WireFormat.makeTag(fieldNumber, WireFormat.WIRETYPE_END_GROUP));
--recursionDepth;
}
/** Read an embedded message field value from the stream. */
public void readMessage(Message.Builder builder,
ExtensionRegistry extensionRegistry)
throws IOException {
int length = readRawVarint32();
if (recursionDepth >= recursionLimit) {
throw InvalidProtocolBufferException.recursionLimitExceeded();
}
int oldLimit = pushLimit(length);
++recursionDepth;
builder.mergeFrom(this, extensionRegistry);
checkLastTagWas(0);
--recursionDepth;
popLimit(oldLimit);
}
/** Read a {@code bytes} field value from the stream. */
public ByteString readBytes() throws IOException {
int size = readRawVarint32();
if (size < bufferSize - bufferPos && size > 0) {
// Fast path: We already have the bytes in a contiguous buffer, so
// just copy directly from it.
ByteString result = ByteString.copyFrom(buffer, bufferPos, size);
bufferPos += size;
return result;
} else {
// Slow path: Build a byte array first then copy it.
return ByteString.copyFrom(readRawBytes(size));
}
}
/** Read a {@code uint32} field value from the stream. */
public int readUInt32() throws IOException {
return readRawVarint32();
}
/**
* Read an enum field value from the stream. Caller is responsible
* for converting the numeric value to an actual enum.
*/
public int readEnum() throws IOException {
return readRawVarint32();
}
/** Read an {@code sfixed32} field value from the stream. */
public int readSFixed32() throws IOException {
return readRawLittleEndian32();
}
/** Read an {@code sfixed64} field value from the stream. */
public long readSFixed64() throws IOException {
return readRawLittleEndian64();
}
/** Read an {@code sint32} field value from the stream. */
public int readSInt32() throws IOException {
return decodeZigZag32(readRawVarint32());
}
/** Read an {@code sint64} field value from the stream. */
public long readSInt64() throws IOException {
return decodeZigZag64(readRawVarint64());
}
/**
* Read a field of any primitive type. Enums, groups, and embedded
* messages are not handled by this method.
*
* @param type Declared type of the field.
* @return An object representing the field's value, of the exact
* type which would be returned by
* {@link Message#getField(Descriptors.FieldDescriptor)} for
* this field.
*/
public Object readPrimitiveField(
Descriptors.FieldDescriptor.Type type) throws IOException {
switch (type) {
case DOUBLE : return readDouble ();
case FLOAT : return readFloat ();
case INT64 : return readInt64 ();
case UINT64 : return readUInt64 ();
case INT32 : return readInt32 ();
case FIXED64 : return readFixed64 ();
case FIXED32 : return readFixed32 ();
case BOOL : return readBool ();
case STRING : return readString ();
case BYTES : return readBytes ();
case UINT32 : return readUInt32 ();
case SFIXED32: return readSFixed32();
case SFIXED64: return readSFixed64();
case SINT32 : return readSInt32 ();
case SINT64 : return readSInt64 ();
case GROUP:
throw new IllegalArgumentException(
"readPrimitiveField() cannot handle nested groups.");
case MESSAGE:
throw new IllegalArgumentException(
"readPrimitiveField() cannot handle embedded messages.");
case ENUM:
// We don't handle enums because we don't know what to do if the
// value is not recognized.
throw new IllegalArgumentException(
"readPrimitiveField() cannot handle enums.");
}
throw new RuntimeException(
"There is no way to get here, but the compiler thinks otherwise.");
}
// =================================================================
/**
* Read a raw Varint from the stream. If larger than 32 bits, discard the
* upper bits.
*/
public int readRawVarint32() throws IOException {
byte tmp = readRawByte();
if (tmp >= 0) {
return tmp;
}
int result = tmp & 0x7f;
if ((tmp = readRawByte()) >= 0) {
result |= tmp << 7;
} else {
result |= (tmp & 0x7f) << 7;
if ((tmp = readRawByte()) >= 0) {
result |= tmp << 14;
} else {
result |= (tmp & 0x7f) << 14;
if ((tmp = readRawByte()) >= 0) {
result |= tmp << 21;
} else {
result |= (tmp & 0x7f) << 21;
result |= (tmp = readRawByte()) << 28;
if (tmp < 0) {
// Discard upper 32 bits.
for (int i = 0; i < 5; i++) {
if (readRawByte() >= 0) return result;
}
throw InvalidProtocolBufferException.malformedVarint();
}
}
}
}
return result;
}
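  // Worked example, not part of the original source: the value 300
  // (binary 1_0010_1100) is encoded as the two varint bytes 0xAC 0x02:
  //   0xAC = 1 0101100 -> low seven bits 0101100, continuation bit set
  //   0x02 = 0 0000010 -> next seven bits 0000010, no continuation
  //   0101100 | (0000010 << 7) = 44 + 256 = 300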
/** Read a raw Varint from the stream. */
public long readRawVarint64() throws IOException {
int shift = 0;
long result = 0;
while (shift < 64) {
byte b = readRawByte();
result |= (long)(b & 0x7F) << shift;
if ((b & 0x80) == 0) return result;
shift += 7;
}
throw InvalidProtocolBufferException.malformedVarint();
}
/** Read a 32-bit little-endian integer from the stream. */
public int readRawLittleEndian32() throws IOException {
byte b1 = readRawByte();
byte b2 = readRawByte();
byte b3 = readRawByte();
byte b4 = readRawByte();
return (((int)b1 & 0xff) ) |
(((int)b2 & 0xff) << 8) |
(((int)b3 & 0xff) << 16) |
(((int)b4 & 0xff) << 24);
}
/** Read a 64-bit little-endian integer from the stream. */
public long readRawLittleEndian64() throws IOException {
byte b1 = readRawByte();
byte b2 = readRawByte();
byte b3 = readRawByte();
byte b4 = readRawByte();
byte b5 = readRawByte();
byte b6 = readRawByte();
byte b7 = readRawByte();
byte b8 = readRawByte();
return (((long)b1 & 0xff) ) |
(((long)b2 & 0xff) << 8) |
(((long)b3 & 0xff) << 16) |
(((long)b4 & 0xff) << 24) |
(((long)b5 & 0xff) << 32) |
(((long)b6 & 0xff) << 40) |
(((long)b7 & 0xff) << 48) |
(((long)b8 & 0xff) << 56);
}
/**
* Decode a ZigZag-encoded 32-bit value. ZigZag encodes signed integers
* into values that can be efficiently encoded with varint. (Otherwise,
* negative values must be sign-extended to 64 bits to be varint encoded,
* thus always taking 10 bytes on the wire.)
*
* @param n An unsigned 32-bit integer, stored in a signed int because
* Java has no explicit unsigned support.
* @return A signed 32-bit integer.
*/
public static int decodeZigZag32(int n) {
return (n >>> 1) ^ -(n & 1);
}
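  // Worked example, not part of the original source: decodeZigZag32 maps
  //   0 -> 0, 1 -> -1, 2 -> 1, 3 -> -2, 4 -> 2, ...
  // e.g. decodeZigZag32(3) = (3 >>> 1) ^ -(3 & 1) = 1 ^ -1 = -2.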
/**
* Decode a ZigZag-encoded 64-bit value. ZigZag encodes signed integers
* into values that can be efficiently encoded with varint. (Otherwise,
* negative values must be sign-extended to 64 bits to be varint encoded,
* thus always taking 10 bytes on the wire.)
*
* @param n An unsigned 64-bit integer, stored in a signed long because
* Java has no explicit unsigned support.
* @return A signed 64-bit integer.
*/
public static long decodeZigZag64(long n) {
return (n >>> 1) ^ -(n & 1);
}
// -----------------------------------------------------------------
private byte[] buffer;
private int bufferSize;
private int bufferSizeAfterLimit = 0;
private int bufferPos = 0;
private InputStream input;
private int lastTag = 0;
/**
* The total number of bytes read before the current buffer. The total
* bytes read up to the current position can be computed as
* {@code totalBytesRetired + bufferPos}.
*/
private int totalBytesRetired = 0;
/** The absolute position of the end of the current message. */
private int currentLimit = Integer.MAX_VALUE;
/** See setRecursionLimit() */
private int recursionDepth = 0;
private int recursionLimit = DEFAULT_RECURSION_LIMIT;
/** See setSizeLimit() */
private int sizeLimit = DEFAULT_SIZE_LIMIT;
private static final int DEFAULT_RECURSION_LIMIT = 64;
private static final int DEFAULT_SIZE_LIMIT = 64 << 20; // 64MB
private static final int BUFFER_SIZE = 4096;
private CodedInputStream(byte[] buffer) {
this.buffer = buffer;
this.bufferSize = buffer.length;
this.input = null;
}
private CodedInputStream(InputStream input) {
this.buffer = new byte[BUFFER_SIZE];
this.bufferSize = 0;
this.input = input;
}
/**
* Set the maximum message recursion depth. In order to prevent malicious
* messages from causing stack overflows, {@code CodedInputStream} limits
* how deeply messages may be nested. The default limit is 64.
*
* @return the old limit.
*/
public int setRecursionLimit(int limit) {
if (limit < 0) {
throw new IllegalArgumentException(
"Recursion limit cannot be negative: " + limit);
}
int oldLimit = recursionLimit;
recursionLimit = limit;
return oldLimit;
}
/**
* Set the maximum message size. In order to prevent malicious
* messages from exhausting memory or causing integer overflows,
* {@code CodedInputStream} limits how large a message may be.
* The default limit is 64MB. You should set this limit as small
* as you can without harming your app's functionality. Note that
* size limits only apply when reading from an {@code InputStream}, not
* when constructed around a raw byte array (nor with
* {@link ByteString#newCodedInput}).
*
* @return the old limit.
*/
public int setSizeLimit(int limit) {
if (limit < 0) {
throw new IllegalArgumentException(
"Size limit cannot be negative: " + limit);
}
int oldLimit = sizeLimit;
sizeLimit = limit;
return oldLimit;
}
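  // Illustrative sketch, not part of the original source: tightening both
  // limits before parsing untrusted data ("untrustedStream" is any
  // java.io.InputStream).
  //
  //   CodedInputStream in = CodedInputStream.newInstance(untrustedStream);
  //   in.setRecursionLimit(16);    // at most 16 levels of nested messages
  //   in.setSizeLimit(1 << 20);    // refuse streams larger than 1 MB
  //   // ... then pass "in" to a builder's mergeFrom(in, extensionRegistry).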
/**
* Sets {@code currentLimit} to (current position) + {@code byteLimit}. This
* is called when descending into a length-delimited embedded message.
*
* @return the old limit.
*/
public int pushLimit(int byteLimit) throws InvalidProtocolBufferException {
if (byteLimit < 0) {
throw InvalidProtocolBufferException.negativeSize();
}
byteLimit += totalBytesRetired + bufferPos;
int oldLimit = currentLimit;
if (byteLimit > oldLimit) {
throw InvalidProtocolBufferException.truncatedMessage();
}
currentLimit = byteLimit;
recomputeBufferSizeAfterLimit();
return oldLimit;
}
private void recomputeBufferSizeAfterLimit() {
bufferSize += bufferSizeAfterLimit;
int bufferEnd = totalBytesRetired + bufferSize;
if (bufferEnd > currentLimit) {
// Limit is in current buffer.
bufferSizeAfterLimit = bufferEnd - currentLimit;
bufferSize -= bufferSizeAfterLimit;
} else {
bufferSizeAfterLimit = 0;
}
}
/**
* Discards the current limit, returning to the previous limit.
*
* @param oldLimit The old limit, as returned by {@code pushLimit}.
*/
public void popLimit(int oldLimit) {
currentLimit = oldLimit;
recomputeBufferSizeAfterLimit();
}
/**
* Called when {@code this.buffer} is empty to read more bytes from the
* input. If {@code mustSucceed} is true, refillBuffer() guarantees that
* either there will be at least one byte in the buffer when it returns
* or it will throw an exception. If {@code mustSucceed} is false,
* refillBuffer() returns false if no more bytes were available.
*/
private boolean refillBuffer(boolean mustSucceed) throws IOException {
if (bufferPos < bufferSize) {
throw new IllegalStateException(
"refillBuffer() called when buffer wasn't empty.");
}
if (totalBytesRetired + bufferSize == currentLimit) {
// Oops, we hit a limit.
if (mustSucceed) {
throw InvalidProtocolBufferException.truncatedMessage();
} else {
return false;
}
}
totalBytesRetired += bufferSize;
bufferPos = 0;
bufferSize = (input == null) ? -1 : input.read(buffer);
if (bufferSize == -1) {
bufferSize = 0;
if (mustSucceed) {
throw InvalidProtocolBufferException.truncatedMessage();
} else {
return false;
}
} else {
recomputeBufferSizeAfterLimit();
int totalBytesRead =
totalBytesRetired + bufferSize + bufferSizeAfterLimit;
if (totalBytesRead > sizeLimit || totalBytesRead < 0) {
throw InvalidProtocolBufferException.sizeLimitExceeded();
}
return true;
}
}
/**
* Read one byte from the input.
*
* @throws InvalidProtocolBufferException The end of the stream or the current
* limit was reached.
*/
public byte readRawByte() throws IOException {
if (bufferPos == bufferSize) {
refillBuffer(true);
}
return buffer[bufferPos++];
}
/**
* Read a fixed size of bytes from the input.
*
* @throws InvalidProtocolBufferException The end of the stream or the current
* limit was reached.
*/
public byte[] readRawBytes(int size) throws IOException {
if (size < 0) {
throw InvalidProtocolBufferException.negativeSize();
}
if (totalBytesRetired + bufferPos + size > currentLimit) {
// Read to the end of the stream anyway.
skipRawBytes(currentLimit - totalBytesRetired - bufferPos);
// Then fail.
throw InvalidProtocolBufferException.truncatedMessage();
}
if (size <= bufferSize - bufferPos) {
// We have all the bytes we need already.
byte[] bytes = new byte[size];
System.arraycopy(buffer, bufferPos, bytes, 0, size);
bufferPos += size;
return bytes;
} else if (size < BUFFER_SIZE) {
// Reading more bytes than are in the buffer, but not an excessive number
// of bytes. We can safely allocate the resulting array ahead of time.
// First copy what we have.
byte[] bytes = new byte[size];
int pos = bufferSize - bufferPos;
System.arraycopy(buffer, bufferPos, bytes, 0, pos);
bufferPos = bufferSize;
// We want to use refillBuffer() and then copy from the buffer into our
// byte array rather than reading directly into our byte array because
// the input may be unbuffered.
refillBuffer(true);
while (size - pos > bufferSize) {
System.arraycopy(buffer, 0, bytes, pos, bufferSize);
pos += bufferSize;
bufferPos = bufferSize;
refillBuffer(true);
}
System.arraycopy(buffer, 0, bytes, pos, size - pos);
bufferPos = size - pos;
return bytes;
} else {
// The size is very large. For security reasons, we can't allocate the
// entire byte array yet. The size comes directly from the input, so a
// maliciously-crafted message could provide a bogus very large size in
// order to trick the app into allocating a lot of memory. We avoid this
// by allocating and reading only a small chunk at a time, so that the
// malicious message must actually *be* extremely large to cause
// problems. Meanwhile, we limit the allowed size of a message elsewhere.
// Remember the buffer markers since we'll have to copy the bytes out of
// it later.
int originalBufferPos = bufferPos;
int originalBufferSize = bufferSize;
// Mark the current buffer consumed.
totalBytesRetired += bufferSize;
bufferPos = 0;
bufferSize = 0;
// Read all the rest of the bytes we need.
int sizeLeft = size - (originalBufferSize - originalBufferPos);
List<byte[]> chunks = new ArrayList<byte[]>();
while (sizeLeft > 0) {
byte[] chunk = new byte[Math.min(sizeLeft, BUFFER_SIZE)];
int pos = 0;
while (pos < chunk.length) {
int n = (input == null) ? -1 :
input.read(chunk, pos, chunk.length - pos);
if (n == -1) {
throw InvalidProtocolBufferException.truncatedMessage();
}
totalBytesRetired += n;
pos += n;
}
sizeLeft -= chunk.length;
chunks.add(chunk);
}
// OK, got everything. Now concatenate it all into one buffer.
byte[] bytes = new byte[size];
// Start by copying the leftover bytes from this.buffer.
int pos = originalBufferSize - originalBufferPos;
System.arraycopy(buffer, originalBufferPos, bytes, 0, pos);
// And now all the chunks.
for (byte[] chunk : chunks) {
System.arraycopy(chunk, 0, bytes, pos, chunk.length);
pos += chunk.length;
}
// Done.
return bytes;
}
}
/**
* Reads and discards {@code size} bytes.
*
* @throws InvalidProtocolBufferException The end of the stream or the current
* limit was reached.
*/
public void skipRawBytes(int size) throws IOException {
if (size < 0) {
throw InvalidProtocolBufferException.negativeSize();
}
if (totalBytesRetired + bufferPos + size > currentLimit) {
// Read to the end of the stream anyway.
skipRawBytes(currentLimit - totalBytesRetired - bufferPos);
// Then fail.
throw InvalidProtocolBufferException.truncatedMessage();
}
if (size < bufferSize - bufferPos) {
// We have all the bytes we need already.
bufferPos += size;
} else {
// Skipping more bytes than are in the buffer. First skip what we have.
int pos = bufferSize - bufferPos;
totalBytesRetired += pos;
bufferPos = 0;
bufferSize = 0;
// Then skip directly from the InputStream for the rest.
while (pos < size) {
int n = (input == null) ? -1 : (int) input.skip(size - pos);
if (n <= 0) {
throw InvalidProtocolBufferException.truncatedMessage();
}
pos += n;
totalBytesRetired += n;
}
}
}
}

View File

@ -0,0 +1,775 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import java.io.OutputStream;
import java.io.IOException;
/**
* Encodes and writes protocol message fields.
*
* <p>This class contains two kinds of methods: methods that write specific
* protocol message constructs and field types (e.g. {@link #writeTag} and
* {@link #writeInt32}) and methods that write low-level values (e.g.
* {@link #writeRawVarint32} and {@link #writeRawBytes}). If you are
* writing encoded protocol messages, you should use the former methods, but if
* you are writing some other format of your own design, use the latter.
*
* <p>This class is totally unsynchronized.
*
* @author kenton@google.com Kenton Varda
*/
public final class CodedOutputStream {
private final byte[] buffer;
private final int limit;
private int position;
private final OutputStream output;
/**
* The buffer size used in {@link #newInstance(java.io.OutputStream)}.
*/
public static final int DEFAULT_BUFFER_SIZE = 4096;
private CodedOutputStream(byte[] buffer, int offset, int length) {
this.output = null;
this.buffer = buffer;
this.position = offset;
this.limit = offset + length;
}
private CodedOutputStream(OutputStream output, byte[] buffer) {
this.output = output;
this.buffer = buffer;
this.position = 0;
this.limit = buffer.length;
}
/**
* Create a new {@code CodedOutputStream} wrapping the given
* {@code OutputStream}.
*/
public static CodedOutputStream newInstance(OutputStream output) {
return newInstance(output, DEFAULT_BUFFER_SIZE);
}
/**
* Create a new {@code CodedOutputStream} wrapping the given
* {@code OutputStream} with a given buffer size.
*/
public static CodedOutputStream newInstance(OutputStream output,
int bufferSize) {
return new CodedOutputStream(output, new byte[bufferSize]);
}
/**
* Create a new {@code CodedOutputStream} that writes directly to the given
* byte array. If more bytes are written than fit in the array,
* {@link OutOfSpaceException} will be thrown. Writing directly to a flat
* array is faster than writing to an {@code OutputStream}. See also
* {@link ByteString#newCodedBuilder}.
*/
public static CodedOutputStream newInstance(byte[] flatArray) {
return newInstance(flatArray, 0, flatArray.length);
}
/**
* Create a new {@code CodedOutputStream} that writes directly to the given
* byte array slice. If more bytes are written than fit in the slice,
* {@link OutOfSpaceException} will be thrown. Writing directly to a flat
* array is faster than writing to an {@code OutputStream}. See also
* {@link ByteString#newCodedBuilder}.
*/
public static CodedOutputStream newInstance(byte[] flatArray, int offset,
int length) {
return new CodedOutputStream(flatArray, offset, length);
}
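  // Illustrative sketch, not part of the original source: writing a couple of
  // fields straight to an OutputStream ("someOutputStream" is any
  // java.io.OutputStream); flush() pushes the internal buffer to that stream.
  //
  //   CodedOutputStream out = CodedOutputStream.newInstance(someOutputStream);
  //   out.writeInt32(1, 150);           // field number 1, value 150
  //   out.writeString(2, "testing");    // field number 2
  //   out.flush();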
// -----------------------------------------------------------------
/** Write a {@code double} field, including tag, to the stream. */
public void writeDouble(int fieldNumber, double value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_FIXED64);
writeRawLittleEndian64(Double.doubleToRawLongBits(value));
}
/** Write a {@code float} field, including tag, to the stream. */
public void writeFloat(int fieldNumber, float value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_FIXED32);
writeRawLittleEndian32(Float.floatToRawIntBits(value));
}
/** Write a {@code uint64} field, including tag, to the stream. */
public void writeUInt64(int fieldNumber, long value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
writeRawVarint64(value);
}
/** Write an {@code int64} field, including tag, to the stream. */
public void writeInt64(int fieldNumber, long value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
writeRawVarint64(value);
}
/** Write an {@code int32} field, including tag, to the stream. */
public void writeInt32(int fieldNumber, int value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
if (value >= 0) {
writeRawVarint32(value);
} else {
// Must sign-extend.
writeRawVarint64(value);
}
}
/** Write a {@code fixed64} field, including tag, to the stream. */
public void writeFixed64(int fieldNumber, long value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_FIXED64);
writeRawLittleEndian64(value);
}
/** Write a {@code fixed32} field, including tag, to the stream. */
public void writeFixed32(int fieldNumber, int value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_FIXED32);
writeRawLittleEndian32(value);
}
/** Write a {@code bool} field, including tag, to the stream. */
public void writeBool(int fieldNumber, boolean value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
writeRawByte(value ? 1 : 0);
}
/** Write a {@code string} field, including tag, to the stream. */
public void writeString(int fieldNumber, String value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_LENGTH_DELIMITED);
// Unfortunately there does not appear to be any way to tell Java to encode
// UTF-8 directly into our buffer, so we have to let it create its own byte
// array and then copy.
byte[] bytes = value.getBytes("UTF-8");
writeRawVarint32(bytes.length);
writeRawBytes(bytes);
}
/** Write a {@code group} field, including tag, to the stream. */
public void writeGroup(int fieldNumber, Message value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_START_GROUP);
value.writeTo(this);
writeTag(fieldNumber, WireFormat.WIRETYPE_END_GROUP);
}
/** Write a group represented by an {@link UnknownFieldSet}. */
public void writeUnknownGroup(int fieldNumber, UnknownFieldSet value)
throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_START_GROUP);
value.writeTo(this);
writeTag(fieldNumber, WireFormat.WIRETYPE_END_GROUP);
}
/** Write an embedded message field, including tag, to the stream. */
public void writeMessage(int fieldNumber, Message value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_LENGTH_DELIMITED);
writeRawVarint32(value.getSerializedSize());
value.writeTo(this);
}
/** Write a {@code bytes} field, including tag, to the stream. */
public void writeBytes(int fieldNumber, ByteString value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_LENGTH_DELIMITED);
byte[] bytes = value.toByteArray();
writeRawVarint32(bytes.length);
writeRawBytes(bytes);
}
/** Write a {@code uint32} field, including tag, to the stream. */
public void writeUInt32(int fieldNumber, int value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
writeRawVarint32(value);
}
/**
* Write an enum field, including tag, to the stream. Caller is responsible
* for converting the enum value to its numeric value.
*/
public void writeEnum(int fieldNumber, int value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
writeRawVarint32(value);
}
/** Write an {@code sfixed32} field, including tag, to the stream. */
public void writeSFixed32(int fieldNumber, int value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_FIXED32);
writeRawLittleEndian32(value);
}
/** Write an {@code sfixed64} field, including tag, to the stream. */
public void writeSFixed64(int fieldNumber, long value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_FIXED64);
writeRawLittleEndian64(value);
}
/** Write an {@code sint32} field, including tag, to the stream. */
public void writeSInt32(int fieldNumber, int value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
writeRawVarint32(encodeZigZag32(value));
}
/** Write an {@code sint64} field, including tag, to the stream. */
public void writeSInt64(int fieldNumber, long value) throws IOException {
writeTag(fieldNumber, WireFormat.WIRETYPE_VARINT);
writeRawVarint64(encodeZigZag64(value));
}
/**
* Write a MessageSet extension field to the stream. For historical reasons,
* the wire format differs from normal fields.
*/
public void writeMessageSetExtension(int fieldNumber, Message value)
throws IOException {
writeTag(WireFormat.MESSAGE_SET_ITEM, WireFormat.WIRETYPE_START_GROUP);
writeUInt32(WireFormat.MESSAGE_SET_TYPE_ID, fieldNumber);
writeMessage(WireFormat.MESSAGE_SET_MESSAGE, value);
writeTag(WireFormat.MESSAGE_SET_ITEM, WireFormat.WIRETYPE_END_GROUP);
}
/**
* Write an unparsed MessageSet extension field to the stream. For
* historical reasons, the wire format differs from normal fields.
*/
public void writeRawMessageSetExtension(int fieldNumber, ByteString value)
throws IOException {
writeTag(WireFormat.MESSAGE_SET_ITEM, WireFormat.WIRETYPE_START_GROUP);
writeUInt32(WireFormat.MESSAGE_SET_TYPE_ID, fieldNumber);
writeBytes(WireFormat.MESSAGE_SET_MESSAGE, value);
writeTag(WireFormat.MESSAGE_SET_ITEM, WireFormat.WIRETYPE_END_GROUP);
}
/**
* Write a field of arbitrary type, including tag, to the stream.
*
* @param type The field's type.
* @param number The field's number.
* @param value Object representing the field's value. Must be of the exact
* type which would be returned by
* {@link Message#getField(Descriptors.FieldDescriptor)} for
* this field.
*/
public void writeField(Descriptors.FieldDescriptor.Type type,
int number, Object value) throws IOException {
switch (type) {
case DOUBLE : writeDouble (number, (Double )value); break;
case FLOAT : writeFloat (number, (Float )value); break;
case INT64 : writeInt64 (number, (Long )value); break;
case UINT64 : writeUInt64 (number, (Long )value); break;
case INT32 : writeInt32 (number, (Integer )value); break;
case FIXED64 : writeFixed64 (number, (Long )value); break;
case FIXED32 : writeFixed32 (number, (Integer )value); break;
case BOOL : writeBool (number, (Boolean )value); break;
case STRING : writeString (number, (String )value); break;
case GROUP : writeGroup (number, (Message )value); break;
case MESSAGE : writeMessage (number, (Message )value); break;
case BYTES : writeBytes (number, (ByteString)value); break;
case UINT32 : writeUInt32 (number, (Integer )value); break;
case SFIXED32: writeSFixed32(number, (Integer )value); break;
case SFIXED64: writeSFixed64(number, (Long )value); break;
case SINT32 : writeSInt32 (number, (Integer )value); break;
case SINT64 : writeSInt64 (number, (Long )value); break;
case ENUM:
writeEnum(number, ((Descriptors.EnumValueDescriptor)value).getNumber());
break;
}
}
// =================================================================
/**
* Compute the number of bytes that would be needed to encode a
* {@code double} field, including tag.
*/
public static int computeDoubleSize(int fieldNumber, double value) {
return computeTagSize(fieldNumber) + LITTLE_ENDIAN_64_SIZE;
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code float} field, including tag.
*/
public static int computeFloatSize(int fieldNumber, float value) {
return computeTagSize(fieldNumber) + LITTLE_ENDIAN_32_SIZE;
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code uint64} field, including tag.
*/
public static int computeUInt64Size(int fieldNumber, long value) {
return computeTagSize(fieldNumber) + computeRawVarint64Size(value);
}
/**
* Compute the number of bytes that would be needed to encode an
* {@code int64} field, including tag.
*/
public static int computeInt64Size(int fieldNumber, long value) {
return computeTagSize(fieldNumber) + computeRawVarint64Size(value);
}
/**
* Compute the number of bytes that would be needed to encode an
* {@code int32} field, including tag.
*/
public static int computeInt32Size(int fieldNumber, int value) {
if (value >= 0) {
return computeTagSize(fieldNumber) + computeRawVarint32Size(value);
} else {
// Must sign-extend.
return computeTagSize(fieldNumber) + 10;
}
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code fixed64} field, including tag.
*/
public static int computeFixed64Size(int fieldNumber, long value) {
return computeTagSize(fieldNumber) + LITTLE_ENDIAN_64_SIZE;
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code fixed32} field, including tag.
*/
public static int computeFixed32Size(int fieldNumber, int value) {
return computeTagSize(fieldNumber) + LITTLE_ENDIAN_32_SIZE;
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code bool} field, including tag.
*/
public static int computeBoolSize(int fieldNumber, boolean value) {
return computeTagSize(fieldNumber) + 1;
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code string} field, including tag.
*/
public static int computeStringSize(int fieldNumber, String value) {
try {
byte[] bytes = value.getBytes("UTF-8");
return computeTagSize(fieldNumber) +
computeRawVarint32Size(bytes.length) +
bytes.length;
} catch (java.io.UnsupportedEncodingException e) {
throw new RuntimeException("UTF-8 not supported.", e);
}
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code group} field, including tag.
*/
public static int computeGroupSize(int fieldNumber, Message value) {
return computeTagSize(fieldNumber) * 2 + value.getSerializedSize();
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code group} field represented by an {@code UnknownFieldSet}, including
* tag.
*/
public static int computeUnknownGroupSize(int fieldNumber,
UnknownFieldSet value) {
return computeTagSize(fieldNumber) * 2 + value.getSerializedSize();
}
/**
* Compute the number of bytes that would be needed to encode an
* embedded message field, including tag.
*/
public static int computeMessageSize(int fieldNumber, Message value) {
int size = value.getSerializedSize();
return computeTagSize(fieldNumber) + computeRawVarint32Size(size) + size;
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code bytes} field, including tag.
*/
public static int computeBytesSize(int fieldNumber, ByteString value) {
return computeTagSize(fieldNumber) +
computeRawVarint32Size(value.size()) +
value.size();
}
/**
* Compute the number of bytes that would be needed to encode a
* {@code uint32} field, including tag.
*/
public static int computeUInt32Size(int fieldNumber, int value) {
return computeTagSize(fieldNumber) + computeRawVarint32Size(value);
}
/**
* Compute the number of bytes that would be needed to encode an
* enum field, including tag. Caller is responsible for converting the
* enum value to its numeric value.
*/
public static int computeEnumSize(int fieldNumber, int value) {
return computeTagSize(fieldNumber) + computeRawVarint32Size(value);
}
/**
* Compute the number of bytes that would be needed to encode an
* {@code sfixed32} field, including tag.
*/
public static int computeSFixed32Size(int fieldNumber, int value) {
return computeTagSize(fieldNumber) + LITTLE_ENDIAN_32_SIZE;
}
/**
* Compute the number of bytes that would be needed to encode an
* {@code sfixed64} field, including tag.
*/
public static int computeSFixed64Size(int fieldNumber, long value) {
return computeTagSize(fieldNumber) + LITTLE_ENDIAN_64_SIZE;
}
/**
* Compute the number of bytes that would be needed to encode an
* {@code sint32} field, including tag.
*/
public static int computeSInt32Size(int fieldNumber, int value) {
return computeTagSize(fieldNumber) +
computeRawVarint32Size(encodeZigZag32(value));
}
/**
* Compute the number of bytes that would be needed to encode an
* {@code sint64} field, including tag.
*/
public static int computeSInt64Size(int fieldNumber, long value) {
return computeTagSize(fieldNumber) +
computeRawVarint64Size(encodeZigZag64(value));
}
/**
* Compute the number of bytes that would be needed to encode a
* MessageSet extension to the stream. For historical reasons,
* the wire format differs from normal fields.
*/
public static int computeMessageSetExtensionSize(
int fieldNumber, Message value) {
return computeTagSize(WireFormat.MESSAGE_SET_ITEM) * 2 +
computeUInt32Size(WireFormat.MESSAGE_SET_TYPE_ID, fieldNumber) +
computeMessageSize(WireFormat.MESSAGE_SET_MESSAGE, value);
}
/**
* Compute the number of bytes that would be needed to encode an
* unparsed MessageSet extension field to the stream. For
* historical reasons, the wire format differs from normal fields.
*/
public static int computeRawMessageSetExtensionSize(
int fieldNumber, ByteString value) {
return computeTagSize(WireFormat.MESSAGE_SET_ITEM) * 2 +
computeUInt32Size(WireFormat.MESSAGE_SET_TYPE_ID, fieldNumber) +
computeBytesSize(WireFormat.MESSAGE_SET_MESSAGE, value);
}
/**
* Compute the number of bytes that would be needed to encode a
* field of arbitrary type, including tag, to the stream.
*
* @param type The field's type.
* @param number The field's number.
* @param value Object representing the field's value. Must be of the exact
* type which would be returned by
* {@link Message#getField(Descriptors.FieldDescriptor)} for
* this field.
*/
public static int computeFieldSize(
Descriptors.FieldDescriptor.Type type,
int number, Object value) {
switch (type) {
case DOUBLE : return computeDoubleSize (number, (Double )value);
case FLOAT : return computeFloatSize (number, (Float )value);
case INT64 : return computeInt64Size (number, (Long )value);
case UINT64 : return computeUInt64Size (number, (Long )value);
case INT32 : return computeInt32Size (number, (Integer )value);
case FIXED64 : return computeFixed64Size (number, (Long )value);
case FIXED32 : return computeFixed32Size (number, (Integer )value);
case BOOL : return computeBoolSize (number, (Boolean )value);
case STRING : return computeStringSize (number, (String )value);
case GROUP : return computeGroupSize (number, (Message )value);
case MESSAGE : return computeMessageSize (number, (Message )value);
case BYTES : return computeBytesSize (number, (ByteString)value);
case UINT32 : return computeUInt32Size (number, (Integer )value);
case SFIXED32: return computeSFixed32Size(number, (Integer )value);
case SFIXED64: return computeSFixed64Size(number, (Long )value);
case SINT32 : return computeSInt32Size (number, (Integer )value);
case SINT64 : return computeSInt64Size (number, (Long )value);
case ENUM:
return computeEnumSize(number,
((Descriptors.EnumValueDescriptor)value).getNumber());
}
throw new RuntimeException(
"There is no way to get here, but the compiler thinks otherwise.");
}
// =================================================================
/**
* Internal helper that writes the current buffer to the output. The
* buffer position is reset to its initial value when this returns.
*/
private void refreshBuffer() throws IOException {
if (output == null) {
// We're writing to a single buffer.
throw new OutOfSpaceException();
}
// Since we have an output stream, this is our buffer
// and buffer offset == 0
output.write(buffer, 0, position);
position = 0;
}
/**
* Flushes the stream and forces any buffered bytes to be written. This
* does not flush the underlying OutputStream.
*/
public void flush() throws IOException {
if (output != null) {
refreshBuffer();
}
}
/**
* If writing to a flat array, return the space left in the array.
* Otherwise, throws {@code UnsupportedOperationException}.
*/
public int spaceLeft() {
if (output == null) {
return limit - position;
} else {
throw new UnsupportedOperationException(
"spaceLeft() can only be called on CodedOutputStreams that are " +
"writing to a flat array.");
}
}
/**
* Verifies that {@link #spaceLeft()} returns zero. It's common to create
* a byte array that is exactly big enough to hold a message, then write to
* it with a {@code CodedOutputStream}. Calling {@code checkNoSpaceLeft()}
* after writing verifies that the message was actually as big as expected,
* which can help catch bugs.
*/
public void checkNoSpaceLeft() {
if (spaceLeft() != 0) {
throw new IllegalStateException(
"Did not write as much data as expected.");
}
}
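  // Illustrative sketch, not part of the original source: the exact-size
  // serialization pattern that checkNoSpaceLeft() verifies, for some already
  // built message "msg".
  //
  //   byte[] result = new byte[msg.getSerializedSize()];
  //   CodedOutputStream out = CodedOutputStream.newInstance(result);
  //   msg.writeTo(out);
  //   out.checkNoSpaceLeft();   // throws if the size computation was wrong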
/**
* If you create a CodedOutputStream around a simple flat array, you must
* not attempt to write more bytes than the array has space for. Otherwise,
* this exception will be thrown.
*/
public static class OutOfSpaceException extends IOException {
OutOfSpaceException() {
super("CodedOutputStream was writing to a flat byte array and ran " +
"out of space.");
}
}
/** Write a single byte. */
public void writeRawByte(byte value) throws IOException {
if (position == limit) {
refreshBuffer();
}
buffer[position++] = value;
}
/** Write a single byte, represented by an integer value. */
public void writeRawByte(int value) throws IOException {
writeRawByte((byte) value);
}
/** Write an array of bytes. */
public void writeRawBytes(byte[] value) throws IOException {
writeRawBytes(value, 0, value.length);
}
/** Write part of an array of bytes. */
public void writeRawBytes(byte[] value, int offset, int length)
throws IOException {
if (limit - position >= length) {
// We have room in the current buffer.
System.arraycopy(value, offset, buffer, position, length);
position += length;
} else {
// Write extends past current buffer. Fill the rest of this buffer and
// flush.
int bytesWritten = limit - position;
System.arraycopy(value, offset, buffer, position, bytesWritten);
offset += bytesWritten;
length -= bytesWritten;
position = limit;
refreshBuffer();
// Now deal with the rest.
// Since we have an output stream, this is our buffer
// and buffer offset == 0
if (length <= limit) {
// Fits in new buffer.
System.arraycopy(value, offset, buffer, 0, length);
position = length;
} else {
// Write is very big. Let's do it all at once.
output.write(value, offset, length);
}
}
}
/** Encode and write a tag. */
public void writeTag(int fieldNumber, int wireType) throws IOException {
writeRawVarint32(WireFormat.makeTag(fieldNumber, wireType));
}
/** Compute the number of bytes that would be needed to encode a tag. */
public static int computeTagSize(int fieldNumber) {
return computeRawVarint32Size(WireFormat.makeTag(fieldNumber, 0));
}
/**
* Encode and write a varint. {@code value} is treated as
* unsigned, so it won't be sign-extended if negative.
*/
public void writeRawVarint32(int value) throws IOException {
while (true) {
if ((value & ~0x7F) == 0) {
writeRawByte(value);
return;
} else {
writeRawByte((value & 0x7F) | 0x80);
value >>>= 7;
}
}
}
/**
* Compute the number of bytes that would be needed to encode a varint.
* {@code value} is treated as unsigned, so it won't be sign-extended if
* negative.
*/
public static int computeRawVarint32Size(int value) {
if ((value & (0xffffffff << 7)) == 0) return 1;
if ((value & (0xffffffff << 14)) == 0) return 2;
if ((value & (0xffffffff << 21)) == 0) return 3;
if ((value & (0xffffffff << 28)) == 0) return 4;
return 5;
}
/** Encode and write a varint. */
public void writeRawVarint64(long value) throws IOException {
while (true) {
if ((value & ~0x7FL) == 0) {
writeRawByte((int)value);
return;
} else {
writeRawByte(((int)value & 0x7F) | 0x80);
value >>>= 7;
}
}
}
/** Compute the number of bytes that would be needed to encode a varint. */
public static int computeRawVarint64Size(long value) {
if ((value & (0xffffffffffffffffL << 7)) == 0) return 1;
if ((value & (0xffffffffffffffffL << 14)) == 0) return 2;
if ((value & (0xffffffffffffffffL << 21)) == 0) return 3;
if ((value & (0xffffffffffffffffL << 28)) == 0) return 4;
if ((value & (0xffffffffffffffffL << 35)) == 0) return 5;
if ((value & (0xffffffffffffffffL << 42)) == 0) return 6;
if ((value & (0xffffffffffffffffL << 49)) == 0) return 7;
if ((value & (0xffffffffffffffffL << 56)) == 0) return 8;
if ((value & (0xffffffffffffffffL << 63)) == 0) return 9;
return 10;
}
/** Write a little-endian 32-bit integer. */
public void writeRawLittleEndian32(int value) throws IOException {
writeRawByte((value ) & 0xFF);
writeRawByte((value >> 8) & 0xFF);
writeRawByte((value >> 16) & 0xFF);
writeRawByte((value >> 24) & 0xFF);
}
public static final int LITTLE_ENDIAN_32_SIZE = 4;
/** Write a little-endian 64-bit integer. */
public void writeRawLittleEndian64(long value) throws IOException {
writeRawByte((int)(value ) & 0xFF);
writeRawByte((int)(value >> 8) & 0xFF);
writeRawByte((int)(value >> 16) & 0xFF);
writeRawByte((int)(value >> 24) & 0xFF);
writeRawByte((int)(value >> 32) & 0xFF);
writeRawByte((int)(value >> 40) & 0xFF);
writeRawByte((int)(value >> 48) & 0xFF);
writeRawByte((int)(value >> 56) & 0xFF);
}
public static final int LITTLE_ENDIAN_64_SIZE = 8;
/**
* Encode a ZigZag-encoded 32-bit value. ZigZag encodes signed integers
* into values that can be efficiently encoded with varint. (Otherwise,
* negative values must be sign-extended to 64 bits to be varint encoded,
* thus always taking 10 bytes on the wire.)
*
* @param n A signed 32-bit integer.
* @return An unsigned 32-bit integer, stored in a signed int because
* Java has no explicit unsigned support.
*/
public static int encodeZigZag32(int n) {
// Note: the right-shift must be arithmetic
return (n << 1) ^ (n >> 31);
}
/**
* Encode a ZigZag-encoded 64-bit value. ZigZag encodes signed integers
* into values that can be efficiently encoded with varint. (Otherwise,
* negative values must be sign-extended to 64 bits to be varint encoded,
* thus always taking 10 bytes on the wire.)
*
* @param n A signed 64-bit integer.
 * @return An unsigned 64-bit integer, stored in a signed long because
* Java has no explicit unsigned support.
*/
public static long encodeZigZag64(long n) {
// Note: the right-shift must be arithmetic
return (n << 1) ^ (n >> 63);
}
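// Illustration of the ZigZag mapping (small magnitudes stay small either way):
//   encodeZigZag32(0)  == 0
//   encodeZigZag32(-1) == 1
//   encodeZigZag32(1)  == 2
//   encodeZigZag32(-2) == 3
//   encodeZigZag64(-1L) == 1L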
}
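The flat-array pattern described at checkNoSpaceLeft() is easiest to see end to end. A minimal sketch, assuming only the public API shown above and any com.google.protobuf.Message instance (the class name is illustrative):

import com.google.protobuf.CodedOutputStream;
import com.google.protobuf.Message;
import java.io.IOException;

final class FlatArraySerializationSketch {
  // Serializes a message into an exactly-sized array, then verifies that the
  // computed size and the number of bytes actually written agree.
  static byte[] toExactByteArray(Message message) throws IOException {
    byte[] bytes = new byte[message.getSerializedSize()];
    CodedOutputStream output = CodedOutputStream.newInstance(bytes);
    message.writeTo(output);
    output.checkNoSpaceLeft();  // IllegalStateException on a size mismatch
    return bytes;
  }
}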

File diff suppressed because it is too large

View File

@ -0,0 +1,391 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.Descriptors.FieldDescriptor;
import java.io.InputStream;
import java.io.IOException;
import java.util.Map;
/**
* An implementation of {@link Message} that can represent arbitrary types,
* given a {@link Descriptors.Descriptor}.
*
* @author kenton@google.com Kenton Varda
*/
public final class DynamicMessage extends AbstractMessage {
private final Descriptor type;
private final FieldSet fields;
private final UnknownFieldSet unknownFields;
private int memoizedSize = -1;
/**
* Construct a {@code DynamicMessage} using the given {@code FieldSet}.
*/
private DynamicMessage(Descriptor type, FieldSet fields,
UnknownFieldSet unknownFields) {
this.type = type;
this.fields = fields;
this.unknownFields = unknownFields;
}
/**
* Get a {@code DynamicMessage} representing the default instance of the
* given type.
*/
public static DynamicMessage getDefaultInstance(Descriptor type) {
return new DynamicMessage(type, FieldSet.emptySet(),
UnknownFieldSet.getDefaultInstance());
}
/** Parse a message of the given type from the given input stream. */
public static DynamicMessage parseFrom(Descriptor type,
CodedInputStream input)
throws IOException {
return newBuilder(type).mergeFrom(input).buildParsed();
}
/** Parse a message of the given type from the given input stream. */
public static DynamicMessage parseFrom(
Descriptor type,
CodedInputStream input,
ExtensionRegistry extensionRegistry)
throws IOException {
return newBuilder(type).mergeFrom(input, extensionRegistry).buildParsed();
}
/** Parse {@code data} as a message of the given type and return it. */
public static DynamicMessage parseFrom(Descriptor type, ByteString data)
throws InvalidProtocolBufferException {
return newBuilder(type).mergeFrom(data).buildParsed();
}
/** Parse {@code data} as a message of the given type and return it. */
public static DynamicMessage parseFrom(Descriptor type, ByteString data,
ExtensionRegistry extensionRegistry)
throws InvalidProtocolBufferException {
return newBuilder(type).mergeFrom(data, extensionRegistry).buildParsed();
}
/** Parse {@code data} as a message of the given type and return it. */
public static DynamicMessage parseFrom(Descriptor type, byte[] data)
throws InvalidProtocolBufferException {
return newBuilder(type).mergeFrom(data).buildParsed();
}
/** Parse {@code data} as a message of the given type and return it. */
public static DynamicMessage parseFrom(Descriptor type, byte[] data,
ExtensionRegistry extensionRegistry)
throws InvalidProtocolBufferException {
return newBuilder(type).mergeFrom(data, extensionRegistry).buildParsed();
}
/** Parse a message of the given type from {@code input} and return it. */
public static DynamicMessage parseFrom(Descriptor type, InputStream input)
throws IOException {
return newBuilder(type).mergeFrom(input).buildParsed();
}
/** Parse a message of the given type from {@code input} and return it. */
public static DynamicMessage parseFrom(Descriptor type, InputStream input,
ExtensionRegistry extensionRegistry)
throws IOException {
return newBuilder(type).mergeFrom(input, extensionRegistry).buildParsed();
}
/** Construct a {@link Message.Builder} for the given type. */
public static Builder newBuilder(Descriptor type) {
return new Builder(type);
}
/**
* Construct a {@link Message.Builder} for a message of the same type as
* {@code prototype}, and initialize it with {@code prototype}'s contents.
*/
public static Builder newBuilder(Message prototype) {
return new Builder(prototype.getDescriptorForType()).mergeFrom(prototype);
}
// -----------------------------------------------------------------
// Implementation of Message interface.
public Descriptor getDescriptorForType() {
return type;
}
public DynamicMessage getDefaultInstanceForType() {
return getDefaultInstance(type);
}
public Map<FieldDescriptor, Object> getAllFields() {
return fields.getAllFields();
}
public boolean hasField(FieldDescriptor field) {
verifyContainingType(field);
return fields.hasField(field);
}
public Object getField(FieldDescriptor field) {
verifyContainingType(field);
Object result = fields.getField(field);
if (result == null) {
result = getDefaultInstance(field.getMessageType());
}
return result;
}
public int getRepeatedFieldCount(FieldDescriptor field) {
verifyContainingType(field);
return fields.getRepeatedFieldCount(field);
}
public Object getRepeatedField(FieldDescriptor field, int index) {
verifyContainingType(field);
return fields.getRepeatedField(field, index);
}
public UnknownFieldSet getUnknownFields() {
return unknownFields;
}
public boolean isInitialized() {
return fields.isInitialized(type);
}
public void writeTo(CodedOutputStream output) throws IOException {
fields.writeTo(output);
if (type.getOptions().getMessageSetWireFormat()) {
unknownFields.writeAsMessageSetTo(output);
} else {
unknownFields.writeTo(output);
}
}
public int getSerializedSize() {
int size = memoizedSize;
if (size != -1) return size;
size = fields.getSerializedSize();
if (type.getOptions().getMessageSetWireFormat()) {
size += unknownFields.getSerializedSizeAsMessageSet();
} else {
size += unknownFields.getSerializedSize();
}
memoizedSize = size;
return size;
}
public Builder newBuilderForType() {
return new Builder(type);
}
/** Verifies that the field is a field of this message. */
private void verifyContainingType(FieldDescriptor field) {
if (field.getContainingType() != type) {
throw new IllegalArgumentException(
"FieldDescriptor does not match message type.");
}
}
// =================================================================
/**
* Builder for {@link DynamicMessage}s.
*/
public static final class Builder extends AbstractMessage.Builder<Builder> {
private final Descriptor type;
private FieldSet fields;
private UnknownFieldSet unknownFields;
/** Construct a {@code Builder} for the given type. */
private Builder(Descriptor type) {
this.type = type;
this.fields = FieldSet.newFieldSet();
this.unknownFields = UnknownFieldSet.getDefaultInstance();
}
// ---------------------------------------------------------------
// Implementation of Message.Builder interface.
public Builder clear() {
fields.clear();
return this;
}
public Builder mergeFrom(Message other) {
if (other.getDescriptorForType() != type) {
throw new IllegalArgumentException(
"mergeFrom(Message) can only merge messages of the same type.");
}
fields.mergeFrom(other);
return this;
}
public DynamicMessage build() {
if (!isInitialized()) {
throw new UninitializedMessageException(
new DynamicMessage(type, fields, unknownFields));
}
return buildPartial();
}
/**
* Helper for DynamicMessage.parseFrom() methods to call. Throws
* {@link InvalidProtocolBufferException} instead of
* {@link UninitializedMessageException}.
*/
private DynamicMessage buildParsed() throws InvalidProtocolBufferException {
if (!isInitialized()) {
throw new UninitializedMessageException(
new DynamicMessage(type, fields, unknownFields))
.asInvalidProtocolBufferException();
}
return buildPartial();
}
public DynamicMessage buildPartial() {
fields.makeImmutable();
DynamicMessage result =
new DynamicMessage(type, fields, unknownFields);
fields = null;
unknownFields = null;
return result;
}
public Builder clone() {
Builder result = new Builder(type);
result.fields.mergeFrom(fields);
return result;
}
public boolean isInitialized() {
return fields.isInitialized(type);
}
public Builder mergeFrom(CodedInputStream input,
ExtensionRegistry extensionRegistry)
throws IOException {
UnknownFieldSet.Builder unknownFieldsBuilder =
UnknownFieldSet.newBuilder(unknownFields);
fields.mergeFrom(input, unknownFieldsBuilder, extensionRegistry, this);
unknownFields = unknownFieldsBuilder.build();
return this;
}
public Descriptor getDescriptorForType() {
return type;
}
public DynamicMessage getDefaultInstanceForType() {
return getDefaultInstance(type);
}
public Map<FieldDescriptor, Object> getAllFields() {
return fields.getAllFields();
}
public Builder newBuilderForField(FieldDescriptor field) {
verifyContainingType(field);
if (field.getJavaType() != FieldDescriptor.JavaType.MESSAGE) {
throw new IllegalArgumentException(
"newBuilderForField is only valid for fields with message type.");
}
return new Builder(field.getMessageType());
}
public boolean hasField(FieldDescriptor field) {
verifyContainingType(field);
return fields.hasField(field);
}
public Object getField(FieldDescriptor field) {
verifyContainingType(field);
Object result = fields.getField(field);
if (result == null) {
result = getDefaultInstance(field.getMessageType());
}
return result;
}
public Builder setField(FieldDescriptor field, Object value) {
verifyContainingType(field);
fields.setField(field, value);
return this;
}
public Builder clearField(FieldDescriptor field) {
verifyContainingType(field);
fields.clearField(field);
return this;
}
public int getRepeatedFieldCount(FieldDescriptor field) {
verifyContainingType(field);
return fields.getRepeatedFieldCount(field);
}
public Object getRepeatedField(FieldDescriptor field, int index) {
verifyContainingType(field);
return fields.getRepeatedField(field, index);
}
public Builder setRepeatedField(FieldDescriptor field,
int index, Object value) {
verifyContainingType(field);
fields.setRepeatedField(field, index, value);
return this;
}
public Builder addRepeatedField(FieldDescriptor field, Object value) {
verifyContainingType(field);
fields.addRepeatedField(field, value);
return this;
}
public UnknownFieldSet getUnknownFields() {
return unknownFields;
}
public Builder setUnknownFields(UnknownFieldSet unknownFields) {
this.unknownFields = unknownFields;
return this;
}
public Builder mergeUnknownFields(UnknownFieldSet unknownFields) {
this.unknownFields =
UnknownFieldSet.newBuilder(this.unknownFields)
.mergeFrom(unknownFields)
.build();
return this;
}
/** Verifies that the field is a field of this message. */
private void verifyContainingType(FieldDescriptor field) {
if (field.getContainingType() != type) {
throw new IllegalArgumentException(
"FieldDescriptor does not match message type.");
}
}
}
}
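A rough usage sketch of the reflective API above; the field name "id" and its int64 type are assumptions about whatever Descriptor the caller supplies (typically obtained from a generated class's getDescriptor()):

import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.Descriptors.FieldDescriptor;
import com.google.protobuf.DynamicMessage;

final class DynamicMessageSketch {
  // Builds a message of a type known only at runtime.
  static DynamicMessage buildWithId(Descriptor type, long id) {
    FieldDescriptor idField = type.findFieldByName("id");  // assumed int64 field
    return DynamicMessage.newBuilder(type)
        .setField(idField, id)
        .build();
  }
}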

View File

@ -0,0 +1,237 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.Descriptors.FieldDescriptor;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
/**
* A table of known extensions, searchable by name or field number. When
* parsing a protocol message that might have extensions, you must provide
* an {@code ExtensionRegistry} in which you have registered any extensions
* that you want to be able to parse. Otherwise, those extensions will just
* be treated like unknown fields.
*
* <p>For example, if you had the {@code .proto} file:
*
* <pre>
 * option java_outer_classname = "MyProto";
*
* message Foo {
* extensions 1000 to max;
* }
*
* extend Foo {
 * optional int32 bar = 1000;
* }
* </pre>
*
* Then you might write code like:
*
* <pre>
* ExtensionRegistry registry = ExtensionRegistry.newInstance();
* registry.add(MyProto.bar);
* MyProto.Foo message = MyProto.Foo.parseFrom(input, registry);
* </pre>
*
* <p>Background:
*
* <p>You might wonder why this is necessary. Two alternatives might come to
* mind. First, you might imagine a system where generated extensions are
* automatically registered when their containing classes are loaded. This
* is a popular technique, but is bad design; among other things, it creates a
* situation where behavior can change depending on what classes happen to be
* loaded. It also introduces a security vulnerability, because an
* unprivileged class could cause its code to be called unexpectedly from a
* privileged class by registering itself as an extension of the right type.
*
* <p>Another option you might consider is lazy parsing: do not parse an
* extension until it is first requested, at which point the caller must
* provide a type to use. This introduces a different set of problems. First,
* it would require a mutex lock any time an extension was accessed, which
* would be slow. Second, corrupt data would not be detected until first
* access, at which point it would be much harder to deal with it. Third, it
* could violate the expectation that message objects are immutable, since the
 * type provided could be any arbitrary message class. An unprivileged user
 * could take advantage of this to inject a mutable object into a message
 * belonging to privileged code and create mischief.
*
* @author kenton@google.com Kenton Varda
*/
public final class ExtensionRegistry {
/** Construct a new, empty instance. */
public static ExtensionRegistry newInstance() {
return new ExtensionRegistry(
new HashMap<String, ExtensionInfo>(),
new HashMap<DescriptorIntPair, ExtensionInfo>());
}
/** Get the unmodifiable singleton empty instance. */
public static ExtensionRegistry getEmptyRegistry() {
return EMPTY;
}
/** Returns an unmodifiable view of the registry. */
public ExtensionRegistry getUnmodifiable() {
return new ExtensionRegistry(
Collections.unmodifiableMap(extensionsByName),
Collections.unmodifiableMap(extensionsByNumber));
}
/** A (Descriptor, Message) pair, returned by lookup methods. */
public static final class ExtensionInfo {
/** The extension's descriptor. */
public final FieldDescriptor descriptor;
/**
* A default instance of the extension's type, if it has a message type.
* Otherwise, {@code null}.
*/
public final Message defaultInstance;
private ExtensionInfo(FieldDescriptor descriptor) {
this.descriptor = descriptor;
this.defaultInstance = null;
}
private ExtensionInfo(FieldDescriptor descriptor, Message defaultInstance) {
this.descriptor = descriptor;
this.defaultInstance = defaultInstance;
}
}
/**
* Find an extension by fully-qualified field name, in the proto namespace.
 * I.e. {@code result.descriptor.getFullName()} will match {@code fullName} if
* a match is found.
*
* @return Information about the extension if found, or {@code null}
* otherwise.
*/
public ExtensionInfo findExtensionByName(String fullName) {
return extensionsByName.get(fullName);
}
/**
* Find an extension by containing type and field number.
*
* @return Information about the extension if found, or {@code null}
* otherwise.
*/
public ExtensionInfo findExtensionByNumber(Descriptor containingType,
int fieldNumber) {
return extensionsByNumber.get(
new DescriptorIntPair(containingType, fieldNumber));
}
/** Add an extension from a generated file to the registry. */
public void add(GeneratedMessage.GeneratedExtension<?, ?> extension) {
if (extension.getDescriptor().getJavaType() ==
FieldDescriptor.JavaType.MESSAGE) {
add(new ExtensionInfo(extension.getDescriptor(),
extension.getMessageDefaultInstance()));
} else {
add(new ExtensionInfo(extension.getDescriptor(), null));
}
}
/** Add a non-message-type extension to the registry by descriptor. */
public void add(FieldDescriptor type) {
if (type.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
throw new IllegalArgumentException(
"ExtensionRegistry.add() must be provided a default instance when " +
"adding an embedded message extension.");
}
add(new ExtensionInfo(type, null));
}
/** Add a message-type extension to the registry by descriptor. */
public void add(FieldDescriptor type, Message defaultInstance) {
if (type.getJavaType() != FieldDescriptor.JavaType.MESSAGE) {
throw new IllegalArgumentException(
"ExtensionRegistry.add() provided a default instance for a " +
"non-message extension.");
}
add(new ExtensionInfo(type, defaultInstance));
}
// =================================================================
// Private stuff.
private ExtensionRegistry(
Map<String, ExtensionInfo> extensionsByName,
Map<DescriptorIntPair, ExtensionInfo> extensionsByNumber) {
this.extensionsByName = extensionsByName;
this.extensionsByNumber = extensionsByNumber;
}
private final Map<String, ExtensionInfo> extensionsByName;
private final Map<DescriptorIntPair, ExtensionInfo> extensionsByNumber;
private static final ExtensionRegistry EMPTY =
new ExtensionRegistry(
Collections.<String, ExtensionInfo>emptyMap(),
Collections.<DescriptorIntPair, ExtensionInfo>emptyMap());
private void add(ExtensionInfo extension) {
if (!extension.descriptor.isExtension()) {
throw new IllegalArgumentException(
"ExtensionRegistry.add() was given a FieldDescriptor for a regular " +
"(non-extension) field.");
}
extensionsByName.put(extension.descriptor.getFullName(), extension);
extensionsByNumber.put(
new DescriptorIntPair(extension.descriptor.getContainingType(),
extension.descriptor.getNumber()),
extension);
FieldDescriptor field = extension.descriptor;
if (field.getContainingType().getOptions().getMessageSetWireFormat() &&
field.getType() == FieldDescriptor.Type.MESSAGE &&
field.isOptional() &&
field.getExtensionScope() == field.getMessageType()) {
// This is an extension of a MessageSet type defined within the extension
// type's own scope. For backwards-compatibility, allow it to be looked
// up by type name.
extensionsByName.put(field.getMessageType().getFullName(), extension);
}
}
/** A (GenericDescriptor, int) pair, used as a map key. */
private static final class DescriptorIntPair {
final Descriptor descriptor;
final int number;
DescriptorIntPair(Descriptor descriptor, int number) {
this.descriptor = descriptor;
this.number = number;
}
public int hashCode() {
return descriptor.hashCode() * ((1 << 16) - 1) + number;
}
public boolean equals(Object obj) {
if (!(obj instanceof DescriptorIntPair)) return false;
DescriptorIntPair other = (DescriptorIntPair)obj;
return descriptor == other.descriptor && number == other.number;
}
}
}
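To complement the registration example in the class comment, a small lookup sketch; the extension name "my.package.bar" is hypothetical and must already have been registered:

import com.google.protobuf.ExtensionRegistry;
import com.google.protobuf.Message;

final class ExtensionLookupSketch {
  // Reads an extension value from an already-parsed message via the registry.
  static Object readExtension(ExtensionRegistry registry, Message message) {
    ExtensionRegistry.ExtensionInfo info =
        registry.findExtensionByName("my.package.bar");  // hypothetical name
    return info == null ? null : message.getField(info.descriptor);
  }
}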

View File

@ -0,0 +1,662 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.Descriptors.FieldDescriptor;
import com.google.protobuf.Descriptors.EnumValueDescriptor;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.TreeMap;
import java.util.List;
import java.util.Map;
/**
* A class which represents an arbitrary set of fields of some message type.
* This is used to implement {@link DynamicMessage}, and also to represent
* extensions in {@link GeneratedMessage}. This class is package-private,
* since outside users should probably be using {@link DynamicMessage}.
*
* @author kenton@google.com Kenton Varda
*/
final class FieldSet {
private Map<FieldDescriptor, Object> fields;
/** Construct a new FieldSet. */
private FieldSet() {
// Use a TreeMap because fields need to be in canonical order when
// serializing.
this.fields = new TreeMap<FieldDescriptor, Object>();
}
/**
* Construct a new FieldSet with the given map. This is only used by
* DEFAULT_INSTANCE, to pass in an immutable empty map.
*/
private FieldSet(Map<FieldDescriptor, Object> fields) {
this.fields = fields;
}
/** Construct a new FieldSet. */
public static FieldSet newFieldSet() {
return new FieldSet();
}
/** Get an immutable empty FieldSet. */
public static FieldSet emptySet() {
return DEFAULT_INSTANCE;
}
private static final FieldSet DEFAULT_INSTANCE =
new FieldSet(Collections.<FieldDescriptor, Object>emptyMap());
/** Make this FieldSet immutable from this point forward. */
@SuppressWarnings("unchecked")
public void makeImmutable() {
for (Map.Entry<FieldDescriptor, Object> entry: fields.entrySet()) {
if (entry.getKey().isRepeated()) {
List value = (List)entry.getValue();
entry.setValue(Collections.unmodifiableList(value));
}
}
fields = Collections.unmodifiableMap(fields);
}
// =================================================================
/** See {@link Message.Builder#clear()}. */
public void clear() {
fields.clear();
}
/** See {@link Message#getAllFields()}. */
public Map<Descriptors.FieldDescriptor, Object> getAllFields() {
return Collections.unmodifiableMap(fields);
}
/**
 * Get an iterator over the field map. This iterator should not be leaked
* out of the protobuf library as it is not protected from mutation.
*/
public Iterator<Map.Entry<Descriptors.FieldDescriptor, Object>> iterator() {
return fields.entrySet().iterator();
}
/** See {@link Message#hasField(Descriptors.FieldDescriptor)}. */
public boolean hasField(Descriptors.FieldDescriptor field) {
if (field.isRepeated()) {
throw new IllegalArgumentException(
"hasField() can only be called on non-repeated fields.");
}
return fields.containsKey(field);
}
/**
* See {@link Message#getField(Descriptors.FieldDescriptor)}. This method
* returns {@code null} if the field is a singular message type and is not
* set; in this case it is up to the caller to fetch the message's default
* instance.
*/
public Object getField(Descriptors.FieldDescriptor field) {
Object result = fields.get(field);
if (result == null) {
if (field.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
if (field.isRepeated()) {
return Collections.emptyList();
} else {
return null;
}
} else {
return field.getDefaultValue();
}
} else {
return result;
}
}
/** See {@link Message.Builder#setField(Descriptors.FieldDescriptor,Object)}. */
@SuppressWarnings("unchecked")
public void setField(Descriptors.FieldDescriptor field, Object value) {
if (field.isRepeated()) {
if (!(value instanceof List)) {
throw new IllegalArgumentException(
"Wrong object type used with protocol message reflection.");
}
// Wrap the contents in a new list so that the caller cannot change
// the list's contents after setting it.
List newList = new ArrayList();
newList.addAll((List)value);
for (Object element : newList) {
verifyType(field, element);
}
value = newList;
} else {
verifyType(field, value);
}
fields.put(field, value);
}
/** See {@link Message.Builder#clearField(Descriptors.FieldDescriptor)}. */
public void clearField(Descriptors.FieldDescriptor field) {
fields.remove(field);
}
/** See {@link Message#getRepeatedFieldCount(Descriptors.FieldDescriptor)}. */
public int getRepeatedFieldCount(Descriptors.FieldDescriptor field) {
if (!field.isRepeated()) {
throw new IllegalArgumentException(
"getRepeatedFieldCount() can only be called on repeated fields.");
}
return ((List)getField(field)).size();
}
/** See {@link Message#getRepeatedField(Descriptors.FieldDescriptor,int)}. */
public Object getRepeatedField(Descriptors.FieldDescriptor field, int index) {
if (!field.isRepeated()) {
throw new IllegalArgumentException(
"getRepeatedField() can only be called on repeated fields.");
}
return ((List)getField(field)).get(index);
}
/** See {@link Message.Builder#setRepeatedField(Descriptors.FieldDescriptor,int,Object)}. */
@SuppressWarnings("unchecked")
public void setRepeatedField(Descriptors.FieldDescriptor field, int index,
Object value) {
if (!field.isRepeated()) {
throw new IllegalArgumentException(
"setRepeatedField() can only be called on repeated fields.");
}
verifyType(field, value);
List list = (List)fields.get(field);
if (list == null) {
throw new IndexOutOfBoundsException();
}
list.set(index, value);
}
/** See {@link Message.Builder#addRepeatedField(Descriptors.FieldDescriptor,Object)}. */
@SuppressWarnings("unchecked")
public void addRepeatedField(Descriptors.FieldDescriptor field,
Object value) {
if (!field.isRepeated()) {
throw new IllegalArgumentException(
"setRepeatedField() can only be called on repeated fields.");
}
verifyType(field, value);
List list = (List)fields.get(field);
if (list == null) {
list = new ArrayList();
fields.put(field, list);
}
list.add(value);
}
/**
* Verifies that the given object is of the correct type to be a valid
* value for the given field. (For repeated fields, this checks if the
* object is the right type to be one element of the field.)
*
* @throws IllegalArgumentException The value is not of the right type.
*/
private void verifyType(FieldDescriptor field, Object value) {
boolean isValid = false;
switch (field.getJavaType()) {
case INT: isValid = value instanceof Integer ; break;
case LONG: isValid = value instanceof Long ; break;
case FLOAT: isValid = value instanceof Float ; break;
case DOUBLE: isValid = value instanceof Double ; break;
case BOOLEAN: isValid = value instanceof Boolean ; break;
case STRING: isValid = value instanceof String ; break;
case BYTE_STRING: isValid = value instanceof ByteString; break;
case ENUM:
isValid = value instanceof EnumValueDescriptor &&
((EnumValueDescriptor)value).getType() == field.getEnumType();
break;
case MESSAGE:
isValid = value instanceof Message &&
((Message)value).getDescriptorForType() == field.getMessageType();
break;
}
if (!isValid) {
// When chaining calls to setField(), it can be hard to tell from
// the stack trace which exact call failed, since the whole chain is
// considered one line of code. So, let's make sure to include the
// field name and other useful info in the exception.
throw new IllegalArgumentException(
"Wrong object type used with protocol message reflection. " +
"Message type \"" + field.getContainingType().getFullName() +
"\", field \"" +
(field.isExtension() ? field.getFullName() : field.getName()) +
"\", value was type \"" + value.getClass().getName() + "\".");
}
}
// =================================================================
// Parsing and serialization
/**
* See {@link Message#isInitialized()}. Note: Since {@code FieldSet}
* itself does not have any way of knowing about required fields that
* aren't actually present in the set, it is up to the caller to check
* that all required fields are present.
*/
@SuppressWarnings("unchecked")
public boolean isInitialized() {
for (Map.Entry<FieldDescriptor, Object> entry : fields.entrySet()) {
FieldDescriptor field = entry.getKey();
if (field.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
if (field.isRepeated()) {
for (Message element : (List<Message>) entry.getValue()) {
if (!element.isInitialized()) {
return false;
}
}
} else {
if (!((Message) entry.getValue()).isInitialized()) {
return false;
}
}
}
}
return true;
}
/**
* Like {@link #isInitialized()}, but also checks for the presence of
* all required fields in the given type.
*/
@SuppressWarnings("unchecked")
public boolean isInitialized(Descriptor type) {
// Check that all required fields are present.
for (FieldDescriptor field : type.getFields()) {
if (field.isRequired()) {
if (!hasField(field)) {
return false;
}
}
}
// Check that embedded messages are initialized.
return isInitialized();
}
/** See {@link Message.Builder#mergeFrom(Message)}. */
@SuppressWarnings("unchecked")
public void mergeFrom(Message other) {
// Note: We don't attempt to verify that other's fields have valid
// types. Doing so would be a losing battle. We'd have to verify
// all sub-messages as well, and we'd have to make copies of all of
// them to ensure that they don't change after verification (since
// the Message interface itself cannot enforce immutability of
// implementations).
// TODO(kenton): Provide a function somewhere called makeDeepCopy()
// which allows people to make secure deep copies of messages.
for (Map.Entry<FieldDescriptor, Object> entry :
other.getAllFields().entrySet()) {
FieldDescriptor field = entry.getKey();
if (field.isRepeated()) {
List existingValue = (List)fields.get(field);
if (existingValue == null) {
existingValue = new ArrayList();
fields.put(field, existingValue);
}
existingValue.addAll((List)entry.getValue());
} else if (field.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
Message existingValue = (Message)fields.get(field);
if (existingValue == null) {
setField(field, entry.getValue());
} else {
setField(field,
existingValue.newBuilderForType()
.mergeFrom(existingValue)
.mergeFrom((Message)entry.getValue())
.build());
}
} else {
setField(field, entry.getValue());
}
}
}
/**
* Like {@link #mergeFrom(Message)}, but merges from another {@link FieldSet}.
*/
@SuppressWarnings("unchecked")
public void mergeFrom(FieldSet other) {
for (Map.Entry<FieldDescriptor, Object> entry : other.fields.entrySet()) {
FieldDescriptor field = entry.getKey();
Object value = entry.getValue();
if (field.isRepeated()) {
List existingValue = (List)fields.get(field);
if (existingValue == null) {
existingValue = new ArrayList();
fields.put(field, existingValue);
}
existingValue.addAll((List)value);
} else if (field.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
Message existingValue = (Message)fields.get(field);
if (existingValue == null) {
setField(field, value);
} else {
setField(field,
existingValue.newBuilderForType()
.mergeFrom(existingValue)
.mergeFrom((Message)value)
.build());
}
} else {
setField(field, value);
}
}
}
// TODO(kenton): Move parsing code into AbstractMessage, since it no longer
// uses any special knowledge from FieldSet.
/**
* See {@link Message.Builder#mergeFrom(CodedInputStream)}.
* @param builder The {@code Builder} for the target message.
*/
public static void mergeFrom(CodedInputStream input,
UnknownFieldSet.Builder unknownFields,
ExtensionRegistry extensionRegistry,
Message.Builder builder)
throws java.io.IOException {
while (true) {
int tag = input.readTag();
if (tag == 0) {
break;
}
if (!mergeFieldFrom(input, unknownFields, extensionRegistry,
builder, tag)) {
// end group tag
break;
}
}
}
/**
* Like {@link #mergeFrom(CodedInputStream, UnknownFieldSet.Builder,
* ExtensionRegistry, Message.Builder)}, but parses a single field.
* @param tag The tag, which should have already been read.
* @return {@code true} unless the tag is an end-group tag.
*/
@SuppressWarnings("unchecked")
public static boolean mergeFieldFrom(
CodedInputStream input,
UnknownFieldSet.Builder unknownFields,
ExtensionRegistry extensionRegistry,
Message.Builder builder,
int tag) throws java.io.IOException {
Descriptor type = builder.getDescriptorForType();
if (type.getOptions().getMessageSetWireFormat() &&
tag == WireFormat.MESSAGE_SET_ITEM_TAG) {
mergeMessageSetExtensionFromCodedStream(
input, unknownFields, extensionRegistry, builder);
return true;
}
int wireType = WireFormat.getTagWireType(tag);
int fieldNumber = WireFormat.getTagFieldNumber(tag);
FieldDescriptor field;
Message defaultInstance = null;
if (type.isExtensionNumber(fieldNumber)) {
ExtensionRegistry.ExtensionInfo extension =
extensionRegistry.findExtensionByNumber(type, fieldNumber);
if (extension == null) {
field = null;
} else {
field = extension.descriptor;
defaultInstance = extension.defaultInstance;
}
} else {
field = type.findFieldByNumber(fieldNumber);
}
if (field == null ||
wireType != WireFormat.getWireFormatForFieldType(field.getType())) {
// Unknown field or wrong wire type. Skip.
return unknownFields.mergeFieldFrom(tag, input);
} else {
Object value;
switch (field.getType()) {
case GROUP: {
Message.Builder subBuilder;
if (defaultInstance != null) {
subBuilder = defaultInstance.newBuilderForType();
} else {
subBuilder = builder.newBuilderForField(field);
}
if (!field.isRepeated()) {
subBuilder.mergeFrom((Message) builder.getField(field));
}
input.readGroup(field.getNumber(), subBuilder, extensionRegistry);
value = subBuilder.build();
break;
}
case MESSAGE: {
Message.Builder subBuilder;
if (defaultInstance != null) {
subBuilder = defaultInstance.newBuilderForType();
} else {
subBuilder = builder.newBuilderForField(field);
}
if (!field.isRepeated()) {
subBuilder.mergeFrom((Message) builder.getField(field));
}
input.readMessage(subBuilder, extensionRegistry);
value = subBuilder.build();
break;
}
case ENUM: {
int rawValue = input.readEnum();
value = field.getEnumType().findValueByNumber(rawValue);
// If the number isn't recognized as a valid value for this enum,
// drop it.
if (value == null) {
unknownFields.mergeVarintField(fieldNumber, rawValue);
return true;
}
break;
}
default:
value = input.readPrimitiveField(field.getType());
break;
}
if (field.isRepeated()) {
builder.addRepeatedField(field, value);
} else {
builder.setField(field, value);
}
}
return true;
}
/** Called by {@code #mergeFieldFrom()} to parse a MessageSet extension. */
private static void mergeMessageSetExtensionFromCodedStream(
CodedInputStream input,
UnknownFieldSet.Builder unknownFields,
ExtensionRegistry extensionRegistry,
Message.Builder builder) throws java.io.IOException {
Descriptor type = builder.getDescriptorForType();
// The wire format for MessageSet is:
// message MessageSet {
// repeated group Item = 1 {
// required int32 typeId = 2;
// required bytes message = 3;
// }
// }
// "typeId" is the extension's field number. The extension can only be
// a message type, where "message" contains the encoded bytes of that
// message.
//
// In practice, we will probably never see a MessageSet item in which
// the message appears before the type ID, or where either field does not
// appear exactly once. However, in theory such cases are valid, so we
// should be prepared to accept them.
int typeId = 0;
ByteString rawBytes = null; // If we encounter "message" before "typeId"
Message.Builder subBuilder = null;
FieldDescriptor field = null;
while (true) {
int tag = input.readTag();
if (tag == 0) {
break;
}
if (tag == WireFormat.MESSAGE_SET_TYPE_ID_TAG) {
typeId = input.readUInt32();
// Zero is not a valid type ID.
if (typeId != 0) {
ExtensionRegistry.ExtensionInfo extension =
extensionRegistry.findExtensionByNumber(type, typeId);
if (extension != null) {
field = extension.descriptor;
subBuilder = extension.defaultInstance.newBuilderForType();
Message originalMessage = (Message)builder.getField(field);
if (originalMessage != null) {
subBuilder.mergeFrom(originalMessage);
}
if (rawBytes != null) {
// We already encountered the message. Parse it now.
subBuilder.mergeFrom(
CodedInputStream.newInstance(rawBytes.newInput()));
rawBytes = null;
}
} else {
// Unknown extension number. If we already saw the message data, move it
// from rawBytes into the unknown fields.
if (rawBytes != null) {
unknownFields.mergeField(typeId,
UnknownFieldSet.Field.newBuilder()
.addLengthDelimited(rawBytes)
.build());
rawBytes = null;
}
}
}
} else if (tag == WireFormat.MESSAGE_SET_MESSAGE_TAG) {
if (typeId == 0) {
// We haven't seen a type ID yet, so we have to store the raw bytes
// for now.
rawBytes = input.readBytes();
} else if (subBuilder == null) {
// We don't know how to parse this. Ignore it.
unknownFields.mergeField(typeId,
UnknownFieldSet.Field.newBuilder()
.addLengthDelimited(input.readBytes())
.build());
} else {
// We already know the type, so we can parse directly from the input
// with no copying. Hooray!
input.readMessage(subBuilder, extensionRegistry);
}
} else {
// Unknown tag. Skip it.
if (!input.skipField(tag)) {
break; // end of group
}
}
}
input.checkLastTagWas(WireFormat.MESSAGE_SET_ITEM_END_TAG);
if (subBuilder != null) {
builder.setField(field, subBuilder.build());
}
}
/** See {@link Message#writeTo(CodedOutputStream)}. */
public void writeTo(CodedOutputStream output)
throws java.io.IOException {
for (Map.Entry<FieldDescriptor, Object> entry : fields.entrySet()) {
writeField(entry.getKey(), entry.getValue(), output);
}
}
/** Write a single field. */
public void writeField(FieldDescriptor field, Object value,
CodedOutputStream output) throws java.io.IOException {
if (field.isExtension() &&
field.getContainingType().getOptions().getMessageSetWireFormat()) {
output.writeMessageSetExtension(field.getNumber(), (Message)value);
} else {
if (field.isRepeated()) {
for (Object element : (List)value) {
output.writeField(field.getType(), field.getNumber(), element);
}
} else {
output.writeField(field.getType(), field.getNumber(), value);
}
}
}
/**
* See {@link Message#getSerializedSize()}. It's up to the caller to cache
* the resulting size if desired.
*/
public int getSerializedSize() {
int size = 0;
for (Map.Entry<FieldDescriptor, Object> entry : fields.entrySet()) {
FieldDescriptor field = entry.getKey();
Object value = entry.getValue();
if (field.isExtension() &&
field.getContainingType().getOptions().getMessageSetWireFormat()) {
size += CodedOutputStream.computeMessageSetExtensionSize(
field.getNumber(), (Message)value);
} else {
if (field.isRepeated()) {
for (Object element : (List)value) {
size += CodedOutputStream.computeFieldSize(
field.getType(), field.getNumber(), element);
}
} else {
size += CodedOutputStream.computeFieldSize(
field.getType(), field.getNumber(), value);
}
}
}
return size;
}
}
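The type rules enforced by verifyType() determine what callers may pass to the reflection setters. A hedged sketch using DynamicMessage.Builder; the field names "count" and "payload" are assumptions about the supplied descriptor:

import com.google.protobuf.ByteString;
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.Descriptors.FieldDescriptor;
import com.google.protobuf.DynamicMessage;

final class ReflectionTypeSketch {
  // setField() expects boxed primitives, String for strings, ByteString for
  // bytes, EnumValueDescriptor for enums and Message for embedded messages.
  static DynamicMessage build(Descriptor type) {
    FieldDescriptor count = type.findFieldByName("count");      // assumed int32
    FieldDescriptor payload = type.findFieldByName("payload");  // assumed bytes
    return DynamicMessage.newBuilder(type)
        .setField(count, 42)                                     // Integer
        .setField(payload, ByteString.copyFrom(new byte[] {1, 2, 3}))
        .build();
  }
}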

File diff suppressed because it is too large

View File

@ -0,0 +1,77 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import java.io.IOException;
/**
* Thrown when a protocol message being parsed is invalid in some way,
* e.g. it contains a malformed varint or a negative byte length.
*
* @author kenton@google.com Kenton Varda
*/
public class InvalidProtocolBufferException extends IOException {
public InvalidProtocolBufferException(String description) {
super(description);
}
static InvalidProtocolBufferException truncatedMessage() {
return new InvalidProtocolBufferException(
"While parsing a protocol message, the input ended unexpectedly " +
"in the middle of a field. This could mean either than the " +
"input has been truncated or that an embedded message " +
"misreported its own length.");
}
static InvalidProtocolBufferException negativeSize() {
return new InvalidProtocolBufferException(
"CodedInputStream encountered an embedded string or message " +
"which claimed to have negative size.");
}
static InvalidProtocolBufferException malformedVarint() {
return new InvalidProtocolBufferException(
"CodedInputStream encountered a malformed varint.");
}
static InvalidProtocolBufferException invalidTag() {
return new InvalidProtocolBufferException(
"Protocol message contained an invalid tag (zero).");
}
static InvalidProtocolBufferException invalidEndTag() {
return new InvalidProtocolBufferException(
"Protocol message end-group tag did not match expected tag.");
}
static InvalidProtocolBufferException invalidWireType() {
return new InvalidProtocolBufferException(
"Protocol message tag had invalid wire type.");
}
static InvalidProtocolBufferException recursionLimitExceeded() {
return new InvalidProtocolBufferException(
"Protocol message had too many levels of nesting. May be malicious. " +
"Use CodedInputStream.setRecursionLimit() to increase the depth limit.");
}
static InvalidProtocolBufferException sizeLimitExceeded() {
return new InvalidProtocolBufferException(
"Protocol message was too large. May be malicious. " +
"Use CodedInputStream.setSizeLimit() to increase the size limit.");
}
}

View File

@ -0,0 +1,415 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// TODO(kenton): Use generics? E.g. Builder<BuilderType extends Builder>, then
// mergeFrom*() could return BuilderType for better type-safety.
package com.google.protobuf;
import java.io.InputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
/**
* Abstract interface implemented by Protocol Message objects.
*
* @author kenton@google.com Kenton Varda
*/
public interface Message {
/**
* Get the message's type's descriptor. This differs from the
* {@code getDescriptor()} method of generated message classes in that
* this method is an abstract method of the {@code Message} interface
* whereas {@code getDescriptor()} is a static method of a specific class.
* They return the same thing.
*/
Descriptors.Descriptor getDescriptorForType();
/**
* Get an instance of the type with all fields set to their default values.
* This may or may not be a singleton. This differs from the
* {@code getDefaultInstance()} method of generated message classes in that
* this method is an abstract method of the {@code Message} interface
* whereas {@code getDefaultInstance()} is a static method of a specific
* class. They return the same thing.
*/
Message getDefaultInstanceForType();
/**
* Returns a collection of all the fields in this message which are set
* and their corresponding values. A singular ("required" or "optional")
* field is set iff hasField() returns true for that field. A "repeated"
 * field is set iff getRepeatedFieldCount() is greater than zero. The
* values are exactly what would be returned by calling
* {@link #getField(Descriptors.FieldDescriptor)} for each field. The map
* is guaranteed to be a sorted map, so iterating over it will return fields
* in order by field number.
*/
Map<Descriptors.FieldDescriptor, Object> getAllFields();
/**
* Returns true if the given field is set. This is exactly equivalent to
* calling the generated "has" accessor method corresponding to the field.
* @throws IllegalArgumentException The field is a repeated field, or
* {@code field.getContainingType() != getDescriptorForType()}.
*/
boolean hasField(Descriptors.FieldDescriptor field);
/**
* Obtains the value of the given field, or the default value if it is
* not set. For primitive fields, the boxed primitive value is returned.
 * For enum fields, the EnumValueDescriptor for the value is returned. For
* embedded message fields, the sub-message is returned. For repeated
* fields, a java.util.List is returned.
*/
Object getField(Descriptors.FieldDescriptor field);
/**
* Gets the number of elements of a repeated field. This is exactly
* equivalent to calling the generated "Count" accessor method corresponding
* to the field.
* @throws IllegalArgumentException The field is not a repeated field, or
* {@code field.getContainingType() != getDescriptorForType()}.
*/
int getRepeatedFieldCount(Descriptors.FieldDescriptor field);
/**
* Gets an element of a repeated field. For primitive fields, the boxed
* primitive value is returned. For enum fields, the EnumValueDescriptor
 * for the value is returned. For embedded message fields, the sub-message
* is returned.
* @throws IllegalArgumentException The field is not a repeated field, or
* {@code field.getContainingType() != getDescriptorForType()}.
*/
Object getRepeatedField(Descriptors.FieldDescriptor field, int index);
/** Get the {@link UnknownFieldSet} for this message. */
UnknownFieldSet getUnknownFields();
/**
* Returns true if all required fields in the message and all embedded
* messages are set, false otherwise.
*/
boolean isInitialized();
/**
* Serializes the message and writes it to {@code output}. This does not
* flush or close the stream.
*/
void writeTo(CodedOutputStream output) throws IOException;
/**
* Get the number of bytes required to encode this message. The result
* is only computed on the first call and memoized after that.
*/
int getSerializedSize();
// -----------------------------------------------------------------
// Comparison and hashing
/**
* Compares the specified object with this message for equality. Returns
* <tt>true</tt> if the given object is a message of the same type (as
* defined by {@code getDescriptorForType()}) and has identical values for
* all of its fields.
*
* @param other object to be compared for equality with this message
* @return <tt>true</tt> if the specified object is equal to this message
*/
boolean equals(Object other);
/**
* Returns the hash code value for this message. The hash code of a message
* is defined to be <tt>getDescriptor().hashCode() ^ map.hashCode()</tt>,
* where <tt>map</tt> is a map of field numbers to field values.
*
* @return the hash code value for this message
* @see Map#hashCode()
*/
int hashCode();
// -----------------------------------------------------------------
// Convenience methods.
/**
* Converts the message to a string in protocol buffer text format. This is
* just a trivial wrapper around {@link TextFormat#printToString(Message)}.
*/
String toString();
/**
* Serializes the message to a {@code ByteString} and returns it. This is
* just a trivial wrapper around
* {@link #writeTo(CodedOutputStream)}.
*/
ByteString toByteString();
/**
* Serializes the message to a {@code byte} array and returns it. This is
* just a trivial wrapper around
* {@link #writeTo(CodedOutputStream)}.
*/
byte[] toByteArray();
/**
* Serializes the message and writes it to {@code output}. This is just a
* trivial wrapper around {@link #writeTo(CodedOutputStream)}. This does
* not flush or close the stream.
*/
void writeTo(OutputStream output) throws IOException;
// =================================================================
// Builders
/**
* Constructs a new builder for a message of the same type as this message.
*/
Builder newBuilderForType();
/**
* Abstract interface implemented by Protocol Message builders.
*/
public static interface Builder extends Cloneable {
/** Resets all fields to their default values. */
Builder clear();
/**
* Merge {@code other} into the message being built. {@code other} must
* have the exact same type as {@code this} (i.e.
* {@code getDescriptorForType() == other.getDescriptorForType()}).
*
* Merging occurs as follows. For each field:<br>
* * For singular primitive fields, if the field is set in {@code other},
* then {@code other}'s value overwrites the value in this message.<br>
* * For singular message fields, if the field is set in {@code other},
* it is merged into the corresponding sub-message of this message
* using the same merging rules.<br>
* * For repeated fields, the elements in {@code other} are concatenated
* with the elements in this message.
*
* This is equivalent to the {@code Message::MergeFrom} method in C++.
*/
Builder mergeFrom(Message other);
/**
* Construct the final message. Once this is called, the Builder is no
* longer valid, and calling any other method may throw a
* NullPointerException. If you need to continue working with the builder
* after calling {@code build()}, {@code clone()} it first.
* @throws UninitializedMessageException The message is missing one or more
* required fields (i.e. {@link #isInitialized()} returns false).
* Use {@link #buildPartial()} to bypass this check.
*/
Message build();
/**
* Like {@link #build()}, but does not throw an exception if the message
* is missing required fields. Instead, a partial message is returned.
*/
Message buildPartial();
/**
* Clones the Builder.
* @see Object#clone()
*/
Builder clone();
/**
* Returns true if all required fields in the message and all embedded
* messages are set, false otherwise.
*/
boolean isInitialized();
/**
* Parses a message of this type from the input and merges it with this
* message, as if using {@link Builder#mergeFrom(Message)}.
*
* <p>Warning: This does not verify that all required fields are present in
* the input message. If you call {@link #build()} without setting all
* required fields, it will throw an {@link UninitializedMessageException},
* which is a {@code RuntimeException} and thus might not be caught. There
* are a few good ways to deal with this:
* <ul>
* <li>Call {@link #isInitialized()} to verify that all required fields
* are set before building.
* <li>Parse the message separately using one of the static
* {@code parseFrom} methods, then use {@link #mergeFrom(Message)}
* to merge it with this one. {@code parseFrom} will throw an
* {@link InvalidProtocolBufferException} (an {@code IOException})
* if some required fields are missing.
* <li>Use {@code buildPartial()} to build, which ignores missing
* required fields.
* </ul>
*
* <p>Note: The caller should call
* {@link CodedInputStream#checkLastTagWas(int)} after calling this to
* verify that the last tag seen was the appropriate end-group tag,
* or zero for EOF.
*/
Builder mergeFrom(CodedInputStream input) throws IOException;
/**
* Like {@link Builder#mergeFrom(CodedInputStream)}, but also
* parses extensions. The extensions that you want to be able to parse
* must be registered in {@code extensionRegistry}. Extensions not in
* the registry will be treated as unknown fields.
*/
Builder mergeFrom(CodedInputStream input,
ExtensionRegistry extensionRegistry)
throws IOException;
/**
* Get the message's type's descriptor.
* See {@link Message#getDescriptorForType()}.
*/
Descriptors.Descriptor getDescriptorForType();
/**
* Get the message's type's default instance.
* See {@link Message#getDefaultInstanceForType()}.
*/
Message getDefaultInstanceForType();
/**
* Like {@link Message#getAllFields()}. The returned map may or may not
* reflect future changes to the builder. Either way, the returned map is
* itself unmodifiable.
*/
Map<Descriptors.FieldDescriptor, Object> getAllFields();
/**
* Create a Builder for messages of the appropriate type for the given
* field. Messages built with this can then be passed to setField(),
* setRepeatedField(), or addRepeatedField().
*/
Builder newBuilderForField(Descriptors.FieldDescriptor field);
/** Like {@link Message#hasField(Descriptors.FieldDescriptor)} */
boolean hasField(Descriptors.FieldDescriptor field);
/** Like {@link Message#getField(Descriptors.FieldDescriptor)} */
Object getField(Descriptors.FieldDescriptor field);
/**
* Sets a field to the given value. The value must be of the correct type
* for this field, i.e. the same type that
* {@link Message#getField(Descriptors.FieldDescriptor)} would return.
*/
Builder setField(Descriptors.FieldDescriptor field, Object value);
/**
* Clears the field. This is exactly equivalent to calling the generated
* "clear" accessor method corresponding to the field.
*/
Builder clearField(Descriptors.FieldDescriptor field);
/**
* Like {@link Message#getRepeatedFieldCount(Descriptors.FieldDescriptor)}
*/
int getRepeatedFieldCount(Descriptors.FieldDescriptor field);
/**
* Like {@link Message#getRepeatedField(Descriptors.FieldDescriptor,int)}
*/
Object getRepeatedField(Descriptors.FieldDescriptor field, int index);
/**
* Sets an element of a repeated field to the given value. The value must
* be of the correct type for this field, i.e. the same type that
* {@link Message#getRepeatedField(Descriptors.FieldDescriptor,int)} would
* return.
* @throws IllegalArgumentException The field is not a repeated field, or
* {@code field.getContainingType() != getDescriptorForType()}.
*/
Builder setRepeatedField(Descriptors.FieldDescriptor field,
int index, Object value);
/**
* Like {@code setRepeatedField}, but appends the value as a new element.
* @throws IllegalArgumentException The field is not a repeated field, or
* {@code field.getContainingType() != getDescriptorForType()}.
*/
Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value);
/** Get the {@link UnknownFieldSet} for this message. */
UnknownFieldSet getUnknownFields();
/** Set the {@link UnknownFieldSet} for this message. */
Builder setUnknownFields(UnknownFieldSet unknownFields);
/**
* Merge some unknown fields into the {@link UnknownFieldSet} for this
* message.
*/
Builder mergeUnknownFields(UnknownFieldSet unknownFields);
// ---------------------------------------------------------------
// Convenience methods.
/**
* Parse {@code data} as a message of this type and merge it with the
* message being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream)}.
*/
Builder mergeFrom(ByteString data) throws InvalidProtocolBufferException;
/**
* Parse {@code data} as a message of this type and merge it with the
* message being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream,ExtensionRegistry)}.
*/
Builder mergeFrom(ByteString data,
ExtensionRegistry extensionRegistry)
throws InvalidProtocolBufferException;
/**
* Parse {@code data} as a message of this type and merge it with the
* message being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream)}.
*/
public Builder mergeFrom(byte[] data) throws InvalidProtocolBufferException;
/**
* Parse {@code data} as a message of this type and merge it with the
* message being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream,ExtensionRegistry)}.
*/
Builder mergeFrom(byte[] data,
ExtensionRegistry extensionRegistry)
throws InvalidProtocolBufferException;
/**
* Parse a message of this type from {@code input} and merge it with the
* message being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream)}. Note that this method always
* reads the <i>entire</i> input (unless it throws an exception). If you
* want it to stop earlier, you will need to wrap your input in some
* wrapper stream that limits reading. Despite usually reading the entire
* input, this does not close the stream.
*/
Builder mergeFrom(InputStream input) throws IOException;
/**
* Parse a message of this type from {@code input} and merge it with the
* message being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream,ExtensionRegistry)}.
*/
Builder mergeFrom(InputStream input,
ExtensionRegistry extensionRegistry)
throws IOException;
}
}
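The Builder methods above form the reflective mutation API. A minimal usage sketch (not part of this file), assuming a hypothetical generated message type with a string field named "name"; setField() takes the same Java type that getField() would return, here a java.lang.String:

import com.google.protobuf.Descriptors;
import com.google.protobuf.Message;

public class ReflectionSetExample {
  // Looks up the "name" field on the builder's descriptor and sets it reflectively.
  public static Message setName(Message.Builder builder, String name) {
    Descriptors.FieldDescriptor nameField =
        builder.getDescriptorForType().findFieldByName("name");
    builder.setField(nameField, name);
    return builder.build();
  }
}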

View File

@ -0,0 +1,27 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
/**
* Interface for an RPC callback, normally called when an RPC completes.
* {@code ParameterType} is normally the method's response message type.
*
* @author kenton@google.com Kenton Varda
*/
public interface RpcCallback<ParameterType> {
void run(ParameterType parameter);
}
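A hypothetical sketch of implementing this single-method interface, typically as an anonymous class that runs once when the RPC completes:

import com.google.protobuf.RpcCallback;

public class LoggingCallbacks {
  // Returns a callback that simply prints the response it receives.
  public static <T> RpcCallback<T> logging() {
    return new RpcCallback<T>() {
      public void run(T parameter) {
        System.out.println("RPC completed: " + parameter);
      }
    };
  }
}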

View File

@ -0,0 +1,51 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
/**
* <p>Abstract interface for an RPC channel. An {@code RpcChannel} represents a
* communication line to a {@link Service} which can be used to call that
* {@link Service}'s methods. The {@link Service} may be running on another
* machine. Normally, you should not call an {@code RpcChannel} directly, but
* instead construct a stub {@link Service} wrapping it. Example:
*
* <pre>
* RpcChannel channel = rpcImpl.newChannel("remotehost.example.com:1234");
* RpcController controller = rpcImpl.newController();
* MyService service = MyService.newStub(channel);
* service.myMethod(controller, request, callback);
* </pre>
*
* @author kenton@google.com Kenton Varda
*/
public interface RpcChannel {
/**
* Call the given method of the remote service. This method is similar to
* {@code Service.callMethod()} with one important difference: the caller
* decides the types of the {@code Message} objects, not the callee. The
* request may be of any type as long as
* {@code request.getDescriptor() == method.getInputType()}.
* The response passed to the callback will be of the same type as
* {@code responsePrototype} (which must have
* {@code getDescriptor() == method.getOutputType()}).
*/
void callMethod(Descriptors.MethodDescriptor method,
RpcController controller,
Message request,
Message responsePrototype,
RpcCallback<Message> done);
}
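A hypothetical sketch of calling an RpcChannel directly, without a generated stub; per the contract above, the request's descriptor must match method.getInputType() and the response delivered to the callback has the same type as responsePrototype:

import com.google.protobuf.Descriptors;
import com.google.protobuf.Message;
import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcChannel;
import com.google.protobuf.RpcController;

public class DirectChannelCall {
  public static void call(RpcChannel channel,
                          final RpcController controller,
                          Descriptors.MethodDescriptor method,
                          Message request,
                          Message responsePrototype) {
    channel.callMethod(method, controller, request, responsePrototype,
        new RpcCallback<Message>() {
          public void run(Message response) {
            // Check the controller for RPC-level failures before using the response.
            if (controller.failed()) {
              System.err.println("RPC failed: " + controller.errorText());
            } else {
              System.out.println("Got response: " + response);
            }
          }
        });
  }
}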

View File

@ -0,0 +1,98 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
/**
* <p>An {@code RpcController} mediates a single method call. The primary
* purpose of the controller is to provide a way to manipulate settings
* specific to the RPC implementation and to find out about RPC-level errors.
*
* <p>The methods provided by the {@code RpcController} interface are intended
* to be a "least common denominator" set of features which we expect all
* implementations to support. Specific implementations may provide more
* advanced features (e.g. deadline propagation).
*
* @author kenton@google.com Kenton Varda
*/
public interface RpcController {
// -----------------------------------------------------------------
// These calls may be made from the client side only. Their results
// are undefined on the server side (may throw RuntimeExceptions).
/**
* Resets the RpcController to its initial state so that it may be reused in
* a new call. This can be called from the client side only. It must not
* be called while an RPC is in progress.
*/
void reset();
/**
* After a call has finished, returns true if the call failed. The possible
* reasons for failure depend on the RPC implementation. {@code failed()}
must only be called on the client side, and must not be called before a
* call has finished.
*/
boolean failed();
/**
* If {@code failed()} is {@code true}, returns a human-readable description
* of the error.
*/
String errorText();
/**
* Advises the RPC system that the caller desires that the RPC call be
* canceled. The RPC system may cancel it immediately, may wait awhile and
* then cancel it, or may not even cancel the call at all. If the call is
* canceled, the "done" callback will still be called and the RpcController
* will indicate that the call failed at that time.
*/
void startCancel();
// -----------------------------------------------------------------
// These calls may be made from the server side only. Their results
// are undefined on the client side (may throw RuntimeExceptions).
/**
* Causes {@code failed()} to return true on the client side. {@code reason}
* will be incorporated into the message returned by {@code errorText()}.
* If you find you need to return machine-readable information about
* failures, you should incorporate it into your response protocol buffer
* and should NOT call {@code setFailed()}.
*/
void setFailed(String reason);
/**
* If {@code true}, indicates that the client canceled the RPC, so the server
* may as well give up on replying to it. This method must be called on the
* server side only. The server should still call the final "done" callback.
*/
boolean isCanceled();
/**
* Asks that the given callback be called when the RPC is canceled. The
* parameter passed to the callback will always be {@code null}. The
* callback will always be called exactly once. If the RPC completes without
* being canceled, the callback will be called after completion. If the RPC
* has already been canceled when {@code notifyOnCancel()} is called, the callback
* will be called immediately.
*
* <p>{@code notifyOnCancel()} must be called no more than once per request.
* It must be called on the server side only.
*/
void notifyOnCancel(RpcCallback<Object> callback);
}
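A hypothetical server-side sketch following the contract above: check for cancellation, report application failures through setFailed() rather than by throwing, and always invoke the final "done" callback exactly once:

import com.google.protobuf.Message;
import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;

public class FailingHandler {
  public static void handle(RpcController controller, RpcCallback<Message> done) {
    if (controller.isCanceled()) {
      done.run(null);  // Client gave up; still call the final "done" callback.
      return;
    }
    controller.setFailed("backend unavailable");  // Surfaces via errorText() on the client.
    done.run(null);
  }
}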

View File

@ -0,0 +1,118 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
/**
* Grab-bag of utility functions useful when dealing with RPCs.
*
* @author kenton@google.com Kenton Varda
*/
public final class RpcUtil {
private RpcUtil() {}
/**
* Take an {@code RpcCallback<Message>} and convert it to an
* {@code RpcCallback} accepting a specific message type. This is always
* type-safe (parameter type contravariance).
*/
@SuppressWarnings("unchecked")
public static <Type extends Message> RpcCallback<Type>
specializeCallback(final RpcCallback<Message> originalCallback) {
return (RpcCallback<Type>)originalCallback;
// The above cast works, but only due to technical details of the Java
// implementation. A more theoretically correct -- but less efficient --
// implementation would be as follows:
// return new RpcCallback<Type>() {
// public void run(Type parameter) {
// originalCallback.run(parameter);
// }
// };
}
/**
* Take an {@code RpcCallback} accepting a specific message type and convert
* it to an {@code RpcCallback<Message>}. The generalized callback will
* accept any message object which has the same descriptor, and will convert
* it to the correct class before calling the original callback. However,
* if the generalized callback is given a message with a different descriptor,
* an exception will be thrown.
*/
public static <Type extends Message>
RpcCallback<Message> generalizeCallback(
final RpcCallback<Type> originalCallback,
final Class<Type> originalClass,
final Type defaultInstance) {
return new RpcCallback<Message>() {
public void run(Message parameter) {
Type typedParameter;
try {
typedParameter = originalClass.cast(parameter);
} catch (ClassCastException e) {
typedParameter = copyAsType(defaultInstance, parameter);
}
originalCallback.run(typedParameter);
}
};
}
/**
* Creates a new message of type "Type" which is a copy of "source". "source"
* must have the same descriptor but may be a different class (e.g.
* DynamicMessage).
*/
@SuppressWarnings("unchecked")
private static <Type extends Message> Type copyAsType(
Type typeDefaultInstance, Message source) {
return (Type)typeDefaultInstance.newBuilderForType()
.mergeFrom(source)
.build();
}
/**
* Creates a callback which can only be called once. This may be useful for
* security, when passing a callback to untrusted code: most callbacks do
* not expect to be called more than once, so doing so may expose bugs if it
* is not prevented.
*/
public static <ParameterType>
RpcCallback<ParameterType> newOneTimeCallback(
final RpcCallback<ParameterType> originalCallback) {
return new RpcCallback<ParameterType>() {
boolean alreadyCalled = false;
public void run(ParameterType parameter) {
synchronized(this) {
if (alreadyCalled) {
throw new AlreadyCalledException();
}
alreadyCalled = true;
}
originalCallback.run(parameter);
}
};
}
/**
* Exception thrown when a one-time callback is called more than once.
*/
public static final class AlreadyCalledException extends RuntimeException {
public AlreadyCalledException() {
super("This RpcCallback was already called and cannot be called " +
"multiple times.");
}
}
}
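A hypothetical sketch combining the helpers above: generalize a typed callback so generic RPC machinery can deliver any Message with the matching descriptor, and guard it against being invoked more than once:

import com.google.protobuf.Message;
import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcUtil;

public class CallbackAdapters {
  // Wraps a typed callback for use with APIs that only know about Message,
  // throwing AlreadyCalledException if the result is delivered twice.
  public static <T extends Message> RpcCallback<Message> adapt(
      RpcCallback<T> typedCallback, Class<T> clazz, T defaultInstance) {
    return RpcUtil.generalizeCallback(
        RpcUtil.newOneTimeCallback(typedCallback), clazz, defaultInstance);
  }
}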

View File

@ -0,0 +1,97 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
/**
* Abstract base interface for protocol-buffer-based RPC services. Services
* themselves are abstract classes (implemented either by servers or as
* stubs), but they subclass this base interface. The methods of this
* interface can be used to call the methods of the service without knowing
* its exact type at compile time (analogous to the Message interface).
*
* @author kenton@google.com Kenton Varda
*/
public interface Service {
/**
* Get the {@code ServiceDescriptor} describing this service and its methods.
*/
Descriptors.ServiceDescriptor getDescriptorForType();
/**
* <p>Call a method of the service specified by MethodDescriptor. This is
* normally implemented as a simple {@code switch()} that calls the standard
* definitions of the service's methods.
*
* <p>Preconditions:
* <ul>
* <li>{@code method.getService() == getDescriptorForType()}
* <li>{@code request} is of the exact same class as the object returned by
* {@code getRequestPrototype(method)}.
* <li>{@code controller} is of the correct type for the RPC implementation
* being used by this Service. For stubs, the "correct type" depends
* on the RpcChannel which the stub is using. Server-side Service
* implementations are expected to accept whatever type of
* {@code RpcController} the server-side RPC implementation uses.
* </ul>
*
* <p>Postconditions:
* <ul>
* <li>{@code done} will be called when the method is complete. This may be
* before {@code callMethod()} returns or it may be at some point in
* the future.
* <li>The parameter to {@code done} is the response. It must be of the
* exact same type as would be returned by
* {@code getResponsePrototype(method)}.
* <li>If the RPC failed, the parameter to {@code done} will be
* {@code null}. Further details about the failure can be found by
* querying {@code controller}.
* </ul>
*/
void callMethod(Descriptors.MethodDescriptor method,
RpcController controller,
Message request,
RpcCallback<Message> done);
/**
* <p>{@code callMethod()} requires that the request passed in is of a
* particular subclass of {@code Message}. {@code getRequestPrototype()}
* gets the default instances of this type for a given method. You can then
* call {@code Message.newBuilderForType()} on this instance to
* construct a builder to build an object which you can then pass to
* {@code callMethod()}.
*
* <p>Example:
* <pre>
* MethodDescriptor method =
* service.getDescriptorForType().findMethodByName("Foo");
* Message request =
* stub.getRequestPrototype(method).newBuilderForType()
* .mergeFrom(input).build();
* service.callMethod(method, request, callback);
* </pre>
*/
Message getRequestPrototype(Descriptors.MethodDescriptor method);
/**
* Like {@code getRequestPrototype()}, but gets a prototype of the response
* message. {@code getResponsePrototype()} is generally not needed because
* the {@code Service} implementation constructs the response message itself,
* but it may be useful in some cases to know ahead of time what type of
* object will be returned.
*/
Message getResponsePrototype(Descriptors.MethodDescriptor method);
}
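A hypothetical sketch of invoking a Service generically by method name, following the getRequestPrototype()/callMethod() contract documented above; it assumes the request bytes arrive as a ByteString:

import com.google.protobuf.ByteString;
import com.google.protobuf.Descriptors;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Message;
import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;

public class GenericInvoker {
  public static void invoke(Service service, String methodName,
                            final RpcController controller,
                            ByteString encodedRequest)
      throws InvalidProtocolBufferException {
    Descriptors.MethodDescriptor method =
        service.getDescriptorForType().findMethodByName(methodName);
    // Decode the request using the method's request prototype.
    Message request = service.getRequestPrototype(method)
        .newBuilderForType().mergeFrom(encodedRequest).build();
    service.callMethod(method, controller, request, new RpcCallback<Message>() {
      public void run(Message response) {
        // A null response indicates failure; details are on the controller.
        if (response == null) {
          System.err.println("RPC failed: " + controller.errorText());
        }
      }
    });
  }
}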

File diff suppressed because it is too large

View File

@ -0,0 +1,146 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import com.google.protobuf.Descriptors.FieldDescriptor;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
/**
* Thrown when attempting to build a protocol message that is missing required
* fields. This is a {@code RuntimeException} because it normally represents
* a programming error: it happens when some code which constructs a message
* fails to set all the fields. {@code parseFrom()} methods <b>do not</b>
* throw this; they throw an {@link InvalidProtocolBufferException} if
* required fields are missing, because it is not a programming error to
* receive an incomplete message. In other words,
* {@code UninitializedMessageException} should never be thrown by correct
* code, but {@code InvalidProtocolBufferException} might be.
*
* @author kenton@google.com Kenton Varda
*/
public class UninitializedMessageException extends RuntimeException {
public UninitializedMessageException(Message message) {
this(findMissingFields(message));
}
private UninitializedMessageException(List<String> missingFields) {
super(buildDescription(missingFields));
this.missingFields = missingFields;
}
private final List<String> missingFields;
/**
* Get a list of human-readable names of required fields missing from this
* message. Each name is a full path to a field, e.g. "foo.bar[5].baz".
*/
public List<String> getMissingFields() {
return Collections.unmodifiableList(missingFields);
}
/**
* Converts this exception to an {@link InvalidProtocolBufferException}.
* When a parsed message is missing required fields, this should be thrown
* instead of {@code UninitializedMessageException}.
*/
public InvalidProtocolBufferException asInvalidProtocolBufferException() {
return new InvalidProtocolBufferException(getMessage());
}
/** Construct the description string for this exception. */
private static String buildDescription(List<String> missingFields) {
StringBuilder description =
new StringBuilder("Message missing required fields: ");
boolean first = true;
for (String field : missingFields) {
if (first) {
first = false;
} else {
description.append(", ");
}
description.append(field);
}
return description.toString();
}
/**
* Populates {@code this.missingFields} with the full "path" of each
* missing required field in the given message.
*/
private static List<String> findMissingFields(Message message) {
List<String> results = new ArrayList<String>();
findMissingFields(message, "", results);
return results;
}
/** Recursive helper implementing {@link #findMissingFields(Message)}. */
private static void findMissingFields(Message message, String prefix,
List<String> results) {
for (FieldDescriptor field : message.getDescriptorForType().getFields()) {
if (field.isRequired() && !message.hasField(field)) {
results.add(prefix + field.getName());
}
}
for (Map.Entry<FieldDescriptor, Object> entry :
message.getAllFields().entrySet()) {
FieldDescriptor field = entry.getKey();
Object value = entry.getValue();
if (field.getJavaType() == FieldDescriptor.JavaType.MESSAGE) {
if (field.isRepeated()) {
int i = 0;
for (Object element : (List) value) {
findMissingFields((Message) element,
subMessagePrefix(prefix, field, i++),
results);
}
} else {
if (message.hasField(field)) {
findMissingFields((Message) value,
subMessagePrefix(prefix, field, -1),
results);
}
}
}
}
}
private static String subMessagePrefix(String prefix,
FieldDescriptor field,
int index) {
StringBuilder result = new StringBuilder(prefix);
if (field.isExtension()) {
result.append('(')
.append(field.getFullName())
.append(')');
} else {
result.append(field.getName());
}
if (index != -1) {
result.append('[')
.append(index)
.append(']');
}
result.append('.');
return result.toString();
}
}
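A hypothetical sketch of the conversion suggested above: when a builder's input came off the wire, rethrow the unchecked UninitializedMessageException as the checked parse exception:

import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Message;
import com.google.protobuf.UninitializedMessageException;

public class BuildChecked {
  // build() throws UninitializedMessageException when required fields are unset.
  public static Message buildChecked(Message.Builder builder)
      throws InvalidProtocolBufferException {
    try {
      return builder.build();
    } catch (UninitializedMessageException e) {
      throw e.asInvalidProtocolBufferException();
    }
  }
}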

View File

@ -0,0 +1,746 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import java.io.InputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.TreeMap;
import java.util.List;
import java.util.Map;
/**
* {@code UnknownFieldSet} is used to keep track of fields which were seen when
* parsing a protocol message but whose field numbers or types are unrecognized.
* This most frequently occurs when new fields are added to a message type
* and then messages containing those fields are read by old software that was
* compiled before the new types were added.
*
* <p>Every {@link Message} contains an {@code UnknownFieldSet} (and every
* {@link Message.Builder} contains an {@link UnknownFieldSet.Builder}).
*
* <p>Most users will never need to use this class.
*
* @author kenton@google.com Kenton Varda
*/
public final class UnknownFieldSet {
private UnknownFieldSet() {}
/** Create a new {@link UnknownFieldSet.Builder}. */
public static Builder newBuilder() {
return new Builder();
}
/**
* Create a new {@link UnknownFieldSet.Builder} and initialize it to be a copy
* of {@code copyFrom}.
*/
public static Builder newBuilder(UnknownFieldSet copyFrom) {
return new Builder().mergeFrom(copyFrom);
}
/** Get an empty {@code UnknownFieldSet}. */
public static UnknownFieldSet getDefaultInstance() {
return defaultInstance;
}
private static UnknownFieldSet defaultInstance =
new UnknownFieldSet(Collections.<Integer, Field>emptyMap());
/**
* Construct an {@code UnknownFieldSet} around the given map. The map is
* expected to be immutable.
*/
private UnknownFieldSet(Map<Integer, Field> fields) {
this.fields = fields;
}
private Map<Integer, Field> fields;
/** Get a map of fields in the set by number. */
public Map<Integer, Field> asMap() {
return fields;
}
/** Check if the given field number is present in the set. */
public boolean hasField(int number) {
return fields.containsKey(number);
}
/**
* Get a field by number. Returns an empty field if not present. Never
* returns {@code null}.
*/
public Field getField(int number) {
Field result = fields.get(number);
return (result == null) ? Field.getDefaultInstance() : result;
}
/** Serializes the set and writes it to {@code output}. */
public void writeTo(CodedOutputStream output) throws IOException {
for (Map.Entry<Integer, Field> entry : fields.entrySet()) {
entry.getValue().writeTo(entry.getKey(), output);
}
}
/**
* Converts the set to a string in protocol buffer text format. This is
* just a trivial wrapper around
* {@link TextFormat#printToString(UnknownFieldSet)}.
*/
public final String toString() {
return TextFormat.printToString(this);
}
/**
* Serializes the message to a {@code ByteString} and returns it. This is
* just a trivial wrapper around {@link #writeTo(CodedOutputStream)}.
*/
public final ByteString toByteString() {
try {
ByteString.CodedBuilder out =
ByteString.newCodedBuilder(getSerializedSize());
writeTo(out.getCodedOutput());
return out.build();
} catch (IOException e) {
throw new RuntimeException(
"Serializing to a ByteString threw an IOException (should " +
"never happen).", e);
}
}
/**
* Serializes the message to a {@code byte} array and returns it. This is
* just a trivial wrapper around {@link #writeTo(CodedOutputStream)}.
*/
public final byte[] toByteArray() {
try {
byte[] result = new byte[getSerializedSize()];
CodedOutputStream output = CodedOutputStream.newInstance(result);
writeTo(output);
output.checkNoSpaceLeft();
return result;
} catch (IOException e) {
throw new RuntimeException(
"Serializing to a byte array threw an IOException " +
"(should never happen).", e);
}
}
/**
* Serializes the message and writes it to {@code output}. This is just a
* trivial wrapper around {@link #writeTo(CodedOutputStream)}.
*/
public final void writeTo(OutputStream output) throws IOException {
CodedOutputStream codedOutput = CodedOutputStream.newInstance(output);
writeTo(codedOutput);
codedOutput.flush();
}
/** Get the number of bytes required to encode this set. */
public int getSerializedSize() {
int result = 0;
for (Map.Entry<Integer, Field> entry : fields.entrySet()) {
result += entry.getValue().getSerializedSize(entry.getKey());
}
return result;
}
/**
* Serializes the set and writes it to {@code output} using
* {@code MessageSet} wire format.
*/
public void writeAsMessageSetTo(CodedOutputStream output)
throws IOException {
for (Map.Entry<Integer, Field> entry : fields.entrySet()) {
entry.getValue().writeAsMessageSetExtensionTo(
entry.getKey(), output);
}
}
/**
* Get the number of bytes required to encode this set using
* {@code MessageSet} wire format.
*/
public int getSerializedSizeAsMessageSet() {
int result = 0;
for (Map.Entry<Integer, Field> entry : fields.entrySet()) {
result += entry.getValue().getSerializedSizeAsMessageSetExtension(
entry.getKey());
}
return result;
}
/** Parse an {@code UnknownFieldSet} from the given input stream. */
public static UnknownFieldSet parseFrom(CodedInputStream input)
throws IOException {
return newBuilder().mergeFrom(input).build();
}
/** Parse {@code data} as an {@code UnknownFieldSet} and return it. */
public static UnknownFieldSet parseFrom(ByteString data)
throws InvalidProtocolBufferException {
return newBuilder().mergeFrom(data).build();
}
/** Parse {@code data} as an {@code UnknownFieldSet} and return it. */
public static UnknownFieldSet parseFrom(byte[] data)
throws InvalidProtocolBufferException {
return newBuilder().mergeFrom(data).build();
}
/** Parse an {@code UnknownFieldSet} from {@code input} and return it. */
public static UnknownFieldSet parseFrom(InputStream input)
throws IOException {
return newBuilder().mergeFrom(input).build();
}
/**
* Builder for {@link UnknownFieldSet}s.
*
* <p>Note that this class maintains {@link Field.Builder}s for all fields
* in the set. Thus, adding one element to an existing {@link Field} does not
* require making a copy. This is important for efficient parsing of
* unknown repeated fields. However, it implies that {@link Field}s cannot
* be constructed independently, nor can two {@link UnknownFieldSet}s share
* the same {@code Field} object.
*
* <p>Use {@link UnknownFieldSet#newBuilder()} to construct a {@code Builder}.
*/
public static final class Builder {
private Builder() {}
private Map<Integer, Field> fields = new TreeMap<Integer, Field>();
// Optimization: We keep around a builder for the last field that was
// modified so that we can efficiently add to it multiple times in a
// row (important when parsing an unknown repeated field).
int lastFieldNumber = 0;
Field.Builder lastField = null;
/**
* Get a field builder for the given field number which includes any
* values that already exist.
*/
private Field.Builder getFieldBuilder(int number) {
if (lastField != null) {
if (number == lastFieldNumber) {
return lastField;
}
// Note: addField() will reset lastField and lastFieldNumber.
addField(lastFieldNumber, lastField.build());
}
if (number == 0) {
return null;
} else {
Field existing = fields.get(number);
lastFieldNumber = number;
lastField = Field.newBuilder();
if (existing != null) {
lastField.mergeFrom(existing);
}
return lastField;
}
}
/**
* Build the {@link UnknownFieldSet} and return it.
*
* <p>Once {@code build()} has been called, the {@code Builder} will no
* longer be usable. Calling any method after {@code build()} will throw
* {@code NullPointerException}.
*/
public UnknownFieldSet build() {
getFieldBuilder(0); // Force lastField to be built.
UnknownFieldSet result;
if (fields.isEmpty()) {
result = getDefaultInstance();
} else {
result = new UnknownFieldSet(Collections.unmodifiableMap(fields));
}
fields = null;
return result;
}
/** Reset the builder to an empty set. */
public Builder clear() {
fields = new TreeMap<Integer, Field>();
lastFieldNumber = 0;
lastField = null;
return this;
}
/**
* Merge the fields from {@code other} into this set. If a field number
* exists in both sets, {@code other}'s values for that field will be
* appended to the values in this set.
*/
public Builder mergeFrom(UnknownFieldSet other) {
if (other != getDefaultInstance()) {
for (Map.Entry<Integer, Field> entry : other.fields.entrySet()) {
mergeField(entry.getKey(), entry.getValue());
}
}
return this;
}
/**
* Add a field to the {@code UnknownFieldSet}. If a field with the same
* number already exists, the two are merged.
*/
public Builder mergeField(int number, Field field) {
if (number == 0) {
throw new IllegalArgumentException("Zero is not a valid field number.");
}
if (hasField(number)) {
getFieldBuilder(number).mergeFrom(field);
} else {
// Optimization: We could call getFieldBuilder(number).mergeFrom(field)
// in this case, but that would create a copy of the Field object.
// We'd rather reuse the one passed to us, so call addField() instead.
addField(number, field);
}
return this;
}
/**
* Convenience method for merging a new field containing a single varint
* value. This is used in particular when an unknown enum value is
* encountered.
*/
public Builder mergeVarintField(int number, int value) {
if (number == 0) {
throw new IllegalArgumentException("Zero is not a valid field number.");
}
getFieldBuilder(number).addVarint(value);
return this;
}
/** Check if the given field number is present in the set. */
public boolean hasField(int number) {
if (number == 0) {
throw new IllegalArgumentException("Zero is not a valid field number.");
}
return number == lastFieldNumber || fields.containsKey(number);
}
/**
* Add a field to the {@code UnknownFieldSet}. If a field with the same
* number already exists, it is removed.
*/
public Builder addField(int number, Field field) {
if (number == 0) {
throw new IllegalArgumentException("Zero is not a valid field number.");
}
if (lastField != null && lastFieldNumber == number) {
// Discard this.
lastField = null;
lastFieldNumber = 0;
}
fields.put(number, field);
return this;
}
/**
* Get all present {@code Field}s as an immutable {@code Map}. If more
* fields are added, the changes may or may not be reflected in this map.
*/
public Map<Integer, Field> asMap() {
getFieldBuilder(0); // Force lastField to be built.
return Collections.unmodifiableMap(fields);
}
/**
* Parse an entire message from {@code input} and merge its fields into
* this set.
*/
public Builder mergeFrom(CodedInputStream input) throws IOException {
while (true) {
int tag = input.readTag();
if (tag == 0 || !mergeFieldFrom(tag, input)) {
break;
}
}
return this;
}
/**
* Parse a single field from {@code input} and merge it into this set.
* @param tag The field's tag number, which was already parsed.
* @return {@code false} if the tag is an end-group tag.
*/
public boolean mergeFieldFrom(int tag, CodedInputStream input)
throws IOException {
int number = WireFormat.getTagFieldNumber(tag);
switch (WireFormat.getTagWireType(tag)) {
case WireFormat.WIRETYPE_VARINT:
getFieldBuilder(number).addVarint(input.readInt32());
return true;
case WireFormat.WIRETYPE_FIXED64:
getFieldBuilder(number).addFixed64(input.readFixed64());
return true;
case WireFormat.WIRETYPE_LENGTH_DELIMITED:
getFieldBuilder(number).addLengthDelimited(input.readBytes());
return true;
case WireFormat.WIRETYPE_START_GROUP: {
UnknownFieldSet.Builder subBuilder = UnknownFieldSet.newBuilder();
input.readUnknownGroup(number, subBuilder);
getFieldBuilder(number).addGroup(subBuilder.build());
return true;
}
case WireFormat.WIRETYPE_END_GROUP:
return false;
case WireFormat.WIRETYPE_FIXED32:
getFieldBuilder(number).addFixed32(input.readFixed32());
return true;
default:
throw InvalidProtocolBufferException.invalidWireType();
}
}
/**
* Parse {@code data} as an {@code UnknownFieldSet} and merge it with the
* set being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream)}.
*/
public Builder mergeFrom(ByteString data)
throws InvalidProtocolBufferException {
try {
CodedInputStream input = data.newCodedInput();
mergeFrom(input);
input.checkLastTagWas(0);
return this;
} catch (InvalidProtocolBufferException e) {
throw e;
} catch (IOException e) {
throw new RuntimeException(
"Reading from a ByteString threw an IOException (should " +
"never happen).", e);
}
}
/**
* Parse {@code data} as an {@code UnknownFieldSet} and merge it with the
* set being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream)}.
*/
public Builder mergeFrom(byte[] data)
throws InvalidProtocolBufferException {
try {
CodedInputStream input = CodedInputStream.newInstance(data);
mergeFrom(input);
input.checkLastTagWas(0);
return this;
} catch (InvalidProtocolBufferException e) {
throw e;
} catch (IOException e) {
throw new RuntimeException(
"Reading from a byte array threw an IOException (should " +
"never happen).", e);
}
}
/**
* Parse an {@code UnknownFieldSet} from {@code input} and merge it with the
* set being built. This is just a small wrapper around
* {@link #mergeFrom(CodedInputStream)}.
*/
public Builder mergeFrom(InputStream input) throws IOException {
CodedInputStream codedInput = CodedInputStream.newInstance(input);
mergeFrom(codedInput);
codedInput.checkLastTagWas(0);
return this;
}
}
/**
* Represents a single field in an {@code UnknownFieldSet}.
*
* <p>A {@code Field} consists of five lists of values. The lists correspond
* to the five "wire types" used in the protocol buffer binary format.
* The wire type of each field can be determined from the encoded form alone,
* without knowing the field's declared type. So, we are able to parse
* unknown values at least this far and separate them. Normally, only one
* of the five lists will contain any values, since it is impossible to
* define a valid message type that declares two different types for the
* same field number. However, the code is designed to allow for the case
* where the same unknown field number is encountered using multiple different
* wire types.
*
* <p>{@code Field} is an immutable class. To construct one, you must use a
* {@link Field.Builder}.
*
* @see UnknownFieldSet
*/
public static final class Field {
private Field() {}
/** Construct a new {@link Builder}. */
public static Builder newBuilder() {
return new Builder();
}
/**
* Construct a new {@link Builder} and initialize it to a copy of
* {@code copyFrom}.
*/
public static Builder newBuilder(Field copyFrom) {
return new Builder().mergeFrom(copyFrom);
}
/** Get an empty {@code Field}. */
public static Field getDefaultInstance() {
return defaultInstance;
}
private static Field defaultInstance = newBuilder().build();
/** Get the list of varint values for this field. */
public List<Long> getVarintList() { return varint; }
/** Get the list of fixed32 values for this field. */
public List<Integer> getFixed32List() { return fixed32; }
/** Get the list of fixed64 values for this field. */
public List<Long> getFixed64List() { return fixed64; }
/** Get the list of length-delimited values for this field. */
public List<ByteString> getLengthDelimitedList() { return lengthDelimited; }
/**
* Get the list of embedded group values for this field. These are
* represented using {@link UnknownFieldSet}s rather than {@link Message}s
* since the group's type is presumably unknown.
*/
public List<UnknownFieldSet> getGroupList() { return group; }
/**
* Serializes the field, including field number, and writes it to
* {@code output}.
*/
public void writeTo(int fieldNumber, CodedOutputStream output)
throws IOException {
for (long value : varint) {
output.writeUInt64(fieldNumber, value);
}
for (int value : fixed32) {
output.writeFixed32(fieldNumber, value);
}
for (long value : fixed64) {
output.writeFixed64(fieldNumber, value);
}
for (ByteString value : lengthDelimited) {
output.writeBytes(fieldNumber, value);
}
for (UnknownFieldSet value : group) {
output.writeUnknownGroup(fieldNumber, value);
}
}
/**
* Get the number of bytes required to encode this field, including field
* number.
*/
public int getSerializedSize(int fieldNumber) {
int result = 0;
for (long value : varint) {
result += CodedOutputStream.computeUInt64Size(fieldNumber, value);
}
for (int value : fixed32) {
result += CodedOutputStream.computeFixed32Size(fieldNumber, value);
}
for (long value : fixed64) {
result += CodedOutputStream.computeFixed64Size(fieldNumber, value);
}
for (ByteString value : lengthDelimited) {
result += CodedOutputStream.computeBytesSize(fieldNumber, value);
}
for (UnknownFieldSet value : group) {
result += CodedOutputStream.computeUnknownGroupSize(fieldNumber, value);
}
return result;
}
/**
* Serializes the field, including field number, and writes it to
* {@code output}, using {@code MessageSet} wire format.
*/
public void writeAsMessageSetExtensionTo(
int fieldNumber,
CodedOutputStream output)
throws IOException {
for (ByteString value : lengthDelimited) {
output.writeRawMessageSetExtension(fieldNumber, value);
}
}
/**
* Get the number of bytes required to encode this field, including field
* number, using {@code MessageSet} wire format.
*/
public int getSerializedSizeAsMessageSetExtension(int fieldNumber) {
int result = 0;
for (ByteString value : lengthDelimited) {
result += CodedOutputStream.computeRawMessageSetExtensionSize(
fieldNumber, value);
}
return result;
}
private List<Long> varint;
private List<Integer> fixed32;
private List<Long> fixed64;
private List<ByteString> lengthDelimited;
private List<UnknownFieldSet> group;
/**
* Used to build a {@link Field} within an {@link UnknownFieldSet}.
*
* <p>Use {@link Field#newBuilder()} to construct a {@code Builder}.
*/
public static final class Builder {
private Builder() {}
private Field result = new Field();
/**
* Build the field. After {@code build()} has been called, the
* {@code Builder} is no longer usable. Calling any other method will
* throw a {@code NullPointerException}.
*/
public Field build() {
if (result.varint == null) {
result.varint = Collections.emptyList();
} else {
result.varint = Collections.unmodifiableList(result.varint);
}
if (result.fixed32 == null) {
result.fixed32 = Collections.emptyList();
} else {
result.fixed32 = Collections.unmodifiableList(result.fixed32);
}
if (result.fixed64 == null) {
result.fixed64 = Collections.emptyList();
} else {
result.fixed64 = Collections.unmodifiableList(result.fixed64);
}
if (result.lengthDelimited == null) {
result.lengthDelimited = Collections.emptyList();
} else {
result.lengthDelimited =
Collections.unmodifiableList(result.lengthDelimited);
}
if (result.group == null) {
result.group = Collections.emptyList();
} else {
result.group = Collections.unmodifiableList(result.group);
}
Field returnMe = result;
result = null;
return returnMe;
}
/** Discard the field's contents. */
public Builder clear() {
result = new Field();
return this;
}
/**
* Merge the values in {@code other} into this field. For each list
* of values, {@code other}'s values are appended to the ones in this
* field.
*/
public Builder mergeFrom(Field other) {
if (!other.varint.isEmpty()) {
if (result.varint == null) {
result.varint = new ArrayList<Long>();
}
result.varint.addAll(other.varint);
}
if (!other.fixed32.isEmpty()) {
if (result.fixed32 == null) {
result.fixed32 = new ArrayList<Integer>();
}
result.fixed32.addAll(other.fixed32);
}
if (!other.fixed64.isEmpty()) {
if (result.fixed64 == null) {
result.fixed64 = new ArrayList<Long>();
}
result.fixed64.addAll(other.fixed64);
}
if (!other.lengthDelimited.isEmpty()) {
if (result.lengthDelimited == null) {
result.lengthDelimited = new ArrayList<ByteString>();
}
result.lengthDelimited.addAll(other.lengthDelimited);
}
if (!other.group.isEmpty()) {
if (result.group == null) {
result.group = new ArrayList<UnknownFieldSet>();
}
result.group.addAll(other.group);
}
return this;
}
/** Add a varint value. */
public Builder addVarint(long value) {
if (result.varint == null) {
result.varint = new ArrayList<Long>();
}
result.varint.add(value);
return this;
}
/** Add a fixed32 value. */
public Builder addFixed32(int value) {
if (result.fixed32 == null) {
result.fixed32 = new ArrayList<Integer>();
}
result.fixed32.add(value);
return this;
}
/** Add a fixed64 value. */
public Builder addFixed64(long value) {
if (result.fixed64 == null) {
result.fixed64 = new ArrayList<Long>();
}
result.fixed64.add(value);
return this;
}
/** Add a length-delimited value. */
public Builder addLengthDelimited(ByteString value) {
if (result.lengthDelimited == null) {
result.lengthDelimited = new ArrayList<ByteString>();
}
result.lengthDelimited.add(value);
return this;
}
/** Add an embedded group. */
public Builder addGroup(UnknownFieldSet value) {
if (result.group == null) {
result.group = new ArrayList<UnknownFieldSet>();
}
result.group.add(value);
return this;
}
}
}
}
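A hypothetical sketch exercising the API above: build an UnknownFieldSet by hand and round-trip it through its wire-format serialization:

import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.UnknownFieldSet;

public class UnknownFieldSetRoundTrip {
  public static void main(String[] args) throws InvalidProtocolBufferException {
    UnknownFieldSet fields = UnknownFieldSet.newBuilder()
        .mergeVarintField(1, 42)                        // field 1: varint 42
        .addField(2, UnknownFieldSet.Field.newBuilder()
            .addFixed32(7)
            .build())                                   // field 2: fixed32 7
        .build();
    byte[] encoded = fields.toByteArray();
    UnknownFieldSet parsed = UnknownFieldSet.parseFrom(encoded);
    System.out.println(parsed.getField(1).getVarintList());  // prints [42]
  }
}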

View File

@ -0,0 +1,99 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
/**
* This class is used internally by the Protocol Buffer library and generated
* message implementations. It is public only because those generated messages
* do not reside in the {@code protocol2} package. Others should not use this
* class directly.
*
* This class contains constants and helper functions useful for dealing with
* the Protocol Buffer wire format.
*
* @author kenton@google.com Kenton Varda
*/
public final class WireFormat {
// Do not allow instantiation.
private WireFormat() {}
static final int WIRETYPE_VARINT = 0;
static final int WIRETYPE_FIXED64 = 1;
static final int WIRETYPE_LENGTH_DELIMITED = 2;
static final int WIRETYPE_START_GROUP = 3;
static final int WIRETYPE_END_GROUP = 4;
static final int WIRETYPE_FIXED32 = 5;
static final int TAG_TYPE_BITS = 3;
static final int TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1;
/** Given a tag value, determines the wire type (the lower 3 bits). */
static int getTagWireType(int tag) {
return tag & TAG_TYPE_MASK;
}
/** Given a tag value, determines the field number (the upper 29 bits). */
public static int getTagFieldNumber(int tag) {
return tag >>> TAG_TYPE_BITS;
}
/** Makes a tag value given a field number and wire type. */
static int makeTag(int fieldNumber, int wireType) {
return (fieldNumber << TAG_TYPE_BITS) | wireType;
}
static int getWireFormatForFieldType(Descriptors.FieldDescriptor.Type type) {
switch (type) {
case DOUBLE : return WIRETYPE_FIXED64;
case FLOAT : return WIRETYPE_FIXED32;
case INT64 : return WIRETYPE_VARINT;
case UINT64 : return WIRETYPE_VARINT;
case INT32 : return WIRETYPE_VARINT;
case FIXED64 : return WIRETYPE_FIXED64;
case FIXED32 : return WIRETYPE_FIXED32;
case BOOL : return WIRETYPE_VARINT;
case STRING : return WIRETYPE_LENGTH_DELIMITED;
case GROUP : return WIRETYPE_START_GROUP;
case MESSAGE : return WIRETYPE_LENGTH_DELIMITED;
case BYTES : return WIRETYPE_LENGTH_DELIMITED;
case UINT32 : return WIRETYPE_VARINT;
case ENUM : return WIRETYPE_VARINT;
case SFIXED32: return WIRETYPE_FIXED32;
case SFIXED64: return WIRETYPE_FIXED64;
case SINT32 : return WIRETYPE_VARINT;
case SINT64 : return WIRETYPE_VARINT;
}
throw new RuntimeException(
"There is no way to get here, but the compiler thinks otherwise.");
}
// Field numbers for fields in MessageSet wire format.
static final int MESSAGE_SET_ITEM = 1;
static final int MESSAGE_SET_TYPE_ID = 2;
static final int MESSAGE_SET_MESSAGE = 3;
// Tag numbers.
static final int MESSAGE_SET_ITEM_TAG =
makeTag(MESSAGE_SET_ITEM, WIRETYPE_START_GROUP);
static final int MESSAGE_SET_ITEM_END_TAG =
makeTag(MESSAGE_SET_ITEM, WIRETYPE_END_GROUP);
static final int MESSAGE_SET_TYPE_ID_TAG =
makeTag(MESSAGE_SET_TYPE_ID, WIRETYPE_VARINT);
static final int MESSAGE_SET_MESSAGE_TAG =
makeTag(MESSAGE_SET_MESSAGE, WIRETYPE_LENGTH_DELIMITED);
}
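A hypothetical sketch of the tag arithmetic above: a tag packs the field number into the upper bits and the 3-bit wire type into the lower bits. Several of these helpers are package-private, so the sketch recomputes the values inline:

public class TagMath {
  private static final int TAG_TYPE_BITS = 3;

  public static void main(String[] args) {
    int fieldNumber = 16;
    int wireType = 0;                                     // WIRETYPE_VARINT
    int tag = (fieldNumber << TAG_TYPE_BITS) | wireType;  // 16 << 3 | 0 = 128
    System.out.println(tag >>> TAG_TYPE_BITS);            // field number: 16
    System.out.println(tag & ((1 << TAG_TYPE_BITS) - 1)); // wire type: 0
  }
}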

View File

@ -0,0 +1,362 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto;
import protobuf_unittest.UnittestProto.ForeignMessage;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import protobuf_unittest.UnittestProto.TestRequired;
import protobuf_unittest.UnittestProto.TestRequiredForeign;
import protobuf_unittest.UnittestOptimizeFor.TestOptimizedForSize;
import junit.framework.TestCase;
import java.util.Map;
/**
* Unit test for {@link AbstractMessage}.
*
* @author kenton@google.com Kenton Varda
*/
public class AbstractMessageTest extends TestCase {
/**
* Extends AbstractMessage and wraps some other message object. The methods
* of the Message interface which aren't explicitly implemented by
* AbstractMessage are forwarded to the wrapped object. This allows us to
* test that AbstractMessage's implementations work even if the wrapped
* object does not use them.
*/
private static class AbstractMessageWrapper extends AbstractMessage {
private final Message wrappedMessage;
public AbstractMessageWrapper(Message wrappedMessage) {
this.wrappedMessage = wrappedMessage;
}
public Descriptors.Descriptor getDescriptorForType() {
return wrappedMessage.getDescriptorForType();
}
public AbstractMessageWrapper getDefaultInstanceForType() {
return new AbstractMessageWrapper(
wrappedMessage.getDefaultInstanceForType());
}
public Map<Descriptors.FieldDescriptor, Object> getAllFields() {
return wrappedMessage.getAllFields();
}
public boolean hasField(Descriptors.FieldDescriptor field) {
return wrappedMessage.hasField(field);
}
public Object getField(Descriptors.FieldDescriptor field) {
return wrappedMessage.getField(field);
}
public int getRepeatedFieldCount(Descriptors.FieldDescriptor field) {
return wrappedMessage.getRepeatedFieldCount(field);
}
public Object getRepeatedField(
Descriptors.FieldDescriptor field, int index) {
return wrappedMessage.getRepeatedField(field, index);
}
public UnknownFieldSet getUnknownFields() {
return wrappedMessage.getUnknownFields();
}
public Builder newBuilderForType() {
return new Builder(wrappedMessage.newBuilderForType());
}
static class Builder extends AbstractMessage.Builder<Builder> {
private final Message.Builder wrappedBuilder;
public Builder(Message.Builder wrappedBuilder) {
this.wrappedBuilder = wrappedBuilder;
}
public AbstractMessageWrapper build() {
return new AbstractMessageWrapper(wrappedBuilder.build());
}
public AbstractMessageWrapper buildPartial() {
return new AbstractMessageWrapper(wrappedBuilder.buildPartial());
}
public Builder clone() {
return new Builder(wrappedBuilder.clone());
}
public boolean isInitialized() {
return clone().buildPartial().isInitialized();
}
public Descriptors.Descriptor getDescriptorForType() {
return wrappedBuilder.getDescriptorForType();
}
public AbstractMessageWrapper getDefaultInstanceForType() {
return new AbstractMessageWrapper(
wrappedBuilder.getDefaultInstanceForType());
}
public Map<Descriptors.FieldDescriptor, Object> getAllFields() {
return wrappedBuilder.getAllFields();
}
public Builder newBuilderForField(Descriptors.FieldDescriptor field) {
return new Builder(wrappedBuilder.newBuilderForField(field));
}
public boolean hasField(Descriptors.FieldDescriptor field) {
return wrappedBuilder.hasField(field);
}
public Object getField(Descriptors.FieldDescriptor field) {
return wrappedBuilder.getField(field);
}
public Builder setField(Descriptors.FieldDescriptor field, Object value) {
wrappedBuilder.setField(field, value);
return this;
}
public Builder clearField(Descriptors.FieldDescriptor field) {
wrappedBuilder.clearField(field);
return this;
}
public int getRepeatedFieldCount(Descriptors.FieldDescriptor field) {
return wrappedBuilder.getRepeatedFieldCount(field);
}
public Object getRepeatedField(
Descriptors.FieldDescriptor field, int index) {
return wrappedBuilder.getRepeatedField(field, index);
}
public Builder setRepeatedField(Descriptors.FieldDescriptor field,
int index, Object value) {
wrappedBuilder.setRepeatedField(field, index, value);
return this;
}
public Builder addRepeatedField(
Descriptors.FieldDescriptor field, Object value) {
wrappedBuilder.addRepeatedField(field, value);
return this;
}
public UnknownFieldSet getUnknownFields() {
return wrappedBuilder.getUnknownFields();
}
public Builder setUnknownFields(UnknownFieldSet unknownFields) {
wrappedBuilder.setUnknownFields(unknownFields);
return this;
}
}
}
// =================================================================
TestUtil.ReflectionTester reflectionTester =
new TestUtil.ReflectionTester(TestAllTypes.getDescriptor(), null);
TestUtil.ReflectionTester extensionsReflectionTester =
new TestUtil.ReflectionTester(TestAllExtensions.getDescriptor(),
TestUtil.getExtensionRegistry());
public void testClear() throws Exception {
AbstractMessageWrapper message =
new AbstractMessageWrapper.Builder(
TestAllTypes.newBuilder(TestUtil.getAllSet()))
.clear().build();
TestUtil.assertClear((TestAllTypes) message.wrappedMessage);
}
public void testCopy() throws Exception {
AbstractMessageWrapper message =
new AbstractMessageWrapper.Builder(TestAllTypes.newBuilder())
.mergeFrom(TestUtil.getAllSet()).build();
TestUtil.assertAllFieldsSet((TestAllTypes) message.wrappedMessage);
}
public void testSerializedSize() throws Exception {
TestAllTypes message = TestUtil.getAllSet();
Message abstractMessage = new AbstractMessageWrapper(TestUtil.getAllSet());
assertEquals(message.getSerializedSize(),
abstractMessage.getSerializedSize());
}
public void testSerialization() throws Exception {
Message abstractMessage = new AbstractMessageWrapper(TestUtil.getAllSet());
TestUtil.assertAllFieldsSet(
TestAllTypes.parseFrom(abstractMessage.toByteString()));
assertEquals(TestUtil.getAllSet().toByteString(),
abstractMessage.toByteString());
}
public void testParsing() throws Exception {
AbstractMessageWrapper.Builder builder =
new AbstractMessageWrapper.Builder(TestAllTypes.newBuilder());
AbstractMessageWrapper message =
builder.mergeFrom(TestUtil.getAllSet().toByteString()).build();
TestUtil.assertAllFieldsSet((TestAllTypes) message.wrappedMessage);
}
public void testOptimizedForSize() throws Exception {
// We're mostly only checking that this class was compiled successfully.
TestOptimizedForSize message =
TestOptimizedForSize.newBuilder().setI(1).build();
message = TestOptimizedForSize.parseFrom(message.toByteString());
assertEquals(2, message.getSerializedSize());
}
// -----------------------------------------------------------------
// Tests for isInitialized().
private static final TestRequired TEST_REQUIRED_UNINITIALIZED =
TestRequired.getDefaultInstance();
private static final TestRequired TEST_REQUIRED_INITIALIZED =
TestRequired.newBuilder().setA(1).setB(2).setC(3).build();
public void testIsInitialized() throws Exception {
TestRequired.Builder builder = TestRequired.newBuilder();
AbstractMessageWrapper.Builder abstractBuilder =
new AbstractMessageWrapper.Builder(builder);
assertFalse(abstractBuilder.isInitialized());
builder.setA(1);
assertFalse(abstractBuilder.isInitialized());
builder.setB(1);
assertFalse(abstractBuilder.isInitialized());
builder.setC(1);
assertTrue(abstractBuilder.isInitialized());
}
public void testForeignIsInitialized() throws Exception {
TestRequiredForeign.Builder builder = TestRequiredForeign.newBuilder();
AbstractMessageWrapper.Builder abstractBuilder =
new AbstractMessageWrapper.Builder(builder);
assertTrue(abstractBuilder.isInitialized());
builder.setOptionalMessage(TEST_REQUIRED_UNINITIALIZED);
assertFalse(abstractBuilder.isInitialized());
builder.setOptionalMessage(TEST_REQUIRED_INITIALIZED);
assertTrue(abstractBuilder.isInitialized());
builder.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED);
assertFalse(abstractBuilder.isInitialized());
builder.setRepeatedMessage(0, TEST_REQUIRED_INITIALIZED);
assertTrue(abstractBuilder.isInitialized());
}
// -----------------------------------------------------------------
// Tests for mergeFrom
static final TestAllTypes MERGE_SOURCE =
TestAllTypes.newBuilder()
.setOptionalInt32(1)
.setOptionalString("foo")
.setOptionalForeignMessage(ForeignMessage.getDefaultInstance())
.addRepeatedString("bar")
.build();
static final TestAllTypes MERGE_DEST =
TestAllTypes.newBuilder()
.setOptionalInt64(2)
.setOptionalString("baz")
.setOptionalForeignMessage(ForeignMessage.newBuilder().setC(3).build())
.addRepeatedString("qux")
.build();
static final String MERGE_RESULT_TEXT =
"optional_int32: 1\n" +
"optional_int64: 2\n" +
"optional_string: \"foo\"\n" +
"optional_foreign_message {\n" +
" c: 3\n" +
"}\n" +
"repeated_string: \"qux\"\n" +
"repeated_string: \"bar\"\n";
public void testMergeFrom() throws Exception {
AbstractMessageWrapper result =
new AbstractMessageWrapper.Builder(
TestAllTypes.newBuilder(MERGE_DEST))
.mergeFrom(MERGE_SOURCE).build();
assertEquals(MERGE_RESULT_TEXT, result.toString());
}
// -----------------------------------------------------------------
// Tests for equals and hashCode
public void testEqualsAndHashCode() {
TestAllTypes a = TestUtil.getAllSet();
TestAllTypes b = TestAllTypes.newBuilder().build();
TestAllTypes c = TestAllTypes.newBuilder(b).addRepeatedString("x").build();
TestAllTypes d = TestAllTypes.newBuilder(c).addRepeatedString("y").build();
TestAllExtensions e = TestUtil.getAllExtensionsSet();
TestAllExtensions f = TestAllExtensions.newBuilder(e)
.addExtension(UnittestProto.repeatedInt32Extension, 999).build();
checkEqualsIsConsistent(a);
checkEqualsIsConsistent(b);
checkEqualsIsConsistent(c);
checkEqualsIsConsistent(d);
checkEqualsIsConsistent(e);
checkEqualsIsConsistent(f);
checkNotEqual(a, b);
checkNotEqual(a, c);
checkNotEqual(a, d);
checkNotEqual(a, e);
checkNotEqual(a, f);
checkNotEqual(b, c);
checkNotEqual(b, d);
checkNotEqual(b, e);
checkNotEqual(b, f);
checkNotEqual(c, d);
checkNotEqual(c, e);
checkNotEqual(c, f);
checkNotEqual(d, e);
checkNotEqual(d, f);
checkNotEqual(e, f);
}
/**
* Asserts that the given protos are equal and have the same hash code.
*/
private void checkEqualsIsConsistent(Message message) {
// Object should be equal to itself.
assertEquals(message, message);
// Object should be equal to a dynamic copy of itself.
DynamicMessage dynamic = DynamicMessage.newBuilder(message).build();
assertEquals(message, dynamic);
assertEquals(dynamic, message);
assertEquals(dynamic.hashCode(), message.hashCode());
}
/**
* Asserts that the given protos are not equal and have different hash codes.
*
* @warning It's valid for non-equal objects to have the same hash code, so
* this test is stricter than it needs to be. However, this should happen
* relatively rarely.
*/
private void checkNotEqual(Message m1, Message m2) {
String equalsError = String.format("%s should not be equal to %s", m1, m2);
assertFalse(equalsError, m1.equals(m2));
assertFalse(equalsError, m2.equals(m1));
assertFalse(
String.format("%s should have a different hash code from %s", m1, m2),
m1.hashCode() == m2.hashCode());
}
}

View File

@ -0,0 +1,401 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestRecursiveMessage;
import junit.framework.TestCase;
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.InputStream;
import java.io.IOException;
/**
* Unit test for {@link CodedInputStream}.
*
* @author kenton@google.com Kenton Varda
*/
public class CodedInputStreamTest extends TestCase {
/**
* Helper to construct a byte array from a bunch of bytes. The inputs are
* actually ints so that I can use hex notation and not get stupid errors
* about precision.
*/
private byte[] bytes(int... bytesAsInts) {
byte[] bytes = new byte[bytesAsInts.length];
for (int i = 0; i < bytesAsInts.length; i++) {
bytes[i] = (byte) bytesAsInts[i];
}
return bytes;
}
/**
* An InputStream which limits the number of bytes it reads at a time.
* We use this to make sure that CodedInputStream doesn't screw up when
* reading in small blocks.
*/
private static final class SmallBlockInputStream extends FilterInputStream {
private final int blockSize;
public SmallBlockInputStream(byte[] data, int blockSize) {
this(new ByteArrayInputStream(data), blockSize);
}
public SmallBlockInputStream(InputStream in, int blockSize) {
super(in);
this.blockSize = blockSize;
}
public int read(byte[] b) throws IOException {
return super.read(b, 0, Math.min(b.length, blockSize));
}
public int read(byte[] b, int off, int len) throws IOException {
return super.read(b, off, Math.min(len, blockSize));
}
}
/**
* Parses the given bytes using readRawVarint32() and readRawVarint64() and
* checks that the result matches the given value.
*/
private void assertReadVarint(byte[] data, long value) throws Exception {
CodedInputStream input = CodedInputStream.newInstance(data);
assertEquals((int)value, input.readRawVarint32());
input = CodedInputStream.newInstance(data);
assertEquals(value, input.readRawVarint64());
// Try different block sizes.
for (int blockSize = 1; blockSize <= 16; blockSize *= 2) {
input = CodedInputStream.newInstance(
new SmallBlockInputStream(data, blockSize));
assertEquals((int)value, input.readRawVarint32());
input = CodedInputStream.newInstance(
new SmallBlockInputStream(data, blockSize));
assertEquals(value, input.readRawVarint64());
}
}
/**
* Parses the given bytes using readRawVarint32() and readRawVarint64() and
* expects them to fail with an InvalidProtocolBufferException whose
* description matches the given one.
*/
private void assertReadVarintFailure(
InvalidProtocolBufferException expected, byte[] data)
throws Exception {
CodedInputStream input = CodedInputStream.newInstance(data);
try {
input.readRawVarint32();
fail("Should have thrown an exception.");
} catch (InvalidProtocolBufferException e) {
assertEquals(expected.getMessage(), e.getMessage());
}
input = CodedInputStream.newInstance(data);
try {
input.readRawVarint64();
fail("Should have thrown an exception.");
} catch (InvalidProtocolBufferException e) {
assertEquals(expected.getMessage(), e.getMessage());
}
}
/** Tests readRawVarint32() and readRawVarint64(). */
public void testReadVarint() throws Exception {
assertReadVarint(bytes(0x00), 0);
assertReadVarint(bytes(0x01), 1);
assertReadVarint(bytes(0x7f), 127);
// 14882
assertReadVarint(bytes(0xa2, 0x74), (0x22 << 0) | (0x74 << 7));
// 2961488830
assertReadVarint(bytes(0xbe, 0xf7, 0x92, 0x84, 0x0b),
(0x3e << 0) | (0x77 << 7) | (0x12 << 14) | (0x04 << 21) |
(0x0bL << 28));
// 64-bit
// 7256456126
assertReadVarint(bytes(0xbe, 0xf7, 0x92, 0x84, 0x1b),
(0x3e << 0) | (0x77 << 7) | (0x12 << 14) | (0x04 << 21) |
(0x1bL << 28));
// 41256202580718336
assertReadVarint(
bytes(0x80, 0xe6, 0xeb, 0x9c, 0xc3, 0xc9, 0xa4, 0x49),
(0x00 << 0) | (0x66 << 7) | (0x6b << 14) | (0x1c << 21) |
(0x43L << 28) | (0x49L << 35) | (0x24L << 42) | (0x49L << 49));
// 11964378330978735131
assertReadVarint(
bytes(0x9b, 0xa8, 0xf9, 0xc2, 0xbb, 0xd6, 0x80, 0x85, 0xa6, 0x01),
(0x1b << 0) | (0x28 << 7) | (0x79 << 14) | (0x42 << 21) |
(0x3bL << 28) | (0x56L << 35) | (0x00L << 42) |
(0x05L << 49) | (0x26L << 56) | (0x01L << 63));
// Failures
assertReadVarintFailure(
InvalidProtocolBufferException.malformedVarint(),
bytes(0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
0x00));
assertReadVarintFailure(
InvalidProtocolBufferException.truncatedMessage(),
bytes(0x80));
}
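/**
 * Reference decoder spelling out the varint wire format exercised above:
 * each byte carries seven payload bits, least-significant group first, and
 * a set high bit means another byte follows.  This is an illustrative
 * sketch only, not how CodedInputStream implements readRawVarint64().
 */
private static long sketchDecodeVarint(byte[] data) {
long result = 0;
for (int i = 0; i < data.length; i++) {
// Accumulate the low seven bits of each byte at the next position.
result |= (long) (data[i] & 0x7f) << (7 * i);
if ((data[i] & 0x80) == 0) {
return result;  // High bit clear: this was the last byte.
}
}
throw new IllegalArgumentException("Truncated varint.");
}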
/**
* Parses the given bytes using readRawLittleEndian32() and checks
* that the result matches the given value.
*/
private void assertReadLittleEndian32(byte[] data, int value)
throws Exception {
CodedInputStream input = CodedInputStream.newInstance(data);
assertEquals(value, input.readRawLittleEndian32());
// Try different block sizes.
for (int blockSize = 1; blockSize <= 16; blockSize *= 2) {
input = CodedInputStream.newInstance(
new SmallBlockInputStream(data, blockSize));
assertEquals(value, input.readRawLittleEndian32());
}
}
/**
* Parses the given bytes using readRawLittleEndian64() and checks
* that the result matches the given value.
*/
private void assertReadLittleEndian64(byte[] data, long value)
throws Exception {
CodedInputStream input = CodedInputStream.newInstance(data);
assertEquals(value, input.readRawLittleEndian64());
// Try different block sizes.
for (int blockSize = 1; blockSize <= 16; blockSize *= 2) {
input = CodedInputStream.newInstance(
new SmallBlockInputStream(data, blockSize));
assertEquals(value, input.readRawLittleEndian64());
}
}
/** Tests readRawLittleEndian32() and readRawLittleEndian64(). */
public void testReadLittleEndian() throws Exception {
assertReadLittleEndian32(bytes(0x78, 0x56, 0x34, 0x12), 0x12345678);
assertReadLittleEndian32(bytes(0xf0, 0xde, 0xbc, 0x9a), 0x9abcdef0);
assertReadLittleEndian64(
bytes(0xf0, 0xde, 0xbc, 0x9a, 0x78, 0x56, 0x34, 0x12),
0x123456789abcdef0L);
assertReadLittleEndian64(
bytes(0x78, 0x56, 0x34, 0x12, 0xf0, 0xde, 0xbc, 0x9a),
0x9abcdef012345678L);
}
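/**
 * Reference decoder for the little-endian fixed-width format exercised
 * above: byte 0 is least significant.  Illustrative sketch only, not the
 * library's implementation of readRawLittleEndian32().
 */
private static int sketchReadLittleEndian32(byte[] b) {
return (b[0] & 0xff)
| ((b[1] & 0xff) << 8)
| ((b[2] & 0xff) << 16)
| ((b[3] & 0xff) << 24);
}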
/** Test decodeZigZag32() and decodeZigZag64(). */
public void testDecodeZigZag() throws Exception {
assertEquals( 0, CodedInputStream.decodeZigZag32(0));
assertEquals(-1, CodedInputStream.decodeZigZag32(1));
assertEquals( 1, CodedInputStream.decodeZigZag32(2));
assertEquals(-2, CodedInputStream.decodeZigZag32(3));
assertEquals(0x3FFFFFFF, CodedInputStream.decodeZigZag32(0x7FFFFFFE));
assertEquals(0xC0000000, CodedInputStream.decodeZigZag32(0x7FFFFFFF));
assertEquals(0x7FFFFFFF, CodedInputStream.decodeZigZag32(0xFFFFFFFE));
assertEquals(0x80000000, CodedInputStream.decodeZigZag32(0xFFFFFFFF));
assertEquals( 0, CodedInputStream.decodeZigZag64(0));
assertEquals(-1, CodedInputStream.decodeZigZag64(1));
assertEquals( 1, CodedInputStream.decodeZigZag64(2));
assertEquals(-2, CodedInputStream.decodeZigZag64(3));
assertEquals(0x000000003FFFFFFFL,
CodedInputStream.decodeZigZag64(0x000000007FFFFFFEL));
assertEquals(0xFFFFFFFFC0000000L,
CodedInputStream.decodeZigZag64(0x000000007FFFFFFFL));
assertEquals(0x000000007FFFFFFFL,
CodedInputStream.decodeZigZag64(0x00000000FFFFFFFEL));
assertEquals(0xFFFFFFFF80000000L,
CodedInputStream.decodeZigZag64(0x00000000FFFFFFFFL));
assertEquals(0x7FFFFFFFFFFFFFFFL,
CodedInputStream.decodeZigZag64(0xFFFFFFFFFFFFFFFEL));
assertEquals(0x8000000000000000L,
CodedInputStream.decodeZigZag64(0xFFFFFFFFFFFFFFFFL));
}
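/**
 * Reference ZigZag decoder documenting the arithmetic behind the expected
 * values above.  Illustrative sketch; CodedInputStream.decodeZigZag32/64
 * expose the same mapping.
 */
private static int sketchDecodeZigZag32(int n) {
// The unsigned shift recovers the magnitude; the low bit selects the sign.
return (n >>> 1) ^ -(n & 1);
}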
/** Tests reading and parsing a whole message with every field type. */
public void testReadWholeMessage() throws Exception {
TestAllTypes message = TestUtil.getAllSet();
byte[] rawBytes = message.toByteArray();
assertEquals(rawBytes.length, message.getSerializedSize());
TestAllTypes message2 = TestAllTypes.parseFrom(rawBytes);
TestUtil.assertAllFieldsSet(message2);
// Try different block sizes.
for (int blockSize = 1; blockSize < 256; blockSize *= 2) {
message2 = TestAllTypes.parseFrom(
new SmallBlockInputStream(rawBytes, blockSize));
TestUtil.assertAllFieldsSet(message2);
}
}
/** Tests skipField(). */
public void testSkipWholeMessage() throws Exception {
TestAllTypes message = TestUtil.getAllSet();
byte[] rawBytes = message.toByteArray();
// Create two parallel inputs. Parse one as unknown fields while using
// skipField() to skip each field on the other. Expect the same tags.
CodedInputStream input1 = CodedInputStream.newInstance(rawBytes);
CodedInputStream input2 = CodedInputStream.newInstance(rawBytes);
UnknownFieldSet.Builder unknownFields = UnknownFieldSet.newBuilder();
while (true) {
int tag = input1.readTag();
assertEquals(tag, input2.readTag());
if (tag == 0) {
break;
}
unknownFields.mergeFieldFrom(tag, input1);
input2.skipField(tag);
}
}
public void testReadHugeBlob() throws Exception {
// Allocate and initialize a 1MB blob.
byte[] blob = new byte[1 << 20];
for (int i = 0; i < blob.length; i++) {
blob[i] = (byte)i;
}
// Make a message containing it.
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TestUtil.setAllFields(builder);
builder.setOptionalBytes(ByteString.copyFrom(blob));
TestAllTypes message = builder.build();
// Serialize and parse it. Make sure to parse from an InputStream, not
// directly from a ByteString, so that CodedInputStream uses buffered
// reading.
TestAllTypes message2 =
TestAllTypes.parseFrom(message.toByteString().newInput());
assertEquals(message.getOptionalBytes(), message2.getOptionalBytes());
// Make sure all the other fields were parsed correctly.
TestAllTypes message3 = TestAllTypes.newBuilder(message2)
.setOptionalBytes(TestUtil.getAllSet().getOptionalBytes())
.build();
TestUtil.assertAllFieldsSet(message3);
}
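// A length-delimited field whose declared length is absurdly large must be
// rejected rather than trusted; otherwise readBytes() could be tricked into
// allocating a near-2GB buffer for what is actually a 32-byte stream.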
public void testReadMaliciouslyLargeBlob() throws Exception {
ByteString.Output rawOutput = ByteString.newOutput();
CodedOutputStream output = CodedOutputStream.newInstance(rawOutput);
int tag = WireFormat.makeTag(1, WireFormat.WIRETYPE_LENGTH_DELIMITED);
output.writeRawVarint32(tag);
output.writeRawVarint32(0x7FFFFFFF);
output.writeRawBytes(new byte[32]); // Pad with a few bytes; their contents don't matter.

output.flush();
CodedInputStream input = rawOutput.toByteString().newCodedInput();
assertEquals(tag, input.readTag());
try {
input.readBytes();
fail("Should have thrown an exception!");
} catch (InvalidProtocolBufferException e) {
// success.
}
}
private TestRecursiveMessage makeRecursiveMessage(int depth) {
if (depth == 0) {
return TestRecursiveMessage.newBuilder().setI(5).build();
} else {
return TestRecursiveMessage.newBuilder()
.setA(makeRecursiveMessage(depth - 1)).build();
}
}
private void assertMessageDepth(TestRecursiveMessage message, int depth) {
if (depth == 0) {
assertFalse(message.hasA());
assertEquals(5, message.getI());
} else {
assertTrue(message.hasA());
assertMessageDepth(message.getA(), depth - 1);
}
}
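// Nesting depth is limited so that a deeply nested (possibly malicious)
// message fails cleanly with InvalidProtocolBufferException instead of
// overflowing the parser's call stack; setRecursionLimit() lowers the
// default limit further in the second half of the test below.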
public void testMaliciousRecursion() throws Exception {
ByteString data64 = makeRecursiveMessage(64).toByteString();
ByteString data65 = makeRecursiveMessage(65).toByteString();
assertMessageDepth(TestRecursiveMessage.parseFrom(data64), 64);
try {
TestRecursiveMessage.parseFrom(data65);
fail("Should have thrown an exception!");
} catch (InvalidProtocolBufferException e) {
// success.
}
CodedInputStream input = data64.newCodedInput();
input.setRecursionLimit(8);
try {
TestRecursiveMessage.parseFrom(input);
fail("Should have thrown an exception!");
} catch (InvalidProtocolBufferException e) {
// success.
}
}
public void testSizeLimit() throws Exception {
CodedInputStream input = CodedInputStream.newInstance(
TestUtil.getAllSet().toByteString().newInput());
input.setSizeLimit(16);
try {
TestAllTypes.parseFrom(input);
fail("Should have thrown an exception!");
} catch (InvalidProtocolBufferException e) {
// success.
}
}
/**
* Tests that if we read a string that contains invalid UTF-8, no exception
* is thrown. Instead, the invalid bytes are replaced with the Unicode
* "replacement character" U+FFFD.
*/
public void testReadInvalidUtf8() throws Exception {
ByteString.Output rawOutput = ByteString.newOutput();
CodedOutputStream output = CodedOutputStream.newInstance(rawOutput);
int tag = WireFormat.makeTag(1, WireFormat.WIRETYPE_LENGTH_DELIMITED);
output.writeRawVarint32(tag);
output.writeRawVarint32(1);
output.writeRawBytes(new byte[] { (byte)0x80 });
output.flush();
CodedInputStream input = rawOutput.toByteString().newCodedInput();
assertEquals(tag, input.readTag());
String text = input.readString();
assertEquals(0xfffd, text.charAt(0));
}
}

View File

@ -0,0 +1,280 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto.TestAllTypes;
import junit.framework.TestCase;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
/**
* Unit test for {@link CodedOutputStream}.
*
* @author kenton@google.com Kenton Varda
*/
public class CodedOutputStreamTest extends TestCase {
/**
* Helper to construct a byte array from a bunch of bytes. The inputs are
* actually ints so that I can use hex notation and not get stupid errors
* about precision.
*/
private byte[] bytes(int... bytesAsInts) {
byte[] bytes = new byte[bytesAsInts.length];
for (int i = 0; i < bytesAsInts.length; i++) {
bytes[i] = (byte) bytesAsInts[i];
}
return bytes;
}
/** Arrays.asList() does not work with arrays of primitives. :( */
private List<Byte> toList(byte[] bytes) {
List<Byte> result = new ArrayList<Byte>();
for (byte b : bytes) {
result.add(b);
}
return result;
}
private void assertEqualBytes(byte[] a, byte[] b) {
assertEquals(toList(a), toList(b));
}
/**
* Writes the given value using writeRawVarint32() and writeRawVarint64() and
* checks that the result matches the given bytes.
*/
private void assertWriteVarint(byte[] data, long value) throws Exception {
// Only do 32-bit write if the value fits in 32 bits.
if ((value >>> 32) == 0) {
ByteArrayOutputStream rawOutput = new ByteArrayOutputStream();
CodedOutputStream output = CodedOutputStream.newInstance(rawOutput);
output.writeRawVarint32((int) value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
// Also try computing size.
assertEquals(data.length,
CodedOutputStream.computeRawVarint32Size((int) value));
}
{
ByteArrayOutputStream rawOutput = new ByteArrayOutputStream();
CodedOutputStream output = CodedOutputStream.newInstance(rawOutput);
output.writeRawVarint64(value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
// Also try computing size.
assertEquals(data.length,
CodedOutputStream.computeRawVarint64Size(value));
}
// Try different block sizes.
for (int blockSize = 1; blockSize <= 16; blockSize *= 2) {
// Only do 32-bit write if the value fits in 32 bits.
if ((value >>> 32) == 0) {
ByteArrayOutputStream rawOutput = new ByteArrayOutputStream();
CodedOutputStream output =
CodedOutputStream.newInstance(rawOutput, blockSize);
output.writeRawVarint32((int) value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
}
{
ByteArrayOutputStream rawOutput = new ByteArrayOutputStream();
CodedOutputStream output =
CodedOutputStream.newInstance(rawOutput, blockSize);
output.writeRawVarint64(value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
}
}
}
/** Tests writeRawVarint32() and writeRawVarint64(). */
public void testWriteVarint() throws Exception {
assertWriteVarint(bytes(0x00), 0);
assertWriteVarint(bytes(0x01), 1);
assertWriteVarint(bytes(0x7f), 127);
// 14882
assertWriteVarint(bytes(0xa2, 0x74), (0x22 << 0) | (0x74 << 7));
// 2961488830
assertWriteVarint(bytes(0xbe, 0xf7, 0x92, 0x84, 0x0b),
(0x3e << 0) | (0x77 << 7) | (0x12 << 14) | (0x04 << 21) |
(0x0bL << 28));
// 64-bit
// 7256456126
assertWriteVarint(bytes(0xbe, 0xf7, 0x92, 0x84, 0x1b),
(0x3e << 0) | (0x77 << 7) | (0x12 << 14) | (0x04 << 21) |
(0x1bL << 28));
// 41256202580718336
assertWriteVarint(
bytes(0x80, 0xe6, 0xeb, 0x9c, 0xc3, 0xc9, 0xa4, 0x49),
(0x00 << 0) | (0x66 << 7) | (0x6b << 14) | (0x1c << 21) |
(0x43L << 28) | (0x49L << 35) | (0x24L << 42) | (0x49L << 49));
// 11964378330978735131
assertWriteVarint(
bytes(0x9b, 0xa8, 0xf9, 0xc2, 0xbb, 0xd6, 0x80, 0x85, 0xa6, 0x01),
(0x1b << 0) | (0x28 << 7) | (0x79 << 14) | (0x42 << 21) |
(0x3bL << 28) | (0x56L << 35) | (0x00L << 42) |
(0x05L << 49) | (0x26L << 56) | (0x01L << 63));
}
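/**
 * Reference encoder mirroring the varint layout exercised above: emit seven
 * bits at a time, least-significant group first, setting the high bit on
 * every byte except the last.  Illustrative sketch only; CodedOutputStream
 * writes into its own internal buffer instead.
 */
private static byte[] sketchEncodeVarint(long value) {
ByteArrayOutputStream out = new ByteArrayOutputStream();
while ((value & ~0x7fL) != 0) {
out.write((int) ((value & 0x7f) | 0x80));
value >>>= 7;
}
out.write((int) value);
return out.toByteArray();
}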
/**
* Writes the given value using writeRawLittleEndian32() and checks
* that the result matches the given bytes.
*/
private void assertWriteLittleEndian32(byte[] data, int value)
throws Exception {
ByteArrayOutputStream rawOutput = new ByteArrayOutputStream();
CodedOutputStream output = CodedOutputStream.newInstance(rawOutput);
output.writeRawLittleEndian32(value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
// Try different block sizes.
for (int blockSize = 1; blockSize <= 16; blockSize *= 2) {
rawOutput = new ByteArrayOutputStream();
output = CodedOutputStream.newInstance(rawOutput, blockSize);
output.writeRawLittleEndian32(value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
}
}
/**
* Writes the given value using writeRawLittleEndian64() and checks
* that the result matches the given bytes.
*/
private void assertWriteLittleEndian64(byte[] data, long value)
throws Exception {
ByteArrayOutputStream rawOutput = new ByteArrayOutputStream();
CodedOutputStream output = CodedOutputStream.newInstance(rawOutput);
output.writeRawLittleEndian64(value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
// Try different block sizes.
for (int blockSize = 1; blockSize <= 16; blockSize *= 2) {
rawOutput = new ByteArrayOutputStream();
output = CodedOutputStream.newInstance(rawOutput, blockSize);
output.writeRawLittleEndian64(value);
output.flush();
assertEqualBytes(data, rawOutput.toByteArray());
}
}
/** Tests writeRawLittleEndian32() and writeRawLittleEndian64(). */
public void testWriteLittleEndian() throws Exception {
assertWriteLittleEndian32(bytes(0x78, 0x56, 0x34, 0x12), 0x12345678);
assertWriteLittleEndian32(bytes(0xf0, 0xde, 0xbc, 0x9a), 0x9abcdef0);
assertWriteLittleEndian64(
bytes(0xf0, 0xde, 0xbc, 0x9a, 0x78, 0x56, 0x34, 0x12),
0x123456789abcdef0L);
assertWriteLittleEndian64(
bytes(0x78, 0x56, 0x34, 0x12, 0xf0, 0xde, 0xbc, 0x9a),
0x9abcdef012345678L);
}
/** Test encodeZigZag32() and encodeZigZag64(). */
public void testEncodeZigZag() throws Exception {
assertEquals(0, CodedOutputStream.encodeZigZag32( 0));
assertEquals(1, CodedOutputStream.encodeZigZag32(-1));
assertEquals(2, CodedOutputStream.encodeZigZag32( 1));
assertEquals(3, CodedOutputStream.encodeZigZag32(-2));
assertEquals(0x7FFFFFFE, CodedOutputStream.encodeZigZag32(0x3FFFFFFF));
assertEquals(0x7FFFFFFF, CodedOutputStream.encodeZigZag32(0xC0000000));
assertEquals(0xFFFFFFFE, CodedOutputStream.encodeZigZag32(0x7FFFFFFF));
assertEquals(0xFFFFFFFF, CodedOutputStream.encodeZigZag32(0x80000000));
assertEquals(0, CodedOutputStream.encodeZigZag64( 0));
assertEquals(1, CodedOutputStream.encodeZigZag64(-1));
assertEquals(2, CodedOutputStream.encodeZigZag64( 1));
assertEquals(3, CodedOutputStream.encodeZigZag64(-2));
assertEquals(0x000000007FFFFFFEL,
CodedOutputStream.encodeZigZag64(0x000000003FFFFFFFL));
assertEquals(0x000000007FFFFFFFL,
CodedOutputStream.encodeZigZag64(0xFFFFFFFFC0000000L));
assertEquals(0x00000000FFFFFFFEL,
CodedOutputStream.encodeZigZag64(0x000000007FFFFFFFL));
assertEquals(0x00000000FFFFFFFFL,
CodedOutputStream.encodeZigZag64(0xFFFFFFFF80000000L));
assertEquals(0xFFFFFFFFFFFFFFFEL,
CodedOutputStream.encodeZigZag64(0x7FFFFFFFFFFFFFFFL));
assertEquals(0xFFFFFFFFFFFFFFFFL,
CodedOutputStream.encodeZigZag64(0x8000000000000000L));
// Some easier-to-verify round-trip tests. The inputs (other than 0, 1, -1)
// were chosen semi-randomly via keyboard bashing.
assertEquals(0,
CodedOutputStream.encodeZigZag32(CodedInputStream.decodeZigZag32(0)));
assertEquals(1,
CodedOutputStream.encodeZigZag32(CodedInputStream.decodeZigZag32(1)));
assertEquals(-1,
CodedOutputStream.encodeZigZag32(CodedInputStream.decodeZigZag32(-1)));
assertEquals(14927,
CodedOutputStream.encodeZigZag32(CodedInputStream.decodeZigZag32(14927)));
assertEquals(-3612,
CodedOutputStream.encodeZigZag32(CodedInputStream.decodeZigZag32(-3612)));
assertEquals(0,
CodedOutputStream.encodeZigZag64(CodedInputStream.decodeZigZag64(0)));
assertEquals(1,
CodedOutputStream.encodeZigZag64(CodedInputStream.decodeZigZag64(1)));
assertEquals(-1,
CodedOutputStream.encodeZigZag64(CodedInputStream.decodeZigZag64(-1)));
assertEquals(14927,
CodedOutputStream.encodeZigZag64(CodedInputStream.decodeZigZag64(14927)));
assertEquals(-3612,
CodedOutputStream.encodeZigZag64(CodedInputStream.decodeZigZag64(-3612)));
assertEquals(856912304801416L,
CodedOutputStream.encodeZigZag64(
CodedInputStream.decodeZigZag64(
856912304801416L)));
assertEquals(-75123905439571256L,
CodedOutputStream.encodeZigZag64(
CodedInputStream.decodeZigZag64(
-75123905439571256L)));
}
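/**
 * Reference ZigZag encoder documenting the arithmetic behind the expected
 * values above.  Illustrative sketch; CodedOutputStream.encodeZigZag32/64
 * expose the same mapping.
 */
private static int sketchEncodeZigZag32(int n) {
// The left shift makes room for the sign; the arithmetic right shift smears
// the sign bit so that negative inputs map to odd results.
return (n << 1) ^ (n >> 31);
}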
/** Tests writing a whole message with every field type. */
public void testWriteWholeMessage() throws Exception {
TestAllTypes message = TestUtil.getAllSet();
byte[] rawBytes = message.toByteArray();
assertEqualBytes(TestUtil.getGoldenMessage().toByteArray(), rawBytes);
// Try different block sizes.
for (int blockSize = 1; blockSize < 256; blockSize *= 2) {
ByteArrayOutputStream rawOutput = new ByteArrayOutputStream();
CodedOutputStream output =
CodedOutputStream.newInstance(rawOutput, blockSize);
message.writeTo(output);
output.flush();
assertEqualBytes(rawBytes, rawOutput.toByteArray());
}
}
}

View File

@ -0,0 +1,313 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import com.google.protobuf.Descriptors.FileDescriptor;
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.Descriptors.FieldDescriptor;
import com.google.protobuf.Descriptors.EnumDescriptor;
import com.google.protobuf.Descriptors.EnumValueDescriptor;
import com.google.protobuf.Descriptors.ServiceDescriptor;
import com.google.protobuf.Descriptors.MethodDescriptor;
import com.google.protobuf.test.UnittestImport;
import com.google.protobuf.test.UnittestImport.ImportEnum;
import com.google.protobuf.test.UnittestImport.ImportMessage;
import protobuf_unittest.UnittestProto;
import protobuf_unittest.UnittestProto.ForeignEnum;
import protobuf_unittest.UnittestProto.ForeignMessage;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import protobuf_unittest.UnittestProto.TestExtremeDefaultValues;
import protobuf_unittest.UnittestProto.TestRequired;
import protobuf_unittest.UnittestProto.TestService;
import junit.framework.TestCase;
import java.util.Arrays;
import java.util.Collections;
/**
* Unit test for {@link Descriptors}.
*
* @author kenton@google.com Kenton Varda
*/
public class DescriptorsTest extends TestCase {
public void testFileDescriptor() throws Exception {
FileDescriptor file = UnittestProto.getDescriptor();
assertEquals("google/protobuf/unittest.proto", file.getName());
assertEquals("protobuf_unittest", file.getPackage());
assertEquals("UnittestProto", file.getOptions().getJavaOuterClassname());
assertEquals("google/protobuf/unittest.proto",
file.toProto().getName());
assertEquals(Arrays.asList(UnittestImport.getDescriptor()),
file.getDependencies());
Descriptor messageType = TestAllTypes.getDescriptor();
assertEquals(messageType, file.getMessageTypes().get(0));
assertEquals(messageType, file.findMessageTypeByName("TestAllTypes"));
assertNull(file.findMessageTypeByName("NoSuchType"));
assertNull(file.findMessageTypeByName("protobuf_unittest.TestAllTypes"));
for (int i = 0; i < file.getMessageTypes().size(); i++) {
assertEquals(i, file.getMessageTypes().get(i).getIndex());
}
EnumDescriptor enumType = ForeignEnum.getDescriptor();
assertEquals(enumType, file.getEnumTypes().get(0));
assertEquals(enumType, file.findEnumTypeByName("ForeignEnum"));
assertNull(file.findEnumTypeByName("NoSuchType"));
assertNull(file.findEnumTypeByName("protobuf_unittest.ForeignEnum"));
assertEquals(Arrays.asList(ImportEnum.getDescriptor()),
UnittestImport.getDescriptor().getEnumTypes());
for (int i = 0; i < file.getEnumTypes().size(); i++) {
assertEquals(i, file.getEnumTypes().get(i).getIndex());
}
ServiceDescriptor service = TestService.getDescriptor();
assertEquals(service, file.getServices().get(0));
assertEquals(service, file.findServiceByName("TestService"));
assertNull(file.findServiceByName("NoSuchType"));
assertNull(file.findServiceByName("protobuf_unittest.TestService"));
assertEquals(Collections.emptyList(),
UnittestImport.getDescriptor().getServices());
for (int i = 0; i < file.getServices().size(); i++) {
assertEquals(i, file.getServices().get(i).getIndex());
}
FieldDescriptor extension =
UnittestProto.optionalInt32Extension.getDescriptor();
assertEquals(extension, file.getExtensions().get(0));
assertEquals(extension,
file.findExtensionByName("optional_int32_extension"));
assertNull(file.findExtensionByName("no_such_ext"));
assertNull(file.findExtensionByName(
"protobuf_unittest.optional_int32_extension"));
assertEquals(Collections.emptyList(),
UnittestImport.getDescriptor().getExtensions());
for (int i = 0; i < file.getExtensions().size(); i++) {
assertEquals(i, file.getExtensions().get(i).getIndex());
}
}
public void testDescriptor() throws Exception {
Descriptor messageType = TestAllTypes.getDescriptor();
Descriptor nestedType = TestAllTypes.NestedMessage.getDescriptor();
assertEquals("TestAllTypes", messageType.getName());
assertEquals("protobuf_unittest.TestAllTypes", messageType.getFullName());
assertEquals(UnittestProto.getDescriptor(), messageType.getFile());
assertNull(messageType.getContainingType());
assertEquals(DescriptorProtos.MessageOptions.getDefaultInstance(),
messageType.getOptions());
assertEquals("TestAllTypes", messageType.toProto().getName());
assertEquals("NestedMessage", nestedType.getName());
assertEquals("protobuf_unittest.TestAllTypes.NestedMessage",
nestedType.getFullName());
assertEquals(UnittestProto.getDescriptor(), nestedType.getFile());
assertEquals(messageType, nestedType.getContainingType());
FieldDescriptor field = messageType.getFields().get(0);
assertEquals("optional_int32", field.getName());
assertEquals(field, messageType.findFieldByName("optional_int32"));
assertNull(messageType.findFieldByName("no_such_field"));
assertEquals(field, messageType.findFieldByNumber(1));
assertNull(messageType.findFieldByNumber(571283));
for (int i = 0; i < messageType.getFields().size(); i++) {
assertEquals(i, messageType.getFields().get(i).getIndex());
}
assertEquals(nestedType, messageType.getNestedTypes().get(0));
assertEquals(nestedType, messageType.findNestedTypeByName("NestedMessage"));
assertNull(messageType.findNestedTypeByName("NoSuchType"));
for (int i = 0; i < messageType.getNestedTypes().size(); i++) {
assertEquals(i, messageType.getNestedTypes().get(i).getIndex());
}
EnumDescriptor enumType = TestAllTypes.NestedEnum.getDescriptor();
assertEquals(enumType, messageType.getEnumTypes().get(0));
assertEquals(enumType, messageType.findEnumTypeByName("NestedEnum"));
assertNull(messageType.findEnumTypeByName("NoSuchType"));
for (int i = 0; i < messageType.getEnumTypes().size(); i++) {
assertEquals(i, messageType.getEnumTypes().get(i).getIndex());
}
}
public void testFieldDescriptor() throws Exception {
Descriptor messageType = TestAllTypes.getDescriptor();
FieldDescriptor primitiveField =
messageType.findFieldByName("optional_int32");
FieldDescriptor enumField =
messageType.findFieldByName("optional_nested_enum");
FieldDescriptor messageField =
messageType.findFieldByName("optional_foreign_message");
FieldDescriptor cordField =
messageType.findFieldByName("optional_cord");
FieldDescriptor extension =
UnittestProto.optionalInt32Extension.getDescriptor();
FieldDescriptor nestedExtension = TestRequired.single.getDescriptor();
assertEquals("optional_int32", primitiveField.getName());
assertEquals("protobuf_unittest.TestAllTypes.optional_int32",
primitiveField.getFullName());
assertEquals(1, primitiveField.getNumber());
assertEquals(messageType, primitiveField.getContainingType());
assertEquals(UnittestProto.getDescriptor(), primitiveField.getFile());
assertEquals(FieldDescriptor.Type.INT32, primitiveField.getType());
assertEquals(FieldDescriptor.JavaType.INT, primitiveField.getJavaType());
assertEquals(DescriptorProtos.FieldOptions.getDefaultInstance(),
primitiveField.getOptions());
assertFalse(primitiveField.isExtension());
assertEquals("optional_int32", primitiveField.toProto().getName());
assertEquals("optional_nested_enum", enumField.getName());
assertEquals(FieldDescriptor.Type.ENUM, enumField.getType());
assertEquals(FieldDescriptor.JavaType.ENUM, enumField.getJavaType());
assertEquals(TestAllTypes.NestedEnum.getDescriptor(),
enumField.getEnumType());
assertEquals("optional_foreign_message", messageField.getName());
assertEquals(FieldDescriptor.Type.MESSAGE, messageField.getType());
assertEquals(FieldDescriptor.JavaType.MESSAGE, messageField.getJavaType());
assertEquals(ForeignMessage.getDescriptor(), messageField.getMessageType());
assertEquals("optional_cord", cordField.getName());
assertEquals(FieldDescriptor.Type.STRING, cordField.getType());
assertEquals(FieldDescriptor.JavaType.STRING, cordField.getJavaType());
assertEquals(DescriptorProtos.FieldOptions.CType.CORD,
cordField.getOptions().getCtype());
assertEquals("optional_int32_extension", extension.getName());
assertEquals("protobuf_unittest.optional_int32_extension",
extension.getFullName());
assertEquals(1, extension.getNumber());
assertEquals(TestAllExtensions.getDescriptor(),
extension.getContainingType());
assertEquals(UnittestProto.getDescriptor(), extension.getFile());
assertEquals(FieldDescriptor.Type.INT32, extension.getType());
assertEquals(FieldDescriptor.JavaType.INT, extension.getJavaType());
assertEquals(DescriptorProtos.FieldOptions.getDefaultInstance(),
extension.getOptions());
assertTrue(extension.isExtension());
assertEquals(null, extension.getExtensionScope());
assertEquals("optional_int32_extension", extension.toProto().getName());
assertEquals("single", nestedExtension.getName());
assertEquals("protobuf_unittest.TestRequired.single",
nestedExtension.getFullName());
assertEquals(TestRequired.getDescriptor(),
nestedExtension.getExtensionScope());
}
public void testFieldDescriptorLabel() throws Exception {
FieldDescriptor requiredField =
TestRequired.getDescriptor().findFieldByName("a");
FieldDescriptor optionalField =
TestAllTypes.getDescriptor().findFieldByName("optional_int32");
FieldDescriptor repeatedField =
TestAllTypes.getDescriptor().findFieldByName("repeated_int32");
assertTrue(requiredField.isRequired());
assertFalse(requiredField.isRepeated());
assertFalse(optionalField.isRequired());
assertFalse(optionalField.isRepeated());
assertFalse(repeatedField.isRequired());
assertTrue(repeatedField.isRepeated());
}
public void testFieldDescriptorDefault() throws Exception {
Descriptor d = TestAllTypes.getDescriptor();
assertFalse(d.findFieldByName("optional_int32").hasDefaultValue());
assertEquals(0, d.findFieldByName("optional_int32").getDefaultValue());
assertTrue(d.findFieldByName("default_int32").hasDefaultValue());
assertEquals(41, d.findFieldByName("default_int32").getDefaultValue());
d = TestExtremeDefaultValues.getDescriptor();
assertEquals(
ByteString.copyFrom(
"\0\001\007\b\f\n\r\t\013\\\'\"\u00fe".getBytes("ISO-8859-1")),
d.findFieldByName("escaped_bytes").getDefaultValue());
assertEquals(-1, d.findFieldByName("large_uint32").getDefaultValue());
assertEquals(-1L, d.findFieldByName("large_uint64").getDefaultValue());
}
public void testEnumDescriptor() throws Exception {
EnumDescriptor enumType = ForeignEnum.getDescriptor();
EnumDescriptor nestedType = TestAllTypes.NestedEnum.getDescriptor();
assertEquals("ForeignEnum", enumType.getName());
assertEquals("protobuf_unittest.ForeignEnum", enumType.getFullName());
assertEquals(UnittestProto.getDescriptor(), enumType.getFile());
assertNull(enumType.getContainingType());
assertEquals(DescriptorProtos.EnumOptions.getDefaultInstance(),
enumType.getOptions());
assertEquals("NestedEnum", nestedType.getName());
assertEquals("protobuf_unittest.TestAllTypes.NestedEnum",
nestedType.getFullName());
assertEquals(UnittestProto.getDescriptor(), nestedType.getFile());
assertEquals(TestAllTypes.getDescriptor(), nestedType.getContainingType());
EnumValueDescriptor value = ForeignEnum.FOREIGN_FOO.getValueDescriptor();
assertEquals(value, enumType.getValues().get(0));
assertEquals("FOREIGN_FOO", value.getName());
assertEquals(4, value.getNumber());
assertEquals(value, enumType.findValueByName("FOREIGN_FOO"));
assertEquals(value, enumType.findValueByNumber(4));
assertNull(enumType.findValueByName("NO_SUCH_VALUE"));
for (int i = 0; i < enumType.getValues().size(); i++) {
assertEquals(i, enumType.getValues().get(i).getIndex());
}
}
public void testServiceDescriptor() throws Exception {
ServiceDescriptor service = TestService.getDescriptor();
assertEquals("TestService", service.getName());
assertEquals("protobuf_unittest.TestService", service.getFullName());
assertEquals(UnittestProto.getDescriptor(), service.getFile());
assertEquals(2, service.getMethods().size());
MethodDescriptor fooMethod = service.getMethods().get(0);
assertEquals("Foo", fooMethod.getName());
assertEquals(UnittestProto.FooRequest.getDescriptor(),
fooMethod.getInputType());
assertEquals(UnittestProto.FooResponse.getDescriptor(),
fooMethod.getOutputType());
assertEquals(fooMethod, service.findMethodByName("Foo"));
MethodDescriptor barMethod = service.getMethods().get(1);
assertEquals("Bar", barMethod.getName());
assertEquals(UnittestProto.BarRequest.getDescriptor(),
barMethod.getInputType());
assertEquals(UnittestProto.BarResponse.getDescriptor(),
barMethod.getOutputType());
assertEquals(barMethod, service.findMethodByName("Bar"));
assertNull(service.findMethodByName("NoSuchMethod"));
for (int i = 0; i < service.getMethods().size(); i++) {
assertEquals(i, service.getMethods().get(i).getIndex());
}
}
}

View File

@ -0,0 +1,120 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import junit.framework.TestCase;
/**
* Unit test for {@link DynamicMessage}. See also {@link MessageTest}, which
* tests some {@link DynamicMessage} functionality.
*
* @author kenton@google.com Kenton Varda
*/
public class DynamicMessageTest extends TestCase {
TestUtil.ReflectionTester reflectionTester =
new TestUtil.ReflectionTester(TestAllTypes.getDescriptor(), null);
TestUtil.ReflectionTester extensionsReflectionTester =
new TestUtil.ReflectionTester(TestAllExtensions.getDescriptor(),
TestUtil.getExtensionRegistry());
public void testDynamicMessageAccessors() throws Exception {
Message.Builder builder =
DynamicMessage.newBuilder(TestAllTypes.getDescriptor());
reflectionTester.setAllFieldsViaReflection(builder);
Message message = builder.build();
reflectionTester.assertAllFieldsSetViaReflection(message);
}
public void testDynamicMessageExtensionAccessors() throws Exception {
// We don't need to extensively test DynamicMessage's handling of
// extensions because, frankly, it doesn't do anything special with them.
// It treats them just like any other fields.
Message.Builder builder =
DynamicMessage.newBuilder(TestAllExtensions.getDescriptor());
extensionsReflectionTester.setAllFieldsViaReflection(builder);
Message message = builder.build();
extensionsReflectionTester.assertAllFieldsSetViaReflection(message);
}
public void testDynamicMessageRepeatedSetters() throws Exception {
Message.Builder builder =
DynamicMessage.newBuilder(TestAllTypes.getDescriptor());
reflectionTester.setAllFieldsViaReflection(builder);
reflectionTester.modifyRepeatedFieldsViaReflection(builder);
Message message = builder.build();
reflectionTester.assertRepeatedFieldsModifiedViaReflection(message);
}
public void testDynamicMessageDefaults() throws Exception {
reflectionTester.assertClearViaReflection(
DynamicMessage.getDefaultInstance(TestAllTypes.getDescriptor()));
reflectionTester.assertClearViaReflection(
DynamicMessage.newBuilder(TestAllTypes.getDescriptor()).build());
}
public void testDynamicMessageSerializedSize() throws Exception {
TestAllTypes message = TestUtil.getAllSet();
Message.Builder dynamicBuilder =
DynamicMessage.newBuilder(TestAllTypes.getDescriptor());
reflectionTester.setAllFieldsViaReflection(dynamicBuilder);
Message dynamicMessage = dynamicBuilder.build();
assertEquals(message.getSerializedSize(),
dynamicMessage.getSerializedSize());
}
public void testDynamicMessageSerialization() throws Exception {
Message.Builder builder =
DynamicMessage.newBuilder(TestAllTypes.getDescriptor());
reflectionTester.setAllFieldsViaReflection(builder);
Message message = builder.build();
ByteString rawBytes = message.toByteString();
TestAllTypes message2 = TestAllTypes.parseFrom(rawBytes);
TestUtil.assertAllFieldsSet(message2);
// In fact, the serialized forms should be exactly the same, byte-for-byte.
assertEquals(TestUtil.getAllSet().toByteString(), rawBytes);
}
public void testDynamicMessageParsing() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TestUtil.setAllFields(builder);
TestAllTypes message = builder.build();
ByteString rawBytes = message.toByteString();
Message message2 =
DynamicMessage.parseFrom(TestAllTypes.getDescriptor(), rawBytes);
reflectionTester.assertAllFieldsSetViaReflection(message2);
}
public void testDynamicMessageCopy() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TestUtil.setAllFields(builder);
TestAllTypes message = builder.build();
DynamicMessage copy = DynamicMessage.newBuilder(message).build();
reflectionTester.assertAllFieldsSetViaReflection(copy);
}
}

View File

@ -0,0 +1,246 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto;
import protobuf_unittest.UnittestProto.ForeignMessage;
import protobuf_unittest.UnittestProto.ForeignEnum;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import protobuf_unittest.UnittestProto.TestExtremeDefaultValues;
import protobuf_unittest.MultipleFilesTestProto;
import protobuf_unittest.MessageWithNoOuter;
import protobuf_unittest.EnumWithNoOuter;
import protobuf_unittest.ServiceWithNoOuter;
import junit.framework.TestCase;
import java.util.Arrays;
/**
* Unit test for generated messages and generated code. See also
* {@link MessageTest}, which tests some generated message functionality.
*
* @author kenton@google.com Kenton Varda
*/
public class GeneratedMessageTest extends TestCase {
TestUtil.ReflectionTester reflectionTester =
new TestUtil.ReflectionTester(TestAllTypes.getDescriptor(), null);
public void testDefaultInstance() throws Exception {
assertSame(TestAllTypes.getDefaultInstance(),
TestAllTypes.getDefaultInstance().getDefaultInstanceForType());
assertSame(TestAllTypes.getDefaultInstance(),
TestAllTypes.newBuilder().getDefaultInstanceForType());
}
public void testAccessors() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TestUtil.setAllFields(builder);
TestAllTypes message = builder.build();
TestUtil.assertAllFieldsSet(message);
}
public void testRepeatedSetters() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TestUtil.setAllFields(builder);
TestUtil.modifyRepeatedFields(builder);
TestAllTypes message = builder.build();
TestUtil.assertRepeatedFieldsModified(message);
}
public void testRepeatedAppend() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
builder.addAllRepeatedInt32(Arrays.asList(1, 2, 3, 4));
builder.addAllRepeatedForeignEnum(Arrays.asList(ForeignEnum.FOREIGN_BAZ));
ForeignMessage foreignMessage =
ForeignMessage.newBuilder().setC(12).build();
builder.addAllRepeatedForeignMessage(Arrays.asList(foreignMessage));
TestAllTypes message = builder.build();
assertEquals(message.getRepeatedInt32List(), Arrays.asList(1, 2, 3, 4));
assertEquals(message.getRepeatedForeignEnumList(),
Arrays.asList(ForeignEnum.FOREIGN_BAZ));
assertEquals(1, message.getRepeatedForeignMessageCount());
assertEquals(12, message.getRepeatedForeignMessage(0).getC());
}
public void testSettingForeignMessageUsingBuilder() throws Exception {
TestAllTypes message = TestAllTypes.newBuilder()
// Pass builder for foreign message instance.
.setOptionalForeignMessage(ForeignMessage.newBuilder().setC(123))
.build();
TestAllTypes expectedMessage = TestAllTypes.newBuilder()
// Create expected version passing foreign message instance explicitly.
.setOptionalForeignMessage(
ForeignMessage.newBuilder().setC(123).build())
.build();
// TODO(ngd): Upgrade to using real #equals method once implemented
assertEquals(expectedMessage.toString(), message.toString());
}
public void testSettingRepeatedForeignMessageUsingBuilder() throws Exception {
TestAllTypes message = TestAllTypes.newBuilder()
// Pass builder for foreign message instance.
.addRepeatedForeignMessage(ForeignMessage.newBuilder().setC(456))
.build();
TestAllTypes expectedMessage = TestAllTypes.newBuilder()
// Create expected version passing foreign message instance explicitly.
.addRepeatedForeignMessage(
ForeignMessage.newBuilder().setC(456).build())
.build();
assertEquals(expectedMessage.toString(), message.toString());
}
public void testDefaults() throws Exception {
TestUtil.assertClear(TestAllTypes.getDefaultInstance());
TestUtil.assertClear(TestAllTypes.newBuilder().build());
assertEquals("\u1234",
TestExtremeDefaultValues.getDefaultInstance().getUtf8String());
}
public void testReflectionGetters() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TestUtil.setAllFields(builder);
TestAllTypes message = builder.build();
reflectionTester.assertAllFieldsSetViaReflection(message);
}
public void testReflectionSetters() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
reflectionTester.setAllFieldsViaReflection(builder);
TestAllTypes message = builder.build();
TestUtil.assertAllFieldsSet(message);
}
public void testReflectionRepeatedSetters() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
reflectionTester.setAllFieldsViaReflection(builder);
reflectionTester.modifyRepeatedFieldsViaReflection(builder);
TestAllTypes message = builder.build();
TestUtil.assertRepeatedFieldsModified(message);
}
public void testReflectionDefaults() throws Exception {
reflectionTester.assertClearViaReflection(
TestAllTypes.getDefaultInstance());
reflectionTester.assertClearViaReflection(
TestAllTypes.newBuilder().build());
}
// =================================================================
// Extensions.
TestUtil.ReflectionTester extensionsReflectionTester =
new TestUtil.ReflectionTester(TestAllExtensions.getDescriptor(),
TestUtil.getExtensionRegistry());
public void testExtensionAccessors() throws Exception {
TestAllExtensions.Builder builder = TestAllExtensions.newBuilder();
TestUtil.setAllExtensions(builder);
TestAllExtensions message = builder.build();
TestUtil.assertAllExtensionsSet(message);
}
public void testExtensionRepeatedSetters() throws Exception {
TestAllExtensions.Builder builder = TestAllExtensions.newBuilder();
TestUtil.setAllExtensions(builder);
TestUtil.modifyRepeatedExtensions(builder);
TestAllExtensions message = builder.build();
TestUtil.assertRepeatedExtensionsModified(message);
}
public void testExtensionDefaults() throws Exception {
TestUtil.assertExtensionsClear(TestAllExtensions.getDefaultInstance());
TestUtil.assertExtensionsClear(TestAllExtensions.newBuilder().build());
}
public void testExtensionReflectionGetters() throws Exception {
TestAllExtensions.Builder builder = TestAllExtensions.newBuilder();
TestUtil.setAllExtensions(builder);
TestAllExtensions message = builder.build();
extensionsReflectionTester.assertAllFieldsSetViaReflection(message);
}
public void testExtensionReflectionSetters() throws Exception {
TestAllExtensions.Builder builder = TestAllExtensions.newBuilder();
extensionsReflectionTester.setAllFieldsViaReflection(builder);
TestAllExtensions message = builder.build();
TestUtil.assertAllExtensionsSet(message);
}
public void testExtensionReflectionRepeatedSetters() throws Exception {
TestAllExtensions.Builder builder = TestAllExtensions.newBuilder();
extensionsReflectionTester.setAllFieldsViaReflection(builder);
extensionsReflectionTester.modifyRepeatedFieldsViaReflection(builder);
TestAllExtensions message = builder.build();
TestUtil.assertRepeatedExtensionsModified(message);
}
public void testExtensionReflectionDefaults() throws Exception {
extensionsReflectionTester.assertClearViaReflection(
TestAllExtensions.getDefaultInstance());
extensionsReflectionTester.assertClearViaReflection(
TestAllExtensions.newBuilder().build());
}
public void testClearExtension() throws Exception {
// clearExtension() is not actually used in TestUtil, so try it manually.
assertFalse(
TestAllExtensions.newBuilder()
.setExtension(UnittestProto.optionalInt32Extension, 1)
.clearExtension(UnittestProto.optionalInt32Extension)
.hasExtension(UnittestProto.optionalInt32Extension));
assertEquals(0,
TestAllExtensions.newBuilder()
.addExtension(UnittestProto.repeatedInt32Extension, 1)
.clearExtension(UnittestProto.repeatedInt32Extension)
.getExtensionCount(UnittestProto.repeatedInt32Extension));
}
// =================================================================
// multiple_files_test
public void testMultipleFilesOption() throws Exception {
// We mostly just want to check that things compile.
MessageWithNoOuter message =
MessageWithNoOuter.newBuilder()
.setNested(MessageWithNoOuter.NestedMessage.newBuilder().setI(1))
.addForeign(TestAllTypes.newBuilder().setOptionalInt32(1))
.setNestedEnum(MessageWithNoOuter.NestedEnum.BAZ)
.setForeignEnum(EnumWithNoOuter.BAR)
.build();
assertEquals(message, MessageWithNoOuter.parseFrom(message.toByteString()));
assertEquals(MultipleFilesTestProto.getDescriptor(),
MessageWithNoOuter.getDescriptor().getFile());
Descriptors.FieldDescriptor field =
MessageWithNoOuter.getDescriptor().findFieldByName("foreign_enum");
assertEquals(EnumWithNoOuter.BAR.getValueDescriptor(),
message.getField(field));
assertEquals(MultipleFilesTestProto.getDescriptor(),
ServiceWithNoOuter.getDescriptor().getFile());
assertFalse(
TestAllExtensions.getDefaultInstance().hasExtension(
MultipleFilesTestProto.extensionWithOuter));
}
}

View File

@ -0,0 +1,299 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import protobuf_unittest.UnittestProto.TestRequired;
import protobuf_unittest.UnittestProto.TestRequiredForeign;
import protobuf_unittest.UnittestProto.ForeignMessage;
import junit.framework.TestCase;
/**
* Misc. unit tests for message operations that apply to both generated
* and dynamic messages.
*
* @author kenton@google.com Kenton Varda
*/
public class MessageTest extends TestCase {
// =================================================================
// Message-merging tests.
static final TestAllTypes MERGE_SOURCE =
TestAllTypes.newBuilder()
.setOptionalInt32(1)
.setOptionalString("foo")
.setOptionalForeignMessage(ForeignMessage.getDefaultInstance())
.addRepeatedString("bar")
.build();
static final TestAllTypes MERGE_DEST =
TestAllTypes.newBuilder()
.setOptionalInt64(2)
.setOptionalString("baz")
.setOptionalForeignMessage(ForeignMessage.newBuilder().setC(3).build())
.addRepeatedString("qux")
.build();
static final String MERGE_RESULT_TEXT =
"optional_int32: 1\n" +
"optional_int64: 2\n" +
"optional_string: \"foo\"\n" +
"optional_foreign_message {\n" +
" c: 3\n" +
"}\n" +
"repeated_string: \"qux\"\n" +
"repeated_string: \"bar\"\n";
public void testMergeFrom() throws Exception {
TestAllTypes result =
TestAllTypes.newBuilder(MERGE_DEST)
.mergeFrom(MERGE_SOURCE).build();
assertEquals(MERGE_RESULT_TEXT, result.toString());
}
/**
* Test merging a DynamicMessage into a GeneratedMessage. As long as they
* have the same descriptor, this should work, but it is an entirely different
* code path.
*/
public void testMergeFromDynamic() throws Exception {
TestAllTypes result =
TestAllTypes.newBuilder(MERGE_DEST)
.mergeFrom(DynamicMessage.newBuilder(MERGE_SOURCE).build())
.build();
assertEquals(MERGE_RESULT_TEXT, result.toString());
}
/** Test merging two DynamicMessages. */
public void testDynamicMergeFrom() throws Exception {
DynamicMessage result =
DynamicMessage.newBuilder(MERGE_DEST)
.mergeFrom(DynamicMessage.newBuilder(MERGE_SOURCE).build())
.build();
assertEquals(MERGE_RESULT_TEXT, result.toString());
}
// =================================================================
// Required-field-related tests.
private static final TestRequired TEST_REQUIRED_UNINITIALIZED =
TestRequired.getDefaultInstance();
private static final TestRequired TEST_REQUIRED_INITIALIZED =
TestRequired.newBuilder().setA(1).setB(2).setC(3).build();
public void testRequired() throws Exception {
TestRequired.Builder builder = TestRequired.newBuilder();
assertFalse(builder.isInitialized());
builder.setA(1);
assertFalse(builder.isInitialized());
builder.setB(1);
assertFalse(builder.isInitialized());
builder.setC(1);
assertTrue(builder.isInitialized());
}
public void testRequiredForeign() throws Exception {
TestRequiredForeign.Builder builder = TestRequiredForeign.newBuilder();
assertTrue(builder.isInitialized());
builder.setOptionalMessage(TEST_REQUIRED_UNINITIALIZED);
assertFalse(builder.isInitialized());
builder.setOptionalMessage(TEST_REQUIRED_INITIALIZED);
assertTrue(builder.isInitialized());
builder.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED);
assertFalse(builder.isInitialized());
builder.setRepeatedMessage(0, TEST_REQUIRED_INITIALIZED);
assertTrue(builder.isInitialized());
}
public void testRequiredExtension() throws Exception {
TestAllExtensions.Builder builder = TestAllExtensions.newBuilder();
assertTrue(builder.isInitialized());
builder.setExtension(TestRequired.single, TEST_REQUIRED_UNINITIALIZED);
assertFalse(builder.isInitialized());
builder.setExtension(TestRequired.single, TEST_REQUIRED_INITIALIZED);
assertTrue(builder.isInitialized());
builder.addExtension(TestRequired.multi, TEST_REQUIRED_UNINITIALIZED);
assertFalse(builder.isInitialized());
builder.setExtension(TestRequired.multi, 0, TEST_REQUIRED_INITIALIZED);
assertTrue(builder.isInitialized());
}
public void testRequiredDynamic() throws Exception {
Descriptors.Descriptor descriptor = TestRequired.getDescriptor();
DynamicMessage.Builder builder = DynamicMessage.newBuilder(descriptor);
assertFalse(builder.isInitialized());
builder.setField(descriptor.findFieldByName("a"), 1);
assertFalse(builder.isInitialized());
builder.setField(descriptor.findFieldByName("b"), 1);
assertFalse(builder.isInitialized());
builder.setField(descriptor.findFieldByName("c"), 1);
assertTrue(builder.isInitialized());
}
public void testRequiredDynamicForeign() throws Exception {
Descriptors.Descriptor descriptor = TestRequiredForeign.getDescriptor();
DynamicMessage.Builder builder = DynamicMessage.newBuilder(descriptor);
assertTrue(builder.isInitialized());
builder.setField(descriptor.findFieldByName("optional_message"),
TEST_REQUIRED_UNINITIALIZED);
assertFalse(builder.isInitialized());
builder.setField(descriptor.findFieldByName("optional_message"),
TEST_REQUIRED_INITIALIZED);
assertTrue(builder.isInitialized());
builder.addRepeatedField(descriptor.findFieldByName("repeated_message"),
TEST_REQUIRED_UNINITIALIZED);
assertFalse(builder.isInitialized());
builder.setRepeatedField(descriptor.findFieldByName("repeated_message"), 0,
TEST_REQUIRED_INITIALIZED);
assertTrue(builder.isInitialized());
}
public void testUninitializedException() throws Exception {
try {
TestRequired.newBuilder().build();
fail("Should have thrown an exception.");
} catch (UninitializedMessageException e) {
assertEquals("Message missing required fields: a, b, c", e.getMessage());
}
}
public void testBuildPartial() throws Exception {
// We're mostly testing that no exception is thrown.
TestRequired message = TestRequired.newBuilder().buildPartial();
assertFalse(message.isInitialized());
}
public void testNestedUninitializedException() throws Exception {
try {
TestRequiredForeign.newBuilder()
.setOptionalMessage(TEST_REQUIRED_UNINITIALIZED)
.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED)
.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED)
.build();
fail("Should have thrown an exception.");
} catch (UninitializedMessageException e) {
assertEquals(
"Message missing required fields: " +
"optional_message.a, " +
"optional_message.b, " +
"optional_message.c, " +
"repeated_message[0].a, " +
"repeated_message[0].b, " +
"repeated_message[0].c, " +
"repeated_message[1].a, " +
"repeated_message[1].b, " +
"repeated_message[1].c",
e.getMessage());
}
}
public void testBuildNestedPartial() throws Exception {
// We're mostly testing that no exception is thrown.
TestRequiredForeign message =
TestRequiredForeign.newBuilder()
.setOptionalMessage(TEST_REQUIRED_UNINITIALIZED)
.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED)
.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED)
.buildPartial();
assertFalse(message.isInitialized());
}
public void testParseUninitialized() throws Exception {
try {
TestRequired.parseFrom(ByteString.EMPTY);
fail("Should have thrown an exception.");
} catch (InvalidProtocolBufferException e) {
assertEquals("Message missing required fields: a, b, c", e.getMessage());
}
}
  public void testParseNestedUninitialized() throws Exception {
ByteString data =
TestRequiredForeign.newBuilder()
.setOptionalMessage(TEST_REQUIRED_UNINITIALIZED)
.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED)
.addRepeatedMessage(TEST_REQUIRED_UNINITIALIZED)
.buildPartial().toByteString();
try {
TestRequiredForeign.parseFrom(data);
fail("Should have thrown an exception.");
} catch (InvalidProtocolBufferException e) {
assertEquals(
"Message missing required fields: " +
"optional_message.a, " +
"optional_message.b, " +
"optional_message.c, " +
"repeated_message[0].a, " +
"repeated_message[0].b, " +
"repeated_message[0].c, " +
"repeated_message[1].a, " +
"repeated_message[1].b, " +
"repeated_message[1].c",
e.getMessage());
}
}
public void testDynamicUninitializedException() throws Exception {
try {
DynamicMessage.newBuilder(TestRequired.getDescriptor()).build();
fail("Should have thrown an exception.");
} catch (UninitializedMessageException e) {
assertEquals("Message missing required fields: a, b, c", e.getMessage());
}
}
public void testDynamicBuildPartial() throws Exception {
// We're mostly testing that no exception is thrown.
DynamicMessage message =
DynamicMessage.newBuilder(TestRequired.getDescriptor())
.buildPartial();
assertFalse(message.isInitialized());
}
  public void testDynamicParseUninitialized() throws Exception {
try {
Descriptors.Descriptor descriptor = TestRequired.getDescriptor();
DynamicMessage.parseFrom(descriptor, ByteString.EMPTY);
fail("Should have thrown an exception.");
} catch (InvalidProtocolBufferException e) {
assertEquals("Message missing required fields: a, b, c", e.getMessage());
}
}
}


@ -0,0 +1,164 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto.TestService;
import protobuf_unittest.UnittestProto.FooRequest;
import protobuf_unittest.UnittestProto.FooResponse;
import protobuf_unittest.UnittestProto.BarRequest;
import protobuf_unittest.UnittestProto.BarResponse;
import org.easymock.classextension.EasyMock;
import org.easymock.classextension.IMocksControl;
import org.easymock.IArgumentMatcher;
import junit.framework.TestCase;
/**
* Tests services and stubs.
*
* @author kenton@google.com Kenton Varda
*/
public class ServiceTest extends TestCase {
private IMocksControl control;
private RpcController mockController;
private final Descriptors.MethodDescriptor fooDescriptor =
TestService.getDescriptor().getMethods().get(0);
private final Descriptors.MethodDescriptor barDescriptor =
TestService.getDescriptor().getMethods().get(1);
protected void setUp() throws Exception {
super.setUp();
control = EasyMock.createStrictControl();
mockController = control.createMock(RpcController.class);
}
// =================================================================
/** Tests Service.callMethod(). */
public void testCallMethod() throws Exception {
FooRequest fooRequest = FooRequest.newBuilder().build();
BarRequest barRequest = BarRequest.newBuilder().build();
MockCallback<Message> fooCallback = new MockCallback<Message>();
MockCallback<Message> barCallback = new MockCallback<Message>();
TestService mockService = control.createMock(TestService.class);
mockService.foo(EasyMock.same(mockController), EasyMock.same(fooRequest),
this.<FooResponse>wrapsCallback(fooCallback));
mockService.bar(EasyMock.same(mockController), EasyMock.same(barRequest),
this.<BarResponse>wrapsCallback(barCallback));
control.replay();
mockService.callMethod(fooDescriptor, mockController,
fooRequest, fooCallback);
mockService.callMethod(barDescriptor, mockController,
barRequest, barCallback);
control.verify();
}
/** Tests Service.get{Request,Response}Prototype(). */
public void testGetPrototype() throws Exception {
TestService mockService = control.createMock(TestService.class);
assertSame(mockService.getRequestPrototype(fooDescriptor),
FooRequest.getDefaultInstance());
assertSame(mockService.getResponsePrototype(fooDescriptor),
FooResponse.getDefaultInstance());
assertSame(mockService.getRequestPrototype(barDescriptor),
BarRequest.getDefaultInstance());
assertSame(mockService.getResponsePrototype(barDescriptor),
BarResponse.getDefaultInstance());
}
/** Tests generated stubs. */
public void testStub() throws Exception {
FooRequest fooRequest = FooRequest.newBuilder().build();
BarRequest barRequest = BarRequest.newBuilder().build();
MockCallback<FooResponse> fooCallback = new MockCallback<FooResponse>();
MockCallback<BarResponse> barCallback = new MockCallback<BarResponse>();
RpcChannel mockChannel = control.createMock(RpcChannel.class);
TestService stub = TestService.newStub(mockChannel);
mockChannel.callMethod(
EasyMock.same(fooDescriptor),
EasyMock.same(mockController),
EasyMock.same(fooRequest),
EasyMock.same(FooResponse.getDefaultInstance()),
this.<Message>wrapsCallback(fooCallback));
mockChannel.callMethod(
EasyMock.same(barDescriptor),
EasyMock.same(mockController),
EasyMock.same(barRequest),
EasyMock.same(BarResponse.getDefaultInstance()),
this.<Message>wrapsCallback(barCallback));
control.replay();
stub.foo(mockController, fooRequest, fooCallback);
stub.bar(mockController, barRequest, barCallback);
control.verify();
}
// =================================================================
/**
* wrapsCallback() is an EasyMock argument predicate. wrapsCallback(c)
* matches a callback if calling that callback causes c to be called.
* In other words, c wraps the given callback.
*/
private <Type extends Message> RpcCallback<Type> wrapsCallback(
MockCallback callback) {
EasyMock.reportMatcher(new WrapsCallback(callback));
return null;
}
/** The parameter to wrapsCallback() must be a MockCallback. */
private static class MockCallback<Type extends Message>
implements RpcCallback<Type> {
private boolean called = false;
public boolean isCalled() { return called; }
public void reset() { called = false; }
public void run(Type message) { called = true; }
}
/** Implementation of the wrapsCallback() argument matcher. */
private static class WrapsCallback implements IArgumentMatcher {
private MockCallback callback;
public WrapsCallback(MockCallback callback) {
this.callback = callback;
}
@SuppressWarnings("unchecked")
public boolean matches(Object actual) {
if (!(actual instanceof RpcCallback)) {
return false;
}
RpcCallback actualCallback = (RpcCallback)actual;
callback.reset();
actualCallback.run(null);
return callback.isCalled();
}
public void appendTo(StringBuffer buffer) {
buffer.append("wrapsCallback(mockCallback)");
}
}
}

File diff suppressed because it is too large


@ -0,0 +1,534 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import protobuf_unittest.UnittestProto.TestEmptyMessage;
import protobuf_unittest.UnittestMset.TestMessageSet;
import protobuf_unittest.UnittestMset.TestMessageSetExtension1;
import protobuf_unittest.UnittestMset.TestMessageSetExtension2;
import junit.framework.TestCase;
import java.io.StringReader;
/**
* Test case for {@link TextFormat}.
*
* TODO(wenboz): ExtensionTest and rest of text_format_unittest.cc.
*
* @author wenboz@google.com (Wenbo Zhu)
*/
public class TextFormatTest extends TestCase {
// A basic string with different escapable characters for testing.
private final static String kEscapeTestString =
"\"A string with ' characters \n and \r newlines and \t tabs and \001 "
+ "slashes \\";
// A representation of the above string with all the characters escaped.
private final static String kEscapeTestStringEscaped =
"\"\\\"A string with \\' characters \\n and \\r newlines "
+ "and \\t tabs and \\001 slashes \\\\\"";
private static String allFieldsSetText = TestUtil.readTextFromFile(
"text_format_unittest_data.txt");
private static String allExtensionsSetText = TestUtil.readTextFromFile(
"text_format_unittest_extensions_data.txt");
private String exoticText =
"repeated_int32: -1\n" +
"repeated_int32: -2147483648\n" +
"repeated_int64: -1\n" +
"repeated_int64: -9223372036854775808\n" +
"repeated_uint32: 4294967295\n" +
"repeated_uint32: 2147483648\n" +
"repeated_uint64: 18446744073709551615\n" +
"repeated_uint64: 9223372036854775808\n" +
"repeated_double: 123.0\n" +
"repeated_double: 123.5\n" +
"repeated_double: 0.125\n" +
"repeated_double: 1.23E17\n" +
"repeated_double: 1.235E22\n" +
"repeated_double: 1.235E-18\n" +
"repeated_double: 123.456789\n" +
"repeated_double: Infinity\n" +
"repeated_double: -Infinity\n" +
"repeated_double: NaN\n" +
"repeated_string: \"\\000\\001\\a\\b\\f\\n\\r\\t\\v\\\\\\'\\\"" +
"\\341\\210\\264\"\n" +
"repeated_bytes: \"\\000\\001\\a\\b\\f\\n\\r\\t\\v\\\\\\'\\\"\\376\"\n";
private String messageSetText =
"[protobuf_unittest.TestMessageSetExtension1] {\n" +
" i: 123\n" +
"}\n" +
"[protobuf_unittest.TestMessageSetExtension2] {\n" +
" str: \"foo\"\n" +
"}\n";
/** Print TestAllTypes and compare with golden file. */
public void testPrintMessage() throws Exception {
String javaText = TextFormat.printToString(TestUtil.getAllSet());
// Java likes to add a trailing ".0" to floats and doubles. C printf
// (with %g format) does not. Our golden files are used for both
// C++ and Java TextFormat classes, so we need to conform.
javaText = javaText.replace(".0\n", "\n");
assertEquals(allFieldsSetText, javaText);
}
/** Print TestAllExtensions and compare with golden file. */
public void testPrintExtensions() throws Exception {
String javaText = TextFormat.printToString(TestUtil.getAllExtensionsSet());
// Java likes to add a trailing ".0" to floats and doubles. C printf
// (with %g format) does not. Our golden files are used for both
// C++ and Java TextFormat classes, so we need to conform.
javaText = javaText.replace(".0\n", "\n");
assertEquals(allExtensionsSetText, javaText);
}
public void testPrintUnknownFields() throws Exception {
// Test printing of unknown fields in a message.
TestEmptyMessage message =
TestEmptyMessage.newBuilder()
.setUnknownFields(
UnknownFieldSet.newBuilder()
.addField(5,
UnknownFieldSet.Field.newBuilder()
.addVarint(1)
.addFixed32(2)
.addFixed64(3)
.addLengthDelimited(ByteString.copyFromUtf8("4"))
.addGroup(
UnknownFieldSet.newBuilder()
.addField(10,
UnknownFieldSet.Field.newBuilder()
.addVarint(5)
.build())
.build())
.build())
.addField(8,
UnknownFieldSet.Field.newBuilder()
.addVarint(1)
.addVarint(2)
.addVarint(3)
.build())
.addField(15,
UnknownFieldSet.Field.newBuilder()
.addVarint(0xABCDEF1234567890L)
.addFixed32(0xABCD1234)
.addFixed64(0xABCDEF1234567890L)
.build())
.build())
.build();
assertEquals(
"5: 1\n" +
"5: 0x00000002\n" +
"5: 0x0000000000000003\n" +
"5: \"4\"\n" +
"5 {\n" +
" 10: 5\n" +
"}\n" +
"8: 1\n" +
"8: 2\n" +
"8: 3\n" +
"15: 12379813812177893520\n" +
"15: 0xabcd1234\n" +
"15: 0xabcdef1234567890\n",
TextFormat.printToString(message));
}
/**
* Helper to construct a ByteString from a String containing only 8-bit
* characters. The characters are converted directly to bytes, *not*
* encoded using UTF-8.
*/
private ByteString bytes(String str) throws Exception {
return ByteString.copyFrom(str.getBytes("ISO-8859-1"));
}
/**
* Helper to construct a ByteString from a bunch of bytes. The inputs are
* actually ints so that I can use hex notation and not get stupid errors
* about precision.
*/
private ByteString bytes(int... bytesAsInts) {
byte[] bytes = new byte[bytesAsInts.length];
for (int i = 0; i < bytesAsInts.length; i++) {
bytes[i] = (byte) bytesAsInts[i];
}
return ByteString.copyFrom(bytes);
}
public void testPrintExotic() throws Exception {
Message message = TestAllTypes.newBuilder()
// Signed vs. unsigned numbers.
.addRepeatedInt32 (-1)
.addRepeatedUint32(-1)
.addRepeatedInt64 (-1)
.addRepeatedUint64(-1)
.addRepeatedInt32 (1 << 31)
.addRepeatedUint32(1 << 31)
.addRepeatedInt64 (1l << 63)
.addRepeatedUint64(1l << 63)
// Floats of various precisions and exponents.
.addRepeatedDouble(123)
.addRepeatedDouble(123.5)
.addRepeatedDouble(0.125)
.addRepeatedDouble(123e15)
.addRepeatedDouble(123.5e20)
.addRepeatedDouble(123.5e-20)
.addRepeatedDouble(123.456789)
.addRepeatedDouble(Double.POSITIVE_INFINITY)
.addRepeatedDouble(Double.NEGATIVE_INFINITY)
.addRepeatedDouble(Double.NaN)
      // Strings and bytes that need escaping.
.addRepeatedString("\0\001\007\b\f\n\r\t\013\\\'\"\u1234")
.addRepeatedBytes(bytes("\0\001\007\b\f\n\r\t\013\\\'\"\u00fe"))
.build();
assertEquals(exoticText, message.toString());
}
public void testPrintMessageSet() throws Exception {
TestMessageSet messageSet =
TestMessageSet.newBuilder()
.setExtension(
TestMessageSetExtension1.messageSetExtension,
TestMessageSetExtension1.newBuilder().setI(123).build())
.setExtension(
TestMessageSetExtension2.messageSetExtension,
TestMessageSetExtension2.newBuilder().setStr("foo").build())
.build();
assertEquals(messageSetText, messageSet.toString());
}
// =================================================================
public void testParse() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TextFormat.merge(allFieldsSetText, builder);
TestUtil.assertAllFieldsSet(builder.build());
}
public void testParseReader() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TextFormat.merge(new StringReader(allFieldsSetText), builder);
TestUtil.assertAllFieldsSet(builder.build());
}
public void testParseExtensions() throws Exception {
TestAllExtensions.Builder builder = TestAllExtensions.newBuilder();
TextFormat.merge(allExtensionsSetText,
TestUtil.getExtensionRegistry(),
builder);
TestUtil.assertAllExtensionsSet(builder.build());
}
public void testParseExotic() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TextFormat.merge(exoticText, builder);
// Too lazy to check things individually. Don't try to debug this
// if testPrintExotic() is failing.
assertEquals(exoticText, builder.build().toString());
}
public void testParseMessageSet() throws Exception {
ExtensionRegistry extensionRegistry = ExtensionRegistry.newInstance();
extensionRegistry.add(TestMessageSetExtension1.messageSetExtension);
extensionRegistry.add(TestMessageSetExtension2.messageSetExtension);
TestMessageSet.Builder builder = TestMessageSet.newBuilder();
TextFormat.merge(messageSetText, extensionRegistry, builder);
TestMessageSet messageSet = builder.build();
assertTrue(messageSet.hasExtension(
TestMessageSetExtension1.messageSetExtension));
assertEquals(123, messageSet.getExtension(
TestMessageSetExtension1.messageSetExtension).getI());
assertTrue(messageSet.hasExtension(
TestMessageSetExtension2.messageSetExtension));
assertEquals("foo", messageSet.getExtension(
TestMessageSetExtension2.messageSetExtension).getStr());
}
public void testParseNumericEnum() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TextFormat.merge("optional_nested_enum: 2", builder);
assertEquals(TestAllTypes.NestedEnum.BAR, builder.getOptionalNestedEnum());
}
public void testParseAngleBrackets() throws Exception {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
TextFormat.merge("OptionalGroup: < a: 1 >", builder);
assertTrue(builder.hasOptionalGroup());
assertEquals(1, builder.getOptionalGroup().getA());
}
private void assertParseError(String error, String text) {
TestAllTypes.Builder builder = TestAllTypes.newBuilder();
try {
TextFormat.merge(text, TestUtil.getExtensionRegistry(), builder);
fail("Expected parse exception.");
} catch (TextFormat.ParseException e) {
assertEquals(error, e.getMessage());
}
}
public void testParseErrors() throws Exception {
assertParseError(
"1:16: Expected \":\".",
"optional_int32 123");
assertParseError(
"1:23: Expected identifier.",
"optional_nested_enum: ?");
assertParseError(
"1:18: Couldn't parse integer: Number must be positive: -1",
"optional_uint32: -1");
assertParseError(
"1:17: Couldn't parse integer: Number out of range for 32-bit signed " +
"integer: 82301481290849012385230157",
"optional_int32: 82301481290849012385230157");
assertParseError(
"1:16: Expected \"true\" or \"false\".",
"optional_bool: maybe");
assertParseError(
"1:18: Expected string.",
"optional_string: 123");
assertParseError(
"1:18: String missing ending quote.",
"optional_string: \"ueoauaoe");
assertParseError(
"1:18: String missing ending quote.",
"optional_string: \"ueoauaoe\n" +
"optional_int32: 123");
assertParseError(
"1:18: Invalid escape sequence: '\\z'",
"optional_string: \"\\z\"");
assertParseError(
"1:18: String missing ending quote.",
"optional_string: \"ueoauaoe\n" +
"optional_int32: 123");
assertParseError(
"1:2: Extension \"nosuchext\" not found in the ExtensionRegistry.",
"[nosuchext]: 123");
assertParseError(
"1:20: Extension \"protobuf_unittest.optional_int32_extension\" does " +
"not extend message type \"protobuf_unittest.TestAllTypes\".",
"[protobuf_unittest.optional_int32_extension]: 123");
assertParseError(
"1:1: Message type \"protobuf_unittest.TestAllTypes\" has no field " +
"named \"nosuchfield\".",
"nosuchfield: 123");
assertParseError(
"1:21: Expected \">\".",
"OptionalGroup < a: 1");
assertParseError(
"1:23: Enum type \"protobuf_unittest.TestAllTypes.NestedEnum\" has no " +
"value named \"NO_SUCH_VALUE\".",
"optional_nested_enum: NO_SUCH_VALUE");
assertParseError(
"1:23: Enum type \"protobuf_unittest.TestAllTypes.NestedEnum\" has no " +
"value with number 123.",
"optional_nested_enum: 123");
// Delimiters must match.
assertParseError(
"1:22: Expected identifier.",
"OptionalGroup < a: 1 }");
assertParseError(
"1:22: Expected identifier.",
"OptionalGroup { a: 1 >");
}
// =================================================================
public void testEscape() throws Exception {
// Escape sequences.
assertEquals("\\000\\001\\a\\b\\f\\n\\r\\t\\v\\\\\\'\\\"",
TextFormat.escapeBytes(bytes("\0\001\007\b\f\n\r\t\013\\\'\"")));
assertEquals("\\000\\001\\a\\b\\f\\n\\r\\t\\v\\\\\\'\\\"",
TextFormat.escapeText("\0\001\007\b\f\n\r\t\013\\\'\""));
assertEquals(bytes("\0\001\007\b\f\n\r\t\013\\\'\""),
TextFormat.unescapeBytes("\\000\\001\\a\\b\\f\\n\\r\\t\\v\\\\\\'\\\""));
assertEquals("\0\001\007\b\f\n\r\t\013\\\'\"",
TextFormat.unescapeText("\\000\\001\\a\\b\\f\\n\\r\\t\\v\\\\\\'\\\""));
// Unicode handling.
assertEquals("\\341\\210\\264", TextFormat.escapeText("\u1234"));
assertEquals("\\341\\210\\264",
TextFormat.escapeBytes(bytes(0xe1, 0x88, 0xb4)));
assertEquals("\u1234", TextFormat.unescapeText("\\341\\210\\264"));
assertEquals(bytes(0xe1, 0x88, 0xb4),
TextFormat.unescapeBytes("\\341\\210\\264"));
assertEquals("\u1234", TextFormat.unescapeText("\\xe1\\x88\\xb4"));
assertEquals(bytes(0xe1, 0x88, 0xb4),
TextFormat.unescapeBytes("\\xe1\\x88\\xb4"));
// Errors.
try {
TextFormat.unescapeText("\\x");
fail("Should have thrown an exception.");
} catch (TextFormat.InvalidEscapeSequence e) {
// success
}
try {
TextFormat.unescapeText("\\z");
fail("Should have thrown an exception.");
} catch (TextFormat.InvalidEscapeSequence e) {
// success
}
try {
TextFormat.unescapeText("\\");
fail("Should have thrown an exception.");
} catch (TextFormat.InvalidEscapeSequence e) {
// success
}
}
public void testParseInteger() throws Exception {
assertEquals( 0, TextFormat.parseInt32( "0"));
assertEquals( 1, TextFormat.parseInt32( "1"));
assertEquals( -1, TextFormat.parseInt32( "-1"));
assertEquals( 12345, TextFormat.parseInt32( "12345"));
assertEquals( -12345, TextFormat.parseInt32( "-12345"));
assertEquals( 2147483647, TextFormat.parseInt32( "2147483647"));
assertEquals(-2147483648, TextFormat.parseInt32("-2147483648"));
assertEquals( 0, TextFormat.parseUInt32( "0"));
assertEquals( 1, TextFormat.parseUInt32( "1"));
assertEquals( 12345, TextFormat.parseUInt32( "12345"));
assertEquals( 2147483647, TextFormat.parseUInt32("2147483647"));
assertEquals((int) 2147483648L, TextFormat.parseUInt32("2147483648"));
assertEquals((int) 4294967295L, TextFormat.parseUInt32("4294967295"));
assertEquals( 0L, TextFormat.parseInt64( "0"));
assertEquals( 1L, TextFormat.parseInt64( "1"));
assertEquals( -1L, TextFormat.parseInt64( "-1"));
assertEquals( 12345L, TextFormat.parseInt64( "12345"));
assertEquals( -12345L, TextFormat.parseInt64( "-12345"));
assertEquals( 2147483647L, TextFormat.parseInt64( "2147483647"));
assertEquals(-2147483648L, TextFormat.parseInt64("-2147483648"));
assertEquals( 4294967295L, TextFormat.parseInt64( "4294967295"));
assertEquals( 4294967296L, TextFormat.parseInt64( "4294967296"));
assertEquals(9223372036854775807L,
TextFormat.parseInt64("9223372036854775807"));
assertEquals(-9223372036854775808L,
TextFormat.parseInt64("-9223372036854775808"));
assertEquals( 0L, TextFormat.parseUInt64( "0"));
assertEquals( 1L, TextFormat.parseUInt64( "1"));
assertEquals( 12345L, TextFormat.parseUInt64( "12345"));
assertEquals( 2147483647L, TextFormat.parseUInt64( "2147483647"));
assertEquals( 4294967295L, TextFormat.parseUInt64( "4294967295"));
assertEquals( 4294967296L, TextFormat.parseUInt64( "4294967296"));
assertEquals(9223372036854775807L,
TextFormat.parseUInt64("9223372036854775807"));
assertEquals(-9223372036854775808L,
TextFormat.parseUInt64("9223372036854775808"));
assertEquals(-1L, TextFormat.parseUInt64("18446744073709551615"));
// Hex
assertEquals(0x1234abcd, TextFormat.parseInt32("0x1234abcd"));
assertEquals(-0x1234abcd, TextFormat.parseInt32("-0x1234abcd"));
assertEquals(-1, TextFormat.parseUInt64("0xffffffffffffffff"));
assertEquals(0x7fffffffffffffffL,
TextFormat.parseInt64("0x7fffffffffffffff"));
// Octal
assertEquals(01234567, TextFormat.parseInt32("01234567"));
// Out-of-range
try {
TextFormat.parseInt32("2147483648");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
try {
TextFormat.parseInt32("-2147483649");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
try {
TextFormat.parseUInt32("4294967296");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
try {
TextFormat.parseUInt32("-1");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
try {
TextFormat.parseInt64("9223372036854775808");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
try {
TextFormat.parseInt64("-9223372036854775809");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
try {
TextFormat.parseUInt64("18446744073709551616");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
try {
TextFormat.parseUInt64("-1");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
// Not a number.
try {
TextFormat.parseInt32("abcd");
fail("Should have thrown an exception.");
} catch (NumberFormatException e) {
// success
}
}
}


@ -0,0 +1,315 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import protobuf_unittest.UnittestProto;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import protobuf_unittest.UnittestProto.TestEmptyMessage;
import protobuf_unittest.UnittestProto.TestEmptyMessageWithExtensions;
import junit.framework.TestCase;
import java.util.Arrays;
import java.util.Map;
/**
* Tests related to unknown field handling.
*
* @author kenton@google.com (Kenton Varda)
*/
public class UnknownFieldSetTest extends TestCase {
public void setUp() throws Exception {
descriptor = TestAllTypes.getDescriptor();
allFields = TestUtil.getAllSet();
allFieldsData = allFields.toByteString();
emptyMessage = TestEmptyMessage.parseFrom(allFieldsData);
unknownFields = emptyMessage.getUnknownFields();
}
UnknownFieldSet.Field getField(String name) {
Descriptors.FieldDescriptor field = descriptor.findFieldByName(name);
assertNotNull(field);
return unknownFields.getField(field.getNumber());
}
// Constructs a protocol buffer which contains fields with all the same
// numbers as allFieldsData except that each field is some other wire
// type.
ByteString getBizarroData() throws Exception {
UnknownFieldSet.Builder bizarroFields = UnknownFieldSet.newBuilder();
UnknownFieldSet.Field varintField =
UnknownFieldSet.Field.newBuilder().addVarint(1).build();
UnknownFieldSet.Field fixed32Field =
UnknownFieldSet.Field.newBuilder().addFixed32(1).build();
for (Map.Entry<Integer, UnknownFieldSet.Field> entry :
unknownFields.asMap().entrySet()) {
if (entry.getValue().getVarintList().isEmpty()) {
// Original field is not a varint, so use a varint.
bizarroFields.addField(entry.getKey(), varintField);
} else {
// Original field *is* a varint, so use something else.
bizarroFields.addField(entry.getKey(), fixed32Field);
}
}
return bizarroFields.build().toByteString();
}
Descriptors.Descriptor descriptor;
TestAllTypes allFields;
ByteString allFieldsData;
// An empty message that has been parsed from allFieldsData. So, it has
// unknown fields of every type.
TestEmptyMessage emptyMessage;
UnknownFieldSet unknownFields;
// =================================================================
public void testVarint() throws Exception {
UnknownFieldSet.Field field = getField("optional_int32");
assertEquals(1, field.getVarintList().size());
assertEquals(allFields.getOptionalInt32(),
(long) field.getVarintList().get(0));
}
public void testFixed32() throws Exception {
UnknownFieldSet.Field field = getField("optional_fixed32");
assertEquals(1, field.getFixed32List().size());
assertEquals(allFields.getOptionalFixed32(),
(int) field.getFixed32List().get(0));
}
public void testFixed64() throws Exception {
UnknownFieldSet.Field field = getField("optional_fixed64");
assertEquals(1, field.getFixed64List().size());
assertEquals(allFields.getOptionalFixed64(),
(long) field.getFixed64List().get(0));
}
public void testLengthDelimited() throws Exception {
UnknownFieldSet.Field field = getField("optional_bytes");
assertEquals(1, field.getLengthDelimitedList().size());
assertEquals(allFields.getOptionalBytes(),
field.getLengthDelimitedList().get(0));
}
public void testGroup() throws Exception {
Descriptors.FieldDescriptor nestedFieldDescriptor =
TestAllTypes.OptionalGroup.getDescriptor().findFieldByName("a");
assertNotNull(nestedFieldDescriptor);
UnknownFieldSet.Field field = getField("optionalgroup");
assertEquals(1, field.getGroupList().size());
UnknownFieldSet group = field.getGroupList().get(0);
assertEquals(1, group.asMap().size());
assertTrue(group.hasField(nestedFieldDescriptor.getNumber()));
UnknownFieldSet.Field nestedField =
group.getField(nestedFieldDescriptor.getNumber());
assertEquals(1, nestedField.getVarintList().size());
assertEquals(allFields.getOptionalGroup().getA(),
(long) nestedField.getVarintList().get(0));
}
public void testSerialize() throws Exception {
// Check that serializing the UnknownFieldSet produces the original data
// again.
ByteString data = emptyMessage.toByteString();
assertEquals(allFieldsData, data);
}
public void testCopyFrom() throws Exception {
TestEmptyMessage message =
TestEmptyMessage.newBuilder().mergeFrom(emptyMessage).build();
assertEquals(emptyMessage.toString(), message.toString());
}
public void testMergeFrom() throws Exception {
TestEmptyMessage source =
TestEmptyMessage.newBuilder()
.setUnknownFields(
UnknownFieldSet.newBuilder()
.addField(2,
UnknownFieldSet.Field.newBuilder()
.addVarint(2).build())
.addField(3,
UnknownFieldSet.Field.newBuilder()
.addVarint(4).build())
.build())
.build();
TestEmptyMessage destination =
TestEmptyMessage.newBuilder()
.setUnknownFields(
UnknownFieldSet.newBuilder()
.addField(1,
UnknownFieldSet.Field.newBuilder()
.addVarint(1).build())
.addField(3,
UnknownFieldSet.Field.newBuilder()
.addVarint(3).build())
.build())
.mergeFrom(source)
.build();
assertEquals(
"1: 1\n" +
"2: 2\n" +
"3: 3\n" +
"3: 4\n",
destination.toString());
}
public void testClear() throws Exception {
UnknownFieldSet fields =
UnknownFieldSet.newBuilder().mergeFrom(unknownFields).clear().build();
assertTrue(fields.asMap().isEmpty());
}
public void testClearMessage() throws Exception {
TestEmptyMessage message =
TestEmptyMessage.newBuilder().mergeFrom(emptyMessage).clear().build();
assertEquals(0, message.getSerializedSize());
}
public void testParseKnownAndUnknown() throws Exception {
// Test mixing known and unknown fields when parsing.
UnknownFieldSet fields =
UnknownFieldSet.newBuilder(unknownFields)
.addField(123456,
UnknownFieldSet.Field.newBuilder().addVarint(654321).build())
.build();
ByteString data = fields.toByteString();
TestAllTypes destination = TestAllTypes.parseFrom(data);
TestUtil.assertAllFieldsSet(destination);
assertEquals(1, destination.getUnknownFields().asMap().size());
UnknownFieldSet.Field field =
destination.getUnknownFields().getField(123456);
assertEquals(1, field.getVarintList().size());
assertEquals(654321, (long) field.getVarintList().get(0));
}
public void testWrongTypeTreatedAsUnknown() throws Exception {
// Test that fields of the wrong wire type are treated like unknown fields
// when parsing.
ByteString bizarroData = getBizarroData();
TestAllTypes allTypesMessage = TestAllTypes.parseFrom(bizarroData);
TestEmptyMessage emptyMessage = TestEmptyMessage.parseFrom(bizarroData);
// All fields should have been interpreted as unknown, so the debug strings
// should be the same.
assertEquals(emptyMessage.toString(), allTypesMessage.toString());
}
public void testUnknownExtensions() throws Exception {
// Make sure fields are properly parsed to the UnknownFieldSet even when
// they are declared as extension numbers.
TestEmptyMessageWithExtensions message =
TestEmptyMessageWithExtensions.parseFrom(allFieldsData);
assertEquals(unknownFields.asMap().size(),
message.getUnknownFields().asMap().size());
assertEquals(allFieldsData, message.toByteString());
}
public void testWrongExtensionTypeTreatedAsUnknown() throws Exception {
// Test that fields of the wrong wire type are treated like unknown fields
// when parsing extensions.
ByteString bizarroData = getBizarroData();
TestAllExtensions allExtensionsMessage =
TestAllExtensions.parseFrom(bizarroData);
TestEmptyMessage emptyMessage = TestEmptyMessage.parseFrom(bizarroData);
// All fields should have been interpreted as unknown, so the debug strings
// should be the same.
assertEquals(emptyMessage.toString(),
allExtensionsMessage.toString());
}
public void testParseUnknownEnumValue() throws Exception {
Descriptors.FieldDescriptor singularField =
TestAllTypes.getDescriptor().findFieldByName("optional_nested_enum");
Descriptors.FieldDescriptor repeatedField =
TestAllTypes.getDescriptor().findFieldByName("repeated_nested_enum");
assertNotNull(singularField);
assertNotNull(repeatedField);
ByteString data =
UnknownFieldSet.newBuilder()
.addField(singularField.getNumber(),
UnknownFieldSet.Field.newBuilder()
.addVarint(TestAllTypes.NestedEnum.BAR.getNumber())
.addVarint(5) // not valid
.build())
.addField(repeatedField.getNumber(),
UnknownFieldSet.Field.newBuilder()
.addVarint(TestAllTypes.NestedEnum.FOO.getNumber())
.addVarint(4) // not valid
.addVarint(TestAllTypes.NestedEnum.BAZ.getNumber())
.addVarint(6) // not valid
.build())
.build()
.toByteString();
{
TestAllTypes message = TestAllTypes.parseFrom(data);
assertEquals(TestAllTypes.NestedEnum.BAR,
message.getOptionalNestedEnum());
assertEquals(
Arrays.asList(TestAllTypes.NestedEnum.FOO, TestAllTypes.NestedEnum.BAZ),
message.getRepeatedNestedEnumList());
assertEquals(Arrays.asList(5L),
message.getUnknownFields()
.getField(singularField.getNumber())
.getVarintList());
assertEquals(Arrays.asList(4L, 6L),
message.getUnknownFields()
.getField(repeatedField.getNumber())
.getVarintList());
}
{
TestAllExtensions message =
TestAllExtensions.parseFrom(data, TestUtil.getExtensionRegistry());
assertEquals(TestAllTypes.NestedEnum.BAR,
message.getExtension(UnittestProto.optionalNestedEnumExtension));
assertEquals(
Arrays.asList(TestAllTypes.NestedEnum.FOO, TestAllTypes.NestedEnum.BAZ),
message.getExtension(UnittestProto.repeatedNestedEnumExtension));
assertEquals(Arrays.asList(5L),
message.getUnknownFields()
.getField(singularField.getNumber())
.getVarintList());
assertEquals(Arrays.asList(4L, 6L),
message.getUnknownFields()
.getField(repeatedField.getNumber())
.getVarintList());
}
}
}


@ -0,0 +1,226 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.protobuf;
import junit.framework.TestCase;
import protobuf_unittest.UnittestProto;
import protobuf_unittest.UnittestProto.TestAllTypes;
import protobuf_unittest.UnittestProto.TestAllExtensions;
import protobuf_unittest.UnittestProto.TestFieldOrderings;
import protobuf_unittest.UnittestMset.TestMessageSet;
import protobuf_unittest.UnittestMset.RawMessageSet;
import protobuf_unittest.UnittestMset.TestMessageSetExtension1;
import protobuf_unittest.UnittestMset.TestMessageSetExtension2;
/**
* Tests related to parsing and serialization.
*
* @author kenton@google.com (Kenton Varda)
*/
public class WireFormatTest extends TestCase {
public void testSerialization() throws Exception {
TestAllTypes message = TestUtil.getAllSet();
ByteString rawBytes = message.toByteString();
assertEquals(rawBytes.size(), message.getSerializedSize());
TestAllTypes message2 = TestAllTypes.parseFrom(rawBytes);
TestUtil.assertAllFieldsSet(message2);
}
public void testSerializeExtensions() throws Exception {
// TestAllTypes and TestAllExtensions should have compatible wire formats,
    // so if we serialize a TestAllExtensions then parse it as TestAllTypes
// it should work.
TestAllExtensions message = TestUtil.getAllExtensionsSet();
ByteString rawBytes = message.toByteString();
assertEquals(rawBytes.size(), message.getSerializedSize());
TestAllTypes message2 = TestAllTypes.parseFrom(rawBytes);
TestUtil.assertAllFieldsSet(message2);
}
public void testParseExtensions() throws Exception {
// TestAllTypes and TestAllExtensions should have compatible wire formats,
    // so if we serialize a TestAllTypes then parse it as TestAllExtensions
// it should work.
TestAllTypes message = TestUtil.getAllSet();
ByteString rawBytes = message.toByteString();
ExtensionRegistry registry = ExtensionRegistry.newInstance();
TestUtil.registerAllExtensions(registry);
registry = registry.getUnmodifiable();
TestAllExtensions message2 =
TestAllExtensions.parseFrom(rawBytes, registry);
TestUtil.assertAllExtensionsSet(message2);
}
public void testExtensionsSerializedSize() throws Exception {
assertEquals(TestUtil.getAllSet().getSerializedSize(),
TestUtil.getAllExtensionsSet().getSerializedSize());
}
private void assertFieldsInOrder(ByteString data) throws Exception {
CodedInputStream input = data.newCodedInput();
int previousTag = 0;
while (true) {
int tag = input.readTag();
if (tag == 0) {
break;
}
assertTrue(tag > previousTag);
input.skipField(tag);
}
}
public void testInterleavedFieldsAndExtensions() throws Exception {
// Tests that fields are written in order even when extension ranges
// are interleaved with field numbers.
ByteString data =
TestFieldOrderings.newBuilder()
.setMyInt(1)
.setMyString("foo")
.setMyFloat(1.0F)
.setExtension(UnittestProto.myExtensionInt, 23)
.setExtension(UnittestProto.myExtensionString, "bar")
.build().toByteString();
assertFieldsInOrder(data);
Descriptors.Descriptor descriptor = TestFieldOrderings.getDescriptor();
ByteString dynamic_data =
DynamicMessage.newBuilder(TestFieldOrderings.getDescriptor())
.setField(descriptor.findFieldByName("my_int"), 1L)
.setField(descriptor.findFieldByName("my_string"), "foo")
.setField(descriptor.findFieldByName("my_float"), 1.0F)
.setField(UnittestProto.myExtensionInt.getDescriptor(), 23)
.setField(UnittestProto.myExtensionString.getDescriptor(), "bar")
.build().toByteString();
assertFieldsInOrder(dynamic_data);
}
private static final int UNKNOWN_TYPE_ID = 1550055;
private static final int TYPE_ID_1 =
TestMessageSetExtension1.getDescriptor().getExtensions().get(0).getNumber();
private static final int TYPE_ID_2 =
TestMessageSetExtension2.getDescriptor().getExtensions().get(0).getNumber();
public void testSerializeMessageSet() throws Exception {
// Set up a TestMessageSet with two known messages and an unknown one.
TestMessageSet messageSet =
TestMessageSet.newBuilder()
.setExtension(
TestMessageSetExtension1.messageSetExtension,
TestMessageSetExtension1.newBuilder().setI(123).build())
.setExtension(
TestMessageSetExtension2.messageSetExtension,
TestMessageSetExtension2.newBuilder().setStr("foo").build())
.setUnknownFields(
UnknownFieldSet.newBuilder()
.addField(UNKNOWN_TYPE_ID,
UnknownFieldSet.Field.newBuilder()
.addLengthDelimited(ByteString.copyFromUtf8("bar"))
.build())
.build())
.build();
ByteString data = messageSet.toByteString();
// Parse back using RawMessageSet and check the contents.
RawMessageSet raw = RawMessageSet.parseFrom(data);
assertTrue(raw.getUnknownFields().asMap().isEmpty());
assertEquals(3, raw.getItemCount());
assertEquals(TYPE_ID_1, raw.getItem(0).getTypeId());
assertEquals(TYPE_ID_2, raw.getItem(1).getTypeId());
assertEquals(UNKNOWN_TYPE_ID, raw.getItem(2).getTypeId());
TestMessageSetExtension1 message1 =
TestMessageSetExtension1.parseFrom(
raw.getItem(0).getMessage().toByteArray());
assertEquals(123, message1.getI());
TestMessageSetExtension2 message2 =
TestMessageSetExtension2.parseFrom(
raw.getItem(1).getMessage().toByteArray());
assertEquals("foo", message2.getStr());
assertEquals("bar", raw.getItem(2).getMessage().toStringUtf8());
}
public void testParseMessageSet() throws Exception {
ExtensionRegistry extensionRegistry = ExtensionRegistry.newInstance();
extensionRegistry.add(TestMessageSetExtension1.messageSetExtension);
extensionRegistry.add(TestMessageSetExtension2.messageSetExtension);
// Set up a RawMessageSet with two known messages and an unknown one.
RawMessageSet raw =
RawMessageSet.newBuilder()
.addItem(
RawMessageSet.Item.newBuilder()
.setTypeId(TYPE_ID_1)
.setMessage(
TestMessageSetExtension1.newBuilder()
.setI(123)
.build().toByteString())
.build())
.addItem(
RawMessageSet.Item.newBuilder()
.setTypeId(TYPE_ID_2)
.setMessage(
TestMessageSetExtension2.newBuilder()
.setStr("foo")
.build().toByteString())
.build())
.addItem(
RawMessageSet.Item.newBuilder()
.setTypeId(UNKNOWN_TYPE_ID)
.setMessage(ByteString.copyFromUtf8("bar"))
.build())
.build();
ByteString data = raw.toByteString();
// Parse as a TestMessageSet and check the contents.
TestMessageSet messageSet =
TestMessageSet.parseFrom(data, extensionRegistry);
assertEquals(123, messageSet.getExtension(
TestMessageSetExtension1.messageSetExtension).getI());
assertEquals("foo", messageSet.getExtension(
TestMessageSetExtension2.messageSetExtension).getStr());
// Check for unknown field with type LENGTH_DELIMITED,
// number UNKNOWN_TYPE_ID, and contents "bar".
UnknownFieldSet unknownFields = messageSet.getUnknownFields();
assertEquals(1, unknownFields.asMap().size());
assertTrue(unknownFields.hasField(UNKNOWN_TYPE_ID));
UnknownFieldSet.Field field = unknownFields.getField(UNKNOWN_TYPE_ID);
assertEquals(1, field.getLengthDelimitedList().size());
assertEquals("bar", field.getLengthDelimitedList().get(0).toStringUtf8());
}
}


@ -0,0 +1,53 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
//
// A proto file which tests the java_multiple_files option.
import "google/protobuf/unittest.proto";
package protobuf_unittest;
option java_multiple_files = true;
option java_outer_classname = "MultipleFilesTestProto";
message MessageWithNoOuter {
message NestedMessage {
optional int32 i = 1;
}
enum NestedEnum {
BAZ = 3;
}
optional NestedMessage nested = 1;
repeated TestAllTypes foreign = 2;
optional NestedEnum nested_enum = 3;
optional EnumWithNoOuter foreign_enum = 4;
}
enum EnumWithNoOuter {
FOO = 1;
BAR = 2;
}
service ServiceWithNoOuter {
rpc Foo(MessageWithNoOuter) returns(TestAllTypes);
}
extend TestAllExtensions {
optional int32 extension_with_outer = 1234567;
}
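Note on the option exercised above: with java_multiple_files = true, protoc emits each top-level message, enum, and service in this file as its own top-level Java class in the protobuf_unittest package, while the MultipleFilesTestProto outer class (named by java_outer_classname) holds only the file's descriptor. The sketch below is illustrative only; it assumes the standard proto2 generated builder API for the fields declared above, and the MultipleFilesUsageSketch class is hypothetical, not part of this checkin.

// Hypothetical usage sketch: the types are referenced directly, not as
// MultipleFilesTestProto.MessageWithNoOuter, because of java_multiple_files.
import protobuf_unittest.EnumWithNoOuter;
import protobuf_unittest.MessageWithNoOuter;

public class MultipleFilesUsageSketch {
  public static void main(String[] args) {
    // Build a MessageWithNoOuter using the generated builder for the
    // fields declared in the .proto file above.
    MessageWithNoOuter message =
        MessageWithNoOuter.newBuilder()
            .setNested(
                MessageWithNoOuter.NestedMessage.newBuilder().setI(42).build())
            .setNestedEnum(MessageWithNoOuter.NestedEnum.BAZ)
            .setForeignEnum(EnumWithNoOuter.FOO)
            .build();
    System.out.println(message.getNestedEnum());  // prints BAZ
  }
}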

363
m4/acx_pthread.m4 Normal file

@ -0,0 +1,363 @@
# This was retrieved from
# http://svn.0pointer.de/viewvc/trunk/common/acx_pthread.m4?revision=1277&root=avahi
# See also (perhaps for new versions?)
# http://svn.0pointer.de/viewvc/trunk/common/acx_pthread.m4?root=avahi
#
# We've rewritten the inconsistency check code (from avahi), to work
# more broadly. In particular, it no longer assumes ld accepts -zdefs.
# This caused a restructuring of the code, but the functionality has only
# changed a little.
dnl @synopsis ACX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]])
dnl
dnl @summary figure out how to build C programs using POSIX threads
dnl
dnl This macro figures out how to build C programs using POSIX threads.
dnl It sets the PTHREAD_LIBS output variable to the threads library and
dnl linker flags, and the PTHREAD_CFLAGS output variable to any special
dnl C compiler flags that are needed. (The user can also force certain
dnl compiler flags/libs to be tested by setting these environment
dnl variables.)
dnl
dnl Also sets PTHREAD_CC to any special C compiler that is needed for
dnl multi-threaded programs (defaults to the value of CC otherwise).
dnl (This is necessary on AIX to use the special cc_r compiler alias.)
dnl
dnl NOTE: You are assumed to not only compile your program with these
dnl flags, but also link it with them as well. e.g. you should link
dnl with $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS
dnl $LIBS
dnl
dnl If you are only building threads programs, you may wish to use
dnl these variables in your default LIBS, CFLAGS, and CC:
dnl
dnl LIBS="$PTHREAD_LIBS $LIBS"
dnl CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
dnl CC="$PTHREAD_CC"
dnl
dnl In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute
dnl constant has a nonstandard name, defines PTHREAD_CREATE_JOINABLE to
dnl that name (e.g. PTHREAD_CREATE_UNDETACHED on AIX).
dnl
dnl ACTION-IF-FOUND is a list of shell commands to run if a threads
dnl library is found, and ACTION-IF-NOT-FOUND is a list of commands to
dnl run if it is not found. If ACTION-IF-FOUND is not specified, the
dnl default action will define HAVE_PTHREAD.
dnl
dnl Please let the authors know if this macro fails on any platform, or
dnl if you have any other suggestions or comments. This macro was based
dnl on work by SGJ on autoconf scripts for FFTW (www.fftw.org) (with
dnl help from M. Frigo), as well as ac_pthread and hb_pthread macros
dnl posted by Alejandro Forero Cuervo to the autoconf macro repository.
dnl We are also grateful for the helpful feedback of numerous users.
dnl
dnl @category InstalledPackages
dnl @author Steven G. Johnson <stevenj@alum.mit.edu>
dnl @version 2006-05-29
dnl @license GPLWithACException
dnl
dnl Checks for GCC shared/pthread inconsistency based on work by
dnl Marcin Owsiany <marcin@owsiany.pl>
AC_DEFUN([ACX_PTHREAD], [
AC_REQUIRE([AC_CANONICAL_HOST])
AC_LANG_SAVE
AC_LANG_C
acx_pthread_ok=no
# We used to check for pthread.h first, but this fails if pthread.h
# requires special compiler flags (e.g. on Tru64 or Sequent).
# It gets checked for in the link test anyway.
# First of all, check if the user has set any of the PTHREAD_LIBS,
# etcetera environment variables, and if threads linking works using
# them:
if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then
save_CFLAGS="$CFLAGS"
CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
save_LIBS="$LIBS"
LIBS="$PTHREAD_LIBS $LIBS"
AC_MSG_CHECKING([for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS])
AC_TRY_LINK_FUNC(pthread_join, acx_pthread_ok=yes)
AC_MSG_RESULT($acx_pthread_ok)
if test x"$acx_pthread_ok" = xno; then
PTHREAD_LIBS=""
PTHREAD_CFLAGS=""
fi
LIBS="$save_LIBS"
CFLAGS="$save_CFLAGS"
fi
# We must check for the threads library under a number of different
# names; the ordering is very important because some systems
# (e.g. DEC) have both -lpthread and -lpthreads, where one of the
# libraries is broken (non-POSIX).
# Create a list of thread flags to try. Items starting with a "-" are
# C compiler flags, and other items are library names, except for "none"
# which indicates that we try without any flags at all, and "pthread-config"
# which is a program returning the flags for the Pth emulation library.
acx_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config"
# The ordering *is* (sometimes) important. Some notes on the
# individual items follow:
# pthreads: AIX (must check this before -lpthread)
# none: in case threads are in libc; should be tried before -Kthread and
# other compiler flags to prevent continual compiler warnings
# -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h)
# -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able)
# lthread: LinuxThreads port on FreeBSD (also preferred to -pthread)
# -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads)
# -pthreads: Solaris/gcc
# -mthreads: Mingw32/gcc, Lynx/gcc
# -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it
# doesn't hurt to check since this sometimes defines pthreads too;
# also defines -D_REENTRANT)
# ... -mt is also the pthreads flag for HP/aCC
# pthread: Linux, etcetera
# --thread-safe: KAI C++
# pthread-config: use pthread-config program (for GNU Pth library)
case "${host_cpu}-${host_os}" in
*solaris*)
# On Solaris (at least, for some versions), libc contains stubbed
# (non-functional) versions of the pthreads routines, so link-based
# tests will erroneously succeed. (We need to link with -pthreads/-mt/
# -lpthread.) (The stubs are missing pthread_cleanup_push, or rather
# a function called by this macro, so we could check for that, but
# who knows whether they'll stub that too in a future libc.) So,
# we'll just look for -pthreads and -lpthread first:
acx_pthread_flags="-pthreads pthread -mt -pthread $acx_pthread_flags"
;;
esac
if test x"$acx_pthread_ok" = xno; then
for flag in $acx_pthread_flags; do
case $flag in
none)
AC_MSG_CHECKING([whether pthreads work without any flags])
;;
-*)
AC_MSG_CHECKING([whether pthreads work with $flag])
PTHREAD_CFLAGS="$flag"
;;
pthread-config)
AC_CHECK_PROG(acx_pthread_config, pthread-config, yes, no)
if test x"$acx_pthread_config" = xno; then continue; fi
PTHREAD_CFLAGS="`pthread-config --cflags`"
PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`"
;;
*)
AC_MSG_CHECKING([for the pthreads library -l$flag])
PTHREAD_LIBS="-l$flag"
;;
esac
save_LIBS="$LIBS"
save_CFLAGS="$CFLAGS"
LIBS="$PTHREAD_LIBS $LIBS"
CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
# Check for various functions. We must include pthread.h,
# since some functions may be macros. (On the Sequent, we
# need a special flag -Kthread to make this header compile.)
# We check for pthread_join because it is in -lpthread on IRIX
# while pthread_create is in libc. We check for pthread_attr_init
# due to DEC craziness with -lpthreads. We check for
# pthread_cleanup_push because it is one of the few pthread
# functions on Solaris that doesn't have a non-functional libc stub.
# We try pthread_create on general principles.
AC_TRY_LINK([#include <pthread.h>],
[pthread_t th; pthread_join(th, 0);
pthread_attr_init(0); pthread_cleanup_push(0, 0);
pthread_create(0,0,0,0); pthread_cleanup_pop(0); ],
[acx_pthread_ok=yes])
LIBS="$save_LIBS"
CFLAGS="$save_CFLAGS"
AC_MSG_RESULT($acx_pthread_ok)
if test "x$acx_pthread_ok" = xyes; then
break;
fi
PTHREAD_LIBS=""
PTHREAD_CFLAGS=""
done
fi
# Various other checks:
if test "x$acx_pthread_ok" = xyes; then
save_LIBS="$LIBS"
LIBS="$PTHREAD_LIBS $LIBS"
save_CFLAGS="$CFLAGS"
CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
# Detect AIX lossage: JOINABLE attribute is called UNDETACHED.
AC_MSG_CHECKING([for joinable pthread attribute])
attr_name=unknown
for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do
AC_TRY_LINK([#include <pthread.h>], [int attr=$attr; return attr;],
[attr_name=$attr; break])
done
AC_MSG_RESULT($attr_name)
if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then
AC_DEFINE_UNQUOTED(PTHREAD_CREATE_JOINABLE, $attr_name,
[Define to necessary symbol if this constant
uses a non-standard name on your system.])
fi
AC_MSG_CHECKING([if more special flags are required for pthreads])
flag=no
case "${host_cpu}-${host_os}" in
*-aix* | *-freebsd* | *-darwin*) flag="-D_THREAD_SAFE";;
*solaris* | *-osf* | *-hpux*) flag="-D_REENTRANT";;
esac
AC_MSG_RESULT(${flag})
if test "x$flag" != xno; then
PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS"
fi
LIBS="$save_LIBS"
CFLAGS="$save_CFLAGS"
# More AIX lossage: must compile with xlc_r or cc_r
if test x"$GCC" != xyes; then
AC_CHECK_PROGS(PTHREAD_CC, xlc_r cc_r, ${CC})
else
PTHREAD_CC=$CC
fi
# The next part tries to detect GCC inconsistency with -shared on some
# architectures and systems. The problem is that in certain
# configurations, when -shared is specified, GCC "forgets" to
# internally use various flags which are still necessary.
#
# Prepare the flags
#
save_CFLAGS="$CFLAGS"
save_LIBS="$LIBS"
save_CC="$CC"
# Try with the flags determined by the earlier checks.
#
# -Wl,-z,defs forces link-time symbol resolution, so that the
# linking checks with -shared actually have any value
#
# FIXME: -fPIC is required for -shared on many architectures,
# so we specify it here, but the right way would probably be to
# properly detect whether it is actually required.
CFLAGS="-shared -fPIC -Wl,-z,defs $CFLAGS $PTHREAD_CFLAGS"
LIBS="$PTHREAD_LIBS $LIBS"
CC="$PTHREAD_CC"
# In order not to create several levels of indentation, we test
# the value of "$done" until we find the cure or run out of ideas.
done="no"
# First, make sure the CFLAGS we added are actually accepted by our
# compiler. If not (and OS X's ld, for instance, does not accept -z),
# then we can't do this test.
if test x"$done" = xno; then
AC_MSG_CHECKING([whether to check for GCC pthread/shared inconsistencies])
AC_TRY_LINK(,, , [done=yes])
if test "x$done" = xyes ; then
AC_MSG_RESULT([no])
else
AC_MSG_RESULT([yes])
fi
fi
if test x"$done" = xno; then
AC_MSG_CHECKING([whether -pthread is sufficient with -shared])
AC_TRY_LINK([#include <pthread.h>],
[pthread_t th; pthread_join(th, 0);
pthread_attr_init(0); pthread_cleanup_push(0, 0);
pthread_create(0,0,0,0); pthread_cleanup_pop(0); ],
[done=yes])
if test "x$done" = xyes; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
fi
fi
#
# Linux gcc on some architectures such as mips/mipsel forgets
# about -lpthread
#
if test x"$done" = xno; then
AC_MSG_CHECKING([whether -lpthread fixes that])
LIBS="-lpthread $PTHREAD_LIBS $save_LIBS"
AC_TRY_LINK([#include <pthread.h>],
[pthread_t th; pthread_join(th, 0);
pthread_attr_init(0); pthread_cleanup_push(0, 0);
pthread_create(0,0,0,0); pthread_cleanup_pop(0); ],
[done=yes])
if test "x$done" = xyes; then
AC_MSG_RESULT([yes])
PTHREAD_LIBS="-lpthread $PTHREAD_LIBS"
else
AC_MSG_RESULT([no])
fi
fi
#
# FreeBSD 4.10 gcc forgets to use -lc_r instead of -lc
#
if test x"$done" = xno; then
AC_MSG_CHECKING([whether -lc_r fixes that])
LIBS="-lc_r $PTHREAD_LIBS $save_LIBS"
AC_TRY_LINK([#include <pthread.h>],
[pthread_t th; pthread_join(th, 0);
pthread_attr_init(0); pthread_cleanup_push(0, 0);
pthread_create(0,0,0,0); pthread_cleanup_pop(0); ],
[done=yes])
if test "x$done" = xyes; then
AC_MSG_RESULT([yes])
PTHREAD_LIBS="-lc_r $PTHREAD_LIBS"
else
AC_MSG_RESULT([no])
fi
fi
if test x"$done" = xno; then
# OK, we have run out of ideas
AC_MSG_WARN([Impossible to determine how to use pthreads with shared libraries])
# so it's not safe to assume that we may use pthreads
acx_pthread_ok=no
fi
CFLAGS="$save_CFLAGS"
LIBS="$save_LIBS"
CC="$save_CC"
else
PTHREAD_CC="$CC"
fi
AC_SUBST(PTHREAD_LIBS)
AC_SUBST(PTHREAD_CFLAGS)
AC_SUBST(PTHREAD_CC)
# Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND:
if test x"$acx_pthread_ok" = xyes; then
ifelse([$1],,AC_DEFINE(HAVE_PTHREAD,1,[Define if you have POSIX threads libraries and header files.]),[$1])
:
else
acx_pthread_ok=no
$2
fi
AC_LANG_RESTORE
])dnl ACX_PTHREAD

43
m4/stl_hash.m4 Normal file

@ -0,0 +1,43 @@
# We check two things: where the include file is for hash_map, and
# what namespace hash_map lives in within that include file. We
# include AC_TRY_COMPILE for all the combinations we've seen in the
# wild. We define HAVE_HASH_MAP and HAVE_HASH_SET if hash_map is found,
# HASH_MAP_H/HASH_SET_H to the include location, and HASH_NAMESPACE to be
# the namespace hash_map/hash_set is defined in.
#
# Ideally we'd use AC_CACHE_CHECK, but that only lets us store one value
# at a time, and we need to store two (filename and namespace).
# AC_CACHE_CHECK also prints messages itself, so we have to do the
# message-printing ourselves via AC_MSG_CHECKING + AC_MSG_RESULT.
# (TODO(csilvers): can we cache?)
AC_DEFUN([AC_CXX_STL_HASH],
[AC_MSG_CHECKING(the location of hash_map)
AC_LANG_SAVE
AC_LANG_CPLUSPLUS
ac_cv_cxx_hash_map=""
for location in ext/hash_map hash_map; do
for namespace in __gnu_cxx "" std stdext; do
if test -z "$ac_cv_cxx_hash_map"; then
AC_TRY_COMPILE([#include <$location>],
[${namespace}::hash_map<int, int> t],
[ac_cv_cxx_hash_map="<$location>";
ac_cv_cxx_hash_namespace="$namespace";])
fi
done
done
ac_cv_cxx_hash_set=`echo "$ac_cv_cxx_hash_map" | sed s/map/set/`;
if test -n "$ac_cv_cxx_hash_map"; then
AC_DEFINE(HAVE_HASH_MAP, 1, [define if the compiler has hash_map])
AC_DEFINE(HAVE_HASH_SET, 1, [define if the compiler has hash_set])
AC_DEFINE_UNQUOTED(HASH_MAP_H,$ac_cv_cxx_hash_map,
[the location of <hash_map>])
AC_DEFINE_UNQUOTED(HASH_SET_H,$ac_cv_cxx_hash_set,
[the location of <hash_set>])
AC_DEFINE_UNQUOTED(HASH_NAMESPACE,$ac_cv_cxx_hash_namespace,
[the namespace of hash_map/hash_set])
AC_MSG_RESULT([$ac_cv_cxx_hash_map])
else
AC_MSG_RESULT()
AC_MSG_WARN([could not find an STL hash_map])
fi
])

53
python/README.txt Normal file

@ -0,0 +1,53 @@
Protocol Buffers - Google's data interchange format
Copyright 2008 Google Inc.
This directory contains the Python Protocol Buffers runtime library.
Installation
============
1) Make sure you have Python 2.4 or newer. If in doubt, run:
$ python -V
2) If you do not have setuptools installed, note that it will be
downloaded and installed automatically as soon as you run setup.py.
If you would rather install it manually, you may do so by following
the instructions on this page:
http://peak.telecommunity.com/DevCenter/EasyInstall#installation-instructions
3) Build the C++ code, or install a binary distribution of protoc. If
you install a binary distribution, make sure that it is the same
version as this package. If in doubt, run:
$ protoc --version
4) Run the tests:
$ python setup.py test
If some tests fail, this library may not work correctly on your
system. Continue at your own risk.
Please note that there is a known problem with some versions of
Python on Cygwin which causes the tests to fail after printing the
error: "sem_init: Resource temporarily unavailable". This appears
to be a bug either in Cygwin or in Python:
http://www.cygwin.com/ml/cygwin/2005-07/msg01378.html
We do not know if or when it might be fixed. We also do not know
how likely it is that this bug will affect users in practice.
5) Install:
$ python setup.py install
This step may require superuser privileges.
Usage
=====
The complete documentation for Protocol Buffers is available via the
web at:
http://code.google.com/apis/protocolbuffers/
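For a quick orientation, a round trip with a generated message class looks
roughly like this (the person_pb2 module and its Person message are
hypothetical; they would come from your own .proto file compiled with
protoc):
  from person_pb2 import Person
  p = Person()
  p.name = 'Alice'
  data = p.SerializeToString()   # encode to the binary wire format
  p2 = Person()
  p2.ParseFromString(data)       # decode back into a message object
  assert p2.name == p.name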

277
python/ez_setup.py Executable file
View File

@ -0,0 +1,277 @@
#!python
# This file was obtained from:
# http://peak.telecommunity.com/dist/ez_setup.py
# on 2008/7/1.
"""Bootstrap setuptools installation
If you want to use setuptools in your package's setup.py, just include this
file in the same directory with it, and add this to the top of your setup.py::
from ez_setup import use_setuptools
use_setuptools()
If you want to require a specific version of setuptools, set a download
mirror, or use an alternate download directory, you can do so by supplying
the appropriate options to ``use_setuptools()``.
This file can also be run as a script to install or upgrade setuptools.
"""
import sys
DEFAULT_VERSION = "0.6c8"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
}
import sys, os
def _validate_md5(egg_name, data):
if egg_name in md5_data:
from md5 import md5
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
def do_download():
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
try:
import pkg_resources
except ImportError:
return do_download()
try:
pkg_resources.require("setuptools>="+version); return
except pkg_resources.VersionConflict, e:
if was_imported:
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first, using 'easy_install -U setuptools'."
"\n\n(Currently using %r)"
) % (version, e.args[0])
sys.exit(2)
else:
del pkg_resources, sys.modules['pkg_resources'] # reload ok
return do_download()
except pkg_resources.DistributionNotFound:
return do_download()
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
%s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
version, download_base, delay, url
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
egg = None
try:
egg = download_setuptools(version, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
return main(list(argv)+[egg]) # we're done here
finally:
if egg and os.path.exists(egg):
os.unlink(egg)
else:
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
from md5 import md5
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
main(sys.argv[1:])

1
python/google/__init__.py Executable file
View File

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

View File

@ -0,0 +1,419 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO(robinson): We probably need to provide deep-copy methods for
# descriptor types. When a FieldDescriptor is passed into
# Descriptor.__init__(), we should make a deep copy and then set
# containing_type on it. Alternatively, we could just get
# rid of containing_type (it's not needed for reflection.py, at least).
#
# TODO(robinson): Print method?
#
# TODO(robinson): Useful __repr__?
"""Descriptors essentially contain exactly the information found in a .proto
file, in types that make this information accessible in Python.
"""
__author__ = 'robinson@google.com (Will Robinson)'
class DescriptorBase(object):
"""Descriptors base class.
This class is the base of all descriptor classes. It provides common
options-related functionality.
"""
def __init__(self, options, options_class_name):
"""Initialize the descriptor given its options message and the name of the
class of the options message. The name of the class is required in case
the options message is None and has to be created.
"""
self._options = options
self._options_class_name = options_class_name
def GetOptions(self):
"""Retrieves descriptor options.
This method returns the options set or creates the default options for the
descriptor.
"""
if self._options:
return self._options
from google.protobuf import descriptor_pb2
try:
options_class = getattr(descriptor_pb2, self._options_class_name)
except AttributeError:
raise RuntimeError('Unknown options class name %s!' %
(self._options_class_name))
self._options = options_class()
return self._options
class Descriptor(DescriptorBase):
"""Descriptor for a protocol message type.
A Descriptor instance has the following attributes:
name: (str) Name of this protocol message type.
full_name: (str) Fully-qualified name of this protocol message type,
which will include protocol "package" name and the name of any
enclosing types.
filename: (str) Name of the .proto file containing this message.
containing_type: (Descriptor) Reference to the descriptor of the
type containing us, or None if we have no containing type.
fields: (list of FieldDescriptors) Field descriptors for all
fields in this type.
fields_by_number: (dict int -> FieldDescriptor) Same FieldDescriptor
objects as in |fields|, but indexed by "number" attribute in each
FieldDescriptor.
fields_by_name: (dict str -> FieldDescriptor) Same FieldDescriptor
objects as in |fields|, but indexed by "name" attribute in each
FieldDescriptor.
nested_types: (list of Descriptors) Descriptor references
for all protocol message types nested within this one.
nested_types_by_name: (dict str -> Descriptor) Same Descriptor
objects as in |nested_types|, but indexed by "name" attribute
in each Descriptor.
enum_types: (list of EnumDescriptors) EnumDescriptor references
for all enums contained within this type.
enum_types_by_name: (dict str -> EnumDescriptor) Same EnumDescriptor
objects as in |enum_types|, but indexed by "name" attribute
in each EnumDescriptor.
enum_values_by_name: (dict str -> EnumValueDescriptor) Dict mapping
from enum value name to EnumValueDescriptor for that value.
extensions: (list of FieldDescriptor) All extensions defined directly
within this message type (NOT within a nested type).
extensions_by_name: (dict, string -> FieldDescriptor) Same FieldDescriptor
objects as |extensions|, but indexed by "name" attribute of each
FieldDescriptor.
options: (descriptor_pb2.MessageOptions) Protocol message options or None
to use default message options.
"""
def __init__(self, name, full_name, filename, containing_type,
fields, nested_types, enum_types, extensions, options=None):
"""Arguments to __init__() are as described in the description
of Descriptor fields above.
"""
super(Descriptor, self).__init__(options, 'MessageOptions')
self.name = name
self.full_name = full_name
self.filename = filename
self.containing_type = containing_type
# We have fields in addition to fields_by_name and fields_by_number,
# so that:
# 1. Clients can index fields by "order in which they're listed."
# 2. Clients can easily iterate over all fields with the terse
# syntax: for f in descriptor.fields: ...
self.fields = fields
for field in self.fields:
field.containing_type = self
self.fields_by_number = dict((f.number, f) for f in fields)
self.fields_by_name = dict((f.name, f) for f in fields)
self.nested_types = nested_types
self.nested_types_by_name = dict((t.name, t) for t in nested_types)
self.enum_types = enum_types
for enum_type in self.enum_types:
enum_type.containing_type = self
self.enum_types_by_name = dict((t.name, t) for t in enum_types)
self.enum_values_by_name = dict(
(v.name, v) for t in enum_types for v in t.values)
self.extensions = extensions
for extension in self.extensions:
extension.extension_scope = self
self.extensions_by_name = dict((f.name, f) for f in extensions)
# TODO(robinson): We should have aggressive checking here,
# for example:
# * If you specify a repeated field, you should not be allowed
# to specify a default value.
# * [Other examples here as needed].
#
# TODO(robinson): for this and other *Descriptor classes, we
# might also want to lock things down aggressively (e.g.,
# prevent clients from setting the attributes). Having
# stronger invariants here in general will reduce the number
# of runtime checks we must do in reflection.py...
class FieldDescriptor(DescriptorBase):
"""Descriptor for a single field in a .proto file.
A FieldDescriptor instance has the following attributes:
name: (str) Name of this field, exactly as it appears in .proto.
full_name: (str) Name of this field, including containing scope. This is
particularly relevant for extensions.
index: (int) Dense, 0-indexed index giving the order that this
field textually appears within its message in the .proto file.
number: (int) Tag number declared for this field in the .proto file.
type: (One of the TYPE_* constants below) Declared type.
cpp_type: (One of the CPPTYPE_* constants below) C++ type used to
represent this field.
label: (One of the LABEL_* constants below) Tells whether this
field is optional, required, or repeated.
default_value: (Varies) Default value of this field. Only
meaningful for non-repeated scalar fields. Repeated fields
should always set this to [], and non-repeated composite
fields should always set this to None.
containing_type: (Descriptor) Descriptor of the protocol message
type that contains this field. Set by the Descriptor constructor
if we're passed into one.
Somewhat confusingly, for extension fields, this is the
descriptor of the EXTENDED message, not the descriptor
of the message containing this field. (See is_extension and
extension_scope below).
message_type: (Descriptor) If a composite field, a descriptor
of the message type contained in this field. Otherwise, this is None.
enum_type: (EnumDescriptor) If this field contains an enum, a
descriptor of that enum. Otherwise, this is None.
is_extension: True iff this describes an extension field.
extension_scope: (Descriptor) Only meaningful if is_extension is True.
Gives the message that immediately contains this extension field.
Will be None iff we're a top-level (file-level) extension field.
options: (descriptor_pb2.FieldOptions) Protocol message field options or
None to use default field options.
"""
# Must be consistent with C++ FieldDescriptor::Type enum in
# descriptor.h.
#
# TODO(robinson): Find a way to eliminate this repetition.
TYPE_DOUBLE = 1
TYPE_FLOAT = 2
TYPE_INT64 = 3
TYPE_UINT64 = 4
TYPE_INT32 = 5
TYPE_FIXED64 = 6
TYPE_FIXED32 = 7
TYPE_BOOL = 8
TYPE_STRING = 9
TYPE_GROUP = 10
TYPE_MESSAGE = 11
TYPE_BYTES = 12
TYPE_UINT32 = 13
TYPE_ENUM = 14
TYPE_SFIXED32 = 15
TYPE_SFIXED64 = 16
TYPE_SINT32 = 17
TYPE_SINT64 = 18
MAX_TYPE = 18
# Must be consistent with C++ FieldDescriptor::CppType enum in
# descriptor.h.
#
# TODO(robinson): Find a way to eliminate this repetition.
CPPTYPE_INT32 = 1
CPPTYPE_INT64 = 2
CPPTYPE_UINT32 = 3
CPPTYPE_UINT64 = 4
CPPTYPE_DOUBLE = 5
CPPTYPE_FLOAT = 6
CPPTYPE_BOOL = 7
CPPTYPE_ENUM = 8
CPPTYPE_STRING = 9
CPPTYPE_MESSAGE = 10
MAX_CPPTYPE = 10
# Must be consistent with C++ FieldDescriptor::Label enum in
# descriptor.h.
#
# TODO(robinson): Find a way to eliminate this repetition.
LABEL_OPTIONAL = 1
LABEL_REQUIRED = 2
LABEL_REPEATED = 3
MAX_LABEL = 3
def __init__(self, name, full_name, index, number, type, cpp_type, label,
default_value, message_type, enum_type, containing_type,
is_extension, extension_scope, options=None):
"""The arguments are as described in the description of FieldDescriptor
attributes above.
Note that containing_type may be None, and may be set later if necessary
(to deal with circular references between message types, for example).
Likewise for extension_scope.
"""
super(FieldDescriptor, self).__init__(options, 'FieldOptions')
self.name = name
self.full_name = full_name
self.index = index
self.number = number
self.type = type
self.cpp_type = cpp_type
self.label = label
self.default_value = default_value
self.containing_type = containing_type
self.message_type = message_type
self.enum_type = enum_type
self.is_extension = is_extension
self.extension_scope = extension_scope
class EnumDescriptor(DescriptorBase):
"""Descriptor for an enum defined in a .proto file.
An EnumDescriptor instance has the following attributes:
name: (str) Name of the enum type.
full_name: (str) Full name of the type, including package name
and any enclosing type(s).
filename: (str) Name of the .proto file in which this appears.
values: (list of EnumValueDescriptors) List of the values
in this enum.
values_by_name: (dict str -> EnumValueDescriptor) Same as |values|,
but indexed by the "name" field of each EnumValueDescriptor.
values_by_number: (dict int -> EnumValueDescriptor) Same as |values|,
but indexed by the "number" field of each EnumValueDescriptor.
containing_type: (Descriptor) Descriptor of the immediate containing
type of this enum, or None if this is an enum defined at the
top level in a .proto file. Set by Descriptor's constructor
if we're passed into one.
options: (descriptor_pb2.EnumOptions) Enum options message or
None to use default enum options.
"""
def __init__(self, name, full_name, filename, values,
containing_type=None, options=None):
"""Arguments are as described in the attribute description above."""
super(EnumDescriptor, self).__init__(options, 'EnumOptions')
self.name = name
self.full_name = full_name
self.filename = filename
self.values = values
for value in self.values:
value.type = self
self.values_by_name = dict((v.name, v) for v in values)
self.values_by_number = dict((v.number, v) for v in values)
self.containing_type = containing_type
class EnumValueDescriptor(DescriptorBase):
"""Descriptor for a single value within an enum.
name: (str) Name of this value.
index: (int) Dense, 0-indexed index giving the order that this
value appears textually within its enum in the .proto file.
number: (int) Actual number assigned to this enum value.
type: (EnumDescriptor) EnumDescriptor to which this value
belongs. Set by EnumDescriptor's constructor if we're
passed into one.
options: (descriptor_pb2.EnumValueOptions) Enum value options message or
None to use default enum value options.
"""
def __init__(self, name, index, number, type=None, options=None):
"""Arguments are as described in the attribute description above."""
super(EnumValueDescriptor, self).__init__(options, 'EnumValueOptions')
self.name = name
self.index = index
self.number = number
self.type = type
class ServiceDescriptor(DescriptorBase):
"""Descriptor for a service.
name: (str) Name of the service.
full_name: (str) Full name of the service, including package name.
index: (int) 0-indexed index giving the order in which this service's
definition appears within the .proto file.
methods: (list of MethodDescriptor) List of methods provided by this
service.
options: (descriptor_pb2.ServiceOptions) Service options message or
None to use default service options.
"""
def __init__(self, name, full_name, index, methods, options=None):
super(ServiceDescriptor, self).__init__(options, 'ServiceOptions')
self.name = name
self.full_name = full_name
self.index = index
self.methods = methods
# Set the containing service for each method in this service.
for method in self.methods:
method.containing_service = self
def FindMethodByName(self, name):
"""Searches for the specified method, and returns its descriptor."""
for method in self.methods:
if name == method.name:
return method
return None
class MethodDescriptor(DescriptorBase):
"""Descriptor for a method in a service.
name: (str) Name of the method within the service.
full_name: (str) Full name of method.
index: (int) 0-indexed index of the method inside the service.
containing_service: (ServiceDescriptor) The service that contains this
method.
input_type: The descriptor of the message that this method accepts.
output_type: The descriptor of the message that this method returns.
options: (descriptor_pb2.MethodOptions) Method options message or
None to use default method options.
"""
def __init__(self, name, full_name, index, containing_service,
input_type, output_type, options=None):
"""The arguments are as described in the description of MethodDescriptor
attributes above.
Note that containing_service may be None, and may be set later if necessary.
"""
super(MethodDescriptor, self).__init__(options, 'MethodOptions')
self.name = name
self.full_name = full_name
self.index = index
self.containing_service = containing_service
self.input_type = input_type
self.output_type = output_type
def _ParseOptions(message, string):
"""Parses serialized options.
This helper function is used to parse serialized options in generated
proto2 files. It must not be used outside proto2.
"""
message.ParseFromString(string)
return message
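# Illustrative only: hand-building an enum descriptor shows the lookup
# tables and back-references that the constructors above create (the names
# below are made up for the example):
#
#   red = EnumValueDescriptor(name='RED', index=0, number=1)
#   color = EnumDescriptor(name='Color', full_name='example.Color',
#                          filename='example.proto', values=[red])
#   assert color.values_by_name['RED'] is red  # built by EnumDescriptor.__init__
#   assert red.type is color                   # back-reference set in __init__
#   color.GetOptions()   # falls back to a default descriptor_pb2.EnumOptions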

View File

@ -0,0 +1,194 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Class for decoding protocol buffer primitives.
Contains the logic for decoding every logical protocol field type
from one of the 5 physical wire types.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from google.protobuf import message
from google.protobuf.internal import input_stream
from google.protobuf.internal import wire_format
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by WireFormat from the C++ proto2
# implementation.
class Decoder(object):
"""Decodes logical protocol buffer fields from the wire."""
def __init__(self, s):
"""Initializes the decoder to read from s.
Args:
s: An immutable sequence of bytes, which must be accessible
via the Python buffer() primitive (i.e., buffer(s)).
"""
self._stream = input_stream.InputStream(s)
def EndOfStream(self):
"""Returns true iff we've reached the end of the bytes we're reading."""
return self._stream.EndOfStream()
def Position(self):
"""Returns the 0-indexed position in |s|."""
return self._stream.Position()
def ReadFieldNumberAndWireType(self):
"""Reads a tag from the wire. Returns a (field_number, wire_type) pair."""
tag_and_type = self.ReadUInt32()
return wire_format.UnpackTag(tag_and_type)
def SkipBytes(self, bytes):
"""Skips the specified number of bytes on the wire."""
self._stream.SkipBytes(bytes)
# Note that the Read*() methods below are not exactly symmetrical with the
# corresponding Encoder.Append*() methods. Those Encoder methods first
# encode a tag, but the Read*() methods below assume that the tag has already
# been read, and that the client wishes to read a field of the specified type
# starting at the current position.
def ReadInt32(self):
"""Reads and returns a signed, varint-encoded, 32-bit integer."""
return self._stream.ReadVarint32()
def ReadInt64(self):
"""Reads and returns a signed, varint-encoded, 64-bit integer."""
return self._stream.ReadVarint64()
def ReadUInt32(self):
"""Reads and returns an signed, varint-encoded, 32-bit integer."""
return self._stream.ReadVarUInt32()
def ReadUInt64(self):
"""Reads and returns an signed, varint-encoded,64-bit integer."""
return self._stream.ReadVarUInt64()
def ReadSInt32(self):
"""Reads and returns a signed, zigzag-encoded, varint-encoded,
32-bit integer."""
return wire_format.ZigZagDecode(self._stream.ReadVarUInt32())
def ReadSInt64(self):
"""Reads and returns a signed, zigzag-encoded, varint-encoded,
64-bit integer."""
return wire_format.ZigZagDecode(self._stream.ReadVarUInt64())
def ReadFixed32(self):
"""Reads and returns an unsigned, fixed-width, 32-bit integer."""
return self._stream.ReadLittleEndian32()
def ReadFixed64(self):
"""Reads and returns an unsigned, fixed-width, 64-bit integer."""
return self._stream.ReadLittleEndian64()
def ReadSFixed32(self):
"""Reads and returns a signed, fixed-width, 32-bit integer."""
value = self._stream.ReadLittleEndian32()
if value >= (1 << 31):
value -= (1 << 32)
return value
def ReadSFixed64(self):
"""Reads and returns a signed, fixed-width, 64-bit integer."""
value = self._stream.ReadLittleEndian64()
if value >= (1 << 63):
value -= (1 << 64)
return value
def ReadFloat(self):
"""Reads and returns a 4-byte floating-point number."""
serialized = self._stream.ReadString(4)
return struct.unpack('f', serialized)[0]
def ReadDouble(self):
"""Reads and returns an 8-byte floating-point number."""
serialized = self._stream.ReadString(8)
return struct.unpack('d', serialized)[0]
def ReadBool(self):
"""Reads and returns a bool."""
i = self._stream.ReadVarUInt32()
return bool(i)
def ReadEnum(self):
"""Reads and returns an enum value."""
return self._stream.ReadVarUInt32()
def ReadString(self):
"""Reads and returns a length-delimited string."""
length = self._stream.ReadVarUInt32()
return self._stream.ReadString(length)
def ReadBytes(self):
"""Reads and returns a length-delimited byte sequence."""
return self.ReadString()
def ReadMessageInto(self, msg):
"""Calls msg.MergeFromString() to merge
length-delimited serialized message data into |msg|.
REQUIRES: The decoder must be positioned at the serialized "length"
prefix to a length-delimited serialized message.
POSTCONDITION: The decoder is positioned just after the
serialized message, and we have merged those serialized
contents into |msg|.
"""
length = self._stream.ReadVarUInt32()
sub_buffer = self._stream.GetSubBuffer(length)
num_bytes_used = msg.MergeFromString(sub_buffer)
if num_bytes_used != length:
raise message.DecodeError(
'Submessage told to deserialize from %d-byte encoding, '
'but used only %d bytes' % (length, num_bytes_used))
self._stream.SkipBytes(num_bytes_used)
def ReadGroupInto(self, expected_field_number, group):
"""Calls group.MergeFromString() to merge
END_GROUP-delimited serialized message data into |group|.
We'll raise an exception if we don't find an END_GROUP
tag immediately after the serialized message contents.
REQUIRES: The decoder is positioned just after the START_GROUP
tag for this group.
POSTCONDITION: The decoder is positioned just after the
END_GROUP tag for this group, and we have merged
the contents of the group into |group|.
"""
sub_buffer = self._stream.GetSubBuffer() # No a priori length limit.
num_bytes_used = group.MergeFromString(sub_buffer)
if num_bytes_used < 0:
raise message.DecodeError('Group message reported negative bytes read.')
self._stream.SkipBytes(num_bytes_used)
field_number, field_type = self.ReadFieldNumberAndWireType()
if field_type != wire_format.WIRETYPE_END_GROUP:
raise message.DecodeError('Group message did not end with an END_GROUP.')
if field_number != expected_field_number:
raise message.DecodeError('END_GROUP tag had field '
'number %d, was expecting field number %d' % (
field_number, expected_field_number))
# We're now positioned just after the END_GROUP tag. Perfect.
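# For orientation, worked numbers for the wire_format helpers used above
# (illustrative only):
#
#   A tag packs the field number and wire type into one varint:
#     wire_format.PackTag(1, wire_format.WIRETYPE_VARINT) == 0x08, since
#     tag = (field_number << 3) | wire_type, and
#     wire_format.UnpackTag(0x08) == (1, wire_format.WIRETYPE_VARINT).
#
#   Zigzag decoding maps 0 -> 0, 1 -> -1, 2 -> 1, 3 -> -2, ..., so
#   wire_format.ZigZagDecode(1) == -1 and wire_format.ZigZagDecode(2) == 1.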

View File

@ -0,0 +1,230 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test for google.protobuf.internal.decoder."""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
import unittest
from google.protobuf.internal import wire_format
from google.protobuf.internal import encoder
from google.protobuf.internal import decoder
import logging
import mox
from google.protobuf.internal import input_stream
from google.protobuf import message
class DecoderTest(unittest.TestCase):
def setUp(self):
self.mox = mox.Mox()
self.mock_stream = self.mox.CreateMock(input_stream.InputStream)
self.mock_message = self.mox.CreateMock(message.Message)
def testReadFieldNumberAndWireType(self):
# Test field numbers that will require various varint sizes.
for expected_field_number in (1, 15, 16, 2047, 2048):
for expected_wire_type in range(6): # Highest-numbered wiretype is 5.
e = encoder.Encoder()
e._AppendTag(expected_field_number, expected_wire_type)
s = e.ToString()
d = decoder.Decoder(s)
field_number, wire_type = d.ReadFieldNumberAndWireType()
self.assertEqual(expected_field_number, field_number)
self.assertEqual(expected_wire_type, wire_type)
def ReadScalarTestHelper(self, test_name, decoder_method, expected_result,
expected_stream_method_name,
stream_method_return, *args):
"""Helper for testReadScalars below.
Calls one of the Decoder.Read*() methods and ensures that the results are
as expected.
Args:
test_name: Name of this test, used for logging only.
decoder_method: Unbound decoder.Decoder method to call.
expected_result: Value we expect returned from decoder_method().
expected_stream_method_name: (string) Name of the InputStream
method we expect Decoder to call to actually read the value
on the wire.
stream_method_return: Value our mocked-out stream method should
return to the decoder.
args: Additional arguments that we expect to be passed to the
stream method.
"""
logging.info('Testing %s scalar input.\n'
'Calling %r(), and expecting that to call the '
'stream method %s(%r), which will return %r. Finally, '
'expecting the Decoder method to return %r'% (
test_name, decoder_method,
expected_stream_method_name, args, stream_method_return,
expected_result))
d = decoder.Decoder('')
d._stream = self.mock_stream
if decoder_method in (decoder.Decoder.ReadString,
decoder.Decoder.ReadBytes):
self.mock_stream.ReadVarUInt32().AndReturn(len(stream_method_return))
# We have to use names instead of methods to work around some
# mox weirdness. (ResetAll() is overzealous).
expected_stream_method = getattr(self.mock_stream,
expected_stream_method_name)
expected_stream_method(*args).AndReturn(stream_method_return)
self.mox.ReplayAll()
self.assertEqual(expected_result, decoder_method(d))
self.mox.VerifyAll()
self.mox.ResetAll()
def testReadScalars(self):
test_string = 'I can feel myself getting sutpider.'
scalar_tests = [
['int32', decoder.Decoder.ReadInt32, 0, 'ReadVarint32', 0],
['int64', decoder.Decoder.ReadInt64, 0, 'ReadVarint64', 0],
['uint32', decoder.Decoder.ReadUInt32, 0, 'ReadVarUInt32', 0],
['uint64', decoder.Decoder.ReadUInt64, 0, 'ReadVarUInt64', 0],
['fixed32', decoder.Decoder.ReadFixed32, 0xffffffff,
'ReadLittleEndian32', 0xffffffff],
['fixed64', decoder.Decoder.ReadFixed64, 0xffffffffffffffff,
'ReadLittleEndian64', 0xffffffffffffffff],
['sfixed32', decoder.Decoder.ReadSFixed32, -1,
'ReadLittleEndian32', 0xffffffff],
['sfixed64', decoder.Decoder.ReadSFixed64, -1,
'ReadLittleEndian64', 0xffffffffffffffff],
['float', decoder.Decoder.ReadFloat, 0.0,
'ReadString', struct.pack('f', 0.0), 4],
['double', decoder.Decoder.ReadDouble, 0.0,
'ReadString', struct.pack('d', 0.0), 8],
['bool', decoder.Decoder.ReadBool, True, 'ReadVarUInt32', 1],
['enum', decoder.Decoder.ReadEnum, 23, 'ReadVarUInt32', 23],
['string', decoder.Decoder.ReadString,
test_string, 'ReadString', test_string, len(test_string)],
['bytes', decoder.Decoder.ReadBytes,
test_string, 'ReadString', test_string, len(test_string)],
# We test zigzag decoding routines more extensively below.
['sint32', decoder.Decoder.ReadSInt32, -1, 'ReadVarUInt32', 1],
['sint64', decoder.Decoder.ReadSInt64, -1, 'ReadVarUInt64', 1],
]
# Ensure that we're testing different Decoder methods and using
# different test names in all test cases above.
self.assertEqual(len(scalar_tests), len(set(t[0] for t in scalar_tests)))
self.assertEqual(len(scalar_tests), len(set(t[1] for t in scalar_tests)))
for args in scalar_tests:
self.ReadScalarTestHelper(*args)
def testReadMessageInto(self):
length = 23
def Test(simulate_error):
d = decoder.Decoder('')
d._stream = self.mock_stream
self.mock_stream.ReadVarUInt32().AndReturn(length)
sub_buffer = object()
self.mock_stream.GetSubBuffer(length).AndReturn(sub_buffer)
if simulate_error:
self.mock_message.MergeFromString(sub_buffer).AndReturn(length - 1)
self.mox.ReplayAll()
self.assertRaises(
message.DecodeError, d.ReadMessageInto, self.mock_message)
else:
self.mock_message.MergeFromString(sub_buffer).AndReturn(length)
self.mock_stream.SkipBytes(length)
self.mox.ReplayAll()
d.ReadMessageInto(self.mock_message)
self.mox.VerifyAll()
self.mox.ResetAll()
Test(simulate_error=False)
Test(simulate_error=True)
def testReadGroupInto_Success(self):
# Test both the empty and nonempty cases.
for num_bytes in (5, 0):
field_number = expected_field_number = 10
d = decoder.Decoder('')
d._stream = self.mock_stream
sub_buffer = object()
self.mock_stream.GetSubBuffer().AndReturn(sub_buffer)
self.mock_message.MergeFromString(sub_buffer).AndReturn(num_bytes)
self.mock_stream.SkipBytes(num_bytes)
self.mock_stream.ReadVarUInt32().AndReturn(wire_format.PackTag(
field_number, wire_format.WIRETYPE_END_GROUP))
self.mox.ReplayAll()
d.ReadGroupInto(expected_field_number, self.mock_message)
self.mox.VerifyAll()
self.mox.ResetAll()
def ReadGroupInto_FailureTestHelper(self, bytes_read):
d = decoder.Decoder('')
d._stream = self.mock_stream
sub_buffer = object()
self.mock_stream.GetSubBuffer().AndReturn(sub_buffer)
self.mock_message.MergeFromString(sub_buffer).AndReturn(bytes_read)
return d
def testReadGroupInto_NegativeBytesReported(self):
expected_field_number = 10
d = self.ReadGroupInto_FailureTestHelper(bytes_read=-1)
self.mox.ReplayAll()
self.assertRaises(message.DecodeError,
d.ReadGroupInto, expected_field_number,
self.mock_message)
self.mox.VerifyAll()
def testReadGroupInto_NoEndGroupTag(self):
field_number = expected_field_number = 10
num_bytes = 5
d = self.ReadGroupInto_FailureTestHelper(bytes_read=num_bytes)
self.mock_stream.SkipBytes(num_bytes)
# Right field number, wrong wire type.
self.mock_stream.ReadVarUInt32().AndReturn(wire_format.PackTag(
field_number, wire_format.WIRETYPE_LENGTH_DELIMITED))
self.mox.ReplayAll()
self.assertRaises(message.DecodeError,
d.ReadGroupInto, expected_field_number,
self.mock_message)
self.mox.VerifyAll()
def testReadGroupInto_WrongFieldNumberInEndGroupTag(self):
expected_field_number = 10
field_number = expected_field_number + 1
num_bytes = 5
d = self.ReadGroupInto_FailureTestHelper(bytes_read=num_bytes)
self.mock_stream.SkipBytes(num_bytes)
# Wrong field number, right wire type.
self.mock_stream.ReadVarUInt32().AndReturn(wire_format.PackTag(
field_number, wire_format.WIRETYPE_END_GROUP))
self.mox.ReplayAll()
self.assertRaises(message.DecodeError,
d.ReadGroupInto, expected_field_number,
self.mock_message)
self.mox.VerifyAll()
def testSkipBytes(self):
d = decoder.Decoder('')
num_bytes = 1024
self.mock_stream.SkipBytes(num_bytes)
d._stream = self.mock_stream
self.mox.ReplayAll()
d.SkipBytes(num_bytes)
self.mox.VerifyAll()
if __name__ == '__main__':
unittest.main()

View File

@ -0,0 +1,97 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unittest for google.protobuf.internal.descriptor."""
__author__ = 'robinson@google.com (Will Robinson)'
import unittest
from google.protobuf import descriptor_pb2
from google.protobuf import descriptor
class DescriptorTest(unittest.TestCase):
def setUp(self):
self.my_enum = descriptor.EnumDescriptor(
name='ForeignEnum',
full_name='protobuf_unittest.ForeignEnum',
filename='ForeignEnum',
values=[
descriptor.EnumValueDescriptor(name='FOREIGN_FOO', index=0, number=4),
descriptor.EnumValueDescriptor(name='FOREIGN_BAR', index=1, number=5),
descriptor.EnumValueDescriptor(name='FOREIGN_BAZ', index=2, number=6),
])
self.my_message = descriptor.Descriptor(
name='NestedMessage',
full_name='protobuf_unittest.TestAllTypes.NestedMessage',
filename='some/filename/some.proto',
containing_type=None,
fields=[
descriptor.FieldDescriptor(
name='bb',
full_name='protobuf_unittest.TestAllTypes.NestedMessage.bb',
index=0, number=1,
type=5, cpp_type=1, label=1,
default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None),
],
nested_types=[],
enum_types=[
self.my_enum,
],
extensions=[])
self.my_method = descriptor.MethodDescriptor(
name='Bar',
full_name='protobuf_unittest.TestService.Bar',
index=0,
containing_service=None,
input_type=None,
output_type=None)
self.my_service = descriptor.ServiceDescriptor(
name='TestServiceWithOptions',
full_name='protobuf_unittest.TestServiceWithOptions',
index=0,
methods=[
self.my_method
])
def testEnumFixups(self):
self.assertEqual(self.my_enum, self.my_enum.values[0].type)
def testContainingTypeFixups(self):
self.assertEqual(self.my_message, self.my_message.fields[0].containing_type)
self.assertEqual(self.my_message, self.my_enum.containing_type)
def testContainingServiceFixups(self):
self.assertEqual(self.my_service, self.my_method.containing_service)
def testGetOptions(self):
self.assertEqual(self.my_enum.GetOptions(),
descriptor_pb2.EnumOptions())
self.assertEqual(self.my_enum.values[0].GetOptions(),
descriptor_pb2.EnumValueOptions())
self.assertEqual(self.my_message.GetOptions(),
descriptor_pb2.MessageOptions())
self.assertEqual(self.my_message.fields[0].GetOptions(),
descriptor_pb2.FieldOptions())
self.assertEqual(self.my_method.GetOptions(),
descriptor_pb2.MethodOptions())
self.assertEqual(self.my_service.GetOptions(),
descriptor_pb2.ServiceOptions())
if __name__ == '__main__':
unittest.main()

View File

@ -0,0 +1,192 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Class for encoding protocol message primitives.
Contains the logic for encoding every logical protocol field type
into one of the 5 physical wire types.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from google.protobuf import message
from google.protobuf.internal import wire_format
from google.protobuf.internal import output_stream
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by WireFormat from the C++ proto2
# implementation.
class Encoder(object):
"""Encodes logical protocol buffer fields to the wire format."""
def __init__(self):
self._stream = output_stream.OutputStream()
def ToString(self):
"""Returns all values encoded in this object as a string."""
return self._stream.ToString()
# All the Append*() methods below first append a tag+type pair to the buffer
# before appending the specified value.
def AppendInt32(self, field_number, value):
"""Appends a 32-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarint32(value)
def AppendInt64(self, field_number, value):
"""Appends a 64-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarint64(value)
def AppendUInt32(self, field_number, unsigned_value):
"""Appends an unsigned 32-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarUInt32(unsigned_value)
def AppendUInt64(self, field_number, unsigned_value):
"""Appends an unsigned 64-bit integer to our buffer, varint-encoded."""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
self._stream.AppendVarUInt64(unsigned_value)
def AppendSInt32(self, field_number, value):
"""Appends a 32-bit integer to our buffer, zigzag-encoded and then
varint-encoded.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
zigzag_value = wire_format.ZigZagEncode(value)
self._stream.AppendVarUInt32(zigzag_value)
def AppendSInt64(self, field_number, value):
"""Appends a 64-bit integer to our buffer, zigzag-encoded and then
varint-encoded.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_VARINT)
zigzag_value = wire_format.ZigZagEncode(value)
self._stream.AppendVarUInt64(zigzag_value)
def AppendFixed32(self, field_number, unsigned_value):
"""Appends an unsigned 32-bit integer to our buffer, in little-endian
byte-order.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED32)
self._stream.AppendLittleEndian32(unsigned_value)
def AppendFixed64(self, field_number, unsigned_value):
"""Appends an unsigned 64-bit integer to our buffer, in little-endian
byte-order.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED64)
self._stream.AppendLittleEndian64(unsigned_value)
def AppendSFixed32(self, field_number, value):
"""Appends a signed 32-bit integer to our buffer, in little-endian
byte-order.
"""
sign = (value & 0x80000000) and -1 or 0
if value >> 32 != sign:
raise message.EncodeError('SFixed32 out of range: %d' % value)
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED32)
self._stream.AppendLittleEndian32(value & 0xffffffff)
def AppendSFixed64(self, field_number, value):
"""Appends a signed 64-bit integer to our buffer, in little-endian
byte-order.
"""
sign = (value & 0x8000000000000000) and -1 or 0
if value >> 64 != sign:
raise message.EncodeError('SFixed64 out of range: %d' % value)
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED64)
self._stream.AppendLittleEndian64(value & 0xffffffffffffffff)
def AppendFloat(self, field_number, value):
"""Appends a floating-point number to our buffer."""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED32)
self._stream.AppendRawBytes(struct.pack('f', value))
def AppendDouble(self, field_number, value):
"""Appends a double-precision floating-point number to our buffer."""
self._AppendTag(field_number, wire_format.WIRETYPE_FIXED64)
self._stream.AppendRawBytes(struct.pack('d', value))
def AppendBool(self, field_number, value):
"""Appends a boolean to our buffer."""
self.AppendInt32(field_number, value)
def AppendEnum(self, field_number, value):
"""Appends an enum value to our buffer."""
self.AppendInt32(field_number, value)
def AppendString(self, field_number, value):
"""Appends a length-prefixed string to our buffer, with the
length varint-encoded.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
self._stream.AppendVarUInt32(len(value))
self._stream.AppendRawBytes(value)
def AppendBytes(self, field_number, value):
"""Appends a length-prefixed sequence of bytes to our buffer, with the
length varint-encoded.
"""
self.AppendString(field_number, value)
# TODO(robinson): For AppendGroup() and AppendMessage(), we'd really like to
# avoid the extra string copy here. We can do so if we widen the Message
# interface to be able to serialize to a stream in addition to a string. The
# challenge when thinking ahead to the Python/C API implementation of Message
# is finding a stream-like Python thing to which we can write raw bytes
# from C. I'm not sure such a thing exists(?). (array.array is pretty much
# what we want, but it's not directly exposed in the Python/C API).
def AppendGroup(self, field_number, group):
"""Appends a group to our buffer.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_START_GROUP)
self._stream.AppendRawBytes(group.SerializeToString())
self._AppendTag(field_number, wire_format.WIRETYPE_END_GROUP)
def AppendMessage(self, field_number, msg):
"""Appends a nested message to our buffer.
"""
self._AppendTag(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
self._stream.AppendVarUInt32(msg.ByteSize())
self._stream.AppendRawBytes(msg.SerializeToString())
def AppendMessageSetItem(self, field_number, msg):
"""Appends an item using the message set wire format.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
self._AppendTag(1, wire_format.WIRETYPE_START_GROUP)
self.AppendInt32(2, field_number)
self.AppendMessage(3, msg)
self._AppendTag(1, wire_format.WIRETYPE_END_GROUP)
def _AppendTag(self, field_number, wire_type):
"""Appends a tag containing field number and wire type information."""
self._stream.AppendVarUInt32(wire_format.PackTag(field_number, wire_type))
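# Illustrative only (not executed): a minimal round trip through Encoder
# and Decoder, with arbitrary field numbers.
#
#   from google.protobuf.internal import decoder, encoder, wire_format
#
#   e = encoder.Encoder()
#   e.AppendInt32(1, 150)    # tag for field 1 (varint wire type), then 150
#   e.AppendSInt32(2, -1)    # zigzag-encoded: -1 is written as varint 1
#   data = e.ToString()
#
#   d = decoder.Decoder(data)
#   assert d.ReadFieldNumberAndWireType() == (1, wire_format.WIRETYPE_VARINT)
#   assert d.ReadInt32() == 150
#   assert d.ReadFieldNumberAndWireType() == (2, wire_format.WIRETYPE_VARINT)
#   assert d.ReadSInt32() == -1
#   assert d.EndOfStream()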

View File

@ -0,0 +1,211 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test for google.protobuf.internal.encoder."""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
import logging
import unittest
import mox
from google.protobuf.internal import wire_format
from google.protobuf.internal import encoder
from google.protobuf.internal import output_stream
from google.protobuf import message
class EncoderTest(unittest.TestCase):
def setUp(self):
self.mox = mox.Mox()
self.encoder = encoder.Encoder()
self.mock_stream = self.mox.CreateMock(output_stream.OutputStream)
self.mock_message = self.mox.CreateMock(message.Message)
self.encoder._stream = self.mock_stream
def PackTag(self, field_number, wire_type):
return wire_format.PackTag(field_number, wire_type)
def AppendScalarTestHelper(self, test_name, encoder_method,
expected_stream_method_name,
wire_type, field_value, expected_value=None):
"""Helper for testAppendScalars.
Calls one of the Encoder methods, and ensures that the Encoder
in turn makes the expected calls into its OutputStream.
Args:
test_name: Name of this test, used only for logging.
encoder_method: Callable on self.encoder, which should
accept |field_value| as an argument. This is the Encoder
method we're testing.
expected_stream_method_name: (string) Name of the OutputStream
method we expect Encoder to call to actually put the value
on the wire.
wire_type: The WIRETYPE_* constant we expect encoder to
use in the specified encoder_method.
field_value: The value we're trying to encode. Passed
into encoder_method.
expected_value: The value we expect Encoder to pass into
the OutputStream method. If None, we expect field_value
to pass through unmodified.
"""
if expected_value is None:
expected_value = field_value
logging.info('Testing %s scalar output.\n'
'Calling %r(%r), and expecting that to call the '
'stream method %s(%r).' % (
test_name, encoder_method, field_value,
expected_stream_method_name, expected_value))
field_number = 10
# Should first append the field number and type information.
self.mock_stream.AppendVarUInt32(self.PackTag(field_number, wire_type))
# If we're length-delimited, we should then append the length.
if wire_type == wire_format.WIRETYPE_LENGTH_DELIMITED:
self.mock_stream.AppendVarUInt32(len(field_value))
# Should then append the value itself.
# We have to use names instead of methods to work around some
# mox weirdness. (ResetAll() is overzealous).
expected_stream_method = getattr(self.mock_stream,
expected_stream_method_name)
expected_stream_method(expected_value)
self.mox.ReplayAll()
encoder_method(field_number, field_value)
self.mox.VerifyAll()
self.mox.ResetAll()
def testAppendScalars(self):
scalar_tests = [
['int32', self.encoder.AppendInt32, 'AppendVarint32',
wire_format.WIRETYPE_VARINT, 0],
['int64', self.encoder.AppendInt64, 'AppendVarint64',
wire_format.WIRETYPE_VARINT, 0],
['uint32', self.encoder.AppendUInt32, 'AppendVarUInt32',
wire_format.WIRETYPE_VARINT, 0],
['uint64', self.encoder.AppendUInt64, 'AppendVarUInt64',
wire_format.WIRETYPE_VARINT, 0],
['fixed32', self.encoder.AppendFixed32, 'AppendLittleEndian32',
wire_format.WIRETYPE_FIXED32, 0],
['fixed64', self.encoder.AppendFixed64, 'AppendLittleEndian64',
wire_format.WIRETYPE_FIXED64, 0],
['sfixed32', self.encoder.AppendSFixed32, 'AppendLittleEndian32',
wire_format.WIRETYPE_FIXED32, -1, 0xffffffff],
['sfixed64', self.encoder.AppendSFixed64, 'AppendLittleEndian64',
wire_format.WIRETYPE_FIXED64, -1, 0xffffffffffffffff],
['float', self.encoder.AppendFloat, 'AppendRawBytes',
wire_format.WIRETYPE_FIXED32, 0.0, struct.pack('f', 0.0)],
['double', self.encoder.AppendDouble, 'AppendRawBytes',
wire_format.WIRETYPE_FIXED64, 0.0, struct.pack('d', 0.0)],
['bool', self.encoder.AppendBool, 'AppendVarint32',
wire_format.WIRETYPE_VARINT, False],
['enum', self.encoder.AppendEnum, 'AppendVarint32',
wire_format.WIRETYPE_VARINT, 0],
['string', self.encoder.AppendString, 'AppendRawBytes',
wire_format.WIRETYPE_LENGTH_DELIMITED,
"You're in a maze of twisty little passages, all alike."],
# We test zigzag encoding routines more extensively below.
['sint32', self.encoder.AppendSInt32, 'AppendVarUInt32',
wire_format.WIRETYPE_VARINT, -1, 1],
['sint64', self.encoder.AppendSInt64, 'AppendVarUInt64',
wire_format.WIRETYPE_VARINT, -1, 1],
]
# Ensure that we're testing different Encoder methods and using
# different test names in all test cases above.
self.assertEqual(len(scalar_tests), len(set(t[0] for t in scalar_tests)))
self.assertEqual(len(scalar_tests), len(set(t[1] for t in scalar_tests)))
for args in scalar_tests:
self.AppendScalarTestHelper(*args)
def testAppendGroup(self):
field_number = 23
# Should first append the start-group marker.
self.mock_stream.AppendVarUInt32(
self.PackTag(field_number, wire_format.WIRETYPE_START_GROUP))
# Should then serialize itself.
self.mock_message.SerializeToString().AndReturn('foo')
self.mock_stream.AppendRawBytes('foo')
# Should finally append the end-group marker.
self.mock_stream.AppendVarUInt32(
self.PackTag(field_number, wire_format.WIRETYPE_END_GROUP))
self.mox.ReplayAll()
self.encoder.AppendGroup(field_number, self.mock_message)
self.mox.VerifyAll()
def testAppendMessage(self):
field_number = 23
byte_size = 42
# Should first append the field number and type information.
self.mock_stream.AppendVarUInt32(
self.PackTag(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED))
# Should then append its length.
self.mock_message.ByteSize().AndReturn(byte_size)
self.mock_stream.AppendVarUInt32(byte_size)
# Should then serialize itself to the encoder.
self.mock_message.SerializeToString().AndReturn('foo')
self.mock_stream.AppendRawBytes('foo')
self.mox.ReplayAll()
self.encoder.AppendMessage(field_number, self.mock_message)
self.mox.VerifyAll()
def testAppendMessageSetItem(self):
field_number = 23
byte_size = 42
# Should first append the field number and type information.
self.mock_stream.AppendVarUInt32(
self.PackTag(1, wire_format.WIRETYPE_START_GROUP))
self.mock_stream.AppendVarUInt32(
self.PackTag(2, wire_format.WIRETYPE_VARINT))
self.mock_stream.AppendVarint32(field_number)
self.mock_stream.AppendVarUInt32(
self.PackTag(3, wire_format.WIRETYPE_LENGTH_DELIMITED))
# Should then append its length.
self.mock_message.ByteSize().AndReturn(byte_size)
self.mock_stream.AppendVarUInt32(byte_size)
# Should then serialize itself to the encoder.
self.mock_message.SerializeToString().AndReturn('foo')
self.mock_stream.AppendRawBytes('foo')
self.mock_stream.AppendVarUInt32(
self.PackTag(1, wire_format.WIRETYPE_END_GROUP))
self.mox.ReplayAll()
self.encoder.AppendMessageSetItem(field_number, self.mock_message)
self.mox.VerifyAll()
def testAppendSFixed(self):
# Most of our bounds-checking is done in output_stream.py,
# but encoder.py is responsible for transforming signed
# fixed-width integers into unsigned ones, so we test here
# to ensure that we're not losing any entropy when we do
# that conversion.
field_number = 10
self.assertRaises(message.EncodeError, self.encoder.AppendSFixed32,
field_number, wire_format.UINT32_MAX + 1)
self.assertRaises(message.EncodeError, self.encoder.AppendSFixed32,
field_number, -(1 << 32))
self.assertRaises(message.EncodeError, self.encoder.AppendSFixed64,
field_number, wire_format.UINT64_MAX + 1)
self.assertRaises(message.EncodeError, self.encoder.AppendSFixed64,
field_number, -(1 << 64))
if __name__ == '__main__':
unittest.main()

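The MessageSetItem test above mocks out the stream, so the actual byte layout never appears. A minimal sketch, assuming the output_stream and wire_format modules from elsewhere in this checkin and Python 2, of the same framing built by hand ('foo' stands in for a real serialized message):

from google.protobuf.internal import output_stream
from google.protobuf.internal import wire_format

def EncodeMessageSetItem(type_id, serialized_message):
  """Returns the wire bytes for a single MessageSet item (sketch)."""
  stream = output_stream.OutputStream()
  # The item is framed as a group with field number 1.
  stream.AppendVarUInt32(wire_format.PackTag(1, wire_format.WIRETYPE_START_GROUP))
  # Field 2 carries the type_id: the extension's field number.
  stream.AppendVarUInt32(wire_format.PackTag(2, wire_format.WIRETYPE_VARINT))
  stream.AppendVarint32(type_id)
  # Field 3 carries the length-delimited message payload.
  stream.AppendVarUInt32(wire_format.PackTag(3, wire_format.WIRETYPE_LENGTH_DELIMITED))
  stream.AppendVarUInt32(len(serialized_message))
  stream.AppendRawBytes(serialized_message)
  stream.AppendVarUInt32(wire_format.PackTag(1, wire_format.WIRETYPE_END_GROUP))
  return stream.ToString()

# '\x0b' = tag(1, START_GROUP), '\x10' = tag(2, VARINT), '\x17' = 23,
# '\x1a' = tag(3, LENGTH_DELIMITED), '\x03' = length, '\x0c' = tag(1, END_GROUP).
assert EncodeMessageSetItem(23, 'foo') == '\x0b\x10\x17\x1a\x03foo\x0c'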
View File

@ -0,0 +1,84 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO(robinson): Flesh this out considerably. We focused on reflection_test.py
# first, since it's testing the subtler code, and since it provides decent
# indirect testing of the protocol compiler output.
"""Unittest that directly tests the output of the pure-Python protocol
compiler. See //net/proto2/internal/reflection_test.py for a test which
further ensures that we can use Python protocol message objects as we expect.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import unittest
from google.protobuf import unittest_mset_pb2
from google.protobuf import unittest_pb2
class GeneratorTest(unittest.TestCase):
def testNestedMessageDescriptor(self):
field_name = 'optional_nested_message'
proto_type = unittest_pb2.TestAllTypes
self.assertEqual(
proto_type.NestedMessage.DESCRIPTOR,
proto_type.DESCRIPTOR.fields_by_name[field_name].message_type)
def testEnums(self):
# We test only module-level enums here.
# TODO(robinson): Examine descriptors directly to check
# enum descriptor output.
self.assertEqual(4, unittest_pb2.FOREIGN_FOO)
self.assertEqual(5, unittest_pb2.FOREIGN_BAR)
self.assertEqual(6, unittest_pb2.FOREIGN_BAZ)
proto = unittest_pb2.TestAllTypes()
self.assertEqual(1, proto.FOO)
self.assertEqual(1, unittest_pb2.TestAllTypes.FOO)
self.assertEqual(2, proto.BAR)
self.assertEqual(2, unittest_pb2.TestAllTypes.BAR)
self.assertEqual(3, proto.BAZ)
self.assertEqual(3, unittest_pb2.TestAllTypes.BAZ)
def testContainingTypeBehaviorForExtensions(self):
self.assertEqual(unittest_pb2.optional_int32_extension.containing_type,
unittest_pb2.TestAllExtensions.DESCRIPTOR)
self.assertEqual(unittest_pb2.TestRequired.single.containing_type,
unittest_pb2.TestAllExtensions.DESCRIPTOR)
def testExtensionScope(self):
self.assertEqual(unittest_pb2.optional_int32_extension.extension_scope,
None)
self.assertEqual(unittest_pb2.TestRequired.single.extension_scope,
unittest_pb2.TestRequired.DESCRIPTOR)
def testIsExtension(self):
self.assertTrue(unittest_pb2.optional_int32_extension.is_extension)
self.assertTrue(unittest_pb2.TestRequired.single.is_extension)
message_descriptor = unittest_pb2.TestRequired.DESCRIPTOR
non_extension_descriptor = message_descriptor.fields_by_name['a']
self.assertTrue(not non_extension_descriptor.is_extension)
def testOptions(self):
proto = unittest_mset_pb2.TestMessageSet()
self.assertTrue(proto.DESCRIPTOR.GetOptions().message_set_wire_format)
if __name__ == '__main__':
unittest.main()

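A small usage sketch of the descriptor lookups exercised above, assuming the generated unittest_pb2 module is importable (Python 2):

from google.protobuf import unittest_pb2

msg_descriptor = unittest_pb2.TestAllTypes.DESCRIPTOR
field = msg_descriptor.fields_by_name['optional_nested_message']
# A message-typed field points at the descriptor of its message type.
assert field.message_type == unittest_pb2.TestAllTypes.NestedMessage.DESCRIPTOR
# Module-level extensions know their containing type but have no scope.
ext = unittest_pb2.optional_int32_extension
assert ext.is_extension
assert ext.containing_type == unittest_pb2.TestAllExtensions.DESCRIPTOR
assert ext.extension_scope is None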
View File

@ -0,0 +1,211 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""InputStream is the primitive interface for reading bits from the wire.
All protocol buffer deserialization can be expressed in terms of
the InputStream primitives provided here.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from google.protobuf import message
from google.protobuf.internal import wire_format
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by CodedInputStream from the C++
# proto2 implementation.
class InputStream(object):
"""Contains all logic for reading bits, and dealing with stream position.
If an InputStream method ever raises an exception, the stream is left
in an indeterminate state and is not safe for further use.
"""
def __init__(self, s):
# What we really want is something like array('B', s), where elements we
# read from the array are already given to us as one-byte integers. BUT
# using array() instead of buffer() would force full string copies to result
# from each GetSubBuffer() call.
#
# So, if the N serialized bytes of a single protocol buffer object are
# split evenly between 2 child messages, and so on recursively, using
# array('B', s) instead of buffer() would incur an additional N*logN bytes
# copied during deserialization.
#
# The higher constant overhead of having to ord() for every byte we read
# from the buffer in _ReadVarintHelper() could definitely lead to worse
# performance in many real-world scenarios, even if the asymptotic
# complexity is better. However, our real answer is that the mythical
# Python/C extension module output mode for the protocol compiler will
# be blazing-fast and will eliminate most use of this class anyway.
self._buffer = buffer(s)
self._pos = 0
def EndOfStream(self):
"""Returns true iff we're at the end of the stream.
If this returns true, then a call to any other InputStream method
will raise an exception.
"""
return self._pos >= len(self._buffer)
def Position(self):
"""Returns the current position in the stream, or equivalently, the
number of bytes read so far.
"""
return self._pos
def GetSubBuffer(self, size=None):
"""Returns a sequence-like object that represents a portion of our
underlying sequence.
Position 0 in the returned object corresponds to self.Position()
in this stream.
If size is specified, then the returned object ends after the
next "size" bytes in this stream. If size is not specified,
then the returned object ends at the end of this stream.
We guarantee that the returned object R supports the Python buffer
interface (and thus that the call buffer(R) will work).
Note that the returned buffer is read-only.
The intended use for this method is for nested-message and nested-group
deserialization, where we want to make a recursive MergeFromString()
call on the portion of the original sequence that contains the serialized
nested message. (And we'd like to do so without making unnecessary string
copies).
REQUIRES: size is nonnegative.
"""
# Note that buffer() doesn't perform any actual string copy.
if size is None:
return buffer(self._buffer, self._pos)
else:
if size < 0:
raise message.DecodeError('Negative size %d' % size)
return buffer(self._buffer, self._pos, size)
def SkipBytes(self, num_bytes):
"""Skip num_bytes bytes ahead, or go to the end of the stream, whichever
comes first.
REQUIRES: num_bytes is nonnegative.
"""
if num_bytes < 0:
raise message.DecodeError('Negative num_bytes %d' % num_bytes)
self._pos += num_bytes
self._pos = min(self._pos, len(self._buffer))
def ReadString(self, size):
"""Reads up to 'size' bytes from the stream, stopping early
only if we reach the end of the stream. Returns the bytes read
as a string.
"""
if size < 0:
raise message.DecodeError('Negative size %d' % size)
s = (self._buffer[self._pos : self._pos + size])
self._pos += len(s) # Only advance by the number of bytes actually read.
return s
def ReadLittleEndian32(self):
"""Interprets the next 4 bytes of the stream as a little-endian
encoded, unsigned 32-bit integer, and returns that integer.
"""
try:
i = struct.unpack(wire_format.FORMAT_UINT32_LITTLE_ENDIAN,
self._buffer[self._pos : self._pos + 4])
self._pos += 4
return i[0] # unpack() result is a 1-element tuple.
except struct.error, e:
raise message.DecodeError(e)
def ReadLittleEndian64(self):
"""Interprets the next 8 bytes of the stream as a little-endian
encoded, unsigned 64-bit integer, and returns that integer.
"""
try:
i = struct.unpack(wire_format.FORMAT_UINT64_LITTLE_ENDIAN,
self._buffer[self._pos : self._pos + 8])
self._pos += 8
return i[0] # unpack() result is a 1-element tuple.
except struct.error, e:
raise message.DecodeError(e)
def ReadVarint32(self):
"""Reads a varint from the stream, interprets this varint
as a signed, 32-bit integer, and returns the integer.
"""
i = self.ReadVarint64()
if not wire_format.INT32_MIN <= i <= wire_format.INT32_MAX:
raise message.DecodeError('Value out of range for int32: %d' % i)
return int(i)
def ReadVarUInt32(self):
"""Reads a varint from the stream, interprets this varint
as an unsigned, 32-bit integer, and returns the integer.
"""
i = self.ReadVarUInt64()
if i > wire_format.UINT32_MAX:
raise message.DecodeError('Value out of range for uint32: %d' % i)
return i
def ReadVarint64(self):
"""Reads a varint from the stream, interprets this varint
as a signed, 64-bit integer, and returns the integer.
"""
i = self.ReadVarUInt64()
if i > wire_format.INT64_MAX:
i -= (1 << 64)
return i
def ReadVarUInt64(self):
"""Reads a varint from the stream, interprets this varint
as an unsigned, 64-bit integer, and returns the integer.
"""
i = self._ReadVarintHelper()
if not 0 <= i <= wire_format.UINT64_MAX:
raise message.DecodeError('Value out of range for uint64: %d' % i)
return i
def _ReadVarintHelper(self):
"""Helper for the various varint-reading methods above.
Reads an unsigned, varint-encoded integer from the stream and
returns this integer.
Does no bounds checking except to ensure that we read at most as many bytes
as could possibly be present in a varint-encoded 64-bit number.
"""
result = 0
shift = 0
while 1:
if shift >= 64:
raise message.DecodeError('Too many bytes when decoding varint.')
try:
b = ord(self._buffer[self._pos])
except IndexError:
raise message.DecodeError('Truncated varint.')
self._pos += 1
result |= ((b & 0x7f) << shift)
shift += 7
if not (b & 0x80):
return result

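A short usage sketch of the InputStream class above (Python 2, where the buffer() builtin exists), reading back a varint, a little-endian word, and a raw string from a hand-built wire string:

from google.protobuf.internal import input_stream

stream = input_stream.InputStream('\x80\x01' '\x01\x00\x00\x00' 'abc')
assert stream.ReadVarUInt32() == 128         # '\x80\x01' is varint 128.
assert stream.ReadLittleEndian32() == 1      # Four little-endian bytes.
assert str(stream.GetSubBuffer()) == 'abc'   # Remaining bytes, no copy made.
assert stream.ReadString(3) == 'abc'
assert stream.EndOfStream()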
View File

@ -0,0 +1,279 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test for google.protobuf.internal.input_stream."""
__author__ = 'robinson@google.com (Will Robinson)'
import unittest
from google.protobuf import message
from google.protobuf.internal import wire_format
from google.protobuf.internal import input_stream
class InputStreamTest(unittest.TestCase):
def testEndOfStream(self):
stream = input_stream.InputStream('abcd')
self.assertFalse(stream.EndOfStream())
self.assertEqual('abcd', stream.ReadString(10))
self.assertTrue(stream.EndOfStream())
def testPosition(self):
stream = input_stream.InputStream('abcd')
self.assertEqual(0, stream.Position())
self.assertEqual(0, stream.Position()) # No side-effects.
stream.ReadString(1)
self.assertEqual(1, stream.Position())
stream.ReadString(1)
self.assertEqual(2, stream.Position())
stream.ReadString(10)
self.assertEqual(4, stream.Position()) # Can't go past end of stream.
def testGetSubBuffer(self):
stream = input_stream.InputStream('abcd')
# Try leaving out the size.
self.assertEqual('abcd', str(stream.GetSubBuffer()))
stream.SkipBytes(1)
# GetSubBuffer() always starts at the current position.
self.assertEqual('bcd', str(stream.GetSubBuffer()))
# Try 0-size.
self.assertEqual('', str(stream.GetSubBuffer(0)))
# Negative sizes should raise an error.
self.assertRaises(message.DecodeError, stream.GetSubBuffer, -1)
# Positive sizes should work as expected.
self.assertEqual('b', str(stream.GetSubBuffer(1)))
self.assertEqual('bc', str(stream.GetSubBuffer(2)))
# Sizes longer than remaining bytes in the buffer should
# return the whole remaining buffer.
self.assertEqual('bcd', str(stream.GetSubBuffer(1000)))
def testSkipBytes(self):
stream = input_stream.InputStream('')
# Skipping bytes when at the end of stream
# should have no effect.
stream.SkipBytes(0)
stream.SkipBytes(1)
stream.SkipBytes(2)
self.assertTrue(stream.EndOfStream())
self.assertEqual(0, stream.Position())
# Try skipping within a stream.
stream = input_stream.InputStream('abcd')
self.assertEqual(0, stream.Position())
stream.SkipBytes(1)
self.assertEqual(1, stream.Position())
stream.SkipBytes(10) # Can't skip past the end.
self.assertEqual(4, stream.Position())
# Ensure that a negative skip raises an exception.
stream = input_stream.InputStream('abcd')
stream.SkipBytes(1)
self.assertRaises(message.DecodeError, stream.SkipBytes, -1)
def testReadString(self):
s = 'abcd'
# Also test going past the total stream length.
for i in range(len(s) + 10):
stream = input_stream.InputStream(s)
self.assertEqual(s[:i], stream.ReadString(i))
self.assertEqual(min(i, len(s)), stream.Position())
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadString, -1)
def EnsureFailureOnEmptyStream(self, input_stream_method):
"""Helper for integer-parsing tests below.
Ensures that the given InputStream method raises a DecodeError
if called on a stream with no bytes remaining.
"""
stream = input_stream.InputStream('')
self.assertRaises(message.DecodeError, input_stream_method, stream)
def testReadLittleEndian32(self):
self.EnsureFailureOnEmptyStream(input_stream.InputStream.ReadLittleEndian32)
s = ''
# Read 0.
s += '\x00\x00\x00\x00'
# Read 1.
s += '\x01\x00\x00\x00'
# Read a bunch of different bytes.
s += '\x01\x02\x03\x04'
# Read max unsigned 32-bit int.
s += '\xff\xff\xff\xff'
# Try a read with fewer than 4 bytes left in the stream.
s += '\x00\x00\x00'
stream = input_stream.InputStream(s)
self.assertEqual(0, stream.ReadLittleEndian32())
self.assertEqual(4, stream.Position())
self.assertEqual(1, stream.ReadLittleEndian32())
self.assertEqual(8, stream.Position())
self.assertEqual(0x04030201, stream.ReadLittleEndian32())
self.assertEqual(12, stream.Position())
self.assertEqual(wire_format.UINT32_MAX, stream.ReadLittleEndian32())
self.assertEqual(16, stream.Position())
self.assertRaises(message.DecodeError, stream.ReadLittleEndian32)
def testReadLittleEndian64(self):
self.EnsureFailureOnEmptyStream(input_stream.InputStream.ReadLittleEndian64)
s = ''
# Read 0.
s += '\x00\x00\x00\x00\x00\x00\x00\x00'
# Read 1.
s += '\x01\x00\x00\x00\x00\x00\x00\x00'
# Read a bunch of different bytes.
s += '\x01\x02\x03\x04\x05\x06\x07\x08'
# Read max unsigned 64-bit int.
s += '\xff\xff\xff\xff\xff\xff\xff\xff'
# Try a read with fewer than 8 bytes left in the stream.
s += '\x00\x00\x00'
stream = input_stream.InputStream(s)
self.assertEqual(0, stream.ReadLittleEndian64())
self.assertEqual(8, stream.Position())
self.assertEqual(1, stream.ReadLittleEndian64())
self.assertEqual(16, stream.Position())
self.assertEqual(0x0807060504030201, stream.ReadLittleEndian64())
self.assertEqual(24, stream.Position())
self.assertEqual(wire_format.UINT64_MAX, stream.ReadLittleEndian64())
self.assertEqual(32, stream.Position())
self.assertRaises(message.DecodeError, stream.ReadLittleEndian64)
def ReadVarintSuccessTestHelper(self, varints_and_ints, read_method):
"""Helper for tests below that test successful reads of various varints.
Args:
varints_and_ints: Iterable of (str, integer) pairs, where the string
gives the wire encoding and the integer gives the value we expect
to be returned by the read_method upon encountering this string.
read_method: Unbound InputStream method that is capable of reading
the encoded strings provided in the first elements of varints_and_ints.
"""
s = ''.join(s for s, i in varints_and_ints)
stream = input_stream.InputStream(s)
expected_pos = 0
self.assertEqual(expected_pos, stream.Position())
for s, expected_int in varints_and_ints:
self.assertEqual(expected_int, read_method(stream))
expected_pos += len(s)
self.assertEqual(expected_pos, stream.Position())
def testReadVarint32Success(self):
varints_and_ints = [
('\x00', 0),
('\x01', 1),
('\x7f', 127),
('\x80\x01', 128),
('\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01', -1),
('\xff\xff\xff\xff\x07', wire_format.INT32_MAX),
('\x80\x80\x80\x80\xf8\xff\xff\xff\xff\x01', wire_format.INT32_MIN),
]
self.ReadVarintSuccessTestHelper(varints_and_ints,
input_stream.InputStream.ReadVarint32)
def testReadVarint32Failure(self):
self.EnsureFailureOnEmptyStream(input_stream.InputStream.ReadVarint32)
# Try and fail to read INT32_MAX + 1.
s = '\x80\x80\x80\x80\x08'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarint32)
# Try and fail to read INT32_MIN - 1.
s = '\xfe\xff\xff\xff\xf7\xff\xff\xff\xff\x01'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarint32)
# Try and fail to read something that looks like
# a varint with more than 10 bytes.
s = '\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarint32)
def testReadVarUInt32Success(self):
varints_and_ints = [
('\x00', 0),
('\x01', 1),
('\x7f', 127),
('\x80\x01', 128),
('\xff\xff\xff\xff\x0f', wire_format.UINT32_MAX),
]
self.ReadVarintSuccessTestHelper(varints_and_ints,
input_stream.InputStream.ReadVarUInt32)
def testReadVarUInt32Failure(self):
self.EnsureFailureOnEmptyStream(input_stream.InputStream.ReadVarUInt32)
# Try and fail to read UINT32_MAX + 1
s = '\x80\x80\x80\x80\x10'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarUInt32)
# Try and fail to read something that looks like
# a varint with more than 10 bytes.
s = '\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarUInt32)
def testReadVarint64Success(self):
varints_and_ints = [
('\x00', 0),
('\x01', 1),
('\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01', -1),
('\x7f', 127),
('\x80\x01', 128),
('\xff\xff\xff\xff\xff\xff\xff\xff\x7f', wire_format.INT64_MAX),
('\x80\x80\x80\x80\x80\x80\x80\x80\x80\x01', wire_format.INT64_MIN),
]
self.ReadVarintSuccessTestHelper(varints_and_ints,
input_stream.InputStream.ReadVarint64)
def testReadVarint64Failure(self):
self.EnsureFailureOnEmptyStream(input_stream.InputStream.ReadVarint64)
# Try and fail to read something with the mythical 64th bit set.
s = '\x80\x80\x80\x80\x80\x80\x80\x80\x80\x02'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarint64)
# Try and fail to read something that looks like
# a varint with more than 10 bytes.
s = '\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarint64)
def testReadVarUInt64Success(self):
varints_and_ints = [
('\x00', 0),
('\x01', 1),
('\x7f', 127),
('\x80\x01', 128),
('\x80\x80\x80\x80\x80\x80\x80\x80\x80\x01', 1 << 63),
]
self.ReadVarintSuccessTestHelper(varints_and_ints,
input_stream.InputStream.ReadVarUInt64)
def testReadVarUInt64Failure(self):
self.EnsureFailureOnEmptyStream(input_stream.InputStream.ReadVarUInt64)
# Try and fail to read something with the mythical 64th bit set.
s = '\x80\x80\x80\x80\x80\x80\x80\x80\x80\x02'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarUInt64)
# Try and fail to read something that looks like
# a varint with more than 10 bytes.
s = '\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x00'
stream = input_stream.InputStream(s)
self.assertRaises(message.DecodeError, stream.ReadVarUInt64)
if __name__ == '__main__':
unittest.main()

View File

@ -0,0 +1,55 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Defines a listener interface for observing certain
state transitions on Message objects.
Also defines a null implementation of this interface.
"""
__author__ = 'robinson@google.com (Will Robinson)'
class MessageListener(object):
"""Listens for transitions to nonempty and for invalidations of cached
byte sizes. Meant to be registered via Message._SetListener().
"""
def TransitionToNonempty(self):
"""Called the *first* time that this message becomes nonempty.
Implementations are free (but not required) to call this method multiple
times after the message has become nonempty.
"""
raise NotImplementedError
def ByteSizeDirty(self):
"""Called *every* time the cached byte size value
for this object is invalidated (transitions from being
"clean" to "dirty").
"""
raise NotImplementedError
class NullMessageListener(object):
"""No-op MessageListener implementation."""
def TransitionToNonempty(self):
pass
def ByteSizeDirty(self):
pass

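A minimal sketch of a concrete listener for the interface above; it only records the notifications, and registration through Message._SetListener() is an internal detail not shown here (Python 2):

class CountingListener(object):
  """Records the notifications defined by MessageListener."""

  def __init__(self):
    self.nonempty_calls = 0
    self.dirty_calls = 0

  def TransitionToNonempty(self):
    self.nonempty_calls += 1

  def ByteSizeDirty(self):
    self.dirty_calls += 1

listener = CountingListener()
listener.TransitionToNonempty()
listener.ByteSizeDirty()
assert (listener.nonempty_calls, listener.dirty_calls) == (1, 1)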
View File

@ -0,0 +1,44 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: robinson@google.com (Will Robinson)
package google.protobuf.internal;
message TopLevelMessage {
optional ExtendedMessage submessage = 1;
}
message ExtendedMessage {
extensions 1 to max;
}
message ForeignMessage {
optional int32 foreign_message_int = 1;
}
extend ExtendedMessage {
optional int32 optional_int_extension = 1;
optional ForeignMessage optional_message_extension = 2;
repeated int32 repeated_int_extension = 3;
repeated ForeignMessage repeated_message_extension = 4;
}

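A sketch of how the extensions declared above are used from Python, assuming the compiler output is importable as more_extensions_pb2 under google.protobuf.internal (the module name is hypothetical here, since file names are not shown in this listing; Python 2):

# Module name below is hypothetical; it assumes the usual <filename>_pb2
# convention for this file's compiler output.
from google.protobuf.internal import more_extensions_pb2 as pb2

msg = pb2.TopLevelMessage()
ext = msg.submessage.Extensions
ext[pb2.optional_int_extension] = 23
ext[pb2.optional_message_extension].foreign_message_int = 45
ext[pb2.repeated_int_extension].append(67)
ext[pb2.repeated_message_extension].add().foreign_message_int = 89
assert ext[pb2.optional_int_extension] == 23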
View File

@ -0,0 +1,37 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: robinson@google.com (Will Robinson)
package google.protobuf.internal;
// A message where tag numbers are listed out of order, to allow us to test our
// canonicalization of serialized output, which should always be in tag order.
// We also mix in some extensions for extra fun.
message OutOfOrderFields {
optional sint32 optional_sint32 = 5;
extensions 4 to 4;
optional uint32 optional_uint32 = 3;
extensions 2 to 2;
optional int32 optional_int32 = 1;
};
extend OutOfOrderFields {
optional uint64 optional_uint64 = 4;
optional int64 optional_int64 = 2;
}

View File

@ -0,0 +1,112 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""OutputStream is the primitive interface for sticking bits on the wire.
All protocol buffer serialization can be expressed in terms of
the OutputStream primitives provided here.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import array
import struct
from google.protobuf import message
from google.protobuf.internal import wire_format
# Note that much of this code is ported from //net/proto/ProtocolBuffer, and
# that the interface is strongly inspired by CodedOutputStream from the C++
# proto2 implementation.
class OutputStream(object):
"""Contains all logic for writing bits, and ToString() to get the result."""
def __init__(self):
self._buffer = array.array('B')
def AppendRawBytes(self, raw_bytes):
"""Appends raw_bytes to our internal buffer."""
self._buffer.fromstring(raw_bytes)
def AppendLittleEndian32(self, unsigned_value):
"""Appends an unsigned 32-bit integer to the internal buffer,
in little-endian byte order.
"""
if not 0 <= unsigned_value <= wire_format.UINT32_MAX:
raise message.EncodeError(
'Unsigned 32-bit out of range: %d' % unsigned_value)
self._buffer.fromstring(struct.pack(
wire_format.FORMAT_UINT32_LITTLE_ENDIAN, unsigned_value))
def AppendLittleEndian64(self, unsigned_value):
"""Appends an unsigned 64-bit integer to the internal buffer,
in little-endian byte order.
"""
if not 0 <= unsigned_value <= wire_format.UINT64_MAX:
raise message.EncodeError(
'Unsigned 64-bit out of range: %d' % unsigned_value)
self._buffer.fromstring(struct.pack(
wire_format.FORMAT_UINT64_LITTLE_ENDIAN, unsigned_value))
def AppendVarint32(self, value):
"""Appends a signed 32-bit integer to the internal buffer,
encoded as a varint. (Note that a negative varint32 will
always require 10 bytes of space.)
"""
if not wire_format.INT32_MIN <= value <= wire_format.INT32_MAX:
raise message.EncodeError('Value out of range: %d' % value)
self.AppendVarint64(value)
def AppendVarUInt32(self, value):
"""Appends an unsigned 32-bit integer to the internal buffer,
encoded as a varint.
"""
if not 0 <= value <= wire_format.UINT32_MAX:
raise message.EncodeError('Value out of range: %d' % value)
self.AppendVarUInt64(value)
def AppendVarint64(self, value):
"""Appends a signed 64-bit integer to the internal buffer,
encoded as a varint.
"""
if not wire_format.INT64_MIN <= value <= wire_format.INT64_MAX:
raise message.EncodeError('Value out of range: %d' % value)
if value < 0:
value += (1 << 64)
self.AppendVarUInt64(value)
def AppendVarUInt64(self, unsigned_value):
"""Appends an unsigned 64-bit integer to the internal buffer,
encoded as a varint.
"""
if not 0 <= unsigned_value <= wire_format.UINT64_MAX:
raise message.EncodeError('Value out of range: %d' % unsigned_value)
while True:
bits = unsigned_value & 0x7f
unsigned_value >>= 7
if unsigned_value:
bits |= 0x80
self._buffer.append(bits)
if not unsigned_value:
break
def ToString(self):
"""Returns a string containing the bytes in our internal buffer."""
return self._buffer.tostring()

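A short usage sketch of OutputStream (Python 2): AppendVarUInt64() above emits seven payload bits per byte, low-order group first, setting the high bit on every byte except the last, and AppendVarint64() first maps negative values into unsigned 64-bit space:

from google.protobuf.internal import output_stream

stream = output_stream.OutputStream()
stream.AppendVarUInt32(1)    # One byte; high bit clear.
stream.AppendVarUInt32(300)  # 300 = 0b100101100, so two bytes: '\xac\x02'.
stream.AppendVarint32(-1)    # Negative values always take ten bytes.
assert stream.ToString() == ('\x01'
                             '\xac\x02'
                             '\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01')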
View File

@ -0,0 +1,162 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test for google.protobuf.internal.output_stream."""
__author__ = 'robinson@google.com (Will Robinson)'
import unittest
from google.protobuf import message
from google.protobuf.internal import output_stream
from google.protobuf.internal import wire_format
class OutputStreamTest(unittest.TestCase):
def setUp(self):
self.stream = output_stream.OutputStream()
def testAppendRawBytes(self):
# Empty string.
self.stream.AppendRawBytes('')
self.assertEqual('', self.stream.ToString())
# Nonempty string.
self.stream.AppendRawBytes('abc')
self.assertEqual('abc', self.stream.ToString())
# Ensure that we're actually appending.
self.stream.AppendRawBytes('def')
self.assertEqual('abcdef', self.stream.ToString())
def AppendNumericTestHelper(self, append_fn, values_and_strings):
"""For each (value, expected_string) pair in values_and_strings,
calls an OutputStream.Append*(value) method on an OutputStream and ensures
that the string written to that stream matches expected_string.
Args:
append_fn: Unbound OutputStream method that takes an integer or
long value as input.
values_and_strings: Iterable of (value, expected_string) pairs.
"""
for conversion in (int, long):
for value, string in values_and_strings:
stream = output_stream.OutputStream()
expected_string = ''
append_fn(stream, conversion(value))
expected_string += string
self.assertEqual(expected_string, stream.ToString())
def AppendOverflowTestHelper(self, append_fn, value):
"""Calls an OutputStream.Append*(value) method and asserts
that the method raises message.EncodeError.
Args:
append_fn: Unbound OutputStream method that takes an integer or
long value as input.
value: Value to pass to append_fn which should cause a
message.EncodeError.
"""
stream = output_stream.OutputStream()
self.assertRaises(message.EncodeError, append_fn, stream, value)
def testAppendLittleEndian32(self):
append_fn = output_stream.OutputStream.AppendLittleEndian32
values_and_expected_strings = [
(0, '\x00\x00\x00\x00'),
(1, '\x01\x00\x00\x00'),
((1 << 32) - 1, '\xff\xff\xff\xff'),
]
self.AppendNumericTestHelper(append_fn, values_and_expected_strings)
self.AppendOverflowTestHelper(append_fn, 1 << 32)
self.AppendOverflowTestHelper(append_fn, -1)
def testAppendLittleEndian64(self):
append_fn = output_stream.OutputStream.AppendLittleEndian64
values_and_expected_strings = [
(0, '\x00\x00\x00\x00\x00\x00\x00\x00'),
(1, '\x01\x00\x00\x00\x00\x00\x00\x00'),
((1 << 64) - 1, '\xff\xff\xff\xff\xff\xff\xff\xff'),
]
self.AppendNumericTestHelper(append_fn, values_and_expected_strings)
self.AppendOverflowTestHelper(append_fn, 1 << 64)
self.AppendOverflowTestHelper(append_fn, -1)
def testAppendVarint32(self):
append_fn = output_stream.OutputStream.AppendVarint32
values_and_expected_strings = [
(0, '\x00'),
(1, '\x01'),
(127, '\x7f'),
(128, '\x80\x01'),
(-1, '\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01'),
(wire_format.INT32_MAX, '\xff\xff\xff\xff\x07'),
(wire_format.INT32_MIN, '\x80\x80\x80\x80\xf8\xff\xff\xff\xff\x01'),
]
self.AppendNumericTestHelper(append_fn, values_and_expected_strings)
self.AppendOverflowTestHelper(append_fn, wire_format.INT32_MAX + 1)
self.AppendOverflowTestHelper(append_fn, wire_format.INT32_MIN - 1)
def testAppendVarUInt32(self):
append_fn = output_stream.OutputStream.AppendVarUInt32
values_and_expected_strings = [
(0, '\x00'),
(1, '\x01'),
(127, '\x7f'),
(128, '\x80\x01'),
(wire_format.UINT32_MAX, '\xff\xff\xff\xff\x0f'),
]
self.AppendNumericTestHelper(append_fn, values_and_expected_strings)
self.AppendOverflowTestHelper(append_fn, -1)
self.AppendOverflowTestHelper(append_fn, wire_format.UINT32_MAX + 1)
def testAppendVarint64(self):
append_fn = output_stream.OutputStream.AppendVarint64
values_and_expected_strings = [
(0, '\x00'),
(1, '\x01'),
(127, '\x7f'),
(128, '\x80\x01'),
(-1, '\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01'),
(wire_format.INT64_MAX, '\xff\xff\xff\xff\xff\xff\xff\xff\x7f'),
(wire_format.INT64_MIN, '\x80\x80\x80\x80\x80\x80\x80\x80\x80\x01'),
]
self.AppendNumericTestHelper(append_fn, values_and_expected_strings)
self.AppendOverflowTestHelper(append_fn, wire_format.INT64_MAX + 1)
self.AppendOverflowTestHelper(append_fn, wire_format.INT64_MIN - 1)
def testAppendVarUInt64(self):
append_fn = output_stream.OutputStream.AppendVarUInt64
values_and_expected_strings = [
(0, '\x00'),
(1, '\x01'),
(127, '\x7f'),
(128, '\x80\x01'),
(wire_format.UINT64_MAX, '\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01'),
]
self.AppendNumericTestHelper(append_fn, values_and_expected_strings)
self.AppendOverflowTestHelper(append_fn, -1)
self.AppendOverflowTestHelper(append_fn, wire_format.UINT64_MAX + 1)
if __name__ == '__main__':
unittest.main()

File diff suppressed because it is too large

View File

@ -0,0 +1,98 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for google.protobuf.internal.service_reflection."""
__author__ = 'petar@google.com (Petar Petrov)'
import unittest
from google.protobuf import unittest_pb2
from google.protobuf import service_reflection
from google.protobuf import service
class FooUnitTest(unittest.TestCase):
def testService(self):
class MockRpcChannel(service.RpcChannel):
def CallMethod(self, method, controller, request, response, callback):
self.method = method
self.controller = controller
self.request = request
callback(response)
class MockRpcController(service.RpcController):
def SetFailed(self, msg):
self.failure_message = msg
self.callback_response = None
class MyService(unittest_pb2.TestService):
pass
self.callback_response = None
def MyCallback(response):
self.callback_response = response
rpc_controller = MockRpcController()
channel = MockRpcChannel()
srvc = MyService()
srvc.Foo(rpc_controller, unittest_pb2.FooRequest(), MyCallback)
self.assertEqual('Method Foo not implemented.',
rpc_controller.failure_message)
self.assertEqual(None, self.callback_response)
rpc_controller.failure_message = None
service_descriptor = unittest_pb2.TestService.DESCRIPTOR
srvc.CallMethod(service_descriptor.methods[1], rpc_controller,
unittest_pb2.BarRequest(), MyCallback)
self.assertEqual('Method Bar not implemented.',
rpc_controller.failure_message)
self.assertEqual(None, self.callback_response)
def testServiceStub(self):
class MockRpcChannel(service.RpcChannel):
def CallMethod(self, method, controller, request,
response_class, callback):
self.method = method
self.controller = controller
self.request = request
callback(response_class())
self.callback_response = None
def MyCallback(response):
self.callback_response = response
channel = MockRpcChannel()
stub = unittest_pb2.TestService_Stub(channel)
rpc_controller = 'controller'
request = 'request'
# Invoke method.
stub.Foo(rpc_controller, request, MyCallback)
self.assertTrue(isinstance(self.callback_response,
unittest_pb2.FooResponse))
self.assertEqual(request, channel.request)
self.assertEqual(rpc_controller, channel.controller)
self.assertEqual(stub.GetDescriptor().methods[0], channel.method)
if __name__ == '__main__':
unittest.main()

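A sketch of a service implementation that fills in Foo rather than reporting 'not implemented', assuming the (rpc_controller, request, done) calling convention shown in the test above and an importable unittest_pb2 (Python 2):

from google.protobuf import unittest_pb2

class EchoService(unittest_pb2.TestService):
  def Foo(self, rpc_controller, request, done):
    # Hand a (here empty) response to the completion callback.
    done(unittest_pb2.FooResponse())

responses = []
service = EchoService()
service.Foo(None, unittest_pb2.FooRequest(), responses.append)
assert isinstance(responses[0], unittest_pb2.FooResponse)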
View File

@ -0,0 +1,354 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utilities for Python proto2 tests.
This is intentionally modeled on C++ code in
//net/proto2/internal/test_util.*.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import os.path
from google.protobuf import unittest_import_pb2
from google.protobuf import unittest_pb2
def SetAllFields(message):
"""Sets every field in the message to a unique value.
Args:
message: A unittest_pb2.TestAllTypes instance.
"""
#
# Optional fields.
#
message.optional_int32 = 101
message.optional_int64 = 102
message.optional_uint32 = 103
message.optional_uint64 = 104
message.optional_sint32 = 105
message.optional_sint64 = 106
message.optional_fixed32 = 107
message.optional_fixed64 = 108
message.optional_sfixed32 = 109
message.optional_sfixed64 = 110
message.optional_float = 111
message.optional_double = 112
message.optional_bool = True
# TODO(robinson): Firmly spec out and test how
# protos interact with unicode. One specific example:
# what happens if we change the literal below to
# u'115'? What *should* happen? Still some discussion
# to finish with Kenton about bytes vs. strings
# and forcing everything to be utf8. :-/
message.optional_string = '115'
message.optional_bytes = '116'
message.optionalgroup.a = 117
message.optional_nested_message.bb = 118
message.optional_foreign_message.c = 119
message.optional_import_message.d = 120
message.optional_nested_enum = unittest_pb2.TestAllTypes.BAZ
message.optional_foreign_enum = unittest_pb2.FOREIGN_BAZ
message.optional_import_enum = unittest_import_pb2.IMPORT_BAZ
message.optional_string_piece = '124'
message.optional_cord = '125'
#
# Repeated fields.
#
message.repeated_int32.append(201)
message.repeated_int64.append(202)
message.repeated_uint32.append(203)
message.repeated_uint64.append(204)
message.repeated_sint32.append(205)
message.repeated_sint64.append(206)
message.repeated_fixed32.append(207)
message.repeated_fixed64.append(208)
message.repeated_sfixed32.append(209)
message.repeated_sfixed64.append(210)
message.repeated_float.append(211)
message.repeated_double.append(212)
message.repeated_bool.append(True)
message.repeated_string.append('215')
message.repeated_bytes.append('216')
message.repeatedgroup.add().a = 217
message.repeated_nested_message.add().bb = 218
message.repeated_foreign_message.add().c = 219
message.repeated_import_message.add().d = 220
message.repeated_nested_enum.append(unittest_pb2.TestAllTypes.BAR)
message.repeated_foreign_enum.append(unittest_pb2.FOREIGN_BAR)
message.repeated_import_enum.append(unittest_import_pb2.IMPORT_BAR)
message.repeated_string_piece.append('224')
message.repeated_cord.append('225')
# Add a second one of each field.
message.repeated_int32.append(301)
message.repeated_int64.append(302)
message.repeated_uint32.append(303)
message.repeated_uint64.append(304)
message.repeated_sint32.append(305)
message.repeated_sint64.append(306)
message.repeated_fixed32.append(307)
message.repeated_fixed64.append(308)
message.repeated_sfixed32.append(309)
message.repeated_sfixed64.append(310)
message.repeated_float.append(311)
message.repeated_double.append(312)
message.repeated_bool.append(False)
message.repeated_string.append('315')
message.repeated_bytes.append('316')
message.repeatedgroup.add().a = 317
message.repeated_nested_message.add().bb = 318
message.repeated_foreign_message.add().c = 319
message.repeated_import_message.add().d = 320
message.repeated_nested_enum.append(unittest_pb2.TestAllTypes.BAZ)
message.repeated_foreign_enum.append(unittest_pb2.FOREIGN_BAZ)
message.repeated_import_enum.append(unittest_import_pb2.IMPORT_BAZ)
message.repeated_string_piece.append('324')
message.repeated_cord.append('325')
#
# Fields that have defaults.
#
message.default_int32 = 401
message.default_int64 = 402
message.default_uint32 = 403
message.default_uint64 = 404
message.default_sint32 = 405
message.default_sint64 = 406
message.default_fixed32 = 407
message.default_fixed64 = 408
message.default_sfixed32 = 409
message.default_sfixed64 = 410
message.default_float = 411
message.default_double = 412
message.default_bool = False
message.default_string = '415'
message.default_bytes = '416'
message.default_nested_enum = unittest_pb2.TestAllTypes.FOO
message.default_foreign_enum = unittest_pb2.FOREIGN_FOO
message.default_import_enum = unittest_import_pb2.IMPORT_FOO
message.default_string_piece = '424'
message.default_cord = '425'
def SetAllExtensions(message):
"""Sets every extension in the message to a unique value.
Args:
message: A unittest_pb2.TestAllExtensions instance.
"""
extensions = message.Extensions
pb2 = unittest_pb2
import_pb2 = unittest_import_pb2
#
# Optional fields.
#
extensions[pb2.optional_int32_extension] = 101
extensions[pb2.optional_int64_extension] = 102
extensions[pb2.optional_uint32_extension] = 103
extensions[pb2.optional_uint64_extension] = 104
extensions[pb2.optional_sint32_extension] = 105
extensions[pb2.optional_sint64_extension] = 106
extensions[pb2.optional_fixed32_extension] = 107
extensions[pb2.optional_fixed64_extension] = 108
extensions[pb2.optional_sfixed32_extension] = 109
extensions[pb2.optional_sfixed64_extension] = 110
extensions[pb2.optional_float_extension] = 111
extensions[pb2.optional_double_extension] = 112
extensions[pb2.optional_bool_extension] = True
extensions[pb2.optional_string_extension] = '115'
extensions[pb2.optional_bytes_extension] = '116'
extensions[pb2.optionalgroup_extension].a = 117
extensions[pb2.optional_nested_message_extension].bb = 118
extensions[pb2.optional_foreign_message_extension].c = 119
extensions[pb2.optional_import_message_extension].d = 120
extensions[pb2.optional_nested_enum_extension] = pb2.TestAllTypes.BAZ
extensions[pb2.optional_foreign_enum_extension] = pb2.FOREIGN_BAZ
extensions[pb2.optional_import_enum_extension] = import_pb2.IMPORT_BAZ
extensions[pb2.optional_string_piece_extension] = '124'
extensions[pb2.optional_cord_extension] = '125'
#
# Repeated fields.
#
extensions[pb2.repeated_int32_extension].append(201)
extensions[pb2.repeated_int64_extension].append(202)
extensions[pb2.repeated_uint32_extension].append(203)
extensions[pb2.repeated_uint64_extension].append(204)
extensions[pb2.repeated_sint32_extension].append(205)
extensions[pb2.repeated_sint64_extension].append(206)
extensions[pb2.repeated_fixed32_extension].append(207)
extensions[pb2.repeated_fixed64_extension].append(208)
extensions[pb2.repeated_sfixed32_extension].append(209)
extensions[pb2.repeated_sfixed64_extension].append(210)
extensions[pb2.repeated_float_extension].append(211)
extensions[pb2.repeated_double_extension].append(212)
extensions[pb2.repeated_bool_extension].append(True)
extensions[pb2.repeated_string_extension].append('215')
extensions[pb2.repeated_bytes_extension].append('216')
extensions[pb2.repeatedgroup_extension].add().a = 217
extensions[pb2.repeated_nested_message_extension].add().bb = 218
extensions[pb2.repeated_foreign_message_extension].add().c = 219
extensions[pb2.repeated_import_message_extension].add().d = 220
extensions[pb2.repeated_nested_enum_extension].append(pb2.TestAllTypes.BAR)
extensions[pb2.repeated_foreign_enum_extension].append(pb2.FOREIGN_BAR)
extensions[pb2.repeated_import_enum_extension].append(import_pb2.IMPORT_BAR)
extensions[pb2.repeated_string_piece_extension].append('224')
extensions[pb2.repeated_cord_extension].append('225')
# Append a second one of each field.
extensions[pb2.repeated_int32_extension].append(301)
extensions[pb2.repeated_int64_extension].append(302)
extensions[pb2.repeated_uint32_extension].append(303)
extensions[pb2.repeated_uint64_extension].append(304)
extensions[pb2.repeated_sint32_extension].append(305)
extensions[pb2.repeated_sint64_extension].append(306)
extensions[pb2.repeated_fixed32_extension].append(307)
extensions[pb2.repeated_fixed64_extension].append(308)
extensions[pb2.repeated_sfixed32_extension].append(309)
extensions[pb2.repeated_sfixed64_extension].append(310)
extensions[pb2.repeated_float_extension].append(311)
extensions[pb2.repeated_double_extension].append(312)
extensions[pb2.repeated_bool_extension].append(False)
extensions[pb2.repeated_string_extension].append('315')
extensions[pb2.repeated_bytes_extension].append('316')
extensions[pb2.repeatedgroup_extension].add().a = 317
extensions[pb2.repeated_nested_message_extension].add().bb = 318
extensions[pb2.repeated_foreign_message_extension].add().c = 319
extensions[pb2.repeated_import_message_extension].add().d = 320
extensions[pb2.repeated_nested_enum_extension].append(pb2.TestAllTypes.BAZ)
extensions[pb2.repeated_foreign_enum_extension].append(pb2.FOREIGN_BAZ)
extensions[pb2.repeated_import_enum_extension].append(import_pb2.IMPORT_BAZ)
extensions[pb2.repeated_string_piece_extension].append('324')
extensions[pb2.repeated_cord_extension].append('325')
#
# Fields with defaults.
#
extensions[pb2.default_int32_extension] = 401
extensions[pb2.default_int64_extension] = 402
extensions[pb2.default_uint32_extension] = 403
extensions[pb2.default_uint64_extension] = 404
extensions[pb2.default_sint32_extension] = 405
extensions[pb2.default_sint64_extension] = 406
extensions[pb2.default_fixed32_extension] = 407
extensions[pb2.default_fixed64_extension] = 408
extensions[pb2.default_sfixed32_extension] = 409
extensions[pb2.default_sfixed64_extension] = 410
extensions[pb2.default_float_extension] = 411
extensions[pb2.default_double_extension] = 412
extensions[pb2.default_bool_extension] = False
extensions[pb2.default_string_extension] = '415'
extensions[pb2.default_bytes_extension] = '416'
extensions[pb2.default_nested_enum_extension] = pb2.TestAllTypes.FOO
extensions[pb2.default_foreign_enum_extension] = pb2.FOREIGN_FOO
extensions[pb2.default_import_enum_extension] = import_pb2.IMPORT_FOO
extensions[pb2.default_string_piece_extension] = '424'
extensions[pb2.default_cord_extension] = '425'
def SetAllFieldsAndExtensions(message):
"""Sets every field and extension in the message to a unique value.
Args:
message: A unittest_pb2.TestAllExtensions message.
"""
message.my_int = 1
message.my_string = 'foo'
message.my_float = 1.0
message.Extensions[unittest_pb2.my_extension_int] = 23
message.Extensions[unittest_pb2.my_extension_string] = 'bar'
def ExpectAllFieldsAndExtensionsInOrder(serialized):
"""Ensures that serialized is the serialization we expect for a message
filled with SetAllFieldsAndExtensions(). (Specifically, ensures that the
serialization is in canonical, tag-number order).
"""
my_extension_int = unittest_pb2.my_extension_int
my_extension_string = unittest_pb2.my_extension_string
expected_strings = []
message = unittest_pb2.TestFieldOrderings()
message.my_int = 1 # Field 1.
expected_strings.append(message.SerializeToString())
message.Clear()
message.Extensions[my_extension_int] = 23 # Field 5.
expected_strings.append(message.SerializeToString())
message.Clear()
message.my_string = 'foo' # Field 11.
expected_strings.append(message.SerializeToString())
message.Clear()
message.Extensions[my_extension_string] = 'bar' # Field 50.
expected_strings.append(message.SerializeToString())
message.Clear()
message.my_float = 1.0 # Highest-numbered field; serialized last.
expected_strings.append(message.SerializeToString())
message.Clear()
expected = ''.join(expected_strings)
if expected != serialized:
raise ValueError('Expected %r, found %r' % (expected, serialized))
def GoldenFile(filename):
"""Finds the given golden file and returns a file object representing it."""
# Search up the directory tree looking for the C++ protobuf source code.
path = '.'
while os.path.exists(path):
if os.path.exists(os.path.join(path, 'src/google/protobuf')):
# Found it. Load the golden file from the testdata directory.
return file(os.path.join(path, 'src/google/protobuf/testdata', filename))
path = os.path.join(path, '..')
raise RuntimeError(
'Could not find golden files. This test must be run from within the '
'protobuf source package so that it can read test data files from the '
'C++ source tree.')

View File

@ -0,0 +1,97 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test for google.protobuf.text_format."""
__author__ = 'kenton@google.com (Kenton Varda)'
import difflib
import unittest
from google.protobuf import text_format
from google.protobuf.internal import test_util
from google.protobuf import unittest_pb2
from google.protobuf import unittest_mset_pb2
class TextFormatTest(unittest.TestCase):
def CompareToGoldenFile(self, text, golden_filename):
f = test_util.GoldenFile(golden_filename)
golden_lines = f.readlines()
f.close()
self.CompareToGoldenLines(text, golden_lines)
def CompareToGoldenText(self, text, golden_text):
self.CompareToGoldenLines(text, golden_text.splitlines(1))
def CompareToGoldenLines(self, text, golden_lines):
actual_lines = text.splitlines(1)
self.assertEqual(golden_lines, actual_lines,
"Text doesn't match golden. Diff:\n" +
''.join(difflib.ndiff(golden_lines, actual_lines)))
def testPrintAllFields(self):
message = unittest_pb2.TestAllTypes()
test_util.SetAllFields(message)
self.CompareToGoldenFile(text_format.MessageToString(message),
'text_format_unittest_data.txt')
def testPrintAllExtensions(self):
message = unittest_pb2.TestAllExtensions()
test_util.SetAllExtensions(message)
self.CompareToGoldenFile(text_format.MessageToString(message),
'text_format_unittest_extensions_data.txt')
def testPrintMessageSet(self):
message = unittest_mset_pb2.TestMessageSetContainer()
ext1 = unittest_mset_pb2.TestMessageSetExtension1.message_set_extension
ext2 = unittest_mset_pb2.TestMessageSetExtension2.message_set_extension
message.message_set.Extensions[ext1].i = 23
message.message_set.Extensions[ext2].str = 'foo'
self.CompareToGoldenText(text_format.MessageToString(message),
'message_set {\n'
' [protobuf_unittest.TestMessageSetExtension1] {\n'
' i: 23\n'
' }\n'
' [protobuf_unittest.TestMessageSetExtension2] {\n'
' str: \"foo\"\n'
' }\n'
'}\n')
def testPrintExotic(self):
message = unittest_pb2.TestAllTypes()
message.repeated_int64.append(-9223372036854775808)
message.repeated_uint64.append(18446744073709551615)
message.repeated_double.append(123.456)
message.repeated_double.append(1.23e22)
message.repeated_double.append(1.23e-18)
message.repeated_string.append('\000\001\a\b\f\n\r\t\v\\\'\"')
self.CompareToGoldenText(text_format.MessageToString(message),
'repeated_int64: -9223372036854775808\n'
'repeated_uint64: 18446744073709551615\n'
'repeated_double: 123.456\n'
'repeated_double: 1.23e+22\n'
'repeated_double: 1.23e-18\n'
'repeated_string: '
'\"\\000\\001\\007\\010\\014\\n\\r\\t\\013\\\\\\\'\\\"\"\n')
def testMessageToString(self):
message = unittest_pb2.ForeignMessage()
message.c = 123
self.assertEqual('c: 123\n', str(message))
if __name__ == '__main__':
unittest.main()

View File

@ -0,0 +1,222 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Constants and static functions to support protocol buffer wire format."""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from google.protobuf import message
TAG_TYPE_BITS = 3 # Number of bits used to hold type info in a proto tag.
_TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1 # 0x7
# These numbers identify the wire type of a protocol buffer value.
# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
# tag-and-type to store one of these WIRETYPE_* constants.
# These values must match WireType enum in //net/proto2/public/wire_format.h.
WIRETYPE_VARINT = 0
WIRETYPE_FIXED64 = 1
WIRETYPE_LENGTH_DELIMITED = 2
WIRETYPE_START_GROUP = 3
WIRETYPE_END_GROUP = 4
WIRETYPE_FIXED32 = 5
_WIRETYPE_MAX = 5
# Bounds for various integer types.
INT32_MAX = int((1 << 31) - 1)
INT32_MIN = int(-(1 << 31))
UINT32_MAX = (1 << 32) - 1
INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)
UINT64_MAX = (1 << 64) - 1
# "struct" format strings that will encode/decode the specified formats.
FORMAT_UINT32_LITTLE_ENDIAN = '<I'
FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
# We'll have to provide alternate implementations of AppendLittleEndian*() on
# any architectures where these checks fail.
if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
raise AssertionError('Format "I" is not a 32-bit number.')
if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
raise AssertionError('Format "Q" is not a 64-bit number.')
def PackTag(field_number, wire_type):
"""Returns an unsigned 32-bit integer that encodes the field number and
wire type information in standard protocol message wire format.
Args:
field_number: Expected to be an integer in the range [1, 1 << 29)
wire_type: One of the WIRETYPE_* constants.
"""
if not 0 <= wire_type <= _WIRETYPE_MAX:
raise message.EncodeError('Unknown wire type: %d' % wire_type)
return (field_number << TAG_TYPE_BITS) | wire_type
def UnpackTag(tag):
"""The inverse of PackTag(). Given an unsigned 32-bit number,
returns a (field_number, wire_type) tuple.
"""
return (tag >> TAG_TYPE_BITS), (tag & _TAG_TYPE_MASK)
def ZigZagEncode(value):
"""ZigZag Transform: Encodes signed integers so that they can be
effectively used with varint encoding. See wire_format.h for
more details.
"""
if value >= 0:
return value << 1
return ((value << 1) ^ (~0)) | 0x1
def ZigZagDecode(value):
"""Inverse of ZigZagEncode()."""
if not value & 0x1:
return value >> 1
return (value >> 1) ^ (~0)
# The *ByteSize() functions below return the number of bytes required to
# serialize "field number + type" information and then serialize the value.
def Int32ByteSize(field_number, int32):
return Int64ByteSize(field_number, int32)
def Int64ByteSize(field_number, int64):
# Have to convert to uint before calling UInt64ByteSize().
return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)
def UInt32ByteSize(field_number, uint32):
return UInt64ByteSize(field_number, uint32)
def UInt64ByteSize(field_number, uint64):
return _TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)
def SInt32ByteSize(field_number, int32):
return UInt32ByteSize(field_number, ZigZagEncode(int32))
def SInt64ByteSize(field_number, int64):
return UInt64ByteSize(field_number, ZigZagEncode(int64))
def Fixed32ByteSize(field_number, fixed32):
return _TagByteSize(field_number) + 4
def Fixed64ByteSize(field_number, fixed64):
return _TagByteSize(field_number) + 8
def SFixed32ByteSize(field_number, sfixed32):
return _TagByteSize(field_number) + 4
def SFixed64ByteSize(field_number, sfixed64):
return _TagByteSize(field_number) + 8
def FloatByteSize(field_number, flt):
return _TagByteSize(field_number) + 4
def DoubleByteSize(field_number, double):
return _TagByteSize(field_number) + 8
def BoolByteSize(field_number, b):
return _TagByteSize(field_number) + 1
def EnumByteSize(field_number, enum):
return UInt32ByteSize(field_number, enum)
def StringByteSize(field_number, string):
return (_TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(len(string))
+ len(string))
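# For example, StringByteSize(10, 'abc') == 5: one byte of tag, one byte of
# length, and three bytes of content (see wire_format_test.py).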
def BytesByteSize(field_number, b):
return StringByteSize(field_number, b)
def GroupByteSize(field_number, message):
return (2 * _TagByteSize(field_number) # START and END group.
+ message.ByteSize())
def MessageByteSize(field_number, message):
return (_TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(message.ByteSize())
+ message.ByteSize())
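# For example, for field number 1 and a message whose ByteSize() is 10,
# MessageByteSize() is 1 (tag) + 1 (length varint) + 10 == 12 bytes, and
# GroupByteSize() is also 1 (START_GROUP tag) + 10 + 1 (END_GROUP tag) == 12.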
def MessageSetItemByteSize(field_number, msg):
# First compute the sizes of the tags.
# There are two tags for the beginning and end of the group (field number 1),
# one tag with field number 2 (type_id), and one tag with field number 3
# (message).
total_size = (2 * _TagByteSize(1) + _TagByteSize(2) + _TagByteSize(3))
# Add the number of bytes for type_id.
total_size += _VarUInt64ByteSizeNoTag(field_number)
message_size = msg.ByteSize()
# The number of bytes for encoding the length of the message.
total_size += _VarUInt64ByteSizeNoTag(message_size)
# The size of the message.
total_size += message_size
return total_size
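# For example, with field_number == 1 and a 10-byte message the item takes
# 2 (group start/end tags) + 1 (type_id tag) + 1 (type_id varint)
# + 1 (message tag) + 1 (length varint) + 10 (message bytes) == 16 bytes.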
# Private helper functions for the *ByteSize() functions above.
def _TagByteSize(field_number):
"""Returns the bytes required to serialize a tag with this field number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
def _VarUInt64ByteSizeNoTag(uint64):
"""Returns the bytes required to serialize a single varint.
uint64 must be unsigned.
"""
if uint64 > UINT64_MAX:
raise message.EncodeError('Value out of range: %d' % uint64)
bytes = 1
while uint64 > 0x7f:
bytes += 1
uint64 >>= 7
return bytes
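# For example, _VarUInt64ByteSizeNoTag(0) == 1, _VarUInt64ByteSizeNoTag(127) == 1,
# _VarUInt64ByteSizeNoTag(128) == 2, and _VarUInt64ByteSizeNoTag(UINT64_MAX) == 10.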

232
python/google/protobuf/internal/wire_format_test.py
@@ -0,0 +1,232 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test for google.protobuf.internal.wire_format."""
__author__ = 'robinson@google.com (Will Robinson)'
import unittest
from google.protobuf import message
from google.protobuf.internal import wire_format
class WireFormatTest(unittest.TestCase):
def testPackTag(self):
field_number = 0xabc
tag_type = 2
self.assertEqual((field_number << 3) | tag_type,
wire_format.PackTag(field_number, tag_type))
PackTag = wire_format.PackTag
# Wire type too high.
self.assertRaises(message.EncodeError, PackTag, field_number, 6)
# Wire type too low.
self.assertRaises(message.EncodeError, PackTag, field_number, -1)
def testUnpackTag(self):
# Test field numbers that will require various varint sizes.
for expected_field_number in (1, 15, 16, 2047, 2048):
for expected_wire_type in range(6): # Highest-numbered wiretype is 5.
field_number, wire_type = wire_format.UnpackTag(
wire_format.PackTag(expected_field_number, expected_wire_type))
self.assertEqual(expected_field_number, field_number)
self.assertEqual(expected_wire_type, wire_type)
self.assertRaises(TypeError, wire_format.UnpackTag, None)
self.assertRaises(TypeError, wire_format.UnpackTag, 'abc')
self.assertRaises(TypeError, wire_format.UnpackTag, 0.0)
self.assertRaises(TypeError, wire_format.UnpackTag, object())
def testZigZagEncode(self):
Z = wire_format.ZigZagEncode
self.assertEqual(0, Z(0))
self.assertEqual(1, Z(-1))
self.assertEqual(2, Z(1))
self.assertEqual(3, Z(-2))
self.assertEqual(4, Z(2))
self.assertEqual(0xfffffffe, Z(0x7fffffff))
self.assertEqual(0xffffffff, Z(-0x80000000))
self.assertEqual(0xfffffffffffffffe, Z(0x7fffffffffffffff))
self.assertEqual(0xffffffffffffffff, Z(-0x8000000000000000))
self.assertRaises(TypeError, Z, None)
self.assertRaises(TypeError, Z, 'abcd')
self.assertRaises(TypeError, Z, 0.0)
self.assertRaises(TypeError, Z, object())
def testZigZagDecode(self):
Z = wire_format.ZigZagDecode
self.assertEqual(0, Z(0))
self.assertEqual(-1, Z(1))
self.assertEqual(1, Z(2))
self.assertEqual(-2, Z(3))
self.assertEqual(2, Z(4))
self.assertEqual(0x7fffffff, Z(0xfffffffe))
self.assertEqual(-0x80000000, Z(0xffffffff))
self.assertEqual(0x7fffffffffffffff, Z(0xfffffffffffffffe))
self.assertEqual(-0x8000000000000000, Z(0xffffffffffffffff))
self.assertRaises(TypeError, Z, None)
self.assertRaises(TypeError, Z, 'abcd')
self.assertRaises(TypeError, Z, 0.0)
self.assertRaises(TypeError, Z, object())
def NumericByteSizeTestHelper(self, byte_size_fn, value, expected_value_size):
# Use field numbers that cause various byte sizes for the tag information.
for field_number, tag_bytes in ((15, 1), (16, 2), (2047, 2), (2048, 3)):
expected_size = expected_value_size + tag_bytes
actual_size = byte_size_fn(field_number, value)
self.assertEqual(expected_size, actual_size,
'byte_size_fn: %s, field_number: %d, value: %r\n'
'Expected: %d, Actual: %d'% (
byte_size_fn, field_number, value, expected_size, actual_size))
def testByteSizeFunctions(self):
# Test all numeric *ByteSize() functions.
NUMERIC_ARGS = [
# Int32ByteSize().
[wire_format.Int32ByteSize, 0, 1],
[wire_format.Int32ByteSize, 127, 1],
[wire_format.Int32ByteSize, 128, 2],
[wire_format.Int32ByteSize, -1, 10],
# Int64ByteSize().
[wire_format.Int64ByteSize, 0, 1],
[wire_format.Int64ByteSize, 127, 1],
[wire_format.Int64ByteSize, 128, 2],
[wire_format.Int64ByteSize, -1, 10],
# UInt32ByteSize().
[wire_format.UInt32ByteSize, 0, 1],
[wire_format.UInt32ByteSize, 127, 1],
[wire_format.UInt32ByteSize, 128, 2],
[wire_format.UInt32ByteSize, wire_format.UINT32_MAX, 5],
# UInt64ByteSize().
[wire_format.UInt64ByteSize, 0, 1],
[wire_format.UInt64ByteSize, 127, 1],
[wire_format.UInt64ByteSize, 128, 2],
[wire_format.UInt64ByteSize, wire_format.UINT64_MAX, 10],
# SInt32ByteSize().
[wire_format.SInt32ByteSize, 0, 1],
[wire_format.SInt32ByteSize, -1, 1],
[wire_format.SInt32ByteSize, 1, 1],
[wire_format.SInt32ByteSize, -63, 1],
[wire_format.SInt32ByteSize, 63, 1],
[wire_format.SInt32ByteSize, -64, 1],
[wire_format.SInt32ByteSize, 64, 2],
# SInt64ByteSize().
[wire_format.SInt64ByteSize, 0, 1],
[wire_format.SInt64ByteSize, -1, 1],
[wire_format.SInt64ByteSize, 1, 1],
[wire_format.SInt64ByteSize, -63, 1],
[wire_format.SInt64ByteSize, 63, 1],
[wire_format.SInt64ByteSize, -64, 1],
[wire_format.SInt64ByteSize, 64, 2],
# Fixed32ByteSize().
[wire_format.Fixed32ByteSize, 0, 4],
[wire_format.Fixed32ByteSize, wire_format.UINT32_MAX, 4],
# Fixed64ByteSize().
[wire_format.Fixed64ByteSize, 0, 8],
[wire_format.Fixed64ByteSize, wire_format.UINT64_MAX, 8],
# SFixed32ByteSize().
[wire_format.SFixed32ByteSize, 0, 4],
[wire_format.SFixed32ByteSize, wire_format.INT32_MIN, 4],
[wire_format.SFixed32ByteSize, wire_format.INT32_MAX, 4],
# SFixed64ByteSize().
[wire_format.SFixed64ByteSize, 0, 8],
[wire_format.SFixed64ByteSize, wire_format.INT64_MIN, 8],
[wire_format.SFixed64ByteSize, wire_format.INT64_MAX, 8],
# FloatByteSize().
[wire_format.FloatByteSize, 0.0, 4],
[wire_format.FloatByteSize, 1000000000.0, 4],
[wire_format.FloatByteSize, -1000000000.0, 4],
# DoubleByteSize().
[wire_format.DoubleByteSize, 0.0, 8],
[wire_format.DoubleByteSize, 1000000000.0, 8],
[wire_format.DoubleByteSize, -1000000000.0, 8],
# BoolByteSize().
[wire_format.BoolByteSize, False, 1],
[wire_format.BoolByteSize, True, 1],
# EnumByteSize().
[wire_format.EnumByteSize, 0, 1],
[wire_format.EnumByteSize, 127, 1],
[wire_format.EnumByteSize, 128, 2],
[wire_format.EnumByteSize, wire_format.UINT32_MAX, 5],
]
for args in NUMERIC_ARGS:
self.NumericByteSizeTestHelper(*args)
# Test strings and bytes.
for byte_size_fn in (wire_format.StringByteSize, wire_format.BytesByteSize):
# 1 byte for tag, 1 byte for length, 3 bytes for contents.
self.assertEqual(5, byte_size_fn(10, 'abc'))
# 2 bytes for tag, 1 byte for length, 3 bytes for contents.
self.assertEqual(6, byte_size_fn(16, 'abc'))
# 2 bytes for tag, 2 bytes for length, 128 bytes for contents.
self.assertEqual(132, byte_size_fn(16, 'a' * 128))
class MockMessage(object):
def __init__(self, byte_size):
self.byte_size = byte_size
def ByteSize(self):
return self.byte_size
message_byte_size = 10
mock_message = MockMessage(byte_size=message_byte_size)
# Test groups.
# (2 * 1) bytes for begin and end tags, plus message_byte_size.
self.assertEqual(2 + message_byte_size,
wire_format.GroupByteSize(1, mock_message))
# (2 * 2) bytes for begin and end tags, plus message_byte_size.
self.assertEqual(4 + message_byte_size,
wire_format.GroupByteSize(16, mock_message))
# Test messages.
# 1 byte for tag, plus 1 byte for length, plus contents.
self.assertEqual(2 + mock_message.byte_size,
wire_format.MessageByteSize(1, mock_message))
# 2 bytes for tag, plus 1 byte for length, plus contents.
self.assertEqual(3 + mock_message.byte_size,
wire_format.MessageByteSize(16, mock_message))
# 2 bytes for tag, plus 2 bytes for length, plus contents.
mock_message.byte_size = 128
self.assertEqual(4 + mock_message.byte_size,
wire_format.MessageByteSize(16, mock_message))
# Test message set item byte size.
# 4 bytes for tags, plus 1 byte for length, plus 1 byte for type_id,
# plus contents.
mock_message.byte_size = 10
self.assertEqual(mock_message.byte_size + 6,
wire_format.MessageSetItemByteSize(1, mock_message))
# 4 bytes for tags, plus 2 bytes for length, plus 1 byte for type_id,
# plus contents.
mock_message.byte_size = 128
self.assertEqual(mock_message.byte_size + 7,
wire_format.MessageSetItemByteSize(1, mock_message))
# 4 bytes for tags, plus 2 bytes for length, plus 2 bytes for type_id,
# plus contents.
self.assertEqual(mock_message.byte_size + 8,
wire_format.MessageSetItemByteSize(128, mock_message))
# Too-long varint.
self.assertRaises(message.EncodeError,
wire_format.UInt64ByteSize, 1, 1 << 128)
if __name__ == '__main__':
unittest.main()

184
python/google/protobuf/message.py Executable file
@@ -0,0 +1,184 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO(robinson): We should just make these methods all "pure-virtual" and move
# all implementation out, into reflection.py for now.
"""Contains an abstract base class for protocol messages."""
__author__ = 'robinson@google.com (Will Robinson)'
from google.protobuf import text_format
class Error(Exception): pass
class DecodeError(Error): pass
class EncodeError(Error): pass
class Message(object):
"""Abstract base class for protocol messages.
Protocol message classes are almost always generated by the protocol
compiler. These generated types subclass Message and implement the methods
shown below.
TODO(robinson): Link to an HTML document here.
TODO(robinson): Document that instances of this class will also
have an Extensions attribute with __getitem__ and __setitem__.
Again, not sure how to best convey this.
TODO(robinson): Document that the class must also have a static
RegisterExtension(extension_field) method.
Not sure how to best express at this point.
"""
# TODO(robinson): Document these fields and methods.
__slots__ = []
DESCRIPTOR = None
def __eq__(self, other_msg):
raise NotImplementedError
def __ne__(self, other_msg):
# Can't just say self != other_msg, since that would infinitely recurse. :)
return not self == other_msg
def __str__(self):
return text_format.MessageToString(self)
def MergeFrom(self, other_msg):
raise NotImplementedError
def CopyFrom(self, other_msg):
raise NotImplementedError
def Clear(self):
raise NotImplementedError
def IsInitialized(self):
raise NotImplementedError
# TODO(robinson): MergeFromString() should probably return None and be
# implemented in terms of a helper that returns the # of bytes read. Our
# deserialization routines would use the helper when recursively
# deserializing, but the end user would almost always just want the no-return
# MergeFromString().
def MergeFromString(self, serialized):
"""Merges serialized protocol buffer data into this message.
When we find a field in |serialized| that is already present
in this message:
- If it's a "repeated" field, we append to the end of our list.
- Else, if it's a scalar, we overwrite our field.
- Else, (it's a nonrepeated composite), we recursively merge
into the existing composite.
TODO(robinson): Document handling of unknown fields.
Args:
serialized: Any object that allows us to call buffer(serialized)
to access a string of bytes using the buffer interface.
TODO(robinson): When we switch to a helper, this will return None.
Returns:
The number of bytes read from |serialized|.
For non-group messages, this will always be len(serialized),
but for messages which are actually groups, this will
generally be less than len(serialized), since we must
stop when we reach an END_GROUP tag. Note that if
we *do* stop because of an END_GROUP tag, the number
of bytes returned does not include the bytes
for the END_GROUP tag information.
"""
raise NotImplementedError
def ParseFromString(self, serialized):
"""Like MergeFromString(), except we clear the object first."""
self.Clear()
self.MergeFromString(serialized)
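# Illustrative usage (MyMessage is a hypothetical generated message class):
#   msg = MyMessage()
#   msg.ParseFromString(data)    # clears msg, then merges the bytes in |data|
#   msg.MergeFromString(data)    # merges again without clearing first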
def SerializeToString(self):
raise NotImplementedError
# TODO(robinson): Decide whether we like these better
# than auto-generated has_foo() and clear_foo() methods
# on the instances themselves. This way is less consistent
# with C++, but it makes reflection-type access easier and
# reduces the number of magically autogenerated things.
#
# TODO(robinson): Be sure to document (and test) exactly
# which field names are accepted here. Are we case-sensitive?
# What do we do with fields that share names with Python keywords
# like 'lambda' and 'yield'?
#
# nnorwitz says:
# """
# Typically (in python), an underscore is appended to names that are
# keywords. So they would become lambda_ or yield_.
# """
def ListFields(self):
"""Returns a list of (FieldDescriptor, value) tuples for all
fields in the message which are not empty. A singular field is non-empty
if HasField() would return true, and a repeated field is non-empty if
it contains at least one element. The fields are ordered by field
number."""
raise NotImplementedError
def HasField(self, field_name):
raise NotImplementedError
def ClearField(self, field_name):
raise NotImplementedError
def HasExtension(self, extension_handle):
raise NotImplementedError
def ClearExtension(self, extension_handle):
raise NotImplementedError
def ByteSize(self):
"""Returns the serialized size of this message.
Recursively calls ByteSize() on all contained messages.
"""
raise NotImplementedError
def _SetListener(self, message_listener):
"""Internal method used by the protocol message implementation.
Clients should not call this directly.
Sets a listener that this message will call on certain state transitions.
The purpose of this method is to register back-edges from children to
parents at runtime, for the purpose of setting "has" bits and
byte-size-dirty bits in the parent and ancestor objects whenever a child or
descendant object is modified.
If the client wants to disconnect this Message from the object tree, it
explicitly sets message_listener to None.
If message_listener is None, unregisters any existing listener. Otherwise,
message_listener must implement the MessageListener interface in
internal/message_listener.py, and we discard any listener registered
via a previous _SetListener() call.
"""
raise NotImplementedError

File diff suppressed because it is too large

194
python/google/protobuf/service.py Executable file
@@ -0,0 +1,194 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Declares the RPC service interfaces.
This module declares the abstract interfaces underlying proto2 RPC
services. These are intended to be independent of any particular RPC
implementation, so that proto2 services can be used on top of a variety
of implementations.
"""
__author__ = 'petar@google.com (Petar Petrov)'
class Service(object):
"""Abstract base interface for protocol-buffer-based RPC services.
Services themselves are abstract classes (implemented either by servers or as
stubs), but they subclass this base interface. The methods of this
interface can be used to call the methods of the service without knowing
its exact type at compile time (analogous to the Message interface).
"""
def GetDescriptor(self):
"""Retrieves this service's descriptor."""
raise NotImplementedError
def CallMethod(self, method_descriptor, rpc_controller,
request, done):
"""Calls a method of the service specified by method_descriptor.
Preconditions:
* method_descriptor.service == GetDescriptor()
* request is of the exact same class as returned by
GetRequestClass(method).
* After the call has started, the request must not be modified.
* "rpc_controller" is of the correct type for the RPC implementation being
used by this Service. For stubs, the "correct type" depends on the
RpcChannel which the stub is using.
Postconditions:
* "done" will be called when the method is complete. This may be
before CallMethod() returns or it may be at some point in the future.
"""
raise NotImplementedError
def GetRequestClass(self, method_descriptor):
"""Returns the class of the request message for the specified method.
CallMethod() requires that the request is of a particular subclass of
Message. GetRequestClass() gets the default instance of this required
type.
Example:
method = service.GetDescriptor().FindMethodByName("Foo")
request = stub.GetRequestClass(method)()
request.ParseFromString(input)
service.CallMethod(method, rpc_controller, request, callback)
"""
raise NotImplementedError
def GetResponseClass(self, method_descriptor):
"""Returns the class of the response message for the specified method.
This method isn't really needed, as the RpcChannel's CallMethod constructs
the response protocol message. It's provided anyway in case it is useful
for the caller to know the response type in advance.
"""
raise NotImplementedError
class RpcController(object):
"""Abstract interface for an RPC channel.
An RpcChannel represents a communication line to a service which can be used
to call that service's methods. The service may be running on another
machine. Normally, you should not use an RpcChannel directly, but instead
construct a stub {@link Service} wrapping it. Example:
Example:
RpcChannel channel = rpcImpl.Channel("remotehost.example.com:1234")
RpcController controller = rpcImpl.Controller()
MyService service = MyService_Stub(channel)
service.MyMethod(controller, request, callback)
"""
# Client-side methods below
def Reset(self):
"""Resets the RpcController to its initial state.
After the RpcController has been reset, it may be reused in
a new call. Must not be called while an RPC is in progress.
"""
raise NotImplementedError
def Failed(self):
"""Returns true if the call failed.
After a call has finished, returns true if the call failed. The possible
reasons for failure depend on the RPC implementation. Failed() must not
be called before a call has finished. If Failed() returns true, the
contents of the response message are undefined.
"""
raise NotImplementedError
def ErrorText(self):
"""If Failed is true, returns a human-readable description of the error."""
raise NotImplementedError
def StartCancel(self):
"""Initiate cancellation.
Advises the RPC system that the caller desires that the RPC call be
canceled. The RPC system may cancel it immediately, may wait awhile and
then cancel it, or may not even cancel the call at all. If the call is
canceled, the "done" callback will still be called and the RpcController
will indicate that the call failed at that time.
"""
raise NotImplementedError
# Server-side methods below
def SetFailed(self, reason):
"""Sets a failure reason.
Causes Failed() to return true on the client side. "reason" will be
incorporated into the message returned by ErrorText(). If you find
you need to return machine-readable information about failures, you
should incorporate it into your response protocol buffer and should
NOT call SetFailed().
"""
raise NotImplementedError
def IsCanceled(self):
"""Checks if the client cancelled the RPC.
If true, indicates that the client canceled the RPC, so the server may
as well give up on replying to it. The server should still call the
final "done" callback.
"""
raise NotImplementedError
def NotifyOnCancel(self, callback):
"""Sets a callback to invoke on cancel.
Asks that the given callback be called when the RPC is canceled. The
callback will always be called exactly once. If the RPC completes without
being canceled, the callback will be called after completion. If the RPC
has already been canceled when NotifyOnCancel() is called, the callback
will be called immediately.
NotifyOnCancel() must be called no more than once per request.
"""
raise NotImplementedError
class RpcChannel(object):
"""An RpcController mediates a single method call.
The primary purpose of the controller is to provide a way to manipulate
settings specific to the RPC implementation and to find out about RPC-level
errors. The methods provided by the RpcController interface are intended
to be a "least common denominator" set of features which we expect all
implementations to support. Specific implementations may provide more
advanced features (e.g. deadline propagation).
"""
def CallMethod(self, method_descriptor, rpc_controller,
request, response_class, done):
"""Calls the method identified by the descriptor.
Call the given method of the remote service. The signature of this
procedure looks the same as Service.CallMethod(), but the requirements
are less strict in one important way: the request object doesn't have to
be of any specific class as long as its descriptor is method.input_type.
"""
raise NotImplementedError

275
python/google/protobuf/service_reflection.py
@@ -0,0 +1,275 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains metaclasses used to create protocol service and service stub
classes from ServiceDescriptor objects at runtime.
The GeneratedServiceType and GeneratedServiceStubType metaclasses are used to
inject all useful functionality into the classes output by the protocol
compiler at compile-time.
"""
__author__ = 'petar@google.com (Petar Petrov)'
class GeneratedServiceType(type):
"""Metaclass for service classes created at runtime from ServiceDescriptors.
Implementations for all methods described in the Service class are added here
by this class.
The protocol compiler currently uses this metaclass to create protocol service
classes at runtime. Clients can also manually create their own classes at
runtime, as in this example:
mydescriptor = ServiceDescriptor(.....)
class MyProtoService(service.Service):
__metaclass__ = GeneratedServiceType
DESCRIPTOR = mydescriptor
myservice_instance = MyProtoService()
...
"""
_DESCRIPTOR_KEY = 'DESCRIPTOR'
def __init__(cls, name, bases, dictionary):
"""Creates a message service class.
Args:
name: Name of the class (ignored, but required by the metaclass
protocol).
bases: Base classes of the class being constructed.
dictionary: The class dictionary of the class being constructed.
dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
describing this protocol service type.
"""
# Don't do anything if this class doesn't have a descriptor. This happens
# when a service class is subclassed.
if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary:
return
descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY]
service_builder = _ServiceBuilder(descriptor)
service_builder.BuildService(cls)
class GeneratedServiceStubType(GeneratedServiceType):
"""Metaclass for service stubs created at runtime from ServiceDescriptors.
This class has similar responsibilities as GeneratedServiceType, except that
it creates the service stub classes.
"""
_DESCRIPTOR_KEY = 'DESCRIPTOR'
def __init__(cls, name, bases, dictionary):
"""Creates a message service stub class.
Args:
name: Name of the class (ignored here).
bases: Base classes of the class being constructed.
dictionary: The class dictionary of the class being constructed.
dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
describing this protocol service type.
"""
super(GeneratedServiceStubType, cls).__init__(name, bases, dictionary)
# Don't do anything if this class doesn't have a descriptor. This happens
# when a service stub is subclassed.
if GeneratedServiceStubType._DESCRIPTOR_KEY not in dictionary:
return
descriptor = dictionary[GeneratedServiceStubType._DESCRIPTOR_KEY]
service_stub_builder = _ServiceStubBuilder(descriptor)
service_stub_builder.BuildServiceStub(cls)
class _ServiceBuilder(object):
"""This class constructs a protocol service class using a service descriptor.
Given a service descriptor, this class constructs a class that represents
the specified service descriptor. One service builder instance constructs
exactly one service class. That means all instances of that class share the
same builder.
"""
def __init__(self, service_descriptor):
"""Initializes an instance of the service class builder.
Args:
service_descriptor: ServiceDescriptor to use when constructing the
service class.
"""
self.descriptor = service_descriptor
def BuildService(self, cls):
"""Constructs the service class.
Args:
cls: The class that will be constructed.
"""
# CallMethod needs to operate with an instance of the Service class. This
# internal wrapper function exists only to be able to pass the service
# instance to the method that does the real CallMethod work.
def _WrapCallMethod(srvc, method_descriptor,
rpc_controller, request, callback):
self._CallMethod(srvc, method_descriptor,
rpc_controller, request, callback)
self.cls = cls
cls.CallMethod = _WrapCallMethod
cls.GetDescriptor = self._GetDescriptor
cls.GetRequestClass = self._GetRequestClass
cls.GetResponseClass = self._GetResponseClass
for method in self.descriptor.methods:
setattr(cls, method.name, self._GenerateNonImplementedMethod(method))
def _GetDescriptor(self):
"""Retrieves the service descriptor.
Returns:
The descriptor of the service (of type ServiceDescriptor).
"""
return self.descriptor
def _CallMethod(self, srvc, method_descriptor,
rpc_controller, request, callback):
"""Calls the method described by a given method descriptor.
Args:
srvc: Instance of the service for which this method is called.
method_descriptor: Descriptor that represent the method to call.
rpc_controller: RPC controller to use for this method's execution.
request: Request protocol message.
callback: A callback to invoke after the method has completed.
"""
if method_descriptor.containing_service != self.descriptor:
raise RuntimeError(
'CallMethod() given method descriptor for wrong service type.')
method = getattr(self.cls, method_descriptor.name)
method(srvc, rpc_controller, request, callback)
def _GetRequestClass(self, method_descriptor):
"""Returns the class of the request protocol message.
Args:
method_descriptor: Descriptor of the method for which to return the
request protocol message class.
Returns:
A class that represents the input protocol message of the specified
method.
"""
if method_descriptor.containing_service != self.descriptor:
raise RuntimeError(
'GetRequestClass() given method descriptor for wrong service type.')
return method_descriptor.input_type._concrete_class
def _GetResponseClass(self, method_descriptor):
"""Returns the class of the response protocol message.
Args:
method_descriptor: Descriptor of the method for which to return the
response protocol message class.
Returns:
A class that represents the output protocol message of the specified
method.
"""
if method_descriptor.containing_service != self.descriptor:
raise RuntimeError(
'GetResponseClass() given method descriptor for wrong service type.')
return method_descriptor.output_type._concrete_class
def _GenerateNonImplementedMethod(self, method):
"""Generates and returns a method that can be set for a service methods.
Args:
method: Descriptor of the service method for which a method is to be
generated.
Returns:
A method that can be added to the service class.
"""
return lambda inst, rpc_controller, request, callback: (
self._NonImplementedMethod(method.name, rpc_controller, callback))
def _NonImplementedMethod(self, method_name, rpc_controller, callback):
"""The body of all methods in the generated service class.
Args:
method_name: Name of the method being executed.
rpc_controller: RPC controller used to execute this method.
callback: A callback which will be invoked when the method finishes.
"""
rpc_controller.SetFailed('Method %s not implemented.' % method_name)
callback(None)
class _ServiceStubBuilder(object):
"""Constructs a protocol service stub class using a service descriptor.
Given a service descriptor, this class constructs a suitable stub class.
A stub is just a type-safe wrapper around an RpcChannel which emulates a
local implementation of the service.
One service stub builder instance constructs exactly one class. It means all
instances of that class share the same service stub builder.
"""
def __init__(self, service_descriptor):
"""Initializes an instance of the service stub class builder.
Args:
service_descriptor: ServiceDescriptor to use when constructing the
stub class.
"""
self.descriptor = service_descriptor
def BuildServiceStub(self, cls):
"""Constructs the stub class.
Args:
cls: The class that will be constructed.
"""
def _ServiceStubInit(stub, rpc_channel):
stub.rpc_channel = rpc_channel
self.cls = cls
cls.__init__ = _ServiceStubInit
for method in self.descriptor.methods:
setattr(cls, method.name, self._GenerateStubMethod(method))
def _GenerateStubMethod(self, method):
return lambda inst, rpc_controller, request, callback: self._StubMethod(
inst, method, rpc_controller, request, callback)
def _StubMethod(self, stub, method_descriptor,
rpc_controller, request, callback):
"""The body of all service methods in the generated stub class.
Args:
stub: Stub instance.
method_descriptor: Descriptor of the invoked method.
rpc_controller: Rpc controller to execute the method.
request: Request protocol message.
callback: A callback to execute when the method finishes.
"""
stub.rpc_channel.CallMethod(
method_descriptor, rpc_controller, request,
method_descriptor.output_type._concrete_class, callback)
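# Illustrative usage (the channel, controller and callback objects come from
# the particular RPC implementation in use):
#   stub = MyService_Stub(channel)
#   stub.MyMethod(controller, request, callback)
# The generated stub method forwards to channel.CallMethod() with the method's
# descriptor and the response class.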

111
python/google/protobuf/text_format.py
@@ -0,0 +1,111 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.
# http://code.google.com/p/protobuf/
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains routines for printing protocol messages in text format."""
__author__ = 'kenton@google.com (Kenton Varda)'
import cStringIO
from google.protobuf import descriptor
__all__ = [ 'MessageToString', 'PrintMessage', 'PrintField', 'PrintFieldValue' ]
def MessageToString(message):
out = cStringIO.StringIO()
PrintMessage(message, out)
result = out.getvalue()
out.close()
return result
def PrintMessage(message, out, indent = 0):
for field, value in message.ListFields():
if field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
for element in value:
PrintField(field, element, out, indent)
else:
PrintField(field, value, out, indent)
def PrintField(field, value, out, indent = 0):
"""Print a single field name/value pair. For repeated fields, the value
should be a single element."""
out.write(' ' * indent)
if field.is_extension:
out.write('[')
if (field.containing_type.GetOptions().message_set_wire_format and
field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
field.message_type == field.extension_scope and
field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL):
out.write(field.message_type.full_name)
else:
out.write(field.full_name)
out.write(']')
elif field.type == descriptor.FieldDescriptor.TYPE_GROUP:
# For groups, use the capitalized name.
out.write(field.message_type.name)
else:
out.write(field.name)
if field.cpp_type != descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
# The colon is optional in this case, but our cross-language golden files
# don't include it.
out.write(': ')
PrintFieldValue(field, value, out, indent)
out.write('\n')
def PrintFieldValue(field, value, out, indent = 0):
"""Print a single field value (not including name). For repeated fields,
the value should be a single element."""
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
out.write(' {\n')
PrintMessage(value, out, indent + 2)
out.write(' ' * indent + '}')
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM:
out.write(field.enum_type.values_by_number[value].name)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING:
out.write('\"')
out.write(_CEscape(value))
out.write('\"')
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL:
if value:
out.write("true")
else:
out.write("false")
else:
out.write(str(value))
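# For example, a message with an int32 field "foo" set to 123 and a submessage
# field "bar" whose "baz" field is set to "hi" prints as:
#   foo: 123
#   bar {
#     baz: "hi"
#   }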
# text.encode('string_escape') does not seem to satisfy our needs as it
# encodes unprintable characters using two-digit hex escapes whereas our
# C++ unescaping function allows hex escapes to be any length. So,
# "\0011".encode('string_escape') ends up being "\\x011", which will be
# decoded in C++ as a single-character string with char code 0x11.
def _CEscape(text):
def escape(c):
o = ord(c)
if o == 10: return r"\n" # optional escape
if o == 13: return r"\r" # optional escape
if o == 9: return r"\t" # optional escape
if o == 39: return r"\'" # optional escape
if o == 34: return r'\"' # necessary escape
if o == 92: return r"\\" # necessary escape
if o >= 127 or o < 32: return "\\%03o" % o # necessary escapes
return c
return "".join([escape(c) for c in text])

1401
python/mox.py Executable file

File diff suppressed because it is too large

126
python/setup.py Executable file
@@ -0,0 +1,126 @@
#! /usr/bin/python
#
# See README for usage instructions.
# We must use setuptools, not distutils, because we need to use the
# namespace_packages option for the "google" package.
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup
from distutils.spawn import find_executable
import sys
import os
maintainer_email = "protobuf@googlegroups.com"
# Find the Protocol Compiler.
if os.path.exists("../src/protoc"):
protoc = "../src/protoc"
else:
protoc = find_executable("protoc")
def generate_proto(source):
"""Invokes the Protocol Compiler to generate a _pb2.py from the given
.proto file. Does nothing if the output already exists and is newer than
the input."""
output = source.replace(".proto", "_pb2.py").replace("../src/", "")
if not os.path.exists(source):
print "Can't find required file: " + source
sys.exit(-1)
if (not os.path.exists(output) or
(os.path.exists(source) and
os.path.getmtime(source) > os.path.getmtime(output))):
print "Generating %s..." % output
if protoc is None:
sys.stderr.write(
"protoc is not installed nor found in ../src. Please compile it "
"or install the binary package.\n")
sys.exit(-1)
protoc_command = protoc + " -I../src -I. --python_out=. " + source
if os.system(protoc_command) != 0:
sys.exit(-1)
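# For example, generate_proto("../src/google/protobuf/unittest.proto") runs
# roughly:
#   protoc -I../src -I. --python_out=. ../src/google/protobuf/unittest.proto
# which writes google/protobuf/unittest_pb2.py (skipped if that file already
# exists and is newer than the .proto).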
def MakeTestSuite():
generate_proto("../src/google/protobuf/unittest.proto")
generate_proto("../src/google/protobuf/unittest_import.proto")
generate_proto("../src/google/protobuf/unittest_mset.proto")
generate_proto("google/protobuf/internal/more_extensions.proto")
generate_proto("google/protobuf/internal/more_messages.proto")
import unittest
import google.protobuf.internal.generator_test as generator_test
import google.protobuf.internal.decoder_test as decoder_test
import google.protobuf.internal.descriptor_test as descriptor_test
import google.protobuf.internal.encoder_test as encoder_test
import google.protobuf.internal.input_stream_test as input_stream_test
import google.protobuf.internal.output_stream_test as output_stream_test
import google.protobuf.internal.reflection_test as reflection_test
import google.protobuf.internal.service_reflection_test \
as service_reflection_test
import google.protobuf.internal.text_format_test as text_format_test
import google.protobuf.internal.wire_format_test as wire_format_test
loader = unittest.defaultTestLoader
suite = unittest.TestSuite()
for test in [ generator_test,
decoder_test,
descriptor_test,
encoder_test,
input_stream_test,
output_stream_test,
reflection_test,
service_reflection_test,
text_format_test,
wire_format_test ]:
suite.addTest(loader.loadTestsFromModule(test))
return suite
if __name__ == '__main__':
# TODO(kenton): Integrate this into setuptools somehow?
if len(sys.argv) >= 2 and sys.argv[1] == "clean":
# Delete generated _pb2.py files and .pyc files in the code tree.
for (dirpath, dirnames, filenames) in os.walk("."):
for filename in filenames:
filepath = os.path.join(dirpath, filename)
if filepath.endswith("_pb2.py") or filepath.endswith(".pyc"):
os.remove(filepath)
else:
# Generate necessary _pb2.py files if they don't exist or are out of date.
# TODO(kenton): Maybe we should hook this into a distutils command?
generate_proto("../src/google/protobuf/descriptor.proto")
setup(name = 'protobuf',
version = '2.0.1-SNAPSHOT',
packages = [ 'google' ],
namespace_packages = [ 'google' ],
test_suite = 'setup.MakeTestSuite',
# Must list modules explicitly so that we don't install tests.
py_modules = [
'google.protobuf.internal.decoder',
'google.protobuf.internal.encoder',
'google.protobuf.internal.input_stream',
'google.protobuf.internal.message_listener',
'google.protobuf.internal.output_stream',
'google.protobuf.internal.wire_format',
'google.protobuf.descriptor',
'google.protobuf.descriptor_pb2',
'google.protobuf.message',
'google.protobuf.reflection',
'google.protobuf.service',
'google.protobuf.service_reflection',
'google.protobuf.text_format'],
url = 'http://code.google.com/p/protobuf/',
maintainer = maintainer_email,
maintainer_email = 'protobuf@googlegroups.com',
license = 'Apache License, Version 2.0',
description = 'Protocol Buffers',
long_description =
"Protocol Buffers are Google's data interchange format.",
)

140
python/stubout.py Executable file
@@ -0,0 +1,140 @@
#!/usr/bin/python2.4
#
# Copyright 2008 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is used for testing. The original is at:
# http://code.google.com/p/pymox/
import inspect
class StubOutForTesting:
"""Sample Usage:
You want os.path.exists() to always return true during testing.
stubs = StubOutForTesting()
stubs.Set(os.path, 'exists', lambda x: 1)
...
stubs.UnsetAll()
The above changes os.path.exists into a lambda that returns 1. Once
the ... part of the code finishes, the UnsetAll() looks up the old value
of os.path.exists and restores it.
"""
def __init__(self):
self.cache = []
self.stubs = []
def __del__(self):
self.SmartUnsetAll()
self.UnsetAll()
def SmartSet(self, obj, attr_name, new_attr):
"""Replace obj.attr_name with new_attr. This method is smart and works
at the module, class, and instance level while preserving proper
inheritance. It will not stub out C types however unless that has been
explicitly allowed by the type.
This method supports the case where attr_name is a staticmethod or a
classmethod of obj.
Notes:
- If obj is an instance, then it is its class that will actually be
stubbed. Note that the method Set() does not do that: if obj is
an instance, it (and not its class) will be stubbed.
- The stubbing is using the builtin getattr and setattr. So, the __get__
and __set__ will be called when stubbing (TODO: A better idea would
probably be to manipulate obj.__dict__ instead of getattr() and
setattr()).
Raises AttributeError if the attribute cannot be found.
"""
if (inspect.ismodule(obj) or
(not inspect.isclass(obj) and obj.__dict__.has_key(attr_name))):
orig_obj = obj
orig_attr = getattr(obj, attr_name)
else:
if not inspect.isclass(obj):
mro = list(inspect.getmro(obj.__class__))
else:
mro = list(inspect.getmro(obj))
mro.reverse()
orig_attr = None
for cls in mro:
try:
orig_obj = cls
orig_attr = getattr(obj, attr_name)
except AttributeError:
continue
if orig_attr is None:
raise AttributeError("Attribute not found.")
# Calling getattr() on a staticmethod transforms it to a 'normal' function.
# We need to ensure that we put it back as a staticmethod.
old_attribute = obj.__dict__.get(attr_name)
if old_attribute is not None and isinstance(old_attribute, staticmethod):
orig_attr = staticmethod(orig_attr)
self.stubs.append((orig_obj, attr_name, orig_attr))
setattr(orig_obj, attr_name, new_attr)
def SmartUnsetAll(self):
"""Reverses all the SmartSet() calls, restoring things to their original
definition. It's okay to call SmartUnsetAll() repeatedly, as later calls
have no effect if no SmartSet() calls have been made.
"""
self.stubs.reverse()
for args in self.stubs:
setattr(*args)
self.stubs = []
def Set(self, parent, child_name, new_child):
"""Replace child_name's old definition with new_child, in the context
of the given parent. The parent could be a module when the child is a
function at module scope. Or the parent could be a class when a class'
method is being replaced. The named child is set to new_child, while
the prior definition is saved away for later, when UnsetAll() is called.
This method supports the case where child_name is a staticmethod or a
classmethod of parent.
"""
old_child = getattr(parent, child_name)
old_attribute = parent.__dict__.get(child_name)
if old_attribute is not None and isinstance(old_attribute, staticmethod):
old_child = staticmethod(old_child)
self.cache.append((parent, old_child, child_name))
setattr(parent, child_name, new_child)
def UnsetAll(self):
"""Reverses all the Set() calls, restoring things to their original
definition. It's okay to call UnsetAll() repeatedly, as later calls have
no effect if no Set() calls have been made.
"""
# Undo calls to Set() in reverse order, in case Set() was called on the
# same arguments repeatedly (want the original call to be last one undone)
self.cache.reverse()
for (parent, old_child, child_name) in self.cache:
setattr(parent, child_name, old_child)
self.cache = []

255
src/Makefile.am Normal file
@@ -0,0 +1,255 @@
## Process this file with automake to produce Makefile.in
if GCC
# These are good warnings to turn on by default
AM_CXXFLAGS = $(PTHREAD_CFLAGS) -Wall -Wwrite-strings -Woverloaded-virtual -Wno-sign-compare
else
AM_CXXFLAGS = $(PTHREAD_CFLAGS)
endif
AM_LDFLAGS = $(PTHREAD_CFLAGS)
# If I say "dist_include_DATA", automake complains that $(includedir) is not
# a "legitimate" directory for DATA. Screw you, automake.
protodir = $(includedir)
nobase_dist_proto_DATA = google/protobuf/descriptor.proto
# Not sure why these don't get cleaned automatically.
clean-local:
rm -f *.loT
CLEANFILES = $(protoc_outputs) unittest_proto_middleman
MAINTAINERCLEANFILES = \
Makefile.in
nobase_include_HEADERS = \
google/protobuf/stubs/common.h \
google/protobuf/descriptor.h \
google/protobuf/descriptor.pb.h \
google/protobuf/descriptor_database.h \
google/protobuf/dynamic_message.h \
google/protobuf/extension_set.h \
google/protobuf/generated_message_reflection.h \
google/protobuf/message.h \
google/protobuf/reflection_ops.h \
google/protobuf/repeated_field.h \
google/protobuf/service.h \
google/protobuf/text_format.h \
google/protobuf/unknown_field_set.h \
google/protobuf/wire_format.h \
google/protobuf/wire_format_inl.h \
google/protobuf/io/coded_stream.h \
google/protobuf/io/printer.h \
google/protobuf/io/tokenizer.h \
google/protobuf/io/zero_copy_stream.h \
google/protobuf/io/zero_copy_stream_impl.h \
google/protobuf/compiler/code_generator.h \
google/protobuf/compiler/command_line_interface.h \
google/protobuf/compiler/importer.h \
google/protobuf/compiler/parser.h \
google/protobuf/compiler/cpp/cpp_generator.h \
google/protobuf/compiler/java/java_generator.h \
google/protobuf/compiler/python/python_generator.h
lib_LTLIBRARIES = libprotobuf.la libprotoc.la
libprotobuf_la_LIBADD = $(PTHREAD_LIBS)
libprotobuf_la_LDFLAGS = -version-info 0:0:0
libprotobuf_la_SOURCES = \
google/protobuf/stubs/common.cc \
google/protobuf/stubs/hash.cc \
google/protobuf/stubs/hash.h \
google/protobuf/stubs/map-util.cc \
google/protobuf/stubs/map-util.h \
google/protobuf/stubs/stl_util-inl.cc \
google/protobuf/stubs/stl_util-inl.h \
google/protobuf/stubs/substitute.cc \
google/protobuf/stubs/substitute.h \
google/protobuf/stubs/strutil.cc \
google/protobuf/stubs/strutil.h \
google/protobuf/descriptor.cc \
google/protobuf/descriptor.pb.cc \
google/protobuf/descriptor_database.cc \
google/protobuf/dynamic_message.cc \
google/protobuf/extension_set.cc \
google/protobuf/generated_message_reflection.cc \
google/protobuf/message.cc \
google/protobuf/reflection_ops.cc \
google/protobuf/repeated_field.cc \
google/protobuf/service.cc \
google/protobuf/text_format.cc \
google/protobuf/unknown_field_set.cc \
google/protobuf/wire_format.cc \
google/protobuf/io/coded_stream.cc \
google/protobuf/io/printer.cc \
google/protobuf/io/tokenizer.cc \
google/protobuf/io/zero_copy_stream.cc \
google/protobuf/io/zero_copy_stream_impl.cc \
google/protobuf/compiler/importer.cc \
google/protobuf/compiler/parser.cc
libprotoc_la_LIBADD = $(PTHREAD_LIBS) libprotobuf.la
libprotoc_la_LDFLAGS = -version-info 0:0:0
libprotoc_la_SOURCES = \
google/protobuf/compiler/code_generator.cc \
google/protobuf/compiler/command_line_interface.cc \
google/protobuf/compiler/cpp/cpp_enum.cc \
google/protobuf/compiler/cpp/cpp_enum.h \
google/protobuf/compiler/cpp/cpp_enum_field.cc \
google/protobuf/compiler/cpp/cpp_enum_field.h \
google/protobuf/compiler/cpp/cpp_extension.cc \
google/protobuf/compiler/cpp/cpp_extension.h \
google/protobuf/compiler/cpp/cpp_field.cc \
google/protobuf/compiler/cpp/cpp_field.h \
google/protobuf/compiler/cpp/cpp_file.cc \
google/protobuf/compiler/cpp/cpp_file.h \
google/protobuf/compiler/cpp/cpp_generator.cc \
google/protobuf/compiler/cpp/cpp_helpers.cc \
google/protobuf/compiler/cpp/cpp_helpers.h \
google/protobuf/compiler/cpp/cpp_message.cc \
google/protobuf/compiler/cpp/cpp_message.h \
google/protobuf/compiler/cpp/cpp_message_field.cc \
google/protobuf/compiler/cpp/cpp_message_field.h \
google/protobuf/compiler/cpp/cpp_primitive_field.cc \
google/protobuf/compiler/cpp/cpp_primitive_field.h \
google/protobuf/compiler/cpp/cpp_service.cc \
google/protobuf/compiler/cpp/cpp_service.h \
google/protobuf/compiler/cpp/cpp_string_field.cc \
google/protobuf/compiler/cpp/cpp_string_field.h \
google/protobuf/compiler/java/java_enum.cc \
google/protobuf/compiler/java/java_enum.h \
google/protobuf/compiler/java/java_enum_field.cc \
google/protobuf/compiler/java/java_enum_field.h \
google/protobuf/compiler/java/java_extension.cc \
google/protobuf/compiler/java/java_extension.h \
google/protobuf/compiler/java/java_field.cc \
google/protobuf/compiler/java/java_field.h \
google/protobuf/compiler/java/java_file.cc \
google/protobuf/compiler/java/java_file.h \
google/protobuf/compiler/java/java_generator.cc \
google/protobuf/compiler/java/java_helpers.cc \
google/protobuf/compiler/java/java_helpers.h \
google/protobuf/compiler/java/java_message.cc \
google/protobuf/compiler/java/java_message.h \
google/protobuf/compiler/java/java_message_field.cc \
google/protobuf/compiler/java/java_message_field.h \
google/protobuf/compiler/java/java_primitive_field.cc \
google/protobuf/compiler/java/java_primitive_field.h \
google/protobuf/compiler/java/java_service.cc \
google/protobuf/compiler/java/java_service.h \
google/protobuf/compiler/python/python_generator.cc
bin_PROGRAMS = protoc
protoc_LDADD = $(PTHREAD_LIBS) libprotobuf.la libprotoc.la
protoc_SOURCES = google/protobuf/compiler/main.cc
# Tests ==============================================================
protoc_inputs = \
google/protobuf/unittest.proto \
google/protobuf/unittest_import.proto \
google/protobuf/unittest_mset.proto \
google/protobuf/unittest_optimize_for.proto \
google/protobuf/unittest_embed_optimize_for.proto \
google/protobuf/compiler/cpp/cpp_test_bad_identifiers.proto
EXTRA_DIST = \
$(protoc_inputs) \
solaris/libstdc++.la \
google/protobuf/testdata/golden_message \
google/protobuf/testdata/text_format_unittest_data.txt \
google/protobuf/testdata/text_format_unittest_extensions_data.txt \
google/protobuf/package_info.h \
google/protobuf/io/package_info.h \
google/protobuf/compiler/package_info.h \
gtest/CHANGES \
gtest/CONTRIBUTORS \
gtest/COPYING \
gtest/README \
gtest/gen_gtest_pred_impl.py
protoc_outputs = \
google/protobuf/unittest.pb.cc \
google/protobuf/unittest.pb.h \
google/protobuf/unittest_import.pb.cc \
google/protobuf/unittest_import.pb.h \
google/protobuf/unittest_mset.pb.cc \
google/protobuf/unittest_mset.pb.h \
google/protobuf/unittest_optimize_for.pb.cc \
google/protobuf/unittest_optimize_for.pb.h \
google/protobuf/unittest_embed_optimize_for.pb.cc \
google/protobuf/unittest_embed_optimize_for.pb.h \
google/protobuf/compiler/cpp/cpp_test_bad_identifiers.pb.cc \
google/protobuf/compiler/cpp/cpp_test_bad_identifiers.pb.h
BUILT_SOURCES = $(protoc_outputs)
# This rule is a little weird. The first prereq is the protoc executable
# and the rest are its inputs. Therefore, $^ -- which expands to the
# list of prereqs -- is actually a valid command. We have to place "./" in
# front of it in case protoc is in the current directory. protoc allows
# flags to appear after input file names, so we happily stick the flags on
# the end.
#
# For reference, if we didn't have to worry about VPATH (i.e., building from
# a directory other than the package root), we could have just written this:
# ./protoc$(EXEEXT) -I$(srcdir) --cpp_out=. $(protoc_inputs)
unittest_proto_middleman: protoc$(EXEEXT) $(protoc_inputs)
./$^ -I$(srcdir) --cpp_out=.
touch unittest_proto_middleman
$(protoc_outputs): unittest_proto_middleman
noinst_PROGRAMS = protobuf-test
protobuf_test_LDADD = $(PTHREAD_LIBS) libprotobuf.la libprotoc.la
protobuf_test_SOURCES = \
google/protobuf/stubs/common_unittest.cc \
google/protobuf/stubs/strutil_unittest.cc \
google/protobuf/descriptor_database_unittest.cc \
google/protobuf/descriptor_unittest.cc \
google/protobuf/dynamic_message_unittest.cc \
google/protobuf/extension_set_unittest.cc \
google/protobuf/generated_message_reflection_unittest.cc \
google/protobuf/message_unittest.cc \
google/protobuf/reflection_ops_unittest.cc \
google/protobuf/repeated_field_unittest.cc \
google/protobuf/text_format_unittest.cc \
google/protobuf/unknown_field_set_unittest.cc \
google/protobuf/wire_format_unittest.cc \
google/protobuf/io/coded_stream_unittest.cc \
google/protobuf/io/printer_unittest.cc \
google/protobuf/io/tokenizer_unittest.cc \
google/protobuf/io/zero_copy_stream_unittest.cc \
google/protobuf/compiler/command_line_interface_unittest.cc \
google/protobuf/compiler/importer_unittest.cc \
google/protobuf/compiler/parser_unittest.cc \
google/protobuf/compiler/cpp/cpp_bootstrap_unittest.cc \
google/protobuf/compiler/cpp/cpp_unittest.cc \
google/protobuf/test_util.cc \
google/protobuf/test_util.h \
google/protobuf/testing/googletest.cc \
google/protobuf/testing/googletest.h \
google/protobuf/testing/file.cc \
google/protobuf/testing/file.h \
gtest/gtest.cc \
gtest/gtest.h \
gtest/gtest-death-test.cc \
gtest/gtest-death-test.h \
gtest/gtest-filepath.cc \
gtest/gtest-internal-inl.h \
gtest/gtest-message.h \
gtest/gtest-port.cc \
gtest/gtest-spi.h \
gtest/gtest_main.cc \
gtest/gtest_pred_impl.h \
gtest/gtest_prod.h \
gtest/internal/gtest-death-test-internal.h \
gtest/internal/gtest-filepath.h \
gtest/internal/gtest-internal.h \
gtest/internal/gtest-port.h \
gtest/internal/gtest-string.h
nodist_protobuf_test_SOURCES = $(protoc_outputs)
TESTS = protobuf-test

32
src/google/protobuf/compiler/code_generator.cc
@@ -0,0 +1,32 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
#include <google/protobuf/compiler/code_generator.h>
namespace google {
namespace protobuf {
namespace compiler {
CodeGenerator::~CodeGenerator() {}
OutputDirectory::~OutputDirectory() {}
} // namespace compiler
} // namespace protobuf
} // namespace google

98
src/google/protobuf/compiler/code_generator.h
@@ -0,0 +1,98 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
//
// Defines the abstract interface implemented by each of the language-specific
// code generators.
#ifndef GOOGLE_PROTOBUF_COMPILER_CODE_GENERATOR_H__
#define GOOGLE_PROTOBUF_COMPILER_CODE_GENERATOR_H__
#include <google/protobuf/stubs/common.h>
#include <string>
namespace google {
namespace protobuf {
namespace io { class ZeroCopyOutputStream; }
class FileDescriptor;
namespace compiler {
// Defined in this file.
class CodeGenerator;
class OutputDirectory;
// The abstract interface to a class which generates code implementing a
// particular proto file in a particular language. A number of these may
// be registered with CommandLineInterface to support various languages.
class LIBPROTOC_EXPORT CodeGenerator {
public:
inline CodeGenerator() {}
virtual ~CodeGenerator();
// Generates code for the given proto file, generating one or more files in
// the given output directory.
//
// A parameter to be passed to the generator can be specified on the
// command line. This is intended to be used by Java and similar languages
// to specify which specific class from the proto file is to be generated,
// though it could have other uses as well. It is empty if no parameter was
// given.
//
// Returns true if successful. Otherwise, sets *error to a description of
// the problem (e.g. "invalid parameter") and returns false.
virtual bool Generate(const FileDescriptor* file,
const string& parameter,
OutputDirectory* output_directory,
string* error) const = 0;
private:
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(CodeGenerator);
};
// CodeGenerators generate one or more files in a given directory. This
// abstract interface represents the directory to which the CodeGenerator is
// to write.
class LIBPROTOC_EXPORT OutputDirectory {
public:
inline OutputDirectory() {}
virtual ~OutputDirectory();
// Opens the given file, truncating it if it exists, and returns a
// ZeroCopyOutputStream that writes to the file. The caller takes ownership
// of the returned object. This method never fails (a dummy stream will be
// returned instead).
//
// The filename given should be relative to the root of the source tree.
// E.g. the C++ generator, when generating code for "foo/bar.proto", will
// generate the files "foo/bar.pb2.h" and "foo/bar.pb2.cc"; note that
// "foo/" is included in these filenames. The filename is not allowed to
// contain "." or ".." components.
virtual io::ZeroCopyOutputStream* Open(const string& filename) = 0;
private:
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(OutputDirectory);
};
} // namespace compiler
} // namespace protobuf
} // namespace google
#endif // GOOGLE_PROTOBUF_COMPILER_CODE_GENERATOR_H__
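The interface above is typically used by subclassing CodeGenerator, opening output files through the supplied OutputDirectory, and writing into the returned streams. The sketch below is a minimal illustration of that pattern, modeled on the mock generator used later in the tests; FooGenerator and the ".foo.txt" suffix are hypothetical names and are not part of this commit.
#include <map>
#include <string>
#include <google/protobuf/compiler/code_generator.h>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/io/printer.h>
#include <google/protobuf/io/zero_copy_stream.h>
namespace example {
// Hypothetical generator used only for illustration.
class FooGenerator : public google::protobuf::compiler::CodeGenerator {
 public:
  bool Generate(const google::protobuf::FileDescriptor* file,
                const std::string& parameter,
                google::protobuf::compiler::OutputDirectory* output_directory,
                std::string* error) const {
    // The caller owns the stream returned by Open(); it is deleted below,
    // after the Printer writing to it has gone out of scope.
    google::protobuf::io::ZeroCopyOutputStream* output =
        output_directory->Open(file->name() + ".foo.txt");
    {
      google::protobuf::io::Printer printer(output, '$');
      std::map<std::string, std::string> vars;
      vars["file"] = file->name();
      vars["param"] = parameter;
      printer.Print(vars, "// generated from $file$ (parameter: $param$)\n");
    }
    delete output;
    return true;
  }
};
}  // namespace example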

View File

@ -0,0 +1,579 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#ifdef _MSC_VER
#include <io.h>
#include <direct.h>
#else
#include <unistd.h>
#endif
#include <errno.h>
#include <iostream>
#include <google/protobuf/compiler/command_line_interface.h>
#include <google/protobuf/compiler/importer.h>
#include <google/protobuf/compiler/code_generator.h>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/stubs/common.h>
#include <google/protobuf/stubs/strutil.h>
namespace google {
namespace protobuf {
namespace compiler {
#if defined(_WIN32)
#define mkdir(name, mode) mkdir(name)
#ifndef W_OK
#define W_OK 02 // not defined by MSVC for whatever reason
#endif
#ifndef F_OK
#define F_OK 00 // not defined by MSVC for whatever reason
#endif
#endif
#ifndef O_BINARY
#ifdef _O_BINARY
#define O_BINARY _O_BINARY
#else
#define O_BINARY 0 // If this isn't defined, the platform doesn't need it.
#endif
#endif
namespace {
#if defined(_WIN32) && !defined(__CYGWIN__)
static const char* kPathSeparator = ";";
#else
static const char* kPathSeparator = ":";
#endif
} // namespace
// A MultiFileErrorCollector that prints errors to stderr.
class CommandLineInterface::ErrorPrinter : public MultiFileErrorCollector {
public:
ErrorPrinter() {}
~ErrorPrinter() {}
// implements MultiFileErrorCollector ------------------------------
void AddError(const string& filename, int line, int column,
const string& message) {
// Users typically expect 1-based line/column numbers, so we add 1
// to each here.
cerr << filename;
if (line != -1) {
cerr << ":" << (line + 1) << ":" << (column + 1);
}
cerr << ": " << message << endl;
}
};
// -------------------------------------------------------------------
// An OutputDirectory implementation that writes to disk.
class CommandLineInterface::DiskOutputDirectory : public OutputDirectory {
public:
DiskOutputDirectory(const string& root);
~DiskOutputDirectory();
bool VerifyExistence();
inline bool had_error() { return had_error_; }
inline void set_had_error(bool value) { had_error_ = value; }
// implements OutputDirectory --------------------------------------
io::ZeroCopyOutputStream* Open(const string& filename);
private:
string root_;
bool had_error_;
};
// A FileOutputStream that checks for errors in the destructor and reports
// them. We extend FileOutputStream via wrapping rather than inheritance
// for two reasons:
// 1) Implementation inheritance is evil.
// 2) We need to close the file descriptor *after* the FileOutputStream's
// destructor is run to make sure it flushes the file contents.
class CommandLineInterface::ErrorReportingFileOutput
: public io::ZeroCopyOutputStream {
public:
ErrorReportingFileOutput(int file_descriptor,
const string& filename,
DiskOutputDirectory* directory);
~ErrorReportingFileOutput();
// implements ZeroCopyOutputStream ---------------------------------
bool Next(void** data, int* size) { return file_stream_->Next(data, size); }
void BackUp(int count) { file_stream_->BackUp(count); }
int64 ByteCount() const { return file_stream_->ByteCount(); }
private:
scoped_ptr<io::FileOutputStream> file_stream_;
int file_descriptor_;
string filename_;
DiskOutputDirectory* directory_;
};
// -------------------------------------------------------------------
CommandLineInterface::DiskOutputDirectory::DiskOutputDirectory(
const string& root)
: root_(root), had_error_(false) {
// Add a '/' to the end if it doesn't already have one. But don't add a
// '/' to an empty string since this probably means the current directory.
if (!root_.empty() && root[root_.size() - 1] != '/') {
root_ += '/';
}
}
CommandLineInterface::DiskOutputDirectory::~DiskOutputDirectory() {
}
bool CommandLineInterface::DiskOutputDirectory::VerifyExistence() {
if (!root_.empty()) {
// Make sure the directory exists. If it isn't a directory, this will fail
// because we added a '/' to the end of the name in the constructor.
if (access(root_.c_str(), W_OK) == -1) {
cerr << root_ << ": " << strerror(errno) << endl;
return false;
}
}
return true;
}
io::ZeroCopyOutputStream* CommandLineInterface::DiskOutputDirectory::Open(
const string& filename) {
// Recursively create the parent directories of the output file.
vector<string> parts;
SplitStringUsing(filename, "/", &parts);
string path_so_far = root_;
for (int i = 0; i < parts.size() - 1; i++) {
path_so_far += parts[i];
if (mkdir(path_so_far.c_str(), 0777) != 0) {
if (errno != EEXIST) {
cerr << filename << ": while trying to create directory "
<< path_so_far << ": " << strerror(errno) << endl;
had_error_ = true;
// Return a dummy stream.
return new io::ArrayOutputStream(NULL, 0);
}
}
path_so_far += '/';
}
// Create the output file.
int file_descriptor;
do {
file_descriptor =
open((root_ + filename).c_str(),
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, 0777);
} while (file_descriptor < 0 && errno == EINTR);
if (file_descriptor < 0) {
// Failed to open.
cerr << filename << ": " << strerror(errno) << endl;
had_error_ = true;
// Return a dummy stream.
return new io::ArrayOutputStream(NULL, 0);
}
return new ErrorReportingFileOutput(file_descriptor, filename, this);
}
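// Illustration (not part of the original file): with an output root of
// "out", Open("foo/bar/baz.txt") first creates "out/foo" and "out/foo/bar"
// (ignoring EEXIST), then opens "out/foo/bar/baz.txt" for writing,
// truncating any existing file.  On failure a dummy in-memory stream is
// returned and had_error_ is set so that the top-level run can report the
// error.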
CommandLineInterface::ErrorReportingFileOutput::ErrorReportingFileOutput(
int file_descriptor,
const string& filename,
DiskOutputDirectory* directory)
: file_stream_(new io::FileOutputStream(file_descriptor)),
file_descriptor_(file_descriptor),
filename_(filename),
directory_(directory) {}
CommandLineInterface::ErrorReportingFileOutput::~ErrorReportingFileOutput() {
// Check if we had any errors while writing.
if (file_stream_->GetErrno() != 0) {
cerr << filename_ << ": " << strerror(file_stream_->GetErrno()) << endl;
directory_->set_had_error(true);
}
// Close the file stream.
if (!file_stream_->Close()) {
cerr << filename_ << ": " << strerror(file_stream_->GetErrno()) << endl;
directory_->set_had_error(true);
}
}
// ===================================================================
CommandLineInterface::CommandLineInterface()
: disallow_services_(false),
inputs_are_proto_path_relative_(false) {}
CommandLineInterface::~CommandLineInterface() {}
void CommandLineInterface::RegisterGenerator(const string& flag_name,
CodeGenerator* generator,
const string& help_text) {
GeneratorInfo info;
info.generator = generator;
info.help_text = help_text;
generators_[flag_name] = info;
}
int CommandLineInterface::Run(int argc, const char* const argv[]) {
Clear();
if (!ParseArguments(argc, argv)) return -1;
// Set up the source tree.
DiskSourceTree source_tree;
for (int i = 0; i < proto_path_.size(); i++) {
source_tree.MapPath(proto_path_[i].first, proto_path_[i].second);
}
// Map input files to virtual paths if necessary.
if (!inputs_are_proto_path_relative_) {
if (!MakeInputsBeProtoPathRelative(&source_tree)) {
return -1;
}
}
// Allocate the Importer.
ErrorPrinter error_collector;
DescriptorPool pool;
Importer importer(&source_tree, &error_collector);
// Parse each file and generate output.
for (int i = 0; i < input_files_.size(); i++) {
// Import the file.
const FileDescriptor* parsed_file = importer.Import(input_files_[i]);
if (parsed_file == NULL) return -1;
// Enforce --disallow_services.
if (disallow_services_ && parsed_file->service_count() > 0) {
cerr << parsed_file->name() << ": This file contains services, but "
"--disallow_services was used." << endl;
return -1;
}
// Generate output files.
for (int j = 0; j < output_directives_.size(); j++) {
if (!GenerateOutput(parsed_file, output_directives_[j])) {
return -1;
}
}
}
return 0;
}
void CommandLineInterface::Clear() {
proto_path_.clear();
input_files_.clear();
output_directives_.clear();
}
bool CommandLineInterface::MakeInputsBeProtoPathRelative(
DiskSourceTree* source_tree) {
for (int i = 0; i < input_files_.size(); i++) {
string virtual_file, shadowing_disk_file;
switch (source_tree->DiskFileToVirtualFile(
input_files_[i], &virtual_file, &shadowing_disk_file)) {
case DiskSourceTree::SUCCESS:
input_files_[i] = virtual_file;
break;
case DiskSourceTree::SHADOWED:
cerr << input_files_[i] << ": Input is shadowed in the --proto_path "
"by \"" << shadowing_disk_file << "\". Either use the latter "
"file as your input or reorder the --proto_path so that the "
"former file's location comes first." << endl;
return false;
case DiskSourceTree::CANNOT_OPEN:
cerr << input_files_[i] << ": " << strerror(errno) << endl;
return false;
case DiskSourceTree::NO_MAPPING:
// First check if the file exists at all.
if (access(input_files_[i].c_str(), F_OK) < 0) {
// File does not even exist.
cerr << input_files_[i] << ": " << strerror(ENOENT) << endl;
} else {
cerr << input_files_[i] << ": File does not reside within any path "
"specified using --proto_path (or -I). You must specify a "
"--proto_path which encompasses this file." << endl;
}
return false;
}
}
return true;
}
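// Illustration (not part of the original file): given
//   --proto_path=src        (virtual path "" mapped to disk path "src")
// and a command-line input of "src/foo/bar.proto", the call
//   source_tree->DiskFileToVirtualFile("src/foo/bar.proto",
//                                      &virtual_file, &shadowing_disk_file)
// is expected to return SUCCESS with virtual_file == "foo/bar.proto", which
// then replaces the original entry in input_files_.  SHADOWED is returned
// instead when an earlier --proto_path entry already resolves that virtual
// name to a different disk file.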
bool CommandLineInterface::ParseArguments(int argc, const char* const argv[]) {
executable_name_ = argv[0];
// Iterate through all arguments and parse them.
for (int i = 1; i < argc; i++) {
string name, value;
if (ParseArgument(argv[i], &name, &value)) {
// Returned true => Use the next argument as the flag value.
if (i + 1 == argc || argv[i+1][0] == '-') {
cerr << "Missing value for flag: " << name << endl;
return false;
} else {
++i;
value = argv[i];
}
}
if (!InterpretArgument(name, value)) return false;
}
// If no --proto_path was given, use the current working directory.
if (proto_path_.empty()) {
proto_path_.push_back(make_pair("", "."));
}
// Check some error cases.
if (input_files_.empty()) {
cerr << "Missing input file." << endl;
return false;
}
if (output_directives_.empty()) {
cerr << "Missing output directives." << endl;
return false;
}
return true;
}
bool CommandLineInterface::ParseArgument(const char* arg,
string* name, string* value) {
bool parsed_value = false;
if (arg[0] != '-') {
// Not a flag.
name->clear();
parsed_value = true;
*value = arg;
} else if (arg[1] == '-') {
// Two dashes: Multi-character name, with '=' separating name and
// value.
const char* equals_pos = strchr(arg, '=');
if (equals_pos != NULL) {
*name = string(arg, equals_pos - arg);
*value = equals_pos + 1;
parsed_value = true;
} else {
*name = arg;
}
} else {
// One dash: One-character name, all subsequent characters are the
// value.
if (arg[1] == '\0') {
// arg is just "-". We treat this as an input file, except that at
// present this will just lead to a "file not found" error.
name->clear();
*value = arg;
parsed_value = true;
} else {
*name = string(arg, 2);
*value = arg + 2;
parsed_value = !value->empty();
}
}
// Need to return true iff the next arg should be used as the value for this
// one, false otherwise.
if (parsed_value) {
// We already parsed a value for this flag.
return false;
}
if (*name == "-h" || *name == "--help" ||
*name == "--disallow_services" ||
*name == "--version") {
// HACK: These are the only flags that don't take a value.
// They probably should not be hard-coded like this but for now it's
// not worth doing better.
return false;
}
// Next argument is the flag value.
return true;
}
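// Illustration (not part of the original file): some sample parses and the
// corresponding return values of ParseArgument():
//   "--proto_path=src"  -> name="--proto_path", value="src",       returns false
//   "--proto_path"      -> name="--proto_path", value="",          returns true (next arg is the value)
//   "-Isrc"             -> name="-I",           value="src",       returns false
//   "foo.proto"         -> name="",             value="foo.proto", returns false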
bool CommandLineInterface::InterpretArgument(const string& name,
const string& value) {
if (name.empty()) {
// Not a flag. Just a filename.
if (value.empty()) {
cerr << "You seem to have passed an empty string as one of the "
"arguments to " << executable_name_ << ". This is actually "
"sort of hard to do. Congrats. Unfortunately it is not valid "
"input so the program is going to die now." << endl;
return false;
}
input_files_.push_back(value);
} else if (name == "-I" || name == "--proto_path") {
// Java's -classpath (and some other languages) delimits path components
// with colons. Let's accept that syntax too just to make things more
// intuitive.
vector<string> parts;
SplitStringUsing(value, kPathSeparator, &parts);
for (int i = 0; i < parts.size(); i++) {
string virtual_path;
string disk_path;
string::size_type equals_pos = parts[i].find_first_of('=');
if (equals_pos == string::npos) {
virtual_path = "";
disk_path = parts[i];
} else {
virtual_path = parts[i].substr(0, equals_pos);
disk_path = parts[i].substr(equals_pos + 1);
}
if (disk_path.empty()) {
cerr << "--proto_path passed empty directory name. (Use \".\" for "
"current directory.)" << endl;
return false;
}
// Make sure disk path exists, warn otherwise.
if (access(disk_path.c_str(), F_OK) < 0) {
cerr << disk_path << ": warning: directory does not exist." << endl;
}
proto_path_.push_back(make_pair(virtual_path, disk_path));
}
} else if (name == "-h" || name == "--help") {
PrintHelpText();
return false; // Exit without running compiler.
} else if (name == "--version") {
if (!version_info_.empty()) {
cout << version_info_ << endl;
}
cout << "libprotoc "
<< protobuf::internal::VersionString(GOOGLE_PROTOBUF_VERSION)
<< endl;
return false; // Exit without running compiler.
} else if (name == "--disallow_services") {
disallow_services_ = true;
} else {
// Some other flag. Look it up in the generators list.
GeneratorMap::const_iterator iter = generators_.find(name);
if (iter == generators_.end()) {
cerr << "Unknown flag: " << name << endl;
return false;
}
// It's an output flag. Add it to the output directives.
OutputDirective directive;
directive.name = name;
directive.generator = iter->second.generator;
// Split value at ':' to separate the generator parameter from the
// filename.
vector<string> parts;
SplitStringUsing(value, ":", &parts);
if (parts.size() == 1) {
directive.output_location = parts[0];
} else if (parts.size() == 2) {
directive.parameter = parts[0];
directive.output_location = parts[1];
} else {
cerr << "Invalid value for flag " << name << "." << endl;
return false;
}
output_directives_.push_back(directive);
}
return true;
}
void CommandLineInterface::PrintHelpText() {
// Sorry for indentation here; line wrapping would be uglier.
cerr <<
"Usage: " << executable_name_ << " [OPTION] PROTO_FILE\n"
"Parse PROTO_FILE and generate output based on the options given:\n"
" -IPATH, --proto_path=PATH Specify the directory in which to search for\n"
" imports. May be specified multiple times;\n"
" directories will be searched in order. If not\n"
" given, the current working directory is used.\n"
" --version Show version info and exit.\n"
" -h, --help Show this text and exit." << endl;
for (GeneratorMap::iterator iter = generators_.begin();
iter != generators_.end(); ++iter) {
// FIXME(kenton): If the text is long enough it will wrap, which is ugly,
// but fixing this nicely (e.g. splitting on spaces) is probably more
// trouble than it's worth.
cerr << " " << iter->first << "=OUT_DIR "
<< string(19 - iter->first.size(), ' ') // Spaces for alignment.
<< iter->second.help_text << endl;
}
}
bool CommandLineInterface::GenerateOutput(
const FileDescriptor* parsed_file,
const OutputDirective& output_directive) {
// Create the output directory.
DiskOutputDirectory output_directory(output_directive.output_location);
if (!output_directory.VerifyExistence()) {
return false;
}
// Opened successfully. Write it.
// Call the generator.
string error;
if (!output_directive.generator->Generate(
parsed_file, output_directive.parameter, &output_directory, &error)) {
// Generator returned an error.
cerr << output_directive.name << ": " << error << endl;
return false;
}
// Check for write errors.
if (output_directory.had_error()) {
return false;
}
return true;
}
} // namespace compiler
} // namespace protobuf
} // namespace google

View File

@ -0,0 +1,210 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
//
// Implements the Protocol Compiler front-end such that it may be reused by
// custom compilers written to support other languages.
#ifndef GOOGLE_PROTOBUF_COMPILER_COMMAND_LINE_INTERFACE_H__
#define GOOGLE_PROTOBUF_COMPILER_COMMAND_LINE_INTERFACE_H__
#include <google/protobuf/stubs/common.h>
#include <string>
#include <vector>
#include <map>
#include <set>
#include <utility>
namespace google {
namespace protobuf {
class FileDescriptor; // descriptor.h
namespace compiler {
class CodeGenerator; // code_generator.h
class DiskSourceTree; // importer.h
// This class implements the command-line interface to the protocol compiler.
// It is designed to make it very easy to create a custom protocol compiler
// supporting the languages of your choice. For example, if you wanted to
// create a custom protocol compiler binary which includes both the regular
// C++ support plus support for your own custom output "Foo", you would
// write a class "FooGenerator" which implements the CodeGenerator interface,
// then write a main() procedure like this:
//
// int main(int argc, char* argv[]) {
// google::protobuf::compiler::CommandLineInterface cli;
//
// // Support generation of C++ source and headers.
// google::protobuf::compiler::cpp::CppGenerator cpp_generator;
// cli.RegisterGenerator("--cpp_out", &cpp_generator,
// "Generate C++ source and header.");
//
// // Support generation of Foo code.
// FooGenerator foo_generator;
// cli.RegisterGenerator("--foo_out", &foo_generator,
// "Generate Foo file.");
//
// return cli.Run(argc, argv);
// }
//
// The compiler is invoked with syntax like:
// protoc --cpp_out=outdir --foo_out=outdir --proto_path=src foo.proto
//
// For a full description of the command-line syntax, invoke it with --help.
class LIBPROTOC_EXPORT CommandLineInterface {
public:
CommandLineInterface();
~CommandLineInterface();
// Register a code generator for a language.
//
// Parameters:
// * flag_name: The command-line flag used to specify an output file of
// this type. The name must start with a '-'. If the name is longer
// than one letter, it must start with two '-'s.
// * generator: The CodeGenerator which will be called to generate files
// of this type.
// * help_text: Text describing this flag in the --help output.
//
// Some generators accept extra parameters. You can specify this parameter
// on the command-line by placing it before the output directory, separated
// by a colon:
// protoc --foo_out=enable_bar:outdir
// The text before the colon is passed to CodeGenerator::Generate() as the
// "parameter".
void RegisterGenerator(const string& flag_name,
CodeGenerator* generator,
const string& help_text);
// Run the Protocol Compiler with the given command-line parameters.
// Returns the error code which should be returned by main().
//
// It may not be safe to call Run() in a multi-threaded environment because
// it calls strerror(). I'm not sure why you'd want to do this anyway.
int Run(int argc, const char* const argv[]);
// Call SetInputsAreProtoPathRelative(true) if the input files given on the command
// line should be interpreted relative to the proto import path specified
// using --proto_path or -I flags. Otherwise, input file names will be
// interpreted relative to the current working directory (or as absolute
// paths if they start with '/'), though they must still reside inside
// a directory given by --proto_path or the compiler will fail. The latter
// mode is generally more intuitive and easier to use, especially e.g. when
// defining implicit rules in Makefiles.
void SetInputsAreProtoPathRelative(bool enable) {
inputs_are_proto_path_relative_ = enable;
}
// Provides some text which will be printed when the --version flag is
// used. The version of libprotoc will also be printed on the next line
// after this text.
void SetVersionInfo(const string& text) {
version_info_ = text;
}
private:
// -----------------------------------------------------------------
class ErrorPrinter;
class DiskOutputDirectory;
class ErrorReportingFileOutput;
// Clear state from previous Run().
void Clear();
// Remaps each file in input_files_ so that it is relative to one of the
// directories in proto_path_. Returns false if an error occurred. This
// is only used if inputs_are_proto_path_relative_ is false.
bool MakeInputsBeProtoPathRelative(
DiskSourceTree* source_tree);
// Parse all command-line arguments.
bool ParseArguments(int argc, const char* const argv[]);
// Parses a command-line argument into a name/value pair. Returns
// true if the next argument in the argv should be used as the value,
// false otherwise.
//
// Examples:
// "-Isrc/protos" ->
// name = "-I", value = "src/protos"
// "--cpp_out=src/foo.pb2.cc" ->
// name = "--cpp_out", value = "src/foo.pb2.cc"
// "foo.proto" ->
// name = "", value = "foo.proto"
bool ParseArgument(const char* arg, string* name, string* value);
// Interprets arguments parsed with ParseArgument.
bool InterpretArgument(const string& name, const string& value);
// Print the --help text to stderr.
void PrintHelpText();
// Generate the given output file from the given input.
struct OutputDirective; // see below
bool GenerateOutput(const FileDescriptor* proto_file,
const OutputDirective& output_directive);
// -----------------------------------------------------------------
// The name of the executable as invoked (i.e. argv[0]).
string executable_name_;
// Version info set with SetVersionInfo().
string version_info_;
// Map from flag names to registered generators.
struct GeneratorInfo {
CodeGenerator* generator;
string help_text;
};
typedef map<string, GeneratorInfo> GeneratorMap;
GeneratorMap generators_;
// Stuff parsed from command line.
vector<pair<string, string> > proto_path_; // Search path for proto files.
vector<string> input_files_; // Names of the input proto files.
// output_directives_ lists all the files we are supposed to output and what
// generator to use for each.
struct OutputDirective {
string name;
CodeGenerator* generator;
string parameter;
string output_location;
};
vector<OutputDirective> output_directives_;
// Was the --disallow_services flag used?
bool disallow_services_;
// See SetInputsAreProtoPathRelative().
bool inputs_are_proto_path_relative_;
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(CommandLineInterface);
};
} // namespace compiler
} // namespace protobuf
} // namespace google
#endif // GOOGLE_PROTOBUF_COMPILER_COMMAND_LINE_INTERFACE_H__

View File

@ -0,0 +1,964 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
#include <vector>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/io/zero_copy_stream.h>
#include <google/protobuf/compiler/command_line_interface.h>
#include <google/protobuf/compiler/code_generator.h>
#include <google/protobuf/io/printer.h>
#include <google/protobuf/testing/file.h>
#include <google/protobuf/stubs/strutil.h>
#include <google/protobuf/testing/googletest.h>
#include <gtest/gtest.h>
namespace google {
namespace protobuf {
namespace compiler {
namespace {
class CommandLineInterfaceTest : public testing::Test {
protected:
virtual void SetUp();
virtual void TearDown();
// Runs the CommandLineInterface with the given command line. The
// command is automatically split on spaces, and the string "$tmpdir"
// is replaced with TestTempDir().
void Run(const string& command);
// -----------------------------------------------------------------
// Methods to set up the test (called before Run()).
class MockCodeGenerator;
// Registers a MockCodeGenerator with the given name.
MockCodeGenerator* RegisterGenerator(const string& generator_name,
const string& flag_name,
const string& filename,
const string& help_text);
MockCodeGenerator* RegisterErrorGenerator(const string& generator_name,
const string& error_text,
const string& flag_name,
const string& filename,
const string& help_text);
// Create a temp file within temp_directory_ with the given name.
// The containing directory is also created if necessary.
void CreateTempFile(const string& name, const string& contents);
void SetInputsAreProtoPathRelative(bool enable) {
cli_.SetInputsAreProtoPathRelative(enable);
}
// -----------------------------------------------------------------
// Methods to check the test results (called after Run()).
// Checks that no text was written to stderr during Run(), and Run()
// returned 0.
void ExpectNoErrors();
// Checks that Run() returned non-zero and the stderr output is exactly
// the text given. expected_text may contain references to "$tmpdir",
// which will be replaced by the temporary directory path.
void ExpectErrorText(const string& expected_text);
// Checks that Run() returned non-zero and the stderr contains the given
// substring.
void ExpectErrorSubstring(const string& expected_substring);
// Returns true if ExpectErrorSubstring(expected_substring) would pass, but
// does not add a test failure if the substring is absent.
bool HasAlternateErrorSubstring(const string& expected_substring);
// Checks that MockCodeGenerator::Generate() was called in the given
// context. That is, this tests if the generator with the given name
// was called with the given parameter and proto file and produced the
// given output file. This is checked by reading the output file and
// checking that it contains the content that MockCodeGenerator would
// generate given these inputs. message_name is the name of the first
// message that appeared in the proto file; this is just to make extra
// sure that the correct file was parsed.
void ExpectGenerated(const string& generator_name,
const string& parameter,
const string& proto_name,
const string& message_name,
const string& output_file);
private:
// The object we are testing.
CommandLineInterface cli_;
// We create a directory within TestTempDir() in order to add extra
// protection against accidentally deleting user files (since we recursively
// delete this directory during the test). This is the full path of that
// directory.
string temp_directory_;
// The result of Run().
int return_code_;
// The captured stderr output.
string error_text_;
// Pointers which need to be deleted later.
vector<MockCodeGenerator*> mock_generators_to_delete_;
};
// A mock CodeGenerator which outputs information about the context in which
// it was called, which can then be checked. Output is written to a filename
// constructed by concatenating the filename_prefix (given to the constructor)
// with the proto file name, separated by a '.'.
class CommandLineInterfaceTest::MockCodeGenerator : public CodeGenerator {
public:
// Create a MockCodeGenerator whose Generate() method returns true.
MockCodeGenerator(const string& name, const string& filename_prefix);
// Create a MockCodeGenerator whose Generate() method returns false
// and sets the error string to the given string.
MockCodeGenerator(const string& name, const string& filename_prefix,
const string& error);
~MockCodeGenerator();
void set_expect_write_error(bool value) {
expect_write_error_ = value;
}
// implements CodeGenerator ----------------------------------------
bool Generate(const FileDescriptor* file,
const string& parameter,
OutputDirectory* output_directory,
string* error) const;
private:
string name_;
string filename_prefix_;
bool return_error_;
string error_;
bool expect_write_error_;
};
// ===================================================================
void CommandLineInterfaceTest::SetUp() {
// Most of these tests were written before this option was added, so we
// run with the option on (which used to be the only way) except in certain
// tests where we turn it off.
cli_.SetInputsAreProtoPathRelative(true);
temp_directory_ = TestTempDir() + "/proto2_cli_test_temp";
// If the temp directory already exists, it must be left over from a
// previous run. Delete it.
if (File::Exists(temp_directory_)) {
File::DeleteRecursively(temp_directory_, NULL, NULL);
}
// Create the temp directory.
GOOGLE_CHECK(File::CreateDir(temp_directory_.c_str(), DEFAULT_FILE_MODE));
}
void CommandLineInterfaceTest::TearDown() {
// Delete the temp directory.
File::DeleteRecursively(temp_directory_, NULL, NULL);
// Delete all the MockCodeGenerators.
for (int i = 0; i < mock_generators_to_delete_.size(); i++) {
delete mock_generators_to_delete_[i];
}
mock_generators_to_delete_.clear();
}
void CommandLineInterfaceTest::Run(const string& command) {
vector<string> args;
SplitStringUsing(command, " ", &args);
scoped_array<const char*> argv(new const char*[args.size()]);
for (int i = 0; i < args.size(); i++) {
args[i] = StringReplace(args[i], "$tmpdir", temp_directory_, true);
argv[i] = args[i].c_str();
}
CaptureTestStderr();
return_code_ = cli_.Run(args.size(), argv.get());
error_text_ = GetCapturedTestStderr();
}
// -------------------------------------------------------------------
CommandLineInterfaceTest::MockCodeGenerator*
CommandLineInterfaceTest::RegisterGenerator(
const string& generator_name,
const string& flag_name,
const string& filename,
const string& help_text) {
MockCodeGenerator* generator =
new MockCodeGenerator(generator_name, filename);
mock_generators_to_delete_.push_back(generator);
cli_.RegisterGenerator(flag_name, generator, help_text);
return generator;
}
CommandLineInterfaceTest::MockCodeGenerator*
CommandLineInterfaceTest::RegisterErrorGenerator(
const string& generator_name,
const string& error_text,
const string& flag_name,
const string& filename_prefix,
const string& help_text) {
MockCodeGenerator* generator =
new MockCodeGenerator(generator_name, filename_prefix, error_text);
mock_generators_to_delete_.push_back(generator);
cli_.RegisterGenerator(flag_name, generator, help_text);
return generator;
}
void CommandLineInterfaceTest::CreateTempFile(
const string& name,
const string& contents) {
// Create parent directory, if necessary.
string::size_type slash_pos = name.find_last_of('/');
if (slash_pos != string::npos) {
string dir = name.substr(0, slash_pos);
File::RecursivelyCreateDir(temp_directory_ + "/" + dir, 0777);
}
// Write file.
string full_name = temp_directory_ + "/" + name;
File::WriteStringToFileOrDie(contents, full_name);
}
// -------------------------------------------------------------------
void CommandLineInterfaceTest::ExpectNoErrors() {
EXPECT_EQ(0, return_code_);
EXPECT_EQ("", error_text_);
}
void CommandLineInterfaceTest::ExpectErrorText(const string& expected_text) {
EXPECT_NE(0, return_code_);
EXPECT_EQ(StringReplace(expected_text, "$tmpdir", temp_directory_, true),
error_text_);
}
void CommandLineInterfaceTest::ExpectErrorSubstring(
const string& expected_substring) {
EXPECT_NE(0, return_code_);
EXPECT_PRED_FORMAT2(testing::IsSubstring, expected_substring, error_text_);
}
bool CommandLineInterfaceTest::HasAlternateErrorSubstring(
const string& expected_substring) {
EXPECT_NE(0, return_code_);
return error_text_.find(expected_substring) != string::npos;
}
void CommandLineInterfaceTest::ExpectGenerated(
const string& generator_name,
const string& parameter,
const string& proto_name,
const string& message_name,
const string& output_file_prefix) {
// Open and read the file.
string output_file = output_file_prefix + "." + proto_name;
string file_contents;
ASSERT_TRUE(File::ReadFileToString(temp_directory_ + "/" + output_file,
&file_contents))
<< "Failed to open file: " + output_file;
// Check that the contents are as we expect.
string expected_contents =
generator_name + ": " + parameter + ", " + proto_name + ", " +
message_name + "\n";
EXPECT_EQ(expected_contents, file_contents)
<< "Output file did not have expected contents: " + output_file;
}
// ===================================================================
CommandLineInterfaceTest::MockCodeGenerator::MockCodeGenerator(
const string& name, const string& filename_prefix)
: name_(name),
filename_prefix_(filename_prefix),
return_error_(false),
expect_write_error_(false) {
}
CommandLineInterfaceTest::MockCodeGenerator::MockCodeGenerator(
const string& name, const string& filename_prefix, const string& error)
: name_(name),
filename_prefix_(filename_prefix),
return_error_(true),
error_(error),
expect_write_error_(false) {
}
CommandLineInterfaceTest::MockCodeGenerator::~MockCodeGenerator() {}
bool CommandLineInterfaceTest::MockCodeGenerator::Generate(
const FileDescriptor* file,
const string& parameter,
OutputDirectory* output_directory,
string* error) const {
scoped_ptr<io::ZeroCopyOutputStream> output(
output_directory->Open(filename_prefix_ + "." + file->name()));
io::Printer printer(output.get(), '$');
map<string, string> vars;
vars["name"] = name_;
vars["parameter"] = parameter;
vars["proto_name"] = file->name();
vars["message_name"] = file->message_type_count() > 0 ?
file->message_type(0)->full_name().c_str() : "(none)";
printer.Print(vars, "$name$: $parameter$, $proto_name$, $message_name$\n");
if (expect_write_error_) {
EXPECT_TRUE(printer.failed());
} else {
EXPECT_FALSE(printer.failed());
}
*error = error_;
return !return_error_;
}
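// For example (illustrative, not part of the original file): a generator
// registered via RegisterGenerator("test_generator", "--test_out",
// "output.test", ...) run on foo.proto containing "message Foo {}" with an
// empty parameter writes the file "output.test.foo.proto" containing the
// single line:
//   test_generator: , foo.proto, Foo
// which is exactly what ExpectGenerated() reconstructs and compares against.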
// ===================================================================
TEST_F(CommandLineInterfaceTest, BasicOutput) {
// Test that the common case works.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, MultipleInputs) {
// Test parsing multiple input files.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
CreateTempFile("bar.proto",
"syntax = \"proto2\";\n"
"message Bar {}\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto bar.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
ExpectGenerated("test_generator", "", "bar.proto", "Bar", "output.test");
}
TEST_F(CommandLineInterfaceTest, CreateDirectory) {
// Test that when we output to a sub-directory, it is created.
RegisterGenerator("test_generator", "--test_out",
"bar/baz/output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "",
"foo.proto", "Foo", "bar/baz/output.test");
}
TEST_F(CommandLineInterfaceTest, GeneratorParameters) {
// Test that generator parameters are correctly parsed from the command line.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out=TestParameter:$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "TestParameter",
"foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, PathLookup) {
// Test that specifying multiple directories in the proto search path works.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("b/bar.proto",
"syntax = \"proto2\";\n"
"message Bar {}\n");
CreateTempFile("a/foo.proto",
"syntax = \"proto2\";\n"
"import \"bar.proto\";\n"
"message Foo {\n"
" optional Bar a = 1;\n"
"}\n");
CreateTempFile("b/foo.proto", "this should not be parsed\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir/a --proto_path=$tmpdir/b foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, ColonDelimitedPath) {
// Same as PathLookup, but we provide the proto_path in a single flag.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("b/bar.proto",
"syntax = \"proto2\";\n"
"message Bar {}\n");
CreateTempFile("a/foo.proto",
"syntax = \"proto2\";\n"
"import \"bar.proto\";\n"
"message Foo {\n"
" optional Bar a = 1;\n"
"}\n");
CreateTempFile("b/foo.proto", "this should not be parsed\n");
#undef PATH_SEPARATOR
#if defined(_WIN32)
#define PATH_SEPARATOR ";"
#else
#define PATH_SEPARATOR ":"
#endif
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir/a"PATH_SEPARATOR"$tmpdir/b foo.proto");
#undef PATH_SEPARATOR
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, NonRootMapping) {
// Test setting up a search path mapping a directory to a non-root location.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=bar=$tmpdir bar/foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "bar/foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, MultipleGenerators) {
// Test that we can have multiple generators and use both in one invocation,
// each with a different output directory.
RegisterGenerator("test_generator_1", "--test1_out",
"output1.test", "Test output 1.");
RegisterGenerator("test_generator_2", "--test2_out",
"output2.test", "Test output 2.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
// Create the "a" and "b" sub-directories.
CreateTempFile("a/dummy", "");
CreateTempFile("b/dummy", "");
Run("protocol_compiler "
"--test1_out=$tmpdir/a "
"--test2_out=$tmpdir/b "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator_1", "", "foo.proto", "Foo", "a/output1.test");
ExpectGenerated("test_generator_2", "", "foo.proto", "Foo", "b/output2.test");
}
TEST_F(CommandLineInterfaceTest, DisallowServicesNoServices) {
// Test that --disallow_services doesn't cause a problem when there are no
// services.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --disallow_services --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, DisallowServicesHasService) {
// Test that --disallow_services produces an error when there are services.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n"
"service Bar {}\n");
Run("protocol_compiler --disallow_services --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectErrorSubstring("foo.proto: This file contains services");
}
TEST_F(CommandLineInterfaceTest, AllowServicesHasService) {
// Test that services work fine as long as --disallow_services is not used.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n"
"service Bar {}\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, CwdRelativeInputs) {
// Test that we can accept working-directory-relative input files.
SetInputsAreProtoPathRelative(false);
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir $tmpdir/foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
// -------------------------------------------------------------------
TEST_F(CommandLineInterfaceTest, ParseErrors) {
// Test that parse errors are reported.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"badsyntax\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectErrorText(
"foo.proto:2:1: Expected top-level statement (e.g. \"message\").\n");
}
TEST_F(CommandLineInterfaceTest, ParseErrorsMultipleFiles) {
// Test that parse errors are reported from multiple files.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
// We set up files such that foo.proto actually depends on bar.proto in
// two ways: Directly and through baz.proto. bar.proto's errors should
// only be reported once.
CreateTempFile("bar.proto",
"syntax = \"proto2\";\n"
"badsyntax\n");
CreateTempFile("baz.proto",
"syntax = \"proto2\";\n"
"import \"bar.proto\";\n");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"import \"bar.proto\";\n"
"import \"baz.proto\";\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectErrorText(
"bar.proto:2:1: Expected top-level statement (e.g. \"message\").\n"
"baz.proto: Import \"bar.proto\" was not found or had errors.\n"
"foo.proto: Import \"bar.proto\" was not found or had errors.\n"
"foo.proto: Import \"baz.proto\" was not found or had errors.\n");
}
TEST_F(CommandLineInterfaceTest, InputNotFoundError) {
// Test what happens if the input file is not found.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectErrorText(
"foo.proto: File not found.\n");
}
TEST_F(CommandLineInterfaceTest, CwdRelativeInputNotFoundError) {
// Test what happens when a working-directory-relative input file is not
// found.
SetInputsAreProtoPathRelative(false);
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir $tmpdir/foo.proto");
ExpectErrorText(
"$tmpdir/foo.proto: No such file or directory\n");
}
TEST_F(CommandLineInterfaceTest, CwdRelativeInputNotMappedError) {
// Test what happens when a working-directory-relative input file is not
// mapped to a virtual path.
SetInputsAreProtoPathRelative(false);
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
// Create a directory called "bar" so that we can point --proto_path at it.
CreateTempFile("bar/dummy", "");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir/bar $tmpdir/foo.proto");
ExpectErrorText(
"$tmpdir/foo.proto: File does not reside within any path "
"specified using --proto_path (or -I). You must specify a "
"--proto_path which encompasses this file.\n");
}
TEST_F(CommandLineInterfaceTest, CwdRelativeInputNotFoundAndNotMappedError) {
// Check what happens if the input file is not found *and* is not mapped
// in the proto_path.
SetInputsAreProtoPathRelative(false);
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
// Create a directory called "bar" so that we can point --proto_path at it.
CreateTempFile("bar/dummy", "");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir/bar $tmpdir/foo.proto");
ExpectErrorText(
"$tmpdir/foo.proto: No such file or directory\n");
}
TEST_F(CommandLineInterfaceTest, CwdRelativeInputShadowedError) {
// Test what happens when a working-directory-relative input file is shadowed
// by another file in the virtual path.
SetInputsAreProtoPathRelative(false);
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo/foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
CreateTempFile("bar/foo.proto",
"syntax = \"proto2\";\n"
"message Bar {}\n");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir/foo --proto_path=$tmpdir/bar "
"$tmpdir/bar/foo.proto");
ExpectErrorText(
"$tmpdir/bar/foo.proto: Input is shadowed in the --proto_path "
"by \"$tmpdir/foo/foo.proto\". Either use the latter "
"file as your input or reorder the --proto_path so that the "
"former file's location comes first.\n");
}
TEST_F(CommandLineInterfaceTest, ProtoPathNotFoundError) {
// Test what happens if the input file is not found.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir/foo foo.proto");
ExpectErrorText(
"$tmpdir/foo: warning: directory does not exist.\n"
"foo.proto: File not found.\n");
}
TEST_F(CommandLineInterfaceTest, MissingInputError) {
// Test that we get an error if no inputs are given.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir");
ExpectErrorText("Missing input file.\n");
}
TEST_F(CommandLineInterfaceTest, MissingOutputError) {
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --proto_path=$tmpdir foo.proto");
ExpectErrorText("Missing output directives.\n");
}
TEST_F(CommandLineInterfaceTest, OutputWriteError) {
MockCodeGenerator* generator =
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
generator->set_expect_write_error(true);
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
// Create a directory blocking our output location.
CreateTempFile("output.test.foo.proto/foo", "");
Run("protocol_compiler --test_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
#if defined(_WIN32) && !defined(__CYGWIN__)
// Windows with MSVCRT.dll produces EPERM instead of EISDIR.
if (HasAlternateErrorSubstring("output.test.foo.proto: Permission denied")) {
return;
}
#endif
ExpectErrorSubstring("output.test.foo.proto: Is a directory");
}
TEST_F(CommandLineInterfaceTest, OutputDirectoryNotFoundError) {
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out=$tmpdir/nosuchdir "
"--proto_path=$tmpdir foo.proto");
ExpectErrorSubstring("nosuchdir/: "
"No such file or directory");
}
TEST_F(CommandLineInterfaceTest, OutputDirectoryIsFileError) {
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out=$tmpdir/foo.proto "
"--proto_path=$tmpdir foo.proto");
#if defined(_WIN32) && !defined(__CYGWIN__)
// Windows with MSVCRT.dll produces EINVAL instead of ENOTDIR.
if (HasAlternateErrorSubstring("foo.proto/: Invalid argument")) {
return;
}
#endif
ExpectErrorSubstring("foo.proto/: Not a directory");
}
TEST_F(CommandLineInterfaceTest, GeneratorError) {
RegisterErrorGenerator("error_generator", "Test error message.",
"--error_out", "output.test", "Test error output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --error_out=$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectErrorSubstring("--error_out: Test error message.");
}
TEST_F(CommandLineInterfaceTest, HelpText) {
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
RegisterErrorGenerator("error_generator", "Test error message.",
"--error_out", "output.test", "Test error output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("test_exec_name --help");
ExpectErrorSubstring("Usage: test_exec_name ");
ExpectErrorSubstring("--test_out=OUT_DIR");
ExpectErrorSubstring("Test output.");
ExpectErrorSubstring("--error_out=OUT_DIR");
ExpectErrorSubstring("Test error output.");
}
// -------------------------------------------------------------------
// Flag parsing tests
TEST_F(CommandLineInterfaceTest, ParseSingleCharacterFlag) {
// Test that a single-character flag works.
RegisterGenerator("test_generator", "-o",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler -o$tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, ParseSpaceDelimitedValue) {
// Test that separating the flag value with a space works.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler --test_out $tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, ParseSingleCharacterSpaceDelimitedValue) {
// Test that separating the flag value with a space works for
// single-character flags.
RegisterGenerator("test_generator", "-o",
"output.test", "Test output.");
CreateTempFile("foo.proto",
"syntax = \"proto2\";\n"
"message Foo {}\n");
Run("protocol_compiler -o $tmpdir "
"--proto_path=$tmpdir foo.proto");
ExpectNoErrors();
ExpectGenerated("test_generator", "", "foo.proto", "Foo", "output.test");
}
TEST_F(CommandLineInterfaceTest, MissingValueError) {
// Test that we get an error if a flag is missing its value.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
Run("protocol_compiler --test_out --proto_path=$tmpdir foo.proto");
ExpectErrorText("Missing value for flag: --test_out\n");
}
TEST_F(CommandLineInterfaceTest, MissingValueAtEndError) {
// Test that we get an error if the last argument is a flag requiring a
// value.
RegisterGenerator("test_generator", "--test_out",
"output.test", "Test output.");
Run("protocol_compiler --test_out");
ExpectErrorText("Missing value for flag: --test_out\n");
}
} // anonymous namespace
} // namespace compiler
} // namespace protobuf
} // namespace google

View File

@ -0,0 +1,135 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
//
// This test ensures that google/protobuf/descriptor.pb.{h,cc} match exactly
// what would be generated by the protocol compiler. These files are not
// generated automatically at build time because they are compiled into the
// protocol compiler itself. So, if they were auto-generated, you'd have a
// chicken-and-egg problem.
//
// If this test fails, run the script
// "generate_descriptor_proto.sh" and add
// descriptor.pb.{h,cc} to your changelist.
#include <map>
#include <google/protobuf/compiler/cpp/cpp_generator.h>
#include <google/protobuf/compiler/importer.h>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/stubs/stl_util-inl.h>
#include <google/protobuf/stubs/map-util.h>
#include <google/protobuf/stubs/strutil.h>
#include <google/protobuf/stubs/substitute.h>
#include <google/protobuf/testing/file.h>
#include <google/protobuf/testing/googletest.h>
#include <gtest/gtest.h>
namespace google {
namespace protobuf {
namespace compiler {
namespace cpp {
namespace {
class MockErrorCollector : public MultiFileErrorCollector {
public:
MockErrorCollector() {}
~MockErrorCollector() {}
string text_;
// implements ErrorCollector ---------------------------------------
void AddError(const string& filename, int line, int column,
const string& message) {
strings::SubstituteAndAppend(&text_, "$0:$1:$2: $3\n",
filename, line, column, message);
}
};
class MockOutputDirectory : public OutputDirectory {
public:
MockOutputDirectory() {}
~MockOutputDirectory() {
STLDeleteValues(&files_);
}
void ExpectFileMatches(const string& virtual_filename,
const string& physical_filename) {
string* expected_contents = FindPtrOrNull(files_, virtual_filename);
ASSERT_TRUE(expected_contents != NULL)
<< "Generator failed to generate file: " << virtual_filename;
string actual_contents;
File::ReadFileToStringOrDie(
TestSourceDir() + "/" + physical_filename,
&actual_contents);
EXPECT_TRUE(actual_contents == *expected_contents)
<< physical_filename << " needs to be regenerated. Please run "
"generate_descriptor_proto.sh and add this file "
"to your CL.";
}
// implements OutputDirectory --------------------------------------
virtual io::ZeroCopyOutputStream* Open(const string& filename) {
string** map_slot = &files_[filename];
if (*map_slot != NULL) delete *map_slot;
*map_slot = new string;
return new io::StringOutputStream(*map_slot);
}
private:
map<string, string*> files_;
};
TEST(BootstrapTest, GeneratedDescriptorMatches) {
MockErrorCollector error_collector;
DiskSourceTree source_tree;
source_tree.MapPath("", TestSourceDir());
Importer importer(&source_tree, &error_collector);
const FileDescriptor* proto_file =
importer.Import("google/protobuf/descriptor.proto");
EXPECT_EQ("", error_collector.text_);
ASSERT_TRUE(proto_file != NULL);
CppGenerator generator;
MockOutputDirectory output_directory;
string error;
string parameter;
parameter = "dllexport_decl=LIBPROTOBUF_EXPORT";
ASSERT_TRUE(generator.Generate(proto_file, parameter,
&output_directory, &error));
output_directory.ExpectFileMatches("google/protobuf/descriptor.pb.h",
"google/protobuf/descriptor.pb.h");
output_directory.ExpectFileMatches("google/protobuf/descriptor.pb.cc",
"google/protobuf/descriptor.pb.cc");
}
} // namespace
} // namespace cpp
} // namespace compiler
} // namespace protobuf
} // namespace google


@ -0,0 +1,196 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
#include <set>
#include <map>
#include <google/protobuf/compiler/cpp/cpp_enum.h>
#include <google/protobuf/compiler/cpp/cpp_helpers.h>
#include <google/protobuf/io/printer.h>
#include <google/protobuf/stubs/strutil.h>
namespace google {
namespace protobuf {
namespace compiler {
namespace cpp {
EnumGenerator::EnumGenerator(const EnumDescriptor* descriptor,
const string& dllexport_decl)
: descriptor_(descriptor),
classname_(ClassName(descriptor, false)),
dllexport_decl_(dllexport_decl) {
}
EnumGenerator::~EnumGenerator() {}
void EnumGenerator::GenerateDefinition(io::Printer* printer) {
map<string, string> vars;
vars["classname"] = classname_;
vars["short_name"] = descriptor_->name();
printer->Print(vars, "enum $classname$ {\n");
printer->Indent();
const EnumValueDescriptor* min_value = descriptor_->value(0);
const EnumValueDescriptor* max_value = descriptor_->value(0);
for (int i = 0; i < descriptor_->value_count(); i++) {
vars["name"] = descriptor_->value(i)->name();
vars["number"] = SimpleItoa(descriptor_->value(i)->number());
vars["prefix"] = (descriptor_->containing_type() == NULL) ?
"" : classname_ + "_";
printer->Print(vars, "$prefix$$name$ = $number$,\n");
if (descriptor_->value(i)->number() < min_value->number()) {
min_value = descriptor_->value(i);
}
if (descriptor_->value(i)->number() > max_value->number()) {
max_value = descriptor_->value(i);
}
}
printer->Outdent();
printer->Print("};\n");
vars["min_name"] = min_value->name();
vars["max_name"] = max_value->name();
if (dllexport_decl_.empty()) {
vars["dllexport"] = "";
} else {
vars["dllexport"] = dllexport_decl_ + " ";
}
printer->Print(vars,
"$dllexport$const ::google::protobuf::EnumDescriptor* $classname$_descriptor();\n"
"$dllexport$bool $classname$_IsValid(int value);\n"
"const $classname$ $prefix$$short_name$_MIN = $prefix$$min_name$;\n"
"const $classname$ $prefix$$short_name$_MAX = $prefix$$max_name$;\n"
"\n");
}
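// For example, a hypothetical top-level enum
//   enum Color { RED = 0; BLUE = 5; }
// would produce roughly the following header code at namespace scope
// (assuming no dllexport_decl):
//   enum Color {
//     RED = 0,
//     BLUE = 5,
//   };
//   const ::google::protobuf::EnumDescriptor* Color_descriptor();
//   bool Color_IsValid(int value);
//   const Color Color_MIN = RED;
//   const Color Color_MAX = BLUE;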
void EnumGenerator::GenerateSymbolImports(io::Printer* printer) {
map<string, string> vars;
vars["nested_name"] = descriptor_->name();
vars["classname"] = classname_;
printer->Print(vars, "typedef $classname$ $nested_name$;\n");
for (int j = 0; j < descriptor_->value_count(); j++) {
vars["tag"] = descriptor_->value(j)->name();
printer->Print(vars,
"static const $nested_name$ $tag$ = $classname$_$tag$;\n");
}
printer->Print(vars,
"static inline const ::google::protobuf::EnumDescriptor*\n"
"$nested_name$_descriptor() {\n"
" return $classname$_descriptor();\n"
"}\n"
"static inline bool $nested_name$_IsValid(int value) {\n"
" return $classname$_IsValid(value);\n"
"}\n"
"static const $nested_name$ $nested_name$_MIN =\n"
" $classname$_$nested_name$_MIN;\n"
"static const $nested_name$ $nested_name$_MAX =\n"
" $classname$_$nested_name$_MAX;\n");
}
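// For example, a hypothetical enum Color { RED = 0; } nested inside a message
// Bar (so classname_ is "Bar_Color") would expand roughly to the following
// inside Bar's class definition:
//   typedef Bar_Color Color;
//   static const Color RED = Bar_Color_RED;
//   static inline const ::google::protobuf::EnumDescriptor*
//   Color_descriptor() {
//     return Bar_Color_descriptor();
//   }
//   static inline bool Color_IsValid(int value) {
//     return Bar_Color_IsValid(value);
//   }
//   static const Color Color_MIN = Bar_Color_Color_MIN;
//   static const Color Color_MAX = Bar_Color_Color_MAX;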
void EnumGenerator::GenerateDescriptorInitializer(
io::Printer* printer, int index) {
map<string, string> vars;
vars["classname"] = classname_;
vars["index"] = SimpleItoa(index);
if (descriptor_->containing_type() == NULL) {
printer->Print(vars,
"$classname$_descriptor_ = file->enum_type($index$);\n");
} else {
vars["parent"] = ClassName(descriptor_->containing_type(), false);
printer->Print(vars,
"$classname$_descriptor_ = $parent$_descriptor_->enum_type($index$);\n");
}
}
void EnumGenerator::GenerateMethods(io::Printer* printer) {
map<string, string> vars;
vars["classname"] = classname_;
vars["builddescriptorsname"] =
GlobalBuildDescriptorsName(descriptor_->file()->name());
printer->Print(vars,
"const ::google::protobuf::EnumDescriptor* $classname$_descriptor() {\n"
" if ($classname$_descriptor_ == NULL) $builddescriptorsname$();\n"
" return $classname$_descriptor_;\n"
"}\n"
"bool $classname$_IsValid(int value) {\n"
" switch(value) {\n");
// Multiple values may have the same number. Make sure we only cover
// each number once by first constructing a set containing all valid
// numbers, then printing a case statement for each element.
set<int> numbers;
for (int j = 0; j < descriptor_->value_count(); j++) {
const EnumValueDescriptor* value = descriptor_->value(j);
numbers.insert(value->number());
}
for (set<int>::iterator iter = numbers.begin();
iter != numbers.end(); ++iter) {
printer->Print(
" case $number$:\n",
"number", SimpleItoa(*iter));
}
printer->Print(vars,
" return true;\n"
" default:\n"
" return false;\n"
" }\n"
"}\n"
"\n");
if (descriptor_->containing_type() != NULL) {
// We need to "define" the static constants which were declared in the
// header, to give the linker a place to put them. Or at least the C++
// standard says we have to. MSVC actually insists that we do _not_ define
// them again in the .cc file.
printer->Print("#ifndef _MSC_VER\n");
vars["parent"] = ClassName(descriptor_->containing_type(), false);
vars["nested_name"] = descriptor_->name();
for (int i = 0; i < descriptor_->value_count(); i++) {
vars["value"] = descriptor_->value(i)->name();
printer->Print(vars,
"const $classname$ $parent$::$value$;\n");
}
printer->Print(vars,
"const $classname$ $parent$::$nested_name$_MIN;\n"
"const $classname$ $parent$::$nested_name$_MAX;\n");
printer->Print("#endif // _MSC_VER\n");
}
}
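// Continuing the nested Bar.Color sketch from above, the #ifndef _MSC_VER
// section would contain roughly:
//   const Bar_Color Bar::RED;
//   const Bar_Color Bar::Color_MIN;
//   const Bar_Color Bar::Color_MAX;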
} // namespace cpp
} // namespace compiler
} // namespace protobuf
} // namespace google


@ -0,0 +1,81 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
#ifndef GOOGLE_PROTOBUF_COMPILER_CPP_ENUM_H__
#define GOOGLE_PROTOBUF_COMPILER_CPP_ENUM_H__
#include <string>
#include <google/protobuf/descriptor.h>
namespace google {
namespace protobuf {
namespace io {
class Printer; // printer.h
}
}
namespace protobuf {
namespace compiler {
namespace cpp {
class EnumGenerator {
public:
// See generator.cc for the meaning of dllexport_decl.
explicit EnumGenerator(const EnumDescriptor* descriptor,
const string& dllexport_decl);
~EnumGenerator();
// Header stuff.
// Generate header code defining the enum. This code should be placed
// within the enum's package namespace, but NOT within any class, even for
// nested enums.
void GenerateDefinition(io::Printer* printer);
// For enums nested within a message, generate code to import all the enum's
// symbols (e.g. the enum type name, all its values, etc.) into the class's
// namespace. This should be placed inside the class definition in the
// header.
void GenerateSymbolImports(io::Printer* printer);
// Source file stuff.
// Generate code that initializes the global variable storing the enum's
// descriptor.
void GenerateDescriptorInitializer(io::Printer* printer, int index);
// Generate non-inline methods related to the enum, such as IsValidValue().
// Goes in the .cc file.
void GenerateMethods(io::Printer* printer);
private:
const EnumDescriptor* descriptor_;
string classname_;
string dllexport_decl_;
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(EnumGenerator);
};
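// A hypothetical caller (sketch only; the real call sites live elsewhere in
// the C++ code generator) would drive an EnumGenerator roughly like this,
// given an io::Printer wrapping each output file:
//   EnumGenerator generator(enum_descriptor, "");  // no dllexport_decl
//   generator.GenerateDefinition(header_printer);            // in the .pb.h
//   generator.GenerateDescriptorInitializer(cc_printer, 0);  // in the .pb.cc
//   generator.GenerateMethods(cc_printer);                   // in the .pb.cc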
} // namespace cpp
} // namespace compiler
} // namespace protobuf
} // namespace google
#endif // GOOGLE_PROTOBUF_COMPILER_CPP_ENUM_H__


@ -0,0 +1,226 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc.
// http://code.google.com/p/protobuf/
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
#include <google/protobuf/compiler/cpp/cpp_enum_field.h>
#include <google/protobuf/compiler/cpp/cpp_helpers.h>
#include <google/protobuf/io/printer.h>
#include <google/protobuf/wire_format_inl.h>
#include <google/protobuf/stubs/strutil.h>
namespace google {
namespace protobuf {
namespace compiler {
namespace cpp {
using internal::WireFormat;
namespace {
// TODO(kenton): Factor out a "SetCommonFieldVariables()" to get rid of
// repeated code between this and the other field types.
void SetEnumVariables(const FieldDescriptor* descriptor,
map<string, string>* variables) {
const EnumValueDescriptor* default_value = descriptor->default_value_enum();
(*variables)["name"] = FieldName(descriptor);
(*variables)["type"] = ClassName(descriptor->enum_type(), true);
(*variables)["default"] = SimpleItoa(default_value->number());
(*variables)["index"] = SimpleItoa(descriptor->index());
(*variables)["number"] = SimpleItoa(descriptor->number());
(*variables)["classname"] = ClassName(FieldScope(descriptor), false);
(*variables)["tag_size"] = SimpleItoa(
WireFormat::TagSize(descriptor->number(), descriptor->type()));
}
} // namespace
// ===================================================================
EnumFieldGenerator::
EnumFieldGenerator(const FieldDescriptor* descriptor)
: descriptor_(descriptor) {
SetEnumVariables(descriptor, &variables_);
}
EnumFieldGenerator::~EnumFieldGenerator() {}
void EnumFieldGenerator::
GeneratePrivateMembers(io::Printer* printer) const {
printer->Print(variables_, "int $name$_;\n");
}
void EnumFieldGenerator::
GenerateAccessorDeclarations(io::Printer* printer) const {
printer->Print(variables_,
"inline $type$ $name$() const;\n"
"inline void set_$name$($type$ value);\n");
}
void EnumFieldGenerator::
GenerateInlineAccessorDefinitions(io::Printer* printer) const {
printer->Print(variables_,
"inline $type$ $classname$::$name$() const {\n"
" return static_cast< $type$ >($name$_);\n"
"}\n"
"inline void $classname$::set_$name$($type$ value) {\n"
" GOOGLE_DCHECK($type$_IsValid(value));\n"
" _set_bit($index$);\n"
" $name$_ = value;\n"
"}\n");
}
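// For example, a hypothetical field "optional Color color = 3;" declared as
// the first field of message Foo (namespace qualification of Color omitted)
// would expand roughly to:
//   inline Color Foo::color() const {
//     return static_cast< Color >(color_);
//   }
//   inline void Foo::set_color(Color value) {
//     GOOGLE_DCHECK(Color_IsValid(value));
//     _set_bit(0);
//     color_ = value;
//   }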
void EnumFieldGenerator::
GenerateClearingCode(io::Printer* printer) const {
printer->Print(variables_, "$name$_ = $default$;\n");
}
void EnumFieldGenerator::
GenerateMergingCode(io::Printer* printer) const {
printer->Print(variables_, "set_$name$(from.$name$());\n");
}
void EnumFieldGenerator::
GenerateInitializer(io::Printer* printer) const {
printer->Print(variables_, ",\n$name$_($default$)");
}
void EnumFieldGenerator::
GenerateMergeFromCodedStream(io::Printer* printer) const {
printer->Print(variables_,
"int value;\n"
"DO_(::google::protobuf::internal::WireFormat::ReadEnum(input, &value));\n"
"if ($type$_IsValid(value)) {\n"
" set_$name$(static_cast< $type$ >(value));\n"
"} else {\n"
" mutable_unknown_fields()->AddField($number$)->add_varint(value);\n"
"}\n");
}
void EnumFieldGenerator::
GenerateSerializeWithCachedSizes(io::Printer* printer) const {
printer->Print(variables_,
"DO_(::google::protobuf::internal::WireFormat::WriteEnum("
"$number$, this->$name$(), output));\n");
}
void EnumFieldGenerator::
GenerateByteSize(io::Printer* printer) const {
printer->Print(variables_,
"total_size += $tag_size$ +\n"
" ::google::protobuf::internal::WireFormat::EnumSize(this->$name$());\n");
}
// ===================================================================
RepeatedEnumFieldGenerator::
RepeatedEnumFieldGenerator(const FieldDescriptor* descriptor)
: descriptor_(descriptor) {
SetEnumVariables(descriptor, &variables_);
}
RepeatedEnumFieldGenerator::~RepeatedEnumFieldGenerator() {}
void RepeatedEnumFieldGenerator::
GeneratePrivateMembers(io::Printer* printer) const {
printer->Print(variables_, "::google::protobuf::RepeatedField<int> $name$_;\n");
}
void RepeatedEnumFieldGenerator::
GenerateAccessorDeclarations(io::Printer* printer) const {
printer->Print(variables_,
"inline const ::google::protobuf::RepeatedField<int>& $name$() const;\n"
"inline ::google::protobuf::RepeatedField<int>* mutable_$name$();\n"
"inline $type$ $name$(int index) const;\n"
"inline void set_$name$(int index, $type$ value);\n"
"inline void add_$name$($type$ value);\n");
}
void RepeatedEnumFieldGenerator::
GenerateInlineAccessorDefinitions(io::Printer* printer) const {
printer->Print(variables_,
"inline const ::google::protobuf::RepeatedField<int>&\n"
"$classname$::$name$() const {\n"
" return $name$_;\n"
"}\n"
"inline ::google::protobuf::RepeatedField<int>*\n"
"$classname$::mutable_$name$() {\n"
" return &$name$_;\n"
"}\n"
"inline $type$ $classname$::$name$(int index) const {\n"
" return static_cast< $type$ >($name$_.Get(index));\n"
"}\n"
"inline void $classname$::set_$name$(int index, $type$ value) {\n"
" GOOGLE_DCHECK($type$_IsValid(value));\n"
" $name$_.Set(index, value);\n"
"}\n"
"inline void $classname$::add_$name$($type$ value) {\n"
" GOOGLE_DCHECK($type$_IsValid(value));\n"
" $name$_.Add(value);\n"
"}\n");
}
void RepeatedEnumFieldGenerator::
GenerateClearingCode(io::Printer* printer) const {
printer->Print(variables_, "$name$_.Clear();\n");
}
void RepeatedEnumFieldGenerator::
GenerateMergingCode(io::Printer* printer) const {
printer->Print(variables_, "$name$_.MergeFrom(from.$name$_);\n");
}
void RepeatedEnumFieldGenerator::
GenerateInitializer(io::Printer* printer) const {
// Not needed for repeated fields.
}
void RepeatedEnumFieldGenerator::
GenerateMergeFromCodedStream(io::Printer* printer) const {
printer->Print(variables_,
"int value;\n"
"DO_(::google::protobuf::internal::WireFormat::ReadEnum(input, &value));\n"
"if ($type$_IsValid(value)) {\n"
" add_$name$(static_cast< $type$ >(value));\n"
"} else {\n"
" mutable_unknown_fields()->AddField($number$)->add_varint(value);\n"
"}\n");
}
void RepeatedEnumFieldGenerator::
GenerateSerializeWithCachedSizes(io::Printer* printer) const {
printer->Print(variables_,
"DO_(::google::protobuf::internal::WireFormat::WriteEnum("
"$number$, this->$name$(i), output));\n");
}
void RepeatedEnumFieldGenerator::
GenerateByteSize(io::Printer* printer) const {
printer->Print(variables_,
"total_size += $tag_size$ * $name$_size();\n"
"for (int i = 0; i < $name$_size(); i++) {\n"
" total_size += ::google::protobuf::internal::WireFormat::EnumSize(\n"
" this->$name$(i));\n"
"}\n");
}
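// For example, a hypothetical repeated enum field with field number 4 (a
// one-byte tag) containing the values {0, 1, 300} contributes 3 * 1 tag bytes
// plus varint sizes 1 + 1 + 2, i.e. 7 bytes, to total_size.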
} // namespace cpp
} // namespace compiler
} // namespace protobuf
} // namespace google
