* Add conformance test for nested listvalue
* Fix upb for parsing repeated Value/ListValue
* Add failed repeated ListValue conformance test into php failure list
* Bump target frameworks from netcoreapp1.0 to netcoreapp2.2.
Move global.json up to root of repo, change SDK ver to 2.2.100
Change .net core sdk in dockerfile for kokoro to ver 2.2.100
* Re-add curl install
* Change all exe target to 2.1
* Fix incorrect versions in global.json and Dockerfile
* Downgrade version to 2.1 to match exe targets
* introduce separate testing Dockerfile for C#
* revert changes to the shared Dockerfile
* use netcoreapp2.1 for C# conformance tests
* use language specific dockerfile for testing C#
* Edit compatibility tests script to use parameters instead of file copies
* install dotnet SDK on windows before running the tests
* update csharp_EXTRA_DIST
* Improve C# serialization performance of repeated fields for primitives.
* Changes based on feedback.
* Change compatibility tests to check that float, bool and double are fixed
* Changes based on feedback.
* In the compute methods use the newly created constants
* Fix #5513
* Added tests for invalid lengths when reading strings and bytes.
Added test for reading tags with invalid wire types in unknown field set.
Changed invalid length check in ReadString to match the one in ReadBytes
* Modify how end tags are encountered in merge code (compiler)
* Modify how end tags are encountered in merge code (generated)
* Modify how end tags are encountered in merge code (library)
* Regenerate generated code through generate_descriptor_proto.sh
Even though the comments were indented to appear to go with the jspb
case/field, protoc doesn't collect comments like that, so these "hanging"
comments actually "attach" to the next thing added to each one. Looking at
https://github.com/protocolbuffers/protobuf/pull/5566 you can see where
the generated code picked up the comment on the wrong field.
* Down-integrate internal changes to github.
* fix python conformance test
* fix csharp conformance test
* add back java map_lite_test.proto's optimize for option
* fix php conformance test
* Increase C# default recursion limit to 100
This matches the Java and C++ defaults.
* Change compatibility tests to use execution-time default recursion limit
This way the same tests should pass against all versions, even
if the recursion limit changes. (The tests will be testing whether
different messages work, admittedly - but that's probably fine.)
This is primarily for access to comments, which would be expected to be available in a protoc plugin.
The implementation has two fiddly aspects:
- We use a Lazy<T> to avoid building the map before cross-linking. An alternative would be to cross-link at the end of the constructor, and remove the calls to CrossLink elsewhere. This would be generally better IMO, but deviate from the Java code. (See the sketch after this list.)
- The casts to IReadOnlyList<DescriptorBase> are unfortunate. They'll always work, because these lists are always ReadOnlyCollection<T> for a descriptor type... but we can't use IList<DescriptorBase> as that's not covariant, and it's annoyingly fiddly to change the field to be of type ReadOnlyCollection<T>.
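A minimal sketch of the Lazy<T> approach described above, using hypothetical names (the real code maps descriptor objects to their declaration/comment information; this sketch uses strings just to stay self-contained):

    using System;
    using System.Collections.Generic;
    using System.Threading;

    public sealed class FileDescriptorSketch
    {
        // The map is only built on first access, which happens after
        // cross-linking has completed, so construction order doesn't matter.
        private readonly Lazy<IDictionary<string, string>> declarations;

        public FileDescriptorSketch()
        {
            declarations = new Lazy<IDictionary<string, string>>(
                CreateDeclarationMap, LazyThreadSafetyMode.ExecutionAndPublication);
        }

        private IDictionary<string, string> CreateDeclarationMap()
        {
            // Walk the (already cross-linked) descriptor tree here.
            return new Dictionary<string, string>();
        }

        public IDictionary<string, string> Declarations => declarations.Value;
    }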
This performs more testing for field descriptors built from byte
strings too, but that's mostly incidental. The chief intent is to
check that cross-linking occurs.
* Give a unique category to each test.
This change introduces a TestCategory enum to ConformanceRequest. Existing tests
are divided into three categories: binary format tests, JSON format tests and JSON
format (ignore unknown when parsing) tests. For the first two categories, there
is no change to existing testee programs. For tests in the last category, testee
programs should either enable ignoring unknown fields during JSON parsing or skip
the test.
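As a hedged illustration (not the exact testee code), a C# testee can pick its JSON parser from the request's category; the generated Conformance namespace and the enum value names are assumptions here:

    using Conformance;        // generated from conformance.proto (assumed namespace)
    using Google.Protobuf;

    static class ConformanceJson
    {
        // Only the "ignore unknown when parsing" category enables lenient
        // parsing; the other categories keep the default, strict JSON parser.
        public static JsonParser ParserFor(ConformanceRequest request) =>
            request.TestCategory == TestCategory.JsonIgnoreUnknownParsingTest
                ? new JsonParser(JsonParser.Settings.Default.WithIgnoreUnknownFields(true))
                : JsonParser.Default;
    }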
* Fix python test
* Fix java
* Fix csharp
* Update document
* Update csharp generated code
With this fix, Unity using IL2CPP should work with one of two
approaches:
- Call `FileDescriptor.ForceReflectionInitialization<T>` for every
enum present in generated code (including oneof case enums)
- Ensure that IL2CPP uses the same code for int and any int-based
enums
The former approach is likely to be simpler, unless IL2CPP changes
its default behavior. We *could* potentially generate the code
automatically, but that makes me slightly uncomfortable in terms of
generating code that's only relevant in one specific scenario. It
would be reasonably easy to write a tool (separate from protoc) to
generate the code required for any specific set of assemblies, so
that Unity users can include it in their application. We can always
decide to change to generate it automatically later.
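A minimal sketch of the former approach, with a stand-in enum instead of real generated types:

    using Google.Protobuf.Reflection;

    // Stand-in for an enum that would normally come from generated code
    // (including oneof case enums such as MyMessage.SomethingOneofCase).
    enum SampleColor { Unspecified = 0, Red = 1 }

    static class UnityAotSupport
    {
        // Call once at startup so IL2CPP generates the int-based code paths
        // that the library's reflection over enums relies on.
        public static void Initialize()
        {
            FileDescriptor.ForceReflectionInitialization<SampleColor>();
            // Repeat for every enum used in your generated code.
        }
    }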
The SampleEnumMethod method was previously only called via
reflection, so the Unity linker thought it could be removed. Ditto
the parameterless constructor in ReflectionHelper.
This PR should avoid that issue, reducing the work needed by
customers to use Google.Protobuf from Unity.
For oneofs, to get the case, we need to call the property that
returns the enum value. We really want it as an int, and modern
runtimes allow us to create a delegate which returns an int from the
method. (I suspect that the MS runtime has always allowed that.)
Old versions of Mono (e.g. used by Unity3d) don't allow that, so we
have to convert the enum value to an int via boxing. It's ugly, but
it should work.
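A minimal sketch of the two strategies (not the library's exact code; the helper name and shape are illustrative):

    using System;
    using System.Reflection;

    static class OneofCaseAccessor
    {
        // TMessage is the message type declaring the oneof case property.
        public static Func<TMessage, int> Create<TMessage>(PropertyInfo caseProperty)
        {
            MethodInfo getter = caseProperty.GetGetMethod();
            try
            {
                // Modern runtimes allow binding an enum-returning getter to a
                // delegate that returns int.
                return (Func<TMessage, int>)Delegate.CreateDelegate(
                    typeof(Func<TMessage, int>), getter);
            }
            catch (ArgumentException)
            {
                // Older Mono (e.g. as used by Unity3d) rejects that binding, so
                // fall back to invoking the getter and converting the boxed
                // enum value to an int.
                return message => Convert.ToInt32(getter.Invoke(message, null));
            }
        }
    }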
This should work on Unity, Mono and .NET 3.5 as far as I'm aware.
It won't work on platforms where reflection itself is prohibited,
but that's a non-starter basically.
This will allow SourceLink as per #4179, and mean that we can use C#
7.0 language features in the library (but not in generated code).
This does not affect which platforms we're *targeting*, so end users
won't see any difference.
It would be nice to update to 2.1.4, but AppVeyor's "Visual Studio
2017" environment is only 2.0.3.
By default, unknown fields are preserved when parsing. To discard
them, use a parser configured to do so:
var parser = MyMessage.Parser.WithDiscardUnknownFields(true);
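For example (MyMessage is a hypothetical generated message; any generated message works the same way):

    using Google.Protobuf;

    static class DiscardUnknownsExample
    {
        // Parses the payload, dropping unknown fields instead of preserving
        // them for round-tripping.
        public static MyMessage ParseWithoutUnknowns(byte[] data)
        {
            var parser = MyMessage.Parser.WithDiscardUnknownFields(true);
            return parser.ParseFrom(data);
        }
    }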
unittest_proto3 had been changed in a very backward-incompatible
way which was never going to work with C# as it imports proto2 messages.
This is now a copy of the old file, but with a package name change for
compatibility with the remaining files in src/google/protobuf.
The other moves are for files that are only used by C#.
If messages A and B have the same oneof case, which is a message
type, and we merge B into A, those sub-messages should be merged.
Fixes #3200.
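Illustrative only, with hypothetical generated types (Wrapper has a oneof whose active case is the message field Details):

    var a = new Wrapper { Details = new Details { Name = "first" } };
    var b = new Wrapper { Details = new Details { Id = 42 } };
    a.MergeFrom(b);
    // With this fix the sub-messages are merged, so a.Details ends up with
    // both Name == "first" and Id == 42.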
Note that I haven't regenerated all the code, as some of the protos
have been changed, breaking generation.
Previously we only rejected the tag if the tag itself was 0, i.e.
field=0, type=varint. The type doesn't matter: field 0 is always
invalid.
This removes the last of the C# conformance failures.
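For reference, a tag packs the field number and wire type as (fieldNumber << 3) | wireType, so only the varint wire type makes field 0 produce a literal zero tag:

    // Field 0 tags for a few wire types; only the first is zero, which is why
    // the old "reject tag == 0" check missed the others.
    uint varintTag = (0u << 3) | 0;   // 0 - was already rejected
    uint fixed64Tag = (0u << 3) | 1;  // 1 - previously slipped through
    uint lengthTag = (0u << 3) | 2;   // 2 - previously slipped through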
* Add php_generic_services option
* Generate PHP generic services
* Respect namespaces for generated PHP services
* Test PHP generated services
* Rename PHP generator service method doc comment function
* Correct phpdoc service method case
* Test namespaced PHP generic services
* Always use the FQCN for PHP generic service input/output
* Add generated_service_test to php test.sh
* Add php service test protos to CI
* Add php service files to php_EXTRA_DIST
* Use Interface suffix for php generic services
Note that the compatibility tests have had to change as well, to
cope with internal changes. (The test project has access to
internals in the main project.)
Fixes #3209.
* Add new file option php_namespace.
Use this option to change the namespace of php generated classes.
Default is empty. When this option is empty, the package name will be
used for determining the namespace.
* Uncomment commented tests
* Revert gdb test change
* Update csharp descriptor.
* Add test for empty php_namespace.
This has one important packaging change: the netstandard version now
depends (implicitly) on netstandard1.6.1 rather than on individual
packages. This is the preferred style of dependency, and shouldn't
affect any users - see http://stackoverflow.com/questions/42946951
for details.
The tests are still NUnit, but NUnit doesn't support "dotnet test"
yet; the test project is now an executable using NUnitLite. (When
NUnit supports dotnet test, we can adapt to it.)
Note that the project will now only work in Visual Studio 2017 (and
Visual Studio Code, and from the command line with the .NET Core
1.0.0 SDK); Visual Studio 2015 does *not* support this project file
format.
* Changing DOTNET35 framework symbols in preprocessor directives to the default built-in value of NET35.
* Adding extension method StreamExtension.CopyTo for .NET 3.5 because it didn’t exist until .NET 4, and adding associated unit tests.
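A minimal sketch of such an extension method (the class name and buffer size here are illustrative, not necessarily what the library uses):

    using System.IO;

    internal static class StreamExtensions
    {
        // .NET 3.5 has no Stream.CopyTo, so copy manually in chunks.
        public static void CopyTo(this Stream source, Stream destination)
        {
            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                destination.Write(buffer, 0, bytesRead);
            }
        }
    }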
* Down-integrate internal changes to github.
* Update conformance test failure list.
* Explicitly import used class in nano test to avoid random test failures.
* Update _GNUC_VER to use the correct implementation of atomic operation
on Mac.
* maps_test.js: check whether Symbol is defined before using it (#2524)
Symbol is not yet available on older versions of Node.js and so this
test fails with them. This change just directly checks whether Symbol is
available before we try to use it.
* Added well_known_types_embed.cc to CLEANFILES so that it gets cleaned up
* Updated Makefile.am to fix out-of-tree builds
* Added Bazel genrule for generating well_known_types_embed.cc
In pull request #2517 I made this change for the CMake and autotools
builds but forgot to do it for the Bazel build.
* Update _GNUC_VER to use the correct implementation of atomic operation on Mac.
* Add new js file in extra dist.
* Bump version number to 3.2.0
* Fixed issue with autoloading - Invalid paths (#2538)
* PHP fix int64 decoding (#2516)
* fix int64 decoding
* fix int64 decoding + tests
* Fix int64 decoding on 32-bit machines.
* Fix warning in compiler/js/embed.cc
embed.cc: In function ‘std::string CEscape(const string&)’:
embed.cc:51:32: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i = 0; i < str.size(); ++i) {
^
* Fix include in auto-generated well_known_types_embed.cc
Restore include style fix (e3da722) that has been trampled by
auto-generation of well_known_types_embed.cc
* Fixed cross compilations with the Autotools build
Pull request #2517 caused cross compilations to start failing, because
the js_embed binary was being built to run on the target platform
instead of on the build machine. This change updates the Autotools build
to use the AX_PROG_CXX_FOR_BUILD macro to find a suitable compiler for
the build machine and always use that when building js_embed.
* Minor fix for autocreated object repeated fields and maps.
- If setting/clearing a repeated field/map that holds objects, check the class
before checking the autocreator.
- Just to be paranoid, don’t mutate within copy/mutableCopy for the autocreated
classes to ensure there is less chance of issues if someone does something
really crazy threading-wise.
- Some more tests for the internal AutocreatedArray/AutocreatedDictionary
classes to ensure things are working as expected.
- Add Xcode 8.2 to the full_mac_build.sh supported list.
* Fix generation of extending nested messages in JavaScript (#2439)
* Fix generation of extending nested messages in JavaScript
* Added missing test8.proto to build
* Fix generated code when there is no namespace but there is enum definition.
* Decoding unknown field should succeed.
* Add embed.cc in src/Makefile.am to fix dist check.
* Fixed "make distcheck" for the Autotools build
To make the test pass I needed to fix out-of-tree builds and update
EXTRA_DIST and CLEANFILES.
* Remove redundant embed.cc from src/Makefile.am
* Update version number to 3.2.0-rc.1 (#2578)
* Change protoc-artifacts version to 3.2.0-rc.1
* Update version number to 3.2.0rc2
* Update change logs for 3.2.0 release.
* Update php README
* Update upb, fixes some bugs (including a hash table problem). (#2611)
* Update upb, fixes some bugs (including a hash table problem).
* Ruby: added a test for the previous hash table corruption.
Verified that this triggers the bug in the currently released
version.
* Ruby: bugfix for SEGV.
* Ruby: removed old code for dup'ing defs.
* Reverting deployment target to 7.0 (#2618)
The Protobuf library doesn’t require the 7.1 deployment target so
reverting it back to 7.0
* Fix typo that breaks builds on big-endian (#2632)
* Bump version number to 3.2.0
We explicitly don't do this when targeting .NET 3.5, where the
interface doesn't exist.
No implementation is required, as we're already implementing
everything we need for IList<T>.
This consists of:
- Changing the codegen for the fixed set of options protos, to parse unknown fields instead of skipping them
- Add a new CustomOptions type in the C# support library
- Expose CustomOptions properties from the immutable proto wrappers in the support library
Only single-value options are currently supported, and fetching options values requires getting the type right
and knowing the field number. Both of these can be addressed at a later time.
Fixes #2143, at least as a first pass.
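A hedged usage sketch: assuming a custom string option declared on MessageOptions at extension field number 50000 (an assumption for illustration), reading it from a message descriptor looks roughly like this:

    using Google.Protobuf.Reflection;

    static class CustomOptionsExample
    {
        // Both the field number (50000 is assumed here) and the value type
        // must be known to the caller.
        public static string MessageOptionOrNull(MessageDescriptor descriptor) =>
            descriptor.CustomOptions.TryGetString(50000, out string value) ? value : null;
    }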
Fixes #2088.
We now have separate tests for netcoreapp and net45 to test the two branches here.
(netstandard10 doesn't have MemoryStream.GetBuffer)
Although most of this library doesn't have any async functionality,
this feels like a natural place to locally add it.
* Factored Conformance test messages into shared test schema.
* Updated benchmarks to use new proto3 message locations.
* Fixed include path.
* Conformance: fixed include of Python test messages.
* Make maven in Rakefile use --batch-mode.
* Revert changes to benchmarks.
On second thought I think a separate schema for
CPU benchmarking makes sense.
* Try regenerating C# protos for new test protos.
* Removed benchmark messages from test proto.
* Added Jon Skeet's fixes for C#.
* Removed duplicate/old test messages C# file.
* C# fixes for test schema move.
* Fixed C# to use the correct TestAllTypes message.
* Fixes for Objective C test schema move.
* Added missing EXTRA_DIST file.
Swift generators should default to CamelCasing the proto package and prefixing
symbols with that, but this option allows developers to override that behavior
with something custom if they desire.
Fixes https://github.com/google/protobuf/issues/1833
This seems to be necessary to prevent warnings in some compiler
configurations, particularly for tag numbers that are too large to fit
in a signed 32-bit int.
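For a worked example of the overflow: the largest permitted field number is 536870911 (2^29 - 1), and its length-delimited tag no longer fits in a signed 32-bit int:

    // (536870911 << 3) | 2 = 4294967290, which is larger than int.MaxValue
    // (2147483647), so it has to be written as a uint (or wrapped in an
    // unchecked cast) to avoid signed-overflow warnings.
    const uint LargeTag = (536870911u << 3) | 2;   // 4294967290
    int largeTagAsInt = unchecked((int)LargeTag);  // -6 when reinterpreted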
This change fixes the following Chromium presubmit error:
third_party/protobuf/csharp/src/Google.Protobuf.Test/project.json could
not be parsed: Expecting property name: line 25 column 3 (char 482)
This affects cases with leading capital letters.
This breaks compatibility with previous C# releases, but
fixes compatibility with other implementations.
See #2278 for details.
This check adds a few constraints on the way to build a project when we have
a proto file which imports another one. In particular, on projects which
build both C# and Java, it's easy to end up with exceptions like
Expected: included.proto but was src/main/protobuf/included.proto
A user may work around this issue, but it may add unnecessary constraints
on the layout of the project.
According to f3504cf3b1 (diff-ecb0b909ed572381a1c8d1994f09a948R309), getting
rid of this check has already been considered, for similar reasons, and
because it doesn't exist in the Java code.
This should fix the failures in the conformance tests - although
it highlights the problem that we need to do this when changing
the conformance.proto file...
We now just perform the optimization within AddRange itself.
This is a breaking change in terms of "drop in the DLL", but is
source compatible, which should be fine.
This doesn't currently change the ordering in the implementation, but allows us to do so in the future.
We also need to change
https://developers.google.com/protocol-buffers/docs/reference/csharp-generated#singular
which states "Finally, unlike Dictionary<TKey, TValue>, MapField<TKey, TValue> preserves insertion order of entries."
(We can just remove that sentence, I think.)
Also added a standalone formatter test, for confidence.
Have validated that undoing the change in 835fb947 breaks the tests
(i.e. we are still testing that the change is required).
(There are documentation changes and new fields in descriptor.proto that have resulted
in changes to the serialized descriptor, but no breaking changes for C#.)
Overview of changes:
- A new C#-specific command-line option, legacy_enum_values to revert to the old behavior
- When legacy_enum_values isn't specified, we strip the enum name as a prefix, and PascalCase the value name
- A new attribute within the C# code so that we can always tell the original in-proto name
Regenerating the C# code with legacy_enum_values leads to code which still compiles and works - but
there's more still to do.
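A hedged sketch of what the generated code now looks like for a hypothetical proto enum Color { COLOR_UNKNOWN = 0; COLOR_BRIGHT_RED = 1; }:

    using pbr = Google.Protobuf.Reflection;

    // The "COLOR_" prefix (derived from the enum name) is stripped and the
    // remainder is PascalCased, while the new attribute records the original
    // in-proto name.
    public enum Color
    {
        [pbr::OriginalName("COLOR_UNKNOWN")] Unknown = 0,
        [pbr::OriginalName("COLOR_BRIGHT_RED")] BrightRed = 1,
    }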
I've moved both protoc.exe and the proto files out of Google.Protobuf.
The .proto files aren't a slam-dunk, but it feels like they belong with protoc as you'd *use* them with protoc.
It's not clear to me whether we really need both an x86 and x64 version of protoc.exe, as x86 would work on 64-bit Windows anyway. Discuss :)
This makes no externally visible behavioral changes. Internally and non-behaviorally:
- We use a field (compiler-generated) to store the JsonName to avoid recomputing it repeatedly
- The documentation for JsonName is updated to reflect the meaning better
- Readonly autoprops and expression-bodied properties used where possible
This detects:
- An end-group tag with the wrong field number (doesn't match the start-group field)
- An end-group tag with no preceding start-group tag
Fixes issue #688.
This is a start to fixing issue #1212. It won't help for test protos,
conformance etc, but it will definitely be better than nothing, and
would have highlighted a change in descriptor.proto which broke C#
earlier.