Also, eliminate the 'atom' field of TPpToken.
Parsing a real 300-line shader, through to making the AST, is about 10% faster.
Memory is slightly reduced (< 1%).
The whole google-test suite, inclusive of all testing overhead, SPIR-V generation,
etc., runs 3% faster.
Since this is a code *simplification* that also yields a performance improvement, I'm not
going to invest much more in measuring the performance than this. The PP code is
simply now in a better state for seeing how to further rationalize/improve it.
Removed the preprocessor memory pool.
Removed extra copies and unnecessary allocations of objects related to the ones
that were using the pool.
Replaced some allocated pointers with objects, generally using more
modern techniques. There end up being fewer memory allocations/deletions to get right.
The overall combined effect of all the changes is to use slightly less memory and
run slightly faster (< 1% for both, but noticeable).
As part of simplifying the code base, this change makes PP symbol tracking
easier to see; I suspect there is an even bigger run-time simplification to be
made there.
Implement token pasting as per the C++ specification, within the current
style of the PP code.
Pasting of non-identifiers (e.g., turning 12 ## 10 into the numeral 1210) is not yet covered;
that should be a simple incremental change built on this one.
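As a hedged illustration of identifier pasting (the macro and names below are invented for this example, not taken from the tests):

    #define PASTE(a, b) a ## b
    #define HighF 3.0
    float x = PASTE(High, F);   // a ## b pastes "High" and "F" into the single identifier HighF

The pasted result is a single identifier token, which is then subject to normal macro expansion (here yielding 3.0).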
Addresses issue #255.
Unlike other qualifiers, HLSL allows "sample" to be either a qualifier keyword or an
identifier (e.g., a variable or function name).
A fix to allow this was made a while ago, but that fix was insufficient when 'sample'
was used in an expression. The problem was around the initial ambiguity between:
sample float a; // "sample" is part of a fully specified type
and
sample.xyz; // "sample" is an identifier (a variable name) in a dot expression
Both start the same way. The "sample" was being accepted as a qualifier before enough
further parsing had been done to determine that this was not a declaration after all. This
consumed the token, leaving it unavailable for its real purpose.
Now, when accepting a fully specified type, the token is pushed back onto the stack if
what follows turns out not to be a fully specified type. This leaves the token available
for subsequent uses.
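For illustration, a minimal sketch (not the literal test content) of a statement that begins with "sample" yet is not a declaration:

    float4 main() : SV_Target
    {
        float4 sample;                        // "sample" as a variable name
        sample.xyz = float3(0.0, 0.0, 0.0);   // starts like "sample float a;" but is a dot expression
        sample.w = 1.0;
        return sample;
    }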
Changed the "hlsl.identifier.sample.frag" test to exercise this situation, distilled
down from a production shader.
This change is helpful for integration with Chromium, which recently
added a compiler option to warn when compiling any source files that
use extended characters. In this case the offending character was a
single Unicode dash in a comment.
If some DCE is performed, such as removing dead functions, then even
if we are NOT stripping debug info, we still must remove the debug
opcodes that refer to the now-dead IDs.
Also, this adds a small change to perform no ID remapping if none
is requested, making spirv-remap a proper no-op when no options
are given.
This wasn't needed until the recent generalization of "main" to "entry point",
so this makes some previously HLSL-specific code generic, for GLSL functional correctness.
In file included from C:/Projects/glslang/glslang/MachineIndependent/glslang.y:59:0:
glslang/MachineIndependent/ParseHelper.h:276:24: error: 'va_list' has not been declared
va_list args);
^~~~~~~
This PR implements recursive type flattening. For example, an array of structs of other structs
can be flattened to individual member variables at the shader interface.
This is sufficient for many purposes, e.g., uniforms containing opaque types, but is not sufficient
for geometry shader arrayed inputs. That will be handled separately with structure splitting,
which is not implemented by this PR. In the meantime, that case is detected and triggers an error.
The recursive flattening extends the following three aspects of single-level flattening:
- Flattening of structures to individual members with names such as "foo[0].samp[1]";
- Turning constant references to the nested composite type into a reference to a particular
flattened member;
- Shadow copies between arrays of flattened members and the nested composite type.
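As a hedged sketch (the type and member names below are invented, not taken from the tests), the kind of nested composite that can now be flattened at the interface:

    struct Inner { Texture2D samp[2]; };   // opaque members force flattening
    struct Outer { Inner inner[3]; };
    uniform Outer foo[2];                  // may become members such as "foo[0].inner[1].samp[0]"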
Previous single-level flattening only flattened at the shader interface, and that is unchanged by this PR.
Internally, shadow copies are made as needed, such as when the type is passed to a function.
Also, the reasons for flattening are unchanged: uniforms containing opaque types are flattened,
as are interface struct types. (The latter will change with structure splitting.)
One existing test changes: hlsl.structin.vert, which did in fact contain a nested composite type to be
flattened.
Two new tests are added: hlsl.structarray.flatten.frag and hlsl.structarray.flatten.geom (the latter
currently issues an error until structure splitting is online).
The process of arriving at the individual member from chained postfix expressions is more complex than
it was with one level. See the large-ish comment above HlslParseContext::flatten() for details.
PR #577 addresses most but not all of the intrinsic promotion problems.
This PR resolves all known cases in the remainder.
Interlocked ops need special promotion rules because at the time
of function selection, the first argument has not been converted
to a buffer object. It's just an int or uint, but you don't want
to convert THAT argument, because that implies converting the
buffer object itself. Rather, you can convert other arguments,
but want to stay in the same "family" of functions. E.g., if
the first interlocked arg is a uint, use only the uint family,
never the int family; the other args can be converted as you please.
This PR allows making such opcode- and arg-specific choices by
passing the op and arg to the convertible lambda. The code in
the new test "hlsl.promote.atomic.frag" would not compile without
this change, but it must compile.
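A hedged sketch of the kind of code this enables (assumed here, not necessarily the literal content of hlsl.promote.atomic.frag):

    RWBuffer<uint> g_buf;                  // first interlocked arg selects the uint family
    float4 main() : SV_Target
    {
        int val = 1;
        InterlockedAdd(g_buf[0], val);     // val may be converted to uint; g_buf[0] itself must not be
        return float4(0.0, 0.0, 0.0, 0.0);
    }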
Also, it provides better handling of downconversions (to "worse"
types), which are permitted in HLSL. The existing method of
selecting upconversions is unchanged, but if that doesn't find
any valid ones, then it will allow downconversions. In effect
this always uses an upconversion if there is one.
The recently added entry point renaming test referred to
the test source file hlsl.entry.rename.frag via a relative directory.
Change it to be consistent with other tests: assume test
sources are in the current directory.