A block name cannot alias with any name in its own scope,
and it cannot alias with any other "global" name.
To solve this, we need to complicate the name cache updates a bit,
introducing a "primary" namespace and a "secondary" namespace.
This is required to avoid relying on common subexpression elimination
in compilers, and it generates cleaner code.
The problem case is if a complex expression is used in an access chain,
like:
Composite comp = buffer[texture(...)];
vec4 a = comp.a + comp.b + comp.c;
Before, we did not have common subexpression tracking for
OpLoad/OpAccessChain, so we easily ended up with code like:
vec4 a = buffer[texture(...)].a + buffer[texture(...)].b + buffer[texture(...)].c;
A good compiler will optimize this, but we should not rely on it, and
forcing texture(...) into a temporary also looks better.
The solution is to add a vector "implied_expression_reads", which works
similarly to expression_dependencies. We also need an extra mechanism in
to_expression which lets us skip expression read checking and do it
later. E.g. for expr -> access chain -> load, we should only trigger
a read of expr when using the loaded expression.
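With the implied read tracking in place, the problem case above should
instead come out along these lines (_tmp is a hypothetical name for the
forced temporary):

vec4 _tmp = texture(...);
Composite comp = buffer[_tmp];
vec4 a = comp.a + comp.b + comp.c;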
HLSL just picked the variable name, which did not work as expected for
some users. Use the same logic as GLSL and set up declared_block_names,
so the actual name can be queried later.
This is a large refactor which splits out the SPIR-V parser from
Compiler and moves it into its more appropriately named Parser module.
The Parser is responsible for building a ParsedIR structure which is
then consumed by one or more compilers.
Compiler can take a ParsedIR by value (copy) or by move. This should
allow for the optimal case in both the multiple-compilation and
single-compilation scenarios.
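Roughly, the intended usage looks like this (a sketch; load_spirv() is a
placeholder for however the application obtains the binary):

#include "spirv_parser.hpp"
#include "spirv_glsl.hpp"

std::vector<uint32_t> binary = load_spirv();
spirv_cross::Parser parser(std::move(binary));
parser.parse();

// Single compilation: move the IR straight into the compiler.
spirv_cross::CompilerGLSL glsl(std::move(parser.get_parsed_ir()));
std::string source = glsl.compile();

// For multiple compilations, pass parser.get_parsed_ir() by value
// (copy) to each compiler instead of moving it.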
It'll be useful to have an "auxiliary buffer" for other builtins, e.g.
`DrawIndex` (which should be easier to implement now), or `ViewIndex`
when someone gets around to implementing multiview.
Pass this buffer to leaf functions as well.
Test that we handle this for integer textures as well.
Implement this by flattening outputs and unflattening inputs explicitly.
This allows us to pass down a single struct instead of dealing with the
insanity that would be passing down each flattened member separately.
Remove stage_uniforms_var_id.
Seems to be dead code. Naked uniforms, which this appears to have been
intended for, do not exist in SPIR-V for Vulkan. It was also unused elsewhere.
- Do not emit the set = qualifier in GLSL, even when the set is non-zero.
- Fix warning on tautological comparison.
- Expose get_buffer_block_flags as mentioned in the reflection guide
  (see the sketch below).
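A sketch of the last point, assuming the Bitset-returning variant of
the call:

spirv_cross::ShaderResources res = compiler.get_shader_resources();
for (const spirv_cross::Resource &buffer : res.storage_buffers)
{
    spirv_cross::Bitset flags = compiler.get_buffer_block_flags(buffer.id);
    bool readonly = flags.get(spv::DecorationNonWritable);
}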
SPIR-V allows entry point names to alias as long as they are used by
different stages.
Deprecate the old interface and replace it with a new one which takes
execution models into account.
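A sketch of the new interface (assuming the get_entry_points_and_stages()
and rename_entry_point() names):

// Deprecated: compiler.get_entry_points() returns names only, which is
// ambiguous when two stages share a name.
auto entry_points = compiler.get_entry_points_and_stages();
for (const auto &ep : entry_points)
{
    if (ep.execution_model == spv::ExecutionModelVertex)
        compiler.rename_entry_point(ep.name, "main_vs", ep.execution_model);
}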
Normally, a temporary declaration must dominate any use of it,
so we generally did not need to analyze the CFG for these variables,
but there is an edge case where an inliner produces:
do {
    create_temporary;
    break;
} while (0);
use_temporary;
The inside of the loop dominates the outer scope, but we cannot emit
code like this in GLSL, so make sure we hoist these temporaries outside
the "loop".
HLSL UAVs are a bit annoying because they can share block types,
so reflection becomes rather awkward. Sometimes we will need to make
some nasty fallbacks, so add a reflection interface which lets you query,
after shader compilation, which names were actually declared in the shader.
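A sketch of the query, assuming the get_remapped_declared_block_name()
name for this interface (parsed_ir as produced by the Parser refactor
above):

spirv_cross::CompilerHLSL hlsl(std::move(parsed_ir));
std::string source = hlsl.compile();

// Post-compile: ask which name was actually emitted for a UAV block.
spirv_cross::ShaderResources res = hlsl.get_shader_resources();
for (const spirv_cross::Resource &uav : res.storage_buffers)
{
    const std::string &declared = hlsl.get_remapped_declared_block_name(uav.id);
}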
We don't have a mechanism to move temporaries to their appropriate
scope, and Phi behavior is weird enough that avoiding this rather ugly
codegen would be a heroic effort :(
Support Workgroup (threadgroup) variables.
Mark if SPIRConstant is used as an array length, since it cannot be specialized.
Resolve specialized array length constants.
Support passing an array to an MSL function.
Support emitting GLSL array assignments in MSL via an array copy function.
Support for memory and control barriers.
Struct packing enhancements, including packing nested structs.
Enhancements to replacing illegal MSL variable and function names.
Add Compiler::get_entry_point_name_map() function to retrieve entry point renamings.
Remove CompilerGLSL::clean_func_name() as obsolete.
Fixes to types in bitcast MSL functions.
Add Variant::get_id() member function.
Add CompilerMSL::Options::msl_version option (see the sketch after this list).
Add numerous MSL compute tests.
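For the msl_version option mentioned above, usage would look something
like this (a sketch; the options setter has been renamed over time,
set_msl_options() is the current spelling):

spirv_cross::CompilerMSL msl(std::move(spirv_binary));
spirv_cross::CompilerMSL::Options opts;
opts.set_msl_version(2, 0); // target MSL 2.0
msl.set_msl_options(opts);
std::string source = msl.compile();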
Emit input struct assignment by assigning member by member from stage_in struct.
Map qualified member name from pointer type, not base type.
Add Compiler::expression_type_id() function, similar to expression_type().
Support BuiltInFragDepth.
Emit interface block for StorageClassUniformConstant.
Throw an exception when output or fragment input structs contain a matrix or array.
Sort dynamically created interface structs by location number instead of alphabetically.
Add Compiler::is_array() function.
This avoids the need to construct a temporary std::vector on the application side just to create a Compiler instance, in case the application itself doesn't use STL containers.
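A sketch of the pointer-based constructor this refers to
(get_spirv_words() and get_spirv_word_count() are placeholders):

const uint32_t *words = get_spirv_words();
size_t word_count = get_spirv_word_count();

// No temporary std::vector needed on the application side.
spirv_cross::CompilerGLSL glsl(words, word_count);
std::string source = glsl.compile();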
This is kinda tricky, because if we only conditionally write to a
function parameter variable, it is implicitly preserved in SPIR-V, so we
must force an in qualifier on the parameter to get the same behavior in GLSL.
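For illustration, in a hypothetical shader like:

void update(inout float v, bool cond)
{
    if (cond)
        v = 1.0;
}

when cond is false, v must keep the caller's value, so the parameter
cannot be emitted as a plain out parameter.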
spirv_msl optionally add padding and packing to allow MSL
struct members to align with SPIR-V struct alignments.
spirv_cross add convenience methods for testing Decorations.
spirv_glsl replace member_decl() function with new emit_struct_member().
Allow struct member types to be marked as packed via the DecorationCPacked decoration.
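A sketch of marking a member packed from the API side (type_id and the
member index come from the application's own reflection pass):

// Mark member 1 of the struct type as packed; spirv_msl then emits
// a packed MSL type for that member.
compiler.set_member_decoration(type_id, 1, spv::DecorationCPacked);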
Legacy GLSL targets do not support uniform buffers, and as such require
some sort of emulation. There are two alternatives: one is to represent
a uniform buffer as a uniform struct, and the other is to flatten it
into an array of primitive vector types (vec4).
Uniform structs have two disadvantages that make using them prohibitive
in some applications:
- The location assignment for struct members is arbitrary, which means
the application has to set each struct member one by one
- Some Android drivers fail to link shader programs if both vertex and
fragment shader use the same uniform struct
Because of this, we need to support flattening uniform buffers into an
array. This is not just important for legacy GLSL but is also sometimes
useful for ESSL 3.0, where some Android drivers do not have stable UBO
support.
The way flattening works is that the entire buffer is represented as a
vec4 array; each access chain is rewritten into a combination of array
accesses, swizzles, and data type constructors. Specifically:
- Extracting a vector or a scalar requires indexing into the array with
an optional swizzle, for example CB0[13].yz for reading a vec2
- Extracting a matrix or a struct requires extracting each individual
vector or struct member and then combining them into the resulting
object
- Extracting arrays is not supported, mostly because the resulting
construct is very inefficient and ESSL 1.0 does not support array
constructors.
Additionally, while we try to constant-fold each individual indexing
operation, there are cases where we have to use dynamic index
computation (specifically for indexing arrays with non-constants); so
the general form of the primitive array extraction expression is:
buffer[stride0*index0+...+strideN*indexN+offset]
where the strides and the offset are integer literals and the indices
are variables.
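As an illustration with a hypothetical layout (CB0 and i are made-up
names, i being a non-constant index):

uniform vec4 CB0[16];

vec2 uv = CB0[13].yz;                          // scalar/vector: index + swizzle
mat4 m = mat4(CB0[4], CB0[5], CB0[6], CB0[7]); // matrix: rebuilt per column
vec4 row = CB0[2 * i + 8];                     // dynamic index: stride * index + offset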
Make Compiler::OpcodeHandler and Compiler::traverse_all_reachable_opcodes protected
instead of private, for use by subclasses.
Add CompilerMSL::CustomFunctionHandler and traverse_all_reachable_opcodes() to detect
active opcodes that require the output of a custom function.
CompilerMSL::custom_function_ops uses std::set to retain ordering, improving testability.