Legacy GLSL targets do not support uniform buffers, so they require
some form of emulation. There are two alternatives: representing the
uniform buffer as a uniform struct, or flattening it into an array of
primitive vector types (vec4).
Uniform structs have two disadvantages that make their use prohibitive
in some applications (see the sketch after this list):
- The location assignment for struct members is arbitrary, which means
the application has to set each struct member one by one
- Some Android drivers fail to link shader programs if both the vertex
and fragment shaders use the same uniform struct
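As a rough illustration of why the struct alternative is painful (the
names here are hypothetical, not actual SPIRV-Cross output):

    // Hypothetical uniform struct emulation of a UBO:
    struct CB0_t
    {
        mat4 mvp;
        vec4 color;
    };
    uniform CB0_t CB0;

Since the driver assigns member locations arbitrarily, the application
has to query and upload CB0.mvp, CB0.color, etc. one by one instead of
updating a single contiguous range.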
Because of this, we need to support flattening uniform buffers into an
array. This is not just important for legacy GLSL but is also sometimes
useful for ESSL 3.0, where some Android drivers do not have stable UBO
support.
Flattening works by representing the entire buffer as a vec4 array;
each access chain is rewritten into a combination of array accesses,
swizzles and data type constructors. Specifically (a sketch follows the
list):
- Extracting a vector or a scalar requires indexing into the array with
an optional swizzle, for example CB0[13].yz for reading a vec2
- Extracting a matrix or a struct requires extracting each individual
vector or struct member and then combining them into the resulting
object
- Extracting arrays is not supported, mostly because the resulting
construct is very inefficient and ESSL 1.0 does not support array
constructors.
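For example, a hypothetical buffer (names and std140 layout chosen for
illustration, not actual compiler output) could be flattened like this:

    // Input UBO; comments show the vec4 elements each member occupies:
    layout(std140) uniform CB0
    {
        mat4 mvp;      // elements 0..3
        vec4 color;    // element 4
        vec2 uv_scale; // element 5, components .xy
    };

    // Flattened emulation:
    uniform vec4 CB0[6];

    // Access chains are then rewritten as:
    //   color    -> CB0[4]
    //   uv_scale -> CB0[5].xy
    //   mvp      -> mat4(CB0[0], CB0[1], CB0[2], CB0[3])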
Additionally, while we try to constant-fold each individual indexing
operation, there are cases where we have to use dynamic index
computation (specifically for indexing arrays with non-constants); so
the general form of the primitive array extraction expression is:
buffer[stride0*index0+...+strideN*indexN+offset]
where the strides and offset are integer literals and the indices are
variables.
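For instance, assuming a hypothetical member mat4 transforms[8] placed
at vec4 element 10, reading column j of transforms[i] with non-constant
i and j would produce:

    CB0[4 * i + 1 * j + 10] // stride0 = 4 (a mat4 spans four vec4s),
                            // stride1 = 1, offset = 10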
Make Compiler::OpcodeHandler and Compiler::traverse_all_reachable_opcodes protected
instead of private, for use by subclasses.
Add CompilerMSL::CustomFunctionHandler and traverse_all_reachable_opcodes() to detect
active opcodes that require the output of a custom function.
CompilerMSL::custom_function_ops uses std::set to retain ordering, which
improves testability.
Fix some minor missing pieces from C++.
Type remapping like this doesn't seem to fit the MSL backend so well, as
it does a lot of remapping internally on its own. Type name remapping
really is for fringe extension cases in GLSL which aren't yet supported
in SPIR-V.
The basic idea here is that all functions will have a list of which
combinations of parameters are combined inside the function. The caller
will then know which combined samplers must be provided to the callee in
order to satisfy it, as sketched below.
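A hedged sketch of the idea (hypothetical shaders, not the exact code
the compiler emits). In the Vulkan GLSL input, the image and sampler are
separate:

    layout(set = 0, binding = 0) uniform texture2D uTex;
    layout(set = 0, binding = 1) uniform sampler uSampler;

    vec4 sample_it(texture2D t, sampler s, vec2 uv)
    {
        return texture(sampler2D(t, s), uv);
    }

The function records that (t, s) are used as a combination, so in the
plain GLSL output the caller passes a single combined sampler instead:

    uniform sampler2D uTex_uSampler;

    vec4 sample_it(sampler2D t_s, vec2 uv)
    {
        return texture(t_s, uv);
    }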
- Only consider I/O variables that are part of OpEntryPoint.
- Keep a safe fallback if the number of entry points is 1, to avoid
potentially breaking previously working shaders.
type_id was not intuitive and did not allow for parsing the array sizes
of variables.
Expose another member, base_type_id, which provides the base type
suitable for parsing metadata such as decorations; type_id now points to
the actual type, which includes full type information such as arrays.
There was a potential problem if variables were invalidated and SPIR-V
read expressions which depended on other expressions which in turn
depended on the invalidated variable.
Also fixes an issue where variables were considered immutable if they
were forwardable, which allowed some incorrect optimizations to slip
through.
Add a qualified_alias element to Decoration.
Virtualize the Compiler to_name() function.
MSL uses qualified_alias instead of alias when inside the entry-point
function.
Reformats the entire codebase. Better to do it now than later.
Adds .clang-format and a convenience script, format_all.sh, which
formats everything automatically.