Clang added -Wunqualified-std-cast-call in
https://reviews.llvm.org/D119670, which warns on unqualified std::move
and std::forward calls. This change qualifies these calls so the
project can build with -Werror on Clang HEAD.
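For illustration, a minimal example of the pattern being changed (a
hypothetical snippet, not taken from the codebase):

    #include <utility>
    #include <vector>

    std::vector<int> take(std::vector<int> v)
    {
        // Unqualified call: found through argument-dependent lookup, which
        // now warns under -Wunqualified-std-cast-call on recent Clang.
        // return move(v);

        // Qualified call: what this change switches to.
        return std::move(v);
    }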
We don't need to keep track of them because when block.condition is
either a SPIRConstant or a SPIRVariable, we can get the type directly
in get_case_list.
Signed-off-by: Sebastián Aedo <saedo@codeweavers.com>
We now add block.cases_32bit as requested, and we only parse if the
remaining ops are a multiple of 2. Neither list is mutable because we
return a reference to one of them depending on the op.condition width.
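A self-contained sketch of the scheme, with placeholder types standing
in for the real SPIRV-Cross ones (SPIRBlock, SmallVector, and so on):

    #include <cstdint>
    #include <vector>

    struct Case
    {
        uint64_t value; // case literal, wide enough for either width
        uint32_t block; // target block ID
    };

    struct Block
    {
        uint32_t condition_width = 32; // width of the OpSwitch selector type
        std::vector<Case> cases_32bit; // literals parsed as one word
        std::vector<Case> cases_64bit; // literals parsed as two words
    };

    // Return an immutable reference to the list that matches the condition
    // width, so callers never touch the mismatched list.
    const std::vector<Case> &get_case_list(const Block &block)
    {
        return block.condition_width > 32 ? block.cases_64bit : block.cases_32bit;
    }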
Signed-off-by: Sebastián Aedo <saedo@codeweavers.com>
Apparently, it's legal to use a selection construct where both paths
branch to the same location, but a different merge point is used.
This breaks many assumptions the variable scope analyzer makes.
The only logical way to generate code for this scenario is to treat the
selection construct as a trivial switch construct with only a default
case.
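In C-like syntax, a trivial switch construct with only a default case
has roughly this shape (an illustration of the concept, not the literal
emitted output):

    // Both paths of the selection branch to the same block, so the body is
    // emitted once; the merge point becomes the break target of the switch.
    switch (0u)
    {
    default:
        // ... body of the selection construct ...
        break;
    }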
We don't need to keep track of the type itself, only its width, since
the OpSwitch type check can be done at runtime. This also avoids
creating a dangling reference.
Signed-off-by: Sebastián Aedo <saedo@codeweavers.com>
Move the logic out of the parser as requested, since parsing is
sensitive and should be kept as simple a process as possible.
To that end, load_types are now tracked in ParsedIR, which can be
accessed from the Compiler struct. The switch cases are fixed up in the
CFG stage, since that is the point where the nullptr was dereferenced.
Signed-off-by: Sebastián Aedo <saedo@codeweavers.com>
According to the spec, if the `condition` has a type wider than 32 bits,
the literals to be compared with will be of that size as well.
This caused misalignments when the `condition` was wider than 32 bits,
leading to a nullptr return with no further explanation.
Currently, neither GLSL nor MSL supports a uint64 switch condition, but
SPIR-V allows it anyway.
This also fixes #1768.
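A simplified sketch of the width handling (hypothetical helper; the
real parser differs). Per the SPIR-V spec, literals wider than 32 bits
are stored low-order word first:

    #include <cstdint>
    #include <vector>

    struct Case64
    {
        uint64_t value;        // 64-bit case literal
        uint32_t target_block; // label the case branches to
    };

    // `ops` points at the (literal, label) operands of an OpSwitch whose
    // selector is 64 bits wide, so each case consumes three words.
    std::vector<Case64> parse_cases_64bit(const uint32_t *ops, uint32_t count)
    {
        std::vector<Case64> cases;
        for (uint32_t i = 0; i + 2 < count; i += 3)
        {
            uint64_t literal = uint64_t(ops[i]) | (uint64_t(ops[i + 1]) << 32);
            cases.push_back({ literal, ops[i + 2] });
        }
        return cases;
    }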
Signed-off-by: Sebastián Aedo <saedo@codeweavers.com>
Fairly minor differences, so we can keep them side by side without too
much effort. NV support is effectively deprecated now, however.
- Add OpConvertUToAccelerationStructureKHR
- Ignore/Terminate ray is now a terminator in KHR, but a call in NV.
- Fix some bugs with reportIntersection.
There is some fallout where internal functions now use stronger types.
It would be overkill to move everything over to strong types right now,
but perhaps we can migrate slowly over time.
Buffer objects can contain arbitrary pointers to blocks.
We can also implement ConvertPtrToU and ConvertUToPtr.
The latter can cast a uint64_t to any type as it pleases,
so we will need to generate fake buffer reference blocks to be able to
cast the type.
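As a host-side analogue of what the generated code has to express (an
illustration only, not SPIRV-Cross output), OpConvertUToPtr amounts to
reinterpreting an arbitrary 64-bit value as a pointer to some block
type:

    #include <cstdint>

    struct SomeBlock { float data[4]; }; // stand-in for a buffer reference block

    SomeBlock *from_address(uint64_t addr)
    {
        // The value may point at any block type the shader chooses, which is
        // why a synthesized (fake) block declaration is needed per target type.
        return reinterpret_cast<SomeBlock *>(static_cast<uintptr_t>(addr));
    }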
- Replace ostringstream with custom implementation.
~30% performance uplift on vector-shuffle-oom test.
Allocations are measurably reduced in Valgrind.
- Replace std::vector with SmallVector.
Classic malloc optimization, small vectors are backed by inline data.
~ 7-8% gain on vector-shuffle-oom on GCC 8 on Linux.
- Use an object pool for IVariant type.
We generally allocate a lot of SPIR* objects. We can amortize these
allocations neatly by pooling them (see the sketch after this list).
- ~15% overall uplift on ./test_shaders.py --iterations 10000 shaders/.
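A minimal sketch of the object-pool idea mentioned above, assuming a
chunked free-list design (illustrative only; the real SPIRV-Cross pool
differs in detail):

    #include <memory>
    #include <new>
    #include <type_traits>
    #include <utility>
    #include <vector>

    template <typename T>
    class ObjectPool
    {
    public:
        template <typename... Args>
        T *allocate(Args &&... args)
        {
            if (vacant.empty())
            {
                // Grab a new chunk of raw storage; one malloc serves many objects.
                chunks.emplace_back(new Storage[ChunkSize]);
                for (unsigned i = 0; i < ChunkSize; i++)
                    vacant.push_back(reinterpret_cast<T *>(&chunks.back()[i]));
            }
            T *ptr = vacant.back();
            vacant.pop_back();
            return new (ptr) T(std::forward<Args>(args)...);
        }

        void deallocate(T *ptr)
        {
            ptr->~T();
            vacant.push_back(ptr); // slot is recycled, not returned to malloc
        }

    private:
        using Storage = std::aligned_storage_t<sizeof(T), alignof(T)>;
        enum { ChunkSize = 64 };
        std::vector<std::unique_ptr<Storage[]>> chunks;
        std::vector<T *> vacant;
    };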
This is a pragmatic trick to avoid symbol collision where a project
links against SPIRV-Cross statically, while linking to other projects
which also use SPIRV-Cross statically. We can end up with very awkward
symbol collisions which can resolve themselves silently because
SPIRV-Cross is pulled in as necessary. To fix this, we must use
different symbols and embed two copies of SPIRV-Cross in this scenario,
now with different namespaces, which in turn leads to different symbols.
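A sketch of the pattern (macro names here are assumptions based on the
description, not verified against the headers): each static copy is
built with a different namespace override, so their mangled symbols no
longer collide.

    #ifdef SPIRV_CROSS_NAMESPACE_OVERRIDE
    #define SPIRV_CROSS_NAMESPACE SPIRV_CROSS_NAMESPACE_OVERRIDE
    #else
    #define SPIRV_CROSS_NAMESPACE spirv_cross
    #endif

    namespace SPIRV_CROSS_NAMESPACE
    {
    // All library types live in the (possibly overridden) namespace, so two
    // embedded copies built with different overrides can coexist in one binary.
    class Compiler;
    }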
Storage was in place already, so mostly just dealing with bitcasts and
constants.
This simplifies some of the bitcasting logic, and it exposed some bugs
in the implementation. Refactor to use correctly sized integers with
explicit bitcast opcodes.
A flat array was consuming way too much memory and was far too slow to
initialize properly with a very large ID bound (8 million IDs, showed up as #1 hotspot in perf).
The Meta struct does not have to be in-order since we never iterate
over it in a meaningful way, so using a hashmap here is reasonable.
Very few IDs should need decorations or meta-data, so this should also
be a decent memory saving.
For the pathological case, a 6x uplift was observed.
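A self-contained sketch of the data-structure change, with a
placeholder Meta struct (the real one holds names, decorations, and so
on):

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    struct Meta
    {
        std::string name;
        uint64_t decoration_flags = 0; // placeholder fields
    };

    // Before: std::vector<Meta> meta(id_bound); // 8 million IDs -> huge, slow to init
    // After: only IDs that actually carry meta-data occupy an entry.
    std::unordered_map<uint32_t, Meta> meta;

    const Meta *find_meta(uint32_t id)
    {
        auto itr = meta.find(id);
        return itr != meta.end() ? &itr->second : nullptr;
    }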
This allows shaders to declare and use pointer-type variables. Pointers
may be loaded and stored, be the result of an `OpSelect`, be passed to
and returned from functions, and even be passed as inputs to the `OpPhi`
instruction. All types of pointers may be used as variable pointers.
Variable pointers to storage buffers and workgroup memory may even be
loaded from and stored to, as though they were ordinary variables. In
addition, this enables using an interior pointer to an array as though
it were an array pointer itself using the `OpPtrAccessChain`
instruction.
This is a rather large and involved change, mostly because this is
somewhat complicated with a lot of moving parts. It's a wonder
SPIRV-Cross's output is largely unchanged. Indeed, many of these changes
are to accomplish exactly that! Perhaps the largest source of changes
was the violation of the assumption that, when emitting types, the
pointer type didn't matter.
One of the test cases added by the change doesn't optimize very well;
the output of `spirv-opt` here is invalid SPIR-V. I need to file a bug
with SPIRV-Tools about this.
I wanted to test that variable pointers to images worked too, but I
couldn't figure out how to propagate the access qualifier properly; in
MSL, it's part of the type, so getting this right is important. I've
punted on that for now.