We are using floats for RGB, and we don't need more precision
for HSL colors either. We use HSL for computing color expressions
like shade(), lighter() and darker(), which are not precisely
specified anyway.
This commit updates the one test where the output changes
slightly because of this.
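To illustrate the kind of computation this is about, here is a minimal
sketch assuming a hypothetical float-based HSLA struct; the names and
the exact shade() formula are illustrative, not the actual code.

  #include <glib.h>

  /* Hypothetical float-based HSLA value; names and the exact shade()
   * formula are illustrative only. */
  typedef struct {
    float hue;        /* 0..360 */
    float saturation; /* 0..1 */
    float lightness;  /* 0..1 */
    float alpha;      /* 0..1 */
  } HslaF;

  static HslaF
  hsla_shade (const HslaF *color,
              float        factor)
  {
    HslaF result = *color;

    /* shade()-style expressions only scale saturation and lightness,
     * so float precision is more than enough here. */
    result.saturation = CLAMP (color->saturation * factor, 0.f, 1.f);
    result.lightness  = CLAMP (color->lightness  * factor, 0.f, 1.f);

    return result;
  }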
Instead of keeping each item together with its position in one list
and sorting that list, keep the items in one array and put the
positions into a second array.
This is generally slower while sorting, but allows multiple
improvements (see the sketch after the list):
1. We can replace items with keys
This allows avoiding multiple slow lookups when using complex
comparisons.
2. We can keep multiple position arrays
This allows sorting in the background without actually emitting
items-changed() until the array is completely sorted.
3. The main list tracks the items in the original model
So only a single memmove() is necessary there, while the old version
had to update the position in every item.
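A minimal sketch of the split layout described above, assuming string
items for the comparison; the type and function names are hypothetical
and only illustrate sorting a separate position array that indexes
into an untouched item array.

  #include <glib.h>

  /* Hypothetical split layout: items stay in model order, sorting only
   * permutes a separate array of positions that index into them. */
  typedef struct {
    GPtrArray *items;     /* mirrors the original model, in model order */
    guint     *positions; /* positions[j] = index into items, sorted order */
  } SplitSort;

  static int
  compare_positions (gconstpointer a,
                     gconstpointer b,
                     gpointer      user_data)
  {
    SplitSort *self = user_data;
    guint pos_a = *(const guint *) a;
    guint pos_b = *(const guint *) b;

    /* Compare the items (or cached sort keys) the positions refer to;
     * the items themselves never move. */
    return g_strcmp0 (g_ptr_array_index (self->items, pos_a),
                      g_ptr_array_index (self->items, pos_b));
  }

  static void
  split_sort_sort (SplitSort *self)
  {
    /* Only the position array is shuffled; the item array keeps
     * tracking the original model, so a removal there is a single
     * memmove() plus dropping the affected positions. */
    g_qsort_with_data (self->positions, self->items->len, sizeof (guint),
                       compare_positions, self);
  }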
Benchmarks:

  sorting a model of simple strings
                       old      new
      256,000 items    256ms    268ms
      512,000 items    569ms    638ms

  sorting a model of file trees, directories first, by size
                       old      new
       64,000 items    350ms    364ms
      128,000 items    667ms    691ms

  removing half the model
                       old      new
      512,000 items     24ms     15ms
    1,024,000 items     49ms     25ms
This was preventing any sort of build on macOS, even though the quartz
backend is currently non-functional. Fixing this is a prerequisite to
getting a new macOS backend compiling.
This is a scary idea where you #define a bunch of preprocessor values
and then #include "gdkarrayimpl.c" and end up with a dynamic array for
that data type.
See https://en.wikipedia.org/wiki/X_Macro for what's going on.
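For illustration, an instantiation could look roughly like this; note
that the exact #define names gdkarrayimpl.c expects are assumptions
here and may differ from the real file (glib.h is assumed to be
included for g_free()).

  /* Sketch of an instantiation; macro names are assumptions. */
  #define GDK_ARRAY_ELEMENT_TYPE char *
  #define GDK_ARRAY_NAME strings
  #define GDK_ARRAY_TYPE_NAME Strings
  #define GDK_ARRAY_PREALLOC 8
  #define GDK_ARRAY_FREE_FUNC g_free

  #include "gdk/gdkarrayimpl.c"

  /* This would expand into a Strings struct plus static inline
   * functions such as strings_init(), strings_append() and
   * strings_clear(), all typed as char * rather than gpointer. */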
What are the advantages over using GArray or GPtrArray?
* It's typesafe
Because it works like C++ templates, we can use the actual type of
the object instead of having to use gpointer.
* It's one less indirection
Instead of 2 indirections via self->array->data, this array is
embedded, so self->array is the actual data, and just one indirection
away. This is pretty irrelevant in general, but can be very noticeable
in tight loops (see the layout sketch after this list).
* It's all inline
Because the whole API is defined as static inline functions, the
compiler has full access to everything and can (and does) optimize
out unnecessary calls, thereby speeding up some operations quite
significantly when full optimizations are enabled.
* It has more features
In particular, preallocation avoids malloc() calls, which can
again speed up tight loops a lot.
But there's also splice(), which is very useful when used with
listmodels.
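To illustrate the indirection point from the list above, here is a
hypothetical pair of layouts, reusing the made-up Strings type from
the earlier sketch and assuming <glib.h> for GPtrArray.

  /* Hypothetical layouts illustrating "one less indirection". */

  typedef struct {
    GPtrArray *strings;   /* heap-allocated array header:
                           * self->strings->pdata[i] is two dereferences */
  } WithGPtrArray;

  typedef struct {
    Strings strings;      /* generated array type embedded by value:
                           * the element storage is reached with a single
                           * dereference through the inline accessors */
  } WithEmbeddedArray;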
Instead of an array of arrays, let's use an array of dictionaries; it's
easier to add optional keys without having to remember where to put
empty arrays.
char ** arrays are NULL-terminated everywhere, so make sure they are
in splice(), too.
Also fix the argument to be a const char * const *, like in the
constructor.
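The commit does not name the type, but the splice()/constructor pair
matches GtkStringList; assuming that is what it is about, a call with
a NULL-terminated addition array would look like this.

  #include <gtk/gtk.h>

  /* Assuming this concerns GtkStringList (the commit does not say so
   * explicitly): both the constructor and splice() take a
   * NULL-terminated const char * const * array. */
  static void
  replace_first_two (GtkStringList *list)
  {
    const char *additions[] = { "one", "two", "three", NULL };

    /* Remove 2 items at position 0 and insert the NULL-terminated
     * additions in their place. */
    gtk_string_list_splice (list, 0, 2, additions);
  }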