This is another example of a 2-texture shader.
So far, only separable blend modes are implemented.
The implementation is not optimized, with an
if-else cascade in the shader.
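For context, a separable blend mode applies the same per-channel function
to the backdrop and source colors, and the cascade simply selects that
function by mode. A minimal C sketch under that assumption (the enum and
function names are made up for illustration; the real code is GLSL in the
blend shader):

#include <math.h>

/* Hypothetical sketch of a separable blend cascade; the real shader works
 * on vec4 colors in GLSL and uses GSK's own blend-mode enumeration. */
typedef enum { BLEND_MULTIPLY, BLEND_SCREEN, BLEND_DIFFERENCE } BlendMode;

static float
blend_channel (BlendMode mode, float cb, float cs)
{
  /* Separable modes apply the same function to each color channel. */
  if (mode == BLEND_MULTIPLY)
    return cb * cs;                /* B(Cb,Cs) = Cb * Cs */
  else if (mode == BLEND_SCREEN)
    return cb + cs - cb * cs;      /* B(Cb,Cs) = Cb + Cs - Cb * Cs */
  else if (mode == BLEND_DIFFERENCE)
    return fabsf (cb - cs);        /* B(Cb,Cs) = |Cb - Cs| */
  else
    return cs;                     /* default to the source color */
}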
We were looking at uninitialized memory here instead of at the type of
the source clip, as we should have been.
This showed up as a mispositioned clip in the first frame of a crossfade
stack transition, and as overdraw in sliding stack transitions.
We already moved the descriptor set layout out of it, so we may as well
keep the pipeline layouts in the render object too and get rid of this
extra object. Update all callers.
Instead of doing multiple copy commands with a tiny buffer
for each glyph, we can just batch them all together. This
also avoids the issue of creating multiple barriers for the
same image.
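As a rough illustration of the batching (the staging buffer and the
glyph_* parameters are assumptions; only vkCmdCopyBufferToImage and
VkBufferImageCopy are the actual Vulkan API):

#include <vulkan/vulkan.h>

/* Sketch: upload all glyphs from one staging buffer with a single copy
 * command, instead of one tiny buffer + copy + barrier per glyph. */
static void
upload_glyph_batch (VkCommandBuffer     cmd,
                    VkBuffer            staging,      /* assumed: holds all glyph bitmaps */
                    VkImage             atlas,
                    uint32_t            n_glyphs,
                    const VkDeviceSize *glyph_offset, /* assumed: offset of each bitmap */
                    const VkOffset3D   *glyph_pos,    /* assumed: target position in atlas */
                    const VkExtent3D   *glyph_size)   /* assumed: glyph extents */
{
  VkBufferImageCopy regions[256]; /* assume n_glyphs <= 256 for this sketch */

  for (uint32_t i = 0; i < n_glyphs; i++)
    {
      regions[i] = (VkBufferImageCopy) {
        .bufferOffset = glyph_offset[i],
        .bufferRowLength = 0,   /* tightly packed rows */
        .bufferImageHeight = 0,
        .imageSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
        .imageOffset = glyph_pos[i],
        .imageExtent = glyph_size[i],
      };
    }

  /* One copy (and one barrier pair around it) covers the whole batch,
   * since every region targets the same atlas image. */
  vkCmdCopyBufferToImage (cmd, staging, atlas,
                          VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
                          n_glyphs, regions);
}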
By tracking the last transition we can build the appropriate barriers.
Also use the most appropriate initial layout/access at creation:
- for linear images: preinitialized (we prepare the content ourselves through memcpy)
- for everything else: undefined (we don't care about the content, it will most likely be erased)
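A sketch of what the tracking can look like; the struct and function
names are invented for illustration, only the Vulkan calls and constants
are real:

#include <vulkan/vulkan.h>

/* Hypothetical per-image state: remember what the last transition left
 * behind. At creation, layout starts as PREINITIALIZED for linear images
 * (content is written through memcpy) and UNDEFINED for everything else. */
typedef struct {
  VkImage       image;
  VkImageLayout layout;
  VkAccessFlags access;
} TrackedImage;

static void
tracked_image_transition (VkCommandBuffer      cmd,
                          TrackedImage        *self,
                          VkImageLayout        new_layout,
                          VkAccessFlags        new_access,
                          VkPipelineStageFlags src_stage,
                          VkPipelineStageFlags dst_stage)
{
  VkImageMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .srcAccessMask = self->access,   /* built from the tracked state */
    .dstAccessMask = new_access,
    .oldLayout = self->layout,       /* last layout we transitioned to */
    .newLayout = new_layout,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = self->image,
    .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
  };

  vkCmdPipelineBarrier (cmd, src_stage, dst_stage, 0,
                        0, NULL, 0, NULL, 1, &barrier);

  /* Remember where we left the image for the next barrier. */
  self->layout = new_layout;
  self->access = new_access;
}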
Move the glyph caching api to something that can support using
multiple textures. We now split the text render ops into multiple
ops for different textures, and make each op render just a substring
of the text node's glyph string.
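Roughly, the splitting can be pictured as walking the glyph string and
starting a new op whenever the cache texture changes; every name in this
sketch is a stand-in, not the actual GSK API:

/* Hypothetical stand-ins for the real GSK types and helpers. */
typedef struct { unsigned int glyph; } GlyphInfo;
typedef struct Texture Texture;

Texture *glyph_cache_get_texture (unsigned int glyph);   /* assumed helper */
void     append_text_op (Texture *texture,
                         const GlyphInfo *glyphs,
                         int n_glyphs);                   /* assumed helper */

/* Emit one render op per run of glyphs that share a cache texture. */
static void
append_text_ops (const GlyphInfo *glyphs, int n_glyphs)
{
  int start = 0;

  while (start < n_glyphs)
    {
      Texture *texture = glyph_cache_get_texture (glyphs[start].glyph);
      int end = start + 1;

      /* Extend the run while the glyphs stay on the same texture. */
      while (end < n_glyphs &&
             glyph_cache_get_texture (glyphs[end].glyph) == texture)
        end++;

      /* Each op renders just this substring of the glyph string. */
      append_text_op (texture, &glyphs[start], end - start);
      start = end;
    }
}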
gdk_seat_default_grab() grabs POINTER_EVENTS if the capability is
GDK_SEAT_CAPABILITY_ALL_POINTING. But that enumerator is a union that
includes GDK_SEAT_CAPABILITY_TOUCH, and we never grabbed TOUCH_EVENTS,
an unused macro that was presumably created with this purpose in mind.
So, check which of the ALL_POINTING capabilities we have, and set the
right mask of POINTER_EVENTS and/or TOUCH_EVENTS as required.
As part of this, explicitly let TABLET_STYLUS take over pointer events,
as this is the intended behaviour and was the effective result before.
This should fix touch events being lost when migrating from Device.grab()
to Seat.grab(GDK_SEAT_CAPABILITY_ALL_POINTING), as found by Inkscape.
https://bugzilla.gnome.org/show_bug.cgi?id=781757
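The gist, sketched in C; POINTER_EVENTS and TOUCH_EVENTS are the private
macros in gdkseatdefault.c, and their exact contents are abbreviated here
as an assumption:

#include <gdk/gdk.h>

/* Sketch only: the real macros live in gdkseatdefault.c; the masks used
 * there may differ from this abbreviation. */
#define POINTER_EVENTS (GDK_POINTER_MOTION_MASK | GDK_BUTTON_PRESS_MASK | \
                        GDK_BUTTON_RELEASE_MASK | GDK_SCROLL_MASK)
#define TOUCH_EVENTS   (GDK_TOUCH_MASK)

static GdkEventMask
grab_event_mask (GdkSeatCapabilities capabilities)
{
  GdkEventMask mask = 0;

  /* ALL_POINTING is a union of POINTER, TOUCH and TABLET_STYLUS; only
   * request the events for the capabilities that were actually asked for.
   * TABLET_STYLUS takes over pointer events on purpose. */
  if (capabilities & (GDK_SEAT_CAPABILITY_POINTER |
                      GDK_SEAT_CAPABILITY_TABLET_STYLUS))
    mask |= POINTER_EVENTS;
  if (capabilities & GDK_SEAT_CAPABILITY_TOUCH)
    mask |= TOUCH_EVENTS;

  return mask;
}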
The behavior where a touchpoint takes over the pointer position is
really backend-dependent. Since this went away from the generic code,
implement it here.
This was by all lights broken, and is basically an implementation detail
of the X11 backend, since the pointer-emulating touch just steals the
pointer cursor; it should be reimplemented there.
One used to point to the toplevel and the other to the client-side window
that the pointer pointed to. The latter was made to be like the former in
most places, so put those together, and fix the remaining cases where the
variable might not end up with a toplevel/native window.
This is not necessary now that there are no client-side windows to track.
The only removed piece that could make sense is the emission of grab-broken
events, but that is already a stretch since the semantics of those with
multiple touchpoints are unclear.
Anyhow, this should be fixed at the GTK level, while we let GDK deal with
seat/device-level grabs.
GDK just needs to care about toplevels nowadays, which means these events
are already delivered by the windowing system. We don't need to generate
intra-window crossing events ourselves.
Those should be interpreted by widget-local gestures, not guessed at a
high level with no notion of the specific context. Users will want
GtkGestureMultiPress to replace these events.
Those worked similarly to the ones in GtkFlowBox, but would additionally
handle the "active" state for child rows. Simplify this to just enabling/
disabling the active state on gesture press/release; we don't get the
nice state updates when hovering around with a mouse button pressed,
but the rationale from GtkFlowBox applies here too, and this makes a
nice cleanup.
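A minimal sketch of the press/release handling with a GtkGestureMultiPress
on the list box; the handler names and wiring are illustrative, not the
exact GtkListBox code:

#include <gtk/gtk.h>

/* Mark the row under the press as :active, and clear the flag on release. */
static void
on_pressed (GtkGestureMultiPress *gesture,
            gint n_press, gdouble x, gdouble y,
            gpointer user_data)
{
  GtkListBox *box = GTK_LIST_BOX (user_data);
  GtkListBoxRow *row = gtk_list_box_get_row_at_y (box, (gint) y);

  if (row != NULL)
    gtk_widget_set_state_flags (GTK_WIDGET (row), GTK_STATE_FLAG_ACTIVE, FALSE);
}

static void
on_released (GtkGestureMultiPress *gesture,
             gint n_press, gdouble x, gdouble y,
             gpointer user_data)
{
  GtkListBox *box = GTK_LIST_BOX (user_data);
  GtkListBoxRow *row = gtk_list_box_get_row_at_y (box, (gint) y);

  if (row != NULL)
    gtk_widget_unset_state_flags (GTK_WIDGET (row), GTK_STATE_FLAG_ACTIVE);
}

The gesture would be created with gtk_gesture_multi_press_new() on the box
and the handlers connected to its ::pressed and ::released signals.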
They just keep priv->in_button and the widget state up to date; this
basically matters during user interaction and is already handled
in the gesture ::update handler. That seems to be sufficient.
Those basically controlled priv->active_child_active, which would
1) trigger a redraw when the pointer enters/leaves the child, and 2) ensure
that press and release happen on the same child for it to be activated.
The former is not necessary, and the latter can be simplified by
just checking the child again at the coordinates given by the
::released gesture handler. This makes all enter/leave/motion_notify
event handlers unneeded.
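A sketch of that re-check in the ::released handler; 'pressed_child'
stands in for the state the ::pressed handler stored (priv->active_child),
and the lookup via gtk_flow_box_get_child_at_pos() is an assumption about
how the child is found, not the literal GtkFlowBox code:

#include <gtk/gtk.h>

/* Stand-in for the state stored by the ::pressed handler. */
static GtkFlowBoxChild *pressed_child;

static void
on_released (GtkGestureMultiPress *gesture,
             gint n_press, gdouble x, gdouble y,
             gpointer user_data)
{
  GtkFlowBox *box = GTK_FLOW_BOX (user_data);
  GtkFlowBoxChild *child = gtk_flow_box_get_child_at_pos (box, (gint) x, (gint) y);

  /* Re-check the child at the release coordinates: activation only happens
   * if press and release land on the same child, so no motion tracking or
   * enter/leave handling is needed. */
  if (child != NULL && child == pressed_child)
    g_signal_emit_by_name (box, "child-activated", child);

  pressed_child = NULL;
}

The signal emission here is just to keep the sketch short; the real code
activates the child from inside the widget.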