Refactor Sk2x<T> + Sk4x<T> into SkNf<N,T> and SkNi<N,T>
The primary feature this delivers is SkNf and SkNd for arbitrary power-of-two N. Non-specialized types or types larger than 128 bits should now Just Work (and we can drop in a specialization to make them faster). Sk4s is now just a typedef for SkNf<4, SkScalar>; Sk4d is SkNf<4, double>, Sk2f SkNf<2, float>, etc.
This also makes implementing new specializations easier and more encapsulated. We're now using template specialization, which means the specialized versions don't have to leak out so much from SkNx_sse.h and SkNx_neon.h.
This design leaves us room to grow up, e.g. to SkNf<8, SkScalar> == Sk8s, and to grow down too, to things like SkNi<8, uint16_t> == Sk8h.
To simplify things, I've stripped away most APIs (swizzles, casts, reinterpret_casts) that no one's using yet. I will happily add them back if they seem useful.
You shouldn't feel bad about using any of the typedefs Sk4s, Sk4f, Sk4d, Sk2s, Sk2f, Sk2d, Sk4i, etc. Here's how you should feel:
- Sk4f, Sk4s, Sk2d: feel awesome
- Sk2f, Sk2s, Sk4d: feel pretty good
No public API changes.
TBR=reed@google.com
BUG=skia:3592
Review URL: https://codereview.chromium.org/1048593002
2015-03-30 17:50:27 +00:00

/*
 * Copyright 2015 Google Inc.
 *
 * Use of this source code is governed by a BSD-style license that can be
 * found in the LICENSE file.
 */
#ifndef SkNx_neon_DEFINED
#define SkNx_neon_DEFINED
#include <arm_neon.h>
namespace { // NOLINT(google-build-namespaces)
// ARMv8 has vrndm(q)_f32 to floor floats. Here we emulate it:
// - roundtrip through integers via truncation
// - subtract 1 if that's too big (possible for negative values).
// This restricts the domain of our inputs to a maximum somewhere around 2^31. Seems plenty big.
AI static float32x4_t emulate_vrndmq_f32(float32x4_t v) {
    auto roundtrip = vcvtq_f32_s32(vcvtq_s32_f32(v));
    auto too_big = vcgtq_f32(roundtrip, v);
    return vsubq_f32(roundtrip, (float32x4_t)vandq_u32(too_big, (uint32x4_t)vdupq_n_f32(1)));
}
AI static float32x2_t emulate_vrndm_f32(float32x2_t v) {
    auto roundtrip = vcvt_f32_s32(vcvt_s32_f32(v));
    auto too_big = vcgt_f32(roundtrip, v);
    return vsub_f32(roundtrip, (float32x2_t)vand_u32(too_big, (uint32x2_t)vdup_n_f32(1)));
}
template <>
class SkNx<2, float> {
public:
    AI SkNx(float32x2_t vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(float val) : fVec(vdup_n_f32(val)) {}
    AI SkNx(float a, float b) { fVec = (float32x2_t) { a, b }; }

    AI static SkNx Load(const void* ptr) { return vld1_f32((const float*)ptr); }
    AI void store(void* ptr) const { vst1_f32((float*)ptr, fVec); }

    AI static void Load2(const void* ptr, SkNx* x, SkNx* y) {
        float32x2x2_t xy = vld2_f32((const float*) ptr);
        *x = xy.val[0];
        *y = xy.val[1];
    }

    AI static void Store2(void* dst, const SkNx& a, const SkNx& b) {
        float32x2x2_t ab = {{
            a.fVec,
            b.fVec,
        }};
        vst2_f32((float*) dst, ab);
    }

    AI static void Store3(void* dst, const SkNx& a, const SkNx& b, const SkNx& c) {
        float32x2x3_t abc = {{
            a.fVec,
            b.fVec,
            c.fVec,
        }};
        vst3_f32((float*) dst, abc);
    }

    AI static void Store4(void* dst, const SkNx& a, const SkNx& b, const SkNx& c, const SkNx& d) {
        float32x2x4_t abcd = {{
            a.fVec,
            b.fVec,
            c.fVec,
            d.fVec,
        }};
        vst4_f32((float*) dst, abcd);
    }

    AI SkNx invert() const {
        float32x2_t est0 = vrecpe_f32(fVec),
                    est1 = vmul_f32(vrecps_f32(est0, fVec), est0);
        return est1;
    }

    AI SkNx operator - () const { return vneg_f32(fVec); }

    AI SkNx operator + (const SkNx& o) const { return vadd_f32(fVec, o.fVec); }
    AI SkNx operator - (const SkNx& o) const { return vsub_f32(fVec, o.fVec); }
    AI SkNx operator * (const SkNx& o) const { return vmul_f32(fVec, o.fVec); }
    AI SkNx operator / (const SkNx& o) const {
#if defined(SK_CPU_ARM64)
        return vdiv_f32(fVec, o.fVec);
#else
        float32x2_t est0 = vrecpe_f32(o.fVec),
                    est1 = vmul_f32(vrecps_f32(est0, o.fVec), est0),
                    est2 = vmul_f32(vrecps_f32(est1, o.fVec), est1);
        return vmul_f32(fVec, est2);
#endif
    }

    AI SkNx operator==(const SkNx& o) const { return vreinterpret_f32_u32(vceq_f32(fVec, o.fVec)); }
    AI SkNx operator <(const SkNx& o) const { return vreinterpret_f32_u32(vclt_f32(fVec, o.fVec)); }
    AI SkNx operator >(const SkNx& o) const { return vreinterpret_f32_u32(vcgt_f32(fVec, o.fVec)); }
    AI SkNx operator<=(const SkNx& o) const { return vreinterpret_f32_u32(vcle_f32(fVec, o.fVec)); }
    AI SkNx operator>=(const SkNx& o) const { return vreinterpret_f32_u32(vcge_f32(fVec, o.fVec)); }
    AI SkNx operator!=(const SkNx& o) const {
        return vreinterpret_f32_u32(vmvn_u32(vceq_f32(fVec, o.fVec)));
    }

    AI static SkNx Min(const SkNx& l, const SkNx& r) { return vmin_f32(l.fVec, r.fVec); }
    AI static SkNx Max(const SkNx& l, const SkNx& r) { return vmax_f32(l.fVec, r.fVec); }

    AI SkNx abs() const { return vabs_f32(fVec); }
    AI SkNx floor() const {
#if defined(SK_CPU_ARM64)
        return vrndm_f32(fVec);
#else
        return emulate_vrndm_f32(fVec);
#endif
    }

    AI SkNx rsqrt() const {
        float32x2_t est0 = vrsqrte_f32(fVec);
        return vmul_f32(vrsqrts_f32(fVec, vmul_f32(est0, est0)), est0);
    }

    AI SkNx sqrt() const {
#if defined(SK_CPU_ARM64)
        return vsqrt_f32(fVec);
#else
        float32x2_t est0 = vrsqrte_f32(fVec),
                    est1 = vmul_f32(vrsqrts_f32(fVec, vmul_f32(est0, est0)), est0),
                    est2 = vmul_f32(vrsqrts_f32(fVec, vmul_f32(est1, est1)), est1);
        return vmul_f32(fVec, est2);
#endif
    }

    AI float operator[](int k) const {
        SkASSERT(0 <= k && k < 2);
        union { float32x2_t v; float fs[2]; } pun = {fVec};
        return pun.fs[k&1];
    }

    AI bool allTrue() const {
#if defined(SK_CPU_ARM64)
        return 0 != vminv_u32(vreinterpret_u32_f32(fVec));
#else
        auto v = vreinterpret_u32_f32(fVec);
        return vget_lane_u32(v,0) && vget_lane_u32(v,1);
#endif
    }
    AI bool anyTrue() const {
#if defined(SK_CPU_ARM64)
        return 0 != vmaxv_u32(vreinterpret_u32_f32(fVec));
#else
        auto v = vreinterpret_u32_f32(fVec);
        return vget_lane_u32(v,0) || vget_lane_u32(v,1);
#endif
    }

    AI SkNx thenElse(const SkNx& t, const SkNx& e) const {
        return vbsl_f32(vreinterpret_u32_f32(fVec), t.fVec, e.fVec);
    }

    float32x2_t fVec;
};
template <>
class SkNx<4, float> {
public:
    AI SkNx(float32x4_t vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(float val) : fVec(vdupq_n_f32(val)) {}
    AI SkNx(float a, float b, float c, float d) { fVec = (float32x4_t) { a, b, c, d }; }

    AI static SkNx Load(const void* ptr) { return vld1q_f32((const float*)ptr); }
    AI void store(void* ptr) const { vst1q_f32((float*)ptr, fVec); }

    AI static void Load2(const void* ptr, SkNx* x, SkNx* y) {
        float32x4x2_t xy = vld2q_f32((const float*) ptr);
        *x = xy.val[0];
        *y = xy.val[1];
    }

    AI static void Load4(const void* ptr, SkNx* r, SkNx* g, SkNx* b, SkNx* a) {
        float32x4x4_t rgba = vld4q_f32((const float*) ptr);
        *r = rgba.val[0];
        *g = rgba.val[1];
        *b = rgba.val[2];
        *a = rgba.val[3];
    }
    AI static void Store4(void* dst, const SkNx& r, const SkNx& g, const SkNx& b, const SkNx& a) {
        float32x4x4_t rgba = {{
            r.fVec,
            g.fVec,
            b.fVec,
            a.fVec,
        }};
        vst4q_f32((float*) dst, rgba);
    }

    AI SkNx invert() const {
        float32x4_t est0 = vrecpeq_f32(fVec),
                    est1 = vmulq_f32(vrecpsq_f32(est0, fVec), est0);
        return est1;
    }

    AI SkNx operator - () const { return vnegq_f32(fVec); }
|
|
|
|
|
2016-10-19 13:21:11 +00:00
|
|
|
AI SkNx operator + (const SkNx& o) const { return vaddq_f32(fVec, o.fVec); }
|
|
|
|
AI SkNx operator - (const SkNx& o) const { return vsubq_f32(fVec, o.fVec); }
|
|
|
|
AI SkNx operator * (const SkNx& o) const { return vmulq_f32(fVec, o.fVec); }
|
|
|
|
AI SkNx operator / (const SkNx& o) const {
|
2016-07-30 21:18:49 +00:00
|
|
|
#if defined(SK_CPU_ARM64)
|
|
|
|
return vdivq_f32(fVec, o.fVec);
|
|
|
|
#else
|
|
|
|
float32x4_t est0 = vrecpeq_f32(o.fVec),
|
|
|
|
est1 = vmulq_f32(vrecpsq_f32(est0, o.fVec), est0),
|
|
|
|
est2 = vmulq_f32(vrecpsq_f32(est1, o.fVec), est1);
|
|
|
|
return vmulq_f32(fVec, est2);
|
|
|
|
#endif
|
|
|
|
}
|
    AI SkNx operator==(const SkNx& o) const {return vreinterpretq_f32_u32(vceqq_f32(fVec, o.fVec));}
    AI SkNx operator <(const SkNx& o) const {return vreinterpretq_f32_u32(vcltq_f32(fVec, o.fVec));}
    AI SkNx operator >(const SkNx& o) const {return vreinterpretq_f32_u32(vcgtq_f32(fVec, o.fVec));}
    AI SkNx operator<=(const SkNx& o) const {return vreinterpretq_f32_u32(vcleq_f32(fVec, o.fVec));}
    AI SkNx operator>=(const SkNx& o) const {return vreinterpretq_f32_u32(vcgeq_f32(fVec, o.fVec));}
    AI SkNx operator!=(const SkNx& o) const {
        return vreinterpretq_f32_u32(vmvnq_u32(vceqq_f32(fVec, o.fVec)));
    }
    AI static SkNx Min(const SkNx& l, const SkNx& r) { return vminq_f32(l.fVec, r.fVec); }
    AI static SkNx Max(const SkNx& l, const SkNx& r) { return vmaxq_f32(l.fVec, r.fVec); }
    AI SkNx abs() const { return vabsq_f32(fVec); }
    AI SkNx floor() const {
#if defined(SK_CPU_ARM64)
        return vrndmq_f32(fVec);
#else
        return emulate_vrndmq_f32(fVec);
#endif
    }

    AI SkNx rsqrt() const {
        float32x4_t est0 = vrsqrteq_f32(fVec);
        return vmulq_f32(vrsqrtsq_f32(fVec, vmulq_f32(est0, est0)), est0);
    }
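    // Analogously to vrecps, vrsqrtsq_f32(x, e*e) computes (3 - x*e*e)/2, so
    // each refinement step here is e' = e * (3 - x*e*e) / 2 -- Newton's
    // method for 1/sqrt(x). rsqrt() does one step; the non-ARM64 sqrt()
    // does two, then multiplies by x, since x * 1/sqrt(x) == sqrt(x).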
    AI SkNx sqrt() const {
#if defined(SK_CPU_ARM64)
        return vsqrtq_f32(fVec);
#else
        float32x4_t est0 = vrsqrteq_f32(fVec),
                    est1 = vmulq_f32(vrsqrtsq_f32(fVec, vmulq_f32(est0, est0)), est0),
                    est2 = vmulq_f32(vrsqrtsq_f32(fVec, vmulq_f32(est1, est1)), est1);
        return vmulq_f32(fVec, est2);
#endif
    }

    AI float operator[](int k) const {
        SkASSERT(0 <= k && k < 4);
        union { float32x4_t v; float fs[4]; } pun = {fVec};
        return pun.fs[k&3];
    }

    AI float min() const {
#if defined(SK_CPU_ARM64)
        return vminvq_f32(fVec);
#else
        SkNx min = Min(*this, vrev64q_f32(fVec));
        return std::min(min[0], min[2]);
#endif
    }

    AI float max() const {
#if defined(SK_CPU_ARM64)
        return vmaxvq_f32(fVec);
#else
        SkNx max = Max(*this, vrev64q_f32(fVec));
        return std::max(max[0], max[2]);
#endif
    }

    AI bool allTrue() const {
#if defined(SK_CPU_ARM64)
        return 0 != vminvq_u32(vreinterpretq_u32_f32(fVec));
#else
        auto v = vreinterpretq_u32_f32(fVec);
        return vgetq_lane_u32(v,0) && vgetq_lane_u32(v,1)
            && vgetq_lane_u32(v,2) && vgetq_lane_u32(v,3);
#endif
    }
    AI bool anyTrue() const {
#if defined(SK_CPU_ARM64)
        return 0 != vmaxvq_u32(vreinterpretq_u32_f32(fVec));
#else
        auto v = vreinterpretq_u32_f32(fVec);
        return vgetq_lane_u32(v,0) || vgetq_lane_u32(v,1)
            || vgetq_lane_u32(v,2) || vgetq_lane_u32(v,3);
#endif
    }

    AI SkNx thenElse(const SkNx& t, const SkNx& e) const {
        return vbslq_f32(vreinterpretq_u32_f32(fVec), t.fVec, e.fVec);
    }

    float32x4_t fVec;
};

#if defined(SK_CPU_ARM64)
    AI static Sk4f SkNx_fma(const Sk4f& f, const Sk4f& m, const Sk4f& a) {
        return vfmaq_f32(a.fVec, f.fVec, m.fVec);
    }
#endif

// It's possible that for our current use cases, representing this as
// half a uint16x8_t might be better than representing it as a uint16x4_t.
// It'd make conversion to Sk4b one step simpler.
template <>
class SkNx<4, uint16_t> {
public:
    AI SkNx(const uint16x4_t& vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(uint16_t val) : fVec(vdup_n_u16(val)) {}
    AI SkNx(uint16_t a, uint16_t b, uint16_t c, uint16_t d) {
        fVec = (uint16x4_t) { a,b,c,d };
    }

    AI static SkNx Load(const void* ptr) { return vld1_u16((const uint16_t*)ptr); }
    AI void store(void* ptr) const { vst1_u16((uint16_t*)ptr, fVec); }

    AI static void Load4(const void* ptr, SkNx* r, SkNx* g, SkNx* b, SkNx* a) {
        uint16x4x4_t rgba = vld4_u16((const uint16_t*)ptr);
        *r = rgba.val[0];
        *g = rgba.val[1];
        *b = rgba.val[2];
        *a = rgba.val[3];
    }
    AI static void Load3(const void* ptr, SkNx* r, SkNx* g, SkNx* b) {
        uint16x4x3_t rgba = vld3_u16((const uint16_t*)ptr);
        *r = rgba.val[0];
        *g = rgba.val[1];
        *b = rgba.val[2];
    }
    AI static void Store4(void* dst, const SkNx& r, const SkNx& g, const SkNx& b, const SkNx& a) {
        uint16x4x4_t rgba = {{
            r.fVec,
            g.fVec,
            b.fVec,
            a.fVec,
        }};
        vst4_u16((uint16_t*) dst, rgba);
    }

    AI SkNx operator + (const SkNx& o) const { return vadd_u16(fVec, o.fVec); }
    AI SkNx operator - (const SkNx& o) const { return vsub_u16(fVec, o.fVec); }
    AI SkNx operator * (const SkNx& o) const { return vmul_u16(fVec, o.fVec); }
    AI SkNx operator & (const SkNx& o) const { return vand_u16(fVec, o.fVec); }
    AI SkNx operator | (const SkNx& o) const { return vorr_u16(fVec, o.fVec); }

    AI SkNx operator << (int bits) const { return fVec << SkNx(bits).fVec; }
    AI SkNx operator >> (int bits) const { return fVec >> SkNx(bits).fVec; }

    AI static SkNx Min(const SkNx& a, const SkNx& b) { return vmin_u16(a.fVec, b.fVec); }

    AI uint16_t operator[](int k) const {
        SkASSERT(0 <= k && k < 4);
        union { uint16x4_t v; uint16_t us[4]; } pun = {fVec};
        return pun.us[k&3];
    }

    AI SkNx thenElse(const SkNx& t, const SkNx& e) const {
        return vbsl_u16(fVec, t.fVec, e.fVec);
    }

    uint16x4_t fVec;
};

template <>
class SkNx<8, uint16_t> {
public:
    AI SkNx(const uint16x8_t& vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(uint16_t val) : fVec(vdupq_n_u16(val)) {}
    AI static SkNx Load(const void* ptr) { return vld1q_u16((const uint16_t*)ptr); }

    AI SkNx(uint16_t a, uint16_t b, uint16_t c, uint16_t d,
            uint16_t e, uint16_t f, uint16_t g, uint16_t h) {
        fVec = (uint16x8_t) { a,b,c,d, e,f,g,h };
    }

    AI void store(void* ptr) const { vst1q_u16((uint16_t*)ptr, fVec); }

    AI SkNx operator + (const SkNx& o) const { return vaddq_u16(fVec, o.fVec); }
    AI SkNx operator - (const SkNx& o) const { return vsubq_u16(fVec, o.fVec); }
    AI SkNx operator * (const SkNx& o) const { return vmulq_u16(fVec, o.fVec); }
    AI SkNx operator & (const SkNx& o) const { return vandq_u16(fVec, o.fVec); }
    AI SkNx operator | (const SkNx& o) const { return vorrq_u16(fVec, o.fVec); }

    AI SkNx operator << (int bits) const { return fVec << SkNx(bits).fVec; }
    AI SkNx operator >> (int bits) const { return fVec >> SkNx(bits).fVec; }

    AI static SkNx Min(const SkNx& a, const SkNx& b) { return vminq_u16(a.fVec, b.fVec); }

    AI uint16_t operator[](int k) const {
        SkASSERT(0 <= k && k < 8);
        union { uint16x8_t v; uint16_t us[8]; } pun = {fVec};
        return pun.us[k&7];
    }

    AI SkNx mulHi(const SkNx& m) const {
        uint32x4_t hi = vmull_u16(vget_high_u16(fVec), vget_high_u16(m.fVec));
        uint32x4_t lo = vmull_u16( vget_low_u16(fVec),  vget_low_u16(m.fVec));

        return { vcombine_u16(vshrn_n_u32(lo,16), vshrn_n_u32(hi,16)) };
    }
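    // mulHi keeps the high 16 bits of each 16x16->32 product, i.e. (a*b) >> 16
    // per lane: vmull_u16 widens each half to 32-bit products, and vshrn_n_u32
    // shifts right by 16 while narrowing back to 16 bits.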

    AI SkNx thenElse(const SkNx& t, const SkNx& e) const {
        return vbslq_u16(fVec, t.fVec, e.fVec);
    }

    uint16x8_t fVec;
};

template <>
class SkNx<4, uint8_t> {
public:
    typedef uint32_t __attribute__((aligned(1))) unaligned_uint32_t;

    AI SkNx(const uint8x8_t& vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
        fVec = (uint8x8_t){a,b,c,d, 0,0,0,0};
    }
    AI static SkNx Load(const void* ptr) {
        return (uint8x8_t)vld1_dup_u32((const unaligned_uint32_t*)ptr);
    }
    AI void store(void* ptr) const {
        return vst1_lane_u32((unaligned_uint32_t*)ptr, (uint32x2_t)fVec, 0);
    }
    AI uint8_t operator[](int k) const {
        SkASSERT(0 <= k && k < 4);
        union { uint8x8_t v; uint8_t us[8]; } pun = {fVec};
        return pun.us[k&3];
    }

    // TODO as needed

    uint8x8_t fVec;
};

template <>
class SkNx<8, uint8_t> {
public:
    AI SkNx(const uint8x8_t& vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(uint8_t val) : fVec(vdup_n_u8(val)) {}
    AI SkNx(uint8_t a, uint8_t b, uint8_t c, uint8_t d,
            uint8_t e, uint8_t f, uint8_t g, uint8_t h) {
        fVec = (uint8x8_t) { a,b,c,d, e,f,g,h };
    }

    AI static SkNx Load(const void* ptr) { return vld1_u8((const uint8_t*)ptr); }
    AI void store(void* ptr) const { vst1_u8((uint8_t*)ptr, fVec); }

    AI uint8_t operator[](int k) const {
        SkASSERT(0 <= k && k < 8);
        union { uint8x8_t v; uint8_t us[8]; } pun = {fVec};
        return pun.us[k&7];
    }

    uint8x8_t fVec;
};

template <>
class SkNx<16, uint8_t> {
public:
    AI SkNx(const uint8x16_t& vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(uint8_t val) : fVec(vdupq_n_u8(val)) {}
    AI SkNx(uint8_t a, uint8_t b, uint8_t c, uint8_t d,
            uint8_t e, uint8_t f, uint8_t g, uint8_t h,
            uint8_t i, uint8_t j, uint8_t k, uint8_t l,
            uint8_t m, uint8_t n, uint8_t o, uint8_t p) {
        fVec = (uint8x16_t) { a,b,c,d, e,f,g,h, i,j,k,l, m,n,o,p };
    }

    AI static SkNx Load(const void* ptr) { return vld1q_u8((const uint8_t*)ptr); }
    AI void store(void* ptr) const { vst1q_u8((uint8_t*)ptr, fVec); }

    AI SkNx saturatedAdd(const SkNx& o) const { return vqaddq_u8(fVec, o.fVec); }

    AI SkNx operator + (const SkNx& o) const { return vaddq_u8(fVec, o.fVec); }
    AI SkNx operator - (const SkNx& o) const { return vsubq_u8(fVec, o.fVec); }
    AI SkNx operator & (const SkNx& o) const { return vandq_u8(fVec, o.fVec); }

    AI static SkNx Min(const SkNx& a, const SkNx& b) { return vminq_u8(a.fVec, b.fVec); }
    AI SkNx operator < (const SkNx& o) const { return vcltq_u8(fVec, o.fVec); }

    AI uint8_t operator[](int k) const {
        SkASSERT(0 <= k && k < 16);
        union { uint8x16_t v; uint8_t us[16]; } pun = {fVec};
        return pun.us[k&15];
    }

    AI SkNx thenElse(const SkNx& t, const SkNx& e) const {
        return vbslq_u8(fVec, t.fVec, e.fVec);
    }

    uint8x16_t fVec;
};

template <>
class SkNx<4, int32_t> {
public:
    AI SkNx(const int32x4_t& vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(int32_t v) {
        fVec = vdupq_n_s32(v);
    }
    AI SkNx(int32_t a, int32_t b, int32_t c, int32_t d) {
        fVec = (int32x4_t){a,b,c,d};
    }
    AI static SkNx Load(const void* ptr) {
        return vld1q_s32((const int32_t*)ptr);
    }
    AI void store(void* ptr) const {
        return vst1q_s32((int32_t*)ptr, fVec);
    }
    AI int32_t operator[](int k) const {
        SkASSERT(0 <= k && k < 4);
        union { int32x4_t v; int32_t is[4]; } pun = {fVec};
        return pun.is[k&3];
    }

    AI SkNx operator + (const SkNx& o) const { return vaddq_s32(fVec, o.fVec); }
    AI SkNx operator - (const SkNx& o) const { return vsubq_s32(fVec, o.fVec); }
    AI SkNx operator * (const SkNx& o) const { return vmulq_s32(fVec, o.fVec); }

    AI SkNx operator & (const SkNx& o) const { return vandq_s32(fVec, o.fVec); }
    AI SkNx operator | (const SkNx& o) const { return vorrq_s32(fVec, o.fVec); }
    AI SkNx operator ^ (const SkNx& o) const { return veorq_s32(fVec, o.fVec); }

    AI SkNx operator << (int bits) const { return fVec << SkNx(bits).fVec; }
    AI SkNx operator >> (int bits) const { return fVec >> SkNx(bits).fVec; }

    AI SkNx operator == (const SkNx& o) const {
        return vreinterpretq_s32_u32(vceqq_s32(fVec, o.fVec));
    }
    AI SkNx operator < (const SkNx& o) const {
        return vreinterpretq_s32_u32(vcltq_s32(fVec, o.fVec));
    }
    AI SkNx operator > (const SkNx& o) const {
        return vreinterpretq_s32_u32(vcgtq_s32(fVec, o.fVec));
    }

    AI static SkNx Min(const SkNx& a, const SkNx& b) { return vminq_s32(a.fVec, b.fVec); }
    AI static SkNx Max(const SkNx& a, const SkNx& b) { return vmaxq_s32(a.fVec, b.fVec); }
    // TODO as needed

    AI SkNx thenElse(const SkNx& t, const SkNx& e) const {
        return vbslq_s32(vreinterpretq_u32_s32(fVec), t.fVec, e.fVec);
    }

    AI SkNx abs() const { return vabsq_s32(fVec); }

    int32x4_t fVec;
};

template <>
class SkNx<4, uint32_t> {
public:
    AI SkNx(const uint32x4_t& vec) : fVec(vec) {}

    AI SkNx() {}
    AI SkNx(uint32_t v) {
        fVec = vdupq_n_u32(v);
    }
    AI SkNx(uint32_t a, uint32_t b, uint32_t c, uint32_t d) {
        fVec = (uint32x4_t){a,b,c,d};
    }
    AI static SkNx Load(const void* ptr) {
        return vld1q_u32((const uint32_t*)ptr);
    }
    AI void store(void* ptr) const {
        return vst1q_u32((uint32_t*)ptr, fVec);
    }
    AI uint32_t operator[](int k) const {
        SkASSERT(0 <= k && k < 4);
        union { uint32x4_t v; uint32_t us[4]; } pun = {fVec};
        return pun.us[k&3];
    }

    AI SkNx operator + (const SkNx& o) const { return vaddq_u32(fVec, o.fVec); }
    AI SkNx operator - (const SkNx& o) const { return vsubq_u32(fVec, o.fVec); }
    AI SkNx operator * (const SkNx& o) const { return vmulq_u32(fVec, o.fVec); }

    AI SkNx operator & (const SkNx& o) const { return vandq_u32(fVec, o.fVec); }
    AI SkNx operator | (const SkNx& o) const { return vorrq_u32(fVec, o.fVec); }
    AI SkNx operator ^ (const SkNx& o) const { return veorq_u32(fVec, o.fVec); }

    AI SkNx operator << (int bits) const { return fVec << SkNx(bits).fVec; }
    AI SkNx operator >> (int bits) const { return fVec >> SkNx(bits).fVec; }

    AI SkNx operator == (const SkNx& o) const { return vceqq_u32(fVec, o.fVec); }
    AI SkNx operator < (const SkNx& o) const { return vcltq_u32(fVec, o.fVec); }
    AI SkNx operator > (const SkNx& o) const { return vcgtq_u32(fVec, o.fVec); }

    AI static SkNx Min(const SkNx& a, const SkNx& b) { return vminq_u32(a.fVec, b.fVec); }
    // TODO as needed

    AI SkNx mulHi(const SkNx& m) const {
        uint64x2_t hi = vmull_u32(vget_high_u32(fVec), vget_high_u32(m.fVec));
        uint64x2_t lo = vmull_u32( vget_low_u32(fVec),  vget_low_u32(m.fVec));

        return { vcombine_u32(vshrn_n_u64(lo,32), vshrn_n_u64(hi,32)) };
    }

    AI SkNx thenElse(const SkNx& t, const SkNx& e) const {
        return vbslq_u32(fVec, t.fVec, e.fVec);
    }

    uint32x4_t fVec;
};

template<> AI /*static*/ Sk4i SkNx_cast<int32_t, float>(const Sk4f& src) {
    return vcvtq_s32_f32(src.fVec);
}
template<> AI /*static*/ Sk4f SkNx_cast<float, int32_t>(const Sk4i& src) {
    return vcvtq_f32_s32(src.fVec);
}
template<> AI /*static*/ Sk4f SkNx_cast<float, uint32_t>(const Sk4u& src) {
    return SkNx_cast<float>(Sk4i::Load(&src));
}

template<> AI /*static*/ Sk4h SkNx_cast<uint16_t, float>(const Sk4f& src) {
    return vqmovn_u32(vcvtq_u32_f32(src.fVec));
}

template<> AI /*static*/ Sk4f SkNx_cast<float, uint16_t>(const Sk4h& src) {
    return vcvtq_f32_u32(vmovl_u16(src.fVec));
}

template<> AI /*static*/ Sk4b SkNx_cast<uint8_t, float>(const Sk4f& src) {
    uint32x4_t _32 = vcvtq_u32_f32(src.fVec);
    uint16x4_t _16 = vqmovn_u32(_32);
    return vqmovn_u16(vcombine_u16(_16, _16));
}

template<> AI /*static*/ Sk4u SkNx_cast<uint32_t, uint8_t>(const Sk4b& src) {
    uint16x8_t _16 = vmovl_u8(src.fVec);
    return vmovl_u16(vget_low_u16(_16));
}

template<> AI /*static*/ Sk4i SkNx_cast<int32_t, uint8_t>(const Sk4b& src) {
    return vreinterpretq_s32_u32(SkNx_cast<uint32_t>(src).fVec);
}

template<> AI /*static*/ Sk4f SkNx_cast<float, uint8_t>(const Sk4b& src) {
    return vcvtq_f32_s32(SkNx_cast<int32_t>(src).fVec);
}

template<> AI /*static*/ Sk16b SkNx_cast<uint8_t, float>(const Sk16f& src) {
    Sk8f ab, cd;
    SkNx_split(src, &ab, &cd);

    Sk4f a,b,c,d;
    SkNx_split(ab, &a, &b);
    SkNx_split(cd, &c, &d);
    return vuzpq_u8(vuzpq_u8((uint8x16_t)vcvtq_u32_f32(a.fVec),
                             (uint8x16_t)vcvtq_u32_f32(b.fVec)).val[0],
                    vuzpq_u8((uint8x16_t)vcvtq_u32_f32(c.fVec),
                             (uint8x16_t)vcvtq_u32_f32(d.fVec)).val[0]).val[0];
}

template<> AI /*static*/ Sk8b SkNx_cast<uint8_t, int32_t>(const Sk8i& src) {
    Sk4i a, b;
    SkNx_split(src, &a, &b);
    uint16x4_t a16 = vqmovun_s32(a.fVec);
    uint16x4_t b16 = vqmovun_s32(b.fVec);

    return vqmovn_u16(vcombine_u16(a16, b16));
}

template<> AI /*static*/ Sk4h SkNx_cast<uint16_t, uint8_t>(const Sk4b& src) {
    return vget_low_u16(vmovl_u8(src.fVec));
}

template<> AI /*static*/ Sk8h SkNx_cast<uint16_t, uint8_t>(const Sk8b& src) {
    return vmovl_u8(src.fVec);
}

template<> AI /*static*/ Sk4b SkNx_cast<uint8_t, uint16_t>(const Sk4h& src) {
    return vmovn_u16(vcombine_u16(src.fVec, src.fVec));
}

template<> AI /*static*/ Sk8b SkNx_cast<uint8_t, uint16_t>(const Sk8h& src) {
    return vqmovn_u16(src.fVec);
}

template<> AI /*static*/ Sk4b SkNx_cast<uint8_t, int32_t>(const Sk4i& src) {
    uint16x4_t _16 = vqmovun_s32(src.fVec);
    return vqmovn_u16(vcombine_u16(_16, _16));
}

template<> AI /*static*/ Sk4b SkNx_cast<uint8_t, uint32_t>(const Sk4u& src) {
    uint16x4_t _16 = vqmovn_u32(src.fVec);
    return vqmovn_u16(vcombine_u16(_16, _16));
}

template<> AI /*static*/ Sk4i SkNx_cast<int32_t, uint16_t>(const Sk4h& src) {
    return vreinterpretq_s32_u32(vmovl_u16(src.fVec));
}

template<> AI /*static*/ Sk4h SkNx_cast<uint16_t, int32_t>(const Sk4i& src) {
    return vmovn_u32(vreinterpretq_u32_s32(src.fVec));
}

template<> AI /*static*/ Sk4i SkNx_cast<int32_t, uint32_t>(const Sk4u& src) {
    return vreinterpretq_s32_u32(src.fVec);
}

AI static Sk4i Sk4f_round(const Sk4f& x) {
    return vcvtq_s32_f32((x + 0.5f).fVec);
}

} // namespace

#endif//SkNx_neon_DEFINED