NFC: rewrite EnumSet to handle larger enums. (#5289)
The current EnumSet implementation is only efficient for enums with
values below 64: the first 64 values (0 through 63) are stored as a
bitmask in a 64-bit unsigned integer, and the remaining values are
stored in a std::set.
For small enums, this is fine (most SPIR-V enums have IDs below 64),
but performance starts to drop with larger enums (capabilities,
opcodes).
Design considerations:
----------------------
This PR changes the internal behavior of the EnumSet to handle enums
with arbitrary values while staying performant.
The idea is to extend the 64-bit buckets sparsely:
- Each bucket can store 64 values, starting at a multiple of 64.
  This can be seen as a hash set with linear probing.
- For small enums, there is a slight memory overhead due to the bucket
  storage, but lookup is still constant.
- For linearly distributed values, lookup is constant.
- The worst case for storage is an enum whose values are all multiples
  of 64 (one bucket per value), but lookup remains constant.
- The worst case for lookup is an enum with many small ranges scattered
  across the value space (requires linear probing).
For enums like capabilities/opcodes, this bucketing works well because
values are usually scattered in distinct, but almost contiguous, blocks
(vendors usually have allocated ranges, like [5000;5500], while
[1000;5000] is mostly unused).
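As an illustration, the bucketing math can be sketched like this (a
simplified sketch; `start`, `offset`, and `bit` are illustrative names,
not the exact helpers used in the code):

    uint64_t start  = value - (value % 64);   // bucket start, aligned on 64
    uint64_t offset = value % 64;             // bit index within the bucket
    uint64_t bit    = uint64_t(1) << offset;  // set when `value` is stored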
Benchmarking:
-------------
Benchmarking was done in 2 ways:
- a benchmark built for the occasion, which measures only the EnumSet
performance.
- SPIRV-Tools tests, to measure a more realistic scenario.
Running SPIR-V tests with both implementations shows the same
performance (delta < noise), so there appear to be no regressions.
This method is noisy by nature (I/O, etc.), but it is the most
representative of a real-life scenario.
Protocol:
- run spirv-tests with no stdout using perf, multiple times.
Result:
- measurement noise is larger than the observed difference.
The custom benchmark exercised the EnumSet interface using SPIR-V enums,
doing thousands of insertions/deletions/lookups, with 2 kinds of
scenarios:
- add once, look up many times.
- add/delete/look up many times.
For small enums, results are similar (delta < noise). This is consistent
with the previous observation, as most SPIR-V enums are small and
SPIRV-Tools does not perform that many intensive operations on EnumSets.
Performance on large enums (opcode/capabilities) shows an improvement:
+-----------------------------+---------+---------+---------+
| Metric | Old | New | Delta % |
+-----------------------------+---------+---------+---------+
| Execution time | 27s | 7s | -72% |
| Instruction count | 174b | 129b | -25% |
| Branch count | 28b | 33b | +17% |
| Branch miss | 490m | 26m | -94% |
| Cache-misses | 149k | 26k | -82% |
+-----------------------------+---------+---------+---------+
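As an illustration, the "add once, look up many times" scenario roughly
looks like this (a simplified sketch; `kOpcodes`, `kIterations` and the
`contains()` membership test are stand-ins, not the actual benchmark
code):

    spvtools::EnumSet<spv::Op> set;
    for (spv::Op op : kOpcodes) set.insert(op);  // add once
    size_t hits = 0;
    for (size_t i = 0; i < kIterations; ++i) {   // look up many times
      for (spv::Op op : kOpcodes) hits += set.contains(op) ? 1 : 0;
    }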
Future work
-----------
This was, by design, an NFC change, to allow an apples-to-apples comparison.
The next PR aims to add STL-like iterators to the EnumSet to allow
using it with STL algorithms, and range-based for loops.
Signed-off-by: Nathan Gauër <brioche@google.com>

// Copyright (c) 2023 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include <algorithm>
#include <cassert>
#include <cstdint>
#include <functional>
#include <initializer_list>
#include <limits>
#include <type_traits>
#include <vector>

#ifndef SOURCE_ENUM_SET_H_
#define SOURCE_ENUM_SET_H_

#include "source/latest_version_spirv_header.h"

namespace spvtools {
// This container is optimized to store and retrieve unsigned enum values.
// The base model for this implementation is an open-addressing hashtable
// with linear probing. For small enums (max index < 64), all operations
// are O(1).
//
// - Enums are stored in buckets (64 contiguous values max per bucket).
// - Bucket ranges don't overlap, but don't have to be contiguous.
// - Enums are packed into 64-bit buckets, using 1 bit per enum value.
//
// Example:
//  - MyEnum { A = 0, B = 1, C = 64, D = 65 }
//  - 2 buckets are required:
//      - bucket 0, storing values in the range [ 0; 64[
//      - bucket 1, storing values in the range [64; 128[
//
// - Buckets are stored in a sorted vector (sorted by bucket range).
// - Retrieval is done by computing the theoretical bucket index using the
//   enum value, and doing a linear scan from this position.
// - Insertion is done by retrieving the bucket and either:
//   - inserting a new bucket in the sorted vector when no bucket has a
//     compatible range.
//   - setting the corresponding bit in the bucket.
//   This means insertion in the middle/beginning can cause a memmove when
//   no bucket is available. In our case, this happens at most 23 times for
//   the largest enum we have (Opcodes).
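//
// Usage sketch (illustrative only; `spv::Capability` stands in for any
// unsigned SPIR-V enum):
//
//   EnumSet<spv::Capability> caps{spv::Capability::Shader,
//                                 spv::Capability::Int64};
//   caps.insert(spv::Capability::Float64);
//   for (spv::Capability cap : caps) {
//     // Visits each stored value.
//   }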
template <typename T>
class EnumSet {
 private:
  using BucketType = uint64_t;
  using ElementType = std::underlying_type_t<T>;
  static_assert(std::is_enum_v<T>, "EnumSet only works with enums.");
  static_assert(std::is_signed_v<ElementType> == false,
                "EnumSet doesn't support signed enums.");

  // Each bucket can hold up to `kBucketSize` distinct, contiguous enum values.
  // The first value a bucket can hold must be aligned on `kBucketSize`.
  struct Bucket {
    // Bit mask to store `kBucketSize` enums.
    BucketType data;
    // First enum this bucket can represent.
    T start;

    friend bool operator==(const Bucket& lhs, const Bucket& rhs) {
      return lhs.start == rhs.start && lhs.data == rhs.data;
    }
  };

  // How many distinct values can a bucket hold? 1 bit per value.
  static constexpr size_t kBucketSize = sizeof(BucketType) * 8ULL;

 public:
  class Iterator {
   public:
    typedef Iterator self_type;
    typedef T value_type;
    typedef T& reference;
    typedef T* pointer;
    typedef std::forward_iterator_tag iterator_category;
    typedef size_t difference_type;

    Iterator(const Iterator& other)
        : set_(other.set_),
          bucketIndex_(other.bucketIndex_),
          bucketOffset_(other.bucketOffset_) {}

    Iterator& operator++() {
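      // Advance one bit position at a time until we either reach the
      // past-the-end position or land on a bit that holds a stored value.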
      do {
        if (bucketIndex_ >= set_->buckets_.size()) {
          bucketIndex_ = set_->buckets_.size();
          bucketOffset_ = 0;
          break;
        }

        if (bucketOffset_ + 1 == kBucketSize) {
          bucketOffset_ = 0;
          ++bucketIndex_;
        } else {
          ++bucketOffset_;
        }

      } while (bucketIndex_ < set_->buckets_.size() &&
               !set_->HasEnumAt(bucketIndex_, bucketOffset_));
      return *this;
    }

    Iterator operator++(int) {
      Iterator old = *this;
      operator++();
      return old;
    }

    T operator*() const {
      assert(set_->HasEnumAt(bucketIndex_, bucketOffset_) &&
             "operator*() called on an invalid iterator.");
      return GetValueFromBucket(set_->buckets_[bucketIndex_], bucketOffset_);
    }

    bool operator!=(const Iterator& other) const {
      return set_ != other.set_ || bucketOffset_ != other.bucketOffset_ ||
             bucketIndex_ != other.bucketIndex_;
    }

    bool operator==(const Iterator& other) const {
      return !(operator!=(other));
    }

    Iterator& operator=(const Iterator& other) {
      set_ = other.set_;
      bucketIndex_ = other.bucketIndex_;
      bucketOffset_ = other.bucketOffset_;
      return *this;
    }

   private:
    Iterator(const EnumSet* set, size_t bucketIndex, ElementType bucketOffset)
        : set_(set), bucketIndex_(bucketIndex), bucketOffset_(bucketOffset) {}

   private:
    const EnumSet* set_ = nullptr;
    // Index of the bucket in the vector.
    size_t bucketIndex_ = 0;
    // Offset in bits in the current bucket.
    ElementType bucketOffset_ = 0;

    friend class EnumSet;
  };

  // Required to allow the use of std::inserter.
  using value_type = T;
  using const_iterator = Iterator;
  using iterator = Iterator;

 public:
  iterator cbegin() const noexcept {
    auto it = iterator(this, /* bucketIndex= */ 0, /* bucketOffset= */ 0);
    if (buckets_.size() == 0) {
      return it;
    }

    // The iterator knows how to find the next valid bit. If the value 0 is
    // not stored, advance once to reach the first stored value.
    if (!HasEnumAt(it.bucketIndex_, it.bucketOffset_)) {
      ++it;
    }

    return it;
  }

  iterator begin() const noexcept { return cbegin(); }

  iterator cend() const noexcept {
    return iterator(this, buckets_.size(), /* bucketOffset= */ 0);
  }

  iterator end() const noexcept { return cend(); }

  // Creates an empty set.
  EnumSet() : buckets_(0), size_(0) {}

  // Creates a set and stores `value` in it.
  EnumSet(T value) : EnumSet() { insert(value); }

  // Creates a set and stores each value from `values` in it.
  EnumSet(std::initializer_list<T> values) : EnumSet() {
    for (auto item : values) {
      insert(item);
    }
  }

  // Creates a set, and inserts the `count` enum values pointed to by `array`.
  EnumSet(ElementType count, const T* array) : EnumSet() {
    for (ElementType i = 0; i < count; i++) {
      insert(array[i]);
    }
  }

  // Creates a set initialized with the content of the range [begin; end[.
  template <class InputIt>
  EnumSet(InputIt begin, InputIt end) : EnumSet() {
    for (; begin != end; ++begin) {
      insert(*begin);
    }
  }

  // Copies the EnumSet `other` into a new EnumSet.
  EnumSet(const EnumSet& other)
      : buckets_(other.buckets_), size_(other.size_) {}

  // Moves the EnumSet `other` into a new EnumSet.
  EnumSet(EnumSet&& other)
      : buckets_(std::move(other.buckets_)), size_(other.size_) {}

  // Deep-copies the EnumSet `other` into this EnumSet.
  EnumSet& operator=(const EnumSet& other) {
    buckets_ = other.buckets_;
    size_ = other.size_;
    return *this;
  }

  // Matches std::unordered_set::insert behavior.
  std::pair<iterator, bool> insert(const T& value) {
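    // Find the position of the bucket that could hold `value`, and the bit
    // offset `value` would occupy within such a bucket.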
    const size_t index = FindBucketForValue(value);
    const ElementType offset = ComputeBucketOffset(value);

    if (index >= buckets_.size() ||
        buckets_[index].start != ComputeBucketStart(value)) {
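      // No existing bucket covers this value's range: a new bucket holding
      // `value` has to be inserted at `index`.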
      size_ += 1;
      InsertBucketFor(index, value);
      return std::make_pair(Iterator(this, index, offset), true);
    }
|
2023-07-12 15:34:44 +00:00
|
|
|
|
NFC: rewrite EnumSet to handle larger enums. (#5289)
The current EnumSet implementation is only efficient for enums with
values < than 64. The reason is the first 63 values are stored as a
bitmask in a 64 bit unsigned integer, and the other values are stored
in a std::set.
For small enums, this is fine (most SPIR-V enums have IDs < than 64),
but performance starts to drop with larger enums (Capabilities,
opcodes).
Design considerations:
----------------------
This PR changes the internal behavior of the EnumSet to handle enums
with arbitrary values while staying performant.
The idea is to extend the 64-bits buckets sparsely:
- each bucket can store 64 value, starting from a multiplier of 64.
This could be considered as a hashset with linear probing.
- For small enums, there is a slight memory overhead due to the bucket
storage, but lookup is still constant.
- For linearly distributed values, lookup is constant.
- Worse case for storage are for enums with values which are multiples of 64.
But lookup is constant.
- Worse case for lookup are enums with a lot of small ranges scattered in
the space (requires linear probing).
For enums like capabilities/opcodes, this bucketing is useful as values
are usually scatters in distinct, but almost contiguous blocks.
(vendors usually have allocated ranges, like [5000;5500], while [1000;5000]
is mostly unused).
Benchmarking:
-------------
Benchmarking was done in 2 ways:
- a benchmark built for the occasion, which only measure the EnumSet
performance.
- SPIRV-Tools tests, to measure a more realist scenario.
Running SPIR-V tests with both implementations shows the same
performance (delta < noise). So seems like we have no regressions.
This method is noisy by nature (I/O, etc), but the most representative
of a real-life scenario.
Protocol:
- run spirv-tests with no stdout using perf, multiple times.
Result:
- measure noise is larger than the observed difference.
The custom benchmark was testing EnumSet interfaces using SPIRV enums.
Doing thousand of insertion/deletion/lookup, with 2 kind of scenarios:
- add once, lookup many times.
- add/delete/loopkup many time.
For small enums, results are similar (delta < noise). Seems relevant
with the previously observed results as most SPIRV enums are small, and
SPIRV-Tools is not doing that many intensive operations on EnumSets.
Performance on large enums (opcode/capabilities) shows an improvement:
+-----------------------------+---------+---------+---------+
| Metric | Old | New | Delta % |
+-----------------------------+---------+---------+---------+
| Execution time | 27s | 7s | -72% |
| Instruction count | 174b | 129b | -25% |
| Branch count | 28b | 33b | +17% |
| Branch miss | 490m | 26m | -94% |
| Cache-misses | 149k | 26k | -82% |
+-----------------------------+---------+---------+---------+
Future work
-----------
This was by-design an NFC change to compare apples-to-apples.
The next PR aims to add STL-like iterators to the EnumSet to allow
using it with STL algorithms, and range-based for loops.
Signed-off-by: Nathan Gauër <brioche@google.com>
2023-07-07 14:41:52 +00:00
|
|
|
auto& bucket = buckets_[index];
|
2023-07-12 15:34:44 +00:00
|
|
|
const auto mask = ComputeMaskForValue(value);
|
|
|
|
if (bucket.data & mask) {
|
|
|
|
return std::make_pair(Iterator(this, index, offset), false);
|
|
|
|
}
|
|
|
|
|
|
|
|
size_ += 1;
|
NFC: rewrite EnumSet to handle larger enums. (#5289)
The current EnumSet implementation is only efficient for enums with
values < than 64. The reason is the first 63 values are stored as a
bitmask in a 64 bit unsigned integer, and the other values are stored
in a std::set.
For small enums, this is fine (most SPIR-V enums have IDs < than 64),
but performance starts to drop with larger enums (Capabilities,
opcodes).
Design considerations:
----------------------
This PR changes the internal behavior of the EnumSet to handle enums
with arbitrary values while staying performant.
The idea is to extend the 64-bits buckets sparsely:
- each bucket can store 64 value, starting from a multiplier of 64.
This could be considered as a hashset with linear probing.
- For small enums, there is a slight memory overhead due to the bucket
storage, but lookup is still constant.
- For linearly distributed values, lookup is constant.
- Worse case for storage are for enums with values which are multiples of 64.
But lookup is constant.
- Worse case for lookup are enums with a lot of small ranges scattered in
the space (requires linear probing).
For enums like capabilities/opcodes, this bucketing is useful as values
are usually scatters in distinct, but almost contiguous blocks.
(vendors usually have allocated ranges, like [5000;5500], while [1000;5000]
is mostly unused).
Benchmarking:
-------------
Benchmarking was done in 2 ways:
- a benchmark built for the occasion, which only measure the EnumSet
performance.
- SPIRV-Tools tests, to measure a more realist scenario.
Running SPIR-V tests with both implementations shows the same
performance (delta < noise). So seems like we have no regressions.
This method is noisy by nature (I/O, etc), but the most representative
of a real-life scenario.
Protocol:
- run spirv-tests with no stdout using perf, multiple times.
Result:
- measure noise is larger than the observed difference.
The custom benchmark was testing EnumSet interfaces using SPIRV enums.
Doing thousand of insertion/deletion/lookup, with 2 kind of scenarios:
- add once, lookup many times.
- add/delete/loopkup many time.
For small enums, results are similar (delta < noise). Seems relevant
with the previously observed results as most SPIRV enums are small, and
SPIRV-Tools is not doing that many intensive operations on EnumSets.
Performance on large enums (opcode/capabilities) shows an improvement:
+-----------------------------+---------+---------+---------+
| Metric | Old | New | Delta % |
+-----------------------------+---------+---------+---------+
| Execution time | 27s | 7s | -72% |
| Instruction count | 174b | 129b | -25% |
| Branch count | 28b | 33b | +17% |
| Branch miss | 490m | 26m | -94% |
| Cache-misses | 149k | 26k | -82% |
+-----------------------------+---------+---------+---------+
Future work
-----------
This was by-design an NFC change to compare apples-to-apples.
The next PR aims to add STL-like iterators to the EnumSet to allow
using it with STL algorithms, and range-based for loops.
Signed-off-by: Nathan Gauër <brioche@google.com>
2023-07-07 14:41:52 +00:00
|
|
|
bucket.data |= ComputeMaskForValue(value);
|
2023-07-12 15:34:44 +00:00
|
|
|
return std::make_pair(Iterator(this, index, offset), true);
|
|
|
|
}
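
  // Illustrative usage sketch: `MyEnum` stands for a hypothetical scoped enum
  // (say with enumerators kA = 3 and kB = 300); it is not defined in this
  // header. The returned pair reports whether the value was newly added:
  //
  //   EnumSet<MyEnum> set;
  //   bool added = set.insert(MyEnum::kA).second;  // true: new value.
  //   bool again = set.insert(MyEnum::kA).second;  // false: already present.
  //   set.insert(MyEnum::kB);  // Value 300 lands in a separate 64-wide bucket.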

  // Inserts `value` in the set if possible.
  // Similar to `std::unordered_set::insert`, except the hint is ignored.
  // Returns an iterator to the inserted element, or the element preventing
  // insertion.
  iterator insert(const_iterator, const T& value) {
    return insert(value).first;
  }

  // Inserts `value` in the set if possible.
  // Similar to `std::unordered_set::insert`, except the hint is ignored.
  // Returns an iterator to the inserted element, or the element preventing
  // insertion.
  iterator insert(const_iterator, T&& value) { return insert(value).first; }
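
  // Illustrative sketch (`MyEnum` is the same hypothetical enum as above; it
  // also assumes the iterator returned by `insert` is accepted as the hint):
  //
  //   EnumSet<MyEnum> set;
  //   auto hint = set.insert(MyEnum::kA).first;
  //   set.insert(hint, MyEnum::kB);  // Hint is ignored; same as insert(kB).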

  // Inserts all the values in the range [`first`, `last`).
  // Similar to `std::unordered_set::insert`.
  template <class InputIt>
  void insert(InputIt first, InputIt last) {
    for (auto it = first; it != last; ++it) {
      insert(*it);
    }
  }
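
  // Illustrative sketch (`MyEnum` is the same hypothetical enum as above):
  // the range overload simply forwards each element to the single-value
  // `insert`, so any input-iterator pair works:
  //
  //   EnumSet<MyEnum> set;
  //   const MyEnum values[] = {MyEnum::kA, MyEnum::kB};
  //   set.insert(values, values + 2);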

  // Removes the value `value` from the set.
  // Similar to `std::unordered_set::erase`.
  // Returns the number of erased elements.
  size_t erase(const T& value) {
    const size_t index = FindBucketForValue(value);
    if (index >= buckets_.size() ||
        buckets_[index].start != ComputeBucketStart(value)) {
      return 0;
    }

    auto& bucket = buckets_[index];
    const auto mask = ComputeMaskForValue(value);
    if (!(bucket.data & mask)) {
      return 0;
    }

    size_ -= 1;
    bucket.data &= ~mask;
    if (bucket.data == 0) {
      buckets_.erase(buckets_.cbegin() + index);
    }
    return 1;
  }
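
  // Illustrative sketch (`MyEnum` is the same hypothetical enum as above):
  // `erase` returns how many elements were removed, so it doubles as a
  // "was present" check, and a bucket whose last bit is cleared is released:
  //
  //   EnumSet<MyEnum> set;
  //   set.insert(MyEnum::kA);
  //   size_t n = set.erase(MyEnum::kA);  // n == 1: the value was removed.
  //   n = set.erase(MyEnum::kA);         // n == 0: nothing left to remove.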

  // Returns true if `value` is present in the set.
  bool contains(T value) const {
    const size_t index = FindBucketForValue(value);
    if (index >= buckets_.size() ||
        buckets_[index].start != ComputeBucketStart(value)) {
      return false;
    }
    auto& bucket = buckets_[index];
    return bucket.data & ComputeMaskForValue(value);
  }

  // Returns 1 if `value` is present in the set, `0` otherwise.
  inline size_t count(T value) const { return contains(value) ? 1 : 0; }
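
  // Illustrative sketch (`MyEnum` is the same hypothetical enum as above):
  // `contains` answers membership directly, `count` mirrors the std::set API:
  //
  //   EnumSet<MyEnum> flags;
  //   flags.insert(MyEnum::kA);
  //   assert(flags.contains(MyEnum::kA));
  //   assert(flags.count(MyEnum::kA) == 1);
  //   assert(!flags.contains(MyEnum::kB) && flags.count(MyEnum::kB) == 0);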

  // Returns true if the set holds no values.
  inline bool empty() const { return size_ == 0; }

  // Returns the number of enums stored in this set.
  size_t size() const { return size_; }

  // Returns true if this set contains at least one value contained in `in_set`.
  // Note: If `in_set` is empty, this function returns true.
  bool HasAnyOf(const EnumSet<T>& in_set) const {
    if (in_set.empty()) {
      return true;
    }

    auto lhs = buckets_.cbegin();
    auto rhs = in_set.buckets_.cbegin();

    while (lhs != buckets_.cend() && rhs != in_set.buckets_.cend()) {
      if (lhs->start == rhs->start) {
        if (lhs->data & rhs->data) {
          // At least 1 bit is shared. Early return.
          return true;
        }

        lhs++;
        rhs++;
        continue;
      }
|
2017-03-09 23:24:35 +00:00
|
|
|
|
NFC: rewrite EnumSet to handle larger enums. (#5289)
The current EnumSet implementation is only efficient for enums with
values < than 64. The reason is the first 63 values are stored as a
bitmask in a 64 bit unsigned integer, and the other values are stored
in a std::set.
For small enums, this is fine (most SPIR-V enums have IDs < than 64),
but performance starts to drop with larger enums (Capabilities,
opcodes).
Design considerations:
----------------------
This PR changes the internal behavior of the EnumSet to handle enums
with arbitrary values while staying performant.
The idea is to extend the 64-bits buckets sparsely:
- each bucket can store 64 value, starting from a multiplier of 64.
This could be considered as a hashset with linear probing.
- For small enums, there is a slight memory overhead due to the bucket
storage, but lookup is still constant.
- For linearly distributed values, lookup is constant.
- Worse case for storage are for enums with values which are multiples of 64.
But lookup is constant.
- Worse case for lookup are enums with a lot of small ranges scattered in
the space (requires linear probing).
For enums like capabilities/opcodes, this bucketing is useful as values
are usually scatters in distinct, but almost contiguous blocks.
(vendors usually have allocated ranges, like [5000;5500], while [1000;5000]
is mostly unused).
Benchmarking:
-------------
Benchmarking was done in 2 ways:
- a benchmark built for the occasion, which only measure the EnumSet
performance.
- SPIRV-Tools tests, to measure a more realist scenario.
Running SPIR-V tests with both implementations shows the same
performance (delta < noise). So seems like we have no regressions.
This method is noisy by nature (I/O, etc), but the most representative
of a real-life scenario.
Protocol:
- run spirv-tests with no stdout using perf, multiple times.
Result:
- measure noise is larger than the observed difference.
The custom benchmark was testing EnumSet interfaces using SPIRV enums.
Doing thousand of insertion/deletion/lookup, with 2 kind of scenarios:
- add once, lookup many times.
- add/delete/loopkup many time.
For small enums, results are similar (delta < noise). Seems relevant
with the previously observed results as most SPIRV enums are small, and
SPIRV-Tools is not doing that many intensive operations on EnumSets.
Performance on large enums (opcode/capabilities) shows an improvement:
+-----------------------------+---------+---------+---------+
| Metric | Old | New | Delta % |
+-----------------------------+---------+---------+---------+
| Execution time | 27s | 7s | -72% |
| Instruction count | 174b | 129b | -25% |
| Branch count | 28b | 33b | +17% |
| Branch miss | 490m | 26m | -94% |
| Cache-misses | 149k | 26k | -82% |
+-----------------------------+---------+---------+---------+
Future work
-----------
This was by-design an NFC change to compare apples-to-apples.
The next PR aims to add STL-like iterators to the EnumSet to allow
using it with STL algorithms, and range-based for loops.
Signed-off-by: Nathan Gauër <brioche@google.com>
2023-07-07 14:41:52 +00:00
|
|
|
// LHS bucket is smaller than the current RHS bucket. Catching up on RHS.
|
|
|
|
if (lhs->start < rhs->start) {
|
|
|
|
lhs++;
|
|
|
|
continue;
|
|
|
|
}
|
2017-03-09 23:24:35 +00:00
|
|
|
|
NFC: rewrite EnumSet to handle larger enums. (#5289)
The current EnumSet implementation is only efficient for enums with
values < than 64. The reason is the first 63 values are stored as a
bitmask in a 64 bit unsigned integer, and the other values are stored
in a std::set.
For small enums, this is fine (most SPIR-V enums have IDs < than 64),
but performance starts to drop with larger enums (Capabilities,
opcodes).
Design considerations:
----------------------
This PR changes the internal behavior of the EnumSet to handle enums
with arbitrary values while staying performant.
The idea is to extend the 64-bits buckets sparsely:
- each bucket can store 64 value, starting from a multiplier of 64.
This could be considered as a hashset with linear probing.
- For small enums, there is a slight memory overhead due to the bucket
storage, but lookup is still constant.
- For linearly distributed values, lookup is constant.
- Worse case for storage are for enums with values which are multiples of 64.
But lookup is constant.
- Worse case for lookup are enums with a lot of small ranges scattered in
the space (requires linear probing).
For enums like capabilities/opcodes, this bucketing is useful as values
are usually scatters in distinct, but almost contiguous blocks.
(vendors usually have allocated ranges, like [5000;5500], while [1000;5000]
is mostly unused).
Benchmarking:
-------------
Benchmarking was done in 2 ways:
- a benchmark built for the occasion, which only measure the EnumSet
performance.
- SPIRV-Tools tests, to measure a more realist scenario.
Running SPIR-V tests with both implementations shows the same
performance (delta < noise). So seems like we have no regressions.
This method is noisy by nature (I/O, etc), but the most representative
of a real-life scenario.
Protocol:
- run spirv-tests with no stdout using perf, multiple times.
Result:
- measure noise is larger than the observed difference.
The custom benchmark was testing EnumSet interfaces using SPIRV enums.
Doing thousand of insertion/deletion/lookup, with 2 kind of scenarios:
- add once, lookup many times.
- add/delete/loopkup many time.
For small enums, results are similar (delta < noise). Seems relevant
with the previously observed results as most SPIRV enums are small, and
SPIRV-Tools is not doing that many intensive operations on EnumSets.
Performance on large enums (opcode/capabilities) shows an improvement:
+-----------------------------+---------+---------+---------+
| Metric | Old | New | Delta % |
+-----------------------------+---------+---------+---------+
| Execution time | 27s | 7s | -72% |
| Instruction count | 174b | 129b | -25% |
| Branch count | 28b | 33b | +17% |
| Branch miss | 490m | 26m | -94% |
| Cache-misses | 149k | 26k | -82% |
+-----------------------------+---------+---------+---------+
Future work
-----------
This was by-design an NFC change to compare apples-to-apples.
The next PR aims to add STL-like iterators to the EnumSet to allow
using it with STL algorithms, and range-based for loops.
Signed-off-by: Nathan Gauër <brioche@google.com>
2023-07-07 14:41:52 +00:00
|
|
|
// Otherwise, RHS needs to catch up on LHS.
|
|
|
|
rhs++;
|
2017-03-09 23:24:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
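
  // Illustrative sketch (assuming a hypothetical enum E and the
  // initializer-list constructor declared earlier in this class): buckets_ is
  // kept sorted by `start`, so HasAnyOf can walk both bucket vectors in
  // lockstep, like the merge step of a merge-sort.
  //   EnumSet<E> a{E(2), E(70)};    // Buckets starting at 0 and 64.
  //   EnumSet<E> b{E(70), E(130)};  // Buckets starting at 64 and 128.
  //   a.HasAnyOf(b);                // True: bit 6 of the 64-bucket is shared.
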
 private:
  // Returns the index of the last bucket in which `value` could be stored.
  static constexpr inline size_t ComputeLargestPossibleBucketIndexFor(T value) {
    return static_cast<size_t>(value) / kBucketSize;
  }

  // Returns the smallest enum value that could be contained in the same bucket
  // as `value`.
  static constexpr inline T ComputeBucketStart(T value) {
    return static_cast<T>(kBucketSize *
                          ComputeLargestPossibleBucketIndexFor(value));
  }
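
  // Worked example (illustrative; assumes kBucketSize is 64, matching the
  // 64-bit bucket mask): for an enum whose numeric value is 70,
  //   ComputeLargestPossibleBucketIndexFor(value) == 70 / 64 == 1
  //   ComputeBucketStart(value)                   == 64 * 1  == 64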

  // Returns the index of the bit that corresponds to `value` in the bucket.
  static constexpr inline ElementType ComputeBucketOffset(T value) {
    return static_cast<ElementType>(value) % kBucketSize;
  }

  // Returns the bitmask used to represent the enum `value` in its bucket.
  static constexpr inline BucketType ComputeMaskForValue(T value) {
    return 1ULL << ComputeBucketOffset(value);
  }
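
  // Continuing the illustrative example (value 70, kBucketSize of 64):
  //   ComputeBucketOffset(value) == 70 % 64   == 6
  //   ComputeMaskForValue(value) == 1ULL << 6 == 0x40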

  // Returns the `enum` stored in `bucket` at `offset`.
  // `offset` is the bit-offset in the bucket storage.
  static constexpr inline T GetValueFromBucket(const Bucket& bucket,
                                               BucketType offset) {
    return static_cast<T>(static_cast<ElementType>(bucket.start) + offset);
  }
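
  // Illustrative round trip: for a bucket whose `start` is 64, the bit at
  // offset 6 maps back to the enum with numeric value 64 + 6 == 70, so
  //   GetValueFromBucket(bucket, 6) == static_cast<T>(70)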

  // For a given enum `value`, finds the bucket index that could contain this
  // value. If no such bucket is found, the index at which the new bucket should
  // be inserted is returned.
  size_t FindBucketForValue(T value) const {
    // Set is empty, insert at 0.
    if (buckets_.size() == 0) {
      return 0;
    }

    const T wanted_start = ComputeBucketStart(value);
    assert(buckets_.size() > 0 &&
           "Size must not be 0 here. Has the code above changed?");
    size_t index = std::min(buckets_.size() - 1,
                            ComputeLargestPossibleBucketIndexFor(value));

    // This loop behaves like std::upper_bound with a reverse iterator.
    // Buckets are sorted. 3 main cases:
    //  - The bucket matches
    //    => returns the bucket index.
    //  - The found bucket is larger
    //    => scans left until it finds the correct bucket, or insertion point.
    //  - The found bucket is smaller
    //    => We are at the end, so we return past-end index for insertion.
    for (; buckets_[index].start >= wanted_start; index--) {
      if (index == 0) {
        return 0;
      }
    }

    return index + 1;
  }
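
  // Illustrative walk-through, assuming three buckets with starts {0, 64, 192}:
  //  - FindBucketForValue(70): wanted_start is 64; the scan starts at index 1,
  //    steps left past it to index 0 (start 0 < 64) and returns 0 + 1 == 1,
  //    the bucket that actually holds start 64.
  //  - FindBucketForValue(130): wanted_start is 128; the scan starts at
  //    index 2 (start 192), steps left to index 1 (start 64 < 128) and
  //    returns 2, the index where a new bucket starting at 128 would go.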

  // Creates a new bucket to store `value` and inserts it at `index`.
  // If the `index` is past the end, the bucket is inserted at the end of the
  // vector.
  void InsertBucketFor(size_t index, T value) {
    const T bucket_start = ComputeBucketStart(value);
    Bucket bucket = {1ULL << ComputeBucketOffset(value), bucket_start};
    auto it = buckets_.emplace(buckets_.begin() + index, std::move(bucket));
#if defined(NDEBUG)
    (void)it;  // Silencing unused variable warning.
#else
    assert(std::next(it) == buckets_.end() ||
           std::next(it)->start > bucket_start);
    assert(it == buckets_.begin() || std::prev(it)->start < bucket_start);
#endif
  }
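
  // Illustrative continuation: inserting value 130 at the index 2 returned
  // above creates the bucket {1ULL << 2, /* start */ 128} and places it
  // between the buckets starting at 64 and 192, keeping buckets_ sorted
  // (which is exactly the ordering invariant the assertions above check).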

  // Returns true if the bucket at `bucketIndex` stores the enum at
  // `bucketOffset`, false otherwise.
  bool HasEnumAt(size_t bucketIndex, BucketType bucketOffset) const {
    assert(bucketIndex < buckets_.size());
    assert(bucketOffset < kBucketSize);
    return buckets_[bucketIndex].data & (1ULL << bucketOffset);
  }
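
  // Illustrative check, continuing the layout above: HasEnumAt(2, 2) asks
  // whether bit 2 of the bucket starting at 128 is set, i.e. whether the
  // enum with numeric value 130 is in the set.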

  // Returns true if `lhs` and `rhs` hold the exact same values.
  friend bool operator==(const EnumSet& lhs, const EnumSet& rhs) {
    if (lhs.size_ != rhs.size_) {
      return false;
    }

    if (lhs.buckets_.size() != rhs.buckets_.size()) {
      return false;
    }
    return lhs.buckets_ == rhs.buckets_;
  }

  // Returns true if `lhs` and `rhs` hold at least 1 different value.
  friend bool operator!=(const EnumSet& lhs, const EnumSet& rhs) {
    return !(lhs == rhs);
  }
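
  // Note on the element-wise buckets_ comparison above: it is assumed to be
  // sufficient because buckets are created at fixed 64-aligned starts, kept
  // sorted, and (assuming the removal path earlier in this class erases
  // buckets whose mask becomes empty) two sets holding the same values always
  // end up with identical bucket vectors.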

  // Storage for the buckets.
  std::vector<Bucket> buckets_;
  // How many enums this set is storing.
  size_t size_ = 0;
};

// A set of spv::Capability.
using CapabilitySet = EnumSet<spv::Capability>;
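
// Example usage (illustrative; spv::Capability comes from the SPIRV-Headers
// dependency, and Add/Contains are assumed to be the members declared earlier
// in this class):
//   CapabilitySet caps;
//   caps.Add(spv::Capability::Shader);
//   if (caps.Contains(spv::Capability::Shader)) { /* ... */ }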

}  // namespace spvtools

#endif  // SOURCE_ENUM_SET_H_