diff --git a/docs/html/choosing_memory_type.html b/docs/html/choosing_memory_type.html index 60b761b..53177d3 100644 --- a/docs/html/choosing_memory_type.html +++ b/docs/html/choosing_memory_type.html @@ -78,7 +78,7 @@ $(function() {

You can leave the VmaAllocationCreateInfo structure completely filled with zeros. It means no requirements are specified for the memory type. It is valid, although not very useful.

Usage

-

The easiest way to specify memory requirements is to fill member VmaAllocationCreateInfo::usage using one of the values of enum VmaMemoryUsage. It defines high level, common usage types.

+

The easiest way to specify memory requirements is to fill the member VmaAllocationCreateInfo::usage using one of the values of enum VmaMemoryUsage. It defines high-level, common usage types. For more details, see the description of this enum.

For example, if you want to create a uniform buffer that will be filled using transfer only once or infrequently and then used for rendering every frame, you can do it using the following code:

VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferInfo.size = 65536;
bufferInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
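
When the buffer is no longer needed, you can destroy it together with its allocation in a single call. A minimal sketch, assuming the objects from the example above:

// At cleanup time, destroy the buffer and free its memory allocation in one call.
vmaDestroyBuffer(allocator, buffer, allocation);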

Required and preferred flags
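
If the predefined usage values are not precise enough, VmaAllocationCreateInfo::requiredFlags and VmaAllocationCreateInfo::preferredFlags let you specify VkMemoryPropertyFlags directly. A hedged sketch - the flag combination below is illustrative, not taken from this page:

VmaAllocationCreateInfo allocInfo = {};
// The selected memory type must be HOST_VISIBLE. HOST_COHERENT and
// HOST_CACHED are preferred, but a type without them may still be chosen.
allocInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
allocInfo.preferredFlags = VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;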

diff --git a/docs/html/memory_mapping.html b/docs/html/memory_mapping.html index bb32ecc..8df6d86 100644 --- a/docs/html/memory_mapping.html +++ b/docs/html/memory_mapping.html @@ -66,22 +66,23 @@ $(function() {
Memory mapping
-

+

To "map memory" in Vulkan means to obtain a CPU pointer to VkDeviceMemory, to be able to read from it or write to it in CPU code. Mapping is possible only of memory allocated from a memory type that has VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT flag. Functions vkMapMemory(), vkUnmapMemory() are designed for this purpose. You can use them directly with memory allocated by this library, but it is not recommended because of following issue: Mapping the same VkDeviceMemory block multiple times is illegal - only one mapping at a time is allowed. This includes mapping disjoint regions. Mapping is not reference-counted internally by Vulkan. Because of this, Vulkan Memory Allocator provides following facilities:

+

+Mapping functions

+

The library provides the following functions for mapping of a specific VmaAllocation: vmaMapMemory() and vmaUnmapMemory(). They are safer and more convenient to use than the standard Vulkan functions. You can map an allocation multiple times simultaneously - mapping is reference-counted internally. You can also map different allocations simultaneously regardless of whether they use the same VkDeviceMemory block. The way it's implemented is that the library always maps the entire memory block, not just the region of the allocation. For further details, see the description of the vmaMapMemory() function. Example:

+
// Having these objects initialized:
struct ConstantBuffer
{
...
};
ConstantBuffer constantBufferData;
VmaAllocator allocator;
VkBuffer constantBuffer;
VmaAllocation constantBufferAllocation;
// You can map and fill your buffer using following code:
void* mappedData;
vmaMapMemory(allocator, constantBufferAllocation, &mappedData);
memcpy(mappedData, &constantBufferData, sizeof(constantBufferData));
vmaUnmapMemory(allocator, constantBufferAllocation);
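
Because mapping is reference-counted, nested map/unmap pairs on the same allocation are also valid. A sketch, reusing the objects from the example above:

void* pData1;
void* pData2;
vmaMapMemory(allocator, constantBufferAllocation, &pData1);
vmaMapMemory(allocator, constantBufferAllocation, &pData2); // Reference count is now 2.
// Both pointers refer to the beginning of the allocation.
vmaUnmapMemory(allocator, constantBufferAllocation);
vmaUnmapMemory(allocator, constantBufferAllocation); // Reference count drops back to 0.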

Persistently mapped memory

-

If you need to map memory on host, it may happen that two allocations are assigned to the same VkDeviceMemory block, so if you map them both at the same time, it will cause error because mapping single memory block multiple times is illegal in Vulkan.

-

TODO update this...

-

It is safer, more convenient and more efficient to use special feature designed for that: persistently mapped memory. Allocations made with VMA_ALLOCATION_CREATE_MAPPED_BIT flag set in VmaAllocationCreateInfo::flags are returned from device memory blocks that stay mapped all the time, so you can just access CPU pointer to it. VmaAllocationInfo::pMappedData pointer is already offseted to the beginning of particular allocation. Example:

-
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 1024;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
VkBuffer buf;
VmaAllocation alloc;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
// Buffer is immediately mapped. You can access its memory.
memcpy(allocInfo.pMappedData, myData, 1024);

Memory in Vulkan doesn't need to be unmapped before using it e.g. for transfers, but if you are not sure whether it's HOST_COHERENT (here is surely is because it's created with VMA_MEMORY_USAGE_CPU_ONLY), you should check it. If it's not, you should call vkInvalidateMappedMemoryRanges() before reading and vkFlushMappedMemoryRanges() after writing to mapped memory on CPU. Example:

-
VkMemoryPropertyFlags memFlags;
vmaGetMemoryTypeProperties(allocator, allocInfo.memoryType, &memFlags);
if((memFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) == 0)
{
VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
memRange.memory = allocInfo.deviceMemory;
memRange.offset = allocInfo.offset;
memRange.size = allocInfo.size;
vkFlushMappedMemoryRanges(device, 1, &memRange);
}

-Note on performance

-

There is a situation that you should be careful about. It happens only if all of following conditions are met:

-
  1. You use AMD GPU.
  2. You use the memory type that is both DEVICE_LOCAL and HOST_VISIBLE (used when you specify VMA_MEMORY_USAGE_CPU_TO_GPU).
  3. Operating system is Windows 7 or 8.x (Windows 10 is not affected because it uses WDDM2).
-

Then whenever a VkDeviceMemory block allocated from this memory type is mapped for the time of any call to vkQueueSubmit() or vkQueuePresentKHR(), this block is migrated by WDDM to system RAM, which degrades performance. It doesn't matter if that particular memory block is actually used by the command buffer being submitted.

-

To avoid this problem, either make sure to unmap all allocations made from this memory type before your Submit and Present, or use VMA_MEMORY_USAGE_GPU_ONLY and transfer from a staging buffer in VMA_MEMORY_USAGE_CPU_ONLY, which can safely stay mapped all the time.

+

Keeping your memory persistently mapped is generally OK in Vulkan. You don't need to unmap it before using its data on the GPU. The library provides a special feature designed for that: allocations made with the VMA_ALLOCATION_CREATE_MAPPED_BIT flag set in VmaAllocationCreateInfo::flags stay mapped all the time, so you can just access the CPU pointer to them at any time without needing to call any "map" or "unmap" function. Example:

+
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = sizeof(ConstantBuffer);
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
// Buffer is already mapped. You can access its memory.
memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));

There are some exceptions though, when you should consider mapping memory only for a short period of time:

+
  • When the operating system is Windows 7 or 8.x (Windows 10 is not affected because it uses WDDM2), the device is a discrete AMD GPU, and the memory type is the special 256 MiB pool of DEVICE_LOCAL + HOST_VISIBLE memory (selected when you use VMA_MEMORY_USAGE_CPU_TO_GPU), then whenever a memory block allocated from this memory type stays mapped for the time of any call to vkQueueSubmit() or vkQueuePresentKHR(), this block is migrated by WDDM to system RAM, which degrades performance. It doesn't matter if that particular memory block is actually used by the command buffer being submitted.
  • Keeping many large memory blocks mapped may impact performance or stability of some debugging tools.

+Cache control

+

Memory in Vulkan doesn't need to be unmapped before using it on the GPU, but unless a memory type has the VK_MEMORY_PROPERTY_HOST_COHERENT_BIT flag set, you need to manually invalidate the cache before reading from a mapped pointer using vkInvalidateMappedMemoryRanges() and flush the cache after writing to a mapped pointer using vkFlushMappedMemoryRanges(). Example:

+
memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
VkMemoryPropertyFlags memFlags;
vmaGetMemoryTypeProperties(allocator, allocInfo.memoryType, &memFlags);
if((memFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) == 0)
{
VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
memRange.memory = allocInfo.deviceMemory;
memRange.offset = allocInfo.offset;
memRange.size = allocInfo.size;
vkFlushMappedMemoryRanges(device, 1, &memRange);
}

Please note that memory allocated with VMA_MEMORY_USAGE_CPU_ONLY is guaranteed to be host coherent.
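
Reading works the same way in the opposite direction. A sketch of the invalidate counterpart, under the same non-coherent check as above:

VkMemoryPropertyFlags memFlags;
vmaGetMemoryTypeProperties(allocator, allocInfo.memoryType, &memFlags);
if((memFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) == 0)
{
VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
memRange.memory = allocInfo.deviceMemory;
memRange.offset = allocInfo.offset;
memRange.size = allocInfo.size;
vkInvalidateMappedMemoryRanges(device, 1, &memRange);
}
// Only now read from allocInfo.pMappedData on the CPU.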

+

Also, Windows drivers from all 3 PC GPU vendors (AMD, Intel, NVIDIA) currently provide VK_MEMORY_PROPERTY_HOST_COHERENT_BIT flag on all memory types that are VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, so on this platform you may not need to bother.

-

Flags for created allocator. Use VmaAllocatorCreateFlagBits enum.

+

Flags for created allocator. Use VmaAllocatorCreateFlagBits enum.

diff --git a/docs/html/struct_vma_pool_create_info.html b/docs/html/struct_vma_pool_create_info.html index b396331..06fd23f 100644 --- a/docs/html/struct_vma_pool_create_info.html +++ b/docs/html/struct_vma_pool_create_info.html @@ -77,7 +77,7 @@ Public Attributes  Vulkan memory type index to allocate this pool from. More...
  VmaPoolCreateFlags flags - Use combination of VmaPoolCreateFlagBits. More...
+ Use combination of VmaPoolCreateFlagBits. More...
  VkDeviceSize blockSize  Size of a single VkDeviceMemory block to be allocated as part of this pool, in bytes. More...
@@ -124,7 +124,7 @@ Public Attributes
-

Use combination of VmaPoolCreateFlagBits.

+

Use combination of VmaPoolCreateFlagBits.

diff --git a/docs/html/vk__mem__alloc_8h.html b/docs/html/vk__mem__alloc_8h.html index d1d5ad2..b60f43d 100644 --- a/docs/html/vk__mem__alloc_8h.html +++ b/docs/html/vk__mem__alloc_8h.html @@ -711,7 +711,7 @@ Functions -

Allocation may still end up in HOST_VISIBLE memory on some implementations. In such a case, you are free to map it. You can use VMA_ALLOCATION_CREATE_MAPPED_BIT with this usage type.
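
A hedged sketch of that combination - request the mapping up front and fall back to a staging path if the chosen memory type turned out not to be mappable (buffer creation mirrors the earlier examples):

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
if(allocInfo.pMappedData != nullptr)
{
// Memory turned out to be HOST_VISIBLE: write directly.
}
else
{
// Not mappable here: upload through a staging buffer instead.
}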

diff --git a/docs/html/vk__mem__alloc_8h_source.html index ce8f044..8442361 100644 --- a/docs/html/vk__mem__alloc_8h_source.html +++ b/docs/html/vk__mem__alloc_8h_source.html @@ -62,154 +62,154 @@ $(function() {
vk_mem_alloc.h
1 //
2 // Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
3 //
4 // Permission is hereby granted, free of charge, to any person obtaining a copy
5 // of this software and associated documentation files (the "Software"), to deal
6 // in the Software without restriction, including without limitation the rights
7 // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 // copies of the Software, and to permit persons to whom the Software is
9 // furnished to do so, subject to the following conditions:
10 //
11 // The above copyright notice and this permission notice shall be included in
12 // all copies or substantial portions of the Software.
13 //
14 // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
20 // THE SOFTWARE.
21 //
22 
23 #ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
24 #define AMD_VULKAN_MEMORY_ALLOCATOR_H
25 
26 #ifdef __cplusplus
27 extern "C" {
28 #endif
29 
736 #include <vulkan/vulkan.h>
737 
738 VK_DEFINE_HANDLE(VmaAllocator)
739 
740 typedef void (VKAPI_PTR *PFN_vmaAllocateDeviceMemoryFunction)(
742  VmaAllocator allocator,
743  uint32_t memoryType,
744  VkDeviceMemory memory,
745  VkDeviceSize size);
747 typedef void (VKAPI_PTR *PFN_vmaFreeDeviceMemoryFunction)(
748  VmaAllocator allocator,
749  uint32_t memoryType,
750  VkDeviceMemory memory,
751  VkDeviceSize size);
752 
760 typedef struct VmaDeviceMemoryCallbacks {
766 
796 
799 typedef VkFlags VmaAllocatorCreateFlags;
800 
805 typedef struct VmaVulkanFunctions {
806  PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
807  PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
808  PFN_vkAllocateMemory vkAllocateMemory;
809  PFN_vkFreeMemory vkFreeMemory;
810  PFN_vkMapMemory vkMapMemory;
811  PFN_vkUnmapMemory vkUnmapMemory;
812  PFN_vkBindBufferMemory vkBindBufferMemory;
813  PFN_vkBindImageMemory vkBindImageMemory;
814  PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
815  PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
816  PFN_vkCreateBuffer vkCreateBuffer;
817  PFN_vkDestroyBuffer vkDestroyBuffer;
818  PFN_vkCreateImage vkCreateImage;
819  PFN_vkDestroyImage vkDestroyImage;
820  PFN_vkGetBufferMemoryRequirements2KHR vkGetBufferMemoryRequirements2KHR;
821  PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
823 
826 {
828  VmaAllocatorCreateFlags flags;
830 
831  VkPhysicalDevice physicalDevice;
833 
834  VkDevice device;
836 
839 
840  const VkAllocationCallbacks* pAllocationCallbacks;
842 
857  uint32_t frameInUseCount;
881  const VkDeviceSize* pHeapSizeLimit;
895 
897 VkResult vmaCreateAllocator(
898  const VmaAllocatorCreateInfo* pCreateInfo,
899  VmaAllocator* pAllocator);
900 
903  VmaAllocator allocator);
904 
910  VmaAllocator allocator,
911  const VkPhysicalDeviceProperties** ppPhysicalDeviceProperties);
912 
918  VmaAllocator allocator,
919  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties);
920 
928  VmaAllocator allocator,
929  uint32_t memoryTypeIndex,
930  VkMemoryPropertyFlags* pFlags);
931 
941  VmaAllocator allocator,
942  uint32_t frameIndex);
943 
946 typedef struct VmaStatInfo
947 {
949  uint32_t blockCount;
951  uint32_t allocationCount;
955  VkDeviceSize usedBytes;
957  VkDeviceSize unusedBytes;
958  VkDeviceSize allocationSizeMin, allocationSizeAvg, allocationSizeMax;
959  VkDeviceSize unusedRangeSizeMin, unusedRangeSizeAvg, unusedRangeSizeMax;
960 } VmaStatInfo;
961 
963 typedef struct VmaStats
964 {
965  VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES];
966  VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS];
968 } VmaStats;
969 
971 void vmaCalculateStats(
972  VmaAllocator allocator,
973  VmaStats* pStats);
974 
975 #define VMA_STATS_STRING_ENABLED 1
976 
977 #if VMA_STATS_STRING_ENABLED
978 
980 
983  VmaAllocator allocator,
984  char** ppStatsString,
985  VkBool32 detailedMap);
986 
987 void vmaFreeStatsString(
988  VmaAllocator allocator,
989  char* pStatsString);
990 
991 #endif // #if VMA_STATS_STRING_ENABLED
992 
993 VK_DEFINE_HANDLE(VmaPool)
994 
995 typedef enum VmaMemoryUsage
996 {
1045 } VmaMemoryUsage;
1046 
1061 
1111 
1115 
1117 {
1119  VmaAllocationCreateFlags flags;
1130  VkMemoryPropertyFlags requiredFlags;
1135  VkMemoryPropertyFlags preferredFlags;
1143  uint32_t memoryTypeBits;
1149  VmaPool pool;
1156  void* pUserData;
1158 
1173 VkResult vmaFindMemoryTypeIndex(
1174  VmaAllocator allocator,
1175  uint32_t memoryTypeBits,
1176  const VmaAllocationCreateInfo* pAllocationCreateInfo,
1177  uint32_t* pMemoryTypeIndex);
1178 
1199 
1202 typedef VkFlags VmaPoolCreateFlags;
1203 
1206 typedef struct VmaPoolCreateInfo {
1212  VmaPoolCreateFlags flags;
1217  VkDeviceSize blockSize;
1246 
1249 typedef struct VmaPoolStats {
1252  VkDeviceSize size;
1255  VkDeviceSize unusedSize;
1268  VkDeviceSize unusedRangeSizeMax;
1269 } VmaPoolStats;
1270 
1277 VkResult vmaCreatePool(
1278  VmaAllocator allocator,
1279  const VmaPoolCreateInfo* pCreateInfo,
1280  VmaPool* pPool);
1281 
1284 void vmaDestroyPool(
1285  VmaAllocator allocator,
1286  VmaPool pool);
1287 
1294 void vmaGetPoolStats(
1295  VmaAllocator allocator,
1296  VmaPool pool,
1297  VmaPoolStats* pPoolStats);
1298 
1306  VmaAllocator allocator,
1307  VmaPool pool,
1308  size_t* pLostAllocationCount);
1309 
1310 VK_DEFINE_HANDLE(VmaAllocation)
1311 
1312 
1314 typedef struct VmaAllocationInfo {
1319  uint32_t memoryType;
1328  VkDeviceMemory deviceMemory;
1333  VkDeviceSize offset;
1338  VkDeviceSize size;
1352  void* pUserData;
1354 
1365 VkResult vmaAllocateMemory(
1366  VmaAllocator allocator,
1367  const VkMemoryRequirements* pVkMemoryRequirements,
1368  const VmaAllocationCreateInfo* pCreateInfo,
1369  VmaAllocation* pAllocation,
1370  VmaAllocationInfo* pAllocationInfo);
1371 
1379  VmaAllocator allocator,
1380  VkBuffer buffer,
1381  const VmaAllocationCreateInfo* pCreateInfo,
1382  VmaAllocation* pAllocation,
1383  VmaAllocationInfo* pAllocationInfo);
1384 
1386 VkResult vmaAllocateMemoryForImage(
1387  VmaAllocator allocator,
1388  VkImage image,
1389  const VmaAllocationCreateInfo* pCreateInfo,
1390  VmaAllocation* pAllocation,
1391  VmaAllocationInfo* pAllocationInfo);
1392 
1394 void vmaFreeMemory(
1395  VmaAllocator allocator,
1396  VmaAllocation allocation);
1397 
1400  VmaAllocator allocator,
1401  VmaAllocation allocation,
1402  VmaAllocationInfo* pAllocationInfo);
1403 
1418  VmaAllocator allocator,
1419  VmaAllocation allocation,
1420  void* pUserData);
1421 
1433  VmaAllocator allocator,
1434  VmaAllocation* pAllocation);
1435 
1470 VkResult vmaMapMemory(
1471  VmaAllocator allocator,
1472  VmaAllocation allocation,
1473  void** ppData);
1474 
1479 void vmaUnmapMemory(
1480  VmaAllocator allocator,
1481  VmaAllocation allocation);
1482 
1484 typedef struct VmaDefragmentationInfo {
1489  VkDeviceSize maxBytesToMove;
1496 
1498 typedef struct VmaDefragmentationStats {
1500  VkDeviceSize bytesMoved;
1502  VkDeviceSize bytesFreed;
1508 
1585 VkResult vmaDefragment(
1586  VmaAllocator allocator,
1587  VmaAllocation* pAllocations,
1588  size_t allocationCount,
1589  VkBool32* pAllocationsChanged,
1590  const VmaDefragmentationInfo *pDefragmentationInfo,
1591  VmaDefragmentationStats* pDefragmentationStats);
1592 
1619 VkResult vmaCreateBuffer(
1620  VmaAllocator allocator,
1621  const VkBufferCreateInfo* pBufferCreateInfo,
1622  const VmaAllocationCreateInfo* pAllocationCreateInfo,
1623  VkBuffer* pBuffer,
1624  VmaAllocation* pAllocation,
1625  VmaAllocationInfo* pAllocationInfo);
1626 
1638 void vmaDestroyBuffer(
1639  VmaAllocator allocator,
1640  VkBuffer buffer,
1641  VmaAllocation allocation);
1642 
1644 VkResult vmaCreateImage(
1645  VmaAllocator allocator,
1646  const VkImageCreateInfo* pImageCreateInfo,
1647  const VmaAllocationCreateInfo* pAllocationCreateInfo,
1648  VkImage* pImage,
1649  VmaAllocation* pAllocation,
1650  VmaAllocationInfo* pAllocationInfo);
1651 
1663 void vmaDestroyImage(
1664  VmaAllocator allocator,
1665  VkImage image,
1666  VmaAllocation allocation);
1667 
1668 #ifdef __cplusplus
1669 }
1670 #endif
1671 
1672 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
1673 
1674 // For Visual Studio IntelliSense.
1675 #ifdef __INTELLISENSE__
1676 #define VMA_IMPLEMENTATION
1677 #endif
1678 
1679 #ifdef VMA_IMPLEMENTATION
1680 #undef VMA_IMPLEMENTATION
1681 
1682 #include <cstdint>
1683 #include <cstdlib>
1684 #include <cstring>
1685 
1686 /*******************************************************************************
1687 CONFIGURATION SECTION
1688 
1689 Define some of these macros before each #include of this header or change them
1690 here if you need other than the default behavior, depending on your environment.
1691 */
1692 
1693 /*
1694 Define this macro to 1 to make the library fetch pointers to Vulkan functions
1695 internally, like:
1696 
1697  vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
1698 
1699 Define to 0 if you are going to provide your own pointers to Vulkan functions via
1700 VmaAllocatorCreateInfo::pVulkanFunctions.
1701 */
1702 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
1703 #define VMA_STATIC_VULKAN_FUNCTIONS 1
1704 #endif
1705 
1706 // Define this macro to 1 to make the library use STL containers instead of its own implementation.
1707 //#define VMA_USE_STL_CONTAINERS 1
1708 
1709 /* Set this macro to 1 to make the library include and use STL containers:
1710 std::pair, std::vector, std::list, std::unordered_map.
1711 
1712 Set it to 0 or undefined to make the library use its own implementation of
1713 the containers.
1714 */
1715 #if VMA_USE_STL_CONTAINERS
1716  #define VMA_USE_STL_VECTOR 1
1717  #define VMA_USE_STL_UNORDERED_MAP 1
1718  #define VMA_USE_STL_LIST 1
1719 #endif
1720 
1721 #if VMA_USE_STL_VECTOR
1722  #include <vector>
1723 #endif
1724 
1725 #if VMA_USE_STL_UNORDERED_MAP
1726  #include <unordered_map>
1727 #endif
1728 
1729 #if VMA_USE_STL_LIST
1730  #include <list>
1731 #endif
1732 
1733 /*
1734 The following headers are used in this CONFIGURATION section only, so feel free to
1735 remove them if not needed.
1736 */
1737 #include <cassert> // for assert
1738 #include <algorithm> // for min, max
1739 #include <mutex> // for std::mutex
1740 #include <atomic> // for std::atomic
1741 
1742 #if !defined(_WIN32)
1743  #include <malloc.h> // for aligned_alloc()
1744 #endif
1745 
1746 // Normal assert to check for programmer's errors, especially in Debug configuration.
1747 #ifndef VMA_ASSERT
1748  #ifdef _DEBUG
1749  #define VMA_ASSERT(expr) assert(expr)
1750  #else
1751  #define VMA_ASSERT(expr)
1752  #endif
1753 #endif
1754 
1755 // Assert that will be called very often, like inside data structures e.g. operator[].
1756 // Making it non-empty can make the program slow.
1757 #ifndef VMA_HEAVY_ASSERT
1758  #ifdef _DEBUG
1759  #define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
1760  #else
1761  #define VMA_HEAVY_ASSERT(expr)
1762  #endif
1763 #endif
1764 
1765 #ifndef VMA_NULL
1766  // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
1767  #define VMA_NULL nullptr
1768 #endif
1769 
1770 #ifndef VMA_ALIGN_OF
1771  #define VMA_ALIGN_OF(type) (__alignof(type))
1772 #endif
1773 
1774 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
1775  #if defined(_WIN32)
1776  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (_aligned_malloc((size), (alignment)))
1777  #else
1778  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (aligned_alloc((alignment), (size) ))
1779  #endif
1780 #endif
1781 
1782 #ifndef VMA_SYSTEM_FREE
1783  #if defined(_WIN32)
1784  #define VMA_SYSTEM_FREE(ptr) _aligned_free(ptr)
1785  #else
1786  #define VMA_SYSTEM_FREE(ptr) free(ptr)
1787  #endif
1788 #endif
1789 
1790 #ifndef VMA_MIN
1791  #define VMA_MIN(v1, v2) (std::min((v1), (v2)))
1792 #endif
1793 
1794 #ifndef VMA_MAX
1795  #define VMA_MAX(v1, v2) (std::max((v1), (v2)))
1796 #endif
1797 
1798 #ifndef VMA_SWAP
1799  #define VMA_SWAP(v1, v2) std::swap((v1), (v2))
1800 #endif
1801 
1802 #ifndef VMA_SORT
1803  #define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
1804 #endif
1805 
1806 #ifndef VMA_DEBUG_LOG
1807  #define VMA_DEBUG_LOG(format, ...)
1808  /*
1809  #define VMA_DEBUG_LOG(format, ...) do { \
1810  printf(format, __VA_ARGS__); \
1811  printf("\n"); \
1812  } while(false)
1813  */
1814 #endif
1815 
1816 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
1817 #if VMA_STATS_STRING_ENABLED
1818  static inline void VmaUint32ToStr(char* outStr, size_t strLen, uint32_t num)
1819  {
1820  snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
1821  }
1822  static inline void VmaUint64ToStr(char* outStr, size_t strLen, uint64_t num)
1823  {
1824  snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
1825  }
1826  static inline void VmaPtrToStr(char* outStr, size_t strLen, const void* ptr)
1827  {
1828  snprintf(outStr, strLen, "%p", ptr);
1829  }
1830 #endif
1831 
1832 #ifndef VMA_MUTEX
1833  class VmaMutex
1834  {
1835  public:
1836  VmaMutex() { }
1837  ~VmaMutex() { }
1838  void Lock() { m_Mutex.lock(); }
1839  void Unlock() { m_Mutex.unlock(); }
1840  private:
1841  std::mutex m_Mutex;
1842  };
1843  #define VMA_MUTEX VmaMutex
1844 #endif
1845 
1846 /*
1847 If providing your own implementation, you need to implement a subset of std::atomic:
1848 
1849 - Constructor(uint32_t desired)
1850 - uint32_t load() const
1851 - void store(uint32_t desired)
1852 - bool compare_exchange_weak(uint32_t& expected, uint32_t desired)
1853 */
1854 #ifndef VMA_ATOMIC_UINT32
1855  #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
1856 #endif
1857 
1858 #ifndef VMA_BEST_FIT
1859 
1871  #define VMA_BEST_FIT (1)
1872 #endif
1873 
1874 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
1875 
1879  #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
1880 #endif
1881 
1882 #ifndef VMA_DEBUG_ALIGNMENT
1883 
1887  #define VMA_DEBUG_ALIGNMENT (1)
1888 #endif
1889 
1890 #ifndef VMA_DEBUG_MARGIN
1891 
1895  #define VMA_DEBUG_MARGIN (0)
1896 #endif
1897 
1898 #ifndef VMA_DEBUG_GLOBAL_MUTEX
1899 
1903  #define VMA_DEBUG_GLOBAL_MUTEX (0)
1904 #endif
1905 
1906 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
1907 
1911  #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
1912 #endif
1913 
1914 #ifndef VMA_SMALL_HEAP_MAX_SIZE
1915  #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
1917 #endif
1918 
1919 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
1920  #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
1922 #endif
1923 
1924 static const uint32_t VMA_FRAME_INDEX_LOST = UINT32_MAX;
1925 
1926 /*******************************************************************************
1927 END OF CONFIGURATION
1928 */
1929 
1930 static VkAllocationCallbacks VmaEmptyAllocationCallbacks = {
1931  VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
1932 
1933 // Returns number of bits set to 1 in (v).
1934 static inline uint32_t VmaCountBitsSet(uint32_t v)
1935 {
1936  uint32_t c = v - ((v >> 1) & 0x55555555);
1937  c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
1938  c = ((c >> 4) + c) & 0x0F0F0F0F;
1939  c = ((c >> 8) + c) & 0x00FF00FF;
1940  c = ((c >> 16) + c) & 0x0000FFFF;
1941  return c;
1942 }
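// Illustrative: VmaCountBitsSet(0xB) == 3, since 0xB is binary 1011.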
1943 
1944 // Aligns given value up to nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
1945 // Use types like uint32_t, uint64_t as T.
1946 template <typename T>
1947 static inline T VmaAlignUp(T val, T align)
1948 {
1949  return (val + align - 1) / align * align;
1950 }
1951 
1952 // Division with mathematical rounding to nearest number.
1953 template <typename T>
1954 inline T VmaRoundDiv(T x, T y)
1955 {
1956  return (x + (y / (T)2)) / y;
1957 }
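// Illustrative: VmaRoundDiv<uint32_t>(7, 2) == 4 and VmaRoundDiv<uint32_t>(5, 2) == 3;
// adding half of the divisor before the truncating division rounds to the nearest result.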
1958 
1959 #ifndef VMA_SORT
1960 
1961 template<typename Iterator, typename Compare>
1962 Iterator VmaQuickSortPartition(Iterator beg, Iterator end, Compare cmp)
1963 {
1964  Iterator centerValue = end; --centerValue;
1965  Iterator insertIndex = beg;
1966  for(Iterator memTypeIndex = beg; memTypeIndex < centerValue; ++memTypeIndex)
1967  {
1968  if(cmp(*memTypeIndex, *centerValue))
1969  {
1970  if(insertIndex != memTypeIndex)
1971  {
1972  VMA_SWAP(*memTypeIndex, *insertIndex);
1973  }
1974  ++insertIndex;
1975  }
1976  }
1977  if(insertIndex != centerValue)
1978  {
1979  VMA_SWAP(*insertIndex, *centerValue);
1980  }
1981  return insertIndex;
1982 }
1983 
1984 template<typename Iterator, typename Compare>
1985 void VmaQuickSort(Iterator beg, Iterator end, Compare cmp)
1986 {
1987  if(beg < end)
1988  {
1989  Iterator it = VmaQuickSortPartition<Iterator, Compare>(beg, end, cmp);
1990  VmaQuickSort<Iterator, Compare>(beg, it, cmp);
1991  VmaQuickSort<Iterator, Compare>(it + 1, end, cmp);
1992  }
1993 }
1994 
1995 #define VMA_SORT(beg, end, cmp) VmaQuickSort(beg, end, cmp)
1996 
1997 #endif // #ifndef VMA_SORT
1998 
1999 /*
2000 Returns true if two memory blocks occupy overlapping pages.
2001 ResourceA must be at a lower memory offset than ResourceB.
2002 
2003 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
2004 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
2005 */
2006 static inline bool VmaBlocksOnSamePage(
2007  VkDeviceSize resourceAOffset,
2008  VkDeviceSize resourceASize,
2009  VkDeviceSize resourceBOffset,
2010  VkDeviceSize pageSize)
2011 {
2012  VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
2013  VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
2014  VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
2015  VkDeviceSize resourceBStart = resourceBOffset;
2016  VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
2017  return resourceAEndPage == resourceBStartPage;
2018 }
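// Illustrative, with pageSize = 0x10000: a resource spanning [0, 0xFFFF] and one
// starting at 0x10000 land on different pages, so the function returns false;
// a resource ending at 0x10000 and one starting at 0x18000 both touch the page
// that begins at 0x10000, so it returns true.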
2019 
2020 enum VmaSuballocationType
2021 {
2022  VMA_SUBALLOCATION_TYPE_FREE = 0,
2023  VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
2024  VMA_SUBALLOCATION_TYPE_BUFFER = 2,
2025  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
2026  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
2027  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
2028  VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
2029 };
2030 
2031 /*
2032 Returns true if given suballocation types could conflict and must respect
2033 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is buffer
2034 or linear image and another one is optimal image. If type is unknown, behave
2035 conservatively.
2036 */
2037 static inline bool VmaIsBufferImageGranularityConflict(
2038  VmaSuballocationType suballocType1,
2039  VmaSuballocationType suballocType2)
2040 {
2041  if(suballocType1 > suballocType2)
2042  {
2043  VMA_SWAP(suballocType1, suballocType2);
2044  }
2045 
2046  switch(suballocType1)
2047  {
2048  case VMA_SUBALLOCATION_TYPE_FREE:
2049  return false;
2050  case VMA_SUBALLOCATION_TYPE_UNKNOWN:
2051  return true;
2052  case VMA_SUBALLOCATION_TYPE_BUFFER:
2053  return
2054  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
2055  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
2056  case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
2057  return
2058  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
2059  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
2060  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
2061  case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
2062  return
2063  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
2064  case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
2065  return false;
2066  default:
2067  VMA_ASSERT(0);
2068  return true;
2069  }
2070 }
2071 
2072 // Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
2073 struct VmaMutexLock
2074 {
2075 public:
2076  VmaMutexLock(VMA_MUTEX& mutex, bool useMutex) :
2077  m_pMutex(useMutex ? &mutex : VMA_NULL)
2078  {
2079  if(m_pMutex)
2080  {
2081  m_pMutex->Lock();
2082  }
2083  }
2084 
2085  ~VmaMutexLock()
2086  {
2087  if(m_pMutex)
2088  {
2089  m_pMutex->Unlock();
2090  }
2091  }
2092 
2093 private:
2094  VMA_MUTEX* m_pMutex;
2095 };
2096 
2097 #if VMA_DEBUG_GLOBAL_MUTEX
2098  static VMA_MUTEX gDebugGlobalMutex;
2099  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
2100 #else
2101  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
2102 #endif
2103 
2104 // Minimum size of a free suballocation to register it in the free suballocation collection.
2105 static const VkDeviceSize VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER = 16;
2106 
2107 /*
2108 Performs binary search and returns iterator to first element that is greater or
2109 equal to (key), according to comparison (cmp).
2110 
2111 Cmp should return true if first argument is less than second argument.
2112 
2113 Returned value is the found element, if present in the collection, or the place
2114 where a new element with value (key) should be inserted.
2115 */
2116 template <typename IterT, typename KeyT, typename CmpT>
2117 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT &key, CmpT cmp)
2118 {
2119  size_t down = 0, up = (end - beg);
2120  while(down < up)
2121  {
2122  const size_t mid = (down + up) / 2;
2123  if(cmp(*(beg+mid), key))
2124  {
2125  down = mid + 1;
2126  }
2127  else
2128  {
2129  up = mid;
2130  }
2131  }
2132  return beg + down;
2133 }
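// Illustrative: over a sorted array {1, 3, 5} with cmp = operator<, a key of 4
// returns an iterator to 5 - the first element not less than the key, which is
// also the index where 4 could be inserted while keeping the array sorted.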
2134 
2136 // Memory allocation
2137 
2138 static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
2139 {
2140  if((pAllocationCallbacks != VMA_NULL) &&
2141  (pAllocationCallbacks->pfnAllocation != VMA_NULL))
2142  {
2143  return (*pAllocationCallbacks->pfnAllocation)(
2144  pAllocationCallbacks->pUserData,
2145  size,
2146  alignment,
2147  VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
2148  }
2149  else
2150  {
2151  return VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
2152  }
2153 }
2154 
2155 static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
2156 {
2157  if((pAllocationCallbacks != VMA_NULL) &&
2158  (pAllocationCallbacks->pfnFree != VMA_NULL))
2159  {
2160  (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
2161  }
2162  else
2163  {
2164  VMA_SYSTEM_FREE(ptr);
2165  }
2166 }
2167 
2168 template<typename T>
2169 static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
2170 {
2171  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
2172 }
2173 
2174 template<typename T>
2175 static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
2176 {
2177  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
2178 }
2179 
2180 #define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
2181 
2182 #define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
2183 
2184 template<typename T>
2185 static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
2186 {
2187  ptr->~T();
2188  VmaFree(pAllocationCallbacks, ptr);
2189 }
2190 
2191 template<typename T>
2192 static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
2193 {
2194  if(ptr != VMA_NULL)
2195  {
2196  for(size_t i = count; i--; )
2197  {
2198  ptr[i].~T();
2199  }
2200  VmaFree(pAllocationCallbacks, ptr);
2201  }
2202 }
2203 
2204 // STL-compatible allocator.
2205 template<typename T>
2206 class VmaStlAllocator
2207 {
2208 public:
2209  const VkAllocationCallbacks* const m_pCallbacks;
2210  typedef T value_type;
2211 
2212  VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) { }
2213  template<typename U> VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) { }
2214 
2215  T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
2216  void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
2217 
2218  template<typename U>
2219  bool operator==(const VmaStlAllocator<U>& rhs) const
2220  {
2221  return m_pCallbacks == rhs.m_pCallbacks;
2222  }
2223  template<typename U>
2224  bool operator!=(const VmaStlAllocator<U>& rhs) const
2225  {
2226  return m_pCallbacks != rhs.m_pCallbacks;
2227  }
2228 
2229  VmaStlAllocator& operator=(const VmaStlAllocator& x) = delete;
2230 };
2231 
2232 #if VMA_USE_STL_VECTOR
2233 
2234 #define VmaVector std::vector
2235 
2236 template<typename T, typename allocatorT>
2237 static void VmaVectorInsert(std::vector<T, allocatorT>& vec, size_t index, const T& item)
2238 {
2239  vec.insert(vec.begin() + index, item);
2240 }
2241 
2242 template<typename T, typename allocatorT>
2243 static void VmaVectorRemove(std::vector<T, allocatorT>& vec, size_t index)
2244 {
2245  vec.erase(vec.begin() + index);
2246 }
2247 
2248 #else // #if VMA_USE_STL_VECTOR
2249 
2250 /* Class with interface compatible with subset of std::vector.
2251 T must be POD because constructors and destructors are not called and memcpy is
2252 used for these objects. */
2253 template<typename T, typename AllocatorT>
2254 class VmaVector
2255 {
2256 public:
2257  typedef T value_type;
2258 
2259  VmaVector(const AllocatorT& allocator) :
2260  m_Allocator(allocator),
2261  m_pArray(VMA_NULL),
2262  m_Count(0),
2263  m_Capacity(0)
2264  {
2265  }
2266 
2267  VmaVector(size_t count, const AllocatorT& allocator) :
2268  m_Allocator(allocator),
2269  m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
2270  m_Count(count),
2271  m_Capacity(count)
2272  {
2273  }
2274 
2275  VmaVector(const VmaVector<T, AllocatorT>& src) :
2276  m_Allocator(src.m_Allocator),
2277  m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
2278  m_Count(src.m_Count),
2279  m_Capacity(src.m_Count)
2280  {
2281  if(m_Count != 0)
2282  {
2283  memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
2284  }
2285  }
2286 
2287  ~VmaVector()
2288  {
2289  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
2290  }
2291 
2292  VmaVector& operator=(const VmaVector<T, AllocatorT>& rhs)
2293  {
2294  if(&rhs != this)
2295  {
2296  resize(rhs.m_Count);
2297  if(m_Count != 0)
2298  {
2299  memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
2300  }
2301  }
2302  return *this;
2303  }
2304 
2305  bool empty() const { return m_Count == 0; }
2306  size_t size() const { return m_Count; }
2307  T* data() { return m_pArray; }
2308  const T* data() const { return m_pArray; }
2309 
2310  T& operator[](size_t index)
2311  {
2312  VMA_HEAVY_ASSERT(index < m_Count);
2313  return m_pArray[index];
2314  }
2315  const T& operator[](size_t index) const
2316  {
2317  VMA_HEAVY_ASSERT(index < m_Count);
2318  return m_pArray[index];
2319  }
2320 
2321  T& front()
2322  {
2323  VMA_HEAVY_ASSERT(m_Count > 0);
2324  return m_pArray[0];
2325  }
2326  const T& front() const
2327  {
2328  VMA_HEAVY_ASSERT(m_Count > 0);
2329  return m_pArray[0];
2330  }
2331  T& back()
2332  {
2333  VMA_HEAVY_ASSERT(m_Count > 0);
2334  return m_pArray[m_Count - 1];
2335  }
2336  const T& back() const
2337  {
2338  VMA_HEAVY_ASSERT(m_Count > 0);
2339  return m_pArray[m_Count - 1];
2340  }
2341 
2342  void reserve(size_t newCapacity, bool freeMemory = false)
2343  {
2344  newCapacity = VMA_MAX(newCapacity, m_Count);
2345 
2346  if((newCapacity < m_Capacity) && !freeMemory)
2347  {
2348  newCapacity = m_Capacity;
2349  }
2350 
2351  if(newCapacity != m_Capacity)
2352  {
2353  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
2354  if(m_Count != 0)
2355  {
2356  memcpy(newArray, m_pArray, m_Count * sizeof(T));
2357  }
2358  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
2359  m_Capacity = newCapacity;
2360  m_pArray = newArray;
2361  }
2362  }
2363 
2364  void resize(size_t newCount, bool freeMemory = false)
2365  {
2366  size_t newCapacity = m_Capacity;
2367  if(newCount > m_Capacity)
2368  {
2369  newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
2370  }
2371  else if(freeMemory)
2372  {
2373  newCapacity = newCount;
2374  }
2375 
2376  if(newCapacity != m_Capacity)
2377  {
2378  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
2379  const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
2380  if(elementsToCopy != 0)
2381  {
2382  memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
2383  }
2384  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
2385  m_Capacity = newCapacity;
2386  m_pArray = newArray;
2387  }
2388 
2389  m_Count = newCount;
2390  }
2391 
2392  void clear(bool freeMemory = false)
2393  {
2394  resize(0, freeMemory);
2395  }
2396 
2397  void insert(size_t index, const T& src)
2398  {
2399  VMA_HEAVY_ASSERT(index <= m_Count);
2400  const size_t oldCount = size();
2401  resize(oldCount + 1);
2402  if(index < oldCount)
2403  {
2404  memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
2405  }
2406  m_pArray[index] = src;
2407  }
2408 
2409  void remove(size_t index)
2410  {
2411  VMA_HEAVY_ASSERT(index < m_Count);
2412  const size_t oldCount = size();
2413  if(index < oldCount - 1)
2414  {
2415  memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
2416  }
2417  resize(oldCount - 1);
2418  }
2419 
2420  void push_back(const T& src)
2421  {
2422  const size_t newIndex = size();
2423  resize(newIndex + 1);
2424  m_pArray[newIndex] = src;
2425  }
2426 
2427  void pop_back()
2428  {
2429  VMA_HEAVY_ASSERT(m_Count > 0);
2430  resize(size() - 1);
2431  }
2432 
2433  void push_front(const T& src)
2434  {
2435  insert(0, src);
2436  }
2437 
2438  void pop_front()
2439  {
2440  VMA_HEAVY_ASSERT(m_Count > 0);
2441  remove(0);
2442  }
2443 
2444  typedef T* iterator;
2445 
2446  iterator begin() { return m_pArray; }
2447  iterator end() { return m_pArray + m_Count; }
2448 
2449 private:
2450  AllocatorT m_Allocator;
2451  T* m_pArray;
2452  size_t m_Count;
2453  size_t m_Capacity;
2454 };
2455 
2456 template<typename T, typename allocatorT>
2457 static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
2458 {
2459  vec.insert(index, item);
2460 }
2461 
2462 template<typename T, typename allocatorT>
2463 static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
2464 {
2465  vec.remove(index);
2466 }
2467 
2468 #endif // #if VMA_USE_STL_VECTOR
2469 
2470 template<typename CmpLess, typename VectorT>
2471 size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
2472 {
2473  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
2474  vector.data(),
2475  vector.data() + vector.size(),
2476  value,
2477  CmpLess()) - vector.data();
2478  VmaVectorInsert(vector, indexToInsert, value);
2479  return indexToInsert;
2480 }
2481 
2482 template<typename CmpLess, typename VectorT>
2483 bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
2484 {
2485  CmpLess comparator;
2486  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
2487  vector.begin(),
2488  vector.end(),
2489  value,
2490  comparator);
2491  if((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
2492  {
2493  size_t indexToRemove = it - vector.begin();
2494  VmaVectorRemove(vector, indexToRemove);
2495  return true;
2496  }
2497  return false;
2498 }
2499 
2500 template<typename CmpLess, typename VectorT>
2501 size_t VmaVectorFindSorted(const VectorT& vector, const typename VectorT::value_type& value)
2502 {
2503  CmpLess comparator;
2504  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
2505  vector.data(),
2506  vector.data() + vector.size(),
2507  value,
2508  comparator);
2509  if((it != vector.data() + vector.size()) && !comparator(*it, value) && !comparator(value, *it))
2510  {
2511  return it - vector.data();
2512  }
2513  else
2514  {
2515  return vector.size();
2516  }
2517 }
2518 
2520 // class VmaPoolAllocator
2521 
2522 /*
2523 Allocator for objects of type T using a list of arrays (pools) to speed up
2524 allocation. The number of elements that can be allocated is not bounded because
2525 the allocator can create multiple blocks.
2526 */
2527 template<typename T>
2528 class VmaPoolAllocator
2529 {
2530 public:
2531  VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock);
2532  ~VmaPoolAllocator();
2533  void Clear();
2534  T* Alloc();
2535  void Free(T* ptr);
2536 
2537 private:
2538  union Item
2539  {
2540  uint32_t NextFreeIndex;
2541  T Value;
2542  };
2543 
2544  struct ItemBlock
2545  {
2546  Item* pItems;
2547  uint32_t FirstFreeIndex;
2548  };
2549 
2550  const VkAllocationCallbacks* m_pAllocationCallbacks;
2551  size_t m_ItemsPerBlock;
2552  VmaVector< ItemBlock, VmaStlAllocator<ItemBlock> > m_ItemBlocks;
2553 
2554  ItemBlock& CreateNewBlock();
2555 };
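// Illustrative usage sketch:
//   VmaPoolAllocator<MyItem> itemAllocator(pAllocationCallbacks, 128); // 128 items per block
//   MyItem* item = itemAllocator.Alloc(); // O(1) unless a new block must be created
//   itemAllocator.Free(item); // returns the slot to its block's free list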
2556 
2557 template<typename T>
2558 VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock) :
2559  m_pAllocationCallbacks(pAllocationCallbacks),
2560  m_ItemsPerBlock(itemsPerBlock),
2561  m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
2562 {
2563  VMA_ASSERT(itemsPerBlock > 0);
2564 }
2565 
2566 template<typename T>
2567 VmaPoolAllocator<T>::~VmaPoolAllocator()
2568 {
2569  Clear();
2570 }
2571 
2572 template<typename T>
2573 void VmaPoolAllocator<T>::Clear()
2574 {
2575  for(size_t i = m_ItemBlocks.size(); i--; )
2576  vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemsPerBlock);
2577  m_ItemBlocks.clear();
2578 }
2579 
2580 template<typename T>
2581 T* VmaPoolAllocator<T>::Alloc()
2582 {
2583  for(size_t i = m_ItemBlocks.size(); i--; )
2584  {
2585  ItemBlock& block = m_ItemBlocks[i];
2586  // This block has some free items: Use first one.
2587  if(block.FirstFreeIndex != UINT32_MAX)
2588  {
2589  Item* const pItem = &block.pItems[block.FirstFreeIndex];
2590  block.FirstFreeIndex = pItem->NextFreeIndex;
2591  return &pItem->Value;
2592  }
2593  }
2594 
2595  // No block has free item: Create new one and use it.
2596  ItemBlock& newBlock = CreateNewBlock();
2597  Item* const pItem = &newBlock.pItems[0];
2598  newBlock.FirstFreeIndex = pItem->NextFreeIndex;
2599  return &pItem->Value;
2600 }
2601 
2602 template<typename T>
2603 void VmaPoolAllocator<T>::Free(T* ptr)
2604 {
2605  // Search all memory blocks to find ptr.
2606  for(size_t i = 0; i < m_ItemBlocks.size(); ++i)
2607  {
2608  ItemBlock& block = m_ItemBlocks[i];
2609 
2610  // Casting to union.
2611  Item* pItemPtr;
2612  memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
2613 
2614  // Check if pItemPtr is in address range of this block.
2615  if((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + m_ItemsPerBlock))
2616  {
2617  const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
2618  pItemPtr->NextFreeIndex = block.FirstFreeIndex;
2619  block.FirstFreeIndex = index;
2620  return;
2621  }
2622  }
2623  VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
2624 }
2625 
2626 template<typename T>
2627 typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
2628 {
2629  ItemBlock newBlock = {
2630  vma_new_array(m_pAllocationCallbacks, Item, m_ItemsPerBlock), 0 };
2631 
2632  m_ItemBlocks.push_back(newBlock);
2633 
2634  // Set up the singly-linked list of all free items in this block.
2635  for(uint32_t i = 0; i < m_ItemsPerBlock - 1; ++i)
2636  newBlock.pItems[i].NextFreeIndex = i + 1;
2637  newBlock.pItems[m_ItemsPerBlock - 1].NextFreeIndex = UINT32_MAX;
2638  return m_ItemBlocks.back();
2639 }
2640 
2642 // class VmaRawList, VmaList
2643 
2644 #if VMA_USE_STL_LIST
2645 
2646 #define VmaList std::list
2647 
2648 #else // #if VMA_USE_STL_LIST
2649 
2650 template<typename T>
2651 struct VmaListItem
2652 {
2653  VmaListItem* pPrev;
2654  VmaListItem* pNext;
2655  T Value;
2656 };
2657 
2658 // Doubly linked list.
2659 template<typename T>
2660 class VmaRawList
2661 {
2662 public:
2663  typedef VmaListItem<T> ItemType;
2664 
2665  VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
2666  ~VmaRawList();
2667  void Clear();
2668 
2669  size_t GetCount() const { return m_Count; }
2670  bool IsEmpty() const { return m_Count == 0; }
2671 
2672  ItemType* Front() { return m_pFront; }
2673  const ItemType* Front() const { return m_pFront; }
2674  ItemType* Back() { return m_pBack; }
2675  const ItemType* Back() const { return m_pBack; }
2676 
2677  ItemType* PushBack();
2678  ItemType* PushFront();
2679  ItemType* PushBack(const T& value);
2680  ItemType* PushFront(const T& value);
2681  void PopBack();
2682  void PopFront();
2683 
2684  // Item can be null - it means PushBack.
2685  ItemType* InsertBefore(ItemType* pItem);
2686  // Item can be null - it means PushFront.
2687  ItemType* InsertAfter(ItemType* pItem);
2688 
2689  ItemType* InsertBefore(ItemType* pItem, const T& value);
2690  ItemType* InsertAfter(ItemType* pItem, const T& value);
2691 
2692  void Remove(ItemType* pItem);
2693 
2694 private:
2695  const VkAllocationCallbacks* const m_pAllocationCallbacks;
2696  VmaPoolAllocator<ItemType> m_ItemAllocator;
2697  ItemType* m_pFront;
2698  ItemType* m_pBack;
2699  size_t m_Count;
2700 
2701  // Declared but not defined, to block the copy constructor and assignment operator.
2702  VmaRawList(const VmaRawList<T>& src);
2703  VmaRawList<T>& operator=(const VmaRawList<T>& rhs);
2704 };
2705 
2706 template<typename T>
2707 VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks) :
2708  m_pAllocationCallbacks(pAllocationCallbacks),
2709  m_ItemAllocator(pAllocationCallbacks, 128),
2710  m_pFront(VMA_NULL),
2711  m_pBack(VMA_NULL),
2712  m_Count(0)
2713 {
2714 }
2715 
2716 template<typename T>
2717 VmaRawList<T>::~VmaRawList()
2718 {
2719  // Intentionally not calling Clear, because that would perform unnecessary
2720  // computations just to return all items to m_ItemAllocator as free.
2721 }
2722 
2723 template<typename T>
2724 void VmaRawList<T>::Clear()
2725 {
2726  if(IsEmpty() == false)
2727  {
2728  ItemType* pItem = m_pBack;
2729  while(pItem != VMA_NULL)
2730  {
2731  ItemType* const pPrevItem = pItem->pPrev;
2732  m_ItemAllocator.Free(pItem);
2733  pItem = pPrevItem;
2734  }
2735  m_pFront = VMA_NULL;
2736  m_pBack = VMA_NULL;
2737  m_Count = 0;
2738  }
2739 }
2740 
2741 template<typename T>
2742 VmaListItem<T>* VmaRawList<T>::PushBack()
2743 {
2744  ItemType* const pNewItem = m_ItemAllocator.Alloc();
2745  pNewItem->pNext = VMA_NULL;
2746  if(IsEmpty())
2747  {
2748  pNewItem->pPrev = VMA_NULL;
2749  m_pFront = pNewItem;
2750  m_pBack = pNewItem;
2751  m_Count = 1;
2752  }
2753  else
2754  {
2755  pNewItem->pPrev = m_pBack;
2756  m_pBack->pNext = pNewItem;
2757  m_pBack = pNewItem;
2758  ++m_Count;
2759  }
2760  return pNewItem;
2761 }
2762 
2763 template<typename T>
2764 VmaListItem<T>* VmaRawList<T>::PushFront()
2765 {
2766  ItemType* const pNewItem = m_ItemAllocator.Alloc();
2767  pNewItem->pPrev = VMA_NULL;
2768  if(IsEmpty())
2769  {
2770  pNewItem->pNext = VMA_NULL;
2771  m_pFront = pNewItem;
2772  m_pBack = pNewItem;
2773  m_Count = 1;
2774  }
2775  else
2776  {
2777  pNewItem->pNext = m_pFront;
2778  m_pFront->pPrev = pNewItem;
2779  m_pFront = pNewItem;
2780  ++m_Count;
2781  }
2782  return pNewItem;
2783 }
2784 
2785 template<typename T>
2786 VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
2787 {
2788  ItemType* const pNewItem = PushBack();
2789  pNewItem->Value = value;
2790  return pNewItem;
2791 }
2792 
2793 template<typename T>
2794 VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
2795 {
2796  ItemType* const pNewItem = PushFront();
2797  pNewItem->Value = value;
2798  return pNewItem;
2799 }
2800 
2801 template<typename T>
2802 void VmaRawList<T>::PopBack()
2803 {
2804  VMA_HEAVY_ASSERT(m_Count > 0);
2805  ItemType* const pBackItem = m_pBack;
2806  ItemType* const pPrevItem = pBackItem->pPrev;
2807  if(pPrevItem != VMA_NULL)
2808  {
2809  pPrevItem->pNext = VMA_NULL;
2810  }
2811  m_pBack = pPrevItem;
2812  m_ItemAllocator.Free(pBackItem);
2813  --m_Count;
2814 }
2815 
2816 template<typename T>
2817 void VmaRawList<T>::PopFront()
2818 {
2819  VMA_HEAVY_ASSERT(m_Count > 0);
2820  ItemType* const pFrontItem = m_pFront;
2821  ItemType* const pNextItem = pFrontItem->pNext;
2822  if(pNextItem != VMA_NULL)
2823  {
2824  pNextItem->pPrev = VMA_NULL;
2825  }
2826  m_pFront = pNextItem;
2827  m_ItemAllocator.Free(pFrontItem);
2828  --m_Count;
2829 }
2830 
2831 template<typename T>
2832 void VmaRawList<T>::Remove(ItemType* pItem)
2833 {
2834  VMA_HEAVY_ASSERT(pItem != VMA_NULL);
2835  VMA_HEAVY_ASSERT(m_Count > 0);
2836 
2837  if(pItem->pPrev != VMA_NULL)
2838  {
2839  pItem->pPrev->pNext = pItem->pNext;
2840  }
2841  else
2842  {
2843  VMA_HEAVY_ASSERT(m_pFront == pItem);
2844  m_pFront = pItem->pNext;
2845  }
2846 
2847  if(pItem->pNext != VMA_NULL)
2848  {
2849  pItem->pNext->pPrev = pItem->pPrev;
2850  }
2851  else
2852  {
2853  VMA_HEAVY_ASSERT(m_pBack == pItem);
2854  m_pBack = pItem->pPrev;
2855  }
2856 
2857  m_ItemAllocator.Free(pItem);
2858  --m_Count;
2859 }
2860 
2861 template<typename T>
2862 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
2863 {
2864  if(pItem != VMA_NULL)
2865  {
2866  ItemType* const prevItem = pItem->pPrev;
2867  ItemType* const newItem = m_ItemAllocator.Alloc();
2868  newItem->pPrev = prevItem;
2869  newItem->pNext = pItem;
2870  pItem->pPrev = newItem;
2871  if(prevItem != VMA_NULL)
2872  {
2873  prevItem->pNext = newItem;
2874  }
2875  else
2876  {
2877  VMA_HEAVY_ASSERT(m_pFront == pItem);
2878  m_pFront = newItem;
2879  }
2880  ++m_Count;
2881  return newItem;
2882  }
2883  else
2884  return PushBack();
2885 }
2886 
2887 template<typename T>
2888 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
2889 {
2890  if(pItem != VMA_NULL)
2891  {
2892  ItemType* const nextItem = pItem->pNext;
2893  ItemType* const newItem = m_ItemAllocator.Alloc();
2894  newItem->pNext = nextItem;
2895  newItem->pPrev = pItem;
2896  pItem->pNext = newItem;
2897  if(nextItem != VMA_NULL)
2898  {
2899  nextItem->pPrev = newItem;
2900  }
2901  else
2902  {
2903  VMA_HEAVY_ASSERT(m_pBack == pItem);
2904  m_pBack = newItem;
2905  }
2906  ++m_Count;
2907  return newItem;
2908  }
2909  else
2910  return PushFront();
2911 }
2912 
2913 template<typename T>
2914 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
2915 {
2916  ItemType* const newItem = InsertBefore(pItem);
2917  newItem->Value = value;
2918  return newItem;
2919 }
2920 
2921 template<typename T>
2922 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
2923 {
2924  ItemType* const newItem = InsertAfter(pItem);
2925  newItem->Value = value;
2926  return newItem;
2927 }
2928 
2929 template<typename T, typename AllocatorT>
2930 class VmaList
2931 {
2932 public:
2933  class iterator
2934  {
2935  public:
2936  iterator() :
2937  m_pList(VMA_NULL),
2938  m_pItem(VMA_NULL)
2939  {
2940  }
2941 
2942  T& operator*() const
2943  {
2944  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
2945  return m_pItem->Value;
2946  }
2947  T* operator->() const
2948  {
2949  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
2950  return &m_pItem->Value;
2951  }
2952 
2953  iterator& operator++()
2954  {
2955  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
2956  m_pItem = m_pItem->pNext;
2957  return *this;
2958  }
2959  iterator& operator--()
2960  {
2961  if(m_pItem != VMA_NULL)
2962  {
2963  m_pItem = m_pItem->pPrev;
2964  }
2965  else
2966  {
2967  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
2968  m_pItem = m_pList->Back();
2969  }
2970  return *this;
2971  }
2972 
2973  iterator operator++(int)
2974  {
2975  iterator result = *this;
2976  ++*this;
2977  return result;
2978  }
2979  iterator operator--(int)
2980  {
2981  iterator result = *this;
2982  --*this;
2983  return result;
2984  }
2985 
2986  bool operator==(const iterator& rhs) const
2987  {
2988  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
2989  return m_pItem == rhs.m_pItem;
2990  }
2991  bool operator!=(const iterator& rhs) const
2992  {
2993  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
2994  return m_pItem != rhs.m_pItem;
2995  }
2996 
2997  private:
2998  VmaRawList<T>* m_pList;
2999  VmaListItem<T>* m_pItem;
3000 
3001  iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) :
3002  m_pList(pList),
3003  m_pItem(pItem)
3004  {
3005  }
3006 
3007  friend class VmaList<T, AllocatorT>;
3008  };
3009 
3010  class const_iterator
3011  {
3012  public:
3013  const_iterator() :
3014  m_pList(VMA_NULL),
3015  m_pItem(VMA_NULL)
3016  {
3017  }
3018 
3019  const_iterator(const iterator& src) :
3020  m_pList(src.m_pList),
3021  m_pItem(src.m_pItem)
3022  {
3023  }
3024 
3025  const T& operator*() const
3026  {
3027  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3028  return m_pItem->Value;
3029  }
3030  const T* operator->() const
3031  {
3032  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3033  return &m_pItem->Value;
3034  }
3035 
3036  const_iterator& operator++()
3037  {
3038  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3039  m_pItem = m_pItem->pNext;
3040  return *this;
3041  }
3042  const_iterator& operator--()
3043  {
3044  if(m_pItem != VMA_NULL)
3045  {
3046  m_pItem = m_pItem->pPrev;
3047  }
3048  else
3049  {
3050  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
3051  m_pItem = m_pList->Back();
3052  }
3053  return *this;
3054  }
3055 
3056  const_iterator operator++(int)
3057  {
3058  const_iterator result = *this;
3059  ++*this;
3060  return result;
3061  }
3062  const_iterator operator--(int)
3063  {
3064  const_iterator result = *this;
3065  --*this;
3066  return result;
3067  }
3068 
3069  bool operator==(const const_iterator& rhs) const
3070  {
3071  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
3072  return m_pItem == rhs.m_pItem;
3073  }
3074  bool operator!=(const const_iterator& rhs) const
3075  {
3076  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
3077  return m_pItem != rhs.m_pItem;
3078  }
3079 
3080  private:
3081  const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) :
3082  m_pList(pList),
3083  m_pItem(pItem)
3084  {
3085  }
3086 
3087  const VmaRawList<T>* m_pList;
3088  const VmaListItem<T>* m_pItem;
3089 
3090  friend class VmaList<T, AllocatorT>;
3091  };
3092 
3093  VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) { }
3094 
3095  bool empty() const { return m_RawList.IsEmpty(); }
3096  size_t size() const { return m_RawList.GetCount(); }
3097 
3098  iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
3099  iterator end() { return iterator(&m_RawList, VMA_NULL); }
3100 
3101  const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
3102  const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
3103 
3104  void clear() { m_RawList.Clear(); }
3105  void push_back(const T& value) { m_RawList.PushBack(value); }
3106  void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
3107  iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
3108 
3109 private:
3110  VmaRawList<T> m_RawList;
3111 };
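// Illustrative sketch (not part of the library): end() stores a null item
// pointer, so operator--() applied to end() yields an iterator to the back
// element. Assuming `cb` is a valid VkAllocationCallbacks pointer:
//
//   VmaStlAllocator<int> alloc(cb);
//   VmaList<int, VmaStlAllocator<int> > list(alloc);
//   list.push_back(1);
//   list.push_back(2);
//   VmaList<int, VmaStlAllocator<int> >::iterator it = list.end();
//   --it; // *it == 2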
3112 
3113 #endif // #if VMA_USE_STL_LIST
3114 
3115 ////////////////////////////////////////////////////////////////////////////////
3116 // class VmaMap
3117 
3118 // Unused in this version.
3119 #if 0
3120 
3121 #if VMA_USE_STL_UNORDERED_MAP
3122 
3123 #define VmaPair std::pair
3124 
3125 #define VMA_MAP_TYPE(KeyT, ValueT) \
3126  std::unordered_map< KeyT, ValueT, std::hash<KeyT>, std::equal_to<KeyT>, VmaStlAllocator< std::pair<KeyT, ValueT> > >
3127 
3128 #else // #if VMA_USE_STL_UNORDERED_MAP
3129 
3130 template<typename T1, typename T2>
3131 struct VmaPair
3132 {
3133  T1 first;
3134  T2 second;
3135 
3136  VmaPair() : first(), second() { }
3137  VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) { }
3138 };
3139 
3140 /* Class compatible with subset of interface of std::unordered_map.
3141 KeyT, ValueT must be POD because they will be stored in VmaVector.
3142 */
3143 template<typename KeyT, typename ValueT>
3144 class VmaMap
3145 {
3146 public:
3147  typedef VmaPair<KeyT, ValueT> PairType;
3148  typedef PairType* iterator;
3149 
3150  VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) { }
3151 
3152  iterator begin() { return m_Vector.begin(); }
3153  iterator end() { return m_Vector.end(); }
3154 
3155  void insert(const PairType& pair);
3156  iterator find(const KeyT& key);
3157  void erase(iterator it);
3158 
3159 private:
3160  VmaVector< PairType, VmaStlAllocator<PairType> > m_Vector;
3161 };
3162 
3163 #define VMA_MAP_TYPE(KeyT, ValueT) VmaMap<KeyT, ValueT>
3164 
3165 template<typename FirstT, typename SecondT>
3166 struct VmaPairFirstLess
3167 {
3168  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
3169  {
3170  return lhs.first < rhs.first;
3171  }
3172  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
3173  {
3174  return lhs.first < rhsFirst;
3175  }
3176 };
3177 
3178 template<typename KeyT, typename ValueT>
3179 void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
3180 {
3181  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
3182  m_Vector.data(),
3183  m_Vector.data() + m_Vector.size(),
3184  pair,
3185  VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
3186  VmaVectorInsert(m_Vector, indexToInsert, pair);
3187 }
3188 
3189 template<typename KeyT, typename ValueT>
3190 VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
3191 {
3192  PairType* it = VmaBinaryFindFirstNotLess(
3193  m_Vector.data(),
3194  m_Vector.data() + m_Vector.size(),
3195  key,
3196  VmaPairFirstLess<KeyT, ValueT>());
3197  if((it != m_Vector.end()) && (it->first == key))
3198  {
3199  return it;
3200  }
3201  else
3202  {
3203  return m_Vector.end();
3204  }
3205 }
3206 
3207 template<typename KeyT, typename ValueT>
3208 void VmaMap<KeyT, ValueT>::erase(iterator it)
3209 {
3210  VmaVectorRemove(m_Vector, it - m_Vector.begin());
3211 }
3212 
3213 #endif // #if VMA_USE_STL_UNORDERED_MAP
3214 
3215 #endif // #if 0
3216 
3218 
3219 class VmaDeviceMemoryBlock;
3220 
3221 struct VmaAllocation_T
3222 {
3223 private:
3224  static const uint8_t MAP_COUNT_FLAG_PERSISTENT_MAP = 0x80;
3225 
3226  enum FLAGS
3227  {
3228  FLAG_USER_DATA_STRING = 0x01,
3229  };
3230 
3231 public:
3232  enum ALLOCATION_TYPE
3233  {
3234  ALLOCATION_TYPE_NONE,
3235  ALLOCATION_TYPE_BLOCK,
3236  ALLOCATION_TYPE_DEDICATED,
3237  };
3238 
3239  VmaAllocation_T(uint32_t currentFrameIndex, bool userDataString) :
3240  m_Alignment(1),
3241  m_Size(0),
3242  m_pUserData(VMA_NULL),
3243  m_LastUseFrameIndex(currentFrameIndex),
3244  m_Type((uint8_t)ALLOCATION_TYPE_NONE),
3245  m_SuballocationType((uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN),
3246  m_MapCount(0),
3247  m_Flags(userDataString ? (uint8_t)FLAG_USER_DATA_STRING : 0)
3248  {
3249  }
3250 
3251  ~VmaAllocation_T()
3252  {
3253  VMA_ASSERT((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) == 0 && "Allocation was not unmapped before destruction.");
3254 
3255  // Check if owned string was freed.
3256  VMA_ASSERT(m_pUserData == VMA_NULL);
3257  }
3258 
3259  void InitBlockAllocation(
3260  VmaPool hPool,
3261  VmaDeviceMemoryBlock* block,
3262  VkDeviceSize offset,
3263  VkDeviceSize alignment,
3264  VkDeviceSize size,
3265  VmaSuballocationType suballocationType,
3266  bool mapped,
3267  bool canBecomeLost)
3268  {
3269  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
3270  VMA_ASSERT(block != VMA_NULL);
3271  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
3272  m_Alignment = alignment;
3273  m_Size = size;
3274  m_MapCount = mapped ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
3275  m_SuballocationType = (uint8_t)suballocationType;
3276  m_BlockAllocation.m_hPool = hPool;
3277  m_BlockAllocation.m_Block = block;
3278  m_BlockAllocation.m_Offset = offset;
3279  m_BlockAllocation.m_CanBecomeLost = canBecomeLost;
3280  }
3281 
3282  void InitLost()
3283  {
3284  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
3285  VMA_ASSERT(m_LastUseFrameIndex.load() == VMA_FRAME_INDEX_LOST);
3286  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
3287  m_BlockAllocation.m_hPool = VK_NULL_HANDLE;
3288  m_BlockAllocation.m_Block = VMA_NULL;
3289  m_BlockAllocation.m_Offset = 0;
3290  m_BlockAllocation.m_CanBecomeLost = true;
3291  }
3292 
3293  void ChangeBlockAllocation(
3294  VmaAllocator hAllocator,
3295  VmaDeviceMemoryBlock* block,
3296  VkDeviceSize offset);
3297 
3298  // pMappedData not null means allocation is created with VMA_ALLOCATION_CREATE_MAPPED_BIT flag.
3299  void InitDedicatedAllocation(
3300  uint32_t memoryTypeIndex,
3301  VkDeviceMemory hMemory,
3302  VmaSuballocationType suballocationType,
3303  void* pMappedData,
3304  VkDeviceSize size)
3305  {
3306  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
3307  VMA_ASSERT(hMemory != VK_NULL_HANDLE);
3308  m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
3309  m_Alignment = 0;
3310  m_Size = size;
3311  m_SuballocationType = (uint8_t)suballocationType;
3312  m_MapCount = (pMappedData != VMA_NULL) ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
3313  m_DedicatedAllocation.m_MemoryTypeIndex = memoryTypeIndex;
3314  m_DedicatedAllocation.m_hMemory = hMemory;
3315  m_DedicatedAllocation.m_pMappedData = pMappedData;
3316  }
3317 
3318  ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
3319  VkDeviceSize GetAlignment() const { return m_Alignment; }
3320  VkDeviceSize GetSize() const { return m_Size; }
3321  bool IsUserDataString() const { return (m_Flags & FLAG_USER_DATA_STRING) != 0; }
3322  void* GetUserData() const { return m_pUserData; }
3323  void SetUserData(VmaAllocator hAllocator, void* pUserData);
3324  VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
3325 
3326  VmaDeviceMemoryBlock* GetBlock() const
3327  {
3328  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
3329  return m_BlockAllocation.m_Block;
3330  }
3331  VkDeviceSize GetOffset() const;
3332  VkDeviceMemory GetMemory() const;
3333  uint32_t GetMemoryTypeIndex() const;
3334  bool IsPersistentMap() const { return (m_MapCount & MAP_COUNT_FLAG_PERSISTENT_MAP) != 0; }
3335  void* GetMappedData() const;
3336  bool CanBecomeLost() const;
3337  VmaPool GetPool() const;
3338 
3339  uint32_t GetLastUseFrameIndex() const
3340  {
3341  return m_LastUseFrameIndex.load();
3342  }
3343  bool CompareExchangeLastUseFrameIndex(uint32_t& expected, uint32_t desired)
3344  {
3345  return m_LastUseFrameIndex.compare_exchange_weak(expected, desired);
3346  }
3347  /*
3348  - If hAllocation.LastUseFrameIndex + frameInUseCount < allocator.CurrentFrameIndex,
3349  makes it lost by setting LastUseFrameIndex = VMA_FRAME_INDEX_LOST and returns true.
3350  - Else, returns false.
3351 
3352  If hAllocation is already lost, assert - you should not call it then.
3353  If hAllocation was not created with CAN_BECOME_LOST_BIT, assert.
3354  */
3355  bool MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
3356 
3357  void DedicatedAllocCalcStatsInfo(VmaStatInfo& outInfo)
3358  {
3359  VMA_ASSERT(m_Type == ALLOCATION_TYPE_DEDICATED);
3360  outInfo.blockCount = 1;
3361  outInfo.allocationCount = 1;
3362  outInfo.unusedRangeCount = 0;
3363  outInfo.usedBytes = m_Size;
3364  outInfo.unusedBytes = 0;
3365  outInfo.allocationSizeMin = outInfo.allocationSizeMax = m_Size;
3366  outInfo.unusedRangeSizeMin = UINT64_MAX;
3367  outInfo.unusedRangeSizeMax = 0;
3368  }
3369 
3370  void BlockAllocMap();
3371  void BlockAllocUnmap();
3372  VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
3373  void DedicatedAllocUnmap(VmaAllocator hAllocator);
3374 
3375 private:
3376  VkDeviceSize m_Alignment;
3377  VkDeviceSize m_Size;
3378  void* m_pUserData;
3379  VMA_ATOMIC_UINT32 m_LastUseFrameIndex;
3380  uint8_t m_Type; // ALLOCATION_TYPE
3381  uint8_t m_SuballocationType; // VmaSuballocationType
3382  // Bit 0x80 is set when allocation was created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
3383  // Bits with mask 0x7F are reference counter for vmaMapMemory()/vmaUnmapMemory().
3384  uint8_t m_MapCount;
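 // Example: m_MapCount == 0x82 means the allocation was created with the
 // MAPPED flag (bit 0x80) and is additionally mapped twice via vmaMapMemory()
 // (reference counter 0x02).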
3385  uint8_t m_Flags; // enum FLAGS
3386 
3387  // Allocation out of VmaDeviceMemoryBlock.
3388  struct BlockAllocation
3389  {
3390  VmaPool m_hPool; // Null if the allocation belongs to general memory, not a custom pool.
3391  VmaDeviceMemoryBlock* m_Block;
3392  VkDeviceSize m_Offset;
3393  bool m_CanBecomeLost;
3394  };
3395 
3396  // Allocation for an object that has its own private VkDeviceMemory.
3397  struct DedicatedAllocation
3398  {
3399  uint32_t m_MemoryTypeIndex;
3400  VkDeviceMemory m_hMemory;
3401  void* m_pMappedData; // Not null means memory is mapped.
3402  };
3403 
3404  union
3405  {
3406  // Allocation out of VmaDeviceMemoryBlock.
3407  BlockAllocation m_BlockAllocation;
3408  // Allocation for an object that has its own private VkDeviceMemory.
3409  DedicatedAllocation m_DedicatedAllocation;
3410  };
3411 
3412  void FreeUserDataString(VmaAllocator hAllocator);
3413 };
3414 
3415 /*
3416 Represents a region of a VmaDeviceMemoryBlock that is either assigned to an
3417 allocation (and returned as an allocated memory block) or free.
3418 */
3419 struct VmaSuballocation
3420 {
3421  VkDeviceSize offset;
3422  VkDeviceSize size;
3423  VmaAllocation hAllocation;
3424  VmaSuballocationType type;
3425 };
3426 
3427 typedef VmaList< VmaSuballocation, VmaStlAllocator<VmaSuballocation> > VmaSuballocationList;
3428 
3429 // Cost of making one additional allocation lost, expressed as an equivalent in bytes.
3430 static const VkDeviceSize VMA_LOST_ALLOCATION_COST = 1048576;
3431 
3432 /*
3433 Parameters of planned allocation inside a VmaDeviceMemoryBlock.
3434 
3435 If canMakeOtherLost was false:
3436 - item points to a FREE suballocation.
3437 - itemsToMakeLostCount is 0.
3438 
3439 If canMakeOtherLost was true:
3440 - item points to the first of a sequence of suballocations, which are either FREE,
3441  or point to VmaAllocations that can become lost.
3442 - itemsToMakeLostCount is the number of VmaAllocations that need to be made lost for
3443  the requested allocation to succeed.
3444 */
3445 struct VmaAllocationRequest
3446 {
3447  VkDeviceSize offset;
3448  VkDeviceSize sumFreeSize; // Sum size of free items that overlap with proposed allocation.
3449  VkDeviceSize sumItemSize; // Sum size of items to make lost that overlap with proposed allocation.
3450  VmaSuballocationList::iterator item;
3451  size_t itemsToMakeLostCount;
3452 
3453  VkDeviceSize CalcCost() const
3454  {
3455  return sumItemSize + itemsToMakeLostCount * VMA_LOST_ALLOCATION_COST;
3456  }
3457 };
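// Worked example (illustrative only): a request that overlaps 256 bytes of
// allocations to make lost, spread over 2 such allocations, has cost
// 256 + 2 * VMA_LOST_ALLOCATION_COST = 256 + 2 * 1048576 = 2097408 "bytes".
// When comparing candidate requests, lower CalcCost() wins.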
3458 
3459 /*
3460 Data structure used for bookkeeping of allocations and unused ranges of memory
3461 in a single VkDeviceMemory block.
3462 */
3463 class VmaBlockMetadata
3464 {
3465 public:
3466  VmaBlockMetadata(VmaAllocator hAllocator);
3467  ~VmaBlockMetadata();
3468  void Init(VkDeviceSize size);
3469 
3470  // Validates all data structures inside this object. If not valid, returns false.
3471  bool Validate() const;
3472  VkDeviceSize GetSize() const { return m_Size; }
3473  size_t GetAllocationCount() const { return m_Suballocations.size() - m_FreeCount; }
3474  VkDeviceSize GetSumFreeSize() const { return m_SumFreeSize; }
3475  VkDeviceSize GetUnusedRangeSizeMax() const;
3476  // Returns true if this block is empty - contains only single free suballocation.
3477  bool IsEmpty() const;
3478 
3479  void CalcAllocationStatInfo(VmaStatInfo& outInfo) const;
3480  void AddPoolStats(VmaPoolStats& inoutStats) const;
3481 
3482 #if VMA_STATS_STRING_ENABLED
3483  void PrintDetailedMap(class VmaJsonWriter& json) const;
3484 #endif
3485 
3486  // Creates a trivial request for the case when the block is empty.
3487  void CreateFirstAllocationRequest(VmaAllocationRequest* pAllocationRequest);
3488 
3489  // Tries to find a place for suballocation with given parameters inside this block.
3490  // If succeeded, fills pAllocationRequest and returns true.
3491  // If failed, returns false.
3492  bool CreateAllocationRequest(
3493  uint32_t currentFrameIndex,
3494  uint32_t frameInUseCount,
3495  VkDeviceSize bufferImageGranularity,
3496  VkDeviceSize allocSize,
3497  VkDeviceSize allocAlignment,
3498  VmaSuballocationType allocType,
3499  bool canMakeOtherLost,
3500  VmaAllocationRequest* pAllocationRequest);
3501 
3502  bool MakeRequestedAllocationsLost(
3503  uint32_t currentFrameIndex,
3504  uint32_t frameInUseCount,
3505  VmaAllocationRequest* pAllocationRequest);
3506 
3507  uint32_t MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
3508 
3509  // Makes actual allocation based on request. Request must already be checked and valid.
3510  void Alloc(
3511  const VmaAllocationRequest& request,
3512  VmaSuballocationType type,
3513  VkDeviceSize allocSize,
3514  VmaAllocation hAllocation);
3515 
3516  // Frees suballocation assigned to given memory region.
3517  void Free(const VmaAllocation allocation);
3518  void FreeAtOffset(VkDeviceSize offset);
3519 
3520 private:
3521  VkDeviceSize m_Size;
3522  uint32_t m_FreeCount;
3523  VkDeviceSize m_SumFreeSize;
3524  VmaSuballocationList m_Suballocations;
3525  // Suballocations that are free and have size greater than certain threshold.
3526  // Sorted by size, ascending.
3527  VmaVector< VmaSuballocationList::iterator, VmaStlAllocator< VmaSuballocationList::iterator > > m_FreeSuballocationsBySize;
3528 
3529  bool ValidateFreeSuballocationList() const;
3530 
3531  // Checks if requested suballocation with given parameters can be placed in given suballocItem.
3532  // If yes, fills pOffset and returns true. If not, returns false.
3533  bool CheckAllocation(
3534  uint32_t currentFrameIndex,
3535  uint32_t frameInUseCount,
3536  VkDeviceSize bufferImageGranularity,
3537  VkDeviceSize allocSize,
3538  VkDeviceSize allocAlignment,
3539  VmaSuballocationType allocType,
3540  VmaSuballocationList::const_iterator suballocItem,
3541  bool canMakeOtherLost,
3542  VkDeviceSize* pOffset,
3543  size_t* itemsToMakeLostCount,
3544  VkDeviceSize* pSumFreeSize,
3545  VkDeviceSize* pSumItemSize) const;
3546  // Merges given free suballocation with the following one, which must also be free.
3547  void MergeFreeWithNext(VmaSuballocationList::iterator item);
3548  // Releases given suballocation, making it free.
3549  // Merges it with adjacent free suballocations if applicable.
3550  // Returns iterator to new free suballocation at this place.
3551  VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
3552  // Inserts given free suballocation into the sorted list
3553  // m_FreeSuballocationsBySize, if it is large enough to be registered there.
3554  void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
3555  // Removes given free suballocation from the sorted list
3556  // m_FreeSuballocationsBySize, if it was large enough to be registered there.
3557  void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
3558 };
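// Typical usage within a block (sketch, simplified; error handling omitted):
//
//   VmaAllocationRequest request;
//   if(metadata.CreateAllocationRequest(
//       currentFrameIndex, frameInUseCount, bufferImageGranularity,
//       allocSize, allocAlignment, suballocType,
//       false, // canMakeOtherLost
//       &request))
//   {
//       metadata.Alloc(request, suballocType, allocSize, hAllocation);
//   }
//   // ... later:
//   metadata.Free(hAllocation);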
3559 
3560 // Helper class that represents mapped memory. Synchronized internally.
3561 class VmaDeviceMemoryMapping
3562 {
3563 public:
3564  VmaDeviceMemoryMapping();
3565  ~VmaDeviceMemoryMapping();
3566 
3567  void* GetMappedData() const { return m_pMappedData; }
3568 
3569  // ppData can be null.
3570  VkResult Map(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count, void** ppData);
3571  void Unmap(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count);
3572 
3573 private:
3574  VMA_MUTEX m_Mutex;
3575  uint32_t m_MapCount;
3576  void* m_pMappedData;
3577 };
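// Sketch of the reference-counted semantics (assuming valid hAllocator and
// hMemory; illustrative only):
//
//   void* pData;
//   mapping.Map(hAllocator, hMemory, 1, &pData);    // calls vkMapMemory, count = 1
//   mapping.Map(hAllocator, hMemory, 1, VMA_NULL);  // count = 2, no Vulkan call
//   mapping.Unmap(hAllocator, hMemory, 2);          // count = 0, calls vkUnmapMemory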
3578 
3579 /*
3580 Represents a single block of device memory (`VkDeviceMemory`) with all the
3581 data about its regions (aka suballocations, `VmaAllocation`), assigned and free.
3582 
3583 Thread-safety: This class must be externally synchronized.
3584 */
3585 class VmaDeviceMemoryBlock
3586 {
3587 public:
3588  uint32_t m_MemoryTypeIndex;
3589  VkDeviceMemory m_hMemory;
3590  VmaDeviceMemoryMapping m_Mapping;
3591  VmaBlockMetadata m_Metadata;
3592 
3593  VmaDeviceMemoryBlock(VmaAllocator hAllocator);
3594 
3595  ~VmaDeviceMemoryBlock()
3596  {
3597  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
3598  }
3599 
3600  // Always call after construction.
3601  void Init(
3602  uint32_t newMemoryTypeIndex,
3603  VkDeviceMemory newMemory,
3604  VkDeviceSize newSize);
3605  // Always call before destruction.
3606  void Destroy(VmaAllocator allocator);
3607 
3608  // Validates all data structures inside this object. If not valid, returns false.
3609  bool Validate() const;
3610 
3611  // ppData can be null.
3612  VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
3613  void Unmap(VmaAllocator hAllocator, uint32_t count);
3614 };
3615 
3616 struct VmaPointerLess
3617 {
3618  bool operator()(const void* lhs, const void* rhs) const
3619  {
3620  return lhs < rhs;
3621  }
3622 };
3623 
3624 class VmaDefragmentator;
3625 
3626 /*
3627 Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
3628 Vulkan memory type.
3629 
3630 Synchronized internally with a mutex.
3631 */
3632 struct VmaBlockVector
3633 {
3634  VmaBlockVector(
3635  VmaAllocator hAllocator,
3636  uint32_t memoryTypeIndex,
3637  VkDeviceSize preferredBlockSize,
3638  size_t minBlockCount,
3639  size_t maxBlockCount,
3640  VkDeviceSize bufferImageGranularity,
3641  uint32_t frameInUseCount,
3642  bool isCustomPool);
3643  ~VmaBlockVector();
3644 
3645  VkResult CreateMinBlocks();
3646 
3647  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
3648  VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
3649  VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
3650  uint32_t GetFrameInUseCount() const { return m_FrameInUseCount; }
3651 
3652  void GetPoolStats(VmaPoolStats* pStats);
3653 
3654  bool IsEmpty() const { return m_Blocks.empty(); }
3655 
3656  VkResult Allocate(
3657  VmaPool hCurrentPool,
3658  uint32_t currentFrameIndex,
3659  const VkMemoryRequirements& vkMemReq,
3660  const VmaAllocationCreateInfo& createInfo,
3661  VmaSuballocationType suballocType,
3662  VmaAllocation* pAllocation);
3663 
3664  void Free(
3665  VmaAllocation hAllocation);
3666 
3667  // Adds statistics of this BlockVector to pStats.
3668  void AddStats(VmaStats* pStats);
3669 
3670 #if VMA_STATS_STRING_ENABLED
3671  void PrintDetailedMap(class VmaJsonWriter& json);
3672 #endif
3673 
3674  void MakePoolAllocationsLost(
3675  uint32_t currentFrameIndex,
3676  size_t* pLostAllocationCount);
3677 
3678  VmaDefragmentator* EnsureDefragmentator(
3679  VmaAllocator hAllocator,
3680  uint32_t currentFrameIndex);
3681 
3682  VkResult Defragment(
3683  VmaDefragmentationStats* pDefragmentationStats,
3684  VkDeviceSize& maxBytesToMove,
3685  uint32_t& maxAllocationsToMove);
3686 
3687  void DestroyDefragmentator();
3688 
3689 private:
3690  friend class VmaDefragmentator;
3691 
3692  const VmaAllocator m_hAllocator;
3693  const uint32_t m_MemoryTypeIndex;
3694  const VkDeviceSize m_PreferredBlockSize;
3695  const size_t m_MinBlockCount;
3696  const size_t m_MaxBlockCount;
3697  const VkDeviceSize m_BufferImageGranularity;
3698  const uint32_t m_FrameInUseCount;
3699  const bool m_IsCustomPool;
3700  VMA_MUTEX m_Mutex;
3701  // Incrementally sorted by sumFreeSize, ascending.
3702  VmaVector< VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*> > m_Blocks;
3703  /* There can be at most one memory block that is completely empty - a
3704  hysteresis to avoid the pessimistic case of alternating creation and destruction
3705  of a VkDeviceMemory. */
3706  bool m_HasEmptyBlock;
3707  VmaDefragmentator* m_pDefragmentator;
3708 
3709  size_t CalcMaxBlockSize() const;
3710 
3711  // Finds and removes given block from vector.
3712  void Remove(VmaDeviceMemoryBlock* pBlock);
3713 
3714  // Performs single step in sorting m_Blocks. They may not be fully sorted
3715  // after this call.
3716  void IncrementallySortBlocks();
3717 
3718  VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
3719 };
3720 
3721 struct VmaPool_T
3722 {
3723 public:
3724  VmaBlockVector m_BlockVector;
3725 
3726  // Takes ownership.
3727  VmaPool_T(
3728  VmaAllocator hAllocator,
3729  const VmaPoolCreateInfo& createInfo);
3730  ~VmaPool_T();
3731 
3732  VmaBlockVector& GetBlockVector() { return m_BlockVector; }
3733 
3734 #if VMA_STATS_STRING_ENABLED
3735  //void PrintDetailedMap(class VmaStringBuilder& sb);
3736 #endif
3737 };
3738 
3739 class VmaDefragmentator
3740 {
3741  const VmaAllocator m_hAllocator;
3742  VmaBlockVector* const m_pBlockVector;
3743  uint32_t m_CurrentFrameIndex;
3744  VkDeviceSize m_BytesMoved;
3745  uint32_t m_AllocationsMoved;
3746 
3747  struct AllocationInfo
3748  {
3749  VmaAllocation m_hAllocation;
3750  VkBool32* m_pChanged;
3751 
3752  AllocationInfo() :
3753  m_hAllocation(VK_NULL_HANDLE),
3754  m_pChanged(VMA_NULL)
3755  {
3756  }
3757  };
3758 
3759  struct AllocationInfoSizeGreater
3760  {
3761  bool operator()(const AllocationInfo& lhs, const AllocationInfo& rhs) const
3762  {
3763  return lhs.m_hAllocation->GetSize() > rhs.m_hAllocation->GetSize();
3764  }
3765  };
3766 
3767  // Used between AddAllocation and Defragment.
3768  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
3769 
3770  struct BlockInfo
3771  {
3772  VmaDeviceMemoryBlock* m_pBlock;
3773  bool m_HasNonMovableAllocations;
3774  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
3775 
3776  BlockInfo(const VkAllocationCallbacks* pAllocationCallbacks) :
3777  m_pBlock(VMA_NULL),
3778  m_HasNonMovableAllocations(true),
3779  m_Allocations(pAllocationCallbacks),
3780  m_pMappedDataForDefragmentation(VMA_NULL)
3781  {
3782  }
3783 
3784  void CalcHasNonMovableAllocations()
3785  {
3786  const size_t blockAllocCount = m_pBlock->m_Metadata.GetAllocationCount();
3787  const size_t defragmentAllocCount = m_Allocations.size();
3788  m_HasNonMovableAllocations = blockAllocCount != defragmentAllocCount;
3789  }
3790 
3791  void SortAllocationsBySizeDescecnding()
3792  {
3793  VMA_SORT(m_Allocations.begin(), m_Allocations.end(), AllocationInfoSizeGreater());
3794  }
3795 
3796  VkResult EnsureMapping(VmaAllocator hAllocator, void** ppMappedData);
3797  void Unmap(VmaAllocator hAllocator);
3798 
3799  private:
3800  // Not null if mapped for defragmentation only, not originally mapped.
3801  void* m_pMappedDataForDefragmentation;
3802  };
3803 
3804  struct BlockPointerLess
3805  {
3806  bool operator()(const BlockInfo* pLhsBlockInfo, const VmaDeviceMemoryBlock* pRhsBlock) const
3807  {
3808  return pLhsBlockInfo->m_pBlock < pRhsBlock;
3809  }
3810  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
3811  {
3812  return pLhsBlockInfo->m_pBlock < pRhsBlockInfo->m_pBlock;
3813  }
3814  };
3815 
3816  // 1. Blocks with some non-movable allocations go first.
3817  // 2. Blocks with smaller sumFreeSize go first.
3818  struct BlockInfoCompareMoveDestination
3819  {
3820  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
3821  {
3822  if(pLhsBlockInfo->m_HasNonMovableAllocations && !pRhsBlockInfo->m_HasNonMovableAllocations)
3823  {
3824  return true;
3825  }
3826  if(!pLhsBlockInfo->m_HasNonMovableAllocations && pRhsBlockInfo->m_HasNonMovableAllocations)
3827  {
3828  return false;
3829  }
3830  if(pLhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize() < pRhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize())
3831  {
3832  return true;
3833  }
3834  return false;
3835  }
3836  };
3837 
3838  typedef VmaVector< BlockInfo*, VmaStlAllocator<BlockInfo*> > BlockInfoVector;
3839  BlockInfoVector m_Blocks;
3840 
3841  VkResult DefragmentRound(
3842  VkDeviceSize maxBytesToMove,
3843  uint32_t maxAllocationsToMove);
3844 
3845  static bool MoveMakesSense(
3846  size_t dstBlockIndex, VkDeviceSize dstOffset,
3847  size_t srcBlockIndex, VkDeviceSize srcOffset);
3848 
3849 public:
3850  VmaDefragmentator(
3851  VmaAllocator hAllocator,
3852  VmaBlockVector* pBlockVector,
3853  uint32_t currentFrameIndex);
3854 
3855  ~VmaDefragmentator();
3856 
3857  VkDeviceSize GetBytesMoved() const { return m_BytesMoved; }
3858  uint32_t GetAllocationsMoved() const { return m_AllocationsMoved; }
3859 
3860  void AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged);
3861 
3862  VkResult Defragment(
3863  VkDeviceSize maxBytesToMove,
3864  uint32_t maxAllocationsToMove);
3865 };
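// Usage sketch (simplified; in the library this is presumably driven by
// VmaBlockVector::Defragment()):
//
//   VmaDefragmentator defragmentator(hAllocator, pBlockVector, currentFrameIndex);
//   defragmentator.AddAllocation(hAlloc, &changed);
//   VkResult res = defragmentator.Defragment(maxBytesToMove, maxAllocationsToMove);
//   VkDeviceSize moved = defragmentator.GetBytesMoved();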
3866 
3867 // Main allocator object.
3868 struct VmaAllocator_T
3869 {
3870  bool m_UseMutex;
3871  bool m_UseKhrDedicatedAllocation;
3872  VkDevice m_hDevice;
3873  bool m_AllocationCallbacksSpecified;
3874  VkAllocationCallbacks m_AllocationCallbacks;
3875  VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
3876 
3877  // Number of bytes free out of the limit, or VK_WHOLE_SIZE if there is no limit for that heap.
3878  VkDeviceSize m_HeapSizeLimit[VK_MAX_MEMORY_HEAPS];
3879  VMA_MUTEX m_HeapSizeLimitMutex;
3880 
3881  VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
3882  VkPhysicalDeviceMemoryProperties m_MemProps;
3883 
3884  // Default pools.
3885  VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
3886 
3887  // Each vector is sorted by memory (handle value).
3888  typedef VmaVector< VmaAllocation, VmaStlAllocator<VmaAllocation> > AllocationVectorType;
3889  AllocationVectorType* m_pDedicatedAllocations[VK_MAX_MEMORY_TYPES];
3890  VMA_MUTEX m_DedicatedAllocationsMutex[VK_MAX_MEMORY_TYPES];
3891 
3892  VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
3893  ~VmaAllocator_T();
3894 
3895  const VkAllocationCallbacks* GetAllocationCallbacks() const
3896  {
3897  return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : 0;
3898  }
3899  const VmaVulkanFunctions& GetVulkanFunctions() const
3900  {
3901  return m_VulkanFunctions;
3902  }
3903 
3904  VkDeviceSize GetBufferImageGranularity() const
3905  {
3906  return VMA_MAX(
3907  static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
3908  m_PhysicalDeviceProperties.limits.bufferImageGranularity);
3909  }
3910 
3911  uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
3912  uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
3913 
3914  uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
3915  {
3916  VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
3917  return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
3918  }
3919 
3920  void GetBufferMemoryRequirements(
3921  VkBuffer hBuffer,
3922  VkMemoryRequirements& memReq,
3923  bool& requiresDedicatedAllocation,
3924  bool& prefersDedicatedAllocation) const;
3925  void GetImageMemoryRequirements(
3926  VkImage hImage,
3927  VkMemoryRequirements& memReq,
3928  bool& requiresDedicatedAllocation,
3929  bool& prefersDedicatedAllocation) const;
3930 
3931  // Main allocation function.
3932  VkResult AllocateMemory(
3933  const VkMemoryRequirements& vkMemReq,
3934  bool requiresDedicatedAllocation,
3935  bool prefersDedicatedAllocation,
3936  VkBuffer dedicatedBuffer,
3937  VkImage dedicatedImage,
3938  const VmaAllocationCreateInfo& createInfo,
3939  VmaSuballocationType suballocType,
3940  VmaAllocation* pAllocation);
3941 
3942  // Main deallocation function.
3943  void FreeMemory(const VmaAllocation allocation);
3944 
3945  void CalculateStats(VmaStats* pStats);
3946 
3947 #if VMA_STATS_STRING_ENABLED
3948  void PrintDetailedMap(class VmaJsonWriter& json);
3949 #endif
3950 
3951  VkResult Defragment(
3952  VmaAllocation* pAllocations,
3953  size_t allocationCount,
3954  VkBool32* pAllocationsChanged,
3955  const VmaDefragmentationInfo* pDefragmentationInfo,
3956  VmaDefragmentationStats* pDefragmentationStats);
3957 
3958  void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
3959 
3960  VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
3961  void DestroyPool(VmaPool pool);
3962  void GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats);
3963 
3964  void SetCurrentFrameIndex(uint32_t frameIndex);
3965 
3966  void MakePoolAllocationsLost(
3967  VmaPool hPool,
3968  size_t* pLostAllocationCount);
3969 
3970  void CreateLostAllocation(VmaAllocation* pAllocation);
3971 
3972  VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
3973  void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
3974 
3975  VkResult Map(VmaAllocation hAllocation, void** ppData);
3976  void Unmap(VmaAllocation hAllocation);
3977 
3978 private:
3979  VkDeviceSize m_PreferredLargeHeapBlockSize;
3980 
3981  VkPhysicalDevice m_PhysicalDevice;
3982  VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
3983 
3984  VMA_MUTEX m_PoolsMutex;
3985  // Protected by m_PoolsMutex. Sorted by pointer value.
3986  VmaVector<VmaPool, VmaStlAllocator<VmaPool> > m_Pools;
3987 
3988  VmaVulkanFunctions m_VulkanFunctions;
3989 
3990  void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
3991 
3992  VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
3993 
3994  VkResult AllocateMemoryOfType(
3995  const VkMemoryRequirements& vkMemReq,
3996  bool dedicatedAllocation,
3997  VkBuffer dedicatedBuffer,
3998  VkImage dedicatedImage,
3999  const VmaAllocationCreateInfo& createInfo,
4000  uint32_t memTypeIndex,
4001  VmaSuballocationType suballocType,
4002  VmaAllocation* pAllocation);
4003 
4004  // Allocates and registers new VkDeviceMemory specifically for single allocation.
4005  VkResult AllocateDedicatedMemory(
4006  VkDeviceSize size,
4007  VmaSuballocationType suballocType,
4008  uint32_t memTypeIndex,
4009  bool map,
4010  bool isUserDataString,
4011  void* pUserData,
4012  VkBuffer dedicatedBuffer,
4013  VkImage dedicatedImage,
4014  VmaAllocation* pAllocation);
4015 
4016  // Frees memory of given allocation that was created as dedicated and unregisters it.
4017  void FreeDedicatedMemory(VmaAllocation allocation);
4018 };
4019 
4020 ////////////////////////////////////////////////////////////////////////////////
4021 // Memory allocation #2 after VmaAllocator_T definition
4022 
4023 static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
4024 {
4025  return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
4026 }
4027 
4028 static void VmaFree(VmaAllocator hAllocator, void* ptr)
4029 {
4030  VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
4031 }
4032 
4033 template<typename T>
4034 static T* VmaAllocate(VmaAllocator hAllocator)
4035 {
4036  return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
4037 }
4038 
4039 template<typename T>
4040 static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
4041 {
4042  return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
4043 }
4044 
4045 template<typename T>
4046 static void vma_delete(VmaAllocator hAllocator, T* ptr)
4047 {
4048  if(ptr != VMA_NULL)
4049  {
4050  ptr->~T();
4051  VmaFree(hAllocator, ptr);
4052  }
4053 }
4054 
4055 template<typename T>
4056 static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
4057 {
4058  if(ptr != VMA_NULL)
4059  {
4060  for(size_t i = count; i--; )
4061  ptr[i].~T();
4062  VmaFree(hAllocator, ptr);
4063  }
4064 }
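// Example pairing of these helpers (illustrative only; MyStruct is a
// hypothetical type). VmaAllocate returns raw storage, so non-POD types need
// placement construction before use:
//
//   MyStruct* p = VmaAllocate<MyStruct>(hAllocator); // raw storage
//   new(p) MyStruct();                               // placement-construct
//   vma_delete(hAllocator, p);                       // destructor + VmaFree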
4065 
4066 ////////////////////////////////////////////////////////////////////////////////
4067 // VmaStringBuilder
4068 
4069 #if VMA_STATS_STRING_ENABLED
4070 
4071 class VmaStringBuilder
4072 {
4073 public:
4074  VmaStringBuilder(VmaAllocator alloc) : m_Data(VmaStlAllocator<char>(alloc->GetAllocationCallbacks())) { }
4075  size_t GetLength() const { return m_Data.size(); }
4076  const char* GetData() const { return m_Data.data(); }
4077 
4078  void Add(char ch) { m_Data.push_back(ch); }
4079  void Add(const char* pStr);
4080  void AddNewLine() { Add('\n'); }
4081  void AddNumber(uint32_t num);
4082  void AddNumber(uint64_t num);
4083  void AddPointer(const void* ptr);
4084 
4085 private:
4086  VmaVector< char, VmaStlAllocator<char> > m_Data;
4087 };
4088 
4089 void VmaStringBuilder::Add(const char* pStr)
4090 {
4091  const size_t strLen = strlen(pStr);
4092  if(strLen > 0)
4093  {
4094  const size_t oldCount = m_Data.size();
4095  m_Data.resize(oldCount + strLen);
4096  memcpy(m_Data.data() + oldCount, pStr, strLen);
4097  }
4098 }
4099 
4100 void VmaStringBuilder::AddNumber(uint32_t num)
4101 {
4102  char buf[11];
4103  VmaUint32ToStr(buf, sizeof(buf), num);
4104  Add(buf);
4105 }
4106 
4107 void VmaStringBuilder::AddNumber(uint64_t num)
4108 {
4109  char buf[21];
4110  VmaUint64ToStr(buf, sizeof(buf), num);
4111  Add(buf);
4112 }
4113 
4114 void VmaStringBuilder::AddPointer(const void* ptr)
4115 {
4116  char buf[21];
4117  VmaPtrToStr(buf, sizeof(buf), ptr);
4118  Add(buf);
4119 }
4120 
4121 #endif // #if VMA_STATS_STRING_ENABLED
4122 
4123 ////////////////////////////////////////////////////////////////////////////////
4124 // VmaJsonWriter
4125 
4126 #if VMA_STATS_STRING_ENABLED
4127 
4128 class VmaJsonWriter
4129 {
4130 public:
4131  VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
4132  ~VmaJsonWriter();
4133 
4134  void BeginObject(bool singleLine = false);
4135  void EndObject();
4136 
4137  void BeginArray(bool singleLine = false);
4138  void EndArray();
4139 
4140  void WriteString(const char* pStr);
4141  void BeginString(const char* pStr = VMA_NULL);
4142  void ContinueString(const char* pStr);
4143  void ContinueString(uint32_t n);
4144  void ContinueString(uint64_t n);
4145  void ContinueString_Pointer(const void* ptr);
4146  void EndString(const char* pStr = VMA_NULL);
4147 
4148  void WriteNumber(uint32_t n);
4149  void WriteNumber(uint64_t n);
4150  void WriteBool(bool b);
4151  void WriteNull();
4152 
4153 private:
4154  static const char* const INDENT;
4155 
4156  enum COLLECTION_TYPE
4157  {
4158  COLLECTION_TYPE_OBJECT,
4159  COLLECTION_TYPE_ARRAY,
4160  };
4161  struct StackItem
4162  {
4163  COLLECTION_TYPE type;
4164  uint32_t valueCount;
4165  bool singleLineMode;
4166  };
4167 
4168  VmaStringBuilder& m_SB;
4169  VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
4170  bool m_InsideString;
4171 
4172  void BeginValue(bool isString);
4173  void WriteIndent(bool oneLess = false);
4174 };
4175 
4176 const char* const VmaJsonWriter::INDENT = " ";
4177 
4178 VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb) :
4179  m_SB(sb),
4180  m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
4181  m_InsideString(false)
4182 {
4183 }
4184 
4185 VmaJsonWriter::~VmaJsonWriter()
4186 {
4187  VMA_ASSERT(!m_InsideString);
4188  VMA_ASSERT(m_Stack.empty());
4189 }
4190 
4191 void VmaJsonWriter::BeginObject(bool singleLine)
4192 {
4193  VMA_ASSERT(!m_InsideString);
4194 
4195  BeginValue(false);
4196  m_SB.Add('{');
4197 
4198  StackItem item;
4199  item.type = COLLECTION_TYPE_OBJECT;
4200  item.valueCount = 0;
4201  item.singleLineMode = singleLine;
4202  m_Stack.push_back(item);
4203 }
4204 
4205 void VmaJsonWriter::EndObject()
4206 {
4207  VMA_ASSERT(!m_InsideString);
4208 
4209  WriteIndent(true);
4210  m_SB.Add('}');
4211 
4212  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
4213  m_Stack.pop_back();
4214 }
4215 
4216 void VmaJsonWriter::BeginArray(bool singleLine)
4217 {
4218  VMA_ASSERT(!m_InsideString);
4219 
4220  BeginValue(false);
4221  m_SB.Add('[');
4222 
4223  StackItem item;
4224  item.type = COLLECTION_TYPE_ARRAY;
4225  item.valueCount = 0;
4226  item.singleLineMode = singleLine;
4227  m_Stack.push_back(item);
4228 }
4229 
4230 void VmaJsonWriter::EndArray()
4231 {
4232  VMA_ASSERT(!m_InsideString);
4233 
4234  WriteIndent(true);
4235  m_SB.Add(']');
4236 
4237  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
4238  m_Stack.pop_back();
4239 }
4240 
4241 void VmaJsonWriter::WriteString(const char* pStr)
4242 {
4243  BeginString(pStr);
4244  EndString();
4245 }
4246 
4247 void VmaJsonWriter::BeginString(const char* pStr)
4248 {
4249  VMA_ASSERT(!m_InsideString);
4250 
4251  BeginValue(true);
4252  m_SB.Add('"');
4253  m_InsideString = true;
4254  if(pStr != VMA_NULL && pStr[0] != '\0')
4255  {
4256  ContinueString(pStr);
4257  }
4258 }
4259 
4260 void VmaJsonWriter::ContinueString(const char* pStr)
4261 {
4262  VMA_ASSERT(m_InsideString);
4263 
4264  const size_t strLen = strlen(pStr);
4265  for(size_t i = 0; i < strLen; ++i)
4266  {
4267  char ch = pStr[i];
4268  if(ch == '\\')
4269  {
4270  m_SB.Add("\\\\");
4271  }
4272  else if(ch == '"')
4273  {
4274  m_SB.Add("\\\"");
4275  }
4276  else if(ch >= 32)
4277  {
4278  m_SB.Add(ch);
4279  }
4280  else switch(ch)
4281  {
4282  case '\b':
4283  m_SB.Add("\\b");
4284  break;
4285  case '\f':
4286  m_SB.Add("\\f");
4287  break;
4288  case '\n':
4289  m_SB.Add("\\n");
4290  break;
4291  case '\r':
4292  m_SB.Add("\\r");
4293  break;
4294  case '\t':
4295  m_SB.Add("\\t");
4296  break;
4297  default:
4298  VMA_ASSERT(0 && "Character not currently supported.");
4299  break;
4300  }
4301  }
4302 }
4303 
4304 void VmaJsonWriter::ContinueString(uint32_t n)
4305 {
4306  VMA_ASSERT(m_InsideString);
4307  m_SB.AddNumber(n);
4308 }
4309 
4310 void VmaJsonWriter::ContinueString(uint64_t n)
4311 {
4312  VMA_ASSERT(m_InsideString);
4313  m_SB.AddNumber(n);
4314 }
4315 
4316 void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
4317 {
4318  VMA_ASSERT(m_InsideString);
4319  m_SB.AddPointer(ptr);
4320 }
4321 
4322 void VmaJsonWriter::EndString(const char* pStr)
4323 {
4324  VMA_ASSERT(m_InsideString);
4325  if(pStr != VMA_NULL && pStr[0] != '\0')
4326  {
4327  ContinueString(pStr);
4328  }
4329  m_SB.Add('"');
4330  m_InsideString = false;
4331 }
4332 
4333 void VmaJsonWriter::WriteNumber(uint32_t n)
4334 {
4335  VMA_ASSERT(!m_InsideString);
4336  BeginValue(false);
4337  m_SB.AddNumber(n);
4338 }
4339 
4340 void VmaJsonWriter::WriteNumber(uint64_t n)
4341 {
4342  VMA_ASSERT(!m_InsideString);
4343  BeginValue(false);
4344  m_SB.AddNumber(n);
4345 }
4346 
4347 void VmaJsonWriter::WriteBool(bool b)
4348 {
4349  VMA_ASSERT(!m_InsideString);
4350  BeginValue(false);
4351  m_SB.Add(b ? "true" : "false");
4352 }
4353 
4354 void VmaJsonWriter::WriteNull()
4355 {
4356  VMA_ASSERT(!m_InsideString);
4357  BeginValue(false);
4358  m_SB.Add("null");
4359 }
4360 
4361 void VmaJsonWriter::BeginValue(bool isString)
4362 {
4363  if(!m_Stack.empty())
4364  {
4365  StackItem& currItem = m_Stack.back();
4366  if(currItem.type == COLLECTION_TYPE_OBJECT &&
4367  currItem.valueCount % 2 == 0)
4368  {
4369  VMA_ASSERT(isString);
4370  }
4371 
4372  if(currItem.type == COLLECTION_TYPE_OBJECT &&
4373  currItem.valueCount % 2 != 0)
4374  {
4375  m_SB.Add(": ");
4376  }
4377  else if(currItem.valueCount > 0)
4378  {
4379  m_SB.Add(", ");
4380  WriteIndent();
4381  }
4382  else
4383  {
4384  WriteIndent();
4385  }
4386  ++currItem.valueCount;
4387  }
4388 }
4389 
4390 void VmaJsonWriter::WriteIndent(bool oneLess)
4391 {
4392  if(!m_Stack.empty() && !m_Stack.back().singleLineMode)
4393  {
4394  m_SB.AddNewLine();
4395 
4396  size_t count = m_Stack.size();
4397  if(count > 0 && oneLess)
4398  {
4399  --count;
4400  }
4401  for(size_t i = 0; i < count; ++i)
4402  {
4403  m_SB.Add(INDENT);
4404  }
4405  }
4406 }
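// Example of produced output (sketch): the call sequence
//
//   json.BeginObject();
//   json.WriteString("Count");
//   json.WriteNumber(2u);
//   json.EndObject();
//
// emits:
//
//   {
//    "Count": 2
//   }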
4407 
4408 #endif // #if VMA_STATS_STRING_ENABLED
4409 
4411 
4412 void VmaAllocation_T::SetUserData(VmaAllocator hAllocator, void* pUserData)
4413 {
4414  if(IsUserDataString())
4415  {
4416  VMA_ASSERT(pUserData == VMA_NULL || pUserData != m_pUserData);
4417 
4418  FreeUserDataString(hAllocator);
4419 
4420  if(pUserData != VMA_NULL)
4421  {
4422  const char* const newStrSrc = (char*)pUserData;
4423  const size_t newStrLen = strlen(newStrSrc);
4424  char* const newStrDst = vma_new_array(hAllocator, char, newStrLen + 1);
4425  memcpy(newStrDst, newStrSrc, newStrLen + 1);
4426  m_pUserData = newStrDst;
4427  }
4428  }
4429  else
4430  {
4431  m_pUserData = pUserData;
4432  }
4433 }
4434 
4435 void VmaAllocation_T::ChangeBlockAllocation(
4436  VmaAllocator hAllocator,
4437  VmaDeviceMemoryBlock* block,
4438  VkDeviceSize offset)
4439 {
4440  VMA_ASSERT(block != VMA_NULL);
4441  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
4442 
4443  // Move mapping reference counter from old block to new block.
4444  if(block != m_BlockAllocation.m_Block)
4445  {
4446  uint32_t mapRefCount = m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP;
4447  if(IsPersistentMap())
4448  ++mapRefCount;
4449  m_BlockAllocation.m_Block->Unmap(hAllocator, mapRefCount);
4450  block->Map(hAllocator, mapRefCount, VMA_NULL);
4451  }
4452 
4453  m_BlockAllocation.m_Block = block;
4454  m_BlockAllocation.m_Offset = offset;
4455 }
4456 
4457 VkDeviceSize VmaAllocation_T::GetOffset() const
4458 {
4459  switch(m_Type)
4460  {
4461  case ALLOCATION_TYPE_BLOCK:
4462  return m_BlockAllocation.m_Offset;
4463  case ALLOCATION_TYPE_DEDICATED:
4464  return 0;
4465  default:
4466  VMA_ASSERT(0);
4467  return 0;
4468  }
4469 }
4470 
4471 VkDeviceMemory VmaAllocation_T::GetMemory() const
4472 {
4473  switch(m_Type)
4474  {
4475  case ALLOCATION_TYPE_BLOCK:
4476  return m_BlockAllocation.m_Block->m_hMemory;
4477  case ALLOCATION_TYPE_DEDICATED:
4478  return m_DedicatedAllocation.m_hMemory;
4479  default:
4480  VMA_ASSERT(0);
4481  return VK_NULL_HANDLE;
4482  }
4483 }
4484 
4485 uint32_t VmaAllocation_T::GetMemoryTypeIndex() const
4486 {
4487  switch(m_Type)
4488  {
4489  case ALLOCATION_TYPE_BLOCK:
4490  return m_BlockAllocation.m_Block->m_MemoryTypeIndex;
4491  case ALLOCATION_TYPE_DEDICATED:
4492  return m_DedicatedAllocation.m_MemoryTypeIndex;
4493  default:
4494  VMA_ASSERT(0);
4495  return UINT32_MAX;
4496  }
4497 }
4498 
4499 void* VmaAllocation_T::GetMappedData() const
4500 {
4501  switch(m_Type)
4502  {
4503  case ALLOCATION_TYPE_BLOCK:
4504  if(m_MapCount != 0)
4505  {
4506  void* pBlockData = m_BlockAllocation.m_Block->m_Mapping.GetMappedData();
4507  VMA_ASSERT(pBlockData != VMA_NULL);
4508  return (char*)pBlockData + m_BlockAllocation.m_Offset;
4509  }
4510  else
4511  {
4512  return VMA_NULL;
4513  }
4514  break;
4515  case ALLOCATION_TYPE_DEDICATED:
4516  VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0));
4517  return m_DedicatedAllocation.m_pMappedData;
4518  default:
4519  VMA_ASSERT(0);
4520  return VMA_NULL;
4521  }
4522 }
4523 
4524 bool VmaAllocation_T::CanBecomeLost() const
4525 {
4526  switch(m_Type)
4527  {
4528  case ALLOCATION_TYPE_BLOCK:
4529  return m_BlockAllocation.m_CanBecomeLost;
4530  case ALLOCATION_TYPE_DEDICATED:
4531  return false;
4532  default:
4533  VMA_ASSERT(0);
4534  return false;
4535  }
4536 }
4537 
4538 VmaPool VmaAllocation_T::GetPool() const
4539 {
4540  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
4541  return m_BlockAllocation.m_hPool;
4542 }
4543 
4544 bool VmaAllocation_T::MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
4545 {
4546  VMA_ASSERT(CanBecomeLost());
4547 
4548  /*
4549  Warning: This is a carefully designed algorithm.
4550  Do not modify unless you really know what you're doing :)
4551  */
4552  uint32_t localLastUseFrameIndex = GetLastUseFrameIndex();
4553  for(;;)
4554  {
4555  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
4556  {
4557  VMA_ASSERT(0);
4558  return false;
4559  }
4560  else if(localLastUseFrameIndex + frameInUseCount >= currentFrameIndex)
4561  {
4562  return false;
4563  }
4564  else // Last use time earlier than current time.
4565  {
4566  if(CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, VMA_FRAME_INDEX_LOST))
4567  {
4568  // Setting hAllocation.LastUseFrameIndex atomic to VMA_FRAME_INDEX_LOST is enough to mark it as LOST.
4569  // Calling code just needs to unregister this allocation in owning VmaDeviceMemoryBlock.
4570  return true;
4571  }
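 // On failure, compare_exchange updated localLastUseFrameIndex with the
 // current value of the atomic, so the loop retries with fresh data.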
4572  }
4573  }
4574 }
4575 
4576 void VmaAllocation_T::FreeUserDataString(VmaAllocator hAllocator)
4577 {
4578  VMA_ASSERT(IsUserDataString());
4579  if(m_pUserData != VMA_NULL)
4580  {
4581  char* const oldStr = (char*)m_pUserData;
4582  const size_t oldStrLen = strlen(oldStr);
4583  vma_delete_array(hAllocator, oldStr, oldStrLen + 1);
4584  m_pUserData = VMA_NULL;
4585  }
4586 }
4587 
4588 void VmaAllocation_T::BlockAllocMap()
4589 {
4590  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
4591 
4592  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
4593  {
4594  ++m_MapCount;
4595  }
4596  else
4597  {
4598  VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
4599  }
4600 }
4601 
4602 void VmaAllocation_T::BlockAllocUnmap()
4603 {
4604  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
4605 
4606  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
4607  {
4608  --m_MapCount;
4609  }
4610  else
4611  {
4612  VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
4613  }
4614 }
4615 
4616 VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
4617 {
4618  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
4619 
4620  if(m_MapCount != 0)
4621  {
4622  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
4623  {
4624  VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
4625  *ppData = m_DedicatedAllocation.m_pMappedData;
4626  ++m_MapCount;
4627  return VK_SUCCESS;
4628  }
4629  else
4630  {
4631  VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
4632  return VK_ERROR_MEMORY_MAP_FAILED;
4633  }
4634  }
4635  else
4636  {
4637  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
4638  hAllocator->m_hDevice,
4639  m_DedicatedAllocation.m_hMemory,
4640  0, // offset
4641  VK_WHOLE_SIZE,
4642  0, // flags
4643  ppData);
4644  if(result == VK_SUCCESS)
4645  {
4646  m_DedicatedAllocation.m_pMappedData = *ppData;
4647  m_MapCount = 1;
4648  }
4649  return result;
4650  }
4651 }
4652 
4653 void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
4654 {
4655  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
4656 
4657  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
4658  {
4659  --m_MapCount;
4660  if(m_MapCount == 0)
4661  {
4662  m_DedicatedAllocation.m_pMappedData = VMA_NULL;
4663  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
4664  hAllocator->m_hDevice,
4665  m_DedicatedAllocation.m_hMemory);
4666  }
4667  }
4668  else
4669  {
4670  VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
4671  }
4672 }
4673 
4674 #if VMA_STATS_STRING_ENABLED
4675 
4676 // Names correspond to values of enum VmaSuballocationType.
4677 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] = {
4678  "FREE",
4679  "UNKNOWN",
4680  "BUFFER",
4681  "IMAGE_UNKNOWN",
4682  "IMAGE_LINEAR",
4683  "IMAGE_OPTIMAL",
4684 };
4685 
4686 static void VmaPrintStatInfo(VmaJsonWriter& json, const VmaStatInfo& stat)
4687 {
4688  json.BeginObject();
4689 
4690  json.WriteString("Blocks");
4691  json.WriteNumber(stat.blockCount);
4692 
4693  json.WriteString("Allocations");
4694  json.WriteNumber(stat.allocationCount);
4695 
4696  json.WriteString("UnusedRanges");
4697  json.WriteNumber(stat.unusedRangeCount);
4698 
4699  json.WriteString("UsedBytes");
4700  json.WriteNumber(stat.usedBytes);
4701 
4702  json.WriteString("UnusedBytes");
4703  json.WriteNumber(stat.unusedBytes);
4704 
4705  if(stat.allocationCount > 1)
4706  {
4707  json.WriteString("AllocationSize");
4708  json.BeginObject(true);
4709  json.WriteString("Min");
4710  json.WriteNumber(stat.allocationSizeMin);
4711  json.WriteString("Avg");
4712  json.WriteNumber(stat.allocationSizeAvg);
4713  json.WriteString("Max");
4714  json.WriteNumber(stat.allocationSizeMax);
4715  json.EndObject();
4716  }
4717 
4718  if(stat.unusedRangeCount > 1)
4719  {
4720  json.WriteString("UnusedRangeSize");
4721  json.BeginObject(true);
4722  json.WriteString("Min");
4723  json.WriteNumber(stat.unusedRangeSizeMin);
4724  json.WriteString("Avg");
4725  json.WriteNumber(stat.unusedRangeSizeAvg);
4726  json.WriteString("Max");
4727  json.WriteNumber(stat.unusedRangeSizeMax);
4728  json.EndObject();
4729  }
4730 
4731  json.EndObject();
4732 }
4733 
4734 #endif // #if VMA_STATS_STRING_ENABLED
4735 
4736 struct VmaSuballocationItemSizeLess
4737 {
4738  bool operator()(
4739  const VmaSuballocationList::iterator lhs,
4740  const VmaSuballocationList::iterator rhs) const
4741  {
4742  return lhs->size < rhs->size;
4743  }
4744  bool operator()(
4745  const VmaSuballocationList::iterator lhs,
4746  VkDeviceSize rhsSize) const
4747  {
4748  return lhs->size < rhsSize;
4749  }
4750 };
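// The second operator() overload lets binary search run directly against a
// VkDeviceSize key. Sketch (assuming m_FreeSuballocationsBySize as declared in
// VmaBlockMetadata; bounds check against the end pointer omitted):
//
//   VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
//       m_FreeSuballocationsBySize.data(),
//       m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
//       allocSize,
//       VmaSuballocationItemSizeLess());
//   // *it is the smallest registered free suballocation with size >= allocSize.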
4751 
4752 ////////////////////////////////////////////////////////////////////////////////
4753 // class VmaBlockMetadata
4754 
4755 VmaBlockMetadata::VmaBlockMetadata(VmaAllocator hAllocator) :
4756  m_Size(0),
4757  m_FreeCount(0),
4758  m_SumFreeSize(0),
4759  m_Suballocations(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
4760  m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(hAllocator->GetAllocationCallbacks()))
4761 {
4762 }
4763 
4764 VmaBlockMetadata::~VmaBlockMetadata()
4765 {
4766 }
4767 
4768 void VmaBlockMetadata::Init(VkDeviceSize size)
4769 {
4770  m_Size = size;
4771  m_FreeCount = 1;
4772  m_SumFreeSize = size;
4773 
4774  VmaSuballocation suballoc = {};
4775  suballoc.offset = 0;
4776  suballoc.size = size;
4777  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
4778  suballoc.hAllocation = VK_NULL_HANDLE;
4779 
4780  m_Suballocations.push_back(suballoc);
4781  VmaSuballocationList::iterator suballocItem = m_Suballocations.end();
4782  --suballocItem;
4783  m_FreeSuballocationsBySize.push_back(suballocItem);
4784 }
4785 
4786 bool VmaBlockMetadata::Validate() const
4787 {
4788  if(m_Suballocations.empty())
4789  {
4790  return false;
4791  }
4792 
4793  // Expected offset of new suballocation as calculated from previous ones.
4794  VkDeviceSize calculatedOffset = 0;
4795  // Expected number of free suballocations as calculated from traversing their list.
4796  uint32_t calculatedFreeCount = 0;
4797  // Expected sum size of free suballocations as calculated from traversing their list.
4798  VkDeviceSize calculatedSumFreeSize = 0;
4799  // Expected number of free suballocations that should be registered in
4800  // m_FreeSuballocationsBySize calculated from traversing their list.
4801  size_t freeSuballocationsToRegister = 0;
4802  // True if previously visited suballocation was free.
4803  bool prevFree = false;
4804 
4805  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
4806  suballocItem != m_Suballocations.cend();
4807  ++suballocItem)
4808  {
4809  const VmaSuballocation& subAlloc = *suballocItem;
4810 
4811  // Actual offset of this suballocation doesn't match expected one.
4812  if(subAlloc.offset != calculatedOffset)
4813  {
4814  return false;
4815  }
4816 
4817  const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
4818  // Two adjacent free suballocations are invalid. They should be merged.
4819  if(prevFree && currFree)
4820  {
4821  return false;
4822  }
4823 
4824  if(currFree != (subAlloc.hAllocation == VK_NULL_HANDLE))
4825  {
4826  return false;
4827  }
4828 
4829  if(currFree)
4830  {
4831  calculatedSumFreeSize += subAlloc.size;
4832  ++calculatedFreeCount;
4833  if(subAlloc.size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
4834  {
4835  ++freeSuballocationsToRegister;
4836  }
4837  }
4838  else
4839  {
4840  if(subAlloc.hAllocation->GetOffset() != subAlloc.offset)
4841  {
4842  return false;
4843  }
4844  if(subAlloc.hAllocation->GetSize() != subAlloc.size)
4845  {
4846  return false;
4847  }
4848  }
4849 
4850  calculatedOffset += subAlloc.size;
4851  prevFree = currFree;
4852  }
4853 
4854  // Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
4855  // match expected one.
4856  if(m_FreeSuballocationsBySize.size() != freeSuballocationsToRegister)
4857  {
4858  return false;
4859  }
4860 
4861  VkDeviceSize lastSize = 0;
4862  for(size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
4863  {
4864  VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];
4865 
4866  // Only free suballocations can be registered in m_FreeSuballocationsBySize.
4867  if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
4868  {
4869  return false;
4870  }
4871  // They must be sorted by size ascending.
4872  if(suballocItem->size < lastSize)
4873  {
4874  return false;
4875  }
4876 
4877  lastSize = suballocItem->size;
4878  }
4879 
4880  // Check if totals match calculated values.
4881  if(!ValidateFreeSuballocationList() ||
4882  (calculatedOffset != m_Size) ||
4883  (calculatedSumFreeSize != m_SumFreeSize) ||
4884  (calculatedFreeCount != m_FreeCount))
4885  {
4886  return false;
4887  }
4888 
4889  return true;
4890 }
4891 
4892 VkDeviceSize VmaBlockMetadata::GetUnusedRangeSizeMax() const
4893 {
4894  if(!m_FreeSuballocationsBySize.empty())
4895  {
4896  return m_FreeSuballocationsBySize.back()->size;
4897  }
4898  else
4899  {
4900  return 0;
4901  }
4902 }
4903 
4904 bool VmaBlockMetadata::IsEmpty() const
4905 {
4906  return (m_Suballocations.size() == 1) && (m_FreeCount == 1);
4907 }
4908 
4909 void VmaBlockMetadata::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
4910 {
4911  outInfo.blockCount = 1;
4912 
4913  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
4914  outInfo.allocationCount = rangeCount - m_FreeCount;
4915  outInfo.unusedRangeCount = m_FreeCount;
4916 
4917  outInfo.unusedBytes = m_SumFreeSize;
4918  outInfo.usedBytes = m_Size - outInfo.unusedBytes;
4919 
4920  outInfo.allocationSizeMin = UINT64_MAX;
4921  outInfo.allocationSizeMax = 0;
4922  outInfo.unusedRangeSizeMin = UINT64_MAX;
4923  outInfo.unusedRangeSizeMax = 0;
4924 
4925  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
4926  suballocItem != m_Suballocations.cend();
4927  ++suballocItem)
4928  {
4929  const VmaSuballocation& suballoc = *suballocItem;
4930  if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
4931  {
4932  outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
4933  outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);
4934  }
4935  else
4936  {
4937  outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, suballoc.size);
4938  outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, suballoc.size);
4939  }
4940  }
4941 }
4942 
4943 void VmaBlockMetadata::AddPoolStats(VmaPoolStats& inoutStats) const
4944 {
4945  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
4946 
4947  inoutStats.size += m_Size;
4948  inoutStats.unusedSize += m_SumFreeSize;
4949  inoutStats.allocationCount += rangeCount - m_FreeCount;
4950  inoutStats.unusedRangeCount += m_FreeCount;
4951  inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, GetUnusedRangeSizeMax());
4952 }
4953 
4954 #if VMA_STATS_STRING_ENABLED
4955 
4956 void VmaBlockMetadata::PrintDetailedMap(class VmaJsonWriter& json) const
4957 {
4958  json.BeginObject();
4959 
4960  json.WriteString("TotalBytes");
4961  json.WriteNumber(m_Size);
4962 
4963  json.WriteString("UnusedBytes");
4964  json.WriteNumber(m_SumFreeSize);
4965 
4966  json.WriteString("Allocations");
4967  json.WriteNumber(m_Suballocations.size() - m_FreeCount);
4968 
4969  json.WriteString("UnusedRanges");
4970  json.WriteNumber(m_FreeCount);
4971 
4972  json.WriteString("Suballocations");
4973  json.BeginArray();
4974  size_t i = 0;
4975  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
4976  suballocItem != m_Suballocations.cend();
4977  ++suballocItem, ++i)
4978  {
4979  json.BeginObject(true);
4980 
4981  json.WriteString("Type");
4982  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[suballocItem->type]);
4983 
4984  json.WriteString("Size");
4985  json.WriteNumber(suballocItem->size);
4986 
4987  json.WriteString("Offset");
4988  json.WriteNumber(suballocItem->offset);
4989 
4990  if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
4991  {
4992  const void* pUserData = suballocItem->hAllocation->GetUserData();
4993  if(pUserData != VMA_NULL)
4994  {
4995  json.WriteString("UserData");
4996  if(suballocItem->hAllocation->IsUserDataString())
4997  {
4998  json.WriteString((const char*)pUserData);
4999  }
5000  else
5001  {
5002  json.BeginString();
5003  json.ContinueString_Pointer(pUserData);
5004  json.EndString();
5005  }
5006  }
5007  }
5008 
5009  json.EndObject();
5010  }
5011  json.EndArray();
5012 
5013  json.EndObject();
5014 }
5015 
5016 #endif // #if VMA_STATS_STRING_ENABLED
5017 
5018 /*
5019 How many suitable free suballocations to analyze before choosing best one.
5020 - Set to 1 to use First-Fit algorithm - first suitable free suballocation will
5021  be chosen.
5022 - Set to UINT32_MAX to use Best-Fit/Worst-Fit algorithm - all suitable free
5023  suballocations will be analyzed and the best one will be chosen.
5024 - Any other value is also acceptable.
5025 */
5026 //static const uint32_t MAX_SUITABLE_SUBALLOCATIONS_TO_CHECK = 8;
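/*
Illustrative sketch of the search below (CreateAllocationRequest): with
VMA_BEST_FIT enabled and a free list sorted ascending to sizes {64, 96, 256},
a request for 80 bytes binary-searches to the first size >= 80 (here 96) and
walks forward until CheckAllocation() succeeds, so 96 is preferred and 256 is
only a fallback. With VMA_BEST_FIT disabled, candidates are instead tried from
the largest (256) downward.
*/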
5027 
5028 void VmaBlockMetadata::CreateFirstAllocationRequest(VmaAllocationRequest* pAllocationRequest)
5029 {
5030  VMA_ASSERT(IsEmpty());
5031  pAllocationRequest->offset = 0;
5032  pAllocationRequest->sumFreeSize = m_SumFreeSize;
5033  pAllocationRequest->sumItemSize = 0;
5034  pAllocationRequest->item = m_Suballocations.begin();
5035  pAllocationRequest->itemsToMakeLostCount = 0;
5036 }
5037 
5038 bool VmaBlockMetadata::CreateAllocationRequest(
5039  uint32_t currentFrameIndex,
5040  uint32_t frameInUseCount,
5041  VkDeviceSize bufferImageGranularity,
5042  VkDeviceSize allocSize,
5043  VkDeviceSize allocAlignment,
5044  VmaSuballocationType allocType,
5045  bool canMakeOtherLost,
5046  VmaAllocationRequest* pAllocationRequest)
5047 {
5048  VMA_ASSERT(allocSize > 0);
5049  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
5050  VMA_ASSERT(pAllocationRequest != VMA_NULL);
5051  VMA_HEAVY_ASSERT(Validate());
5052 
5053  // There is not enough total free space in this block to fulfill the request: Early return.
5054  if(canMakeOtherLost == false && m_SumFreeSize < allocSize)
5055  {
5056  return false;
5057  }
5058 
5059  // Fast path: binary search in m_FreeSuballocationsBySize.
5060  const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
5061  if(freeSuballocCount > 0)
5062  {
5063  if(VMA_BEST_FIT)
5064  {
5065  // Find first free suballocation with size not less than allocSize.
5066  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
5067  m_FreeSuballocationsBySize.data(),
5068  m_FreeSuballocationsBySize.data() + freeSuballocCount,
5069  allocSize,
5070  VmaSuballocationItemSizeLess());
5071  size_t index = it - m_FreeSuballocationsBySize.data();
5072  for(; index < freeSuballocCount; ++index)
5073  {
5074  if(CheckAllocation(
5075  currentFrameIndex,
5076  frameInUseCount,
5077  bufferImageGranularity,
5078  allocSize,
5079  allocAlignment,
5080  allocType,
5081  m_FreeSuballocationsBySize[index],
5082  false, // canMakeOtherLost
5083  &pAllocationRequest->offset,
5084  &pAllocationRequest->itemsToMakeLostCount,
5085  &pAllocationRequest->sumFreeSize,
5086  &pAllocationRequest->sumItemSize))
5087  {
5088  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
5089  return true;
5090  }
5091  }
5092  }
5093  else
5094  {
5095  // Search starting from the biggest suballocations.
5096  for(size_t index = freeSuballocCount; index--; )
5097  {
5098  if(CheckAllocation(
5099  currentFrameIndex,
5100  frameInUseCount,
5101  bufferImageGranularity,
5102  allocSize,
5103  allocAlignment,
5104  allocType,
5105  m_FreeSuballocationsBySize[index],
5106  false, // canMakeOtherLost
5107  &pAllocationRequest->offset,
5108  &pAllocationRequest->itemsToMakeLostCount,
5109  &pAllocationRequest->sumFreeSize,
5110  &pAllocationRequest->sumItemSize))
5111  {
5112  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
5113  return true;
5114  }
5115  }
5116  }
5117  }
5118 
5119  if(canMakeOtherLost)
5120  {
5121  // Brute-force algorithm. TODO: Come up with something better.
5122 
5123  pAllocationRequest->sumFreeSize = VK_WHOLE_SIZE;
5124  pAllocationRequest->sumItemSize = VK_WHOLE_SIZE;
5125 
5126  VmaAllocationRequest tmpAllocRequest = {};
5127  for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
5128  suballocIt != m_Suballocations.end();
5129  ++suballocIt)
5130  {
5131  if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
5132  suballocIt->hAllocation->CanBecomeLost())
5133  {
5134  if(CheckAllocation(
5135  currentFrameIndex,
5136  frameInUseCount,
5137  bufferImageGranularity,
5138  allocSize,
5139  allocAlignment,
5140  allocType,
5141  suballocIt,
5142  canMakeOtherLost,
5143  &tmpAllocRequest.offset,
5144  &tmpAllocRequest.itemsToMakeLostCount,
5145  &tmpAllocRequest.sumFreeSize,
5146  &tmpAllocRequest.sumItemSize))
5147  {
5148  tmpAllocRequest.item = suballocIt;
5149 
5150  if(tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
5151  {
5152  *pAllocationRequest = tmpAllocRequest;
5153  }
5154  }
5155  }
5156  }
5157 
5158  if(pAllocationRequest->sumItemSize != VK_WHOLE_SIZE)
5159  {
5160  return true;
5161  }
5162  }
5163 
5164  return false;
5165 }
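/*
Hedged usage sketch, simplified from VmaBlockVector::Allocate() further below.
Names like metadata/memReq are placeholders; real call sites pass members of
VmaBlockVector and the VkMemoryRequirements of the resource:

    VmaAllocationRequest request = {};
    if(metadata.CreateAllocationRequest(
        currentFrameIndex, frameInUseCount, bufferImageGranularity,
        memReq.size, memReq.alignment, suballocType,
        false, // first try without making other allocations lost
        &request))
    {
        metadata.Alloc(request, suballocType, memReq.size, hAllocation);
    }
*/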
5166 
5167 bool VmaBlockMetadata::MakeRequestedAllocationsLost(
5168  uint32_t currentFrameIndex,
5169  uint32_t frameInUseCount,
5170  VmaAllocationRequest* pAllocationRequest)
5171 {
5172  while(pAllocationRequest->itemsToMakeLostCount > 0)
5173  {
5174  if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
5175  {
5176  ++pAllocationRequest->item;
5177  }
5178  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
5179  VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
5180  VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
5181  if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
5182  {
5183  pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
5184  --pAllocationRequest->itemsToMakeLostCount;
5185  }
5186  else
5187  {
5188  return false;
5189  }
5190  }
5191 
5192  VMA_HEAVY_ASSERT(Validate());
5193  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
5194  VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);
5195 
5196  return true;
5197 }
5198 
5199 uint32_t VmaBlockMetadata::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
5200 {
5201  uint32_t lostAllocationCount = 0;
5202  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
5203  it != m_Suballocations.end();
5204  ++it)
5205  {
5206  if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
5207  it->hAllocation->CanBecomeLost() &&
5208  it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
5209  {
5210  it = FreeSuballocation(it);
5211  ++lostAllocationCount;
5212  }
5213  }
5214  return lostAllocationCount;
5215 }
5216 
5217 void VmaBlockMetadata::Alloc(
5218  const VmaAllocationRequest& request,
5219  VmaSuballocationType type,
5220  VkDeviceSize allocSize,
5221  VmaAllocation hAllocation)
5222 {
5223  VMA_ASSERT(request.item != m_Suballocations.end());
5224  VmaSuballocation& suballoc = *request.item;
5225  // Given suballocation is a free block.
5226  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
5227  // Given offset is inside this suballocation.
5228  VMA_ASSERT(request.offset >= suballoc.offset);
5229  const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
5230  VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
5231  const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;
5232 
5233  // Unregister this free suballocation from m_FreeSuballocationsBySize and update
5234  // it to become used.
5235  UnregisterFreeSuballocation(request.item);
5236 
5237  suballoc.offset = request.offset;
5238  suballoc.size = allocSize;
5239  suballoc.type = type;
5240  suballoc.hAllocation = hAllocation;
5241 
5242  // If there are any free bytes remaining at the end, insert new free suballocation after current one.
5243  if(paddingEnd)
5244  {
5245  VmaSuballocation paddingSuballoc = {};
5246  paddingSuballoc.offset = request.offset + allocSize;
5247  paddingSuballoc.size = paddingEnd;
5248  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
5249  VmaSuballocationList::iterator next = request.item;
5250  ++next;
5251  const VmaSuballocationList::iterator paddingEndItem =
5252  m_Suballocations.insert(next, paddingSuballoc);
5253  RegisterFreeSuballocation(paddingEndItem);
5254  }
5255 
5256  // If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
5257  if(paddingBegin)
5258  {
5259  VmaSuballocation paddingSuballoc = {};
5260  paddingSuballoc.offset = request.offset - paddingBegin;
5261  paddingSuballoc.size = paddingBegin;
5262  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
5263  const VmaSuballocationList::iterator paddingBeginItem =
5264  m_Suballocations.insert(request.item, paddingSuballoc);
5265  RegisterFreeSuballocation(paddingBeginItem);
5266  }
5267 
5268  // Update totals.
5269  m_FreeCount = m_FreeCount - 1;
5270  if(paddingBegin > 0)
5271  {
5272  ++m_FreeCount;
5273  }
5274  if(paddingEnd > 0)
5275  {
5276  ++m_FreeCount;
5277  }
5278  m_SumFreeSize -= allocSize;
5279 }
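/*
Worked example (illustrative): assume a single free suballocation
[offset=0, size=256] and a request placed at offset 16 with allocSize=64.
Then:

    paddingBegin = 16 - 0 = 16          // free bytes left before the allocation
    paddingEnd   = 256 - 16 - 64 = 176  // free bytes left after it

The one free node becomes three: [0,16) FREE, [16,80) USED, [80,256) FREE.
m_FreeCount changes by -1 (consumed node) plus +1 per nonzero padding, and
m_SumFreeSize drops by exactly allocSize = 64.
*/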
5280 
5281 void VmaBlockMetadata::Free(const VmaAllocation allocation)
5282 {
5283  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
5284  suballocItem != m_Suballocations.end();
5285  ++suballocItem)
5286  {
5287  VmaSuballocation& suballoc = *suballocItem;
5288  if(suballoc.hAllocation == allocation)
5289  {
5290  FreeSuballocation(suballocItem);
5291  VMA_HEAVY_ASSERT(Validate());
5292  return;
5293  }
5294  }
5295  VMA_ASSERT(0 && "Not found!");
5296 }
5297 
5298 void VmaBlockMetadata::FreeAtOffset(VkDeviceSize offset)
5299 {
5300  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
5301  suballocItem != m_Suballocations.end();
5302  ++suballocItem)
5303  {
5304  VmaSuballocation& suballoc = *suballocItem;
5305  if(suballoc.offset == offset)
5306  {
5307  FreeSuballocation(suballocItem);
5308  return;
5309  }
5310  }
5311  VMA_ASSERT(0 && "Not found!");
5312 }
5313 
5314 bool VmaBlockMetadata::ValidateFreeSuballocationList() const
5315 {
5316  VkDeviceSize lastSize = 0;
5317  for(size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
5318  {
5319  const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
5320 
5321  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
5322  {
5323  VMA_ASSERT(0);
5324  return false;
5325  }
5326  if(it->size < VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
5327  {
5328  VMA_ASSERT(0);
5329  return false;
5330  }
5331  if(it->size < lastSize)
5332  {
5333  VMA_ASSERT(0);
5334  return false;
5335  }
5336 
5337  lastSize = it->size;
5338  }
5339  return true;
5340 }
5341 
5342 bool VmaBlockMetadata::CheckAllocation(
5343  uint32_t currentFrameIndex,
5344  uint32_t frameInUseCount,
5345  VkDeviceSize bufferImageGranularity,
5346  VkDeviceSize allocSize,
5347  VkDeviceSize allocAlignment,
5348  VmaSuballocationType allocType,
5349  VmaSuballocationList::const_iterator suballocItem,
5350  bool canMakeOtherLost,
5351  VkDeviceSize* pOffset,
5352  size_t* itemsToMakeLostCount,
5353  VkDeviceSize* pSumFreeSize,
5354  VkDeviceSize* pSumItemSize) const
5355 {
5356  VMA_ASSERT(allocSize > 0);
5357  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
5358  VMA_ASSERT(suballocItem != m_Suballocations.cend());
5359  VMA_ASSERT(pOffset != VMA_NULL);
5360 
5361  *itemsToMakeLostCount = 0;
5362  *pSumFreeSize = 0;
5363  *pSumItemSize = 0;
5364 
5365  if(canMakeOtherLost)
5366  {
5367  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
5368  {
5369  *pSumFreeSize = suballocItem->size;
5370  }
5371  else
5372  {
5373  if(suballocItem->hAllocation->CanBecomeLost() &&
5374  suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
5375  {
5376  ++*itemsToMakeLostCount;
5377  *pSumItemSize = suballocItem->size;
5378  }
5379  else
5380  {
5381  return false;
5382  }
5383  }
5384 
5385  // Remaining size is too small for this request: Early return.
5386  if(m_Size - suballocItem->offset < allocSize)
5387  {
5388  return false;
5389  }
5390 
5391  // Start from offset equal to beginning of this suballocation.
5392  *pOffset = suballocItem->offset;
5393 
5394  // Apply VMA_DEBUG_MARGIN at the beginning.
5395  if((VMA_DEBUG_MARGIN > 0) && suballocItem != m_Suballocations.cbegin())
5396  {
5397  *pOffset += VMA_DEBUG_MARGIN;
5398  }
5399 
5400  // Apply alignment.
5401  const VkDeviceSize alignment = VMA_MAX(allocAlignment, static_cast<VkDeviceSize>(VMA_DEBUG_ALIGNMENT));
5402  *pOffset = VmaAlignUp(*pOffset, alignment);
5403 
5404  // Check previous suballocations for BufferImageGranularity conflicts.
5405  // Make bigger alignment if necessary.
5406  if(bufferImageGranularity > 1)
5407  {
5408  bool bufferImageGranularityConflict = false;
5409  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
5410  while(prevSuballocItem != m_Suballocations.cbegin())
5411  {
5412  --prevSuballocItem;
5413  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
5414  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
5415  {
5416  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
5417  {
5418  bufferImageGranularityConflict = true;
5419  break;
5420  }
5421  }
5422  else
5423  // Already on previous page.
5424  break;
5425  }
5426  if(bufferImageGranularityConflict)
5427  {
5428  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
5429  }
5430  }
5431 
5432  // Now that we have final *pOffset, check if we are past suballocItem.
5433  // If yes, return false - this function should be called for another suballocItem as starting point.
5434  if(*pOffset >= suballocItem->offset + suballocItem->size)
5435  {
5436  return false;
5437  }
5438 
5439  // Calculate padding at the beginning based on current offset.
5440  const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;
5441 
5442  // Calculate required margin at the end if this is not last suballocation.
5443  VmaSuballocationList::const_iterator next = suballocItem;
5444  ++next;
5445  const VkDeviceSize requiredEndMargin =
5446  (next != m_Suballocations.cend()) ? VMA_DEBUG_MARGIN : 0;
5447 
5448  const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
5449  // Another early return check.
5450  if(suballocItem->offset + totalSize > m_Size)
5451  {
5452  return false;
5453  }
5454 
5455  // Advance lastSuballocItem until desired size is reached.
5456  // Update itemsToMakeLostCount.
5457  VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
5458  if(totalSize > suballocItem->size)
5459  {
5460  VkDeviceSize remainingSize = totalSize - suballocItem->size;
5461  while(remainingSize > 0)
5462  {
5463  ++lastSuballocItem;
5464  if(lastSuballocItem == m_Suballocations.cend())
5465  {
5466  return false;
5467  }
5468  if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
5469  {
5470  *pSumFreeSize += lastSuballocItem->size;
5471  }
5472  else
5473  {
5474  VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
5475  if(lastSuballocItem->hAllocation->CanBecomeLost() &&
5476  lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
5477  {
5478  ++*itemsToMakeLostCount;
5479  *pSumItemSize += lastSuballocItem->size;
5480  }
5481  else
5482  {
5483  return false;
5484  }
5485  }
5486  remainingSize = (lastSuballocItem->size < remainingSize) ?
5487  remainingSize - lastSuballocItem->size : 0;
5488  }
5489  }
5490 
5491  // Check next suballocations for BufferImageGranularity conflicts.
5492  // If conflict exists, we must mark more allocations lost or fail.
5493  if(bufferImageGranularity > 1)
5494  {
5495  VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
5496  ++nextSuballocItem;
5497  while(nextSuballocItem != m_Suballocations.cend())
5498  {
5499  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
5500  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
5501  {
5502  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
5503  {
5504  VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
5505  if(nextSuballoc.hAllocation->CanBecomeLost() &&
5506  nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
5507  {
5508  ++*itemsToMakeLostCount;
5509  }
5510  else
5511  {
5512  return false;
5513  }
5514  }
5515  }
5516  else
5517  {
5518  // Already on next page.
5519  break;
5520  }
5521  ++nextSuballocItem;
5522  }
5523  }
5524  }
5525  else
5526  {
5527  const VmaSuballocation& suballoc = *suballocItem;
5528  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
5529 
5530  *pSumFreeSize = suballoc.size;
5531 
5532  // Size of this suballocation is too small for this request: Early return.
5533  if(suballoc.size < allocSize)
5534  {
5535  return false;
5536  }
5537 
5538  // Start from offset equal to beginning of this suballocation.
5539  *pOffset = suballoc.offset;
5540 
5541  // Apply VMA_DEBUG_MARGIN at the beginning.
5542  if((VMA_DEBUG_MARGIN > 0) && suballocItem != m_Suballocations.cbegin())
5543  {
5544  *pOffset += VMA_DEBUG_MARGIN;
5545  }
5546 
5547  // Apply alignment.
5548  const VkDeviceSize alignment = VMA_MAX(allocAlignment, static_cast<VkDeviceSize>(VMA_DEBUG_ALIGNMENT));
5549  *pOffset = VmaAlignUp(*pOffset, alignment);
5550 
5551  // Check previous suballocations for BufferImageGranularity conflicts.
5552  // Make bigger alignment if necessary.
5553  if(bufferImageGranularity > 1)
5554  {
5555  bool bufferImageGranularityConflict = false;
5556  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
5557  while(prevSuballocItem != m_Suballocations.cbegin())
5558  {
5559  --prevSuballocItem;
5560  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
5561  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
5562  {
5563  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
5564  {
5565  bufferImageGranularityConflict = true;
5566  break;
5567  }
5568  }
5569  else
5570  // Already on previous page.
5571  break;
5572  }
5573  if(bufferImageGranularityConflict)
5574  {
5575  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
5576  }
5577  }
5578 
5579  // Calculate padding at the beginning based on current offset.
5580  const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;
5581 
5582  // Calculate required margin at the end if this is not last suballocation.
5583  VmaSuballocationList::const_iterator next = suballocItem;
5584  ++next;
5585  const VkDeviceSize requiredEndMargin =
5586  (next != m_Suballocations.cend()) ? VMA_DEBUG_MARGIN : 0;
5587 
5588  // Fail if requested size plus margin before and after is bigger than size of this suballocation.
5589  if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
5590  {
5591  return false;
5592  }
5593 
5594  // Check next suballocations for BufferImageGranularity conflicts.
5595  // If conflict exists, allocation cannot be made here.
5596  if(bufferImageGranularity > 1)
5597  {
5598  VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
5599  ++nextSuballocItem;
5600  while(nextSuballocItem != m_Suballocations.cend())
5601  {
5602  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
5603  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
5604  {
5605  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
5606  {
5607  return false;
5608  }
5609  }
5610  else
5611  {
5612  // Already on next page.
5613  break;
5614  }
5615  ++nextSuballocItem;
5616  }
5617  }
5618  }
5619 
5620  // All tests passed: Success. pOffset is already filled.
5621  return true;
5622 }
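/*
Worked example of the bufferImageGranularity logic above (illustrative):
granularity = 4096, a preceding OPTIMAL-tiling image ends at offset 5000, and
a buffer is about to be placed at *pOffset = 5120. Both resources touch the
page [4096, 8192), so VmaIsBufferImageGranularityConflict() reports a
conflict and the offset is bumped to VmaAlignUp(5120, 4096) = 8192, i.e. onto
the next page.
*/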
5623 
5624 void VmaBlockMetadata::MergeFreeWithNext(VmaSuballocationList::iterator item)
5625 {
5626  VMA_ASSERT(item != m_Suballocations.end());
5627  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
5628 
5629  VmaSuballocationList::iterator nextItem = item;
5630  ++nextItem;
5631  VMA_ASSERT(nextItem != m_Suballocations.end());
5632  VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);
5633 
5634  item->size += nextItem->size;
5635  --m_FreeCount;
5636  m_Suballocations.erase(nextItem);
5637 }
5638 
5639 VmaSuballocationList::iterator VmaBlockMetadata::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
5640 {
5641  // Change this suballocation to be marked as free.
5642  VmaSuballocation& suballoc = *suballocItem;
5643  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
5644  suballoc.hAllocation = VK_NULL_HANDLE;
5645 
5646  // Update totals.
5647  ++m_FreeCount;
5648  m_SumFreeSize += suballoc.size;
5649 
5650  // Merge with previous and/or next suballocation if it's also free.
5651  bool mergeWithNext = false;
5652  bool mergeWithPrev = false;
5653 
5654  VmaSuballocationList::iterator nextItem = suballocItem;
5655  ++nextItem;
5656  if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
5657  {
5658  mergeWithNext = true;
5659  }
5660 
5661  VmaSuballocationList::iterator prevItem = suballocItem;
5662  if(suballocItem != m_Suballocations.begin())
5663  {
5664  --prevItem;
5665  if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
5666  {
5667  mergeWithPrev = true;
5668  }
5669  }
5670 
5671  if(mergeWithNext)
5672  {
5673  UnregisterFreeSuballocation(nextItem);
5674  MergeFreeWithNext(suballocItem);
5675  }
5676 
5677  if(mergeWithPrev)
5678  {
5679  UnregisterFreeSuballocation(prevItem);
5680  MergeFreeWithNext(prevItem);
5681  RegisterFreeSuballocation(prevItem);
5682  return prevItem;
5683  }
5684  else
5685  {
5686  RegisterFreeSuballocation(suballocItem);
5687  return suballocItem;
5688  }
5689 }
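// Illustrative sketch: freeing the middle allocation in [FREE][USED][FREE]
// first turns it into [FREE][FREE][FREE]; the two merges then collapse it to
// a single node. Both neighbors are unregistered from
// m_FreeSuballocationsBySize before merging, so only the final coalesced node
// (with its new, larger size) is registered again.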
5690 
5691 void VmaBlockMetadata::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
5692 {
5693  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
5694  VMA_ASSERT(item->size > 0);
5695 
5696  // You may want to enable this validation at the beginning or at the end of
5697  // this function, depending on what you want to check.
5698  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5699 
5700  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
5701  {
5702  if(m_FreeSuballocationsBySize.empty())
5703  {
5704  m_FreeSuballocationsBySize.push_back(item);
5705  }
5706  else
5707  {
5708  VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
5709  }
5710  }
5711 
5712  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5713 }
5714 
5715 
5716 void VmaBlockMetadata::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
5717 {
5718  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
5719  VMA_ASSERT(item->size > 0);
5720 
5721  // You may want to enable this validation at the beginning or at the end of
5722  // this function, depending on what you want to check.
5723  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5724 
5725  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
5726  {
5727  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
5728  m_FreeSuballocationsBySize.data(),
5729  m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
5730  item,
5731  VmaSuballocationItemSizeLess());
5732  for(size_t index = it - m_FreeSuballocationsBySize.data();
5733  index < m_FreeSuballocationsBySize.size();
5734  ++index)
5735  {
5736  if(m_FreeSuballocationsBySize[index] == item)
5737  {
5738  VmaVectorRemove(m_FreeSuballocationsBySize, index);
5739  return;
5740  }
5741  VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
5742  }
5743  VMA_ASSERT(0 && "Not found.");
5744  }
5745 
5746  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5747 }
5748 
5749 ////////////////////////////////////////////////////////////////////////////////
5750 // class VmaDeviceMemoryMapping
5751 
5752 VmaDeviceMemoryMapping::VmaDeviceMemoryMapping() :
5753  m_MapCount(0),
5754  m_pMappedData(VMA_NULL)
5755 {
5756 }
5757 
5758 VmaDeviceMemoryMapping::~VmaDeviceMemoryMapping()
5759 {
5760  VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
5761 }
5762 
5763 VkResult VmaDeviceMemoryMapping::Map(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count, void **ppData)
5764 {
5765  if(count == 0)
5766  {
5767  return VK_SUCCESS;
5768  }
5769 
5770  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
5771  if(m_MapCount != 0)
5772  {
5773  m_MapCount += count;
5774  VMA_ASSERT(m_pMappedData != VMA_NULL);
5775  if(ppData != VMA_NULL)
5776  {
5777  *ppData = m_pMappedData;
5778  }
5779  return VK_SUCCESS;
5780  }
5781  else
5782  {
5783  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
5784  hAllocator->m_hDevice,
5785  hMemory,
5786  0, // offset
5787  VK_WHOLE_SIZE,
5788  0, // flags
5789  &m_pMappedData);
5790  if(result == VK_SUCCESS)
5791  {
5792  if(ppData != VMA_NULL)
5793  {
5794  *ppData = m_pMappedData;
5795  }
5796  m_MapCount = count;
5797  }
5798  return result;
5799  }
5800 }
5801 
5802 void VmaDeviceMemoryMapping::Unmap(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count)
5803 {
5804  if(count == 0)
5805  {
5806  return;
5807  }
5808 
5809  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
5810  if(m_MapCount >= count)
5811  {
5812  m_MapCount -= count;
5813  if(m_MapCount == 0)
5814  {
5815  m_pMappedData = VMA_NULL;
5816  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, hMemory);
5817  }
5818  }
5819  else
5820  {
5821  VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
5822  }
5823 }
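/*
Hedged usage sketch of the reference counting above (mapping/allocator/memory
are placeholder names):

    void* p1; mapping.Map(allocator, memory, 1, &p1); // count 0 -> 1, vkMapMemory
    void* p2; mapping.Map(allocator, memory, 1, &p2); // count 1 -> 2, same pointer
    mapping.Unmap(allocator, memory, 1);              // count 2 -> 1, still mapped
    mapping.Unmap(allocator, memory, 1);              // count 1 -> 0, vkUnmapMemory
*/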
5824 
5825 ////////////////////////////////////////////////////////////////////////////////
5826 // class VmaDeviceMemoryBlock
5827 
5828 VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator) :
5829  m_MemoryTypeIndex(UINT32_MAX),
5830  m_hMemory(VK_NULL_HANDLE),
5831  m_Metadata(hAllocator)
5832 {
5833 }
5834 
5835 void VmaDeviceMemoryBlock::Init(
5836  uint32_t newMemoryTypeIndex,
5837  VkDeviceMemory newMemory,
5838  VkDeviceSize newSize)
5839 {
5840  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
5841 
5842  m_MemoryTypeIndex = newMemoryTypeIndex;
5843  m_hMemory = newMemory;
5844 
5845  m_Metadata.Init(newSize);
5846 }
5847 
5848 void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
5849 {
5850  // This is the most important assert in the entire library.
5851  // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
5852  VMA_ASSERT(m_Metadata.IsEmpty() && "Some allocations were not freed before destruction of this memory block!");
5853 
5854  VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
5855  allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_Metadata.GetSize(), m_hMemory);
5856  m_hMemory = VK_NULL_HANDLE;
5857 }
5858 
5859 bool VmaDeviceMemoryBlock::Validate() const
5860 {
5861  if((m_hMemory == VK_NULL_HANDLE) ||
5862  (m_Metadata.GetSize() == 0))
5863  {
5864  return false;
5865  }
5866 
5867  return m_Metadata.Validate();
5868 }
5869 
5870 VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
5871 {
5872  return m_Mapping.Map(hAllocator, m_hMemory, count, ppData);
5873 }
5874 
5875 void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
5876 {
5877  m_Mapping.Unmap(hAllocator, m_hMemory, count);
5878 }
5879 
5880 static void InitStatInfo(VmaStatInfo& outInfo)
5881 {
5882  memset(&outInfo, 0, sizeof(outInfo));
5883  outInfo.allocationSizeMin = UINT64_MAX;
5884  outInfo.unusedRangeSizeMin = UINT64_MAX;
5885 }
5886 
5887 // Adds statistics srcInfo into inoutInfo, like: inoutInfo += srcInfo.
5888 static void VmaAddStatInfo(VmaStatInfo& inoutInfo, const VmaStatInfo& srcInfo)
5889 {
5890  inoutInfo.blockCount += srcInfo.blockCount;
5891  inoutInfo.allocationCount += srcInfo.allocationCount;
5892  inoutInfo.unusedRangeCount += srcInfo.unusedRangeCount;
5893  inoutInfo.usedBytes += srcInfo.usedBytes;
5894  inoutInfo.unusedBytes += srcInfo.unusedBytes;
5895  inoutInfo.allocationSizeMin = VMA_MIN(inoutInfo.allocationSizeMin, srcInfo.allocationSizeMin);
5896  inoutInfo.allocationSizeMax = VMA_MAX(inoutInfo.allocationSizeMax, srcInfo.allocationSizeMax);
5897  inoutInfo.unusedRangeSizeMin = VMA_MIN(inoutInfo.unusedRangeSizeMin, srcInfo.unusedRangeSizeMin);
5898  inoutInfo.unusedRangeSizeMax = VMA_MAX(inoutInfo.unusedRangeSizeMax, srcInfo.unusedRangeSizeMax);
5899 }
5900 
5901 static void VmaPostprocessCalcStatInfo(VmaStatInfo& inoutInfo)
5902 {
5903  inoutInfo.allocationSizeAvg = (inoutInfo.allocationCount > 0) ?
5904  VmaRoundDiv<VkDeviceSize>(inoutInfo.usedBytes, inoutInfo.allocationCount) : 0;
5905  inoutInfo.unusedRangeSizeAvg = (inoutInfo.unusedRangeCount > 0) ?
5906  VmaRoundDiv<VkDeviceSize>(inoutInfo.unusedBytes, inoutInfo.unusedRangeCount) : 0;
5907 }
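// Worked example of the rounding above: usedBytes = 300 and
// allocationCount = 7 give allocationSizeAvg = VmaRoundDiv(300, 7) =
// (300 + 3) / 7 = 43, i.e. division rounded to the nearest integer.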
5908 
5909 VmaPool_T::VmaPool_T(
5910  VmaAllocator hAllocator,
5911  const VmaPoolCreateInfo& createInfo) :
5912  m_BlockVector(
5913  hAllocator,
5914  createInfo.memoryTypeIndex,
5915  createInfo.blockSize,
5916  createInfo.minBlockCount,
5917  createInfo.maxBlockCount,
5918  (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
5919  createInfo.frameInUseCount,
5920  true) // isCustomPool
5921 {
5922 }
5923 
5924 VmaPool_T::~VmaPool_T()
5925 {
5926 }
5927 
5928 #if VMA_STATS_STRING_ENABLED
5929 
5930 #endif // #if VMA_STATS_STRING_ENABLED
5931 
5932 VmaBlockVector::VmaBlockVector(
5933  VmaAllocator hAllocator,
5934  uint32_t memoryTypeIndex,
5935  VkDeviceSize preferredBlockSize,
5936  size_t minBlockCount,
5937  size_t maxBlockCount,
5938  VkDeviceSize bufferImageGranularity,
5939  uint32_t frameInUseCount,
5940  bool isCustomPool) :
5941  m_hAllocator(hAllocator),
5942  m_MemoryTypeIndex(memoryTypeIndex),
5943  m_PreferredBlockSize(preferredBlockSize),
5944  m_MinBlockCount(minBlockCount),
5945  m_MaxBlockCount(maxBlockCount),
5946  m_BufferImageGranularity(bufferImageGranularity),
5947  m_FrameInUseCount(frameInUseCount),
5948  m_IsCustomPool(isCustomPool),
5949  m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
5950  m_HasEmptyBlock(false),
5951  m_pDefragmentator(VMA_NULL)
5952 {
5953 }
5954 
5955 VmaBlockVector::~VmaBlockVector()
5956 {
5957  VMA_ASSERT(m_pDefragmentator == VMA_NULL);
5958 
5959  for(size_t i = m_Blocks.size(); i--; )
5960  {
5961  m_Blocks[i]->Destroy(m_hAllocator);
5962  vma_delete(m_hAllocator, m_Blocks[i]);
5963  }
5964 }
5965 
5966 VkResult VmaBlockVector::CreateMinBlocks()
5967 {
5968  for(size_t i = 0; i < m_MinBlockCount; ++i)
5969  {
5970  VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
5971  if(res != VK_SUCCESS)
5972  {
5973  return res;
5974  }
5975  }
5976  return VK_SUCCESS;
5977 }
5978 
5979 void VmaBlockVector::GetPoolStats(VmaPoolStats* pStats)
5980 {
5981  pStats->size = 0;
5982  pStats->unusedSize = 0;
5983  pStats->allocationCount = 0;
5984  pStats->unusedRangeCount = 0;
5985  pStats->unusedRangeSizeMax = 0;
5986 
5987  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
5988 
5989  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
5990  {
5991  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
5992  VMA_ASSERT(pBlock);
5993  VMA_HEAVY_ASSERT(pBlock->Validate());
5994  pBlock->m_Metadata.AddPoolStats(*pStats);
5995  }
5996 }
5997 
5998 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
5999 
6000 VkResult VmaBlockVector::Allocate(
6001  VmaPool hCurrentPool,
6002  uint32_t currentFrameIndex,
6003  const VkMemoryRequirements& vkMemReq,
6004  const VmaAllocationCreateInfo& createInfo,
6005  VmaSuballocationType suballocType,
6006  VmaAllocation* pAllocation)
6007 {
6008  const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
6009  const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
6010 
6011  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6012 
6013  // 1. Search existing allocations. Try to allocate without making other allocations lost.
6014  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
6015  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
6016  {
6017  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
6018  VMA_ASSERT(pCurrBlock);
6019  VmaAllocationRequest currRequest = {};
6020  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
6021  currentFrameIndex,
6022  m_FrameInUseCount,
6023  m_BufferImageGranularity,
6024  vkMemReq.size,
6025  vkMemReq.alignment,
6026  suballocType,
6027  false, // canMakeOtherLost
6028  &currRequest))
6029  {
6030  // Allocate from pCurrBlock.
6031  VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);
6032 
6033  if(mapped)
6034  {
6035  VkResult res = pCurrBlock->Map(m_hAllocator, 1, VMA_NULL);
6036  if(res != VK_SUCCESS)
6037  {
6038  return res;
6039  }
6040  }
6041 
6042  // We no longer have an empty block.
6043  if(pCurrBlock->m_Metadata.IsEmpty())
6044  {
6045  m_HasEmptyBlock = false;
6046  }
6047 
6048  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
6049  pCurrBlock->m_Metadata.Alloc(currRequest, suballocType, vkMemReq.size, *pAllocation);
6050  (*pAllocation)->InitBlockAllocation(
6051  hCurrentPool,
6052  pCurrBlock,
6053  currRequest.offset,
6054  vkMemReq.alignment,
6055  vkMemReq.size,
6056  suballocType,
6057  mapped,
6058  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
6059  VMA_HEAVY_ASSERT(pCurrBlock->Validate());
6060  VMA_DEBUG_LOG(" Returned from existing allocation #%u", (uint32_t)blockIndex);
6061  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
6062  return VK_SUCCESS;
6063  }
6064  }
6065 
6066  const bool canCreateNewBlock =
6067  ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
6068  (m_Blocks.size() < m_MaxBlockCount);
6069 
6070  // 2. Try to create new block.
6071  if(canCreateNewBlock)
6072  {
6073  // Calculate optimal size for new block.
6074  VkDeviceSize newBlockSize = m_PreferredBlockSize;
6075  uint32_t newBlockSizeShift = 0;
6076  const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
6077 
6078  // Allocating blocks of other sizes is allowed only in default pools.
6079  // In custom pools block size is fixed.
6080  if(m_IsCustomPool == false)
6081  {
6082  // Allocate 1/8, 1/4, 1/2 as first blocks.
6083  const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
6084  for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
6085  {
6086  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
6087  if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= vkMemReq.size * 2)
6088  {
6089  newBlockSize = smallerNewBlockSize;
6090  ++newBlockSizeShift;
6091  }
6092  else
6093  {
6094  break;
6095  }
6096  }
6097  }
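// Worked example (illustrative): with m_PreferredBlockSize = 256 MiB, no
// existing blocks and a first request of 10 MiB, newBlockSize is halved all
// three iterations (128, 64, 32 MiB), because each half still exceeds both
// the largest existing block (0) and 2 * 10 MiB. The first block is
// therefore created with 32 MiB.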
6098 
6099  size_t newBlockIndex = 0;
6100  VkResult res = CreateBlock(newBlockSize, &newBlockIndex);
6101  // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
6102  if(m_IsCustomPool == false)
6103  {
6104  while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
6105  {
6106  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
6107  if(smallerNewBlockSize >= vkMemReq.size)
6108  {
6109  newBlockSize = smallerNewBlockSize;
6110  ++newBlockSizeShift;
6111  res = CreateBlock(newBlockSize, &newBlockIndex);
6112  }
6113  else
6114  {
6115  break;
6116  }
6117  }
6118  }
6119 
6120  if(res == VK_SUCCESS)
6121  {
6122  VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
6123  VMA_ASSERT(pBlock->m_Metadata.GetSize() >= vkMemReq.size);
6124 
6125  if(mapped)
6126  {
6127  res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
6128  if(res != VK_SUCCESS)
6129  {
6130  return res;
6131  }
6132  }
6133 
6134  // Allocate from pBlock. Because it is empty, allocRequest can be trivially filled.
6135  VmaAllocationRequest allocRequest;
6136  pBlock->m_Metadata.CreateFirstAllocationRequest(&allocRequest);
6137  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
6138  pBlock->m_Metadata.Alloc(allocRequest, suballocType, vkMemReq.size, *pAllocation);
6139  (*pAllocation)->InitBlockAllocation(
6140  hCurrentPool,
6141  pBlock,
6142  allocRequest.offset,
6143  vkMemReq.alignment,
6144  vkMemReq.size,
6145  suballocType,
6146  mapped,
6147  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
6148  VMA_HEAVY_ASSERT(pBlock->Validate());
6149  VMA_DEBUG_LOG(" Created new block Size=%llu", newBlockSize);
6150  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
6151  return VK_SUCCESS;
6152  }
6153  }
6154 
6155  const bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
6156 
6157  // 3. Try to allocate from existing blocks with making other allocations lost.
6158  if(canMakeOtherLost)
6159  {
6160  uint32_t tryIndex = 0;
6161  for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
6162  {
6163  VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
6164  VmaAllocationRequest bestRequest = {};
6165  VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;
6166 
6167  // 1. Search existing allocations.
6168  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
6169  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
6170  {
6171  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
6172  VMA_ASSERT(pCurrBlock);
6173  VmaAllocationRequest currRequest = {};
6174  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
6175  currentFrameIndex,
6176  m_FrameInUseCount,
6177  m_BufferImageGranularity,
6178  vkMemReq.size,
6179  vkMemReq.alignment,
6180  suballocType,
6181  canMakeOtherLost,
6182  &currRequest))
6183  {
6184  const VkDeviceSize currRequestCost = currRequest.CalcCost();
6185  if(pBestRequestBlock == VMA_NULL ||
6186  currRequestCost < bestRequestCost)
6187  {
6188  pBestRequestBlock = pCurrBlock;
6189  bestRequest = currRequest;
6190  bestRequestCost = currRequestCost;
6191 
6192  if(bestRequestCost == 0)
6193  {
6194  break;
6195  }
6196  }
6197  }
6198  }
6199 
6200  if(pBestRequestBlock != VMA_NULL)
6201  {
6202  if(mapped)
6203  {
6204  VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
6205  if(res != VK_SUCCESS)
6206  {
6207  return res;
6208  }
6209  }
6210 
6211  if(pBestRequestBlock->m_Metadata.MakeRequestedAllocationsLost(
6212  currentFrameIndex,
6213  m_FrameInUseCount,
6214  &bestRequest))
6215  {
6216  // We no longer have an empty block.
6217  if(pBestRequestBlock->m_Metadata.IsEmpty())
6218  {
6219  m_HasEmptyBlock = false;
6220  }
6221  // Allocate from this pBlock.
6222  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
6223  pBestRequestBlock->m_Metadata.Alloc(bestRequest, suballocType, vkMemReq.size, *pAllocation);
6224  (*pAllocation)->InitBlockAllocation(
6225  hCurrentPool,
6226  pBestRequestBlock,
6227  bestRequest.offset,
6228  vkMemReq.alignment,
6229  vkMemReq.size,
6230  suballocType,
6231  mapped,
6232  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
6233  VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
6234  VMA_DEBUG_LOG(" Returned from existing block");
6235  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
6236  return VK_SUCCESS;
6237  }
6238  // else: Some allocations must have been touched while we are here. Next try.
6239  }
6240  else
6241  {
6242  // Could not find place in any of the blocks - break outer loop.
6243  break;
6244  }
6245  }
6246  /* Maximum number of tries exceeded - a very unlikely event: many other
6247  threads are simultaneously touching allocations, making it impossible to mark
6248  them as lost while we try to allocate. */
6249  if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
6250  {
6251  return VK_ERROR_TOO_MANY_OBJECTS;
6252  }
6253  }
6254 
6255  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
6256 }
6257 
6258 void VmaBlockVector::Free(
6259  VmaAllocation hAllocation)
6260 {
6261  VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
6262 
6263  // Scope for lock.
6264  {
6265  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6266 
6267  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
6268 
6269  if(hAllocation->IsPersistentMap())
6270  {
6271  pBlock->m_Mapping.Unmap(m_hAllocator, pBlock->m_hMemory, 1);
6272  }
6273 
6274  pBlock->m_Metadata.Free(hAllocation);
6275  VMA_HEAVY_ASSERT(pBlock->Validate());
6276 
6277  VMA_DEBUG_LOG(" Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);
6278 
6279  // pBlock became empty after this deallocation.
6280  if(pBlock->m_Metadata.IsEmpty())
6281  {
6282  // We already have an empty block - we don't want two, so delete this one.
6283  if(m_HasEmptyBlock && m_Blocks.size() > m_MinBlockCount)
6284  {
6285  pBlockToDelete = pBlock;
6286  Remove(pBlock);
6287  }
6288  // We now have our first empty block.
6289  else
6290  {
6291  m_HasEmptyBlock = true;
6292  }
6293  }
6294  // pBlock didn't become empty, but we have another empty block - find and free that one.
6295  // (This is optional, a heuristic.)
6296  else if(m_HasEmptyBlock)
6297  {
6298  VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
6299  if(pLastBlock->m_Metadata.IsEmpty() && m_Blocks.size() > m_MinBlockCount)
6300  {
6301  pBlockToDelete = pLastBlock;
6302  m_Blocks.pop_back();
6303  m_HasEmptyBlock = false;
6304  }
6305  }
6306 
6307  IncrementallySortBlocks();
6308  }
6309 
6310  // Destruction of the empty block. Deferred until this point, outside of the
6311  // mutex lock, for performance reasons.
6312  if(pBlockToDelete != VMA_NULL)
6313  {
6314  VMA_DEBUG_LOG(" Deleted empty allocation");
6315  pBlockToDelete->Destroy(m_hAllocator);
6316  vma_delete(m_hAllocator, pBlockToDelete);
6317  }
6318 }
6319 
6320 size_t VmaBlockVector::CalcMaxBlockSize() const
6321 {
6322  size_t result = 0;
6323  for(size_t i = m_Blocks.size(); i--; )
6324  {
6325  result = VMA_MAX(result, m_Blocks[i]->m_Metadata.GetSize());
6326  if(result >= m_PreferredBlockSize)
6327  {
6328  break;
6329  }
6330  }
6331  return result;
6332 }
6333 
6334 void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
6335 {
6336  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
6337  {
6338  if(m_Blocks[blockIndex] == pBlock)
6339  {
6340  VmaVectorRemove(m_Blocks, blockIndex);
6341  return;
6342  }
6343  }
6344  VMA_ASSERT(0);
6345 }
6346 
6347 void VmaBlockVector::IncrementallySortBlocks()
6348 {
6349  // Bubble sort only until first swap.
6350  for(size_t i = 1; i < m_Blocks.size(); ++i)
6351  {
6352  if(m_Blocks[i - 1]->m_Metadata.GetSumFreeSize() > m_Blocks[i]->m_Metadata.GetSumFreeSize())
6353  {
6354  VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
6355  return;
6356  }
6357  }
6358 }
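// Illustrative: with per-block free sizes {10, 50, 20, 40}, one call swaps
// the first out-of-order pair and returns, giving {10, 20, 50, 40}. Repeated
// calls on subsequent frees converge toward ascending order of free space
// without ever paying for a full sort.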
6359 
6360 VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
6361 {
6362  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
6363  allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
6364  allocInfo.allocationSize = blockSize;
6365  VkDeviceMemory mem = VK_NULL_HANDLE;
6366  VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
6367  if(res < 0)
6368  {
6369  return res;
6370  }
6371 
6372  // New VkDeviceMemory successfully created.
6373 
6374  // Create new block object for it.
6375  VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
6376  pBlock->Init(
6377  m_MemoryTypeIndex,
6378  mem,
6379  allocInfo.allocationSize);
6380 
6381  m_Blocks.push_back(pBlock);
6382  if(pNewBlockIndex != VMA_NULL)
6383  {
6384  *pNewBlockIndex = m_Blocks.size() - 1;
6385  }
6386 
6387  return VK_SUCCESS;
6388 }
6389 
6390 #if VMA_STATS_STRING_ENABLED
6391 
6392 void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
6393 {
6394  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6395 
6396  json.BeginObject();
6397 
6398  if(m_IsCustomPool)
6399  {
6400  json.WriteString("MemoryTypeIndex");
6401  json.WriteNumber(m_MemoryTypeIndex);
6402 
6403  json.WriteString("BlockSize");
6404  json.WriteNumber(m_PreferredBlockSize);
6405 
6406  json.WriteString("BlockCount");
6407  json.BeginObject(true);
6408  if(m_MinBlockCount > 0)
6409  {
6410  json.WriteString("Min");
6411  json.WriteNumber(m_MinBlockCount);
6412  }
6413  if(m_MaxBlockCount < SIZE_MAX)
6414  {
6415  json.WriteString("Max");
6416  json.WriteNumber(m_MaxBlockCount);
6417  }
6418  json.WriteString("Cur");
6419  json.WriteNumber(m_Blocks.size());
6420  json.EndObject();
6421 
6422  if(m_FrameInUseCount > 0)
6423  {
6424  json.WriteString("FrameInUseCount");
6425  json.WriteNumber(m_FrameInUseCount);
6426  }
6427  }
6428  else
6429  {
6430  json.WriteString("PreferredBlockSize");
6431  json.WriteNumber(m_PreferredBlockSize);
6432  }
6433 
6434  json.WriteString("Blocks");
6435  json.BeginArray();
6436  for(size_t i = 0; i < m_Blocks.size(); ++i)
6437  {
6438  m_Blocks[i]->m_Metadata.PrintDetailedMap(json);
6439  }
6440  json.EndArray();
6441 
6442  json.EndObject();
6443 }
6444 
6445 #endif // #if VMA_STATS_STRING_ENABLED
6446 
6447 VmaDefragmentator* VmaBlockVector::EnsureDefragmentator(
6448  VmaAllocator hAllocator,
6449  uint32_t currentFrameIndex)
6450 {
6451  if(m_pDefragmentator == VMA_NULL)
6452  {
6453  m_pDefragmentator = vma_new(m_hAllocator, VmaDefragmentator)(
6454  hAllocator,
6455  this,
6456  currentFrameIndex);
6457  }
6458 
6459  return m_pDefragmentator;
6460 }
6461 
6462 VkResult VmaBlockVector::Defragment(
6463  VmaDefragmentationStats* pDefragmentationStats,
6464  VkDeviceSize& maxBytesToMove,
6465  uint32_t& maxAllocationsToMove)
6466 {
6467  if(m_pDefragmentator == VMA_NULL)
6468  {
6469  return VK_SUCCESS;
6470  }
6471 
6472  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6473 
6474  // Defragment.
6475  VkResult result = m_pDefragmentator->Defragment(maxBytesToMove, maxAllocationsToMove);
6476 
6477  // Accumulate statistics.
6478  if(pDefragmentationStats != VMA_NULL)
6479  {
6480  const VkDeviceSize bytesMoved = m_pDefragmentator->GetBytesMoved();
6481  const uint32_t allocationsMoved = m_pDefragmentator->GetAllocationsMoved();
6482  pDefragmentationStats->bytesMoved += bytesMoved;
6483  pDefragmentationStats->allocationsMoved += allocationsMoved;
6484  VMA_ASSERT(bytesMoved <= maxBytesToMove);
6485  VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
6486  maxBytesToMove -= bytesMoved;
6487  maxAllocationsToMove -= allocationsMoved;
6488  }
6489 
6490  // Free empty blocks.
6491  m_HasEmptyBlock = false;
6492  for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
6493  {
6494  VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
6495  if(pBlock->m_Metadata.IsEmpty())
6496  {
6497  if(m_Blocks.size() > m_MinBlockCount)
6498  {
6499  if(pDefragmentationStats != VMA_NULL)
6500  {
6501  ++pDefragmentationStats->deviceMemoryBlocksFreed;
6502  pDefragmentationStats->bytesFreed += pBlock->m_Metadata.GetSize();
6503  }
6504 
6505  VmaVectorRemove(m_Blocks, blockIndex);
6506  pBlock->Destroy(m_hAllocator);
6507  vma_delete(m_hAllocator, pBlock);
6508  }
6509  else
6510  {
6511  m_HasEmptyBlock = true;
6512  }
6513  }
6514  }
6515 
6516  return result;
6517 }
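// Note: maxBytesToMove and maxAllocationsToMove are passed by reference and
// decremented above, so on return they hold the remaining budget. This lets
// a caller spread one global limit across several block vectors.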
6518 
6519 void VmaBlockVector::DestroyDefragmentator()
6520 {
6521  if(m_pDefragmentator != VMA_NULL)
6522  {
6523  vma_delete(m_hAllocator, m_pDefragmentator);
6524  m_pDefragmentator = VMA_NULL;
6525  }
6526 }
6527 
6528 void VmaBlockVector::MakePoolAllocationsLost(
6529  uint32_t currentFrameIndex,
6530  size_t* pLostAllocationCount)
6531 {
6532  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6533  size_t lostAllocationCount = 0;
6534  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
6535  {
6536  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
6537  VMA_ASSERT(pBlock);
6538  lostAllocationCount += pBlock->m_Metadata.MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
6539  }
6540  if(pLostAllocationCount != VMA_NULL)
6541  {
6542  *pLostAllocationCount = lostAllocationCount;
6543  }
6544 }
6545 
6546 void VmaBlockVector::AddStats(VmaStats* pStats)
6547 {
6548  const uint32_t memTypeIndex = m_MemoryTypeIndex;
6549  const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);
6550 
6551  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6552 
6553  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
6554  {
6555  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
6556  VMA_ASSERT(pBlock);
6557  VMA_HEAVY_ASSERT(pBlock->Validate());
6558  VmaStatInfo allocationStatInfo;
6559  pBlock->m_Metadata.CalcAllocationStatInfo(allocationStatInfo);
6560  VmaAddStatInfo(pStats->total, allocationStatInfo);
6561  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
6562  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
6563  }
6564 }
6565 
6566 ////////////////////////////////////////////////////////////////////////////////
6567 // VmaDefragmentator members definition
6568 
6569 VmaDefragmentator::VmaDefragmentator(
6570  VmaAllocator hAllocator,
6571  VmaBlockVector* pBlockVector,
6572  uint32_t currentFrameIndex) :
6573  m_hAllocator(hAllocator),
6574  m_pBlockVector(pBlockVector),
6575  m_CurrentFrameIndex(currentFrameIndex),
6576  m_BytesMoved(0),
6577  m_AllocationsMoved(0),
6578  m_Allocations(VmaStlAllocator<AllocationInfo>(hAllocator->GetAllocationCallbacks())),
6579  m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
6580 {
6581 }
6582 
6583 VmaDefragmentator::~VmaDefragmentator()
6584 {
6585  for(size_t i = m_Blocks.size(); i--; )
6586  {
6587  vma_delete(m_hAllocator, m_Blocks[i]);
6588  }
6589 }
6590 
6591 void VmaDefragmentator::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
6592 {
6593  AllocationInfo allocInfo;
6594  allocInfo.m_hAllocation = hAlloc;
6595  allocInfo.m_pChanged = pChanged;
6596  m_Allocations.push_back(allocInfo);
6597 }
6598 
6599 VkResult VmaDefragmentator::BlockInfo::EnsureMapping(VmaAllocator hAllocator, void** ppMappedData)
6600 {
6601  // It has already been mapped for defragmentation.
6602  if(m_pMappedDataForDefragmentation)
6603  {
6604  *ppMappedData = m_pMappedDataForDefragmentation;
6605  return VK_SUCCESS;
6606  }
6607 
6608  // It is already mapped outside of defragmentation.
6609  if(m_pBlock->m_Mapping.GetMappedData())
6610  {
6611  *ppMappedData = m_pBlock->m_Mapping.GetMappedData();
6612  return VK_SUCCESS;
6613  }
6614 
6615  // Map on first usage.
6616  VkResult res = m_pBlock->Map(hAllocator, 1, &m_pMappedDataForDefragmentation);
6617  *ppMappedData = m_pMappedDataForDefragmentation;
6618  return res;
6619 }
6620 
6621 void VmaDefragmentator::BlockInfo::Unmap(VmaAllocator hAllocator)
6622 {
6623  if(m_pMappedDataForDefragmentation != VMA_NULL)
6624  {
6625  m_pBlock->Unmap(hAllocator, 1);
6626  }
6627 }
6628 
6629 VkResult VmaDefragmentator::DefragmentRound(
6630  VkDeviceSize maxBytesToMove,
6631  uint32_t maxAllocationsToMove)
6632 {
6633  if(m_Blocks.empty())
6634  {
6635  return VK_SUCCESS;
6636  }
6637 
6638  size_t srcBlockIndex = m_Blocks.size() - 1;
6639  size_t srcAllocIndex = SIZE_MAX;
6640  for(;;)
6641  {
6642  // 1. Find next allocation to move.
6643  // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
6644  // 1.2. Then start from last to first m_Allocations - they are sorted from largest to smallest.
6645  while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
6646  {
6647  if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
6648  {
6649  // Finished: no more allocations to process.
6650  if(srcBlockIndex == 0)
6651  {
6652  return VK_SUCCESS;
6653  }
6654  else
6655  {
6656  --srcBlockIndex;
6657  srcAllocIndex = SIZE_MAX;
6658  }
6659  }
6660  else
6661  {
6662  srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
6663  }
6664  }
6665 
6666  BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
6667  AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];
6668 
6669  const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
6670  const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
6671  const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
6672  const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();
6673 
6674  // 2. Try to find new place for this allocation in preceding or current block.
6675  for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
6676  {
6677  BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
6678  VmaAllocationRequest dstAllocRequest;
6679  if(pDstBlockInfo->m_pBlock->m_Metadata.CreateAllocationRequest(
6680  m_CurrentFrameIndex,
6681  m_pBlockVector->GetFrameInUseCount(),
6682  m_pBlockVector->GetBufferImageGranularity(),
6683  size,
6684  alignment,
6685  suballocType,
6686  false, // canMakeOtherLost
6687  &dstAllocRequest) &&
6688  MoveMakesSense(
6689  dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
6690  {
6691  VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);
6692 
6693  // Reached limit on number of allocations or bytes to move.
6694  if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
6695  (m_BytesMoved + size > maxBytesToMove))
6696  {
6697  return VK_INCOMPLETE;
6698  }
6699 
6700  void* pDstMappedData = VMA_NULL;
6701  VkResult res = pDstBlockInfo->EnsureMapping(m_hAllocator, &pDstMappedData);
6702  if(res != VK_SUCCESS)
6703  {
6704  return res;
6705  }
6706 
6707  void* pSrcMappedData = VMA_NULL;
6708  res = pSrcBlockInfo->EnsureMapping(m_hAllocator, &pSrcMappedData);
6709  if(res != VK_SUCCESS)
6710  {
6711  return res;
6712  }
6713 
6714  // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
6715  memcpy(
6716  reinterpret_cast<char*>(pDstMappedData) + dstAllocRequest.offset,
6717  reinterpret_cast<char*>(pSrcMappedData) + srcOffset,
6718  static_cast<size_t>(size));
6719 
6720  pDstBlockInfo->m_pBlock->m_Metadata.Alloc(dstAllocRequest, suballocType, size, allocInfo.m_hAllocation);
6721  pSrcBlockInfo->m_pBlock->m_Metadata.FreeAtOffset(srcOffset);
6722 
6723  allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
6724 
6725  if(allocInfo.m_pChanged != VMA_NULL)
6726  {
6727  *allocInfo.m_pChanged = VK_TRUE;
6728  }
6729 
6730  ++m_AllocationsMoved;
6731  m_BytesMoved += size;
6732 
6733  VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);
6734 
6735  break;
6736  }
6737  }
6738 
6739  // If not processed, this allocInfo remains in pBlockInfo->m_Allocations for the next round.
6740 
6741  if(srcAllocIndex > 0)
6742  {
6743  --srcAllocIndex;
6744  }
6745  else
6746  {
6747  if(srcBlockIndex > 0)
6748  {
6749  --srcBlockIndex;
6750  srcAllocIndex = SIZE_MAX;
6751  }
6752  else
6753  {
6754  return VK_SUCCESS;
6755  }
6756  }
6757  }
6758 }
6759 
6760 VkResult VmaDefragmentator::Defragment(
6761  VkDeviceSize maxBytesToMove,
6762  uint32_t maxAllocationsToMove)
6763 {
6764  if(m_Allocations.empty())
6765  {
6766  return VK_SUCCESS;
6767  }
6768 
6769  // Create block info for each block.
6770  const size_t blockCount = m_pBlockVector->m_Blocks.size();
6771  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
6772  {
6773  BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
6774  pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
6775  m_Blocks.push_back(pBlockInfo);
6776  }
6777 
6778  // Sort them by m_pBlock pointer value.
6779  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
6780 
6781  // Move each allocation info from m_Allocations into m_Allocations of the block that owns it.
6782  for(size_t allocIndex = 0, allocCount = m_Allocations.size(); allocIndex < allocCount; ++allocIndex)
6783  {
6784  AllocationInfo& allocInfo = m_Allocations[allocIndex];
6785  // Now that we are inside VmaBlockVector::m_Mutex, we can make the final check whether this allocation was lost.
6786  if(allocInfo.m_hAllocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
6787  {
6788  VmaDeviceMemoryBlock* pBlock = allocInfo.m_hAllocation->GetBlock();
6789  BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
6790  if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
6791  {
6792  (*it)->m_Allocations.push_back(allocInfo);
6793  }
6794  else
6795  {
6796  VMA_ASSERT(0);
6797  }
6798  }
6799  }
6800  m_Allocations.clear();
6801 
6802  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
6803  {
6804  BlockInfo* pBlockInfo = m_Blocks[blockIndex];
6805  pBlockInfo->CalcHasNonMovableAllocations();
6806  pBlockInfo->SortAllocationsBySizeDescecnding();
6807  }
6808 
6809  // Sort m_Blocks this time by the main criterion, from most "destination" to most "source" blocks.
6810  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());
6811 
6812  // Execute defragmentation rounds (the main part).
6813  VkResult result = VK_SUCCESS;
6814  for(size_t round = 0; (round < 2) && (result == VK_SUCCESS); ++round)
6815  {
6816  result = DefragmentRound(maxBytesToMove, maxAllocationsToMove);
6817  }
6818 
6819  // Unmap blocks that were mapped for defragmentation.
6820  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
6821  {
6822  m_Blocks[blockIndex]->Unmap(m_hAllocator);
6823  }
6824 
6825  return result;
6826 }
6827 
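// A move "makes sense" only if it transports the allocation strictly toward the
// front of the block order: to an earlier block, or to a lower offset within the
// same block. In effect, (dstBlockIndex, dstOffset) is compared lexicographically
// against (srcBlockIndex, srcOffset).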
6828 bool VmaDefragmentator::MoveMakesSense(
6829  size_t dstBlockIndex, VkDeviceSize dstOffset,
6830  size_t srcBlockIndex, VkDeviceSize srcOffset)
6831 {
6832  if(dstBlockIndex < srcBlockIndex)
6833  {
6834  return true;
6835  }
6836  if(dstBlockIndex > srcBlockIndex)
6837  {
6838  return false;
6839  }
6840  if(dstOffset < srcOffset)
6841  {
6842  return true;
6843  }
6844  return false;
6845 }
6846 
6847 ////////////////////////////////////////////////////////////////////////////////
6848 // VmaAllocator_T
6849 
6850 VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
6851  m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
6852  m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
6853  m_hDevice(pCreateInfo->device),
6854  m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
6855  m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
6856  *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
6857  m_PreferredLargeHeapBlockSize(0),
6858  m_PhysicalDevice(pCreateInfo->physicalDevice),
6859  m_CurrentFrameIndex(0),
6860  m_Pools(VmaStlAllocator<VmaPool>(GetAllocationCallbacks()))
6861 {
6862  VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device);
6863 
6864  memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
6865  memset(&m_MemProps, 0, sizeof(m_MemProps));
6866  memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
6867 
6868  memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
6869  memset(&m_pDedicatedAllocations, 0, sizeof(m_pDedicatedAllocations));
6870 
6871  for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
6872  {
6873  m_HeapSizeLimit[i] = VK_WHOLE_SIZE;
6874  }
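// VK_WHOLE_SIZE here means "no limit". If pCreateInfo->pHeapSizeLimit is given
// below, the selected heaps get a budget that AllocateVulkanMemory() and
// FreeVulkanMemory() keep up to date.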
6875 
6876  if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
6877  {
6878  m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
6879  m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
6880  }
6881 
6882  ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
6883 
6884  (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
6885  (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
6886 
6887  m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
6888  pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
6889 
6890  if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
6891  {
6892  for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
6893  {
6894  const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
6895  if(limit != VK_WHOLE_SIZE)
6896  {
6897  m_HeapSizeLimit[heapIndex] = limit;
6898  if(limit < m_MemProps.memoryHeaps[heapIndex].size)
6899  {
6900  m_MemProps.memoryHeaps[heapIndex].size = limit;
6901  }
6902  }
6903  }
6904  }
6905 
6906  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
6907  {
6908  const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
6909 
6910  m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
6911  this,
6912  memTypeIndex,
6913  preferredBlockSize,
6914  0,
6915  SIZE_MAX,
6916  GetBufferImageGranularity(),
6917  pCreateInfo->frameInUseCount,
6918  false); // isCustomPool
6919  // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
6920  // because minBlockCount is 0.
6921  m_pDedicatedAllocations[memTypeIndex] = vma_new(this, AllocationVectorType)(VmaStlAllocator<VmaAllocation>(GetAllocationCallbacks()));
6922  }
6923 }
6924 
6925 VmaAllocator_T::~VmaAllocator_T()
6926 {
6927  VMA_ASSERT(m_Pools.empty());
6928 
6929  for(size_t i = GetMemoryTypeCount(); i--; )
6930  {
6931  vma_delete(this, m_pDedicatedAllocations[i]);
6932  vma_delete(this, m_pBlockVectors[i]);
6933  }
6934 }
6935 
6936 void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
6937 {
6938 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
6939  m_VulkanFunctions.vkGetPhysicalDeviceProperties = &vkGetPhysicalDeviceProperties;
6940  m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = &vkGetPhysicalDeviceMemoryProperties;
6941  m_VulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
6942  m_VulkanFunctions.vkFreeMemory = &vkFreeMemory;
6943  m_VulkanFunctions.vkMapMemory = &vkMapMemory;
6944  m_VulkanFunctions.vkUnmapMemory = &vkUnmapMemory;
6945  m_VulkanFunctions.vkBindBufferMemory = &vkBindBufferMemory;
6946  m_VulkanFunctions.vkBindImageMemory = &vkBindImageMemory;
6947  m_VulkanFunctions.vkGetBufferMemoryRequirements = &vkGetBufferMemoryRequirements;
6948  m_VulkanFunctions.vkGetImageMemoryRequirements = &vkGetImageMemoryRequirements;
6949  m_VulkanFunctions.vkCreateBuffer = &vkCreateBuffer;
6950  m_VulkanFunctions.vkDestroyBuffer = &vkDestroyBuffer;
6951  m_VulkanFunctions.vkCreateImage = &vkCreateImage;
6952  m_VulkanFunctions.vkDestroyImage = &vkDestroyImage;
6953  if(m_UseKhrDedicatedAllocation)
6954  {
6955  m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR =
6956  (PFN_vkGetBufferMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetBufferMemoryRequirements2KHR");
6957  m_VulkanFunctions.vkGetImageMemoryRequirements2KHR =
6958  (PFN_vkGetImageMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetImageMemoryRequirements2KHR");
6959  }
6960 #endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
6961 
6962 #define VMA_COPY_IF_NOT_NULL(funcName) \
6963  if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
6964 
6965  if(pVulkanFunctions != VMA_NULL)
6966  {
6967  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
6968  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
6969  VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
6970  VMA_COPY_IF_NOT_NULL(vkFreeMemory);
6971  VMA_COPY_IF_NOT_NULL(vkMapMemory);
6972  VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
6973  VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
6974  VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
6975  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
6976  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
6977  VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
6978  VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
6979  VMA_COPY_IF_NOT_NULL(vkCreateImage);
6980  VMA_COPY_IF_NOT_NULL(vkDestroyImage);
6981  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
6982  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
6983  }
6984 
6985 #undef VMA_COPY_IF_NOT_NULL
6986 
6987  // If these asserts are hit, you must either #define VMA_STATIC_VULKAN_FUNCTIONS 1
6988  // or pass valid pointers as VmaAllocatorCreateInfo::pVulkanFunctions.
6989  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
6990  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
6991  VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
6992  VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
6993  VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
6994  VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
6995  VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
6996  VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
6997  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
6998  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
6999  VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
7000  VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
7001  VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
7002  VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
7003  if(m_UseKhrDedicatedAllocation)
7004  {
7005  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
7006  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
7007  }
7008 }
7009 
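// Heuristic: heaps no larger than VMA_SMALL_HEAP_MAX_SIZE use 1/8 of the heap
// size as the block size, so that a few blocks cannot exhaust a small heap;
// larger heaps use the configured preferred large-heap block size.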
7010 VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
7011 {
7012  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
7013  const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
7014  const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
7015  return isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize;
7016 }
7017 
7018 VkResult VmaAllocator_T::AllocateMemoryOfType(
7019  const VkMemoryRequirements& vkMemReq,
7020  bool dedicatedAllocation,
7021  VkBuffer dedicatedBuffer,
7022  VkImage dedicatedImage,
7023  const VmaAllocationCreateInfo& createInfo,
7024  uint32_t memTypeIndex,
7025  VmaSuballocationType suballocType,
7026  VmaAllocation* pAllocation)
7027 {
7028  VMA_ASSERT(pAllocation != VMA_NULL);
7029  VMA_DEBUG_LOG(" AllocateMemory: MemoryTypeIndex=%u, Size=%llu", memTypeIndex, vkMemReq.size);
7030 
7031  VmaAllocationCreateInfo finalCreateInfo = createInfo;
7032 
7033  // If memory type is not HOST_VISIBLE, disable MAPPED.
7034  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
7035  (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
7036  {
7037  finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
7038  }
7039 
7040  VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
7041  VMA_ASSERT(blockVector);
7042 
7043  const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
7044  bool preferDedicatedMemory =
7045  VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
7046  dedicatedAllocation ||
7047  // Heuristics: Allocate dedicated memory if requested size is greater than half of preferred block size.
7048  vkMemReq.size > preferredBlockSize / 2;
7049 
7050  if(preferDedicatedMemory &&
7051  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
7052  finalCreateInfo.pool == VK_NULL_HANDLE)
7053  {
7054  finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
7055  }
7056 
7057  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
7058  {
7059  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7060  {
7061  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7062  }
7063  else
7064  {
7065  return AllocateDedicatedMemory(
7066  vkMemReq.size,
7067  suballocType,
7068  memTypeIndex,
7069  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
7070  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
7071  finalCreateInfo.pUserData,
7072  dedicatedBuffer,
7073  dedicatedImage,
7074  pAllocation);
7075  }
7076  }
7077  else
7078  {
7079  VkResult res = blockVector->Allocate(
7080  VK_NULL_HANDLE, // hCurrentPool
7081  m_CurrentFrameIndex.load(),
7082  vkMemReq,
7083  finalCreateInfo,
7084  suballocType,
7085  pAllocation);
7086  if(res == VK_SUCCESS)
7087  {
7088  return res;
7089  }
7090 
7091  // Block vector allocation failed: try dedicated memory.
7092  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7093  {
7094  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7095  }
7096  else
7097  {
7098  res = AllocateDedicatedMemory(
7099  vkMemReq.size,
7100  suballocType,
7101  memTypeIndex,
7102  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
7103  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
7104  finalCreateInfo.pUserData,
7105  dedicatedBuffer,
7106  dedicatedImage,
7107  pAllocation);
7108  if(res == VK_SUCCESS)
7109  {
7110  // Succeeded: AllocateDedicatedMemory function already filled pAllocation, nothing more to do here.
7111  VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
7112  return VK_SUCCESS;
7113  }
7114  else
7115  {
7116  // Everything failed: Return error code.
7117  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
7118  return res;
7119  }
7120  }
7121  }
7122 }
7123 
7124 VkResult VmaAllocator_T::AllocateDedicatedMemory(
7125  VkDeviceSize size,
7126  VmaSuballocationType suballocType,
7127  uint32_t memTypeIndex,
7128  bool map,
7129  bool isUserDataString,
7130  void* pUserData,
7131  VkBuffer dedicatedBuffer,
7132  VkImage dedicatedImage,
7133  VmaAllocation* pAllocation)
7134 {
7135  VMA_ASSERT(pAllocation);
7136 
7137  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
7138  allocInfo.memoryTypeIndex = memTypeIndex;
7139  allocInfo.allocationSize = size;
7140 
7141  VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
7142  if(m_UseKhrDedicatedAllocation)
7143  {
7144  if(dedicatedBuffer != VK_NULL_HANDLE)
7145  {
7146  VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
7147  dedicatedAllocInfo.buffer = dedicatedBuffer;
7148  allocInfo.pNext = &dedicatedAllocInfo;
7149  }
7150  else if(dedicatedImage != VK_NULL_HANDLE)
7151  {
7152  dedicatedAllocInfo.image = dedicatedImage;
7153  allocInfo.pNext = &dedicatedAllocInfo;
7154  }
7155  }
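// Note: when set above, allocInfo.pNext points at dedicatedAllocInfo, so that
// stack variable must stay alive until the vkAllocateMemory call inside
// AllocateVulkanMemory() below has returned.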
7156 
7157  // Allocate VkDeviceMemory.
7158  VkDeviceMemory hMemory = VK_NULL_HANDLE;
7159  VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
7160  if(res < 0)
7161  {
7162  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
7163  return res;
7164  }
7165 
7166  void* pMappedData = VMA_NULL;
7167  if(map)
7168  {
7169  res = (*m_VulkanFunctions.vkMapMemory)(
7170  m_hDevice,
7171  hMemory,
7172  0,
7173  VK_WHOLE_SIZE,
7174  0,
7175  &pMappedData);
7176  if(res < 0)
7177  {
7178  VMA_DEBUG_LOG(" vkMapMemory FAILED");
7179  FreeVulkanMemory(memTypeIndex, size, hMemory);
7180  return res;
7181  }
7182  }
7183 
7184  *pAllocation = vma_new(this, VmaAllocation_T)(m_CurrentFrameIndex.load(), isUserDataString);
7185  (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
7186  (*pAllocation)->SetUserData(this, pUserData);
7187 
7188  // Register it in m_pDedicatedAllocations.
7189  {
7190  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7191  AllocationVectorType* pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
7192  VMA_ASSERT(pDedicatedAllocations);
7193  VmaVectorInsertSorted<VmaPointerLess>(*pDedicatedAllocations, *pAllocation);
7194  }
7195 
7196  VMA_DEBUG_LOG(" Allocated DedicatedMemory MemoryTypeIndex=#%u", memTypeIndex);
7197 
7198  return VK_SUCCESS;
7199 }
7200 
7201 void VmaAllocator_T::GetBufferMemoryRequirements(
7202  VkBuffer hBuffer,
7203  VkMemoryRequirements& memReq,
7204  bool& requiresDedicatedAllocation,
7205  bool& prefersDedicatedAllocation) const
7206 {
7207  if(m_UseKhrDedicatedAllocation)
7208  {
7209  VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
7210  memReqInfo.buffer = hBuffer;
7211 
7212  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
7213 
7214  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
7215  memReq2.pNext = &memDedicatedReq;
7216 
7217  (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
7218 
7219  memReq = memReq2.memoryRequirements;
7220  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
7221  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
7222  }
7223  else
7224  {
7225  (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
7226  requiresDedicatedAllocation = false;
7227  prefersDedicatedAllocation = false;
7228  }
7229 }
7230 
7231 void VmaAllocator_T::GetImageMemoryRequirements(
7232  VkImage hImage,
7233  VkMemoryRequirements& memReq,
7234  bool& requiresDedicatedAllocation,
7235  bool& prefersDedicatedAllocation) const
7236 {
7237  if(m_UseKhrDedicatedAllocation)
7238  {
7239  VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
7240  memReqInfo.image = hImage;
7241 
7242  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
7243 
7244  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
7245  memReq2.pNext = &memDedicatedReq;
7246 
7247  (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
7248 
7249  memReq = memReq2.memoryRequirements;
7250  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
7251  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
7252  }
7253  else
7254  {
7255  (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
7256  requiresDedicatedAllocation = false;
7257  prefersDedicatedAllocation = false;
7258  }
7259 }
7260 
7261 VkResult VmaAllocator_T::AllocateMemory(
7262  const VkMemoryRequirements& vkMemReq,
7263  bool requiresDedicatedAllocation,
7264  bool prefersDedicatedAllocation,
7265  VkBuffer dedicatedBuffer,
7266  VkImage dedicatedImage,
7267  const VmaAllocationCreateInfo& createInfo,
7268  VmaSuballocationType suballocType,
7269  VmaAllocation* pAllocation)
7270 {
7271  if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
7272  (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7273  {
7274  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
7275  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7276  }
7277  if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
7278  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
7279  {
7280  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
7281  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7282  }
7283  if(requiresDedicatedAllocation)
7284  {
7285  if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7286  {
7287  VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
7288  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7289  }
7290  if(createInfo.pool != VK_NULL_HANDLE)
7291  {
7292  VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
7293  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7294  }
7295  }
7296  if((createInfo.pool != VK_NULL_HANDLE) &&
7297  ((createInfo.flags & (VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT)) != 0))
7298  {
7299  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
7300  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7301  }
7302 
7303  if(createInfo.pool != VK_NULL_HANDLE)
7304  {
7305  return createInfo.pool->m_BlockVector.Allocate(
7306  createInfo.pool,
7307  m_CurrentFrameIndex.load(),
7308  vkMemReq,
7309  createInfo,
7310  suballocType,
7311  pAllocation);
7312  }
7313  else
7314  {
7315  // Bit mask of Vulkan memory types acceptable for this allocation.
7316  uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
7317  uint32_t memTypeIndex = UINT32_MAX;
7318  VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
7319  if(res == VK_SUCCESS)
7320  {
7321  res = AllocateMemoryOfType(
7322  vkMemReq,
7323  requiresDedicatedAllocation || prefersDedicatedAllocation,
7324  dedicatedBuffer,
7325  dedicatedImage,
7326  createInfo,
7327  memTypeIndex,
7328  suballocType,
7329  pAllocation);
7330  // Succeeded on first try.
7331  if(res == VK_SUCCESS)
7332  {
7333  return res;
7334  }
7335  // Allocation from this memory type failed. Try other compatible memory types.
7336  else
7337  {
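// Retry loop: mask the failed memory type out of memoryTypeBits and ask
// vmaFindMemoryTypeIndex() for the next-best candidate, until an allocation
// succeeds or no acceptable memory type remains.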
7338  for(;;)
7339  {
7340  // Remove old memTypeIndex from list of possibilities.
7341  memoryTypeBits &= ~(1u << memTypeIndex);
7342  // Find alternative memTypeIndex.
7343  res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
7344  if(res == VK_SUCCESS)
7345  {
7346  res = AllocateMemoryOfType(
7347  vkMemReq,
7348  requiresDedicatedAllocation || prefersDedicatedAllocation,
7349  dedicatedBuffer,
7350  dedicatedImage,
7351  createInfo,
7352  memTypeIndex,
7353  suballocType,
7354  pAllocation);
7355  // Allocation from this alternative memory type succeeded.
7356  if(res == VK_SUCCESS)
7357  {
7358  return res;
7359  }
7360  // else: Allocation from this memory type failed. Try next one - next loop iteration.
7361  }
7362  // No other matching memory type index could be found.
7363  else
7364  {
7365  // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
7366  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7367  }
7368  }
7369  }
7370  }
7371  // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
7372  else
7373  return res;
7374  }
7375 }
7376 
7377 void VmaAllocator_T::FreeMemory(const VmaAllocation allocation)
7378 {
7379  VMA_ASSERT(allocation);
7380 
7381  if(allocation->CanBecomeLost() == false ||
7382  allocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
7383  {
7384  switch(allocation->GetType())
7385  {
7386  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
7387  {
7388  VmaBlockVector* pBlockVector = VMA_NULL;
7389  VmaPool hPool = allocation->GetPool();
7390  if(hPool != VK_NULL_HANDLE)
7391  {
7392  pBlockVector = &hPool->m_BlockVector;
7393  }
7394  else
7395  {
7396  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
7397  pBlockVector = m_pBlockVectors[memTypeIndex];
7398  }
7399  pBlockVector->Free(allocation);
7400  }
7401  break;
7402  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
7403  FreeDedicatedMemory(allocation);
7404  break;
7405  default:
7406  VMA_ASSERT(0);
7407  }
7408  }
7409 
7410  allocation->SetUserData(this, VMA_NULL);
7411  vma_delete(this, allocation);
7412 }
7413 
7414 void VmaAllocator_T::CalculateStats(VmaStats* pStats)
7415 {
7416  // Initialize.
7417  InitStatInfo(pStats->total);
7418  for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
7419  InitStatInfo(pStats->memoryType[i]);
7420  for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
7421  InitStatInfo(pStats->memoryHeap[i]);
7422 
7423  // Process default pools.
7424  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7425  {
7426  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
7427  VMA_ASSERT(pBlockVector);
7428  pBlockVector->AddStats(pStats);
7429  }
7430 
7431  // Process custom pools.
7432  {
7433  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7434  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
7435  {
7436  m_Pools[poolIndex]->GetBlockVector().AddStats(pStats);
7437  }
7438  }
7439 
7440  // Process dedicated allocations.
7441  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7442  {
7443  const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
7444  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7445  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
7446  VMA_ASSERT(pDedicatedAllocVector);
7447  for(size_t allocIndex = 0, allocCount = pDedicatedAllocVector->size(); allocIndex < allocCount; ++allocIndex)
7448  {
7449  VmaStatInfo allocationStatInfo;
7450  (*pDedicatedAllocVector)[allocIndex]->DedicatedAllocCalcStatsInfo(allocationStatInfo);
7451  VmaAddStatInfo(pStats->total, allocationStatInfo);
7452  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
7453  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
7454  }
7455  }
7456 
7457  // Postprocess.
7458  VmaPostprocessCalcStatInfo(pStats->total);
7459  for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
7460  VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
7461  for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
7462  VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
7463 }
7464 
7465 static const uint32_t VMA_VENDOR_ID_AMD = 4098;
7466 
7467 VkResult VmaAllocator_T::Defragment(
7468  VmaAllocation* pAllocations,
7469  size_t allocationCount,
7470  VkBool32* pAllocationsChanged,
7471  const VmaDefragmentationInfo* pDefragmentationInfo,
7472  VmaDefragmentationStats* pDefragmentationStats)
7473 {
7474  if(pAllocationsChanged != VMA_NULL)
7475  {
7476  memset(pAllocationsChanged, 0, allocationCount * sizeof(*pAllocationsChanged)); // One flag per allocation.
7477  }
7478  if(pDefragmentationStats != VMA_NULL)
7479  {
7480  memset(pDefragmentationStats, 0, sizeof(*pDefragmentationStats));
7481  }
7482 
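// Defragmentation proceeds in three phases below: dispatch the given allocations
// to per-block-vector defragmentators, run those defragmentators within the
// requested move limits, and finally destroy them.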
7483  const uint32_t currentFrameIndex = m_CurrentFrameIndex.load();
7484 
7485  VmaMutexLock poolsLock(m_PoolsMutex, m_UseMutex);
7486 
7487  const size_t poolCount = m_Pools.size();
7488 
7489  // Dispatch pAllocations among defragmentators. Create them in BlockVectors when necessary.
7490  for(size_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
7491  {
7492  VmaAllocation hAlloc = pAllocations[allocIndex];
7493  VMA_ASSERT(hAlloc);
7494  const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
7495  // DedicatedAlloc cannot be defragmented.
7496  if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
7497  // Only HOST_VISIBLE memory types can be defragmented.
7498  ((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0) &&
7499  // Lost allocation cannot be defragmented.
7500  (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
7501  {
7502  VmaBlockVector* pAllocBlockVector = VMA_NULL;
7503 
7504  const VmaPool hAllocPool = hAlloc->GetPool();
7505  // This allocation belongs to custom pool.
7506  if(hAllocPool != VK_NULL_HANDLE)
7507  {
7508  pAllocBlockVector = &hAllocPool->GetBlockVector();
7509  }
7510  // This allocation belongs to general pool.
7511  else
7512  {
7513  pAllocBlockVector = m_pBlockVectors[memTypeIndex];
7514  }
7515 
7516  VmaDefragmentator* const pDefragmentator = pAllocBlockVector->EnsureDefragmentator(this, currentFrameIndex);
7517 
7518  VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
7519  &pAllocationsChanged[allocIndex] : VMA_NULL;
7520  pDefragmentator->AddAllocation(hAlloc, pChanged);
7521  }
7522  }
7523 
7524  VkResult result = VK_SUCCESS;
7525 
7526  // ======== Main processing.
7527 
7528  VkDeviceSize maxBytesToMove = SIZE_MAX;
7529  uint32_t maxAllocationsToMove = UINT32_MAX;
7530  if(pDefragmentationInfo != VMA_NULL)
7531  {
7532  maxBytesToMove = pDefragmentationInfo->maxBytesToMove;
7533  maxAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
7534  }
7535 
7536  // Process standard memory.
7537  for(uint32_t memTypeIndex = 0;
7538  (memTypeIndex < GetMemoryTypeCount()) && (result == VK_SUCCESS);
7539  ++memTypeIndex)
7540  {
7541  // Only HOST_VISIBLE memory types can be defragmented.
7542  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
7543  {
7544  result = m_pBlockVectors[memTypeIndex]->Defragment(
7545  pDefragmentationStats,
7546  maxBytesToMove,
7547  maxAllocationsToMove);
7548  }
7549  }
7550 
7551  // Process custom pools.
7552  for(size_t poolIndex = 0; (poolIndex < poolCount) && (result == VK_SUCCESS); ++poolIndex)
7553  {
7554  result = m_Pools[poolIndex]->GetBlockVector().Defragment(
7555  pDefragmentationStats,
7556  maxBytesToMove,
7557  maxAllocationsToMove);
7558  }
7559 
7560  // ======== Destroy defragmentators.
7561 
7562  // Process custom pools.
7563  for(size_t poolIndex = poolCount; poolIndex--; )
7564  {
7565  m_Pools[poolIndex]->GetBlockVector().DestroyDefragmentator();
7566  }
7567 
7568  // Process standard memory.
7569  for(uint32_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
7570  {
7571  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
7572  {
7573  m_pBlockVectors[memTypeIndex]->DestroyDefragmentator();
7574  }
7575  }
7576 
7577  return result;
7578 }
7579 
7580 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
7581 {
7582  if(hAllocation->CanBecomeLost())
7583  {
7584  /*
7585  Warning: This is a carefully designed algorithm.
7586  Do not modify unless you really know what you're doing :)
7587  */
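// The loop below promotes the allocation's last-use frame index to the current
// frame with a compare-exchange, retrying until it either observes the
// allocation as lost or sees the index already equal to the current frame -
// the only two states that return.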
7588  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
7589  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
7590  for(;;)
7591  {
7592  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
7593  {
7594  pAllocationInfo->memoryType = UINT32_MAX;
7595  pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
7596  pAllocationInfo->offset = 0;
7597  pAllocationInfo->size = hAllocation->GetSize();
7598  pAllocationInfo->pMappedData = VMA_NULL;
7599  pAllocationInfo->pUserData = hAllocation->GetUserData();
7600  return;
7601  }
7602  else if(localLastUseFrameIndex == localCurrFrameIndex)
7603  {
7604  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
7605  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
7606  pAllocationInfo->offset = hAllocation->GetOffset();
7607  pAllocationInfo->size = hAllocation->GetSize();
7608  pAllocationInfo->pMappedData = VMA_NULL;
7609  pAllocationInfo->pUserData = hAllocation->GetUserData();
7610  return;
7611  }
7612  else // Last use time earlier than current time.
7613  {
7614  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
7615  {
7616  localLastUseFrameIndex = localCurrFrameIndex;
7617  }
7618  }
7619  }
7620  }
7621  else
7622  {
7623  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
7624  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
7625  pAllocationInfo->offset = hAllocation->GetOffset();
7626  pAllocationInfo->size = hAllocation->GetSize();
7627  pAllocationInfo->pMappedData = hAllocation->GetMappedData();
7628  pAllocationInfo->pUserData = hAllocation->GetUserData();
7629  }
7630 }
7631 
7632 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
7633 {
7634  VMA_DEBUG_LOG(" CreatePool: MemoryTypeIndex=%u", pCreateInfo->memoryTypeIndex);
7635 
7636  VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
7637 
7638  if(newCreateInfo.maxBlockCount == 0)
7639  {
7640  newCreateInfo.maxBlockCount = SIZE_MAX;
7641  }
7642  if(newCreateInfo.blockSize == 0)
7643  {
7644  newCreateInfo.blockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
7645  }
7646 
7647  *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo);
7648 
7649  VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
7650  if(res != VK_SUCCESS)
7651  {
7652  vma_delete(this, *pPool);
7653  *pPool = VMA_NULL;
7654  return res;
7655  }
7656 
7657  // Add to m_Pools.
7658  {
7659  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7660  VmaVectorInsertSorted<VmaPointerLess>(m_Pools, *pPool);
7661  }
7662 
7663  return VK_SUCCESS;
7664 }
7665 
7666 void VmaAllocator_T::DestroyPool(VmaPool pool)
7667 {
7668  // Remove from m_Pools.
7669  {
7670  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7671  bool success = VmaVectorRemoveSorted<VmaPointerLess>(m_Pools, pool);
7672  VMA_ASSERT(success && "Pool not found in Allocator.");
7673  }
7674 
7675  vma_delete(this, pool);
7676 }
7677 
7678 void VmaAllocator_T::GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats)
7679 {
7680  pool->m_BlockVector.GetPoolStats(pPoolStats);
7681 }
7682 
7683 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
7684 {
7685  m_CurrentFrameIndex.store(frameIndex);
7686 }
7687 
7688 void VmaAllocator_T::MakePoolAllocationsLost(
7689  VmaPool hPool,
7690  size_t* pLostAllocationCount)
7691 {
7692  hPool->m_BlockVector.MakePoolAllocationsLost(
7693  m_CurrentFrameIndex.load(),
7694  pLostAllocationCount);
7695 }
7696 
7697 void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
7698 {
7699  *pAllocation = vma_new(this, VmaAllocation_T)(VMA_FRAME_INDEX_LOST, false);
7700  (*pAllocation)->InitLost();
7701 }
7702 
7703 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
7704 {
7705  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
7706 
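// When this heap has a user-imposed size limit, the budget check and the
// allocation happen under m_HeapSizeLimitMutex, so concurrent allocations
// cannot jointly overshoot the limit.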
7707  VkResult res;
7708  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
7709  {
7710  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
7711  if(m_HeapSizeLimit[heapIndex] >= pAllocateInfo->allocationSize)
7712  {
7713  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
7714  if(res == VK_SUCCESS)
7715  {
7716  m_HeapSizeLimit[heapIndex] -= pAllocateInfo->allocationSize;
7717  }
7718  }
7719  else
7720  {
7721  res = VK_ERROR_OUT_OF_DEVICE_MEMORY;
7722  }
7723  }
7724  else
7725  {
7726  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
7727  }
7728 
7729  if(res == VK_SUCCESS && m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
7730  {
7731  (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize);
7732  }
7733 
7734  return res;
7735 }
7736 
7737 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
7738 {
7739  if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
7740  {
7741  (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size);
7742  }
7743 
7744  (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
7745 
7746  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
7747  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
7748  {
7749  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
7750  m_HeapSizeLimit[heapIndex] += size;
7751  }
7752 }
7753 
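// Map()/Unmap() are reference-counted for block allocations: the whole owning
// VkDeviceMemory block is mapped once (pBlock->Map with count 1) and the
// returned pointer is offset to the allocation, while BlockAllocMap()/
// BlockAllocUnmap() track the per-allocation map count.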
7754 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
7755 {
7756  if(hAllocation->CanBecomeLost())
7757  {
7758  return VK_ERROR_MEMORY_MAP_FAILED;
7759  }
7760 
7761  switch(hAllocation->GetType())
7762  {
7763  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
7764  {
7765  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
7766  char *pBytes = VMA_NULL;
7767  VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
7768  if(res == VK_SUCCESS)
7769  {
7770  *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
7771  hAllocation->BlockAllocMap();
7772  }
7773  return res;
7774  }
7775  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
7776  return hAllocation->DedicatedAllocMap(this, ppData);
7777  default:
7778  VMA_ASSERT(0);
7779  return VK_ERROR_MEMORY_MAP_FAILED;
7780  }
7781 }
7782 
7783 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
7784 {
7785  switch(hAllocation->GetType())
7786  {
7787  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
7788  {
7789  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
7790  hAllocation->BlockAllocUnmap();
7791  pBlock->Unmap(this, 1);
7792  }
7793  break;
7794  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
7795  hAllocation->DedicatedAllocUnmap(this);
7796  break;
7797  default:
7798  VMA_ASSERT(0);
7799  }
7800 }
7801 
7802 void VmaAllocator_T::FreeDedicatedMemory(VmaAllocation allocation)
7803 {
7804  VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
7805 
7806  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
7807  {
7808  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7809  AllocationVectorType* const pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
7810  VMA_ASSERT(pDedicatedAllocations);
7811  bool success = VmaVectorRemoveSorted<VmaPointerLess>(*pDedicatedAllocations, allocation);
7812  VMA_ASSERT(success);
7813  }
7814 
7815  VkDeviceMemory hMemory = allocation->GetMemory();
7816 
7817  if(allocation->GetMappedData() != VMA_NULL)
7818  {
7819  (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
7820  }
7821 
7822  FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
7823 
7824  VMA_DEBUG_LOG(" Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
7825 }
7826 
7827 #if VMA_STATS_STRING_ENABLED
7828 
7829 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
7830 {
7831  bool dedicatedAllocationsStarted = false;
7832  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7833  {
7834  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7835  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
7836  VMA_ASSERT(pDedicatedAllocVector);
7837  if(pDedicatedAllocVector->empty() == false)
7838  {
7839  if(dedicatedAllocationsStarted == false)
7840  {
7841  dedicatedAllocationsStarted = true;
7842  json.WriteString("DedicatedAllocations");
7843  json.BeginObject();
7844  }
7845 
7846  json.BeginString("Type ");
7847  json.ContinueString(memTypeIndex);
7848  json.EndString();
7849 
7850  json.BeginArray();
7851 
7852  for(size_t i = 0; i < pDedicatedAllocVector->size(); ++i)
7853  {
7854  const VmaAllocation hAlloc = (*pDedicatedAllocVector)[i];
7855  json.BeginObject(true);
7856 
7857  json.WriteString("Type");
7858  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[hAlloc->GetSuballocationType()]);
7859 
7860  json.WriteString("Size");
7861  json.WriteNumber(hAlloc->GetSize());
7862 
7863  const void* pUserData = hAlloc->GetUserData();
7864  if(pUserData != VMA_NULL)
7865  {
7866  json.WriteString("UserData");
7867  if(hAlloc->IsUserDataString())
7868  {
7869  json.WriteString((const char*)pUserData);
7870  }
7871  else
7872  {
7873  json.BeginString();
7874  json.ContinueString_Pointer(pUserData);
7875  json.EndString();
7876  }
7877  }
7878 
7879  json.EndObject();
7880  }
7881 
7882  json.EndArray();
7883  }
7884  }
7885  if(dedicatedAllocationsStarted)
7886  {
7887  json.EndObject();
7888  }
7889 
7890  {
7891  bool allocationsStarted = false;
7892  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7893  {
7894  if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
7895  {
7896  if(allocationsStarted == false)
7897  {
7898  allocationsStarted = true;
7899  json.WriteString("DefaultPools");
7900  json.BeginObject();
7901  }
7902 
7903  json.BeginString("Type ");
7904  json.ContinueString(memTypeIndex);
7905  json.EndString();
7906 
7907  m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
7908  }
7909  }
7910  if(allocationsStarted)
7911  {
7912  json.EndObject();
7913  }
7914  }
7915 
7916  {
7917  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7918  const size_t poolCount = m_Pools.size();
7919  if(poolCount > 0)
7920  {
7921  json.WriteString("Pools");
7922  json.BeginArray();
7923  for(size_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
7924  {
7925  m_Pools[poolIndex]->m_BlockVector.PrintDetailedMap(json);
7926  }
7927  json.EndArray();
7928  }
7929  }
7930 }
7931 
7932 #endif // #if VMA_STATS_STRING_ENABLED
7933 
7934 static VkResult AllocateMemoryForImage(
7935  VmaAllocator allocator,
7936  VkImage image,
7937  const VmaAllocationCreateInfo* pAllocationCreateInfo,
7938  VmaSuballocationType suballocType,
7939  VmaAllocation* pAllocation)
7940 {
7941  VMA_ASSERT(allocator && (image != VK_NULL_HANDLE) && pAllocationCreateInfo && pAllocation);
7942 
7943  VkMemoryRequirements vkMemReq = {};
7944  bool requiresDedicatedAllocation = false;
7945  bool prefersDedicatedAllocation = false;
7946  allocator->GetImageMemoryRequirements(image, vkMemReq,
7947  requiresDedicatedAllocation, prefersDedicatedAllocation);
7948 
7949  return allocator->AllocateMemory(
7950  vkMemReq,
7951  requiresDedicatedAllocation,
7952  prefersDedicatedAllocation,
7953  VK_NULL_HANDLE, // dedicatedBuffer
7954  image, // dedicatedImage
7955  *pAllocationCreateInfo,
7956  suballocType,
7957  pAllocation);
7958 }
7959 
7960 ////////////////////////////////////////////////////////////////////////////////
7961 // Public interface
7962 
7963 VkResult vmaCreateAllocator(
7964  const VmaAllocatorCreateInfo* pCreateInfo,
7965  VmaAllocator* pAllocator)
7966 {
7967  VMA_ASSERT(pCreateInfo && pAllocator);
7968  VMA_DEBUG_LOG("vmaCreateAllocator");
7969  *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
7970  return VK_SUCCESS;
7971 }
7972 
7973 void vmaDestroyAllocator(
7974  VmaAllocator allocator)
7975 {
7976  if(allocator != VK_NULL_HANDLE)
7977  {
7978  VMA_DEBUG_LOG("vmaDestroyAllocator");
7979  VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
7980  vma_delete(&allocationCallbacks, allocator);
7981  }
7982 }
7983 
7984 void vmaGetPhysicalDeviceProperties(
7985  VmaAllocator allocator,
7986  const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
7987 {
7988  VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
7989  *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
7990 }
7991 
7992 void vmaGetMemoryProperties(
7993  VmaAllocator allocator,
7994  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
7995 {
7996  VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
7997  *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
7998 }
7999 
8000 void vmaGetMemoryTypeProperties(
8001  VmaAllocator allocator,
8002  uint32_t memoryTypeIndex,
8003  VkMemoryPropertyFlags* pFlags)
8004 {
8005  VMA_ASSERT(allocator && pFlags);
8006  VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
8007  *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
8008 }
8009 
8010 void vmaSetCurrentFrameIndex(
8011  VmaAllocator allocator,
8012  uint32_t frameIndex)
8013 {
8014  VMA_ASSERT(allocator);
8015  VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);
8016 
8017  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8018 
8019  allocator->SetCurrentFrameIndex(frameIndex);
8020 }
8021 
8022 void vmaCalculateStats(
8023  VmaAllocator allocator,
8024  VmaStats* pStats)
8025 {
8026  VMA_ASSERT(allocator && pStats);
8027  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8028  allocator->CalculateStats(pStats);
8029 }
8030 
8031 #if VMA_STATS_STRING_ENABLED
8032 
8033 void vmaBuildStatsString(
8034  VmaAllocator allocator,
8035  char** ppStatsString,
8036  VkBool32 detailedMap)
8037 {
8038  VMA_ASSERT(allocator && ppStatsString);
8039  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8040 
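// Resulting JSON layout (as produced below): a root object with "Total" stats,
// one "Heap N" object per memory heap (Size, Flags, optional Stats, and nested
// "Type N" objects with their Flags and Stats), plus the detailed map when
// requested.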
8041  VmaStringBuilder sb(allocator);
8042  {
8043  VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
8044  json.BeginObject();
8045 
8046  VmaStats stats;
8047  allocator->CalculateStats(&stats);
8048 
8049  json.WriteString("Total");
8050  VmaPrintStatInfo(json, stats.total);
8051 
8052  for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
8053  {
8054  json.BeginString("Heap ");
8055  json.ContinueString(heapIndex);
8056  json.EndString();
8057  json.BeginObject();
8058 
8059  json.WriteString("Size");
8060  json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);
8061 
8062  json.WriteString("Flags");
8063  json.BeginArray(true);
8064  if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
8065  {
8066  json.WriteString("DEVICE_LOCAL");
8067  }
8068  json.EndArray();
8069 
8070  if(stats.memoryHeap[heapIndex].blockCount > 0)
8071  {
8072  json.WriteString("Stats");
8073  VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
8074  }
8075 
8076  for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
8077  {
8078  if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
8079  {
8080  json.BeginString("Type ");
8081  json.ContinueString(typeIndex);
8082  json.EndString();
8083 
8084  json.BeginObject();
8085 
8086  json.WriteString("Flags");
8087  json.BeginArray(true);
8088  VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
8089  if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
8090  {
8091  json.WriteString("DEVICE_LOCAL");
8092  }
8093  if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
8094  {
8095  json.WriteString("HOST_VISIBLE");
8096  }
8097  if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
8098  {
8099  json.WriteString("HOST_COHERENT");
8100  }
8101  if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
8102  {
8103  json.WriteString("HOST_CACHED");
8104  }
8105  if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
8106  {
8107  json.WriteString("LAZILY_ALLOCATED");
8108  }
8109  json.EndArray();
8110 
8111  if(stats.memoryType[typeIndex].blockCount > 0)
8112  {
8113  json.WriteString("Stats");
8114  VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
8115  }
8116 
8117  json.EndObject();
8118  }
8119  }
8120 
8121  json.EndObject();
8122  }
8123  if(detailedMap == VK_TRUE)
8124  {
8125  allocator->PrintDetailedMap(json);
8126  }
8127 
8128  json.EndObject();
8129  }
8130 
8131  const size_t len = sb.GetLength();
8132  char* const pChars = vma_new_array(allocator, char, len + 1);
8133  if(len > 0)
8134  {
8135  memcpy(pChars, sb.GetData(), len);
8136  }
8137  pChars[len] = '\0';
8138  *ppStatsString = pChars;
8139 }
8140 
8141 void vmaFreeStatsString(
8142  VmaAllocator allocator,
8143  char* pStatsString)
8144 {
8145  if(pStatsString != VMA_NULL)
8146  {
8147  VMA_ASSERT(allocator);
8148  size_t len = strlen(pStatsString);
8149  vma_delete_array(allocator, pStatsString, len + 1);
8150  }
8151 }
8152 
8153 #endif // #if VMA_STATS_STRING_ENABLED
8154 
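/*
Hypothetical usage sketch (caller-side, not part of the library): find a memory
type for a staging buffer that the CPU writes.

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
    uint32_t memTypeIndex = UINT32_MAX;
    // UINT32_MAX = accept any memory type; it is intersected with
    // allocCreateInfo.memoryTypeBits inside the function.
    VkResult res = vmaFindMemoryTypeIndex(allocator, UINT32_MAX, &allocCreateInfo, &memTypeIndex);
*/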
8155 /*
8156 This function is not protected by any mutex because it just reads immutable data.
8157 */
8158 VkResult vmaFindMemoryTypeIndex(
8159  VmaAllocator allocator,
8160  uint32_t memoryTypeBits,
8161  const VmaAllocationCreateInfo* pAllocationCreateInfo,
8162  uint32_t* pMemoryTypeIndex)
8163 {
8164  VMA_ASSERT(allocator != VK_NULL_HANDLE);
8165  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
8166  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
8167 
8168  if(pAllocationCreateInfo->memoryTypeBits != 0)
8169  {
8170  memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
8171  }
8172 
8173  uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
8174  uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
8175 
8176  // Convert usage to requiredFlags and preferredFlags.
8177  switch(pAllocationCreateInfo->usage)
8178  {
8179  case VMA_MEMORY_USAGE_UNKNOWN:
8180  break;
8181  case VMA_MEMORY_USAGE_GPU_ONLY:
8182  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
8183  break;
8184  case VMA_MEMORY_USAGE_CPU_ONLY:
8185  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
8186  break;
8187  case VMA_MEMORY_USAGE_CPU_TO_GPU:
8188  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
8189  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
8190  break;
8191  case VMA_MEMORY_USAGE_GPU_TO_CPU:
8192  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
8193  preferredFlags |= VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
8194  break;
8195  default:
8196  break;
8197  }
8198 
8199  *pMemoryTypeIndex = UINT32_MAX;
8200  uint32_t minCost = UINT32_MAX;
8201  for(uint32_t memTypeIndex = 0, memTypeBit = 1;
8202  memTypeIndex < allocator->GetMemoryTypeCount();
8203  ++memTypeIndex, memTypeBit <<= 1)
8204  {
8205  // This memory type is acceptable according to memoryTypeBits bitmask.
8206  if((memTypeBit & memoryTypeBits) != 0)
8207  {
8208  const VkMemoryPropertyFlags currFlags =
8209  allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
8210  // This memory type contains requiredFlags.
8211  if((requiredFlags & ~currFlags) == 0)
8212  {
8213  // Calculate cost as number of bits from preferredFlags not present in this memory type.
8214  uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags);
8215  // Remember memory type with lowest cost.
8216  if(currCost < minCost)
8217  {
8218  *pMemoryTypeIndex = memTypeIndex;
8219  if(currCost == 0)
8220  {
8221  return VK_SUCCESS;
8222  }
8223  minCost = currCost;
8224  }
8225  }
8226  }
8227  }
8228  return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
8229 }
8230 
8231 VkResult vmaCreatePool(
8232  VmaAllocator allocator,
8233  const VmaPoolCreateInfo* pCreateInfo,
8234  VmaPool* pPool)
8235 {
8236  VMA_ASSERT(allocator && pCreateInfo && pPool);
8237 
8238  VMA_DEBUG_LOG("vmaCreatePool");
8239 
8240  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8241 
8242  return allocator->CreatePool(pCreateInfo, pPool);
8243 }
8244 
8245 void vmaDestroyPool(
8246  VmaAllocator allocator,
8247  VmaPool pool)
8248 {
8249  VMA_ASSERT(allocator);
8250 
8251  if(pool == VK_NULL_HANDLE)
8252  {
8253  return;
8254  }
8255 
8256  VMA_DEBUG_LOG("vmaDestroyPool");
8257 
8258  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8259 
8260  allocator->DestroyPool(pool);
8261 }
8262 
8263 void vmaGetPoolStats(
8264  VmaAllocator allocator,
8265  VmaPool pool,
8266  VmaPoolStats* pPoolStats)
8267 {
8268  VMA_ASSERT(allocator && pool && pPoolStats);
8269 
8270  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8271 
8272  allocator->GetPoolStats(pool, pPoolStats);
8273 }
8274 
8275 void vmaMakePoolAllocationsLost(
8276  VmaAllocator allocator,
8277  VmaPool pool,
8278  size_t* pLostAllocationCount)
8279 {
8280  VMA_ASSERT(allocator && pool);
8281 
8282  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8283 
8284  allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
8285 }
8286 
8287 VkResult vmaAllocateMemory(
8288  VmaAllocator allocator,
8289  const VkMemoryRequirements* pVkMemoryRequirements,
8290  const VmaAllocationCreateInfo* pCreateInfo,
8291  VmaAllocation* pAllocation,
8292  VmaAllocationInfo* pAllocationInfo)
8293 {
8294  VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
8295 
8296  VMA_DEBUG_LOG("vmaAllocateMemory");
8297 
8298  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8299 
8300  VkResult result = allocator->AllocateMemory(
8301  *pVkMemoryRequirements,
8302  false, // requiresDedicatedAllocation
8303  false, // prefersDedicatedAllocation
8304  VK_NULL_HANDLE, // dedicatedBuffer
8305  VK_NULL_HANDLE, // dedicatedImage
8306  *pCreateInfo,
8307  VMA_SUBALLOCATION_TYPE_UNKNOWN,
8308  pAllocation);
8309 
8310  if(pAllocationInfo && result == VK_SUCCESS)
8311  {
8312  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8313  }
8314 
8315  return result;
8316 }
8317 
8318 VkResult vmaAllocateMemoryForBuffer(
8319  VmaAllocator allocator,
8320  VkBuffer buffer,
8321  const VmaAllocationCreateInfo* pCreateInfo,
8322  VmaAllocation* pAllocation,
8323  VmaAllocationInfo* pAllocationInfo)
8324 {
8325  VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
8326 
8327  VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
8328 
8329  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8330 
8331  VkMemoryRequirements vkMemReq = {};
8332  bool requiresDedicatedAllocation = false;
8333  bool prefersDedicatedAllocation = false;
8334  allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
8335  requiresDedicatedAllocation,
8336  prefersDedicatedAllocation);
8337 
8338  VkResult result = allocator->AllocateMemory(
8339  vkMemReq,
8340  requiresDedicatedAllocation,
8341  prefersDedicatedAllocation,
8342  buffer, // dedicatedBuffer
8343  VK_NULL_HANDLE, // dedicatedImage
8344  *pCreateInfo,
8345  VMA_SUBALLOCATION_TYPE_BUFFER,
8346  pAllocation);
8347 
8348  if(pAllocationInfo && result == VK_SUCCESS)
8349  {
8350  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8351  }
8352 
8353  return result;
8354 }
8355 
8356 VkResult vmaAllocateMemoryForImage(
8357  VmaAllocator allocator,
8358  VkImage image,
8359  const VmaAllocationCreateInfo* pCreateInfo,
8360  VmaAllocation* pAllocation,
8361  VmaAllocationInfo* pAllocationInfo)
8362 {
8363  VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
8364 
8365  VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
8366 
8367  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8368 
8369  VkResult result = AllocateMemoryForImage(
8370  allocator,
8371  image,
8372  pCreateInfo,
8373  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
8374  pAllocation);
8375 
8376  if(pAllocationInfo && result == VK_SUCCESS)
8377  {
8378  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8379  }
8380 
8381  return result;
8382 }
8383 
8384 void vmaFreeMemory(
8385  VmaAllocator allocator,
8386  VmaAllocation allocation)
8387 {
8388  VMA_ASSERT(allocator && allocation);
8389 
8390  VMA_DEBUG_LOG("vmaFreeMemory");
8391 
8392  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8393 
8394  allocator->FreeMemory(allocation);
8395 }
8396 
8397 void vmaGetAllocationInfo(
8398  VmaAllocator allocator,
8399  VmaAllocation allocation,
8400  VmaAllocationInfo* pAllocationInfo)
8401 {
8402  VMA_ASSERT(allocator && allocation && pAllocationInfo);
8403 
8404  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8405 
8406  allocator->GetAllocationInfo(allocation, pAllocationInfo);
8407 }
8408 
8409 void vmaSetAllocationUserData(
8410  VmaAllocator allocator,
8411  VmaAllocation allocation,
8412  void* pUserData)
8413 {
8414  VMA_ASSERT(allocator && allocation);
8415 
8416  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8417 
8418  allocation->SetUserData(allocator, pUserData);
8419 }
8420 
8421 void vmaCreateLostAllocation(
8422  VmaAllocator allocator,
8423  VmaAllocation* pAllocation)
8424 {
8425  VMA_ASSERT(allocator && pAllocation);
8426 
8427  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8428 
8429  allocator->CreateLostAllocation(pAllocation);
8430 }
8431 
8432 VkResult vmaMapMemory(
8433  VmaAllocator allocator,
8434  VmaAllocation allocation,
8435  void** ppData)
8436 {
8437  VMA_ASSERT(allocator && allocation && ppData);
8438 
8439  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8440 
8441  return allocator->Map(allocation, ppData);
8442 }
8443 
8444 void vmaUnmapMemory(
8445  VmaAllocator allocator,
8446  VmaAllocation allocation)
8447 {
8448  VMA_ASSERT(allocator && allocation);
8449 
8450  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8451 
8452  allocator->Unmap(allocation);
8453 }
8454 
8455 VkResult vmaDefragment(
8456  VmaAllocator allocator,
8457  VmaAllocation* pAllocations,
8458  size_t allocationCount,
8459  VkBool32* pAllocationsChanged,
8460  const VmaDefragmentationInfo *pDefragmentationInfo,
8461  VmaDefragmentationStats* pDefragmentationStats)
8462 {
8463  VMA_ASSERT(allocator && pAllocations);
8464 
8465  VMA_DEBUG_LOG("vmaDefragment");
8466 
8467  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8468 
8469  return allocator->Defragment(pAllocations, allocationCount, pAllocationsChanged, pDefragmentationInfo, pDefragmentationStats);
8470 }
8471 
8472 VkResult vmaCreateBuffer(
8473  VmaAllocator allocator,
8474  const VkBufferCreateInfo* pBufferCreateInfo,
8475  const VmaAllocationCreateInfo* pAllocationCreateInfo,
8476  VkBuffer* pBuffer,
8477  VmaAllocation* pAllocation,
8478  VmaAllocationInfo* pAllocationInfo)
8479 {
8480  VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
8481 
8482  VMA_DEBUG_LOG("vmaCreateBuffer");
8483 
8484  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8485 
8486  *pBuffer = VK_NULL_HANDLE;
8487  *pAllocation = VK_NULL_HANDLE;
8488 
8489  // 1. Create VkBuffer.
8490  VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
8491  allocator->m_hDevice,
8492  pBufferCreateInfo,
8493  allocator->GetAllocationCallbacks(),
8494  pBuffer);
8495  if(res >= 0)
8496  {
8497  // 2. vkGetBufferMemoryRequirements.
8498  VkMemoryRequirements vkMemReq = {};
8499  bool requiresDedicatedAllocation = false;
8500  bool prefersDedicatedAllocation = false;
8501  allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
8502  requiresDedicatedAllocation, prefersDedicatedAllocation);
8503 
8504  // Make sure alignment requirements for specific buffer usages reported
8505  // in Physical Device Properties are included in alignment reported by memory requirements.
8506  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0)
8507  {
8508  VMA_ASSERT(vkMemReq.alignment %
8509  allocator->m_PhysicalDeviceProperties.limits.minTexelBufferOffsetAlignment == 0);
8510  }
8511  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0)
8512  {
8513  VMA_ASSERT(vkMemReq.alignment %
8514  allocator->m_PhysicalDeviceProperties.limits.minUniformBufferOffsetAlignment == 0);
8515  }
8516  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0)
8517  {
8518  VMA_ASSERT(vkMemReq.alignment %
8519  allocator->m_PhysicalDeviceProperties.limits.minStorageBufferOffsetAlignment == 0);
8520  }
8521 
8522  // 3. Allocate memory using allocator.
8523  res = allocator->AllocateMemory(
8524  vkMemReq,
8525  requiresDedicatedAllocation,
8526  prefersDedicatedAllocation,
8527  *pBuffer, // dedicatedBuffer
8528  VK_NULL_HANDLE, // dedicatedImage
8529  *pAllocationCreateInfo,
8530  VMA_SUBALLOCATION_TYPE_BUFFER,
8531  pAllocation);
8532  if(res >= 0)
8533  {
8534  // 4. Bind buffer with memory.
8535  res = (*allocator->GetVulkanFunctions().vkBindBufferMemory)(
8536  allocator->m_hDevice,
8537  *pBuffer,
8538  (*pAllocation)->GetMemory(),
8539  (*pAllocation)->GetOffset());
8540  if(res >= 0)
8541  {
8542  // All steps succeeded.
8543  if(pAllocationInfo != VMA_NULL)
8544  {
8545  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8546  }
8547  return VK_SUCCESS;
8548  }
8549  allocator->FreeMemory(*pAllocation);
8550  *pAllocation = VK_NULL_HANDLE;
8551  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
8552  *pBuffer = VK_NULL_HANDLE;
8553  return res;
8554  }
8555  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
8556  *pBuffer = VK_NULL_HANDLE;
8557  return res;
8558  }
8559  return res;
8560 }
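// Added usage sketch, not part of the original vk_mem_alloc.h: a persistently
// mapped staging buffer. vmaCreateBuffer() performs steps 1-4 above and unwinds
// on failure, so the caller only has to pair it with vmaDestroyBuffer().
#if 0
static VkResult VmaCreateStagingBufferSketch(VmaAllocator allocator,
    VkDeviceSize size, VkBuffer* pBuffer, VmaAllocation* pAllocation, void** ppMappedData)
{
    VkBufferCreateInfo bufInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufInfo.size = size;
    bufInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

    VmaAllocationCreateInfo allocInfo = {};
    allocInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
    allocInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT; // Persistently mapped.

    VmaAllocationInfo info = {};
    VkResult res = vmaCreateBuffer(allocator, &bufInfo, &allocInfo,
        pBuffer, pAllocation, &info);
    *ppMappedData = info.pMappedData; // Valid for the allocation's whole lifetime.
    return res;
}
#endif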
8561 
8562 void vmaDestroyBuffer(
8563  VmaAllocator allocator,
8564  VkBuffer buffer,
8565  VmaAllocation allocation)
8566 {
8567  if(buffer != VK_NULL_HANDLE)
8568  {
8569  VMA_ASSERT(allocator);
8570 
8571  VMA_DEBUG_LOG("vmaDestroyBuffer");
8572 
8573  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8574 
8575  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
8576 
8577  allocator->FreeMemory(allocation);
8578  }
8579 }
8580 
8581 VkResult vmaCreateImage(
8582  VmaAllocator allocator,
8583  const VkImageCreateInfo* pImageCreateInfo,
8584  const VmaAllocationCreateInfo* pAllocationCreateInfo,
8585  VkImage* pImage,
8586  VmaAllocation* pAllocation,
8587  VmaAllocationInfo* pAllocationInfo)
8588 {
8589  VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
8590 
8591  VMA_DEBUG_LOG("vmaCreateImage");
8592 
8593  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8594 
8595  *pImage = VK_NULL_HANDLE;
8596  *pAllocation = VK_NULL_HANDLE;
8597 
8598  // 1. Create VkImage.
8599  VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
8600  allocator->m_hDevice,
8601  pImageCreateInfo,
8602  allocator->GetAllocationCallbacks(),
8603  pImage);
8604  if(res >= 0)
8605  {
8606  VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
8607  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
8608  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
8609 
8610  // 2. Allocate memory using allocator.
8611  res = AllocateMemoryForImage(allocator, *pImage, pAllocationCreateInfo, suballocType, pAllocation);
8612  if(res >= 0)
8613  {
8614  // 3. Bind image with memory.
8615  res = (*allocator->GetVulkanFunctions().vkBindImageMemory)(
8616  allocator->m_hDevice,
8617  *pImage,
8618  (*pAllocation)->GetMemory(),
8619  (*pAllocation)->GetOffset());
8620  if(res >= 0)
8621  {
8622  // All steps succeeded.
8623  if(pAllocationInfo != VMA_NULL)
8624  {
8625  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8626  }
8627  return VK_SUCCESS;
8628  }
8629  allocator->FreeMemory(*pAllocation);
8630  *pAllocation = VK_NULL_HANDLE;
8631  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
8632  *pImage = VK_NULL_HANDLE;
8633  return res;
8634  }
8635  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
8636  *pImage = VK_NULL_HANDLE;
8637  return res;
8638  }
8639  return res;
8640 }
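// Added usage sketch, not part of the original vk_mem_alloc.h: creating a
// sampled 2D texture. VK_IMAGE_TILING_OPTIMAL selects
// VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL above, which is what the
// bufferImageGranularity conflict check keys on.
#if 0
static VkResult VmaCreateTextureSketch(VmaAllocator allocator,
    uint32_t width, uint32_t height, VkImage* pImage, VmaAllocation* pAllocation)
{
    VkImageCreateInfo imgInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
    imgInfo.imageType = VK_IMAGE_TYPE_2D;
    imgInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
    imgInfo.extent.width = width;
    imgInfo.extent.height = height;
    imgInfo.extent.depth = 1;
    imgInfo.mipLevels = 1;
    imgInfo.arrayLayers = 1;
    imgInfo.samples = VK_SAMPLE_COUNT_1_BIT;
    imgInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
    imgInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;

    VmaAllocationCreateInfo allocInfo = {};
    allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;

    return vmaCreateImage(allocator, &imgInfo, &allocInfo, pImage, pAllocation, VMA_NULL);
}
#endif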
8641 
8642 void vmaDestroyImage(
8643  VmaAllocator allocator,
8644  VkImage image,
8645  VmaAllocation allocation)
8646 {
8647  if(image != VK_NULL_HANDLE)
8648  {
8649  VMA_ASSERT(allocator);
8650 
8651  VMA_DEBUG_LOG("vmaDestroyImage");
8652 
8653  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8654 
8655  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
8656 
8657  allocator->FreeMemory(allocation);
8658  }
8659 }
8660 
8661 #endif // #ifdef VMA_IMPLEMENTATION
1 //
2 // Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
3 //
4 // Permission is hereby granted, free of charge, to any person obtaining a copy
5 // of this software and associated documentation files (the "Software"), to deal
6 // in the Software without restriction, including without limitation the rights
7 // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 // copies of the Software, and to permit persons to whom the Software is
9 // furnished to do so, subject to the following conditions:
10 //
11 // The above copyright notice and this permission notice shall be included in
12 // all copies or substantial portions of the Software.
13 //
14 // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
20 // THE SOFTWARE.
21 //
22 
23 #ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
24 #define AMD_VULKAN_MEMORY_ALLOCATOR_H
25 
26 #ifdef __cplusplus
27 extern "C" {
28 #endif
29 
777 #include <vulkan/vulkan.h>
778 
779 VK_DEFINE_HANDLE(VmaAllocator)
780 
781 typedef void (VKAPI_PTR *PFN_vmaAllocateDeviceMemoryFunction)(
783  VmaAllocator allocator,
784  uint32_t memoryType,
785  VkDeviceMemory memory,
786  VkDeviceSize size);
788 typedef void (VKAPI_PTR *PFN_vmaFreeDeviceMemoryFunction)(
789  VmaAllocator allocator,
790  uint32_t memoryType,
791  VkDeviceMemory memory,
792  VkDeviceSize size);
793 
801 typedef struct VmaDeviceMemoryCallbacks {
807 
837 
840 typedef VkFlags VmaAllocatorCreateFlags;
841 
846 typedef struct VmaVulkanFunctions {
847  PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
848  PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
849  PFN_vkAllocateMemory vkAllocateMemory;
850  PFN_vkFreeMemory vkFreeMemory;
851  PFN_vkMapMemory vkMapMemory;
852  PFN_vkUnmapMemory vkUnmapMemory;
853  PFN_vkBindBufferMemory vkBindBufferMemory;
854  PFN_vkBindImageMemory vkBindImageMemory;
855  PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
856  PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
857  PFN_vkCreateBuffer vkCreateBuffer;
858  PFN_vkDestroyBuffer vkDestroyBuffer;
859  PFN_vkCreateImage vkCreateImage;
860  PFN_vkDestroyImage vkDestroyImage;
861  PFN_vkGetBufferMemoryRequirements2KHR vkGetBufferMemoryRequirements2KHR;
862  PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
864 
866 typedef struct VmaAllocatorCreateInfo
867 {
869  VmaAllocatorCreateFlags flags;
871 
872  VkPhysicalDevice physicalDevice;
874 
875  VkDevice device;
877 
880 
881  const VkAllocationCallbacks* pAllocationCallbacks;
883 
898  uint32_t frameInUseCount;
922  const VkDeviceSize* pHeapSizeLimit;
936 
938 VkResult vmaCreateAllocator(
939  const VmaAllocatorCreateInfo* pCreateInfo,
940  VmaAllocator* pAllocator);
941 
943 void vmaDestroyAllocator(
944  VmaAllocator allocator);
945 
950 void vmaGetPhysicalDeviceProperties(
951  VmaAllocator allocator,
952  const VkPhysicalDeviceProperties** ppPhysicalDeviceProperties);
953 
958 void vmaGetMemoryProperties(
959  VmaAllocator allocator,
960  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties);
961 
968 void vmaGetMemoryTypeProperties(
969  VmaAllocator allocator,
970  uint32_t memoryTypeIndex,
971  VkMemoryPropertyFlags* pFlags);
972 
981 void vmaSetCurrentFrameIndex(
982  VmaAllocator allocator,
983  uint32_t frameIndex);
984 
987 typedef struct VmaStatInfo
988 {
990  uint32_t blockCount;
992  uint32_t allocationCount;
996  VkDeviceSize usedBytes;
998  VkDeviceSize unusedBytes;
999  VkDeviceSize allocationSizeMin, allocationSizeAvg, allocationSizeMax;
1000  VkDeviceSize unusedRangeSizeMin, unusedRangeSizeAvg, unusedRangeSizeMax;
1001 } VmaStatInfo;
1002 
1004 typedef struct VmaStats
1005 {
1006  VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES];
1007  VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS];
1009 } VmaStats;
1010 
1012 void vmaCalculateStats(
1013  VmaAllocator allocator,
1014  VmaStats* pStats);
1015 
1016 #define VMA_STATS_STRING_ENABLED 1
1017 
1018 #if VMA_STATS_STRING_ENABLED
1019 
1021 
1023 void vmaBuildStatsString(
1024  VmaAllocator allocator,
1025  char** ppStatsString,
1026  VkBool32 detailedMap);
1027 
1028 void vmaFreeStatsString(
1029  VmaAllocator allocator,
1030  char* pStatsString);
1031 
1032 #endif // #if VMA_STATS_STRING_ENABLED
1033 
1034 VK_DEFINE_HANDLE(VmaPool)
1035 
1036 typedef enum VmaMemoryUsage
1037 {
1086 } VmaMemoryUsage;
1087 
1102 
1152 
1156 
1157 typedef struct VmaAllocationCreateInfo
1158 {
1160  VmaAllocationCreateFlags flags;
1171  VkMemoryPropertyFlags requiredFlags;
1176  VkMemoryPropertyFlags preferredFlags;
1184  uint32_t memoryTypeBits;
1190  VmaPool pool;
1197  void* pUserData;
1199 
1214 VkResult vmaFindMemoryTypeIndex(
1215  VmaAllocator allocator,
1216  uint32_t memoryTypeBits,
1217  const VmaAllocationCreateInfo* pAllocationCreateInfo,
1218  uint32_t* pMemoryTypeIndex);
1219 
1240 
1243 typedef VkFlags VmaPoolCreateFlags;
1244 
1247 typedef struct VmaPoolCreateInfo {
1253  VmaPoolCreateFlags flags;
1258  VkDeviceSize blockSize;
1287 
1290 typedef struct VmaPoolStats {
1293  VkDeviceSize size;
1296  VkDeviceSize unusedSize;
1309  VkDeviceSize unusedRangeSizeMax;
1310 } VmaPoolStats;
1311 
1318 VkResult vmaCreatePool(
1319  VmaAllocator allocator,
1320  const VmaPoolCreateInfo* pCreateInfo,
1321  VmaPool* pPool);
1322 
1325 void vmaDestroyPool(
1326  VmaAllocator allocator,
1327  VmaPool pool);
1328 
1335 void vmaGetPoolStats(
1336  VmaAllocator allocator,
1337  VmaPool pool,
1338  VmaPoolStats* pPoolStats);
1339 
1346 void vmaMakePoolAllocationsLost(
1347  VmaAllocator allocator,
1348  VmaPool pool,
1349  size_t* pLostAllocationCount);
1350 
1351 VK_DEFINE_HANDLE(VmaAllocation)
1352 
1353 
1355 typedef struct VmaAllocationInfo {
1360  uint32_t memoryType;
1369  VkDeviceMemory deviceMemory;
1374  VkDeviceSize offset;
1379  VkDeviceSize size;
1393  void* pUserData;
1395 
1406 VkResult vmaAllocateMemory(
1407  VmaAllocator allocator,
1408  const VkMemoryRequirements* pVkMemoryRequirements,
1409  const VmaAllocationCreateInfo* pCreateInfo,
1410  VmaAllocation* pAllocation,
1411  VmaAllocationInfo* pAllocationInfo);
1412 
1419 VkResult vmaAllocateMemoryForBuffer(
1420  VmaAllocator allocator,
1421  VkBuffer buffer,
1422  const VmaAllocationCreateInfo* pCreateInfo,
1423  VmaAllocation* pAllocation,
1424  VmaAllocationInfo* pAllocationInfo);
1425 
1427 VkResult vmaAllocateMemoryForImage(
1428  VmaAllocator allocator,
1429  VkImage image,
1430  const VmaAllocationCreateInfo* pCreateInfo,
1431  VmaAllocation* pAllocation,
1432  VmaAllocationInfo* pAllocationInfo);
1433 
1435 void vmaFreeMemory(
1436  VmaAllocator allocator,
1437  VmaAllocation allocation);
1438 
1440 void vmaGetAllocationInfo(
1441  VmaAllocator allocator,
1442  VmaAllocation allocation,
1443  VmaAllocationInfo* pAllocationInfo);
1444 
1458 void vmaSetAllocationUserData(
1459  VmaAllocator allocator,
1460  VmaAllocation allocation,
1461  void* pUserData);
1462 
1473 void vmaCreateLostAllocation(
1474  VmaAllocator allocator,
1475  VmaAllocation* pAllocation);
1476 
1511 VkResult vmaMapMemory(
1512  VmaAllocator allocator,
1513  VmaAllocation allocation,
1514  void** ppData);
1515 
1520 void vmaUnmapMemory(
1521  VmaAllocator allocator,
1522  VmaAllocation allocation);
1523 
1525 typedef struct VmaDefragmentationInfo {
1530  VkDeviceSize maxBytesToMove;
1537 
1539 typedef struct VmaDefragmentationStats {
1541  VkDeviceSize bytesMoved;
1543  VkDeviceSize bytesFreed;
1549 
1626 VkResult vmaDefragment(
1627  VmaAllocator allocator,
1628  VmaAllocation* pAllocations,
1629  size_t allocationCount,
1630  VkBool32* pAllocationsChanged,
1631  const VmaDefragmentationInfo *pDefragmentationInfo,
1632  VmaDefragmentationStats* pDefragmentationStats);
1633 
1660 VkResult vmaCreateBuffer(
1661  VmaAllocator allocator,
1662  const VkBufferCreateInfo* pBufferCreateInfo,
1663  const VmaAllocationCreateInfo* pAllocationCreateInfo,
1664  VkBuffer* pBuffer,
1665  VmaAllocation* pAllocation,
1666  VmaAllocationInfo* pAllocationInfo);
1667 
1679 void vmaDestroyBuffer(
1680  VmaAllocator allocator,
1681  VkBuffer buffer,
1682  VmaAllocation allocation);
1683 
1685 VkResult vmaCreateImage(
1686  VmaAllocator allocator,
1687  const VkImageCreateInfo* pImageCreateInfo,
1688  const VmaAllocationCreateInfo* pAllocationCreateInfo,
1689  VkImage* pImage,
1690  VmaAllocation* pAllocation,
1691  VmaAllocationInfo* pAllocationInfo);
1692 
1704 void vmaDestroyImage(
1705  VmaAllocator allocator,
1706  VkImage image,
1707  VmaAllocation allocation);
1708 
1709 #ifdef __cplusplus
1710 }
1711 #endif
1712 
1713 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
1714 
1715 // For Visual Studio IntelliSense.
1716 #ifdef __INTELLISENSE__
1717 #define VMA_IMPLEMENTATION
1718 #endif
1719 
1720 #ifdef VMA_IMPLEMENTATION
1721 #undef VMA_IMPLEMENTATION
1722 
1723 #include <cstdint>
1724 #include <cstdlib>
1725 #include <cstring>
1726 
1727 /*******************************************************************************
1728 CONFIGURATION SECTION
1729 
1730 Define some of these macros before each #include of this header or change them
1731 here if you need other than the default behavior, depending on your environment.
1732 */
1733 
1734 /*
1735 Define this macro to 1 to make the library fetch pointers to Vulkan functions
1736 internally, like:
1737 
1738  vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
1739 
1740 Define to 0 if you are going to provide your own pointers to Vulkan functions via
1741 VmaAllocatorCreateInfo::pVulkanFunctions.
1742 */
1743 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
1744 #define VMA_STATIC_VULKAN_FUNCTIONS 1
1745 #endif
1746 
1747 // Define this macro to 1 to make the library use STL containers instead of its own implementation.
1748 //#define VMA_USE_STL_CONTAINERS 1
1749 
1750 /* Set this macro to 1 to make the library include and use STL containers:
1751 std::pair, std::vector, std::list, std::unordered_map.
1752 
1753 Set it to 0 or leave it undefined to make the library use its own implementation
1754 of the containers.
1755 */
1756 #if VMA_USE_STL_CONTAINERS
1757  #define VMA_USE_STL_VECTOR 1
1758  #define VMA_USE_STL_UNORDERED_MAP 1
1759  #define VMA_USE_STL_LIST 1
1760 #endif
1761 
1762 #if VMA_USE_STL_VECTOR
1763  #include <vector>
1764 #endif
1765 
1766 #if VMA_USE_STL_UNORDERED_MAP
1767  #include <unordered_map>
1768 #endif
1769 
1770 #if VMA_USE_STL_LIST
1771  #include <list>
1772 #endif
1773 
1774 /*
1775 The following headers are used in this CONFIGURATION section only, so feel free to
1776 remove them if not needed.
1777 */
1778 #include <cassert> // for assert
1779 #include <algorithm> // for min, max
1780 #include <mutex> // for std::mutex
1781 #include <atomic> // for std::atomic
1782 
1783 #if !defined(_WIN32) && !defined(__APPLE__)
1784  #include <malloc.h> // for aligned_alloc()
1785 #endif
1786 
1787 #if defined(__APPLE__)
1788 #include <cstdlib>
1789 void *aligned_alloc(size_t alignment, size_t size)
1790 {
1791  // alignment must be >= sizeof(void*)
1792  if(alignment < sizeof(void*))
1793  {
1794  alignment = sizeof(void*);
1795  }
1796 
1797  void *pointer;
1798  if(posix_memalign(&pointer, alignment, size) == 0)
1799  return pointer;
1800  return VMA_NULL;
1801 }
1802 #endif
1803 
1804 // Normal assert to check for programmer's errors, especially in Debug configuration.
1805 #ifndef VMA_ASSERT
1806  #ifdef _DEBUG
1807  #define VMA_ASSERT(expr) assert(expr)
1808  #else
1809  #define VMA_ASSERT(expr)
1810  #endif
1811 #endif
1812 
1813 // Assert that will be called very often, like inside data structures e.g. operator[].
1814 // Making it non-empty can make the program slow.
1815 #ifndef VMA_HEAVY_ASSERT
1816  #ifdef _DEBUG
1817  #define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
1818  #else
1819  #define VMA_HEAVY_ASSERT(expr)
1820  #endif
1821 #endif
1822 
1823 #ifndef VMA_NULL
1824  // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
1825  #define VMA_NULL nullptr
1826 #endif
1827 
1828 #ifndef VMA_ALIGN_OF
1829  #define VMA_ALIGN_OF(type) (__alignof(type))
1830 #endif
1831 
1832 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
1833  #if defined(_WIN32)
1834  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (_aligned_malloc((size), (alignment)))
1835  #else
1836  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (aligned_alloc((alignment), (size) ))
1837  #endif
1838 #endif
1839 
1840 #ifndef VMA_SYSTEM_FREE
1841  #if defined(_WIN32)
1842  #define VMA_SYSTEM_FREE(ptr) _aligned_free(ptr)
1843  #else
1844  #define VMA_SYSTEM_FREE(ptr) free(ptr)
1845  #endif
1846 #endif
1847 
1848 #ifndef VMA_MIN
1849  #define VMA_MIN(v1, v2) (std::min((v1), (v2)))
1850 #endif
1851 
1852 #ifndef VMA_MAX
1853  #define VMA_MAX(v1, v2) (std::max((v1), (v2)))
1854 #endif
1855 
1856 #ifndef VMA_SWAP
1857  #define VMA_SWAP(v1, v2) std::swap((v1), (v2))
1858 #endif
1859 
1860 #ifndef VMA_SORT
1861  #define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
1862 #endif
1863 
1864 #ifndef VMA_DEBUG_LOG
1865  #define VMA_DEBUG_LOG(format, ...)
1866  /*
1867  #define VMA_DEBUG_LOG(format, ...) do { \
1868  printf(format, __VA_ARGS__); \
1869  printf("\n"); \
1870  } while(false)
1871  */
1872 #endif
1873 
1874 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
1875 #if VMA_STATS_STRING_ENABLED
1876  static inline void VmaUint32ToStr(char* outStr, size_t strLen, uint32_t num)
1877  {
1878  snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
1879  }
1880  static inline void VmaUint64ToStr(char* outStr, size_t strLen, uint64_t num)
1881  {
1882  snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
1883  }
1884  static inline void VmaPtrToStr(char* outStr, size_t strLen, const void* ptr)
1885  {
1886  snprintf(outStr, strLen, "%p", ptr);
1887  }
1888 #endif
1889 
1890 #ifndef VMA_MUTEX
1891  class VmaMutex
1892  {
1893  public:
1894  VmaMutex() { }
1895  ~VmaMutex() { }
1896  void Lock() { m_Mutex.lock(); }
1897  void Unlock() { m_Mutex.unlock(); }
1898  private:
1899  std::mutex m_Mutex;
1900  };
1901  #define VMA_MUTEX VmaMutex
1902 #endif
1903 
1904 /*
1905 If providing your own implementation, you need to implement a subset of std::atomic:
1906 
1907 - Constructor(uint32_t desired)
1908 - uint32_t load() const
1909 - void store(uint32_t desired)
1910 - bool compare_exchange_weak(uint32_t& expected, uint32_t desired)
1911 */
1912 #ifndef VMA_ATOMIC_UINT32
1913  #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
1914 #endif
1915 
1916 #ifndef VMA_BEST_FIT
1917 
1929  #define VMA_BEST_FIT (1)
1930 #endif
1931 
1932 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
1933 
1937  #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
1938 #endif
1939 
1940 #ifndef VMA_DEBUG_ALIGNMENT
1941 
1945  #define VMA_DEBUG_ALIGNMENT (1)
1946 #endif
1947 
1948 #ifndef VMA_DEBUG_MARGIN
1949 
1953  #define VMA_DEBUG_MARGIN (0)
1954 #endif
1955 
1956 #ifndef VMA_DEBUG_GLOBAL_MUTEX
1957 
1961  #define VMA_DEBUG_GLOBAL_MUTEX (0)
1962 #endif
1963 
1964 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
1965 
1969  #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
1970 #endif
1971 
1972 #ifndef VMA_SMALL_HEAP_MAX_SIZE
1973  #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
1975 #endif
1976 
1977 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
1978  #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
1980 #endif
1981 
1982 static const uint32_t VMA_FRAME_INDEX_LOST = UINT32_MAX;
1983 
1984 /*******************************************************************************
1985 END OF CONFIGURATION
1986 */
1987 
1988 static VkAllocationCallbacks VmaEmptyAllocationCallbacks = {
1989  VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
1990 
1991 // Returns number of bits set to 1 in (v).
1992 static inline uint32_t VmaCountBitsSet(uint32_t v)
1993 {
1994  uint32_t c = v - ((v >> 1) & 0x55555555);
1995  c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
1996  c = ((c >> 4) + c) & 0x0F0F0F0F;
1997  c = ((c >> 8) + c) & 0x00FF00FF;
1998  c = ((c >> 16) + c) & 0x0000FFFF;
1999  return c;
2000 }
2001 
2002 // Aligns given value up to the nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
2003 // Use types like uint32_t, uint64_t as T.
2004 template <typename T>
2005 static inline T VmaAlignUp(T val, T align)
2006 {
2007  return (val + align - 1) / align * align;
2008 }
2009 
2010 // Division with mathematical rounding to nearest number.
2011 template <typename T>
2012 inline T VmaRoundDiv(T x, T y)
2013 {
2014  return (x + (y / (T)2)) / y;
2015 }
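// Added sketch, not part of the original vk_mem_alloc.h: expected behavior of
// the small math helpers above, kept under #if 0 so it cannot affect builds.
#if 0
static void VmaTestMathHelpers()
{
    VMA_ASSERT(VmaCountBitsSet(0xF0F0F0F0u) == 16); // SWAR popcount.
    VMA_ASSERT(VmaAlignUp<uint32_t>(11, 8) == 16);  // Rounds up to a multiple of align.
    VMA_ASSERT(VmaAlignUp<uint32_t>(16, 8) == 16);  // Already aligned value is unchanged.
    VMA_ASSERT(VmaRoundDiv<uint32_t>(7, 2) == 4);   // 3.5 rounds up to 4.
    VMA_ASSERT(VmaRoundDiv<uint32_t>(5, 2) == 3);   // 2.5 rounds up to 3 (round half up).
}
#endif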
2016 
2017 #ifndef VMA_SORT
2018 
2019 template<typename Iterator, typename Compare>
2020 Iterator VmaQuickSortPartition(Iterator beg, Iterator end, Compare cmp)
2021 {
2022  Iterator centerValue = end; --centerValue;
2023  Iterator insertIndex = beg;
2024  for(Iterator memTypeIndex = beg; memTypeIndex < centerValue; ++memTypeIndex)
2025  {
2026  if(cmp(*memTypeIndex, *centerValue))
2027  {
2028  if(insertIndex != memTypeIndex)
2029  {
2030  VMA_SWAP(*memTypeIndex, *insertIndex);
2031  }
2032  ++insertIndex;
2033  }
2034  }
2035  if(insertIndex != centerValue)
2036  {
2037  VMA_SWAP(*insertIndex, *centerValue);
2038  }
2039  return insertIndex;
2040 }
2041 
2042 template<typename Iterator, typename Compare>
2043 void VmaQuickSort(Iterator beg, Iterator end, Compare cmp)
2044 {
2045  if(beg < end)
2046  {
2047  Iterator it = VmaQuickSortPartition<Iterator, Compare>(beg, end, cmp);
2048  VmaQuickSort<Iterator, Compare>(beg, it, cmp);
2049  VmaQuickSort<Iterator, Compare>(it + 1, end, cmp);
2050  }
2051 }
2052 
2053 #define VMA_SORT(beg, end, cmp) VmaQuickSort(beg, end, cmp)
2054 
2055 #endif // #ifndef VMA_SORT
2056 
2057 /*
2058 Returns true if two memory blocks occupy overlapping pages.
2059 ResourceA must be at a lower memory offset than ResourceB.
2060 
2061 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
2062 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
2063 */
2064 static inline bool VmaBlocksOnSamePage(
2065  VkDeviceSize resourceAOffset,
2066  VkDeviceSize resourceASize,
2067  VkDeviceSize resourceBOffset,
2068  VkDeviceSize pageSize)
2069 {
2070  VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
2071  VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
2072  VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
2073  VkDeviceSize resourceBStart = resourceBOffset;
2074  VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
2075  return resourceAEndPage == resourceBStartPage;
2076 }
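// Added sketch, not part of the original vk_mem_alloc.h: with pageSize = 4096,
// a resource occupying [0, 4000) shares its last page with a resource starting
// at offset 4095, but not with one starting exactly at 4096.
#if 0
static void VmaTestBlocksOnSamePage()
{
    VMA_ASSERT(VmaBlocksOnSamePage(0, 4000, 4095, 4096) == true);
    VMA_ASSERT(VmaBlocksOnSamePage(0, 4000, 4096, 4096) == false);
}
#endif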
2077 
2078 enum VmaSuballocationType
2079 {
2080  VMA_SUBALLOCATION_TYPE_FREE = 0,
2081  VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
2082  VMA_SUBALLOCATION_TYPE_BUFFER = 2,
2083  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
2084  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
2085  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
2086  VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
2087 };
2088 
2089 /*
2090 Returns true if given suballocation types could conflict and must respect
2091 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is a buffer
2092 or linear image and the other is an optimal image. If a type is unknown, behave
2093 conservatively.
2094 */
2095 static inline bool VmaIsBufferImageGranularityConflict(
2096  VmaSuballocationType suballocType1,
2097  VmaSuballocationType suballocType2)
2098 {
2099  if(suballocType1 > suballocType2)
2100  {
2101  VMA_SWAP(suballocType1, suballocType2);
2102  }
2103 
2104  switch(suballocType1)
2105  {
2106  case VMA_SUBALLOCATION_TYPE_FREE:
2107  return false;
2108  case VMA_SUBALLOCATION_TYPE_UNKNOWN:
2109  return true;
2110  case VMA_SUBALLOCATION_TYPE_BUFFER:
2111  return
2112  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
2113  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
2114  case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
2115  return
2116  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
2117  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
2118  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
2119  case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
2120  return
2121  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
2122  case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
2123  return false;
2124  default:
2125  VMA_ASSERT(0);
2126  return true;
2127  }
2128 }
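// Added sketch, not part of the original vk_mem_alloc.h: a buffer placed next
// to an optimal-tiling image conflicts and must be separated by
// bufferImageGranularity; two buffers never conflict.
#if 0
static void VmaTestGranularityConflict()
{
    VMA_ASSERT(VmaIsBufferImageGranularityConflict(
        VMA_SUBALLOCATION_TYPE_BUFFER, VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL) == true);
    VMA_ASSERT(VmaIsBufferImageGranularityConflict(
        VMA_SUBALLOCATION_TYPE_BUFFER, VMA_SUBALLOCATION_TYPE_BUFFER) == false);
}
#endif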
2129 
2130 // Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
2131 struct VmaMutexLock
2132 {
2133 public:
2134  VmaMutexLock(VMA_MUTEX& mutex, bool useMutex) :
2135  m_pMutex(useMutex ? &mutex : VMA_NULL)
2136  {
2137  if(m_pMutex)
2138  {
2139  m_pMutex->Lock();
2140  }
2141  }
2142 
2143  ~VmaMutexLock()
2144  {
2145  if(m_pMutex)
2146  {
2147  m_pMutex->Unlock();
2148  }
2149  }
2150 
2151 private:
2152  VMA_MUTEX* m_pMutex;
2153 };
2154 
2155 #if VMA_DEBUG_GLOBAL_MUTEX
2156  static VMA_MUTEX gDebugGlobalMutex;
2157  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
2158 #else
2159  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
2160 #endif
2161 
2162 // Minimum size of a free suballocation to register it in the free suballocation collection.
2163 static const VkDeviceSize VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER = 16;
2164 
2165 /*
2166 Performs binary search and returns iterator to first element that is greater or
2167 equal to (key), according to comparison (cmp).
2168 
2169 Cmp should return true if first argument is less than second argument.
2170 
2171 The returned iterator is the found element, if present in the collection, or the
2172 position where a new element with value (key) should be inserted.
2173 */
2174 template <typename IterT, typename KeyT, typename CmpT>
2175 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT &key, CmpT cmp)
2176 {
2177  size_t down = 0, up = (end - beg);
2178  while(down < up)
2179  {
2180  const size_t mid = (down + up) / 2;
2181  if(cmp(*(beg+mid), key))
2182  {
2183  down = mid + 1;
2184  }
2185  else
2186  {
2187  up = mid;
2188  }
2189  }
2190  return beg + down;
2191 }
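// Added sketch, not part of the original vk_mem_alloc.h: lower_bound-style
// search over a sorted range, with cmp acting like operator<.
#if 0
struct VmaExampleUint32Less
{
    bool operator()(uint32_t lhs, uint32_t rhs) const { return lhs < rhs; }
};
static void VmaTestBinaryFindFirstNotLess()
{
    static const uint32_t arr[] = { 1, 3, 3, 7 };
    const uint32_t* it = VmaBinaryFindFirstNotLess(arr, arr + 4, 3u, VmaExampleUint32Less());
    VMA_ASSERT(it - arr == 1); // First element not less than 3.
    it = VmaBinaryFindFirstNotLess(arr, arr + 4, 4u, VmaExampleUint32Less());
    VMA_ASSERT(it - arr == 3); // Insertion position for a missing key.
}
#endif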
2192 
2194 // Memory allocation
2195 
2196 static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
2197 {
2198  if((pAllocationCallbacks != VMA_NULL) &&
2199  (pAllocationCallbacks->pfnAllocation != VMA_NULL))
2200  {
2201  return (*pAllocationCallbacks->pfnAllocation)(
2202  pAllocationCallbacks->pUserData,
2203  size,
2204  alignment,
2205  VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
2206  }
2207  else
2208  {
2209  return VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
2210  }
2211 }
2212 
2213 static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
2214 {
2215  if((pAllocationCallbacks != VMA_NULL) &&
2216  (pAllocationCallbacks->pfnFree != VMA_NULL))
2217  {
2218  (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
2219  }
2220  else
2221  {
2222  VMA_SYSTEM_FREE(ptr);
2223  }
2224 }
2225 
2226 template<typename T>
2227 static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
2228 {
2229  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
2230 }
2231 
2232 template<typename T>
2233 static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
2234 {
2235  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
2236 }
2237 
2238 #define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
2239 
2240 #define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
2241 
2242 template<typename T>
2243 static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
2244 {
2245  ptr->~T();
2246  VmaFree(pAllocationCallbacks, ptr);
2247 }
2248 
2249 template<typename T>
2250 static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
2251 {
2252  if(ptr != VMA_NULL)
2253  {
2254  for(size_t i = count; i--; )
2255  {
2256  ptr[i].~T();
2257  }
2258  VmaFree(pAllocationCallbacks, ptr);
2259  }
2260 }
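// Added sketch, not part of the original vk_mem_alloc.h: vma_new/vma_delete
// pair placement-new over VmaMalloc with an explicit destructor call plus
// VmaFree, so user-provided VkAllocationCallbacks are honored for internal
// objects as well.
#if 0
static void VmaTestNewDelete(const VkAllocationCallbacks* pAllocationCallbacks)
{
    VMA_MUTEX* pMutex = vma_new(pAllocationCallbacks, VMA_MUTEX)();
    pMutex->Lock();
    pMutex->Unlock();
    vma_delete(pAllocationCallbacks, pMutex);
}
#endif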
2261 
2262 // STL-compatible allocator.
2263 template<typename T>
2264 class VmaStlAllocator
2265 {
2266 public:
2267  const VkAllocationCallbacks* const m_pCallbacks;
2268  typedef T value_type;
2269 
2270  VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) { }
2271  template<typename U> VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) { }
2272 
2273  T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
2274  void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
2275 
2276  template<typename U>
2277  bool operator==(const VmaStlAllocator<U>& rhs) const
2278  {
2279  return m_pCallbacks == rhs.m_pCallbacks;
2280  }
2281  template<typename U>
2282  bool operator!=(const VmaStlAllocator<U>& rhs) const
2283  {
2284  return m_pCallbacks != rhs.m_pCallbacks;
2285  }
2286 
2287  VmaStlAllocator& operator=(const VmaStlAllocator& x) = delete;
2288 };
2289 
2290 #if VMA_USE_STL_VECTOR
2291 
2292 #define VmaVector std::vector
2293 
2294 template<typename T, typename allocatorT>
2295 static void VmaVectorInsert(std::vector<T, allocatorT>& vec, size_t index, const T& item)
2296 {
2297  vec.insert(vec.begin() + index, item);
2298 }
2299 
2300 template<typename T, typename allocatorT>
2301 static void VmaVectorRemove(std::vector<T, allocatorT>& vec, size_t index)
2302 {
2303  vec.erase(vec.begin() + index);
2304 }
2305 
2306 #else // #if VMA_USE_STL_VECTOR
2307 
2308 /* Class with interface compatible with subset of std::vector.
2309 T must be POD because constructors and destructors are not called and memcpy is
2310 used for these objects. */
2311 template<typename T, typename AllocatorT>
2312 class VmaVector
2313 {
2314 public:
2315  typedef T value_type;
2316 
2317  VmaVector(const AllocatorT& allocator) :
2318  m_Allocator(allocator),
2319  m_pArray(VMA_NULL),
2320  m_Count(0),
2321  m_Capacity(0)
2322  {
2323  }
2324 
2325  VmaVector(size_t count, const AllocatorT& allocator) :
2326  m_Allocator(allocator),
2327  m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
2328  m_Count(count),
2329  m_Capacity(count)
2330  {
2331  }
2332 
2333  VmaVector(const VmaVector<T, AllocatorT>& src) :
2334  m_Allocator(src.m_Allocator),
2335  m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
2336  m_Count(src.m_Count),
2337  m_Capacity(src.m_Count)
2338  {
2339  if(m_Count != 0)
2340  {
2341  memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
2342  }
2343  }
2344 
2345  ~VmaVector()
2346  {
2347  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
2348  }
2349 
2350  VmaVector& operator=(const VmaVector<T, AllocatorT>& rhs)
2351  {
2352  if(&rhs != this)
2353  {
2354  resize(rhs.m_Count);
2355  if(m_Count != 0)
2356  {
2357  memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
2358  }
2359  }
2360  return *this;
2361  }
2362 
2363  bool empty() const { return m_Count == 0; }
2364  size_t size() const { return m_Count; }
2365  T* data() { return m_pArray; }
2366  const T* data() const { return m_pArray; }
2367 
2368  T& operator[](size_t index)
2369  {
2370  VMA_HEAVY_ASSERT(index < m_Count);
2371  return m_pArray[index];
2372  }
2373  const T& operator[](size_t index) const
2374  {
2375  VMA_HEAVY_ASSERT(index < m_Count);
2376  return m_pArray[index];
2377  }
2378 
2379  T& front()
2380  {
2381  VMA_HEAVY_ASSERT(m_Count > 0);
2382  return m_pArray[0];
2383  }
2384  const T& front() const
2385  {
2386  VMA_HEAVY_ASSERT(m_Count > 0);
2387  return m_pArray[0];
2388  }
2389  T& back()
2390  {
2391  VMA_HEAVY_ASSERT(m_Count > 0);
2392  return m_pArray[m_Count - 1];
2393  }
2394  const T& back() const
2395  {
2396  VMA_HEAVY_ASSERT(m_Count > 0);
2397  return m_pArray[m_Count - 1];
2398  }
2399 
2400  void reserve(size_t newCapacity, bool freeMemory = false)
2401  {
2402  newCapacity = VMA_MAX(newCapacity, m_Count);
2403 
2404  if((newCapacity < m_Capacity) && !freeMemory)
2405  {
2406  newCapacity = m_Capacity;
2407  }
2408 
2409  if(newCapacity != m_Capacity)
2410  {
2411  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
2412  if(m_Count != 0)
2413  {
2414  memcpy(newArray, m_pArray, m_Count * sizeof(T));
2415  }
2416  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
2417  m_Capacity = newCapacity;
2418  m_pArray = newArray;
2419  }
2420  }
2421 
2422  void resize(size_t newCount, bool freeMemory = false)
2423  {
2424  size_t newCapacity = m_Capacity;
2425  if(newCount > m_Capacity)
2426  {
2427  newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
2428  }
2429  else if(freeMemory)
2430  {
2431  newCapacity = newCount;
2432  }
2433 
2434  if(newCapacity != m_Capacity)
2435  {
2436  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
2437  const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
2438  if(elementsToCopy != 0)
2439  {
2440  memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
2441  }
2442  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
2443  m_Capacity = newCapacity;
2444  m_pArray = newArray;
2445  }
2446 
2447  m_Count = newCount;
2448  }
2449 
2450  void clear(bool freeMemory = false)
2451  {
2452  resize(0, freeMemory);
2453  }
2454 
2455  void insert(size_t index, const T& src)
2456  {
2457  VMA_HEAVY_ASSERT(index <= m_Count);
2458  const size_t oldCount = size();
2459  resize(oldCount + 1);
2460  if(index < oldCount)
2461  {
2462  memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
2463  }
2464  m_pArray[index] = src;
2465  }
2466 
2467  void remove(size_t index)
2468  {
2469  VMA_HEAVY_ASSERT(index < m_Count);
2470  const size_t oldCount = size();
2471  if(index < oldCount - 1)
2472  {
2473  memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
2474  }
2475  resize(oldCount - 1);
2476  }
2477 
2478  void push_back(const T& src)
2479  {
2480  const size_t newIndex = size();
2481  resize(newIndex + 1);
2482  m_pArray[newIndex] = src;
2483  }
2484 
2485  void pop_back()
2486  {
2487  VMA_HEAVY_ASSERT(m_Count > 0);
2488  resize(size() - 1);
2489  }
2490 
2491  void push_front(const T& src)
2492  {
2493  insert(0, src);
2494  }
2495 
2496  void pop_front()
2497  {
2498  VMA_HEAVY_ASSERT(m_Count > 0);
2499  remove(0);
2500  }
2501 
2502  typedef T* iterator;
2503 
2504  iterator begin() { return m_pArray; }
2505  iterator end() { return m_pArray + m_Count; }
2506 
2507 private:
2508  AllocatorT m_Allocator;
2509  T* m_pArray;
2510  size_t m_Count;
2511  size_t m_Capacity;
2512 };
2513 
2514 template<typename T, typename allocatorT>
2515 static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
2516 {
2517  vec.insert(index, item);
2518 }
2519 
2520 template<typename T, typename allocatorT>
2521 static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
2522 {
2523  vec.remove(index);
2524 }
2525 
2526 #endif // #if VMA_USE_STL_VECTOR
2527 
2528 template<typename CmpLess, typename VectorT>
2529 size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
2530 {
2531  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
2532  vector.data(),
2533  vector.data() + vector.size(),
2534  value,
2535  CmpLess()) - vector.data();
2536  VmaVectorInsert(vector, indexToInsert, value);
2537  return indexToInsert;
2538 }
2539 
2540 template<typename CmpLess, typename VectorT>
2541 bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
2542 {
2543  CmpLess comparator;
2544  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
2545  vector.begin(),
2546  vector.end(),
2547  value,
2548  comparator);
2549  if((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
2550  {
2551  size_t indexToRemove = it - vector.begin();
2552  VmaVectorRemove(vector, indexToRemove);
2553  return true;
2554  }
2555  return false;
2556 }
2557 
2558 template<typename CmpLess, typename VectorT>
2559 size_t VmaVectorFindSorted(const VectorT& vector, const typename VectorT::value_type& value)
2560 {
2561  CmpLess comparator;
2562  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
2563  vector.data(),
2564  vector.data() + vector.size(),
2565  value,
2566  comparator);
2567  if(it != vector.data() + vector.size() && !comparator(*it, value) && !comparator(value, *it))
2568  {
2569  return it - vector.begin();
2570  }
2571  else
2572  {
2573  return vector.size();
2574  }
2575 }
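// Added sketch, not part of the original vk_mem_alloc.h: the three helpers
// above keep a vector sorted under CmpLess, giving set-like insert/remove
// with an O(log n) search plus an O(n) shift.
#if 0
struct VmaExampleLess
{
    bool operator()(uint32_t lhs, uint32_t rhs) const { return lhs < rhs; }
};
static void VmaTestSortedVector(const VkAllocationCallbacks* pAllocationCallbacks)
{
    VmaVector< uint32_t, VmaStlAllocator<uint32_t> > vec(
        VmaStlAllocator<uint32_t>(pAllocationCallbacks));
    VmaVectorInsertSorted<VmaExampleLess>(vec, 7u);
    VmaVectorInsertSorted<VmaExampleLess>(vec, 3u);
    VMA_ASSERT(vec[0] == 3 && vec[1] == 7); // Kept in ascending order.
    VMA_ASSERT(VmaVectorRemoveSorted<VmaExampleLess>(vec, 3u) == true);
    VMA_ASSERT(vec.size() == 1);
}
#endif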
2576 
2578 // class VmaPoolAllocator
2579 
2580 /*
2581 Allocator for objects of type T using a list of arrays (pools) to speed up
2582 allocation. The number of elements that can be allocated is not bounded, because
2583 the allocator can create multiple blocks.
2584 */
2585 template<typename T>
2586 class VmaPoolAllocator
2587 {
2588 public:
2589  VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock);
2590  ~VmaPoolAllocator();
2591  void Clear();
2592  T* Alloc();
2593  void Free(T* ptr);
2594 
2595 private:
2596  union Item
2597  {
2598  uint32_t NextFreeIndex;
2599  T Value;
2600  };
2601 
2602  struct ItemBlock
2603  {
2604  Item* pItems;
2605  uint32_t FirstFreeIndex;
2606  };
2607 
2608  const VkAllocationCallbacks* m_pAllocationCallbacks;
2609  size_t m_ItemsPerBlock;
2610  VmaVector< ItemBlock, VmaStlAllocator<ItemBlock> > m_ItemBlocks;
2611 
2612  ItemBlock& CreateNewBlock();
2613 };
2614 
2615 template<typename T>
2616 VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock) :
2617  m_pAllocationCallbacks(pAllocationCallbacks),
2618  m_ItemsPerBlock(itemsPerBlock),
2619  m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
2620 {
2621  VMA_ASSERT(itemsPerBlock > 0);
2622 }
2623 
2624 template<typename T>
2625 VmaPoolAllocator<T>::~VmaPoolAllocator()
2626 {
2627  Clear();
2628 }
2629 
2630 template<typename T>
2631 void VmaPoolAllocator<T>::Clear()
2632 {
2633  for(size_t i = m_ItemBlocks.size(); i--; )
2634  vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemsPerBlock);
2635  m_ItemBlocks.clear();
2636 }
2637 
2638 template<typename T>
2639 T* VmaPoolAllocator<T>::Alloc()
2640 {
2641  for(size_t i = m_ItemBlocks.size(); i--; )
2642  {
2643  ItemBlock& block = m_ItemBlocks[i];
2644  // This block has some free items: Use first one.
2645  if(block.FirstFreeIndex != UINT32_MAX)
2646  {
2647  Item* const pItem = &block.pItems[block.FirstFreeIndex];
2648  block.FirstFreeIndex = pItem->NextFreeIndex;
2649  return &pItem->Value;
2650  }
2651  }
2652 
2653  // No block has a free item: Create a new one and use it.
2654  ItemBlock& newBlock = CreateNewBlock();
2655  Item* const pItem = &newBlock.pItems[0];
2656  newBlock.FirstFreeIndex = pItem->NextFreeIndex;
2657  return &pItem->Value;
2658 }
2659 
2660 template<typename T>
2661 void VmaPoolAllocator<T>::Free(T* ptr)
2662 {
2663  // Search all memory blocks to find ptr.
2664  for(size_t i = 0; i < m_ItemBlocks.size(); ++i)
2665  {
2666  ItemBlock& block = m_ItemBlocks[i];
2667 
2668  // Casting to union.
2669  Item* pItemPtr;
2670  memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
2671 
2672  // Check if pItemPtr is in address range of this block.
2673  if((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + m_ItemsPerBlock))
2674  {
2675  const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
2676  pItemPtr->NextFreeIndex = block.FirstFreeIndex;
2677  block.FirstFreeIndex = index;
2678  return;
2679  }
2680  }
2681  VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
2682 }
2683 
2684 template<typename T>
2685 typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
2686 {
2687  ItemBlock newBlock = {
2688  vma_new_array(m_pAllocationCallbacks, Item, m_ItemsPerBlock), 0 };
2689 
2690  m_ItemBlocks.push_back(newBlock);
2691 
2692  // Set up a singly-linked list of all free items in this block.
2693  for(uint32_t i = 0; i < m_ItemsPerBlock - 1; ++i)
2694  newBlock.pItems[i].NextFreeIndex = i + 1;
2695  newBlock.pItems[m_ItemsPerBlock - 1].NextFreeIndex = UINT32_MAX;
2696  return m_ItemBlocks.back();
2697 }
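// Added sketch, not part of the original vk_mem_alloc.h: typical use of the
// pool allocator - fixed-size blocks and free-list reuse give O(1) Alloc on
// the fast path.
#if 0
static void VmaTestPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks)
{
    VmaPoolAllocator<uint32_t> pool(pAllocationCallbacks, 32); // 32 items per block.
    uint32_t* a = pool.Alloc();
    uint32_t* b = pool.Alloc();
    *a = 1;
    *b = 2;
    pool.Free(a);               // Returns the item to its block's free list.
    uint32_t* c = pool.Alloc(); // May reuse the slot just freed.
    pool.Free(b);
    pool.Free(c);
}
#endif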
2698 
2700 // class VmaRawList, VmaList
2701 
2702 #if VMA_USE_STL_LIST
2703 
2704 #define VmaList std::list
2705 
2706 #else // #if VMA_USE_STL_LIST
2707 
2708 template<typename T>
2709 struct VmaListItem
2710 {
2711  VmaListItem* pPrev;
2712  VmaListItem* pNext;
2713  T Value;
2714 };
2715 
2716 // Doubly linked list.
2717 template<typename T>
2718 class VmaRawList
2719 {
2720 public:
2721  typedef VmaListItem<T> ItemType;
2722 
2723  VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
2724  ~VmaRawList();
2725  void Clear();
2726 
2727  size_t GetCount() const { return m_Count; }
2728  bool IsEmpty() const { return m_Count == 0; }
2729 
2730  ItemType* Front() { return m_pFront; }
2731  const ItemType* Front() const { return m_pFront; }
2732  ItemType* Back() { return m_pBack; }
2733  const ItemType* Back() const { return m_pBack; }
2734 
2735  ItemType* PushBack();
2736  ItemType* PushFront();
2737  ItemType* PushBack(const T& value);
2738  ItemType* PushFront(const T& value);
2739  void PopBack();
2740  void PopFront();
2741 
2742  // Item can be null - it means PushBack.
2743  ItemType* InsertBefore(ItemType* pItem);
2744  // Item can be null - it means PushFront.
2745  ItemType* InsertAfter(ItemType* pItem);
2746 
2747  ItemType* InsertBefore(ItemType* pItem, const T& value);
2748  ItemType* InsertAfter(ItemType* pItem, const T& value);
2749 
2750  void Remove(ItemType* pItem);
2751 
2752 private:
2753  const VkAllocationCallbacks* const m_pAllocationCallbacks;
2754  VmaPoolAllocator<ItemType> m_ItemAllocator;
2755  ItemType* m_pFront;
2756  ItemType* m_pBack;
2757  size_t m_Count;
2758 
2759  // Declared but not defined, to block the copy constructor and assignment operator.
2760  VmaRawList(const VmaRawList<T>& src);
2761  VmaRawList<T>& operator=(const VmaRawList<T>& rhs);
2762 };
2763 
2764 template<typename T>
2765 VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks) :
2766  m_pAllocationCallbacks(pAllocationCallbacks),
2767  m_ItemAllocator(pAllocationCallbacks, 128),
2768  m_pFront(VMA_NULL),
2769  m_pBack(VMA_NULL),
2770  m_Count(0)
2771 {
2772 }
2773 
2774 template<typename T>
2775 VmaRawList<T>::~VmaRawList()
2776 {
2777  // Intentionally not calling Clear, because that would waste computation
2778  // returning all items to m_ItemAllocator as free.
2779 }
2780 
2781 template<typename T>
2782 void VmaRawList<T>::Clear()
2783 {
2784  if(IsEmpty() == false)
2785  {
2786  ItemType* pItem = m_pBack;
2787  while(pItem != VMA_NULL)
2788  {
2789  ItemType* const pPrevItem = pItem->pPrev;
2790  m_ItemAllocator.Free(pItem);
2791  pItem = pPrevItem;
2792  }
2793  m_pFront = VMA_NULL;
2794  m_pBack = VMA_NULL;
2795  m_Count = 0;
2796  }
2797 }
2798 
2799 template<typename T>
2800 VmaListItem<T>* VmaRawList<T>::PushBack()
2801 {
2802  ItemType* const pNewItem = m_ItemAllocator.Alloc();
2803  pNewItem->pNext = VMA_NULL;
2804  if(IsEmpty())
2805  {
2806  pNewItem->pPrev = VMA_NULL;
2807  m_pFront = pNewItem;
2808  m_pBack = pNewItem;
2809  m_Count = 1;
2810  }
2811  else
2812  {
2813  pNewItem->pPrev = m_pBack;
2814  m_pBack->pNext = pNewItem;
2815  m_pBack = pNewItem;
2816  ++m_Count;
2817  }
2818  return pNewItem;
2819 }
2820 
2821 template<typename T>
2822 VmaListItem<T>* VmaRawList<T>::PushFront()
2823 {
2824  ItemType* const pNewItem = m_ItemAllocator.Alloc();
2825  pNewItem->pPrev = VMA_NULL;
2826  if(IsEmpty())
2827  {
2828  pNewItem->pNext = VMA_NULL;
2829  m_pFront = pNewItem;
2830  m_pBack = pNewItem;
2831  m_Count = 1;
2832  }
2833  else
2834  {
2835  pNewItem->pNext = m_pFront;
2836  m_pFront->pPrev = pNewItem;
2837  m_pFront = pNewItem;
2838  ++m_Count;
2839  }
2840  return pNewItem;
2841 }
2842 
2843 template<typename T>
2844 VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
2845 {
2846  ItemType* const pNewItem = PushBack();
2847  pNewItem->Value = value;
2848  return pNewItem;
2849 }
2850 
2851 template<typename T>
2852 VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
2853 {
2854  ItemType* const pNewItem = PushFront();
2855  pNewItem->Value = value;
2856  return pNewItem;
2857 }
2858 
2859 template<typename T>
2860 void VmaRawList<T>::PopBack()
2861 {
2862  VMA_HEAVY_ASSERT(m_Count > 0);
2863  ItemType* const pBackItem = m_pBack;
2864  ItemType* const pPrevItem = pBackItem->pPrev;
2865  if(pPrevItem != VMA_NULL)
2866  {
2867  pPrevItem->pNext = VMA_NULL;
2868  }
2869  m_pBack = pPrevItem;
2870  m_ItemAllocator.Free(pBackItem);
2871  --m_Count;
2872 }
2873 
2874 template<typename T>
2875 void VmaRawList<T>::PopFront()
2876 {
2877  VMA_HEAVY_ASSERT(m_Count > 0);
2878  ItemType* const pFrontItem = m_pFront;
2879  ItemType* const pNextItem = pFrontItem->pNext;
2880  if(pNextItem != VMA_NULL)
2881  {
2882  pNextItem->pPrev = VMA_NULL;
2883  }
2884  m_pFront = pNextItem;
2885  m_ItemAllocator.Free(pFrontItem);
2886  --m_Count;
2887 }
2888 
2889 template<typename T>
2890 void VmaRawList<T>::Remove(ItemType* pItem)
2891 {
2892  VMA_HEAVY_ASSERT(pItem != VMA_NULL);
2893  VMA_HEAVY_ASSERT(m_Count > 0);
2894 
2895  if(pItem->pPrev != VMA_NULL)
2896  {
2897  pItem->pPrev->pNext = pItem->pNext;
2898  }
2899  else
2900  {
2901  VMA_HEAVY_ASSERT(m_pFront == pItem);
2902  m_pFront = pItem->pNext;
2903  }
2904 
2905  if(pItem->pNext != VMA_NULL)
2906  {
2907  pItem->pNext->pPrev = pItem->pPrev;
2908  }
2909  else
2910  {
2911  VMA_HEAVY_ASSERT(m_pBack == pItem);
2912  m_pBack = pItem->pPrev;
2913  }
2914 
2915  m_ItemAllocator.Free(pItem);
2916  --m_Count;
2917 }
2918 
2919 template<typename T>
2920 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
2921 {
2922  if(pItem != VMA_NULL)
2923  {
2924  ItemType* const prevItem = pItem->pPrev;
2925  ItemType* const newItem = m_ItemAllocator.Alloc();
2926  newItem->pPrev = prevItem;
2927  newItem->pNext = pItem;
2928  pItem->pPrev = newItem;
2929  if(prevItem != VMA_NULL)
2930  {
2931  prevItem->pNext = newItem;
2932  }
2933  else
2934  {
2935  VMA_HEAVY_ASSERT(m_pFront == pItem);
2936  m_pFront = newItem;
2937  }
2938  ++m_Count;
2939  return newItem;
2940  }
2941  else
2942  return PushBack();
2943 }
2944 
2945 template<typename T>
2946 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
2947 {
2948  if(pItem != VMA_NULL)
2949  {
2950  ItemType* const nextItem = pItem->pNext;
2951  ItemType* const newItem = m_ItemAllocator.Alloc();
2952  newItem->pNext = nextItem;
2953  newItem->pPrev = pItem;
2954  pItem->pNext = newItem;
2955  if(nextItem != VMA_NULL)
2956  {
2957  nextItem->pPrev = newItem;
2958  }
2959  else
2960  {
2961  VMA_HEAVY_ASSERT(m_pBack == pItem);
2962  m_pBack = newItem;
2963  }
2964  ++m_Count;
2965  return newItem;
2966  }
2967  else
2968  return PushFront();
2969 }
2970 
2971 template<typename T>
2972 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
2973 {
2974  ItemType* const newItem = InsertBefore(pItem);
2975  newItem->Value = value;
2976  return newItem;
2977 }
2978 
2979 template<typename T>
2980 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
2981 {
2982  ItemType* const newItem = InsertAfter(pItem);
2983  newItem->Value = value;
2984  return newItem;
2985 }
2986 
2987 template<typename T, typename AllocatorT>
2988 class VmaList
2989 {
2990 public:
2991  class iterator
2992  {
2993  public:
2994  iterator() :
2995  m_pList(VMA_NULL),
2996  m_pItem(VMA_NULL)
2997  {
2998  }
2999 
3000  T& operator*() const
3001  {
3002  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3003  return m_pItem->Value;
3004  }
3005  T* operator->() const
3006  {
3007  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3008  return &m_pItem->Value;
3009  }
3010 
3011  iterator& operator++()
3012  {
3013  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3014  m_pItem = m_pItem->pNext;
3015  return *this;
3016  }
3017  iterator& operator--()
3018  {
3019  if(m_pItem != VMA_NULL)
3020  {
3021  m_pItem = m_pItem->pPrev;
3022  }
3023  else
3024  {
3025  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
3026  m_pItem = m_pList->Back();
3027  }
3028  return *this;
3029  }
3030 
3031  iterator operator++(int)
3032  {
3033  iterator result = *this;
3034  ++*this;
3035  return result;
3036  }
3037  iterator operator--(int)
3038  {
3039  iterator result = *this;
3040  --*this;
3041  return result;
3042  }
3043 
3044  bool operator==(const iterator& rhs) const
3045  {
3046  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
3047  return m_pItem == rhs.m_pItem;
3048  }
3049  bool operator!=(const iterator& rhs) const
3050  {
3051  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
3052  return m_pItem != rhs.m_pItem;
3053  }
3054 
3055  private:
3056  VmaRawList<T>* m_pList;
3057  VmaListItem<T>* m_pItem;
3058 
3059  iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) :
3060  m_pList(pList),
3061  m_pItem(pItem)
3062  {
3063  }
3064 
3065  friend class VmaList<T, AllocatorT>;
3066  };
3067 
3068  class const_iterator
3069  {
3070  public:
3071  const_iterator() :
3072  m_pList(VMA_NULL),
3073  m_pItem(VMA_NULL)
3074  {
3075  }
3076 
3077  const_iterator(const iterator& src) :
3078  m_pList(src.m_pList),
3079  m_pItem(src.m_pItem)
3080  {
3081  }
3082 
3083  const T& operator*() const
3084  {
3085  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3086  return m_pItem->Value;
3087  }
3088  const T* operator->() const
3089  {
3090  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3091  return &m_pItem->Value;
3092  }
3093 
3094  const_iterator& operator++()
3095  {
3096  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
3097  m_pItem = m_pItem->pNext;
3098  return *this;
3099  }
3100  const_iterator& operator--()
3101  {
3102  if(m_pItem != VMA_NULL)
3103  {
3104  m_pItem = m_pItem->pPrev;
3105  }
3106  else
3107  {
3108  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
3109  m_pItem = m_pList->Back();
3110  }
3111  return *this;
3112  }
3113 
3114  const_iterator operator++(int)
3115  {
3116  const_iterator result = *this;
3117  ++*this;
3118  return result;
3119  }
3120  const_iterator operator--(int)
3121  {
3122  const_iterator result = *this;
3123  --*this;
3124  return result;
3125  }
3126 
3127  bool operator==(const const_iterator& rhs) const
3128  {
3129  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
3130  return m_pItem == rhs.m_pItem;
3131  }
3132  bool operator!=(const const_iterator& rhs) const
3133  {
3134  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
3135  return m_pItem != rhs.m_pItem;
3136  }
3137 
3138  private:
3139  const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) :
3140  m_pList(pList),
3141  m_pItem(pItem)
3142  {
3143  }
3144 
3145  const VmaRawList<T>* m_pList;
3146  const VmaListItem<T>* m_pItem;
3147 
3148  friend class VmaList<T, AllocatorT>;
3149  };
3150 
3151  VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) { }
3152 
3153  bool empty() const { return m_RawList.IsEmpty(); }
3154  size_t size() const { return m_RawList.GetCount(); }
3155 
3156  iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
3157  iterator end() { return iterator(&m_RawList, VMA_NULL); }
3158 
3159  const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
3160  const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
3161 
3162  void clear() { m_RawList.Clear(); }
3163  void push_back(const T& value) { m_RawList.PushBack(value); }
3164  void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
3165  iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
3166 
3167 private:
3168  VmaRawList<T> m_RawList;
3169 };
3170 
3171 #endif // #if VMA_USE_STL_LIST
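// Added sketch, not part of the original vk_mem_alloc.h: VmaList mirrors the
// small subset of std::list that the allocator needs, e.g. for suballocation
// lists (assuming the custom implementation above; with VMA_USE_STL_LIST it is
// std::list, which supports the same calls).
#if 0
static void VmaTestList(const VkAllocationCallbacks* pAllocationCallbacks)
{
    VmaList< uint32_t, VmaStlAllocator<uint32_t> > list(
        VmaStlAllocator<uint32_t>(pAllocationCallbacks));
    list.push_back(10);
    list.push_back(30);
    VmaList< uint32_t, VmaStlAllocator<uint32_t> >::iterator it = list.begin();
    ++it;
    list.insert(it, 20); // Insert before the element holding 30.
    VMA_ASSERT(list.size() == 3);
    list.clear();
}
#endif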
3172 
3174 // class VmaMap
3175 
3176 // Unused in this version.
3177 #if 0
3178 
3179 #if VMA_USE_STL_UNORDERED_MAP
3180 
3181 #define VmaPair std::pair
3182 
3183 #define VMA_MAP_TYPE(KeyT, ValueT) \
3184  std::unordered_map< KeyT, ValueT, std::hash<KeyT>, std::equal_to<KeyT>, VmaStlAllocator< std::pair<KeyT, ValueT> > >
3185 
3186 #else // #if VMA_USE_STL_UNORDERED_MAP
3187 
3188 template<typename T1, typename T2>
3189 struct VmaPair
3190 {
3191  T1 first;
3192  T2 second;
3193 
3194  VmaPair() : first(), second() { }
3195  VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) { }
3196 };
3197 
3198 /* Class compatible with subset of interface of std::unordered_map.
3199 KeyT, ValueT must be POD because they will be stored in VmaVector.
3200 */
3201 template<typename KeyT, typename ValueT>
3202 class VmaMap
3203 {
3204 public:
3205  typedef VmaPair<KeyT, ValueT> PairType;
3206  typedef PairType* iterator;
3207 
3208  VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) { }
3209 
3210  iterator begin() { return m_Vector.begin(); }
3211  iterator end() { return m_Vector.end(); }
3212 
3213  void insert(const PairType& pair);
3214  iterator find(const KeyT& key);
3215  void erase(iterator it);
3216 
3217 private:
3218  VmaVector< PairType, VmaStlAllocator<PairType> > m_Vector;
3219 };
3220 
3221 #define VMA_MAP_TYPE(KeyT, ValueT) VmaMap<KeyT, ValueT>
3222 
3223 template<typename FirstT, typename SecondT>
3224 struct VmaPairFirstLess
3225 {
3226  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
3227  {
3228  return lhs.first < rhs.first;
3229  }
3230  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
3231  {
3232  return lhs.first < rhsFirst;
3233  }
3234 };
3235 
3236 template<typename KeyT, typename ValueT>
3237 void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
3238 {
3239  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
3240  m_Vector.data(),
3241  m_Vector.data() + m_Vector.size(),
3242  pair,
3243  VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
3244  VmaVectorInsert(m_Vector, indexToInsert, pair);
3245 }
3246 
3247 template<typename KeyT, typename ValueT>
3248 VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
3249 {
3250  PairType* it = VmaBinaryFindFirstNotLess(
3251  m_Vector.data(),
3252  m_Vector.data() + m_Vector.size(),
3253  key,
3254  VmaPairFirstLess<KeyT, ValueT>());
3255  if((it != m_Vector.end()) && (it->first == key))
3256  {
3257  return it;
3258  }
3259  else
3260  {
3261  return m_Vector.end();
3262  }
3263 }
3264 
3265 template<typename KeyT, typename ValueT>
3266 void VmaMap<KeyT, ValueT>::erase(iterator it)
3267 {
3268  VmaVectorRemove(m_Vector, it - m_Vector.begin());
3269 }
3270 
3271 #endif // #if VMA_USE_STL_UNORDERED_MAP
3272 
3273 #endif // #if 0
3274 
3276 
3277 class VmaDeviceMemoryBlock;
3278 
3279 struct VmaAllocation_T
3280 {
3281 private:
3282  static const uint8_t MAP_COUNT_FLAG_PERSISTENT_MAP = 0x80;
3283 
3284  enum FLAGS
3285  {
3286  FLAG_USER_DATA_STRING = 0x01,
3287  };
3288 
3289 public:
3290  enum ALLOCATION_TYPE
3291  {
3292  ALLOCATION_TYPE_NONE,
3293  ALLOCATION_TYPE_BLOCK,
3294  ALLOCATION_TYPE_DEDICATED,
3295  };
3296 
3297  VmaAllocation_T(uint32_t currentFrameIndex, bool userDataString) :
3298  m_Alignment(1),
3299  m_Size(0),
3300  m_pUserData(VMA_NULL),
3301  m_LastUseFrameIndex(currentFrameIndex),
3302  m_Type((uint8_t)ALLOCATION_TYPE_NONE),
3303  m_SuballocationType((uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN),
3304  m_MapCount(0),
3305  m_Flags(userDataString ? (uint8_t)FLAG_USER_DATA_STRING : 0)
3306  {
3307  }
3308 
3309  ~VmaAllocation_T()
3310  {
3311  VMA_ASSERT((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) == 0 && "Allocation was not unmapped before destruction.");
3312 
3313  // Check if owned string was freed.
3314  VMA_ASSERT(m_pUserData == VMA_NULL);
3315  }
3316 
3317  void InitBlockAllocation(
3318  VmaPool hPool,
3319  VmaDeviceMemoryBlock* block,
3320  VkDeviceSize offset,
3321  VkDeviceSize alignment,
3322  VkDeviceSize size,
3323  VmaSuballocationType suballocationType,
3324  bool mapped,
3325  bool canBecomeLost)
3326  {
3327  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
3328  VMA_ASSERT(block != VMA_NULL);
3329  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
3330  m_Alignment = alignment;
3331  m_Size = size;
3332  m_MapCount = mapped ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
3333  m_SuballocationType = (uint8_t)suballocationType;
3334  m_BlockAllocation.m_hPool = hPool;
3335  m_BlockAllocation.m_Block = block;
3336  m_BlockAllocation.m_Offset = offset;
3337  m_BlockAllocation.m_CanBecomeLost = canBecomeLost;
3338  }
3339 
3340  void InitLost()
3341  {
3342  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
3343  VMA_ASSERT(m_LastUseFrameIndex.load() == VMA_FRAME_INDEX_LOST);
3344  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
3345  m_BlockAllocation.m_hPool = VK_NULL_HANDLE;
3346  m_BlockAllocation.m_Block = VMA_NULL;
3347  m_BlockAllocation.m_Offset = 0;
3348  m_BlockAllocation.m_CanBecomeLost = true;
3349  }
3350 
3351  void ChangeBlockAllocation(
3352  VmaAllocator hAllocator,
3353  VmaDeviceMemoryBlock* block,
3354  VkDeviceSize offset);
3355 
3356  // pMappedData not null means allocation was created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
3357  void InitDedicatedAllocation(
3358  uint32_t memoryTypeIndex,
3359  VkDeviceMemory hMemory,
3360  VmaSuballocationType suballocationType,
3361  void* pMappedData,
3362  VkDeviceSize size)
3363  {
3364  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
3365  VMA_ASSERT(hMemory != VK_NULL_HANDLE);
3366  m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
3367  m_Alignment = 0;
3368  m_Size = size;
3369  m_SuballocationType = (uint8_t)suballocationType;
3370  m_MapCount = (pMappedData != VMA_NULL) ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
3371  m_DedicatedAllocation.m_MemoryTypeIndex = memoryTypeIndex;
3372  m_DedicatedAllocation.m_hMemory = hMemory;
3373  m_DedicatedAllocation.m_pMappedData = pMappedData;
3374  }
3375 
3376  ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
3377  VkDeviceSize GetAlignment() const { return m_Alignment; }
3378  VkDeviceSize GetSize() const { return m_Size; }
3379  bool IsUserDataString() const { return (m_Flags & FLAG_USER_DATA_STRING) != 0; }
3380  void* GetUserData() const { return m_pUserData; }
3381  void SetUserData(VmaAllocator hAllocator, void* pUserData);
3382  VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
3383 
3384  VmaDeviceMemoryBlock* GetBlock() const
3385  {
3386  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
3387  return m_BlockAllocation.m_Block;
3388  }
3389  VkDeviceSize GetOffset() const;
3390  VkDeviceMemory GetMemory() const;
3391  uint32_t GetMemoryTypeIndex() const;
3392  bool IsPersistentMap() const { return (m_MapCount & MAP_COUNT_FLAG_PERSISTENT_MAP) != 0; }
3393  void* GetMappedData() const;
3394  bool CanBecomeLost() const;
3395  VmaPool GetPool() const;
3396 
3397  uint32_t GetLastUseFrameIndex() const
3398  {
3399  return m_LastUseFrameIndex.load();
3400  }
3401  bool CompareExchangeLastUseFrameIndex(uint32_t& expected, uint32_t desired)
3402  {
3403  return m_LastUseFrameIndex.compare_exchange_weak(expected, desired);
3404  }
3405  /*
3406  - If hAllocation.LastUseFrameIndex + frameInUseCount < allocator.CurrentFrameIndex,
3407  makes it lost by setting LastUseFrameIndex = VMA_FRAME_INDEX_LOST and returns true.
3408  - Else, returns false.
3409 
3410  If hAllocation is already lost, assert - you should not call it then.
3411  If hAllocation was not created with CAN_BECOME_LOST_BIT, assert.
3412  */
3413  bool MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
3414 
3415  void DedicatedAllocCalcStatsInfo(VmaStatInfo& outInfo)
3416  {
3417  VMA_ASSERT(m_Type == ALLOCATION_TYPE_DEDICATED);
3418  outInfo.blockCount = 1;
3419  outInfo.allocationCount = 1;
3420  outInfo.unusedRangeCount = 0;
3421  outInfo.usedBytes = m_Size;
3422  outInfo.unusedBytes = 0;
3423  outInfo.allocationSizeMin = outInfo.allocationSizeMax = m_Size;
3424  outInfo.unusedRangeSizeMin = UINT64_MAX;
3425  outInfo.unusedRangeSizeMax = 0;
3426  }
3427 
3428  void BlockAllocMap();
3429  void BlockAllocUnmap();
3430  VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
3431  void DedicatedAllocUnmap(VmaAllocator hAllocator);
3432 
3433 private:
3434  VkDeviceSize m_Alignment;
3435  VkDeviceSize m_Size;
3436  void* m_pUserData;
3437  VMA_ATOMIC_UINT32 m_LastUseFrameIndex;
3438  uint8_t m_Type; // ALLOCATION_TYPE
3439  uint8_t m_SuballocationType; // VmaSuballocationType
3440  // Bit 0x80 is set when allocation was created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
3441  // Bits with mask 0x7F are reference counter for vmaMapMemory()/vmaUnmapMemory().
3442  uint8_t m_MapCount;
3443  uint8_t m_Flags; // enum FLAGS
3444 
3445  // Allocation out of VmaDeviceMemoryBlock.
3446  struct BlockAllocation
3447  {
3448  VmaPool m_hPool; // Null if belongs to general memory.
3449  VmaDeviceMemoryBlock* m_Block;
3450  VkDeviceSize m_Offset;
3451  bool m_CanBecomeLost;
3452  };
3453 
3454  // Allocation for an object that has its own private VkDeviceMemory.
3455  struct DedicatedAllocation
3456  {
3457  uint32_t m_MemoryTypeIndex;
3458  VkDeviceMemory m_hMemory;
3459  void* m_pMappedData; // Not null means memory is mapped.
3460  };
3461 
3462  union
3463  {
3464  // Allocation out of VmaDeviceMemoryBlock.
3465  BlockAllocation m_BlockAllocation;
3466  // Allocation for an object that has its own private VkDeviceMemory.
3467  DedicatedAllocation m_DedicatedAllocation;
3468  };
3469 
3470  void FreeUserDataString(VmaAllocator hAllocator);
3471 };
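/*
Illustrative sketch (not part of the library): how m_MapCount, declared above,
packs the persistent-map flag together with the vmaMapMemory() reference
counter.

    const uint8_t mapCount = ...;                      // some allocation's m_MapCount
    const bool createdMapped = (mapCount & 0x80) != 0; // MAP_COUNT_FLAG_PERSISTENT_MAP
    const uint32_t userMapRefs = mapCount & 0x7F;      // vmaMapMemory() nesting depth
*/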
3472 
3473 /*
3474 Represents a region of VmaDeviceMemoryBlock that is either assigned and returned as
3475 allocated memory block or free.
3476 */
3477 struct VmaSuballocation
3478 {
3479  VkDeviceSize offset;
3480  VkDeviceSize size;
3481  VmaAllocation hAllocation;
3482  VmaSuballocationType type;
3483 };
3484 
3485 typedef VmaList< VmaSuballocation, VmaStlAllocator<VmaSuballocation> > VmaSuballocationList;
3486 
3487 // Cost of making one additional allocation lost, expressed as an equivalent number of bytes.
3488 static const VkDeviceSize VMA_LOST_ALLOCATION_COST = 1048576;
3489 
3490 /*
3491 Parameters of planned allocation inside a VmaDeviceMemoryBlock.
3492 
3493 If canMakeOtherLost was false:
3494 - item points to a FREE suballocation.
3495 - itemsToMakeLostCount is 0.
3496 
3497 If canMakeOtherLost was true:
3498 - item points to first of sequence of suballocations, which are either FREE,
3499  or point to VmaAllocations that can become lost.
3500 - itemsToMakeLostCount is the number of VmaAllocations that need to be made lost for
3501  the requested allocation to succeed.
3502 */
3503 struct VmaAllocationRequest
3504 {
3505  VkDeviceSize offset;
3506  VkDeviceSize sumFreeSize; // Sum size of free items that overlap with proposed allocation.
3507  VkDeviceSize sumItemSize; // Sum size of items to make lost that overlap with proposed allocation.
3508  VmaSuballocationList::iterator item;
3509  size_t itemsToMakeLostCount;
3510 
3511  VkDeviceSize CalcCost() const
3512  {
3513  return sumItemSize + itemsToMakeLostCount * VMA_LOST_ALLOCATION_COST;
3514  }
3515 };
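/*
Worked example for CalcCost() above (illustrative): a request that would make
2 allocations lost, with sumItemSize = 4 MiB, costs
4 MiB + 2 * VMA_LOST_ALLOCATION_COST = 6 MiB equivalent. The brute-force
search in CreateAllocationRequest() keeps the request with the smallest cost.
*/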
3516 
3517 /*
3518 Data structure used for bookkeeping of allocations and unused ranges of memory
3519 in a single VkDeviceMemory block.
3520 */
3521 class VmaBlockMetadata
3522 {
3523 public:
3524  VmaBlockMetadata(VmaAllocator hAllocator);
3525  ~VmaBlockMetadata();
3526  void Init(VkDeviceSize size);
3527 
3528  // Validates all data structures inside this object. If not valid, returns false.
3529  bool Validate() const;
3530  VkDeviceSize GetSize() const { return m_Size; }
3531  size_t GetAllocationCount() const { return m_Suballocations.size() - m_FreeCount; }
3532  VkDeviceSize GetSumFreeSize() const { return m_SumFreeSize; }
3533  VkDeviceSize GetUnusedRangeSizeMax() const;
3534  // Returns true if this block is empty - contains only a single free suballocation.
3535  bool IsEmpty() const;
3536 
3537  void CalcAllocationStatInfo(VmaStatInfo& outInfo) const;
3538  void AddPoolStats(VmaPoolStats& inoutStats) const;
3539 
3540 #if VMA_STATS_STRING_ENABLED
3541  void PrintDetailedMap(class VmaJsonWriter& json) const;
3542 #endif
3543 
3544  // Creates trivial request for case when block is empty.
3545  void CreateFirstAllocationRequest(VmaAllocationRequest* pAllocationRequest);
3546 
3547  // Tries to find a place for suballocation with given parameters inside this block.
3548  // If succeeded, fills pAllocationRequest and returns true.
3549  // If failed, returns false.
3550  bool CreateAllocationRequest(
3551  uint32_t currentFrameIndex,
3552  uint32_t frameInUseCount,
3553  VkDeviceSize bufferImageGranularity,
3554  VkDeviceSize allocSize,
3555  VkDeviceSize allocAlignment,
3556  VmaSuballocationType allocType,
3557  bool canMakeOtherLost,
3558  VmaAllocationRequest* pAllocationRequest);
3559 
3560  bool MakeRequestedAllocationsLost(
3561  uint32_t currentFrameIndex,
3562  uint32_t frameInUseCount,
3563  VmaAllocationRequest* pAllocationRequest);
3564 
3565  uint32_t MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
3566 
3567  // Makes actual allocation based on request. Request must already be checked and valid.
3568  void Alloc(
3569  const VmaAllocationRequest& request,
3570  VmaSuballocationType type,
3571  VkDeviceSize allocSize,
3572  VmaAllocation hAllocation);
3573 
3574  // Frees suballocation assigned to given memory region.
3575  void Free(const VmaAllocation allocation);
3576  void FreeAtOffset(VkDeviceSize offset);
3577 
3578 private:
3579  VkDeviceSize m_Size;
3580  uint32_t m_FreeCount;
3581  VkDeviceSize m_SumFreeSize;
3582  VmaSuballocationList m_Suballocations;
3583  // Suballocations that are free and have size greater than certain threshold.
3584  // Sorted by size, ascending.
3585  VmaVector< VmaSuballocationList::iterator, VmaStlAllocator< VmaSuballocationList::iterator > > m_FreeSuballocationsBySize;
3586 
3587  bool ValidateFreeSuballocationList() const;
3588 
3589  // Checks if requested suballocation with given parameters can be placed in given suballocItem.
3590  // If yes, fills pOffset and returns true. If no, returns false.
3591  bool CheckAllocation(
3592  uint32_t currentFrameIndex,
3593  uint32_t frameInUseCount,
3594  VkDeviceSize bufferImageGranularity,
3595  VkDeviceSize allocSize,
3596  VkDeviceSize allocAlignment,
3597  VmaSuballocationType allocType,
3598  VmaSuballocationList::const_iterator suballocItem,
3599  bool canMakeOtherLost,
3600  VkDeviceSize* pOffset,
3601  size_t* itemsToMakeLostCount,
3602  VkDeviceSize* pSumFreeSize,
3603  VkDeviceSize* pSumItemSize) const;
3604  // Merges given free suballocation with the following one, which must also be free.
3605  void MergeFreeWithNext(VmaSuballocationList::iterator item);
3606  // Releases given suballocation, making it free.
3607  // Merges it with adjacent free suballocations if applicable.
3608  // Returns iterator to new free suballocation at this place.
3609  VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
3610  // Inserts given free suballocation into the sorted list
3611  // m_FreeSuballocationsBySize, if it is large enough to be registered there.
3612  void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
3613  // Removes given free suballocation from the sorted list
3614  // m_FreeSuballocationsBySize, if it was registered there.
3615  void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
3616 };
3617 
3618 // Helper class that represents mapped memory. Synchronized internally.
3619 class VmaDeviceMemoryMapping
3620 {
3621 public:
3622  VmaDeviceMemoryMapping();
3623  ~VmaDeviceMemoryMapping();
3624 
3625  void* GetMappedData() const { return m_pMappedData; }
3626 
3627  // ppData can be null.
3628  VkResult Map(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count, void **ppData);
3629  void Unmap(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count);
3630 
3631 private:
3632  VMA_MUTEX m_Mutex;
3633  uint32_t m_MapCount;
3634  void* m_pMappedData;
3635 };
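/*
Sketch of the intended Map()/Unmap() contract (the implementation follows
later in this file): m_MapCount changes by `count`, and vkMapMemory() /
vkUnmapMemory() are issued only on the transitions between zero and non-zero,
so multiple allocations can share one mapping of the same block.

    void* pData = VMA_NULL;
    mapping.Map(hAllocator, hMemory, 1, &pData); // first user: actually maps
    mapping.Map(hAllocator, hMemory, 1, &pData); // second user: ref-count only
    mapping.Unmap(hAllocator, hMemory, 2);       // drops both references, unmaps
*/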
3636 
3637 /*
3638 Represents a single block of device memory (`VkDeviceMemory`) with all the
3639 data about its regions (aka suballocations, `VmaAllocation`), assigned and free.
3640 
3641 Thread-safety: This class must be externally synchronized.
3642 */
3643 class VmaDeviceMemoryBlock
3644 {
3645 public:
3646  uint32_t m_MemoryTypeIndex;
3647  VkDeviceMemory m_hMemory;
3648  VmaDeviceMemoryMapping m_Mapping;
3649  VmaBlockMetadata m_Metadata;
3650 
3651  VmaDeviceMemoryBlock(VmaAllocator hAllocator);
3652 
3653  ~VmaDeviceMemoryBlock()
3654  {
3655  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
3656  }
3657 
3658  // Always call after construction.
3659  void Init(
3660  uint32_t newMemoryTypeIndex,
3661  VkDeviceMemory newMemory,
3662  VkDeviceSize newSize);
3663  // Always call before destruction.
3664  void Destroy(VmaAllocator allocator);
3665 
3666  // Validates all data structures inside this object. If not valid, returns false.
3667  bool Validate() const;
3668 
3669  // ppData can be null.
3670  VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
3671  void Unmap(VmaAllocator hAllocator, uint32_t count);
3672 };
3673 
3674 struct VmaPointerLess
3675 {
3676  bool operator()(const void* lhs, const void* rhs) const
3677  {
3678  return lhs < rhs;
3679  }
3680 };
3681 
3682 class VmaDefragmentator;
3683 
3684 /*
3685 Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
3686 Vulkan memory type.
3687 
3688 Synchronized internally with a mutex.
3689 */
3690 struct VmaBlockVector
3691 {
3692  VmaBlockVector(
3693  VmaAllocator hAllocator,
3694  uint32_t memoryTypeIndex,
3695  VkDeviceSize preferredBlockSize,
3696  size_t minBlockCount,
3697  size_t maxBlockCount,
3698  VkDeviceSize bufferImageGranularity,
3699  uint32_t frameInUseCount,
3700  bool isCustomPool);
3701  ~VmaBlockVector();
3702 
3703  VkResult CreateMinBlocks();
3704 
3705  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
3706  VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
3707  VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
3708  uint32_t GetFrameInUseCount() const { return m_FrameInUseCount; }
3709 
3710  void GetPoolStats(VmaPoolStats* pStats);
3711 
3712  bool IsEmpty() const { return m_Blocks.empty(); }
3713 
3714  VkResult Allocate(
3715  VmaPool hCurrentPool,
3716  uint32_t currentFrameIndex,
3717  const VkMemoryRequirements& vkMemReq,
3718  const VmaAllocationCreateInfo& createInfo,
3719  VmaSuballocationType suballocType,
3720  VmaAllocation* pAllocation);
3721 
3722  void Free(
3723  VmaAllocation hAllocation);
3724 
3725  // Adds statistics of this BlockVector to pStats.
3726  void AddStats(VmaStats* pStats);
3727 
3728 #if VMA_STATS_STRING_ENABLED
3729  void PrintDetailedMap(class VmaJsonWriter& json);
3730 #endif
3731 
3732  void MakePoolAllocationsLost(
3733  uint32_t currentFrameIndex,
3734  size_t* pLostAllocationCount);
3735 
3736  VmaDefragmentator* EnsureDefragmentator(
3737  VmaAllocator hAllocator,
3738  uint32_t currentFrameIndex);
3739 
3740  VkResult Defragment(
3741  VmaDefragmentationStats* pDefragmentationStats,
3742  VkDeviceSize& maxBytesToMove,
3743  uint32_t& maxAllocationsToMove);
3744 
3745  void DestroyDefragmentator();
3746 
3747 private:
3748  friend class VmaDefragmentator;
3749 
3750  const VmaAllocator m_hAllocator;
3751  const uint32_t m_MemoryTypeIndex;
3752  const VkDeviceSize m_PreferredBlockSize;
3753  const size_t m_MinBlockCount;
3754  const size_t m_MaxBlockCount;
3755  const VkDeviceSize m_BufferImageGranularity;
3756  const uint32_t m_FrameInUseCount;
3757  const bool m_IsCustomPool;
3758  VMA_MUTEX m_Mutex;
3759  // Incrementally sorted by sumFreeSize, ascending.
3760  VmaVector< VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*> > m_Blocks;
3761  /* There can be at most one block that is completely empty - a
3762  hysteresis to avoid the pessimistic case of alternating creation and
3763  destruction of a VkDeviceMemory. */
3764  bool m_HasEmptyBlock;
3765  VmaDefragmentator* m_pDefragmentator;
3766 
3767  size_t CalcMaxBlockSize() const;
3768 
3769  // Finds and removes given block from vector.
3770  void Remove(VmaDeviceMemoryBlock* pBlock);
3771 
3772  // Performs single step in sorting m_Blocks. They may not be fully sorted
3773  // after this call.
3774  void IncrementallySortBlocks();
3775 
3776  VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
3777 };
3778 
3779 struct VmaPool_T
3780 {
3781 public:
3782  VmaBlockVector m_BlockVector;
3783 
3784  // Takes ownership.
3785  VmaPool_T(
3786  VmaAllocator hAllocator,
3787  const VmaPoolCreateInfo& createInfo);
3788  ~VmaPool_T();
3789 
3790  VmaBlockVector& GetBlockVector() { return m_BlockVector; }
3791 
3792 #if VMA_STATS_STRING_ENABLED
3793  //void PrintDetailedMap(class VmaStringBuilder& sb);
3794 #endif
3795 };
3796 
3797 class VmaDefragmentator
3798 {
3799  const VmaAllocator m_hAllocator;
3800  VmaBlockVector* const m_pBlockVector;
3801  uint32_t m_CurrentFrameIndex;
3802  VkDeviceSize m_BytesMoved;
3803  uint32_t m_AllocationsMoved;
3804 
3805  struct AllocationInfo
3806  {
3807  VmaAllocation m_hAllocation;
3808  VkBool32* m_pChanged;
3809 
3810  AllocationInfo() :
3811  m_hAllocation(VK_NULL_HANDLE),
3812  m_pChanged(VMA_NULL)
3813  {
3814  }
3815  };
3816 
3817  struct AllocationInfoSizeGreater
3818  {
3819  bool operator()(const AllocationInfo& lhs, const AllocationInfo& rhs) const
3820  {
3821  return lhs.m_hAllocation->GetSize() > rhs.m_hAllocation->GetSize();
3822  }
3823  };
3824 
3825  // Used between AddAllocation and Defragment.
3826  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
3827 
3828  struct BlockInfo
3829  {
3830  VmaDeviceMemoryBlock* m_pBlock;
3831  bool m_HasNonMovableAllocations;
3832  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
3833 
3834  BlockInfo(const VkAllocationCallbacks* pAllocationCallbacks) :
3835  m_pBlock(VMA_NULL),
3836  m_HasNonMovableAllocations(true),
3837  m_Allocations(pAllocationCallbacks),
3838  m_pMappedDataForDefragmentation(VMA_NULL)
3839  {
3840  }
3841 
3842  void CalcHasNonMovableAllocations()
3843  {
3844  const size_t blockAllocCount = m_pBlock->m_Metadata.GetAllocationCount();
3845  const size_t defragmentAllocCount = m_Allocations.size();
3846  m_HasNonMovableAllocations = blockAllocCount != defragmentAllocCount;
3847  }
3848 
3849  void SortAllocationsBySizeDescecnding()
3850  {
3851  VMA_SORT(m_Allocations.begin(), m_Allocations.end(), AllocationInfoSizeGreater());
3852  }
3853 
3854  VkResult EnsureMapping(VmaAllocator hAllocator, void** ppMappedData);
3855  void Unmap(VmaAllocator hAllocator);
3856 
3857  private:
3858  // Not null if mapped for defragmentation only, not originally mapped.
3859  void* m_pMappedDataForDefragmentation;
3860  };
3861 
3862  struct BlockPointerLess
3863  {
3864  bool operator()(const BlockInfo* pLhsBlockInfo, const VmaDeviceMemoryBlock* pRhsBlock) const
3865  {
3866  return pLhsBlockInfo->m_pBlock < pRhsBlock;
3867  }
3868  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
3869  {
3870  return pLhsBlockInfo->m_pBlock < pRhsBlockInfo->m_pBlock;
3871  }
3872  };
3873 
3874  // 1. Blocks with some non-movable allocations go first.
3875  // 2. Blocks with smaller sumFreeSize go first.
3876  struct BlockInfoCompareMoveDestination
3877  {
3878  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
3879  {
3880  if(pLhsBlockInfo->m_HasNonMovableAllocations && !pRhsBlockInfo->m_HasNonMovableAllocations)
3881  {
3882  return true;
3883  }
3884  if(!pLhsBlockInfo->m_HasNonMovableAllocations && pRhsBlockInfo->m_HasNonMovableAllocations)
3885  {
3886  return false;
3887  }
3888  if(pLhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize() < pRhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize())
3889  {
3890  return true;
3891  }
3892  return false;
3893  }
3894  };
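/*
Illustrative ordering produced by the comparator above: given block A
(has non-movable allocations, 1 MiB free), B (all movable, 1 MiB free) and
C (all movable, 8 MiB free), the sorted order is A, B, C - defragmentation
fills pinned and nearly-full blocks first, keeping the freest block last so
it can hopefully be emptied and released.
*/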
3895 
3896  typedef VmaVector< BlockInfo*, VmaStlAllocator<BlockInfo*> > BlockInfoVector;
3897  BlockInfoVector m_Blocks;
3898 
3899  VkResult DefragmentRound(
3900  VkDeviceSize maxBytesToMove,
3901  uint32_t maxAllocationsToMove);
3902 
3903  static bool MoveMakesSense(
3904  size_t dstBlockIndex, VkDeviceSize dstOffset,
3905  size_t srcBlockIndex, VkDeviceSize srcOffset);
3906 
3907 public:
3908  VmaDefragmentator(
3909  VmaAllocator hAllocator,
3910  VmaBlockVector* pBlockVector,
3911  uint32_t currentFrameIndex);
3912 
3913  ~VmaDefragmentator();
3914 
3915  VkDeviceSize GetBytesMoved() const { return m_BytesMoved; }
3916  uint32_t GetAllocationsMoved() const { return m_AllocationsMoved; }
3917 
3918  void AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged);
3919 
3920  VkResult Defragment(
3921  VkDeviceSize maxBytesToMove,
3922  uint32_t maxAllocationsToMove);
3923 };
3924 
3925 // Main allocator object.
3926 struct VmaAllocator_T
3927 {
3928  bool m_UseMutex;
3929  bool m_UseKhrDedicatedAllocation;
3930  VkDevice m_hDevice;
3931  bool m_AllocationCallbacksSpecified;
3932  VkAllocationCallbacks m_AllocationCallbacks;
3933  VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
3934 
3935  // Number of bytes free out of limit, or VK_WHOLE_SIZE if no limit for that heap.
3936  VkDeviceSize m_HeapSizeLimit[VK_MAX_MEMORY_HEAPS];
3937  VMA_MUTEX m_HeapSizeLimitMutex;
3938 
3939  VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
3940  VkPhysicalDeviceMemoryProperties m_MemProps;
3941 
3942  // Default pools.
3943  VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
3944 
3945  // Each vector is sorted by memory (handle value).
3946  typedef VmaVector< VmaAllocation, VmaStlAllocator<VmaAllocation> > AllocationVectorType;
3947  AllocationVectorType* m_pDedicatedAllocations[VK_MAX_MEMORY_TYPES];
3948  VMA_MUTEX m_DedicatedAllocationsMutex[VK_MAX_MEMORY_TYPES];
3949 
3950  VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
3951  ~VmaAllocator_T();
3952 
3953  const VkAllocationCallbacks* GetAllocationCallbacks() const
3954  {
3955  return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : 0;
3956  }
3957  const VmaVulkanFunctions& GetVulkanFunctions() const
3958  {
3959  return m_VulkanFunctions;
3960  }
3961 
3962  VkDeviceSize GetBufferImageGranularity() const
3963  {
3964  return VMA_MAX(
3965  static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
3966  m_PhysicalDeviceProperties.limits.bufferImageGranularity);
3967  }
3968 
3969  uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
3970  uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
3971 
3972  uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
3973  {
3974  VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
3975  return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
3976  }
3977 
3978  void GetBufferMemoryRequirements(
3979  VkBuffer hBuffer,
3980  VkMemoryRequirements& memReq,
3981  bool& requiresDedicatedAllocation,
3982  bool& prefersDedicatedAllocation) const;
3983  void GetImageMemoryRequirements(
3984  VkImage hImage,
3985  VkMemoryRequirements& memReq,
3986  bool& requiresDedicatedAllocation,
3987  bool& prefersDedicatedAllocation) const;
3988 
3989  // Main allocation function.
3990  VkResult AllocateMemory(
3991  const VkMemoryRequirements& vkMemReq,
3992  bool requiresDedicatedAllocation,
3993  bool prefersDedicatedAllocation,
3994  VkBuffer dedicatedBuffer,
3995  VkImage dedicatedImage,
3996  const VmaAllocationCreateInfo& createInfo,
3997  VmaSuballocationType suballocType,
3998  VmaAllocation* pAllocation);
3999 
4000  // Main deallocation function.
4001  void FreeMemory(const VmaAllocation allocation);
4002 
4003  void CalculateStats(VmaStats* pStats);
4004 
4005 #if VMA_STATS_STRING_ENABLED
4006  void PrintDetailedMap(class VmaJsonWriter& json);
4007 #endif
4008 
4009  VkResult Defragment(
4010  VmaAllocation* pAllocations,
4011  size_t allocationCount,
4012  VkBool32* pAllocationsChanged,
4013  const VmaDefragmentationInfo* pDefragmentationInfo,
4014  VmaDefragmentationStats* pDefragmentationStats);
4015 
4016  void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
4017 
4018  VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
4019  void DestroyPool(VmaPool pool);
4020  void GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats);
4021 
4022  void SetCurrentFrameIndex(uint32_t frameIndex);
4023 
4024  void MakePoolAllocationsLost(
4025  VmaPool hPool,
4026  size_t* pLostAllocationCount);
4027 
4028  void CreateLostAllocation(VmaAllocation* pAllocation);
4029 
4030  VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
4031  void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
4032 
4033  VkResult Map(VmaAllocation hAllocation, void** ppData);
4034  void Unmap(VmaAllocation hAllocation);
4035 
4036 private:
4037  VkDeviceSize m_PreferredLargeHeapBlockSize;
4038 
4039  VkPhysicalDevice m_PhysicalDevice;
4040  VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
4041 
4042  VMA_MUTEX m_PoolsMutex;
4043  // Protected by m_PoolsMutex. Sorted by pointer value.
4044  VmaVector<VmaPool, VmaStlAllocator<VmaPool> > m_Pools;
4045 
4046  VmaVulkanFunctions m_VulkanFunctions;
4047 
4048  void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
4049 
4050  VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
4051 
4052  VkResult AllocateMemoryOfType(
4053  const VkMemoryRequirements& vkMemReq,
4054  bool dedicatedAllocation,
4055  VkBuffer dedicatedBuffer,
4056  VkImage dedicatedImage,
4057  const VmaAllocationCreateInfo& createInfo,
4058  uint32_t memTypeIndex,
4059  VmaSuballocationType suballocType,
4060  VmaAllocation* pAllocation);
4061 
4062  // Allocates and registers new VkDeviceMemory specifically for single allocation.
4063  VkResult AllocateDedicatedMemory(
4064  VkDeviceSize size,
4065  VmaSuballocationType suballocType,
4066  uint32_t memTypeIndex,
4067  bool map,
4068  bool isUserDataString,
4069  void* pUserData,
4070  VkBuffer dedicatedBuffer,
4071  VkImage dedicatedImage,
4072  VmaAllocation* pAllocation);
4073 
4074  // Frees dedicated memory of given allocation.
4075  void FreeDedicatedMemory(VmaAllocation allocation);
4076 };
4077 
4078 ////////////////////////////////////////////////////////////////////////////////
4079 // Memory allocation #2 after VmaAllocator_T definition
4080 
4080 
4081 static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
4082 {
4083  return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
4084 }
4085 
4086 static void VmaFree(VmaAllocator hAllocator, void* ptr)
4087 {
4088  VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
4089 }
4090 
4091 template<typename T>
4092 static T* VmaAllocate(VmaAllocator hAllocator)
4093 {
4094  return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
4095 }
4096 
4097 template<typename T>
4098 static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
4099 {
4100  return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
4101 }
4102 
4103 template<typename T>
4104 static void vma_delete(VmaAllocator hAllocator, T* ptr)
4105 {
4106  if(ptr != VMA_NULL)
4107  {
4108  ptr->~T();
4109  VmaFree(hAllocator, ptr);
4110  }
4111 }
4112 
4113 template<typename T>
4114 static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
4115 {
4116  if(ptr != VMA_NULL)
4117  {
4118  for(size_t i = count; i--; )
4119  ptr[i].~T();
4120  VmaFree(hAllocator, ptr);
4121  }
4122 }
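/*
Usage sketch for the helpers above (illustrative): VmaMalloc/VmaAllocate/
VmaAllocateArray return raw storage routed through the allocator's
VkAllocationCallbacks and run no constructors, while vma_delete/
vma_delete_array run destructors (in reverse order) before freeing. Objects
with real constructors are built on this storage with placement-new helpers
such as vma_new_array, used further below.

    int* pArr = VmaAllocateArray<int>(hAllocator, 16); // raw, uninitialized
    // ... fill pArr[0..15] ...
    vma_delete_array(hAllocator, pArr, 16);            // ~int() is trivial, then free
*/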
4123 
4124 ////////////////////////////////////////////////////////////////////////////////
4125 // VmaStringBuilder
4126 
4126 
4127 #if VMA_STATS_STRING_ENABLED
4128 
4129 class VmaStringBuilder
4130 {
4131 public:
4132  VmaStringBuilder(VmaAllocator alloc) : m_Data(VmaStlAllocator<char>(alloc->GetAllocationCallbacks())) { }
4133  size_t GetLength() const { return m_Data.size(); }
4134  const char* GetData() const { return m_Data.data(); }
4135 
4136  void Add(char ch) { m_Data.push_back(ch); }
4137  void Add(const char* pStr);
4138  void AddNewLine() { Add('\n'); }
4139  void AddNumber(uint32_t num);
4140  void AddNumber(uint64_t num);
4141  void AddPointer(const void* ptr);
4142 
4143 private:
4144  VmaVector< char, VmaStlAllocator<char> > m_Data;
4145 };
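/*
Usage sketch (illustrative): VmaStringBuilder accumulates characters in a
VmaVector with no terminating zero, so consumers pair GetData() with
GetLength().

    VmaStringBuilder sb(allocator);
    sb.Add("Allocations: ");
    sb.AddNumber(42u);
    sb.AddNewLine();
    // consume sb.GetData() together with sb.GetLength()
*/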
4146 
4147 void VmaStringBuilder::Add(const char* pStr)
4148 {
4149  const size_t strLen = strlen(pStr);
4150  if(strLen > 0)
4151  {
4152  const size_t oldCount = m_Data.size();
4153  m_Data.resize(oldCount + strLen);
4154  memcpy(m_Data.data() + oldCount, pStr, strLen);
4155  }
4156 }
4157 
4158 void VmaStringBuilder::AddNumber(uint32_t num)
4159 {
4160  char buf[11];
4161  VmaUint32ToStr(buf, sizeof(buf), num);
4162  Add(buf);
4163 }
4164 
4165 void VmaStringBuilder::AddNumber(uint64_t num)
4166 {
4167  char buf[21];
4168  VmaUint64ToStr(buf, sizeof(buf), num);
4169  Add(buf);
4170 }
4171 
4172 void VmaStringBuilder::AddPointer(const void* ptr)
4173 {
4174  char buf[21];
4175  VmaPtrToStr(buf, sizeof(buf), ptr);
4176  Add(buf);
4177 }
4178 
4179 #endif // #if VMA_STATS_STRING_ENABLED
4180 
4181 ////////////////////////////////////////////////////////////////////////////////
4182 // VmaJsonWriter
4183 
4183 
4184 #if VMA_STATS_STRING_ENABLED
4185 
4186 class VmaJsonWriter
4187 {
4188 public:
4189  VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
4190  ~VmaJsonWriter();
4191 
4192  void BeginObject(bool singleLine = false);
4193  void EndObject();
4194 
4195  void BeginArray(bool singleLine = false);
4196  void EndArray();
4197 
4198  void WriteString(const char* pStr);
4199  void BeginString(const char* pStr = VMA_NULL);
4200  void ContinueString(const char* pStr);
4201  void ContinueString(uint32_t n);
4202  void ContinueString(uint64_t n);
4203  void ContinueString_Pointer(const void* ptr);
4204  void EndString(const char* pStr = VMA_NULL);
4205 
4206  void WriteNumber(uint32_t n);
4207  void WriteNumber(uint64_t n);
4208  void WriteBool(bool b);
4209  void WriteNull();
4210 
4211 private:
4212  static const char* const INDENT;
4213 
4214  enum COLLECTION_TYPE
4215  {
4216  COLLECTION_TYPE_OBJECT,
4217  COLLECTION_TYPE_ARRAY,
4218  };
4219  struct StackItem
4220  {
4221  COLLECTION_TYPE type;
4222  uint32_t valueCount;
4223  bool singleLineMode;
4224  };
4225 
4226  VmaStringBuilder& m_SB;
4227  VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
4228  bool m_InsideString;
4229 
4230  void BeginValue(bool isString);
4231  void WriteIndent(bool oneLess = false);
4232 };
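/*
Usage sketch (illustrative): inside an object, names and values must strictly
alternate - BeginValue() below asserts that every even-indexed value is a
string.

    VmaJsonWriter json(pAllocationCallbacks, sb);
    json.BeginObject();
    json.WriteString("Count"); // name
    json.WriteNumber(42u);     // value
    json.WriteString("Empty"); // name
    json.WriteBool(false);     // value
    json.EndObject();          // -> { "Count": 42, "Empty": false }
*/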
4233 
4234 const char* const VmaJsonWriter::INDENT = "  ";
4235 
4236 VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb) :
4237  m_SB(sb),
4238  m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
4239  m_InsideString(false)
4240 {
4241 }
4242 
4243 VmaJsonWriter::~VmaJsonWriter()
4244 {
4245  VMA_ASSERT(!m_InsideString);
4246  VMA_ASSERT(m_Stack.empty());
4247 }
4248 
4249 void VmaJsonWriter::BeginObject(bool singleLine)
4250 {
4251  VMA_ASSERT(!m_InsideString);
4252 
4253  BeginValue(false);
4254  m_SB.Add('{');
4255 
4256  StackItem item;
4257  item.type = COLLECTION_TYPE_OBJECT;
4258  item.valueCount = 0;
4259  item.singleLineMode = singleLine;
4260  m_Stack.push_back(item);
4261 }
4262 
4263 void VmaJsonWriter::EndObject()
4264 {
4265  VMA_ASSERT(!m_InsideString);
4266 
4267  WriteIndent(true);
4268  m_SB.Add('}');
4269 
4270  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
4271  m_Stack.pop_back();
4272 }
4273 
4274 void VmaJsonWriter::BeginArray(bool singleLine)
4275 {
4276  VMA_ASSERT(!m_InsideString);
4277 
4278  BeginValue(false);
4279  m_SB.Add('[');
4280 
4281  StackItem item;
4282  item.type = COLLECTION_TYPE_ARRAY;
4283  item.valueCount = 0;
4284  item.singleLineMode = singleLine;
4285  m_Stack.push_back(item);
4286 }
4287 
4288 void VmaJsonWriter::EndArray()
4289 {
4290  VMA_ASSERT(!m_InsideString);
4291 
4292  WriteIndent(true);
4293  m_SB.Add(']');
4294 
4295  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
4296  m_Stack.pop_back();
4297 }
4298 
4299 void VmaJsonWriter::WriteString(const char* pStr)
4300 {
4301  BeginString(pStr);
4302  EndString();
4303 }
4304 
4305 void VmaJsonWriter::BeginString(const char* pStr)
4306 {
4307  VMA_ASSERT(!m_InsideString);
4308 
4309  BeginValue(true);
4310  m_SB.Add('"');
4311  m_InsideString = true;
4312  if(pStr != VMA_NULL && pStr[0] != '\0')
4313  {
4314  ContinueString(pStr);
4315  }
4316 }
4317 
4318 void VmaJsonWriter::ContinueString(const char* pStr)
4319 {
4320  VMA_ASSERT(m_InsideString);
4321 
4322  const size_t strLen = strlen(pStr);
4323  for(size_t i = 0; i < strLen; ++i)
4324  {
4325  char ch = pStr[i];
4326  if(ch == '\\')
4327  {
4328  m_SB.Add("\\\\");
4329  }
4330  else if(ch == '"')
4331  {
4332  m_SB.Add("\\\"");
4333  }
4334  else if(ch >= 32)
4335  {
4336  m_SB.Add(ch);
4337  }
4338  else switch(ch)
4339  {
4340  case '\b':
4341  m_SB.Add("\\b");
4342  break;
4343  case '\f':
4344  m_SB.Add("\\f");
4345  break;
4346  case '\n':
4347  m_SB.Add("\\n");
4348  break;
4349  case '\r':
4350  m_SB.Add("\\r");
4351  break;
4352  case '\t':
4353  m_SB.Add("\\t");
4354  break;
4355  default:
4356  VMA_ASSERT(0 && "Character not currently supported.");
4357  break;
4358  }
4359  }
4360 }
4361 
4362 void VmaJsonWriter::ContinueString(uint32_t n)
4363 {
4364  VMA_ASSERT(m_InsideString);
4365  m_SB.AddNumber(n);
4366 }
4367 
4368 void VmaJsonWriter::ContinueString(uint64_t n)
4369 {
4370  VMA_ASSERT(m_InsideString);
4371  m_SB.AddNumber(n);
4372 }
4373 
4374 void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
4375 {
4376  VMA_ASSERT(m_InsideString);
4377  m_SB.AddPointer(ptr);
4378 }
4379 
4380 void VmaJsonWriter::EndString(const char* pStr)
4381 {
4382  VMA_ASSERT(m_InsideString);
4383  if(pStr != VMA_NULL && pStr[0] != '\0')
4384  {
4385  ContinueString(pStr);
4386  }
4387  m_SB.Add('"');
4388  m_InsideString = false;
4389 }
4390 
4391 void VmaJsonWriter::WriteNumber(uint32_t n)
4392 {
4393  VMA_ASSERT(!m_InsideString);
4394  BeginValue(false);
4395  m_SB.AddNumber(n);
4396 }
4397 
4398 void VmaJsonWriter::WriteNumber(uint64_t n)
4399 {
4400  VMA_ASSERT(!m_InsideString);
4401  BeginValue(false);
4402  m_SB.AddNumber(n);
4403 }
4404 
4405 void VmaJsonWriter::WriteBool(bool b)
4406 {
4407  VMA_ASSERT(!m_InsideString);
4408  BeginValue(false);
4409  m_SB.Add(b ? "true" : "false");
4410 }
4411 
4412 void VmaJsonWriter::WriteNull()
4413 {
4414  VMA_ASSERT(!m_InsideString);
4415  BeginValue(false);
4416  m_SB.Add("null");
4417 }
4418 
4419 void VmaJsonWriter::BeginValue(bool isString)
4420 {
4421  if(!m_Stack.empty())
4422  {
4423  StackItem& currItem = m_Stack.back();
4424  if(currItem.type == COLLECTION_TYPE_OBJECT &&
4425  currItem.valueCount % 2 == 0)
4426  {
4427  VMA_ASSERT(isString);
4428  }
4429 
4430  if(currItem.type == COLLECTION_TYPE_OBJECT &&
4431  currItem.valueCount % 2 != 0)
4432  {
4433  m_SB.Add(": ");
4434  }
4435  else if(currItem.valueCount > 0)
4436  {
4437  m_SB.Add(", ");
4438  WriteIndent();
4439  }
4440  else
4441  {
4442  WriteIndent();
4443  }
4444  ++currItem.valueCount;
4445  }
4446 }
4447 
4448 void VmaJsonWriter::WriteIndent(bool oneLess)
4449 {
4450  if(!m_Stack.empty() && !m_Stack.back().singleLineMode)
4451  {
4452  m_SB.AddNewLine();
4453 
4454  size_t count = m_Stack.size();
4455  if(count > 0 && oneLess)
4456  {
4457  --count;
4458  }
4459  for(size_t i = 0; i < count; ++i)
4460  {
4461  m_SB.Add(INDENT);
4462  }
4463  }
4464 }
4465 
4466 #endif // #if VMA_STATS_STRING_ENABLED
4467 
4468 ////////////////////////////////////////////////////////////////////////////////
4469 
4470 void VmaAllocation_T::SetUserData(VmaAllocator hAllocator, void* pUserData)
4471 {
4472  if(IsUserDataString())
4473  {
4474  VMA_ASSERT(pUserData == VMA_NULL || pUserData != m_pUserData);
4475 
4476  FreeUserDataString(hAllocator);
4477 
4478  if(pUserData != VMA_NULL)
4479  {
4480  const char* const newStrSrc = (char*)pUserData;
4481  const size_t newStrLen = strlen(newStrSrc);
4482  char* const newStrDst = vma_new_array(hAllocator, char, newStrLen + 1);
4483  memcpy(newStrDst, newStrSrc, newStrLen + 1);
4484  m_pUserData = newStrDst;
4485  }
4486  }
4487  else
4488  {
4489  m_pUserData = pUserData;
4490  }
4491 }
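/*
Illustrative summary of the two modes above: when the allocation was created
with the user-data-copy-string flag, SetUserData() stores a private copy of
the incoming string; otherwise it stores the raw pointer without taking
ownership. Through the public API this is reached via vmaSetAllocationUserData():

    vmaSetAllocationUserData(allocator, alloc, (void*)"debug name"); // copied if string mode
    vmaSetAllocationUserData(allocator, alloc, pMyObject);           // stored as-is otherwise
*/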
4492 
4493 void VmaAllocation_T::ChangeBlockAllocation(
4494  VmaAllocator hAllocator,
4495  VmaDeviceMemoryBlock* block,
4496  VkDeviceSize offset)
4497 {
4498  VMA_ASSERT(block != VMA_NULL);
4499  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
4500 
4501  // Move mapping reference counter from old block to new block.
4502  if(block != m_BlockAllocation.m_Block)
4503  {
4504  uint32_t mapRefCount = m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP;
4505  if(IsPersistentMap())
4506  ++mapRefCount;
4507  m_BlockAllocation.m_Block->Unmap(hAllocator, mapRefCount);
4508  block->Map(hAllocator, mapRefCount, VMA_NULL);
4509  }
4510 
4511  m_BlockAllocation.m_Block = block;
4512  m_BlockAllocation.m_Offset = offset;
4513 }
4514 
4515 VkDeviceSize VmaAllocation_T::GetOffset() const
4516 {
4517  switch(m_Type)
4518  {
4519  case ALLOCATION_TYPE_BLOCK:
4520  return m_BlockAllocation.m_Offset;
4521  case ALLOCATION_TYPE_DEDICATED:
4522  return 0;
4523  default:
4524  VMA_ASSERT(0);
4525  return 0;
4526  }
4527 }
4528 
4529 VkDeviceMemory VmaAllocation_T::GetMemory() const
4530 {
4531  switch(m_Type)
4532  {
4533  case ALLOCATION_TYPE_BLOCK:
4534  return m_BlockAllocation.m_Block->m_hMemory;
4535  case ALLOCATION_TYPE_DEDICATED:
4536  return m_DedicatedAllocation.m_hMemory;
4537  default:
4538  VMA_ASSERT(0);
4539  return VK_NULL_HANDLE;
4540  }
4541 }
4542 
4543 uint32_t VmaAllocation_T::GetMemoryTypeIndex() const
4544 {
4545  switch(m_Type)
4546  {
4547  case ALLOCATION_TYPE_BLOCK:
4548  return m_BlockAllocation.m_Block->m_MemoryTypeIndex;
4549  case ALLOCATION_TYPE_DEDICATED:
4550  return m_DedicatedAllocation.m_MemoryTypeIndex;
4551  default:
4552  VMA_ASSERT(0);
4553  return UINT32_MAX;
4554  }
4555 }
4556 
4557 void* VmaAllocation_T::GetMappedData() const
4558 {
4559  switch(m_Type)
4560  {
4561  case ALLOCATION_TYPE_BLOCK:
4562  if(m_MapCount != 0)
4563  {
4564  void* pBlockData = m_BlockAllocation.m_Block->m_Mapping.GetMappedData();
4565  VMA_ASSERT(pBlockData != VMA_NULL);
4566  return (char*)pBlockData + m_BlockAllocation.m_Offset;
4567  }
4568  else
4569  {
4570  return VMA_NULL;
4571  }
4572  break;
4573  case ALLOCATION_TYPE_DEDICATED:
4574  VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0));
4575  return m_DedicatedAllocation.m_pMappedData;
4576  default:
4577  VMA_ASSERT(0);
4578  return VMA_NULL;
4579  }
4580 }
4581 
4582 bool VmaAllocation_T::CanBecomeLost() const
4583 {
4584  switch(m_Type)
4585  {
4586  case ALLOCATION_TYPE_BLOCK:
4587  return m_BlockAllocation.m_CanBecomeLost;
4588  case ALLOCATION_TYPE_DEDICATED:
4589  return false;
4590  default:
4591  VMA_ASSERT(0);
4592  return false;
4593  }
4594 }
4595 
4596 VmaPool VmaAllocation_T::GetPool() const
4597 {
4598  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
4599  return m_BlockAllocation.m_hPool;
4600 }
4601 
4602 bool VmaAllocation_T::MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
4603 {
4604  VMA_ASSERT(CanBecomeLost());
4605 
4606  /*
4607  Warning: This is a carefully designed algorithm.
4608  Do not modify unless you really know what you're doing :)
4609  */
4610  uint32_t localLastUseFrameIndex = GetLastUseFrameIndex();
4611  for(;;)
4612  {
4613  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
4614  {
4615  VMA_ASSERT(0);
4616  return false;
4617  }
4618  else if(localLastUseFrameIndex + frameInUseCount >= currentFrameIndex)
4619  {
4620  return false;
4621  }
4622  else // Last use time earlier than current time.
4623  {
4624  if(CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, VMA_FRAME_INDEX_LOST))
4625  {
4626  // Setting hAllocation.LastUseFrameIndex atomic to VMA_FRAME_INDEX_LOST is enough to mark it as LOST.
4627  // Calling code just needs to unregister this allocation in owning VmaDeviceMemoryBlock.
4628  return true;
4629  }
4630  }
4631  }
4632 }
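/*
Worked example for MakeLost() above (illustrative): with frameInUseCount = 2
and currentFrameIndex = 10, an allocation last used in frame 7 satisfies
7 + 2 < 10 and can be made lost, while one last used in frame 8 cannot,
because 8 + 2 >= 10. The compare-exchange loop retries when another thread
updates LastUseFrameIndex concurrently and succeeds only by writing
VMA_FRAME_INDEX_LOST.
*/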
4633 
4634 void VmaAllocation_T::FreeUserDataString(VmaAllocator hAllocator)
4635 {
4636  VMA_ASSERT(IsUserDataString());
4637  if(m_pUserData != VMA_NULL)
4638  {
4639  char* const oldStr = (char*)m_pUserData;
4640  const size_t oldStrLen = strlen(oldStr);
4641  vma_delete_array(hAllocator, oldStr, oldStrLen + 1);
4642  m_pUserData = VMA_NULL;
4643  }
4644 }
4645 
4646 void VmaAllocation_T::BlockAllocMap()
4647 {
4648  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
4649 
4650  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
4651  {
4652  ++m_MapCount;
4653  }
4654  else
4655  {
4656  VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
4657  }
4658 }
4659 
4660 void VmaAllocation_T::BlockAllocUnmap()
4661 {
4662  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
4663 
4664  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
4665  {
4666  --m_MapCount;
4667  }
4668  else
4669  {
4670  VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
4671  }
4672 }
4673 
4674 VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
4675 {
4676  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
4677 
4678  if(m_MapCount != 0)
4679  {
4680  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
4681  {
4682  VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
4683  *ppData = m_DedicatedAllocation.m_pMappedData;
4684  ++m_MapCount;
4685  return VK_SUCCESS;
4686  }
4687  else
4688  {
4689  VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
4690  return VK_ERROR_MEMORY_MAP_FAILED;
4691  }
4692  }
4693  else
4694  {
4695  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
4696  hAllocator->m_hDevice,
4697  m_DedicatedAllocation.m_hMemory,
4698  0, // offset
4699  VK_WHOLE_SIZE,
4700  0, // flags
4701  ppData);
4702  if(result == VK_SUCCESS)
4703  {
4704  m_DedicatedAllocation.m_pMappedData = *ppData;
4705  m_MapCount = 1;
4706  }
4707  return result;
4708  }
4709 }
4710 
4711 void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
4712 {
4713  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
4714 
4715  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
4716  {
4717  --m_MapCount;
4718  if(m_MapCount == 0)
4719  {
4720  m_DedicatedAllocation.m_pMappedData = VMA_NULL;
4721  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
4722  hAllocator->m_hDevice,
4723  m_DedicatedAllocation.m_hMemory);
4724  }
4725  }
4726  else
4727  {
4728  VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
4729  }
4730 }
4731 
4732 #if VMA_STATS_STRING_ENABLED
4733 
4734 // Names correspond to values of enum VmaSuballocationType.
4735 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] = {
4736  "FREE",
4737  "UNKNOWN",
4738  "BUFFER",
4739  "IMAGE_UNKNOWN",
4740  "IMAGE_LINEAR",
4741  "IMAGE_OPTIMAL",
4742 };
4743 
4744 static void VmaPrintStatInfo(VmaJsonWriter& json, const VmaStatInfo& stat)
4745 {
4746  json.BeginObject();
4747 
4748  json.WriteString("Blocks");
4749  json.WriteNumber(stat.blockCount);
4750 
4751  json.WriteString("Allocations");
4752  json.WriteNumber(stat.allocationCount);
4753 
4754  json.WriteString("UnusedRanges");
4755  json.WriteNumber(stat.unusedRangeCount);
4756 
4757  json.WriteString("UsedBytes");
4758  json.WriteNumber(stat.usedBytes);
4759 
4760  json.WriteString("UnusedBytes");
4761  json.WriteNumber(stat.unusedBytes);
4762 
4763  if(stat.allocationCount > 1)
4764  {
4765  json.WriteString("AllocationSize");
4766  json.BeginObject(true);
4767  json.WriteString("Min");
4768  json.WriteNumber(stat.allocationSizeMin);
4769  json.WriteString("Avg");
4770  json.WriteNumber(stat.allocationSizeAvg);
4771  json.WriteString("Max");
4772  json.WriteNumber(stat.allocationSizeMax);
4773  json.EndObject();
4774  }
4775 
4776  if(stat.unusedRangeCount > 1)
4777  {
4778  json.WriteString("UnusedRangeSize");
4779  json.BeginObject(true);
4780  json.WriteString("Min");
4781  json.WriteNumber(stat.unusedRangeSizeMin);
4782  json.WriteString("Avg");
4783  json.WriteNumber(stat.unusedRangeSizeAvg);
4784  json.WriteString("Max");
4785  json.WriteNumber(stat.unusedRangeSizeMax);
4786  json.EndObject();
4787  }
4788 
4789  json.EndObject();
4790 }
4791 
4792 #endif // #if VMA_STATS_STRING_ENABLED
4793 
4794 struct VmaSuballocationItemSizeLess
4795 {
4796  bool operator()(
4797  const VmaSuballocationList::iterator lhs,
4798  const VmaSuballocationList::iterator rhs) const
4799  {
4800  return lhs->size < rhs->size;
4801  }
4802  bool operator()(
4803  const VmaSuballocationList::iterator lhs,
4804  VkDeviceSize rhsSize) const
4805  {
4806  return lhs->size < rhsSize;
4807  }
4808 };
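/*
Illustrative use of the comparator above: CreateAllocationRequest() below runs
a lower-bound binary search over m_FreeSuballocationsBySize (sorted by size,
ascending) to find the first free range not smaller than allocSize:

    VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
        m_FreeSuballocationsBySize.data(),
        m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
        allocSize,
        VmaSuballocationItemSizeLess());
*/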
4809 
4810 ////////////////////////////////////////////////////////////////////////////////
4811 // class VmaBlockMetadata
4812 
4812 
4813 VmaBlockMetadata::VmaBlockMetadata(VmaAllocator hAllocator) :
4814  m_Size(0),
4815  m_FreeCount(0),
4816  m_SumFreeSize(0),
4817  m_Suballocations(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
4818  m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(hAllocator->GetAllocationCallbacks()))
4819 {
4820 }
4821 
4822 VmaBlockMetadata::~VmaBlockMetadata()
4823 {
4824 }
4825 
4826 void VmaBlockMetadata::Init(VkDeviceSize size)
4827 {
4828  m_Size = size;
4829  m_FreeCount = 1;
4830  m_SumFreeSize = size;
4831 
4832  VmaSuballocation suballoc = {};
4833  suballoc.offset = 0;
4834  suballoc.size = size;
4835  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
4836  suballoc.hAllocation = VK_NULL_HANDLE;
4837 
4838  m_Suballocations.push_back(suballoc);
4839  VmaSuballocationList::iterator suballocItem = m_Suballocations.end();
4840  --suballocItem;
4841  m_FreeSuballocationsBySize.push_back(suballocItem);
4842 }
4843 
4844 bool VmaBlockMetadata::Validate() const
4845 {
4846  if(m_Suballocations.empty())
4847  {
4848  return false;
4849  }
4850 
4851  // Expected offset of new suballocation as calculated from previous ones.
4852  VkDeviceSize calculatedOffset = 0;
4853  // Expected number of free suballocations as calculated from traversing their list.
4854  uint32_t calculatedFreeCount = 0;
4855  // Expected sum size of free suballocations as calculated from traversing their list.
4856  VkDeviceSize calculatedSumFreeSize = 0;
4857  // Expected number of free suballocations that should be registered in
4858  // m_FreeSuballocationsBySize calculated from traversing their list.
4859  size_t freeSuballocationsToRegister = 0;
4860  // True if previously visited suballocation was free.
4861  bool prevFree = false;
4862 
4863  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
4864  suballocItem != m_Suballocations.cend();
4865  ++suballocItem)
4866  {
4867  const VmaSuballocation& subAlloc = *suballocItem;
4868 
4869  // Actual offset of this suballocation doesn't match expected one.
4870  if(subAlloc.offset != calculatedOffset)
4871  {
4872  return false;
4873  }
4874 
4875  const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
4876  // Two adjacent free suballocations are invalid. They should be merged.
4877  if(prevFree && currFree)
4878  {
4879  return false;
4880  }
4881 
4882  if(currFree != (subAlloc.hAllocation == VK_NULL_HANDLE))
4883  {
4884  return false;
4885  }
4886 
4887  if(currFree)
4888  {
4889  calculatedSumFreeSize += subAlloc.size;
4890  ++calculatedFreeCount;
4891  if(subAlloc.size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
4892  {
4893  ++freeSuballocationsToRegister;
4894  }
4895  }
4896  else
4897  {
4898  if(subAlloc.hAllocation->GetOffset() != subAlloc.offset)
4899  {
4900  return false;
4901  }
4902  if(subAlloc.hAllocation->GetSize() != subAlloc.size)
4903  {
4904  return false;
4905  }
4906  }
4907 
4908  calculatedOffset += subAlloc.size;
4909  prevFree = currFree;
4910  }
4911 
4912  // Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
4913  // match expected one.
4914  if(m_FreeSuballocationsBySize.size() != freeSuballocationsToRegister)
4915  {
4916  return false;
4917  }
4918 
4919  VkDeviceSize lastSize = 0;
4920  for(size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
4921  {
4922  VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];
4923 
4924  // Only free suballocations can be registered in m_FreeSuballocationsBySize.
4925  if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
4926  {
4927  return false;
4928  }
4929  // They must be sorted by size ascending.
4930  if(suballocItem->size < lastSize)
4931  {
4932  return false;
4933  }
4934 
4935  lastSize = suballocItem->size;
4936  }
4937 
4938  // Check if totals match calculated values.
4939  if(!ValidateFreeSuballocationList() ||
4940  (calculatedOffset != m_Size) ||
4941  (calculatedSumFreeSize != m_SumFreeSize) ||
4942  (calculatedFreeCount != m_FreeCount))
4943  {
4944  return false;
4945  }
4946 
4947  return true;
4948 }
4949 
4950 VkDeviceSize VmaBlockMetadata::GetUnusedRangeSizeMax() const
4951 {
4952  if(!m_FreeSuballocationsBySize.empty())
4953  {
4954  return m_FreeSuballocationsBySize.back()->size;
4955  }
4956  else
4957  {
4958  return 0;
4959  }
4960 }
4961 
4962 bool VmaBlockMetadata::IsEmpty() const
4963 {
4964  return (m_Suballocations.size() == 1) && (m_FreeCount == 1);
4965 }
4966 
4967 void VmaBlockMetadata::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
4968 {
4969  outInfo.blockCount = 1;
4970 
4971  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
4972  outInfo.allocationCount = rangeCount - m_FreeCount;
4973  outInfo.unusedRangeCount = m_FreeCount;
4974 
4975  outInfo.unusedBytes = m_SumFreeSize;
4976  outInfo.usedBytes = m_Size - outInfo.unusedBytes;
4977 
4978  outInfo.allocationSizeMin = UINT64_MAX;
4979  outInfo.allocationSizeMax = 0;
4980  outInfo.unusedRangeSizeMin = UINT64_MAX;
4981  outInfo.unusedRangeSizeMax = 0;
4982 
4983  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
4984  suballocItem != m_Suballocations.cend();
4985  ++suballocItem)
4986  {
4987  const VmaSuballocation& suballoc = *suballocItem;
4988  if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
4989  {
4990  outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
4991  outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);
4992  }
4993  else
4994  {
4995  outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, suballoc.size);
4996  outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, suballoc.size);
4997  }
4998  }
4999 }
5000 
5001 void VmaBlockMetadata::AddPoolStats(VmaPoolStats& inoutStats) const
5002 {
5003  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
5004 
5005  inoutStats.size += m_Size;
5006  inoutStats.unusedSize += m_SumFreeSize;
5007  inoutStats.allocationCount += rangeCount - m_FreeCount;
5008  inoutStats.unusedRangeCount += m_FreeCount;
5009  inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, GetUnusedRangeSizeMax());
5010 }
5011 
5012 #if VMA_STATS_STRING_ENABLED
5013 
5014 void VmaBlockMetadata::PrintDetailedMap(class VmaJsonWriter& json) const
5015 {
5016  json.BeginObject();
5017 
5018  json.WriteString("TotalBytes");
5019  json.WriteNumber(m_Size);
5020 
5021  json.WriteString("UnusedBytes");
5022  json.WriteNumber(m_SumFreeSize);
5023 
5024  json.WriteString("Allocations");
5025  json.WriteNumber((uint64_t)m_Suballocations.size() - m_FreeCount);
5026 
5027  json.WriteString("UnusedRanges");
5028  json.WriteNumber(m_FreeCount);
5029 
5030  json.WriteString("Suballocations");
5031  json.BeginArray();
5032  size_t i = 0;
5033  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
5034  suballocItem != m_Suballocations.cend();
5035  ++suballocItem, ++i)
5036  {
5037  json.BeginObject(true);
5038 
5039  json.WriteString("Type");
5040  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[suballocItem->type]);
5041 
5042  json.WriteString("Size");
5043  json.WriteNumber(suballocItem->size);
5044 
5045  json.WriteString("Offset");
5046  json.WriteNumber(suballocItem->offset);
5047 
5048  if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
5049  {
5050  const void* pUserData = suballocItem->hAllocation->GetUserData();
5051  if(pUserData != VMA_NULL)
5052  {
5053  json.WriteString("UserData");
5054  if(suballocItem->hAllocation->IsUserDataString())
5055  {
5056  json.WriteString((const char*)pUserData);
5057  }
5058  else
5059  {
5060  json.BeginString();
5061  json.ContinueString_Pointer(pUserData);
5062  json.EndString();
5063  }
5064  }
5065  }
5066 
5067  json.EndObject();
5068  }
5069  json.EndArray();
5070 
5071  json.EndObject();
5072 }
5073 
5074 #endif // #if VMA_STATS_STRING_ENABLED
5075 
5076 /*
5077 How many suitable free suballocations to analyze before choosing best one.
5078 - Set to 1 to use First-Fit algorithm - first suitable free suballocation will
5079  be chosen.
5080 - Set to UINT32_MAX to use Best-Fit/Worst-Fit algorithm - all suitable free
5081  suballocations will be analyzed and the best one will be chosen.
5082 - Any other value is also acceptable.
5083 */
5084 //static const uint32_t MAX_SUITABLE_SUBALLOCATIONS_TO_CHECK = 8;
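/*
Illustrative consequence of this choice in CreateAllocationRequest() below:
with VMA_BEST_FIT the search starts at the smallest free range not smaller
than allocSize (lower bound in the size-sorted list) and walks upward;
otherwise it scans from the biggest free ranges downward, behaving
worst-fit-like.
*/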
5085 
5086 void VmaBlockMetadata::CreateFirstAllocationRequest(VmaAllocationRequest* pAllocationRequest)
5087 {
5088  VMA_ASSERT(IsEmpty());
5089  pAllocationRequest->offset = 0;
5090  pAllocationRequest->sumFreeSize = m_SumFreeSize;
5091  pAllocationRequest->sumItemSize = 0;
5092  pAllocationRequest->item = m_Suballocations.begin();
5093  pAllocationRequest->itemsToMakeLostCount = 0;
5094 }
5095 
5096 bool VmaBlockMetadata::CreateAllocationRequest(
5097  uint32_t currentFrameIndex,
5098  uint32_t frameInUseCount,
5099  VkDeviceSize bufferImageGranularity,
5100  VkDeviceSize allocSize,
5101  VkDeviceSize allocAlignment,
5102  VmaSuballocationType allocType,
5103  bool canMakeOtherLost,
5104  VmaAllocationRequest* pAllocationRequest)
5105 {
5106  VMA_ASSERT(allocSize > 0);
5107  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
5108  VMA_ASSERT(pAllocationRequest != VMA_NULL);
5109  VMA_HEAVY_ASSERT(Validate());
5110 
5111  // There is not enough total free space in this block to fulfill the request: early return.
5112  if(canMakeOtherLost == false && m_SumFreeSize < allocSize)
5113  {
5114  return false;
5115  }
5116 
5117  // New algorithm, efficiently searching m_FreeSuballocationsBySize.
5118  const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
5119  if(freeSuballocCount > 0)
5120  {
5121  if(VMA_BEST_FIT)
5122  {
5123  // Find first free suballocation with size not less than allocSize.
5124  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
5125  m_FreeSuballocationsBySize.data(),
5126  m_FreeSuballocationsBySize.data() + freeSuballocCount,
5127  allocSize,
5128  VmaSuballocationItemSizeLess());
5129  size_t index = it - m_FreeSuballocationsBySize.data();
5130  for(; index < freeSuballocCount; ++index)
5131  {
5132  if(CheckAllocation(
5133  currentFrameIndex,
5134  frameInUseCount,
5135  bufferImageGranularity,
5136  allocSize,
5137  allocAlignment,
5138  allocType,
5139  m_FreeSuballocationsBySize[index],
5140  false, // canMakeOtherLost
5141  &pAllocationRequest->offset,
5142  &pAllocationRequest->itemsToMakeLostCount,
5143  &pAllocationRequest->sumFreeSize,
5144  &pAllocationRequest->sumItemSize))
5145  {
5146  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
5147  return true;
5148  }
5149  }
5150  }
5151  else
5152  {
5153  // Search starting from biggest suballocations.
5154  for(size_t index = freeSuballocCount; index--; )
5155  {
5156  if(CheckAllocation(
5157  currentFrameIndex,
5158  frameInUseCount,
5159  bufferImageGranularity,
5160  allocSize,
5161  allocAlignment,
5162  allocType,
5163  m_FreeSuballocationsBySize[index],
5164  false, // canMakeOtherLost
5165  &pAllocationRequest->offset,
5166  &pAllocationRequest->itemsToMakeLostCount,
5167  &pAllocationRequest->sumFreeSize,
5168  &pAllocationRequest->sumItemSize))
5169  {
5170  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
5171  return true;
5172  }
5173  }
5174  }
5175  }
5176 
5177  if(canMakeOtherLost)
5178  {
5179  // Brute-force algorithm. TODO: Come up with something better.
5180 
5181  pAllocationRequest->sumFreeSize = VK_WHOLE_SIZE;
5182  pAllocationRequest->sumItemSize = VK_WHOLE_SIZE;
5183 
5184  VmaAllocationRequest tmpAllocRequest = {};
5185  for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
5186  suballocIt != m_Suballocations.end();
5187  ++suballocIt)
5188  {
5189  if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
5190  suballocIt->hAllocation->CanBecomeLost())
5191  {
5192  if(CheckAllocation(
5193  currentFrameIndex,
5194  frameInUseCount,
5195  bufferImageGranularity,
5196  allocSize,
5197  allocAlignment,
5198  allocType,
5199  suballocIt,
5200  canMakeOtherLost,
5201  &tmpAllocRequest.offset,
5202  &tmpAllocRequest.itemsToMakeLostCount,
5203  &tmpAllocRequest.sumFreeSize,
5204  &tmpAllocRequest.sumItemSize))
5205  {
5206  tmpAllocRequest.item = suballocIt;
5207 
5208  if(tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
5209  {
5210  *pAllocationRequest = tmpAllocRequest;
5211  }
5212  }
5213  }
5214  }
5215 
5216  if(pAllocationRequest->sumItemSize != VK_WHOLE_SIZE)
5217  {
5218  return true;
5219  }
5220  }
5221 
5222  return false;
5223 }
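
// Note on the choice above (an inference from the brute-force loop): with
// canMakeOtherLost, every viable starting suballocation yields a candidate
// request, and the one with the lowest CalcCost() wins. The cost grows with
// the amount of allocations that would have to be made lost, so a request
// that fits entirely into free space is always preferred over one that
// sacrifices live allocations.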
5224 
5225 bool VmaBlockMetadata::MakeRequestedAllocationsLost(
5226  uint32_t currentFrameIndex,
5227  uint32_t frameInUseCount,
5228  VmaAllocationRequest* pAllocationRequest)
5229 {
5230  while(pAllocationRequest->itemsToMakeLostCount > 0)
5231  {
5232  if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
5233  {
5234  ++pAllocationRequest->item;
5235  }
5236  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
5237  VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
5238  VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
5239  if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
5240  {
5241  pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
5242  --pAllocationRequest->itemsToMakeLostCount;
5243  }
5244  else
5245  {
5246  return false;
5247  }
5248  }
5249 
5250  VMA_HEAVY_ASSERT(Validate());
5251  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
5252  VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);
5253 
5254  return true;
5255 }
5256 
5257 uint32_t VmaBlockMetadata::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
5258 {
5259  uint32_t lostAllocationCount = 0;
5260  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
5261  it != m_Suballocations.end();
5262  ++it)
5263  {
5264  if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
5265  it->hAllocation->CanBecomeLost() &&
5266  it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
5267  {
5268  it = FreeSuballocation(it);
5269  ++lostAllocationCount;
5270  }
5271  }
5272  return lostAllocationCount;
5273 }
5274 
5275 void VmaBlockMetadata::Alloc(
5276  const VmaAllocationRequest& request,
5277  VmaSuballocationType type,
5278  VkDeviceSize allocSize,
5279  VmaAllocation hAllocation)
5280 {
5281  VMA_ASSERT(request.item != m_Suballocations.end());
5282  VmaSuballocation& suballoc = *request.item;
5283  // Given suballocation is a free block.
5284  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
5285  // Given offset is inside this suballocation.
5286  VMA_ASSERT(request.offset >= suballoc.offset);
5287  const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
5288  VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
5289  const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;
5290 
5291  // Unregister this free suballocation from m_FreeSuballocationsBySize and update
5292  // it to become used.
5293  UnregisterFreeSuballocation(request.item);
5294 
5295  suballoc.offset = request.offset;
5296  suballoc.size = allocSize;
5297  suballoc.type = type;
5298  suballoc.hAllocation = hAllocation;
5299 
5300  // If there are any free bytes remaining at the end, insert new free suballocation after current one.
5301  if(paddingEnd)
5302  {
5303  VmaSuballocation paddingSuballoc = {};
5304  paddingSuballoc.offset = request.offset + allocSize;
5305  paddingSuballoc.size = paddingEnd;
5306  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
5307  VmaSuballocationList::iterator next = request.item;
5308  ++next;
5309  const VmaSuballocationList::iterator paddingEndItem =
5310  m_Suballocations.insert(next, paddingSuballoc);
5311  RegisterFreeSuballocation(paddingEndItem);
5312  }
5313 
5314  // If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
5315  if(paddingBegin)
5316  {
5317  VmaSuballocation paddingSuballoc = {};
5318  paddingSuballoc.offset = request.offset - paddingBegin;
5319  paddingSuballoc.size = paddingBegin;
5320  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
5321  const VmaSuballocationList::iterator paddingBeginItem =
5322  m_Suballocations.insert(request.item, paddingSuballoc);
5323  RegisterFreeSuballocation(paddingBeginItem);
5324  }
5325 
5326  // Update totals.
5327  m_FreeCount = m_FreeCount - 1;
5328  if(paddingBegin > 0)
5329  {
5330  ++m_FreeCount;
5331  }
5332  if(paddingEnd > 0)
5333  {
5334  ++m_FreeCount;
5335  }
5336  m_SumFreeSize -= allocSize;
5337 }
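
// Illustrative diagram (not in the original source) of the split performed by
// Alloc() above. One free suballocation becomes up to three entries; padding
// pieces arise from alignment and debug margins:
//
//   before:  |<---------------- free suballocation ---------------->|
//   after:   |<paddingBegin>|<------- allocation ------->|<paddingEnd>|
//
// Each non-zero padding piece is re-registered as a new free suballocation,
// which is why m_FreeCount is decremented once and then incremented again
// for every padding piece created.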
5338 
5339 void VmaBlockMetadata::Free(const VmaAllocation allocation)
5340 {
5341  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
5342  suballocItem != m_Suballocations.end();
5343  ++suballocItem)
5344  {
5345  VmaSuballocation& suballoc = *suballocItem;
5346  if(suballoc.hAllocation == allocation)
5347  {
5348  FreeSuballocation(suballocItem);
5349  VMA_HEAVY_ASSERT(Validate());
5350  return;
5351  }
5352  }
5353  VMA_ASSERT(0 && "Not found!");
5354 }
5355 
5356 void VmaBlockMetadata::FreeAtOffset(VkDeviceSize offset)
5357 {
5358  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
5359  suballocItem != m_Suballocations.end();
5360  ++suballocItem)
5361  {
5362  VmaSuballocation& suballoc = *suballocItem;
5363  if(suballoc.offset == offset)
5364  {
5365  FreeSuballocation(suballocItem);
5366  return;
5367  }
5368  }
5369  VMA_ASSERT(0 && "Not found!");
5370 }
5371 
5372 bool VmaBlockMetadata::ValidateFreeSuballocationList() const
5373 {
5374  VkDeviceSize lastSize = 0;
5375  for(size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
5376  {
5377  const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
5378 
5379  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
5380  {
5381  VMA_ASSERT(0);
5382  return false;
5383  }
5384  if(it->size < VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
5385  {
5386  VMA_ASSERT(0);
5387  return false;
5388  }
5389  if(it->size < lastSize)
5390  {
5391  VMA_ASSERT(0);
5392  return false;
5393  }
5394 
5395  lastSize = it->size;
5396  }
5397  return true;
5398 }
5399 
5400 bool VmaBlockMetadata::CheckAllocation(
5401  uint32_t currentFrameIndex,
5402  uint32_t frameInUseCount,
5403  VkDeviceSize bufferImageGranularity,
5404  VkDeviceSize allocSize,
5405  VkDeviceSize allocAlignment,
5406  VmaSuballocationType allocType,
5407  VmaSuballocationList::const_iterator suballocItem,
5408  bool canMakeOtherLost,
5409  VkDeviceSize* pOffset,
5410  size_t* itemsToMakeLostCount,
5411  VkDeviceSize* pSumFreeSize,
5412  VkDeviceSize* pSumItemSize) const
5413 {
5414  VMA_ASSERT(allocSize > 0);
5415  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
5416  VMA_ASSERT(suballocItem != m_Suballocations.cend());
5417  VMA_ASSERT(pOffset != VMA_NULL);
5418 
5419  *itemsToMakeLostCount = 0;
5420  *pSumFreeSize = 0;
5421  *pSumItemSize = 0;
5422 
5423  if(canMakeOtherLost)
5424  {
5425  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
5426  {
5427  *pSumFreeSize = suballocItem->size;
5428  }
5429  else
5430  {
5431  if(suballocItem->hAllocation->CanBecomeLost() &&
5432  suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
5433  {
5434  ++*itemsToMakeLostCount;
5435  *pSumItemSize = suballocItem->size;
5436  }
5437  else
5438  {
5439  return false;
5440  }
5441  }
5442 
5443  // Remaining size is too small for this request: Early return.
5444  if(m_Size - suballocItem->offset < allocSize)
5445  {
5446  return false;
5447  }
5448 
5449  // Start from offset equal to beginning of this suballocation.
5450  *pOffset = suballocItem->offset;
5451 
5452  // Apply VMA_DEBUG_MARGIN at the beginning.
5453  if((VMA_DEBUG_MARGIN > 0) && suballocItem != m_Suballocations.cbegin())
5454  {
5455  *pOffset += VMA_DEBUG_MARGIN;
5456  }
5457 
5458  // Apply alignment.
5459  const VkDeviceSize alignment = VMA_MAX(allocAlignment, static_cast<VkDeviceSize>(VMA_DEBUG_ALIGNMENT));
5460  *pOffset = VmaAlignUp(*pOffset, alignment);
5461 
5462  // Check previous suballocations for BufferImageGranularity conflicts.
5463  // Make bigger alignment if necessary.
5464  if(bufferImageGranularity > 1)
5465  {
5466  bool bufferImageGranularityConflict = false;
5467  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
5468  while(prevSuballocItem != m_Suballocations.cbegin())
5469  {
5470  --prevSuballocItem;
5471  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
5472  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
5473  {
5474  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
5475  {
5476  bufferImageGranularityConflict = true;
5477  break;
5478  }
5479  }
5480  else
5481  // Already on previous page.
5482  break;
5483  }
5484  if(bufferImageGranularityConflict)
5485  {
5486  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
5487  }
5488  }
5489 
5490  // Now that we have final *pOffset, check if we are past suballocItem.
5491  // If yes, return false - this function should be called for another suballocItem as starting point.
5492  if(*pOffset >= suballocItem->offset + suballocItem->size)
5493  {
5494  return false;
5495  }
5496 
5497  // Calculate padding at the beginning based on current offset.
5498  const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;
5499 
5500  // Calculate required margin at the end if this is not last suballocation.
5501  VmaSuballocationList::const_iterator next = suballocItem;
5502  ++next;
5503  const VkDeviceSize requiredEndMargin =
5504  (next != m_Suballocations.cend()) ? VMA_DEBUG_MARGIN : 0;
5505 
5506  const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
5507  // Another early return check.
5508  if(suballocItem->offset + totalSize > m_Size)
5509  {
5510  return false;
5511  }
5512 
5513  // Advance lastSuballocItem until desired size is reached.
5514  // Update itemsToMakeLostCount.
5515  VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
5516  if(totalSize > suballocItem->size)
5517  {
5518  VkDeviceSize remainingSize = totalSize - suballocItem->size;
5519  while(remainingSize > 0)
5520  {
5521  ++lastSuballocItem;
5522  if(lastSuballocItem == m_Suballocations.cend())
5523  {
5524  return false;
5525  }
5526  if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
5527  {
5528  *pSumFreeSize += lastSuballocItem->size;
5529  }
5530  else
5531  {
5532  VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
5533  if(lastSuballocItem->hAllocation->CanBecomeLost() &&
5534  lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
5535  {
5536  ++*itemsToMakeLostCount;
5537  *pSumItemSize += lastSuballocItem->size;
5538  }
5539  else
5540  {
5541  return false;
5542  }
5543  }
5544  remainingSize = (lastSuballocItem->size < remainingSize) ?
5545  remainingSize - lastSuballocItem->size : 0;
5546  }
5547  }
5548 
5549  // Check next suballocations for BufferImageGranularity conflicts.
5550  // If conflict exists, we must mark more allocations lost or fail.
5551  if(bufferImageGranularity > 1)
5552  {
5553  VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
5554  ++nextSuballocItem;
5555  while(nextSuballocItem != m_Suballocations.cend())
5556  {
5557  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
5558  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
5559  {
5560  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
5561  {
5562  VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
5563  if(nextSuballoc.hAllocation->CanBecomeLost() &&
5564  nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
5565  {
5566  ++*itemsToMakeLostCount;
5567  }
5568  else
5569  {
5570  return false;
5571  }
5572  }
5573  }
5574  else
5575  {
5576  // Already on next page.
5577  break;
5578  }
5579  ++nextSuballocItem;
5580  }
5581  }
5582  }
5583  else
5584  {
5585  const VmaSuballocation& suballoc = *suballocItem;
5586  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
5587 
5588  *pSumFreeSize = suballoc.size;
5589 
5590  // Size of this suballocation is too small for this request: Early return.
5591  if(suballoc.size < allocSize)
5592  {
5593  return false;
5594  }
5595 
5596  // Start from offset equal to beginning of this suballocation.
5597  *pOffset = suballoc.offset;
5598 
5599  // Apply VMA_DEBUG_MARGIN at the beginning.
5600  if((VMA_DEBUG_MARGIN > 0) && suballocItem != m_Suballocations.cbegin())
5601  {
5602  *pOffset += VMA_DEBUG_MARGIN;
5603  }
5604 
5605  // Apply alignment.
5606  const VkDeviceSize alignment = VMA_MAX(allocAlignment, static_cast<VkDeviceSize>(VMA_DEBUG_ALIGNMENT));
5607  *pOffset = VmaAlignUp(*pOffset, alignment);
5608 
5609  // Check previous suballocations for BufferImageGranularity conflicts.
5610  // Make bigger alignment if necessary.
5611  if(bufferImageGranularity > 1)
5612  {
5613  bool bufferImageGranularityConflict = false;
5614  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
5615  while(prevSuballocItem != m_Suballocations.cbegin())
5616  {
5617  --prevSuballocItem;
5618  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
5619  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
5620  {
5621  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
5622  {
5623  bufferImageGranularityConflict = true;
5624  break;
5625  }
5626  }
5627  else
5628  // Already on previous page.
5629  break;
5630  }
5631  if(bufferImageGranularityConflict)
5632  {
5633  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
5634  }
5635  }
5636 
5637  // Calculate padding at the beginning based on current offset.
5638  const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;
5639 
5640  // Calculate required margin at the end if this is not last suballocation.
5641  VmaSuballocationList::const_iterator next = suballocItem;
5642  ++next;
5643  const VkDeviceSize requiredEndMargin =
5644  (next != m_Suballocations.cend()) ? VMA_DEBUG_MARGIN : 0;
5645 
5646  // Fail if requested size plus margin before and after is bigger than size of this suballocation.
5647  if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
5648  {
5649  return false;
5650  }
5651 
5652  // Check next suballocations for BufferImageGranularity conflicts.
5653  // If conflict exists, allocation cannot be made here.
5654  if(bufferImageGranularity > 1)
5655  {
5656  VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
5657  ++nextSuballocItem;
5658  while(nextSuballocItem != m_Suballocations.cend())
5659  {
5660  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
5661  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
5662  {
5663  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
5664  {
5665  return false;
5666  }
5667  }
5668  else
5669  {
5670  // Already on next page.
5671  break;
5672  }
5673  ++nextSuballocItem;
5674  }
5675  }
5676  }
5677 
5678  // All tests passed: Success. pOffset is already filled.
5679  return true;
5680 }
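
// Worked example for CheckAllocation() above, with made-up numbers: suppose
// the free suballocation starts at offset 1000, allocAlignment is 256 and
// bufferImageGranularity is 1024. VmaAlignUp(1000, 256) moves the offset to
// 1024. If the previous suballocation ends on the same 1024-byte "page" and
// holds a resource of a conflicting tiling type (linear vs. optimal), the
// offset is additionally aligned up to the granularity, here leaving it at
// VmaAlignUp(1024, 1024) == 1024. The difference between the final offset
// and the suballocation start becomes paddingBegin.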
5681 
5682 void VmaBlockMetadata::MergeFreeWithNext(VmaSuballocationList::iterator item)
5683 {
5684  VMA_ASSERT(item != m_Suballocations.end());
5685  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
5686 
5687  VmaSuballocationList::iterator nextItem = item;
5688  ++nextItem;
5689  VMA_ASSERT(nextItem != m_Suballocations.end());
5690  VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);
5691 
5692  item->size += nextItem->size;
5693  --m_FreeCount;
5694  m_Suballocations.erase(nextItem);
5695 }
5696 
5697 VmaSuballocationList::iterator VmaBlockMetadata::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
5698 {
5699  // Change this suballocation to be marked as free.
5700  VmaSuballocation& suballoc = *suballocItem;
5701  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
5702  suballoc.hAllocation = VK_NULL_HANDLE;
5703 
5704  // Update totals.
5705  ++m_FreeCount;
5706  m_SumFreeSize += suballoc.size;
5707 
5708  // Merge with previous and/or next suballocation if it's also free.
5709  bool mergeWithNext = false;
5710  bool mergeWithPrev = false;
5711 
5712  VmaSuballocationList::iterator nextItem = suballocItem;
5713  ++nextItem;
5714  if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
5715  {
5716  mergeWithNext = true;
5717  }
5718 
5719  VmaSuballocationList::iterator prevItem = suballocItem;
5720  if(suballocItem != m_Suballocations.begin())
5721  {
5722  --prevItem;
5723  if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
5724  {
5725  mergeWithPrev = true;
5726  }
5727  }
5728 
5729  if(mergeWithNext)
5730  {
5731  UnregisterFreeSuballocation(nextItem);
5732  MergeFreeWithNext(suballocItem);
5733  }
5734 
5735  if(mergeWithPrev)
5736  {
5737  UnregisterFreeSuballocation(prevItem);
5738  MergeFreeWithNext(prevItem);
5739  RegisterFreeSuballocation(prevItem);
5740  return prevItem;
5741  }
5742  else
5743  {
5744  RegisterFreeSuballocation(suballocItem);
5745  return suballocItem;
5746  }
5747 }
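
// Illustrative diagram (not in the original source): FreeSuballocation()
// keeps the free list coalesced, so two free suballocations are never
// adjacent. Freeing X when both neighbors are free collapses three entries
// into one:
//
//   before:  | free A | X (used) | free B |
//   after:   | free (A + X + B)           |
//
// This invariant appears to be what lets MakeRequestedAllocationsLost()
// above skip at most one leading free item with a simple if instead of a loop.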
5748 
5749 void VmaBlockMetadata::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
5750 {
5751  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
5752  VMA_ASSERT(item->size > 0);
5753 
5754  // You may want to enable this validation at the beginning or at the end of
5755  // this function, depending on what you want to check.
5756  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5757 
5758  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
5759  {
5760  if(m_FreeSuballocationsBySize.empty())
5761  {
5762  m_FreeSuballocationsBySize.push_back(item);
5763  }
5764  else
5765  {
5766  VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
5767  }
5768  }
5769 
5770  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5771 }
5772 
5773 
5774 void VmaBlockMetadata::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
5775 {
5776  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
5777  VMA_ASSERT(item->size > 0);
5778 
5779  // You may want to enable this validation at the beginning or at the end of
5780  // this function, depending on what you want to check.
5781  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5782 
5783  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
5784  {
5785  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
5786  m_FreeSuballocationsBySize.data(),
5787  m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
5788  item,
5789  VmaSuballocationItemSizeLess());
5790  for(size_t index = it - m_FreeSuballocationsBySize.data();
5791  index < m_FreeSuballocationsBySize.size();
5792  ++index)
5793  {
5794  if(m_FreeSuballocationsBySize[index] == item)
5795  {
5796  VmaVectorRemove(m_FreeSuballocationsBySize, index);
5797  return;
5798  }
5799  VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
5800  }
5801  VMA_ASSERT(0 && "Not found.");
5802  }
5803 
5804  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
5805 }
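
// Note on the lookup above: m_FreeSuballocationsBySize is sorted by size
// only, and many suballocations can share the same size, so the binary
// search merely finds the first entry of equal size; the linear loop then
// scans that run for the exact iterator. A minimal sketch of the same
// pattern using the standard library, with hypothetical names (int* stands
// in for the suballocation iterator, the pointed-to value for its size):
#if 0
#include <algorithm>
#include <vector>

bool RemoveExact(std::vector<int*>& sortedByValue, int* item)
{
    auto it = std::lower_bound(sortedByValue.begin(), sortedByValue.end(), item,
        [](int* a, int* b) { return *a < *b; });
    for(; it != sortedByValue.end() && **it == *item; ++it)
    {
        if(*it == item) // Exact identity match within the run of equal sizes.
        {
            sortedByValue.erase(it);
            return true;
        }
    }
    return false;
}
#endif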
5806 
5807 ////////////////////////////////////////////////////////////////////////////////
5808 // class VmaDeviceMemoryMapping
5809 
5810 VmaDeviceMemoryMapping::VmaDeviceMemoryMapping() :
5811  m_MapCount(0),
5812  m_pMappedData(VMA_NULL)
5813 {
5814 }
5815 
5816 VmaDeviceMemoryMapping::~VmaDeviceMemoryMapping()
5817 {
5818  VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
5819 }
5820 
5821 VkResult VmaDeviceMemoryMapping::Map(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count, void **ppData)
5822 {
5823  if(count == 0)
5824  {
5825  return VK_SUCCESS;
5826  }
5827 
5828  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
5829  if(m_MapCount != 0)
5830  {
5831  m_MapCount += count;
5832  VMA_ASSERT(m_pMappedData != VMA_NULL);
5833  if(ppData != VMA_NULL)
5834  {
5835  *ppData = m_pMappedData;
5836  }
5837  return VK_SUCCESS;
5838  }
5839  else
5840  {
5841  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
5842  hAllocator->m_hDevice,
5843  hMemory,
5844  0, // offset
5845  VK_WHOLE_SIZE,
5846  0, // flags
5847  &m_pMappedData);
5848  if(result == VK_SUCCESS)
5849  {
5850  if(ppData != VMA_NULL)
5851  {
5852  *ppData = m_pMappedData;
5853  }
5854  m_MapCount = count;
5855  }
5856  return result;
5857  }
5858 }
5859 
5860 void VmaDeviceMemoryMapping::Unmap(VmaAllocator hAllocator, VkDeviceMemory hMemory, uint32_t count)
5861 {
5862  if(count == 0)
5863  {
5864  return;
5865  }
5866 
5867  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
5868  if(m_MapCount >= count)
5869  {
5870  m_MapCount -= count;
5871  if(m_MapCount == 0)
5872  {
5873  m_pMappedData = VMA_NULL;
5874  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, hMemory);
5875  }
5876  }
5877  else
5878  {
5879  VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
5880  }
5881 }
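
// Illustrative usage of the reference-counted mapping above (a sketch,
// assuming mapping, hAllocator and hMemory exist in scope): the whole
// VkDeviceMemory block is mapped once with offset 0 and VK_WHOLE_SIZE, and
// further Map() calls only bump the counter, so vkMapMemory() is never
// called twice on the same block.
#if 0
void* p1 = VMA_NULL;
void* p2 = VMA_NULL;
mapping.Map(hAllocator, hMemory, 1, &p1); // Calls vkMapMemory, m_MapCount == 1.
mapping.Map(hAllocator, hMemory, 1, &p2); // Reuses cached pointer, m_MapCount == 2.
VMA_ASSERT(p1 == p2);
mapping.Unmap(hAllocator, hMemory, 1);    // m_MapCount == 1, block stays mapped.
mapping.Unmap(hAllocator, hMemory, 1);    // m_MapCount == 0, calls vkUnmapMemory.
#endif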
5882 
5883 ////////////////////////////////////////////////////////////////////////////////
5884 // class VmaDeviceMemoryBlock
5885 
5886 VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator) :
5887  m_MemoryTypeIndex(UINT32_MAX),
5888  m_hMemory(VK_NULL_HANDLE),
5889  m_Metadata(hAllocator)
5890 {
5891 }
5892 
5893 void VmaDeviceMemoryBlock::Init(
5894  uint32_t newMemoryTypeIndex,
5895  VkDeviceMemory newMemory,
5896  VkDeviceSize newSize)
5897 {
5898  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
5899 
5900  m_MemoryTypeIndex = newMemoryTypeIndex;
5901  m_hMemory = newMemory;
5902 
5903  m_Metadata.Init(newSize);
5904 }
5905 
5906 void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
5907 {
5908  // This is the most important assert in the entire library.
5909  // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
5910  VMA_ASSERT(m_Metadata.IsEmpty() && "Some allocations were not freed before destruction of this memory block!");
5911 
5912  VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
5913  allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_Metadata.GetSize(), m_hMemory);
5914  m_hMemory = VK_NULL_HANDLE;
5915 }
5916 
5917 bool VmaDeviceMemoryBlock::Validate() const
5918 {
5919  if((m_hMemory == VK_NULL_HANDLE) ||
5920  (m_Metadata.GetSize() == 0))
5921  {
5922  return false;
5923  }
5924 
5925  return m_Metadata.Validate();
5926 }
5927 
5928 VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
5929 {
5930  return m_Mapping.Map(hAllocator, m_hMemory, count, ppData);
5931 }
5932 
5933 void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
5934 {
5935  m_Mapping.Unmap(hAllocator, m_hMemory, count);
5936 }
5937 
5938 static void InitStatInfo(VmaStatInfo& outInfo)
5939 {
5940  memset(&outInfo, 0, sizeof(outInfo));
5941  outInfo.allocationSizeMin = UINT64_MAX;
5942  outInfo.unusedRangeSizeMin = UINT64_MAX;
5943 }
5944 
5945 // Adds statistics srcInfo into inoutInfo, like: inoutInfo += srcInfo.
5946 static void VmaAddStatInfo(VmaStatInfo& inoutInfo, const VmaStatInfo& srcInfo)
5947 {
5948  inoutInfo.blockCount += srcInfo.blockCount;
5949  inoutInfo.allocationCount += srcInfo.allocationCount;
5950  inoutInfo.unusedRangeCount += srcInfo.unusedRangeCount;
5951  inoutInfo.usedBytes += srcInfo.usedBytes;
5952  inoutInfo.unusedBytes += srcInfo.unusedBytes;
5953  inoutInfo.allocationSizeMin = VMA_MIN(inoutInfo.allocationSizeMin, srcInfo.allocationSizeMin);
5954  inoutInfo.allocationSizeMax = VMA_MAX(inoutInfo.allocationSizeMax, srcInfo.allocationSizeMax);
5955  inoutInfo.unusedRangeSizeMin = VMA_MIN(inoutInfo.unusedRangeSizeMin, srcInfo.unusedRangeSizeMin);
5956  inoutInfo.unusedRangeSizeMax = VMA_MAX(inoutInfo.unusedRangeSizeMax, srcInfo.unusedRangeSizeMax);
5957 }
5958 
5959 static void VmaPostprocessCalcStatInfo(VmaStatInfo& inoutInfo)
5960 {
5961  inoutInfo.allocationSizeAvg = (inoutInfo.allocationCount > 0) ?
5962  VmaRoundDiv<VkDeviceSize>(inoutInfo.usedBytes, inoutInfo.allocationCount) : 0;
5963  inoutInfo.unusedRangeSizeAvg = (inoutInfo.unusedRangeCount > 0) ?
5964  VmaRoundDiv<VkDeviceSize>(inoutInfo.unusedBytes, inoutInfo.unusedRangeCount) : 0;
5965 }
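
// Illustrative usage of the three helpers above, with hypothetical values:
// statistics are folded in block by block, then averages are derived once at
// the end.
#if 0
VmaStatInfo total;
InitStatInfo(total);          // Mins start at UINT64_MAX so any real value wins.
VmaStatInfo blockInfo = {};   // E.g. one block with 2 allocations, 768 bytes used.
blockInfo.blockCount = 1;
blockInfo.allocationCount = 2;
blockInfo.usedBytes = 768;
blockInfo.allocationSizeMin = 256;
blockInfo.allocationSizeMax = 512;
VmaAddStatInfo(total, blockInfo);      // total += blockInfo.
VmaPostprocessCalcStatInfo(total);     // allocationSizeAvg == 768 / 2 == 384.
#endif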
5966 
5967 VmaPool_T::VmaPool_T(
5968  VmaAllocator hAllocator,
5969  const VmaPoolCreateInfo& createInfo) :
5970  m_BlockVector(
5971  hAllocator,
5972  createInfo.memoryTypeIndex,
5973  createInfo.blockSize,
5974  createInfo.minBlockCount,
5975  createInfo.maxBlockCount,
5976  (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
5977  createInfo.frameInUseCount,
5978  true) // isCustomPool
5979 {
5980 }
5981 
5982 VmaPool_T::~VmaPool_T()
5983 {
5984 }
5985 
5986 #if VMA_STATS_STRING_ENABLED
5987 
5988 #endif // #if VMA_STATS_STRING_ENABLED
5989 
5990 VmaBlockVector::VmaBlockVector(
5991  VmaAllocator hAllocator,
5992  uint32_t memoryTypeIndex,
5993  VkDeviceSize preferredBlockSize,
5994  size_t minBlockCount,
5995  size_t maxBlockCount,
5996  VkDeviceSize bufferImageGranularity,
5997  uint32_t frameInUseCount,
5998  bool isCustomPool) :
5999  m_hAllocator(hAllocator),
6000  m_MemoryTypeIndex(memoryTypeIndex),
6001  m_PreferredBlockSize(preferredBlockSize),
6002  m_MinBlockCount(minBlockCount),
6003  m_MaxBlockCount(maxBlockCount),
6004  m_BufferImageGranularity(bufferImageGranularity),
6005  m_FrameInUseCount(frameInUseCount),
6006  m_IsCustomPool(isCustomPool),
6007  m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
6008  m_HasEmptyBlock(false),
6009  m_pDefragmentator(VMA_NULL)
6010 {
6011 }
6012 
6013 VmaBlockVector::~VmaBlockVector()
6014 {
6015  VMA_ASSERT(m_pDefragmentator == VMA_NULL);
6016 
6017  for(size_t i = m_Blocks.size(); i--; )
6018  {
6019  m_Blocks[i]->Destroy(m_hAllocator);
6020  vma_delete(m_hAllocator, m_Blocks[i]);
6021  }
6022 }
6023 
6024 VkResult VmaBlockVector::CreateMinBlocks()
6025 {
6026  for(size_t i = 0; i < m_MinBlockCount; ++i)
6027  {
6028  VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
6029  if(res != VK_SUCCESS)
6030  {
6031  return res;
6032  }
6033  }
6034  return VK_SUCCESS;
6035 }
6036 
6037 void VmaBlockVector::GetPoolStats(VmaPoolStats* pStats)
6038 {
6039  pStats->size = 0;
6040  pStats->unusedSize = 0;
6041  pStats->allocationCount = 0;
6042  pStats->unusedRangeCount = 0;
6043  pStats->unusedRangeSizeMax = 0;
6044 
6045  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6046 
6047  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
6048  {
6049  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
6050  VMA_ASSERT(pBlock);
6051  VMA_HEAVY_ASSERT(pBlock->Validate());
6052  pBlock->m_Metadata.AddPoolStats(*pStats);
6053  }
6054 }
6055 
6056 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
6057 
6058 VkResult VmaBlockVector::Allocate(
6059  VmaPool hCurrentPool,
6060  uint32_t currentFrameIndex,
6061  const VkMemoryRequirements& vkMemReq,
6062  const VmaAllocationCreateInfo& createInfo,
6063  VmaSuballocationType suballocType,
6064  VmaAllocation* pAllocation)
6065 {
6066  const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
6067  const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
6068 
6069  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6070 
6071  // 1. Search existing allocations. Try to allocate without making other allocations lost.
6072  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
6073  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
6074  {
6075  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
6076  VMA_ASSERT(pCurrBlock);
6077  VmaAllocationRequest currRequest = {};
6078  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
6079  currentFrameIndex,
6080  m_FrameInUseCount,
6081  m_BufferImageGranularity,
6082  vkMemReq.size,
6083  vkMemReq.alignment,
6084  suballocType,
6085  false, // canMakeOtherLost
6086  &currRequest))
6087  {
6088  // Allocate from pCurrBlock.
6089  VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);
6090 
6091  if(mapped)
6092  {
6093  VkResult res = pCurrBlock->Map(m_hAllocator, 1, VMA_NULL);
6094  if(res != VK_SUCCESS)
6095  {
6096  return res;
6097  }
6098  }
6099 
6100  // We no longer have an empty block.
6101  if(pCurrBlock->m_Metadata.IsEmpty())
6102  {
6103  m_HasEmptyBlock = false;
6104  }
6105 
6106  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
6107  pCurrBlock->m_Metadata.Alloc(currRequest, suballocType, vkMemReq.size, *pAllocation);
6108  (*pAllocation)->InitBlockAllocation(
6109  hCurrentPool,
6110  pCurrBlock,
6111  currRequest.offset,
6112  vkMemReq.alignment,
6113  vkMemReq.size,
6114  suballocType,
6115  mapped,
6116  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
6117  VMA_HEAVY_ASSERT(pCurrBlock->Validate());
6118  VMA_DEBUG_LOG(" Returned from existing allocation #%u", (uint32_t)blockIndex);
6119  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
6120  return VK_SUCCESS;
6121  }
6122  }
6123 
6124  const bool canCreateNewBlock =
6125  ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
6126  (m_Blocks.size() < m_MaxBlockCount);
6127 
6128  // 2. Try to create new block.
6129  if(canCreateNewBlock)
6130  {
6131  // Calculate optimal size for new block.
6132  VkDeviceSize newBlockSize = m_PreferredBlockSize;
6133  uint32_t newBlockSizeShift = 0;
6134  const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
6135 
6136  // Allocating blocks of other sizes is allowed only in default pools.
6137  // In custom pools block size is fixed.
6138  if(m_IsCustomPool == false)
6139  {
6140  // Allocate 1/8, 1/4, 1/2 as first blocks.
6141  const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
6142  for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
6143  {
6144  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
6145  if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= vkMemReq.size * 2)
6146  {
6147  newBlockSize = smallerNewBlockSize;
6148  ++newBlockSizeShift;
6149  }
6150  else
6151  {
6152  break;
6153  }
6154  }
6155  }
6156 
6157  size_t newBlockIndex = 0;
6158  VkResult res = CreateBlock(newBlockSize, &newBlockIndex);
6159  // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
6160  if(m_IsCustomPool == false)
6161  {
6162  while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
6163  {
6164  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
6165  if(smallerNewBlockSize >= vkMemReq.size)
6166  {
6167  newBlockSize = smallerNewBlockSize;
6168  ++newBlockSizeShift;
6169  res = CreateBlock(newBlockSize, &newBlockIndex);
6170  }
6171  else
6172  {
6173  break;
6174  }
6175  }
6176  }
6177 
6178  if(res == VK_SUCCESS)
6179  {
6180  VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
6181  VMA_ASSERT(pBlock->m_Metadata.GetSize() >= vkMemReq.size);
6182 
6183  if(mapped)
6184  {
6185  res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
6186  if(res != VK_SUCCESS)
6187  {
6188  return res;
6189  }
6190  }
6191 
6192  // Allocate from pBlock. Because it is empty, allocRequest can be trivially filled.
6193  VmaAllocationRequest allocRequest;
6194  pBlock->m_Metadata.CreateFirstAllocationRequest(&allocRequest);
6195  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
6196  pBlock->m_Metadata.Alloc(allocRequest, suballocType, vkMemReq.size, *pAllocation);
6197  (*pAllocation)->InitBlockAllocation(
6198  hCurrentPool,
6199  pBlock,
6200  allocRequest.offset,
6201  vkMemReq.alignment,
6202  vkMemReq.size,
6203  suballocType,
6204  mapped,
6205  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
6206  VMA_HEAVY_ASSERT(pBlock->Validate());
6207  VMA_DEBUG_LOG(" Created new block Size=%llu", newBlockSize);
6208  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
6209  return VK_SUCCESS;
6210  }
6211  }
6212 
6213  const bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
6214 
6215  // 3. Try to allocate from existing blocks with making other allocations lost.
6216  if(canMakeOtherLost)
6217  {
6218  uint32_t tryIndex = 0;
6219  for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
6220  {
6221  VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
6222  VmaAllocationRequest bestRequest = {};
6223  VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;
6224 
6225  // 1. Search existing allocations.
6226  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
6227  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
6228  {
6229  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
6230  VMA_ASSERT(pCurrBlock);
6231  VmaAllocationRequest currRequest = {};
6232  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
6233  currentFrameIndex,
6234  m_FrameInUseCount,
6235  m_BufferImageGranularity,
6236  vkMemReq.size,
6237  vkMemReq.alignment,
6238  suballocType,
6239  canMakeOtherLost,
6240  &currRequest))
6241  {
6242  const VkDeviceSize currRequestCost = currRequest.CalcCost();
6243  if(pBestRequestBlock == VMA_NULL ||
6244  currRequestCost < bestRequestCost)
6245  {
6246  pBestRequestBlock = pCurrBlock;
6247  bestRequest = currRequest;
6248  bestRequestCost = currRequestCost;
6249 
6250  if(bestRequestCost == 0)
6251  {
6252  break;
6253  }
6254  }
6255  }
6256  }
6257 
6258  if(pBestRequestBlock != VMA_NULL)
6259  {
6260  if(mapped)
6261  {
6262  VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
6263  if(res != VK_SUCCESS)
6264  {
6265  return res;
6266  }
6267  }
6268 
6269  if(pBestRequestBlock->m_Metadata.MakeRequestedAllocationsLost(
6270  currentFrameIndex,
6271  m_FrameInUseCount,
6272  &bestRequest))
6273  {
6274  // We no longer have an empty block.
6275  if(pBestRequestBlock->m_Metadata.IsEmpty())
6276  {
6277  m_HasEmptyBlock = false;
6278  }
6279  // Allocate from this pBlock.
6280  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
6281  pBestRequestBlock->m_Metadata.Alloc(bestRequest, suballocType, vkMemReq.size, *pAllocation);
6282  (*pAllocation)->InitBlockAllocation(
6283  hCurrentPool,
6284  pBestRequestBlock,
6285  bestRequest.offset,
6286  vkMemReq.alignment,
6287  vkMemReq.size,
6288  suballocType,
6289  mapped,
6290  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
6291  VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
6292  VMA_DEBUG_LOG(" Returned from existing block");
6293  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
6294  return VK_SUCCESS;
6295  }
6296  // else: Some allocations must have been touched while we are here. Next try.
6297  }
6298  else
6299  {
6300  // Could not find place in any of the blocks - break outer loop.
6301  break;
6302  }
6303  }
6304  /* Maximum number of tries exceeded - a very unlikely event when many other
6305  threads are simultaneously touching allocations, making it impossible to mark
6306  them as lost at the same time as we try to allocate. */
6307  if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
6308  {
6309  return VK_ERROR_TOO_MANY_OBJECTS;
6310  }
6311  }
6312 
6313  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
6314 }
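
// Summary of the strategy above (illustrative): 1. try existing blocks
// without making anything lost, 2. create a new block, 3. retry existing
// blocks while making other allocations lost, giving up after
// VMA_ALLOCATION_TRY_COUNT attempts. For default pools, step 2 starts below
// m_PreferredBlockSize: with a preferred size of 256 MiB and no existing
// blocks, the halving loop yields a first block of 32 MiB (1/8), then 64 and
// 128 MiB for subsequent blocks, as long as each candidate still holds at
// least twice the requested size - so small workloads never commit a
// full-size block up front.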
6315 
6316 void VmaBlockVector::Free(
6317  VmaAllocation hAllocation)
6318 {
6319  VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
6320 
6321  // Scope for lock.
6322  {
6323  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6324 
6325  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
6326 
6327  if(hAllocation->IsPersistentMap())
6328  {
6329  pBlock->m_Mapping.Unmap(m_hAllocator, pBlock->m_hMemory, 1);
6330  }
6331 
6332  pBlock->m_Metadata.Free(hAllocation);
6333  VMA_HEAVY_ASSERT(pBlock->Validate());
6334 
6335  VMA_DEBUG_LOG(" Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);
6336 
6337  // pBlock became empty after this deallocation.
6338  if(pBlock->m_Metadata.IsEmpty())
6339  {
6340  // We already have an empty block - we don't want two, so delete this one.
6341  if(m_HasEmptyBlock && m_Blocks.size() > m_MinBlockCount)
6342  {
6343  pBlockToDelete = pBlock;
6344  Remove(pBlock);
6345  }
6346  // We now have our first empty block.
6347  else
6348  {
6349  m_HasEmptyBlock = true;
6350  }
6351  }
6352  // pBlock didn't become empty, but we have another empty block - find and free that one.
6353  // (This is optional, heuristics.)
6354  else if(m_HasEmptyBlock)
6355  {
6356  VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
6357  if(pLastBlock->m_Metadata.IsEmpty() && m_Blocks.size() > m_MinBlockCount)
6358  {
6359  pBlockToDelete = pLastBlock;
6360  m_Blocks.pop_back();
6361  m_HasEmptyBlock = false;
6362  }
6363  }
6364 
6365  IncrementallySortBlocks();
6366  }
6367 
6368  // Destruction of an empty block. Deferred until this point, outside of the
6369  // mutex lock, for performance reasons.
6370  if(pBlockToDelete != VMA_NULL)
6371  {
6372  VMA_DEBUG_LOG(" Deleted empty allocation");
6373  pBlockToDelete->Destroy(m_hAllocator);
6374  vma_delete(m_hAllocator, pBlockToDelete);
6375  }
6376 }
6377 
6378 size_t VmaBlockVector::CalcMaxBlockSize() const
6379 {
6380  size_t result = 0;
6381  for(size_t i = m_Blocks.size(); i--; )
6382  {
6383  result = VMA_MAX((uint64_t)result, (uint64_t)m_Blocks[i]->m_Metadata.GetSize());
6384  if(result >= m_PreferredBlockSize)
6385  {
6386  break;
6387  }
6388  }
6389  return result;
6390 }
6391 
6392 void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
6393 {
6394  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
6395  {
6396  if(m_Blocks[blockIndex] == pBlock)
6397  {
6398  VmaVectorRemove(m_Blocks, blockIndex);
6399  return;
6400  }
6401  }
6402  VMA_ASSERT(0);
6403 }
6404 
6405 void VmaBlockVector::IncrementallySortBlocks()
6406 {
6407  // Bubble sort only until first swap.
6408  for(size_t i = 1; i < m_Blocks.size(); ++i)
6409  {
6410  if(m_Blocks[i - 1]->m_Metadata.GetSumFreeSize() > m_Blocks[i]->m_Metadata.GetSumFreeSize())
6411  {
6412  VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
6413  return;
6414  }
6415  }
6416 }
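
// Design note (illustrative): performing at most one bubble-sort swap per
// call keeps every allocation and free cheap while the vector still
// converges toward ascending order of free space over successive calls - an
// amortized alternative to running a full sort after each operation.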
6417 
6418 VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
6419 {
6420  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
6421  allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
6422  allocInfo.allocationSize = blockSize;
6423  VkDeviceMemory mem = VK_NULL_HANDLE;
6424  VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
6425  if(res < 0)
6426  {
6427  return res;
6428  }
6429 
6430  // New VkDeviceMemory successfully created.
6431 
6432  // Create a new block object for it.
6433  VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
6434  pBlock->Init(
6435  m_MemoryTypeIndex,
6436  mem,
6437  allocInfo.allocationSize);
6438 
6439  m_Blocks.push_back(pBlock);
6440  if(pNewBlockIndex != VMA_NULL)
6441  {
6442  *pNewBlockIndex = m_Blocks.size() - 1;
6443  }
6444 
6445  return VK_SUCCESS;
6446 }
6447 
6448 #if VMA_STATS_STRING_ENABLED
6449 
6450 void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
6451 {
6452  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6453 
6454  json.BeginObject();
6455 
6456  if(m_IsCustomPool)
6457  {
6458  json.WriteString("MemoryTypeIndex");
6459  json.WriteNumber(m_MemoryTypeIndex);
6460 
6461  json.WriteString("BlockSize");
6462  json.WriteNumber(m_PreferredBlockSize);
6463 
6464  json.WriteString("BlockCount");
6465  json.BeginObject(true);
6466  if(m_MinBlockCount > 0)
6467  {
6468  json.WriteString("Min");
6469  json.WriteNumber((uint64_t)m_MinBlockCount);
6470  }
6471  if(m_MaxBlockCount < SIZE_MAX)
6472  {
6473  json.WriteString("Max");
6474  json.WriteNumber((uint64_t)m_MaxBlockCount);
6475  }
6476  json.WriteString("Cur");
6477  json.WriteNumber((uint64_t)m_Blocks.size());
6478  json.EndObject();
6479 
6480  if(m_FrameInUseCount > 0)
6481  {
6482  json.WriteString("FrameInUseCount");
6483  json.WriteNumber(m_FrameInUseCount);
6484  }
6485  }
6486  else
6487  {
6488  json.WriteString("PreferredBlockSize");
6489  json.WriteNumber(m_PreferredBlockSize);
6490  }
6491 
6492  json.WriteString("Blocks");
6493  json.BeginArray();
6494  for(size_t i = 0; i < m_Blocks.size(); ++i)
6495  {
6496  m_Blocks[i]->m_Metadata.PrintDetailedMap(json);
6497  }
6498  json.EndArray();
6499 
6500  json.EndObject();
6501 }
6502 
6503 #endif // #if VMA_STATS_STRING_ENABLED
6504 
6505 VmaDefragmentator* VmaBlockVector::EnsureDefragmentator(
6506  VmaAllocator hAllocator,
6507  uint32_t currentFrameIndex)
6508 {
6509  if(m_pDefragmentator == VMA_NULL)
6510  {
6511  m_pDefragmentator = vma_new(m_hAllocator, VmaDefragmentator)(
6512  hAllocator,
6513  this,
6514  currentFrameIndex);
6515  }
6516 
6517  return m_pDefragmentator;
6518 }
6519 
6520 VkResult VmaBlockVector::Defragment(
6521  VmaDefragmentationStats* pDefragmentationStats,
6522  VkDeviceSize& maxBytesToMove,
6523  uint32_t& maxAllocationsToMove)
6524 {
6525  if(m_pDefragmentator == VMA_NULL)
6526  {
6527  return VK_SUCCESS;
6528  }
6529 
6530  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6531 
6532  // Defragment.
6533  VkResult result = m_pDefragmentator->Defragment(maxBytesToMove, maxAllocationsToMove);
6534 
6535  // Accumulate statistics.
6536  if(pDefragmentationStats != VMA_NULL)
6537  {
6538  const VkDeviceSize bytesMoved = m_pDefragmentator->GetBytesMoved();
6539  const uint32_t allocationsMoved = m_pDefragmentator->GetAllocationsMoved();
6540  pDefragmentationStats->bytesMoved += bytesMoved;
6541  pDefragmentationStats->allocationsMoved += allocationsMoved;
6542  VMA_ASSERT(bytesMoved <= maxBytesToMove);
6543  VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
6544  maxBytesToMove -= bytesMoved;
6545  maxAllocationsToMove -= allocationsMoved;
6546  }
6547 
6548  // Free empty blocks.
6549  m_HasEmptyBlock = false;
6550  for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
6551  {
6552  VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
6553  if(pBlock->m_Metadata.IsEmpty())
6554  {
6555  if(m_Blocks.size() > m_MinBlockCount)
6556  {
6557  if(pDefragmentationStats != VMA_NULL)
6558  {
6559  ++pDefragmentationStats->deviceMemoryBlocksFreed;
6560  pDefragmentationStats->bytesFreed += pBlock->m_Metadata.GetSize();
6561  }
6562 
6563  VmaVectorRemove(m_Blocks, blockIndex);
6564  pBlock->Destroy(m_hAllocator);
6565  vma_delete(m_hAllocator, pBlock);
6566  }
6567  else
6568  {
6569  m_HasEmptyBlock = true;
6570  }
6571  }
6572  }
6573 
6574  return result;
6575 }
6576 
6577 void VmaBlockVector::DestroyDefragmentator()
6578 {
6579  if(m_pDefragmentator != VMA_NULL)
6580  {
6581  vma_delete(m_hAllocator, m_pDefragmentator);
6582  m_pDefragmentator = VMA_NULL;
6583  }
6584 }
6585 
6586 void VmaBlockVector::MakePoolAllocationsLost(
6587  uint32_t currentFrameIndex,
6588  size_t* pLostAllocationCount)
6589 {
6590  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6591  size_t lostAllocationCount = 0;
6592  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
6593  {
6594  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
6595  VMA_ASSERT(pBlock);
6596  lostAllocationCount += pBlock->m_Metadata.MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
6597  }
6598  if(pLostAllocationCount != VMA_NULL)
6599  {
6600  *pLostAllocationCount = lostAllocationCount;
6601  }
6602 }
6603 
6604 void VmaBlockVector::AddStats(VmaStats* pStats)
6605 {
6606  const uint32_t memTypeIndex = m_MemoryTypeIndex;
6607  const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);
6608 
6609  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
6610 
6611  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
6612  {
6613  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
6614  VMA_ASSERT(pBlock);
6615  VMA_HEAVY_ASSERT(pBlock->Validate());
6616  VmaStatInfo allocationStatInfo;
6617  pBlock->m_Metadata.CalcAllocationStatInfo(allocationStatInfo);
6618  VmaAddStatInfo(pStats->total, allocationStatInfo);
6619  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
6620  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
6621  }
6622 }
6623 
6624 ////////////////////////////////////////////////////////////////////////////////
6625 // VmaDefragmentator members definition
6626 
6627 VmaDefragmentator::VmaDefragmentator(
6628  VmaAllocator hAllocator,
6629  VmaBlockVector* pBlockVector,
6630  uint32_t currentFrameIndex) :
6631  m_hAllocator(hAllocator),
6632  m_pBlockVector(pBlockVector),
6633  m_CurrentFrameIndex(currentFrameIndex),
6634  m_BytesMoved(0),
6635  m_AllocationsMoved(0),
6636  m_Allocations(VmaStlAllocator<AllocationInfo>(hAllocator->GetAllocationCallbacks())),
6637  m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
6638 {
6639 }
6640 
6641 VmaDefragmentator::~VmaDefragmentator()
6642 {
6643  for(size_t i = m_Blocks.size(); i--; )
6644  {
6645  vma_delete(m_hAllocator, m_Blocks[i]);
6646  }
6647 }
6648 
6649 void VmaDefragmentator::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
6650 {
6651  AllocationInfo allocInfo;
6652  allocInfo.m_hAllocation = hAlloc;
6653  allocInfo.m_pChanged = pChanged;
6654  m_Allocations.push_back(allocInfo);
6655 }
6656 
6657 VkResult VmaDefragmentator::BlockInfo::EnsureMapping(VmaAllocator hAllocator, void** ppMappedData)
6658 {
6659  // It has already been mapped for defragmentation.
6660  if(m_pMappedDataForDefragmentation)
6661  {
6662  *ppMappedData = m_pMappedDataForDefragmentation;
6663  return VK_SUCCESS;
6664  }
6665 
6666  // It is originally mapped.
6667  if(m_pBlock->m_Mapping.GetMappedData())
6668  {
6669  *ppMappedData = m_pBlock->m_Mapping.GetMappedData();
6670  return VK_SUCCESS;
6671  }
6672 
6673  // Map on first usage.
6674  VkResult res = m_pBlock->Map(hAllocator, 1, &m_pMappedDataForDefragmentation);
6675  *ppMappedData = m_pMappedDataForDefragmentation;
6676  return res;
6677 }
6678 
6679 void VmaDefragmentator::BlockInfo::Unmap(VmaAllocator hAllocator)
6680 {
6681  if(m_pMappedDataForDefragmentation != VMA_NULL)
6682  {
6683  m_pBlock->Unmap(hAllocator, 1);
6684  }
6685 }
6686 
6687 VkResult VmaDefragmentator::DefragmentRound(
6688  VkDeviceSize maxBytesToMove,
6689  uint32_t maxAllocationsToMove)
6690 {
6691  if(m_Blocks.empty())
6692  {
6693  return VK_SUCCESS;
6694  }
6695 
6696  size_t srcBlockIndex = m_Blocks.size() - 1;
6697  size_t srcAllocIndex = SIZE_MAX;
6698  for(;;)
6699  {
6700  // 1. Find next allocation to move.
6701  // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
6702  // 1.2. Then start from last to first m_Allocations - they are sorted from largest to smallest.
6703  while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
6704  {
6705  if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
6706  {
6707  // Finished: no more allocations to process.
6708  if(srcBlockIndex == 0)
6709  {
6710  return VK_SUCCESS;
6711  }
6712  else
6713  {
6714  --srcBlockIndex;
6715  srcAllocIndex = SIZE_MAX;
6716  }
6717  }
6718  else
6719  {
6720  srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
6721  }
6722  }
6723 
6724  BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
6725  AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];
6726 
6727  const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
6728  const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
6729  const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
6730  const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();
6731 
6732  // 2. Try to find new place for this allocation in preceding or current block.
6733  for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
6734  {
6735  BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
6736  VmaAllocationRequest dstAllocRequest;
6737  if(pDstBlockInfo->m_pBlock->m_Metadata.CreateAllocationRequest(
6738  m_CurrentFrameIndex,
6739  m_pBlockVector->GetFrameInUseCount(),
6740  m_pBlockVector->GetBufferImageGranularity(),
6741  size,
6742  alignment,
6743  suballocType,
6744  false, // canMakeOtherLost
6745  &dstAllocRequest) &&
6746  MoveMakesSense(
6747  dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
6748  {
6749  VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);
6750 
6751  // Reached limit on number of allocations or bytes to move.
6752  if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
6753  (m_BytesMoved + size > maxBytesToMove))
6754  {
6755  return VK_INCOMPLETE;
6756  }
6757 
6758  void* pDstMappedData = VMA_NULL;
6759  VkResult res = pDstBlockInfo->EnsureMapping(m_hAllocator, &pDstMappedData);
6760  if(res != VK_SUCCESS)
6761  {
6762  return res;
6763  }
6764 
6765  void* pSrcMappedData = VMA_NULL;
6766  res = pSrcBlockInfo->EnsureMapping(m_hAllocator, &pSrcMappedData);
6767  if(res != VK_SUCCESS)
6768  {
6769  return res;
6770  }
6771 
6772  // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
6773  memcpy(
6774  reinterpret_cast<char*>(pDstMappedData) + dstAllocRequest.offset,
6775  reinterpret_cast<char*>(pSrcMappedData) + srcOffset,
6776  static_cast<size_t>(size));
6777 
6778  pDstBlockInfo->m_pBlock->m_Metadata.Alloc(dstAllocRequest, suballocType, size, allocInfo.m_hAllocation);
6779  pSrcBlockInfo->m_pBlock->m_Metadata.FreeAtOffset(srcOffset);
6780 
6781  allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
6782 
6783  if(allocInfo.m_pChanged != VMA_NULL)
6784  {
6785  *allocInfo.m_pChanged = VK_TRUE;
6786  }
6787 
6788  ++m_AllocationsMoved;
6789  m_BytesMoved += size;
6790 
6791  VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);
6792 
6793  break;
6794  }
6795  }
6796 
6797  // If not processed, this allocInfo remains in pSrcBlockInfo->m_Allocations for next round.
6798 
6799  if(srcAllocIndex > 0)
6800  {
6801  --srcAllocIndex;
6802  }
6803  else
6804  {
6805  if(srcBlockIndex > 0)
6806  {
6807  --srcBlockIndex;
6808  srcAllocIndex = SIZE_MAX;
6809  }
6810  else
6811  {
6812  return VK_SUCCESS;
6813  }
6814  }
6815  }
6816 }
6817 
6818 VkResult VmaDefragmentator::Defragment(
6819  VkDeviceSize maxBytesToMove,
6820  uint32_t maxAllocationsToMove)
6821 {
6822  if(m_Allocations.empty())
6823  {
6824  return VK_SUCCESS;
6825  }
6826 
6827  // Create block info for each block.
6828  const size_t blockCount = m_pBlockVector->m_Blocks.size();
6829  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
6830  {
6831  BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
6832  pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
6833  m_Blocks.push_back(pBlockInfo);
6834  }
6835 
6836  // Sort them by m_pBlock pointer value.
6837  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
6838 
6839  // Move allocation infos from m_Allocations to the m_Allocations of the matching BlockInfo in m_Blocks.
6840  for(size_t allocIndex = 0, allocCount = m_Allocations.size(); allocIndex < allocCount; ++allocIndex)
6841  {
6842  AllocationInfo& allocInfo = m_Allocations[allocIndex];
6843  // Now as we are inside VmaBlockVector::m_Mutex, we can make final check if this allocation was not lost.
6844  if(allocInfo.m_hAllocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
6845  {
6846  VmaDeviceMemoryBlock* pBlock = allocInfo.m_hAllocation->GetBlock();
6847  BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
6848  if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
6849  {
6850  (*it)->m_Allocations.push_back(allocInfo);
6851  }
6852  else
6853  {
6854  VMA_ASSERT(0);
6855  }
6856  }
6857  }
6858  m_Allocations.clear();
6859 
6860  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
6861  {
6862  BlockInfo* pBlockInfo = m_Blocks[blockIndex];
6863  pBlockInfo->CalcHasNonMovableAllocations();
6864  pBlockInfo->SortAllocationsBySizeDescecnding();
6865  }
6866 
6867  // Sort m_Blocks this time by the main criterion, from most "destination" to most "source" blocks.
6868  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());
6869 
6870  // Execute defragmentation rounds (the main part).
6871  VkResult result = VK_SUCCESS;
6872  for(size_t round = 0; (round < 2) && (result == VK_SUCCESS); ++round)
6873  {
6874  result = DefragmentRound(maxBytesToMove, maxAllocationsToMove);
6875  }
6876 
6877  // Unmap blocks that were mapped for defragmentation.
6878  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
6879  {
6880  m_Blocks[blockIndex]->Unmap(m_hAllocator);
6881  }
6882 
6883  return result;
6884 }
6885 
6886 bool VmaDefragmentator::MoveMakesSense(
6887  size_t dstBlockIndex, VkDeviceSize dstOffset,
6888  size_t srcBlockIndex, VkDeviceSize srcOffset)
6889 {
6890  if(dstBlockIndex < srcBlockIndex)
6891  {
6892  return true;
6893  }
6894  if(dstBlockIndex > srcBlockIndex)
6895  {
6896  return false;
6897  }
6898  if(dstOffset < srcOffset)
6899  {
6900  return true;
6901  }
6902  return false;
6903 }
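
// Illustrative truth table for MoveMakesSense() above: a move is accepted
// only if it compacts data "to the left", i.e. into an earlier block or to a
// smaller offset within the same block.
//
//   dstBlockIndex < srcBlockIndex              -> move
//   dstBlockIndex > srcBlockIndex              -> don't move
//   same block, dstOffset <  srcOffset         -> move
//   same block, dstOffset >= srcOffset         -> don't move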
6904 
6905 ////////////////////////////////////////////////////////////////////////////////
6906 // VmaAllocator_T
6907 
6908 VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
6909  m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
6910  m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
6911  m_hDevice(pCreateInfo->device),
6912  m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
6913  m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
6914  *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
6915  m_PreferredLargeHeapBlockSize(0),
6916  m_PhysicalDevice(pCreateInfo->physicalDevice),
6917  m_CurrentFrameIndex(0),
6918  m_Pools(VmaStlAllocator<VmaPool>(GetAllocationCallbacks()))
6919 {
6920  VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device);
6921 
6922  memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
6923  memset(&m_MemProps, 0, sizeof(m_MemProps));
6924  memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
6925 
6926  memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
6927  memset(&m_pDedicatedAllocations, 0, sizeof(m_pDedicatedAllocations));
6928 
6929  for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
6930  {
6931  m_HeapSizeLimit[i] = VK_WHOLE_SIZE;
6932  }
6933 
6934  if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
6935  {
6936  m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
6937  m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
6938  }
6939 
6940  ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
6941 
6942  (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
6943  (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
6944 
6945  m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
6946  pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
6947 
6948  if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
6949  {
6950  for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
6951  {
6952  const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
6953  if(limit != VK_WHOLE_SIZE)
6954  {
6955  m_HeapSizeLimit[heapIndex] = limit;
6956  if(limit < m_MemProps.memoryHeaps[heapIndex].size)
6957  {
6958  m_MemProps.memoryHeaps[heapIndex].size = limit;
6959  }
6960  }
6961  }
6962  }
6963 
6964  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
6965  {
6966  const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
6967 
6968  m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
6969  this,
6970  memTypeIndex,
6971  preferredBlockSize,
6972  0,
6973  SIZE_MAX,
6974  GetBufferImageGranularity(),
6975  pCreateInfo->frameInUseCount,
6976  false); // isCustomPool
6977         // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
6978         // because minBlockCount is 0.
6979  m_pDedicatedAllocations[memTypeIndex] = vma_new(this, AllocationVectorType)(VmaStlAllocator<VmaAllocation>(GetAllocationCallbacks()));
6980  }
6981 }
6982 
6983 VmaAllocator_T::~VmaAllocator_T()
6984 {
6985  VMA_ASSERT(m_Pools.empty());
6986 
6987  for(size_t i = GetMemoryTypeCount(); i--; )
6988  {
6989  vma_delete(this, m_pDedicatedAllocations[i]);
6990  vma_delete(this, m_pBlockVectors[i]);
6991  }
6992 }
6993 
6994 void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
6995 {
6996 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
6997  m_VulkanFunctions.vkGetPhysicalDeviceProperties = &vkGetPhysicalDeviceProperties;
6998  m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = &vkGetPhysicalDeviceMemoryProperties;
6999  m_VulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
7000  m_VulkanFunctions.vkFreeMemory = &vkFreeMemory;
7001  m_VulkanFunctions.vkMapMemory = &vkMapMemory;
7002  m_VulkanFunctions.vkUnmapMemory = &vkUnmapMemory;
7003  m_VulkanFunctions.vkBindBufferMemory = &vkBindBufferMemory;
7004  m_VulkanFunctions.vkBindImageMemory = &vkBindImageMemory;
7005  m_VulkanFunctions.vkGetBufferMemoryRequirements = &vkGetBufferMemoryRequirements;
7006  m_VulkanFunctions.vkGetImageMemoryRequirements = &vkGetImageMemoryRequirements;
7007  m_VulkanFunctions.vkCreateBuffer = &vkCreateBuffer;
7008  m_VulkanFunctions.vkDestroyBuffer = &vkDestroyBuffer;
7009  m_VulkanFunctions.vkCreateImage = &vkCreateImage;
7010  m_VulkanFunctions.vkDestroyImage = &vkDestroyImage;
7011  if(m_UseKhrDedicatedAllocation)
7012  {
7013  m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR =
7014  (PFN_vkGetBufferMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetBufferMemoryRequirements2KHR");
7015  m_VulkanFunctions.vkGetImageMemoryRequirements2KHR =
7016  (PFN_vkGetImageMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetImageMemoryRequirements2KHR");
7017  }
7018 #endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
7019 
7020 #define VMA_COPY_IF_NOT_NULL(funcName) \
7021  if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
7022 
7023  if(pVulkanFunctions != VMA_NULL)
7024  {
7025  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
7026  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
7027  VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
7028  VMA_COPY_IF_NOT_NULL(vkFreeMemory);
7029  VMA_COPY_IF_NOT_NULL(vkMapMemory);
7030  VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
7031  VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
7032  VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
7033  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
7034  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
7035  VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
7036  VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
7037  VMA_COPY_IF_NOT_NULL(vkCreateImage);
7038  VMA_COPY_IF_NOT_NULL(vkDestroyImage);
7039  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
7040  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
7041  }
7042 
7043 #undef VMA_COPY_IF_NOT_NULL
7044 
7045  // If these asserts are hit, you must either #define VMA_STATIC_VULKAN_FUNCTIONS 1
7046  // or pass valid pointers as VmaAllocatorCreateInfo::pVulkanFunctions.
7047  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
7048  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
7049  VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
7050  VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
7051  VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
7052  VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
7053  VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
7054  VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
7055  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
7056  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
7057  VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
7058  VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
7059  VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
7060  VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
7061  if(m_UseKhrDedicatedAllocation)
7062  {
7063  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
7064  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
7065  }
7066 }
7067 
7068 VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
7069 {
7070  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
7071  const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
7072  const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
7073  return isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize;
7074 }
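// Editorial example: assuming the common defaults of VMA_SMALL_HEAP_MAX_SIZE
// = 512 MiB and VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE = 256 MiB (the macros are
// defined earlier in this file and may differ), a 256 MiB heap counts as
// "small" and gets a preferred block size of 256 MiB / 8 = 32 MiB, while a
// 4 GiB heap uses m_PreferredLargeHeapBlockSize, i.e. 256 MiB unless
// overridden via VmaAllocatorCreateInfo::preferredLargeHeapBlockSize.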
7075 
7076 VkResult VmaAllocator_T::AllocateMemoryOfType(
7077  const VkMemoryRequirements& vkMemReq,
7078  bool dedicatedAllocation,
7079  VkBuffer dedicatedBuffer,
7080  VkImage dedicatedImage,
7081  const VmaAllocationCreateInfo& createInfo,
7082  uint32_t memTypeIndex,
7083  VmaSuballocationType suballocType,
7084  VmaAllocation* pAllocation)
7085 {
7086  VMA_ASSERT(pAllocation != VMA_NULL);
7087  VMA_DEBUG_LOG(" AllocateMemory: MemoryTypeIndex=%u, Size=%llu", memTypeIndex, vkMemReq.size);
7088 
7089  VmaAllocationCreateInfo finalCreateInfo = createInfo;
7090 
7091  // If memory type is not HOST_VISIBLE, disable MAPPED.
7092  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
7093  (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
7094  {
7095  finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
7096  }
7097 
7098  VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
7099  VMA_ASSERT(blockVector);
7100 
7101  const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
7102  bool preferDedicatedMemory =
7103  VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
7104  dedicatedAllocation ||
7105         // Heuristic: allocate dedicated memory if the requested size is greater than half of the preferred block size.
7106  vkMemReq.size > preferredBlockSize / 2;
7107 
7108  if(preferDedicatedMemory &&
7109  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
7110  finalCreateInfo.pool == VK_NULL_HANDLE)
7111     {
7112         finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
7113     }
7114 
7115  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
7116  {
7117  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7118  {
7119  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7120  }
7121  else
7122  {
7123  return AllocateDedicatedMemory(
7124  vkMemReq.size,
7125  suballocType,
7126  memTypeIndex,
7127  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
7128  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
7129  finalCreateInfo.pUserData,
7130  dedicatedBuffer,
7131  dedicatedImage,
7132  pAllocation);
7133  }
7134  }
7135  else
7136  {
7137  VkResult res = blockVector->Allocate(
7138  VK_NULL_HANDLE, // hCurrentPool
7139  m_CurrentFrameIndex.load(),
7140  vkMemReq,
7141  finalCreateInfo,
7142  suballocType,
7143  pAllocation);
7144  if(res == VK_SUCCESS)
7145  {
7146  return res;
7147  }
7148 
7149         // Block allocation failed. Try dedicated memory as a fallback.
7150  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7151  {
7152  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7153  }
7154  else
7155  {
7156  res = AllocateDedicatedMemory(
7157  vkMemReq.size,
7158  suballocType,
7159  memTypeIndex,
7160  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
7161  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
7162  finalCreateInfo.pUserData,
7163  dedicatedBuffer,
7164  dedicatedImage,
7165  pAllocation);
7166  if(res == VK_SUCCESS)
7167  {
7168                 // Succeeded: AllocateDedicatedMemory already filled *pAllocation, nothing more to do here.
7169  VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
7170  return VK_SUCCESS;
7171  }
7172  else
7173  {
7174  // Everything failed: Return error code.
7175  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
7176  return res;
7177  }
7178  }
7179  }
7180 }
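// Editorial summary of AllocateMemoryOfType() above: dedicated memory is used
// when explicitly requested via VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT,
// when the implementation requires or prefers it, or when the request exceeds
// half of the preferred block size; otherwise the block vector is tried first
// and dedicated memory serves only as a fallback. With
// VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT set, no new VkDeviceMemory is ever
// allocated and the call fails instead.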
7181 
7182 VkResult VmaAllocator_T::AllocateDedicatedMemory(
7183  VkDeviceSize size,
7184  VmaSuballocationType suballocType,
7185  uint32_t memTypeIndex,
7186  bool map,
7187  bool isUserDataString,
7188  void* pUserData,
7189  VkBuffer dedicatedBuffer,
7190  VkImage dedicatedImage,
7191  VmaAllocation* pAllocation)
7192 {
7193  VMA_ASSERT(pAllocation);
7194 
7195  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
7196  allocInfo.memoryTypeIndex = memTypeIndex;
7197  allocInfo.allocationSize = size;
7198 
7199  VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
7200  if(m_UseKhrDedicatedAllocation)
7201  {
7202  if(dedicatedBuffer != VK_NULL_HANDLE)
7203  {
7204  VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
7205  dedicatedAllocInfo.buffer = dedicatedBuffer;
7206  allocInfo.pNext = &dedicatedAllocInfo;
7207  }
7208  else if(dedicatedImage != VK_NULL_HANDLE)
7209  {
7210  dedicatedAllocInfo.image = dedicatedImage;
7211  allocInfo.pNext = &dedicatedAllocInfo;
7212  }
7213  }
7214 
7215  // Allocate VkDeviceMemory.
7216  VkDeviceMemory hMemory = VK_NULL_HANDLE;
7217  VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
7218  if(res < 0)
7219  {
7220  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
7221  return res;
7222  }
7223 
7224  void* pMappedData = VMA_NULL;
7225  if(map)
7226  {
7227  res = (*m_VulkanFunctions.vkMapMemory)(
7228  m_hDevice,
7229  hMemory,
7230  0,
7231  VK_WHOLE_SIZE,
7232  0,
7233  &pMappedData);
7234  if(res < 0)
7235  {
7236  VMA_DEBUG_LOG(" vkMapMemory FAILED");
7237  FreeVulkanMemory(memTypeIndex, size, hMemory);
7238  return res;
7239  }
7240  }
7241 
7242  *pAllocation = vma_new(this, VmaAllocation_T)(m_CurrentFrameIndex.load(), isUserDataString);
7243  (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
7244  (*pAllocation)->SetUserData(this, pUserData);
7245 
7246  // Register it in m_pDedicatedAllocations.
7247  {
7248  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7249  AllocationVectorType* pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
7250  VMA_ASSERT(pDedicatedAllocations);
7251  VmaVectorInsertSorted<VmaPointerLess>(*pDedicatedAllocations, *pAllocation);
7252  }
7253 
7254  VMA_DEBUG_LOG(" Allocated DedicatedMemory MemoryTypeIndex=#%u", memTypeIndex);
7255 
7256  return VK_SUCCESS;
7257 }
7258 
7259 void VmaAllocator_T::GetBufferMemoryRequirements(
7260  VkBuffer hBuffer,
7261  VkMemoryRequirements& memReq,
7262  bool& requiresDedicatedAllocation,
7263  bool& prefersDedicatedAllocation) const
7264 {
7265  if(m_UseKhrDedicatedAllocation)
7266  {
7267  VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
7268  memReqInfo.buffer = hBuffer;
7269 
7270  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
7271 
7272  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
7273  memReq2.pNext = &memDedicatedReq;
7274 
7275  (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
7276 
7277  memReq = memReq2.memoryRequirements;
7278  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
7279  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
7280  }
7281  else
7282  {
7283  (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
7284  requiresDedicatedAllocation = false;
7285  prefersDedicatedAllocation = false;
7286  }
7287 }
7288 
7289 void VmaAllocator_T::GetImageMemoryRequirements(
7290  VkImage hImage,
7291  VkMemoryRequirements& memReq,
7292  bool& requiresDedicatedAllocation,
7293  bool& prefersDedicatedAllocation) const
7294 {
7295  if(m_UseKhrDedicatedAllocation)
7296  {
7297  VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
7298  memReqInfo.image = hImage;
7299 
7300  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
7301 
7302  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
7303  memReq2.pNext = &memDedicatedReq;
7304 
7305  (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
7306 
7307  memReq = memReq2.memoryRequirements;
7308  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
7309  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
7310  }
7311  else
7312  {
7313  (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
7314  requiresDedicatedAllocation = false;
7315  prefersDedicatedAllocation = false;
7316  }
7317 }
7318 
7319 VkResult VmaAllocator_T::AllocateMemory(
7320  const VkMemoryRequirements& vkMemReq,
7321  bool requiresDedicatedAllocation,
7322  bool prefersDedicatedAllocation,
7323  VkBuffer dedicatedBuffer,
7324  VkImage dedicatedImage,
7325  const VmaAllocationCreateInfo& createInfo,
7326  VmaSuballocationType suballocType,
7327  VmaAllocation* pAllocation)
7328 {
7329  if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
7330  (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7331  {
7332  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
7333  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7334  }
7335     if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
7336         (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
7337     {
7338  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
7339  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7340  }
7341  if(requiresDedicatedAllocation)
7342  {
7343  if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
7344  {
7345  VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
7346  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7347  }
7348  if(createInfo.pool != VK_NULL_HANDLE)
7349  {
7350  VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
7351  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7352  }
7353  }
7354  if((createInfo.pool != VK_NULL_HANDLE) &&
7355  ((createInfo.flags & (VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT)) != 0))
7356  {
7357  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
7358  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7359  }
7360 
7361  if(createInfo.pool != VK_NULL_HANDLE)
7362  {
7363  return createInfo.pool->m_BlockVector.Allocate(
7364  createInfo.pool,
7365  m_CurrentFrameIndex.load(),
7366  vkMemReq,
7367  createInfo,
7368  suballocType,
7369  pAllocation);
7370  }
7371  else
7372  {
7373         // Bit mask of Vulkan memory types acceptable for this allocation.
7374  uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
7375  uint32_t memTypeIndex = UINT32_MAX;
7376  VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
7377  if(res == VK_SUCCESS)
7378  {
7379  res = AllocateMemoryOfType(
7380  vkMemReq,
7381  requiresDedicatedAllocation || prefersDedicatedAllocation,
7382  dedicatedBuffer,
7383  dedicatedImage,
7384  createInfo,
7385  memTypeIndex,
7386  suballocType,
7387  pAllocation);
7388  // Succeeded on first try.
7389  if(res == VK_SUCCESS)
7390  {
7391  return res;
7392  }
7393  // Allocation from this memory type failed. Try other compatible memory types.
7394  else
7395  {
7396  for(;;)
7397  {
7398  // Remove old memTypeIndex from list of possibilities.
7399  memoryTypeBits &= ~(1u << memTypeIndex);
7400  // Find alternative memTypeIndex.
7401  res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
7402  if(res == VK_SUCCESS)
7403  {
7404  res = AllocateMemoryOfType(
7405  vkMemReq,
7406  requiresDedicatedAllocation || prefersDedicatedAllocation,
7407  dedicatedBuffer,
7408  dedicatedImage,
7409  createInfo,
7410  memTypeIndex,
7411  suballocType,
7412  pAllocation);
7413  // Allocation from this alternative memory type succeeded.
7414  if(res == VK_SUCCESS)
7415  {
7416  return res;
7417  }
7418  // else: Allocation from this memory type failed. Try next one - next loop iteration.
7419  }
7420  // No other matching memory type index could be found.
7421  else
7422  {
7423  // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
7424  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
7425  }
7426  }
7427  }
7428  }
7429         // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
7430  else
7431  return res;
7432  }
7433 }
7434 
7435 void VmaAllocator_T::FreeMemory(const VmaAllocation allocation)
7436 {
7437  VMA_ASSERT(allocation);
7438 
7439  if(allocation->CanBecomeLost() == false ||
7440  allocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
7441  {
7442  switch(allocation->GetType())
7443  {
7444  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
7445  {
7446  VmaBlockVector* pBlockVector = VMA_NULL;
7447  VmaPool hPool = allocation->GetPool();
7448  if(hPool != VK_NULL_HANDLE)
7449  {
7450  pBlockVector = &hPool->m_BlockVector;
7451  }
7452  else
7453  {
7454  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
7455  pBlockVector = m_pBlockVectors[memTypeIndex];
7456  }
7457  pBlockVector->Free(allocation);
7458  }
7459  break;
7460  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
7461  FreeDedicatedMemory(allocation);
7462  break;
7463  default:
7464  VMA_ASSERT(0);
7465  }
7466  }
7467 
7468  allocation->SetUserData(this, VMA_NULL);
7469  vma_delete(this, allocation);
7470 }
7471 
7472 void VmaAllocator_T::CalculateStats(VmaStats* pStats)
7473 {
7474  // Initialize.
7475  InitStatInfo(pStats->total);
7476  for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
7477  InitStatInfo(pStats->memoryType[i]);
7478  for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
7479  InitStatInfo(pStats->memoryHeap[i]);
7480 
7481  // Process default pools.
7482  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7483  {
7484  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
7485  VMA_ASSERT(pBlockVector);
7486  pBlockVector->AddStats(pStats);
7487  }
7488 
7489  // Process custom pools.
7490  {
7491  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7492  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
7493  {
7494  m_Pools[poolIndex]->GetBlockVector().AddStats(pStats);
7495  }
7496  }
7497 
7498  // Process dedicated allocations.
7499  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7500  {
7501  const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
7502  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7503  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
7504  VMA_ASSERT(pDedicatedAllocVector);
7505  for(size_t allocIndex = 0, allocCount = pDedicatedAllocVector->size(); allocIndex < allocCount; ++allocIndex)
7506  {
7507  VmaStatInfo allocationStatInfo;
7508  (*pDedicatedAllocVector)[allocIndex]->DedicatedAllocCalcStatsInfo(allocationStatInfo);
7509  VmaAddStatInfo(pStats->total, allocationStatInfo);
7510  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
7511  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
7512  }
7513  }
7514 
7515  // Postprocess.
7516  VmaPostprocessCalcStatInfo(pStats->total);
7517  for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
7518  VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
7519  for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
7520  VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
7521 }
7522 
7523 static const uint32_t VMA_VENDOR_ID_AMD = 4098; // PCI vendor ID 0x1002
7524 
7525 VkResult VmaAllocator_T::Defragment(
7526  VmaAllocation* pAllocations,
7527  size_t allocationCount,
7528  VkBool32* pAllocationsChanged,
7529  const VmaDefragmentationInfo* pDefragmentationInfo,
7530  VmaDefragmentationStats* pDefragmentationStats)
7531 {
7532  if(pAllocationsChanged != VMA_NULL)
7533  {
7534         memset(pAllocationsChanged, 0, allocationCount * sizeof(VkBool32)); // Clear all entries, not just the first one.
7535  }
7536  if(pDefragmentationStats != VMA_NULL)
7537  {
7538  memset(pDefragmentationStats, 0, sizeof(*pDefragmentationStats));
7539  }
7540 
7541  const uint32_t currentFrameIndex = m_CurrentFrameIndex.load();
7542 
7543  VmaMutexLock poolsLock(m_PoolsMutex, m_UseMutex);
7544 
7545  const size_t poolCount = m_Pools.size();
7546 
7547  // Dispatch pAllocations among defragmentators. Create them in BlockVectors when necessary.
7548  for(size_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
7549  {
7550  VmaAllocation hAlloc = pAllocations[allocIndex];
7551  VMA_ASSERT(hAlloc);
7552  const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
7553  // DedicatedAlloc cannot be defragmented.
7554  if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
7555  // Only HOST_VISIBLE memory types can be defragmented.
7556  ((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0) &&
7557  // Lost allocation cannot be defragmented.
7558  (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
7559  {
7560  VmaBlockVector* pAllocBlockVector = VMA_NULL;
7561 
7562  const VmaPool hAllocPool = hAlloc->GetPool();
7563             // This allocation belongs to a custom pool.
7564  if(hAllocPool != VK_NULL_HANDLE)
7565  {
7566  pAllocBlockVector = &hAllocPool->GetBlockVector();
7567  }
7568             // This allocation belongs to the general pool.
7569  else
7570  {
7571  pAllocBlockVector = m_pBlockVectors[memTypeIndex];
7572  }
7573 
7574  VmaDefragmentator* const pDefragmentator = pAllocBlockVector->EnsureDefragmentator(this, currentFrameIndex);
7575 
7576  VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
7577  &pAllocationsChanged[allocIndex] : VMA_NULL;
7578  pDefragmentator->AddAllocation(hAlloc, pChanged);
7579  }
7580  }
7581 
7582  VkResult result = VK_SUCCESS;
7583 
7584  // ======== Main processing.
7585 
7586     VkDeviceSize maxBytesToMove = VK_WHOLE_SIZE; // No limit by default.
7587  uint32_t maxAllocationsToMove = UINT32_MAX;
7588  if(pDefragmentationInfo != VMA_NULL)
7589  {
7590  maxBytesToMove = pDefragmentationInfo->maxBytesToMove;
7591  maxAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
7592  }
7593 
7594  // Process standard memory.
7595  for(uint32_t memTypeIndex = 0;
7596  (memTypeIndex < GetMemoryTypeCount()) && (result == VK_SUCCESS);
7597  ++memTypeIndex)
7598  {
7599  // Only HOST_VISIBLE memory types can be defragmented.
7600  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
7601  {
7602  result = m_pBlockVectors[memTypeIndex]->Defragment(
7603  pDefragmentationStats,
7604  maxBytesToMove,
7605  maxAllocationsToMove);
7606  }
7607  }
7608 
7609  // Process custom pools.
7610  for(size_t poolIndex = 0; (poolIndex < poolCount) && (result == VK_SUCCESS); ++poolIndex)
7611  {
7612  result = m_Pools[poolIndex]->GetBlockVector().Defragment(
7613  pDefragmentationStats,
7614  maxBytesToMove,
7615  maxAllocationsToMove);
7616  }
7617 
7618  // ======== Destroy defragmentators.
7619 
7620  // Process custom pools.
7621  for(size_t poolIndex = poolCount; poolIndex--; )
7622  {
7623  m_Pools[poolIndex]->GetBlockVector().DestroyDefragmentator();
7624  }
7625 
7626  // Process standard memory.
7627  for(uint32_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
7628  {
7629  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
7630  {
7631  m_pBlockVectors[memTypeIndex]->DestroyDefragmentator();
7632  }
7633  }
7634 
7635  return result;
7636 }
7637 
7638 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
7639 {
7640  if(hAllocation->CanBecomeLost())
7641  {
7642  /*
7643  Warning: This is a carefully designed algorithm.
7644  Do not modify unless you really know what you're doing :)
7645  */
7646  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
7647  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
7648  for(;;)
7649  {
7650  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
7651  {
7652  pAllocationInfo->memoryType = UINT32_MAX;
7653  pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
7654  pAllocationInfo->offset = 0;
7655  pAllocationInfo->size = hAllocation->GetSize();
7656  pAllocationInfo->pMappedData = VMA_NULL;
7657  pAllocationInfo->pUserData = hAllocation->GetUserData();
7658  return;
7659  }
7660  else if(localLastUseFrameIndex == localCurrFrameIndex)
7661  {
7662  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
7663  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
7664  pAllocationInfo->offset = hAllocation->GetOffset();
7665  pAllocationInfo->size = hAllocation->GetSize();
7666  pAllocationInfo->pMappedData = VMA_NULL;
7667  pAllocationInfo->pUserData = hAllocation->GetUserData();
7668  return;
7669  }
7670  else // Last use time earlier than current time.
7671  {
7672  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
7673  {
7674  localLastUseFrameIndex = localCurrFrameIndex;
7675  }
7676  }
7677  }
7678  }
7679  else
7680  {
7681  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
7682  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
7683  pAllocationInfo->offset = hAllocation->GetOffset();
7684  pAllocationInfo->size = hAllocation->GetSize();
7685  pAllocationInfo->pMappedData = hAllocation->GetMappedData();
7686  pAllocationInfo->pUserData = hAllocation->GetUserData();
7687  }
7688 }
7689 
7690 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
7691 {
7692  VMA_DEBUG_LOG(" CreatePool: MemoryTypeIndex=%u", pCreateInfo->memoryTypeIndex);
7693 
7694  VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
7695 
7696  if(newCreateInfo.maxBlockCount == 0)
7697  {
7698  newCreateInfo.maxBlockCount = SIZE_MAX;
7699  }
7700  if(newCreateInfo.blockSize == 0)
7701  {
7702  newCreateInfo.blockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
7703  }
7704 
7705  *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo);
7706 
7707  VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
7708  if(res != VK_SUCCESS)
7709  {
7710  vma_delete(this, *pPool);
7711  *pPool = VMA_NULL;
7712  return res;
7713  }
7714 
7715  // Add to m_Pools.
7716  {
7717  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7718  VmaVectorInsertSorted<VmaPointerLess>(m_Pools, *pPool);
7719  }
7720 
7721  return VK_SUCCESS;
7722 }
7723 
7724 void VmaAllocator_T::DestroyPool(VmaPool pool)
7725 {
7726  // Remove from m_Pools.
7727  {
7728  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7729  bool success = VmaVectorRemoveSorted<VmaPointerLess>(m_Pools, pool);
7730  VMA_ASSERT(success && "Pool not found in Allocator.");
7731  }
7732 
7733  vma_delete(this, pool);
7734 }
7735 
7736 void VmaAllocator_T::GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats)
7737 {
7738  pool->m_BlockVector.GetPoolStats(pPoolStats);
7739 }
7740 
7741 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
7742 {
7743  m_CurrentFrameIndex.store(frameIndex);
7744 }
7745 
7746 void VmaAllocator_T::MakePoolAllocationsLost(
7747  VmaPool hPool,
7748  size_t* pLostAllocationCount)
7749 {
7750  hPool->m_BlockVector.MakePoolAllocationsLost(
7751  m_CurrentFrameIndex.load(),
7752  pLostAllocationCount);
7753 }
7754 
7755 void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
7756 {
7757  *pAllocation = vma_new(this, VmaAllocation_T)(VMA_FRAME_INDEX_LOST, false);
7758  (*pAllocation)->InitLost();
7759 }
7760 
7761 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
7762 {
7763  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
7764 
7765  VkResult res;
7766  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
7767  {
7768  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
7769  if(m_HeapSizeLimit[heapIndex] >= pAllocateInfo->allocationSize)
7770  {
7771  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
7772  if(res == VK_SUCCESS)
7773  {
7774  m_HeapSizeLimit[heapIndex] -= pAllocateInfo->allocationSize;
7775  }
7776  }
7777  else
7778  {
7779  res = VK_ERROR_OUT_OF_DEVICE_MEMORY;
7780  }
7781  }
7782  else
7783  {
7784  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
7785  }
7786 
7787  if(res == VK_SUCCESS && m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
7788  {
7789  (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize);
7790  }
7791 
7792  return res;
7793 }
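// Editorial note: m_HeapSizeLimit acts as a simple software budget per heap.
// An illustrative trace, assuming heap 0 was limited to 256 MiB via
// VmaAllocatorCreateInfo::pHeapSizeLimit:
//
//     allocate 200 MiB -> succeeds, remaining limit 56 MiB
//     allocate 100 MiB -> fails with VK_ERROR_OUT_OF_DEVICE_MEMORY
//     free 200 MiB     -> limit restored to 256 MiB (see FreeVulkanMemory below)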
7794 
7795 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
7796 {
7797  if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
7798  {
7799  (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size);
7800  }
7801 
7802  (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
7803 
7804  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
7805  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
7806  {
7807  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
7808  m_HeapSizeLimit[heapIndex] += size;
7809  }
7810 }
7811 
7812 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
7813 {
7814  if(hAllocation->CanBecomeLost())
7815  {
7816  return VK_ERROR_MEMORY_MAP_FAILED;
7817  }
7818 
7819  switch(hAllocation->GetType())
7820  {
7821  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
7822  {
7823  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
7824  char *pBytes = VMA_NULL;
7825  VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
7826  if(res == VK_SUCCESS)
7827  {
7828  *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
7829  hAllocation->BlockAllocMap();
7830  }
7831  return res;
7832  }
7833  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
7834  return hAllocation->DedicatedAllocMap(this, ppData);
7835  default:
7836  VMA_ASSERT(0);
7837  return VK_ERROR_MEMORY_MAP_FAILED;
7838  }
7839 }
7840 
7841 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
7842 {
7843  switch(hAllocation->GetType())
7844  {
7845  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
7846  {
7847  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
7848  hAllocation->BlockAllocUnmap();
7849  pBlock->Unmap(this, 1);
7850  }
7851  break;
7852  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
7853  hAllocation->DedicatedAllocUnmap(this);
7854  break;
7855  default:
7856  VMA_ASSERT(0);
7857  }
7858 }
7859 
7860 void VmaAllocator_T::FreeDedicatedMemory(VmaAllocation allocation)
7861 {
7862  VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
7863 
7864  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
7865  {
7866  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7867  AllocationVectorType* const pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
7868  VMA_ASSERT(pDedicatedAllocations);
7869  bool success = VmaVectorRemoveSorted<VmaPointerLess>(*pDedicatedAllocations, allocation);
7870  VMA_ASSERT(success);
7871  }
7872 
7873  VkDeviceMemory hMemory = allocation->GetMemory();
7874 
7875  if(allocation->GetMappedData() != VMA_NULL)
7876  {
7877  (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
7878  }
7879 
7880  FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
7881 
7882  VMA_DEBUG_LOG(" Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
7883 }
7884 
7885 #if VMA_STATS_STRING_ENABLED
7886 
7887 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
7888 {
7889  bool dedicatedAllocationsStarted = false;
7890  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7891  {
7892  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
7893  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
7894  VMA_ASSERT(pDedicatedAllocVector);
7895  if(pDedicatedAllocVector->empty() == false)
7896  {
7897  if(dedicatedAllocationsStarted == false)
7898  {
7899  dedicatedAllocationsStarted = true;
7900  json.WriteString("DedicatedAllocations");
7901  json.BeginObject();
7902  }
7903 
7904  json.BeginString("Type ");
7905  json.ContinueString(memTypeIndex);
7906  json.EndString();
7907 
7908  json.BeginArray();
7909 
7910  for(size_t i = 0; i < pDedicatedAllocVector->size(); ++i)
7911  {
7912  const VmaAllocation hAlloc = (*pDedicatedAllocVector)[i];
7913  json.BeginObject(true);
7914 
7915  json.WriteString("Type");
7916  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[hAlloc->GetSuballocationType()]);
7917 
7918  json.WriteString("Size");
7919  json.WriteNumber(hAlloc->GetSize());
7920 
7921  const void* pUserData = hAlloc->GetUserData();
7922  if(pUserData != VMA_NULL)
7923  {
7924  json.WriteString("UserData");
7925  if(hAlloc->IsUserDataString())
7926  {
7927  json.WriteString((const char*)pUserData);
7928  }
7929  else
7930  {
7931  json.BeginString();
7932  json.ContinueString_Pointer(pUserData);
7933  json.EndString();
7934  }
7935  }
7936 
7937  json.EndObject();
7938  }
7939 
7940  json.EndArray();
7941  }
7942  }
7943  if(dedicatedAllocationsStarted)
7944  {
7945  json.EndObject();
7946  }
7947 
7948  {
7949  bool allocationsStarted = false;
7950  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
7951  {
7952  if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
7953  {
7954  if(allocationsStarted == false)
7955  {
7956  allocationsStarted = true;
7957  json.WriteString("DefaultPools");
7958  json.BeginObject();
7959  }
7960 
7961  json.BeginString("Type ");
7962  json.ContinueString(memTypeIndex);
7963  json.EndString();
7964 
7965  m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
7966  }
7967  }
7968  if(allocationsStarted)
7969  {
7970  json.EndObject();
7971  }
7972  }
7973 
7974  {
7975  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
7976  const size_t poolCount = m_Pools.size();
7977  if(poolCount > 0)
7978  {
7979  json.WriteString("Pools");
7980  json.BeginArray();
7981  for(size_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
7982  {
7983  m_Pools[poolIndex]->m_BlockVector.PrintDetailedMap(json);
7984  }
7985  json.EndArray();
7986  }
7987  }
7988 }
7989 
7990 #endif // #if VMA_STATS_STRING_ENABLED
7991 
7992 static VkResult AllocateMemoryForImage(
7993  VmaAllocator allocator,
7994  VkImage image,
7995  const VmaAllocationCreateInfo* pAllocationCreateInfo,
7996  VmaSuballocationType suballocType,
7997  VmaAllocation* pAllocation)
7998 {
7999  VMA_ASSERT(allocator && (image != VK_NULL_HANDLE) && pAllocationCreateInfo && pAllocation);
8000 
8001  VkMemoryRequirements vkMemReq = {};
8002  bool requiresDedicatedAllocation = false;
8003  bool prefersDedicatedAllocation = false;
8004  allocator->GetImageMemoryRequirements(image, vkMemReq,
8005  requiresDedicatedAllocation, prefersDedicatedAllocation);
8006 
8007  return allocator->AllocateMemory(
8008  vkMemReq,
8009  requiresDedicatedAllocation,
8010  prefersDedicatedAllocation,
8011  VK_NULL_HANDLE, // dedicatedBuffer
8012  image, // dedicatedImage
8013  *pAllocationCreateInfo,
8014  suballocType,
8015  pAllocation);
8016 }
8017 
8018 ////////////////////////////////////////////////////////////////////////////////
8019 // Public interface
8020 
8021 VkResult vmaCreateAllocator(
8022  const VmaAllocatorCreateInfo* pCreateInfo,
8023  VmaAllocator* pAllocator)
8024 {
8025  VMA_ASSERT(pCreateInfo && pAllocator);
8026  VMA_DEBUG_LOG("vmaCreateAllocator");
8027  *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
8028  return VK_SUCCESS;
8029 }
8030 
8031 void vmaDestroyAllocator(
8032  VmaAllocator allocator)
8033 {
8034  if(allocator != VK_NULL_HANDLE)
8035  {
8036  VMA_DEBUG_LOG("vmaDestroyAllocator");
8037  VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
8038  vma_delete(&allocationCallbacks, allocator);
8039  }
8040 }
8041 
8042 void vmaGetPhysicalDeviceProperties(
8043     VmaAllocator allocator,
8044  const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
8045 {
8046  VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
8047  *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
8048 }
8049 
8050 void vmaGetMemoryProperties(
8051     VmaAllocator allocator,
8052  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
8053 {
8054  VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
8055  *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
8056 }
8057 
8058 void vmaGetMemoryTypeProperties(
8059     VmaAllocator allocator,
8060  uint32_t memoryTypeIndex,
8061  VkMemoryPropertyFlags* pFlags)
8062 {
8063  VMA_ASSERT(allocator && pFlags);
8064  VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
8065  *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
8066 }
8067 
8068 void vmaSetCurrentFrameIndex(
8069     VmaAllocator allocator,
8070  uint32_t frameIndex)
8071 {
8072  VMA_ASSERT(allocator);
8073  VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);
8074 
8075  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8076 
8077  allocator->SetCurrentFrameIndex(frameIndex);
8078 }
8079 
8080 void vmaCalculateStats(
8081  VmaAllocator allocator,
8082  VmaStats* pStats)
8083 {
8084  VMA_ASSERT(allocator && pStats);
8085  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8086  allocator->CalculateStats(pStats);
8087 }
8088 
8089 #if VMA_STATS_STRING_ENABLED
8090 
8091 void vmaBuildStatsString(
8092  VmaAllocator allocator,
8093  char** ppStatsString,
8094  VkBool32 detailedMap)
8095 {
8096  VMA_ASSERT(allocator && ppStatsString);
8097  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8098 
8099  VmaStringBuilder sb(allocator);
8100  {
8101  VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
8102  json.BeginObject();
8103 
8104  VmaStats stats;
8105  allocator->CalculateStats(&stats);
8106 
8107  json.WriteString("Total");
8108  VmaPrintStatInfo(json, stats.total);
8109 
8110  for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
8111  {
8112  json.BeginString("Heap ");
8113  json.ContinueString(heapIndex);
8114  json.EndString();
8115  json.BeginObject();
8116 
8117  json.WriteString("Size");
8118  json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);
8119 
8120  json.WriteString("Flags");
8121  json.BeginArray(true);
8122  if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
8123  {
8124  json.WriteString("DEVICE_LOCAL");
8125  }
8126  json.EndArray();
8127 
8128  if(stats.memoryHeap[heapIndex].blockCount > 0)
8129  {
8130  json.WriteString("Stats");
8131  VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
8132  }
8133 
8134  for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
8135  {
8136  if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
8137  {
8138  json.BeginString("Type ");
8139  json.ContinueString(typeIndex);
8140  json.EndString();
8141 
8142  json.BeginObject();
8143 
8144  json.WriteString("Flags");
8145  json.BeginArray(true);
8146  VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
8147  if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
8148  {
8149  json.WriteString("DEVICE_LOCAL");
8150  }
8151  if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
8152  {
8153  json.WriteString("HOST_VISIBLE");
8154  }
8155  if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
8156  {
8157  json.WriteString("HOST_COHERENT");
8158  }
8159  if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
8160  {
8161  json.WriteString("HOST_CACHED");
8162  }
8163  if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
8164  {
8165  json.WriteString("LAZILY_ALLOCATED");
8166  }
8167  json.EndArray();
8168 
8169  if(stats.memoryType[typeIndex].blockCount > 0)
8170  {
8171  json.WriteString("Stats");
8172  VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
8173  }
8174 
8175  json.EndObject();
8176  }
8177  }
8178 
8179  json.EndObject();
8180  }
8181  if(detailedMap == VK_TRUE)
8182  {
8183  allocator->PrintDetailedMap(json);
8184  }
8185 
8186  json.EndObject();
8187  }
8188 
8189  const size_t len = sb.GetLength();
8190  char* const pChars = vma_new_array(allocator, char, len + 1);
8191  if(len > 0)
8192  {
8193  memcpy(pChars, sb.GetData(), len);
8194  }
8195  pChars[len] = '\0';
8196  *ppStatsString = pChars;
8197 }
8198 
8199 void vmaFreeStatsString(
8200  VmaAllocator allocator,
8201  char* pStatsString)
8202 {
8203  if(pStatsString != VMA_NULL)
8204  {
8205  VMA_ASSERT(allocator);
8206  size_t len = strlen(pStatsString);
8207  vma_delete_array(allocator, pStatsString, len + 1);
8208  }
8209 }
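/*
Editorial usage sketch (not part of the original source): dumping allocator
statistics as JSON and releasing the string afterwards.

    char* statsString = nullptr;
    vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE = detailed map
    printf("%s\n", statsString);
    vmaFreeStatsString(allocator, statsString);
*/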
8210 
8211 #endif // #if VMA_STATS_STRING_ENABLED
8212 
8213 /*
8214 This function is not protected by any mutex because it just reads immutable data.
8215 */
8216 VkResult vmaFindMemoryTypeIndex(
8217  VmaAllocator allocator,
8218  uint32_t memoryTypeBits,
8219  const VmaAllocationCreateInfo* pAllocationCreateInfo,
8220  uint32_t* pMemoryTypeIndex)
8221 {
8222  VMA_ASSERT(allocator != VK_NULL_HANDLE);
8223  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
8224  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
8225 
8226  if(pAllocationCreateInfo->memoryTypeBits != 0)
8227  {
8228  memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
8229  }
8230 
8231  uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
8232  uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
8233 
8234  // Convert usage to requiredFlags and preferredFlags.
8235  switch(pAllocationCreateInfo->usage)
8236  {
8237     case VMA_MEMORY_USAGE_UNKNOWN:
8238         break;
8239     case VMA_MEMORY_USAGE_GPU_ONLY:
8240         preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
8241         break;
8242     case VMA_MEMORY_USAGE_CPU_ONLY:
8243         requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
8244         break;
8245     case VMA_MEMORY_USAGE_CPU_TO_GPU:
8246         requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
8247         preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
8248         break;
8249     case VMA_MEMORY_USAGE_GPU_TO_CPU:
8250         requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
8251         preferredFlags |= VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
8252         break;
8253  default:
8254  break;
8255  }
8256 
8257  *pMemoryTypeIndex = UINT32_MAX;
8258  uint32_t minCost = UINT32_MAX;
8259  for(uint32_t memTypeIndex = 0, memTypeBit = 1;
8260  memTypeIndex < allocator->GetMemoryTypeCount();
8261  ++memTypeIndex, memTypeBit <<= 1)
8262  {
8263  // This memory type is acceptable according to memoryTypeBits bitmask.
8264  if((memTypeBit & memoryTypeBits) != 0)
8265  {
8266  const VkMemoryPropertyFlags currFlags =
8267  allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
8268  // This memory type contains requiredFlags.
8269  if((requiredFlags & ~currFlags) == 0)
8270  {
8271  // Calculate cost as number of bits from preferredFlags not present in this memory type.
8272  uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags);
8273  // Remember memory type with lowest cost.
8274  if(currCost < minCost)
8275  {
8276  *pMemoryTypeIndex = memTypeIndex;
8277  if(currCost == 0)
8278  {
8279  return VK_SUCCESS;
8280  }
8281  minCost = currCost;
8282  }
8283  }
8284  }
8285  }
8286  return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
8287 }
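/*
Editorial usage sketch: finding a memory type for a CPU-side staging buffer.
memoryTypeBits would normally come from vkGetBufferMemoryRequirements();
names here are illustrative.

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;

    uint32_t memTypeIndex = UINT32_MAX;
    VkResult res = vmaFindMemoryTypeIndex(
        allocator, memoryTypeBits, &allocCreateInfo, &memTypeIndex);
    // res == VK_ERROR_FEATURE_NOT_PRESENT if no type in memoryTypeBits has
    // HOST_VISIBLE | HOST_COHERENT.
*/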
8288 
8289 VkResult vmaCreatePool(
8290  VmaAllocator allocator,
8291  const VmaPoolCreateInfo* pCreateInfo,
8292  VmaPool* pPool)
8293 {
8294  VMA_ASSERT(allocator && pCreateInfo && pPool);
8295 
8296  VMA_DEBUG_LOG("vmaCreatePool");
8297 
8298  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8299 
8300  return allocator->CreatePool(pCreateInfo, pPool);
8301 }
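/*
Editorial usage sketch: creating a custom pool for a specific memory type.
memTypeIndex is assumed to come from vmaFindMemoryTypeIndex() above.

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex;
    poolCreateInfo.blockSize = 0;     // 0 = use the calculated preferred block size
    poolCreateInfo.maxBlockCount = 0; // 0 = unlimited (becomes SIZE_MAX)

    VmaPool pool;
    VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
    // Pass the pool in VmaAllocationCreateInfo::pool when allocating;
    // destroy it with vmaDestroyPool() after freeing all its allocations.
*/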
8302 
8303 void vmaDestroyPool(
8304  VmaAllocator allocator,
8305  VmaPool pool)
8306 {
8307  VMA_ASSERT(allocator);
8308 
8309  if(pool == VK_NULL_HANDLE)
8310  {
8311  return;
8312  }
8313 
8314  VMA_DEBUG_LOG("vmaDestroyPool");
8315 
8316  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8317 
8318  allocator->DestroyPool(pool);
8319 }
8320 
8321 void vmaGetPoolStats(
8322  VmaAllocator allocator,
8323  VmaPool pool,
8324  VmaPoolStats* pPoolStats)
8325 {
8326  VMA_ASSERT(allocator && pool && pPoolStats);
8327 
8328  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8329 
8330  allocator->GetPoolStats(pool, pPoolStats);
8331 }
8332 
8333 void vmaMakePoolAllocationsLost(
8334     VmaAllocator allocator,
8335  VmaPool pool,
8336  size_t* pLostAllocationCount)
8337 {
8338  VMA_ASSERT(allocator && pool);
8339 
8340  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8341 
8342  allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
8343 }
8344 
8345 VkResult vmaAllocateMemory(
8346  VmaAllocator allocator,
8347  const VkMemoryRequirements* pVkMemoryRequirements,
8348  const VmaAllocationCreateInfo* pCreateInfo,
8349  VmaAllocation* pAllocation,
8350  VmaAllocationInfo* pAllocationInfo)
8351 {
8352  VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
8353 
8354  VMA_DEBUG_LOG("vmaAllocateMemory");
8355 
8356  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8357 
8358  VkResult result = allocator->AllocateMemory(
8359  *pVkMemoryRequirements,
8360  false, // requiresDedicatedAllocation
8361  false, // prefersDedicatedAllocation
8362  VK_NULL_HANDLE, // dedicatedBuffer
8363  VK_NULL_HANDLE, // dedicatedImage
8364  *pCreateInfo,
8365  VMA_SUBALLOCATION_TYPE_UNKNOWN,
8366  pAllocation);
8367 
8368  if(pAllocationInfo && result == VK_SUCCESS)
8369  {
8370  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8371  }
8372 
8373  return result;
8374 }
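/*
Editorial usage sketch: allocating raw memory for requirements obtained
elsewhere, e.g. from vkGetBufferMemoryRequirements(). Names are illustrative.

    VkMemoryRequirements memReq;
    vkGetBufferMemoryRequirements(device, buffer, &memReq);

    VmaAllocationCreateInfo createInfo = {};
    createInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;

    VmaAllocation allocation;
    VmaAllocationInfo allocInfo;
    VkResult res = vmaAllocateMemory(
        allocator, &memReq, &createInfo, &allocation, &allocInfo);
    // On success, bind manually using allocInfo.deviceMemory + allocInfo.offset.
*/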
8375 
8376 VkResult vmaAllocateMemoryForBuffer(
8377     VmaAllocator allocator,
8378  VkBuffer buffer,
8379  const VmaAllocationCreateInfo* pCreateInfo,
8380  VmaAllocation* pAllocation,
8381  VmaAllocationInfo* pAllocationInfo)
8382 {
8383  VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
8384 
8385  VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
8386 
8387  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8388 
8389  VkMemoryRequirements vkMemReq = {};
8390  bool requiresDedicatedAllocation = false;
8391  bool prefersDedicatedAllocation = false;
8392  allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
8393  requiresDedicatedAllocation,
8394  prefersDedicatedAllocation);
8395 
8396  VkResult result = allocator->AllocateMemory(
8397  vkMemReq,
8398  requiresDedicatedAllocation,
8399  prefersDedicatedAllocation,
8400  buffer, // dedicatedBuffer
8401  VK_NULL_HANDLE, // dedicatedImage
8402  *pCreateInfo,
8403  VMA_SUBALLOCATION_TYPE_BUFFER,
8404  pAllocation);
8405 
8406  if(pAllocationInfo && result == VK_SUCCESS)
8407  {
8408  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8409  }
8410 
8411  return result;
8412 }
8413 
8414 VkResult vmaAllocateMemoryForImage(
8415  VmaAllocator allocator,
8416  VkImage image,
8417  const VmaAllocationCreateInfo* pCreateInfo,
8418  VmaAllocation* pAllocation,
8419  VmaAllocationInfo* pAllocationInfo)
8420 {
8421  VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
8422 
8423  VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
8424 
8425  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8426 
8427  VkResult result = AllocateMemoryForImage(
8428  allocator,
8429  image,
8430  pCreateInfo,
8431  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
8432  pAllocation);
8433 
8434  if(pAllocationInfo && result == VK_SUCCESS)
8435  {
8436  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8437  }
8438 
8439  return result;
8440 }
8441 
8442 void vmaFreeMemory(
8443  VmaAllocator allocator,
8444  VmaAllocation allocation)
8445 {
8446  VMA_ASSERT(allocator && allocation);
8447 
8448  VMA_DEBUG_LOG("vmaFreeMemory");
8449 
8450  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8451 
8452  allocator->FreeMemory(allocation);
8453 }
8454 
8455 void vmaGetAllocationInfo(
8456     VmaAllocator allocator,
8457  VmaAllocation allocation,
8458  VmaAllocationInfo* pAllocationInfo)
8459 {
8460  VMA_ASSERT(allocator && allocation && pAllocationInfo);
8461 
8462  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8463 
8464  allocator->GetAllocationInfo(allocation, pAllocationInfo);
8465 }
8466 
8467 void vmaSetAllocationUserData(
8468     VmaAllocator allocator,
8469  VmaAllocation allocation,
8470  void* pUserData)
8471 {
8472  VMA_ASSERT(allocator && allocation);
8473 
8474  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8475 
8476  allocation->SetUserData(allocator, pUserData);
8477 }
8478 
8479 void vmaCreateLostAllocation(
8480     VmaAllocator allocator,
8481  VmaAllocation* pAllocation)
8482 {
8483  VMA_ASSERT(allocator && pAllocation);
8484 
8485     VMA_DEBUG_GLOBAL_MUTEX_LOCK
8486 
8487  allocator->CreateLostAllocation(pAllocation);
8488 }
8489 
8490 VkResult vmaMapMemory(
8491  VmaAllocator allocator,
8492  VmaAllocation allocation,
8493  void** ppData)
8494 {
8495  VMA_ASSERT(allocator && allocation && ppData);
8496 
8497  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8498 
8499  return allocator->Map(allocation, ppData);
8500 }
8501 
8502 void vmaUnmapMemory(
8503  VmaAllocator allocator,
8504  VmaAllocation allocation)
8505 {
8506  VMA_ASSERT(allocator && allocation);
8507 
8508  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8509 
8510  allocator->Unmap(allocation);
8511 }
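/*
Editorial sketch of the reference counting implemented by Map()/Unmap() above:
unlike raw vkMapMemory(), nested mapping of the same allocation is legal.

    void* p1;
    void* p2;
    vmaMapMemory(allocator, alloc, &p1); // map count 1, block gets mapped
    vmaMapMemory(allocator, alloc, &p2); // map count 2, p2 == p1
    vmaUnmapMemory(allocator, alloc);    // map count 1, still mapped
    vmaUnmapMemory(allocator, alloc);    // map count 0, block unmapped
*/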
8512 
8513 VkResult vmaDefragment(
8514  VmaAllocator allocator,
8515  VmaAllocation* pAllocations,
8516  size_t allocationCount,
8517  VkBool32* pAllocationsChanged,
8518  const VmaDefragmentationInfo *pDefragmentationInfo,
8519  VmaDefragmentationStats* pDefragmentationStats)
8520 {
8521  VMA_ASSERT(allocator && pAllocations);
8522 
8523  VMA_DEBUG_LOG("vmaDefragment");
8524 
8525  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8526 
8527  return allocator->Defragment(pAllocations, allocationCount, pAllocationsChanged, pDefragmentationInfo, pDefragmentationStats);
8528 }
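/*
Editorial usage sketch: defragmenting a set of allocations. Names are
illustrative; pDefragmentationInfo may be null to apply no limits.

    std::vector<VmaAllocation> allocations = ...; // allocations to consider
    std::vector<VkBool32> changed(allocations.size());

    VmaDefragmentationStats stats = {};
    vmaDefragment(allocator, allocations.data(), allocations.size(),
        changed.data(), nullptr, &stats);

    // Any allocation with changed[i] == VK_TRUE now lives at a different
    // VkDeviceMemory/offset; buffers or images bound to it must be recreated
    // and rebound. The allocation contents were already copied by the move.
*/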
8529 
8530 VkResult vmaCreateBuffer(
8531  VmaAllocator allocator,
8532  const VkBufferCreateInfo* pBufferCreateInfo,
8533  const VmaAllocationCreateInfo* pAllocationCreateInfo,
8534  VkBuffer* pBuffer,
8535  VmaAllocation* pAllocation,
8536  VmaAllocationInfo* pAllocationInfo)
8537 {
8538  VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
8539 
8540  VMA_DEBUG_LOG("vmaCreateBuffer");
8541 
8542  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8543 
8544  *pBuffer = VK_NULL_HANDLE;
8545  *pAllocation = VK_NULL_HANDLE;
8546 
8547  // 1. Create VkBuffer.
8548  VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
8549  allocator->m_hDevice,
8550  pBufferCreateInfo,
8551  allocator->GetAllocationCallbacks(),
8552  pBuffer);
8553  if(res >= 0)
8554  {
8555  // 2. vkGetBufferMemoryRequirements.
8556  VkMemoryRequirements vkMemReq = {};
8557  bool requiresDedicatedAllocation = false;
8558  bool prefersDedicatedAllocation = false;
8559  allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
8560  requiresDedicatedAllocation, prefersDedicatedAllocation);
8561 
8562  // Make sure alignment requirements for specific buffer usages reported
8563  // in Physical Device Properties are included in alignment reported by memory requirements.
8564  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0)
8565  {
8566  VMA_ASSERT(vkMemReq.alignment %
8567  allocator->m_PhysicalDeviceProperties.limits.minTexelBufferOffsetAlignment == 0);
8568  }
8569  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0)
8570  {
8571  VMA_ASSERT(vkMemReq.alignment %
8572  allocator->m_PhysicalDeviceProperties.limits.minUniformBufferOffsetAlignment == 0);
8573  }
8574  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0)
8575  {
8576  VMA_ASSERT(vkMemReq.alignment %
8577  allocator->m_PhysicalDeviceProperties.limits.minStorageBufferOffsetAlignment == 0);
8578  }
8579 
8580  // 3. Allocate memory using allocator.
8581  res = allocator->AllocateMemory(
8582  vkMemReq,
8583  requiresDedicatedAllocation,
8584  prefersDedicatedAllocation,
8585  *pBuffer, // dedicatedBuffer
8586  VK_NULL_HANDLE, // dedicatedImage
8587  *pAllocationCreateInfo,
8588  VMA_SUBALLOCATION_TYPE_BUFFER,
8589  pAllocation);
8590  if(res >= 0)
8591  {
8592             // 4. Bind buffer with memory.
8593  res = (*allocator->GetVulkanFunctions().vkBindBufferMemory)(
8594  allocator->m_hDevice,
8595  *pBuffer,
8596  (*pAllocation)->GetMemory(),
8597  (*pAllocation)->GetOffset());
8598  if(res >= 0)
8599  {
8600  // All steps succeeded.
8601  if(pAllocationInfo != VMA_NULL)
8602  {
8603  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8604  }
8605  return VK_SUCCESS;
8606  }
8607  allocator->FreeMemory(*pAllocation);
8608  *pAllocation = VK_NULL_HANDLE;
8609  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
8610  *pBuffer = VK_NULL_HANDLE;
8611  return res;
8612  }
8613  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
8614  *pBuffer = VK_NULL_HANDLE;
8615  return res;
8616  }
8617  return res;
8618 }
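/*
Editorial usage sketch: creating a persistently mapped staging buffer with
vmaCreateBuffer(). Names are illustrative.

    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufCreateInfo.size = 1024 * 1024;
    bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;

    VkBuffer stagingBuf;
    VmaAllocation stagingAlloc;
    VmaAllocationInfo stagingAllocInfo;
    vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo,
        &stagingBuf, &stagingAlloc, &stagingAllocInfo);
    // stagingAllocInfo.pMappedData remains valid until vmaDestroyBuffer().
*/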
8619 
8620 void vmaDestroyBuffer(
8621  VmaAllocator allocator,
8622  VkBuffer buffer,
8623  VmaAllocation allocation)
8624 {
8625  if(buffer != VK_NULL_HANDLE)
8626  {
8627  VMA_ASSERT(allocator);
8628 
8629  VMA_DEBUG_LOG("vmaDestroyBuffer");
8630 
8631  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8632 
8633  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
8634 
8635  allocator->FreeMemory(allocation);
8636  }
8637 }
8638 
8639 VkResult vmaCreateImage(
8640  VmaAllocator allocator,
8641  const VkImageCreateInfo* pImageCreateInfo,
8642  const VmaAllocationCreateInfo* pAllocationCreateInfo,
8643  VkImage* pImage,
8644  VmaAllocation* pAllocation,
8645  VmaAllocationInfo* pAllocationInfo)
8646 {
8647  VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
8648 
8649  VMA_DEBUG_LOG("vmaCreateImage");
8650 
8651  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8652 
8653  *pImage = VK_NULL_HANDLE;
8654  *pAllocation = VK_NULL_HANDLE;
8655 
8656  // 1. Create VkImage.
8657  VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
8658  allocator->m_hDevice,
8659  pImageCreateInfo,
8660  allocator->GetAllocationCallbacks(),
8661  pImage);
8662  if(res >= 0)
8663  {
8664  VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
8665  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
8666  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
8667 
8668  // 2. Allocate memory using allocator.
8669  res = AllocateMemoryForImage(allocator, *pImage, pAllocationCreateInfo, suballocType, pAllocation);
8670  if(res >= 0)
8671  {
8672  // 3. Bind image with memory.
8673  res = (*allocator->GetVulkanFunctions().vkBindImageMemory)(
8674  allocator->m_hDevice,
8675  *pImage,
8676  (*pAllocation)->GetMemory(),
8677  (*pAllocation)->GetOffset());
8678  if(res >= 0)
8679  {
8680  // All steps succeeded.
8681  if(pAllocationInfo != VMA_NULL)
8682  {
8683  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
8684  }
8685  return VK_SUCCESS;
8686  }
8687  allocator->FreeMemory(*pAllocation);
8688  *pAllocation = VK_NULL_HANDLE;
8689  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
8690  *pImage = VK_NULL_HANDLE;
8691  return res;
8692  }
8693  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
8694  *pImage = VK_NULL_HANDLE;
8695  return res;
8696  }
8697  return res;
8698 }
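/*
Editorial usage sketch: creating a sampled 2D texture in device-local memory.

    VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
    imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
    imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
    imgCreateInfo.extent = { 512, 512, 1 };
    imgCreateInfo.mipLevels = 1;
    imgCreateInfo.arrayLayers = 1;
    imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
    imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
    imgCreateInfo.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;

    VkImage image;
    VmaAllocation allocation;
    vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo,
        &image, &allocation, nullptr);
    // Destroy both with vmaDestroyImage(allocator, image, allocation).
*/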
8699 
8700 void vmaDestroyImage(
8701  VmaAllocator allocator,
8702  VkImage image,
8703  VmaAllocation allocation)
8704 {
8705  if(image != VK_NULL_HANDLE)
8706  {
8707  VMA_ASSERT(allocator);
8708 
8709  VMA_DEBUG_LOG("vmaDestroyImage");
8710 
8711  VMA_DEBUG_GLOBAL_MUTEX_LOCK
8712 
8713  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
8714 
8715  allocator->FreeMemory(allocation);
8716  }
8717 }
8718 
8719 #endif // #ifdef VMA_IMPLEMENTATION
enum VmaMemoryUsage

Enumerator
VMA_MEMORY_USAGE_UNKNOWN 

No intended memory usage specified. Use other members of VmaAllocationCreateInfo to specify your requirements.

VMA_MEMORY_USAGE_GPU_ONLY

Memory will be used on device only, so fast access from the device is preferred. It usually means device-local GPU (video) memory. No need to be mappable on host. It is roughly equivalent to D3D12_HEAP_TYPE_DEFAULT.

Usage:

  • Resources written and read by device, e.g. images used as attachments.
VMA_MEMORY_USAGE_CPU_ONLY

Memory will be mappable on host. It usually means CPU (system) memory. Resources created in this pool may still be accessible to the device, but access to them can be slower. Guaranteed to be HOST_VISIBLE and HOST_COHERENT. CPU reads may be uncached. It is roughly equivalent to D3D12_HEAP_TYPE_UPLOAD.
Usage: Staging copy of resources used as transfer source.

VMA_MEMORY_USAGE_CPU_TO_GPU 

Memory that is both mappable on host (guaranteed to be HOST_VISIBLE) and preferably fast to access by the GPU. CPU reads may be uncached and very slow.

Usage: Resources written frequently by host (dynamic), read by device. E.g. textures, vertex buffers, uniform buffers updated every frame or every draw call.

VMA_MEMORY_USAGE_GPU_TO_CPU

Memory mappable on host (guaranteed to be HOST_VISIBLE) and cached. It is roughly equivalent to D3D12_HEAP_TYPE_READBACK.
Usage:

  • Resources written by device, read by host - results of some computations, e.g. screen capture, average scene luminance for HDR tone mapping.
  • Any resources read or accessed randomly on host, e.g. CPU-side copy of vertex buffer used as source of transfer, but also used for collision detection.
VMA_MEMORY_USAGE_MAX_ENUM
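As a hedged illustration of choosing one of these values: a buffer intended for reading back device-written results on the host fits VMA_MEMORY_USAGE_GPU_TO_CPU. In the sketch below, the buffer size and usage flags are assumptions for the example:

VkBufferCreateInfo readbackInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
readbackInfo.size = 65536; // assumed size
readbackInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT; // written by device via transfer

VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_GPU_TO_CPU; // host-visible and cached, suited to host reads

VkBuffer readbackBuffer;
VmaAllocation readbackAllocation;
vmaCreateBuffer(allocator, &readbackInfo, &allocInfo, &readbackBuffer, &readbackAllocation, nullptr);

Because the chosen memory type is guaranteed to be HOST_VISIBLE, the buffer can then be mapped with vmaMapMemory() and read on the CPU once the device has finished writing to it.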