diff --git a/README.md b/README.md index 7b33ebb..309b179 100644 --- a/README.md +++ b/README.md @@ -50,6 +50,7 @@ Additional features: - Debug annotations: Associate string with name or opaque pointer to your own data with every allocation. - JSON dump: Obtain a string in JSON format with detailed map of internal state, including list of allocations and gaps between them. - Convert this JSON dump into a picture to visualize your memory. See [tools/VmaDumpVis](tools/VmaDumpVis/README.md). +- Margins: Enable validation of a magic number before and after every allocation to detect out-of-bounds memory corruption. # Prerequisites diff --git a/docs/gfx/Margins_1.png b/docs/gfx/Margins_1.png new file mode 100644 index 0000000..8c9e184 Binary files /dev/null and b/docs/gfx/Margins_1.png differ diff --git a/docs/gfx/Margins_2.png b/docs/gfx/Margins_2.png new file mode 100644 index 0000000..6f75877 Binary files /dev/null and b/docs/gfx/Margins_2.png differ diff --git a/docs/html/corruption_detection.html b/docs/html/corruption_detection.html new file mode 100644 index 0000000..d05cdb9 --- /dev/null +++ b/docs/html/corruption_detection.html @@ -0,0 +1,100 @@ +Vulkan Memory Allocator: Corruption detection
Corruption detection
If you suspect a bug caused by memory being overwritten out of bounds of an allocation, you can use the debug features of this library to verify this.

Margins

By default, allocations are laid out in memory blocks next to each other if possible (considering required alignment, bufferImageGranularity, and nonCoherentAtomSize).

(Image: Allocations without margin)

Define macro VMA_DEBUG_MARGIN to some non-zero value (e.g. 16) to enforce the specified number of bytes as a margin before and after every allocation:
#define VMA_DEBUG_MARGIN 16
#include "vk_mem_alloc.h"
(Image: Allocations with margin)
If your bug goes away after enabling margins, it may be caused by memory being overwritten outside of allocation boundaries. This is not 100% certain, though: the change in application behavior may also be caused by a different order and distribution of allocations across memory blocks after margins are applied.

The margin is also applied before the first and after the last allocation in a block, but it occurs only once between two adjacent allocations.

The margin is applied only to allocations made out of memory blocks, not to dedicated allocations, which have their own memory block of a specific size. It is thus not applied to allocations made using the VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag or to those automatically placed in dedicated allocations, e.g. due to their large size or as recommended by the VK_KHR_dedicated_allocation extension.
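For illustration, here is a minimal sketch of an allocation setup that receives no margin because it requests dedicated memory (the usage value chosen here is arbitrary):

// This allocation gets its own VkDeviceMemory block, so no margin is
// added around it even when VMA_DEBUG_MARGIN is non-zero.
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;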

+

Margins appear in the JSON dump as part of free space.

Note that enabling margins increases memory usage and fragmentation.
Corruption detection

You can additionally define the macro VMA_DEBUG_DETECT_CORRUPTION to 1 to enable validation of the contents of the margins:
#define VMA_DEBUG_MARGIN 16
#define VMA_DEBUG_DETECT_CORRUPTION 1
#include "vk_mem_alloc.h"

When this feature is enabled, the number of bytes specified as VMA_DEBUG_MARGIN (it must be a multiple of 4) before and after every allocation is filled with a magic number. This idea is also known as a "canary". Memory is automatically mapped and unmapped if necessary.
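Conceptually, the validation amounts to the following sketch (an illustration of the canary idea only, not the library's actual internal code):

// Illustrative sketch only - not the library's internals.
// Verifies that every 32-bit word of a margin still holds the magic number.
static bool MarginIsIntact(const uint32_t* pMargin, size_t marginBytes, uint32_t magic)
{
    for(size_t i = 0; i < marginBytes / sizeof(uint32_t); ++i)
    {
        if(pMargin[i] != magic)
        {
            return false; // Memory around the allocation was overwritten.
        }
    }
    return true;
}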

+

This number is validated automatically when the allocation is destroyed. If it is not equal to the expected value, VMA_ASSERT() is executed. This clearly means that either the CPU or the GPU has overwritten the memory outside the boundaries of the allocation, which indicates a serious bug.

+

You can also explicitly request checking the margins of all allocations in all memory blocks that belong to specified memory types by using the function vmaCheckCorruption(), or in memory blocks that belong to a specified custom pool by using the function vmaCheckPoolCorruption().

+

Margin validation (corruption detection) works only for memory types that are HOST_VISIBLE and HOST_COHERENT.
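For example, the checks might be invoked like this (a sketch assuming allocator and pool are valid handles created elsewhere):

// Check all memory types; only those with corruption detection
// enabled are actually inspected.
VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);

// Check a single custom pool.
res = vmaCheckPoolCorruption(allocator, pool);

// VK_ERROR_VALIDATION_FAILED_EXT means corruption was found.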

+
+ + + + diff --git a/docs/html/globals.html b/docs/html/globals.html index f53f299..08a3076 100644 --- a/docs/html/globals.html +++ b/docs/html/globals.html @@ -173,6 +173,12 @@ $(function() {
  • vmaCalculateStats() : vk_mem_alloc.h
  • +
  • vmaCheckCorruption() +: vk_mem_alloc.h +
  • +
  • vmaCheckPoolCorruption() +: vk_mem_alloc.h +
  • vmaCreateAllocator() : vk_mem_alloc.h
  • diff --git a/docs/html/globals_func.html b/docs/html/globals_func.html index 798f803..a31da7d 100644 --- a/docs/html/globals_func.html +++ b/docs/html/globals_func.html @@ -82,6 +82,12 @@ $(function() {
  • vmaCalculateStats() : vk_mem_alloc.h
  • +
  • vmaCheckCorruption() +: vk_mem_alloc.h +
  • +
  • vmaCheckPoolCorruption() +: vk_mem_alloc.h +
  • vmaCreateAllocator() : vk_mem_alloc.h
  • diff --git a/docs/html/index.html b/docs/html/index.html index 0df3adf..773bd84 100644 --- a/docs/html/index.html +++ b/docs/html/index.html @@ -62,7 +62,7 @@ $(function() {
    Vulkan Memory Allocator
    -

    Version 2.1.0-alpha.1 (2018-06-04)

    +

    Version 2.1.0-alpha.3 (2018-06-11)

    Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
    License: MIT

    Documentation of all members: vk_mem_alloc.h

    @@ -106,6 +106,11 @@ Table of contents
  • Allocation names
  • +
  • Corruption detection +
  • Recommended usage patterns
      diff --git a/docs/html/search/all_2.js b/docs/html/search/all_2.js index e5f807d..5289cc4 100644 --- a/docs/html/search/all_2.js +++ b/docs/html/search/all_2.js @@ -2,5 +2,6 @@ var searchData= [ ['choosing_20memory_20type',['Choosing memory type',['../choosing_memory_type.html',1,'index']]], ['configuration',['Configuration',['../configuration.html',1,'index']]], + ['corruption_20detection',['Corruption detection',['../corruption_detection.html',1,'index']]], ['custom_20memory_20pools',['Custom memory pools',['../custom_memory_pools.html',1,'index']]] ]; diff --git a/docs/html/search/all_f.js b/docs/html/search/all_f.js index c9b3721..a80478e 100644 --- a/docs/html/search/all_f.js +++ b/docs/html/search/all_f.js @@ -55,6 +55,8 @@ var searchData= ['vmabindimagememory',['vmaBindImageMemory',['../vk__mem__alloc_8h.html#a3d3ca45799923aa5d138e9e5f9eb2da5',1,'vk_mem_alloc.h']]], ['vmabuildstatsstring',['vmaBuildStatsString',['../vk__mem__alloc_8h.html#aa4fee7eb5253377599ef4fd38c93c2a0',1,'vk_mem_alloc.h']]], ['vmacalculatestats',['vmaCalculateStats',['../vk__mem__alloc_8h.html#a333b61c1788cb23559177531e6a93ca3',1,'vk_mem_alloc.h']]], + ['vmacheckcorruption',['vmaCheckCorruption',['../vk__mem__alloc_8h.html#a49329a7f030dafcf82f7b73334c22e98',1,'vk_mem_alloc.h']]], + ['vmacheckpoolcorruption',['vmaCheckPoolCorruption',['../vk__mem__alloc_8h.html#ad535935619c7a549bf837e1bb0068f89',1,'vk_mem_alloc.h']]], ['vmacreateallocator',['vmaCreateAllocator',['../vk__mem__alloc_8h.html#a200692051ddb34240248234f5f4c17bb',1,'vk_mem_alloc.h']]], ['vmacreatebuffer',['vmaCreateBuffer',['../vk__mem__alloc_8h.html#ac72ee55598617e8eecca384e746bab51',1,'vk_mem_alloc.h']]], ['vmacreateimage',['vmaCreateImage',['../vk__mem__alloc_8h.html#a02a94f25679275851a53e82eacbcfc73',1,'vk_mem_alloc.h']]], diff --git a/docs/html/search/functions_0.js b/docs/html/search/functions_0.js index 2b8caad..0ab7deb 100644 --- a/docs/html/search/functions_0.js +++ b/docs/html/search/functions_0.js @@ -7,6 +7,8 @@ var searchData= ['vmabindimagememory',['vmaBindImageMemory',['../vk__mem__alloc_8h.html#a3d3ca45799923aa5d138e9e5f9eb2da5',1,'vk_mem_alloc.h']]], ['vmabuildstatsstring',['vmaBuildStatsString',['../vk__mem__alloc_8h.html#aa4fee7eb5253377599ef4fd38c93c2a0',1,'vk_mem_alloc.h']]], ['vmacalculatestats',['vmaCalculateStats',['../vk__mem__alloc_8h.html#a333b61c1788cb23559177531e6a93ca3',1,'vk_mem_alloc.h']]], + ['vmacheckcorruption',['vmaCheckCorruption',['../vk__mem__alloc_8h.html#a49329a7f030dafcf82f7b73334c22e98',1,'vk_mem_alloc.h']]], + ['vmacheckpoolcorruption',['vmaCheckPoolCorruption',['../vk__mem__alloc_8h.html#ad535935619c7a549bf837e1bb0068f89',1,'vk_mem_alloc.h']]], ['vmacreateallocator',['vmaCreateAllocator',['../vk__mem__alloc_8h.html#a200692051ddb34240248234f5f4c17bb',1,'vk_mem_alloc.h']]], ['vmacreatebuffer',['vmaCreateBuffer',['../vk__mem__alloc_8h.html#ac72ee55598617e8eecca384e746bab51',1,'vk_mem_alloc.h']]], ['vmacreateimage',['vmaCreateImage',['../vk__mem__alloc_8h.html#a02a94f25679275851a53e82eacbcfc73',1,'vk_mem_alloc.h']]], diff --git a/docs/html/search/pages_1.js b/docs/html/search/pages_1.js index e5f807d..5289cc4 100644 --- a/docs/html/search/pages_1.js +++ b/docs/html/search/pages_1.js @@ -2,5 +2,6 @@ var searchData= [ ['choosing_20memory_20type',['Choosing memory type',['../choosing_memory_type.html',1,'index']]], ['configuration',['Configuration',['../configuration.html',1,'index']]], + ['corruption_20detection',['Corruption detection',['../corruption_detection.html',1,'index']]], 
['custom_20memory_20pools',['Custom memory pools',['../custom_memory_pools.html',1,'index']]] ]; diff --git a/docs/html/vk__mem__alloc_8h.html b/docs/html/vk__mem__alloc_8h.html index 7aa389a..01ecea4 100644 --- a/docs/html/vk__mem__alloc_8h.html +++ b/docs/html/vk__mem__alloc_8h.html @@ -257,6 +257,9 @@ Functions void vmaMakePoolAllocationsLost (VmaAllocator allocator, VmaPool pool, size_t *pLostAllocationCount)  Marks all allocations in given pool as lost if they are not used in current frame or VmaPoolCreateInfo::frameInUseCount back from now. More...
+VkResult vmaCheckPoolCorruption (VmaAllocator allocator, VmaPool pool) + Checks the magic number in margins around all allocations in the given memory pool in search of corruption. More...
      +  VkResult vmaAllocateMemory (VmaAllocator allocator, const VkMemoryRequirements *pVkMemoryRequirements, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)  General purpose memory allocation. More...
        @@ -292,6 +295,9 @@ Functions void vmaInvalidateAllocation (VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)  Invalidates memory of given allocation. More...
+VkResult vmaCheckCorruption (VmaAllocator allocator, uint32_t memoryTypeBits) + Checks the magic number in margins around all allocations in the given memory types (in both default and custom pools) in search of corruption. More...
      +  VkResult vmaDefragment (VmaAllocator allocator, VmaAllocation *pAllocations, size_t allocationCount, VkBool32 *pAllocationsChanged, const VmaDefragmentationInfo *pDefragmentationInfo, VmaDefragmentationStats *pDefragmentationStats)  Compacts memory by moving allocations. More...
        @@ -1105,6 +1111,88 @@ Functions

      Retrieves statistics from current state of the Allocator.

      +
  • +
    + +

    ◆ vmaCheckCorruption()

    VkResult vmaCheckCorruption (VmaAllocator allocator,
    uint32_t memoryTypeBits 
    )
Checks the magic number in margins around all allocations in the given memory types (in both default and custom pools) in search of corruption.

Parameters
memoryTypeBits - Bit mask, where each set bit means that a memory type with that index should be checked.

Corruption detection is enabled only when the VMA_DEBUG_DETECT_CORRUPTION macro is defined to a nonzero value, VMA_DEBUG_MARGIN is defined to a nonzero value, and only for memory types that are HOST_VISIBLE and HOST_COHERENT. For more information, see Corruption detection.

Possible return values:
• VK_ERROR_FEATURE_NOT_PRESENT - corruption detection is not enabled for any of the specified memory types.
• VK_SUCCESS - corruption detection has been performed and succeeded.
• VK_ERROR_VALIDATION_FAILED_EXT - corruption detection has been performed and found memory corruption around one of the allocations. VMA_ASSERT is also fired in that case.
• Other value: an error returned by Vulkan, e.g. a memory mapping failure.
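A sketch of handling these return values (assuming a valid allocator; the mask 0x3, selecting memory types 0 and 1, is arbitrary):

switch(vmaCheckCorruption(allocator, 0x3))
{
case VK_SUCCESS:
    // All checked margins are intact.
    break;
case VK_ERROR_FEATURE_NOT_PRESENT:
    // Corruption detection is not enabled for memory types 0 and 1.
    break;
case VK_ERROR_VALIDATION_FAILED_EXT:
    // Corruption was found; VMA_ASSERT has already fired.
    break;
default:
    // Another Vulkan error, e.g. a memory mapping failure.
    break;
}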

    ◆ vmaCheckPoolCorruption()

    VkResult vmaCheckPoolCorruption (VmaAllocator allocator,
    VmaPool pool 
    )
Checks the magic number in margins around all allocations in the given memory pool in search of corruption.

Corruption detection is enabled only when the VMA_DEBUG_DETECT_CORRUPTION macro is defined to a nonzero value, VMA_DEBUG_MARGIN is defined to a nonzero value, and the pool is created in a memory type that is HOST_VISIBLE and HOST_COHERENT. For more information, see Corruption detection.

Possible return values:
• VK_ERROR_FEATURE_NOT_PRESENT - corruption detection is not enabled for the specified pool.
• VK_SUCCESS - corruption detection has been performed and succeeded.
• VK_ERROR_VALIDATION_FAILED_EXT - corruption detection has been performed and found memory corruption around one of the allocations. VMA_ASSERT is also fired in that case.
• Other value: an error returned by Vulkan, e.g. a memory mapping failure.
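For example, a custom pool might be validated periodically in debug builds (a sketch; myPool is a hypothetical VmaPool created earlier):

// Validate all margins in the pool, e.g. once per frame.
VkResult res = vmaCheckPoolCorruption(allocator, myPool);
if(res == VK_ERROR_VALIDATION_FAILED_EXT)
{
    // Corruption was found around an allocation in this pool.
}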
    +
    diff --git a/docs/html/vk__mem__alloc_8h_source.html b/docs/html/vk__mem__alloc_8h_source.html index 5d35609..1d913f3 100644 --- a/docs/html/vk__mem__alloc_8h_source.html +++ b/docs/html/vk__mem__alloc_8h_source.html @@ -62,164 +62,166 @@ $(function() {
    vk_mem_alloc.h
    -Go to the documentation of this file.
    1 //
    2 // Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
    3 //
    4 // Permission is hereby granted, free of charge, to any person obtaining a copy
    5 // of this software and associated documentation files (the "Software"), to deal
    6 // in the Software without restriction, including without limitation the rights
    7 // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    8 // copies of the Software, and to permit persons to whom the Software is
    9 // furnished to do so, subject to the following conditions:
    10 //
    11 // The above copyright notice and this permission notice shall be included in
    12 // all copies or substantial portions of the Software.
    13 //
    14 // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    15 // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    16 // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    17 // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    18 // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    19 // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    20 // THE SOFTWARE.
    21 //
    22 
    23 #ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
    24 #define AMD_VULKAN_MEMORY_ALLOCATOR_H
    25 
    26 #ifdef __cplusplus
    27 extern "C" {
    28 #endif
    29 
    1084 #include <vulkan/vulkan.h>
    1085 
    1086 #if !defined(VMA_DEDICATED_ALLOCATION)
    1087  #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
    1088  #define VMA_DEDICATED_ALLOCATION 1
    1089  #else
    1090  #define VMA_DEDICATED_ALLOCATION 0
    1091  #endif
    1092 #endif
    1093 
    1103 VK_DEFINE_HANDLE(VmaAllocator)
    1104 
    1105 typedef void (VKAPI_PTR *PFN_vmaAllocateDeviceMemoryFunction)(
    1107  VmaAllocator allocator,
    1108  uint32_t memoryType,
    1109  VkDeviceMemory memory,
    1110  VkDeviceSize size);
    1112 typedef void (VKAPI_PTR *PFN_vmaFreeDeviceMemoryFunction)(
    1113  VmaAllocator allocator,
    1114  uint32_t memoryType,
    1115  VkDeviceMemory memory,
    1116  VkDeviceSize size);
    1117 
    1131 
    1161 
    1164 typedef VkFlags VmaAllocatorCreateFlags;
    1165 
    1170 typedef struct VmaVulkanFunctions {
    1171  PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
    1172  PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
    1173  PFN_vkAllocateMemory vkAllocateMemory;
    1174  PFN_vkFreeMemory vkFreeMemory;
    1175  PFN_vkMapMemory vkMapMemory;
    1176  PFN_vkUnmapMemory vkUnmapMemory;
    1177  PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
    1178  PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
    1179  PFN_vkBindBufferMemory vkBindBufferMemory;
    1180  PFN_vkBindImageMemory vkBindImageMemory;
    1181  PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
    1182  PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
    1183  PFN_vkCreateBuffer vkCreateBuffer;
    1184  PFN_vkDestroyBuffer vkDestroyBuffer;
    1185  PFN_vkCreateImage vkCreateImage;
    1186  PFN_vkDestroyImage vkDestroyImage;
    1187 #if VMA_DEDICATED_ALLOCATION
    1188  PFN_vkGetBufferMemoryRequirements2KHR vkGetBufferMemoryRequirements2KHR;
    1189  PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
    1190 #endif
    1192 
    1195 {
    1197  VmaAllocatorCreateFlags flags;
    1199 
    1200  VkPhysicalDevice physicalDevice;
    1202 
    1203  VkDevice device;
    1205 
    1208 
    1209  const VkAllocationCallbacks* pAllocationCallbacks;
    1211 
    1250  const VkDeviceSize* pHeapSizeLimit;
    1264 
    1266 VkResult vmaCreateAllocator(
    1267  const VmaAllocatorCreateInfo* pCreateInfo,
    1268  VmaAllocator* pAllocator);
    1269 
    1271 void vmaDestroyAllocator(
    1272  VmaAllocator allocator);
    1273 
    1279  VmaAllocator allocator,
    1280  const VkPhysicalDeviceProperties** ppPhysicalDeviceProperties);
    1281 
    1287  VmaAllocator allocator,
    1288  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties);
    1289 
    1297  VmaAllocator allocator,
    1298  uint32_t memoryTypeIndex,
    1299  VkMemoryPropertyFlags* pFlags);
    1300 
    1310  VmaAllocator allocator,
    1311  uint32_t frameIndex);
    1312 
    1315 typedef struct VmaStatInfo
    1316 {
    1318  uint32_t blockCount;
    1324  VkDeviceSize usedBytes;
    1326  VkDeviceSize unusedBytes;
    1327  VkDeviceSize allocationSizeMin, allocationSizeAvg, allocationSizeMax;
    1328  VkDeviceSize unusedRangeSizeMin, unusedRangeSizeAvg, unusedRangeSizeMax;
    1329 } VmaStatInfo;
    1330 
    1332 typedef struct VmaStats
    1333 {
    1334  VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES];
    1335  VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS];
    1337 } VmaStats;
    1338 
    1340 void vmaCalculateStats(
    1341  VmaAllocator allocator,
    1342  VmaStats* pStats);
    1343 
    1344 #define VMA_STATS_STRING_ENABLED 1
    1345 
    1346 #if VMA_STATS_STRING_ENABLED
    1347 
    1349 
    1351 void vmaBuildStatsString(
    1352  VmaAllocator allocator,
    1353  char** ppStatsString,
    1354  VkBool32 detailedMap);
    1355 
    1356 void vmaFreeStatsString(
    1357  VmaAllocator allocator,
    1358  char* pStatsString);
    1359 
    1360 #endif // #if VMA_STATS_STRING_ENABLED
    1361 
    1370 VK_DEFINE_HANDLE(VmaPool)
    1371 
    1372 typedef enum VmaMemoryUsage
    1373 {
    1422 } VmaMemoryUsage;
    1423 
    1438 
    1488 
    1492 
    1494 {
    1496  VmaAllocationCreateFlags flags;
    1507  VkMemoryPropertyFlags requiredFlags;
    1512  VkMemoryPropertyFlags preferredFlags;
    1520  uint32_t memoryTypeBits;
    1533  void* pUserData;
    1535 
    1552 VkResult vmaFindMemoryTypeIndex(
    1553  VmaAllocator allocator,
    1554  uint32_t memoryTypeBits,
    1555  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1556  uint32_t* pMemoryTypeIndex);
    1557 
    1571  VmaAllocator allocator,
    1572  const VkBufferCreateInfo* pBufferCreateInfo,
    1573  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1574  uint32_t* pMemoryTypeIndex);
    1575 
    1589  VmaAllocator allocator,
    1590  const VkImageCreateInfo* pImageCreateInfo,
    1591  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1592  uint32_t* pMemoryTypeIndex);
    1593 
    1614 
    1617 typedef VkFlags VmaPoolCreateFlags;
    1618 
    1621 typedef struct VmaPoolCreateInfo {
    1627  VmaPoolCreateFlags flags;
    1632  VkDeviceSize blockSize;
    1661 
    1664 typedef struct VmaPoolStats {
    1667  VkDeviceSize size;
    1670  VkDeviceSize unusedSize;
    1683  VkDeviceSize unusedRangeSizeMax;
    1684 } VmaPoolStats;
    1685 
    1692 VkResult vmaCreatePool(
    1693  VmaAllocator allocator,
    1694  const VmaPoolCreateInfo* pCreateInfo,
    1695  VmaPool* pPool);
    1696 
    1699 void vmaDestroyPool(
    1700  VmaAllocator allocator,
    1701  VmaPool pool);
    1702 
    1709 void vmaGetPoolStats(
    1710  VmaAllocator allocator,
    1711  VmaPool pool,
    1712  VmaPoolStats* pPoolStats);
    1713 
    1721  VmaAllocator allocator,
    1722  VmaPool pool,
    1723  size_t* pLostAllocationCount);
    1724 
    1749 VK_DEFINE_HANDLE(VmaAllocation)
    1750 
    1751 
    1753 typedef struct VmaAllocationInfo {
    1758  uint32_t memoryType;
    1767  VkDeviceMemory deviceMemory;
    1772  VkDeviceSize offset;
    1777  VkDeviceSize size;
    1791  void* pUserData;
    1793 
    1804 VkResult vmaAllocateMemory(
    1805  VmaAllocator allocator,
    1806  const VkMemoryRequirements* pVkMemoryRequirements,
    1807  const VmaAllocationCreateInfo* pCreateInfo,
    1808  VmaAllocation* pAllocation,
    1809  VmaAllocationInfo* pAllocationInfo);
    1810 
    1818  VmaAllocator allocator,
    1819  VkBuffer buffer,
    1820  const VmaAllocationCreateInfo* pCreateInfo,
    1821  VmaAllocation* pAllocation,
    1822  VmaAllocationInfo* pAllocationInfo);
    1823 
    1825 VkResult vmaAllocateMemoryForImage(
    1826  VmaAllocator allocator,
    1827  VkImage image,
    1828  const VmaAllocationCreateInfo* pCreateInfo,
    1829  VmaAllocation* pAllocation,
    1830  VmaAllocationInfo* pAllocationInfo);
    1831 
    1833 void vmaFreeMemory(
    1834  VmaAllocator allocator,
    1835  VmaAllocation allocation);
    1836 
    1854  VmaAllocator allocator,
    1855  VmaAllocation allocation,
    1856  VmaAllocationInfo* pAllocationInfo);
    1857 
    1872 VkBool32 vmaTouchAllocation(
    1873  VmaAllocator allocator,
    1874  VmaAllocation allocation);
    1875 
    1890  VmaAllocator allocator,
    1891  VmaAllocation allocation,
    1892  void* pUserData);
    1893 
    1905  VmaAllocator allocator,
    1906  VmaAllocation* pAllocation);
    1907 
    1942 VkResult vmaMapMemory(
    1943  VmaAllocator allocator,
    1944  VmaAllocation allocation,
    1945  void** ppData);
    1946 
    1951 void vmaUnmapMemory(
    1952  VmaAllocator allocator,
    1953  VmaAllocation allocation);
    1954 
    1967 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    1968 
    1981 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    1982 
    1984 typedef struct VmaDefragmentationInfo {
    1989  VkDeviceSize maxBytesToMove;
    1996 
    1998 typedef struct VmaDefragmentationStats {
    2000  VkDeviceSize bytesMoved;
    2002  VkDeviceSize bytesFreed;
    2008 
    2091 VkResult vmaDefragment(
    2092  VmaAllocator allocator,
    2093  VmaAllocation* pAllocations,
    2094  size_t allocationCount,
    2095  VkBool32* pAllocationsChanged,
    2096  const VmaDefragmentationInfo *pDefragmentationInfo,
    2097  VmaDefragmentationStats* pDefragmentationStats);
    2098 
    2111 VkResult vmaBindBufferMemory(
    2112  VmaAllocator allocator,
    2113  VmaAllocation allocation,
    2114  VkBuffer buffer);
    2115 
    2128 VkResult vmaBindImageMemory(
    2129  VmaAllocator allocator,
    2130  VmaAllocation allocation,
    2131  VkImage image);
    2132 
    2159 VkResult vmaCreateBuffer(
    2160  VmaAllocator allocator,
    2161  const VkBufferCreateInfo* pBufferCreateInfo,
    2162  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2163  VkBuffer* pBuffer,
    2164  VmaAllocation* pAllocation,
    2165  VmaAllocationInfo* pAllocationInfo);
    2166 
    2178 void vmaDestroyBuffer(
    2179  VmaAllocator allocator,
    2180  VkBuffer buffer,
    2181  VmaAllocation allocation);
    2182 
    2184 VkResult vmaCreateImage(
    2185  VmaAllocator allocator,
    2186  const VkImageCreateInfo* pImageCreateInfo,
    2187  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2188  VkImage* pImage,
    2189  VmaAllocation* pAllocation,
    2190  VmaAllocationInfo* pAllocationInfo);
    2191 
    2203 void vmaDestroyImage(
    2204  VmaAllocator allocator,
    2205  VkImage image,
    2206  VmaAllocation allocation);
    2207 
    2208 #ifdef __cplusplus
    2209 }
    2210 #endif
    2211 
    2212 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
    2213 
    2214 // For Visual Studio IntelliSense.
    2215 #if defined(__cplusplus) && defined(__INTELLISENSE__)
    2216 #define VMA_IMPLEMENTATION
    2217 #endif
    2218 
    2219 #ifdef VMA_IMPLEMENTATION
    2220 #undef VMA_IMPLEMENTATION
    2221 
    2222 #include <cstdint>
    2223 #include <cstdlib>
    2224 #include <cstring>
    2225 
    2226 /*******************************************************************************
    2227 CONFIGURATION SECTION
    2228 
    2229 Define some of these macros before each #include of this header or change them
2230 here if you need other than the default behavior depending on your environment.
    2231 */
    2232 
    2233 /*
    2234 Define this macro to 1 to make the library fetch pointers to Vulkan functions
    2235 internally, like:
    2236 
    2237  vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    2238 
2239 Define to 0 if you are going to provide your own pointers to Vulkan functions via
    2240 VmaAllocatorCreateInfo::pVulkanFunctions.
    2241 */
    2242 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
    2243 #define VMA_STATIC_VULKAN_FUNCTIONS 1
    2244 #endif
    2245 
    2246 // Define this macro to 1 to make the library use STL containers instead of its own implementation.
    2247 //#define VMA_USE_STL_CONTAINERS 1
    2248 
2249 /* Set this macro to 1 to make the library include and use STL containers:
    2250 std::pair, std::vector, std::list, std::unordered_map.
    2251 
2252 Set it to 0 or undefined to make the library use its own implementation of
    2253 the containers.
    2254 */
    2255 #if VMA_USE_STL_CONTAINERS
    2256  #define VMA_USE_STL_VECTOR 1
    2257  #define VMA_USE_STL_UNORDERED_MAP 1
    2258  #define VMA_USE_STL_LIST 1
    2259 #endif
    2260 
    2261 #if VMA_USE_STL_VECTOR
    2262  #include <vector>
    2263 #endif
    2264 
    2265 #if VMA_USE_STL_UNORDERED_MAP
    2266  #include <unordered_map>
    2267 #endif
    2268 
    2269 #if VMA_USE_STL_LIST
    2270  #include <list>
    2271 #endif
    2272 
    2273 /*
    2274 Following headers are used in this CONFIGURATION section only, so feel free to
    2275 remove them if not needed.
    2276 */
    2277 #include <cassert> // for assert
    2278 #include <algorithm> // for min, max
    2279 #include <mutex> // for std::mutex
    2280 #include <atomic> // for std::atomic
    2281 
    2282 #ifndef VMA_NULL
    2283  // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
    2284  #define VMA_NULL nullptr
    2285 #endif
    2286 
    2287 #if defined(__APPLE__) || defined(__ANDROID__)
    2288 #include <cstdlib>
    2289 void *aligned_alloc(size_t alignment, size_t size)
    2290 {
    2291  // alignment must be >= sizeof(void*)
    2292  if(alignment < sizeof(void*))
    2293  {
    2294  alignment = sizeof(void*);
    2295  }
    2296 
    2297  void *pointer;
    2298  if(posix_memalign(&pointer, alignment, size) == 0)
    2299  return pointer;
    2300  return VMA_NULL;
    2301 }
    2302 #endif
    2303 
    2304 // If your compiler is not compatible with C++11 and definition of
2305 // aligned_alloc() function is missing, uncommenting the following line may help:
    2306 
    2307 //#include <malloc.h>
    2308 
    2309 // Normal assert to check for programmer's errors, especially in Debug configuration.
    2310 #ifndef VMA_ASSERT
    2311  #ifdef _DEBUG
    2312  #define VMA_ASSERT(expr) assert(expr)
    2313  #else
    2314  #define VMA_ASSERT(expr)
    2315  #endif
    2316 #endif
    2317 
    2318 // Assert that will be called very often, like inside data structures e.g. operator[].
2319 // Making it non-empty can make the program slow.
    2320 #ifndef VMA_HEAVY_ASSERT
    2321  #ifdef _DEBUG
    2322  #define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
    2323  #else
    2324  #define VMA_HEAVY_ASSERT(expr)
    2325  #endif
    2326 #endif
    2327 
    2328 #ifndef VMA_ALIGN_OF
    2329  #define VMA_ALIGN_OF(type) (__alignof(type))
    2330 #endif
    2331 
    2332 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
    2333  #if defined(_WIN32)
    2334  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (_aligned_malloc((size), (alignment)))
    2335  #else
    2336  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (aligned_alloc((alignment), (size) ))
    2337  #endif
    2338 #endif
    2339 
    2340 #ifndef VMA_SYSTEM_FREE
    2341  #if defined(_WIN32)
    2342  #define VMA_SYSTEM_FREE(ptr) _aligned_free(ptr)
    2343  #else
    2344  #define VMA_SYSTEM_FREE(ptr) free(ptr)
    2345  #endif
    2346 #endif
    2347 
    2348 #ifndef VMA_MIN
    2349  #define VMA_MIN(v1, v2) (std::min((v1), (v2)))
    2350 #endif
    2351 
    2352 #ifndef VMA_MAX
    2353  #define VMA_MAX(v1, v2) (std::max((v1), (v2)))
    2354 #endif
    2355 
    2356 #ifndef VMA_SWAP
    2357  #define VMA_SWAP(v1, v2) std::swap((v1), (v2))
    2358 #endif
    2359 
    2360 #ifndef VMA_SORT
    2361  #define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
    2362 #endif
    2363 
    2364 #ifndef VMA_DEBUG_LOG
    2365  #define VMA_DEBUG_LOG(format, ...)
    2366  /*
    2367  #define VMA_DEBUG_LOG(format, ...) do { \
    2368  printf(format, __VA_ARGS__); \
    2369  printf("\n"); \
    2370  } while(false)
    2371  */
    2372 #endif
    2373 
    2374 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
    2375 #if VMA_STATS_STRING_ENABLED
    2376  static inline void VmaUint32ToStr(char* outStr, size_t strLen, uint32_t num)
    2377  {
    2378  snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
    2379  }
    2380  static inline void VmaUint64ToStr(char* outStr, size_t strLen, uint64_t num)
    2381  {
    2382  snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
    2383  }
    2384  static inline void VmaPtrToStr(char* outStr, size_t strLen, const void* ptr)
    2385  {
    2386  snprintf(outStr, strLen, "%p", ptr);
    2387  }
    2388 #endif
    2389 
    2390 #ifndef VMA_MUTEX
    2391  class VmaMutex
    2392  {
    2393  public:
    2394  VmaMutex() { }
    2395  ~VmaMutex() { }
    2396  void Lock() { m_Mutex.lock(); }
    2397  void Unlock() { m_Mutex.unlock(); }
    2398  private:
    2399  std::mutex m_Mutex;
    2400  };
    2401  #define VMA_MUTEX VmaMutex
    2402 #endif
    2403 
    2404 /*
    2405 If providing your own implementation, you need to implement a subset of std::atomic:
    2406 
    2407 - Constructor(uint32_t desired)
    2408 - uint32_t load() const
    2409 - void store(uint32_t desired)
    2410 - bool compare_exchange_weak(uint32_t& expected, uint32_t desired)
    2411 */
    2412 #ifndef VMA_ATOMIC_UINT32
    2413  #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
    2414 #endif
    2415 
    2416 #ifndef VMA_BEST_FIT
    2417 
    2429  #define VMA_BEST_FIT (1)
    2430 #endif
    2431 
    2432 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
    2433 
    2437  #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
    2438 #endif
    2439 
    2440 #ifndef VMA_DEBUG_ALIGNMENT
    2441 
    2445  #define VMA_DEBUG_ALIGNMENT (1)
    2446 #endif
    2447 
    2448 #ifndef VMA_DEBUG_MARGIN
    2449 
    2453  #define VMA_DEBUG_MARGIN (0)
    2454 #endif
    2455 
    2456 #ifndef VMA_DEBUG_GLOBAL_MUTEX
    2457 
    2461  #define VMA_DEBUG_GLOBAL_MUTEX (0)
    2462 #endif
    2463 
    2464 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
    2465 
    2469  #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
    2470 #endif
    2471 
    2472 #ifndef VMA_SMALL_HEAP_MAX_SIZE
    2473  #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
    2475 #endif
    2476 
    2477 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
    2478  #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
    2480 #endif
    2481 
    2482 #ifndef VMA_CLASS_NO_COPY
    2483  #define VMA_CLASS_NO_COPY(className) \
    2484  private: \
    2485  className(const className&) = delete; \
    2486  className& operator=(const className&) = delete;
    2487 #endif
    2488 
    2489 static const uint32_t VMA_FRAME_INDEX_LOST = UINT32_MAX;
    2490 
    2491 /*******************************************************************************
    2492 END OF CONFIGURATION
    2493 */
    2494 
    2495 static VkAllocationCallbacks VmaEmptyAllocationCallbacks = {
    2496  VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
    2497 
    2498 // Returns number of bits set to 1 in (v).
    2499 static inline uint32_t VmaCountBitsSet(uint32_t v)
    2500 {
    2501  uint32_t c = v - ((v >> 1) & 0x55555555);
    2502  c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
    2503  c = ((c >> 4) + c) & 0x0F0F0F0F;
    2504  c = ((c >> 8) + c) & 0x00FF00FF;
    2505  c = ((c >> 16) + c) & 0x0000FFFF;
    2506  return c;
    2507 }
    2508 
2509 // Aligns given value up to the nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
    2510 // Use types like uint32_t, uint64_t as T.
    2511 template <typename T>
    2512 static inline T VmaAlignUp(T val, T align)
    2513 {
    2514  return (val + align - 1) / align * align;
    2515 }
2516 // Aligns given value down to the nearest multiple of align value. For example: VmaAlignDown(11, 8) = 8.
    2517 // Use types like uint32_t, uint64_t as T.
    2518 template <typename T>
    2519 static inline T VmaAlignDown(T val, T align)
    2520 {
    2521  return val / align * align;
    2522 }
    2523 
    2524 // Division with mathematical rounding to nearest number.
    2525 template <typename T>
    2526 inline T VmaRoundDiv(T x, T y)
    2527 {
    2528  return (x + (y / (T)2)) / y;
    2529 }
    2530 
    2531 #ifndef VMA_SORT
    2532 
    2533 template<typename Iterator, typename Compare>
    2534 Iterator VmaQuickSortPartition(Iterator beg, Iterator end, Compare cmp)
    2535 {
    2536  Iterator centerValue = end; --centerValue;
    2537  Iterator insertIndex = beg;
    2538  for(Iterator memTypeIndex = beg; memTypeIndex < centerValue; ++memTypeIndex)
    2539  {
    2540  if(cmp(*memTypeIndex, *centerValue))
    2541  {
    2542  if(insertIndex != memTypeIndex)
    2543  {
    2544  VMA_SWAP(*memTypeIndex, *insertIndex);
    2545  }
    2546  ++insertIndex;
    2547  }
    2548  }
    2549  if(insertIndex != centerValue)
    2550  {
    2551  VMA_SWAP(*insertIndex, *centerValue);
    2552  }
    2553  return insertIndex;
    2554 }
    2555 
    2556 template<typename Iterator, typename Compare>
    2557 void VmaQuickSort(Iterator beg, Iterator end, Compare cmp)
    2558 {
    2559  if(beg < end)
    2560  {
    2561  Iterator it = VmaQuickSortPartition<Iterator, Compare>(beg, end, cmp);
    2562  VmaQuickSort<Iterator, Compare>(beg, it, cmp);
    2563  VmaQuickSort<Iterator, Compare>(it + 1, end, cmp);
    2564  }
    2565 }
    2566 
    2567 #define VMA_SORT(beg, end, cmp) VmaQuickSort(beg, end, cmp)
    2568 
    2569 #endif // #ifndef VMA_SORT
    2570 
    2571 /*
    2572 Returns true if two memory blocks occupy overlapping pages.
2573 ResourceA must be at a lower memory offset than ResourceB.
    2574 
    2575 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
    2576 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
    2577 */
    2578 static inline bool VmaBlocksOnSamePage(
    2579  VkDeviceSize resourceAOffset,
    2580  VkDeviceSize resourceASize,
    2581  VkDeviceSize resourceBOffset,
    2582  VkDeviceSize pageSize)
    2583 {
    2584  VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
    2585  VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
    2586  VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
    2587  VkDeviceSize resourceBStart = resourceBOffset;
    2588  VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
    2589  return resourceAEndPage == resourceBStartPage;
    2590 }
    2591 
    2592 enum VmaSuballocationType
    2593 {
    2594  VMA_SUBALLOCATION_TYPE_FREE = 0,
    2595  VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
    2596  VMA_SUBALLOCATION_TYPE_BUFFER = 2,
    2597  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
    2598  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
    2599  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
    2600  VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
    2601 };
    2602 
    2603 /*
    2604 Returns true if given suballocation types could conflict and must respect
    2605 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is buffer
    2606 or linear image and another one is optimal image. If type is unknown, behave
    2607 conservatively.
    2608 */
    2609 static inline bool VmaIsBufferImageGranularityConflict(
    2610  VmaSuballocationType suballocType1,
    2611  VmaSuballocationType suballocType2)
    2612 {
    2613  if(suballocType1 > suballocType2)
    2614  {
    2615  VMA_SWAP(suballocType1, suballocType2);
    2616  }
    2617 
    2618  switch(suballocType1)
    2619  {
    2620  case VMA_SUBALLOCATION_TYPE_FREE:
    2621  return false;
    2622  case VMA_SUBALLOCATION_TYPE_UNKNOWN:
    2623  return true;
    2624  case VMA_SUBALLOCATION_TYPE_BUFFER:
    2625  return
    2626  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2627  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2628  case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
    2629  return
    2630  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2631  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
    2632  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2633  case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
    2634  return
    2635  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2636  case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
    2637  return false;
    2638  default:
    2639  VMA_ASSERT(0);
    2640  return true;
    2641  }
    2642 }
    2643 
    2644 // Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
    2645 struct VmaMutexLock
    2646 {
    2647  VMA_CLASS_NO_COPY(VmaMutexLock)
    2648 public:
    2649  VmaMutexLock(VMA_MUTEX& mutex, bool useMutex) :
    2650  m_pMutex(useMutex ? &mutex : VMA_NULL)
    2651  {
    2652  if(m_pMutex)
    2653  {
    2654  m_pMutex->Lock();
    2655  }
    2656  }
    2657 
    2658  ~VmaMutexLock()
    2659  {
    2660  if(m_pMutex)
    2661  {
    2662  m_pMutex->Unlock();
    2663  }
    2664  }
    2665 
    2666 private:
    2667  VMA_MUTEX* m_pMutex;
    2668 };
    2669 
    2670 #if VMA_DEBUG_GLOBAL_MUTEX
    2671  static VMA_MUTEX gDebugGlobalMutex;
    2672  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
    2673 #else
    2674  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
    2675 #endif
    2676 
    2677 // Minimum size of a free suballocation to register it in the free suballocation collection.
    2678 static const VkDeviceSize VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER = 16;
    2679 
    2680 /*
    2681 Performs binary search and returns iterator to first element that is greater or
    2682 equal to (key), according to comparison (cmp).
    2683 
    2684 Cmp should return true if first argument is less than second argument.
    2685 
    2686 Returned value is the found element, if present in the collection or place where
    2687 new element with value (key) should be inserted.
    2688 */
    2689 template <typename IterT, typename KeyT, typename CmpT>
    2690 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT &key, CmpT cmp)
    2691 {
    2692  size_t down = 0, up = (end - beg);
    2693  while(down < up)
    2694  {
    2695  const size_t mid = (down + up) / 2;
    2696  if(cmp(*(beg+mid), key))
    2697  {
    2698  down = mid + 1;
    2699  }
    2700  else
    2701  {
    2702  up = mid;
    2703  }
    2704  }
    2705  return beg + down;
    2706 }
    2707 
    2709 // Memory allocation
    2710 
    2711 static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
    2712 {
    2713  if((pAllocationCallbacks != VMA_NULL) &&
    2714  (pAllocationCallbacks->pfnAllocation != VMA_NULL))
    2715  {
    2716  return (*pAllocationCallbacks->pfnAllocation)(
    2717  pAllocationCallbacks->pUserData,
    2718  size,
    2719  alignment,
    2720  VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    2721  }
    2722  else
    2723  {
    2724  return VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
    2725  }
    2726 }
    2727 
    2728 static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
    2729 {
    2730  if((pAllocationCallbacks != VMA_NULL) &&
    2731  (pAllocationCallbacks->pfnFree != VMA_NULL))
    2732  {
    2733  (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
    2734  }
    2735  else
    2736  {
    2737  VMA_SYSTEM_FREE(ptr);
    2738  }
    2739 }
    2740 
    2741 template<typename T>
    2742 static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
    2743 {
    2744  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
    2745 }
    2746 
    2747 template<typename T>
    2748 static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
    2749 {
    2750  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
    2751 }
    2752 
    2753 #define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
    2754 
    2755 #define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
    2756 
    2757 template<typename T>
    2758 static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
    2759 {
    2760  ptr->~T();
    2761  VmaFree(pAllocationCallbacks, ptr);
    2762 }
    2763 
    2764 template<typename T>
    2765 static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
    2766 {
    2767  if(ptr != VMA_NULL)
    2768  {
    2769  for(size_t i = count; i--; )
    2770  {
    2771  ptr[i].~T();
    2772  }
    2773  VmaFree(pAllocationCallbacks, ptr);
    2774  }
    2775 }
    2776 
    2777 // STL-compatible allocator.
    2778 template<typename T>
    2779 class VmaStlAllocator
    2780 {
    2781 public:
    2782  const VkAllocationCallbacks* const m_pCallbacks;
    2783  typedef T value_type;
    2784 
    2785  VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) { }
    2786  template<typename U> VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) { }
    2787 
    2788  T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
    2789  void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
    2790 
    2791  template<typename U>
    2792  bool operator==(const VmaStlAllocator<U>& rhs) const
    2793  {
    2794  return m_pCallbacks == rhs.m_pCallbacks;
    2795  }
    2796  template<typename U>
    2797  bool operator!=(const VmaStlAllocator<U>& rhs) const
    2798  {
    2799  return m_pCallbacks != rhs.m_pCallbacks;
    2800  }
    2801 
    2802  VmaStlAllocator& operator=(const VmaStlAllocator& x) = delete;
    2803 };
    2804 
    2805 #if VMA_USE_STL_VECTOR
    2806 
    2807 #define VmaVector std::vector
    2808 
    2809 template<typename T, typename allocatorT>
    2810 static void VmaVectorInsert(std::vector<T, allocatorT>& vec, size_t index, const T& item)
    2811 {
    2812  vec.insert(vec.begin() + index, item);
    2813 }
    2814 
    2815 template<typename T, typename allocatorT>
    2816 static void VmaVectorRemove(std::vector<T, allocatorT>& vec, size_t index)
    2817 {
    2818  vec.erase(vec.begin() + index);
    2819 }
    2820 
    2821 #else // #if VMA_USE_STL_VECTOR
    2822 
    2823 /* Class with interface compatible with subset of std::vector.
    2824 T must be POD because constructors and destructors are not called and memcpy is
    2825 used for these objects. */
    2826 template<typename T, typename AllocatorT>
    2827 class VmaVector
    2828 {
    2829 public:
    2830  typedef T value_type;
    2831 
    2832  VmaVector(const AllocatorT& allocator) :
    2833  m_Allocator(allocator),
    2834  m_pArray(VMA_NULL),
    2835  m_Count(0),
    2836  m_Capacity(0)
    2837  {
    2838  }
    2839 
    2840  VmaVector(size_t count, const AllocatorT& allocator) :
    2841  m_Allocator(allocator),
    2842  m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
    2843  m_Count(count),
    2844  m_Capacity(count)
    2845  {
    2846  }
    2847 
    2848  VmaVector(const VmaVector<T, AllocatorT>& src) :
    2849  m_Allocator(src.m_Allocator),
    2850  m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
    2851  m_Count(src.m_Count),
    2852  m_Capacity(src.m_Count)
    2853  {
    2854  if(m_Count != 0)
    2855  {
    2856  memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
    2857  }
    2858  }
    2859 
    2860  ~VmaVector()
    2861  {
    2862  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    2863  }
    2864 
    2865  VmaVector& operator=(const VmaVector<T, AllocatorT>& rhs)
    2866  {
    2867  if(&rhs != this)
    2868  {
    2869  resize(rhs.m_Count);
    2870  if(m_Count != 0)
    2871  {
    2872  memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
    2873  }
    2874  }
    2875  return *this;
    2876  }
    2877 
    2878  bool empty() const { return m_Count == 0; }
    2879  size_t size() const { return m_Count; }
    2880  T* data() { return m_pArray; }
    2881  const T* data() const { return m_pArray; }
    2882 
    2883  T& operator[](size_t index)
    2884  {
    2885  VMA_HEAVY_ASSERT(index < m_Count);
    2886  return m_pArray[index];
    2887  }
    2888  const T& operator[](size_t index) const
    2889  {
    2890  VMA_HEAVY_ASSERT(index < m_Count);
    2891  return m_pArray[index];
    2892  }
    2893 
    2894  T& front()
    2895  {
    2896  VMA_HEAVY_ASSERT(m_Count > 0);
    2897  return m_pArray[0];
    2898  }
    2899  const T& front() const
    2900  {
    2901  VMA_HEAVY_ASSERT(m_Count > 0);
    2902  return m_pArray[0];
    2903  }
    2904  T& back()
    2905  {
    2906  VMA_HEAVY_ASSERT(m_Count > 0);
    2907  return m_pArray[m_Count - 1];
    2908  }
    2909  const T& back() const
    2910  {
    2911  VMA_HEAVY_ASSERT(m_Count > 0);
    2912  return m_pArray[m_Count - 1];
    2913  }
    2914 
    2915  void reserve(size_t newCapacity, bool freeMemory = false)
    2916  {
    2917  newCapacity = VMA_MAX(newCapacity, m_Count);
    2918 
    2919  if((newCapacity < m_Capacity) && !freeMemory)
    2920  {
    2921  newCapacity = m_Capacity;
    2922  }
    2923 
    2924  if(newCapacity != m_Capacity)
    2925  {
    2926  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator, newCapacity) : VMA_NULL;
    2927  if(m_Count != 0)
    2928  {
    2929  memcpy(newArray, m_pArray, m_Count * sizeof(T));
    2930  }
    2931  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    2932  m_Capacity = newCapacity;
    2933  m_pArray = newArray;
    2934  }
    2935  }
    2936 
    2937  void resize(size_t newCount, bool freeMemory = false)
    2938  {
    2939  size_t newCapacity = m_Capacity;
    2940  if(newCount > m_Capacity)
    2941  {
    2942  newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
    2943  }
    2944  else if(freeMemory)
    2945  {
    2946  newCapacity = newCount;
    2947  }
    2948 
    2949  if(newCapacity != m_Capacity)
    2950  {
    2951  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
    2952  const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
    2953  if(elementsToCopy != 0)
    2954  {
    2955  memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
    2956  }
    2957  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    2958  m_Capacity = newCapacity;
    2959  m_pArray = newArray;
    2960  }
    2961 
    2962  m_Count = newCount;
    2963  }
    2964 
    2965  void clear(bool freeMemory = false)
    2966  {
    2967  resize(0, freeMemory);
    2968  }
    2969 
    2970  void insert(size_t index, const T& src)
    2971  {
    2972  VMA_HEAVY_ASSERT(index <= m_Count);
    2973  const size_t oldCount = size();
    2974  resize(oldCount + 1);
    2975  if(index < oldCount)
    2976  {
    2977  memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
    2978  }
    2979  m_pArray[index] = src;
    2980  }
    2981 
    2982  void remove(size_t index)
    2983  {
    2984  VMA_HEAVY_ASSERT(index < m_Count);
    2985  const size_t oldCount = size();
    2986  if(index < oldCount - 1)
    2987  {
    2988  memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
    2989  }
    2990  resize(oldCount - 1);
    2991  }
    2992 
    2993  void push_back(const T& src)
    2994  {
    2995  const size_t newIndex = size();
    2996  resize(newIndex + 1);
    2997  m_pArray[newIndex] = src;
    2998  }
    2999 
    3000  void pop_back()
    3001  {
    3002  VMA_HEAVY_ASSERT(m_Count > 0);
    3003  resize(size() - 1);
    3004  }
    3005 
    3006  void push_front(const T& src)
    3007  {
    3008  insert(0, src);
    3009  }
    3010 
    3011  void pop_front()
    3012  {
    3013  VMA_HEAVY_ASSERT(m_Count > 0);
    3014  remove(0);
    3015  }
    3016 
    3017  typedef T* iterator;
    3018 
    3019  iterator begin() { return m_pArray; }
    3020  iterator end() { return m_pArray + m_Count; }
    3021 
    3022 private:
    3023  AllocatorT m_Allocator;
    3024  T* m_pArray;
    3025  size_t m_Count;
    3026  size_t m_Capacity;
    3027 };
    3028 
    3029 template<typename T, typename allocatorT>
    3030 static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
    3031 {
    3032  vec.insert(index, item);
    3033 }
    3034 
    3035 template<typename T, typename allocatorT>
    3036 static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
    3037 {
    3038  vec.remove(index);
    3039 }
    3040 
    3041 #endif // #if VMA_USE_STL_VECTOR
    3042 
    3043 template<typename CmpLess, typename VectorT>
    3044 size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
    3045 {
    3046  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3047  vector.data(),
    3048  vector.data() + vector.size(),
    3049  value,
    3050  CmpLess()) - vector.data();
    3051  VmaVectorInsert(vector, indexToInsert, value);
    3052  return indexToInsert;
    3053 }
    3054 
    3055 template<typename CmpLess, typename VectorT>
    3056 bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
    3057 {
    3058  CmpLess comparator;
    3059  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
    3060  vector.begin(),
    3061  vector.end(),
    3062  value,
    3063  comparator);
    3064  if((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
    3065  {
    3066  size_t indexToRemove = it - vector.begin();
    3067  VmaVectorRemove(vector, indexToRemove);
    3068  return true;
    3069  }
    3070  return false;
    3071 }
    3072 
    3073 template<typename CmpLess, typename VectorT>
    3074 size_t VmaVectorFindSorted(const VectorT& vector, const typename VectorT::value_type& value)
    3075 {
    3076  CmpLess comparator;
    3077  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
    3078  vector.data(),
    3079  vector.data() + vector.size(),
    3080  value,
    3081  comparator);
3082  if(it != vector.data() + vector.size() && !comparator(*it, value) && !comparator(value, *it))
3083  {
3084  return it - vector.data();
    3085  }
    3086  else
    3087  {
    3088  return vector.size();
    3089  }
    3090 }
    3091 
    3093 // class VmaPoolAllocator
    3094 
    3095 /*
    3096 Allocator for objects of type T using a list of arrays (pools) to speed up
    3097 allocation. Number of elements that can be allocated is not bounded because
    3098 allocator can create multiple blocks.
    3099 */
    3100 template<typename T>
    3101 class VmaPoolAllocator
    3102 {
    3103  VMA_CLASS_NO_COPY(VmaPoolAllocator)
    3104 public:
    3105  VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock);
    3106  ~VmaPoolAllocator();
    3107  void Clear();
    3108  T* Alloc();
    3109  void Free(T* ptr);
    3110 
    3111 private:
    3112  union Item
    3113  {
    3114  uint32_t NextFreeIndex;
    3115  T Value;
    3116  };
    3117 
    3118  struct ItemBlock
    3119  {
    3120  Item* pItems;
    3121  uint32_t FirstFreeIndex;
    3122  };
    3123 
    3124  const VkAllocationCallbacks* m_pAllocationCallbacks;
    3125  size_t m_ItemsPerBlock;
    3126  VmaVector< ItemBlock, VmaStlAllocator<ItemBlock> > m_ItemBlocks;
    3127 
    3128  ItemBlock& CreateNewBlock();
    3129 };
    3130 
    3131 template<typename T>
    3132 VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock) :
    3133  m_pAllocationCallbacks(pAllocationCallbacks),
    3134  m_ItemsPerBlock(itemsPerBlock),
    3135  m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
    3136 {
    3137  VMA_ASSERT(itemsPerBlock > 0);
    3138 }
    3139 
    3140 template<typename T>
    3141 VmaPoolAllocator<T>::~VmaPoolAllocator()
    3142 {
    3143  Clear();
    3144 }
    3145 
    3146 template<typename T>
    3147 void VmaPoolAllocator<T>::Clear()
    3148 {
    3149  for(size_t i = m_ItemBlocks.size(); i--; )
    3150  vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemsPerBlock);
    3151  m_ItemBlocks.clear();
    3152 }
    3153 
    3154 template<typename T>
    3155 T* VmaPoolAllocator<T>::Alloc()
    3156 {
    3157  for(size_t i = m_ItemBlocks.size(); i--; )
    3158  {
    3159  ItemBlock& block = m_ItemBlocks[i];
    3160  // This block has some free items: Use first one.
    3161  if(block.FirstFreeIndex != UINT32_MAX)
    3162  {
    3163  Item* const pItem = &block.pItems[block.FirstFreeIndex];
    3164  block.FirstFreeIndex = pItem->NextFreeIndex;
    3165  return &pItem->Value;
    3166  }
    3167  }
    3168 
    3169  // No block has free item: Create new one and use it.
    3170  ItemBlock& newBlock = CreateNewBlock();
    3171  Item* const pItem = &newBlock.pItems[0];
    3172  newBlock.FirstFreeIndex = pItem->NextFreeIndex;
    3173  return &pItem->Value;
    3174 }
    3175 
    3176 template<typename T>
    3177 void VmaPoolAllocator<T>::Free(T* ptr)
    3178 {
    3179  // Search all memory blocks to find ptr.
    3180  for(size_t i = 0; i < m_ItemBlocks.size(); ++i)
    3181  {
    3182  ItemBlock& block = m_ItemBlocks[i];
    3183 
    3184  // Casting to union.
    3185  Item* pItemPtr;
    3186  memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
    3187 
    3188  // Check if pItemPtr is in address range of this block.
    3189  if((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + m_ItemsPerBlock))
    3190  {
    3191  const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
    3192  pItemPtr->NextFreeIndex = block.FirstFreeIndex;
    3193  block.FirstFreeIndex = index;
    3194  return;
    3195  }
    3196  }
    3197  VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
    3198 }
    3199 
    3200 template<typename T>
    3201 typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
    3202 {
    3203  ItemBlock newBlock = {
    3204  vma_new_array(m_pAllocationCallbacks, Item, m_ItemsPerBlock), 0 };
    3205 
    3206  m_ItemBlocks.push_back(newBlock);
    3207 
    3208  // Setup singly-linked list of all free items in this block.
    3209  for(uint32_t i = 0; i < m_ItemsPerBlock - 1; ++i)
    3210  newBlock.pItems[i].NextFreeIndex = i + 1;
    3211  newBlock.pItems[m_ItemsPerBlock - 1].NextFreeIndex = UINT32_MAX;
    3212  return m_ItemBlocks.back();
    3213 }
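
// Usage sketch (illustrative only, not part of the library): Alloc() returns
// raw storage - T's constructor is not run here - and Free() threads the slot
// back onto the owning block's free list.
//
// VmaPoolAllocator<uint32_t> pool(VMA_NULL /*pAllocationCallbacks*/, 128);
// uint32_t* p = pool.Alloc(); // O(1): pops the head of a block's free list.
// *p = 42;
// pool.Free(p); // O(block count): finds the owning block, pushes the slot back.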
    3214 
    3215 ////////////////////////////////////////////////////////////////////////////////
    3216 // class VmaRawList, VmaList
    3217 
    3218 #if VMA_USE_STL_LIST
    3219 
    3220 #define VmaList std::list
    3221 
    3222 #else // #if VMA_USE_STL_LIST
    3223 
    3224 template<typename T>
    3225 struct VmaListItem
    3226 {
    3227  VmaListItem* pPrev;
    3228  VmaListItem* pNext;
    3229  T Value;
    3230 };
    3231 
    3232 // Doubly linked list.
    3233 template<typename T>
    3234 class VmaRawList
    3235 {
    3236  VMA_CLASS_NO_COPY(VmaRawList)
    3237 public:
    3238  typedef VmaListItem<T> ItemType;
    3239 
    3240  VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
    3241  ~VmaRawList();
    3242  void Clear();
    3243 
    3244  size_t GetCount() const { return m_Count; }
    3245  bool IsEmpty() const { return m_Count == 0; }
    3246 
    3247  ItemType* Front() { return m_pFront; }
    3248  const ItemType* Front() const { return m_pFront; }
    3249  ItemType* Back() { return m_pBack; }
    3250  const ItemType* Back() const { return m_pBack; }
    3251 
    3252  ItemType* PushBack();
    3253  ItemType* PushFront();
    3254  ItemType* PushBack(const T& value);
    3255  ItemType* PushFront(const T& value);
    3256  void PopBack();
    3257  void PopFront();
    3258 
    3259  // Item can be null - it means PushBack.
    3260  ItemType* InsertBefore(ItemType* pItem);
    3261  // Item can be null - it means PushFront.
    3262  ItemType* InsertAfter(ItemType* pItem);
    3263 
    3264  ItemType* InsertBefore(ItemType* pItem, const T& value);
    3265  ItemType* InsertAfter(ItemType* pItem, const T& value);
    3266 
    3267  void Remove(ItemType* pItem);
    3268 
    3269 private:
    3270  const VkAllocationCallbacks* const m_pAllocationCallbacks;
    3271  VmaPoolAllocator<ItemType> m_ItemAllocator;
    3272  ItemType* m_pFront;
    3273  ItemType* m_pBack;
    3274  size_t m_Count;
    3275 };
    3276 
    3277 template<typename T>
    3278 VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks) :
    3279  m_pAllocationCallbacks(pAllocationCallbacks),
    3280  m_ItemAllocator(pAllocationCallbacks, 128),
    3281  m_pFront(VMA_NULL),
    3282  m_pBack(VMA_NULL),
    3283  m_Count(0)
    3284 {
    3285 }
    3286 
    3287 template<typename T>
    3288 VmaRawList<T>::~VmaRawList()
    3289 {
    3290  // Intentionally not calling Clear, because that would spend unnecessary
    3291  // computation returning all items to m_ItemAllocator as free.
    3292 }
    3293 
    3294 template<typename T>
    3295 void VmaRawList<T>::Clear()
    3296 {
    3297  if(IsEmpty() == false)
    3298  {
    3299  ItemType* pItem = m_pBack;
    3300  while(pItem != VMA_NULL)
    3301  {
    3302  ItemType* const pPrevItem = pItem->pPrev;
    3303  m_ItemAllocator.Free(pItem);
    3304  pItem = pPrevItem;
    3305  }
    3306  m_pFront = VMA_NULL;
    3307  m_pBack = VMA_NULL;
    3308  m_Count = 0;
    3309  }
    3310 }
    3311 
    3312 template<typename T>
    3313 VmaListItem<T>* VmaRawList<T>::PushBack()
    3314 {
    3315  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3316  pNewItem->pNext = VMA_NULL;
    3317  if(IsEmpty())
    3318  {
    3319  pNewItem->pPrev = VMA_NULL;
    3320  m_pFront = pNewItem;
    3321  m_pBack = pNewItem;
    3322  m_Count = 1;
    3323  }
    3324  else
    3325  {
    3326  pNewItem->pPrev = m_pBack;
    3327  m_pBack->pNext = pNewItem;
    3328  m_pBack = pNewItem;
    3329  ++m_Count;
    3330  }
    3331  return pNewItem;
    3332 }
    3333 
    3334 template<typename T>
    3335 VmaListItem<T>* VmaRawList<T>::PushFront()
    3336 {
    3337  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3338  pNewItem->pPrev = VMA_NULL;
    3339  if(IsEmpty())
    3340  {
    3341  pNewItem->pNext = VMA_NULL;
    3342  m_pFront = pNewItem;
    3343  m_pBack = pNewItem;
    3344  m_Count = 1;
    3345  }
    3346  else
    3347  {
    3348  pNewItem->pNext = m_pFront;
    3349  m_pFront->pPrev = pNewItem;
    3350  m_pFront = pNewItem;
    3351  ++m_Count;
    3352  }
    3353  return pNewItem;
    3354 }
    3355 
    3356 template<typename T>
    3357 VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
    3358 {
    3359  ItemType* const pNewItem = PushBack();
    3360  pNewItem->Value = value;
    3361  return pNewItem;
    3362 }
    3363 
    3364 template<typename T>
    3365 VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
    3366 {
    3367  ItemType* const pNewItem = PushFront();
    3368  pNewItem->Value = value;
    3369  return pNewItem;
    3370 }
    3371 
    3372 template<typename T>
    3373 void VmaRawList<T>::PopBack()
    3374 {
    3375  VMA_HEAVY_ASSERT(m_Count > 0);
    3376  ItemType* const pBackItem = m_pBack;
    3377  ItemType* const pPrevItem = pBackItem->pPrev;
    3378  if(pPrevItem != VMA_NULL)
    3379  {
    3380  pPrevItem->pNext = VMA_NULL;
    3381  }
    3382  m_pBack = pPrevItem;
    3383  m_ItemAllocator.Free(pBackItem);
    3384  --m_Count;
    3385 }
    3386 
    3387 template<typename T>
    3388 void VmaRawList<T>::PopFront()
    3389 {
    3390  VMA_HEAVY_ASSERT(m_Count > 0);
    3391  ItemType* const pFrontItem = m_pFront;
    3392  ItemType* const pNextItem = pFrontItem->pNext;
    3393  if(pNextItem != VMA_NULL)
    3394  {
    3395  pNextItem->pPrev = VMA_NULL;
    3396  }
    3397  m_pFront = pNextItem;
    3398  m_ItemAllocator.Free(pFrontItem);
    3399  --m_Count;
    3400 }
    3401 
    3402 template<typename T>
    3403 void VmaRawList<T>::Remove(ItemType* pItem)
    3404 {
    3405  VMA_HEAVY_ASSERT(pItem != VMA_NULL);
    3406  VMA_HEAVY_ASSERT(m_Count > 0);
    3407 
    3408  if(pItem->pPrev != VMA_NULL)
    3409  {
    3410  pItem->pPrev->pNext = pItem->pNext;
    3411  }
    3412  else
    3413  {
    3414  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3415  m_pFront = pItem->pNext;
    3416  }
    3417 
    3418  if(pItem->pNext != VMA_NULL)
    3419  {
    3420  pItem->pNext->pPrev = pItem->pPrev;
    3421  }
    3422  else
    3423  {
    3424  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3425  m_pBack = pItem->pPrev;
    3426  }
    3427 
    3428  m_ItemAllocator.Free(pItem);
    3429  --m_Count;
    3430 }
    3431 
    3432 template<typename T>
    3433 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
    3434 {
    3435  if(pItem != VMA_NULL)
    3436  {
    3437  ItemType* const prevItem = pItem->pPrev;
    3438  ItemType* const newItem = m_ItemAllocator.Alloc();
    3439  newItem->pPrev = prevItem;
    3440  newItem->pNext = pItem;
    3441  pItem->pPrev = newItem;
    3442  if(prevItem != VMA_NULL)
    3443  {
    3444  prevItem->pNext = newItem;
    3445  }
    3446  else
    3447  {
    3448  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3449  m_pFront = newItem;
    3450  }
    3451  ++m_Count;
    3452  return newItem;
    3453  }
    3454  else
    3455  return PushBack();
    3456 }
    3457 
    3458 template<typename T>
    3459 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
    3460 {
    3461  if(pItem != VMA_NULL)
    3462  {
    3463  ItemType* const nextItem = pItem->pNext;
    3464  ItemType* const newItem = m_ItemAllocator.Alloc();
    3465  newItem->pNext = nextItem;
    3466  newItem->pPrev = pItem;
    3467  pItem->pNext = newItem;
    3468  if(nextItem != VMA_NULL)
    3469  {
    3470  nextItem->pPrev = newItem;
    3471  }
    3472  else
    3473  {
    3474  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3475  m_pBack = newItem;
    3476  }
    3477  ++m_Count;
    3478  return newItem;
    3479  }
    3480  else
    3481  return PushFront();
    3482 }
    3483 
    3484 template<typename T>
    3485 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
    3486 {
    3487  ItemType* const newItem = InsertBefore(pItem);
    3488  newItem->Value = value;
    3489  return newItem;
    3490 }
    3491 
    3492 template<typename T>
    3493 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
    3494 {
    3495  ItemType* const newItem = InsertAfter(pItem);
    3496  newItem->Value = value;
    3497  return newItem;
    3498 }
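
// Usage sketch (illustrative only): VmaRawList hands out VmaListItem pointers
// directly, so a held position stays valid across other insertions and removals.
//
// VmaRawList<uint32_t> list(VMA_NULL /*pAllocationCallbacks*/);
// VmaListItem<uint32_t>* p1 = list.PushBack(1); // list: 1
// list.PushFront(0);                            // list: 0, 1
// list.InsertAfter(p1, 2);                      // list: 0, 1, 2
// list.Remove(p1);                              // list: 0, 2
// list.Clear(); // Returns all remaining items to the internal VmaPoolAllocator.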
    3499 
    3500 template<typename T, typename AllocatorT>
    3501 class VmaList
    3502 {
    3503  VMA_CLASS_NO_COPY(VmaList)
    3504 public:
    3505  class iterator
    3506  {
    3507  public:
    3508  iterator() :
    3509  m_pList(VMA_NULL),
    3510  m_pItem(VMA_NULL)
    3511  {
    3512  }
    3513 
    3514  T& operator*() const
    3515  {
    3516  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3517  return m_pItem->Value;
    3518  }
    3519  T* operator->() const
    3520  {
    3521  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3522  return &m_pItem->Value;
    3523  }
    3524 
    3525  iterator& operator++()
    3526  {
    3527  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3528  m_pItem = m_pItem->pNext;
    3529  return *this;
    3530  }
    3531  iterator& operator--()
    3532  {
    3533  if(m_pItem != VMA_NULL)
    3534  {
    3535  m_pItem = m_pItem->pPrev;
    3536  }
    3537  else
    3538  {
    3539  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3540  m_pItem = m_pList->Back();
    3541  }
    3542  return *this;
    3543  }
    3544 
    3545  iterator operator++(int)
    3546  {
    3547  iterator result = *this;
    3548  ++*this;
    3549  return result;
    3550  }
    3551  iterator operator--(int)
    3552  {
    3553  iterator result = *this;
    3554  --*this;
    3555  return result;
    3556  }
    3557 
    3558  bool operator==(const iterator& rhs) const
    3559  {
    3560  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3561  return m_pItem == rhs.m_pItem;
    3562  }
    3563  bool operator!=(const iterator& rhs) const
    3564  {
    3565  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3566  return m_pItem != rhs.m_pItem;
    3567  }
    3568 
    3569  private:
    3570  VmaRawList<T>* m_pList;
    3571  VmaListItem<T>* m_pItem;
    3572 
    3573  iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) :
    3574  m_pList(pList),
    3575  m_pItem(pItem)
    3576  {
    3577  }
    3578 
    3579  friend class VmaList<T, AllocatorT>;
    3580  };
    3581 
    3582  class const_iterator
    3583  {
    3584  public:
    3585  const_iterator() :
    3586  m_pList(VMA_NULL),
    3587  m_pItem(VMA_NULL)
    3588  {
    3589  }
    3590 
    3591  const_iterator(const iterator& src) :
    3592  m_pList(src.m_pList),
    3593  m_pItem(src.m_pItem)
    3594  {
    3595  }
    3596 
    3597  const T& operator*() const
    3598  {
    3599  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3600  return m_pItem->Value;
    3601  }
    3602  const T* operator->() const
    3603  {
    3604  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3605  return &m_pItem->Value;
    3606  }
    3607 
    3608  const_iterator& operator++()
    3609  {
    3610  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3611  m_pItem = m_pItem->pNext;
    3612  return *this;
    3613  }
    3614  const_iterator& operator--()
    3615  {
    3616  if(m_pItem != VMA_NULL)
    3617  {
    3618  m_pItem = m_pItem->pPrev;
    3619  }
    3620  else
    3621  {
    3622  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3623  m_pItem = m_pList->Back();
    3624  }
    3625  return *this;
    3626  }
    3627 
    3628  const_iterator operator++(int)
    3629  {
    3630  const_iterator result = *this;
    3631  ++*this;
    3632  return result;
    3633  }
    3634  const_iterator operator--(int)
    3635  {
    3636  const_iterator result = *this;
    3637  --*this;
    3638  return result;
    3639  }
    3640 
    3641  bool operator==(const const_iterator& rhs) const
    3642  {
    3643  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3644  return m_pItem == rhs.m_pItem;
    3645  }
    3646  bool operator!=(const const_iterator& rhs) const
    3647  {
    3648  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3649  return m_pItem != rhs.m_pItem;
    3650  }
    3651 
    3652  private:
    3653  const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) :
    3654  m_pList(pList),
    3655  m_pItem(pItem)
    3656  {
    3657  }
    3658 
    3659  const VmaRawList<T>* m_pList;
    3660  const VmaListItem<T>* m_pItem;
    3661 
    3662  friend class VmaList<T, AllocatorT>;
    3663  };
    3664 
    3665  VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) { }
    3666 
    3667  bool empty() const { return m_RawList.IsEmpty(); }
    3668  size_t size() const { return m_RawList.GetCount(); }
    3669 
    3670  iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
    3671  iterator end() { return iterator(&m_RawList, VMA_NULL); }
    3672 
    3673  const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
    3674  const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
    3675 
    3676  void clear() { m_RawList.Clear(); }
    3677  void push_back(const T& value) { m_RawList.PushBack(value); }
    3678  void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
    3679  iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
    3680 
    3681 private:
    3682  VmaRawList<T> m_RawList;
    3683 };
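
// Usage sketch (illustrative only, assuming VmaStlAllocator exposes
// m_pCallbacks as defined earlier in this file): VmaList mirrors the subset of
// std::list that this library uses.
//
// typedef VmaList< uint32_t, VmaStlAllocator<uint32_t> > MyList;
// MyList list(VmaStlAllocator<uint32_t>(VMA_NULL));
// list.push_back(10);
// list.push_back(20);
// for(MyList::iterator it = list.begin(); it != list.end(); ++it)
//     *it += 1; // 11, 21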
    3684 
    3685 #endif // #if VMA_USE_STL_LIST
    3686 
    3687 ////////////////////////////////////////////////////////////////////////////////
    3688 // class VmaMap
    3689 
    3690 // Unused in this version.
    3691 #if 0
    3692 
    3693 #if VMA_USE_STL_UNORDERED_MAP
    3694 
    3695 #define VmaPair std::pair
    3696 
    3697 #define VMA_MAP_TYPE(KeyT, ValueT) \
    3698  std::unordered_map< KeyT, ValueT, std::hash<KeyT>, std::equal_to<KeyT>, VmaStlAllocator< std::pair<KeyT, ValueT> > >
    3699 
    3700 #else // #if VMA_USE_STL_UNORDERED_MAP
    3701 
    3702 template<typename T1, typename T2>
    3703 struct VmaPair
    3704 {
    3705  T1 first;
    3706  T2 second;
    3707 
    3708  VmaPair() : first(), second() { }
    3709  VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) { }
    3710 };
    3711 
    3712 /* Class compatible with subset of interface of std::unordered_map.
    3713 KeyT, ValueT must be POD because they will be stored in VmaVector.
    3714 */
    3715 template<typename KeyT, typename ValueT>
    3716 class VmaMap
    3717 {
    3718 public:
    3719  typedef VmaPair<KeyT, ValueT> PairType;
    3720  typedef PairType* iterator;
    3721 
    3722  VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) { }
    3723 
    3724  iterator begin() { return m_Vector.begin(); }
    3725  iterator end() { return m_Vector.end(); }
    3726 
    3727  void insert(const PairType& pair);
    3728  iterator find(const KeyT& key);
    3729  void erase(iterator it);
    3730 
    3731 private:
    3732  VmaVector< PairType, VmaStlAllocator<PairType> > m_Vector;
    3733 };
    3734 
    3735 #define VMA_MAP_TYPE(KeyT, ValueT) VmaMap<KeyT, ValueT>
    3736 
    3737 template<typename FirstT, typename SecondT>
    3738 struct VmaPairFirstLess
    3739 {
    3740  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
    3741  {
    3742  return lhs.first < rhs.first;
    3743  }
    3744  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
    3745  {
    3746  return lhs.first < rhsFirst;
    3747  }
    3748 };
    3749 
    3750 template<typename KeyT, typename ValueT>
    3751 void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
    3752 {
    3753  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3754  m_Vector.data(),
    3755  m_Vector.data() + m_Vector.size(),
    3756  pair,
    3757  VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
    3758  VmaVectorInsert(m_Vector, indexToInsert, pair);
    3759 }
    3760 
    3761 template<typename KeyT, typename ValueT>
    3762 VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
    3763 {
    3764  PairType* it = VmaBinaryFindFirstNotLess(
    3765  m_Vector.data(),
    3766  m_Vector.data() + m_Vector.size(),
    3767  key,
    3768  VmaPairFirstLess<KeyT, ValueT>());
    3769  if((it != m_Vector.end()) && (it->first == key))
    3770  {
    3771  return it;
    3772  }
    3773  else
    3774  {
    3775  return m_Vector.end();
    3776  }
    3777 }
    3778 
    3779 template<typename KeyT, typename ValueT>
    3780 void VmaMap<KeyT, ValueT>::erase(iterator it)
    3781 {
    3782  VmaVectorRemove(m_Vector, it - m_Vector.begin());
    3783 }
    3784 
    3785 #endif // #if VMA_USE_STL_UNORDERED_MAP
    3786 
    3787 #endif // #if 0
    3788 
    3789 ////////////////////////////////////////////////////////////////////////////////
    3790 
    3791 class VmaDeviceMemoryBlock;
    3792 
    3793 enum VMA_CACHE_OPERATION { VMA_CACHE_FLUSH, VMA_CACHE_INVALIDATE };
    3794 
    3795 struct VmaAllocation_T
    3796 {
    3797  VMA_CLASS_NO_COPY(VmaAllocation_T)
    3798 private:
    3799  static const uint8_t MAP_COUNT_FLAG_PERSISTENT_MAP = 0x80;
    3800 
    3801  enum FLAGS
    3802  {
    3803  FLAG_USER_DATA_STRING = 0x01,
    3804  };
    3805 
    3806 public:
    3807  enum ALLOCATION_TYPE
    3808  {
    3809  ALLOCATION_TYPE_NONE,
    3810  ALLOCATION_TYPE_BLOCK,
    3811  ALLOCATION_TYPE_DEDICATED,
    3812  };
    3813 
    3814  VmaAllocation_T(uint32_t currentFrameIndex, bool userDataString) :
    3815  m_Alignment(1),
    3816  m_Size(0),
    3817  m_pUserData(VMA_NULL),
    3818  m_LastUseFrameIndex(currentFrameIndex),
    3819  m_Type((uint8_t)ALLOCATION_TYPE_NONE),
    3820  m_SuballocationType((uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN),
    3821  m_MapCount(0),
    3822  m_Flags(userDataString ? (uint8_t)FLAG_USER_DATA_STRING : 0)
    3823  {
    3824 #if VMA_STATS_STRING_ENABLED
    3825  m_CreationFrameIndex = currentFrameIndex;
    3826  m_BufferImageUsage = 0;
    3827 #endif
    3828  }
    3829 
    3830  ~VmaAllocation_T()
    3831  {
    3832  VMA_ASSERT((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) == 0 && "Allocation was not unmapped before destruction.");
    3833 
    3834  // Check if owned string was freed.
    3835  VMA_ASSERT(m_pUserData == VMA_NULL);
    3836  }
    3837 
    3838  void InitBlockAllocation(
    3839  VmaPool hPool,
    3840  VmaDeviceMemoryBlock* block,
    3841  VkDeviceSize offset,
    3842  VkDeviceSize alignment,
    3843  VkDeviceSize size,
    3844  VmaSuballocationType suballocationType,
    3845  bool mapped,
    3846  bool canBecomeLost)
    3847  {
    3848  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    3849  VMA_ASSERT(block != VMA_NULL);
    3850  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    3851  m_Alignment = alignment;
    3852  m_Size = size;
    3853  m_MapCount = mapped ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    3854  m_SuballocationType = (uint8_t)suballocationType;
    3855  m_BlockAllocation.m_hPool = hPool;
    3856  m_BlockAllocation.m_Block = block;
    3857  m_BlockAllocation.m_Offset = offset;
    3858  m_BlockAllocation.m_CanBecomeLost = canBecomeLost;
    3859  }
    3860 
    3861  void InitLost()
    3862  {
    3863  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    3864  VMA_ASSERT(m_LastUseFrameIndex.load() == VMA_FRAME_INDEX_LOST);
    3865  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    3866  m_BlockAllocation.m_hPool = VK_NULL_HANDLE;
    3867  m_BlockAllocation.m_Block = VMA_NULL;
    3868  m_BlockAllocation.m_Offset = 0;
    3869  m_BlockAllocation.m_CanBecomeLost = true;
    3870  }
    3871 
    3872  void ChangeBlockAllocation(
    3873  VmaAllocator hAllocator,
    3874  VmaDeviceMemoryBlock* block,
    3875  VkDeviceSize offset);
    3876 
    3877  // pMappedData not null means allocation is created with MAPPED flag.
    3878  void InitDedicatedAllocation(
    3879  uint32_t memoryTypeIndex,
    3880  VkDeviceMemory hMemory,
    3881  VmaSuballocationType suballocationType,
    3882  void* pMappedData,
    3883  VkDeviceSize size)
    3884  {
    3885  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    3886  VMA_ASSERT(hMemory != VK_NULL_HANDLE);
    3887  m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
    3888  m_Alignment = 0;
    3889  m_Size = size;
    3890  m_SuballocationType = (uint8_t)suballocationType;
    3891  m_MapCount = (pMappedData != VMA_NULL) ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    3892  m_DedicatedAllocation.m_MemoryTypeIndex = memoryTypeIndex;
    3893  m_DedicatedAllocation.m_hMemory = hMemory;
    3894  m_DedicatedAllocation.m_pMappedData = pMappedData;
    3895  }
    3896 
    3897  ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
    3898  VkDeviceSize GetAlignment() const { return m_Alignment; }
    3899  VkDeviceSize GetSize() const { return m_Size; }
    3900  bool IsUserDataString() const { return (m_Flags & FLAG_USER_DATA_STRING) != 0; }
    3901  void* GetUserData() const { return m_pUserData; }
    3902  void SetUserData(VmaAllocator hAllocator, void* pUserData);
    3903  VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
    3904 
    3905  VmaDeviceMemoryBlock* GetBlock() const
    3906  {
    3907  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    3908  return m_BlockAllocation.m_Block;
    3909  }
    3910  VkDeviceSize GetOffset() const;
    3911  VkDeviceMemory GetMemory() const;
    3912  uint32_t GetMemoryTypeIndex() const;
    3913  bool IsPersistentMap() const { return (m_MapCount & MAP_COUNT_FLAG_PERSISTENT_MAP) != 0; }
    3914  void* GetMappedData() const;
    3915  bool CanBecomeLost() const;
    3916  VmaPool GetPool() const;
    3917 
    3918  uint32_t GetLastUseFrameIndex() const
    3919  {
    3920  return m_LastUseFrameIndex.load();
    3921  }
    3922  bool CompareExchangeLastUseFrameIndex(uint32_t& expected, uint32_t desired)
    3923  {
    3924  return m_LastUseFrameIndex.compare_exchange_weak(expected, desired);
    3925  }
    3926  /*
    3927  - If hAllocation.LastUseFrameIndex + frameInUseCount < allocator.CurrentFrameIndex,
    3928  makes it lost by setting LastUseFrameIndex = VMA_FRAME_INDEX_LOST and returns true.
    3929  - Else, returns false.
    3930 
    3931  If hAllocation is already lost, assert - you should not call it then.
    3932  If hAllocation was not created with CAN_BECOME_LOST_BIT, assert.
    3933  */
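 // Example: with frameInUseCount = 2 and currentFrameIndex = 10, an allocation
 // last used in frame 7 satisfies 7 + 2 < 10 and can be made lost, while one
 // last used in frame 8 (8 + 2 == 10) is still considered in use.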
    3934  bool MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    3935 
    3936  void DedicatedAllocCalcStatsInfo(VmaStatInfo& outInfo)
    3937  {
    3938  VMA_ASSERT(m_Type == ALLOCATION_TYPE_DEDICATED);
    3939  outInfo.blockCount = 1;
    3940  outInfo.allocationCount = 1;
    3941  outInfo.unusedRangeCount = 0;
    3942  outInfo.usedBytes = m_Size;
    3943  outInfo.unusedBytes = 0;
    3944  outInfo.allocationSizeMin = outInfo.allocationSizeMax = m_Size;
    3945  outInfo.unusedRangeSizeMin = UINT64_MAX;
    3946  outInfo.unusedRangeSizeMax = 0;
    3947  }
    3948 
    3949  void BlockAllocMap();
    3950  void BlockAllocUnmap();
    3951  VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
    3952  void DedicatedAllocUnmap(VmaAllocator hAllocator);
    3953 
    3954 #if VMA_STATS_STRING_ENABLED
    3955  uint32_t GetCreationFrameIndex() const { return m_CreationFrameIndex; }
    3956  uint32_t GetBufferImageUsage() const { return m_BufferImageUsage; }
    3957 
    3958  void InitBufferImageUsage(uint32_t bufferImageUsage)
    3959  {
    3960  VMA_ASSERT(m_BufferImageUsage == 0);
    3961  m_BufferImageUsage = bufferImageUsage;
    3962  }
    3963 
    3964  void PrintParameters(class VmaJsonWriter& json) const;
    3965 #endif
    3966 
    3967 private:
    3968  VkDeviceSize m_Alignment;
    3969  VkDeviceSize m_Size;
    3970  void* m_pUserData;
    3971  VMA_ATOMIC_UINT32 m_LastUseFrameIndex;
    3972  uint8_t m_Type; // ALLOCATION_TYPE
    3973  uint8_t m_SuballocationType; // VmaSuballocationType
    3974  // Bit 0x80 is set when allocation was created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
    3975  // Bits with mask 0x7F are reference counter for vmaMapMemory()/vmaUnmapMemory().
    3976  uint8_t m_MapCount;
    3977  uint8_t m_Flags; // enum FLAGS
    3978 
    3979  // Allocation out of VmaDeviceMemoryBlock.
    3980  struct BlockAllocation
    3981  {
    3982  VmaPool m_hPool; // Null if belongs to general memory.
    3983  VmaDeviceMemoryBlock* m_Block;
    3984  VkDeviceSize m_Offset;
    3985  bool m_CanBecomeLost;
    3986  };
    3987 
    3988  // Allocation for an object that has its own private VkDeviceMemory.
    3989  struct DedicatedAllocation
    3990  {
    3991  uint32_t m_MemoryTypeIndex;
    3992  VkDeviceMemory m_hMemory;
    3993  void* m_pMappedData; // Not null means memory is mapped.
    3994  };
    3995 
    3996  union
    3997  {
    3998  // Allocation out of VmaDeviceMemoryBlock.
    3999  BlockAllocation m_BlockAllocation;
    4000  // Allocation for an object that has its own private VkDeviceMemory.
    4001  DedicatedAllocation m_DedicatedAllocation;
    4002  };
    4003 
    4004 #if VMA_STATS_STRING_ENABLED
    4005  uint32_t m_CreationFrameIndex;
    4006  uint32_t m_BufferImageUsage; // 0 if unknown.
    4007 #endif
    4008 
    4009  void FreeUserDataString(VmaAllocator hAllocator);
    4010 };
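
// Note: the active member of the union above is selected by m_Type.
// For example GetBlock() asserts ALLOCATION_TYPE_BLOCK before touching
// m_BlockAllocation, while a dedicated allocation keeps its VkDeviceMemory in
// m_DedicatedAllocation.m_hMemory instead.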
    4011 
    4012 /*
    4013 Represents a region of VmaDeviceMemoryBlock that is either assigned to an
    4014 allocation and returned as allocated memory, or free.
    4015 */
    4016 struct VmaSuballocation
    4017 {
    4018  VkDeviceSize offset;
    4019  VkDeviceSize size;
    4020  VmaAllocation hAllocation;
    4021  VmaSuballocationType type;
    4022 };
    4023 
    4024 typedef VmaList< VmaSuballocation, VmaStlAllocator<VmaSuballocation> > VmaSuballocationList;
    4025 
    4026 // Cost of making one additional allocation lost, expressed in bytes.
    4027 static const VkDeviceSize VMA_LOST_ALLOCATION_COST = 1048576;
    4028 
    4029 /*
    4030 Parameters of planned allocation inside a VmaDeviceMemoryBlock.
    4031 
    4032 If canMakeOtherLost was false:
    4033 - item points to a FREE suballocation.
    4034 - itemsToMakeLostCount is 0.
    4035 
    4036 If canMakeOtherLost was true:
    4037 - item points to first of sequence of suballocations, which are either FREE,
    4038  or point to VmaAllocations that can become lost.
    4039 - itemsToMakeLostCount is the number of VmaAllocations that need to be made lost for
    4040  the requested allocation to succeed.
    4041 */
    4042 struct VmaAllocationRequest
    4043 {
    4044  VkDeviceSize offset;
    4045  VkDeviceSize sumFreeSize; // Sum size of free items that overlap with proposed allocation.
    4046  VkDeviceSize sumItemSize; // Sum size of items to make lost that overlap with proposed allocation.
    4047  VmaSuballocationList::iterator item;
    4048  size_t itemsToMakeLostCount;
    4049 
    4050  VkDeviceSize CalcCost() const
    4051  {
    4052  return sumItemSize + itemsToMakeLostCount * VMA_LOST_ALLOCATION_COST;
    4053  }
    4054 };
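
// Worked example (illustrative): a request overlapping 3 MiB of allocations
// that would have to be made lost, 2 of them, has
// CalcCost() = 3145728 + 2 * VMA_LOST_ALLOCATION_COST = 5242880 "bytes".
// When several placements could satisfy a request with canMakeOtherLost,
// the lowest-cost request is preferred.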
    4055 
    4056 /*
    4057 Data structure used for bookkeeping of allocations and unused ranges of memory
    4058 in a single VkDeviceMemory block.
    4059 */
    4060 class VmaBlockMetadata
    4061 {
    4062  VMA_CLASS_NO_COPY(VmaBlockMetadata)
    4063 public:
    4064  VmaBlockMetadata(VmaAllocator hAllocator);
    4065  ~VmaBlockMetadata();
    4066  void Init(VkDeviceSize size);
    4067 
    4068  // Validates all data structures inside this object. If not valid, returns false.
    4069  bool Validate() const;
    4070  VkDeviceSize GetSize() const { return m_Size; }
    4071  size_t GetAllocationCount() const { return m_Suballocations.size() - m_FreeCount; }
    4072  VkDeviceSize GetSumFreeSize() const { return m_SumFreeSize; }
    4073  VkDeviceSize GetUnusedRangeSizeMax() const;
    4074  // Returns true if this block is empty - contains only a single free suballocation.
    4075  bool IsEmpty() const;
    4076 
    4077  void CalcAllocationStatInfo(VmaStatInfo& outInfo) const;
    4078  void AddPoolStats(VmaPoolStats& inoutStats) const;
    4079 
    4080 #if VMA_STATS_STRING_ENABLED
    4081  void PrintDetailedMap(class VmaJsonWriter& json) const;
    4082 #endif
    4083 
    4084  // Creates a trivial request for the case when the block is empty.
    4085  void CreateFirstAllocationRequest(VmaAllocationRequest* pAllocationRequest);
    4086 
    4087  // Tries to find a place for suballocation with given parameters inside this block.
    4088  // If succeeded, fills pAllocationRequest and returns true.
    4089  // If failed, returns false.
    4090  bool CreateAllocationRequest(
    4091  uint32_t currentFrameIndex,
    4092  uint32_t frameInUseCount,
    4093  VkDeviceSize bufferImageGranularity,
    4094  VkDeviceSize allocSize,
    4095  VkDeviceSize allocAlignment,
    4096  VmaSuballocationType allocType,
    4097  bool canMakeOtherLost,
    4098  VmaAllocationRequest* pAllocationRequest);
    4099 
    4100  bool MakeRequestedAllocationsLost(
    4101  uint32_t currentFrameIndex,
    4102  uint32_t frameInUseCount,
    4103  VmaAllocationRequest* pAllocationRequest);
    4104 
    4105  uint32_t MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    4106 
    4107  // Makes actual allocation based on request. Request must already be checked and valid.
    4108  void Alloc(
    4109  const VmaAllocationRequest& request,
    4110  VmaSuballocationType type,
    4111  VkDeviceSize allocSize,
    4112  VmaAllocation hAllocation);
    4113 
    4114  // Frees suballocation assigned to given memory region.
    4115  void Free(const VmaAllocation allocation);
    4116  void FreeAtOffset(VkDeviceSize offset);
    4117 
    4118 private:
    4119  VkDeviceSize m_Size;
    4120  uint32_t m_FreeCount;
    4121  VkDeviceSize m_SumFreeSize;
    4122  VmaSuballocationList m_Suballocations;
    4123  // Suballocations that are free and have size greater than certain threshold.
    4124  // Sorted by size, ascending.
    4125  VmaVector< VmaSuballocationList::iterator, VmaStlAllocator< VmaSuballocationList::iterator > > m_FreeSuballocationsBySize;
    4126 
    4127  bool ValidateFreeSuballocationList() const;
    4128 
    4129  // Checks if requested suballocation with given parameters can be placed in given suballocItem.
    4130  // If yes, fills pOffset and returns true. If no, returns false.
    4131  bool CheckAllocation(
    4132  uint32_t currentFrameIndex,
    4133  uint32_t frameInUseCount,
    4134  VkDeviceSize bufferImageGranularity,
    4135  VkDeviceSize allocSize,
    4136  VkDeviceSize allocAlignment,
    4137  VmaSuballocationType allocType,
    4138  VmaSuballocationList::const_iterator suballocItem,
    4139  bool canMakeOtherLost,
    4140  VkDeviceSize* pOffset,
    4141  size_t* itemsToMakeLostCount,
    4142  VkDeviceSize* pSumFreeSize,
    4143  VkDeviceSize* pSumItemSize) const;
    4144  // Given a free suballocation, merges it with the following one, which must also be free.
    4145  void MergeFreeWithNext(VmaSuballocationList::iterator item);
    4146  // Releases given suballocation, making it free.
    4147  // Merges it with adjacent free suballocations if applicable.
    4148  // Returns iterator to new free suballocation at this place.
    4149  VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
    4150  // Given a free suballocation, inserts it into the sorted list
    4151  // m_FreeSuballocationsBySize if it is large enough to be registered there.
    4152  void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
    4153  // Given a free suballocation, removes it from the sorted list
    4154  // m_FreeSuballocationsBySize if it was registered there.
    4155  void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
    4156 };
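
// Best-fit lookup sketch (illustrative, not the verbatim implementation):
// because m_FreeSuballocationsBySize is sorted by size ascending,
// CreateAllocationRequest() can binary-search the first free suballocation
// large enough for the request, and only then verify offset alignment,
// bufferImageGranularity conflicts, etc. via CheckAllocation().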
    4157 
    4158 /*
    4159 Represents a single block of device memory (`VkDeviceMemory`) with all the
    4160 data about its regions (aka suballocations, #VmaAllocation), assigned and free.
    4161 
    4162 Thread-safety: This class must be externally synchronized.
    4163 */
    4164 class VmaDeviceMemoryBlock
    4165 {
    4166  VMA_CLASS_NO_COPY(VmaDeviceMemoryBlock)
    4167 public:
    4168  VmaBlockMetadata m_Metadata;
    4169 
    4170  VmaDeviceMemoryBlock(VmaAllocator hAllocator);
    4171 
    4172  ~VmaDeviceMemoryBlock()
    4173  {
    4174  VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
    4175  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
    4176  }
    4177 
    4178  // Always call after construction.
    4179  void Init(
    4180  uint32_t newMemoryTypeIndex,
    4181  VkDeviceMemory newMemory,
    4182  VkDeviceSize newSize,
    4183  uint32_t id);
    4184  // Always call before destruction.
    4185  void Destroy(VmaAllocator allocator);
    4186 
    4187  VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
    4188  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4189  uint32_t GetId() const { return m_Id; }
    4190  void* GetMappedData() const { return m_pMappedData; }
    4191 
    4192  // Validates all data structures inside this object. If not valid, returns false.
    4193  bool Validate() const;
    4194 
    4195  // ppData can be null.
    4196  VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
    4197  void Unmap(VmaAllocator hAllocator, uint32_t count);
    4198 
    4199  VkResult BindBufferMemory(
    4200  const VmaAllocator hAllocator,
    4201  const VmaAllocation hAllocation,
    4202  VkBuffer hBuffer);
    4203  VkResult BindImageMemory(
    4204  const VmaAllocator hAllocator,
    4205  const VmaAllocation hAllocation,
    4206  VkImage hImage);
    4207 
    4208 private:
    4209  uint32_t m_MemoryTypeIndex;
    4210  uint32_t m_Id;
    4211  VkDeviceMemory m_hMemory;
    4212 
    4213  // Protects access to m_hMemory so it's not used by multiple threads simultaneously, e.g. vkMapMemory, vkBindBufferMemory.
    4214  // Also protects m_MapCount, m_pMappedData.
    4215  VMA_MUTEX m_Mutex;
    4216  uint32_t m_MapCount;
    4217  void* m_pMappedData;
    4218 };
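
// Mapping sketch (illustrative): Map()/Unmap() reference-count a single
// persistent mapping of the whole block, guarded by m_Mutex.
//
// void* pData = VMA_NULL;
// block.Map(hAllocator, 1, &pData); // m_MapCount 0 -> 1: calls vkMapMemory.
// block.Map(hAllocator, 1, &pData); // m_MapCount 1 -> 2: reuses m_pMappedData.
// block.Unmap(hAllocator, 2);       // m_MapCount 2 -> 0: calls vkUnmapMemory.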
    4219 
    4220 struct VmaPointerLess
    4221 {
    4222  bool operator()(const void* lhs, const void* rhs) const
    4223  {
    4224  return lhs < rhs;
    4225  }
    4226 };
    4227 
    4228 class VmaDefragmentator;
    4229 
    4230 /*
    4231 Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
    4232 Vulkan memory type.
    4233 
    4234 Synchronized internally with a mutex.
    4235 */
    4236 struct VmaBlockVector
    4237 {
    4238  VMA_CLASS_NO_COPY(VmaBlockVector)
    4239 public:
    4240  VmaBlockVector(
    4241  VmaAllocator hAllocator,
    4242  uint32_t memoryTypeIndex,
    4243  VkDeviceSize preferredBlockSize,
    4244  size_t minBlockCount,
    4245  size_t maxBlockCount,
    4246  VkDeviceSize bufferImageGranularity,
    4247  uint32_t frameInUseCount,
    4248  bool isCustomPool);
    4249  ~VmaBlockVector();
    4250 
    4251  VkResult CreateMinBlocks();
    4252 
    4253  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4254  VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
    4255  VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
    4256  uint32_t GetFrameInUseCount() const { return m_FrameInUseCount; }
    4257 
    4258  void GetPoolStats(VmaPoolStats* pStats);
    4259 
    4260  bool IsEmpty() const { return m_Blocks.empty(); }
    4261 
    4262  VkResult Allocate(
    4263  VmaPool hCurrentPool,
    4264  uint32_t currentFrameIndex,
    4265  VkDeviceSize size,
    4266  VkDeviceSize alignment,
    4267  const VmaAllocationCreateInfo& createInfo,
    4268  VmaSuballocationType suballocType,
    4269  VmaAllocation* pAllocation);
    4270 
    4271  void Free(
    4272  VmaAllocation hAllocation);
    4273 
    4274  // Adds statistics of this BlockVector to pStats.
    4275  void AddStats(VmaStats* pStats);
    4276 
    4277 #if VMA_STATS_STRING_ENABLED
    4278  void PrintDetailedMap(class VmaJsonWriter& json);
    4279 #endif
    4280 
    4281  void MakePoolAllocationsLost(
    4282  uint32_t currentFrameIndex,
    4283  size_t* pLostAllocationCount);
    4284 
    4285  VmaDefragmentator* EnsureDefragmentator(
    4286  VmaAllocator hAllocator,
    4287  uint32_t currentFrameIndex);
    4288 
    4289  VkResult Defragment(
    4290  VmaDefragmentationStats* pDefragmentationStats,
    4291  VkDeviceSize& maxBytesToMove,
    4292  uint32_t& maxAllocationsToMove);
    4293 
    4294  void DestroyDefragmentator();
    4295 
    4296 private:
    4297  friend class VmaDefragmentator;
    4298 
    4299  const VmaAllocator m_hAllocator;
    4300  const uint32_t m_MemoryTypeIndex;
    4301  const VkDeviceSize m_PreferredBlockSize;
    4302  const size_t m_MinBlockCount;
    4303  const size_t m_MaxBlockCount;
    4304  const VkDeviceSize m_BufferImageGranularity;
    4305  const uint32_t m_FrameInUseCount;
    4306  const bool m_IsCustomPool;
    4307  VMA_MUTEX m_Mutex;
    4308  // Incrementally sorted by sumFreeSize, ascending.
    4309  VmaVector< VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*> > m_Blocks;
    4310  /* There can be at most one memory block that is completely empty - a
    4311  hysteresis that avoids the pessimistic case of alternately creating and
    4312  destroying a VkDeviceMemory. */
    4313  bool m_HasEmptyBlock;
    4314  VmaDefragmentator* m_pDefragmentator;
    4315  uint32_t m_NextBlockId;
    4316 
    4317  VkDeviceSize CalcMaxBlockSize() const;
    4318 
    4319  // Finds and removes given block from vector.
    4320  void Remove(VmaDeviceMemoryBlock* pBlock);
    4321 
    4322  // Performs single step in sorting m_Blocks. They may not be fully sorted
    4323  // after this call.
    4324  void IncrementallySortBlocks();
    4325 
    4326  VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
    4327 };
    4328 
    4329 struct VmaPool_T
    4330 {
    4331  VMA_CLASS_NO_COPY(VmaPool_T)
    4332 public:
    4333  VmaBlockVector m_BlockVector;
    4334 
    4335  VmaPool_T(
    4336  VmaAllocator hAllocator,
    4337  const VmaPoolCreateInfo& createInfo);
    4338  ~VmaPool_T();
    4339 
    4340  VmaBlockVector& GetBlockVector() { return m_BlockVector; }
    4341  uint32_t GetId() const { return m_Id; }
    4342  void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }
    4343 
    4344 #if VMA_STATS_STRING_ENABLED
    4345  //void PrintDetailedMap(class VmaStringBuilder& sb);
    4346 #endif
    4347 
    4348 private:
    4349  uint32_t m_Id;
    4350 };
    4351 
    4352 class VmaDefragmentator
    4353 {
    4354  VMA_CLASS_NO_COPY(VmaDefragmentator)
    4355 private:
    4356  const VmaAllocator m_hAllocator;
    4357  VmaBlockVector* const m_pBlockVector;
    4358  uint32_t m_CurrentFrameIndex;
    4359  VkDeviceSize m_BytesMoved;
    4360  uint32_t m_AllocationsMoved;
    4361 
    4362  struct AllocationInfo
    4363  {
    4364  VmaAllocation m_hAllocation;
    4365  VkBool32* m_pChanged;
    4366 
    4367  AllocationInfo() :
    4368  m_hAllocation(VK_NULL_HANDLE),
    4369  m_pChanged(VMA_NULL)
    4370  {
    4371  }
    4372  };
    4373 
    4374  struct AllocationInfoSizeGreater
    4375  {
    4376  bool operator()(const AllocationInfo& lhs, const AllocationInfo& rhs) const
    4377  {
    4378  return lhs.m_hAllocation->GetSize() > rhs.m_hAllocation->GetSize();
    4379  }
    4380  };
    4381 
    4382  // Used between AddAllocation and Defragment.
    4383  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4384 
    4385  struct BlockInfo
    4386  {
    4387  VmaDeviceMemoryBlock* m_pBlock;
    4388  bool m_HasNonMovableAllocations;
    4389  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4390 
    4391  BlockInfo(const VkAllocationCallbacks* pAllocationCallbacks) :
    4392  m_pBlock(VMA_NULL),
    4393  m_HasNonMovableAllocations(true),
    4394  m_Allocations(pAllocationCallbacks),
    4395  m_pMappedDataForDefragmentation(VMA_NULL)
    4396  {
    4397  }
    4398 
    4399  void CalcHasNonMovableAllocations()
    4400  {
    4401  const size_t blockAllocCount = m_pBlock->m_Metadata.GetAllocationCount();
    4402  const size_t defragmentAllocCount = m_Allocations.size();
    4403  m_HasNonMovableAllocations = blockAllocCount != defragmentAllocCount;
    4404  }
    4405 
    4406  void SortAllocationsBySizeDescecnding()
    4407  {
    4408  VMA_SORT(m_Allocations.begin(), m_Allocations.end(), AllocationInfoSizeGreater());
    4409  }
    4410 
    4411  VkResult EnsureMapping(VmaAllocator hAllocator, void** ppMappedData);
    4412  void Unmap(VmaAllocator hAllocator);
    4413 
    4414  private:
    4415  // Not null if mapped for defragmentation only, not originally mapped.
    4416  void* m_pMappedDataForDefragmentation;
    4417  };
    4418 
    4419  struct BlockPointerLess
    4420  {
    4421  bool operator()(const BlockInfo* pLhsBlockInfo, const VmaDeviceMemoryBlock* pRhsBlock) const
    4422  {
    4423  return pLhsBlockInfo->m_pBlock < pRhsBlock;
    4424  }
    4425  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4426  {
    4427  return pLhsBlockInfo->m_pBlock < pRhsBlockInfo->m_pBlock;
    4428  }
    4429  };
    4430 
    4431  // 1. Blocks with some non-movable allocations go first.
    4432  // 2. Blocks with smaller sumFreeSize go first.
    4433  struct BlockInfoCompareMoveDestination
    4434  {
    4435  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4436  {
    4437  if(pLhsBlockInfo->m_HasNonMovableAllocations && !pRhsBlockInfo->m_HasNonMovableAllocations)
    4438  {
    4439  return true;
    4440  }
    4441  if(!pLhsBlockInfo->m_HasNonMovableAllocations && pRhsBlockInfo->m_HasNonMovableAllocations)
    4442  {
    4443  return false;
    4444  }
    4445  if(pLhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize() < pRhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize())
    4446  {
    4447  return true;
    4448  }
    4449  return false;
    4450  }
    4451  };
    4452 
    4453  typedef VmaVector< BlockInfo*, VmaStlAllocator<BlockInfo*> > BlockInfoVector;
    4454  BlockInfoVector m_Blocks;
    4455 
    4456  VkResult DefragmentRound(
    4457  VkDeviceSize maxBytesToMove,
    4458  uint32_t maxAllocationsToMove);
    4459 
    4460  static bool MoveMakesSense(
    4461  size_t dstBlockIndex, VkDeviceSize dstOffset,
    4462  size_t srcBlockIndex, VkDeviceSize srcOffset);
    4463 
    4464 public:
    4465  VmaDefragmentator(
    4466  VmaAllocator hAllocator,
    4467  VmaBlockVector* pBlockVector,
    4468  uint32_t currentFrameIndex);
    4469 
    4470  ~VmaDefragmentator();
    4471 
    4472  VkDeviceSize GetBytesMoved() const { return m_BytesMoved; }
    4473  uint32_t GetAllocationsMoved() const { return m_AllocationsMoved; }
    4474 
    4475  void AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged);
    4476 
    4477  VkResult Defragment(
    4478  VkDeviceSize maxBytesToMove,
    4479  uint32_t maxAllocationsToMove);
    4480 };
    4481 
    4482 // Main allocator object.
    4483 struct VmaAllocator_T
    4484 {
    4485  VMA_CLASS_NO_COPY(VmaAllocator_T)
    4486 public:
    4487  bool m_UseMutex;
    4488  bool m_UseKhrDedicatedAllocation;
    4489  VkDevice m_hDevice;
    4490  bool m_AllocationCallbacksSpecified;
    4491  VkAllocationCallbacks m_AllocationCallbacks;
    4492  VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
    4493 
    4494  // Number of bytes free out of the limit, or VK_WHOLE_SIZE if there is no limit for that heap.
    4495  VkDeviceSize m_HeapSizeLimit[VK_MAX_MEMORY_HEAPS];
    4496  VMA_MUTEX m_HeapSizeLimitMutex;
    4497 
    4498  VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
    4499  VkPhysicalDeviceMemoryProperties m_MemProps;
    4500 
    4501  // Default pools.
    4502  VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
    4503 
    4504  // Each vector is sorted by memory (handle value).
    4505  typedef VmaVector< VmaAllocation, VmaStlAllocator<VmaAllocation> > AllocationVectorType;
    4506  AllocationVectorType* m_pDedicatedAllocations[VK_MAX_MEMORY_TYPES];
    4507  VMA_MUTEX m_DedicatedAllocationsMutex[VK_MAX_MEMORY_TYPES];
    4508 
    4509  VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
    4510  ~VmaAllocator_T();
    4511 
    4512  const VkAllocationCallbacks* GetAllocationCallbacks() const
    4513  {
    4514  return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : 0;
    4515  }
    4516  const VmaVulkanFunctions& GetVulkanFunctions() const
    4517  {
    4518  return m_VulkanFunctions;
    4519  }
    4520 
    4521  VkDeviceSize GetBufferImageGranularity() const
    4522  {
    4523  return VMA_MAX(
    4524  static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
    4525  m_PhysicalDeviceProperties.limits.bufferImageGranularity);
    4526  }
    4527 
    4528  uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
    4529  uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
    4530 
    4531  uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
    4532  {
    4533  VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
    4534  return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
    4535  }
    4536  // True when specific memory type is HOST_VISIBLE but not HOST_COHERENT.
    4537  bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
    4538  {
    4539  return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
    4540  VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    4541  }
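 // Example: a HOST_VISIBLE | HOST_CACHED type masked with
 // (HOST_VISIBLE | HOST_COHERENT) yields just HOST_VISIBLE, so it is
 // non-coherent and mapped writes need explicit flush/invalidate.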
    4542  // Minimum alignment for all allocations in specific memory type.
    4543  VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
    4544  {
    4545  return IsMemoryTypeNonCoherent(memTypeIndex) ?
    4546  VMA_MAX((VkDeviceSize)VMA_DEBUG_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
    4547  (VkDeviceSize)VMA_DEBUG_ALIGNMENT;
    4548  }
    4549 
    4550  bool IsIntegratedGpu() const
    4551  {
    4552  return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
    4553  }
    4554 
    4555  void GetBufferMemoryRequirements(
    4556  VkBuffer hBuffer,
    4557  VkMemoryRequirements& memReq,
    4558  bool& requiresDedicatedAllocation,
    4559  bool& prefersDedicatedAllocation) const;
    4560  void GetImageMemoryRequirements(
    4561  VkImage hImage,
    4562  VkMemoryRequirements& memReq,
    4563  bool& requiresDedicatedAllocation,
    4564  bool& prefersDedicatedAllocation) const;
    4565 
    4566  // Main allocation function.
    4567  VkResult AllocateMemory(
    4568  const VkMemoryRequirements& vkMemReq,
    4569  bool requiresDedicatedAllocation,
    4570  bool prefersDedicatedAllocation,
    4571  VkBuffer dedicatedBuffer,
    4572  VkImage dedicatedImage,
    4573  const VmaAllocationCreateInfo& createInfo,
    4574  VmaSuballocationType suballocType,
    4575  VmaAllocation* pAllocation);
    4576 
    4577  // Main deallocation function.
    4578  void FreeMemory(const VmaAllocation allocation);
    4579 
    4580  void CalculateStats(VmaStats* pStats);
    4581 
    4582 #if VMA_STATS_STRING_ENABLED
    4583  void PrintDetailedMap(class VmaJsonWriter& json);
    4584 #endif
    4585 
    4586  VkResult Defragment(
    4587  VmaAllocation* pAllocations,
    4588  size_t allocationCount,
    4589  VkBool32* pAllocationsChanged,
    4590  const VmaDefragmentationInfo* pDefragmentationInfo,
    4591  VmaDefragmentationStats* pDefragmentationStats);
    4592 
    4593  void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
    4594  bool TouchAllocation(VmaAllocation hAllocation);
    4595 
    4596  VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
    4597  void DestroyPool(VmaPool pool);
    4598  void GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats);
    4599 
    4600  void SetCurrentFrameIndex(uint32_t frameIndex);
    4601 
    4602  void MakePoolAllocationsLost(
    4603  VmaPool hPool,
    4604  size_t* pLostAllocationCount);
    4605 
    4606  void CreateLostAllocation(VmaAllocation* pAllocation);
    4607 
    4608  VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
    4609  void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
    4610 
    4611  VkResult Map(VmaAllocation hAllocation, void** ppData);
    4612  void Unmap(VmaAllocation hAllocation);
    4613 
    4614  VkResult BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer);
    4615  VkResult BindImageMemory(VmaAllocation hAllocation, VkImage hImage);
    4616 
    4617  void FlushOrInvalidateAllocation(
    4618  VmaAllocation hAllocation,
    4619  VkDeviceSize offset, VkDeviceSize size,
    4620  VMA_CACHE_OPERATION op);
    4621 
    4622 private:
    4623  VkDeviceSize m_PreferredLargeHeapBlockSize;
    4624 
    4625  VkPhysicalDevice m_PhysicalDevice;
    4626  VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
    4627 
    4628  VMA_MUTEX m_PoolsMutex;
    4629  // Protected by m_PoolsMutex. Sorted by pointer value.
    4630  VmaVector<VmaPool, VmaStlAllocator<VmaPool> > m_Pools;
    4631  uint32_t m_NextPoolId;
    4632 
    4633  VmaVulkanFunctions m_VulkanFunctions;
    4634 
    4635  void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
    4636 
    4637  VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
    4638 
    4639  VkResult AllocateMemoryOfType(
    4640  VkDeviceSize size,
    4641  VkDeviceSize alignment,
    4642  bool dedicatedAllocation,
    4643  VkBuffer dedicatedBuffer,
    4644  VkImage dedicatedImage,
    4645  const VmaAllocationCreateInfo& createInfo,
    4646  uint32_t memTypeIndex,
    4647  VmaSuballocationType suballocType,
    4648  VmaAllocation* pAllocation);
    4649 
    4650  // Allocates and registers new VkDeviceMemory specifically for a single allocation.
    4651  VkResult AllocateDedicatedMemory(
    4652  VkDeviceSize size,
    4653  VmaSuballocationType suballocType,
    4654  uint32_t memTypeIndex,
    4655  bool map,
    4656  bool isUserDataString,
    4657  void* pUserData,
    4658  VkBuffer dedicatedBuffer,
    4659  VkImage dedicatedImage,
    4660  VmaAllocation* pAllocation);
    4661 
    4662  // Frees dedicated memory of given allocation: unregisters it and releases its VkDeviceMemory.
    4663  void FreeDedicatedMemory(VmaAllocation allocation);
    4664 };
    4665 
    4666 ////////////////////////////////////////////////////////////////////////////////
    4667 // Memory allocation #2 after VmaAllocator_T definition
    4668 
    4669 static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
    4670 {
    4671  return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
    4672 }
    4673 
    4674 static void VmaFree(VmaAllocator hAllocator, void* ptr)
    4675 {
    4676  VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
    4677 }
    4678 
    4679 template<typename T>
    4680 static T* VmaAllocate(VmaAllocator hAllocator)
    4681 {
    4682  return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
    4683 }
    4684 
    4685 template<typename T>
    4686 static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
    4687 {
    4688  return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
    4689 }
    4690 
    4691 template<typename T>
    4692 static void vma_delete(VmaAllocator hAllocator, T* ptr)
    4693 {
    4694  if(ptr != VMA_NULL)
    4695  {
    4696  ptr->~T();
    4697  VmaFree(hAllocator, ptr);
    4698  }
    4699 }
    4700 
    4701 template<typename T>
    4702 static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
    4703 {
    4704  if(ptr != VMA_NULL)
    4705  {
    4706  for(size_t i = count; i--; )
    4707  ptr[i].~T();
    4708  VmaFree(hAllocator, ptr);
    4709  }
    4710 }
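
// Usage sketch (illustrative only): these helpers pair like malloc/free but
// run destructors on delete; construction on the Alloc side is the caller's
// responsibility (typically via placement new).
//
// uint32_t* p = VmaAllocateArray<uint32_t>(hAllocator, 16); // uninitialized
// vma_delete_array(hAllocator, p, 16); // trivial destructors, then VmaFree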
    4711 
    4712 ////////////////////////////////////////////////////////////////////////////////
    4713 // VmaStringBuilder
    4714 
    4715 #if VMA_STATS_STRING_ENABLED
    4716 
    4717 class VmaStringBuilder
    4718 {
    4719 public:
    4720  VmaStringBuilder(VmaAllocator alloc) : m_Data(VmaStlAllocator<char>(alloc->GetAllocationCallbacks())) { }
    4721  size_t GetLength() const { return m_Data.size(); }
    4722  const char* GetData() const { return m_Data.data(); }
    4723 
    4724  void Add(char ch) { m_Data.push_back(ch); }
    4725  void Add(const char* pStr);
    4726  void AddNewLine() { Add('\n'); }
    4727  void AddNumber(uint32_t num);
    4728  void AddNumber(uint64_t num);
    4729  void AddPointer(const void* ptr);
    4730 
    4731 private:
    4732  VmaVector< char, VmaStlAllocator<char> > m_Data;
    4733 };
    4734 
    4735 void VmaStringBuilder::Add(const char* pStr)
    4736 {
    4737  const size_t strLen = strlen(pStr);
    4738  if(strLen > 0)
    4739  {
    4740  const size_t oldCount = m_Data.size();
    4741  m_Data.resize(oldCount + strLen);
    4742  memcpy(m_Data.data() + oldCount, pStr, strLen);
    4743  }
    4744 }
    4745 
    4746 void VmaStringBuilder::AddNumber(uint32_t num)
    4747 {
    4748  char buf[11];
    4749  VmaUint32ToStr(buf, sizeof(buf), num);
    4750  Add(buf);
    4751 }
    4752 
    4753 void VmaStringBuilder::AddNumber(uint64_t num)
    4754 {
    4755  char buf[21];
    4756  VmaUint64ToStr(buf, sizeof(buf), num);
    4757  Add(buf);
    4758 }
    4759 
    4760 void VmaStringBuilder::AddPointer(const void* ptr)
    4761 {
    4762  char buf[21];
    4763  VmaPtrToStr(buf, sizeof(buf), ptr);
    4764  Add(buf);
    4765 }
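
// Usage sketch (illustrative only):
//
// VmaStringBuilder sb(hAllocator);
// sb.Add("Allocations: ");
// sb.AddNumber(42u); // uint32_t overload
// sb.AddNewLine();
// // Note: GetData() is not null-terminated; always pair it with GetLength().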
    4766 
    4767 #endif // #if VMA_STATS_STRING_ENABLED
    4768 
    4769 ////////////////////////////////////////////////////////////////////////////////
    4770 // VmaJsonWriter
    4771 
    4772 #if VMA_STATS_STRING_ENABLED
    4773 
    4774 class VmaJsonWriter
    4775 {
    4776  VMA_CLASS_NO_COPY(VmaJsonWriter)
    4777 public:
    4778  VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
    4779  ~VmaJsonWriter();
    4780 
    4781  void BeginObject(bool singleLine = false);
    4782  void EndObject();
    4783 
    4784  void BeginArray(bool singleLine = false);
    4785  void EndArray();
    4786 
    4787  void WriteString(const char* pStr);
    4788  void BeginString(const char* pStr = VMA_NULL);
    4789  void ContinueString(const char* pStr);
    4790  void ContinueString(uint32_t n);
    4791  void ContinueString(uint64_t n);
    4792  void ContinueString_Pointer(const void* ptr);
    4793  void EndString(const char* pStr = VMA_NULL);
    4794 
    4795  void WriteNumber(uint32_t n);
    4796  void WriteNumber(uint64_t n);
    4797  void WriteBool(bool b);
    4798  void WriteNull();
    4799 
    4800 private:
    4801  static const char* const INDENT;
    4802 
    4803  enum COLLECTION_TYPE
    4804  {
    4805  COLLECTION_TYPE_OBJECT,
    4806  COLLECTION_TYPE_ARRAY,
    4807  };
    4808  struct StackItem
    4809  {
    4810  COLLECTION_TYPE type;
    4811  uint32_t valueCount;
    4812  bool singleLineMode;
    4813  };
    4814 
    4815  VmaStringBuilder& m_SB;
    4816  VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
    4817  bool m_InsideString;
    4818 
    4819  void BeginValue(bool isString);
    4820  void WriteIndent(bool oneLess = false);
    4821 };
    4822 
    4823 const char* const VmaJsonWriter::INDENT = " ";
    4824 
    4825 VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb) :
    4826  m_SB(sb),
    4827  m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
    4828  m_InsideString(false)
    4829 {
    4830 }
    4831 
    4832 VmaJsonWriter::~VmaJsonWriter()
    4833 {
    4834  VMA_ASSERT(!m_InsideString);
    4835  VMA_ASSERT(m_Stack.empty());
    4836 }
    4837 
    4838 void VmaJsonWriter::BeginObject(bool singleLine)
    4839 {
    4840  VMA_ASSERT(!m_InsideString);
    4841 
    4842  BeginValue(false);
    4843  m_SB.Add('{');
    4844 
    4845  StackItem item;
    4846  item.type = COLLECTION_TYPE_OBJECT;
    4847  item.valueCount = 0;
    4848  item.singleLineMode = singleLine;
    4849  m_Stack.push_back(item);
    4850 }
    4851 
    4852 void VmaJsonWriter::EndObject()
    4853 {
    4854  VMA_ASSERT(!m_InsideString);
    4855 
    4856  WriteIndent(true);
    4857  m_SB.Add('}');
    4858 
    4859  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
    4860  m_Stack.pop_back();
    4861 }
    4862 
    4863 void VmaJsonWriter::BeginArray(bool singleLine)
    4864 {
    4865  VMA_ASSERT(!m_InsideString);
    4866 
    4867  BeginValue(false);
    4868  m_SB.Add('[');
    4869 
    4870  StackItem item;
    4871  item.type = COLLECTION_TYPE_ARRAY;
    4872  item.valueCount = 0;
    4873  item.singleLineMode = singleLine;
    4874  m_Stack.push_back(item);
    4875 }
    4876 
    4877 void VmaJsonWriter::EndArray()
    4878 {
    4879  VMA_ASSERT(!m_InsideString);
    4880 
    4881  WriteIndent(true);
    4882  m_SB.Add(']');
    4883 
    4884  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
    4885  m_Stack.pop_back();
    4886 }
    4887 
    4888 void VmaJsonWriter::WriteString(const char* pStr)
    4889 {
    4890  BeginString(pStr);
    4891  EndString();
    4892 }
    4893 
    4894 void VmaJsonWriter::BeginString(const char* pStr)
    4895 {
    4896  VMA_ASSERT(!m_InsideString);
    4897 
    4898  BeginValue(true);
    4899  m_SB.Add('"');
    4900  m_InsideString = true;
    4901  if(pStr != VMA_NULL && pStr[0] != '\0')
    4902  {
    4903  ContinueString(pStr);
    4904  }
    4905 }
    4906 
    4907 void VmaJsonWriter::ContinueString(const char* pStr)
    4908 {
    4909  VMA_ASSERT(m_InsideString);
    4910 
    4911  const size_t strLen = strlen(pStr);
    4912  for(size_t i = 0; i < strLen; ++i)
    4913  {
    4914  char ch = pStr[i];
    4915  if(ch == '\'')
    4916  {
    4917  m_SB.Add("\\\\");
    4918  }
    4919  else if(ch == '"')
    4920  {
    4921  m_SB.Add("\\\"");
    4922  }
    4923  else if(ch >= 32)
    4924  {
    4925  m_SB.Add(ch);
    4926  }
    4927  else switch(ch)
    4928  {
    4929  case '\b':
    4930  m_SB.Add("\\b");
    4931  break;
    4932  case '\f':
    4933  m_SB.Add("\\f");
    4934  break;
    4935  case '\n':
    4936  m_SB.Add("\\n");
    4937  break;
    4938  case '\r':
    4939  m_SB.Add("\\r");
    4940  break;
    4941  case '\t':
    4942  m_SB.Add("\\t");
    4943  break;
    4944  default:
    4945  VMA_ASSERT(0 && "Character not currently supported.");
    4946  break;
    4947  }
    4948  }
    4949 }
    4950 
    4951 void VmaJsonWriter::ContinueString(uint32_t n)
    4952 {
    4953  VMA_ASSERT(m_InsideString);
    4954  m_SB.AddNumber(n);
    4955 }
    4956 
    4957 void VmaJsonWriter::ContinueString(uint64_t n)
    4958 {
    4959  VMA_ASSERT(m_InsideString);
    4960  m_SB.AddNumber(n);
    4961 }
    4962 
    4963 void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
    4964 {
    4965  VMA_ASSERT(m_InsideString);
    4966  m_SB.AddPointer(ptr);
    4967 }
    4968 
    4969 void VmaJsonWriter::EndString(const char* pStr)
    4970 {
    4971  VMA_ASSERT(m_InsideString);
    4972  if(pStr != VMA_NULL && pStr[0] != '\0')
    4973  {
    4974  ContinueString(pStr);
    4975  }
    4976  m_SB.Add('"');
    4977  m_InsideString = false;
    4978 }
    4979 
    4980 void VmaJsonWriter::WriteNumber(uint32_t n)
    4981 {
    4982  VMA_ASSERT(!m_InsideString);
    4983  BeginValue(false);
    4984  m_SB.AddNumber(n);
    4985 }
    4986 
    4987 void VmaJsonWriter::WriteNumber(uint64_t n)
    4988 {
    4989  VMA_ASSERT(!m_InsideString);
    4990  BeginValue(false);
    4991  m_SB.AddNumber(n);
    4992 }
    4993 
    4994 void VmaJsonWriter::WriteBool(bool b)
    4995 {
    4996  VMA_ASSERT(!m_InsideString);
    4997  BeginValue(false);
    4998  m_SB.Add(b ? "true" : "false");
    4999 }
    5000 
    5001 void VmaJsonWriter::WriteNull()
    5002 {
    5003  VMA_ASSERT(!m_InsideString);
    5004  BeginValue(false);
    5005  m_SB.Add("null");
    5006 }
    5007 
    5008 void VmaJsonWriter::BeginValue(bool isString)
    5009 {
    5010  if(!m_Stack.empty())
    5011  {
    5012  StackItem& currItem = m_Stack.back();
    5013  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5014  currItem.valueCount % 2 == 0)
    5015  {
    5016  VMA_ASSERT(isString);
    5017  }
    5018 
    5019  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5020  currItem.valueCount % 2 != 0)
    5021  {
    5022  m_SB.Add(": ");
    5023  }
    5024  else if(currItem.valueCount > 0)
    5025  {
    5026  m_SB.Add(", ");
    5027  WriteIndent();
    5028  }
    5029  else
    5030  {
    5031  WriteIndent();
    5032  }
    5033  ++currItem.valueCount;
    5034  }
    5035 }
    5036 
    5037 void VmaJsonWriter::WriteIndent(bool oneLess)
    5038 {
    5039  if(!m_Stack.empty() && !m_Stack.back().singleLineMode)
    5040  {
    5041  m_SB.AddNewLine();
    5042 
    5043  size_t count = m_Stack.size();
    5044  if(count > 0 && oneLess)
    5045  {
    5046  --count;
    5047  }
    5048  for(size_t i = 0; i < count; ++i)
    5049  {
    5050  m_SB.Add(INDENT);
    5051  }
    5052  }
    5053 }
    5054 
    5055 #endif // #if VMA_STATS_STRING_ENABLED
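
// Illustrative sketch (not part of the library): how VmaJsonWriter is meant to
// be driven. Keys and values inside an object alternate - BeginValue() asserts
// that every even value is a string key. The sketch assumes VmaStringBuilder is
// constructible from allocation callbacks; it is compiled out so it cannot
// affect the library itself.
#if 0
static void VmaJsonWriterExample(const VkAllocationCallbacks* pAllocationCallbacks)
{
    VmaStringBuilder sb(pAllocationCallbacks);
    {
        VmaJsonWriter json(pAllocationCallbacks, sb);
        json.BeginObject();          // {
        json.WriteString("Name");    //   "Name":
        json.WriteString("Example"); //   "Example",
        json.WriteString("Sizes");   //   "Sizes":
        json.BeginArray(true);       //   [
        json.WriteNumber(256u);      //     256,
        json.WriteNumber(1024u);     //     1024
        json.EndArray();             //   ]
        json.EndObject();            // }
    } // ~VmaJsonWriter() asserts that all objects/arrays were properly closed.
    // sb now holds the JSON text.
}
#endif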

////////////////////////////////////////////////////////////////////////////////

void VmaAllocation_T::SetUserData(VmaAllocator hAllocator, void* pUserData)
{
    if(IsUserDataString())
    {
        VMA_ASSERT(pUserData == VMA_NULL || pUserData != m_pUserData);

        FreeUserDataString(hAllocator);

        if(pUserData != VMA_NULL)
        {
            const char* const newStrSrc = (char*)pUserData;
            const size_t newStrLen = strlen(newStrSrc);
            char* const newStrDst = vma_new_array(hAllocator, char, newStrLen + 1);
            memcpy(newStrDst, newStrSrc, newStrLen + 1);
            m_pUserData = newStrDst;
        }
    }
    else
    {
        m_pUserData = pUserData;
    }
}

void VmaAllocation_T::ChangeBlockAllocation(
    VmaAllocator hAllocator,
    VmaDeviceMemoryBlock* block,
    VkDeviceSize offset)
{
    VMA_ASSERT(block != VMA_NULL);
    VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);

    // Move mapping reference counter from old block to new block.
    if(block != m_BlockAllocation.m_Block)
    {
        uint32_t mapRefCount = m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP;
        if(IsPersistentMap())
        {
            ++mapRefCount;
        }
        m_BlockAllocation.m_Block->Unmap(hAllocator, mapRefCount);
        block->Map(hAllocator, mapRefCount, VMA_NULL);
    }

    m_BlockAllocation.m_Block = block;
    m_BlockAllocation.m_Offset = offset;
}

VkDeviceSize VmaAllocation_T::GetOffset() const
{
    switch(m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Offset;
    case ALLOCATION_TYPE_DEDICATED:
        return 0;
    default:
        VMA_ASSERT(0);
        return 0;
    }
}

VkDeviceMemory VmaAllocation_T::GetMemory() const
{
    switch(m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->GetDeviceMemory();
    case ALLOCATION_TYPE_DEDICATED:
        return m_DedicatedAllocation.m_hMemory;
    default:
        VMA_ASSERT(0);
        return VK_NULL_HANDLE;
    }
}

uint32_t VmaAllocation_T::GetMemoryTypeIndex() const
{
    switch(m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->GetMemoryTypeIndex();
    case ALLOCATION_TYPE_DEDICATED:
        return m_DedicatedAllocation.m_MemoryTypeIndex;
    default:
        VMA_ASSERT(0);
        return UINT32_MAX;
    }
}

void* VmaAllocation_T::GetMappedData() const
{
    switch(m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        if(m_MapCount != 0)
        {
            void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
            VMA_ASSERT(pBlockData != VMA_NULL);
            return (char*)pBlockData + m_BlockAllocation.m_Offset;
        }
        else
        {
            return VMA_NULL;
        }
        break;
    case ALLOCATION_TYPE_DEDICATED:
        VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0));
        return m_DedicatedAllocation.m_pMappedData;
    default:
        VMA_ASSERT(0);
        return VMA_NULL;
    }
}

bool VmaAllocation_T::CanBecomeLost() const
{
    switch(m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_CanBecomeLost;
    case ALLOCATION_TYPE_DEDICATED:
        return false;
    default:
        VMA_ASSERT(0);
        return false;
    }
}

VmaPool VmaAllocation_T::GetPool() const
{
    VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    return m_BlockAllocation.m_hPool;
}

bool VmaAllocation_T::MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
{
    VMA_ASSERT(CanBecomeLost());

    /*
    Warning: This is a carefully designed algorithm.
    Do not modify unless you really know what you're doing :)
    */
    uint32_t localLastUseFrameIndex = GetLastUseFrameIndex();
    for(;;)
    {
        if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
        {
            VMA_ASSERT(0);
            return false;
        }
        else if(localLastUseFrameIndex + frameInUseCount >= currentFrameIndex)
        {
            // Allocation was used recently enough - it cannot become lost yet.
            return false;
        }
        else // Last use time earlier than current time.
        {
            if(CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, VMA_FRAME_INDEX_LOST))
            {
                // Setting hAllocation.LastUseFrameIndex atomic to VMA_FRAME_INDEX_LOST is enough to mark it as LOST.
                // Calling code just needs to unregister this allocation in owning VmaDeviceMemoryBlock.
                return true;
            }
            // If the compare-exchange failed, another thread changed the frame
            // index in the meantime - loop and re-evaluate.
        }
    }
}
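
// Worked example of the lost-allocation test above: with currentFrameIndex = 100
// and frameInUseCount = 2, an allocation last used in frame 97 satisfies
// 97 + 2 < 100, so it may become lost; one last used in frame 98 does not
// (98 + 2 >= 100), because the GPU may still be using it.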
    5222 
    5223 #if VMA_STATS_STRING_ENABLED
    5224 
    5225 // Correspond to values of enum VmaSuballocationType.
    5226 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] = {
    5227  "FREE",
    5228  "UNKNOWN",
    5229  "BUFFER",
    5230  "IMAGE_UNKNOWN",
    5231  "IMAGE_LINEAR",
    5232  "IMAGE_OPTIMAL",
    5233 };
    5234 
    5235 void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
    5236 {
    5237  json.WriteString("Type");
    5238  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);
    5239 
    5240  json.WriteString("Size");
    5241  json.WriteNumber(m_Size);
    5242 
    5243  if(m_pUserData != VMA_NULL)
    5244  {
    5245  json.WriteString("UserData");
    5246  if(IsUserDataString())
    5247  {
    5248  json.WriteString((const char*)m_pUserData);
    5249  }
    5250  else
    5251  {
    5252  json.BeginString();
    5253  json.ContinueString_Pointer(m_pUserData);
    5254  json.EndString();
    5255  }
    5256  }
    5257 
    5258  json.WriteString("CreationFrameIndex");
    5259  json.WriteNumber(m_CreationFrameIndex);
    5260 
    5261  json.WriteString("LastUseFrameIndex");
    5262  json.WriteNumber(GetLastUseFrameIndex());
    5263 
    5264  if(m_BufferImageUsage != 0)
    5265  {
    5266  json.WriteString("Usage");
    5267  json.WriteNumber(m_BufferImageUsage);
    5268  }
    5269 }
    5270 
    5271 #endif
    5272 
    5273 void VmaAllocation_T::FreeUserDataString(VmaAllocator hAllocator)
    5274 {
    5275  VMA_ASSERT(IsUserDataString());
    5276  if(m_pUserData != VMA_NULL)
    5277  {
    5278  char* const oldStr = (char*)m_pUserData;
    5279  const size_t oldStrLen = strlen(oldStr);
    5280  vma_delete_array(hAllocator, oldStr, oldStrLen + 1);
    5281  m_pUserData = VMA_NULL;
    5282  }
    5283 }
    5284 
    5285 void VmaAllocation_T::BlockAllocMap()
    5286 {
    5287  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5288 
    5289  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5290  {
    5291  ++m_MapCount;
    5292  }
    5293  else
    5294  {
    5295  VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
    5296  }
    5297 }
    5298 
    5299 void VmaAllocation_T::BlockAllocUnmap()
    5300 {
    5301  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5302 
    5303  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5304  {
    5305  --m_MapCount;
    5306  }
    5307  else
    5308  {
    5309  VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
    5310  }
    5311 }
    5312 
    5313 VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
    5314 {
    5315  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5316 
    5317  if(m_MapCount != 0)
    5318  {
    5319  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5320  {
    5321  VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
    5322  *ppData = m_DedicatedAllocation.m_pMappedData;
    5323  ++m_MapCount;
    5324  return VK_SUCCESS;
    5325  }
    5326  else
    5327  {
    5328  VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
    5329  return VK_ERROR_MEMORY_MAP_FAILED;
    5330  }
    5331  }
    5332  else
    5333  {
    5334  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
    5335  hAllocator->m_hDevice,
    5336  m_DedicatedAllocation.m_hMemory,
    5337  0, // offset
    5338  VK_WHOLE_SIZE,
    5339  0, // flags
    5340  ppData);
    5341  if(result == VK_SUCCESS)
    5342  {
    5343  m_DedicatedAllocation.m_pMappedData = *ppData;
    5344  m_MapCount = 1;
    5345  }
    5346  return result;
    5347  }
    5348 }
    5349 
    5350 void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
    5351 {
    5352  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5353 
    5354  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5355  {
    5356  --m_MapCount;
    5357  if(m_MapCount == 0)
    5358  {
    5359  m_DedicatedAllocation.m_pMappedData = VMA_NULL;
    5360  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
    5361  hAllocator->m_hDevice,
    5362  m_DedicatedAllocation.m_hMemory);
    5363  }
    5364  }
    5365  else
    5366  {
    5367  VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
    5368  }
    5369 }
    5370 
    5371 #if VMA_STATS_STRING_ENABLED
    5372 
    5373 static void VmaPrintStatInfo(VmaJsonWriter& json, const VmaStatInfo& stat)
    5374 {
    5375  json.BeginObject();
    5376 
    5377  json.WriteString("Blocks");
    5378  json.WriteNumber(stat.blockCount);
    5379 
    5380  json.WriteString("Allocations");
    5381  json.WriteNumber(stat.allocationCount);
    5382 
    5383  json.WriteString("UnusedRanges");
    5384  json.WriteNumber(stat.unusedRangeCount);
    5385 
    5386  json.WriteString("UsedBytes");
    5387  json.WriteNumber(stat.usedBytes);
    5388 
    5389  json.WriteString("UnusedBytes");
    5390  json.WriteNumber(stat.unusedBytes);
    5391 
    5392  if(stat.allocationCount > 1)
    5393  {
    5394  json.WriteString("AllocationSize");
    5395  json.BeginObject(true);
    5396  json.WriteString("Min");
    5397  json.WriteNumber(stat.allocationSizeMin);
    5398  json.WriteString("Avg");
    5399  json.WriteNumber(stat.allocationSizeAvg);
    5400  json.WriteString("Max");
    5401  json.WriteNumber(stat.allocationSizeMax);
    5402  json.EndObject();
    5403  }
    5404 
    5405  if(stat.unusedRangeCount > 1)
    5406  {
    5407  json.WriteString("UnusedRangeSize");
    5408  json.BeginObject(true);
    5409  json.WriteString("Min");
    5410  json.WriteNumber(stat.unusedRangeSizeMin);
    5411  json.WriteString("Avg");
    5412  json.WriteNumber(stat.unusedRangeSizeAvg);
    5413  json.WriteString("Max");
    5414  json.WriteNumber(stat.unusedRangeSizeMax);
    5415  json.EndObject();
    5416  }
    5417 
    5418  json.EndObject();
    5419 }
    5420 
    5421 #endif // #if VMA_STATS_STRING_ENABLED
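
// For orientation, VmaPrintStatInfo produces an object of roughly this shape
// (values invented for illustration; the "AllocationSize"/"UnusedRangeSize"
// sub-objects appear only when there is more than one allocation/unused range):
//
// {
//   "Blocks": 1,
//   "Allocations": 2,
//   "UnusedRanges": 2,
//   "UsedBytes": 768,
//   "UnusedBytes": 256,
//   "AllocationSize": { "Min": 256, "Avg": 384, "Max": 512 },
//   "UnusedRangeSize": { "Min": 64, "Avg": 192, "Max": 192 }
// }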

struct VmaSuballocationItemSizeLess
{
    bool operator()(
        const VmaSuballocationList::iterator lhs,
        const VmaSuballocationList::iterator rhs) const
    {
        return lhs->size < rhs->size;
    }
    bool operator()(
        const VmaSuballocationList::iterator lhs,
        VkDeviceSize rhsSize) const
    {
        return lhs->size < rhsSize;
    }
};
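
// The second operator() lets binary search compare a stored iterator directly
// against a plain VkDeviceSize, so a lookup needs no temporary suballocation.
// A sketch of how CreateAllocationRequest uses it further below:
//
//     VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
//         m_FreeSuballocationsBySize.data(),
//         m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
//         allocSize, // searched as a raw size, not as an iterator
//         VmaSuballocationItemSizeLess());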

////////////////////////////////////////////////////////////////////////////////
// class VmaBlockMetadata

VmaBlockMetadata::VmaBlockMetadata(VmaAllocator hAllocator) :
    m_Size(0),
    m_FreeCount(0),
    m_SumFreeSize(0),
    m_Suballocations(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(hAllocator->GetAllocationCallbacks()))
{
}

VmaBlockMetadata::~VmaBlockMetadata()
{
}

void VmaBlockMetadata::Init(VkDeviceSize size)
{
    m_Size = size;
    m_FreeCount = 1;
    m_SumFreeSize = size;

    VmaSuballocation suballoc = {};
    suballoc.offset = 0;
    suballoc.size = size;
    suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    suballoc.hAllocation = VK_NULL_HANDLE;

    m_Suballocations.push_back(suballoc);
    VmaSuballocationList::iterator suballocItem = m_Suballocations.end();
    --suballocItem;
    m_FreeSuballocationsBySize.push_back(suballocItem);
}

bool VmaBlockMetadata::Validate() const
{
    if(m_Suballocations.empty())
    {
        return false;
    }

    // Expected offset of new suballocation as calculated from previous ones.
    VkDeviceSize calculatedOffset = 0;
    // Expected number of free suballocations as calculated from traversing their list.
    uint32_t calculatedFreeCount = 0;
    // Expected sum size of free suballocations as calculated from traversing their list.
    VkDeviceSize calculatedSumFreeSize = 0;
    // Expected number of free suballocations that should be registered in
    // m_FreeSuballocationsBySize calculated from traversing their list.
    size_t freeSuballocationsToRegister = 0;
    // True if previous visited suballocation was free.
    bool prevFree = false;

    for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
        suballocItem != m_Suballocations.cend();
        ++suballocItem)
    {
        const VmaSuballocation& subAlloc = *suballocItem;

        // Actual offset of this suballocation doesn't match expected one.
        if(subAlloc.offset != calculatedOffset)
        {
            return false;
        }

        const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
        // Two adjacent free suballocations are invalid. They should be merged.
        if(prevFree && currFree)
        {
            return false;
        }

        if(currFree != (subAlloc.hAllocation == VK_NULL_HANDLE))
        {
            return false;
        }

        if(currFree)
        {
            calculatedSumFreeSize += subAlloc.size;
            ++calculatedFreeCount;
            if(subAlloc.size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
            {
                ++freeSuballocationsToRegister;
            }
        }
        else
        {
            if(subAlloc.hAllocation->GetOffset() != subAlloc.offset)
            {
                return false;
            }
            if(subAlloc.hAllocation->GetSize() != subAlloc.size)
            {
                return false;
            }
        }

        calculatedOffset += subAlloc.size;
        prevFree = currFree;
    }

    // Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
    // match expected one.
    if(m_FreeSuballocationsBySize.size() != freeSuballocationsToRegister)
    {
        return false;
    }

    VkDeviceSize lastSize = 0;
    for(size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
    {
        VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];

        // Only free suballocations can be registered in m_FreeSuballocationsBySize.
        if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            return false;
        }
        // They must be sorted by size ascending.
        if(suballocItem->size < lastSize)
        {
            return false;
        }

        lastSize = suballocItem->size;
    }

    // Check if totals match calculated values.
    if(!ValidateFreeSuballocationList() ||
        (calculatedOffset != m_Size) ||
        (calculatedSumFreeSize != m_SumFreeSize) ||
        (calculatedFreeCount != m_FreeCount))
    {
        return false;
    }

    return true;
}

VkDeviceSize VmaBlockMetadata::GetUnusedRangeSizeMax() const
{
    if(!m_FreeSuballocationsBySize.empty())
    {
        return m_FreeSuballocationsBySize.back()->size;
    }
    else
    {
        return 0;
    }
}

bool VmaBlockMetadata::IsEmpty() const
{
    return (m_Suballocations.size() == 1) && (m_FreeCount == 1);
}

void VmaBlockMetadata::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
{
    outInfo.blockCount = 1;

    const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
    outInfo.allocationCount = rangeCount - m_FreeCount;
    outInfo.unusedRangeCount = m_FreeCount;

    outInfo.unusedBytes = m_SumFreeSize;
    outInfo.usedBytes = m_Size - outInfo.unusedBytes;

    outInfo.allocationSizeMin = UINT64_MAX;
    outInfo.allocationSizeMax = 0;
    outInfo.unusedRangeSizeMin = UINT64_MAX;
    outInfo.unusedRangeSizeMax = 0;

    for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
        suballocItem != m_Suballocations.cend();
        ++suballocItem)
    {
        const VmaSuballocation& suballoc = *suballocItem;
        if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
            outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);
        }
        else
        {
            outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, suballoc.size);
            outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, suballoc.size);
        }
    }
}

void VmaBlockMetadata::AddPoolStats(VmaPoolStats& inoutStats) const
{
    const uint32_t rangeCount = (uint32_t)m_Suballocations.size();

    inoutStats.size += m_Size;
    inoutStats.unusedSize += m_SumFreeSize;
    inoutStats.allocationCount += rangeCount - m_FreeCount;
    inoutStats.unusedRangeCount += m_FreeCount;
    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, GetUnusedRangeSizeMax());
}

#if VMA_STATS_STRING_ENABLED

void VmaBlockMetadata::PrintDetailedMap(class VmaJsonWriter& json) const
{
    json.BeginObject();

    json.WriteString("TotalBytes");
    json.WriteNumber(m_Size);

    json.WriteString("UnusedBytes");
    json.WriteNumber(m_SumFreeSize);

    json.WriteString("Allocations");
    json.WriteNumber((uint64_t)m_Suballocations.size() - m_FreeCount);

    json.WriteString("UnusedRanges");
    json.WriteNumber(m_FreeCount);

    json.WriteString("Suballocations");
    json.BeginArray();
    size_t i = 0;
    for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
        suballocItem != m_Suballocations.cend();
        ++suballocItem, ++i)
    {
        json.BeginObject(true);

        json.WriteString("Offset");
        json.WriteNumber(suballocItem->offset);

        if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            json.WriteString("Type");
            json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);

            json.WriteString("Size");
            json.WriteNumber(suballocItem->size);
        }
        else
        {
            suballocItem->hAllocation->PrintParameters(json);
        }

        json.EndObject();
    }
    json.EndArray();

    json.EndObject();
}

#endif // #if VMA_STATS_STRING_ENABLED

/*
How many suitable free suballocations to analyze before choosing the best one.
- Set to 1 to use First-Fit algorithm - the first suitable free suballocation
  will be chosen.
- Set to UINT32_MAX to use Best-Fit/Worst-Fit algorithm - all suitable free
  suballocations will be analyzed and the best one will be chosen.
- Any other value is also acceptable.
A worked comparison of the two strategies follows below.
*/
//static const uint32_t MAX_SUITABLE_SUBALLOCATIONS_TO_CHECK = 8;

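// Worked example (illustrative numbers) for the search in
// CreateAllocationRequest below: suppose m_FreeSuballocationsBySize holds free
// ranges of sizes [16, 32, 64, 128] and a request needs 40 bytes. With
// VMA_BEST_FIT, VmaBinaryFindFirstNotLess lands on 64 - the smallest range not
// less than 40 - and larger candidates are tried only if 64 fails the
// alignment/granularity checks in CheckAllocation. Without VMA_BEST_FIT, the
// loop walks from the back of the array, so 128 is tried first (worst-fit),
// trading tighter packing for larger leftover ranges.
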
void VmaBlockMetadata::CreateFirstAllocationRequest(VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(IsEmpty());
    pAllocationRequest->offset = 0;
    pAllocationRequest->sumFreeSize = m_SumFreeSize;
    pAllocationRequest->sumItemSize = 0;
    pAllocationRequest->item = m_Suballocations.begin();
    pAllocationRequest->itemsToMakeLostCount = 0;
}

bool VmaBlockMetadata::CreateAllocationRequest(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VkDeviceSize bufferImageGranularity,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    bool canMakeOtherLost,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(allocSize > 0);
    VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(pAllocationRequest != VMA_NULL);
    VMA_HEAVY_ASSERT(Validate());

    // There is not enough total free space in this block to fulfill the request: Early return.
    if(canMakeOtherLost == false && m_SumFreeSize < allocSize)
    {
        return false;
    }

    // New algorithm, efficiently searching freeSuballocationsBySize.
    const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
    if(freeSuballocCount > 0)
    {
        if(VMA_BEST_FIT)
        {
            // Find first free suballocation with size not less than allocSize.
            VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
                m_FreeSuballocationsBySize.data(),
                m_FreeSuballocationsBySize.data() + freeSuballocCount,
                allocSize,
                VmaSuballocationItemSizeLess());
            size_t index = it - m_FreeSuballocationsBySize.data();
            for(; index < freeSuballocCount; ++index)
            {
                if(CheckAllocation(
                    currentFrameIndex,
                    frameInUseCount,
                    bufferImageGranularity,
                    allocSize,
                    allocAlignment,
                    allocType,
                    m_FreeSuballocationsBySize[index],
                    false, // canMakeOtherLost
                    &pAllocationRequest->offset,
                    &pAllocationRequest->itemsToMakeLostCount,
                    &pAllocationRequest->sumFreeSize,
                    &pAllocationRequest->sumItemSize))
                {
                    pAllocationRequest->item = m_FreeSuballocationsBySize[index];
                    return true;
                }
            }
        }
        else
        {
            // Search starting from biggest suballocations.
            for(size_t index = freeSuballocCount; index--; )
            {
                if(CheckAllocation(
                    currentFrameIndex,
                    frameInUseCount,
                    bufferImageGranularity,
                    allocSize,
                    allocAlignment,
                    allocType,
                    m_FreeSuballocationsBySize[index],
                    false, // canMakeOtherLost
                    &pAllocationRequest->offset,
                    &pAllocationRequest->itemsToMakeLostCount,
                    &pAllocationRequest->sumFreeSize,
                    &pAllocationRequest->sumItemSize))
                {
                    pAllocationRequest->item = m_FreeSuballocationsBySize[index];
                    return true;
                }
            }
        }
    }

    if(canMakeOtherLost)
    {
        // Brute-force algorithm. TODO: Come up with something better.

        pAllocationRequest->sumFreeSize = VK_WHOLE_SIZE;
        pAllocationRequest->sumItemSize = VK_WHOLE_SIZE;

        VmaAllocationRequest tmpAllocRequest = {};
        for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
            suballocIt != m_Suballocations.end();
            ++suballocIt)
        {
            if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
                suballocIt->hAllocation->CanBecomeLost())
            {
                if(CheckAllocation(
                    currentFrameIndex,
                    frameInUseCount,
                    bufferImageGranularity,
                    allocSize,
                    allocAlignment,
                    allocType,
                    suballocIt,
                    canMakeOtherLost,
                    &tmpAllocRequest.offset,
                    &tmpAllocRequest.itemsToMakeLostCount,
                    &tmpAllocRequest.sumFreeSize,
                    &tmpAllocRequest.sumItemSize))
                {
                    tmpAllocRequest.item = suballocIt;

                    if(tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
                    {
                        *pAllocationRequest = tmpAllocRequest;
                    }
                }
            }
        }

        if(pAllocationRequest->sumItemSize != VK_WHOLE_SIZE)
        {
            return true;
        }
    }

    return false;
}

bool VmaBlockMetadata::MakeRequestedAllocationsLost(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VmaAllocationRequest* pAllocationRequest)
{
    while(pAllocationRequest->itemsToMakeLostCount > 0)
    {
        if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            ++pAllocationRequest->item;
        }
        VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
        VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
        VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
        if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
        {
            pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
            --pAllocationRequest->itemsToMakeLostCount;
        }
        else
        {
            return false;
        }
    }

    VMA_HEAVY_ASSERT(Validate());
    VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);

    return true;
}

uint32_t VmaBlockMetadata::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
{
    uint32_t lostAllocationCount = 0;
    for(VmaSuballocationList::iterator it = m_Suballocations.begin();
        it != m_Suballocations.end();
        ++it)
    {
        if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
            it->hAllocation->CanBecomeLost() &&
            it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
        {
            it = FreeSuballocation(it);
            ++lostAllocationCount;
        }
    }
    return lostAllocationCount;
}

void VmaBlockMetadata::Alloc(
    const VmaAllocationRequest& request,
    VmaSuballocationType type,
    VkDeviceSize allocSize,
    VmaAllocation hAllocation)
{
    VMA_ASSERT(request.item != m_Suballocations.end());
    VmaSuballocation& suballoc = *request.item;
    // Given suballocation is a free block.
    VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    // Given offset is inside this suballocation.
    VMA_ASSERT(request.offset >= suballoc.offset);
    const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
    VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
    const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;

    // Unregister this free suballocation from m_FreeSuballocationsBySize and update
    // it to become used.
    UnregisterFreeSuballocation(request.item);

    suballoc.offset = request.offset;
    suballoc.size = allocSize;
    suballoc.type = type;
    suballoc.hAllocation = hAllocation;

    // If there are any free bytes remaining at the end, insert new free suballocation after current one.
    if(paddingEnd)
    {
        VmaSuballocation paddingSuballoc = {};
        paddingSuballoc.offset = request.offset + allocSize;
        paddingSuballoc.size = paddingEnd;
        paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
        VmaSuballocationList::iterator next = request.item;
        ++next;
        const VmaSuballocationList::iterator paddingEndItem =
            m_Suballocations.insert(next, paddingSuballoc);
        RegisterFreeSuballocation(paddingEndItem);
    }

    // If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
    if(paddingBegin)
    {
        VmaSuballocation paddingSuballoc = {};
        paddingSuballoc.offset = request.offset - paddingBegin;
        paddingSuballoc.size = paddingBegin;
        paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
        const VmaSuballocationList::iterator paddingBeginItem =
            m_Suballocations.insert(request.item, paddingSuballoc);
        RegisterFreeSuballocation(paddingBeginItem);
    }

    // Update totals.
    m_FreeCount = m_FreeCount - 1;
    if(paddingBegin > 0)
    {
        ++m_FreeCount;
    }
    if(paddingEnd > 0)
    {
        ++m_FreeCount;
    }
    m_SumFreeSize -= allocSize;
}
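
// Illustration of the split performed by Alloc() above (sizes invented):
// a 1024-byte free suballocation, where request.offset leaves 64 bytes in
// front of a 256-byte allocation, becomes three suballocations:
//
//   before: |<---------------- FREE 1024 ---------------->|
//   after:  |<- FREE 64 ->|<- USED 256 ->|<-- FREE 704 -->|
//
// Both padding pieces are re-registered in m_FreeSuballocationsBySize (when
// large enough), which is why m_FreeCount is decremented once and then
// incremented for each non-empty padding.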

void VmaBlockMetadata::Free(const VmaAllocation allocation)
{
    for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
        suballocItem != m_Suballocations.end();
        ++suballocItem)
    {
        VmaSuballocation& suballoc = *suballocItem;
        if(suballoc.hAllocation == allocation)
        {
            FreeSuballocation(suballocItem);
            VMA_HEAVY_ASSERT(Validate());
            return;
        }
    }
    VMA_ASSERT(0 && "Not found!");
}

void VmaBlockMetadata::FreeAtOffset(VkDeviceSize offset)
{
    for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
        suballocItem != m_Suballocations.end();
        ++suballocItem)
    {
        VmaSuballocation& suballoc = *suballocItem;
        if(suballoc.offset == offset)
        {
            FreeSuballocation(suballocItem);
            return;
        }
    }
    VMA_ASSERT(0 && "Not found!");
}

bool VmaBlockMetadata::ValidateFreeSuballocationList() const
{
    VkDeviceSize lastSize = 0;
    for(size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
    {
        const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];

        if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            VMA_ASSERT(0);
            return false;
        }
        if(it->size < VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
        {
            VMA_ASSERT(0);
            return false;
        }
        if(it->size < lastSize)
        {
            VMA_ASSERT(0);
            return false;
        }

        lastSize = it->size;
    }
    return true;
}

bool VmaBlockMetadata::CheckAllocation(
    uint32_t currentFrameIndex,
    uint32_t frameInUseCount,
    VkDeviceSize bufferImageGranularity,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    VmaSuballocationList::const_iterator suballocItem,
    bool canMakeOtherLost,
    VkDeviceSize* pOffset,
    size_t* itemsToMakeLostCount,
    VkDeviceSize* pSumFreeSize,
    VkDeviceSize* pSumItemSize) const
{
    VMA_ASSERT(allocSize > 0);
    VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(suballocItem != m_Suballocations.cend());
    VMA_ASSERT(pOffset != VMA_NULL);

    *itemsToMakeLostCount = 0;
    *pSumFreeSize = 0;
    *pSumItemSize = 0;

    if(canMakeOtherLost)
    {
        if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            *pSumFreeSize = suballocItem->size;
        }
        else
        {
            if(suballocItem->hAllocation->CanBecomeLost() &&
                suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
            {
                ++*itemsToMakeLostCount;
                *pSumItemSize = suballocItem->size;
            }
            else
            {
                return false;
            }
        }

        // Remaining size is too small for this request: Early return.
        if(m_Size - suballocItem->offset < allocSize)
        {
            return false;
        }

        // Start from offset equal to beginning of this suballocation.
        *pOffset = suballocItem->offset;

        // Apply VMA_DEBUG_MARGIN at the beginning.
        if((VMA_DEBUG_MARGIN > 0) && suballocItem != m_Suballocations.cbegin())
        {
            *pOffset += VMA_DEBUG_MARGIN;
        }

        // Apply alignment.
        *pOffset = VmaAlignUp(*pOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Make bigger alignment if necessary.
        if(bufferImageGranularity > 1)
        {
            bool bufferImageGranularityConflict = false;
            VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
            while(prevSuballocItem != m_Suballocations.cbegin())
            {
                --prevSuballocItem;
                const VmaSuballocation& prevSuballoc = *prevSuballocItem;
                if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                {
                    // Already on previous page.
                    break;
                }
            }
            if(bufferImageGranularityConflict)
            {
                *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
            }
        }

        // Now that we have final *pOffset, check if we are past suballocItem.
        // If yes, return false - this function should be called for another suballocItem as starting point.
        if(*pOffset >= suballocItem->offset + suballocItem->size)
        {
            return false;
        }

        // Calculate padding at the beginning based on current offset.
        const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;

        // Calculate required margin at the end if this is not last suballocation.
        VmaSuballocationList::const_iterator next = suballocItem;
        ++next;
        const VkDeviceSize requiredEndMargin =
            (next != m_Suballocations.cend()) ? VMA_DEBUG_MARGIN : 0;

        const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
        // Another early return check.
        if(suballocItem->offset + totalSize > m_Size)
        {
            return false;
        }

        // Advance lastSuballocItem until desired size is reached.
        // Update itemsToMakeLostCount.
        VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
        if(totalSize > suballocItem->size)
        {
            VkDeviceSize remainingSize = totalSize - suballocItem->size;
            while(remainingSize > 0)
            {
                ++lastSuballocItem;
                if(lastSuballocItem == m_Suballocations.cend())
                {
                    return false;
                }
                if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
                {
                    *pSumFreeSize += lastSuballocItem->size;
                }
                else
                {
                    VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
                    if(lastSuballocItem->hAllocation->CanBecomeLost() &&
                        lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
                    {
                        ++*itemsToMakeLostCount;
                        *pSumItemSize += lastSuballocItem->size;
                    }
                    else
                    {
                        return false;
                    }
                }
                remainingSize = (lastSuballocItem->size < remainingSize) ?
                    remainingSize - lastSuballocItem->size : 0;
            }
        }

        // Check next suballocations for BufferImageGranularity conflicts.
        // If conflict exists, we must mark more allocations lost or fail.
        if(bufferImageGranularity > 1)
        {
            VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
            ++nextSuballocItem;
            while(nextSuballocItem != m_Suballocations.cend())
            {
                const VmaSuballocation& nextSuballoc = *nextSuballocItem;
                if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                    {
                        VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
                        if(nextSuballoc.hAllocation->CanBecomeLost() &&
                            nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
                        {
                            ++*itemsToMakeLostCount;
                        }
                        else
                        {
                            return false;
                        }
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
                ++nextSuballocItem;
            }
        }
    }
    else
    {
        const VmaSuballocation& suballoc = *suballocItem;
        VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);

        *pSumFreeSize = suballoc.size;

        // Size of this suballocation is too small for this request: Early return.
        if(suballoc.size < allocSize)
        {
            return false;
        }

        // Start from offset equal to beginning of this suballocation.
        *pOffset = suballoc.offset;

        // Apply VMA_DEBUG_MARGIN at the beginning.
        if((VMA_DEBUG_MARGIN > 0) && suballocItem != m_Suballocations.cbegin())
        {
            *pOffset += VMA_DEBUG_MARGIN;
        }

        // Apply alignment.
        *pOffset = VmaAlignUp(*pOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Make bigger alignment if necessary.
        if(bufferImageGranularity > 1)
        {
            bool bufferImageGranularityConflict = false;
            VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
            while(prevSuballocItem != m_Suballocations.cbegin())
            {
                --prevSuballocItem;
                const VmaSuballocation& prevSuballoc = *prevSuballocItem;
                if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                {
                    // Already on previous page.
                    break;
                }
            }
            if(bufferImageGranularityConflict)
            {
                *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
            }
        }

        // Calculate padding at the beginning based on current offset.
        const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;

        // Calculate required margin at the end if this is not last suballocation.
        VmaSuballocationList::const_iterator next = suballocItem;
        ++next;
        const VkDeviceSize requiredEndMargin =
            (next != m_Suballocations.cend()) ? VMA_DEBUG_MARGIN : 0;

        // Fail if requested size plus margin before and after is bigger than size of this suballocation.
        if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
        {
            return false;
        }

        // Check next suballocations for BufferImageGranularity conflicts.
        // If conflict exists, allocation cannot be made here.
        if(bufferImageGranularity > 1)
        {
            VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
            ++nextSuballocItem;
            while(nextSuballocItem != m_Suballocations.cend())
            {
                const VmaSuballocation& nextSuballoc = *nextSuballocItem;
                if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                {
                    if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                    {
                        return false;
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
                ++nextSuballocItem;
            }
        }
    }

    // All tests passed: Success. pOffset is already filled.
    return true;
}
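
// Worked example of the margin arithmetic above (illustrative numbers): with
// VMA_DEBUG_MARGIN = 16 and allocAlignment = 256, a free suballocation at
// offset 512 of size 1024, neither first nor last in the block, yields
// *pOffset = 512 + 16 = 528, aligned up to 768, so paddingBegin = 256 and
// requiredEndMargin = 16; the request succeeds only if
// 256 + allocSize + 16 <= 1024. This is the mechanism behind the "Margins"
// debugging feature: the reserved bytes before and after each allocation come
// out of what would otherwise be usable free space.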

void VmaBlockMetadata::MergeFreeWithNext(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item != m_Suballocations.end());
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);

    VmaSuballocationList::iterator nextItem = item;
    ++nextItem;
    VMA_ASSERT(nextItem != m_Suballocations.end());
    VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);

    item->size += nextItem->size;
    --m_FreeCount;
    m_Suballocations.erase(nextItem);
}

VmaSuballocationList::iterator VmaBlockMetadata::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
{
    // Change this suballocation to be marked as free.
    VmaSuballocation& suballoc = *suballocItem;
    suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    suballoc.hAllocation = VK_NULL_HANDLE;

    // Update totals.
    ++m_FreeCount;
    m_SumFreeSize += suballoc.size;

    // Merge with previous and/or next suballocation if it's also free.
    bool mergeWithNext = false;
    bool mergeWithPrev = false;

    VmaSuballocationList::iterator nextItem = suballocItem;
    ++nextItem;
    if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
    {
        mergeWithNext = true;
    }

    VmaSuballocationList::iterator prevItem = suballocItem;
    if(suballocItem != m_Suballocations.begin())
    {
        --prevItem;
        if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            mergeWithPrev = true;
        }
    }

    if(mergeWithNext)
    {
        UnregisterFreeSuballocation(nextItem);
        MergeFreeWithNext(suballocItem);
    }

    if(mergeWithPrev)
    {
        UnregisterFreeSuballocation(prevItem);
        MergeFreeWithNext(prevItem);
        RegisterFreeSuballocation(prevItem);
        return prevItem;
    }
    else
    {
        RegisterFreeSuballocation(suballocItem);
        return suballocItem;
    }
}
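
// The merge cases handled above, pictured (X = the suballocation being freed):
//
//   ... | USED | X | USED | ...  -> X simply becomes FREE.
//   ... | USED | X | FREE | ...  -> X absorbs the next item (mergeWithNext).
//   ... | FREE | X | USED | ...  -> the previous item absorbs X (mergeWithPrev).
//   ... | FREE | X | FREE | ...  -> both merges apply; one FREE item remains.
//
// In every case the surviving item is (re)registered in
// m_FreeSuballocationsBySize so the by-size ordering stays valid.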

void VmaBlockMetadata::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(item->size > 0);

    // You may want to enable this validation at the beginning or at the end of
    // this function, depending on what you want to check.
    VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());

    if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    {
        if(m_FreeSuballocationsBySize.empty())
        {
            m_FreeSuballocationsBySize.push_back(item);
        }
        else
        {
            VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
        }
    }

    //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
}

void VmaBlockMetadata::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
{
    VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(item->size > 0);

    // You may want to enable this validation at the beginning or at the end of
    // this function, depending on what you want to check.
    VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());

    if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    {
        VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
            m_FreeSuballocationsBySize.data(),
            m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
            item,
            VmaSuballocationItemSizeLess());
        // Binary search finds the first entry of equal size; scan forward
        // through the run of equal-sized entries to find this exact iterator.
        for(size_t index = it - m_FreeSuballocationsBySize.data();
            index < m_FreeSuballocationsBySize.size();
            ++index)
        {
            if(m_FreeSuballocationsBySize[index] == item)
            {
                VmaVectorRemove(m_FreeSuballocationsBySize, index);
                return;
            }
            VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
        }
        VMA_ASSERT(0 && "Not found.");
    }

    //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
}

////////////////////////////////////////////////////////////////////////////////
// class VmaDeviceMemoryBlock

VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator) :
    m_Metadata(hAllocator),
    m_MemoryTypeIndex(UINT32_MAX),
    m_Id(0),
    m_hMemory(VK_NULL_HANDLE),
    m_MapCount(0),
    m_pMappedData(VMA_NULL)
{
}

void VmaDeviceMemoryBlock::Init(
    uint32_t newMemoryTypeIndex,
    VkDeviceMemory newMemory,
    VkDeviceSize newSize,
    uint32_t id)
{
    VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);

    m_MemoryTypeIndex = newMemoryTypeIndex;
    m_Id = id;
    m_hMemory = newMemory;

    m_Metadata.Init(newSize);
}

void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
{
    // This is the most important assert in the entire library.
    // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
    VMA_ASSERT(m_Metadata.IsEmpty() && "Some allocations were not freed before destruction of this memory block!");

    VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
    allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_Metadata.GetSize(), m_hMemory);
    m_hMemory = VK_NULL_HANDLE;
}

bool VmaDeviceMemoryBlock::Validate() const
{
    if((m_hMemory == VK_NULL_HANDLE) ||
        (m_Metadata.GetSize() == 0))
    {
        return false;
    }

    return m_Metadata.Validate();
}

VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
{
    if(count == 0)
    {
        return VK_SUCCESS;
    }

    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    if(m_MapCount != 0)
    {
        m_MapCount += count;
        VMA_ASSERT(m_pMappedData != VMA_NULL);
        if(ppData != VMA_NULL)
        {
            *ppData = m_pMappedData;
        }
        return VK_SUCCESS;
    }
    else
    {
        VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
            hAllocator->m_hDevice,
            m_hMemory,
            0, // offset
            VK_WHOLE_SIZE,
            0, // flags
            &m_pMappedData);
        if(result == VK_SUCCESS)
        {
            if(ppData != VMA_NULL)
            {
                *ppData = m_pMappedData;
            }
            m_MapCount = count;
        }
        return result;
    }
}

void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
{
    if(count == 0)
    {
        return;
    }

    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    if(m_MapCount >= count)
    {
        m_MapCount -= count;
        if(m_MapCount == 0)
        {
            m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
        }
    }
    else
    {
        VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
    }
}
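
// Sketch of the reference-counting contract above (not part of the library;
// the function name and variables are invented for illustration): a block-level
// map is shared, so counts from independent callers accumulate and
// vkMapMemory/vkUnmapMemory are each called exactly once. Compiled out.
#if 0
static void VmaBlockMapExample(VmaAllocator hAllocator, VmaDeviceMemoryBlock* pBlock)
{
    void* pData1 = VMA_NULL;
    void* pData2 = VMA_NULL;
    pBlock->Map(hAllocator, 1, &pData1); // m_MapCount 0 -> 1: calls vkMapMemory.
    pBlock->Map(hAllocator, 1, &pData2); // m_MapCount 1 -> 2: reuses m_pMappedData.
    VMA_ASSERT(pData1 == pData2);        // Same base pointer for the whole block.
    pBlock->Unmap(hAllocator, 1);        // m_MapCount 2 -> 1: memory stays mapped.
    pBlock->Unmap(hAllocator, 1);        // m_MapCount 1 -> 0: calls vkUnmapMemory.
}
#endif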

VkResult VmaDeviceMemoryBlock::BindBufferMemory(
    const VmaAllocator hAllocator,
    const VmaAllocation hAllocation,
    VkBuffer hBuffer)
{
    VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
        hAllocation->GetBlock() == this);
    // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    return hAllocator->GetVulkanFunctions().vkBindBufferMemory(
        hAllocator->m_hDevice,
        hBuffer,
        m_hMemory,
        hAllocation->GetOffset());
}

VkResult VmaDeviceMemoryBlock::BindImageMemory(
    const VmaAllocator hAllocator,
    const VmaAllocation hAllocation,
    VkImage hImage)
{
    VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
        hAllocation->GetBlock() == this);
    // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    return hAllocator->GetVulkanFunctions().vkBindImageMemory(
        hAllocator->m_hDevice,
        hImage,
        m_hMemory,
        hAllocation->GetOffset());
}

static void InitStatInfo(VmaStatInfo& outInfo)
{
    memset(&outInfo, 0, sizeof(outInfo));
    outInfo.allocationSizeMin = UINT64_MAX;
    outInfo.unusedRangeSizeMin = UINT64_MAX;
}

// Adds statistics srcInfo into inoutInfo, like: inoutInfo += srcInfo.
static void VmaAddStatInfo(VmaStatInfo& inoutInfo, const VmaStatInfo& srcInfo)
{
    inoutInfo.blockCount += srcInfo.blockCount;
    inoutInfo.allocationCount += srcInfo.allocationCount;
    inoutInfo.unusedRangeCount += srcInfo.unusedRangeCount;
    inoutInfo.usedBytes += srcInfo.usedBytes;
    inoutInfo.unusedBytes += srcInfo.unusedBytes;
    inoutInfo.allocationSizeMin = VMA_MIN(inoutInfo.allocationSizeMin, srcInfo.allocationSizeMin);
    inoutInfo.allocationSizeMax = VMA_MAX(inoutInfo.allocationSizeMax, srcInfo.allocationSizeMax);
    inoutInfo.unusedRangeSizeMin = VMA_MIN(inoutInfo.unusedRangeSizeMin, srcInfo.unusedRangeSizeMin);
    inoutInfo.unusedRangeSizeMax = VMA_MAX(inoutInfo.unusedRangeSizeMax, srcInfo.unusedRangeSizeMax);
}

static void VmaPostprocessCalcStatInfo(VmaStatInfo& inoutInfo)
{
    inoutInfo.allocationSizeAvg = (inoutInfo.allocationCount > 0) ?
        VmaRoundDiv<VkDeviceSize>(inoutInfo.usedBytes, inoutInfo.allocationCount) : 0;
    inoutInfo.unusedRangeSizeAvg = (inoutInfo.unusedRangeCount > 0) ?
        VmaRoundDiv<VkDeviceSize>(inoutInfo.unusedBytes, inoutInfo.unusedRangeCount) : 0;
}
    6594 
    6595 VmaPool_T::VmaPool_T(
    6596  VmaAllocator hAllocator,
    6597  const VmaPoolCreateInfo& createInfo) :
    6598  m_BlockVector(
    6599  hAllocator,
    6600  createInfo.memoryTypeIndex,
    6601  createInfo.blockSize,
    6602  createInfo.minBlockCount,
    6603  createInfo.maxBlockCount,
    6604  (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
    6605  createInfo.frameInUseCount,
    6606  true), // isCustomPool
    6607  m_Id(0)
    6608 {
    6609 }
    6610 
    6611 VmaPool_T::~VmaPool_T()
    6612 {
    6613 }
    6614 
    6615 #if VMA_STATS_STRING_ENABLED
    6616 
    6617 #endif // #if VMA_STATS_STRING_ENABLED
    6618 
    6619 VmaBlockVector::VmaBlockVector(
    6620  VmaAllocator hAllocator,
    6621  uint32_t memoryTypeIndex,
    6622  VkDeviceSize preferredBlockSize,
    6623  size_t minBlockCount,
    6624  size_t maxBlockCount,
    6625  VkDeviceSize bufferImageGranularity,
    6626  uint32_t frameInUseCount,
    6627  bool isCustomPool) :
    6628  m_hAllocator(hAllocator),
    6629  m_MemoryTypeIndex(memoryTypeIndex),
    6630  m_PreferredBlockSize(preferredBlockSize),
    6631  m_MinBlockCount(minBlockCount),
    6632  m_MaxBlockCount(maxBlockCount),
    6633  m_BufferImageGranularity(bufferImageGranularity),
    6634  m_FrameInUseCount(frameInUseCount),
    6635  m_IsCustomPool(isCustomPool),
    6636  m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
    6637  m_HasEmptyBlock(false),
    6638  m_pDefragmentator(VMA_NULL),
    6639  m_NextBlockId(0)
    6640 {
    6641 }
    6642 
    6643 VmaBlockVector::~VmaBlockVector()
    6644 {
    6645  VMA_ASSERT(m_pDefragmentator == VMA_NULL);
    6646 
    6647  for(size_t i = m_Blocks.size(); i--; )
    6648  {
    6649  m_Blocks[i]->Destroy(m_hAllocator);
    6650  vma_delete(m_hAllocator, m_Blocks[i]);
    6651  }
    6652 }
    6653 
    6654 VkResult VmaBlockVector::CreateMinBlocks()
    6655 {
    6656  for(size_t i = 0; i < m_MinBlockCount; ++i)
    6657  {
    6658  VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
    6659  if(res != VK_SUCCESS)
    6660  {
    6661  return res;
    6662  }
    6663  }
    6664  return VK_SUCCESS;
    6665 }
    6666 
    6667 void VmaBlockVector::GetPoolStats(VmaPoolStats* pStats)
    6668 {
    6669  pStats->size = 0;
    6670  pStats->unusedSize = 0;
    6671  pStats->allocationCount = 0;
    6672  pStats->unusedRangeCount = 0;
    6673  pStats->unusedRangeSizeMax = 0;
    6674 
    6675  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6676 
    6677  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    6678  {
    6679  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    6680  VMA_ASSERT(pBlock);
    6681  VMA_HEAVY_ASSERT(pBlock->Validate());
    6682  pBlock->m_Metadata.AddPoolStats(*pStats);
    6683  }
    6684 }
    6685 
    6686 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
    6687 
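// Allocation strategy, in order:
// 1. Suballocate from an existing block, without making other allocations lost.
// 2. Create a new VkDeviceMemory block. Default pools may start below the preferred size
//    (1/2, 1/4, 1/8) while existing blocks are small, and retry with halved sizes if
//    vkAllocateMemory fails; custom pools always use the fixed block size.
// 3. With VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT, make other allocations lost to
//    free up space, retrying up to VMA_ALLOCATION_TRY_COUNT times
//    (VK_ERROR_TOO_MANY_OBJECTS if exceeded).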
    6688 VkResult VmaBlockVector::Allocate(
    6689  VmaPool hCurrentPool,
    6690  uint32_t currentFrameIndex,
    6691  VkDeviceSize size,
    6692  VkDeviceSize alignment,
    6693  const VmaAllocationCreateInfo& createInfo,
    6694  VmaSuballocationType suballocType,
    6695  VmaAllocation* pAllocation)
    6696 {
    6697  const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    6698  const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
    6699 
    6700  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6701 
    6702  // 1. Search existing allocations. Try to allocate without making other allocations lost.
    6703  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    6704  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    6705  {
    6706  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    6707  VMA_ASSERT(pCurrBlock);
    6708  VmaAllocationRequest currRequest = {};
    6709  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    6710  currentFrameIndex,
    6711  m_FrameInUseCount,
    6712  m_BufferImageGranularity,
    6713  size,
    6714  alignment,
    6715  suballocType,
    6716  false, // canMakeOtherLost
    6717  &currRequest))
    6718  {
    6719  // Allocate from pCurrBlock.
    6720  VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);
    6721 
    6722  if(mapped)
    6723  {
    6724  VkResult res = pCurrBlock->Map(m_hAllocator, 1, VMA_NULL);
    6725  if(res != VK_SUCCESS)
    6726  {
    6727  return res;
    6728  }
    6729  }
    6730 
    6731  // We no longer have an empty block.
    6732  if(pCurrBlock->m_Metadata.IsEmpty())
    6733  {
    6734  m_HasEmptyBlock = false;
    6735  }
    6736 
    6737  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    6738  pCurrBlock->m_Metadata.Alloc(currRequest, suballocType, size, *pAllocation);
    6739  (*pAllocation)->InitBlockAllocation(
    6740  hCurrentPool,
    6741  pCurrBlock,
    6742  currRequest.offset,
    6743  alignment,
    6744  size,
    6745  suballocType,
    6746  mapped,
    6747  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    6748  VMA_HEAVY_ASSERT(pCurrBlock->Validate());
    6749  VMA_DEBUG_LOG(" Returned from existing allocation #%u", (uint32_t)blockIndex);
    6750  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    6751  return VK_SUCCESS;
    6752  }
    6753  }
    6754 
    6755  const bool canCreateNewBlock =
    6756  ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
    6757  (m_Blocks.size() < m_MaxBlockCount);
    6758 
    6759  // 2. Try to create new block.
    6760  if(canCreateNewBlock)
    6761  {
    6762  // Calculate optimal size for new block.
    6763  VkDeviceSize newBlockSize = m_PreferredBlockSize;
    6764  uint32_t newBlockSizeShift = 0;
    6765  const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
    6766 
    6767  // Allocating blocks of other sizes is allowed only in default pools.
    6768  // In custom pools block size is fixed.
    6769  if(m_IsCustomPool == false)
    6770  {
    6771  // Allocate 1/8, 1/4, 1/2 as first blocks.
    6772  const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
    6773  for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
    6774  {
    6775  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    6776  if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
    6777  {
    6778  newBlockSize = smallerNewBlockSize;
    6779  ++newBlockSizeShift;
    6780  }
    6781  else
    6782  {
    6783  break;
    6784  }
    6785  }
    6786  }
    6787 
    6788  size_t newBlockIndex = 0;
    6789  VkResult res = CreateBlock(newBlockSize, &newBlockIndex);
    6790  // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
    6791  if(m_IsCustomPool == false)
    6792  {
    6793  while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
    6794  {
    6795  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    6796  if(smallerNewBlockSize >= size)
    6797  {
    6798  newBlockSize = smallerNewBlockSize;
    6799  ++newBlockSizeShift;
    6800  res = CreateBlock(newBlockSize, &newBlockIndex);
    6801  }
    6802  else
    6803  {
    6804  break;
    6805  }
    6806  }
    6807  }
    6808 
    6809  if(res == VK_SUCCESS)
    6810  {
    6811  VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
    6812  VMA_ASSERT(pBlock->m_Metadata.GetSize() >= size);
    6813 
    6814  if(mapped)
    6815  {
    6816  res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
    6817  if(res != VK_SUCCESS)
    6818  {
    6819  return res;
    6820  }
    6821  }
    6822 
    6823  // Allocate from pBlock. Because it is empty, allocRequest can be trivially filled.
    6824  VmaAllocationRequest allocRequest;
    6825  pBlock->m_Metadata.CreateFirstAllocationRequest(&allocRequest);
    6826  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    6827  pBlock->m_Metadata.Alloc(allocRequest, suballocType, size, *pAllocation);
    6828  (*pAllocation)->InitBlockAllocation(
    6829  hCurrentPool,
    6830  pBlock,
    6831  allocRequest.offset,
    6832  alignment,
    6833  size,
    6834  suballocType,
    6835  mapped,
    6836  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    6837  VMA_HEAVY_ASSERT(pBlock->Validate());
    6838  VMA_DEBUG_LOG(" Created new block Size=%llu", newBlockSize);
    6839  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    6840  return VK_SUCCESS;
    6841  }
    6842  }
    6843 
    6844  const bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
    6845 
    6846  // 3. Try to allocate from existing blocks while making other allocations lost.
    6847  if(canMakeOtherLost)
    6848  {
    6849  uint32_t tryIndex = 0;
    6850  for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
    6851  {
    6852  VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
    6853  VmaAllocationRequest bestRequest = {};
    6854  VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;
    6855 
    6856  // 1. Search existing allocations.
    6857  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    6858  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    6859  {
    6860  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    6861  VMA_ASSERT(pCurrBlock);
    6862  VmaAllocationRequest currRequest = {};
    6863  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    6864  currentFrameIndex,
    6865  m_FrameInUseCount,
    6866  m_BufferImageGranularity,
    6867  size,
    6868  alignment,
    6869  suballocType,
    6870  canMakeOtherLost,
    6871  &currRequest))
    6872  {
    6873  const VkDeviceSize currRequestCost = currRequest.CalcCost();
    6874  if(pBestRequestBlock == VMA_NULL ||
    6875  currRequestCost < bestRequestCost)
    6876  {
    6877  pBestRequestBlock = pCurrBlock;
    6878  bestRequest = currRequest;
    6879  bestRequestCost = currRequestCost;
    6880 
    6881  if(bestRequestCost == 0)
    6882  {
    6883  break;
    6884  }
    6885  }
    6886  }
    6887  }
    6888 
    6889  if(pBestRequestBlock != VMA_NULL)
    6890  {
    6891  if(mapped)
    6892  {
    6893  VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
    6894  if(res != VK_SUCCESS)
    6895  {
    6896  return res;
    6897  }
    6898  }
    6899 
    6900  if(pBestRequestBlock->m_Metadata.MakeRequestedAllocationsLost(
    6901  currentFrameIndex,
    6902  m_FrameInUseCount,
    6903  &bestRequest))
    6904  {
    6905  // We no longer have an empty block.
    6906  if(pBestRequestBlock->m_Metadata.IsEmpty())
    6907  {
    6908  m_HasEmptyBlock = false;
    6909  }
    6910  // Allocate from pBestRequestBlock.
    6911  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    6912  pBestRequestBlock->m_Metadata.Alloc(bestRequest, suballocType, size, *pAllocation);
    6913  (*pAllocation)->InitBlockAllocation(
    6914  hCurrentPool,
    6915  pBestRequestBlock,
    6916  bestRequest.offset,
    6917  alignment,
    6918  size,
    6919  suballocType,
    6920  mapped,
    6921  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    6922  VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
    6923  VMA_DEBUG_LOG(" Returned from existing block");
    6924  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    6925  return VK_SUCCESS;
    6926  }
    6927  // else: Some allocations must have been touched while we are here. Next try.
    6928  }
    6929  else
    6930  {
    6931  // Could not find place in any of the blocks - break outer loop.
    6932  break;
    6933  }
    6934  }
    6935  /* Maximum number of tries exceeded - a very unlikely event when many other
    6936  threads are simultaneously touching allocations, making it impossible to make them
    6937  lost at the same time as we try to allocate. */
    6938  if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
    6939  {
    6940  return VK_ERROR_TOO_MANY_OBJECTS;
    6941  }
    6942  }
    6943 
    6944  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    6945 }
    6946 
    6947 void VmaBlockVector::Free(
    6948  VmaAllocation hAllocation)
    6949 {
    6950  VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
    6951 
    6952  // Scope for lock.
    6953  {
    6954  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6955 
    6956  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    6957 
    6958  if(hAllocation->IsPersistentMap())
    6959  {
    6960  pBlock->Unmap(m_hAllocator, 1);
    6961  }
    6962 
    6963  pBlock->m_Metadata.Free(hAllocation);
    6964  VMA_HEAVY_ASSERT(pBlock->Validate());
    6965 
    6966  VMA_DEBUG_LOG(" Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);
    6967 
    6968  // pBlock became empty after this deallocation.
    6969  if(pBlock->m_Metadata.IsEmpty())
    6970  {
    6971  // We already have an empty block - we don't want two, so delete this one.
    6972  if(m_HasEmptyBlock && m_Blocks.size() > m_MinBlockCount)
    6973  {
    6974  pBlockToDelete = pBlock;
    6975  Remove(pBlock);
    6976  }
    6977  // We now have the first empty block.
    6978  else
    6979  {
    6980  m_HasEmptyBlock = true;
    6981  }
    6982  }
    6983  // pBlock didn't become empty, but we have another empty block - find and free that one.
    6984  // (This is optional, heuristics.)
    6985  else if(m_HasEmptyBlock)
    6986  {
    6987  VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
    6988  if(pLastBlock->m_Metadata.IsEmpty() && m_Blocks.size() > m_MinBlockCount)
    6989  {
    6990  pBlockToDelete = pLastBlock;
    6991  m_Blocks.pop_back();
    6992  m_HasEmptyBlock = false;
    6993  }
    6994  }
    6995 
    6996  IncrementallySortBlocks();
    6997  }
    6998 
    6999  // Destruction of a free block. Deferred until this point, outside of the mutex
    7000  // lock, for performance reasons.
    7001  if(pBlockToDelete != VMA_NULL)
    7002  {
    7003  VMA_DEBUG_LOG(" Deleted empty block");
    7004  pBlockToDelete->Destroy(m_hAllocator);
    7005  vma_delete(m_hAllocator, pBlockToDelete);
    7006  }
    7007 }
    7008 
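// Returns the size of the largest existing block, stopping early once a block at least
// as large as m_PreferredBlockSize is found.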
    7009 VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
    7010 {
    7011  VkDeviceSize result = 0;
    7012  for(size_t i = m_Blocks.size(); i--; )
    7013  {
    7014  result = VMA_MAX(result, m_Blocks[i]->m_Metadata.GetSize());
    7015  if(result >= m_PreferredBlockSize)
    7016  {
    7017  break;
    7018  }
    7019  }
    7020  return result;
    7021 }
    7022 
    7023 void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
    7024 {
    7025  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7026  {
    7027  if(m_Blocks[blockIndex] == pBlock)
    7028  {
    7029  VmaVectorRemove(m_Blocks, blockIndex);
    7030  return;
    7031  }
    7032  }
    7033  VMA_ASSERT(0);
    7034 }
    7035 
    7036 void VmaBlockVector::IncrementallySortBlocks()
    7037 {
    7038  // Bubble sort only until first swap.
    7039  for(size_t i = 1; i < m_Blocks.size(); ++i)
    7040  {
    7041  if(m_Blocks[i - 1]->m_Metadata.GetSumFreeSize() > m_Blocks[i]->m_Metadata.GetSumFreeSize())
    7042  {
    7043  VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
    7044  return;
    7045  }
    7046  }
    7047 }
    7048 
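// Allocates a new VkDeviceMemory of blockSize, wraps it in a VmaDeviceMemoryBlock and
// appends it to m_Blocks.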
    7049 VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
    7050 {
    7051  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    7052  allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    7053  allocInfo.allocationSize = blockSize;
    7054  VkDeviceMemory mem = VK_NULL_HANDLE;
    7055  VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    7056  if(res < 0)
    7057  {
    7058  return res;
    7059  }
    7060 
    7061  // New VkDeviceMemory successfully created.
    7062 
    7063  // Create a new block object for it.
    7064  VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    7065  pBlock->Init(
    7066  m_MemoryTypeIndex,
    7067  mem,
    7068  allocInfo.allocationSize,
    7069  m_NextBlockId++);
    7070 
    7071  m_Blocks.push_back(pBlock);
    7072  if(pNewBlockIndex != VMA_NULL)
    7073  {
    7074  *pNewBlockIndex = m_Blocks.size() - 1;
    7075  }
    7076 
    7077  return VK_SUCCESS;
    7078 }
    7079 
    7080 #if VMA_STATS_STRING_ENABLED
    7081 
    7082 void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
    7083 {
    7084  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7085 
    7086  json.BeginObject();
    7087 
    7088  if(m_IsCustomPool)
    7089  {
    7090  json.WriteString("MemoryTypeIndex");
    7091  json.WriteNumber(m_MemoryTypeIndex);
    7092 
    7093  json.WriteString("BlockSize");
    7094  json.WriteNumber(m_PreferredBlockSize);
    7095 
    7096  json.WriteString("BlockCount");
    7097  json.BeginObject(true);
    7098  if(m_MinBlockCount > 0)
    7099  {
    7100  json.WriteString("Min");
    7101  json.WriteNumber((uint64_t)m_MinBlockCount);
    7102  }
    7103  if(m_MaxBlockCount < SIZE_MAX)
    7104  {
    7105  json.WriteString("Max");
    7106  json.WriteNumber((uint64_t)m_MaxBlockCount);
    7107  }
    7108  json.WriteString("Cur");
    7109  json.WriteNumber((uint64_t)m_Blocks.size());
    7110  json.EndObject();
    7111 
    7112  if(m_FrameInUseCount > 0)
    7113  {
    7114  json.WriteString("FrameInUseCount");
    7115  json.WriteNumber(m_FrameInUseCount);
    7116  }
    7117  }
    7118  else
    7119  {
    7120  json.WriteString("PreferredBlockSize");
    7121  json.WriteNumber(m_PreferredBlockSize);
    7122  }
    7123 
    7124  json.WriteString("Blocks");
    7125  json.BeginObject();
    7126  for(size_t i = 0; i < m_Blocks.size(); ++i)
    7127  {
    7128  json.BeginString();
    7129  json.ContinueString(m_Blocks[i]->GetId());
    7130  json.EndString();
    7131 
    7132  m_Blocks[i]->m_Metadata.PrintDetailedMap(json);
    7133  }
    7134  json.EndObject();
    7135 
    7136  json.EndObject();
    7137 }
    7138 
    7139 #endif // #if VMA_STATS_STRING_ENABLED
    7140 
    7141 VmaDefragmentator* VmaBlockVector::EnsureDefragmentator(
    7142  VmaAllocator hAllocator,
    7143  uint32_t currentFrameIndex)
    7144 {
    7145  if(m_pDefragmentator == VMA_NULL)
    7146  {
    7147  m_pDefragmentator = vma_new(m_hAllocator, VmaDefragmentator)(
    7148  hAllocator,
    7149  this,
    7150  currentFrameIndex);
    7151  }
    7152 
    7153  return m_pDefragmentator;
    7154 }
    7155 
    7156 VkResult VmaBlockVector::Defragment(
    7157  VmaDefragmentationStats* pDefragmentationStats,
    7158  VkDeviceSize& maxBytesToMove,
    7159  uint32_t& maxAllocationsToMove)
    7160 {
    7161  if(m_pDefragmentator == VMA_NULL)
    7162  {
    7163  return VK_SUCCESS;
    7164  }
    7165 
    7166  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7167 
    7168  // Defragment.
    7169  VkResult result = m_pDefragmentator->Defragment(maxBytesToMove, maxAllocationsToMove);
    7170 
    7171  // Accumulate statistics.
    7172  if(pDefragmentationStats != VMA_NULL)
    7173  {
    7174  const VkDeviceSize bytesMoved = m_pDefragmentator->GetBytesMoved();
    7175  const uint32_t allocationsMoved = m_pDefragmentator->GetAllocationsMoved();
    7176  pDefragmentationStats->bytesMoved += bytesMoved;
    7177  pDefragmentationStats->allocationsMoved += allocationsMoved;
    7178  VMA_ASSERT(bytesMoved <= maxBytesToMove);
    7179  VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
    7180  maxBytesToMove -= bytesMoved;
    7181  maxAllocationsToMove -= allocationsMoved;
    7182  }
    7183 
    7184  // Free empty blocks.
    7185  m_HasEmptyBlock = false;
    7186  for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
    7187  {
    7188  VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
    7189  if(pBlock->m_Metadata.IsEmpty())
    7190  {
    7191  if(m_Blocks.size() > m_MinBlockCount)
    7192  {
    7193  if(pDefragmentationStats != VMA_NULL)
    7194  {
    7195  ++pDefragmentationStats->deviceMemoryBlocksFreed;
    7196  pDefragmentationStats->bytesFreed += pBlock->m_Metadata.GetSize();
    7197  }
    7198 
    7199  VmaVectorRemove(m_Blocks, blockIndex);
    7200  pBlock->Destroy(m_hAllocator);
    7201  vma_delete(m_hAllocator, pBlock);
    7202  }
    7203  else
    7204  {
    7205  m_HasEmptyBlock = true;
    7206  }
    7207  }
    7208  }
    7209 
    7210  return result;
    7211 }
    7212 
    7213 void VmaBlockVector::DestroyDefragmentator()
    7214 {
    7215  if(m_pDefragmentator != VMA_NULL)
    7216  {
    7217  vma_delete(m_hAllocator, m_pDefragmentator);
    7218  m_pDefragmentator = VMA_NULL;
    7219  }
    7220 }
    7221 
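// Asks each block's metadata to make its eligible allocations lost, sums the number of
// allocations lost, and returns it through pLostAllocationCount.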
    7222 void VmaBlockVector::MakePoolAllocationsLost(
    7223  uint32_t currentFrameIndex,
    7224  size_t* pLostAllocationCount)
    7225 {
    7226  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7227  size_t lostAllocationCount = 0;
    7228  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7229  {
    7230  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7231  VMA_ASSERT(pBlock);
    7232  lostAllocationCount += pBlock->m_Metadata.MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
    7233  }
    7234  if(pLostAllocationCount != VMA_NULL)
    7235  {
    7236  *pLostAllocationCount = lostAllocationCount;
    7237  }
    7238 }
    7239 
    7240 void VmaBlockVector::AddStats(VmaStats* pStats)
    7241 {
    7242  const uint32_t memTypeIndex = m_MemoryTypeIndex;
    7243  const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);
    7244 
    7245  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7246 
    7247  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7248  {
    7249  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7250  VMA_ASSERT(pBlock);
    7251  VMA_HEAVY_ASSERT(pBlock->Validate());
    7252  VmaStatInfo allocationStatInfo;
    7253  pBlock->m_Metadata.CalcAllocationStatInfo(allocationStatInfo);
    7254  VmaAddStatInfo(pStats->total, allocationStatInfo);
    7255  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    7256  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    7257  }
    7258 }
    7259 
    7260 ////////////////////////////////////////////////////////////////////////////////
    7261 // VmaDefragmentator members definition
    7262 
    7263 VmaDefragmentator::VmaDefragmentator(
    7264  VmaAllocator hAllocator,
    7265  VmaBlockVector* pBlockVector,
    7266  uint32_t currentFrameIndex) :
    7267  m_hAllocator(hAllocator),
    7268  m_pBlockVector(pBlockVector),
    7269  m_CurrentFrameIndex(currentFrameIndex),
    7270  m_BytesMoved(0),
    7271  m_AllocationsMoved(0),
    7272  m_Allocations(VmaStlAllocator<AllocationInfo>(hAllocator->GetAllocationCallbacks())),
    7273  m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
    7274 {
    7275 }
    7276 
    7277 VmaDefragmentator::~VmaDefragmentator()
    7278 {
    7279  for(size_t i = m_Blocks.size(); i--; )
    7280  {
    7281  vma_delete(m_hAllocator, m_Blocks[i]);
    7282  }
    7283 }
    7284 
    7285 void VmaDefragmentator::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
    7286 {
    7287  AllocationInfo allocInfo;
    7288  allocInfo.m_hAllocation = hAlloc;
    7289  allocInfo.m_pChanged = pChanged;
    7290  m_Allocations.push_back(allocInfo);
    7291 }
    7292 
    7293 VkResult VmaDefragmentator::BlockInfo::EnsureMapping(VmaAllocator hAllocator, void** ppMappedData)
    7294 {
    7295  // It has already been mapped for defragmentation.
    7296  if(m_pMappedDataForDefragmentation)
    7297  {
    7298  *ppMappedData = m_pMappedDataForDefragmentation;
    7299  return VK_SUCCESS;
    7300  }
    7301 
    7302  // It is already mapped - reuse the existing mapping.
    7303  if(m_pBlock->GetMappedData())
    7304  {
    7305  *ppMappedData = m_pBlock->GetMappedData();
    7306  return VK_SUCCESS;
    7307  }
    7308 
    7309  // Map on first usage.
    7310  VkResult res = m_pBlock->Map(hAllocator, 1, &m_pMappedDataForDefragmentation);
    7311  *ppMappedData = m_pMappedDataForDefragmentation;
    7312  return res;
    7313 }
    7314 
    7315 void VmaDefragmentator::BlockInfo::Unmap(VmaAllocator hAllocator)
    7316 {
    7317  if(m_pMappedDataForDefragmentation != VMA_NULL)
    7318  {
    7319  m_pBlock->Unmap(hAllocator, 1);
    7320  }
    7321 }
    7322 
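// One defragmentation pass: walks allocations starting from the most "source" block
// (back of m_Blocks) and tries to move each one into a preceding block or an earlier
// offset, copying the data with memcpy through mapped pointers. Returns VK_INCOMPLETE
// as soon as maxBytesToMove or maxAllocationsToMove would be exceeded.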
    7323 VkResult VmaDefragmentator::DefragmentRound(
    7324  VkDeviceSize maxBytesToMove,
    7325  uint32_t maxAllocationsToMove)
    7326 {
    7327  if(m_Blocks.empty())
    7328  {
    7329  return VK_SUCCESS;
    7330  }
    7331 
    7332  size_t srcBlockIndex = m_Blocks.size() - 1;
    7333  size_t srcAllocIndex = SIZE_MAX;
    7334  for(;;)
    7335  {
    7336  // 1. Find next allocation to move.
    7337  // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
    7338  // 1.2. Then start from last to first m_Allocations - they are sorted from largest to smallest.
    7339  while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
    7340  {
    7341  if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
    7342  {
    7343  // Finished: no more allocations to process.
    7344  if(srcBlockIndex == 0)
    7345  {
    7346  return VK_SUCCESS;
    7347  }
    7348  else
    7349  {
    7350  --srcBlockIndex;
    7351  srcAllocIndex = SIZE_MAX;
    7352  }
    7353  }
    7354  else
    7355  {
    7356  srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
    7357  }
    7358  }
    7359 
    7360  BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
    7361  AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];
    7362 
    7363  const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
    7364  const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
    7365  const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
    7366  const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();
    7367 
    7368  // 2. Try to find new place for this allocation in preceding or current block.
    7369  for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
    7370  {
    7371  BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
    7372  VmaAllocationRequest dstAllocRequest;
    7373  if(pDstBlockInfo->m_pBlock->m_Metadata.CreateAllocationRequest(
    7374  m_CurrentFrameIndex,
    7375  m_pBlockVector->GetFrameInUseCount(),
    7376  m_pBlockVector->GetBufferImageGranularity(),
    7377  size,
    7378  alignment,
    7379  suballocType,
    7380  false, // canMakeOtherLost
    7381  &dstAllocRequest) &&
    7382  MoveMakesSense(
    7383  dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
    7384  {
    7385  VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);
    7386 
    7387  // Reached limit on number of allocations or bytes to move.
    7388  if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
    7389  (m_BytesMoved + size > maxBytesToMove))
    7390  {
    7391  return VK_INCOMPLETE;
    7392  }
    7393 
    7394  void* pDstMappedData = VMA_NULL;
    7395  VkResult res = pDstBlockInfo->EnsureMapping(m_hAllocator, &pDstMappedData);
    7396  if(res != VK_SUCCESS)
    7397  {
    7398  return res;
    7399  }
    7400 
    7401  void* pSrcMappedData = VMA_NULL;
    7402  res = pSrcBlockInfo->EnsureMapping(m_hAllocator, &pSrcMappedData);
    7403  if(res != VK_SUCCESS)
    7404  {
    7405  return res;
    7406  }
    7407 
    7408  // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
    7409  memcpy(
    7410  reinterpret_cast<char*>(pDstMappedData) + dstAllocRequest.offset,
    7411  reinterpret_cast<char*>(pSrcMappedData) + srcOffset,
    7412  static_cast<size_t>(size));
    7413 
    7414  pDstBlockInfo->m_pBlock->m_Metadata.Alloc(dstAllocRequest, suballocType, size, allocInfo.m_hAllocation);
    7415  pSrcBlockInfo->m_pBlock->m_Metadata.FreeAtOffset(srcOffset);
    7416 
    7417  allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
    7418 
    7419  if(allocInfo.m_pChanged != VMA_NULL)
    7420  {
    7421  *allocInfo.m_pChanged = VK_TRUE;
    7422  }
    7423 
    7424  ++m_AllocationsMoved;
    7425  m_BytesMoved += size;
    7426 
    7427  VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);
    7428 
    7429  break;
    7430  }
    7431  }
    7432 
    7433  // If not processed, this allocInfo remains in pSrcBlockInfo->m_Allocations for the next round.
    7434 
    7435  if(srcAllocIndex > 0)
    7436  {
    7437  --srcAllocIndex;
    7438  }
    7439  else
    7440  {
    7441  if(srcBlockIndex > 0)
    7442  {
    7443  --srcBlockIndex;
    7444  srcAllocIndex = SIZE_MAX;
    7445  }
    7446  else
    7447  {
    7448  return VK_SUCCESS;
    7449  }
    7450  }
    7451  }
    7452 }
    7453 
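// Top-level defragmentation: builds a BlockInfo per block, distributes the registered
// allocations to their blocks (skipping lost ones), sorts blocks from most "destination"
// to most "source", then runs up to two rounds of DefragmentRound() and finally unmaps
// any blocks that were mapped only for defragmentation.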
    7454 VkResult VmaDefragmentator::Defragment(
    7455  VkDeviceSize maxBytesToMove,
    7456  uint32_t maxAllocationsToMove)
    7457 {
    7458  if(m_Allocations.empty())
    7459  {
    7460  return VK_SUCCESS;
    7461  }
    7462 
    7463  // Create block info for each block.
    7464  const size_t blockCount = m_pBlockVector->m_Blocks.size();
    7465  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7466  {
    7467  BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
    7468  pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
    7469  m_Blocks.push_back(pBlockInfo);
    7470  }
    7471 
    7472  // Sort them by m_pBlock pointer value.
    7473  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
    7474 
    7475  // Move each allocation info from m_Allocations to the m_Allocations vector of its block's BlockInfo.
    7476  for(size_t allocIndex = 0, allocCount = m_Allocations.size(); allocIndex < allocCount; ++allocIndex)
    7477  {
    7478  AllocationInfo& allocInfo = m_Allocations[allocIndex];
    7479  // Now that we are inside VmaBlockVector::m_Mutex, we can make the final check that this allocation was not lost.
    7480  if(allocInfo.m_hAllocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    7481  {
    7482  VmaDeviceMemoryBlock* pBlock = allocInfo.m_hAllocation->GetBlock();
    7483  BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
    7484  if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
    7485  {
    7486  (*it)->m_Allocations.push_back(allocInfo);
    7487  }
    7488  else
    7489  {
    7490  VMA_ASSERT(0);
    7491  }
    7492  }
    7493  }
    7494  m_Allocations.clear();
    7495 
    7496  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7497  {
    7498  BlockInfo* pBlockInfo = m_Blocks[blockIndex];
    7499  pBlockInfo->CalcHasNonMovableAllocations();
    7500  pBlockInfo->SortAllocationsBySizeDescecnding();
    7501  }
    7502 
    7503  // Sort m_Blocks this time by the main criterion, from most "destination" to most "source" blocks.
    7504  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());
    7505 
    7506  // Execute defragmentation rounds (the main part).
    7507  VkResult result = VK_SUCCESS;
    7508  for(size_t round = 0; (round < 2) && (result == VK_SUCCESS); ++round)
    7509  {
    7510  result = DefragmentRound(maxBytesToMove, maxAllocationsToMove);
    7511  }
    7512 
    7513  // Unmap blocks that were mapped for defragmentation.
    7514  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7515  {
    7516  m_Blocks[blockIndex]->Unmap(m_hAllocator);
    7517  }
    7518 
    7519  return result;
    7520 }
    7521 
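// A move makes sense only if it transfers data "backwards": a lexicographic comparison of
// (blockIndex, offset) - to an earlier block, or to a smaller offset within the same block.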
    7522 bool VmaDefragmentator::MoveMakesSense(
    7523  size_t dstBlockIndex, VkDeviceSize dstOffset,
    7524  size_t srcBlockIndex, VkDeviceSize srcOffset)
    7525 {
    7526  if(dstBlockIndex < srcBlockIndex)
    7527  {
    7528  return true;
    7529  }
    7530  if(dstBlockIndex > srcBlockIndex)
    7531  {
    7532  return false;
    7533  }
    7534  if(dstOffset < srcOffset)
    7535  {
    7536  return true;
    7537  }
    7538  return false;
    7539 }
    7540 
    7541 ////////////////////////////////////////////////////////////////////////////////
    7542 // VmaAllocator_T
    7543 
    7544 VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
    7545  m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
    7546  m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
    7547  m_hDevice(pCreateInfo->device),
    7548  m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
    7549  m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
    7550  *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
    7551  m_PreferredLargeHeapBlockSize(0),
    7552  m_PhysicalDevice(pCreateInfo->physicalDevice),
    7553  m_CurrentFrameIndex(0),
    7554  m_Pools(VmaStlAllocator<VmaPool>(GetAllocationCallbacks())),
    7555  m_NextPoolId(0)
    7556 {
    7557  VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device);
    7558 
    7559 #if !(VMA_DEDICATED_ALLOCATION)
    7560  if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
    7561  {
    7562  VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
    7563  }
    7564 #endif
    7565 
    7566  memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
    7567  memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    7568  memset(&m_MemProps, 0, sizeof(m_MemProps));
    7569 
    7570  memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    7571  memset(&m_pDedicatedAllocations, 0, sizeof(m_pDedicatedAllocations));
    7572 
    7573  for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    7574  {
    7575  m_HeapSizeLimit[i] = VK_WHOLE_SIZE;
    7576  }
    7577 
    7578  if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
    7579  {
    7580  m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
    7581  m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
    7582  }
    7583 
    7584  ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
    7585 
    7586  (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
    7587  (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
    7588 
    7589  m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
    7590  pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
    7591 
    7592  if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
    7593  {
    7594  for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
    7595  {
    7596  const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
    7597  if(limit != VK_WHOLE_SIZE)
    7598  {
    7599  m_HeapSizeLimit[heapIndex] = limit;
    7600  if(limit < m_MemProps.memoryHeaps[heapIndex].size)
    7601  {
    7602  m_MemProps.memoryHeaps[heapIndex].size = limit;
    7603  }
    7604  }
    7605  }
    7606  }
    7607 
    7608  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    7609  {
    7610  const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
    7611 
    7612  m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
    7613  this,
    7614  memTypeIndex,
    7615  preferredBlockSize,
    7616  0,
    7617  SIZE_MAX,
    7618  GetBufferImageGranularity(),
    7619  pCreateInfo->frameInUseCount,
    7620  false); // isCustomPool
    7621  // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
    7622  // because minBlockCount is 0.
    7623  m_pDedicatedAllocations[memTypeIndex] = vma_new(this, AllocationVectorType)(VmaStlAllocator<VmaAllocation>(GetAllocationCallbacks()));
    7624 
    7625  }
    7626 }
    7627 
    7628 VmaAllocator_T::~VmaAllocator_T()
    7629 {
    7630  VMA_ASSERT(m_Pools.empty());
    7631 
    7632  for(size_t i = GetMemoryTypeCount(); i--; )
    7633  {
    7634  vma_delete(this, m_pDedicatedAllocations[i]);
    7635  vma_delete(this, m_pBlockVectors[i]);
    7636  }
    7637 }
    7638 
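// Fills m_VulkanFunctions from statically linked entry points when
// VMA_STATIC_VULKAN_FUNCTIONS == 1, then overrides them with any non-null pointers
// provided by the user, and finally asserts that every required function is available.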
    7639 void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
    7640 {
    7641 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    7642  m_VulkanFunctions.vkGetPhysicalDeviceProperties = &vkGetPhysicalDeviceProperties;
    7643  m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = &vkGetPhysicalDeviceMemoryProperties;
    7644  m_VulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    7645  m_VulkanFunctions.vkFreeMemory = &vkFreeMemory;
    7646  m_VulkanFunctions.vkMapMemory = &vkMapMemory;
    7647  m_VulkanFunctions.vkUnmapMemory = &vkUnmapMemory;
    7648  m_VulkanFunctions.vkFlushMappedMemoryRanges = &vkFlushMappedMemoryRanges;
    7649  m_VulkanFunctions.vkInvalidateMappedMemoryRanges = &vkInvalidateMappedMemoryRanges;
    7650  m_VulkanFunctions.vkBindBufferMemory = &vkBindBufferMemory;
    7651  m_VulkanFunctions.vkBindImageMemory = &vkBindImageMemory;
    7652  m_VulkanFunctions.vkGetBufferMemoryRequirements = &vkGetBufferMemoryRequirements;
    7653  m_VulkanFunctions.vkGetImageMemoryRequirements = &vkGetImageMemoryRequirements;
    7654  m_VulkanFunctions.vkCreateBuffer = &vkCreateBuffer;
    7655  m_VulkanFunctions.vkDestroyBuffer = &vkDestroyBuffer;
    7656  m_VulkanFunctions.vkCreateImage = &vkCreateImage;
    7657  m_VulkanFunctions.vkDestroyImage = &vkDestroyImage;
    7658 #if VMA_DEDICATED_ALLOCATION
    7659  if(m_UseKhrDedicatedAllocation)
    7660  {
    7661  m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR =
    7662  (PFN_vkGetBufferMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetBufferMemoryRequirements2KHR");
    7663  m_VulkanFunctions.vkGetImageMemoryRequirements2KHR =
    7664  (PFN_vkGetImageMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetImageMemoryRequirements2KHR");
    7665  }
    7666 #endif // #if VMA_DEDICATED_ALLOCATION
    7667 #endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    7668 
    7669 #define VMA_COPY_IF_NOT_NULL(funcName) \
    7670  if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
    7671 
    7672  if(pVulkanFunctions != VMA_NULL)
    7673  {
    7674  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
    7675  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
    7676  VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
    7677  VMA_COPY_IF_NOT_NULL(vkFreeMemory);
    7678  VMA_COPY_IF_NOT_NULL(vkMapMemory);
    7679  VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
    7680  VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
    7681  VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
    7682  VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
    7683  VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
    7684  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
    7685  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
    7686  VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
    7687  VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
    7688  VMA_COPY_IF_NOT_NULL(vkCreateImage);
    7689  VMA_COPY_IF_NOT_NULL(vkDestroyImage);
    7690 #if VMA_DEDICATED_ALLOCATION
    7691  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
    7692  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
    7693 #endif
    7694  }
    7695 
    7696 #undef VMA_COPY_IF_NOT_NULL
    7697 
    7698  // If these asserts are hit, you must either #define VMA_STATIC_VULKAN_FUNCTIONS 1
    7699  // or pass valid pointers as VmaAllocatorCreateInfo::pVulkanFunctions.
    7700  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
    7701  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
    7702  VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
    7703  VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
    7704  VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
    7705  VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
    7706  VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
    7707  VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
    7708  VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
    7709  VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
    7710  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
    7711  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
    7712  VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
    7713  VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
    7714  VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
    7715  VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
    7716 #if VMA_DEDICATED_ALLOCATION
    7717  if(m_UseKhrDedicatedAllocation)
    7718  {
    7719  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
    7720  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
    7721  }
    7722 #endif
    7723 }
    7724 
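// Heaps no larger than VMA_SMALL_HEAP_MAX_SIZE get blocks of 1/8 of the heap size;
// larger heaps use m_PreferredLargeHeapBlockSize.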
    7725 VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
    7726 {
    7727  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    7728  const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
    7729  const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    7730  return isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize;
    7731 }
    7732 
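// Allocates from the given memory type. Dedicated memory is preferred when the caller
// requests or the resource prefers a dedicated allocation, or when the requested size
// exceeds half of the preferred block size; otherwise the default block vector is used,
// with dedicated memory as a fallback on failure.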
    7733 VkResult VmaAllocator_T::AllocateMemoryOfType(
    7734  VkDeviceSize size,
    7735  VkDeviceSize alignment,
    7736  bool dedicatedAllocation,
    7737  VkBuffer dedicatedBuffer,
    7738  VkImage dedicatedImage,
    7739  const VmaAllocationCreateInfo& createInfo,
    7740  uint32_t memTypeIndex,
    7741  VmaSuballocationType suballocType,
    7742  VmaAllocation* pAllocation)
    7743 {
    7744  VMA_ASSERT(pAllocation != VMA_NULL);
    7745  VMA_DEBUG_LOG(" AllocateMemory: MemoryTypeIndex=%u, Size=%llu", memTypeIndex, size);
    7746 
    7747  VmaAllocationCreateInfo finalCreateInfo = createInfo;
    7748 
    7749  // If memory type is not HOST_VISIBLE, disable MAPPED.
    7750  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
    7751  (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    7752  {
    7753  finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
    7754  }
    7755 
    7756  VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
    7757  VMA_ASSERT(blockVector);
    7758 
    7759  const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
    7760  bool preferDedicatedMemory =
    7761  VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
    7762  dedicatedAllocation ||
    7763  // Heuristics: Allocate dedicated memory if requested size is greater than half of preferred block size.
    7764  size > preferredBlockSize / 2;
    7765 
    7766  if(preferDedicatedMemory &&
    7767  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
    7768  finalCreateInfo.pool == VK_NULL_HANDLE)
    7769  {
    7770  finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    7771  }
    7772 
    7773  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
    7774  {
    7775  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    7776  {
    7777  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7778  }
    7779  else
    7780  {
    7781  return AllocateDedicatedMemory(
    7782  size,
    7783  suballocType,
    7784  memTypeIndex,
    7785  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    7786  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    7787  finalCreateInfo.pUserData,
    7788  dedicatedBuffer,
    7789  dedicatedImage,
    7790  pAllocation);
    7791  }
    7792  }
    7793  else
    7794  {
    7795  VkResult res = blockVector->Allocate(
    7796  VK_NULL_HANDLE, // hCurrentPool
    7797  m_CurrentFrameIndex.load(),
    7798  size,
    7799  alignment,
    7800  finalCreateInfo,
    7801  suballocType,
    7802  pAllocation);
    7803  if(res == VK_SUCCESS)
    7804  {
    7805  return res;
    7806  }
    7807 
    7808  // Try dedicated memory as a fallback.
    7809  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    7810  {
    7811  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7812  }
    7813  else
    7814  {
    7815  res = AllocateDedicatedMemory(
    7816  size,
    7817  suballocType,
    7818  memTypeIndex,
    7819  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    7820  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    7821  finalCreateInfo.pUserData,
    7822  dedicatedBuffer,
    7823  dedicatedImage,
    7824  pAllocation);
    7825  if(res == VK_SUCCESS)
    7826  {
    7827  // Succeeded: AllocateDedicatedMemory function already filled pAllocation, nothing more to do here.
    7828  VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
    7829  return VK_SUCCESS;
    7830  }
    7831  else
    7832  {
    7833  // Everything failed: Return error code.
    7834  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    7835  return res;
    7836  }
    7837  }
    7838  }
    7839 }
    7840 
    7841 VkResult VmaAllocator_T::AllocateDedicatedMemory(
    7842  VkDeviceSize size,
    7843  VmaSuballocationType suballocType,
    7844  uint32_t memTypeIndex,
    7845  bool map,
    7846  bool isUserDataString,
    7847  void* pUserData,
    7848  VkBuffer dedicatedBuffer,
    7849  VkImage dedicatedImage,
    7850  VmaAllocation* pAllocation)
    7851 {
    7852  VMA_ASSERT(pAllocation);
    7853 
    7854  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    7855  allocInfo.memoryTypeIndex = memTypeIndex;
    7856  allocInfo.allocationSize = size;
    7857 
    7858 #if VMA_DEDICATED_ALLOCATION
    7859  VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
    7860  if(m_UseKhrDedicatedAllocation)
    7861  {
    7862  if(dedicatedBuffer != VK_NULL_HANDLE)
    7863  {
    7864  VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
    7865  dedicatedAllocInfo.buffer = dedicatedBuffer;
    7866  allocInfo.pNext = &dedicatedAllocInfo;
    7867  }
    7868  else if(dedicatedImage != VK_NULL_HANDLE)
    7869  {
    7870  dedicatedAllocInfo.image = dedicatedImage;
    7871  allocInfo.pNext = &dedicatedAllocInfo;
    7872  }
    7873  }
    7874 #endif // #if VMA_DEDICATED_ALLOCATION
    7875 
    7876  // Allocate VkDeviceMemory.
    7877  VkDeviceMemory hMemory = VK_NULL_HANDLE;
    7878  VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
    7879  if(res < 0)
    7880  {
    7881  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    7882  return res;
    7883  }
    7884 
    7885  void* pMappedData = VMA_NULL;
    7886  if(map)
    7887  {
    7888  res = (*m_VulkanFunctions.vkMapMemory)(
    7889  m_hDevice,
    7890  hMemory,
    7891  0,
    7892  VK_WHOLE_SIZE,
    7893  0,
    7894  &pMappedData);
    7895  if(res < 0)
    7896  {
    7897  VMA_DEBUG_LOG(" vkMapMemory FAILED");
    7898  FreeVulkanMemory(memTypeIndex, size, hMemory);
    7899  return res;
    7900  }
    7901  }
    7902 
    7903  *pAllocation = vma_new(this, VmaAllocation_T)(m_CurrentFrameIndex.load(), isUserDataString);
    7904  (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
    7905  (*pAllocation)->SetUserData(this, pUserData);
    7906 
    7907  // Register it in m_pDedicatedAllocations.
    7908  {
    7909  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    7910  AllocationVectorType* pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    7911  VMA_ASSERT(pDedicatedAllocations);
    7912  VmaVectorInsertSorted<VmaPointerLess>(*pDedicatedAllocations, *pAllocation);
    7913  }
    7914 
    7915  VMA_DEBUG_LOG(" Allocated DedicatedMemory MemoryTypeIndex=#%u", memTypeIndex);
    7916 
    7917  return VK_SUCCESS;
    7918 }
    7919 
    7920 void VmaAllocator_T::GetBufferMemoryRequirements(
    7921  VkBuffer hBuffer,
    7922  VkMemoryRequirements& memReq,
    7923  bool& requiresDedicatedAllocation,
    7924  bool& prefersDedicatedAllocation) const
    7925 {
    7926 #if VMA_DEDICATED_ALLOCATION
    7927  if(m_UseKhrDedicatedAllocation)
    7928  {
    7929  VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
    7930  memReqInfo.buffer = hBuffer;
    7931 
    7932  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    7933 
    7934  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    7935  memReq2.pNext = &memDedicatedReq;
    7936 
    7937  (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    7938 
    7939  memReq = memReq2.memoryRequirements;
    7940  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    7941  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    7942  }
    7943  else
    7944 #endif // #if VMA_DEDICATED_ALLOCATION
    7945  {
    7946  (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
    7947  requiresDedicatedAllocation = false;
    7948  prefersDedicatedAllocation = false;
    7949  }
    7950 }
    7951 
    7952 void VmaAllocator_T::GetImageMemoryRequirements(
    7953  VkImage hImage,
    7954  VkMemoryRequirements& memReq,
    7955  bool& requiresDedicatedAllocation,
    7956  bool& prefersDedicatedAllocation) const
    7957 {
    7958 #if VMA_DEDICATED_ALLOCATION
    7959  if(m_UseKhrDedicatedAllocation)
    7960  {
    7961  VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
    7962  memReqInfo.image = hImage;
    7963 
    7964  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    7965 
    7966  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    7967  memReq2.pNext = &memDedicatedReq;
    7968 
    7969  (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    7970 
    7971  memReq = memReq2.memoryRequirements;
    7972  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    7973  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    7974  }
    7975  else
    7976 #endif // #if VMA_DEDICATED_ALLOCATION
    7977  {
    7978  (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
    7979  requiresDedicatedAllocation = false;
    7980  prefersDedicatedAllocation = false;
    7981  }
    7982 }
    7983 
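// Entry point for all memory allocation: validates flag combinations, then allocates
// either from the user-specified custom pool or, for default pools, iterates over all
// memory types acceptable per vkMemReq.memoryTypeBits until one succeeds.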
    7984 VkResult VmaAllocator_T::AllocateMemory(
    7985  const VkMemoryRequirements& vkMemReq,
    7986  bool requiresDedicatedAllocation,
    7987  bool prefersDedicatedAllocation,
    7988  VkBuffer dedicatedBuffer,
    7989  VkImage dedicatedImage,
    7990  const VmaAllocationCreateInfo& createInfo,
    7991  VmaSuballocationType suballocType,
    7992  VmaAllocation* pAllocation)
    7993 {
    7994  if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
    7995  (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    7996  {
    7997  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
    7998  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7999  }
    8000  if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
    8001  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
    8002  {
    8003  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
    8004  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8005  }
    8006  if(requiresDedicatedAllocation)
    8007  {
    8008  if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8009  {
    8010  VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
    8011  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8012  }
    8013  if(createInfo.pool != VK_NULL_HANDLE)
    8014  {
    8015  VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
    8016  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8017  }
    8018  }
    8019  if((createInfo.pool != VK_NULL_HANDLE) &&
    8020  ((createInfo.flags & (VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT)) != 0))
    8021  {
    8022  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
    8023  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8024  }
    8025 
    8026  if(createInfo.pool != VK_NULL_HANDLE)
    8027  {
    8028  const VkDeviceSize alignmentForPool = VMA_MAX(
    8029  vkMemReq.alignment,
    8030  GetMemoryTypeMinAlignment(createInfo.pool->m_BlockVector.GetMemoryTypeIndex()));
    8031  return createInfo.pool->m_BlockVector.Allocate(
    8032  createInfo.pool,
    8033  m_CurrentFrameIndex.load(),
    8034  vkMemReq.size,
    8035  alignmentForPool,
    8036  createInfo,
    8037  suballocType,
    8038  pAllocation);
    8039  }
    8040  else
    8041  {
    8042  // Bit mask of Vulkan memory types acceptable for this allocation.
    8043  uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
    8044  uint32_t memTypeIndex = UINT32_MAX;
    8045  VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8046  if(res == VK_SUCCESS)
    8047  {
    8048  VkDeviceSize alignmentForMemType = VMA_MAX(
    8049  vkMemReq.alignment,
    8050  GetMemoryTypeMinAlignment(memTypeIndex));
    8051 
    8052  res = AllocateMemoryOfType(
    8053  vkMemReq.size,
    8054  alignmentForMemType,
    8055  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8056  dedicatedBuffer,
    8057  dedicatedImage,
    8058  createInfo,
    8059  memTypeIndex,
    8060  suballocType,
    8061  pAllocation);
    8062  // Succeeded on first try.
    8063  if(res == VK_SUCCESS)
    8064  {
    8065  return res;
    8066  }
    8067  // Allocation from this memory type failed. Try other compatible memory types.
    8068  else
    8069  {
    8070  for(;;)
    8071  {
    8072  // Remove old memTypeIndex from list of possibilities.
    8073  memoryTypeBits &= ~(1u << memTypeIndex);
    8074  // Find alternative memTypeIndex.
    8075  res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8076  if(res == VK_SUCCESS)
    8077  {
    8078  alignmentForMemType = VMA_MAX(
    8079  vkMemReq.alignment,
    8080  GetMemoryTypeMinAlignment(memTypeIndex));
    8081 
    8082  res = AllocateMemoryOfType(
    8083  vkMemReq.size,
    8084  alignmentForMemType,
    8085  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8086  dedicatedBuffer,
    8087  dedicatedImage,
    8088  createInfo,
    8089  memTypeIndex,
    8090  suballocType,
    8091  pAllocation);
    8092  // Allocation from this alternative memory type succeeded.
    8093  if(res == VK_SUCCESS)
    8094  {
    8095  return res;
    8096  }
    8097  // else: Allocation from this memory type failed. Try next one - next loop iteration.
    8098  }
    8099  // No other matching memory type index could be found.
    8100  else
    8101  {
    8102  // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
    8103  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8104  }
    8105  }
    8106  }
    8107  }
8108  // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
    8109  else
    8110  return res;
    8111  }
    8112 }
    8113 
    8114 void VmaAllocator_T::FreeMemory(const VmaAllocation allocation)
    8115 {
    8116  VMA_ASSERT(allocation);
    8117 
    8118  if(allocation->CanBecomeLost() == false ||
    8119  allocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    8120  {
    8121  switch(allocation->GetType())
    8122  {
    8123  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8124  {
    8125  VmaBlockVector* pBlockVector = VMA_NULL;
    8126  VmaPool hPool = allocation->GetPool();
    8127  if(hPool != VK_NULL_HANDLE)
    8128  {
    8129  pBlockVector = &hPool->m_BlockVector;
    8130  }
    8131  else
    8132  {
    8133  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    8134  pBlockVector = m_pBlockVectors[memTypeIndex];
    8135  }
    8136  pBlockVector->Free(allocation);
    8137  }
    8138  break;
    8139  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8140  FreeDedicatedMemory(allocation);
    8141  break;
    8142  default:
    8143  VMA_ASSERT(0);
    8144  }
    8145  }
    8146 
    8147  allocation->SetUserData(this, VMA_NULL);
    8148  vma_delete(this, allocation);
    8149 }
    8150 
    8151 void VmaAllocator_T::CalculateStats(VmaStats* pStats)
    8152 {
    8153  // Initialize.
    8154  InitStatInfo(pStats->total);
    8155  for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
    8156  InitStatInfo(pStats->memoryType[i]);
    8157  for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    8158  InitStatInfo(pStats->memoryHeap[i]);
    8159 
    8160  // Process default pools.
    8161  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8162  {
    8163  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
    8164  VMA_ASSERT(pBlockVector);
    8165  pBlockVector->AddStats(pStats);
    8166  }
    8167 
    8168  // Process custom pools.
    8169  {
    8170  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8171  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
    8172  {
    8173  m_Pools[poolIndex]->GetBlockVector().AddStats(pStats);
    8174  }
    8175  }
    8176 
    8177  // Process dedicated allocations.
    8178  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8179  {
    8180  const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    8181  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8182  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    8183  VMA_ASSERT(pDedicatedAllocVector);
    8184  for(size_t allocIndex = 0, allocCount = pDedicatedAllocVector->size(); allocIndex < allocCount; ++allocIndex)
    8185  {
    8186  VmaStatInfo allocationStatInfo;
    8187  (*pDedicatedAllocVector)[allocIndex]->DedicatedAllocCalcStatsInfo(allocationStatInfo);
    8188  VmaAddStatInfo(pStats->total, allocationStatInfo);
    8189  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    8190  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    8191  }
    8192  }
    8193 
    8194  // Postprocess.
    8195  VmaPostprocessCalcStatInfo(pStats->total);
    8196  for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
    8197  VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
    8198  for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
    8199  VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
    8200 }
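// --- Editorial sketch, not part of the library source -----------------------
// CalculateStats() backs the public vmaCalculateStats(). A minimal consumer,
// assuming a valid `allocator` and that <cstdio> is available:
static void ExamplePrintTotalStats(VmaAllocator allocator)
{
    VmaStats stats;
    vmaCalculateStats(allocator, &stats);
    // stats.total aggregates all memory types and heaps.
    printf("Used: %llu B in %u allocations; free: %llu B across %u blocks.\n",
        (unsigned long long)stats.total.usedBytes,
        stats.total.allocationCount,
        (unsigned long long)stats.total.unusedBytes,
        stats.total.blockCount);
}
// -----------------------------------------------------------------------------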
    8201 
    8202 static const uint32_t VMA_VENDOR_ID_AMD = 4098;
    8203 
    8204 VkResult VmaAllocator_T::Defragment(
    8205  VmaAllocation* pAllocations,
    8206  size_t allocationCount,
    8207  VkBool32* pAllocationsChanged,
    8208  const VmaDefragmentationInfo* pDefragmentationInfo,
    8209  VmaDefragmentationStats* pDefragmentationStats)
    8210 {
    8211  if(pAllocationsChanged != VMA_NULL)
    8212  {
8213  memset(pAllocationsChanged, 0, allocationCount * sizeof(VkBool32));
    8214  }
    8215  if(pDefragmentationStats != VMA_NULL)
    8216  {
    8217  memset(pDefragmentationStats, 0, sizeof(*pDefragmentationStats));
    8218  }
    8219 
    8220  const uint32_t currentFrameIndex = m_CurrentFrameIndex.load();
    8221 
    8222  VmaMutexLock poolsLock(m_PoolsMutex, m_UseMutex);
    8223 
    8224  const size_t poolCount = m_Pools.size();
    8225 
    8226  // Dispatch pAllocations among defragmentators. Create them in BlockVectors when necessary.
    8227  for(size_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    8228  {
    8229  VmaAllocation hAlloc = pAllocations[allocIndex];
    8230  VMA_ASSERT(hAlloc);
    8231  const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
    8232  // DedicatedAlloc cannot be defragmented.
    8233  if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
    8234  // Only HOST_VISIBLE memory types can be defragmented.
    8235  ((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0) &&
    8236  // Lost allocation cannot be defragmented.
    8237  (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
    8238  {
    8239  VmaBlockVector* pAllocBlockVector = VMA_NULL;
    8240 
    8241  const VmaPool hAllocPool = hAlloc->GetPool();
8242  // This allocation belongs to a custom pool.
    8243  if(hAllocPool != VK_NULL_HANDLE)
    8244  {
    8245  pAllocBlockVector = &hAllocPool->GetBlockVector();
    8246  }
8247  // This allocation belongs to the general pool.
    8248  else
    8249  {
    8250  pAllocBlockVector = m_pBlockVectors[memTypeIndex];
    8251  }
    8252 
    8253  VmaDefragmentator* const pDefragmentator = pAllocBlockVector->EnsureDefragmentator(this, currentFrameIndex);
    8254 
    8255  VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
    8256  &pAllocationsChanged[allocIndex] : VMA_NULL;
    8257  pDefragmentator->AddAllocation(hAlloc, pChanged);
    8258  }
    8259  }
    8260 
    8261  VkResult result = VK_SUCCESS;
    8262 
    8263  // ======== Main processing.
    8264 
    8265  VkDeviceSize maxBytesToMove = SIZE_MAX;
    8266  uint32_t maxAllocationsToMove = UINT32_MAX;
    8267  if(pDefragmentationInfo != VMA_NULL)
    8268  {
    8269  maxBytesToMove = pDefragmentationInfo->maxBytesToMove;
    8270  maxAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
    8271  }
    8272 
    8273  // Process standard memory.
    8274  for(uint32_t memTypeIndex = 0;
    8275  (memTypeIndex < GetMemoryTypeCount()) && (result == VK_SUCCESS);
    8276  ++memTypeIndex)
    8277  {
    8278  // Only HOST_VISIBLE memory types can be defragmented.
    8279  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8280  {
    8281  result = m_pBlockVectors[memTypeIndex]->Defragment(
    8282  pDefragmentationStats,
    8283  maxBytesToMove,
    8284  maxAllocationsToMove);
    8285  }
    8286  }
    8287 
    8288  // Process custom pools.
    8289  for(size_t poolIndex = 0; (poolIndex < poolCount) && (result == VK_SUCCESS); ++poolIndex)
    8290  {
    8291  result = m_Pools[poolIndex]->GetBlockVector().Defragment(
    8292  pDefragmentationStats,
    8293  maxBytesToMove,
    8294  maxAllocationsToMove);
    8295  }
    8296 
    8297  // ======== Destroy defragmentators.
    8298 
    8299  // Process custom pools.
    8300  for(size_t poolIndex = poolCount; poolIndex--; )
    8301  {
    8302  m_Pools[poolIndex]->GetBlockVector().DestroyDefragmentator();
    8303  }
    8304 
    8305  // Process standard memory.
    8306  for(uint32_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
    8307  {
    8308  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8309  {
    8310  m_pBlockVectors[memTypeIndex]->DestroyDefragmentator();
    8311  }
    8312  }
    8313 
    8314  return result;
    8315 }
    8316 
    8317 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
    8318 {
    8319  if(hAllocation->CanBecomeLost())
    8320  {
    8321  /*
    8322  Warning: This is a carefully designed algorithm.
    8323  Do not modify unless you really know what you're doing :)
    8324  */
    8325  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8326  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8327  for(;;)
    8328  {
    8329  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8330  {
    8331  pAllocationInfo->memoryType = UINT32_MAX;
    8332  pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
    8333  pAllocationInfo->offset = 0;
    8334  pAllocationInfo->size = hAllocation->GetSize();
    8335  pAllocationInfo->pMappedData = VMA_NULL;
    8336  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8337  return;
    8338  }
    8339  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8340  {
    8341  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8342  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8343  pAllocationInfo->offset = hAllocation->GetOffset();
    8344  pAllocationInfo->size = hAllocation->GetSize();
    8345  pAllocationInfo->pMappedData = VMA_NULL;
    8346  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8347  return;
    8348  }
    8349  else // Last use time earlier than current time.
    8350  {
    8351  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8352  {
    8353  localLastUseFrameIndex = localCurrFrameIndex;
    8354  }
    8355  }
    8356  }
    8357  }
    8358  else
    8359  {
    8360 #if VMA_STATS_STRING_ENABLED
    8361  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8362  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8363  for(;;)
    8364  {
    8365  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8366  if(localLastUseFrameIndex == localCurrFrameIndex)
    8367  {
    8368  break;
    8369  }
    8370  else // Last use time earlier than current time.
    8371  {
    8372  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8373  {
    8374  localLastUseFrameIndex = localCurrFrameIndex;
    8375  }
    8376  }
    8377  }
    8378 #endif
    8379 
    8380  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8381  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8382  pAllocationInfo->offset = hAllocation->GetOffset();
    8383  pAllocationInfo->size = hAllocation->GetSize();
    8384  pAllocationInfo->pMappedData = hAllocation->GetMappedData();
    8385  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8386  }
    8387 }
    8388 
    8389 bool VmaAllocator_T::TouchAllocation(VmaAllocation hAllocation)
    8390 {
    8391  // This is a stripped-down version of VmaAllocator_T::GetAllocationInfo.
    8392  if(hAllocation->CanBecomeLost())
    8393  {
    8394  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8395  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8396  for(;;)
    8397  {
    8398  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8399  {
    8400  return false;
    8401  }
    8402  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8403  {
    8404  return true;
    8405  }
    8406  else // Last use time earlier than current time.
    8407  {
    8408  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8409  {
    8410  localLastUseFrameIndex = localCurrFrameIndex;
    8411  }
    8412  }
    8413  }
    8414  }
    8415  else
    8416  {
    8417 #if VMA_STATS_STRING_ENABLED
    8418  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8419  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8420  for(;;)
    8421  {
    8422  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8423  if(localLastUseFrameIndex == localCurrFrameIndex)
    8424  {
    8425  break;
    8426  }
    8427  else // Last use time earlier than current time.
    8428  {
    8429  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8430  {
    8431  localLastUseFrameIndex = localCurrFrameIndex;
    8432  }
    8433  }
    8434  }
    8435 #endif
    8436 
    8437  return true;
    8438  }
    8439 }
    8440 
    8441 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
    8442 {
    8443  VMA_DEBUG_LOG(" CreatePool: MemoryTypeIndex=%u", pCreateInfo->memoryTypeIndex);
    8444 
    8445  VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
    8446 
    8447  if(newCreateInfo.maxBlockCount == 0)
    8448  {
    8449  newCreateInfo.maxBlockCount = SIZE_MAX;
    8450  }
    8451  if(newCreateInfo.blockSize == 0)
    8452  {
    8453  newCreateInfo.blockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
    8454  }
    8455 
    8456  *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo);
    8457 
    8458  VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
    8459  if(res != VK_SUCCESS)
    8460  {
    8461  vma_delete(this, *pPool);
    8462  *pPool = VMA_NULL;
    8463  return res;
    8464  }
    8465 
    8466  // Add to m_Pools.
    8467  {
    8468  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8469  (*pPool)->SetId(m_NextPoolId++);
    8470  VmaVectorInsertSorted<VmaPointerLess>(m_Pools, *pPool);
    8471  }
    8472 
    8473  return VK_SUCCESS;
    8474 }
    8475 
    8476 void VmaAllocator_T::DestroyPool(VmaPool pool)
    8477 {
    8478  // Remove from m_Pools.
    8479  {
    8480  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8481  bool success = VmaVectorRemoveSorted<VmaPointerLess>(m_Pools, pool);
    8482  VMA_ASSERT(success && "Pool not found in Allocator.");
    8483  }
    8484 
    8485  vma_delete(this, pool);
    8486 }
    8487 
    8488 void VmaAllocator_T::GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats)
    8489 {
    8490  pool->m_BlockVector.GetPoolStats(pPoolStats);
    8491 }
    8492 
    8493 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
    8494 {
    8495  m_CurrentFrameIndex.store(frameIndex);
    8496 }
    8497 
    8498 void VmaAllocator_T::MakePoolAllocationsLost(
    8499  VmaPool hPool,
    8500  size_t* pLostAllocationCount)
    8501 {
    8502  hPool->m_BlockVector.MakePoolAllocationsLost(
    8503  m_CurrentFrameIndex.load(),
    8504  pLostAllocationCount);
    8505 }
    8506 
    8507 void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
    8508 {
    8509  *pAllocation = vma_new(this, VmaAllocation_T)(VMA_FRAME_INDEX_LOST, false);
    8510  (*pAllocation)->InitLost();
    8511 }
    8512 
    8513 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
    8514 {
    8515  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
    8516 
    8517  VkResult res;
    8518  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8519  {
    8520  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8521  if(m_HeapSizeLimit[heapIndex] >= pAllocateInfo->allocationSize)
    8522  {
    8523  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8524  if(res == VK_SUCCESS)
    8525  {
    8526  m_HeapSizeLimit[heapIndex] -= pAllocateInfo->allocationSize;
    8527  }
    8528  }
    8529  else
    8530  {
    8531  res = VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8532  }
    8533  }
    8534  else
    8535  {
    8536  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8537  }
    8538 
    8539  if(res == VK_SUCCESS && m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
    8540  {
    8541  (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize);
    8542  }
    8543 
    8544  return res;
    8545 }
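// --- Editorial sketch, not part of the library source -----------------------
// The m_HeapSizeLimit bookkeeping above implements the optional
// VmaAllocatorCreateInfo::pHeapSizeLimit feature, where VK_WHOLE_SIZE means
// "no limit". A minimal setup capping heap 0 at 256 MiB; the physicalDevice
// and device handles are assumed to exist already:
static VkResult ExampleCreateAllocatorWithHeapLimit(
    VkPhysicalDevice physicalDevice, VkDevice device, VmaAllocator* pAllocator)
{
    VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
    for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
        heapSizeLimit[i] = VK_WHOLE_SIZE; // No limit by default.
    heapSizeLimit[0] = 256ull * 1024 * 1024; // Cap heap 0 at 256 MiB.

    VmaAllocatorCreateInfo createInfo = {};
    createInfo.physicalDevice = physicalDevice;
    createInfo.device = device;
    createInfo.pHeapSizeLimit = heapSizeLimit;
    return vmaCreateAllocator(&createInfo, pAllocator);
}
// -----------------------------------------------------------------------------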
    8546 
    8547 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
    8548 {
    8549  if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
    8550  {
    8551  (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size);
    8552  }
    8553 
    8554  (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
    8555 
    8556  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
    8557  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8558  {
    8559  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8560  m_HeapSizeLimit[heapIndex] += size;
    8561  }
    8562 }
    8563 
    8564 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
    8565 {
    8566  if(hAllocation->CanBecomeLost())
    8567  {
    8568  return VK_ERROR_MEMORY_MAP_FAILED;
    8569  }
    8570 
    8571  switch(hAllocation->GetType())
    8572  {
    8573  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8574  {
    8575  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    8576  char *pBytes = VMA_NULL;
    8577  VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
    8578  if(res == VK_SUCCESS)
    8579  {
    8580  *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
    8581  hAllocation->BlockAllocMap();
    8582  }
    8583  return res;
    8584  }
    8585  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8586  return hAllocation->DedicatedAllocMap(this, ppData);
    8587  default:
    8588  VMA_ASSERT(0);
    8589  return VK_ERROR_MEMORY_MAP_FAILED;
    8590  }
    8591 }
    8592 
    8593 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
    8594 {
    8595  switch(hAllocation->GetType())
    8596  {
    8597  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8598  {
    8599  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    8600  hAllocation->BlockAllocUnmap();
    8601  pBlock->Unmap(this, 1);
    8602  }
    8603  break;
    8604  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8605  hAllocation->DedicatedAllocUnmap(this);
    8606  break;
    8607  default:
    8608  VMA_ASSERT(0);
    8609  }
    8610 }
    8611 
    8612 VkResult VmaAllocator_T::BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer)
    8613 {
    8614  VkResult res = VK_SUCCESS;
    8615  switch(hAllocation->GetType())
    8616  {
    8617  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8618  res = GetVulkanFunctions().vkBindBufferMemory(
    8619  m_hDevice,
    8620  hBuffer,
    8621  hAllocation->GetMemory(),
    8622  0); //memoryOffset
    8623  break;
    8624  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8625  {
    8626  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    8627  VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block. Is the allocation lost?");
    8628  res = pBlock->BindBufferMemory(this, hAllocation, hBuffer);
    8629  break;
    8630  }
    8631  default:
    8632  VMA_ASSERT(0);
    8633  }
    8634  return res;
    8635 }
    8636 
    8637 VkResult VmaAllocator_T::BindImageMemory(VmaAllocation hAllocation, VkImage hImage)
    8638 {
    8639  VkResult res = VK_SUCCESS;
    8640  switch(hAllocation->GetType())
    8641  {
    8642  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8643  res = GetVulkanFunctions().vkBindImageMemory(
    8644  m_hDevice,
    8645  hImage,
    8646  hAllocation->GetMemory(),
    8647  0); //memoryOffset
    8648  break;
    8649  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8650  {
    8651  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    8652  VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block. Is the allocation lost?");
    8653  res = pBlock->BindImageMemory(this, hAllocation, hImage);
    8654  break;
    8655  }
    8656  default:
    8657  VMA_ASSERT(0);
    8658  }
    8659  return res;
    8660 }
    8661 
    8662 void VmaAllocator_T::FlushOrInvalidateAllocation(
    8663  VmaAllocation hAllocation,
    8664  VkDeviceSize offset, VkDeviceSize size,
    8665  VMA_CACHE_OPERATION op)
    8666 {
    8667  const uint32_t memTypeIndex = hAllocation->GetMemoryTypeIndex();
    8668  if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
    8669  {
    8670  const VkDeviceSize allocationSize = hAllocation->GetSize();
    8671  VMA_ASSERT(offset <= allocationSize);
    8672 
    8673  const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
    8674 
    8675  VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
    8676  memRange.memory = hAllocation->GetMemory();
    8677 
    8678  switch(hAllocation->GetType())
    8679  {
    8680  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8681  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    8682  if(size == VK_WHOLE_SIZE)
    8683  {
    8684  memRange.size = allocationSize - memRange.offset;
    8685  }
    8686  else
    8687  {
    8688  VMA_ASSERT(offset + size <= allocationSize);
    8689  memRange.size = VMA_MIN(
    8690  VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize),
    8691  allocationSize - memRange.offset);
    8692  }
    8693  break;
    8694 
    8695  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8696  {
    8697  // 1. Still within this allocation.
    8698  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    8699  if(size == VK_WHOLE_SIZE)
    8700  {
    8701  size = allocationSize - offset;
    8702  }
    8703  else
    8704  {
    8705  VMA_ASSERT(offset + size <= allocationSize);
    8706  }
    8707  memRange.size = VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize);
    8708 
    8709  // 2. Adjust to whole block.
    8710  const VkDeviceSize allocationOffset = hAllocation->GetOffset();
    8711  VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
    8712  const VkDeviceSize blockSize = hAllocation->GetBlock()->m_Metadata.GetSize();
    8713  memRange.offset += allocationOffset;
    8714  memRange.size = VMA_MIN(memRange.size, blockSize - memRange.offset);
    8715 
    8716  break;
    8717  }
    8718 
    8719  default:
    8720  VMA_ASSERT(0);
    8721  }
    8722 
    8723  switch(op)
    8724  {
    8725  case VMA_CACHE_FLUSH:
    8726  (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
    8727  break;
    8728  case VMA_CACHE_INVALIDATE:
    8729  (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
    8730  break;
    8731  default:
    8732  VMA_ASSERT(0);
    8733  }
    8734  }
    8735  // else: Just ignore this call.
    8736 }
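// --- Editorial note, not part of the library source --------------------------
// Worked example of the rounding above, assuming nonCoherentAtomSize = 64,
// offset = 100, size = 200, within a sufficiently large dedicated allocation:
//   memRange.offset = VmaAlignDown(100, 64)            = 64
//   memRange.size   = VmaAlignUp(200 + (100 - 64), 64) = VmaAlignUp(236, 64) = 256
// The flushed/invalidated range [64, 320) fully covers the requested range
// [100, 300), satisfying the atom-alignment requirements of
// vkFlushMappedMemoryRanges / vkInvalidateMappedMemoryRanges.
// -----------------------------------------------------------------------------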
    8737 
    8738 void VmaAllocator_T::FreeDedicatedMemory(VmaAllocation allocation)
    8739 {
    8740  VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
    8741 
    8742  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    8743  {
    8744  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8745  AllocationVectorType* const pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    8746  VMA_ASSERT(pDedicatedAllocations);
    8747  bool success = VmaVectorRemoveSorted<VmaPointerLess>(*pDedicatedAllocations, allocation);
    8748  VMA_ASSERT(success);
    8749  }
    8750 
    8751  VkDeviceMemory hMemory = allocation->GetMemory();
    8752 
    8753  if(allocation->GetMappedData() != VMA_NULL)
    8754  {
    8755  (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
    8756  }
    8757 
    8758  FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
    8759 
    8760  VMA_DEBUG_LOG(" Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
    8761 }
    8762 
    8763 #if VMA_STATS_STRING_ENABLED
    8764 
    8765 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
    8766 {
    8767  bool dedicatedAllocationsStarted = false;
    8768  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8769  {
    8770  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8771  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    8772  VMA_ASSERT(pDedicatedAllocVector);
    8773  if(pDedicatedAllocVector->empty() == false)
    8774  {
    8775  if(dedicatedAllocationsStarted == false)
    8776  {
    8777  dedicatedAllocationsStarted = true;
    8778  json.WriteString("DedicatedAllocations");
    8779  json.BeginObject();
    8780  }
    8781 
    8782  json.BeginString("Type ");
    8783  json.ContinueString(memTypeIndex);
    8784  json.EndString();
    8785 
    8786  json.BeginArray();
    8787 
    8788  for(size_t i = 0; i < pDedicatedAllocVector->size(); ++i)
    8789  {
    8790  json.BeginObject(true);
    8791  const VmaAllocation hAlloc = (*pDedicatedAllocVector)[i];
    8792  hAlloc->PrintParameters(json);
    8793  json.EndObject();
    8794  }
    8795 
    8796  json.EndArray();
    8797  }
    8798  }
    8799  if(dedicatedAllocationsStarted)
    8800  {
    8801  json.EndObject();
    8802  }
    8803 
    8804  {
    8805  bool allocationsStarted = false;
    8806  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8807  {
    8808  if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
    8809  {
    8810  if(allocationsStarted == false)
    8811  {
    8812  allocationsStarted = true;
    8813  json.WriteString("DefaultPools");
    8814  json.BeginObject();
    8815  }
    8816 
    8817  json.BeginString("Type ");
    8818  json.ContinueString(memTypeIndex);
    8819  json.EndString();
    8820 
    8821  m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
    8822  }
    8823  }
    8824  if(allocationsStarted)
    8825  {
    8826  json.EndObject();
    8827  }
    8828  }
    8829 
    8830  {
    8831  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8832  const size_t poolCount = m_Pools.size();
    8833  if(poolCount > 0)
    8834  {
    8835  json.WriteString("Pools");
    8836  json.BeginObject();
    8837  for(size_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
    8838  {
    8839  json.BeginString();
    8840  json.ContinueString(m_Pools[poolIndex]->GetId());
    8841  json.EndString();
    8842 
    8843  m_Pools[poolIndex]->m_BlockVector.PrintDetailedMap(json);
    8844  }
    8845  json.EndObject();
    8846  }
    8847  }
    8848 }
    8849 
    8850 #endif // #if VMA_STATS_STRING_ENABLED
    8851 
    8852 static VkResult AllocateMemoryForImage(
    8853  VmaAllocator allocator,
    8854  VkImage image,
    8855  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    8856  VmaSuballocationType suballocType,
    8857  VmaAllocation* pAllocation)
    8858 {
    8859  VMA_ASSERT(allocator && (image != VK_NULL_HANDLE) && pAllocationCreateInfo && pAllocation);
    8860 
    8861  VkMemoryRequirements vkMemReq = {};
    8862  bool requiresDedicatedAllocation = false;
    8863  bool prefersDedicatedAllocation = false;
    8864  allocator->GetImageMemoryRequirements(image, vkMemReq,
    8865  requiresDedicatedAllocation, prefersDedicatedAllocation);
    8866 
    8867  return allocator->AllocateMemory(
    8868  vkMemReq,
    8869  requiresDedicatedAllocation,
    8870  prefersDedicatedAllocation,
    8871  VK_NULL_HANDLE, // dedicatedBuffer
    8872  image, // dedicatedImage
    8873  *pAllocationCreateInfo,
    8874  suballocType,
    8875  pAllocation);
    8876 }
    8877 
8878 ////////////////////////////////////////////////////////////////////////////////
    8879 // Public interface
    8880 
    8881 VkResult vmaCreateAllocator(
    8882  const VmaAllocatorCreateInfo* pCreateInfo,
    8883  VmaAllocator* pAllocator)
    8884 {
    8885  VMA_ASSERT(pCreateInfo && pAllocator);
    8886  VMA_DEBUG_LOG("vmaCreateAllocator");
    8887  *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
    8888  return VK_SUCCESS;
    8889 }
    8890 
    8891 void vmaDestroyAllocator(
    8892  VmaAllocator allocator)
    8893 {
    8894  if(allocator != VK_NULL_HANDLE)
    8895  {
    8896  VMA_DEBUG_LOG("vmaDestroyAllocator");
    8897  VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
    8898  vma_delete(&allocationCallbacks, allocator);
    8899  }
    8900 }
    8901 
8902 void vmaGetPhysicalDeviceProperties(
    8903  VmaAllocator allocator,
    8904  const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
    8905 {
    8906  VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
    8907  *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
    8908 }
    8909 
8910 void vmaGetMemoryProperties(
    8911  VmaAllocator allocator,
    8912  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
    8913 {
    8914  VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
    8915  *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
    8916 }
    8917 
8918 void vmaGetMemoryTypeProperties(
    8919  VmaAllocator allocator,
    8920  uint32_t memoryTypeIndex,
    8921  VkMemoryPropertyFlags* pFlags)
    8922 {
    8923  VMA_ASSERT(allocator && pFlags);
    8924  VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
    8925  *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
    8926 }
    8927 
8928 void vmaSetCurrentFrameIndex(
    8929  VmaAllocator allocator,
    8930  uint32_t frameIndex)
    8931 {
    8932  VMA_ASSERT(allocator);
    8933  VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);
    8934 
    8935  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    8936 
    8937  allocator->SetCurrentFrameIndex(frameIndex);
    8938 }
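// --- Editorial sketch, not part of the library source -----------------------
// The frame index drives the lost-allocation mechanism: allocations created
// with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT compare their last-use frame
// against this value. A minimal per-frame loop shape, `allocator` assumed:
static void ExampleAdvanceFrames(VmaAllocator allocator, uint32_t frameCount)
{
    for(uint32_t frameIndex = 0; frameIndex < frameCount; ++frameIndex)
    {
        vmaSetCurrentFrameIndex(allocator, frameIndex);
        // ... record and submit this frame's work; calling vmaTouchAllocation()
        // or vmaGetAllocationInfo() here marks lostable allocations as used ...
    }
}
// -----------------------------------------------------------------------------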
    8939 
    8940 void vmaCalculateStats(
    8941  VmaAllocator allocator,
    8942  VmaStats* pStats)
    8943 {
    8944  VMA_ASSERT(allocator && pStats);
    8945  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    8946  allocator->CalculateStats(pStats);
    8947 }
    8948 
    8949 #if VMA_STATS_STRING_ENABLED
    8950 
    8951 void vmaBuildStatsString(
    8952  VmaAllocator allocator,
    8953  char** ppStatsString,
    8954  VkBool32 detailedMap)
    8955 {
    8956  VMA_ASSERT(allocator && ppStatsString);
    8957  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    8958 
    8959  VmaStringBuilder sb(allocator);
    8960  {
    8961  VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
    8962  json.BeginObject();
    8963 
    8964  VmaStats stats;
    8965  allocator->CalculateStats(&stats);
    8966 
    8967  json.WriteString("Total");
    8968  VmaPrintStatInfo(json, stats.total);
    8969 
    8970  for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
    8971  {
    8972  json.BeginString("Heap ");
    8973  json.ContinueString(heapIndex);
    8974  json.EndString();
    8975  json.BeginObject();
    8976 
    8977  json.WriteString("Size");
    8978  json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);
    8979 
    8980  json.WriteString("Flags");
    8981  json.BeginArray(true);
    8982  if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
    8983  {
    8984  json.WriteString("DEVICE_LOCAL");
    8985  }
    8986  json.EndArray();
    8987 
    8988  if(stats.memoryHeap[heapIndex].blockCount > 0)
    8989  {
    8990  json.WriteString("Stats");
    8991  VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
    8992  }
    8993 
    8994  for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
    8995  {
    8996  if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
    8997  {
    8998  json.BeginString("Type ");
    8999  json.ContinueString(typeIndex);
    9000  json.EndString();
    9001 
    9002  json.BeginObject();
    9003 
    9004  json.WriteString("Flags");
    9005  json.BeginArray(true);
    9006  VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
    9007  if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
    9008  {
    9009  json.WriteString("DEVICE_LOCAL");
    9010  }
    9011  if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    9012  {
    9013  json.WriteString("HOST_VISIBLE");
    9014  }
    9015  if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
    9016  {
    9017  json.WriteString("HOST_COHERENT");
    9018  }
    9019  if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
    9020  {
    9021  json.WriteString("HOST_CACHED");
    9022  }
    9023  if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
    9024  {
    9025  json.WriteString("LAZILY_ALLOCATED");
    9026  }
    9027  json.EndArray();
    9028 
    9029  if(stats.memoryType[typeIndex].blockCount > 0)
    9030  {
    9031  json.WriteString("Stats");
    9032  VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
    9033  }
    9034 
    9035  json.EndObject();
    9036  }
    9037  }
    9038 
    9039  json.EndObject();
    9040  }
    9041  if(detailedMap == VK_TRUE)
    9042  {
    9043  allocator->PrintDetailedMap(json);
    9044  }
    9045 
    9046  json.EndObject();
    9047  }
    9048 
    9049  const size_t len = sb.GetLength();
    9050  char* const pChars = vma_new_array(allocator, char, len + 1);
    9051  if(len > 0)
    9052  {
    9053  memcpy(pChars, sb.GetData(), len);
    9054  }
    9055  pChars[len] = '\0';
    9056  *ppStatsString = pChars;
    9057 }
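// --- Editorial sketch, not part of the library source -----------------------
// Typical round trip for the JSON dump; the returned string must be released
// with vmaFreeStatsString(). Assumes a valid `allocator` and <cstdio>:
static void ExampleDumpStatsJson(VmaAllocator allocator)
{
    char* statsString = VMA_NULL;
    vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE = include detailed map.
    printf("%s\n", statsString);
    vmaFreeStatsString(allocator, statsString);
}
// -----------------------------------------------------------------------------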
    9058 
    9059 void vmaFreeStatsString(
    9060  VmaAllocator allocator,
    9061  char* pStatsString)
    9062 {
    9063  if(pStatsString != VMA_NULL)
    9064  {
    9065  VMA_ASSERT(allocator);
    9066  size_t len = strlen(pStatsString);
    9067  vma_delete_array(allocator, pStatsString, len + 1);
    9068  }
    9069 }
    9070 
    9071 #endif // #if VMA_STATS_STRING_ENABLED
    9072 
    9073 /*
    9074 This function is not protected by any mutex because it just reads immutable data.
    9075 */
    9076 VkResult vmaFindMemoryTypeIndex(
    9077  VmaAllocator allocator,
    9078  uint32_t memoryTypeBits,
    9079  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9080  uint32_t* pMemoryTypeIndex)
    9081 {
    9082  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9083  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9084  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9085 
    9086  if(pAllocationCreateInfo->memoryTypeBits != 0)
    9087  {
    9088  memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
    9089  }
    9090 
    9091  uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
    9092  uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
    9093 
    9094  const bool mapped = (pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    9095  if(mapped)
    9096  {
    9097  preferredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9098  }
    9099 
    9100  // Convert usage to requiredFlags and preferredFlags.
    9101  switch(pAllocationCreateInfo->usage)
    9102  {
9103  case VMA_MEMORY_USAGE_UNKNOWN:
    9104  break;
9105  case VMA_MEMORY_USAGE_GPU_ONLY:
    9106  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9107  {
    9108  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9109  }
    9110  break;
9111  case VMA_MEMORY_USAGE_CPU_ONLY:
    9112  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    9113  break;
9114  case VMA_MEMORY_USAGE_CPU_TO_GPU:
    9115  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9116  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9117  {
    9118  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9119  }
    9120  break;
9121  case VMA_MEMORY_USAGE_GPU_TO_CPU:
    9122  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9123  preferredFlags |= VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
    9124  break;
    9125  default:
    9126  break;
    9127  }
    9128 
    9129  *pMemoryTypeIndex = UINT32_MAX;
    9130  uint32_t minCost = UINT32_MAX;
    9131  for(uint32_t memTypeIndex = 0, memTypeBit = 1;
    9132  memTypeIndex < allocator->GetMemoryTypeCount();
    9133  ++memTypeIndex, memTypeBit <<= 1)
    9134  {
    9135  // This memory type is acceptable according to memoryTypeBits bitmask.
    9136  if((memTypeBit & memoryTypeBits) != 0)
    9137  {
    9138  const VkMemoryPropertyFlags currFlags =
    9139  allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
    9140  // This memory type contains requiredFlags.
    9141  if((requiredFlags & ~currFlags) == 0)
    9142  {
    9143  // Calculate cost as number of bits from preferredFlags not present in this memory type.
    9144  uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags);
    9145  // Remember memory type with lowest cost.
    9146  if(currCost < minCost)
    9147  {
    9148  *pMemoryTypeIndex = memTypeIndex;
    9149  if(currCost == 0)
    9150  {
    9151  return VK_SUCCESS;
    9152  }
    9153  minCost = currCost;
    9154  }
    9155  }
    9156  }
    9157  }
    9158  return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
    9159 }
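// --- Editorial sketch, not part of the library source -----------------------
// Querying a memory type up front, e.g. to fill VmaPoolCreateInfo. The
// memoryTypeBits value would normally come from vkGet*MemoryRequirements;
// UINT32_MAX here means "any memory type is acceptable":
static VkResult ExampleFindStagingMemoryType(VmaAllocator allocator, uint32_t* pMemTypeIndex)
{
    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY; // Requires HOST_VISIBLE | HOST_COHERENT.
    return vmaFindMemoryTypeIndex(allocator, UINT32_MAX, &allocCreateInfo, pMemTypeIndex);
}
// -----------------------------------------------------------------------------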
    9160 
9161 VkResult vmaFindMemoryTypeIndexForBufferInfo(
    9162  VmaAllocator allocator,
    9163  const VkBufferCreateInfo* pBufferCreateInfo,
    9164  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9165  uint32_t* pMemoryTypeIndex)
    9166 {
    9167  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9168  VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
    9169  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9170  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9171 
    9172  const VkDevice hDev = allocator->m_hDevice;
    9173  VkBuffer hBuffer = VK_NULL_HANDLE;
    9174  VkResult res = allocator->GetVulkanFunctions().vkCreateBuffer(
    9175  hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
    9176  if(res == VK_SUCCESS)
    9177  {
    9178  VkMemoryRequirements memReq = {};
    9179  allocator->GetVulkanFunctions().vkGetBufferMemoryRequirements(
    9180  hDev, hBuffer, &memReq);
    9181 
    9182  res = vmaFindMemoryTypeIndex(
    9183  allocator,
    9184  memReq.memoryTypeBits,
    9185  pAllocationCreateInfo,
    9186  pMemoryTypeIndex);
    9187 
    9188  allocator->GetVulkanFunctions().vkDestroyBuffer(
    9189  hDev, hBuffer, allocator->GetAllocationCallbacks());
    9190  }
    9191  return res;
    9192 }
    9193 
9194 VkResult vmaFindMemoryTypeIndexForImageInfo(
    9195  VmaAllocator allocator,
    9196  const VkImageCreateInfo* pImageCreateInfo,
    9197  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9198  uint32_t* pMemoryTypeIndex)
    9199 {
    9200  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9201  VMA_ASSERT(pImageCreateInfo != VMA_NULL);
    9202  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9203  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9204 
    9205  const VkDevice hDev = allocator->m_hDevice;
    9206  VkImage hImage = VK_NULL_HANDLE;
    9207  VkResult res = allocator->GetVulkanFunctions().vkCreateImage(
    9208  hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
    9209  if(res == VK_SUCCESS)
    9210  {
    9211  VkMemoryRequirements memReq = {};
    9212  allocator->GetVulkanFunctions().vkGetImageMemoryRequirements(
    9213  hDev, hImage, &memReq);
    9214 
    9215  res = vmaFindMemoryTypeIndex(
    9216  allocator,
    9217  memReq.memoryTypeBits,
    9218  pAllocationCreateInfo,
    9219  pMemoryTypeIndex);
    9220 
    9221  allocator->GetVulkanFunctions().vkDestroyImage(
    9222  hDev, hImage, allocator->GetAllocationCallbacks());
    9223  }
    9224  return res;
    9225 }
    9226 
    9227 VkResult vmaCreatePool(
    9228  VmaAllocator allocator,
    9229  const VmaPoolCreateInfo* pCreateInfo,
    9230  VmaPool* pPool)
    9231 {
    9232  VMA_ASSERT(allocator && pCreateInfo && pPool);
    9233 
    9234  VMA_DEBUG_LOG("vmaCreatePool");
    9235 
    9236  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9237 
    9238  return allocator->CreatePool(pCreateInfo, pPool);
    9239 }
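// --- Editorial sketch, not part of the library source -----------------------
// Creating a custom pool for a known memory type. blockSize = 0 lets
// CreatePool() above substitute the preferred block size; `memTypeIndex`
// would come from vmaFindMemoryTypeIndex*():
static VkResult ExampleCreateCustomPool(VmaAllocator allocator, uint32_t memTypeIndex, VmaPool* pPool)
{
    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex;
    poolCreateInfo.blockSize = 0;     // 0 = use the allocator's preferred block size.
    poolCreateInfo.maxBlockCount = 2; // Cap the pool at two blocks.
    return vmaCreatePool(allocator, &poolCreateInfo, pPool);
}
// -----------------------------------------------------------------------------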
    9240 
    9241 void vmaDestroyPool(
    9242  VmaAllocator allocator,
    9243  VmaPool pool)
    9244 {
    9245  VMA_ASSERT(allocator);
    9246 
    9247  if(pool == VK_NULL_HANDLE)
    9248  {
    9249  return;
    9250  }
    9251 
    9252  VMA_DEBUG_LOG("vmaDestroyPool");
    9253 
    9254  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9255 
    9256  allocator->DestroyPool(pool);
    9257 }
    9258 
    9259 void vmaGetPoolStats(
    9260  VmaAllocator allocator,
    9261  VmaPool pool,
    9262  VmaPoolStats* pPoolStats)
    9263 {
    9264  VMA_ASSERT(allocator && pool && pPoolStats);
    9265 
    9266  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9267 
    9268  allocator->GetPoolStats(pool, pPoolStats);
    9269 }
    9270 
9271 void vmaMakePoolAllocationsLost(
    9272  VmaAllocator allocator,
    9273  VmaPool pool,
    9274  size_t* pLostAllocationCount)
    9275 {
    9276  VMA_ASSERT(allocator && pool);
    9277 
    9278  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9279 
    9280  allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
    9281 }
    9282 
    9283 VkResult vmaAllocateMemory(
    9284  VmaAllocator allocator,
    9285  const VkMemoryRequirements* pVkMemoryRequirements,
    9286  const VmaAllocationCreateInfo* pCreateInfo,
    9287  VmaAllocation* pAllocation,
    9288  VmaAllocationInfo* pAllocationInfo)
    9289 {
    9290  VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
    9291 
    9292  VMA_DEBUG_LOG("vmaAllocateMemory");
    9293 
    9294  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9295 
    9296  VkResult result = allocator->AllocateMemory(
    9297  *pVkMemoryRequirements,
    9298  false, // requiresDedicatedAllocation
    9299  false, // prefersDedicatedAllocation
    9300  VK_NULL_HANDLE, // dedicatedBuffer
    9301  VK_NULL_HANDLE, // dedicatedImage
    9302  *pCreateInfo,
    9303  VMA_SUBALLOCATION_TYPE_UNKNOWN,
    9304  pAllocation);
    9305 
    9306  if(pAllocationInfo && result == VK_SUCCESS)
    9307  {
    9308  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9309  }
    9310 
    9311  return result;
    9312 }
    9313 
9314 VkResult vmaAllocateMemoryForBuffer(
    9315  VmaAllocator allocator,
    9316  VkBuffer buffer,
    9317  const VmaAllocationCreateInfo* pCreateInfo,
    9318  VmaAllocation* pAllocation,
    9319  VmaAllocationInfo* pAllocationInfo)
    9320 {
    9321  VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9322 
    9323  VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
    9324 
    9325  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9326 
    9327  VkMemoryRequirements vkMemReq = {};
    9328  bool requiresDedicatedAllocation = false;
    9329  bool prefersDedicatedAllocation = false;
    9330  allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
    9331  requiresDedicatedAllocation,
    9332  prefersDedicatedAllocation);
    9333 
    9334  VkResult result = allocator->AllocateMemory(
    9335  vkMemReq,
    9336  requiresDedicatedAllocation,
    9337  prefersDedicatedAllocation,
    9338  buffer, // dedicatedBuffer
    9339  VK_NULL_HANDLE, // dedicatedImage
    9340  *pCreateInfo,
    9341  VMA_SUBALLOCATION_TYPE_BUFFER,
    9342  pAllocation);
    9343 
    9344  if(pAllocationInfo && result == VK_SUCCESS)
    9345  {
    9346  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9347  }
    9348 
    9349  return result;
    9350 }
    9351 
    9352 VkResult vmaAllocateMemoryForImage(
    9353  VmaAllocator allocator,
    9354  VkImage image,
    9355  const VmaAllocationCreateInfo* pCreateInfo,
    9356  VmaAllocation* pAllocation,
    9357  VmaAllocationInfo* pAllocationInfo)
    9358 {
    9359  VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9360 
    9361  VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
    9362 
    9363  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9364 
    9365  VkResult result = AllocateMemoryForImage(
    9366  allocator,
    9367  image,
    9368  pCreateInfo,
    9369  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
    9370  pAllocation);
    9371 
    9372  if(pAllocationInfo && result == VK_SUCCESS)
    9373  {
    9374  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9375  }
    9376 
    9377  return result;
    9378 }
    9379 
    9380 void vmaFreeMemory(
    9381  VmaAllocator allocator,
    9382  VmaAllocation allocation)
    9383 {
    9384  VMA_ASSERT(allocator);
    9385  VMA_DEBUG_LOG("vmaFreeMemory");
    9386  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9387  if(allocation != VK_NULL_HANDLE)
    9388  {
    9389  allocator->FreeMemory(allocation);
    9390  }
    9391 }
    9392 
9393 void vmaGetAllocationInfo(
    9394  VmaAllocator allocator,
    9395  VmaAllocation allocation,
    9396  VmaAllocationInfo* pAllocationInfo)
    9397 {
    9398  VMA_ASSERT(allocator && allocation && pAllocationInfo);
    9399 
    9400  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9401 
    9402  allocator->GetAllocationInfo(allocation, pAllocationInfo);
    9403 }
    9404 
    9405 VkBool32 vmaTouchAllocation(
    9406  VmaAllocator allocator,
    9407  VmaAllocation allocation)
    9408 {
    9409  VMA_ASSERT(allocator && allocation);
    9410 
    9411  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9412 
    9413  return allocator->TouchAllocation(allocation);
    9414 }
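// --- Editorial sketch, not part of the library source -----------------------
// vmaTouchAllocation() is the cheap way to both test whether a lostable
// allocation is still alive and mark it as used in the current frame. The
// recreate step is application-specific:
static void ExampleUseLostableAllocation(VmaAllocator allocator, VmaAllocation allocation)
{
    if(vmaTouchAllocation(allocator, allocation) == VK_FALSE)
    {
        // Allocation was lost: destroy the old resource and recreate it here.
    }
}
// -----------------------------------------------------------------------------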
    9415 
9416 void vmaSetAllocationUserData(
    9417  VmaAllocator allocator,
    9418  VmaAllocation allocation,
    9419  void* pUserData)
    9420 {
    9421  VMA_ASSERT(allocator && allocation);
    9422 
    9423  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9424 
    9425  allocation->SetUserData(allocator, pUserData);
    9426 }
    9427 
9428 void vmaCreateLostAllocation(
    9429  VmaAllocator allocator,
    9430  VmaAllocation* pAllocation)
    9431 {
    9432  VMA_ASSERT(allocator && pAllocation);
    9433 
9434  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9435 
    9436  allocator->CreateLostAllocation(pAllocation);
    9437 }
    9438 
    9439 VkResult vmaMapMemory(
    9440  VmaAllocator allocator,
    9441  VmaAllocation allocation,
    9442  void** ppData)
    9443 {
    9444  VMA_ASSERT(allocator && allocation && ppData);
    9445 
    9446  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9447 
    9448  return allocator->Map(allocation, ppData);
    9449 }
    9450 
    9451 void vmaUnmapMemory(
    9452  VmaAllocator allocator,
    9453  VmaAllocation allocation)
    9454 {
    9455  VMA_ASSERT(allocator && allocation);
    9456 
    9457  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9458 
    9459  allocator->Unmap(allocation);
    9460 }
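// --- Editorial sketch, not part of the library source -----------------------
// Map/write/flush/unmap round trip. The flush is only needed for memory types
// without HOST_COHERENT; for coherent types FlushOrInvalidateAllocation()
// above turns it into a no-op. memcpy is already used elsewhere in this file:
static VkResult ExampleUploadData(VmaAllocator allocator, VmaAllocation allocation,
    const void* pSrc, size_t size)
{
    void* pData = VMA_NULL;
    VkResult res = vmaMapMemory(allocator, allocation, &pData);
    if(res == VK_SUCCESS)
    {
        memcpy(pData, pSrc, size);
        vmaFlushAllocation(allocator, allocation, 0, VK_WHOLE_SIZE);
        vmaUnmapMemory(allocator, allocation);
    }
    return res;
}
// -----------------------------------------------------------------------------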
    9461 
    9462 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9463 {
    9464  VMA_ASSERT(allocator && allocation);
    9465 
    9466  VMA_DEBUG_LOG("vmaFlushAllocation");
    9467 
    9468  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9469 
    9470  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
    9471 }
    9472 
    9473 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9474 {
    9475  VMA_ASSERT(allocator && allocation);
    9476 
    9477  VMA_DEBUG_LOG("vmaInvalidateAllocation");
    9478 
    9479  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9480 
    9481  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
    9482 }
    9483 
    9484 VkResult vmaDefragment(
    9485  VmaAllocator allocator,
    9486  VmaAllocation* pAllocations,
    9487  size_t allocationCount,
    9488  VkBool32* pAllocationsChanged,
    9489  const VmaDefragmentationInfo *pDefragmentationInfo,
    9490  VmaDefragmentationStats* pDefragmentationStats)
    9491 {
    9492  VMA_ASSERT(allocator && pAllocations);
    9493 
    9494  VMA_DEBUG_LOG("vmaDefragment");
    9495 
    9496  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9497 
    9498  return allocator->Defragment(pAllocations, allocationCount, pAllocationsChanged, pDefragmentationInfo, pDefragmentationStats);
    9499 }
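// --- Editorial sketch, not part of the library source -----------------------
// Budgeted defragmentation pass over an array of allocations. Note the
// restrictions enforced above: only block (non-dedicated), HOST_VISIBLE,
// non-lost allocations are moved, and moved allocations receive new offsets,
// so resources bound to them must be recreated and rebound by the caller:
static VkResult ExampleDefragment(VmaAllocator allocator,
    VmaAllocation* pAllocations, size_t allocationCount, VkBool32* pChanged)
{
    VmaDefragmentationInfo defragInfo = {};
    defragInfo.maxBytesToMove = 64ull * 1024 * 1024; // Move at most 64 MiB.
    defragInfo.maxAllocationsToMove = 128;           // And at most 128 allocations.

    VmaDefragmentationStats stats = {};
    return vmaDefragment(allocator, pAllocations, allocationCount,
        pChanged, &defragInfo, &stats);
}
// -----------------------------------------------------------------------------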
    9500 
    9501 VkResult vmaBindBufferMemory(
    9502  VmaAllocator allocator,
    9503  VmaAllocation allocation,
    9504  VkBuffer buffer)
    9505 {
    9506  VMA_ASSERT(allocator && allocation && buffer);
    9507 
    9508  VMA_DEBUG_LOG("vmaBindBufferMemory");
    9509 
    9510  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9511 
    9512  return allocator->BindBufferMemory(allocation, buffer);
    9513 }
    9514 
    9515 VkResult vmaBindImageMemory(
    9516  VmaAllocator allocator,
    9517  VmaAllocation allocation,
    9518  VkImage image)
    9519 {
    9520  VMA_ASSERT(allocator && allocation && image);
    9521 
    9522  VMA_DEBUG_LOG("vmaBindImageMemory");
    9523 
    9524  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9525 
    9526  return allocator->BindImageMemory(allocation, image);
    9527 }
    9528 
    9529 VkResult vmaCreateBuffer(
    9530  VmaAllocator allocator,
    9531  const VkBufferCreateInfo* pBufferCreateInfo,
    9532  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9533  VkBuffer* pBuffer,
    9534  VmaAllocation* pAllocation,
    9535  VmaAllocationInfo* pAllocationInfo)
    9536 {
    9537  VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
    9538 
    9539  VMA_DEBUG_LOG("vmaCreateBuffer");
    9540 
    9541  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9542 
    9543  *pBuffer = VK_NULL_HANDLE;
    9544  *pAllocation = VK_NULL_HANDLE;
    9545 
    9546  // 1. Create VkBuffer.
    9547  VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
    9548  allocator->m_hDevice,
    9549  pBufferCreateInfo,
    9550  allocator->GetAllocationCallbacks(),
    9551  pBuffer);
    9552  if(res >= 0)
    9553  {
    9554  // 2. vkGetBufferMemoryRequirements.
    9555  VkMemoryRequirements vkMemReq = {};
    9556  bool requiresDedicatedAllocation = false;
    9557  bool prefersDedicatedAllocation = false;
    9558  allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
    9559  requiresDedicatedAllocation, prefersDedicatedAllocation);
    9560 
    9561  // Make sure alignment requirements for specific buffer usages reported
    9562  // in Physical Device Properties are included in alignment reported by memory requirements.
    9563  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0)
    9564  {
    9565  VMA_ASSERT(vkMemReq.alignment %
    9566  allocator->m_PhysicalDeviceProperties.limits.minTexelBufferOffsetAlignment == 0);
    9567  }
    9568  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0)
    9569  {
    9570  VMA_ASSERT(vkMemReq.alignment %
    9571  allocator->m_PhysicalDeviceProperties.limits.minUniformBufferOffsetAlignment == 0);
    9572  }
    9573  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0)
    9574  {
    9575  VMA_ASSERT(vkMemReq.alignment %
    9576  allocator->m_PhysicalDeviceProperties.limits.minStorageBufferOffsetAlignment == 0);
    9577  }
    9578 
    9579  // 3. Allocate memory using allocator.
    9580  res = allocator->AllocateMemory(
    9581  vkMemReq,
    9582  requiresDedicatedAllocation,
    9583  prefersDedicatedAllocation,
    9584  *pBuffer, // dedicatedBuffer
    9585  VK_NULL_HANDLE, // dedicatedImage
    9586  *pAllocationCreateInfo,
    9587  VMA_SUBALLOCATION_TYPE_BUFFER,
    9588  pAllocation);
    9589  if(res >= 0)
    9590  {
9591  // 4. Bind buffer with memory.
    9592  res = allocator->BindBufferMemory(*pAllocation, *pBuffer);
    9593  if(res >= 0)
    9594  {
    9595  // All steps succeeded.
    9596  #if VMA_STATS_STRING_ENABLED
    9597  (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
    9598  #endif
    9599  if(pAllocationInfo != VMA_NULL)
    9600  {
    9601  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9602  }
    9603  return VK_SUCCESS;
    9604  }
    9605  allocator->FreeMemory(*pAllocation);
    9606  *pAllocation = VK_NULL_HANDLE;
    9607  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    9608  *pBuffer = VK_NULL_HANDLE;
    9609  return res;
    9610  }
    9611  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    9612  *pBuffer = VK_NULL_HANDLE;
    9613  return res;
    9614  }
    9615  return res;
    9616 }
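// --- Editorial sketch, not part of the library source -----------------------
// The common one-call path: vmaCreateBuffer() performs the create /
// get-requirements / allocate / bind sequence above and unwinds on failure:
static VkResult ExampleCreateGpuBuffer(VmaAllocator allocator, VkDeviceSize size,
    VkBuffer* pBuffer, VmaAllocation* pAllocation)
{
    VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufferInfo.size = size;
    bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

    VmaAllocationCreateInfo allocInfo = {};
    allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY; // Prefer DEVICE_LOCAL memory.

    // Pass VMA_NULL for pAllocationInfo when the details are not needed;
    // matching cleanup is vmaDestroyBuffer().
    return vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, pBuffer, pAllocation, VMA_NULL);
}
// -----------------------------------------------------------------------------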
    9617 
    9618 void vmaDestroyBuffer(
    9619  VmaAllocator allocator,
    9620  VkBuffer buffer,
    9621  VmaAllocation allocation)
    9622 {
    9623  VMA_ASSERT(allocator);
    9624  VMA_DEBUG_LOG("vmaDestroyBuffer");
    9625  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9626  if(buffer != VK_NULL_HANDLE)
    9627  {
    9628  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
    9629  }
    9630  if(allocation != VK_NULL_HANDLE)
    9631  {
    9632  allocator->FreeMemory(allocation);
    9633  }
    9634 }
    9635 
    9636 VkResult vmaCreateImage(
    9637  VmaAllocator allocator,
    9638  const VkImageCreateInfo* pImageCreateInfo,
    9639  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9640  VkImage* pImage,
    9641  VmaAllocation* pAllocation,
    9642  VmaAllocationInfo* pAllocationInfo)
    9643 {
    9644  VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
    9645 
    9646  VMA_DEBUG_LOG("vmaCreateImage");
    9647 
    9648  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9649 
    9650  *pImage = VK_NULL_HANDLE;
    9651  *pAllocation = VK_NULL_HANDLE;
    9652 
    9653  // 1. Create VkImage.
    9654  VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
    9655  allocator->m_hDevice,
    9656  pImageCreateInfo,
    9657  allocator->GetAllocationCallbacks(),
    9658  pImage);
    9659  if(res >= 0)
    9660  {
    9661  VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
    9662  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
    9663  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
    9664 
    9665  // 2. Allocate memory using allocator.
    9666  res = AllocateMemoryForImage(allocator, *pImage, pAllocationCreateInfo, suballocType, pAllocation);
    9667  if(res >= 0)
    9668  {
    9669  // 3. Bind image with memory.
    9670  res = allocator->BindImageMemory(*pAllocation, *pImage);
    9671  if(res >= 0)
    9672  {
    9673  // All steps succeeded.
    9674  #if VMA_STATS_STRING_ENABLED
    9675  (*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
    9676  #endif
    9677  if(pAllocationInfo != VMA_NULL)
    9678  {
    9679  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9680  }
    9681  return VK_SUCCESS;
    9682  }
    9683  allocator->FreeMemory(*pAllocation);
    9684  *pAllocation = VK_NULL_HANDLE;
    9685  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    9686  *pImage = VK_NULL_HANDLE;
    9687  return res;
    9688  }
    9689  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    9690  *pImage = VK_NULL_HANDLE;
    9691  return res;
    9692  }
    9693  return res;
    9694 }
    9695 
    9696 void vmaDestroyImage(
    9697  VmaAllocator allocator,
    9698  VkImage image,
    9699  VmaAllocation allocation)
    9700 {
    9701  VMA_ASSERT(allocator);
    9702  VMA_DEBUG_LOG("vmaDestroyImage");
    9703  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9704  if(image != VK_NULL_HANDLE)
    9705  {
    9706  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
    9707  }
    9708  if(allocation != VK_NULL_HANDLE)
    9709  {
    9710  allocator->FreeMemory(allocation);
    9711  }
    9712 }
    9713 
    9714 #endif // #ifdef VMA_IMPLEMENTATION
    1 //
    2 // Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
    3 //
    4 // Permission is hereby granted, free of charge, to any person obtaining a copy
    5 // of this software and associated documentation files (the "Software"), to deal
    6 // in the Software without restriction, including without limitation the rights
    7 // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    8 // copies of the Software, and to permit persons to whom the Software is
    9 // furnished to do so, subject to the following conditions:
    10 //
    11 // The above copyright notice and this permission notice shall be included in
    12 // all copies or substantial portions of the Software.
    13 //
    14 // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    15 // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    16 // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    17 // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    18 // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    19 // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    20 // THE SOFTWARE.
    21 //
    22 
    23 #ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
    24 #define AMD_VULKAN_MEMORY_ALLOCATOR_H
    25 
    26 #ifdef __cplusplus
    27 extern "C" {
    28 #endif
    29 
    1157 #include <vulkan/vulkan.h>
    1158 
    1159 #if !defined(VMA_DEDICATED_ALLOCATION)
    1160  #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
    1161  #define VMA_DEDICATED_ALLOCATION 1
    1162  #else
    1163  #define VMA_DEDICATED_ALLOCATION 0
    1164  #endif
    1165 #endif
    1166 
    1176 VK_DEFINE_HANDLE(VmaAllocator)
    1177 
    1178 typedef void (VKAPI_PTR *PFN_vmaAllocateDeviceMemoryFunction)(
    1180  VmaAllocator allocator,
    1181  uint32_t memoryType,
    1182  VkDeviceMemory memory,
    1183  VkDeviceSize size);
    1185 typedef void (VKAPI_PTR *PFN_vmaFreeDeviceMemoryFunction)(
    1186  VmaAllocator allocator,
    1187  uint32_t memoryType,
    1188  VkDeviceMemory memory,
    1189  VkDeviceSize size);
    1190 
    1204 
    1234 
    1237 typedef VkFlags VmaAllocatorCreateFlags;
    1238 
    1243 typedef struct VmaVulkanFunctions {
    1244  PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
    1245  PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
    1246  PFN_vkAllocateMemory vkAllocateMemory;
    1247  PFN_vkFreeMemory vkFreeMemory;
    1248  PFN_vkMapMemory vkMapMemory;
    1249  PFN_vkUnmapMemory vkUnmapMemory;
    1250  PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
    1251  PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
    1252  PFN_vkBindBufferMemory vkBindBufferMemory;
    1253  PFN_vkBindImageMemory vkBindImageMemory;
    1254  PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
    1255  PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
    1256  PFN_vkCreateBuffer vkCreateBuffer;
    1257  PFN_vkDestroyBuffer vkDestroyBuffer;
    1258  PFN_vkCreateImage vkCreateImage;
    1259  PFN_vkDestroyImage vkDestroyImage;
    1260 #if VMA_DEDICATED_ALLOCATION
    1261  PFN_vkGetBufferMemoryRequirements2KHR vkGetBufferMemoryRequirements2KHR;
    1262  PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
    1263 #endif
    1264 } VmaVulkanFunctions;
    1265 
    1267 typedef struct VmaAllocatorCreateInfo
    1268 {
    1270  VmaAllocatorCreateFlags flags;
    1272 
    1273  VkPhysicalDevice physicalDevice;
    1275 
    1276  VkDevice device;
    1278 
    1281 
    1282  const VkAllocationCallbacks* pAllocationCallbacks;
    1284 
    1323  const VkDeviceSize* pHeapSizeLimit;
    1337 
    1339 VkResult vmaCreateAllocator(
    1340  const VmaAllocatorCreateInfo* pCreateInfo,
    1341  VmaAllocator* pAllocator);
    1342 
    1344 void vmaDestroyAllocator(
    1345  VmaAllocator allocator);
    1346 
    1351 void vmaGetPhysicalDeviceProperties(
    1352  VmaAllocator allocator,
    1353  const VkPhysicalDeviceProperties** ppPhysicalDeviceProperties);
    1354 
    1359 void vmaGetMemoryProperties(
    1360  VmaAllocator allocator,
    1361  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties);
    1362 
    1369 void vmaGetMemoryTypeProperties(
    1370  VmaAllocator allocator,
    1371  uint32_t memoryTypeIndex,
    1372  VkMemoryPropertyFlags* pFlags);
    1373 
    1382 void vmaSetCurrentFrameIndex(
    1383  VmaAllocator allocator,
    1384  uint32_t frameIndex);
    1385 
    1388 typedef struct VmaStatInfo
    1389 {
    1391  uint32_t blockCount;
    1397  VkDeviceSize usedBytes;
    1399  VkDeviceSize unusedBytes;
    1400  VkDeviceSize allocationSizeMin, allocationSizeAvg, allocationSizeMax;
    1401  VkDeviceSize unusedRangeSizeMin, unusedRangeSizeAvg, unusedRangeSizeMax;
    1402 } VmaStatInfo;
    1403 
    1405 typedef struct VmaStats
    1406 {
    1407  VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES];
    1408  VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS];
    1410 } VmaStats;
    1411 
    1413 void vmaCalculateStats(
    1414  VmaAllocator allocator,
    1415  VmaStats* pStats);
    1416 
    1417 #define VMA_STATS_STRING_ENABLED 1
    1418 
    1419 #if VMA_STATS_STRING_ENABLED
    1420 
    1422 
    1424 void vmaBuildStatsString(
    1425  VmaAllocator allocator,
    1426  char** ppStatsString,
    1427  VkBool32 detailedMap);
    1428 
    1429 void vmaFreeStatsString(
    1430  VmaAllocator allocator,
    1431  char* pStatsString);
    1432 
    1433 #endif // #if VMA_STATS_STRING_ENABLED
    1434 
    1443 VK_DEFINE_HANDLE(VmaPool)
    1444 
    1445 typedef enum VmaMemoryUsage
    1446 {
    1495 } VmaMemoryUsage;
    1496 
    1511 
    1561 
    1565 
    1566 typedef struct VmaAllocationCreateInfo
    1567 {
    1569  VmaAllocationCreateFlags flags;
    1580  VkMemoryPropertyFlags requiredFlags;
    1585  VkMemoryPropertyFlags preferredFlags;
    1593  uint32_t memoryTypeBits;
    1606  void* pUserData;
    1607 } VmaAllocationCreateInfo;
    1608 
    1625 VkResult vmaFindMemoryTypeIndex(
    1626  VmaAllocator allocator,
    1627  uint32_t memoryTypeBits,
    1628  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1629  uint32_t* pMemoryTypeIndex);
    1630 
    1643 VkResult vmaFindMemoryTypeIndexForBufferInfo(
    1644  VmaAllocator allocator,
    1645  const VkBufferCreateInfo* pBufferCreateInfo,
    1646  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1647  uint32_t* pMemoryTypeIndex);
    1648 
    1661 VkResult vmaFindMemoryTypeIndexForImageInfo(
    1662  VmaAllocator allocator,
    1663  const VkImageCreateInfo* pImageCreateInfo,
    1664  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1665  uint32_t* pMemoryTypeIndex);
    1666 
    1687 
    1690 typedef VkFlags VmaPoolCreateFlags;
    1691 
    1694 typedef struct VmaPoolCreateInfo {
    1700  VmaPoolCreateFlags flags;
    1705  VkDeviceSize blockSize;
    1733 } VmaPoolCreateInfo;
    1734 
    1737 typedef struct VmaPoolStats {
    1740  VkDeviceSize size;
    1743  VkDeviceSize unusedSize;
    1756  VkDeviceSize unusedRangeSizeMax;
    1757 } VmaPoolStats;
    1758 
    1765 VkResult vmaCreatePool(
    1766  VmaAllocator allocator,
    1767  const VmaPoolCreateInfo* pCreateInfo,
    1768  VmaPool* pPool);
    1769 
    1772 void vmaDestroyPool(
    1773  VmaAllocator allocator,
    1774  VmaPool pool);
    1775 
    1782 void vmaGetPoolStats(
    1783  VmaAllocator allocator,
    1784  VmaPool pool,
    1785  VmaPoolStats* pPoolStats);
    1786 
    1793 void vmaMakePoolAllocationsLost(
    1794  VmaAllocator allocator,
    1795  VmaPool pool,
    1796  size_t* pLostAllocationCount);
    1797 
    1812 VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool);
    1813 
    1838 VK_DEFINE_HANDLE(VmaAllocation)
    1839 
    1840 
    1842 typedef struct VmaAllocationInfo {
    1847  uint32_t memoryType;
    1856  VkDeviceMemory deviceMemory;
    1861  VkDeviceSize offset;
    1866  VkDeviceSize size;
    1880  void* pUserData;
    1881 } VmaAllocationInfo;
    1882 
    1893 VkResult vmaAllocateMemory(
    1894  VmaAllocator allocator,
    1895  const VkMemoryRequirements* pVkMemoryRequirements,
    1896  const VmaAllocationCreateInfo* pCreateInfo,
    1897  VmaAllocation* pAllocation,
    1898  VmaAllocationInfo* pAllocationInfo);
    1899 
    1906 VkResult vmaAllocateMemoryForBuffer(
    1907  VmaAllocator allocator,
    1908  VkBuffer buffer,
    1909  const VmaAllocationCreateInfo* pCreateInfo,
    1910  VmaAllocation* pAllocation,
    1911  VmaAllocationInfo* pAllocationInfo);
    1912 
    1914 VkResult vmaAllocateMemoryForImage(
    1915  VmaAllocator allocator,
    1916  VkImage image,
    1917  const VmaAllocationCreateInfo* pCreateInfo,
    1918  VmaAllocation* pAllocation,
    1919  VmaAllocationInfo* pAllocationInfo);
    1920 
    1922 void vmaFreeMemory(
    1923  VmaAllocator allocator,
    1924  VmaAllocation allocation);
    1925 
    1942 void vmaGetAllocationInfo(
    1943  VmaAllocator allocator,
    1944  VmaAllocation allocation,
    1945  VmaAllocationInfo* pAllocationInfo);
    1946 
    1961 VkBool32 vmaTouchAllocation(
    1962  VmaAllocator allocator,
    1963  VmaAllocation allocation);
    1964 
    1978 void vmaSetAllocationUserData(
    1979  VmaAllocator allocator,
    1980  VmaAllocation allocation,
    1981  void* pUserData);
    1982 
    1993 void vmaCreateLostAllocation(
    1994  VmaAllocator allocator,
    1995  VmaAllocation* pAllocation);
    1996 
    2031 VkResult vmaMapMemory(
    2032  VmaAllocator allocator,
    2033  VmaAllocation allocation,
    2034  void** ppData);
    2035 
    2040 void vmaUnmapMemory(
    2041  VmaAllocator allocator,
    2042  VmaAllocation allocation);
    2043 
    2056 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    2057 
    2070 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    2071 
    2088 VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits);
    2089 
    2091 typedef struct VmaDefragmentationInfo {
    2096  VkDeviceSize maxBytesToMove;
    2102 } VmaDefragmentationInfo;
    2103 
    2105 typedef struct VmaDefragmentationStats {
    2107  VkDeviceSize bytesMoved;
    2109  VkDeviceSize bytesFreed;
    2114 } VmaDefragmentationStats;
    2115 
    2198 VkResult vmaDefragment(
    2199  VmaAllocator allocator,
    2200  VmaAllocation* pAllocations,
    2201  size_t allocationCount,
    2202  VkBool32* pAllocationsChanged,
    2203  const VmaDefragmentationInfo *pDefragmentationInfo,
    2204  VmaDefragmentationStats* pDefragmentationStats);
    2205 
    2218 VkResult vmaBindBufferMemory(
    2219  VmaAllocator allocator,
    2220  VmaAllocation allocation,
    2221  VkBuffer buffer);
    2222 
    2235 VkResult vmaBindImageMemory(
    2236  VmaAllocator allocator,
    2237  VmaAllocation allocation,
    2238  VkImage image);
    2239 
    2266 VkResult vmaCreateBuffer(
    2267  VmaAllocator allocator,
    2268  const VkBufferCreateInfo* pBufferCreateInfo,
    2269  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2270  VkBuffer* pBuffer,
    2271  VmaAllocation* pAllocation,
    2272  VmaAllocationInfo* pAllocationInfo);
    2273 
    2285 void vmaDestroyBuffer(
    2286  VmaAllocator allocator,
    2287  VkBuffer buffer,
    2288  VmaAllocation allocation);
    2289 
    2291 VkResult vmaCreateImage(
    2292  VmaAllocator allocator,
    2293  const VkImageCreateInfo* pImageCreateInfo,
    2294  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2295  VkImage* pImage,
    2296  VmaAllocation* pAllocation,
    2297  VmaAllocationInfo* pAllocationInfo);
    2298 
    2310 void vmaDestroyImage(
    2311  VmaAllocator allocator,
    2312  VkImage image,
    2313  VmaAllocation allocation);
    2314 
    2315 #ifdef __cplusplus
    2316 }
    2317 #endif
    2318 
    2319 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
    2320 
    2321 // For Visual Studio IntelliSense.
    2322 #if defined(__cplusplus) && defined(__INTELLISENSE__)
    2323 #define VMA_IMPLEMENTATION
    2324 #endif
    2325 
    2326 #ifdef VMA_IMPLEMENTATION
    2327 #undef VMA_IMPLEMENTATION
    2328 
    2329 #include <cstdint>
    2330 #include <cstdlib>
    2331 #include <cstring>
    2332 
    2333 /*******************************************************************************
    2334 CONFIGURATION SECTION
    2335 
    2336 Define some of these macros before each #include of this header, or change them
    2337 here if you need behavior other than the default, depending on your environment.
    2338 */
    2339 
    2340 /*
    2341 Define this macro to 1 to make the library fetch pointers to Vulkan functions
    2342 internally, like:
    2343 
    2344  vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    2345 
    2346 Define to 0 if you are going to provide your own pointers to Vulkan functions via
    2347 VmaAllocatorCreateInfo::pVulkanFunctions.
    2348 */
    2349 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
    2350 #define VMA_STATIC_VULKAN_FUNCTIONS 1
    2351 #endif
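/*
Illustrative sketch (not part of the original header): with
VMA_STATIC_VULKAN_FUNCTIONS defined to 0, the pointers can be filled in
manually, e.g. from your own loader, and passed at allocator creation.
myLoader, physicalDevice and device below are hypothetical names:

    VmaVulkanFunctions vulkanFunctions = {};
    vulkanFunctions.vkGetPhysicalDeviceProperties = myLoader.vkGetPhysicalDeviceProperties;
    vulkanFunctions.vkAllocateMemory = myLoader.vkAllocateMemory;
    vulkanFunctions.vkFreeMemory = myLoader.vkFreeMemory;
    // ...fill in every remaining member of VmaVulkanFunctions the same way...

    VmaAllocatorCreateInfo allocatorInfo = {};
    allocatorInfo.physicalDevice = physicalDevice;
    allocatorInfo.device = device;
    allocatorInfo.pVulkanFunctions = &vulkanFunctions;

    VmaAllocator allocator = VK_NULL_HANDLE;
    vmaCreateAllocator(&allocatorInfo, &allocator);
*/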
    2352 
    2353 // Define this macro to 1 to make the library use STL containers instead of its own implementation.
    2354 //#define VMA_USE_STL_CONTAINERS 1
    2355 
    2356 /* Set this macro to 1 to make the library include and use STL containers:
    2357 std::pair, std::vector, std::list, std::unordered_map.
    2358 
    2359 Set it to 0 or leave it undefined to make the library use its own
    2360 implementations of these containers.
    2361 */
    2362 #if VMA_USE_STL_CONTAINERS
    2363  #define VMA_USE_STL_VECTOR 1
    2364  #define VMA_USE_STL_UNORDERED_MAP 1
    2365  #define VMA_USE_STL_LIST 1
    2366 #endif
    2367 
    2368 #if VMA_USE_STL_VECTOR
    2369  #include <vector>
    2370 #endif
    2371 
    2372 #if VMA_USE_STL_UNORDERED_MAP
    2373  #include <unordered_map>
    2374 #endif
    2375 
    2376 #if VMA_USE_STL_LIST
    2377  #include <list>
    2378 #endif
    2379 
    2380 /*
    2381 Following headers are used in this CONFIGURATION section only, so feel free to
    2382 remove them if not needed.
    2383 */
    2384 #include <cassert> // for assert
    2385 #include <algorithm> // for min, max
    2386 #include <mutex> // for std::mutex
    2387 #include <atomic> // for std::atomic
    2388 
    2389 #ifndef VMA_NULL
    2390  // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
    2391  #define VMA_NULL nullptr
    2392 #endif
    2393 
    2394 #if defined(__APPLE__) || defined(__ANDROID__)
    2395 #include <cstdlib>
    2396 void *aligned_alloc(size_t alignment, size_t size)
    2397 {
    2398  // alignment must be >= sizeof(void*)
    2399  if(alignment < sizeof(void*))
    2400  {
    2401  alignment = sizeof(void*);
    2402  }
    2403 
    2404  void *pointer;
    2405  if(posix_memalign(&pointer, alignment, size) == 0)
    2406  return pointer;
    2407  return VMA_NULL;
    2408 }
    2409 #endif
    2410 
    2411 // If your compiler is not compatible with C++11 and the definition of the
    2412 // aligned_alloc() function is missing, uncommenting the following line may help:
    2413 
    2414 //#include <malloc.h>
    2415 
    2416 // Normal assert to check for programmer's errors, especially in Debug configuration.
    2417 #ifndef VMA_ASSERT
    2418  #ifdef _DEBUG
    2419  #define VMA_ASSERT(expr) assert(expr)
    2420  #else
    2421  #define VMA_ASSERT(expr)
    2422  #endif
    2423 #endif
    2424 
    2425 // Assert that will be called very often, e.g. inside data structures such as operator[].
    2426 // Making it non-empty can make the program slow.
    2427 #ifndef VMA_HEAVY_ASSERT
    2428  #ifdef _DEBUG
    2429  #define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
    2430  #else
    2431  #define VMA_HEAVY_ASSERT(expr)
    2432  #endif
    2433 #endif
    2434 
    2435 #ifndef VMA_ALIGN_OF
    2436  #define VMA_ALIGN_OF(type) (__alignof(type))
    2437 #endif
    2438 
    2439 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
    2440  #if defined(_WIN32)
    2441  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (_aligned_malloc((size), (alignment)))
    2442  #else
    2443  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (aligned_alloc((alignment), (size) ))
    2444  #endif
    2445 #endif
    2446 
    2447 #ifndef VMA_SYSTEM_FREE
    2448  #if defined(_WIN32)
    2449  #define VMA_SYSTEM_FREE(ptr) _aligned_free(ptr)
    2450  #else
    2451  #define VMA_SYSTEM_FREE(ptr) free(ptr)
    2452  #endif
    2453 #endif
    2454 
    2455 #ifndef VMA_MIN
    2456  #define VMA_MIN(v1, v2) (std::min((v1), (v2)))
    2457 #endif
    2458 
    2459 #ifndef VMA_MAX
    2460  #define VMA_MAX(v1, v2) (std::max((v1), (v2)))
    2461 #endif
    2462 
    2463 #ifndef VMA_SWAP
    2464  #define VMA_SWAP(v1, v2) std::swap((v1), (v2))
    2465 #endif
    2466 
    2467 #ifndef VMA_SORT
    2468  #define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
    2469 #endif
    2470 
    2471 #ifndef VMA_DEBUG_LOG
    2472  #define VMA_DEBUG_LOG(format, ...)
    2473  /*
    2474  #define VMA_DEBUG_LOG(format, ...) do { \
    2475  printf(format, __VA_ARGS__); \
    2476  printf("\n"); \
    2477  } while(false)
    2478  */
    2479 #endif
    2480 
    2481 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
    2482 #if VMA_STATS_STRING_ENABLED
    2483  static inline void VmaUint32ToStr(char* outStr, size_t strLen, uint32_t num)
    2484  {
    2485  snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
    2486  }
    2487  static inline void VmaUint64ToStr(char* outStr, size_t strLen, uint64_t num)
    2488  {
    2489  snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
    2490  }
    2491  static inline void VmaPtrToStr(char* outStr, size_t strLen, const void* ptr)
    2492  {
    2493  snprintf(outStr, strLen, "%p", ptr);
    2494  }
    2495 #endif
    2496 
    2497 #ifndef VMA_MUTEX
    2498  class VmaMutex
    2499  {
    2500  public:
    2501  VmaMutex() { }
    2502  ~VmaMutex() { }
    2503  void Lock() { m_Mutex.lock(); }
    2504  void Unlock() { m_Mutex.unlock(); }
    2505  private:
    2506  std::mutex m_Mutex;
    2507  };
    2508  #define VMA_MUTEX VmaMutex
    2509 #endif
    2510 
    2511 /*
    2512 If providing your own implementation, you need to implement a subset of std::atomic:
    2513 
    2514 - Constructor(uint32_t desired)
    2515 - uint32_t load() const
    2516 - void store(uint32_t desired)
    2517 - bool compare_exchange_weak(uint32_t& expected, uint32_t desired)
    2518 */
    2519 #ifndef VMA_ATOMIC_UINT32
    2520  #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
    2521 #endif
    2522 
    2523 #ifndef VMA_BEST_FIT
    2524 
    2536  #define VMA_BEST_FIT (1)
    2537 #endif
    2538 
    2539 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
    2540 
    2544  #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
    2545 #endif
    2546 
    2547 #ifndef VMA_DEBUG_ALIGNMENT
    2548 
    2552  #define VMA_DEBUG_ALIGNMENT (1)
    2553 #endif
    2554 
    2555 #ifndef VMA_DEBUG_MARGIN
    2556 
    2560  #define VMA_DEBUG_MARGIN (0)
    2561 #endif
    2562 
    2563 #ifndef VMA_DEBUG_DETECT_CORRUPTION
    2564 
    2569  #define VMA_DEBUG_DETECT_CORRUPTION (0)
    2570 #endif
    2571 
    2572 #ifndef VMA_DEBUG_GLOBAL_MUTEX
    2573 
    2577  #define VMA_DEBUG_GLOBAL_MUTEX (0)
    2578 #endif
    2579 
    2580 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
    2581 
    2585  #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
    2586 #endif
    2587 
    2588 #ifndef VMA_SMALL_HEAP_MAX_SIZE
    2589  #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
    2591 #endif
    2592 
    2593 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
    2594  #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
    2596 #endif
    2597 
    2598 #ifndef VMA_CLASS_NO_COPY
    2599  #define VMA_CLASS_NO_COPY(className) \
    2600  private: \
    2601  className(const className&) = delete; \
    2602  className& operator=(const className&) = delete;
    2603 #endif
    2604 
    2605 static const uint32_t VMA_FRAME_INDEX_LOST = UINT32_MAX;
    2606 
    2607 // Decimal 2139416166, float NaN, little-endian binary 66 E6 84 7F.
    2608 static const uint32_t VMA_CORRUPTION_DETECTION_MAGIC_VALUE = 0x7F84E666;
    2609 
    2610 /*******************************************************************************
    2611 END OF CONFIGURATION
    2612 */
    2613 
    2614 static VkAllocationCallbacks VmaEmptyAllocationCallbacks = {
    2615  VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
    2616 
    2617 // Returns number of bits set to 1 in (v).
    2618 static inline uint32_t VmaCountBitsSet(uint32_t v)
    2619 {
    2620  uint32_t c = v - ((v >> 1) & 0x55555555);
    2621  c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
    2622  c = ((c >> 4) + c) & 0x0F0F0F0F;
    2623  c = ((c >> 8) + c) & 0x00FF00FF;
    2624  c = ((c >> 16) + c) & 0x0000FFFF;
    2625  return c;
    2626 }
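// For example, VmaCountBitsSet(0x0000000B) == 3. Used e.g. to count how many
// memory types remain eligible in a memoryTypeBits mask.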
    2627 
    2628 // Aligns given value up to the nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
    2629 // Use types like uint32_t, uint64_t as T.
    2630 template <typename T>
    2631 static inline T VmaAlignUp(T val, T align)
    2632 {
    2633  return (val + align - 1) / align * align;
    2634 }
    2635 // Aligns given value down to the nearest multiple of align value. For example: VmaAlignDown(11, 8) = 8.
    2636 // Use types like uint32_t, uint64_t as T.
    2637 template <typename T>
    2638 static inline T VmaAlignDown(T val, T align)
    2639 {
    2640  return val / align * align;
    2641 }
    2642 
    2643 // Division with mathematical rounding to nearest number.
    2644 template <typename T>
    2645 inline T VmaRoundDiv(T x, T y)
    2646 {
    2647  return (x + (y / (T)2)) / y;
    2648 }
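// Worked examples (illustrative): VmaAlignUp<uint32_t>(11, 8) == 16,
// VmaAlignDown<uint32_t>(11, 8) == 8, VmaRoundDiv<uint32_t>(10, 4) == 3.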
    2649 
    2650 #ifndef VMA_SORT
    2651 
    2652 template<typename Iterator, typename Compare>
    2653 Iterator VmaQuickSortPartition(Iterator beg, Iterator end, Compare cmp)
    2654 {
    2655  Iterator centerValue = end; --centerValue;
    2656  Iterator insertIndex = beg;
    2657  for(Iterator memTypeIndex = beg; memTypeIndex < centerValue; ++memTypeIndex)
    2658  {
    2659  if(cmp(*memTypeIndex, *centerValue))
    2660  {
    2661  if(insertIndex != memTypeIndex)
    2662  {
    2663  VMA_SWAP(*memTypeIndex, *insertIndex);
    2664  }
    2665  ++insertIndex;
    2666  }
    2667  }
    2668  if(insertIndex != centerValue)
    2669  {
    2670  VMA_SWAP(*insertIndex, *centerValue);
    2671  }
    2672  return insertIndex;
    2673 }
    2674 
    2675 template<typename Iterator, typename Compare>
    2676 void VmaQuickSort(Iterator beg, Iterator end, Compare cmp)
    2677 {
    2678  if(beg < end)
    2679  {
    2680  Iterator it = VmaQuickSortPartition<Iterator, Compare>(beg, end, cmp);
    2681  VmaQuickSort<Iterator, Compare>(beg, it, cmp);
    2682  VmaQuickSort<Iterator, Compare>(it + 1, end, cmp);
    2683  }
    2684 }
    2685 
    2686 #define VMA_SORT(beg, end, cmp) VmaQuickSort(beg, end, cmp)
    2687 
    2688 #endif // #ifndef VMA_SORT
    2689 
    2690 /*
    2691 Returns true if two memory blocks occupy overlapping pages.
    2692 ResourceA must be at a lower memory offset than ResourceB.
    2693 
    2694 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
    2695 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
    2696 */
    2697 static inline bool VmaBlocksOnSamePage(
    2698  VkDeviceSize resourceAOffset,
    2699  VkDeviceSize resourceASize,
    2700  VkDeviceSize resourceBOffset,
    2701  VkDeviceSize pageSize)
    2702 {
    2703  VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
    2704  VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
    2705  VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
    2706  VkDeviceSize resourceBStart = resourceBOffset;
    2707  VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
    2708  return resourceAEndPage == resourceBStartPage;
    2709 }
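/*
Illustrative example: with pageSize (bufferImageGranularity) = 1024, a resource
at offset 0 with size 512 ends on page 0 and a resource starting at offset 512
also starts on page 0, so VmaBlocksOnSamePage(0, 512, 512, 1024) returns true.
Moving the second resource to offset 1024 puts it on the next page, and
VmaBlocksOnSamePage(0, 512, 1024, 1024) returns false.
*/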
    2710 
    2711 enum VmaSuballocationType
    2712 {
    2713  VMA_SUBALLOCATION_TYPE_FREE = 0,
    2714  VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
    2715  VMA_SUBALLOCATION_TYPE_BUFFER = 2,
    2716  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
    2717  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
    2718  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
    2719  VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
    2720 };
    2721 
    2722 /*
    2723 Returns true if given suballocation types could conflict and must respect
    2724 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is buffer
    2725 or linear image and the other one is an optimal image. If a type is unknown, the
    2726 function behaves conservatively.
    2727 */
    2728 static inline bool VmaIsBufferImageGranularityConflict(
    2729  VmaSuballocationType suballocType1,
    2730  VmaSuballocationType suballocType2)
    2731 {
    2732  if(suballocType1 > suballocType2)
    2733  {
    2734  VMA_SWAP(suballocType1, suballocType2);
    2735  }
    2736 
    2737  switch(suballocType1)
    2738  {
    2739  case VMA_SUBALLOCATION_TYPE_FREE:
    2740  return false;
    2741  case VMA_SUBALLOCATION_TYPE_UNKNOWN:
    2742  return true;
    2743  case VMA_SUBALLOCATION_TYPE_BUFFER:
    2744  return
    2745  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2746  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2747  case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
    2748  return
    2749  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2750  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
    2751  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2752  case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
    2753  return
    2754  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2755  case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
    2756  return false;
    2757  default:
    2758  VMA_ASSERT(0);
    2759  return true;
    2760  }
    2761 }
    2762 
    2763 static void VmaWriteMagicValue(void* pData, VkDeviceSize offset)
    2764 {
    2765  uint32_t* pDst = (uint32_t*)((char*)pData + offset);
    2766  const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
    2767  for(size_t i = 0; i < numberCount; ++i, ++pDst)
    2768  {
    2769  *pDst = VMA_CORRUPTION_DETECTION_MAGIC_VALUE;
    2770  }
    2771 }
    2772 
    2773 static bool VmaValidateMagicValue(const void* pData, VkDeviceSize offset)
    2774 {
    2775  const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
    2776  const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
    2777  for(size_t i = 0; i < numberCount; ++i, ++pSrc)
    2778  {
    2779  if(*pSrc != VMA_CORRUPTION_DETECTION_MAGIC_VALUE)
    2780  {
    2781  return false;
    2782  }
    2783  }
    2784  return true;
    2785 }
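/*
Sketch of how these helpers fit together (illustrative; pData is a hypothetical
pointer to a mapped memory block, allocOffset/allocSize describe an allocation
inside it, and VMA_DEBUG_MARGIN is non-zero): the magic number is written into
the margins around the allocation and validated again when it is freed.

    VmaWriteMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN); // margin before
    VmaWriteMagicValue(pData, allocOffset + allocSize);        // margin after
    // ...later, when the allocation is destroyed:
    VMA_ASSERT(VmaValidateMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN) &&
        VmaValidateMagicValue(pData, allocOffset + allocSize) &&
        "MEMORY CORRUPTION DETECTED!");
*/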
    2786 
    2787 // Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
    2788 struct VmaMutexLock
    2789 {
    2790  VMA_CLASS_NO_COPY(VmaMutexLock)
    2791 public:
    2792  VmaMutexLock(VMA_MUTEX& mutex, bool useMutex) :
    2793  m_pMutex(useMutex ? &mutex : VMA_NULL)
    2794  {
    2795  if(m_pMutex)
    2796  {
    2797  m_pMutex->Lock();
    2798  }
    2799  }
    2800 
    2801  ~VmaMutexLock()
    2802  {
    2803  if(m_pMutex)
    2804  {
    2805  m_pMutex->Unlock();
    2806  }
    2807  }
    2808 
    2809 private:
    2810  VMA_MUTEX* m_pMutex;
    2811 };
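// Typical usage (illustrative): VmaMutexLock lock(m_Mutex, m_UseMutex);
// locks only when useMutex is true and unlocks automatically at end of scope.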
    2812 
    2813 #if VMA_DEBUG_GLOBAL_MUTEX
    2814  static VMA_MUTEX gDebugGlobalMutex;
    2815  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
    2816 #else
    2817  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
    2818 #endif
    2819 
    2820 // Minimum size of a free suballocation to register it in the free suballocation collection.
    2821 static const VkDeviceSize VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER = 16;
    2822 
    2823 /*
    2824 Performs binary search and returns an iterator to the first element that is
    2825 greater than or equal to (key), according to comparison (cmp).
    2826 
    2827 Cmp should return true if first argument is less than second argument.
    2828 
    2829 The returned iterator points to the found element, if present in the collection,
    2830 or to the place where a new element with value (key) should be inserted.
    2831 */
    2832 template <typename IterT, typename KeyT, typename CmpT>
    2833 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT &key, CmpT cmp)
    2834 {
    2835  size_t down = 0, up = (end - beg);
    2836  while(down < up)
    2837  {
    2838  const size_t mid = (down + up) / 2;
    2839  if(cmp(*(beg+mid), key))
    2840  {
    2841  down = mid + 1;
    2842  }
    2843  else
    2844  {
    2845  up = mid;
    2846  }
    2847  }
    2848  return beg + down;
    2849 }
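/*
Illustrative example: searching {1, 3, 5, 7} for key 4 with cmp implementing
operator< returns an iterator to 5, the first element not less than 4.
Searching for key 8 returns end, which is also the correct insertion point.
*/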
    2850 
    2852 // Memory allocation
    2853 
    2854 static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
    2855 {
    2856  if((pAllocationCallbacks != VMA_NULL) &&
    2857  (pAllocationCallbacks->pfnAllocation != VMA_NULL))
    2858  {
    2859  return (*pAllocationCallbacks->pfnAllocation)(
    2860  pAllocationCallbacks->pUserData,
    2861  size,
    2862  alignment,
    2863  VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    2864  }
    2865  else
    2866  {
    2867  return VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
    2868  }
    2869 }
    2870 
    2871 static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
    2872 {
    2873  if((pAllocationCallbacks != VMA_NULL) &&
    2874  (pAllocationCallbacks->pfnFree != VMA_NULL))
    2875  {
    2876  (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
    2877  }
    2878  else
    2879  {
    2880  VMA_SYSTEM_FREE(ptr);
    2881  }
    2882 }
    2883 
    2884 template<typename T>
    2885 static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
    2886 {
    2887  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
    2888 }
    2889 
    2890 template<typename T>
    2891 static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
    2892 {
    2893  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
    2894 }
    2895 
    2896 #define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
    2897 
    2898 #define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
    2899 
    2900 template<typename T>
    2901 static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
    2902 {
    2903  ptr->~T();
    2904  VmaFree(pAllocationCallbacks, ptr);
    2905 }
    2906 
    2907 template<typename T>
    2908 static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
    2909 {
    2910  if(ptr != VMA_NULL)
    2911  {
    2912  for(size_t i = count; i--; )
    2913  {
    2914  ptr[i].~T();
    2915  }
    2916  VmaFree(pAllocationCallbacks, ptr);
    2917  }
    2918 }
    2919 
    2920 // STL-compatible allocator.
    2921 template<typename T>
    2922 class VmaStlAllocator
    2923 {
    2924 public:
    2925  const VkAllocationCallbacks* const m_pCallbacks;
    2926  typedef T value_type;
    2927 
    2928  VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) { }
    2929  template<typename U> VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) { }
    2930 
    2931  T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
    2932  void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
    2933 
    2934  template<typename U>
    2935  bool operator==(const VmaStlAllocator<U>& rhs) const
    2936  {
    2937  return m_pCallbacks == rhs.m_pCallbacks;
    2938  }
    2939  template<typename U>
    2940  bool operator!=(const VmaStlAllocator<U>& rhs) const
    2941  {
    2942  return m_pCallbacks != rhs.m_pCallbacks;
    2943  }
    2944 
    2945  VmaStlAllocator& operator=(const VmaStlAllocator& x) = delete;
    2946 };
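/*
Illustrative usage: VmaStlAllocator adapts VkAllocationCallbacks to the STL
allocator interface, so the containers defined below can route their memory
through the user-provided callbacks:

    VmaVector< uint32_t, VmaStlAllocator<uint32_t> > v(
        VmaStlAllocator<uint32_t>(pAllocationCallbacks));
    v.push_back(42);
*/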
    2947 
    2948 #if VMA_USE_STL_VECTOR
    2949 
    2950 #define VmaVector std::vector
    2951 
    2952 template<typename T, typename allocatorT>
    2953 static void VmaVectorInsert(std::vector<T, allocatorT>& vec, size_t index, const T& item)
    2954 {
    2955  vec.insert(vec.begin() + index, item);
    2956 }
    2957 
    2958 template<typename T, typename allocatorT>
    2959 static void VmaVectorRemove(std::vector<T, allocatorT>& vec, size_t index)
    2960 {
    2961  vec.erase(vec.begin() + index);
    2962 }
    2963 
    2964 #else // #if VMA_USE_STL_VECTOR
    2965 
    2966 /* Class with interface compatible with a subset of std::vector.
    2967 T must be POD because constructors and destructors are not called and memcpy is
    2968 used for these objects. */
    2969 template<typename T, typename AllocatorT>
    2970 class VmaVector
    2971 {
    2972 public:
    2973  typedef T value_type;
    2974 
    2975  VmaVector(const AllocatorT& allocator) :
    2976  m_Allocator(allocator),
    2977  m_pArray(VMA_NULL),
    2978  m_Count(0),
    2979  m_Capacity(0)
    2980  {
    2981  }
    2982 
    2983  VmaVector(size_t count, const AllocatorT& allocator) :
    2984  m_Allocator(allocator),
    2985  m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
    2986  m_Count(count),
    2987  m_Capacity(count)
    2988  {
    2989  }
    2990 
    2991  VmaVector(const VmaVector<T, AllocatorT>& src) :
    2992  m_Allocator(src.m_Allocator),
    2993  m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
    2994  m_Count(src.m_Count),
    2995  m_Capacity(src.m_Count)
    2996  {
    2997  if(m_Count != 0)
    2998  {
    2999  memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
    3000  }
    3001  }
    3002 
    3003  ~VmaVector()
    3004  {
    3005  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3006  }
    3007 
    3008  VmaVector& operator=(const VmaVector<T, AllocatorT>& rhs)
    3009  {
    3010  if(&rhs != this)
    3011  {
    3012  resize(rhs.m_Count);
    3013  if(m_Count != 0)
    3014  {
    3015  memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
    3016  }
    3017  }
    3018  return *this;
    3019  }
    3020 
    3021  bool empty() const { return m_Count == 0; }
    3022  size_t size() const { return m_Count; }
    3023  T* data() { return m_pArray; }
    3024  const T* data() const { return m_pArray; }
    3025 
    3026  T& operator[](size_t index)
    3027  {
    3028  VMA_HEAVY_ASSERT(index < m_Count);
    3029  return m_pArray[index];
    3030  }
    3031  const T& operator[](size_t index) const
    3032  {
    3033  VMA_HEAVY_ASSERT(index < m_Count);
    3034  return m_pArray[index];
    3035  }
    3036 
    3037  T& front()
    3038  {
    3039  VMA_HEAVY_ASSERT(m_Count > 0);
    3040  return m_pArray[0];
    3041  }
    3042  const T& front() const
    3043  {
    3044  VMA_HEAVY_ASSERT(m_Count > 0);
    3045  return m_pArray[0];
    3046  }
    3047  T& back()
    3048  {
    3049  VMA_HEAVY_ASSERT(m_Count > 0);
    3050  return m_pArray[m_Count - 1];
    3051  }
    3052  const T& back() const
    3053  {
    3054  VMA_HEAVY_ASSERT(m_Count > 0);
    3055  return m_pArray[m_Count - 1];
    3056  }
    3057 
    3058  void reserve(size_t newCapacity, bool freeMemory = false)
    3059  {
    3060  newCapacity = VMA_MAX(newCapacity, m_Count);
    3061 
    3062  if((newCapacity < m_Capacity) && !freeMemory)
    3063  {
    3064  newCapacity = m_Capacity;
    3065  }
    3066 
    3067  if(newCapacity != m_Capacity)
    3068  {
    3069  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
    3070  if(m_Count != 0)
    3071  {
    3072  memcpy(newArray, m_pArray, m_Count * sizeof(T));
    3073  }
    3074  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3075  m_Capacity = newCapacity;
    3076  m_pArray = newArray;
    3077  }
    3078  }
    3079 
    3080  void resize(size_t newCount, bool freeMemory = false)
    3081  {
    3082  size_t newCapacity = m_Capacity;
    3083  if(newCount > m_Capacity)
    3084  {
    3085  newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
    3086  }
    3087  else if(freeMemory)
    3088  {
    3089  newCapacity = newCount;
    3090  }
    3091 
    3092  if(newCapacity != m_Capacity)
    3093  {
    3094  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
    3095  const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
    3096  if(elementsToCopy != 0)
    3097  {
    3098  memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
    3099  }
    3100  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3101  m_Capacity = newCapacity;
    3102  m_pArray = newArray;
    3103  }
    3104 
    3105  m_Count = newCount;
    3106  }
    3107 
    3108  void clear(bool freeMemory = false)
    3109  {
    3110  resize(0, freeMemory);
    3111  }
    3112 
    3113  void insert(size_t index, const T& src)
    3114  {
    3115  VMA_HEAVY_ASSERT(index <= m_Count);
    3116  const size_t oldCount = size();
    3117  resize(oldCount + 1);
    3118  if(index < oldCount)
    3119  {
    3120  memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
    3121  }
    3122  m_pArray[index] = src;
    3123  }
    3124 
    3125  void remove(size_t index)
    3126  {
    3127  VMA_HEAVY_ASSERT(index < m_Count);
    3128  const size_t oldCount = size();
    3129  if(index < oldCount - 1)
    3130  {
    3131  memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
    3132  }
    3133  resize(oldCount - 1);
    3134  }
    3135 
    3136  void push_back(const T& src)
    3137  {
    3138  const size_t newIndex = size();
    3139  resize(newIndex + 1);
    3140  m_pArray[newIndex] = src;
    3141  }
    3142 
    3143  void pop_back()
    3144  {
    3145  VMA_HEAVY_ASSERT(m_Count > 0);
    3146  resize(size() - 1);
    3147  }
    3148 
    3149  void push_front(const T& src)
    3150  {
    3151  insert(0, src);
    3152  }
    3153 
    3154  void pop_front()
    3155  {
    3156  VMA_HEAVY_ASSERT(m_Count > 0);
    3157  remove(0);
    3158  }
    3159 
    3160  typedef T* iterator;
    3161 
    3162  iterator begin() { return m_pArray; }
    3163  iterator end() { return m_pArray + m_Count; }
    3164 
    3165 private:
    3166  AllocatorT m_Allocator;
    3167  T* m_pArray;
    3168  size_t m_Count;
    3169  size_t m_Capacity;
    3170 };
    3171 
    3172 template<typename T, typename allocatorT>
    3173 static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
    3174 {
    3175  vec.insert(index, item);
    3176 }
    3177 
    3178 template<typename T, typename allocatorT>
    3179 static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
    3180 {
    3181  vec.remove(index);
    3182 }
    3183 
    3184 #endif // #if VMA_USE_STL_VECTOR
    3185 
    3186 template<typename CmpLess, typename VectorT>
    3187 size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
    3188 {
    3189  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3190  vector.data(),
    3191  vector.data() + vector.size(),
    3192  value,
    3193  CmpLess()) - vector.data();
    3194  VmaVectorInsert(vector, indexToInsert, value);
    3195  return indexToInsert;
    3196 }
    3197 
    3198 template<typename CmpLess, typename VectorT>
    3199 bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
    3200 {
    3201  CmpLess comparator;
    3202  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
    3203  vector.begin(),
    3204  vector.end(),
    3205  value,
    3206  comparator);
    3207  if((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
    3208  {
    3209  size_t indexToRemove = it - vector.begin();
    3210  VmaVectorRemove(vector, indexToRemove);
    3211  return true;
    3212  }
    3213  return false;
    3214 }
    3215 
    3216 template<typename CmpLess, typename VectorT>
    3217 size_t VmaVectorFindSorted(const VectorT& vector, const typename VectorT::value_type& value)
    3218 {
    3219  CmpLess comparator;
    3220  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
    3221  vector.data(),
    3222  vector.data() + vector.size(),
    3223  value,
    3224  comparator);
    3225  if(it != vector.data() + vector.size() && !comparator(*it, value) && !comparator(value, *it))
    3226  {
    3227  return it - vector.data();
    3228  }
    3229  else
    3230  {
    3231  return vector.size();
    3232  }
    3233 }
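/*
Illustrative usage of the sorted-vector helpers above, assuming a hypothetical
comparator type:

    struct VmaUint32Less
    {
        bool operator()(uint32_t a, uint32_t b) const { return a < b; }
    };

    VmaVectorInsertSorted<VmaUint32Less>(vec, 5u); // keeps vec sorted
    VmaVectorRemoveSorted<VmaUint32Less>(vec, 5u); // returns true if found and removed
*/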
    3234 
    3236 // class VmaPoolAllocator
    3237 
    3238 /*
    3239 Allocator for objects of type T using a list of arrays (pools) to speed up
    3240 allocation. The number of elements that can be allocated is not bounded, because
    3241 the allocator can create multiple blocks.
    3242 */
    3243 template<typename T>
    3244 class VmaPoolAllocator
    3245 {
    3246  VMA_CLASS_NO_COPY(VmaPoolAllocator)
    3247 public:
    3248  VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock);
    3249  ~VmaPoolAllocator();
    3250  void Clear();
    3251  T* Alloc();
    3252  void Free(T* ptr);
    3253 
    3254 private:
    3255  union Item
    3256  {
    3257  uint32_t NextFreeIndex;
    3258  T Value;
    3259  };
    3260 
    3261  struct ItemBlock
    3262  {
    3263  Item* pItems;
    3264  uint32_t FirstFreeIndex;
    3265  };
    3266 
    3267  const VkAllocationCallbacks* m_pAllocationCallbacks;
    3268  size_t m_ItemsPerBlock;
    3269  VmaVector< ItemBlock, VmaStlAllocator<ItemBlock> > m_ItemBlocks;
    3270 
    3271  ItemBlock& CreateNewBlock();
    3272 };
    3273 
    3274 template<typename T>
    3275 VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock) :
    3276  m_pAllocationCallbacks(pAllocationCallbacks),
    3277  m_ItemsPerBlock(itemsPerBlock),
    3278  m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
    3279 {
    3280  VMA_ASSERT(itemsPerBlock > 0);
    3281 }
    3282 
    3283 template<typename T>
    3284 VmaPoolAllocator<T>::~VmaPoolAllocator()
    3285 {
    3286  Clear();
    3287 }
    3288 
    3289 template<typename T>
    3290 void VmaPoolAllocator<T>::Clear()
    3291 {
    3292  for(size_t i = m_ItemBlocks.size(); i--; )
    3293  vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemsPerBlock);
    3294  m_ItemBlocks.clear();
    3295 }
    3296 
    3297 template<typename T>
    3298 T* VmaPoolAllocator<T>::Alloc()
    3299 {
    3300  for(size_t i = m_ItemBlocks.size(); i--; )
    3301  {
    3302  ItemBlock& block = m_ItemBlocks[i];
    3303  // This block has some free items: Use the first one.
    3304  if(block.FirstFreeIndex != UINT32_MAX)
    3305  {
    3306  Item* const pItem = &block.pItems[block.FirstFreeIndex];
    3307  block.FirstFreeIndex = pItem->NextFreeIndex;
    3308  return &pItem->Value;
    3309  }
    3310  }
    3311 
    3312  // No block has a free item: Create a new one and use it.
    3313  ItemBlock& newBlock = CreateNewBlock();
    3314  Item* const pItem = &newBlock.pItems[0];
    3315  newBlock.FirstFreeIndex = pItem->NextFreeIndex;
    3316  return &pItem->Value;
    3317 }
    3318 
    3319 template<typename T>
    3320 void VmaPoolAllocator<T>::Free(T* ptr)
    3321 {
    3322  // Search all memory blocks to find ptr.
    3323  for(size_t i = 0; i < m_ItemBlocks.size(); ++i)
    3324  {
    3325  ItemBlock& block = m_ItemBlocks[i];
    3326 
    3327  // Casting to union.
    3328  Item* pItemPtr;
    3329  memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
    3330 
    3331  // Check if pItemPtr is in address range of this block.
    3332  if((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + m_ItemsPerBlock))
    3333  {
    3334  const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
    3335  pItemPtr->NextFreeIndex = block.FirstFreeIndex;
    3336  block.FirstFreeIndex = index;
    3337  return;
    3338  }
    3339  }
    3340  VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
    3341 }
    3342 
    3343 template<typename T>
    3344 typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
    3345 {
    3346  ItemBlock newBlock = {
    3347  vma_new_array(m_pAllocationCallbacks, Item, m_ItemsPerBlock), 0 };
    3348 
    3349  m_ItemBlocks.push_back(newBlock);
    3350 
    3351  // Setup singly-linked list of all free items in this block.
    3352  for(uint32_t i = 0; i < m_ItemsPerBlock - 1; ++i)
    3353  newBlock.pItems[i].NextFreeIndex = i + 1;
    3354  newBlock.pItems[m_ItemsPerBlock - 1].NextFreeIndex = UINT32_MAX;
    3355  return m_ItemBlocks.back();
    3356 }
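/*
Illustrative usage sketch: Alloc() returns uninitialized storage from a block
with a free item (or from a newly created block), and Free() returns it to that
block's free list:

    VmaPoolAllocator<int> intAllocator(pAllocationCallbacks, 128);
    int* p = intAllocator.Alloc();
    *p = 7;
    intAllocator.Free(p);
*/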
    3357 
    3359 // class VmaRawList, VmaList
    3360 
    3361 #if VMA_USE_STL_LIST
    3362 
    3363 #define VmaList std::list
    3364 
    3365 #else // #if VMA_USE_STL_LIST
    3366 
    3367 template<typename T>
    3368 struct VmaListItem
    3369 {
    3370  VmaListItem* pPrev;
    3371  VmaListItem* pNext;
    3372  T Value;
    3373 };
    3374 
    3375 // Doubly linked list.
    3376 template<typename T>
    3377 class VmaRawList
    3378 {
    3379  VMA_CLASS_NO_COPY(VmaRawList)
    3380 public:
    3381  typedef VmaListItem<T> ItemType;
    3382 
    3383  VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
    3384  ~VmaRawList();
    3385  void Clear();
    3386 
    3387  size_t GetCount() const { return m_Count; }
    3388  bool IsEmpty() const { return m_Count == 0; }
    3389 
    3390  ItemType* Front() { return m_pFront; }
    3391  const ItemType* Front() const { return m_pFront; }
    3392  ItemType* Back() { return m_pBack; }
    3393  const ItemType* Back() const { return m_pBack; }
    3394 
    3395  ItemType* PushBack();
    3396  ItemType* PushFront();
    3397  ItemType* PushBack(const T& value);
    3398  ItemType* PushFront(const T& value);
    3399  void PopBack();
    3400  void PopFront();
    3401 
    3402  // Item can be null - it means PushBack.
    3403  ItemType* InsertBefore(ItemType* pItem);
    3404  // Item can be null - it means PushFront.
    3405  ItemType* InsertAfter(ItemType* pItem);
    3406 
    3407  ItemType* InsertBefore(ItemType* pItem, const T& value);
    3408  ItemType* InsertAfter(ItemType* pItem, const T& value);
    3409 
    3410  void Remove(ItemType* pItem);
    3411 
    3412 private:
    3413  const VkAllocationCallbacks* const m_pAllocationCallbacks;
    3414  VmaPoolAllocator<ItemType> m_ItemAllocator;
    3415  ItemType* m_pFront;
    3416  ItemType* m_pBack;
    3417  size_t m_Count;
    3418 };
    3419 
    3420 template<typename T>
    3421 VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks) :
    3422  m_pAllocationCallbacks(pAllocationCallbacks),
    3423  m_ItemAllocator(pAllocationCallbacks, 128),
    3424  m_pFront(VMA_NULL),
    3425  m_pBack(VMA_NULL),
    3426  m_Count(0)
    3427 {
    3428 }
    3429 
    3430 template<typename T>
    3431 VmaRawList<T>::~VmaRawList()
    3432 {
    3433  // Intentionally not calling Clear, because that would spend unnecessary
    3434  // computation returning all items to m_ItemAllocator as free.
    3435 }
    3436 
    3437 template<typename T>
    3438 void VmaRawList<T>::Clear()
    3439 {
    3440  if(IsEmpty() == false)
    3441  {
    3442  ItemType* pItem = m_pBack;
    3443  while(pItem != VMA_NULL)
    3444  {
    3445  ItemType* const pPrevItem = pItem->pPrev;
    3446  m_ItemAllocator.Free(pItem);
    3447  pItem = pPrevItem;
    3448  }
    3449  m_pFront = VMA_NULL;
    3450  m_pBack = VMA_NULL;
    3451  m_Count = 0;
    3452  }
    3453 }
    3454 
    3455 template<typename T>
    3456 VmaListItem<T>* VmaRawList<T>::PushBack()
    3457 {
    3458  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3459  pNewItem->pNext = VMA_NULL;
    3460  if(IsEmpty())
    3461  {
    3462  pNewItem->pPrev = VMA_NULL;
    3463  m_pFront = pNewItem;
    3464  m_pBack = pNewItem;
    3465  m_Count = 1;
    3466  }
    3467  else
    3468  {
    3469  pNewItem->pPrev = m_pBack;
    3470  m_pBack->pNext = pNewItem;
    3471  m_pBack = pNewItem;
    3472  ++m_Count;
    3473  }
    3474  return pNewItem;
    3475 }
    3476 
    3477 template<typename T>
    3478 VmaListItem<T>* VmaRawList<T>::PushFront()
    3479 {
    3480  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3481  pNewItem->pPrev = VMA_NULL;
    3482  if(IsEmpty())
    3483  {
    3484  pNewItem->pNext = VMA_NULL;
    3485  m_pFront = pNewItem;
    3486  m_pBack = pNewItem;
    3487  m_Count = 1;
    3488  }
    3489  else
    3490  {
    3491  pNewItem->pNext = m_pFront;
    3492  m_pFront->pPrev = pNewItem;
    3493  m_pFront = pNewItem;
    3494  ++m_Count;
    3495  }
    3496  return pNewItem;
    3497 }
    3498 
    3499 template<typename T>
    3500 VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
    3501 {
    3502  ItemType* const pNewItem = PushBack();
    3503  pNewItem->Value = value;
    3504  return pNewItem;
    3505 }
    3506 
    3507 template<typename T>
    3508 VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
    3509 {
    3510  ItemType* const pNewItem = PushFront();
    3511  pNewItem->Value = value;
    3512  return pNewItem;
    3513 }
    3514 
    3515 template<typename T>
    3516 void VmaRawList<T>::PopBack()
    3517 {
    3518  VMA_HEAVY_ASSERT(m_Count > 0);
    3519  ItemType* const pBackItem = m_pBack;
    3520  ItemType* const pPrevItem = pBackItem->pPrev;
    3521  if(pPrevItem != VMA_NULL)
    3522  {
    3523  pPrevItem->pNext = VMA_NULL;
    3524  }
    3525  m_pBack = pPrevItem;
    3526  m_ItemAllocator.Free(pBackItem);
    3527  --m_Count;
    3528 }
    3529 
    3530 template<typename T>
    3531 void VmaRawList<T>::PopFront()
    3532 {
    3533  VMA_HEAVY_ASSERT(m_Count > 0);
    3534  ItemType* const pFrontItem = m_pFront;
    3535  ItemType* const pNextItem = pFrontItem->pNext;
    3536  if(pNextItem != VMA_NULL)
    3537  {
    3538  pNextItem->pPrev = VMA_NULL;
    3539  }
    3540  m_pFront = pNextItem;
    3541  m_ItemAllocator.Free(pFrontItem);
    3542  --m_Count;
    3543 }
    3544 
    3545 template<typename T>
    3546 void VmaRawList<T>::Remove(ItemType* pItem)
    3547 {
    3548  VMA_HEAVY_ASSERT(pItem != VMA_NULL);
    3549  VMA_HEAVY_ASSERT(m_Count > 0);
    3550 
    3551  if(pItem->pPrev != VMA_NULL)
    3552  {
    3553  pItem->pPrev->pNext = pItem->pNext;
    3554  }
    3555  else
    3556  {
    3557  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3558  m_pFront = pItem->pNext;
    3559  }
    3560 
    3561  if(pItem->pNext != VMA_NULL)
    3562  {
    3563  pItem->pNext->pPrev = pItem->pPrev;
    3564  }
    3565  else
    3566  {
    3567  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3568  m_pBack = pItem->pPrev;
    3569  }
    3570 
    3571  m_ItemAllocator.Free(pItem);
    3572  --m_Count;
    3573 }
    3574 
    3575 template<typename T>
    3576 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
    3577 {
    3578  if(pItem != VMA_NULL)
    3579  {
    3580  ItemType* const prevItem = pItem->pPrev;
    3581  ItemType* const newItem = m_ItemAllocator.Alloc();
    3582  newItem->pPrev = prevItem;
    3583  newItem->pNext = pItem;
    3584  pItem->pPrev = newItem;
    3585  if(prevItem != VMA_NULL)
    3586  {
    3587  prevItem->pNext = newItem;
    3588  }
    3589  else
    3590  {
    3591  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3592  m_pFront = newItem;
    3593  }
    3594  ++m_Count;
    3595  return newItem;
    3596  }
    3597  else
    3598  return PushBack();
    3599 }
    3600 
    3601 template<typename T>
    3602 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
    3603 {
    3604  if(pItem != VMA_NULL)
    3605  {
    3606  ItemType* const nextItem = pItem->pNext;
    3607  ItemType* const newItem = m_ItemAllocator.Alloc();
    3608  newItem->pNext = nextItem;
    3609  newItem->pPrev = pItem;
    3610  pItem->pNext = newItem;
    3611  if(nextItem != VMA_NULL)
    3612  {
    3613  nextItem->pPrev = newItem;
    3614  }
    3615  else
    3616  {
    3617  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3618  m_pBack = newItem;
    3619  }
    3620  ++m_Count;
    3621  return newItem;
    3622  }
    3623  else
    3624  return PushFront();
    3625 }
    3626 
    3627 template<typename T>
    3628 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
    3629 {
    3630  ItemType* const newItem = InsertBefore(pItem);
    3631  newItem->Value = value;
    3632  return newItem;
    3633 }
    3634 
    3635 template<typename T>
    3636 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
    3637 {
    3638  ItemType* const newItem = InsertAfter(pItem);
    3639  newItem->Value = value;
    3640  return newItem;
    3641 }
    3642 
    3643 template<typename T, typename AllocatorT>
    3644 class VmaList
    3645 {
    3646  VMA_CLASS_NO_COPY(VmaList)
    3647 public:
    3648  class iterator
    3649  {
    3650  public:
    3651  iterator() :
    3652  m_pList(VMA_NULL),
    3653  m_pItem(VMA_NULL)
    3654  {
    3655  }
    3656 
    3657  T& operator*() const
    3658  {
    3659  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3660  return m_pItem->Value;
    3661  }
    3662  T* operator->() const
    3663  {
    3664  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3665  return &m_pItem->Value;
    3666  }
    3667 
    3668  iterator& operator++()
    3669  {
    3670  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3671  m_pItem = m_pItem->pNext;
    3672  return *this;
    3673  }
    3674  iterator& operator--()
    3675  {
    3676  if(m_pItem != VMA_NULL)
    3677  {
    3678  m_pItem = m_pItem->pPrev;
    3679  }
    3680  else
    3681  {
    3682  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3683  m_pItem = m_pList->Back();
    3684  }
    3685  return *this;
    3686  }
    3687 
    3688  iterator operator++(int)
    3689  {
    3690  iterator result = *this;
    3691  ++*this;
    3692  return result;
    3693  }
    3694  iterator operator--(int)
    3695  {
    3696  iterator result = *this;
    3697  --*this;
    3698  return result;
    3699  }
    3700 
    3701  bool operator==(const iterator& rhs) const
    3702  {
    3703  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3704  return m_pItem == rhs.m_pItem;
    3705  }
    3706  bool operator!=(const iterator& rhs) const
    3707  {
    3708  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3709  return m_pItem != rhs.m_pItem;
    3710  }
    3711 
    3712  private:
    3713  VmaRawList<T>* m_pList;
    3714  VmaListItem<T>* m_pItem;
    3715 
    3716  iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) :
    3717  m_pList(pList),
    3718  m_pItem(pItem)
    3719  {
    3720  }
    3721 
    3722  friend class VmaList<T, AllocatorT>;
    3723  };
    3724 
    3725  class const_iterator
    3726  {
    3727  public:
    3728  const_iterator() :
    3729  m_pList(VMA_NULL),
    3730  m_pItem(VMA_NULL)
    3731  {
    3732  }
    3733 
    3734  const_iterator(const iterator& src) :
    3735  m_pList(src.m_pList),
    3736  m_pItem(src.m_pItem)
    3737  {
    3738  }
    3739 
    3740  const T& operator*() const
    3741  {
    3742  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3743  return m_pItem->Value;
    3744  }
    3745  const T* operator->() const
    3746  {
    3747  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3748  return &m_pItem->Value;
    3749  }
    3750 
    3751  const_iterator& operator++()
    3752  {
    3753  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3754  m_pItem = m_pItem->pNext;
    3755  return *this;
    3756  }
    3757  const_iterator& operator--()
    3758  {
    3759  if(m_pItem != VMA_NULL)
    3760  {
    3761  m_pItem = m_pItem->pPrev;
    3762  }
    3763  else
    3764  {
    3765  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3766  m_pItem = m_pList->Back();
    3767  }
    3768  return *this;
    3769  }
    3770 
    3771  const_iterator operator++(int)
    3772  {
    3773  const_iterator result = *this;
    3774  ++*this;
    3775  return result;
    3776  }
    3777  const_iterator operator--(int)
    3778  {
    3779  const_iterator result = *this;
    3780  --*this;
    3781  return result;
    3782  }
    3783 
    3784  bool operator==(const const_iterator& rhs) const
    3785  {
    3786  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3787  return m_pItem == rhs.m_pItem;
    3788  }
    3789  bool operator!=(const const_iterator& rhs) const
    3790  {
    3791  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3792  return m_pItem != rhs.m_pItem;
    3793  }
    3794 
    3795  private:
    3796  const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) :
    3797  m_pList(pList),
    3798  m_pItem(pItem)
    3799  {
    3800  }
    3801 
    3802  const VmaRawList<T>* m_pList;
    3803  const VmaListItem<T>* m_pItem;
    3804 
    3805  friend class VmaList<T, AllocatorT>;
    3806  };
    3807 
    3808  VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) { }
    3809 
    3810  bool empty() const { return m_RawList.IsEmpty(); }
    3811  size_t size() const { return m_RawList.GetCount(); }
    3812 
    3813  iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
    3814  iterator end() { return iterator(&m_RawList, VMA_NULL); }
    3815 
    3816  const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
    3817  const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
    3818 
    3819  void clear() { m_RawList.Clear(); }
    3820  void push_back(const T& value) { m_RawList.PushBack(value); }
    3821  void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
    3822  iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
    3823 
    3824 private:
    3825  VmaRawList<T> m_RawList;
    3826 };
    3827 
    3828 #endif // #if VMA_USE_STL_LIST
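/*
Illustrative sketch (not part of the library): iterating a VmaList looks the
same as iterating std::list, because the iterator types above mirror the STL
interface. Names below (cb, intList) are hypothetical.

    VmaStlAllocator<int> alloc(cb); // cb: const VkAllocationCallbacks*
    VmaList< int, VmaStlAllocator<int> > intList(alloc);
    intList.push_back(1);
    intList.push_back(2);
    for(VmaList< int, VmaStlAllocator<int> >::iterator it = intList.begin();
        it != intList.end();
        ++it)
    {
        int value = *it; // Dereference asserts on a null item in debug builds.
        (void)value;
    }
*/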
    3829 
    3830 ////////////////////////////////////////////////////////////////////////////////
    3831 // class VmaMap
    3832 
    3833 // Unused in this version.
    3834 #if 0
    3835 
    3836 #if VMA_USE_STL_UNORDERED_MAP
    3837 
    3838 #define VmaPair std::pair
    3839 
    3840 #define VMA_MAP_TYPE(KeyT, ValueT) \
    3841  std::unordered_map< KeyT, ValueT, std::hash<KeyT>, std::equal_to<KeyT>, VmaStlAllocator< std::pair<KeyT, ValueT> > >
    3842 
    3843 #else // #if VMA_USE_STL_UNORDERED_MAP
    3844 
    3845 template<typename T1, typename T2>
    3846 struct VmaPair
    3847 {
    3848  T1 first;
    3849  T2 second;
    3850 
    3851  VmaPair() : first(), second() { }
    3852  VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) { }
    3853 };
    3854 
    3855 /* Class compatible with a subset of the interface of std::unordered_map.
    3856 KeyT, ValueT must be POD because they will be stored in VmaVector.
    3857 */
    3858 template<typename KeyT, typename ValueT>
    3859 class VmaMap
    3860 {
    3861 public:
    3862  typedef VmaPair<KeyT, ValueT> PairType;
    3863  typedef PairType* iterator;
    3864 
    3865  VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) { }
    3866 
    3867  iterator begin() { return m_Vector.begin(); }
    3868  iterator end() { return m_Vector.end(); }
    3869 
    3870  void insert(const PairType& pair);
    3871  iterator find(const KeyT& key);
    3872  void erase(iterator it);
    3873 
    3874 private:
    3875  VmaVector< PairType, VmaStlAllocator<PairType> > m_Vector;
    3876 };
    3877 
    3878 #define VMA_MAP_TYPE(KeyT, ValueT) VmaMap<KeyT, ValueT>
    3879 
    3880 template<typename FirstT, typename SecondT>
    3881 struct VmaPairFirstLess
    3882 {
    3883  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
    3884  {
    3885  return lhs.first < rhs.first;
    3886  }
    3887  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
    3888  {
    3889  return lhs.first < rhsFirst;
    3890  }
    3891 };
    3892 
    3893 template<typename KeyT, typename ValueT>
    3894 void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
    3895 {
    3896  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3897  m_Vector.data(),
    3898  m_Vector.data() + m_Vector.size(),
    3899  pair,
    3900  VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
    3901  VmaVectorInsert(m_Vector, indexToInsert, pair);
    3902 }
    3903 
    3904 template<typename KeyT, typename ValueT>
    3905 VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
    3906 {
    3907  PairType* it = VmaBinaryFindFirstNotLess(
    3908  m_Vector.data(),
    3909  m_Vector.data() + m_Vector.size(),
    3910  key,
    3911  VmaPairFirstLess<KeyT, ValueT>());
    3912  if((it != m_Vector.end()) && (it->first == key))
    3913  {
    3914  return it;
    3915  }
    3916  else
    3917  {
    3918  return m_Vector.end();
    3919  }
    3920 }
    3921 
    3922 template<typename KeyT, typename ValueT>
    3923 void VmaMap<KeyT, ValueT>::erase(iterator it)
    3924 {
    3925  VmaVectorRemove(m_Vector, it - m_Vector.begin());
    3926 }
    3927 
    3928 #endif // #if VMA_USE_STL_UNORDERED_MAP
    3929 
    3930 #endif // #if 0
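/*
Design note: VmaMap above keeps its pairs in a VmaVector sorted by key, so
insert() and find() both use binary search (VmaBinaryFindFirstNotLess) -
O(log n) lookup at the cost of O(n) insertion, which suits small maps with
rare mutation. A hypothetical sketch of the access pattern, assuming the
class were enabled (cb is a const VkAllocationCallbacks*):

    VmaMap<uint32_t, uint32_t> map(VmaStlAllocator< VmaPair<uint32_t, uint32_t> >(cb));
    map.insert(VmaPair<uint32_t, uint32_t>(7, 100));
    VmaMap<uint32_t, uint32_t>::iterator it = map.find(7);
    if(it != map.end())
    {
        uint32_t value = it->second;
        (void)value;
    }
*/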
    3931 
    3932 ////////////////////////////////////////////////////////////////////////////////
    3933 
    3934 class VmaDeviceMemoryBlock;
    3935 
    3936 enum VMA_CACHE_OPERATION { VMA_CACHE_FLUSH, VMA_CACHE_INVALIDATE };
    3937 
    3938 struct VmaAllocation_T
    3939 {
    3940  VMA_CLASS_NO_COPY(VmaAllocation_T)
    3941 private:
    3942  static const uint8_t MAP_COUNT_FLAG_PERSISTENT_MAP = 0x80;
    3943 
    3944  enum FLAGS
    3945  {
    3946  FLAG_USER_DATA_STRING = 0x01,
    3947  };
    3948 
    3949 public:
    3950  enum ALLOCATION_TYPE
    3951  {
    3952  ALLOCATION_TYPE_NONE,
    3953  ALLOCATION_TYPE_BLOCK,
    3954  ALLOCATION_TYPE_DEDICATED,
    3955  };
    3956 
    3957  VmaAllocation_T(uint32_t currentFrameIndex, bool userDataString) :
    3958  m_Alignment(1),
    3959  m_Size(0),
    3960  m_pUserData(VMA_NULL),
    3961  m_LastUseFrameIndex(currentFrameIndex),
    3962  m_Type((uint8_t)ALLOCATION_TYPE_NONE),
    3963  m_SuballocationType((uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN),
    3964  m_MapCount(0),
    3965  m_Flags(userDataString ? (uint8_t)FLAG_USER_DATA_STRING : 0)
    3966  {
    3967 #if VMA_STATS_STRING_ENABLED
    3968  m_CreationFrameIndex = currentFrameIndex;
    3969  m_BufferImageUsage = 0;
    3970 #endif
    3971  }
    3972 
    3973  ~VmaAllocation_T()
    3974  {
    3975  VMA_ASSERT((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) == 0 && "Allocation was not unmapped before destruction.");
    3976 
    3977  // Check if owned string was freed.
    3978  VMA_ASSERT(m_pUserData == VMA_NULL);
    3979  }
    3980 
    3981  void InitBlockAllocation(
    3982  VmaPool hPool,
    3983  VmaDeviceMemoryBlock* block,
    3984  VkDeviceSize offset,
    3985  VkDeviceSize alignment,
    3986  VkDeviceSize size,
    3987  VmaSuballocationType suballocationType,
    3988  bool mapped,
    3989  bool canBecomeLost)
    3990  {
    3991  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    3992  VMA_ASSERT(block != VMA_NULL);
    3993  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    3994  m_Alignment = alignment;
    3995  m_Size = size;
    3996  m_MapCount = mapped ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    3997  m_SuballocationType = (uint8_t)suballocationType;
    3998  m_BlockAllocation.m_hPool = hPool;
    3999  m_BlockAllocation.m_Block = block;
    4000  m_BlockAllocation.m_Offset = offset;
    4001  m_BlockAllocation.m_CanBecomeLost = canBecomeLost;
    4002  }
    4003 
    4004  void InitLost()
    4005  {
    4006  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    4007  VMA_ASSERT(m_LastUseFrameIndex.load() == VMA_FRAME_INDEX_LOST);
    4008  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    4009  m_BlockAllocation.m_hPool = VK_NULL_HANDLE;
    4010  m_BlockAllocation.m_Block = VMA_NULL;
    4011  m_BlockAllocation.m_Offset = 0;
    4012  m_BlockAllocation.m_CanBecomeLost = true;
    4013  }
    4014 
    4015  void ChangeBlockAllocation(
    4016  VmaAllocator hAllocator,
    4017  VmaDeviceMemoryBlock* block,
    4018  VkDeviceSize offset);
    4019 
    4020  // pMappedData not null means allocation is created with MAPPED flag.
    4021  void InitDedicatedAllocation(
    4022  uint32_t memoryTypeIndex,
    4023  VkDeviceMemory hMemory,
    4024  VmaSuballocationType suballocationType,
    4025  void* pMappedData,
    4026  VkDeviceSize size)
    4027  {
    4028  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    4029  VMA_ASSERT(hMemory != VK_NULL_HANDLE);
    4030  m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
    4031  m_Alignment = 0;
    4032  m_Size = size;
    4033  m_SuballocationType = (uint8_t)suballocationType;
    4034  m_MapCount = (pMappedData != VMA_NULL) ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    4035  m_DedicatedAllocation.m_MemoryTypeIndex = memoryTypeIndex;
    4036  m_DedicatedAllocation.m_hMemory = hMemory;
    4037  m_DedicatedAllocation.m_pMappedData = pMappedData;
    4038  }
    4039 
    4040  ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
    4041  VkDeviceSize GetAlignment() const { return m_Alignment; }
    4042  VkDeviceSize GetSize() const { return m_Size; }
    4043  bool IsUserDataString() const { return (m_Flags & FLAG_USER_DATA_STRING) != 0; }
    4044  void* GetUserData() const { return m_pUserData; }
    4045  void SetUserData(VmaAllocator hAllocator, void* pUserData);
    4046  VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
    4047 
    4048  VmaDeviceMemoryBlock* GetBlock() const
    4049  {
    4050  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    4051  return m_BlockAllocation.m_Block;
    4052  }
    4053  VkDeviceSize GetOffset() const;
    4054  VkDeviceMemory GetMemory() const;
    4055  uint32_t GetMemoryTypeIndex() const;
    4056  bool IsPersistentMap() const { return (m_MapCount & MAP_COUNT_FLAG_PERSISTENT_MAP) != 0; }
    4057  void* GetMappedData() const;
    4058  bool CanBecomeLost() const;
    4059  VmaPool GetPool() const;
    4060 
    4061  uint32_t GetLastUseFrameIndex() const
    4062  {
    4063  return m_LastUseFrameIndex.load();
    4064  }
    4065  bool CompareExchangeLastUseFrameIndex(uint32_t& expected, uint32_t desired)
    4066  {
    4067  return m_LastUseFrameIndex.compare_exchange_weak(expected, desired);
    4068  }
    4069  /*
    4070  - If hAllocation.LastUseFrameIndex + frameInUseCount < allocator.CurrentFrameIndex,
    4071  makes it lost by setting LastUseFrameIndex = VMA_FRAME_INDEX_LOST and returns true.
    4072  - Else, returns false.
    4073 
    4074  If hAllocation is already lost, assert - you should not call it then.
    4075  If hAllocation was not created with CAN_BECOME_LOST_BIT, assert.
    4076  */
    4077  bool MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    4078 
    4079  void DedicatedAllocCalcStatsInfo(VmaStatInfo& outInfo)
    4080  {
    4081  VMA_ASSERT(m_Type == ALLOCATION_TYPE_DEDICATED);
    4082  outInfo.blockCount = 1;
    4083  outInfo.allocationCount = 1;
    4084  outInfo.unusedRangeCount = 0;
    4085  outInfo.usedBytes = m_Size;
    4086  outInfo.unusedBytes = 0;
    4087  outInfo.allocationSizeMin = outInfo.allocationSizeMax = m_Size;
    4088  outInfo.unusedRangeSizeMin = UINT64_MAX;
    4089  outInfo.unusedRangeSizeMax = 0;
    4090  }
    4091 
    4092  void BlockAllocMap();
    4093  void BlockAllocUnmap();
    4094  VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
    4095  void DedicatedAllocUnmap(VmaAllocator hAllocator);
    4096 
    4097 #if VMA_STATS_STRING_ENABLED
    4098  uint32_t GetCreationFrameIndex() const { return m_CreationFrameIndex; }
    4099  uint32_t GetBufferImageUsage() const { return m_BufferImageUsage; }
    4100 
    4101  void InitBufferImageUsage(uint32_t bufferImageUsage)
    4102  {
    4103  VMA_ASSERT(m_BufferImageUsage == 0);
    4104  m_BufferImageUsage = bufferImageUsage;
    4105  }
    4106 
    4107  void PrintParameters(class VmaJsonWriter& json) const;
    4108 #endif
    4109 
    4110 private:
    4111  VkDeviceSize m_Alignment;
    4112  VkDeviceSize m_Size;
    4113  void* m_pUserData;
    4114  VMA_ATOMIC_UINT32 m_LastUseFrameIndex;
    4115  uint8_t m_Type; // ALLOCATION_TYPE
    4116  uint8_t m_SuballocationType; // VmaSuballocationType
    4117  // Bit 0x80 is set when allocation was created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
    4118  // Bits with mask 0x7F are reference counter for vmaMapMemory()/vmaUnmapMemory().
    4119  uint8_t m_MapCount;
    4120  uint8_t m_Flags; // enum FLAGS
    4121 
    4122  // Allocation out of VmaDeviceMemoryBlock.
    4123  struct BlockAllocation
    4124  {
    4125  VmaPool m_hPool; // Null if belongs to general memory.
    4126  VmaDeviceMemoryBlock* m_Block;
    4127  VkDeviceSize m_Offset;
    4128  bool m_CanBecomeLost;
    4129  };
    4130 
    4131  // Allocation for an object that has its own private VkDeviceMemory.
    4132  struct DedicatedAllocation
    4133  {
    4134  uint32_t m_MemoryTypeIndex;
    4135  VkDeviceMemory m_hMemory;
    4136  void* m_pMappedData; // Not null means memory is mapped.
    4137  };
    4138 
    4139  union
    4140  {
    4141  // Allocation out of VmaDeviceMemoryBlock.
    4142  BlockAllocation m_BlockAllocation;
    4143  // Allocation for an object that has its own private VkDeviceMemory.
    4144  DedicatedAllocation m_DedicatedAllocation;
    4145  };
    4146 
    4147 #if VMA_STATS_STRING_ENABLED
    4148  uint32_t m_CreationFrameIndex;
    4149  uint32_t m_BufferImageUsage; // 0 if unknown.
    4150 #endif
    4151 
    4152  void FreeUserDataString(VmaAllocator hAllocator);
    4153 };
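/*
Note on m_MapCount packing (see the member comments above): bit 0x80 marks a
persistently mapped allocation, bits with mask 0x7F count
vmaMapMemory()/vmaUnmapMemory() references. A worked example:

    persistent allocation, additionally mapped twice by the user:
        m_MapCount = 0x80 | 2 = 0x82
    reference count = m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP  // == 2
    persistent flag = m_MapCount & MAP_COUNT_FLAG_PERSISTENT_MAP   // != 0

This is why BlockAllocMap() further below refuses to increment past 0x7F:
the counter would otherwise overflow into the flag bit.
*/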
    4154 
    4155 /*
    4156 Represents a region of a VmaDeviceMemoryBlock that is either assigned to an
    4157 allocation and returned to the user, or free.
    4158 */
    4159 struct VmaSuballocation
    4160 {
    4161  VkDeviceSize offset;
    4162  VkDeviceSize size;
    4163  VmaAllocation hAllocation;
    4164  VmaSuballocationType type;
    4165 };
    4166 
    4167 typedef VmaList< VmaSuballocation, VmaStlAllocator<VmaSuballocation> > VmaSuballocationList;
    4168 
    4169 // Cost of making one additional allocation lost, expressed as an equivalent number of bytes.
    4170 static const VkDeviceSize VMA_LOST_ALLOCATION_COST = 1048576;
    4171 
    4172 /*
    4173 Parameters of planned allocation inside a VmaDeviceMemoryBlock.
    4174 
    4175 If canMakeOtherLost was false:
    4176 - item points to a FREE suballocation.
    4177 - itemsToMakeLostCount is 0.
    4178 
    4179 If canMakeOtherLost was true:
    4180 - item points to first of sequence of suballocations, which are either FREE,
    4181  or point to VmaAllocations that can become lost.
    4182 - itemsToMakeLostCount is the number of VmaAllocations that need to be made lost for
    4183  the requested allocation to succeed.
    4184 */
    4185 struct VmaAllocationRequest
    4186 {
    4187  VkDeviceSize offset;
    4188  VkDeviceSize sumFreeSize; // Sum size of free items that overlap with proposed allocation.
    4189  VkDeviceSize sumItemSize; // Sum size of items to make lost that overlap with proposed allocation.
    4190  VmaSuballocationList::iterator item;
    4191  size_t itemsToMakeLostCount;
    4192 
    4193  VkDeviceSize CalcCost() const
    4194  {
    4195  return sumItemSize + itemsToMakeLostCount * VMA_LOST_ALLOCATION_COST;
    4196  }
    4197 };
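/*
Worked example of CalcCost(): with VMA_LOST_ALLOCATION_COST = 1048576 (1 MiB),
a request that overlaps 524288 bytes of existing allocations and needs to make
2 of them lost costs 524288 + 2 * 1048576 = 2621440 "equivalent bytes".
A competing request that makes no allocations lost costs only its sumItemSize,
so it is preferred when choosing where to place the allocation.
*/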
    4198 
    4199 /*
    4200 Data structure used for bookkeeping of allocations and unused ranges of memory
    4201 in a single VkDeviceMemory block.
    4202 */
    4203 class VmaBlockMetadata
    4204 {
    4205  VMA_CLASS_NO_COPY(VmaBlockMetadata)
    4206 public:
    4207  VmaBlockMetadata(VmaAllocator hAllocator);
    4208  ~VmaBlockMetadata();
    4209  void Init(VkDeviceSize size);
    4210 
    4211  // Validates all data structures inside this object. If not valid, returns false.
    4212  bool Validate() const;
    4213  VkDeviceSize GetSize() const { return m_Size; }
    4214  size_t GetAllocationCount() const { return m_Suballocations.size() - m_FreeCount; }
    4215  VkDeviceSize GetSumFreeSize() const { return m_SumFreeSize; }
    4216  VkDeviceSize GetUnusedRangeSizeMax() const;
    4217  // Returns true if this block is empty - contains only a single free suballocation.
    4218  bool IsEmpty() const;
    4219 
    4220  void CalcAllocationStatInfo(VmaStatInfo& outInfo) const;
    4221  void AddPoolStats(VmaPoolStats& inoutStats) const;
    4222 
    4223 #if VMA_STATS_STRING_ENABLED
    4224  void PrintDetailedMap(class VmaJsonWriter& json) const;
    4225 #endif
    4226 
    4227  // Tries to find a place for suballocation with given parameters inside this block.
    4228  // If succeeded, fills pAllocationRequest and returns true.
    4229  // If failed, returns false.
    4230  bool CreateAllocationRequest(
    4231  uint32_t currentFrameIndex,
    4232  uint32_t frameInUseCount,
    4233  VkDeviceSize bufferImageGranularity,
    4234  VkDeviceSize allocSize,
    4235  VkDeviceSize allocAlignment,
    4236  VmaSuballocationType allocType,
    4237  bool canMakeOtherLost,
    4238  VmaAllocationRequest* pAllocationRequest);
    4239 
    4240  bool MakeRequestedAllocationsLost(
    4241  uint32_t currentFrameIndex,
    4242  uint32_t frameInUseCount,
    4243  VmaAllocationRequest* pAllocationRequest);
    4244 
    4245  uint32_t MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    4246 
    4247  VkResult CheckCorruption(const void* pBlockData);
    4248 
    4249  // Makes actual allocation based on request. Request must already be checked and valid.
    4250  void Alloc(
    4251  const VmaAllocationRequest& request,
    4252  VmaSuballocationType type,
    4253  VkDeviceSize allocSize,
    4254  VmaAllocation hAllocation);
    4255 
    4256  // Frees suballocation assigned to given memory region.
    4257  void Free(const VmaAllocation allocation);
    4258  void FreeAtOffset(VkDeviceSize offset);
    4259 
    4260 private:
    4261  VkDeviceSize m_Size;
    4262  uint32_t m_FreeCount;
    4263  VkDeviceSize m_SumFreeSize;
    4264  VmaSuballocationList m_Suballocations;
    4265  // Suballocations that are free and have size greater than certain threshold.
    4266  // Sorted by size, ascending.
    4267  VmaVector< VmaSuballocationList::iterator, VmaStlAllocator< VmaSuballocationList::iterator > > m_FreeSuballocationsBySize;
    4268 
    4269  bool ValidateFreeSuballocationList() const;
    4270 
    4271  // Checks if requested suballocation with given parameters can be placed in given suballocItem.
    4272  // If yes, fills pOffset and returns true. If no, returns false.
    4273  bool CheckAllocation(
    4274  uint32_t currentFrameIndex,
    4275  uint32_t frameInUseCount,
    4276  VkDeviceSize bufferImageGranularity,
    4277  VkDeviceSize allocSize,
    4278  VkDeviceSize allocAlignment,
    4279  VmaSuballocationType allocType,
    4280  VmaSuballocationList::const_iterator suballocItem,
    4281  bool canMakeOtherLost,
    4282  VkDeviceSize* pOffset,
    4283  size_t* itemsToMakeLostCount,
    4284  VkDeviceSize* pSumFreeSize,
    4285  VkDeviceSize* pSumItemSize) const;
    4286  // Given a free suballocation, merges it with the following one, which must also be free.
    4287  void MergeFreeWithNext(VmaSuballocationList::iterator item);
    4288  // Releases given suballocation, making it free.
    4289  // Merges it with adjacent free suballocations if applicable.
    4290  // Returns iterator to new free suballocation at this place.
    4291  VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
    4292  // Given a free suballocation, inserts it into the sorted list
    4293  // m_FreeSuballocationsBySize if its size qualifies.
    4294  void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
    4295  // Given a free suballocation, removes it from the sorted list
    4296  // m_FreeSuballocationsBySize if it was registered there.
    4297  void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
    4298 };
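/*
Sketch (an assumption about usage, based only on the declarations above) of
how a caller drives VmaBlockMetadata - find a place, then commit it:

    VmaAllocationRequest request;
    if(metadata.CreateAllocationRequest(
        currentFrameIndex, frameInUseCount,
        bufferImageGranularity,
        allocSize, allocAlignment, allocType,
        false, // canMakeOtherLost
        &request))
    {
        metadata.Alloc(request, allocType, allocSize, hAllocation);
    }

m_FreeSuballocationsBySize is kept sorted ascending, so a best-fit candidate
of size >= allocSize can be located with a binary search over that vector
instead of scanning the whole suballocation list.
*/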
    4299 
    4300 /*
    4301 Represents a single block of device memory (`VkDeviceMemory`) with all the
    4302 data about its regions (aka suballocations, #VmaAllocation), assigned and free.
    4303 
    4304 Thread-safety: This class must be externally synchronized.
    4305 */
    4306 class VmaDeviceMemoryBlock
    4307 {
    4308  VMA_CLASS_NO_COPY(VmaDeviceMemoryBlock)
    4309 public:
    4310  VmaBlockMetadata m_Metadata;
    4311 
    4312  VmaDeviceMemoryBlock(VmaAllocator hAllocator);
    4313 
    4314  ~VmaDeviceMemoryBlock()
    4315  {
    4316  VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
    4317  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
    4318  }
    4319 
    4320  // Always call after construction.
    4321  void Init(
    4322  uint32_t newMemoryTypeIndex,
    4323  VkDeviceMemory newMemory,
    4324  VkDeviceSize newSize,
    4325  uint32_t id);
    4326  // Always call before destruction.
    4327  void Destroy(VmaAllocator allocator);
    4328 
    4329  VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
    4330  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4331  uint32_t GetId() const { return m_Id; }
    4332  void* GetMappedData() const { return m_pMappedData; }
    4333 
    4334  // Validates all data structures inside this object. If not valid, returns false.
    4335  bool Validate() const;
    4336 
    4337  VkResult CheckCorruption(VmaAllocator hAllocator);
    4338 
    4339  // ppData can be null.
    4340  VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
    4341  void Unmap(VmaAllocator hAllocator, uint32_t count);
    4342 
    4343  VkResult WriteMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
    4344  VkResult ValidateMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
    4345 
    4346  VkResult BindBufferMemory(
    4347  const VmaAllocator hAllocator,
    4348  const VmaAllocation hAllocation,
    4349  VkBuffer hBuffer);
    4350  VkResult BindImageMemory(
    4351  const VmaAllocator hAllocator,
    4352  const VmaAllocation hAllocation,
    4353  VkImage hImage);
    4354 
    4355 private:
    4356  uint32_t m_MemoryTypeIndex;
    4357  uint32_t m_Id;
    4358  VkDeviceMemory m_hMemory;
    4359 
    4360  // Protects access to m_hMemory so it's not used by multiple threads simultaneously, e.g. vkMapMemory, vkBindBufferMemory.
    4361  // Also protects m_MapCount, m_pMappedData.
    4362  VMA_MUTEX m_Mutex;
    4363  uint32_t m_MapCount;
    4364  void* m_pMappedData;
    4365 };
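/*
Relation to corruption detection (a sketch, not a normative description):
WriteMagicValueAroundAllocation() fills the margin bytes immediately before
and after the allocation's range inside this block with a magic pattern, and
ValidateMagicValueAroundAllocation() re-reads both regions later:

    [ ...free... | margin | allocation payload | margin | ...free... ]
                   ^ magic                      ^ magic

If either margin no longer holds the magic value, CheckCorruption() reports
it, which indicates an out-of-bounds write by CPU or GPU. The Map()/Unmap()
pair above is used to make the block's memory host-visible for these checks
when it is not already mapped.
*/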
    4366 
    4367 struct VmaPointerLess
    4368 {
    4369  bool operator()(const void* lhs, const void* rhs) const
    4370  {
    4371  return lhs < rhs;
    4372  }
    4373 };
    4374 
    4375 class VmaDefragmentator;
    4376 
    4377 /*
    4378 Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
    4379 Vulkan memory type.
    4380 
    4381 Synchronized internally with a mutex.
    4382 */
    4383 struct VmaBlockVector
    4384 {
    4385  VMA_CLASS_NO_COPY(VmaBlockVector)
    4386 public:
    4387  VmaBlockVector(
    4388  VmaAllocator hAllocator,
    4389  uint32_t memoryTypeIndex,
    4390  VkDeviceSize preferredBlockSize,
    4391  size_t minBlockCount,
    4392  size_t maxBlockCount,
    4393  VkDeviceSize bufferImageGranularity,
    4394  uint32_t frameInUseCount,
    4395  bool isCustomPool);
    4396  ~VmaBlockVector();
    4397 
    4398  VkResult CreateMinBlocks();
    4399 
    4400  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4401  VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
    4402  VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
    4403  uint32_t GetFrameInUseCount() const { return m_FrameInUseCount; }
    4404 
    4405  void GetPoolStats(VmaPoolStats* pStats);
    4406 
    4407  bool IsEmpty() const { return m_Blocks.empty(); }
    4408  bool IsCorruptionDetectionEnabled() const;
    4409 
    4410  VkResult Allocate(
    4411  VmaPool hCurrentPool,
    4412  uint32_t currentFrameIndex,
    4413  VkDeviceSize size,
    4414  VkDeviceSize alignment,
    4415  const VmaAllocationCreateInfo& createInfo,
    4416  VmaSuballocationType suballocType,
    4417  VmaAllocation* pAllocation);
    4418 
    4419  void Free(
    4420  VmaAllocation hAllocation);
    4421 
    4422  // Adds statistics of this BlockVector to pStats.
    4423  void AddStats(VmaStats* pStats);
    4424 
    4425 #if VMA_STATS_STRING_ENABLED
    4426  void PrintDetailedMap(class VmaJsonWriter& json);
    4427 #endif
    4428 
    4429  void MakePoolAllocationsLost(
    4430  uint32_t currentFrameIndex,
    4431  size_t* pLostAllocationCount);
    4432  VkResult CheckCorruption();
    4433 
    4434  VmaDefragmentator* EnsureDefragmentator(
    4435  VmaAllocator hAllocator,
    4436  uint32_t currentFrameIndex);
    4437 
    4438  VkResult Defragment(
    4439  VmaDefragmentationStats* pDefragmentationStats,
    4440  VkDeviceSize& maxBytesToMove,
    4441  uint32_t& maxAllocationsToMove);
    4442 
    4443  void DestroyDefragmentator();
    4444 
    4445 private:
    4446  friend class VmaDefragmentator;
    4447 
    4448  const VmaAllocator m_hAllocator;
    4449  const uint32_t m_MemoryTypeIndex;
    4450  const VkDeviceSize m_PreferredBlockSize;
    4451  const size_t m_MinBlockCount;
    4452  const size_t m_MaxBlockCount;
    4453  const VkDeviceSize m_BufferImageGranularity;
    4454  const uint32_t m_FrameInUseCount;
    4455  const bool m_IsCustomPool;
    4456  VMA_MUTEX m_Mutex;
    4457  // Incrementally sorted by sumFreeSize, ascending.
    4458  VmaVector< VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*> > m_Blocks;
    4459  /* There can be at most one memory block that is completely empty - a
    4460  hysteresis to avoid the pessimistic case of alternating creation and destruction
    4461  of a VkDeviceMemory. */
    4462  bool m_HasEmptyBlock;
    4463  VmaDefragmentator* m_pDefragmentator;
    4464  uint32_t m_NextBlockId;
    4465 
    4466  VkDeviceSize CalcMaxBlockSize() const;
    4467 
    4468  // Finds and removes given block from vector.
    4469  void Remove(VmaDeviceMemoryBlock* pBlock);
    4470 
    4471  // Performs single step in sorting m_Blocks. They may not be fully sorted
    4472  // after this call.
    4473  void IncrementallySortBlocks();
    4474 
    4475  VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
    4476 };
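/*
Allocation flow sketch (inferred from the interface above, not a normative
description): Allocate() tries existing m_Blocks first; if none can satisfy
the request and the block count is below m_MaxBlockCount, CreateBlock()
allocates a new VkDeviceMemory of up to m_PreferredBlockSize and the request
is retried there. Because IncrementallySortBlocks() keeps the vector roughly
ordered by sumFreeSize ascending, fuller blocks are tried first, which helps
an empty block stay empty long enough to be released.
*/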
    4477 
    4478 struct VmaPool_T
    4479 {
    4480  VMA_CLASS_NO_COPY(VmaPool_T)
    4481 public:
    4482  VmaBlockVector m_BlockVector;
    4483 
    4484  VmaPool_T(
    4485  VmaAllocator hAllocator,
    4486  const VmaPoolCreateInfo& createInfo);
    4487  ~VmaPool_T();
    4488 
    4489  VmaBlockVector& GetBlockVector() { return m_BlockVector; }
    4490  uint32_t GetId() const { return m_Id; }
    4491  void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }
    4492 
    4493 #if VMA_STATS_STRING_ENABLED
    4494  //void PrintDetailedMap(class VmaStringBuilder& sb);
    4495 #endif
    4496 
    4497 private:
    4498  uint32_t m_Id;
    4499 };
    4500 
    4501 class VmaDefragmentator
    4502 {
    4503  VMA_CLASS_NO_COPY(VmaDefragmentator)
    4504 private:
    4505  const VmaAllocator m_hAllocator;
    4506  VmaBlockVector* const m_pBlockVector;
    4507  uint32_t m_CurrentFrameIndex;
    4508  VkDeviceSize m_BytesMoved;
    4509  uint32_t m_AllocationsMoved;
    4510 
    4511  struct AllocationInfo
    4512  {
    4513  VmaAllocation m_hAllocation;
    4514  VkBool32* m_pChanged;
    4515 
    4516  AllocationInfo() :
    4517  m_hAllocation(VK_NULL_HANDLE),
    4518  m_pChanged(VMA_NULL)
    4519  {
    4520  }
    4521  };
    4522 
    4523  struct AllocationInfoSizeGreater
    4524  {
    4525  bool operator()(const AllocationInfo& lhs, const AllocationInfo& rhs) const
    4526  {
    4527  return lhs.m_hAllocation->GetSize() > rhs.m_hAllocation->GetSize();
    4528  }
    4529  };
    4530 
    4531  // Used between AddAllocation and Defragment.
    4532  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4533 
    4534  struct BlockInfo
    4535  {
    4536  VmaDeviceMemoryBlock* m_pBlock;
    4537  bool m_HasNonMovableAllocations;
    4538  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4539 
    4540  BlockInfo(const VkAllocationCallbacks* pAllocationCallbacks) :
    4541  m_pBlock(VMA_NULL),
    4542  m_HasNonMovableAllocations(true),
    4543  m_Allocations(pAllocationCallbacks),
    4544  m_pMappedDataForDefragmentation(VMA_NULL)
    4545  {
    4546  }
    4547 
    4548  void CalcHasNonMovableAllocations()
    4549  {
    4550  const size_t blockAllocCount = m_pBlock->m_Metadata.GetAllocationCount();
    4551  const size_t defragmentAllocCount = m_Allocations.size();
    4552  m_HasNonMovableAllocations = blockAllocCount != defragmentAllocCount;
    4553  }
    4554 
    4555  void SortAllocationsBySizeDescecnding()
    4556  {
    4557  VMA_SORT(m_Allocations.begin(), m_Allocations.end(), AllocationInfoSizeGreater());
    4558  }
    4559 
    4560  VkResult EnsureMapping(VmaAllocator hAllocator, void** ppMappedData);
    4561  void Unmap(VmaAllocator hAllocator);
    4562 
    4563  private:
    4564  // Not null if mapped for defragmentation only, not originally mapped.
    4565  void* m_pMappedDataForDefragmentation;
    4566  };
    4567 
    4568  struct BlockPointerLess
    4569  {
    4570  bool operator()(const BlockInfo* pLhsBlockInfo, const VmaDeviceMemoryBlock* pRhsBlock) const
    4571  {
    4572  return pLhsBlockInfo->m_pBlock < pRhsBlock;
    4573  }
    4574  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4575  {
    4576  return pLhsBlockInfo->m_pBlock < pRhsBlockInfo->m_pBlock;
    4577  }
    4578  };
    4579 
    4580  // 1. Blocks with some non-movable allocations go first.
    4581  // 2. Blocks with smaller sumFreeSize go first.
    4582  struct BlockInfoCompareMoveDestination
    4583  {
    4584  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4585  {
    4586  if(pLhsBlockInfo->m_HasNonMovableAllocations && !pRhsBlockInfo->m_HasNonMovableAllocations)
    4587  {
    4588  return true;
    4589  }
    4590  if(!pLhsBlockInfo->m_HasNonMovableAllocations && pRhsBlockInfo->m_HasNonMovableAllocations)
    4591  {
    4592  return false;
    4593  }
    4594  if(pLhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize() < pRhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize())
    4595  {
    4596  return true;
    4597  }
    4598  return false;
    4599  }
    4600  };
    4601 
    4602  typedef VmaVector< BlockInfo*, VmaStlAllocator<BlockInfo*> > BlockInfoVector;
    4603  BlockInfoVector m_Blocks;
    4604 
    4605  VkResult DefragmentRound(
    4606  VkDeviceSize maxBytesToMove,
    4607  uint32_t maxAllocationsToMove);
    4608 
    4609  static bool MoveMakesSense(
    4610  size_t dstBlockIndex, VkDeviceSize dstOffset,
    4611  size_t srcBlockIndex, VkDeviceSize srcOffset);
    4612 
    4613 public:
    4614  VmaDefragmentator(
    4615  VmaAllocator hAllocator,
    4616  VmaBlockVector* pBlockVector,
    4617  uint32_t currentFrameIndex);
    4618 
    4619  ~VmaDefragmentator();
    4620 
    4621  VkDeviceSize GetBytesMoved() const { return m_BytesMoved; }
    4622  uint32_t GetAllocationsMoved() const { return m_AllocationsMoved; }
    4623 
    4624  void AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged);
    4625 
    4626  VkResult Defragment(
    4627  VkDeviceSize maxBytesToMove,
    4628  uint32_t maxAllocationsToMove);
    4629 };
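/*
Typical driving sequence (a sketch based on the public interface above;
variable names are hypothetical):

    VmaDefragmentator* pDefrag = blockVector.EnsureDefragmentator(hAllocator, frameIndex);
    VkBool32 changed = VK_FALSE;
    pDefrag->AddAllocation(hAlloc, &changed);  // register movable candidates
    pDefrag->Defragment(maxBytesToMove, maxAllocationsToMove);
    VkDeviceSize moved = pDefrag->GetBytesMoved();
    (void)moved;
    blockVector.DestroyDefragmentator();
*/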
    4630 
    4631 // Main allocator object.
    4632 struct VmaAllocator_T
    4633 {
    4634  VMA_CLASS_NO_COPY(VmaAllocator_T)
    4635 public:
    4636  bool m_UseMutex;
    4637  bool m_UseKhrDedicatedAllocation;
    4638  VkDevice m_hDevice;
    4639  bool m_AllocationCallbacksSpecified;
    4640  VkAllocationCallbacks m_AllocationCallbacks;
    4641  VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
    4642 
    4643  // Number of bytes free out of limit, or VK_WHOLE_SIZE if there is no limit for that heap.
    4644  VkDeviceSize m_HeapSizeLimit[VK_MAX_MEMORY_HEAPS];
    4645  VMA_MUTEX m_HeapSizeLimitMutex;
    4646 
    4647  VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
    4648  VkPhysicalDeviceMemoryProperties m_MemProps;
    4649 
    4650  // Default pools.
    4651  VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
    4652 
    4653  // Each vector is sorted by memory (handle value).
    4654  typedef VmaVector< VmaAllocation, VmaStlAllocator<VmaAllocation> > AllocationVectorType;
    4655  AllocationVectorType* m_pDedicatedAllocations[VK_MAX_MEMORY_TYPES];
    4656  VMA_MUTEX m_DedicatedAllocationsMutex[VK_MAX_MEMORY_TYPES];
    4657 
    4658  VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
    4659  ~VmaAllocator_T();
    4660 
    4661  const VkAllocationCallbacks* GetAllocationCallbacks() const
    4662  {
    4663  return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : 0;
    4664  }
    4665  const VmaVulkanFunctions& GetVulkanFunctions() const
    4666  {
    4667  return m_VulkanFunctions;
    4668  }
    4669 
    4670  VkDeviceSize GetBufferImageGranularity() const
    4671  {
    4672  return VMA_MAX(
    4673  static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
    4674  m_PhysicalDeviceProperties.limits.bufferImageGranularity);
    4675  }
    4676 
    4677  uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
    4678  uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
    4679 
    4680  uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
    4681  {
    4682  VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
    4683  return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
    4684  }
    4685  // True when specific memory type is HOST_VISIBLE but not HOST_COHERENT.
    4686  bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
    4687  {
    4688  return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
    4689  VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    4690  }
    4691  // Minimum alignment for all allocations in specific memory type.
    4692  VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
    4693  {
    4694  return IsMemoryTypeNonCoherent(memTypeIndex) ?
    4695  VMA_MAX((VkDeviceSize)VMA_DEBUG_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
    4696  (VkDeviceSize)VMA_DEBUG_ALIGNMENT;
    4697  }
    4698 
    4699  bool IsIntegratedGpu() const
    4700  {
    4701  return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
    4702  }
    4703 
    4704  void GetBufferMemoryRequirements(
    4705  VkBuffer hBuffer,
    4706  VkMemoryRequirements& memReq,
    4707  bool& requiresDedicatedAllocation,
    4708  bool& prefersDedicatedAllocation) const;
    4709  void GetImageMemoryRequirements(
    4710  VkImage hImage,
    4711  VkMemoryRequirements& memReq,
    4712  bool& requiresDedicatedAllocation,
    4713  bool& prefersDedicatedAllocation) const;
    4714 
    4715  // Main allocation function.
    4716  VkResult AllocateMemory(
    4717  const VkMemoryRequirements& vkMemReq,
    4718  bool requiresDedicatedAllocation,
    4719  bool prefersDedicatedAllocation,
    4720  VkBuffer dedicatedBuffer,
    4721  VkImage dedicatedImage,
    4722  const VmaAllocationCreateInfo& createInfo,
    4723  VmaSuballocationType suballocType,
    4724  VmaAllocation* pAllocation);
    4725 
    4726  // Main deallocation function.
    4727  void FreeMemory(const VmaAllocation allocation);
    4728 
    4729  void CalculateStats(VmaStats* pStats);
    4730 
    4731 #if VMA_STATS_STRING_ENABLED
    4732  void PrintDetailedMap(class VmaJsonWriter& json);
    4733 #endif
    4734 
    4735  VkResult Defragment(
    4736  VmaAllocation* pAllocations,
    4737  size_t allocationCount,
    4738  VkBool32* pAllocationsChanged,
    4739  const VmaDefragmentationInfo* pDefragmentationInfo,
    4740  VmaDefragmentationStats* pDefragmentationStats);
    4741 
    4742  void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
    4743  bool TouchAllocation(VmaAllocation hAllocation);
    4744 
    4745  VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
    4746  void DestroyPool(VmaPool pool);
    4747  void GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats);
    4748 
    4749  void SetCurrentFrameIndex(uint32_t frameIndex);
    4750 
    4751  void MakePoolAllocationsLost(
    4752  VmaPool hPool,
    4753  size_t* pLostAllocationCount);
    4754  VkResult CheckPoolCorruption(VmaPool hPool);
    4755  VkResult CheckCorruption(uint32_t memoryTypeBits);
    4756 
    4757  void CreateLostAllocation(VmaAllocation* pAllocation);
    4758 
    4759  VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
    4760  void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
    4761 
    4762  VkResult Map(VmaAllocation hAllocation, void** ppData);
    4763  void Unmap(VmaAllocation hAllocation);
    4764 
    4765  VkResult BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer);
    4766  VkResult BindImageMemory(VmaAllocation hAllocation, VkImage hImage);
    4767 
    4768  void FlushOrInvalidateAllocation(
    4769  VmaAllocation hAllocation,
    4770  VkDeviceSize offset, VkDeviceSize size,
    4771  VMA_CACHE_OPERATION op);
    4772 
    4773 private:
    4774  VkDeviceSize m_PreferredLargeHeapBlockSize;
    4775 
    4776  VkPhysicalDevice m_PhysicalDevice;
    4777  VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
    4778 
    4779  VMA_MUTEX m_PoolsMutex;
    4780  // Protected by m_PoolsMutex. Sorted by pointer value.
    4781  VmaVector<VmaPool, VmaStlAllocator<VmaPool> > m_Pools;
    4782  uint32_t m_NextPoolId;
    4783 
    4784  VmaVulkanFunctions m_VulkanFunctions;
    4785 
    4786  void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
    4787 
    4788  VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
    4789 
    4790  VkResult AllocateMemoryOfType(
    4791  VkDeviceSize size,
    4792  VkDeviceSize alignment,
    4793  bool dedicatedAllocation,
    4794  VkBuffer dedicatedBuffer,
    4795  VkImage dedicatedImage,
    4796  const VmaAllocationCreateInfo& createInfo,
    4797  uint32_t memTypeIndex,
    4798  VmaSuballocationType suballocType,
    4799  VmaAllocation* pAllocation);
    4800 
    4801  // Allocates and registers new VkDeviceMemory specifically for single allocation.
    4802  VkResult AllocateDedicatedMemory(
    4803  VkDeviceSize size,
    4804  VmaSuballocationType suballocType,
    4805  uint32_t memTypeIndex,
    4806  bool map,
    4807  bool isUserDataString,
    4808  void* pUserData,
    4809  VkBuffer dedicatedBuffer,
    4810  VkImage dedicatedImage,
    4811  VmaAllocation* pAllocation);
    4812 
    4813  // Frees memory of given allocation created as Dedicated Memory and unregisters it.
    4814  void FreeDedicatedMemory(VmaAllocation allocation);
    4815 };
    4816 
    4817 ////////////////////////////////////////////////////////////////////////////////
    4818 // Memory allocation #2 after VmaAllocator_T definition
    4819 
    4820 static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
    4821 {
    4822  return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
    4823 }
    4824 
    4825 static void VmaFree(VmaAllocator hAllocator, void* ptr)
    4826 {
    4827  VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
    4828 }
    4829 
    4830 template<typename T>
    4831 static T* VmaAllocate(VmaAllocator hAllocator)
    4832 {
    4833  return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
    4834 }
    4835 
    4836 template<typename T>
    4837 static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
    4838 {
    4839  return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
    4840 }
    4841 
    4842 template<typename T>
    4843 static void vma_delete(VmaAllocator hAllocator, T* ptr)
    4844 {
    4845  if(ptr != VMA_NULL)
    4846  {
    4847  ptr->~T();
    4848  VmaFree(hAllocator, ptr);
    4849  }
    4850 }
    4851 
    4852 template<typename T>
    4853 static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
    4854 {
    4855  if(ptr != VMA_NULL)
    4856  {
    4857  for(size_t i = count; i--; )
    4858  ptr[i].~T();
    4859  VmaFree(hAllocator, ptr);
    4860  }
    4861 }
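/*
These helpers are the VkAllocationCallbacks-aware counterparts of new/delete
used throughout the library. For example, SetUserData() further below stores
a copy of a user string with vma_new_array and releases it with
vma_delete_array (len is hypothetical here):

    char* const copy = vma_new_array(hAllocator, char, len + 1); // allocate
    // ... fill and use copy ...
    vma_delete_array(hAllocator, copy, len + 1); // destroy elements + free
*/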
    4862 
    4863 ////////////////////////////////////////////////////////////////////////////////
    4864 // VmaStringBuilder
    4865 
    4866 #if VMA_STATS_STRING_ENABLED
    4867 
    4868 class VmaStringBuilder
    4869 {
    4870 public:
    4871  VmaStringBuilder(VmaAllocator alloc) : m_Data(VmaStlAllocator<char>(alloc->GetAllocationCallbacks())) { }
    4872  size_t GetLength() const { return m_Data.size(); }
    4873  const char* GetData() const { return m_Data.data(); }
    4874 
    4875  void Add(char ch) { m_Data.push_back(ch); }
    4876  void Add(const char* pStr);
    4877  void AddNewLine() { Add('\n'); }
    4878  void AddNumber(uint32_t num);
    4879  void AddNumber(uint64_t num);
    4880  void AddPointer(const void* ptr);
    4881 
    4882 private:
    4883  VmaVector< char, VmaStlAllocator<char> > m_Data;
    4884 };
    4885 
    4886 void VmaStringBuilder::Add(const char* pStr)
    4887 {
    4888  const size_t strLen = strlen(pStr);
    4889  if(strLen > 0)
    4890  {
    4891  const size_t oldCount = m_Data.size();
    4892  m_Data.resize(oldCount + strLen);
    4893  memcpy(m_Data.data() + oldCount, pStr, strLen);
    4894  }
    4895 }
    4896 
    4897 void VmaStringBuilder::AddNumber(uint32_t num)
    4898 {
    4899  char buf[11];
    4900  VmaUint32ToStr(buf, sizeof(buf), num);
    4901  Add(buf);
    4902 }
    4903 
    4904 void VmaStringBuilder::AddNumber(uint64_t num)
    4905 {
    4906  char buf[21];
    4907  VmaUint64ToStr(buf, sizeof(buf), num);
    4908  Add(buf);
    4909 }
    4910 
    4911 void VmaStringBuilder::AddPointer(const void* ptr)
    4912 {
    4913  char buf[21];
    4914  VmaPtrToStr(buf, sizeof(buf), ptr);
    4915  Add(buf);
    4916 }
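// A note on the buffer sizes above: UINT32_MAX has 10 decimal digits, so
// buf[11] leaves room for the terminating '\0'; UINT64_MAX has 20 digits,
// hence buf[21], which is also large enough for the pointer text produced by
// VmaPtrToStr on 64-bit platforms ("0x" + 16 hex digits + '\0' = 19 chars).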
    4917 
    4918 #endif // #if VMA_STATS_STRING_ENABLED
    4919 
    4920 ////////////////////////////////////////////////////////////////////////////////
    4921 // VmaJsonWriter
    4922 
    4923 #if VMA_STATS_STRING_ENABLED
    4924 
    4925 class VmaJsonWriter
    4926 {
    4927  VMA_CLASS_NO_COPY(VmaJsonWriter)
    4928 public:
    4929  VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
    4930  ~VmaJsonWriter();
    4931 
    4932  void BeginObject(bool singleLine = false);
    4933  void EndObject();
    4934 
    4935  void BeginArray(bool singleLine = false);
    4936  void EndArray();
    4937 
    4938  void WriteString(const char* pStr);
    4939  void BeginString(const char* pStr = VMA_NULL);
    4940  void ContinueString(const char* pStr);
    4941  void ContinueString(uint32_t n);
    4942  void ContinueString(uint64_t n);
    4943  void ContinueString_Pointer(const void* ptr);
    4944  void EndString(const char* pStr = VMA_NULL);
    4945 
    4946  void WriteNumber(uint32_t n);
    4947  void WriteNumber(uint64_t n);
    4948  void WriteBool(bool b);
    4949  void WriteNull();
    4950 
    4951 private:
    4952  static const char* const INDENT;
    4953 
    4954  enum COLLECTION_TYPE
    4955  {
    4956  COLLECTION_TYPE_OBJECT,
    4957  COLLECTION_TYPE_ARRAY,
    4958  };
    4959  struct StackItem
    4960  {
    4961  COLLECTION_TYPE type;
    4962  uint32_t valueCount;
    4963  bool singleLineMode;
    4964  };
    4965 
    4966  VmaStringBuilder& m_SB;
    4967  VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
    4968  bool m_InsideString;
    4969 
    4970  void BeginValue(bool isString);
    4971  void WriteIndent(bool oneLess = false);
    4972 };
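/*
Usage sketch (hypothetical, using only the interface above) - producing
{ "Name": "example", "Count": 1 }:

    VmaStringBuilder sb(hAllocator);
    {
        VmaJsonWriter json(hAllocator->GetAllocationCallbacks(), sb);
        json.BeginObject();
        json.WriteString("Name");     // key
        json.WriteString("example");  // value
        json.WriteString("Count");    // key
        json.WriteNumber(1u);         // value
        json.EndObject();
    } // Destructor asserts the object/array stack is balanced.
    // sb.GetData(), sb.GetLength() now hold the JSON text.
*/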
    4973 
    4974 const char* const VmaJsonWriter::INDENT = " ";
    4975 
    4976 VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb) :
    4977  m_SB(sb),
    4978  m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
    4979  m_InsideString(false)
    4980 {
    4981 }
    4982 
    4983 VmaJsonWriter::~VmaJsonWriter()
    4984 {
    4985  VMA_ASSERT(!m_InsideString);
    4986  VMA_ASSERT(m_Stack.empty());
    4987 }
    4988 
    4989 void VmaJsonWriter::BeginObject(bool singleLine)
    4990 {
    4991  VMA_ASSERT(!m_InsideString);
    4992 
    4993  BeginValue(false);
    4994  m_SB.Add('{');
    4995 
    4996  StackItem item;
    4997  item.type = COLLECTION_TYPE_OBJECT;
    4998  item.valueCount = 0;
    4999  item.singleLineMode = singleLine;
    5000  m_Stack.push_back(item);
    5001 }
    5002 
    5003 void VmaJsonWriter::EndObject()
    5004 {
    5005  VMA_ASSERT(!m_InsideString);
    5006 
    5007  WriteIndent(true);
    5008  m_SB.Add('}');
    5009 
    5010  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
    5011  m_Stack.pop_back();
    5012 }
    5013 
    5014 void VmaJsonWriter::BeginArray(bool singleLine)
    5015 {
    5016  VMA_ASSERT(!m_InsideString);
    5017 
    5018  BeginValue(false);
    5019  m_SB.Add('[');
    5020 
    5021  StackItem item;
    5022  item.type = COLLECTION_TYPE_ARRAY;
    5023  item.valueCount = 0;
    5024  item.singleLineMode = singleLine;
    5025  m_Stack.push_back(item);
    5026 }
    5027 
    5028 void VmaJsonWriter::EndArray()
    5029 {
    5030  VMA_ASSERT(!m_InsideString);
    5031 
    5032  WriteIndent(true);
    5033  m_SB.Add(']');
    5034 
    5035  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
    5036  m_Stack.pop_back();
    5037 }
    5038 
    5039 void VmaJsonWriter::WriteString(const char* pStr)
    5040 {
    5041  BeginString(pStr);
    5042  EndString();
    5043 }
    5044 
    5045 void VmaJsonWriter::BeginString(const char* pStr)
    5046 {
    5047  VMA_ASSERT(!m_InsideString);
    5048 
    5049  BeginValue(true);
    5050  m_SB.Add('"');
    5051  m_InsideString = true;
    5052  if(pStr != VMA_NULL && pStr[0] != '\0')
    5053  {
    5054  ContinueString(pStr);
    5055  }
    5056 }
    5057 
    5058 void VmaJsonWriter::ContinueString(const char* pStr)
    5059 {
    5060  VMA_ASSERT(m_InsideString);
    5061 
    5062  const size_t strLen = strlen(pStr);
    5063  for(size_t i = 0; i < strLen; ++i)
    5064  {
    5065  char ch = pStr[i];
    5066  if(ch == '\\')
    5067  {
    5068  m_SB.Add("\\\\");
    5069  }
    5070  else if(ch == '"')
    5071  {
    5072  m_SB.Add("\\\"");
    5073  }
    5074  else if(ch >= 32)
    5075  {
    5076  m_SB.Add(ch);
    5077  }
    5078  else switch(ch)
    5079  {
    5080  case '\b':
    5081  m_SB.Add("\\b");
    5082  break;
    5083  case '\f':
    5084  m_SB.Add("\\f");
    5085  break;
    5086  case '\n':
    5087  m_SB.Add("\\n");
    5088  break;
    5089  case '\r':
    5090  m_SB.Add("\\r");
    5091  break;
    5092  case '\t':
    5093  m_SB.Add("\\t");
    5094  break;
    5095  default:
    5096  VMA_ASSERT(0 && "Character not currently supported.");
    5097  break;
    5098  }
    5099  }
    5100 }
    5101 
    5102 void VmaJsonWriter::ContinueString(uint32_t n)
    5103 {
    5104  VMA_ASSERT(m_InsideString);
    5105  m_SB.AddNumber(n);
    5106 }
    5107 
    5108 void VmaJsonWriter::ContinueString(uint64_t n)
    5109 {
    5110  VMA_ASSERT(m_InsideString);
    5111  m_SB.AddNumber(n);
    5112 }
    5113 
    5114 void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
    5115 {
    5116  VMA_ASSERT(m_InsideString);
    5117  m_SB.AddPointer(ptr);
    5118 }
    5119 
    5120 void VmaJsonWriter::EndString(const char* pStr)
    5121 {
    5122  VMA_ASSERT(m_InsideString);
    5123  if(pStr != VMA_NULL && pStr[0] != '\0')
    5124  {
    5125  ContinueString(pStr);
    5126  }
    5127  m_SB.Add('"');
    5128  m_InsideString = false;
    5129 }
    5130 
    5131 void VmaJsonWriter::WriteNumber(uint32_t n)
    5132 {
    5133  VMA_ASSERT(!m_InsideString);
    5134  BeginValue(false);
    5135  m_SB.AddNumber(n);
    5136 }
    5137 
    5138 void VmaJsonWriter::WriteNumber(uint64_t n)
    5139 {
    5140  VMA_ASSERT(!m_InsideString);
    5141  BeginValue(false);
    5142  m_SB.AddNumber(n);
    5143 }
    5144 
    5145 void VmaJsonWriter::WriteBool(bool b)
    5146 {
    5147  VMA_ASSERT(!m_InsideString);
    5148  BeginValue(false);
    5149  m_SB.Add(b ? "true" : "false");
    5150 }
    5151 
    5152 void VmaJsonWriter::WriteNull()
    5153 {
    5154  VMA_ASSERT(!m_InsideString);
    5155  BeginValue(false);
    5156  m_SB.Add("null");
    5157 }
    5158 
    5159 void VmaJsonWriter::BeginValue(bool isString)
    5160 {
    5161  if(!m_Stack.empty())
    5162  {
    5163  StackItem& currItem = m_Stack.back();
    5164  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5165  currItem.valueCount % 2 == 0)
    5166  {
    5167  VMA_ASSERT(isString);
    5168  }
    5169 
    5170  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5171  currItem.valueCount % 2 != 0)
    5172  {
    5173  m_SB.Add(": ");
    5174  }
    5175  else if(currItem.valueCount > 0)
    5176  {
    5177  m_SB.Add(", ");
    5178  WriteIndent();
    5179  }
    5180  else
    5181  {
    5182  WriteIndent();
    5183  }
    5184  ++currItem.valueCount;
    5185  }
    5186 }
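// The parity check above is what enforces JSON well-formedness: inside an
// object, an even valueCount means the next value must be a key (hence the
// assert on isString), while an odd valueCount means it is the value half of
// a pair, so ": " is emitted instead of ", ". The four WriteXxx calls in the
// sketch after the class declaration get their separators inserted here.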
    5187 
    5188 void VmaJsonWriter::WriteIndent(bool oneLess)
    5189 {
    5190  if(!m_Stack.empty() && !m_Stack.back().singleLineMode)
    5191  {
    5192  m_SB.AddNewLine();
    5193 
    5194  size_t count = m_Stack.size();
    5195  if(count > 0 && oneLess)
    5196  {
    5197  --count;
    5198  }
    5199  for(size_t i = 0; i < count; ++i)
    5200  {
    5201  m_SB.Add(INDENT);
    5202  }
    5203  }
    5204 }
    5205 
    5206 #endif // #if VMA_STATS_STRING_ENABLED
    5207 
    5208 ////////////////////////////////////////////////////////////////////////////////
    5209 
    5210 void VmaAllocation_T::SetUserData(VmaAllocator hAllocator, void* pUserData)
    5211 {
    5212  if(IsUserDataString())
    5213  {
    5214  VMA_ASSERT(pUserData == VMA_NULL || pUserData != m_pUserData);
    5215 
    5216  FreeUserDataString(hAllocator);
    5217 
    5218  if(pUserData != VMA_NULL)
    5219  {
    5220  const char* const newStrSrc = (char*)pUserData;
    5221  const size_t newStrLen = strlen(newStrSrc);
    5222  char* const newStrDst = vma_new_array(hAllocator, char, newStrLen + 1);
    5223  memcpy(newStrDst, newStrSrc, newStrLen + 1);
    5224  m_pUserData = newStrDst;
    5225  }
    5226  }
    5227  else
    5228  {
    5229  m_pUserData = pUserData;
    5230  }
    5231 }
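// A note on the two modes above (a sketch of caller-visible behavior): for an
// allocation created with VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT,
// SetUserData() duplicates the incoming string, so the caller's buffer may be
// freed immediately afterwards; otherwise the raw pointer is stored as-is and
// its lifetime remains the caller's responsibility.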
    5232 
    5233 void VmaAllocation_T::ChangeBlockAllocation(
    5234  VmaAllocator hAllocator,
    5235  VmaDeviceMemoryBlock* block,
    5236  VkDeviceSize offset)
    5237 {
    5238  VMA_ASSERT(block != VMA_NULL);
    5239  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    5240 
    5241  // Move mapping reference counter from old block to new block.
    5242  if(block != m_BlockAllocation.m_Block)
    5243  {
    5244  uint32_t mapRefCount = m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP;
    5245  if(IsPersistentMap())
    5246  ++mapRefCount;
    5247  m_BlockAllocation.m_Block->Unmap(hAllocator, mapRefCount);
    5248  block->Map(hAllocator, mapRefCount, VMA_NULL);
    5249  }
    5250 
    5251  m_BlockAllocation.m_Block = block;
    5252  m_BlockAllocation.m_Offset = offset;
    5253 }
    5254 
    5255 VkDeviceSize VmaAllocation_T::GetOffset() const
    5256 {
    5257  switch(m_Type)
    5258  {
    5259  case ALLOCATION_TYPE_BLOCK:
    5260  return m_BlockAllocation.m_Offset;
    5261  case ALLOCATION_TYPE_DEDICATED:
    5262  return 0;
    5263  default:
    5264  VMA_ASSERT(0);
    5265  return 0;
    5266  }
    5267 }
    5268 
    5269 VkDeviceMemory VmaAllocation_T::GetMemory() const
    5270 {
    5271  switch(m_Type)
    5272  {
    5273  case ALLOCATION_TYPE_BLOCK:
    5274  return m_BlockAllocation.m_Block->GetDeviceMemory();
    5275  case ALLOCATION_TYPE_DEDICATED:
    5276  return m_DedicatedAllocation.m_hMemory;
    5277  default:
    5278  VMA_ASSERT(0);
    5279  return VK_NULL_HANDLE;
    5280  }
    5281 }
    5282 
    5283 uint32_t VmaAllocation_T::GetMemoryTypeIndex() const
    5284 {
    5285  switch(m_Type)
    5286  {
    5287  case ALLOCATION_TYPE_BLOCK:
    5288  return m_BlockAllocation.m_Block->GetMemoryTypeIndex();
    5289  case ALLOCATION_TYPE_DEDICATED:
    5290  return m_DedicatedAllocation.m_MemoryTypeIndex;
    5291  default:
    5292  VMA_ASSERT(0);
    5293  return UINT32_MAX;
    5294  }
    5295 }
    5296 
    5297 void* VmaAllocation_T::GetMappedData() const
    5298 {
    5299  switch(m_Type)
    5300  {
    5301  case ALLOCATION_TYPE_BLOCK:
    5302  if(m_MapCount != 0)
    5303  {
    5304  void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
    5305  VMA_ASSERT(pBlockData != VMA_NULL);
    5306  return (char*)pBlockData + m_BlockAllocation.m_Offset;
    5307  }
    5308  else
    5309  {
    5310  return VMA_NULL;
    5311  }
    5312  break;
    5313  case ALLOCATION_TYPE_DEDICATED:
    5314  VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0));
    5315  return m_DedicatedAllocation.m_pMappedData;
    5316  default:
    5317  VMA_ASSERT(0);
    5318  return VMA_NULL;
    5319  }
    5320 }
    5321 
    5322 bool VmaAllocation_T::CanBecomeLost() const
    5323 {
    5324  switch(m_Type)
    5325  {
    5326  case ALLOCATION_TYPE_BLOCK:
    5327  return m_BlockAllocation.m_CanBecomeLost;
    5328  case ALLOCATION_TYPE_DEDICATED:
    5329  return false;
    5330  default:
    5331  VMA_ASSERT(0);
    5332  return false;
    5333  }
    5334 }
    5335 
    5336 VmaPool VmaAllocation_T::GetPool() const
    5337 {
    5338  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    5339  return m_BlockAllocation.m_hPool;
    5340 }
    5341 
    5342 bool VmaAllocation_T::MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
    5343 {
    5344  VMA_ASSERT(CanBecomeLost());
    5345 
    5346  /*
    5347  Warning: This is a carefully designed algorithm.
    5348  Do not modify unless you really know what you're doing :)
    5349  */
    5350  uint32_t localLastUseFrameIndex = GetLastUseFrameIndex();
    5351  for(;;)
    5352  {
    5353  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    5354  {
    5355  VMA_ASSERT(0);
    5356  return false;
    5357  }
    5358  else if(localLastUseFrameIndex + frameInUseCount >= currentFrameIndex)
    5359  {
    5360  return false;
    5361  }
    5362  else // Last use time earlier than current time.
    5363  {
    5364  if(CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, VMA_FRAME_INDEX_LOST))
    5365  {
    5366  // Setting hAllocation.LastUseFrameIndex atomic to VMA_FRAME_INDEX_LOST is enough to mark it as LOST.
    5367  // Calling code just needs to unregister this allocation in owning VmaDeviceMemoryBlock.
    5368  return true;
    5369  }
    5370  }
    5371  }
    5372 }
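// A note on the loop above (explanatory only): between reading
// GetLastUseFrameIndex() and the compare-exchange, another thread may touch
// the allocation and move its frame index forward. compare_exchange_weak then
// fails, writes the fresh value back into localLastUseFrameIndex, and the
// loop re-evaluates whether the allocation is still old enough to be made
// lost - a standard lock-free CAS retry pattern.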
    5373 
    5374 #if VMA_STATS_STRING_ENABLED
    5375 
    5376 // Names correspond to values of enum VmaSuballocationType.
    5377 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] = {
    5378  "FREE",
    5379  "UNKNOWN",
    5380  "BUFFER",
    5381  "IMAGE_UNKNOWN",
    5382  "IMAGE_LINEAR",
    5383  "IMAGE_OPTIMAL",
    5384 };
    5385 
    5386 void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
    5387 {
    5388  json.WriteString("Type");
    5389  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);
    5390 
    5391  json.WriteString("Size");
    5392  json.WriteNumber(m_Size);
    5393 
    5394  if(m_pUserData != VMA_NULL)
    5395  {
    5396  json.WriteString("UserData");
    5397  if(IsUserDataString())
    5398  {
    5399  json.WriteString((const char*)m_pUserData);
    5400  }
    5401  else
    5402  {
    5403  json.BeginString();
    5404  json.ContinueString_Pointer(m_pUserData);
    5405  json.EndString();
    5406  }
    5407  }
    5408 
    5409  json.WriteString("CreationFrameIndex");
    5410  json.WriteNumber(m_CreationFrameIndex);
    5411 
    5412  json.WriteString("LastUseFrameIndex");
    5413  json.WriteNumber(GetLastUseFrameIndex());
    5414 
    5415  if(m_BufferImageUsage != 0)
    5416  {
    5417  json.WriteString("Usage");
    5418  json.WriteNumber(m_BufferImageUsage);
    5419  }
    5420 }
    5421 
    5422 #endif
    5423 
    5424 void VmaAllocation_T::FreeUserDataString(VmaAllocator hAllocator)
    5425 {
    5426  VMA_ASSERT(IsUserDataString());
    5427  if(m_pUserData != VMA_NULL)
    5428  {
    5429  char* const oldStr = (char*)m_pUserData;
    5430  const size_t oldStrLen = strlen(oldStr);
    5431  vma_delete_array(hAllocator, oldStr, oldStrLen + 1);
    5432  m_pUserData = VMA_NULL;
    5433  }
    5434 }
    5435 
    5436 void VmaAllocation_T::BlockAllocMap()
    5437 {
    5438  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5439 
    5440  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5441  {
    5442  ++m_MapCount;
    5443  }
    5444  else
    5445  {
    5446  VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
    5447  }
    5448 }
    5449 
    5450 void VmaAllocation_T::BlockAllocUnmap()
    5451 {
    5452  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5453 
    5454  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5455  {
    5456  --m_MapCount;
    5457  }
    5458  else
    5459  {
    5460  VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
    5461  }
    5462 }
    5463 
    5464 VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
    5465 {
    5466  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5467 
    5468  if(m_MapCount != 0)
    5469  {
    5470  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5471  {
    5472  VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
    5473  *ppData = m_DedicatedAllocation.m_pMappedData;
    5474  ++m_MapCount;
    5475  return VK_SUCCESS;
    5476  }
    5477  else
    5478  {
    5479  VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
    5480  return VK_ERROR_MEMORY_MAP_FAILED;
    5481  }
    5482  }
    5483  else
    5484  {
    5485  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
    5486  hAllocator->m_hDevice,
    5487  m_DedicatedAllocation.m_hMemory,
    5488  0, // offset
    5489  VK_WHOLE_SIZE,
    5490  0, // flags
    5491  ppData);
    5492  if(result == VK_SUCCESS)
    5493  {
    5494  m_DedicatedAllocation.m_pMappedData = *ppData;
    5495  m_MapCount = 1;
    5496  }
    5497  return result;
    5498  }
    5499 }
    5500 
    5501 void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
    5502 {
    5503  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5504 
    5505  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5506  {
    5507  --m_MapCount;
    5508  if(m_MapCount == 0)
    5509  {
    5510  m_DedicatedAllocation.m_pMappedData = VMA_NULL;
    5511  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
    5512  hAllocator->m_hDevice,
    5513  m_DedicatedAllocation.m_hMemory);
    5514  }
    5515  }
    5516  else
    5517  {
    5518  VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
    5519  }
    5520 }
    5521 
    5522 #if VMA_STATS_STRING_ENABLED
    5523 
    5524 static void VmaPrintStatInfo(VmaJsonWriter& json, const VmaStatInfo& stat)
    5525 {
    5526  json.BeginObject();
    5527 
    5528  json.WriteString("Blocks");
    5529  json.WriteNumber(stat.blockCount);
    5530 
    5531  json.WriteString("Allocations");
    5532  json.WriteNumber(stat.allocationCount);
    5533 
    5534  json.WriteString("UnusedRanges");
    5535  json.WriteNumber(stat.unusedRangeCount);
    5536 
    5537  json.WriteString("UsedBytes");
    5538  json.WriteNumber(stat.usedBytes);
    5539 
    5540  json.WriteString("UnusedBytes");
    5541  json.WriteNumber(stat.unusedBytes);
    5542 
    5543  if(stat.allocationCount > 1)
    5544  {
    5545  json.WriteString("AllocationSize");
    5546  json.BeginObject(true);
    5547  json.WriteString("Min");
    5548  json.WriteNumber(stat.allocationSizeMin);
    5549  json.WriteString("Avg");
    5550  json.WriteNumber(stat.allocationSizeAvg);
    5551  json.WriteString("Max");
    5552  json.WriteNumber(stat.allocationSizeMax);
    5553  json.EndObject();
    5554  }
    5555 
    5556  if(stat.unusedRangeCount > 1)
    5557  {
    5558  json.WriteString("UnusedRangeSize");
    5559  json.BeginObject(true);
    5560  json.WriteString("Min");
    5561  json.WriteNumber(stat.unusedRangeSizeMin);
    5562  json.WriteString("Avg");
    5563  json.WriteNumber(stat.unusedRangeSizeAvg);
    5564  json.WriteString("Max");
    5565  json.WriteNumber(stat.unusedRangeSizeMax);
    5566  json.EndObject();
    5567  }
    5568 
    5569  json.EndObject();
    5570 }
    5571 
    5572 #endif // #if VMA_STATS_STRING_ENABLED
    5573 
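// The two operator() overloads below let the same comparator serve both
// iterator-vs-iterator ordering and iterator-vs-VkDeviceSize lookups, so
// m_FreeSuballocationsBySize can be binary-searched directly by a requested
// size (see VmaBinaryFindFirstNotLess) without building a dummy element.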
    5574 struct VmaSuballocationItemSizeLess
    5575 {
    5576  bool operator()(
    5577  const VmaSuballocationList::iterator lhs,
    5578  const VmaSuballocationList::iterator rhs) const
    5579  {
    5580  return lhs->size < rhs->size;
    5581  }
    5582  bool operator()(
    5583  const VmaSuballocationList::iterator lhs,
    5584  VkDeviceSize rhsSize) const
    5585  {
    5586  return lhs->size < rhsSize;
    5587  }
    5588 };
    5589 
    5590 ////////////////////////////////////////////////////////////////////////////////
    5591 // class VmaBlockMetadata
    5592 
    5593 VmaBlockMetadata::VmaBlockMetadata(VmaAllocator hAllocator) :
    5594  m_Size(0),
    5595  m_FreeCount(0),
    5596  m_SumFreeSize(0),
    5597  m_Suballocations(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    5598  m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(hAllocator->GetAllocationCallbacks()))
    5599 {
    5600 }
    5601 
    5602 VmaBlockMetadata::~VmaBlockMetadata()
    5603 {
    5604 }
    5605 
    5606 void VmaBlockMetadata::Init(VkDeviceSize size)
    5607 {
    5608  m_Size = size;
    5609  m_FreeCount = 1;
    5610  m_SumFreeSize = size;
    5611 
    5612  VmaSuballocation suballoc = {};
    5613  suballoc.offset = 0;
    5614  suballoc.size = size;
    5615  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    5616  suballoc.hAllocation = VK_NULL_HANDLE;
    5617 
    5618  m_Suballocations.push_back(suballoc);
    5619  VmaSuballocationList::iterator suballocItem = m_Suballocations.end();
    5620  --suballocItem;
    5621  m_FreeSuballocationsBySize.push_back(suballocItem);
    5622 }
    5623 
    5624 bool VmaBlockMetadata::Validate() const
    5625 {
    5626  if(m_Suballocations.empty())
    5627  {
    5628  return false;
    5629  }
    5630 
    5631  // Expected offset of the next suballocation as calculated from previous ones.
    5632  VkDeviceSize calculatedOffset = 0;
    5633  // Expected number of free suballocations as calculated from traversing their list.
    5634  uint32_t calculatedFreeCount = 0;
    5635  // Expected sum size of free suballocations as calculated from traversing their list.
    5636  VkDeviceSize calculatedSumFreeSize = 0;
    5637  // Expected number of free suballocations that should be registered in
    5638  // m_FreeSuballocationsBySize calculated from traversing their list.
    5639  size_t freeSuballocationsToRegister = 0;
    5640  // True if the previously visited suballocation was free.
    5641  bool prevFree = false;
    5642 
    5643  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5644  suballocItem != m_Suballocations.cend();
    5645  ++suballocItem)
    5646  {
    5647  const VmaSuballocation& subAlloc = *suballocItem;
    5648 
    5649  // Actual offset of this suballocation doesn't match expected one.
    5650  if(subAlloc.offset != calculatedOffset)
    5651  {
    5652  return false;
    5653  }
    5654 
    5655  const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
    5656  // Two adjacent free suballocations are invalid. They should be merged.
    5657  if(prevFree && currFree)
    5658  {
    5659  return false;
    5660  }
    5661 
    5662  if(currFree != (subAlloc.hAllocation == VK_NULL_HANDLE))
    5663  {
    5664  return false;
    5665  }
    5666 
    5667  if(currFree)
    5668  {
    5669  calculatedSumFreeSize += subAlloc.size;
    5670  ++calculatedFreeCount;
    5671  if(subAlloc.size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    5672  {
    5673  ++freeSuballocationsToRegister;
    5674  }
    5675 
    5676  // Margin required between allocations - every free range must be at least that large.
    5677  if(subAlloc.size < VMA_DEBUG_MARGIN)
    5678  {
    5679  return false;
    5680  }
    5681  }
    5682  else
    5683  {
    5684  if(subAlloc.hAllocation->GetOffset() != subAlloc.offset)
    5685  {
    5686  return false;
    5687  }
    5688  if(subAlloc.hAllocation->GetSize() != subAlloc.size)
    5689  {
    5690  return false;
    5691  }
    5692 
    5693  // Margin required between allocations - the previous suballocation must be free.
    5694  if(VMA_DEBUG_MARGIN > 0 && !prevFree)
    5695  {
    5696  return false;
    5697  }
    5698  }
    5699 
    5700  calculatedOffset += subAlloc.size;
    5701  prevFree = currFree;
    5702  }
    5703 
    5704  // Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
    5705  // match expected one.
    5706  if(m_FreeSuballocationsBySize.size() != freeSuballocationsToRegister)
    5707  {
    5708  return false;
    5709  }
    5710 
    5711  VkDeviceSize lastSize = 0;
    5712  for(size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
    5713  {
    5714  VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];
    5715 
    5716  // Only free suballocations can be registered in m_FreeSuballocationsBySize.
    5717  if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
    5718  {
    5719  return false;
    5720  }
    5721  // They must be sorted by size ascending.
    5722  if(suballocItem->size < lastSize)
    5723  {
    5724  return false;
    5725  }
    5726 
    5727  lastSize = suballocItem->size;
    5728  }
    5729 
    5730  // Check if totals match calculated values.
    5731  if(!ValidateFreeSuballocationList() ||
    5732  (calculatedOffset != m_Size) ||
    5733  (calculatedSumFreeSize != m_SumFreeSize) ||
    5734  (calculatedFreeCount != m_FreeCount))
    5735  {
    5736  return false;
    5737  }
    5738 
    5739  return true;
    5740 }
    5741 
    5742 VkDeviceSize VmaBlockMetadata::GetUnusedRangeSizeMax() const
    5743 {
    5744  if(!m_FreeSuballocationsBySize.empty())
    5745  {
    5746  return m_FreeSuballocationsBySize.back()->size;
    5747  }
    5748  else
    5749  {
    5750  return 0;
    5751  }
    5752 }
    5753 
    5754 bool VmaBlockMetadata::IsEmpty() const
    5755 {
    5756  return (m_Suballocations.size() == 1) && (m_FreeCount == 1);
    5757 }
    5758 
    5759 void VmaBlockMetadata::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
    5760 {
    5761  outInfo.blockCount = 1;
    5762 
    5763  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
    5764  outInfo.allocationCount = rangeCount - m_FreeCount;
    5765  outInfo.unusedRangeCount = m_FreeCount;
    5766 
    5767  outInfo.unusedBytes = m_SumFreeSize;
    5768  outInfo.usedBytes = m_Size - outInfo.unusedBytes;
    5769 
    5770  outInfo.allocationSizeMin = UINT64_MAX;
    5771  outInfo.allocationSizeMax = 0;
    5772  outInfo.unusedRangeSizeMin = UINT64_MAX;
    5773  outInfo.unusedRangeSizeMax = 0;
    5774 
    5775  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5776  suballocItem != m_Suballocations.cend();
    5777  ++suballocItem)
    5778  {
    5779  const VmaSuballocation& suballoc = *suballocItem;
    5780  if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
    5781  {
    5782  outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
    5783  outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);
    5784  }
    5785  else
    5786  {
    5787  outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, suballoc.size);
    5788  outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, suballoc.size);
    5789  }
    5790  }
    5791 }
    5792 
    5793 void VmaBlockMetadata::AddPoolStats(VmaPoolStats& inoutStats) const
    5794 {
    5795  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
    5796 
    5797  inoutStats.size += m_Size;
    5798  inoutStats.unusedSize += m_SumFreeSize;
    5799  inoutStats.allocationCount += rangeCount - m_FreeCount;
    5800  inoutStats.unusedRangeCount += m_FreeCount;
    5801  inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, GetUnusedRangeSizeMax());
    5802 }
    5803 
    5804 #if VMA_STATS_STRING_ENABLED
    5805 
    5806 void VmaBlockMetadata::PrintDetailedMap(class VmaJsonWriter& json) const
    5807 {
    5808  json.BeginObject();
    5809 
    5810  json.WriteString("TotalBytes");
    5811  json.WriteNumber(m_Size);
    5812 
    5813  json.WriteString("UnusedBytes");
    5814  json.WriteNumber(m_SumFreeSize);
    5815 
    5816  json.WriteString("Allocations");
    5817  json.WriteNumber((uint64_t)m_Suballocations.size() - m_FreeCount);
    5818 
    5819  json.WriteString("UnusedRanges");
    5820  json.WriteNumber(m_FreeCount);
    5821 
    5822  json.WriteString("Suballocations");
    5823  json.BeginArray();
    5824  size_t i = 0;
    5825  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5826  suballocItem != m_Suballocations.cend();
    5827  ++suballocItem, ++i)
    5828  {
    5829  json.BeginObject(true);
    5830 
    5831  json.WriteString("Offset");
    5832  json.WriteNumber(suballocItem->offset);
    5833 
    5834  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    5835  {
    5836  json.WriteString("Type");
    5837  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);
    5838 
    5839  json.WriteString("Size");
    5840  json.WriteNumber(suballocItem->size);
    5841  }
    5842  else
    5843  {
    5844  suballocItem->hAllocation->PrintParameters(json);
    5845  }
    5846 
    5847  json.EndObject();
    5848  }
    5849  json.EndArray();
    5850 
    5851  json.EndObject();
    5852 }
    5853 
    5854 #endif // #if VMA_STATS_STRING_ENABLED
    5855 
    5856 /*
    5857 How many suitable free suballocations to analyze before choosing the best one.
    5858 - Set to 1 to use First-Fit algorithm - first suitable free suballocation will
    5859  be chosen.
    5860 - Set to UINT32_MAX to use Best-Fit/Worst-Fit algorithm - all suitable free
    5861  suballocations will be analyzed and the best one will be chosen.
    5862 - Any other value is also acceptable.
    5863 */
    5864 //static const uint32_t MAX_SUITABLE_SUBALLOCATIONS_TO_CHECK = 8;
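// Illustrative example (not part of the code): with registered free sizes
// {8, 16, 32, 64} sorted ascending and a request needing 20 bytes including
// margins, the VMA_BEST_FIT path binary-searches to the first size not less
// than 20 and tries 32 first; the other path starts from 64, the biggest.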
    5865 
    5866 bool VmaBlockMetadata::CreateAllocationRequest(
    5867  uint32_t currentFrameIndex,
    5868  uint32_t frameInUseCount,
    5869  VkDeviceSize bufferImageGranularity,
    5870  VkDeviceSize allocSize,
    5871  VkDeviceSize allocAlignment,
    5872  VmaSuballocationType allocType,
    5873  bool canMakeOtherLost,
    5874  VmaAllocationRequest* pAllocationRequest)
    5875 {
    5876  VMA_ASSERT(allocSize > 0);
    5877  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    5878  VMA_ASSERT(pAllocationRequest != VMA_NULL);
    5879  VMA_HEAVY_ASSERT(Validate());
    5880 
    5881  // There is not enough total free space in this block to fulfill the request: Early return.
    5882  if(canMakeOtherLost == false && m_SumFreeSize < allocSize + 2 * VMA_DEBUG_MARGIN)
    5883  {
    5884  return false;
    5885  }
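 // Example: with VMA_DEBUG_MARGIN == 16, a 256-byte request needs at least
 // 256 + 2*16 = 288 bytes of total free space in the block before any search
 // is attempted.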
    5886 
    5887  // New algorithm, efficiently searching m_FreeSuballocationsBySize.
    5888  const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
    5889  if(freeSuballocCount > 0)
    5890  {
    5891  if(VMA_BEST_FIT)
    5892  {
    5893  // Find first free suballocation with size not less than allocSize + 2 * VMA_DEBUG_MARGIN.
    5894  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
    5895  m_FreeSuballocationsBySize.data(),
    5896  m_FreeSuballocationsBySize.data() + freeSuballocCount,
    5897  allocSize + 2 * VMA_DEBUG_MARGIN,
    5898  VmaSuballocationItemSizeLess());
    5899  size_t index = it - m_FreeSuballocationsBySize.data();
    5900  for(; index < freeSuballocCount; ++index)
    5901  {
    5902  if(CheckAllocation(
    5903  currentFrameIndex,
    5904  frameInUseCount,
    5905  bufferImageGranularity,
    5906  allocSize,
    5907  allocAlignment,
    5908  allocType,
    5909  m_FreeSuballocationsBySize[index],
    5910  false, // canMakeOtherLost
    5911  &pAllocationRequest->offset,
    5912  &pAllocationRequest->itemsToMakeLostCount,
    5913  &pAllocationRequest->sumFreeSize,
    5914  &pAllocationRequest->sumItemSize))
    5915  {
    5916  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
    5917  return true;
    5918  }
    5919  }
    5920  }
    5921  else
    5922  {
    5923  // Search starting from the biggest suballocations.
    5924  for(size_t index = freeSuballocCount; index--; )
    5925  {
    5926  if(CheckAllocation(
    5927  currentFrameIndex,
    5928  frameInUseCount,
    5929  bufferImageGranularity,
    5930  allocSize,
    5931  allocAlignment,
    5932  allocType,
    5933  m_FreeSuballocationsBySize[index],
    5934  false, // canMakeOtherLost
    5935  &pAllocationRequest->offset,
    5936  &pAllocationRequest->itemsToMakeLostCount,
    5937  &pAllocationRequest->sumFreeSize,
    5938  &pAllocationRequest->sumItemSize))
    5939  {
    5940  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
    5941  return true;
    5942  }
    5943  }
    5944  }
    5945  }
    5946 
    5947  if(canMakeOtherLost)
    5948  {
    5949  // Brute-force algorithm. TODO: Come up with something better.
    5950 
    5951  pAllocationRequest->sumFreeSize = VK_WHOLE_SIZE;
    5952  pAllocationRequest->sumItemSize = VK_WHOLE_SIZE;
    5953 
    5954  VmaAllocationRequest tmpAllocRequest = {};
    5955  for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
    5956  suballocIt != m_Suballocations.end();
    5957  ++suballocIt)
    5958  {
    5959  if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
    5960  suballocIt->hAllocation->CanBecomeLost())
    5961  {
    5962  if(CheckAllocation(
    5963  currentFrameIndex,
    5964  frameInUseCount,
    5965  bufferImageGranularity,
    5966  allocSize,
    5967  allocAlignment,
    5968  allocType,
    5969  suballocIt,
    5970  canMakeOtherLost,
    5971  &tmpAllocRequest.offset,
    5972  &tmpAllocRequest.itemsToMakeLostCount,
    5973  &tmpAllocRequest.sumFreeSize,
    5974  &tmpAllocRequest.sumItemSize))
    5975  {
    5976  tmpAllocRequest.item = suballocIt;
    5977 
    5978  if(tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
    5979  {
    5980  *pAllocationRequest = tmpAllocRequest;
    5981  }
    5982  }
    5983  }
    5984  }
    5985 
    5986  if(pAllocationRequest->sumItemSize != VK_WHOLE_SIZE)
    5987  {
    5988  return true;
    5989  }
    5990  }
    5991 
    5992  return false;
    5993 }
    5994 
    5995 bool VmaBlockMetadata::MakeRequestedAllocationsLost(
    5996  uint32_t currentFrameIndex,
    5997  uint32_t frameInUseCount,
    5998  VmaAllocationRequest* pAllocationRequest)
    5999 {
    6000  while(pAllocationRequest->itemsToMakeLostCount > 0)
    6001  {
    6002  if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
    6003  {
    6004  ++pAllocationRequest->item;
    6005  }
    6006  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    6007  VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
    6008  VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
    6009  if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
    6010  {
    6011  pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
    6012  --pAllocationRequest->itemsToMakeLostCount;
    6013  }
    6014  else
    6015  {
    6016  return false;
    6017  }
    6018  }
    6019 
    6020  VMA_HEAVY_ASSERT(Validate());
    6021  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    6022  VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6023 
    6024  return true;
    6025 }
    6026 
    6027 uint32_t VmaBlockMetadata::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
    6028 {
    6029  uint32_t lostAllocationCount = 0;
    6030  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
    6031  it != m_Suballocations.end();
    6032  ++it)
    6033  {
    6034  if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
    6035  it->hAllocation->CanBecomeLost() &&
    6036  it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
    6037  {
    6038  it = FreeSuballocation(it);
    6039  ++lostAllocationCount;
    6040  }
    6041  }
    6042  return lostAllocationCount;
    6043 }
    6044 
    6045 VkResult VmaBlockMetadata::CheckCorruption(const void* pBlockData)
    6046 {
    6047  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
    6048  it != m_Suballocations.end();
    6049  ++it)
    6050  {
    6051  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
    6052  {
    6053  if(!VmaValidateMagicValue(pBlockData, it->offset - VMA_DEBUG_MARGIN))
    6054  {
    6055  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
    6056  return VK_ERROR_VALIDATION_FAILED_EXT;
    6057  }
    6058  if(!VmaValidateMagicValue(pBlockData, it->offset + it->size))
    6059  {
    6060  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
    6061  return VK_ERROR_VALIDATION_FAILED_EXT;
    6062  }
    6063  }
    6064  }
    6065 
    6066  return VK_SUCCESS;
    6067 }
    6068 
    6069 void VmaBlockMetadata::Alloc(
    6070  const VmaAllocationRequest& request,
    6071  VmaSuballocationType type,
    6072  VkDeviceSize allocSize,
    6073  VmaAllocation hAllocation)
    6074 {
    6075  VMA_ASSERT(request.item != m_Suballocations.end());
    6076  VmaSuballocation& suballoc = *request.item;
    6077  // Given suballocation is a free block.
    6078  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    6079  // Given offset is inside this suballocation.
    6080  VMA_ASSERT(request.offset >= suballoc.offset);
    6081  const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
    6082  VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
    6083  const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;
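 // The chosen free suballocation is split as:
 // |<- paddingBegin ->|<-------- allocSize -------->|<- paddingEnd ->|
 // where paddingBegin absorbs alignment and the leading VMA_DEBUG_MARGIN,
 // and paddingEnd is whatever remains of the original free range.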
    6084 
    6085  // Unregister this free suballocation from m_FreeSuballocationsBySize and update
    6086  // it to become used.
    6087  UnregisterFreeSuballocation(request.item);
    6088 
    6089  suballoc.offset = request.offset;
    6090  suballoc.size = allocSize;
    6091  suballoc.type = type;
    6092  suballoc.hAllocation = hAllocation;
    6093 
    6094  // If there are any free bytes remaining at the end, insert new free suballocation after current one.
    6095  if(paddingEnd)
    6096  {
    6097  VmaSuballocation paddingSuballoc = {};
    6098  paddingSuballoc.offset = request.offset + allocSize;
    6099  paddingSuballoc.size = paddingEnd;
    6100  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6101  VmaSuballocationList::iterator next = request.item;
    6102  ++next;
    6103  const VmaSuballocationList::iterator paddingEndItem =
    6104  m_Suballocations.insert(next, paddingSuballoc);
    6105  RegisterFreeSuballocation(paddingEndItem);
    6106  }
    6107 
    6108  // If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
    6109  if(paddingBegin)
    6110  {
    6111  VmaSuballocation paddingSuballoc = {};
    6112  paddingSuballoc.offset = request.offset - paddingBegin;
    6113  paddingSuballoc.size = paddingBegin;
    6114  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6115  const VmaSuballocationList::iterator paddingBeginItem =
    6116  m_Suballocations.insert(request.item, paddingSuballoc);
    6117  RegisterFreeSuballocation(paddingBeginItem);
    6118  }
    6119 
    6120  // Update totals.
    6121  m_FreeCount = m_FreeCount - 1;
    6122  if(paddingBegin > 0)
    6123  {
    6124  ++m_FreeCount;
    6125  }
    6126  if(paddingEnd > 0)
    6127  {
    6128  ++m_FreeCount;
    6129  }
    6130  m_SumFreeSize -= allocSize;
    6131 }
    6132 
    6133 void VmaBlockMetadata::Free(const VmaAllocation allocation)
    6134 {
    6135  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
    6136  suballocItem != m_Suballocations.end();
    6137  ++suballocItem)
    6138  {
    6139  VmaSuballocation& suballoc = *suballocItem;
    6140  if(suballoc.hAllocation == allocation)
    6141  {
    6142  FreeSuballocation(suballocItem);
    6143  VMA_HEAVY_ASSERT(Validate());
    6144  return;
    6145  }
    6146  }
    6147  VMA_ASSERT(0 && "Not found!");
    6148 }
    6149 
    6150 void VmaBlockMetadata::FreeAtOffset(VkDeviceSize offset)
    6151 {
    6152  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
    6153  suballocItem != m_Suballocations.end();
    6154  ++suballocItem)
    6155  {
    6156  VmaSuballocation& suballoc = *suballocItem;
    6157  if(suballoc.offset == offset)
    6158  {
    6159  FreeSuballocation(suballocItem);
    6160  return;
    6161  }
    6162  }
    6163  VMA_ASSERT(0 && "Not found!");
    6164 }
    6165 
    6166 bool VmaBlockMetadata::ValidateFreeSuballocationList() const
    6167 {
    6168  VkDeviceSize lastSize = 0;
    6169  for(size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
    6170  {
    6171  const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
    6172 
    6173  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
    6174  {
    6175  VMA_ASSERT(0);
    6176  return false;
    6177  }
    6178  if(it->size < VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6179  {
    6180  VMA_ASSERT(0);
    6181  return false;
    6182  }
    6183  if(it->size < lastSize)
    6184  {
    6185  VMA_ASSERT(0);
    6186  return false;
    6187  }
    6188 
    6189  lastSize = it->size;
    6190  }
    6191  return true;
    6192 }
    6193 
    6194 bool VmaBlockMetadata::CheckAllocation(
    6195  uint32_t currentFrameIndex,
    6196  uint32_t frameInUseCount,
    6197  VkDeviceSize bufferImageGranularity,
    6198  VkDeviceSize allocSize,
    6199  VkDeviceSize allocAlignment,
    6200  VmaSuballocationType allocType,
    6201  VmaSuballocationList::const_iterator suballocItem,
    6202  bool canMakeOtherLost,
    6203  VkDeviceSize* pOffset,
    6204  size_t* itemsToMakeLostCount,
    6205  VkDeviceSize* pSumFreeSize,
    6206  VkDeviceSize* pSumItemSize) const
    6207 {
    6208  VMA_ASSERT(allocSize > 0);
    6209  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    6210  VMA_ASSERT(suballocItem != m_Suballocations.cend());
    6211  VMA_ASSERT(pOffset != VMA_NULL);
    6212 
    6213  *itemsToMakeLostCount = 0;
    6214  *pSumFreeSize = 0;
    6215  *pSumItemSize = 0;
    6216 
    6217  if(canMakeOtherLost)
    6218  {
    6219  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6220  {
    6221  *pSumFreeSize = suballocItem->size;
    6222  }
    6223  else
    6224  {
    6225  if(suballocItem->hAllocation->CanBecomeLost() &&
    6226  suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6227  {
    6228  ++*itemsToMakeLostCount;
    6229  *pSumItemSize = suballocItem->size;
    6230  }
    6231  else
    6232  {
    6233  return false;
    6234  }
    6235  }
    6236 
    6237  // Remaining size is too small for this request: Early return.
    6238  if(m_Size - suballocItem->offset < allocSize)
    6239  {
    6240  return false;
    6241  }
    6242 
    6243  // Start from offset equal to beginning of this suballocation.
    6244  *pOffset = suballocItem->offset;
    6245 
    6246  // Apply VMA_DEBUG_MARGIN at the beginning.
    6247  if(VMA_DEBUG_MARGIN > 0)
    6248  {
    6249  *pOffset += VMA_DEBUG_MARGIN;
    6250  }
    6251 
    6252  // Apply alignment.
    6253  *pOffset = VmaAlignUp(*pOffset, allocAlignment);
    6254 
    6255  // Check previous suballocations for BufferImageGranularity conflicts.
    6256  // Make bigger alignment if necessary.
    6257  if(bufferImageGranularity > 1)
    6258  {
    6259  bool bufferImageGranularityConflict = false;
    6260  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
    6261  while(prevSuballocItem != m_Suballocations.cbegin())
    6262  {
    6263  --prevSuballocItem;
    6264  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
    6265  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
    6266  {
    6267  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
    6268  {
    6269  bufferImageGranularityConflict = true;
    6270  break;
    6271  }
    6272  }
    6273  else
    6274  // Already on previous page.
    6275  break;
    6276  }
    6277  if(bufferImageGranularityConflict)
    6278  {
    6279  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
    6280  }
    6281  }
    6282 
    6283  // Now that we have final *pOffset, check if we are past suballocItem.
    6284  // If yes, return false - this function should be called for another suballocItem as starting point.
    6285  if(*pOffset >= suballocItem->offset + suballocItem->size)
    6286  {
    6287  return false;
    6288  }
    6289 
    6290  // Calculate padding at the beginning based on current offset.
    6291  const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;
    6292 
    6293  // Calculate required margin at the end.
    6294  const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;
    6295 
    6296  const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
    6297  // Another early return check.
    6298  if(suballocItem->offset + totalSize > m_Size)
    6299  {
    6300  return false;
    6301  }
    6302 
    6303  // Advance lastSuballocItem until desired size is reached.
    6304  // Update itemsToMakeLostCount.
    6305  VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
    6306  if(totalSize > suballocItem->size)
    6307  {
    6308  VkDeviceSize remainingSize = totalSize - suballocItem->size;
    6309  while(remainingSize > 0)
    6310  {
    6311  ++lastSuballocItem;
    6312  if(lastSuballocItem == m_Suballocations.cend())
    6313  {
    6314  return false;
    6315  }
    6316  if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6317  {
    6318  *pSumFreeSize += lastSuballocItem->size;
    6319  }
    6320  else
    6321  {
    6322  VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
    6323  if(lastSuballocItem->hAllocation->CanBecomeLost() &&
    6324  lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6325  {
    6326  ++*itemsToMakeLostCount;
    6327  *pSumItemSize += lastSuballocItem->size;
    6328  }
    6329  else
    6330  {
    6331  return false;
    6332  }
    6333  }
    6334  remainingSize = (lastSuballocItem->size < remainingSize) ?
    6335  remainingSize - lastSuballocItem->size : 0;
    6336  }
    6337  }
    6338 
    6339  // Check next suballocations for BufferImageGranularity conflicts.
    6340  // If conflict exists, we must mark more allocations lost or fail.
    6341  if(bufferImageGranularity > 1)
    6342  {
    6343  VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
    6344  ++nextSuballocItem;
    6345  while(nextSuballocItem != m_Suballocations.cend())
    6346  {
    6347  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
    6348  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
    6349  {
    6350  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
    6351  {
    6352  VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
    6353  if(nextSuballoc.hAllocation->CanBecomeLost() &&
    6354  nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6355  {
    6356  ++*itemsToMakeLostCount;
    6357  }
    6358  else
    6359  {
    6360  return false;
    6361  }
    6362  }
    6363  }
    6364  else
    6365  {
    6366  // Already on next page.
    6367  break;
    6368  }
    6369  ++nextSuballocItem;
    6370  }
    6371  }
    6372  }
    6373  else
    6374  {
    6375  const VmaSuballocation& suballoc = *suballocItem;
    6376  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    6377 
    6378  *pSumFreeSize = suballoc.size;
    6379 
    6380  // Size of this suballocation is too small for this request: Early return.
    6381  if(suballoc.size < allocSize)
    6382  {
    6383  return false;
    6384  }
    6385 
    6386  // Start from offset equal to beginning of this suballocation.
    6387  *pOffset = suballoc.offset;
    6388 
    6389  // Apply VMA_DEBUG_MARGIN at the beginning.
    6390  if(VMA_DEBUG_MARGIN > 0)
    6391  {
    6392  *pOffset += VMA_DEBUG_MARGIN;
    6393  }
    6394 
    6395  // Apply alignment.
    6396  *pOffset = VmaAlignUp(*pOffset, allocAlignment);
    6397 
    6398  // Check previous suballocations for BufferImageGranularity conflicts.
    6399  // Make bigger alignment if necessary.
    6400  if(bufferImageGranularity > 1)
    6401  {
    6402  bool bufferImageGranularityConflict = false;
    6403  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
    6404  while(prevSuballocItem != m_Suballocations.cbegin())
    6405  {
    6406  --prevSuballocItem;
    6407  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
    6408  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
    6409  {
    6410  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
    6411  {
    6412  bufferImageGranularityConflict = true;
    6413  break;
    6414  }
    6415  }
    6416  else
    6417  // Already on previous page.
    6418  break;
    6419  }
    6420  if(bufferImageGranularityConflict)
    6421  {
    6422  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
    6423  }
    6424  }
    6425 
    6426  // Calculate padding at the beginning based on current offset.
    6427  const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;
    6428 
    6429  // Calculate required margin at the end.
    6430  const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;
    6431 
    6432  // Fail if requested size plus margin before and after is bigger than size of this suballocation.
    6433  if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
    6434  {
    6435  return false;
    6436  }
    6437 
    6438  // Check next suballocations for BufferImageGranularity conflicts.
    6439  // If conflict exists, allocation cannot be made here.
    6440  if(bufferImageGranularity > 1)
    6441  {
    6442  VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
    6443  ++nextSuballocItem;
    6444  while(nextSuballocItem != m_Suballocations.cend())
    6445  {
    6446  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
    6447  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
    6448  {
    6449  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
    6450  {
    6451  return false;
    6452  }
    6453  }
    6454  else
    6455  {
    6456  // Already on next page.
    6457  break;
    6458  }
    6459  ++nextSuballocItem;
    6460  }
    6461  }
    6462  }
    6463 
    6464  // All tests passed: Success. pOffset is already filled.
    6465  return true;
    6466 }
    6467 
    6468 void VmaBlockMetadata::MergeFreeWithNext(VmaSuballocationList::iterator item)
    6469 {
    6470  VMA_ASSERT(item != m_Suballocations.end());
    6471  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6472 
    6473  VmaSuballocationList::iterator nextItem = item;
    6474  ++nextItem;
    6475  VMA_ASSERT(nextItem != m_Suballocations.end());
    6476  VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);
    6477 
    6478  item->size += nextItem->size;
    6479  --m_FreeCount;
    6480  m_Suballocations.erase(nextItem);
    6481 }
    6482 
    6483 VmaSuballocationList::iterator VmaBlockMetadata::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
    6484 {
    6485  // Change this suballocation to be marked as free.
    6486  VmaSuballocation& suballoc = *suballocItem;
    6487  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6488  suballoc.hAllocation = VK_NULL_HANDLE;
    6489 
    6490  // Update totals.
    6491  ++m_FreeCount;
    6492  m_SumFreeSize += suballoc.size;
    6493 
    6494  // Merge with previous and/or next suballocation if it's also free.
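 // Merging restores the invariant, checked in Validate(), that no two
 // adjacent suballocations are ever both free.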
    6495  bool mergeWithNext = false;
    6496  bool mergeWithPrev = false;
    6497 
    6498  VmaSuballocationList::iterator nextItem = suballocItem;
    6499  ++nextItem;
    6500  if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
    6501  {
    6502  mergeWithNext = true;
    6503  }
    6504 
    6505  VmaSuballocationList::iterator prevItem = suballocItem;
    6506  if(suballocItem != m_Suballocations.begin())
    6507  {
    6508  --prevItem;
    6509  if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6510  {
    6511  mergeWithPrev = true;
    6512  }
    6513  }
    6514 
    6515  if(mergeWithNext)
    6516  {
    6517  UnregisterFreeSuballocation(nextItem);
    6518  MergeFreeWithNext(suballocItem);
    6519  }
    6520 
    6521  if(mergeWithPrev)
    6522  {
    6523  UnregisterFreeSuballocation(prevItem);
    6524  MergeFreeWithNext(prevItem);
    6525  RegisterFreeSuballocation(prevItem);
    6526  return prevItem;
    6527  }
    6528  else
    6529  {
    6530  RegisterFreeSuballocation(suballocItem);
    6531  return suballocItem;
    6532  }
    6533 }
    6534 
    6535 void VmaBlockMetadata::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
    6536 {
    6537  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6538  VMA_ASSERT(item->size > 0);
    6539 
    6540  // You may want to enable this validation at the beginning or at the end of
    6541  // this function, depending on what you want to check.
    6542  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6543 
    6544  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6545  {
    6546  if(m_FreeSuballocationsBySize.empty())
    6547  {
    6548  m_FreeSuballocationsBySize.push_back(item);
    6549  }
    6550  else
    6551  {
    6552  VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
    6553  }
    6554  }
    6555 
    6556  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6557 }
    6558 
    6559 
    6560 void VmaBlockMetadata::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
    6561 {
    6562  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6563  VMA_ASSERT(item->size > 0);
    6564 
    6565  // You may want to enable this validation at the beginning or at the end of
    6566  // this function, depending on what you want to check.
    6567  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6568 
    6569  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6570  {
    6571  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
    6572  m_FreeSuballocationsBySize.data(),
    6573  m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
    6574  item,
    6575  VmaSuballocationItemSizeLess());
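 // m_FreeSuballocationsBySize is sorted by size, so the binary search above
 // lands on the first entry with size >= item->size; the loop below then
 // walks the run of equal-sized entries to find this exact iterator.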
    6576  for(size_t index = it - m_FreeSuballocationsBySize.data();
    6577  index < m_FreeSuballocationsBySize.size();
    6578  ++index)
    6579  {
    6580  if(m_FreeSuballocationsBySize[index] == item)
    6581  {
    6582  VmaVectorRemove(m_FreeSuballocationsBySize, index);
    6583  return;
    6584  }
    6585  VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
    6586  }
    6587  VMA_ASSERT(0 && "Not found.");
    6588  }
    6589 
    6590  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6591 }
    6592 
    6593 ////////////////////////////////////////////////////////////////////////////////
    6594 // class VmaDeviceMemoryBlock
    6595 
    6596 VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator) :
    6597  m_Metadata(hAllocator),
    6598  m_MemoryTypeIndex(UINT32_MAX),
    6599  m_Id(0),
    6600  m_hMemory(VK_NULL_HANDLE),
    6601  m_MapCount(0),
    6602  m_pMappedData(VMA_NULL)
    6603 {
    6604 }
    6605 
    6606 void VmaDeviceMemoryBlock::Init(
    6607  uint32_t newMemoryTypeIndex,
    6608  VkDeviceMemory newMemory,
    6609  VkDeviceSize newSize,
    6610  uint32_t id)
    6611 {
    6612  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
    6613 
    6614  m_MemoryTypeIndex = newMemoryTypeIndex;
    6615  m_Id = id;
    6616  m_hMemory = newMemory;
    6617 
    6618  m_Metadata.Init(newSize);
    6619 }
    6620 
    6621 void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
    6622 {
    6623  // This is the most important assert in the entire library.
    6624  // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
    6625  VMA_ASSERT(m_Metadata.IsEmpty() && "Some allocations were not freed before destruction of this memory block!");
    6626 
    6627  VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
    6628  allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_Metadata.GetSize(), m_hMemory);
    6629  m_hMemory = VK_NULL_HANDLE;
    6630 }
    6631 
    6632 bool VmaDeviceMemoryBlock::Validate() const
    6633 {
    6634  if((m_hMemory == VK_NULL_HANDLE) ||
    6635  (m_Metadata.GetSize() == 0))
    6636  {
    6637  return false;
    6638  }
    6639 
    6640  return m_Metadata.Validate();
    6641 }
    6642 
    6643 VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
    6644 {
    6645  void* pData = VMA_NULL;
    6646  VkResult res = Map(hAllocator, 1, &pData);
    6647  if(res != VK_SUCCESS)
    6648  {
    6649  return res;
    6650  }
    6651 
    6652  res = m_Metadata.CheckCorruption(pData);
    6653 
    6654  Unmap(hAllocator, 1);
    6655 
    6656  return res;
    6657 }
    6658 
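// Map/Unmap are reference-counted: the first Map calls vkMapMemory on the
// whole block and later calls only increment m_MapCount, so multiple
// allocations (including persistently mapped ones) share a single mapping.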
    6659 VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
    6660 {
    6661  if(count == 0)
    6662  {
    6663  return VK_SUCCESS;
    6664  }
    6665 
    6666  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6667  if(m_MapCount != 0)
    6668  {
    6669  m_MapCount += count;
    6670  VMA_ASSERT(m_pMappedData != VMA_NULL);
    6671  if(ppData != VMA_NULL)
    6672  {
    6673  *ppData = m_pMappedData;
    6674  }
    6675  return VK_SUCCESS;
    6676  }
    6677  else
    6678  {
    6679  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
    6680  hAllocator->m_hDevice,
    6681  m_hMemory,
    6682  0, // offset
    6683  VK_WHOLE_SIZE,
    6684  0, // flags
    6685  &m_pMappedData);
    6686  if(result == VK_SUCCESS)
    6687  {
    6688  if(ppData != VMA_NULL)
    6689  {
    6690  *ppData = m_pMappedData;
    6691  }
    6692  m_MapCount = count;
    6693  }
    6694  return result;
    6695  }
    6696 }
    6697 
    6698 void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
    6699 {
    6700  if(count == 0)
    6701  {
    6702  return;
    6703  }
    6704 
    6705  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6706  if(m_MapCount >= count)
    6707  {
    6708  m_MapCount -= count;
    6709  if(m_MapCount == 0)
    6710  {
    6711  m_pMappedData = VMA_NULL;
    6712  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
    6713  }
    6714  }
    6715  else
    6716  {
    6717  VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
    6718  }
    6719 }
    6720 
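// Canary layout when corruption detection is enabled: VMA_DEBUG_MARGIN magic
// bytes start at allocOffset - VMA_DEBUG_MARGIN (before the allocation) and
// at allocOffset + allocSize (after it). The block is mapped temporarily to
// write or validate these values if it is not mapped already.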
    6721 VkResult VmaDeviceMemoryBlock::WriteMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
    6722 {
    6723  VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    6724  VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
    6725 
    6726  void* pData;
    6727  VkResult res = Map(hAllocator, 1, &pData);
    6728  if(res != VK_SUCCESS)
    6729  {
    6730  return res;
    6731  }
    6732 
    6733  VmaWriteMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN);
    6734  VmaWriteMagicValue(pData, allocOffset + allocSize);
    6735 
    6736  Unmap(hAllocator, 1);
    6737 
    6738  return VK_SUCCESS;
    6739 }
    6740 
    6741 VkResult VmaDeviceMemoryBlock::ValidateMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
    6742 {
    6743  VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    6744  VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
    6745 
    6746  void* pData;
    6747  VkResult res = Map(hAllocator, 1, &pData);
    6748  if(res != VK_SUCCESS)
    6749  {
    6750  return res;
    6751  }
    6752 
    6753  if(!VmaValidateMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN))
    6754  {
    6755  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE FREED ALLOCATION!");
    6756  }
    6757  else if(!VmaValidateMagicValue(pData, allocOffset + allocSize))
    6758  {
    6759  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
    6760  }
    6761 
    6762  Unmap(hAllocator, 1);
    6763 
    6764  return VK_SUCCESS;
    6765 }
    6766 
    6767 VkResult VmaDeviceMemoryBlock::BindBufferMemory(
    6768  const VmaAllocator hAllocator,
    6769  const VmaAllocation hAllocation,
    6770  VkBuffer hBuffer)
    6771 {
    6772  VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
    6773  hAllocation->GetBlock() == this);
    6774  // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    6775  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6776  return hAllocator->GetVulkanFunctions().vkBindBufferMemory(
    6777  hAllocator->m_hDevice,
    6778  hBuffer,
    6779  m_hMemory,
    6780  hAllocation->GetOffset());
    6781 }
    6782 
    6783 VkResult VmaDeviceMemoryBlock::BindImageMemory(
    6784  const VmaAllocator hAllocator,
    6785  const VmaAllocation hAllocation,
    6786  VkImage hImage)
    6787 {
    6788  VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
    6789  hAllocation->GetBlock() == this);
    6790  // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    6791  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6792  return hAllocator->GetVulkanFunctions().vkBindImageMemory(
    6793  hAllocator->m_hDevice,
    6794  hImage,
    6795  m_hMemory,
    6796  hAllocation->GetOffset());
    6797 }
    6798 
    6799 static void InitStatInfo(VmaStatInfo& outInfo)
    6800 {
    6801  memset(&outInfo, 0, sizeof(outInfo));
    6802  outInfo.allocationSizeMin = UINT64_MAX;
    6803  outInfo.unusedRangeSizeMin = UINT64_MAX;
    6804 }
    6805 
    6806 // Adds statistics srcInfo into inoutInfo, like: inoutInfo += srcInfo.
    6807 static void VmaAddStatInfo(VmaStatInfo& inoutInfo, const VmaStatInfo& srcInfo)
    6808 {
    6809  inoutInfo.blockCount += srcInfo.blockCount;
    6810  inoutInfo.allocationCount += srcInfo.allocationCount;
    6811  inoutInfo.unusedRangeCount += srcInfo.unusedRangeCount;
    6812  inoutInfo.usedBytes += srcInfo.usedBytes;
    6813  inoutInfo.unusedBytes += srcInfo.unusedBytes;
    6814  inoutInfo.allocationSizeMin = VMA_MIN(inoutInfo.allocationSizeMin, srcInfo.allocationSizeMin);
    6815  inoutInfo.allocationSizeMax = VMA_MAX(inoutInfo.allocationSizeMax, srcInfo.allocationSizeMax);
    6816  inoutInfo.unusedRangeSizeMin = VMA_MIN(inoutInfo.unusedRangeSizeMin, srcInfo.unusedRangeSizeMin);
    6817  inoutInfo.unusedRangeSizeMax = VMA_MAX(inoutInfo.unusedRangeSizeMax, srcInfo.unusedRangeSizeMax);
    6818 }
    6819 
    6820 static void VmaPostprocessCalcStatInfo(VmaStatInfo& inoutInfo)
    6821 {
    6822  inoutInfo.allocationSizeAvg = (inoutInfo.allocationCount > 0) ?
    6823  VmaRoundDiv<VkDeviceSize>(inoutInfo.usedBytes, inoutInfo.allocationCount) : 0;
    6824  inoutInfo.unusedRangeSizeAvg = (inoutInfo.unusedRangeCount > 0) ?
    6825  VmaRoundDiv<VkDeviceSize>(inoutInfo.unusedBytes, inoutInfo.unusedRangeCount) : 0;
    6826 }
    6827 
    6828 VmaPool_T::VmaPool_T(
    6829  VmaAllocator hAllocator,
    6830  const VmaPoolCreateInfo& createInfo) :
    6831  m_BlockVector(
    6832  hAllocator,
    6833  createInfo.memoryTypeIndex,
    6834  createInfo.blockSize,
    6835  createInfo.minBlockCount,
    6836  createInfo.maxBlockCount,
    6837  (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
    6838  createInfo.frameInUseCount,
    6839  true), // isCustomPool
    6840  m_Id(0)
    6841 {
    6842 }
    6843 
    6844 VmaPool_T::~VmaPool_T()
    6845 {
    6846 }
    6847 
    6848 #if VMA_STATS_STRING_ENABLED
    6849 
    6850 #endif // #if VMA_STATS_STRING_ENABLED
    6851 
    6852 VmaBlockVector::VmaBlockVector(
    6853  VmaAllocator hAllocator,
    6854  uint32_t memoryTypeIndex,
    6855  VkDeviceSize preferredBlockSize,
    6856  size_t minBlockCount,
    6857  size_t maxBlockCount,
    6858  VkDeviceSize bufferImageGranularity,
    6859  uint32_t frameInUseCount,
    6860  bool isCustomPool) :
    6861  m_hAllocator(hAllocator),
    6862  m_MemoryTypeIndex(memoryTypeIndex),
    6863  m_PreferredBlockSize(preferredBlockSize),
    6864  m_MinBlockCount(minBlockCount),
    6865  m_MaxBlockCount(maxBlockCount),
    6866  m_BufferImageGranularity(bufferImageGranularity),
    6867  m_FrameInUseCount(frameInUseCount),
    6868  m_IsCustomPool(isCustomPool),
    6869  m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
    6870  m_HasEmptyBlock(false),
    6871  m_pDefragmentator(VMA_NULL),
    6872  m_NextBlockId(0)
    6873 {
    6874 }
    6875 
    6876 VmaBlockVector::~VmaBlockVector()
    6877 {
    6878  VMA_ASSERT(m_pDefragmentator == VMA_NULL);
    6879 
    6880  for(size_t i = m_Blocks.size(); i--; )
    6881  {
    6882  m_Blocks[i]->Destroy(m_hAllocator);
    6883  vma_delete(m_hAllocator, m_Blocks[i]);
    6884  }
    6885 }
    6886 
    6887 VkResult VmaBlockVector::CreateMinBlocks()
    6888 {
    6889  for(size_t i = 0; i < m_MinBlockCount; ++i)
    6890  {
    6891  VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
    6892  if(res != VK_SUCCESS)
    6893  {
    6894  return res;
    6895  }
    6896  }
    6897  return VK_SUCCESS;
    6898 }
    6899 
    6900 void VmaBlockVector::GetPoolStats(VmaPoolStats* pStats)
    6901 {
    6902  pStats->size = 0;
    6903  pStats->unusedSize = 0;
    6904  pStats->allocationCount = 0;
    6905  pStats->unusedRangeCount = 0;
    6906  pStats->unusedRangeSizeMax = 0;
    6907 
    6908  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6909 
    6910  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    6911  {
    6912  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    6913  VMA_ASSERT(pBlock);
    6914  VMA_HEAVY_ASSERT(pBlock->Validate());
    6915  pBlock->m_Metadata.AddPoolStats(*pStats);
    6916  }
    6917 }
    6918 
    6919 bool VmaBlockVector::IsCorruptionDetectionEnabled() const
    6920 {
    6921  const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
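 // Canary values are written and validated from the CPU, so the memory type
 // must be mappable (HOST_VISIBLE) and must not require explicit flushes or
 // invalidations (HOST_COHERENT).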
    6922  return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
    6923  (VMA_DEBUG_MARGIN > 0) &&
    6924  (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
    6925 }
    6926 
    6927 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
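// Bounds the retry loop in step 3 of Allocate() below: between choosing the
// best request and actually making its allocations lost, another thread may
// touch those allocations again, so an attempt can fail and be retried at
// most this many times.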
    6928 
    6929 VkResult VmaBlockVector::Allocate(
    6930  VmaPool hCurrentPool,
    6931  uint32_t currentFrameIndex,
    6932  VkDeviceSize size,
    6933  VkDeviceSize alignment,
    6934  const VmaAllocationCreateInfo& createInfo,
    6935  VmaSuballocationType suballocType,
    6936  VmaAllocation* pAllocation)
    6937 {
    6938  // Early reject: requested allocation size is larger than the maximum block size for this block vector.
    6939  if(size + 2 * VMA_DEBUG_MARGIN > m_PreferredBlockSize)
    6940  {
    6941  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    6942  }
    6943 
    6944  const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    6945  const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
    6946 
    6947  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6948 
    6949  // 1. Search existing allocations. Try to allocate without making other allocations lost.
    6950  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    6951  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    6952  {
    6953  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    6954  VMA_ASSERT(pCurrBlock);
    6955  VmaAllocationRequest currRequest = {};
    6956  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    6957  currentFrameIndex,
    6958  m_FrameInUseCount,
    6959  m_BufferImageGranularity,
    6960  size,
    6961  alignment,
    6962  suballocType,
    6963  false, // canMakeOtherLost
    6964  &currRequest))
    6965  {
    6966  // Allocate from pCurrBlock.
    6967  VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);
    6968 
    6969  if(mapped)
    6970  {
    6971  VkResult res = pCurrBlock->Map(m_hAllocator, 1, VMA_NULL);
    6972  if(res != VK_SUCCESS)
    6973  {
    6974  return res;
    6975  }
    6976  }
    6977 
    6978  // We no longer have an empty block.
    6979  if(pCurrBlock->m_Metadata.IsEmpty())
    6980  {
    6981  m_HasEmptyBlock = false;
    6982  }
    6983 
    6984  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    6985  pCurrBlock->m_Metadata.Alloc(currRequest, suballocType, size, *pAllocation);
    6986  (*pAllocation)->InitBlockAllocation(
    6987  hCurrentPool,
    6988  pCurrBlock,
    6989  currRequest.offset,
    6990  alignment,
    6991  size,
    6992  suballocType,
    6993  mapped,
    6994  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    6995  VMA_HEAVY_ASSERT(pCurrBlock->Validate());
    6996  VMA_DEBUG_LOG(" Returned from existing allocation #%u", (uint32_t)blockIndex);
    6997  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    6998  if(IsCorruptionDetectionEnabled())
    6999  {
    7000  VkResult res = pCurrBlock->WriteMagicValueAroundAllocation(m_hAllocator, currRequest.offset, size);
    7001  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7002  }
    7003  return VK_SUCCESS;
    7004  }
    7005  }
    7006 
    7007  const bool canCreateNewBlock =
    7008  ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
    7009  (m_Blocks.size() < m_MaxBlockCount);
    7010 
    7011  // 2. Try to create new block.
    7012  if(canCreateNewBlock)
    7013  {
    7014  // Calculate optimal size for new block.
    7015  VkDeviceSize newBlockSize = m_PreferredBlockSize;
    7016  uint32_t newBlockSizeShift = 0;
    7017  const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
    7018 
    7019  // Allocating blocks of other sizes is allowed only in default pools.
    7020  // In custom pools block size is fixed.
    7021  if(m_IsCustomPool == false)
    7022  {
    7023  // Allocate 1/8, 1/4, 1/2 as first blocks.
    7024  const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
    7025  for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
    7026  {
    7027  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    7028  if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
    7029  {
    7030  newBlockSize = smallerNewBlockSize;
    7031  ++newBlockSizeShift;
    7032  }
    7033  else
    7034  {
    7035  break;
    7036  }
    7037  }
    7038  }
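 // Example: with m_PreferredBlockSize == 256 MiB, no existing blocks, and a
 // 4 MiB request, the loop halves 256 -> 128 -> 64 -> 32 MiB (3 shifts max),
 // so the first block created is 32 MiB instead of the full preferred size.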
    7039 
    7040  size_t newBlockIndex = 0;
    7041  VkResult res = CreateBlock(newBlockSize, &newBlockIndex);
    7042  // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
    7043  if(m_IsCustomPool == false)
    7044  {
    7045  while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
    7046  {
    7047  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    7048  if(smallerNewBlockSize >= size)
    7049  {
    7050  newBlockSize = smallerNewBlockSize;
    7051  ++newBlockSizeShift;
    7052  res = CreateBlock(newBlockSize, &newBlockIndex);
    7053  }
    7054  else
    7055  {
    7056  break;
    7057  }
    7058  }
    7059  }
    7060 
    7061  if(res == VK_SUCCESS)
    7062  {
    7063  VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
    7064  VMA_ASSERT(pBlock->m_Metadata.GetSize() >= size);
    7065 
    7066  if(mapped)
    7067  {
    7068  res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
    7069  if(res != VK_SUCCESS)
    7070  {
    7071  return res;
    7072  }
    7073  }
    7074 
 7075  // Allocate from pBlock. Because it is empty, allocRequest can be trivially filled.
    7076  VmaAllocationRequest allocRequest;
    7077  if(pBlock->m_Metadata.CreateAllocationRequest(
    7078  currentFrameIndex,
    7079  m_FrameInUseCount,
    7080  m_BufferImageGranularity,
    7081  size,
    7082  alignment,
    7083  suballocType,
    7084  false, // canMakeOtherLost
    7085  &allocRequest))
    7086  {
    7087  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    7088  pBlock->m_Metadata.Alloc(allocRequest, suballocType, size, *pAllocation);
    7089  (*pAllocation)->InitBlockAllocation(
    7090  hCurrentPool,
    7091  pBlock,
    7092  allocRequest.offset,
    7093  alignment,
    7094  size,
    7095  suballocType,
    7096  mapped,
    7097  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    7098  VMA_HEAVY_ASSERT(pBlock->Validate());
 7099  VMA_DEBUG_LOG("    Created new block Size=%llu", newBlockSize);
    7100  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    7101  if(IsCorruptionDetectionEnabled())
    7102  {
    7103  res = pBlock->WriteMagicValueAroundAllocation(m_hAllocator, allocRequest.offset, size);
    7104  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7105  }
    7106  return VK_SUCCESS;
    7107  }
    7108  else
    7109  {
    7110  // Allocation from empty block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
    7111  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7112  }
    7113  }
    7114  }
    7115 
    7116  const bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
    7117 
 7118  // 3. Try to allocate from existing blocks, making other allocations lost.
    7119  if(canMakeOtherLost)
    7120  {
    7121  uint32_t tryIndex = 0;
    7122  for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
    7123  {
    7124  VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
    7125  VmaAllocationRequest bestRequest = {};
    7126  VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;
    7127 
    7128  // 1. Search existing allocations.
    7129  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    7130  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    7131  {
    7132  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    7133  VMA_ASSERT(pCurrBlock);
    7134  VmaAllocationRequest currRequest = {};
    7135  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    7136  currentFrameIndex,
    7137  m_FrameInUseCount,
    7138  m_BufferImageGranularity,
    7139  size,
    7140  alignment,
    7141  suballocType,
    7142  canMakeOtherLost,
    7143  &currRequest))
    7144  {
    7145  const VkDeviceSize currRequestCost = currRequest.CalcCost();
    7146  if(pBestRequestBlock == VMA_NULL ||
    7147  currRequestCost < bestRequestCost)
    7148  {
    7149  pBestRequestBlock = pCurrBlock;
    7150  bestRequest = currRequest;
    7151  bestRequestCost = currRequestCost;
    7152 
    7153  if(bestRequestCost == 0)
    7154  {
    7155  break;
    7156  }
    7157  }
    7158  }
    7159  }
    7160 
    7161  if(pBestRequestBlock != VMA_NULL)
    7162  {
    7163  if(mapped)
    7164  {
    7165  VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
    7166  if(res != VK_SUCCESS)
    7167  {
    7168  return res;
    7169  }
    7170  }
    7171 
    7172  if(pBestRequestBlock->m_Metadata.MakeRequestedAllocationsLost(
    7173  currentFrameIndex,
    7174  m_FrameInUseCount,
    7175  &bestRequest))
    7176  {
 7177  // We no longer have an empty block.
    7178  if(pBestRequestBlock->m_Metadata.IsEmpty())
    7179  {
    7180  m_HasEmptyBlock = false;
    7181  }
 7182  // Allocate from pBestRequestBlock.
    7183  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    7184  pBestRequestBlock->m_Metadata.Alloc(bestRequest, suballocType, size, *pAllocation);
    7185  (*pAllocation)->InitBlockAllocation(
    7186  hCurrentPool,
    7187  pBestRequestBlock,
    7188  bestRequest.offset,
    7189  alignment,
    7190  size,
    7191  suballocType,
    7192  mapped,
    7193  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    7194  VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
 7195  VMA_DEBUG_LOG("    Returned from existing block");
    7196  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    7197  if(IsCorruptionDetectionEnabled())
    7198  {
    7199  VkResult res = pBestRequestBlock->WriteMagicValueAroundAllocation(m_hAllocator, bestRequest.offset, size);
    7200  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7201  }
    7202  return VK_SUCCESS;
    7203  }
    7204  // else: Some allocations must have been touched while we are here. Next try.
    7205  }
    7206  else
    7207  {
    7208  // Could not find place in any of the blocks - break outer loop.
    7209  break;
    7210  }
    7211  }
 7212  /* Maximum number of tries exceeded - a very unlikely event, happening when many
 7213  other threads are simultaneously touching allocations, making it impossible to
 7214  make them lost at the same time as we try to allocate. */
    7215  if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
    7216  {
    7217  return VK_ERROR_TOO_MANY_OBJECTS;
    7218  }
    7219  }
    7220 
    7221  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7222 }
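/*
Informational note: the algorithm above tries, in order: (1) free space in
existing blocks, (2) creating a new block (with the size fall-back loop),
(3) making other allocations lost. A minimal sketch of reaching this path
through the public API of this header; "allocator" is an existing VmaAllocator
and memReq is assumed to be filled from vkGetBufferMemoryRequirements:

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
    VmaAllocation alloc;
    VkResult res = vmaAllocateMemory(allocator, &memReq, &allocCreateInfo, &alloc, VMA_NULL);
*/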
    7223 
    7224 void VmaBlockVector::Free(
    7225  VmaAllocation hAllocation)
    7226 {
    7227  VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
    7228 
    7229  // Scope for lock.
    7230  {
    7231  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7232 
    7233  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    7234 
    7235  if(IsCorruptionDetectionEnabled())
    7236  {
    7237  VkResult res = pBlock->ValidateMagicValueAroundAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
    7238  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
    7239  }
    7240 
    7241  if(hAllocation->IsPersistentMap())
    7242  {
    7243  pBlock->Unmap(m_hAllocator, 1);
    7244  }
    7245 
    7246  pBlock->m_Metadata.Free(hAllocation);
    7247  VMA_HEAVY_ASSERT(pBlock->Validate());
    7248 
 7249  VMA_DEBUG_LOG("  Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);
    7250 
    7251  // pBlock became empty after this deallocation.
    7252  if(pBlock->m_Metadata.IsEmpty())
    7253  {
 7254  // Already have an empty block. We don't want two, so delete this one.
    7255  if(m_HasEmptyBlock && m_Blocks.size() > m_MinBlockCount)
    7256  {
    7257  pBlockToDelete = pBlock;
    7258  Remove(pBlock);
    7259  }
 7260  // We now have our first empty block.
    7261  else
    7262  {
    7263  m_HasEmptyBlock = true;
    7264  }
    7265  }
    7266  // pBlock didn't become empty, but we have another empty block - find and free that one.
 7267  // (This is optional, a heuristic.)
    7268  else if(m_HasEmptyBlock)
    7269  {
    7270  VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
    7271  if(pLastBlock->m_Metadata.IsEmpty() && m_Blocks.size() > m_MinBlockCount)
    7272  {
    7273  pBlockToDelete = pLastBlock;
    7274  m_Blocks.pop_back();
    7275  m_HasEmptyBlock = false;
    7276  }
    7277  }
    7278 
    7279  IncrementallySortBlocks();
    7280  }
    7281 
 7282  // Destruction of a free block. Deferred until this point, outside of the mutex
 7283  // lock, for performance reasons.
    7284  if(pBlockToDelete != VMA_NULL)
    7285  {
 7286  VMA_DEBUG_LOG("    Deleted empty block");
    7287  pBlockToDelete->Destroy(m_hAllocator);
    7288  vma_delete(m_hAllocator, pBlockToDelete);
    7289  }
    7290 }
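// Informational note: the logic above intentionally keeps at most one empty
// block alive (tracked by m_HasEmptyBlock). Keeping one spares an immediate
// vkAllocateMemory call when the next allocation arrives; destroying any
// second empty block bounds the amount of idle VkDeviceMemory this vector holds.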
    7291 
    7292 VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
    7293 {
    7294  VkDeviceSize result = 0;
    7295  for(size_t i = m_Blocks.size(); i--; )
    7296  {
    7297  result = VMA_MAX(result, m_Blocks[i]->m_Metadata.GetSize());
    7298  if(result >= m_PreferredBlockSize)
    7299  {
    7300  break;
    7301  }
    7302  }
    7303  return result;
    7304 }
    7305 
    7306 void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
    7307 {
    7308  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7309  {
    7310  if(m_Blocks[blockIndex] == pBlock)
    7311  {
    7312  VmaVectorRemove(m_Blocks, blockIndex);
    7313  return;
    7314  }
    7315  }
    7316  VMA_ASSERT(0);
    7317 }
    7318 
    7319 void VmaBlockVector::IncrementallySortBlocks()
    7320 {
    7321  // Bubble sort only until first swap.
    7322  for(size_t i = 1; i < m_Blocks.size(); ++i)
    7323  {
    7324  if(m_Blocks[i - 1]->m_Metadata.GetSumFreeSize() > m_Blocks[i]->m_Metadata.GetSumFreeSize())
    7325  {
    7326  VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
    7327  return;
    7328  }
    7329  }
    7330 }
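// Informational note: a single bounded bubble-sort step per call keeps
// m_Blocks approximately sorted by ascending free space, which the forward
// search in Allocate() relies on. Example: free sizes {10, 50, 30} become
// {10, 30, 50} after the call swaps the first out-of-order pair; one step per
// Free() converges to sorted order without ever paying for a full sort.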
    7331 
    7332 VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
    7333 {
    7334  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    7335  allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    7336  allocInfo.allocationSize = blockSize;
    7337  VkDeviceMemory mem = VK_NULL_HANDLE;
    7338  VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    7339  if(res < 0)
    7340  {
    7341  return res;
    7342  }
    7343 
    7344  // New VkDeviceMemory successfully created.
    7345 
 7346  // Create a new block object for it.
    7347  VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    7348  pBlock->Init(
    7349  m_MemoryTypeIndex,
    7350  mem,
    7351  allocInfo.allocationSize,
    7352  m_NextBlockId++);
    7353 
    7354  m_Blocks.push_back(pBlock);
    7355  if(pNewBlockIndex != VMA_NULL)
    7356  {
    7357  *pNewBlockIndex = m_Blocks.size() - 1;
    7358  }
    7359 
    7360  return VK_SUCCESS;
    7361 }
    7362 
    7363 #if VMA_STATS_STRING_ENABLED
    7364 
    7365 void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
    7366 {
    7367  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7368 
    7369  json.BeginObject();
    7370 
    7371  if(m_IsCustomPool)
    7372  {
    7373  json.WriteString("MemoryTypeIndex");
    7374  json.WriteNumber(m_MemoryTypeIndex);
    7375 
    7376  json.WriteString("BlockSize");
    7377  json.WriteNumber(m_PreferredBlockSize);
    7378 
    7379  json.WriteString("BlockCount");
    7380  json.BeginObject(true);
    7381  if(m_MinBlockCount > 0)
    7382  {
    7383  json.WriteString("Min");
    7384  json.WriteNumber((uint64_t)m_MinBlockCount);
    7385  }
    7386  if(m_MaxBlockCount < SIZE_MAX)
    7387  {
    7388  json.WriteString("Max");
    7389  json.WriteNumber((uint64_t)m_MaxBlockCount);
    7390  }
    7391  json.WriteString("Cur");
    7392  json.WriteNumber((uint64_t)m_Blocks.size());
    7393  json.EndObject();
    7394 
    7395  if(m_FrameInUseCount > 0)
    7396  {
    7397  json.WriteString("FrameInUseCount");
    7398  json.WriteNumber(m_FrameInUseCount);
    7399  }
    7400  }
    7401  else
    7402  {
    7403  json.WriteString("PreferredBlockSize");
    7404  json.WriteNumber(m_PreferredBlockSize);
    7405  }
    7406 
    7407  json.WriteString("Blocks");
    7408  json.BeginObject();
    7409  for(size_t i = 0; i < m_Blocks.size(); ++i)
    7410  {
    7411  json.BeginString();
    7412  json.ContinueString(m_Blocks[i]->GetId());
    7413  json.EndString();
    7414 
    7415  m_Blocks[i]->m_Metadata.PrintDetailedMap(json);
    7416  }
    7417  json.EndObject();
    7418 
    7419  json.EndObject();
    7420 }
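/*
Illustrative shape of the JSON emitted above for a custom pool (the values are
made up; the body of each block comes from VmaBlockMetadata::PrintDetailedMap):

    { "MemoryTypeIndex": 2, "BlockSize": 268435456,
      "BlockCount": { "Cur": 1 },
      "Blocks": { "0": { ... } } }
*/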
    7421 
    7422 #endif // #if VMA_STATS_STRING_ENABLED
    7423 
    7424 VmaDefragmentator* VmaBlockVector::EnsureDefragmentator(
    7425  VmaAllocator hAllocator,
    7426  uint32_t currentFrameIndex)
    7427 {
    7428  if(m_pDefragmentator == VMA_NULL)
    7429  {
    7430  m_pDefragmentator = vma_new(m_hAllocator, VmaDefragmentator)(
    7431  hAllocator,
    7432  this,
    7433  currentFrameIndex);
    7434  }
    7435 
    7436  return m_pDefragmentator;
    7437 }
    7438 
    7439 VkResult VmaBlockVector::Defragment(
    7440  VmaDefragmentationStats* pDefragmentationStats,
    7441  VkDeviceSize& maxBytesToMove,
    7442  uint32_t& maxAllocationsToMove)
    7443 {
    7444  if(m_pDefragmentator == VMA_NULL)
    7445  {
    7446  return VK_SUCCESS;
    7447  }
    7448 
    7449  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7450 
    7451  // Defragment.
    7452  VkResult result = m_pDefragmentator->Defragment(maxBytesToMove, maxAllocationsToMove);
    7453 
    7454  // Accumulate statistics.
    7455  if(pDefragmentationStats != VMA_NULL)
    7456  {
    7457  const VkDeviceSize bytesMoved = m_pDefragmentator->GetBytesMoved();
    7458  const uint32_t allocationsMoved = m_pDefragmentator->GetAllocationsMoved();
    7459  pDefragmentationStats->bytesMoved += bytesMoved;
    7460  pDefragmentationStats->allocationsMoved += allocationsMoved;
    7461  VMA_ASSERT(bytesMoved <= maxBytesToMove);
    7462  VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
    7463  maxBytesToMove -= bytesMoved;
    7464  maxAllocationsToMove -= allocationsMoved;
    7465  }
    7466 
    7467  // Free empty blocks.
    7468  m_HasEmptyBlock = false;
    7469  for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
    7470  {
    7471  VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
    7472  if(pBlock->m_Metadata.IsEmpty())
    7473  {
    7474  if(m_Blocks.size() > m_MinBlockCount)
    7475  {
    7476  if(pDefragmentationStats != VMA_NULL)
    7477  {
    7478  ++pDefragmentationStats->deviceMemoryBlocksFreed;
    7479  pDefragmentationStats->bytesFreed += pBlock->m_Metadata.GetSize();
    7480  }
    7481 
    7482  VmaVectorRemove(m_Blocks, blockIndex);
    7483  pBlock->Destroy(m_hAllocator);
    7484  vma_delete(m_hAllocator, pBlock);
    7485  }
    7486  else
    7487  {
    7488  m_HasEmptyBlock = true;
    7489  }
    7490  }
    7491  }
    7492 
    7493  return result;
    7494 }
    7495 
    7496 void VmaBlockVector::DestroyDefragmentator()
    7497 {
    7498  if(m_pDefragmentator != VMA_NULL)
    7499  {
    7500  vma_delete(m_hAllocator, m_pDefragmentator);
    7501  m_pDefragmentator = VMA_NULL;
    7502  }
    7503 }
    7504 
    7505 void VmaBlockVector::MakePoolAllocationsLost(
    7506  uint32_t currentFrameIndex,
    7507  size_t* pLostAllocationCount)
    7508 {
    7509  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7510  size_t lostAllocationCount = 0;
    7511  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7512  {
    7513  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7514  VMA_ASSERT(pBlock);
    7515  lostAllocationCount += pBlock->m_Metadata.MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
    7516  }
    7517  if(pLostAllocationCount != VMA_NULL)
    7518  {
    7519  *pLostAllocationCount = lostAllocationCount;
    7520  }
    7521 }
    7522 
    7523 VkResult VmaBlockVector::CheckCorruption()
    7524 {
    7525  if(!IsCorruptionDetectionEnabled())
    7526  {
    7527  return VK_ERROR_FEATURE_NOT_PRESENT;
    7528  }
    7529 
    7530  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7531  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7532  {
    7533  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7534  VMA_ASSERT(pBlock);
    7535  VkResult res = pBlock->CheckCorruption(m_hAllocator);
    7536  if(res != VK_SUCCESS)
    7537  {
    7538  return res;
    7539  }
    7540  }
    7541  return VK_SUCCESS;
    7542 }
    7543 
    7544 void VmaBlockVector::AddStats(VmaStats* pStats)
    7545 {
    7546  const uint32_t memTypeIndex = m_MemoryTypeIndex;
    7547  const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);
    7548 
    7549  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7550 
    7551  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7552  {
    7553  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7554  VMA_ASSERT(pBlock);
    7555  VMA_HEAVY_ASSERT(pBlock->Validate());
    7556  VmaStatInfo allocationStatInfo;
    7557  pBlock->m_Metadata.CalcAllocationStatInfo(allocationStatInfo);
    7558  VmaAddStatInfo(pStats->total, allocationStatInfo);
    7559  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    7560  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    7561  }
    7562 }
    7563 
 7564 ////////////////////////////////////////////////////////////////////////////////
 7565 // VmaDefragmentator members definition
    7566 
    7567 VmaDefragmentator::VmaDefragmentator(
    7568  VmaAllocator hAllocator,
    7569  VmaBlockVector* pBlockVector,
    7570  uint32_t currentFrameIndex) :
    7571  m_hAllocator(hAllocator),
    7572  m_pBlockVector(pBlockVector),
    7573  m_CurrentFrameIndex(currentFrameIndex),
    7574  m_BytesMoved(0),
    7575  m_AllocationsMoved(0),
    7576  m_Allocations(VmaStlAllocator<AllocationInfo>(hAllocator->GetAllocationCallbacks())),
    7577  m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
    7578 {
    7579 }
    7580 
    7581 VmaDefragmentator::~VmaDefragmentator()
    7582 {
    7583  for(size_t i = m_Blocks.size(); i--; )
    7584  {
    7585  vma_delete(m_hAllocator, m_Blocks[i]);
    7586  }
    7587 }
    7588 
    7589 void VmaDefragmentator::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
    7590 {
    7591  AllocationInfo allocInfo;
    7592  allocInfo.m_hAllocation = hAlloc;
    7593  allocInfo.m_pChanged = pChanged;
    7594  m_Allocations.push_back(allocInfo);
    7595 }
    7596 
    7597 VkResult VmaDefragmentator::BlockInfo::EnsureMapping(VmaAllocator hAllocator, void** ppMappedData)
    7598 {
    7599  // It has already been mapped for defragmentation.
    7600  if(m_pMappedDataForDefragmentation)
    7601  {
    7602  *ppMappedData = m_pMappedDataForDefragmentation;
    7603  return VK_SUCCESS;
    7604  }
    7605 
    7606  // It is originally mapped.
    7607  if(m_pBlock->GetMappedData())
    7608  {
    7609  *ppMappedData = m_pBlock->GetMappedData();
    7610  return VK_SUCCESS;
    7611  }
    7612 
    7613  // Map on first usage.
    7614  VkResult res = m_pBlock->Map(hAllocator, 1, &m_pMappedDataForDefragmentation);
    7615  *ppMappedData = m_pMappedDataForDefragmentation;
    7616  return res;
    7617 }
    7618 
    7619 void VmaDefragmentator::BlockInfo::Unmap(VmaAllocator hAllocator)
    7620 {
    7621  if(m_pMappedDataForDefragmentation != VMA_NULL)
    7622  {
    7623  m_pBlock->Unmap(hAllocator, 1);
    7624  }
    7625 }
    7626 
    7627 VkResult VmaDefragmentator::DefragmentRound(
    7628  VkDeviceSize maxBytesToMove,
    7629  uint32_t maxAllocationsToMove)
    7630 {
    7631  if(m_Blocks.empty())
    7632  {
    7633  return VK_SUCCESS;
    7634  }
    7635 
    7636  size_t srcBlockIndex = m_Blocks.size() - 1;
    7637  size_t srcAllocIndex = SIZE_MAX;
    7638  for(;;)
    7639  {
    7640  // 1. Find next allocation to move.
    7641  // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
    7642  // 1.2. Then start from last to first m_Allocations - they are sorted from largest to smallest.
    7643  while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
    7644  {
    7645  if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
    7646  {
    7647  // Finished: no more allocations to process.
    7648  if(srcBlockIndex == 0)
    7649  {
    7650  return VK_SUCCESS;
    7651  }
    7652  else
    7653  {
    7654  --srcBlockIndex;
    7655  srcAllocIndex = SIZE_MAX;
    7656  }
    7657  }
    7658  else
    7659  {
    7660  srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
    7661  }
    7662  }
    7663 
    7664  BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
    7665  AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];
    7666 
    7667  const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
    7668  const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
    7669  const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
    7670  const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();
    7671 
    7672  // 2. Try to find new place for this allocation in preceding or current block.
    7673  for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
    7674  {
    7675  BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
    7676  VmaAllocationRequest dstAllocRequest;
    7677  if(pDstBlockInfo->m_pBlock->m_Metadata.CreateAllocationRequest(
    7678  m_CurrentFrameIndex,
    7679  m_pBlockVector->GetFrameInUseCount(),
    7680  m_pBlockVector->GetBufferImageGranularity(),
    7681  size,
    7682  alignment,
    7683  suballocType,
    7684  false, // canMakeOtherLost
    7685  &dstAllocRequest) &&
    7686  MoveMakesSense(
    7687  dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
    7688  {
    7689  VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);
    7690 
    7691  // Reached limit on number of allocations or bytes to move.
    7692  if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
    7693  (m_BytesMoved + size > maxBytesToMove))
    7694  {
    7695  return VK_INCOMPLETE;
    7696  }
    7697 
    7698  void* pDstMappedData = VMA_NULL;
    7699  VkResult res = pDstBlockInfo->EnsureMapping(m_hAllocator, &pDstMappedData);
    7700  if(res != VK_SUCCESS)
    7701  {
    7702  return res;
    7703  }
    7704 
    7705  void* pSrcMappedData = VMA_NULL;
    7706  res = pSrcBlockInfo->EnsureMapping(m_hAllocator, &pSrcMappedData);
    7707  if(res != VK_SUCCESS)
    7708  {
    7709  return res;
    7710  }
    7711 
    7712  // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
    7713  memcpy(
    7714  reinterpret_cast<char*>(pDstMappedData) + dstAllocRequest.offset,
    7715  reinterpret_cast<char*>(pSrcMappedData) + srcOffset,
    7716  static_cast<size_t>(size));
    7717 
    7718  if(VMA_DEBUG_MARGIN > 0)
    7719  {
    7720  VmaWriteMagicValue(pDstMappedData, dstAllocRequest.offset - VMA_DEBUG_MARGIN);
    7721  VmaWriteMagicValue(pDstMappedData, dstAllocRequest.offset + size);
    7722  }
    7723 
    7724  pDstBlockInfo->m_pBlock->m_Metadata.Alloc(dstAllocRequest, suballocType, size, allocInfo.m_hAllocation);
    7725  pSrcBlockInfo->m_pBlock->m_Metadata.FreeAtOffset(srcOffset);
    7726 
    7727  allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
    7728 
    7729  if(allocInfo.m_pChanged != VMA_NULL)
    7730  {
    7731  *allocInfo.m_pChanged = VK_TRUE;
    7732  }
    7733 
    7734  ++m_AllocationsMoved;
    7735  m_BytesMoved += size;
    7736 
    7737  VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);
    7738 
    7739  break;
    7740  }
    7741  }
    7742 
 7743  // If not processed, this allocInfo remains in pSrcBlockInfo->m_Allocations for the next round.
    7744 
    7745  if(srcAllocIndex > 0)
    7746  {
    7747  --srcAllocIndex;
    7748  }
    7749  else
    7750  {
    7751  if(srcBlockIndex > 0)
    7752  {
    7753  --srcBlockIndex;
    7754  srcAllocIndex = SIZE_MAX;
    7755  }
    7756  else
    7757  {
    7758  return VK_SUCCESS;
    7759  }
    7760  }
    7761  }
    7762 }
    7763 
    7764 VkResult VmaDefragmentator::Defragment(
    7765  VkDeviceSize maxBytesToMove,
    7766  uint32_t maxAllocationsToMove)
    7767 {
    7768  if(m_Allocations.empty())
    7769  {
    7770  return VK_SUCCESS;
    7771  }
    7772 
    7773  // Create block info for each block.
    7774  const size_t blockCount = m_pBlockVector->m_Blocks.size();
    7775  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7776  {
    7777  BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
    7778  pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
    7779  m_Blocks.push_back(pBlockInfo);
    7780  }
    7781 
    7782  // Sort them by m_pBlock pointer value.
    7783  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
    7784 
 7785  // Move allocation infos from m_Allocations to the m_Allocations of the matching element of m_Blocks.
 7786  for(size_t allocIndex = 0, allocCount = m_Allocations.size(); allocIndex < allocCount; ++allocIndex)
 7787  {
 7788  AllocationInfo& allocInfo = m_Allocations[allocIndex];
 7789  // Now that we are inside VmaBlockVector::m_Mutex, we can make the final check whether this allocation was lost.
    7790  if(allocInfo.m_hAllocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    7791  {
    7792  VmaDeviceMemoryBlock* pBlock = allocInfo.m_hAllocation->GetBlock();
    7793  BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
    7794  if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
    7795  {
    7796  (*it)->m_Allocations.push_back(allocInfo);
    7797  }
    7798  else
    7799  {
    7800  VMA_ASSERT(0);
    7801  }
    7802  }
    7803  }
    7804  m_Allocations.clear();
    7805 
    7806  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7807  {
    7808  BlockInfo* pBlockInfo = m_Blocks[blockIndex];
    7809  pBlockInfo->CalcHasNonMovableAllocations();
    7810  pBlockInfo->SortAllocationsBySizeDescecnding();
    7811  }
    7812 
 7813  // Sort m_Blocks this time by the main criterion, from most "destination" to most "source" blocks.
    7814  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());
    7815 
    7816  // Execute defragmentation rounds (the main part).
    7817  VkResult result = VK_SUCCESS;
    7818  for(size_t round = 0; (round < 2) && (result == VK_SUCCESS); ++round)
    7819  {
    7820  result = DefragmentRound(maxBytesToMove, maxAllocationsToMove);
    7821  }
    7822 
    7823  // Unmap blocks that were mapped for defragmentation.
    7824  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7825  {
    7826  m_Blocks[blockIndex]->Unmap(m_hAllocator);
    7827  }
    7828 
    7829  return result;
    7830 }
    7831 
    7832 bool VmaDefragmentator::MoveMakesSense(
    7833  size_t dstBlockIndex, VkDeviceSize dstOffset,
    7834  size_t srcBlockIndex, VkDeviceSize srcOffset)
    7835 {
    7836  if(dstBlockIndex < srcBlockIndex)
    7837  {
    7838  return true;
    7839  }
    7840  if(dstBlockIndex > srcBlockIndex)
    7841  {
    7842  return false;
    7843  }
    7844  if(dstOffset < srcOffset)
    7845  {
    7846  return true;
    7847  }
    7848  return false;
    7849 }
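// Informational note: this predicate is a strict lexicographic "less than" on
// (blockIndex, offset), so data only ever moves to an earlier block or to a
// lower offset within the same block. E.g. a move from (block 2, offset 0) to
// (block 1, offset 4096) makes sense, while the reverse never does - which
// rules out ping-ponging between rounds and guarantees the defragmentation
// terminates.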
    7850 
 7851 ////////////////////////////////////////////////////////////////////////////////
 7852 // VmaAllocator_T
    7853 
    7854 VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
    7855  m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
    7856  m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
    7857  m_hDevice(pCreateInfo->device),
    7858  m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
    7859  m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
    7860  *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
    7861  m_PreferredLargeHeapBlockSize(0),
    7862  m_PhysicalDevice(pCreateInfo->physicalDevice),
    7863  m_CurrentFrameIndex(0),
    7864  m_Pools(VmaStlAllocator<VmaPool>(GetAllocationCallbacks())),
    7865  m_NextPoolId(0)
    7866 {
    7867  if(VMA_DEBUG_DETECT_CORRUPTION)
    7868  {
 7869  // Needs to be a multiple of sizeof(uint32_t) because we are going to write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
    7870  VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
    7871  }
    7872 
    7873  VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device);
    7874 
 7875 #if !(VMA_DEDICATED_ALLOCATION)
 7876  if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
 7877  {
 7878  VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
 7879  }
 7880 #endif
    7881 
 7882  memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
    7883  memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    7884  memset(&m_MemProps, 0, sizeof(m_MemProps));
    7885 
    7886  memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    7887  memset(&m_pDedicatedAllocations, 0, sizeof(m_pDedicatedAllocations));
    7888 
    7889  for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    7890  {
    7891  m_HeapSizeLimit[i] = VK_WHOLE_SIZE;
    7892  }
    7893 
    7894  if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
    7895  {
    7896  m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
    7897  m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
    7898  }
    7899 
    7900  ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
    7901 
    7902  (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
    7903  (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
    7904 
    7905  m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
    7906  pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
    7907 
    7908  if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
    7909  {
    7910  for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
    7911  {
    7912  const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
    7913  if(limit != VK_WHOLE_SIZE)
    7914  {
    7915  m_HeapSizeLimit[heapIndex] = limit;
    7916  if(limit < m_MemProps.memoryHeaps[heapIndex].size)
    7917  {
    7918  m_MemProps.memoryHeaps[heapIndex].size = limit;
    7919  }
    7920  }
    7921  }
    7922  }
    7923 
    7924  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    7925  {
    7926  const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
    7927 
    7928  m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
    7929  this,
    7930  memTypeIndex,
    7931  preferredBlockSize,
    7932  0,
    7933  SIZE_MAX,
    7934  GetBufferImageGranularity(),
    7935  pCreateInfo->frameInUseCount,
    7936  false); // isCustomPool
 7937  // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
 7938  // because minBlockCount is 0.
    7939  m_pDedicatedAllocations[memTypeIndex] = vma_new(this, AllocationVectorType)(VmaStlAllocator<VmaAllocation>(GetAllocationCallbacks()));
    7940 
    7941  }
    7942 }
    7943 
    7944 VmaAllocator_T::~VmaAllocator_T()
    7945 {
    7946  VMA_ASSERT(m_Pools.empty());
    7947 
    7948  for(size_t i = GetMemoryTypeCount(); i--; )
    7949  {
    7950  vma_delete(this, m_pDedicatedAllocations[i]);
    7951  vma_delete(this, m_pBlockVectors[i]);
    7952  }
    7953 }
    7954 
    7955 void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
    7956 {
    7957 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    7958  m_VulkanFunctions.vkGetPhysicalDeviceProperties = &vkGetPhysicalDeviceProperties;
    7959  m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = &vkGetPhysicalDeviceMemoryProperties;
    7960  m_VulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    7961  m_VulkanFunctions.vkFreeMemory = &vkFreeMemory;
    7962  m_VulkanFunctions.vkMapMemory = &vkMapMemory;
    7963  m_VulkanFunctions.vkUnmapMemory = &vkUnmapMemory;
    7964  m_VulkanFunctions.vkFlushMappedMemoryRanges = &vkFlushMappedMemoryRanges;
    7965  m_VulkanFunctions.vkInvalidateMappedMemoryRanges = &vkInvalidateMappedMemoryRanges;
    7966  m_VulkanFunctions.vkBindBufferMemory = &vkBindBufferMemory;
    7967  m_VulkanFunctions.vkBindImageMemory = &vkBindImageMemory;
    7968  m_VulkanFunctions.vkGetBufferMemoryRequirements = &vkGetBufferMemoryRequirements;
    7969  m_VulkanFunctions.vkGetImageMemoryRequirements = &vkGetImageMemoryRequirements;
    7970  m_VulkanFunctions.vkCreateBuffer = &vkCreateBuffer;
    7971  m_VulkanFunctions.vkDestroyBuffer = &vkDestroyBuffer;
    7972  m_VulkanFunctions.vkCreateImage = &vkCreateImage;
    7973  m_VulkanFunctions.vkDestroyImage = &vkDestroyImage;
    7974 #if VMA_DEDICATED_ALLOCATION
    7975  if(m_UseKhrDedicatedAllocation)
    7976  {
    7977  m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR =
    7978  (PFN_vkGetBufferMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetBufferMemoryRequirements2KHR");
    7979  m_VulkanFunctions.vkGetImageMemoryRequirements2KHR =
    7980  (PFN_vkGetImageMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetImageMemoryRequirements2KHR");
    7981  }
    7982 #endif // #if VMA_DEDICATED_ALLOCATION
    7983 #endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    7984 
    7985 #define VMA_COPY_IF_NOT_NULL(funcName) \
    7986  if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
    7987 
    7988  if(pVulkanFunctions != VMA_NULL)
    7989  {
    7990  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
    7991  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
    7992  VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
    7993  VMA_COPY_IF_NOT_NULL(vkFreeMemory);
    7994  VMA_COPY_IF_NOT_NULL(vkMapMemory);
    7995  VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
    7996  VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
    7997  VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
    7998  VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
    7999  VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
    8000  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
    8001  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
    8002  VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
    8003  VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
    8004  VMA_COPY_IF_NOT_NULL(vkCreateImage);
    8005  VMA_COPY_IF_NOT_NULL(vkDestroyImage);
    8006 #if VMA_DEDICATED_ALLOCATION
    8007  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
    8008  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
    8009 #endif
    8010  }
    8011 
    8012 #undef VMA_COPY_IF_NOT_NULL
    8013 
    8014  // If these asserts are hit, you must either #define VMA_STATIC_VULKAN_FUNCTIONS 1
    8015  // or pass valid pointers as VmaAllocatorCreateInfo::pVulkanFunctions.
    8016  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
    8017  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
    8018  VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
    8019  VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
    8020  VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
    8021  VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
    8022  VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
    8023  VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
    8024  VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
    8025  VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
    8026  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
    8027  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
    8028  VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
    8029  VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
    8030  VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
    8031  VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
    8032 #if VMA_DEDICATED_ALLOCATION
    8033  if(m_UseKhrDedicatedAllocation)
    8034  {
    8035  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
    8036  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
    8037  }
    8038 #endif
    8039 }
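/*
A minimal sketch of supplying the function pointers manually, for builds where
VMA_STATIC_VULKAN_FUNCTIONS is defined to 0. The loader prefix "myLoader_" is
hypothetical; any mechanism yielding valid PFN_vk* pointers works:

    VmaVulkanFunctions vulkanFunctions = {};
    vulkanFunctions.vkGetPhysicalDeviceProperties = myLoader_vkGetPhysicalDeviceProperties;
    vulkanFunctions.vkAllocateMemory = myLoader_vkAllocateMemory;
    // ...and so on for every member asserted non-null above...
    VmaAllocatorCreateInfo allocatorInfo = {};
    allocatorInfo.pVulkanFunctions = &vulkanFunctions;
*/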
    8040 
    8041 VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
    8042 {
    8043  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    8044  const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
    8045  const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    8046  return isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize;
    8047 }
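// Informational note: assuming the default values of VMA_SMALL_HEAP_MAX_SIZE
// and VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE defined earlier in this file, a
// 256 MiB heap counts as "small" and gets 256/8 = 32 MiB preferred blocks,
// while a multi-GiB device-local heap uses the fixed large-heap block size.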
    8048 
    8049 VkResult VmaAllocator_T::AllocateMemoryOfType(
    8050  VkDeviceSize size,
    8051  VkDeviceSize alignment,
    8052  bool dedicatedAllocation,
    8053  VkBuffer dedicatedBuffer,
    8054  VkImage dedicatedImage,
    8055  const VmaAllocationCreateInfo& createInfo,
    8056  uint32_t memTypeIndex,
    8057  VmaSuballocationType suballocType,
    8058  VmaAllocation* pAllocation)
    8059 {
    8060  VMA_ASSERT(pAllocation != VMA_NULL);
 8061  VMA_DEBUG_LOG("  AllocateMemory: MemoryTypeIndex=%u, Size=%llu", memTypeIndex, size);
    8062 
    8063  VmaAllocationCreateInfo finalCreateInfo = createInfo;
    8064 
    8065  // If memory type is not HOST_VISIBLE, disable MAPPED.
    8066  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
    8067  (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    8068  {
    8069  finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
    8070  }
    8071 
    8072  VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
    8073  VMA_ASSERT(blockVector);
    8074 
    8075  const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
    8076  bool preferDedicatedMemory =
    8077  VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
    8078  dedicatedAllocation ||
 8079  // Heuristics: Allocate dedicated memory if requested size is greater than half of preferred block size.
    8080  size > preferredBlockSize / 2;
    8081 
    8082  if(preferDedicatedMemory &&
    8083  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
    8084  finalCreateInfo.pool == VK_NULL_HANDLE)
 8085  {
 8086  finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
 8087  }
    8088 
    8089  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
    8090  {
    8091  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8092  {
    8093  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8094  }
    8095  else
    8096  {
    8097  return AllocateDedicatedMemory(
    8098  size,
    8099  suballocType,
    8100  memTypeIndex,
    8101  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    8102  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    8103  finalCreateInfo.pUserData,
    8104  dedicatedBuffer,
    8105  dedicatedImage,
    8106  pAllocation);
    8107  }
    8108  }
    8109  else
    8110  {
    8111  VkResult res = blockVector->Allocate(
    8112  VK_NULL_HANDLE, // hCurrentPool
    8113  m_CurrentFrameIndex.load(),
    8114  size,
    8115  alignment,
    8116  finalCreateInfo,
    8117  suballocType,
    8118  pAllocation);
    8119  if(res == VK_SUCCESS)
    8120  {
    8121  return res;
    8122  }
    8123 
 8124  // Try dedicated memory.
    8125  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8126  {
    8127  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8128  }
    8129  else
    8130  {
    8131  res = AllocateDedicatedMemory(
    8132  size,
    8133  suballocType,
    8134  memTypeIndex,
    8135  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    8136  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    8137  finalCreateInfo.pUserData,
    8138  dedicatedBuffer,
    8139  dedicatedImage,
    8140  pAllocation);
    8141  if(res == VK_SUCCESS)
    8142  {
 8143  // Succeeded: AllocateDedicatedMemory function already filled pAllocation, nothing more to do here.
    8144  VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
    8145  return VK_SUCCESS;
    8146  }
    8147  else
    8148  {
    8149  // Everything failed: Return error code.
    8150  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    8151  return res;
    8152  }
    8153  }
    8154  }
    8155 }
    8156 
    8157 VkResult VmaAllocator_T::AllocateDedicatedMemory(
    8158  VkDeviceSize size,
    8159  VmaSuballocationType suballocType,
    8160  uint32_t memTypeIndex,
    8161  bool map,
    8162  bool isUserDataString,
    8163  void* pUserData,
    8164  VkBuffer dedicatedBuffer,
    8165  VkImage dedicatedImage,
    8166  VmaAllocation* pAllocation)
    8167 {
    8168  VMA_ASSERT(pAllocation);
    8169 
    8170  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    8171  allocInfo.memoryTypeIndex = memTypeIndex;
    8172  allocInfo.allocationSize = size;
    8173 
    8174 #if VMA_DEDICATED_ALLOCATION
    8175  VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
    8176  if(m_UseKhrDedicatedAllocation)
    8177  {
    8178  if(dedicatedBuffer != VK_NULL_HANDLE)
    8179  {
    8180  VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
    8181  dedicatedAllocInfo.buffer = dedicatedBuffer;
    8182  allocInfo.pNext = &dedicatedAllocInfo;
    8183  }
    8184  else if(dedicatedImage != VK_NULL_HANDLE)
    8185  {
    8186  dedicatedAllocInfo.image = dedicatedImage;
    8187  allocInfo.pNext = &dedicatedAllocInfo;
    8188  }
    8189  }
    8190 #endif // #if VMA_DEDICATED_ALLOCATION
    8191 
    8192  // Allocate VkDeviceMemory.
    8193  VkDeviceMemory hMemory = VK_NULL_HANDLE;
    8194  VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
    8195  if(res < 0)
    8196  {
    8197  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    8198  return res;
    8199  }
    8200 
    8201  void* pMappedData = VMA_NULL;
    8202  if(map)
    8203  {
    8204  res = (*m_VulkanFunctions.vkMapMemory)(
    8205  m_hDevice,
    8206  hMemory,
    8207  0,
    8208  VK_WHOLE_SIZE,
    8209  0,
    8210  &pMappedData);
    8211  if(res < 0)
    8212  {
    8213  VMA_DEBUG_LOG(" vkMapMemory FAILED");
    8214  FreeVulkanMemory(memTypeIndex, size, hMemory);
    8215  return res;
    8216  }
    8217  }
    8218 
    8219  *pAllocation = vma_new(this, VmaAllocation_T)(m_CurrentFrameIndex.load(), isUserDataString);
    8220  (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
    8221  (*pAllocation)->SetUserData(this, pUserData);
    8222 
    8223  // Register it in m_pDedicatedAllocations.
    8224  {
    8225  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8226  AllocationVectorType* pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    8227  VMA_ASSERT(pDedicatedAllocations);
    8228  VmaVectorInsertSorted<VmaPointerLess>(*pDedicatedAllocations, *pAllocation);
    8229  }
    8230 
    8231  VMA_DEBUG_LOG(" Allocated DedicatedMemory MemoryTypeIndex=#%u", memTypeIndex);
    8232 
    8233  return VK_SUCCESS;
    8234 }
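/*
A minimal sketch of requesting this dedicated path explicitly through the
public API ("allocator" and "buffer" are assumed to exist already):

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    VmaAllocation alloc;
    VmaAllocationInfo allocInfo;
    vmaAllocateMemoryForBuffer(allocator, buffer, &allocCreateInfo, &alloc, &allocInfo);
*/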
    8235 
    8236 void VmaAllocator_T::GetBufferMemoryRequirements(
    8237  VkBuffer hBuffer,
    8238  VkMemoryRequirements& memReq,
    8239  bool& requiresDedicatedAllocation,
    8240  bool& prefersDedicatedAllocation) const
    8241 {
    8242 #if VMA_DEDICATED_ALLOCATION
    8243  if(m_UseKhrDedicatedAllocation)
    8244  {
    8245  VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
    8246  memReqInfo.buffer = hBuffer;
    8247 
    8248  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    8249 
    8250  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    8251  memReq2.pNext = &memDedicatedReq;
    8252 
    8253  (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    8254 
    8255  memReq = memReq2.memoryRequirements;
    8256  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    8257  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    8258  }
    8259  else
    8260 #endif // #if VMA_DEDICATED_ALLOCATION
    8261  {
    8262  (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
    8263  requiresDedicatedAllocation = false;
    8264  prefersDedicatedAllocation = false;
    8265  }
    8266 }
    8267 
    8268 void VmaAllocator_T::GetImageMemoryRequirements(
    8269  VkImage hImage,
    8270  VkMemoryRequirements& memReq,
    8271  bool& requiresDedicatedAllocation,
    8272  bool& prefersDedicatedAllocation) const
    8273 {
    8274 #if VMA_DEDICATED_ALLOCATION
    8275  if(m_UseKhrDedicatedAllocation)
    8276  {
    8277  VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
    8278  memReqInfo.image = hImage;
    8279 
    8280  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    8281 
    8282  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    8283  memReq2.pNext = &memDedicatedReq;
    8284 
    8285  (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    8286 
    8287  memReq = memReq2.memoryRequirements;
    8288  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    8289  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    8290  }
    8291  else
    8292 #endif // #if VMA_DEDICATED_ALLOCATION
    8293  {
    8294  (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
    8295  requiresDedicatedAllocation = false;
    8296  prefersDedicatedAllocation = false;
    8297  }
    8298 }
    8299 
    8300 VkResult VmaAllocator_T::AllocateMemory(
    8301  const VkMemoryRequirements& vkMemReq,
    8302  bool requiresDedicatedAllocation,
    8303  bool prefersDedicatedAllocation,
    8304  VkBuffer dedicatedBuffer,
    8305  VkImage dedicatedImage,
    8306  const VmaAllocationCreateInfo& createInfo,
    8307  VmaSuballocationType suballocType,
    8308  VmaAllocation* pAllocation)
    8309 {
    8310  if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
    8311  (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8312  {
    8313  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
    8314  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8315  }
 8316  if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
 8317  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
 8318  {
    8319  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
    8320  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8321  }
    8322  if(requiresDedicatedAllocation)
    8323  {
    8324  if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8325  {
    8326  VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
    8327  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8328  }
    8329  if(createInfo.pool != VK_NULL_HANDLE)
    8330  {
    8331  VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
    8332  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8333  }
    8334  }
    8335  if((createInfo.pool != VK_NULL_HANDLE) &&
    8336  ((createInfo.flags & (VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT)) != 0))
    8337  {
    8338  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
    8339  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8340  }
    8341 
    8342  if(createInfo.pool != VK_NULL_HANDLE)
    8343  {
    8344  const VkDeviceSize alignmentForPool = VMA_MAX(
    8345  vkMemReq.alignment,
    8346  GetMemoryTypeMinAlignment(createInfo.pool->m_BlockVector.GetMemoryTypeIndex()));
    8347  return createInfo.pool->m_BlockVector.Allocate(
    8348  createInfo.pool,
    8349  m_CurrentFrameIndex.load(),
    8350  vkMemReq.size,
    8351  alignmentForPool,
    8352  createInfo,
    8353  suballocType,
    8354  pAllocation);
    8355  }
    8356  else
    8357  {
 8358  // Bit mask of Vulkan memory types acceptable for this allocation.
    8359  uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
    8360  uint32_t memTypeIndex = UINT32_MAX;
    8361  VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8362  if(res == VK_SUCCESS)
    8363  {
    8364  VkDeviceSize alignmentForMemType = VMA_MAX(
    8365  vkMemReq.alignment,
    8366  GetMemoryTypeMinAlignment(memTypeIndex));
    8367 
    8368  res = AllocateMemoryOfType(
    8369  vkMemReq.size,
    8370  alignmentForMemType,
    8371  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8372  dedicatedBuffer,
    8373  dedicatedImage,
    8374  createInfo,
    8375  memTypeIndex,
    8376  suballocType,
    8377  pAllocation);
    8378  // Succeeded on first try.
    8379  if(res == VK_SUCCESS)
    8380  {
    8381  return res;
    8382  }
    8383  // Allocation from this memory type failed. Try other compatible memory types.
    8384  else
    8385  {
    8386  for(;;)
    8387  {
    8388  // Remove old memTypeIndex from list of possibilities.
    8389  memoryTypeBits &= ~(1u << memTypeIndex);
    8390  // Find alternative memTypeIndex.
    8391  res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8392  if(res == VK_SUCCESS)
    8393  {
    8394  alignmentForMemType = VMA_MAX(
    8395  vkMemReq.alignment,
    8396  GetMemoryTypeMinAlignment(memTypeIndex));
    8397 
    8398  res = AllocateMemoryOfType(
    8399  vkMemReq.size,
    8400  alignmentForMemType,
    8401  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8402  dedicatedBuffer,
    8403  dedicatedImage,
    8404  createInfo,
    8405  memTypeIndex,
    8406  suballocType,
    8407  pAllocation);
    8408  // Allocation from this alternative memory type succeeded.
    8409  if(res == VK_SUCCESS)
    8410  {
    8411  return res;
    8412  }
    8413  // else: Allocation from this memory type failed. Try next one - next loop iteration.
    8414  }
    8415  // No other matching memory type index could be found.
    8416  else
    8417  {
    8418  // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
    8419  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8420  }
    8421  }
    8422  }
    8423  }
 8424  // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
    8425  else
    8426  return res;
    8427  }
    8428 }
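// Informational note on the fall-back loop above: memoryTypeBits is consumed
// one bit at a time. Example: with memoryTypeBits = 0b1011, if type 1 is
// chosen first and its allocation fails, the mask becomes 0b1001 and
// vmaFindMemoryTypeIndex picks the next compatible type (0 or 3); the call
// fails for good only when no acceptable types remain in the mask.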
    8429 
    8430 void VmaAllocator_T::FreeMemory(const VmaAllocation allocation)
    8431 {
    8432  VMA_ASSERT(allocation);
    8433 
    8434  if(allocation->CanBecomeLost() == false ||
    8435  allocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    8436  {
    8437  switch(allocation->GetType())
    8438  {
    8439  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8440  {
    8441  VmaBlockVector* pBlockVector = VMA_NULL;
    8442  VmaPool hPool = allocation->GetPool();
    8443  if(hPool != VK_NULL_HANDLE)
    8444  {
    8445  pBlockVector = &hPool->m_BlockVector;
    8446  }
    8447  else
    8448  {
    8449  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    8450  pBlockVector = m_pBlockVectors[memTypeIndex];
    8451  }
    8452  pBlockVector->Free(allocation);
    8453  }
    8454  break;
    8455  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8456  FreeDedicatedMemory(allocation);
    8457  break;
    8458  default:
    8459  VMA_ASSERT(0);
    8460  }
    8461  }
    8462 
    8463  allocation->SetUserData(this, VMA_NULL);
    8464  vma_delete(this, allocation);
    8465 }
    8466 
    8467 void VmaAllocator_T::CalculateStats(VmaStats* pStats)
    8468 {
    8469  // Initialize.
    8470  InitStatInfo(pStats->total);
    8471  for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
    8472  InitStatInfo(pStats->memoryType[i]);
    8473  for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    8474  InitStatInfo(pStats->memoryHeap[i]);
    8475 
    8476  // Process default pools.
    8477  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8478  {
    8479  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
    8480  VMA_ASSERT(pBlockVector);
    8481  pBlockVector->AddStats(pStats);
    8482  }
    8483 
    8484  // Process custom pools.
    8485  {
    8486  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8487  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
    8488  {
    8489  m_Pools[poolIndex]->GetBlockVector().AddStats(pStats);
    8490  }
    8491  }
    8492 
    8493  // Process dedicated allocations.
    8494  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8495  {
    8496  const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    8497  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8498  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    8499  VMA_ASSERT(pDedicatedAllocVector);
    8500  for(size_t allocIndex = 0, allocCount = pDedicatedAllocVector->size(); allocIndex < allocCount; ++allocIndex)
    8501  {
    8502  VmaStatInfo allocationStatInfo;
    8503  (*pDedicatedAllocVector)[allocIndex]->DedicatedAllocCalcStatsInfo(allocationStatInfo);
    8504  VmaAddStatInfo(pStats->total, allocationStatInfo);
    8505  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    8506  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    8507  }
    8508  }
    8509 
    8510  // Postprocess.
    8511  VmaPostprocessCalcStatInfo(pStats->total);
    8512  for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
    8513  VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
    8514  for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
    8515  VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
    8516 }
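/*
A minimal usage sketch of the corresponding public entry point declared in
this header ("allocator" is an existing VmaAllocator):

    VmaStats stats;
    vmaCalculateStats(allocator, &stats);
    // stats.total, stats.memoryType[i], and stats.memoryHeap[i] are now valid.
*/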
    8517 
    8518 static const uint32_t VMA_VENDOR_ID_AMD = 4098;
    8519 
    8520 VkResult VmaAllocator_T::Defragment(
    8521  VmaAllocation* pAllocations,
    8522  size_t allocationCount,
    8523  VkBool32* pAllocationsChanged,
    8524  const VmaDefragmentationInfo* pDefragmentationInfo,
    8525  VmaDefragmentationStats* pDefragmentationStats)
    8526 {
    8527  if(pAllocationsChanged != VMA_NULL)
    8528  {
 8529  memset(pAllocationsChanged, 0, allocationCount * sizeof(*pAllocationsChanged));
    8530  }
    8531  if(pDefragmentationStats != VMA_NULL)
    8532  {
    8533  memset(pDefragmentationStats, 0, sizeof(*pDefragmentationStats));
    8534  }
    8535 
    8536  const uint32_t currentFrameIndex = m_CurrentFrameIndex.load();
    8537 
    8538  VmaMutexLock poolsLock(m_PoolsMutex, m_UseMutex);
    8539 
    8540  const size_t poolCount = m_Pools.size();
    8541 
    8542  // Dispatch pAllocations among defragmentators. Create them in BlockVectors when necessary.
    8543  for(size_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    8544  {
    8545  VmaAllocation hAlloc = pAllocations[allocIndex];
    8546  VMA_ASSERT(hAlloc);
    8547  const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
    8548  // DedicatedAlloc cannot be defragmented.
    8549  if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
    8550  // Only HOST_VISIBLE memory types can be defragmented.
    8551  ((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0) &&
    8552  // Lost allocation cannot be defragmented.
    8553  (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
    8554  {
    8555  VmaBlockVector* pAllocBlockVector = VMA_NULL;
    8556 
    8557  const VmaPool hAllocPool = hAlloc->GetPool();
 8558  // This allocation belongs to a custom pool.
    8559  if(hAllocPool != VK_NULL_HANDLE)
    8560  {
    8561  pAllocBlockVector = &hAllocPool->GetBlockVector();
    8562  }
 8563  // This allocation belongs to the general pool.
    8564  else
    8565  {
    8566  pAllocBlockVector = m_pBlockVectors[memTypeIndex];
    8567  }
    8568 
    8569  VmaDefragmentator* const pDefragmentator = pAllocBlockVector->EnsureDefragmentator(this, currentFrameIndex);
    8570 
    8571  VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
    8572  &pAllocationsChanged[allocIndex] : VMA_NULL;
    8573  pDefragmentator->AddAllocation(hAlloc, pChanged);
    8574  }
    8575  }
    8576 
    8577  VkResult result = VK_SUCCESS;
    8578 
    8579  // ======== Main processing.
    8580 
 8581  VkDeviceSize maxBytesToMove = UINT64_MAX; // "No limit" must cover the full 64-bit VkDeviceSize range, also on 32-bit hosts where SIZE_MAX is smaller.
    8582  uint32_t maxAllocationsToMove = UINT32_MAX;
    8583  if(pDefragmentationInfo != VMA_NULL)
    8584  {
    8585  maxBytesToMove = pDefragmentationInfo->maxBytesToMove;
    8586  maxAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
    8587  }
    8588 
    8589  // Process standard memory.
    8590  for(uint32_t memTypeIndex = 0;
    8591  (memTypeIndex < GetMemoryTypeCount()) && (result == VK_SUCCESS);
    8592  ++memTypeIndex)
    8593  {
    8594  // Only HOST_VISIBLE memory types can be defragmented.
    8595  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8596  {
    8597  result = m_pBlockVectors[memTypeIndex]->Defragment(
    8598  pDefragmentationStats,
    8599  maxBytesToMove,
    8600  maxAllocationsToMove);
    8601  }
    8602  }
    8603 
    8604  // Process custom pools.
    8605  for(size_t poolIndex = 0; (poolIndex < poolCount) && (result == VK_SUCCESS); ++poolIndex)
    8606  {
    8607  result = m_Pools[poolIndex]->GetBlockVector().Defragment(
    8608  pDefragmentationStats,
    8609  maxBytesToMove,
    8610  maxAllocationsToMove);
    8611  }
    8612 
    8613  // ======== Destroy defragmentators.
    8614 
    8615  // Process custom pools.
    8616  for(size_t poolIndex = poolCount; poolIndex--; )
    8617  {
    8618  m_Pools[poolIndex]->GetBlockVector().DestroyDefragmentator();
    8619  }
    8620 
    8621  // Process standard memory.
    8622  for(uint32_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
    8623  {
    8624  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8625  {
    8626  m_pBlockVectors[memTypeIndex]->DestroyDefragmentator();
    8627  }
    8628  }
    8629 
    8630  return result;
    8631 }
    8632 
    8633 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
    8634 {
    8635  if(hAllocation->CanBecomeLost())
    8636  {
    8637  /*
    8638  Warning: This is a carefully designed algorithm.
    8639  Do not modify unless you really know what you're doing :)
    8640  */
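// Editor's note: the loop below atomically promotes the allocation's last-use frame
// index to the current frame. It retries the compare-exchange until the stored index
// is current, or returns "lost" placeholder values if the allocation became lost in
// the meantime.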
    8641  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8642  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8643  for(;;)
    8644  {
    8645  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8646  {
    8647  pAllocationInfo->memoryType = UINT32_MAX;
    8648  pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
    8649  pAllocationInfo->offset = 0;
    8650  pAllocationInfo->size = hAllocation->GetSize();
    8651  pAllocationInfo->pMappedData = VMA_NULL;
    8652  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8653  return;
    8654  }
    8655  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8656  {
    8657  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8658  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8659  pAllocationInfo->offset = hAllocation->GetOffset();
    8660  pAllocationInfo->size = hAllocation->GetSize();
    8661  pAllocationInfo->pMappedData = VMA_NULL;
    8662  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8663  return;
    8664  }
    8665  else // Last use time earlier than current time.
    8666  {
    8667  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8668  {
    8669  localLastUseFrameIndex = localCurrFrameIndex;
    8670  }
    8671  }
    8672  }
    8673  }
    8674  else
    8675  {
    8676 #if VMA_STATS_STRING_ENABLED
    8677  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8678  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8679  for(;;)
    8680  {
    8681  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8682  if(localLastUseFrameIndex == localCurrFrameIndex)
    8683  {
    8684  break;
    8685  }
    8686  else // Last use time earlier than current time.
    8687  {
    8688  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8689  {
    8690  localLastUseFrameIndex = localCurrFrameIndex;
    8691  }
    8692  }
    8693  }
    8694 #endif
    8695 
    8696  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8697  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8698  pAllocationInfo->offset = hAllocation->GetOffset();
    8699  pAllocationInfo->size = hAllocation->GetSize();
    8700  pAllocationInfo->pMappedData = hAllocation->GetMappedData();
    8701  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8702  }
    8703 }
    8704 
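// Editor's note: returns false if the allocation is lost; otherwise returns true and
// bumps the last-use frame index to the current frame, the same way GetAllocationInfo does.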
    8705 bool VmaAllocator_T::TouchAllocation(VmaAllocation hAllocation)
    8706 {
    8707  // This is a stripped-down version of VmaAllocator_T::GetAllocationInfo.
    8708  if(hAllocation->CanBecomeLost())
    8709  {
    8710  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8711  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8712  for(;;)
    8713  {
    8714  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8715  {
    8716  return false;
    8717  }
    8718  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8719  {
    8720  return true;
    8721  }
    8722  else // Last use time earlier than current time.
    8723  {
    8724  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8725  {
    8726  localLastUseFrameIndex = localCurrFrameIndex;
    8727  }
    8728  }
    8729  }
    8730  }
    8731  else
    8732  {
    8733 #if VMA_STATS_STRING_ENABLED
    8734  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8735  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8736  for(;;)
    8737  {
    8738  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8739  if(localLastUseFrameIndex == localCurrFrameIndex)
    8740  {
    8741  break;
    8742  }
    8743  else // Last use time earlier than current time.
    8744  {
    8745  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8746  {
    8747  localLastUseFrameIndex = localCurrFrameIndex;
    8748  }
    8749  }
    8750  }
    8751 #endif
    8752 
    8753  return true;
    8754  }
    8755 }
    8756 
    8757 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
    8758 {
    8759  VMA_DEBUG_LOG(" CreatePool: MemoryTypeIndex=%u", pCreateInfo->memoryTypeIndex);
    8760 
    8761  VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
    8762 
    8763  if(newCreateInfo.maxBlockCount == 0)
    8764  {
    8765  newCreateInfo.maxBlockCount = SIZE_MAX;
    8766  }
    8767  if(newCreateInfo.blockSize == 0)
    8768  {
    8769  newCreateInfo.blockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
    8770  }
    8771 
    8772  *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo);
    8773 
    8774  VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
    8775  if(res != VK_SUCCESS)
    8776  {
    8777  vma_delete(this, *pPool);
    8778  *pPool = VMA_NULL;
    8779  return res;
    8780  }
    8781 
    8782  // Add to m_Pools.
    8783  {
    8784  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8785  (*pPool)->SetId(m_NextPoolId++);
    8786  VmaVectorInsertSorted<VmaPointerLess>(m_Pools, *pPool);
    8787  }
    8788 
    8789  return VK_SUCCESS;
    8790 }
    8791 
    8792 void VmaAllocator_T::DestroyPool(VmaPool pool)
    8793 {
    8794  // Remove from m_Pools.
    8795  {
    8796  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8797  bool success = VmaVectorRemoveSorted<VmaPointerLess>(m_Pools, pool);
    8798  VMA_ASSERT(success && "Pool not found in Allocator.");
    8799  }
    8800 
    8801  vma_delete(this, pool);
    8802 }
    8803 
    8804 void VmaAllocator_T::GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats)
    8805 {
    8806  pool->m_BlockVector.GetPoolStats(pPoolStats);
    8807 }
    8808 
    8809 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
    8810 {
    8811  m_CurrentFrameIndex.store(frameIndex);
    8812 }
    8813 
    8814 void VmaAllocator_T::MakePoolAllocationsLost(
    8815  VmaPool hPool,
    8816  size_t* pLostAllocationCount)
    8817 {
    8818  hPool->m_BlockVector.MakePoolAllocationsLost(
    8819  m_CurrentFrameIndex.load(),
    8820  pLostAllocationCount);
    8821 }
    8822 
    8823 VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
    8824 {
    8825  return hPool->m_BlockVector.CheckCorruption();
    8826 }
    8827 
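// Editor's note on result aggregation: VK_ERROR_FEATURE_NOT_PRESENT means no checked
// memory type had corruption detection enabled, VK_SUCCESS means at least one block
// vector was validated and none reported corruption, and any other error is returned
// immediately.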
    8828 VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
    8829 {
    8830  VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;
    8831 
    8832  // Process default pools.
    8833  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8834  {
    8835  if(((1u << memTypeIndex) & memoryTypeBits) != 0)
    8836  {
    8837  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
    8838  VMA_ASSERT(pBlockVector);
    8839  VkResult localRes = pBlockVector->CheckCorruption();
    8840  switch(localRes)
    8841  {
    8842  case VK_ERROR_FEATURE_NOT_PRESENT:
    8843  break;
    8844  case VK_SUCCESS:
    8845  finalRes = VK_SUCCESS;
    8846  break;
    8847  default:
    8848  return localRes;
    8849  }
    8850  }
    8851  }
    8852 
    8853  // Process custom pools.
    8854  {
    8855  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8856  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
    8857  {
    8858  if(((1u << m_Pools[poolIndex]->GetBlockVector().GetMemoryTypeIndex()) & memoryTypeBits) != 0)
    8859  {
    8860  VkResult localRes = m_Pools[poolIndex]->GetBlockVector().CheckCorruption();
    8861  switch(localRes)
    8862  {
    8863  case VK_ERROR_FEATURE_NOT_PRESENT:
    8864  break;
    8865  case VK_SUCCESS:
    8866  finalRes = VK_SUCCESS;
    8867  break;
    8868  default:
    8869  return localRes;
    8870  }
    8871  }
    8872  }
    8873  }
    8874 
    8875  return finalRes;
    8876 }
    8877 
    8878 void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
    8879 {
    8880  *pAllocation = vma_new(this, VmaAllocation_T)(VMA_FRAME_INDEX_LOST, false);
    8881  (*pAllocation)->InitLost();
    8882 }
    8883 
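// Editor's note: honors the optional per-heap size limit (m_HeapSizeLimit). The
// remaining budget is checked and decremented under a mutex before vkAllocateMemory
// is called, and VK_ERROR_OUT_OF_DEVICE_MEMORY is returned when the request would
// exceed it. The user's pfnAllocate callback is informed on success.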
    8884 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
    8885 {
    8886  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
    8887 
    8888  VkResult res;
    8889  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8890  {
    8891  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8892  if(m_HeapSizeLimit[heapIndex] >= pAllocateInfo->allocationSize)
    8893  {
    8894  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8895  if(res == VK_SUCCESS)
    8896  {
    8897  m_HeapSizeLimit[heapIndex] -= pAllocateInfo->allocationSize;
    8898  }
    8899  }
    8900  else
    8901  {
    8902  res = VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8903  }
    8904  }
    8905  else
    8906  {
    8907  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8908  }
    8909 
    8910  if(res == VK_SUCCESS && m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
    8911  {
    8912  (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize);
    8913  }
    8914 
    8915  return res;
    8916 }
    8917 
    8918 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
    8919 {
    8920  if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
    8921  {
    8922  (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size);
    8923  }
    8924 
    8925  (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
    8926 
    8927  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
    8928  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8929  {
    8930  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8931  m_HeapSizeLimit[heapIndex] += size;
    8932  }
    8933 }
    8934 
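// Editor's note: for block-based allocations, Map maps the whole VkDeviceMemory block
// (the block keeps a map reference count) and returns a pointer advanced by the
// allocation's offset; dedicated allocations map their own memory. Allocations that
// can become lost are not mappable.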
    8935 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
    8936 {
    8937  if(hAllocation->CanBecomeLost())
    8938  {
    8939  return VK_ERROR_MEMORY_MAP_FAILED;
    8940  }
    8941 
    8942  switch(hAllocation->GetType())
    8943  {
    8944  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8945  {
    8946  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    8947  char *pBytes = VMA_NULL;
    8948  VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
    8949  if(res == VK_SUCCESS)
    8950  {
    8951  *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
    8952  hAllocation->BlockAllocMap();
    8953  }
    8954  return res;
    8955  }
    8956  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8957  return hAllocation->DedicatedAllocMap(this, ppData);
    8958  default:
    8959  VMA_ASSERT(0);
    8960  return VK_ERROR_MEMORY_MAP_FAILED;
    8961  }
    8962 }
    8963 
    8964 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
    8965 {
    8966  switch(hAllocation->GetType())
    8967  {
    8968  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8969  {
    8970  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    8971  hAllocation->BlockAllocUnmap();
    8972  pBlock->Unmap(this, 1);
    8973  }
    8974  break;
    8975  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8976  hAllocation->DedicatedAllocUnmap(this);
    8977  break;
    8978  default:
    8979  VMA_ASSERT(0);
    8980  }
    8981 }
    8982 
    8983 VkResult VmaAllocator_T::BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer)
    8984 {
    8985  VkResult res = VK_SUCCESS;
    8986  switch(hAllocation->GetType())
    8987  {
    8988  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8989  res = GetVulkanFunctions().vkBindBufferMemory(
    8990  m_hDevice,
    8991  hBuffer,
    8992  hAllocation->GetMemory(),
    8993  0); //memoryOffset
    8994  break;
    8995  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8996  {
    8997  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    8998  VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block. Is the allocation lost?");
    8999  res = pBlock->BindBufferMemory(this, hAllocation, hBuffer);
    9000  break;
    9001  }
    9002  default:
    9003  VMA_ASSERT(0);
    9004  }
    9005  return res;
    9006 }
    9007 
    9008 VkResult VmaAllocator_T::BindImageMemory(VmaAllocation hAllocation, VkImage hImage)
    9009 {
    9010  VkResult res = VK_SUCCESS;
    9011  switch(hAllocation->GetType())
    9012  {
    9013  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9014  res = GetVulkanFunctions().vkBindImageMemory(
    9015  m_hDevice,
    9016  hImage,
    9017  hAllocation->GetMemory(),
    9018  0); //memoryOffset
    9019  break;
    9020  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9021  {
    9022  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    9023  VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block. Is the allocation lost?");
    9024  res = pBlock->BindImageMemory(this, hAllocation, hImage);
    9025  break;
    9026  }
    9027  default:
    9028  VMA_ASSERT(0);
    9029  }
    9030  return res;
    9031 }
    9032 
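// Editor's note: for non-coherent memory types the requested range is expanded so that
// offset and size become multiples of nonCoherentAtomSize, as Vulkan requires for
// vkFlushMappedMemoryRanges / vkInvalidateMappedMemoryRanges, then clamped to the
// allocation (dedicated) or to the owning block (block allocations). Calls with
// size == 0 or on coherent memory types are ignored.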
    9033 void VmaAllocator_T::FlushOrInvalidateAllocation(
    9034  VmaAllocation hAllocation,
    9035  VkDeviceSize offset, VkDeviceSize size,
    9036  VMA_CACHE_OPERATION op)
    9037 {
    9038  const uint32_t memTypeIndex = hAllocation->GetMemoryTypeIndex();
    9039  if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
    9040  {
    9041  const VkDeviceSize allocationSize = hAllocation->GetSize();
    9042  VMA_ASSERT(offset <= allocationSize);
    9043 
    9044  const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
    9045 
    9046  VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
    9047  memRange.memory = hAllocation->GetMemory();
    9048 
    9049  switch(hAllocation->GetType())
    9050  {
    9051  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9052  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    9053  if(size == VK_WHOLE_SIZE)
    9054  {
    9055  memRange.size = allocationSize - memRange.offset;
    9056  }
    9057  else
    9058  {
    9059  VMA_ASSERT(offset + size <= allocationSize);
    9060  memRange.size = VMA_MIN(
    9061  VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize),
    9062  allocationSize - memRange.offset);
    9063  }
    9064  break;
    9065 
    9066  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9067  {
    9068  // 1. Still within this allocation.
    9069  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    9070  if(size == VK_WHOLE_SIZE)
    9071  {
    9072  size = allocationSize - offset;
    9073  }
    9074  else
    9075  {
    9076  VMA_ASSERT(offset + size <= allocationSize);
    9077  }
    9078  memRange.size = VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize);
    9079 
    9080  // 2. Adjust to whole block.
    9081  const VkDeviceSize allocationOffset = hAllocation->GetOffset();
    9082  VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
    9083  const VkDeviceSize blockSize = hAllocation->GetBlock()->m_Metadata.GetSize();
    9084  memRange.offset += allocationOffset;
    9085  memRange.size = VMA_MIN(memRange.size, blockSize - memRange.offset);
    9086 
    9087  break;
    9088  }
    9089 
    9090  default:
    9091  VMA_ASSERT(0);
    9092  }
    9093 
    9094  switch(op)
    9095  {
    9096  case VMA_CACHE_FLUSH:
    9097  (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
    9098  break;
    9099  case VMA_CACHE_INVALIDATE:
    9100  (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
    9101  break;
    9102  default:
    9103  VMA_ASSERT(0);
    9104  }
    9105  }
    9106  // else: Just ignore this call.
    9107 }
    9108 
    9109 void VmaAllocator_T::FreeDedicatedMemory(VmaAllocation allocation)
    9110 {
    9111  VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
    9112 
    9113  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    9114  {
    9115  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    9116  AllocationVectorType* const pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    9117  VMA_ASSERT(pDedicatedAllocations);
    9118  bool success = VmaVectorRemoveSorted<VmaPointerLess>(*pDedicatedAllocations, allocation);
    9119  VMA_ASSERT(success);
    9120  }
    9121 
    9122  VkDeviceMemory hMemory = allocation->GetMemory();
    9123 
    9124  if(allocation->GetMappedData() != VMA_NULL)
    9125  {
    9126  (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
    9127  }
    9128 
    9129  FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
    9130 
    9131  VMA_DEBUG_LOG(" Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
    9132 }
    9133 
    9134 #if VMA_STATS_STRING_ENABLED
    9135 
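// Editor's note: emits up to three top-level JSON objects: "DedicatedAllocations"
// (grouped by memory type), "DefaultPools" (per-memory-type block vectors), and
// "Pools" (custom pools keyed by pool ID). Empty groups are omitted.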
    9136 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
    9137 {
    9138  bool dedicatedAllocationsStarted = false;
    9139  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    9140  {
    9141  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    9142  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    9143  VMA_ASSERT(pDedicatedAllocVector);
    9144  if(pDedicatedAllocVector->empty() == false)
    9145  {
    9146  if(dedicatedAllocationsStarted == false)
    9147  {
    9148  dedicatedAllocationsStarted = true;
    9149  json.WriteString("DedicatedAllocations");
    9150  json.BeginObject();
    9151  }
    9152 
    9153  json.BeginString("Type ");
    9154  json.ContinueString(memTypeIndex);
    9155  json.EndString();
    9156 
    9157  json.BeginArray();
    9158 
    9159  for(size_t i = 0; i < pDedicatedAllocVector->size(); ++i)
    9160  {
    9161  json.BeginObject(true);
    9162  const VmaAllocation hAlloc = (*pDedicatedAllocVector)[i];
    9163  hAlloc->PrintParameters(json);
    9164  json.EndObject();
    9165  }
    9166 
    9167  json.EndArray();
    9168  }
    9169  }
    9170  if(dedicatedAllocationsStarted)
    9171  {
    9172  json.EndObject();
    9173  }
    9174 
    9175  {
    9176  bool allocationsStarted = false;
    9177  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    9178  {
    9179  if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
    9180  {
    9181  if(allocationsStarted == false)
    9182  {
    9183  allocationsStarted = true;
    9184  json.WriteString("DefaultPools");
    9185  json.BeginObject();
    9186  }
    9187 
    9188  json.BeginString("Type ");
    9189  json.ContinueString(memTypeIndex);
    9190  json.EndString();
    9191 
    9192  m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
    9193  }
    9194  }
    9195  if(allocationsStarted)
    9196  {
    9197  json.EndObject();
    9198  }
    9199  }
    9200 
    9201  {
    9202  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    9203  const size_t poolCount = m_Pools.size();
    9204  if(poolCount > 0)
    9205  {
    9206  json.WriteString("Pools");
    9207  json.BeginObject();
    9208  for(size_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
    9209  {
    9210  json.BeginString();
    9211  json.ContinueString(m_Pools[poolIndex]->GetId());
    9212  json.EndString();
    9213 
    9214  m_Pools[poolIndex]->m_BlockVector.PrintDetailedMap(json);
    9215  }
    9216  json.EndObject();
    9217  }
    9218  }
    9219 }
    9220 
    9221 #endif // #if VMA_STATS_STRING_ENABLED
    9222 
    9223 static VkResult AllocateMemoryForImage(
    9224  VmaAllocator allocator,
    9225  VkImage image,
    9226  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9227  VmaSuballocationType suballocType,
    9228  VmaAllocation* pAllocation)
    9229 {
    9230  VMA_ASSERT(allocator && (image != VK_NULL_HANDLE) && pAllocationCreateInfo && pAllocation);
    9231 
    9232  VkMemoryRequirements vkMemReq = {};
    9233  bool requiresDedicatedAllocation = false;
    9234  bool prefersDedicatedAllocation = false;
    9235  allocator->GetImageMemoryRequirements(image, vkMemReq,
    9236  requiresDedicatedAllocation, prefersDedicatedAllocation);
    9237 
    9238  return allocator->AllocateMemory(
    9239  vkMemReq,
    9240  requiresDedicatedAllocation,
    9241  prefersDedicatedAllocation,
    9242  VK_NULL_HANDLE, // dedicatedBuffer
    9243  image, // dedicatedImage
    9244  *pAllocationCreateInfo,
    9245  suballocType,
    9246  pAllocation);
    9247 }
    9248 
 9249 ////////////////////////////////////////////////////////////////////////////////
 9250 // Public interface
    9251 
    9252 VkResult vmaCreateAllocator(
    9253  const VmaAllocatorCreateInfo* pCreateInfo,
    9254  VmaAllocator* pAllocator)
    9255 {
    9256  VMA_ASSERT(pCreateInfo && pAllocator);
    9257  VMA_DEBUG_LOG("vmaCreateAllocator");
    9258  *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
    9259  return VK_SUCCESS;
    9260 }
    9261 
    9262 void vmaDestroyAllocator(
    9263  VmaAllocator allocator)
    9264 {
    9265  if(allocator != VK_NULL_HANDLE)
    9266  {
    9267  VMA_DEBUG_LOG("vmaDestroyAllocator");
    9268  VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
    9269  vma_delete(&allocationCallbacks, allocator);
    9270  }
    9271 }
    9272 
 9273 void vmaGetPhysicalDeviceProperties(
 9274  VmaAllocator allocator,
    9275  const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
    9276 {
    9277  VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
    9278  *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
    9279 }
    9280 
 9281 void vmaGetMemoryProperties(
 9282  VmaAllocator allocator,
    9283  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
    9284 {
    9285  VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
    9286  *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
    9287 }
    9288 
 9289 void vmaGetMemoryTypeProperties(
 9290  VmaAllocator allocator,
    9291  uint32_t memoryTypeIndex,
    9292  VkMemoryPropertyFlags* pFlags)
    9293 {
    9294  VMA_ASSERT(allocator && pFlags);
    9295  VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
    9296  *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
    9297 }
    9298 
 9299 void vmaSetCurrentFrameIndex(
 9300  VmaAllocator allocator,
    9301  uint32_t frameIndex)
    9302 {
    9303  VMA_ASSERT(allocator);
    9304  VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);
    9305 
    9306  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9307 
    9308  allocator->SetCurrentFrameIndex(frameIndex);
    9309 }
    9310 
    9311 void vmaCalculateStats(
    9312  VmaAllocator allocator,
    9313  VmaStats* pStats)
    9314 {
    9315  VMA_ASSERT(allocator && pStats);
    9316  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9317  allocator->CalculateStats(pStats);
    9318 }
    9319 
    9320 #if VMA_STATS_STRING_ENABLED
    9321 
    9322 void vmaBuildStatsString(
    9323  VmaAllocator allocator,
    9324  char** ppStatsString,
    9325  VkBool32 detailedMap)
    9326 {
    9327  VMA_ASSERT(allocator && ppStatsString);
    9328  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9329 
    9330  VmaStringBuilder sb(allocator);
    9331  {
    9332  VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
    9333  json.BeginObject();
    9334 
    9335  VmaStats stats;
    9336  allocator->CalculateStats(&stats);
    9337 
    9338  json.WriteString("Total");
    9339  VmaPrintStatInfo(json, stats.total);
    9340 
    9341  for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
    9342  {
    9343  json.BeginString("Heap ");
    9344  json.ContinueString(heapIndex);
    9345  json.EndString();
    9346  json.BeginObject();
    9347 
    9348  json.WriteString("Size");
    9349  json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);
    9350 
    9351  json.WriteString("Flags");
    9352  json.BeginArray(true);
    9353  if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
    9354  {
    9355  json.WriteString("DEVICE_LOCAL");
    9356  }
    9357  json.EndArray();
    9358 
    9359  if(stats.memoryHeap[heapIndex].blockCount > 0)
    9360  {
    9361  json.WriteString("Stats");
    9362  VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
    9363  }
    9364 
    9365  for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
    9366  {
    9367  if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
    9368  {
    9369  json.BeginString("Type ");
    9370  json.ContinueString(typeIndex);
    9371  json.EndString();
    9372 
    9373  json.BeginObject();
    9374 
    9375  json.WriteString("Flags");
    9376  json.BeginArray(true);
    9377  VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
    9378  if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
    9379  {
    9380  json.WriteString("DEVICE_LOCAL");
    9381  }
    9382  if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    9383  {
    9384  json.WriteString("HOST_VISIBLE");
    9385  }
    9386  if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
    9387  {
    9388  json.WriteString("HOST_COHERENT");
    9389  }
    9390  if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
    9391  {
    9392  json.WriteString("HOST_CACHED");
    9393  }
    9394  if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
    9395  {
    9396  json.WriteString("LAZILY_ALLOCATED");
    9397  }
    9398  json.EndArray();
    9399 
    9400  if(stats.memoryType[typeIndex].blockCount > 0)
    9401  {
    9402  json.WriteString("Stats");
    9403  VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
    9404  }
    9405 
    9406  json.EndObject();
    9407  }
    9408  }
    9409 
    9410  json.EndObject();
    9411  }
    9412  if(detailedMap == VK_TRUE)
    9413  {
    9414  allocator->PrintDetailedMap(json);
    9415  }
    9416 
    9417  json.EndObject();
    9418  }
    9419 
    9420  const size_t len = sb.GetLength();
    9421  char* const pChars = vma_new_array(allocator, char, len + 1);
    9422  if(len > 0)
    9423  {
    9424  memcpy(pChars, sb.GetData(), len);
    9425  }
    9426  pChars[len] = '\0';
    9427  *ppStatsString = pChars;
    9428 }
    9429 
    9430 void vmaFreeStatsString(
    9431  VmaAllocator allocator,
    9432  char* pStatsString)
    9433 {
    9434  if(pStatsString != VMA_NULL)
    9435  {
    9436  VMA_ASSERT(allocator);
    9437  size_t len = strlen(pStatsString);
    9438  vma_delete_array(allocator, pStatsString, len + 1);
    9439  }
    9440 }
    9441 
    9442 #endif // #if VMA_STATS_STRING_ENABLED
    9443 
    9444 /*
    9445 This function is not protected by any mutex because it just reads immutable data.
    9446 */
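// Editor's note, illustrative usage (local variable names are hypothetical): find a
// memory type suitable for a host-visible staging buffer.
//   VmaAllocationCreateInfo createInfo = {};
//   createInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
//   uint32_t memTypeIndex;
//   VkResult res = vmaFindMemoryTypeIndex(allocator, UINT32_MAX, &createInfo, &memTypeIndex);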
    9447 VkResult vmaFindMemoryTypeIndex(
    9448  VmaAllocator allocator,
    9449  uint32_t memoryTypeBits,
    9450  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9451  uint32_t* pMemoryTypeIndex)
    9452 {
    9453  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9454  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9455  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9456 
    9457  if(pAllocationCreateInfo->memoryTypeBits != 0)
    9458  {
    9459  memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
    9460  }
    9461 
    9462  uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
    9463  uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
    9464 
    9465  const bool mapped = (pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    9466  if(mapped)
    9467  {
    9468  preferredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9469  }
    9470 
    9471  // Convert usage to requiredFlags and preferredFlags.
    9472  switch(pAllocationCreateInfo->usage)
    9473  {
 9474  case VMA_MEMORY_USAGE_UNKNOWN:
 9475  break;
 9476  case VMA_MEMORY_USAGE_GPU_ONLY:
 9477  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9478  {
    9479  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9480  }
    9481  break;
 9482  case VMA_MEMORY_USAGE_CPU_ONLY:
 9483  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    9484  break;
 9485  case VMA_MEMORY_USAGE_CPU_TO_GPU:
 9486  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9487  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9488  {
    9489  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9490  }
    9491  break;
 9492  case VMA_MEMORY_USAGE_GPU_TO_CPU:
 9493  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9494  preferredFlags |= VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
    9495  break;
    9496  default:
    9497  break;
    9498  }
    9499 
    9500  *pMemoryTypeIndex = UINT32_MAX;
    9501  uint32_t minCost = UINT32_MAX;
    9502  for(uint32_t memTypeIndex = 0, memTypeBit = 1;
    9503  memTypeIndex < allocator->GetMemoryTypeCount();
    9504  ++memTypeIndex, memTypeBit <<= 1)
    9505  {
    9506  // This memory type is acceptable according to memoryTypeBits bitmask.
    9507  if((memTypeBit & memoryTypeBits) != 0)
    9508  {
    9509  const VkMemoryPropertyFlags currFlags =
    9510  allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
    9511  // This memory type contains requiredFlags.
    9512  if((requiredFlags & ~currFlags) == 0)
    9513  {
    9514  // Calculate cost as number of bits from preferredFlags not present in this memory type.
    9515  uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags);
    9516  // Remember memory type with lowest cost.
    9517  if(currCost < minCost)
    9518  {
    9519  *pMemoryTypeIndex = memTypeIndex;
    9520  if(currCost == 0)
    9521  {
    9522  return VK_SUCCESS;
    9523  }
    9524  minCost = currCost;
    9525  }
    9526  }
    9527  }
    9528  }
    9529  return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
    9530 }
    9531 
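// Editor's note: convenience variant of vmaFindMemoryTypeIndex. It creates a temporary
// VkBuffer solely to query its memory requirements (memoryTypeBits), finds the memory
// type, then destroys the buffer.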
 9532 VkResult vmaFindMemoryTypeIndexForBufferInfo(
 9533  VmaAllocator allocator,
    9534  const VkBufferCreateInfo* pBufferCreateInfo,
    9535  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9536  uint32_t* pMemoryTypeIndex)
    9537 {
    9538  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9539  VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
    9540  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9541  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9542 
    9543  const VkDevice hDev = allocator->m_hDevice;
    9544  VkBuffer hBuffer = VK_NULL_HANDLE;
    9545  VkResult res = allocator->GetVulkanFunctions().vkCreateBuffer(
    9546  hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
    9547  if(res == VK_SUCCESS)
    9548  {
    9549  VkMemoryRequirements memReq = {};
    9550  allocator->GetVulkanFunctions().vkGetBufferMemoryRequirements(
    9551  hDev, hBuffer, &memReq);
    9552 
    9553  res = vmaFindMemoryTypeIndex(
    9554  allocator,
    9555  memReq.memoryTypeBits,
    9556  pAllocationCreateInfo,
    9557  pMemoryTypeIndex);
    9558 
    9559  allocator->GetVulkanFunctions().vkDestroyBuffer(
    9560  hDev, hBuffer, allocator->GetAllocationCallbacks());
    9561  }
    9562  return res;
    9563 }
    9564 
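// Editor's note: image variant of the above. A temporary VkImage is created and
// destroyed just to obtain memoryTypeBits for the given VkImageCreateInfo.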
 9565 VkResult vmaFindMemoryTypeIndexForImageInfo(
 9566  VmaAllocator allocator,
    9567  const VkImageCreateInfo* pImageCreateInfo,
    9568  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9569  uint32_t* pMemoryTypeIndex)
    9570 {
    9571  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9572  VMA_ASSERT(pImageCreateInfo != VMA_NULL);
    9573  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9574  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9575 
    9576  const VkDevice hDev = allocator->m_hDevice;
    9577  VkImage hImage = VK_NULL_HANDLE;
    9578  VkResult res = allocator->GetVulkanFunctions().vkCreateImage(
    9579  hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
    9580  if(res == VK_SUCCESS)
    9581  {
    9582  VkMemoryRequirements memReq = {};
    9583  allocator->GetVulkanFunctions().vkGetImageMemoryRequirements(
    9584  hDev, hImage, &memReq);
    9585 
    9586  res = vmaFindMemoryTypeIndex(
    9587  allocator,
    9588  memReq.memoryTypeBits,
    9589  pAllocationCreateInfo,
    9590  pMemoryTypeIndex);
    9591 
    9592  allocator->GetVulkanFunctions().vkDestroyImage(
    9593  hDev, hImage, allocator->GetAllocationCallbacks());
    9594  }
    9595  return res;
    9596 }
    9597 
    9598 VkResult vmaCreatePool(
    9599  VmaAllocator allocator,
    9600  const VmaPoolCreateInfo* pCreateInfo,
    9601  VmaPool* pPool)
    9602 {
    9603  VMA_ASSERT(allocator && pCreateInfo && pPool);
    9604 
    9605  VMA_DEBUG_LOG("vmaCreatePool");
    9606 
    9607  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9608 
    9609  return allocator->CreatePool(pCreateInfo, pPool);
    9610 }
    9611 
    9612 void vmaDestroyPool(
    9613  VmaAllocator allocator,
    9614  VmaPool pool)
    9615 {
    9616  VMA_ASSERT(allocator);
    9617 
    9618  if(pool == VK_NULL_HANDLE)
    9619  {
    9620  return;
    9621  }
    9622 
    9623  VMA_DEBUG_LOG("vmaDestroyPool");
    9624 
    9625  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9626 
    9627  allocator->DestroyPool(pool);
    9628 }
    9629 
    9630 void vmaGetPoolStats(
    9631  VmaAllocator allocator,
    9632  VmaPool pool,
    9633  VmaPoolStats* pPoolStats)
    9634 {
    9635  VMA_ASSERT(allocator && pool && pPoolStats);
    9636 
    9637  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9638 
    9639  allocator->GetPoolStats(pool, pPoolStats);
    9640 }
    9641 
 9642 void vmaMakePoolAllocationsLost(
 9643  VmaAllocator allocator,
    9644  VmaPool pool,
    9645  size_t* pLostAllocationCount)
    9646 {
    9647  VMA_ASSERT(allocator && pool);
    9648 
    9649  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9650 
    9651  allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
    9652 }
    9653 
    9654 VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
    9655 {
    9656  VMA_ASSERT(allocator && pool);
    9657 
    9658  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9659 
    9660  VMA_DEBUG_LOG("vmaCheckPoolCorruption");
    9661 
    9662  return allocator->CheckPoolCorruption(pool);
    9663 }
    9664 
    9665 VkResult vmaAllocateMemory(
    9666  VmaAllocator allocator,
    9667  const VkMemoryRequirements* pVkMemoryRequirements,
    9668  const VmaAllocationCreateInfo* pCreateInfo,
    9669  VmaAllocation* pAllocation,
    9670  VmaAllocationInfo* pAllocationInfo)
    9671 {
    9672  VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
    9673 
    9674  VMA_DEBUG_LOG("vmaAllocateMemory");
    9675 
    9676  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9677 
    9678  VkResult result = allocator->AllocateMemory(
    9679  *pVkMemoryRequirements,
    9680  false, // requiresDedicatedAllocation
    9681  false, // prefersDedicatedAllocation
    9682  VK_NULL_HANDLE, // dedicatedBuffer
    9683  VK_NULL_HANDLE, // dedicatedImage
    9684  *pCreateInfo,
    9685  VMA_SUBALLOCATION_TYPE_UNKNOWN,
    9686  pAllocation);
    9687 
    9688  if(pAllocationInfo && result == VK_SUCCESS)
    9689  {
    9690  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9691  }
    9692 
    9693  return result;
    9694 }
    9695 
 9696 VkResult vmaAllocateMemoryForBuffer(
 9697  VmaAllocator allocator,
    9698  VkBuffer buffer,
    9699  const VmaAllocationCreateInfo* pCreateInfo,
    9700  VmaAllocation* pAllocation,
    9701  VmaAllocationInfo* pAllocationInfo)
    9702 {
    9703  VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9704 
    9705  VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
    9706 
    9707  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9708 
    9709  VkMemoryRequirements vkMemReq = {};
    9710  bool requiresDedicatedAllocation = false;
    9711  bool prefersDedicatedAllocation = false;
    9712  allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
    9713  requiresDedicatedAllocation,
    9714  prefersDedicatedAllocation);
    9715 
    9716  VkResult result = allocator->AllocateMemory(
    9717  vkMemReq,
    9718  requiresDedicatedAllocation,
    9719  prefersDedicatedAllocation,
    9720  buffer, // dedicatedBuffer
    9721  VK_NULL_HANDLE, // dedicatedImage
    9722  *pCreateInfo,
    9723  VMA_SUBALLOCATION_TYPE_BUFFER,
    9724  pAllocation);
    9725 
    9726  if(pAllocationInfo && result == VK_SUCCESS)
    9727  {
    9728  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9729  }
    9730 
    9731  return result;
    9732 }
    9733 
    9734 VkResult vmaAllocateMemoryForImage(
    9735  VmaAllocator allocator,
    9736  VkImage image,
    9737  const VmaAllocationCreateInfo* pCreateInfo,
    9738  VmaAllocation* pAllocation,
    9739  VmaAllocationInfo* pAllocationInfo)
    9740 {
    9741  VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9742 
    9743  VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
    9744 
    9745  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9746 
    9747  VkResult result = AllocateMemoryForImage(
    9748  allocator,
    9749  image,
    9750  pCreateInfo,
    9751  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
    9752  pAllocation);
    9753 
    9754  if(pAllocationInfo && result == VK_SUCCESS)
    9755  {
    9756  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9757  }
    9758 
    9759  return result;
    9760 }
    9761 
    9762 void vmaFreeMemory(
    9763  VmaAllocator allocator,
    9764  VmaAllocation allocation)
    9765 {
    9766  VMA_ASSERT(allocator);
    9767  VMA_DEBUG_LOG("vmaFreeMemory");
    9768  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9769  if(allocation != VK_NULL_HANDLE)
    9770  {
    9771  allocator->FreeMemory(allocation);
    9772  }
    9773 }
    9774 
 9775 void vmaGetAllocationInfo(
 9776  VmaAllocator allocator,
    9777  VmaAllocation allocation,
    9778  VmaAllocationInfo* pAllocationInfo)
    9779 {
    9780  VMA_ASSERT(allocator && allocation && pAllocationInfo);
    9781 
    9782  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9783 
    9784  allocator->GetAllocationInfo(allocation, pAllocationInfo);
    9785 }
    9786 
    9787 VkBool32 vmaTouchAllocation(
    9788  VmaAllocator allocator,
    9789  VmaAllocation allocation)
    9790 {
    9791  VMA_ASSERT(allocator && allocation);
    9792 
    9793  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9794 
    9795  return allocator->TouchAllocation(allocation);
    9796 }
    9797 
 9798 void vmaSetAllocationUserData(
 9799  VmaAllocator allocator,
    9800  VmaAllocation allocation,
    9801  void* pUserData)
    9802 {
    9803  VMA_ASSERT(allocator && allocation);
    9804 
    9805  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9806 
    9807  allocation->SetUserData(allocator, pUserData);
    9808 }
    9809 
 9810 void vmaCreateLostAllocation(
 9811  VmaAllocator allocator,
    9812  VmaAllocation* pAllocation)
    9813 {
    9814  VMA_ASSERT(allocator && pAllocation);
    9815 
    9816  VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    9817 
    9818  allocator->CreateLostAllocation(pAllocation);
    9819 }
    9820 
    9821 VkResult vmaMapMemory(
    9822  VmaAllocator allocator,
    9823  VmaAllocation allocation,
    9824  void** ppData)
    9825 {
    9826  VMA_ASSERT(allocator && allocation && ppData);
    9827 
    9828  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9829 
    9830  return allocator->Map(allocation, ppData);
    9831 }
    9832 
    9833 void vmaUnmapMemory(
    9834  VmaAllocator allocator,
    9835  VmaAllocation allocation)
    9836 {
    9837  VMA_ASSERT(allocator && allocation);
    9838 
    9839  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9840 
    9841  allocator->Unmap(allocation);
    9842 }
    9843 
    9844 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9845 {
    9846  VMA_ASSERT(allocator && allocation);
    9847 
    9848  VMA_DEBUG_LOG("vmaFlushAllocation");
    9849 
    9850  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9851 
    9852  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
    9853 }
    9854 
    9855 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9856 {
    9857  VMA_ASSERT(allocator && allocation);
    9858 
    9859  VMA_DEBUG_LOG("vmaInvalidateAllocation");
    9860 
    9861  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9862 
    9863  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
    9864 }
    9865 
    9866 VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits)
    9867 {
    9868  VMA_ASSERT(allocator);
    9869 
    9870  VMA_DEBUG_LOG("vmaCheckCorruption");
    9871 
    9872  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9873 
    9874  return allocator->CheckCorruption(memoryTypeBits);
    9875 }
    9876 
    9877 VkResult vmaDefragment(
    9878  VmaAllocator allocator,
    9879  VmaAllocation* pAllocations,
    9880  size_t allocationCount,
    9881  VkBool32* pAllocationsChanged,
    9882  const VmaDefragmentationInfo *pDefragmentationInfo,
    9883  VmaDefragmentationStats* pDefragmentationStats)
    9884 {
    9885  VMA_ASSERT(allocator && pAllocations);
    9886 
    9887  VMA_DEBUG_LOG("vmaDefragment");
    9888 
    9889  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9890 
    9891  return allocator->Defragment(pAllocations, allocationCount, pAllocationsChanged, pDefragmentationInfo, pDefragmentationStats);
    9892 }
    9893 
    9894 VkResult vmaBindBufferMemory(
    9895  VmaAllocator allocator,
    9896  VmaAllocation allocation,
    9897  VkBuffer buffer)
    9898 {
    9899  VMA_ASSERT(allocator && allocation && buffer);
    9900 
    9901  VMA_DEBUG_LOG("vmaBindBufferMemory");
    9902 
    9903  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9904 
    9905  return allocator->BindBufferMemory(allocation, buffer);
    9906 }
    9907 
    9908 VkResult vmaBindImageMemory(
    9909  VmaAllocator allocator,
    9910  VmaAllocation allocation,
    9911  VkImage image)
    9912 {
    9913  VMA_ASSERT(allocator && allocation && image);
    9914 
    9915  VMA_DEBUG_LOG("vmaBindImageMemory");
    9916 
    9917  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9918 
    9919  return allocator->BindImageMemory(allocation, image);
    9920 }
    9921 
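// Editor's note: vmaCreateBuffer performs three steps with rollback on failure: create
// the VkBuffer, allocate memory for it (passing the buffer as a dedicated-allocation
// hint), and bind the two together. If allocation or binding fails, the intermediate
// objects are destroyed and *pBuffer / *pAllocation are reset to VK_NULL_HANDLE.
// Asserts verify that the reported alignment already covers the texel/uniform/storage
// buffer offset alignment limits from the physical device properties.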
    9922 VkResult vmaCreateBuffer(
    9923  VmaAllocator allocator,
    9924  const VkBufferCreateInfo* pBufferCreateInfo,
    9925  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9926  VkBuffer* pBuffer,
    9927  VmaAllocation* pAllocation,
    9928  VmaAllocationInfo* pAllocationInfo)
    9929 {
    9930  VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
    9931 
    9932  VMA_DEBUG_LOG("vmaCreateBuffer");
    9933 
    9934  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9935 
    9936  *pBuffer = VK_NULL_HANDLE;
    9937  *pAllocation = VK_NULL_HANDLE;
    9938 
    9939  // 1. Create VkBuffer.
    9940  VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
    9941  allocator->m_hDevice,
    9942  pBufferCreateInfo,
    9943  allocator->GetAllocationCallbacks(),
    9944  pBuffer);
    9945  if(res >= 0)
    9946  {
    9947  // 2. vkGetBufferMemoryRequirements.
    9948  VkMemoryRequirements vkMemReq = {};
    9949  bool requiresDedicatedAllocation = false;
    9950  bool prefersDedicatedAllocation = false;
    9951  allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
    9952  requiresDedicatedAllocation, prefersDedicatedAllocation);
    9953 
    9954  // Make sure alignment requirements for specific buffer usages reported
    9955  // in Physical Device Properties are included in alignment reported by memory requirements.
    9956  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0)
    9957  {
    9958  VMA_ASSERT(vkMemReq.alignment %
    9959  allocator->m_PhysicalDeviceProperties.limits.minTexelBufferOffsetAlignment == 0);
    9960  }
    9961  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0)
    9962  {
    9963  VMA_ASSERT(vkMemReq.alignment %
    9964  allocator->m_PhysicalDeviceProperties.limits.minUniformBufferOffsetAlignment == 0);
    9965  }
    9966  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0)
    9967  {
    9968  VMA_ASSERT(vkMemReq.alignment %
    9969  allocator->m_PhysicalDeviceProperties.limits.minStorageBufferOffsetAlignment == 0);
    9970  }
    9971 
    9972  // 3. Allocate memory using allocator.
    9973  res = allocator->AllocateMemory(
    9974  vkMemReq,
    9975  requiresDedicatedAllocation,
    9976  prefersDedicatedAllocation,
    9977  *pBuffer, // dedicatedBuffer
    9978  VK_NULL_HANDLE, // dedicatedImage
    9979  *pAllocationCreateInfo,
    9980  VMA_SUBALLOCATION_TYPE_BUFFER,
    9981  pAllocation);
    9982  if(res >= 0)
    9983  {
    9984  // 3. Bind buffer with memory.
    9985  res = allocator->BindBufferMemory(*pAllocation, *pBuffer);
    9986  if(res >= 0)
    9987  {
    9988  // All steps succeeded.
    9989  #if VMA_STATS_STRING_ENABLED
    9990  (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
    9991  #endif
    9992  if(pAllocationInfo != VMA_NULL)
    9993  {
    9994  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9995  }
    9996  return VK_SUCCESS;
    9997  }
    9998  allocator->FreeMemory(*pAllocation);
    9999  *pAllocation = VK_NULL_HANDLE;
    10000  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    10001  *pBuffer = VK_NULL_HANDLE;
    10002  return res;
    10003  }
    10004  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    10005  *pBuffer = VK_NULL_HANDLE;
    10006  return res;
    10007  }
    10008  return res;
    10009 }
    10010 
    10011 void vmaDestroyBuffer(
    10012  VmaAllocator allocator,
    10013  VkBuffer buffer,
    10014  VmaAllocation allocation)
    10015 {
    10016  VMA_ASSERT(allocator);
    10017  VMA_DEBUG_LOG("vmaDestroyBuffer");
    10018  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10019  if(buffer != VK_NULL_HANDLE)
    10020  {
    10021  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
    10022  }
    10023  if(allocation != VK_NULL_HANDLE)
    10024  {
    10025  allocator->FreeMemory(allocation);
    10026  }
    10027 }
    10028 
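// Editor's note: vmaCreateImage follows the same create/allocate/bind pattern as
// vmaCreateBuffer. The suballocation type (linear vs. optimal, derived from tiling)
// is recorded so the allocator can respect bufferImageGranularity between linear and
// optimal resources placed in the same memory block.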
    10029 VkResult vmaCreateImage(
    10030  VmaAllocator allocator,
    10031  const VkImageCreateInfo* pImageCreateInfo,
    10032  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    10033  VkImage* pImage,
    10034  VmaAllocation* pAllocation,
    10035  VmaAllocationInfo* pAllocationInfo)
    10036 {
    10037  VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
    10038 
    10039  VMA_DEBUG_LOG("vmaCreateImage");
    10040 
    10041  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10042 
    10043  *pImage = VK_NULL_HANDLE;
    10044  *pAllocation = VK_NULL_HANDLE;
    10045 
    10046  // 1. Create VkImage.
    10047  VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
    10048  allocator->m_hDevice,
    10049  pImageCreateInfo,
    10050  allocator->GetAllocationCallbacks(),
    10051  pImage);
    10052  if(res >= 0)
    10053  {
    10054  VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
    10055  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
    10056  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
    10057 
    10058  // 2. Allocate memory using allocator.
    10059  res = AllocateMemoryForImage(allocator, *pImage, pAllocationCreateInfo, suballocType, pAllocation);
    10060  if(res >= 0)
    10061  {
    10062  // 3. Bind image with memory.
    10063  res = allocator->BindImageMemory(*pAllocation, *pImage);
    10064  if(res >= 0)
    10065  {
    10066  // All steps succeeded.
    10067  #if VMA_STATS_STRING_ENABLED
    10068  (*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
    10069  #endif
    10070  if(pAllocationInfo != VMA_NULL)
    10071  {
    10072  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    10073  }
    10074  return VK_SUCCESS;
    10075  }
    10076  allocator->FreeMemory(*pAllocation);
    10077  *pAllocation = VK_NULL_HANDLE;
    10078  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    10079  *pImage = VK_NULL_HANDLE;
    10080  return res;
    10081  }
    10082  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    10083  *pImage = VK_NULL_HANDLE;
    10084  return res;
    10085  }
    10086  return res;
    10087 }
    10088 
    10089 void vmaDestroyImage(
    10090  VmaAllocator allocator,
    10091  VkImage image,
    10092  VmaAllocation allocation)
    10093 {
    10094  VMA_ASSERT(allocator);
    10095  VMA_DEBUG_LOG("vmaDestroyImage");
    10096  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10097  if(image != VK_NULL_HANDLE)
    10098  {
    10099  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
    10100  }
    10101  if(allocation != VK_NULL_HANDLE)
    10102  {
    10103  allocator->FreeMemory(allocation);
    10104  }
    10105 }
    10106 
    10107 #endif // #ifdef VMA_IMPLEMENTATION
    Maximum total numbers of bytes that can be copied while moving allocations to different places...
    Definition: vk_mem_alloc.h:1989
    -
    PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements
    Definition: vk_mem_alloc.h:1181
    -
    void(VKAPI_PTR * PFN_vmaAllocateDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
    Callback function called after successful vkAllocateMemory.
    Definition: vk_mem_alloc.h:1106
    +
    uint32_t frameInUseCount
    Maximum number of additional frames that are in use at the same time as current frame.
    Definition: vk_mem_alloc.h:1299
    +
    VmaStatInfo total
    Definition: vk_mem_alloc.h:1409
    +
    uint32_t deviceMemoryBlocksFreed
    Number of empty VkDeviceMemory objects that have been released to the system.
    Definition: vk_mem_alloc.h:2113
    +
    VmaAllocationCreateFlags flags
    Use VmaAllocationCreateFlagBits enum.
    Definition: vk_mem_alloc.h:1569
    +
    VkDeviceSize maxBytesToMove
    Maximum total numbers of bytes that can be copied while moving allocations to different places...
    Definition: vk_mem_alloc.h:2096
    +
    PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements
    Definition: vk_mem_alloc.h:1254
    +
    void(VKAPI_PTR * PFN_vmaAllocateDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
    Callback function called after successful vkAllocateMemory.
    Definition: vk_mem_alloc.h:1179
    Represents main object of this library initialized.
    -
    VkDevice device
    Vulkan device.
    Definition: vk_mem_alloc.h:1203
    +
    VkDevice device
    Vulkan device.
    Definition: vk_mem_alloc.h:1276
    VkResult vmaBindBufferMemory(VmaAllocator allocator, VmaAllocation allocation, VkBuffer buffer)
    Binds buffer to allocation.
    -
    Describes parameter of created VmaPool.
    Definition: vk_mem_alloc.h:1621
    -
    Definition: vk_mem_alloc.h:1615
    -
    VkDeviceSize size
    Size of this allocation, in bytes.
    Definition: vk_mem_alloc.h:1777
    +
    Describes parameter of created VmaPool.
    Definition: vk_mem_alloc.h:1694
    +
    Definition: vk_mem_alloc.h:1688
    +
    VkDeviceSize size
    Size of this allocation, in bytes.
    Definition: vk_mem_alloc.h:1866
    void vmaGetMemoryTypeProperties(VmaAllocator allocator, uint32_t memoryTypeIndex, VkMemoryPropertyFlags *pFlags)
    Given Memory Type Index, returns Property Flags of this memory type.
    -
    PFN_vkUnmapMemory vkUnmapMemory
    Definition: vk_mem_alloc.h:1176
    -
    void * pUserData
    Custom general-purpose pointer that will be stored in VmaAllocation, can be read as VmaAllocationInfo...
    Definition: vk_mem_alloc.h:1533
    -
    size_t minBlockCount
    Minimum number of blocks to be always allocated in this pool, even if they stay empty.
    Definition: vk_mem_alloc.h:1637
    -
    size_t allocationCount
    Number of VmaAllocation objects created from this pool that were not destroyed or lost...
    Definition: vk_mem_alloc.h:1673
    +
    PFN_vkUnmapMemory vkUnmapMemory
    Definition: vk_mem_alloc.h:1249
    +
    void * pUserData
    Custom general-purpose pointer that will be stored in VmaAllocation, can be read as VmaAllocationInfo...
    Definition: vk_mem_alloc.h:1606
    +
    size_t minBlockCount
    Minimum number of blocks to be always allocated in this pool, even if they stay empty.
    Definition: vk_mem_alloc.h:1710
    +
    size_t allocationCount
    Number of VmaAllocation objects created from this pool that were not destroyed or lost...
    Definition: vk_mem_alloc.h:1746
    struct VmaVulkanFunctions VmaVulkanFunctions
    Pointers to some Vulkan functions - a subset used by the library.
    -
    Definition: vk_mem_alloc.h:1162
    -
    uint32_t memoryTypeIndex
    Vulkan memory type index to allocate this pool from.
    Definition: vk_mem_alloc.h:1624
    +
    Definition: vk_mem_alloc.h:1235
    +
    uint32_t memoryTypeIndex
    Vulkan memory type index to allocate this pool from.
    Definition: vk_mem_alloc.h:1697
    VkResult vmaFindMemoryTypeIndex(VmaAllocator allocator, uint32_t memoryTypeBits, const VmaAllocationCreateInfo *pAllocationCreateInfo, uint32_t *pMemoryTypeIndex)
    Helps to find memoryTypeIndex, given memoryTypeBits and VmaAllocationCreateInfo.
    -
    VmaMemoryUsage
    Definition: vk_mem_alloc.h:1372
    +
    VmaMemoryUsage
    Definition: vk_mem_alloc.h:1445
    struct VmaAllocationInfo VmaAllocationInfo
    Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo().
    void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    Flushes memory of given allocation.
    -
    Optional configuration parameters to be passed to function vmaDefragment().
    Definition: vk_mem_alloc.h:1984
    +
    Optional configuration parameters to be passed to function vmaDefragment().
    Definition: vk_mem_alloc.h:2091
    struct VmaPoolCreateInfo VmaPoolCreateInfo
    Describes parameter of created VmaPool.
    void vmaDestroyPool(VmaAllocator allocator, VmaPool pool)
    Destroys VmaPool object and frees Vulkan device memory.
    -
    VkDeviceSize bytesFreed
    Total number of bytes that have been released to the system by freeing empty VkDeviceMemory objects...
    Definition: vk_mem_alloc.h:2002
    -
    Definition: vk_mem_alloc.h:1411
    -
    uint32_t memoryTypeBits
    Bitmask containing one bit set for every memory type acceptable for this allocation.
    Definition: vk_mem_alloc.h:1520
    -
    PFN_vkBindBufferMemory vkBindBufferMemory
    Definition: vk_mem_alloc.h:1179
    +
    VkDeviceSize bytesFreed
    Total number of bytes that have been released to the system by freeing empty VkDeviceMemory objects...
    Definition: vk_mem_alloc.h:2109
    +
    Definition: vk_mem_alloc.h:1484
    +
    uint32_t memoryTypeBits
    Bitmask containing one bit set for every memory type acceptable for this allocation.
    Definition: vk_mem_alloc.h:1593
    +
    PFN_vkBindBufferMemory vkBindBufferMemory
    Definition: vk_mem_alloc.h:1252
    Represents custom memory pool.
    void vmaGetPoolStats(VmaAllocator allocator, VmaPool pool, VmaPoolStats *pPoolStats)
    Retrieves statistics of existing VmaPool object.
    struct VmaDefragmentationInfo VmaDefragmentationInfo
    Optional configuration parameters to be passed to function vmaDefragment().
    -
    General statistics from current state of Allocator.
    Definition: vk_mem_alloc.h:1332
    -
    void(VKAPI_PTR * PFN_vmaFreeDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
    Callback function called before vkFreeMemory.
    Definition: vk_mem_alloc.h:1112
    +
    General statistics from current state of Allocator.
    Definition: vk_mem_alloc.h:1405
    +
    void(VKAPI_PTR * PFN_vmaFreeDeviceMemoryFunction)(VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
    Callback function called before vkFreeMemory.
    Definition: vk_mem_alloc.h:1185
    void vmaSetAllocationUserData(VmaAllocator allocator, VmaAllocation allocation, void *pUserData)
    Sets pUserData in given allocation to new value.
    VkResult vmaCreatePool(VmaAllocator allocator, const VmaPoolCreateInfo *pCreateInfo, VmaPool *pPool)
    Allocates Vulkan device memory and creates VmaPool object.
    -
    VmaAllocatorCreateFlagBits
    Flags for created VmaAllocator.
    Definition: vk_mem_alloc.h:1133
    +
    VmaAllocatorCreateFlagBits
    Flags for created VmaAllocator.
    Definition: vk_mem_alloc.h:1206
    VkResult vmaBindImageMemory(VmaAllocator allocator, VmaAllocation allocation, VkImage image)
    Binds image to allocation.
    struct VmaStatInfo VmaStatInfo
    Calculated statistics of memory usage in entire allocator.
    -
    Allocator and all objects created from it will not be synchronized internally, so you must guarantee ...
    Definition: vk_mem_alloc.h:1138
    -
    uint32_t allocationsMoved
    Number of allocations that have been moved to different places.
    Definition: vk_mem_alloc.h:2004
    +
    Allocator and all objects created from it will not be synchronized internally, so you must guarantee ...
    Definition: vk_mem_alloc.h:1211
    +
    uint32_t allocationsMoved
    Number of allocations that have been moved to different places.
    Definition: vk_mem_alloc.h:2111
    void vmaCreateLostAllocation(VmaAllocator allocator, VmaAllocation *pAllocation)
    Creates new allocation that is in lost state from the beginning.
    -
    VkMemoryPropertyFlags requiredFlags
    Flags that must be set in a Memory Type chosen for an allocation.
    Definition: vk_mem_alloc.h:1507
    -
    VkDeviceSize unusedRangeSizeMax
    Size of the largest continuous free memory region.
    Definition: vk_mem_alloc.h:1683
    +
    VkMemoryPropertyFlags requiredFlags
    Flags that must be set in a Memory Type chosen for an allocation.
    Definition: vk_mem_alloc.h:1580
    +
    VkDeviceSize unusedRangeSizeMax
    Size of the largest continuous free memory region.
    Definition: vk_mem_alloc.h:1756
    void vmaBuildStatsString(VmaAllocator allocator, char **ppStatsString, VkBool32 detailedMap)
    Builds and returns statistics as string in JSON format.
    -
    PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties
    Definition: vk_mem_alloc.h:1172
    -
    Calculated statistics of memory usage in entire allocator.
    Definition: vk_mem_alloc.h:1315
    -
    VkDeviceSize blockSize
    Size of a single VkDeviceMemory block to be allocated as part of this pool, in bytes.
    Definition: vk_mem_alloc.h:1632
    -
    Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory.
    Definition: vk_mem_alloc.h:1125
    +
    PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties
    Definition: vk_mem_alloc.h:1245
    +
    Calculated statistics of memory usage in entire allocator.
    Definition: vk_mem_alloc.h:1388
    +
    VkDeviceSize blockSize
    Size of a single VkDeviceMemory block to be allocated as part of this pool, in bytes.
    Definition: vk_mem_alloc.h:1705
    +
    Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory.
    Definition: vk_mem_alloc.h:1198
    VkResult vmaCreateBuffer(VmaAllocator allocator, const VkBufferCreateInfo *pBufferCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkBuffer *pBuffer, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
    -
    Definition: vk_mem_alloc.h:1481
    -
    VkDeviceSize unusedRangeSizeMin
    Definition: vk_mem_alloc.h:1328
    -
    PFN_vmaFreeDeviceMemoryFunction pfnFree
    Optional, can be null.
    Definition: vk_mem_alloc.h:1129
    -
    VmaPoolCreateFlags flags
    Use combination of VmaPoolCreateFlagBits.
    Definition: vk_mem_alloc.h:1627
    -
    Definition: vk_mem_alloc.h:1420
    -
    PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges
    Definition: vk_mem_alloc.h:1178
    +
    Definition: vk_mem_alloc.h:1554
    +
    VkDeviceSize unusedRangeSizeMin
    Definition: vk_mem_alloc.h:1401
    +
    PFN_vmaFreeDeviceMemoryFunction pfnFree
    Optional, can be null.
    Definition: vk_mem_alloc.h:1202
    +
    VmaPoolCreateFlags flags
    Use combination of VmaPoolCreateFlagBits.
    Definition: vk_mem_alloc.h:1700
    +
    Definition: vk_mem_alloc.h:1493
    +
    PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges
    Definition: vk_mem_alloc.h:1251
    struct VmaPoolStats VmaPoolStats
    Describes parameter of existing VmaPool.
    VkResult vmaCreateImage(VmaAllocator allocator, const VkImageCreateInfo *pImageCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkImage *pImage, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
    Function similar to vmaCreateBuffer().
    -
    VmaMemoryUsage usage
    Intended usage of memory.
    Definition: vk_mem_alloc.h:1502
    -
    Definition: vk_mem_alloc.h:1493
    +
    VmaMemoryUsage usage
    Intended usage of memory.
    Definition: vk_mem_alloc.h:1575
    +
    Definition: vk_mem_alloc.h:1566
    VkResult vmaFindMemoryTypeIndexForImageInfo(VmaAllocator allocator, const VkImageCreateInfo *pImageCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, uint32_t *pMemoryTypeIndex)
    Helps to find memoryTypeIndex, given VkImageCreateInfo and VmaAllocationCreateInfo.
    -
    uint32_t blockCount
    Number of VkDeviceMemory Vulkan memory blocks allocated.
    Definition: vk_mem_alloc.h:1318
    -
    PFN_vkFreeMemory vkFreeMemory
    Definition: vk_mem_alloc.h:1174
    -
    size_t maxBlockCount
    Maximum number of blocks that can be allocated in this pool. Optional.
    Definition: vk_mem_alloc.h:1645
    -
    const VmaDeviceMemoryCallbacks * pDeviceMemoryCallbacks
    Informative callbacks for vkAllocateMemory, vkFreeMemory. Optional.
    Definition: vk_mem_alloc.h:1212
    -
    size_t unusedRangeCount
    Number of continuous memory ranges in the pool not used by any VmaAllocation.
    Definition: vk_mem_alloc.h:1676
    -
    VkFlags VmaAllocationCreateFlags
    Definition: vk_mem_alloc.h:1491
    -
    VmaPool pool
    Pool that this allocation should be created in.
    Definition: vk_mem_alloc.h:1526
    +
    uint32_t blockCount
    Number of VkDeviceMemory Vulkan memory blocks allocated.
    Definition: vk_mem_alloc.h:1391
    +
    PFN_vkFreeMemory vkFreeMemory
    Definition: vk_mem_alloc.h:1247
    +
    size_t maxBlockCount
    Maximum number of blocks that can be allocated in this pool. Optional.
    Definition: vk_mem_alloc.h:1718
    +
    const VmaDeviceMemoryCallbacks * pDeviceMemoryCallbacks
    Informative callbacks for vkAllocateMemory, vkFreeMemory. Optional.
    Definition: vk_mem_alloc.h:1285
    +
    size_t unusedRangeCount
    Number of continuous memory ranges in the pool not used by any VmaAllocation.
    Definition: vk_mem_alloc.h:1749
    +
    VkFlags VmaAllocationCreateFlags
    Definition: vk_mem_alloc.h:1564
    +
    VmaPool pool
    Pool that this allocation should be created in.
    Definition: vk_mem_alloc.h:1599
    void vmaGetMemoryProperties(VmaAllocator allocator, const VkPhysicalDeviceMemoryProperties **ppPhysicalDeviceMemoryProperties)
    -
    const VkDeviceSize * pHeapSizeLimit
    Either null or a pointer to an array of limits on maximum number of bytes that can be allocated out o...
    Definition: vk_mem_alloc.h:1250
    -
    VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES]
    Definition: vk_mem_alloc.h:1334
    -
    Set this flag to use a memory that will be persistently mapped and retrieve pointer to it...
    Definition: vk_mem_alloc.h:1461
    -
    VkDeviceSize allocationSizeMin
    Definition: vk_mem_alloc.h:1327
    +
    const VkDeviceSize * pHeapSizeLimit
    Either null or a pointer to an array of limits on maximum number of bytes that can be allocated out o...
    Definition: vk_mem_alloc.h:1323
    +
    VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES]
    Definition: vk_mem_alloc.h:1407
    +
    Set this flag to use a memory that will be persistently mapped and retrieve pointer to it...
    Definition: vk_mem_alloc.h:1534
    +
    VkDeviceSize allocationSizeMin
    Definition: vk_mem_alloc.h:1400
    VkResult vmaFindMemoryTypeIndexForBufferInfo(VmaAllocator allocator, const VkBufferCreateInfo *pBufferCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, uint32_t *pMemoryTypeIndex)
    Helps to find memoryTypeIndex, given VkBufferCreateInfo and VmaAllocationCreateInfo.
    -
    PFN_vkCreateImage vkCreateImage
    Definition: vk_mem_alloc.h:1185
    -
    PFN_vmaAllocateDeviceMemoryFunction pfnAllocate
    Optional, can be null.
    Definition: vk_mem_alloc.h:1127
    -
    PFN_vkDestroyBuffer vkDestroyBuffer
    Definition: vk_mem_alloc.h:1184
    +
    PFN_vkCreateImage vkCreateImage
    Definition: vk_mem_alloc.h:1258
    +
    PFN_vmaAllocateDeviceMemoryFunction pfnAllocate
    Optional, can be null.
    Definition: vk_mem_alloc.h:1200
    +
    PFN_vkDestroyBuffer vkDestroyBuffer
    Definition: vk_mem_alloc.h:1257
    VkResult vmaMapMemory(VmaAllocator allocator, VmaAllocation allocation, void **ppData)
    Maps memory represented by given allocation and returns pointer to it.
    -
    uint32_t frameInUseCount
    Maximum number of additional frames that are in use at the same time as current frame.
    Definition: vk_mem_alloc.h:1659
    -
    PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges
    Definition: vk_mem_alloc.h:1177
    +
    uint32_t frameInUseCount
    Maximum number of additional frames that are in use at the same time as current frame.
    Definition: vk_mem_alloc.h:1732
    +
    PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges
    Definition: vk_mem_alloc.h:1250
    VkResult vmaAllocateMemoryForImage(VmaAllocator allocator, VkImage image, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
    Function similar to vmaAllocateMemoryForBuffer().
    struct VmaAllocatorCreateInfo VmaAllocatorCreateInfo
    Description of a Allocator to be created.
    -
    void * pUserData
    Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vma...
    Definition: vk_mem_alloc.h:1791
    -
    VkDeviceSize preferredLargeHeapBlockSize
    Preferred size of a single VkDeviceMemory block to be allocated from large heaps > 1 GiB...
    Definition: vk_mem_alloc.h:1206
    -
    VkDeviceSize allocationSizeAvg
    Definition: vk_mem_alloc.h:1327
    -
    VkDeviceSize usedBytes
    Total number of bytes occupied by all allocations.
    Definition: vk_mem_alloc.h:1324
    +
    void * pUserData
    Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vma...
    Definition: vk_mem_alloc.h:1880
    +
    VkDeviceSize preferredLargeHeapBlockSize
    Preferred size of a single VkDeviceMemory block to be allocated from large heaps > 1 GiB...
    Definition: vk_mem_alloc.h:1279
    +
    VkDeviceSize allocationSizeAvg
    Definition: vk_mem_alloc.h:1400
    +
    VkDeviceSize usedBytes
    Total number of bytes occupied by all allocations.
    Definition: vk_mem_alloc.h:1397
    struct VmaDeviceMemoryCallbacks VmaDeviceMemoryCallbacks
    Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory.
    -
    Describes parameter of existing VmaPool.
    Definition: vk_mem_alloc.h:1664
    -
    VkDeviceSize offset
    Offset into deviceMemory object to the beginning of this allocation, in bytes. (deviceMemory, offset) pair is unique to this allocation.
    Definition: vk_mem_alloc.h:1772
    -
    Definition: vk_mem_alloc.h:1489
    -
    VkDeviceSize bytesMoved
    Total number of bytes that have been copied while moving allocations to different places...
    Definition: vk_mem_alloc.h:2000
    -
    Pointers to some Vulkan functions - a subset used by the library.
    Definition: vk_mem_alloc.h:1170
    +
    VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits)
    Checks magic number in margins around all allocations in given memory types (in both default and cust...
    +
    Describes parameter of existing VmaPool.
    Definition: vk_mem_alloc.h:1737
    +
    VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
    Checks magic number in margins around all allocations in given memory pool in search for corruptions...
    +
    VkDeviceSize offset
    Offset into deviceMemory object to the beginning of this allocation, in bytes. (deviceMemory, offset) pair is unique to this allocation.
    Definition: vk_mem_alloc.h:1861
    +
    Definition: vk_mem_alloc.h:1562
    +
    VkDeviceSize bytesMoved
    Total number of bytes that have been copied while moving allocations to different places...
    Definition: vk_mem_alloc.h:2107
    +
    Pointers to some Vulkan functions - a subset used by the library.
    Definition: vk_mem_alloc.h:1243
    VkResult vmaCreateAllocator(const VmaAllocatorCreateInfo *pCreateInfo, VmaAllocator *pAllocator)
    Creates Allocator object.
    -
    uint32_t unusedRangeCount
    Number of free ranges of memory between allocations.
    Definition: vk_mem_alloc.h:1322
    -
    Definition: vk_mem_alloc.h:1377
    -
    VkFlags VmaPoolCreateFlags
    Definition: vk_mem_alloc.h:1617
    +
    uint32_t unusedRangeCount
    Number of free ranges of memory between allocations.
    Definition: vk_mem_alloc.h:1395
    +
    Definition: vk_mem_alloc.h:1450
    +
    VkFlags VmaPoolCreateFlags
    Definition: vk_mem_alloc.h:1690
    void vmaGetPhysicalDeviceProperties(VmaAllocator allocator, const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
    -
    uint32_t allocationCount
    Number of VmaAllocation allocation objects allocated.
    Definition: vk_mem_alloc.h:1320
    -
    PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements
    Definition: vk_mem_alloc.h:1182
    -
    PFN_vkDestroyImage vkDestroyImage
    Definition: vk_mem_alloc.h:1186
    -
    Set this flag to only try to allocate from existing VkDeviceMemory blocks and never create new such b...
    Definition: vk_mem_alloc.h:1448
    -
    Definition: vk_mem_alloc.h:1404
    -
    void * pMappedData
    Pointer to the beginning of this allocation as mapped data.
    Definition: vk_mem_alloc.h:1786
    +
    uint32_t allocationCount
    Number of VmaAllocation allocation objects allocated.
    Definition: vk_mem_alloc.h:1393
    +
    PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements
    Definition: vk_mem_alloc.h:1255
    +
    PFN_vkDestroyImage vkDestroyImage
    Definition: vk_mem_alloc.h:1259
    +
    Set this flag to only try to allocate from existing VkDeviceMemory blocks and never create new such b...
    Definition: vk_mem_alloc.h:1521
    +
    Definition: vk_mem_alloc.h:1477
    +
    void * pMappedData
    Pointer to the beginning of this allocation as mapped data.
    Definition: vk_mem_alloc.h:1875
    void vmaDestroyImage(VmaAllocator allocator, VkImage image, VmaAllocation allocation)
    Destroys Vulkan image and frees allocated memory.
    -
    Enables usage of VK_KHR_dedicated_allocation extension.
    Definition: vk_mem_alloc.h:1160
    +
    Enables usage of VK_KHR_dedicated_allocation extension.
    Definition: vk_mem_alloc.h:1233
    struct VmaDefragmentationStats VmaDefragmentationStats
    Statistics returned by function vmaDefragment().
    -
    PFN_vkAllocateMemory vkAllocateMemory
    Definition: vk_mem_alloc.h:1173
    -
    Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo().
    Definition: vk_mem_alloc.h:1753
    +
    PFN_vkAllocateMemory vkAllocateMemory
    Definition: vk_mem_alloc.h:1246
    +
    Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo().
    Definition: vk_mem_alloc.h:1842
    VkResult vmaAllocateMemory(VmaAllocator allocator, const VkMemoryRequirements *pVkMemoryRequirements, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
    General purpose memory allocation.
    void vmaSetCurrentFrameIndex(VmaAllocator allocator, uint32_t frameIndex)
    Sets index of the current frame.
    struct VmaAllocationCreateInfo VmaAllocationCreateInfo
    VkResult vmaAllocateMemoryForBuffer(VmaAllocator allocator, VkBuffer buffer, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
    -
    VmaPoolCreateFlagBits
    Flags to be passed as VmaPoolCreateInfo::flags.
    Definition: vk_mem_alloc.h:1595
    -
    VkDeviceSize unusedRangeSizeAvg
    Definition: vk_mem_alloc.h:1328
    +
    VmaPoolCreateFlagBits
    Flags to be passed as VmaPoolCreateInfo::flags.
    Definition: vk_mem_alloc.h:1668
    +
    VkDeviceSize unusedRangeSizeAvg
    Definition: vk_mem_alloc.h:1401
    VkBool32 vmaTouchAllocation(VmaAllocator allocator, VmaAllocation allocation)
    Returns VK_TRUE if allocation is not lost and atomically marks it as used in current frame...
    - -
    VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS]
    Definition: vk_mem_alloc.h:1335
    + +
    VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS]
    Definition: vk_mem_alloc.h:1408
    void vmaDestroyBuffer(VmaAllocator allocator, VkBuffer buffer, VmaAllocation allocation)
    Destroys Vulkan buffer and frees allocated memory.
    -
    VkDeviceSize unusedSize
    Total number of bytes in the pool not used by any VmaAllocation.
    Definition: vk_mem_alloc.h:1670
    -
    VkDeviceSize unusedRangeSizeMax
    Definition: vk_mem_alloc.h:1328
    -
    uint32_t memoryType
    Memory type index that this allocation was allocated from.
    Definition: vk_mem_alloc.h:1758
    +
    VkDeviceSize unusedSize
    Total number of bytes in the pool not used by any VmaAllocation.
    Definition: vk_mem_alloc.h:1743
    +
    VkDeviceSize unusedRangeSizeMax
    Definition: vk_mem_alloc.h:1401
    +
    uint32_t memoryType
    Memory type index that this allocation was allocated from.
    Definition: vk_mem_alloc.h:1847
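The two entries above are the generated documentation for the corruption-checking API added in this change. As a minimal sketch of how an application might drive it: the helper below is hypothetical, it assumes the library was compiled with `VMA_DEBUG_MARGIN` and `VMA_DEBUG_DETECT_CORRUPTION` enabled, and since this diff does not show which error code the functions return when a corruption is actually found (the library's internal assert fires in that case), the result handling here is an assumption rather than the documented contract.

```cpp
#include <cassert>
#include <cstdint>

#include "vk_mem_alloc.h"

// Hypothetical debug helper: validates the magic-number margins around every
// allocation. Assumes the library was built with VMA_DEBUG_MARGIN and
// VMA_DEBUG_DETECT_CORRUPTION; if it was not, the functions are expected to
// report that the feature is unavailable rather than succeed.
void CheckForCorruption(VmaAllocator allocator, VmaPool customPool)
{
    // memoryTypeBits selects which memory types to check; UINT32_MAX requests
    // all of them. Types whose margins cannot be validated are skipped.
    VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
    assert(res == VK_SUCCESS || res == VK_ERROR_FEATURE_NOT_PRESENT);

    // The same validation can be scoped to a single custom pool.
    res = vmaCheckPoolCorruption(allocator, customPool);
    assert(res == VK_SUCCESS || res == VK_ERROR_FEATURE_NOT_PRESENT);
}
```

Calling such a check once per frame, or immediately after a suspect workload, narrows down when an out-of-bounds write occurs instead of waiting for the per-allocation validation at destruction time.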