├── .gitattributes
├── README.md
├── ijha_h32.h
└── ijss.h
/.gitattributes:
--------------------------------------------------------------------------------
1 | *.h linguist-language=C
2 | *.c linguist-language=C
3 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Collection of handle/id allocators
2 |
3 | In many situations it's desirable to refer to objects/resources by handles instead of pointers. In addition to memory safety, like detecting double frees and references to freed/reallocated memory, it allows the implementation to hide its internals and reorganize data without changing the public API. [Andre Weissflog](https://github.com/floooh) goes into this in great detail in [Handles are the better pointers](https://floooh.github.io/2018/06/17/handles-vs-pointers.html).
4 |
5 | ### Pros
6 | - detect stale handles ('use after free').
7 | - sizeof(pointer) > sizeof(handle_type) (on 64-bit architectures with 32-bit handles).
8 | - the system is free to arrange the memory of the resource referred to by the handle (ex. keep data linear in memory).
9 | - more userflag bits available (if needed). pointers would only have a few bottom bits free (due to alignment), and extra care would be needed to mask out those bits before usage.
10 |
11 | ### Cons
12 | - at least one extra indirection when using the resource externally (_but_ in most cases more resources are touched internally than are referenced externally).
13 | - trickier to debug/inspect individual resources.
14 |
15 | ## Allocators
16 |
17 | All allocators are implemented as [stb-style header-file libraries](https://github.com/nothings/stb) and come with unit tests and usage examples.
18 |
19 | - [ijha_h32.h](https://github.com/incrediblejr/ijhandlealloc/blob/master/ijha_h32.h) is a runtime-configurable thread-safe FIFO/LIFO handle allocator with handles that have a user-configurable number of userflag bits and a variable number of generation bits. Memory usage: 4 bytes / handle.
20 |
21 | - [ijss.h](https://github.com/incrediblejr/ijhandlealloc/blob/master/ijss.h) is a sparse set for bookkeeping of dense<->sparse index mapping or a building-block for a simple LIFO index/handle allocator.
22 |
23 | ## License
24 |
25 | Dual-licensed under the 3-Clause BSD and Unlicense licenses.
26 |
--------------------------------------------------------------------------------
/ijha_h32.h:
--------------------------------------------------------------------------------
1 | /* clang-format off */
2 |
3 | /*
4 | ijha_h32 : IncredibleJunior HandleAllocator 32-bit Handles - v1.0
5 |
6 | In many situations it's desirable to refer to objects/resources by handles
7 | instead of pointers. In addition to memory safety, like detecting double frees
8 | and references to freed/reallocated memory, it allows the implementation to
9 | hide its internals and reorganize data without changing the public API.
10 | Andre Weissflog (@floooh) goes into this in great detail in
11 | 'Handles are the better pointers' [1].
12 |
13 | ijha_h32 is a runtime-configurable thread-safe FIFO/LIFO handle allocator with
14 | handles that have a user-configurable number of userflag bits and a
15 | variable number of generation bits. Memory usage is 4 bytes per handle.
16 |
17 | ijha_h32 and its many, often unpublished, predecessors were originally inspired
18 | by Niklas Gray's (@niklasfrykholm) blogpost about packed arrays [2],
19 | which is recommended reading.
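To give a feel for the API up front, here is a minimal usage sketch (a hedged
example, not a complete program: it assumes a LIFO, non-thread-safe setup with
no userflags and no userdata; the capacity of 64 and the stack-allocated
backing memory are illustrative only):

   unsigned memory[64];  // >= ijha_h32_memory_size_needed(64, 0, 0) bytes
   struct ijha_h32 ha;
   unsigned handle;

   if (ijha_h32_init_no_inlinehandles(&ha, 64, 0, 0, IJHA_H32_INIT_LIFO, memory) != IJHA_H32_INIT_NO_ERROR)
      return; // requested configuration not supported

   if (ijha_h32_acquire(&ha, &handle) != IJHA_H32_INVALID_INDEX) {
      // use the handle ...
      ijha_h32_release(&ha, handle); // 'handle' is stale from here on
      // ijha_h32_valid(&ha, handle) now evaluates to false
   }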
20 |
21 | The following properties hold for handles (all of Niklas' requirements in [2]):
22 |
23 | - 1-1 mapping between a valid object/resource and a handle
24 | - stale handles can be detected
25 | - lookup from handle to object/resource is fast (only a mask operation)
26 | - adding and removing handles should be fast
27 | - optional userflags per handle (not present in [2])
28 |
29 | Handles are 32 bits and can have a user-configurable number of userflag bits
30 | and a variable number of generation bits, which depends on the number of
31 | requested bits for userflags and the number of bits needed to represent the
32 | requested max number of handles.
33 |
34 | The generation part of the handle dictates how many times a handle can be reused
35 | before giving a false positive 'is-valid' answer.
36 |
37 | All valid handles are guaranteed to never be 0, which guarantees that the
38 | 'clear to zero is initialization' pattern works. In fact a valid handle is
39 | guaranteed to never be in [0, capacity mask], where capacity mask is the configured
40 | max number of handles (rounded up to a power of 2) minus 1.
41 |
42 | Each time a handle is reused the generation part of the handle is increased,
43 | provided >0 generation bits have been reserved. How many times a handle can be
44 | reused, before giving a false positive 'is-valid' answer, depends on how many free
45 | slots there are (if a FIFO queue is used) and the number of generation bits.
46 | Once a handle is acquired from the queue, it can be reused
47 | 2^(num generation bits)-1 times before returning a false positive.
48 |
49 | The _optional_ userflags are stored before the most significant bit (MSB)
50 | of the 32-bit handle by default.
51 |
52 | MSB                                                                          LSB
53 | +--------------------------------------------------------------------------------+
54 | | in-use-bit | _optional_ userflags | generation | sparse-index or freelist next |
55 | +--------------------------------------------------------------------------------+
56 |
57 | If the handle allocator is initialized with the flag 'IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT'
58 | the handle layout changes to the following:
59 |
60 | MSB                                                                          LSB
61 | +--------------------------------------------------------------------------------+
62 | | _optional_ userflags | generation | in-use-bit | sparse-index or freelist next |
63 | +--------------------------------------------------------------------------------+
64 |
65 | A newly initialized handle allocator starts allocating handles with the sparse index
66 | going from [0, max number of handles) (*)
67 |
68 | Storing the in-use-bit in the MSB, coupled with the fact that newly allocated
69 | handles start at (sparse) index 0 (*), enables defining constants that are
70 | independent of how many handles the handle allocator was initialized with, which
71 | would not be the case if the in-use-bit were stored between the sparse index and
72 | generation part of the handle.
73 |
74 | (*) iff it's initialized with the thread-safe flag, then it starts at 1,
75 | as 0 is used as the end-of-list/sentinel node.
76 |
77 | Please refer to the 'ijha_h32_test_constant_handles'-test at the end of the file
78 | which goes into greater detail showing how to set this up and get back the sentinel
79 | node that is 'lost' in the thread-safe version (with some caveats though).
80 |
81 | The ijha_h32 is initialized with a memory area and information about the size of
82 | the _optional_ userdata and offsets to handles.
This enables both having handles 83 | 'external'/'non-inline' to the userdata, 'internal'/'inlined' in the userdata 84 | and with _no_ userdata at all, as it is optional. 85 | 86 | Inline in this context means that the handle is 'inlined'/'embedded' in the 87 | userdata, for example: 88 | 89 | struct MyObject { 90 | unsigned ijha_h32_handle; 91 | float x, y, z; 92 | }; 93 | 94 | Which one, 'inline' or 'non-inline', to choose depends on situation, 95 | and preconditions, and dictates how the resulting memory layout looks like. 96 | Note that if using handles 'non-inline' the user must be wary of alignment 97 | requirements of the userdata as the userdata is interleaved with handles. 98 | 99 | H: Handle 100 | UD: UserData 101 | 102 | No userdata: [H][H][H][...] 103 | Userdata with 'non-inline' handles: [H][UD][H][UD][H][UD][...] 104 | Userdata with 'inline' handles: [UD][UD][UD][...] 105 | 106 | Please refer to the 'ijha_h32_init_no_inlinehandles' and 107 | 'ijha_h32_init_inlinehandles' helper macros when initializing the handle allocator. 108 | 109 | This file provides both the interface and the implementation. 110 | The handle allocator is implemented as a stb-style header-file library[3] 111 | which means that in *ONE* source file, put: 112 | 113 | #define IJHA_H32_IMPLEMENTATION 114 | // if custom assert wanted (and no dependencies on assert.h) 115 | #define IJHA_H32_assert custom_assert 116 | // #define IJHA_H32_NO_THREADSAFE_SUPPORT // to disable the thread-safe versions 117 | #include "ijha_h32.h" 118 | 119 | Other source files should just include ijha_h32.h 120 | 121 | EXAMPLES/UNIT TESTS 122 | Usage examples+tests is at the bottom of the file in the IJHA_H32_TEST section. 123 | LICENSE 124 | See end of file for license information 125 | 126 | REVISIONS 127 | 1.0 (2022-08-12) First version 128 | 'IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT' flag added 129 | 1.1 (2024-02-29) Added 'ijha_h32_memory_size_allocated' 130 | *Breaking Change* 131 | Removed unused flags parameter from 'ijha_h32_memory_size_needed'. 
132 | When upgrading just remove 'ijha_flags' parameter from call (last parameter) 133 | 134 | References: 135 | [1] https://floooh.github.io/2018/06/17/handles-vs-pointers.html 136 | [2] http://bitsquid.blogspot.se/2011/09/managing-decoupling-part-4-id-lookup.html 137 | [3] https://github.com/nothings/stb 138 | 139 | */ 140 | 141 | #ifndef IJHA_H32_INCLUDED_H 142 | #define IJHA_H32_INCLUDED_H 143 | 144 | #ifdef __cplusplus 145 | extern "C" { 146 | #endif 147 | 148 | #if defined(IJHA_H32_STATIC) 149 | #define IJHA_H32_API static 150 | #else 151 | #define IJHA_H32_API extern 152 | #endif 153 | 154 | #define IJHA_H32_INVALID_INDEX ((unsigned)-1) 155 | 156 | struct ijha_h32; 157 | 158 | typedef unsigned ijha_h32_acquire_func(struct ijha_h32 *self, unsigned userflags, unsigned *handle_out); 159 | typedef unsigned ijha_h32_release_func(struct ijha_h32 *self, unsigned handle); 160 | 161 | struct ijha_h32 { 162 | void *handles; 163 | 164 | ijha_h32_acquire_func *acquire_func; 165 | ijha_h32_release_func *release_func; 166 | 167 | unsigned flags_num_userflag_bits; 168 | unsigned handles_stride_userdata_offset; 169 | 170 | unsigned size; 171 | unsigned capacity; 172 | 173 | unsigned capacity_mask; 174 | unsigned generation_mask; 175 | unsigned userflags_mask; 176 | 177 | unsigned in_use_bit; 178 | 179 | /* enqueue/add/put items at the back (+dequeue/remove/get items from the front _iff_ LIFO) */ 180 | unsigned freelist_enqueue_index; 181 | /* dequeue/remove/get items from the front (FIFO) */ 182 | unsigned freelist_dequeue_index; 183 | }; 184 | 185 | /* max number of handles does _not_ have to be power of two. 186 | * NB: number of usable handles is only guaranteed to equal max number of handles 187 | * if the handle allocator is 'pure LIFO' (i.e not thread-safe (*) or FIFO). 188 | * add 1 to max number of handles in the event that this is really needed in 189 | * those cases. 190 | * 191 | * (*) the sentinel node in the thread-safe can be, with some caveats, used. 
192 | * 193 | * NB: size of 'struct ijha_h32' is *NOT* included 194 | */ 195 | IJHA_H32_API unsigned ijha_h32_memory_size_needed(unsigned max_num_handles, unsigned userdata_size_in_bytes_per_item, int inline_handles); 196 | 197 | /* returns the number of bytes allocated for instance, the inverse of 'ijha_h32_memory_size_needed' 198 | * NB: size of 'struct ijha_h32' is *NOT* included */ 199 | #define ijha_h32_memory_size_allocated(self) ((self)->capacity * ijha_h32_handle_stride((self)->handles_stride_userdata_offset)) 200 | 201 | enum ijha_h32_init_res { 202 | IJHA_H32_INIT_NO_ERROR = 0, 203 | IJHA_H32_INIT_CONFIGURATION_UNSUPPORTED = 1 << 0, /* the requested userflag bits + num bits needed to represent a handle could not fit into a 32 bit handle */ 204 | IJHA_H32_INIT_THREADSAFE_UNSUPPORTED = 1 << 1, 205 | IJHA_H32_INIT_USERDATA_TOO_BIG = 1 << 2, 206 | IJHA_H32_INIT_HANDLE_OFFSET_TOO_BIG = 1 << 3, /* offset to handle is too big */ 207 | IJHA_H32_INIT_HANDLE_NON_INLINE_SIZE_TOO_BIG = 1 << 4, 208 | IJHA_H32_INIT_INVALID_INPUT_FLAGS = 1 << 5 209 | }; 210 | 211 | enum ijha_h32_init_flags { 212 | IJHA_H32_INIT_LIFO = 1 << 6, 213 | IJHA_H32_INIT_FIFO = 1 << 7, 214 | IJHA_H32_INIT_LIFOFIFO_MASK = 0xc0, 215 | 216 | /* NB: FIFO is unsupported in thread-safe version, just LIFO */ 217 | IJHA_H32_INIT_THREADSAFE = 1 << 8, 218 | 219 | /* disable default behaviour of storing the "in use"-bit in MSB of handle 220 | and instead uses the bit after the bits used to represent the sparse index */ 221 | IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT = 1 << 9 222 | }; 223 | 224 | /* please refer to 'ijha_h32_init_no_inlinehandles' and 'ijha_h32_init_inlinehandles' helper macros */ 225 | IJHA_H32_API int ijha_h32_initex(struct ijha_h32 *self, unsigned max_num_handles, unsigned num_userflag_bits, unsigned non_inline_handle_size_bytes, unsigned handle_offset, unsigned userdata_size_in_bytes_per_item, unsigned ijha_flags, void *memory); 226 | 227 | /* initialize without 'inlined' handles in _optional_ userdata, handles is 228 | * external to the _optional_ userdata. 229 | * 230 | * resulting in either: 231 | * 232 | * H: Handle 233 | * UD: UserData 234 | * 235 | * #1: [H][UD][H][UD][...] (with userdata) memory-layout 236 | * 237 | * or 238 | * 239 | * #2: [H][H][H][...] (without userdata) memory-layout 240 | * 241 | * if the ijha_h32-instance should just be used for allocating handles, and no 242 | * other interleaved data, then set the userdata_size_in_bytes_per_item to 0, 243 | * yielding the memory layout in #2 244 | * 245 | * NB: if using userdata be wary the alignment requirement of the userdata as it 246 | * is interleaved with handles. alignment requirement greater than 4 can not 247 | * be serviced and must be handled by user (structure modification or pragma pack) 248 | * 249 | * 'num_userflag_bits' is the _optional_ number of bits that will be reserved in a 250 | * handle for user storage, stored before the most significant bit of the 32-bit handle 251 | * 252 | * 'max_num_handles' does _not_ have to be power of two. 253 | * 254 | * NB: the max number of handles specified is not the same as usable handles 255 | * post initialization as 1 has to be reserved for bookkeeping, 256 | * in non-'pure LIFO' configuration. 257 | * 258 | * use 'ijha_h32_memory_size_needed' to calculate how much memory is needed. 
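 *
 * ex (a minimal sketch; 'struct MyUserData', the malloc-based allocation and the
 *     capacity of 100 are illustrative only, assuming the userdata alignment is <= 4):
 *
 *    unsigned nbytes = ijha_h32_memory_size_needed(100, sizeof(struct MyUserData), 0);
 *    void *memory = malloc(nbytes);
 *    struct ijha_h32 ha;
 *    int res = ijha_h32_init_no_inlinehandles(&ha, 100, 0, sizeof(struct MyUserData),
 *                                             IJHA_H32_INIT_LIFO, memory);
 *    // use 'ha' only if res == IJHA_H32_INIT_NO_ERROR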
259 | * 260 | * ijha_flags = ORed ijha_h32_init_flags 261 | * 262 | * NB: future operations on self is undefined if return value, which is a combination 263 | * of ijha_h32_init_res ORed together, is not equal to IJHA_H32_INIT_NO_ERROR. 264 | */ 265 | #define ijha_h32_init_no_inlinehandles(self, max_num_handles, num_userflag_bits, userdata_size_in_bytes_per_item, ijha_flags, memory) ijha_h32_initex((self), (max_num_handles), (num_userflag_bits), sizeof(unsigned), 0, (userdata_size_in_bytes_per_item), (ijha_flags), (void*)(memory)) 266 | 267 | /* initialize with handles inlined in userdata (UD) resulting in a 268 | * [UD][UD][UD][...] memory-layout 269 | * 270 | * ex: 271 | * struct UserdataWithInlineHandle { 272 | * void *p; 273 | * int flags; 274 | * unsigned inline_handle; 275 | * }; 276 | * 277 | * ijha_h32_init_inlinehandles(self, max_num_handles, num_userflag_bits, 278 | * sizeof(struct UserdataWithInlineHandle), 279 | * offsetof(struct UserdataWithInlineHandle, inline_handle), 280 | * ijha_flags, memory) 281 | * 282 | * 'num_userflag_bits' is the _optional_ number of bits that will be reserved in a 283 | * handle for user storage, stored before the most significant bit of the 32-bit handle 284 | * 285 | * 'max_num_handles' does _not_ have to be power of two. 286 | * 287 | * NB: the memory that is used when initializing the handle allocator is the 288 | * userdata/user defined structures hence the memory must be aligned according 289 | * to the memory alignment requirements of the userdata. 290 | * 291 | * NB: the max number of handles specified is not the same as usable handles 292 | * post initialization as 1 has to be reserved for bookkeeping, 293 | * in non-'pure LIFO' configuration. 294 | * 295 | * use 'ijha_h32_memory_size_needed' to calculate how much memory is needed. 296 | * 297 | * ijha_flags = ORed ijha_h32_init_flags 298 | * 299 | * NB: future operations on self is undefined if return value, which is a combination 300 | * of ijha_h32_init_res ORed together, is not equal to IJHA_H32_INIT_NO_ERROR. 301 | */ 302 | #define ijha_h32_init_inlinehandles(self, max_num_handles, num_userflag_bits, userdata_size_in_bytes_per_item, byte_offset_to_handle, ijha_flags, memory) ijha_h32_initex((self), (max_num_handles), (num_userflag_bits), 0, (byte_offset_to_handle), (userdata_size_in_bytes_per_item), (ijha_flags), (memory)) 303 | 304 | /* reset to initial state (as if no handles been used) */ 305 | IJHA_H32_API void ijha_h32_reset(struct ijha_h32 *self); 306 | 307 | #define ijha_h32_is_fifo(self) (((self)->flags_num_userflag_bits&IJHA_H32_INIT_FIFO)==IJHA_H32_INIT_FIFO) 308 | 309 | /* how many handles can be used */ 310 | #define ijha_h32_capacity(self) ((self)->capacity - (((self)->flags_num_userflag_bits&(IJHA_H32_INIT_FIFO|IJHA_H32_INIT_THREADSAFE)) ? 1 : 0)) 311 | 312 | /* acquires a handle, stored in handle_out. 313 | * returns the index of the handle on success, IJHA_H32_INVALID_INDEX when all 314 | * handles is used. 315 | * 316 | * if no userflags was reserved/requested on initialization, use 'ijha_h32_acquire' 317 | * 318 | * if userflags was reserved/requested on initialization, use 'ijha_h32_acquire_userflags' 319 | * 320 | * NB: if the handles is inlined in the userdata then extra care has to be taken 321 | * when initializing the userdata after successfully acquired a handle, as 322 | * the handle contains bookkeeping information 323 | * (generation/freelist traversal information/etc). 
324 | * 325 | * ex: 326 | * struct UserdataWithInlineHandle { 327 | * unsigned inlined_handle; 328 | * unsigned char payload[60]; 329 | * }; 330 | * 331 | * unsigned handle; 332 | * unsigned sparse_index = ijha_h32_acquire(instance, &handle); 333 | * struct UserdataWithInlineHandle *userdata = ijha_h32_userdata(struct UserdataWithInlineHandle*, instance, handle); 334 | * // NB: extra care has to be taken in order not to overwrite the handle information 335 | * memset(ijha_h32_pointer_add(void*, userdata, sizeof handle), 336 | * 0, 337 | * sizeof *userdata-sizeof handle); 338 | * 339 | * NB: the userflags is stored before the most significant bit, so user may or 340 | * may not need to shift the userflags. use the 'ijha_h32_userflags_from_handle' 341 | * and 'ijha_h32_userflags_to_handle' helper macros to transform userflags 342 | * back and forth. see notes by the 'ijha_h32_userflags_'-macros. 343 | * 344 | * ex: 345 | * enum Color { 346 | * RED = 0, GREEN = 1, BLUE = 2, YELLOW = 3 347 | * }; 348 | * 349 | * initialize the ijha_h32 instance with 2 userflag bits [0,3] 350 | * unsigned num_userflag_bits = 2; 351 | * unsigned original_userflags = (unsigned)YELLOW; 352 | * unsigned handle_userflags = ijha_h32_userflags_to_handle_bits(original_userflags, num_userflag_bits); 353 | * unsigned handle; 354 | * unsigned sparse_index = ijha_h32_acquire_userflags(instance, handle_userflags, &handle); 355 | * unsigned stored_userflags = ijha_h32_userflags(instance, handle); 356 | * this now holds (given that the acquire succeeded) : 357 | * original_userflags == ijha_h32_userflags_from_handle_bits(handle, num_userflag_bits) 358 | * handle_userflags == stored_userflags 359 | */ 360 | #define ijha_h32_acquire_userflags(self, userflags, handle_out) ((self)->acquire_func)((self), (userflags), (handle_out)) 361 | #define ijha_h32_acquire(self, handle_out) ijha_h32_acquire_userflags((self), 0, (handle_out)) 362 | 363 | /* index of the handle (stable, i.e. 
will not move) */ 364 | #define ijha_h32_index(self, handle) ((self)->capacity_mask & (handle)) 365 | 366 | /* if index or handle is in use 367 | * NB: 'ijha_h32_in_use' checks the passed in handle, _NOT_ the stored handle */ 368 | #define ijha_h32_in_use_bit(self) ((self)->in_use_bit) 369 | #define ijha_h32_in_use(self, handle) ((handle)&ijha_h32_in_use_bit((self))) 370 | #define ijha_h32_in_use_index(self, index) ijha_h32_in_use((self), *ijha_h32_handle_info_at((self), (index))) 371 | 372 | #define ijha_h32_in_use_msb(self) (ijha_h32_in_use_bit(self)&0x80000000) 373 | 374 | #define ijha_h32_handle_stride(v) ((v)&0x0000ffffu) 375 | #define ijha_h32_handle_offset(v) (((v)&0x00ff0000u) >> 16) 376 | #define ijha_h32_userdata_offset(v) (((v)&0xff000000u) >> 24) 377 | 378 | #define ijha_h32_pointer_add(type, p, num_bytes) ((type)((unsigned char *)(p) + (num_bytes))) 379 | 380 | /* pointer to handle */ 381 | #define ijha_h32_handle_info_at(self, index) ijha_h32_pointer_add(unsigned *, (self)->handles, ijha_h32_handle_offset((self)->handles_stride_userdata_offset) + ijha_h32_handle_stride((self)->handles_stride_userdata_offset) * (index)) 382 | 383 | #define ijha_h32_valid_mask(self, handle, handlemask) (((self)->capacity > ((handle) & (self)->capacity_mask)) && ijha_h32_in_use((self), (handle)) && ((*ijha_h32_handle_info_at((self), ((handle) & (self)->capacity_mask)) & (handlemask)) == ((handle) & (handlemask)))) 384 | /* if handle is valid/active */ 385 | #define ijha_h32_valid(self, handle) ijha_h32_valid_mask((self), (handle), (0xffffffffu)) 386 | 387 | /* retrieve the stored userflags from handle or index (assumes the handle is valid, 388 | * use 'ijha_h32_valid' beforehand if unsure) */ 389 | #define ijha_h32_userflags(self, handle_or_index) *ijha_h32_handle_info_at((self), ijha_h32_index((self), (handle_or_index)))&(self->userflags_mask) 390 | /* returns the old userflags */ 391 | IJHA_H32_API unsigned ijha_h32_userflags_set(struct ijha_h32 *self, unsigned handle, unsigned userflags); 392 | 393 | /* helper macros for transforming the userflags stored in the handle back and forth. 394 | * more often than not the userflags stored in handles is 0 based, think enum-types / 395 | * constants starting from 0 / etc, which user can not, and shall not for the sake 396 | * of conforming to a handle allocator, change. 
the '_bits' versions is when you 397 | * know the number of bits at the call-site, which often is the case, for other 398 | * times the number of userflags-bits is stored in the instance */ 399 | #define ijha_h32_userflags_num_bits(self) ((self)->flags_num_userflag_bits&31) 400 | 401 | #define ijha_h32_userflags_to_handle_bits(self, userflags, num_userflag_bits) (((unsigned)(userflags))<<((31+!ijha_h32_in_use_msb(self))-(num_userflag_bits))) 402 | #define ijha_h32_userflags_to_handle(self, userflags) ijha_h32_userflags_to_handle_bits(self, userflags, ijha_h32_userflags_num_bits((self))) 403 | 404 | #define ijha_h32_userflags_from_handle_bits(self, handle, num_userflag_bits) ((((handle)&(0xffffffff>>!!ijha_h32_in_use_msb(self)))>>((31+!ijha_h32_in_use_msb(self))-(num_userflag_bits)))) 405 | #define ijha_h32_userflags_from_handle(self, handle) ijha_h32_userflags_from_handle_bits(self, handle, ijha_h32_userflags_num_bits((self))) 406 | 407 | /* retrieve pointer to userdata of handle (assumes instance was initialized with userdata) 408 | * NB: 'ijha_h32_userdata' assumes that the handle/index is valid 409 | * 'ijha_h32_userdata_checked' does a valid check beforehand, but assumes it is passed a handle */ 410 | #define ijha_h32_userdata(userdata_type, self, handle_or_index) ijha_h32_pointer_add(userdata_type, (self)->handles, ijha_h32_handle_stride((self)->handles_stride_userdata_offset) * (ijha_h32_index((self), (handle_or_index))) + ijha_h32_userdata_offset((self)->handles_stride_userdata_offset)) 411 | #define ijha_h32_userdata_checked(userdata_type, self, handle) (ijha_h32_valid(self, handle) ? ijha_h32_userdata(userdata_type, self, handle) : 0) 412 | 413 | /* release the handle back to the pool thus making it invalid. 414 | * returns the index of the handle if the handle was valid, IJHA_H32_INVALID_INDEX if invalid. 
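 *
 * ex (a sketch; 'instance' and 'handle' are assumed to be an initialized
 * ijha_h32 instance and a previously acquired handle):
 *
 *    unsigned idx = ijha_h32_release(instance, handle);  // index of the handle
 *    // a second release of the same (now stale) handle is rejected:
 *    // ijha_h32_release(instance, handle) == IJHA_H32_INVALID_INDEX
 *    // and ijha_h32_valid(instance, handle) is 0 from here on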
*/
415 | #define ijha_h32_release(self, handle) ((self)->release_func)((self), handle)
416 |
417 | #ifdef __cplusplus
418 | }
419 | #endif
420 |
421 | #endif /* IJHA_H32_INCLUDED_H */
422 |
423 | #if defined(IJHA_H32_IMPLEMENTATION) && !defined(IJHA_H32_IMPLEMENTATION_DEFINED)
424 |
425 | #define IJHA_H32_IMPLEMENTATION_DEFINED (1)
426 |
427 | #ifndef IJHA_H32_NO_THREADSAFE_SUPPORT
428 | #if _WIN32
429 | #ifdef __cplusplus
430 | #define IJHA_H32__EXTERNC_DECL_BEGIN extern "C" {
431 | #define IJHA_H32__EXTERNC_DECL_END }
432 | #else
433 | #define IJHA_H32__EXTERNC_DECL_BEGIN
434 | #define IJHA_H32__EXTERNC_DECL_END
435 | #endif
436 |
437 | IJHA_H32__EXTERNC_DECL_BEGIN
438 | long _InterlockedCompareExchange(long volatile *Destination, long Exchange, long Comparand);
439 | IJHA_H32__EXTERNC_DECL_END
440 |
441 | #pragma intrinsic(_InterlockedCompareExchange)
442 | #define IJHA_H32_InterlockedCompareExchange(ptr, exch, comp) _InterlockedCompareExchange((long volatile *)(ptr), (exch), (comp))
443 | #define IJHA_H32_CAS(ptr, new, old) ((old)==(unsigned)IJHA_H32_InterlockedCompareExchange((ptr), (new), (old)))
444 |
445 | IJHA_H32__EXTERNC_DECL_BEGIN
446 | long _InterlockedIncrement(long volatile *Addend);
447 | long _InterlockedDecrement(long volatile *Addend);
448 | IJHA_H32__EXTERNC_DECL_END
449 |
450 | #pragma intrinsic(_InterlockedIncrement)
451 | #pragma intrinsic(_InterlockedDecrement)
452 | /* returns the result of the operation */
453 | #define IJHA_H32_InterlockedIncrement(ptr) _InterlockedIncrement((long volatile*)(ptr))
454 | #define IJHA_H32_InterlockedDecrement(ptr) _InterlockedDecrement((long volatile*)(ptr))
455 |
456 | #define IJHA_H32_HAS_ATOMICS (1)
457 | #else
458 | #define IJHA_H32_CAS(ptr, new, old) __sync_bool_compare_and_swap((ptr), (old), (new))
459 |
460 | /* returns the result of the operation */
461 | #define IJHA_H32_InterlockedIncrement(ptr) __sync_add_and_fetch((ptr), 1)
462 | #define IJHA_H32_InterlockedDecrement(ptr) __sync_sub_and_fetch((ptr), 1)
463 |
464 | #define IJHA_H32_HAS_ATOMICS (1)
465 | #endif
466 | #endif /* ifndef IJHA_H32_NO_THREADSAFE_SUPPORT */
467 |
468 | #ifndef IJHA_H32_assert
469 | #include <assert.h>
470 | #define IJHA_H32_assert assert
471 | #endif
472 |
473 | /* handle the runtime-option where the "in use"-bit is stored.
474 | * - "in use"-bit is stored in MSB -> (capacity_mask+1) is the first generation bit
475 | * or
476 | * - "in use"-bit is (capacity_mask+1) -> (capacity_mask+1)<<1 is the first generation bit */
477 | #define ijha_h32__generation_add(self) (((self)->flags_num_userflag_bits&IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT)?(((self)->capacity_mask+1)<<1):(((self)->capacity_mask+1)))
478 |
479 | IJHA_H32_API unsigned ijha_h32_memory_size_needed(unsigned max_num_handles, unsigned userdata_size_in_bytes_per_item, int inline_handles)
480 | {
481 | return max_num_handles * (sizeof(unsigned)*(inline_handles ?
0 : 1) + userdata_size_in_bytes_per_item); 482 | } 483 | 484 | static unsigned ijha_h32__acquire_userflags_lifo_fifo(struct ijha_h32 *self, unsigned userflags, unsigned *handle_out) 485 | { 486 | unsigned current_cursor = self->freelist_dequeue_index; 487 | unsigned in_use_bit = ijha_h32_in_use_bit(self); 488 | unsigned userflags_mask = self->userflags_mask; 489 | unsigned maxnhandles = self->capacity - ijha_h32_is_fifo(self); 490 | IJHA_H32_assert((userflags_mask & userflags) == userflags); 491 | 492 | if (self->size == maxnhandles) { 493 | /* NOTE 494 | * if only used as a LIFO queue and one don't need/want to keep dense<->sparse mapping, 495 | * or the need of number acquired handles, then the 'size' bookkeeping could be skipped altogether. 496 | * a check 'if (*ijha_h32_handle_info_at(current_cursor)&in_use_bit)' will tell if all handles is used */ 497 | *handle_out = 0; 498 | return IJHA_H32_INVALID_INDEX; 499 | } else { 500 | unsigned *handle = ijha_h32_handle_info_at(self, current_cursor); 501 | unsigned current_handle = *handle; 502 | unsigned generation_mask = self->generation_mask; 503 | unsigned generation_to_add = ijha_h32__generation_add(self); 504 | 505 | unsigned new_cursor = current_handle & self->capacity_mask; 506 | unsigned new_generation = generation_mask & (current_handle + generation_to_add); 507 | unsigned new_handle = userflags | new_generation | in_use_bit | current_cursor; 508 | 509 | IJHA_H32_assert(!generation_mask || (*handle & generation_mask) != new_generation); /* no generation or has changed generation */ 510 | 511 | *handle = *handle_out = new_handle; 512 | 513 | self->freelist_dequeue_index = new_cursor; 514 | ++self->size; 515 | return current_cursor; 516 | } 517 | } 518 | 519 | #if IJHA_H32_HAS_ATOMICS 520 | 521 | static unsigned ijha_h32__acquire_lifo_ts(struct ijha_h32 *self, unsigned userflags, unsigned *handle_out) 522 | { 523 | unsigned *current_freelist_index_serial = &self->freelist_dequeue_index; 524 | unsigned generation_mask = self->generation_mask; 525 | unsigned capacity_mask = self->capacity_mask; 526 | unsigned freelist_serial_add = capacity_mask+1; 527 | unsigned in_use_bit = ijha_h32_in_use_bit(self); 528 | unsigned handle_generation_add = ijha_h32__generation_add(self); 529 | 530 | IJHA_H32_assert((self->userflags_mask & userflags) == userflags); 531 | 532 | for (;;) { 533 | unsigned next_freelist_index; 534 | unsigned new_freelist_index_serial; 535 | unsigned old_freelist_index_serial = *current_freelist_index_serial; 536 | unsigned current_index = old_freelist_index_serial&capacity_mask; 537 | unsigned *handle = ijha_h32_handle_info_at(self, current_index), current_handle = *handle; 538 | 539 | /* first slot is used as a sentinel/end-of-list */ 540 | if (current_index == 0) { 541 | *handle_out = 0; 542 | return IJHA_H32_INVALID_INDEX; 543 | } 544 | 545 | next_freelist_index = current_handle&capacity_mask; 546 | 547 | new_freelist_index_serial = ((old_freelist_index_serial + freelist_serial_add)&~capacity_mask) | next_freelist_index; 548 | IJHA_H32_assert((old_freelist_index_serial&~capacity_mask) != (new_freelist_index_serial&~capacity_mask)); 549 | 550 | if (IJHA_H32_CAS(current_freelist_index_serial, new_freelist_index_serial, old_freelist_index_serial)) { 551 | unsigned new_generation = generation_mask & (current_handle + handle_generation_add); 552 | unsigned new_handle = userflags | new_generation | in_use_bit | current_index; 553 | 554 | IJHA_H32_assert(!generation_mask || (current_handle & generation_mask) != new_generation); 
/* no generation or has changed generation */ 555 | 556 | *handle = *handle_out = new_handle; 557 | IJHA_H32_InterlockedIncrement(&self->size); 558 | 559 | return current_index; 560 | } 561 | } 562 | } 563 | 564 | #endif /* IJHA_H32_HAS_ATOMICS */ 565 | 566 | static unsigned ijha_h32__release_fifo(struct ijha_h32 *self, unsigned handle) 567 | { 568 | unsigned in_use_bit = ijha_h32_in_use_bit(self); 569 | unsigned idx = handle & self->capacity_mask; 570 | unsigned *stored_handle = ((self->capacity > idx) && (handle & in_use_bit)) ? ijha_h32_handle_info_at(self, idx) : 0; 571 | 572 | if (stored_handle && *stored_handle == handle) { 573 | /* clear in_use-bit of current */ 574 | *stored_handle &= ~in_use_bit; 575 | 576 | stored_handle = ijha_h32_handle_info_at(self, self->freelist_enqueue_index); 577 | IJHA_H32_assert((*stored_handle & in_use_bit) == 0); 578 | *stored_handle = (*stored_handle & ~self->capacity_mask) | idx; 579 | 580 | self->freelist_enqueue_index = idx; 581 | --self->size; 582 | return idx; 583 | } 584 | 585 | return IJHA_H32_INVALID_INDEX; 586 | } 587 | 588 | static unsigned ijha_h32__release_lifo(struct ijha_h32 *self, unsigned handle) 589 | { 590 | unsigned in_use_bit = ijha_h32_in_use_bit(self); 591 | unsigned idx = handle & self->capacity_mask; 592 | unsigned *stored_handle = ((self->capacity > idx) && (handle & in_use_bit)) ? ijha_h32_handle_info_at(self, idx) : 0; 593 | 594 | if (stored_handle && *stored_handle == handle) { 595 | unsigned current_cursor = self->freelist_dequeue_index; 596 | /* clear in_use-bit and store current (soon the be old) cursor */ 597 | *stored_handle = ~in_use_bit & ((handle & ~self->capacity_mask) | current_cursor); 598 | self->freelist_dequeue_index = idx; 599 | 600 | --self->size; 601 | return idx; 602 | } 603 | 604 | return IJHA_H32_INVALID_INDEX; 605 | } 606 | 607 | #if IJHA_H32_HAS_ATOMICS 608 | 609 | static unsigned ijha_h32__release_lifo_ts(struct ijha_h32 *self, unsigned handle) 610 | { 611 | unsigned *current_freelist_index_serial = &self->freelist_dequeue_index; 612 | unsigned capacity_mask = self->capacity_mask; 613 | unsigned in_use_bit = ijha_h32_in_use_bit(self); 614 | unsigned idx = handle & capacity_mask; 615 | unsigned *stored_handle = ((self->capacity > idx) && (handle & in_use_bit)) ? 
ijha_h32_handle_info_at(self, idx) : 0; 616 | 617 | if (stored_handle && *stored_handle == handle) { 618 | unsigned freelist_serial_add = capacity_mask + 1; 619 | /* clear in_use_bit and index */ 620 | handle &= ~(capacity_mask | in_use_bit); 621 | 622 | for (;;) { 623 | unsigned old_freelist_index_serial = *current_freelist_index_serial; 624 | /* increase serial and change the freelist index to that of the released handle */ 625 | unsigned new_freelist_index_serial = ((old_freelist_index_serial + freelist_serial_add)&~capacity_mask) | idx; 626 | 627 | IJHA_H32_assert((old_freelist_index_serial&~capacity_mask) != (new_freelist_index_serial&~capacity_mask)); 628 | 629 | /* store current freelist index at the place of the release handle */ 630 | *stored_handle = handle | (old_freelist_index_serial&capacity_mask); 631 | 632 | /* try to redirect freelist to current index */ 633 | if (IJHA_H32_CAS(current_freelist_index_serial, new_freelist_index_serial, old_freelist_index_serial)) 634 | break; 635 | } 636 | 637 | IJHA_H32_InterlockedDecrement(&self->size); 638 | 639 | return idx; 640 | } 641 | 642 | return IJHA_H32_INVALID_INDEX; 643 | } 644 | 645 | #endif /* IJHA_H32_HAS_ATOMICS */ 646 | 647 | #define ijha_h32__roundup(x) (--(x), (x) |= (x) >> 1, (x) |= (x) >> 2, (x) |= (x) >> 4, (x) |= (x) >> 8, (x) |= (x) >> 16, ++(x)) 648 | 649 | static unsigned ijha_h32__num_bits(unsigned n) 650 | { 651 | unsigned res = 0; 652 | while (n >>= 1) 653 | res++; 654 | return res; 655 | } 656 | 657 | IJHA_H32_API int ijha_h32_initex(struct ijha_h32 *self, unsigned max_num_handles, unsigned num_userflag_bits, unsigned non_inline_handle_size_bytes, unsigned handle_offset, unsigned userdata_size_in_bytes_per_item, unsigned ijha_flags, void *memory) 658 | { 659 | int init_res = IJHA_H32_INIT_NO_ERROR; 660 | unsigned handles_stride; 661 | unsigned userflags_mask; 662 | 663 | if ((userdata_size_in_bytes_per_item & 0xffff0000) != 0) 664 | init_res |= IJHA_H32_INIT_USERDATA_TOO_BIG; 665 | if ((non_inline_handle_size_bytes & 0xffffff00) != 0) 666 | init_res |= IJHA_H32_INIT_HANDLE_NON_INLINE_SIZE_TOO_BIG; 667 | if ((handle_offset & 0xffffff00) != 0) 668 | init_res |= IJHA_H32_INIT_HANDLE_OFFSET_TOO_BIG; 669 | 670 | self->handles = memory; 671 | self->flags_num_userflag_bits = ijha_flags; 672 | if ((self->flags_num_userflag_bits & 31) != 0) 673 | init_res |= IJHA_H32_INIT_INVALID_INPUT_FLAGS; /* erroneous flags passed in */ 674 | 675 | self->flags_num_userflag_bits |= num_userflag_bits; 676 | 677 | handles_stride = non_inline_handle_size_bytes + userdata_size_in_bytes_per_item; 678 | self->handles_stride_userdata_offset = handles_stride | (non_inline_handle_size_bytes << 24) | (handle_offset << 16); 679 | 680 | self->size = 0; 681 | self->capacity = max_num_handles; 682 | ijha_h32__roundup(max_num_handles); 683 | self->capacity_mask = max_num_handles - 1; 684 | 685 | userflags_mask = num_userflag_bits ? 
(0xffffffffu << (32 - num_userflag_bits)) : 0; 686 | 687 | self->generation_mask = ~(self->capacity_mask | userflags_mask); 688 | 689 | if ((ijha_flags&IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT) == 0) { 690 | self->in_use_bit = 0x80000000; 691 | self->generation_mask = (self->generation_mask >> 1) & ~self->capacity_mask; /* in_use-bit is the MSB */ 692 | 693 | self->userflags_mask = userflags_mask >> 1; 694 | } else { 695 | self->in_use_bit = self->capacity_mask+1; 696 | 697 | self->generation_mask &= self->generation_mask << 1; /* mask out the in_use-bit */ 698 | 699 | self->userflags_mask = userflags_mask; 700 | } 701 | 702 | if ((ijha_h32__num_bits(max_num_handles) - 1) + num_userflag_bits >= 32) 703 | init_res |= IJHA_H32_INIT_CONFIGURATION_UNSUPPORTED; 704 | 705 | if (ijha_flags&IJHA_H32_INIT_THREADSAFE) { 706 | #if IJHA_H32_HAS_ATOMICS 707 | if ((ijha_flags&IJHA_H32_INIT_LIFOFIFO_MASK) == IJHA_H32_INIT_FIFO) { 708 | init_res |= IJHA_H32_INIT_THREADSAFE_UNSUPPORTED; 709 | self->acquire_func = 0; 710 | self->release_func = 0; 711 | } else { 712 | self->acquire_func = &ijha_h32__acquire_lifo_ts; 713 | self->release_func = &ijha_h32__release_lifo_ts; 714 | self->flags_num_userflag_bits |= IJHA_H32_INIT_LIFO; 715 | } 716 | #else 717 | init_res |= IJHA_H32_INIT_THREADSAFE_UNSUPPORTED; 718 | self->acquire_func = 0; 719 | self->release_func = 0; 720 | #endif 721 | } else { 722 | if ((ijha_flags&IJHA_H32_INIT_LIFOFIFO_MASK) == IJHA_H32_INIT_LIFO) { 723 | self->acquire_func = &ijha_h32__acquire_userflags_lifo_fifo; 724 | self->release_func = &ijha_h32__release_lifo; 725 | } else { 726 | self->acquire_func = &ijha_h32__acquire_userflags_lifo_fifo; 727 | self->release_func = &ijha_h32__release_fifo; 728 | } 729 | } 730 | 731 | if (init_res == IJHA_H32_INIT_NO_ERROR) 732 | ijha_h32_reset(self); 733 | 734 | return init_res; 735 | } 736 | 737 | IJHA_H32_API void ijha_h32_reset(struct ijha_h32 *self) 738 | { 739 | /* always reset handles with full generation mask as then the first 740 | * allocation/acquire makes it wrap-around. this guarantees that the 741 | * handles allocated, barring any releases and handle allocator is not 742 | * initialized with 'IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT'-flag, becomes: 743 | * 744 | * (0x80000000 | 0) -> (0x80000000 | 1) -> (0x80000000 | 2) -> etc 745 | * 746 | * this can be useful if you know that there is always allocated N 747 | * objects/handles at the start and want to guarantee that the handles to 748 | * these objects always is the same, regardless of the capacity the handle 749 | * allocator is initialized with. 750 | * 751 | * NB : this is just guaranteed for the non-thread safe version, as the 752 | * thread-safe version starts with sparse index 1. this can be worked 753 | * around but the user has to jump through a few hoops in order to achieve 754 | * it. 
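 *
 * ex (a sketch of the guarantee above for the non-thread-safe case, with the
 * default in-use-bit in the MSB; the constant names are illustrative only):
 *
 *    #define MY_FIRST_HANDLE  (0x80000000u | 0)  // first acquire after init/reset
 *    #define MY_SECOND_HANDLE (0x80000000u | 1)  // second acquire after init/reset
 *
 * see 'ijha_h32_test_constant_handles' for a complete example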
755 | */
756 | unsigned i, generation_mask = self->generation_mask;
757 | self->size = 0;
758 |
759 | self->freelist_dequeue_index = 0;
760 | self->freelist_enqueue_index = self->capacity - 1;
761 |
762 | for (i = 0; i != self->capacity; ++i) {
763 | unsigned *current = ijha_h32_handle_info_at(self, i);
764 | *current = (i + 1) | generation_mask;
765 | }
766 |
767 | /* make last handle/slot loop back to 0 */
768 | *ijha_h32_handle_info_at(self, self->capacity - 1) = 0 | generation_mask;
769 |
770 | if (self->flags_num_userflag_bits & IJHA_H32_INIT_THREADSAFE)
771 | self->freelist_dequeue_index = 1; /* use the first slot as a sentinel/end-of-list */
772 | }
773 |
774 | IJHA_H32_API unsigned ijha_h32_userflags_set(struct ijha_h32 *self, unsigned handle, unsigned userflags)
775 | {
776 | unsigned ohandle, *p;
777 | IJHA_H32_assert(((userflags) & (self)->userflags_mask) == (userflags));
778 | IJHA_H32_assert(ijha_h32_valid_mask(self, handle, ~self->userflags_mask));
779 | p = ijha_h32_handle_info_at(self, handle & self->capacity_mask), ohandle = *p;
780 | *p = (ohandle & ~self->userflags_mask) | userflags;
781 |
782 | return ohandle & self->userflags_mask;
783 | }
784 |
785 | #if defined(IJHA_H32_TEST) || defined(IJHA_H32_TEST_MAIN)
786 |
787 | #ifndef IJHA_H32_memset
788 | #include <string.h>
789 | #define IJHA_H32_memset memset
790 | #endif
791 |
792 | struct ijha_h32_test_userdata {
793 | void *p;
794 | unsigned a, b, c;
795 | unsigned inline_handle;
796 | };
797 |
798 | #ifndef offsetof
799 | typedef unsigned int ijha_h32_uint32;
800 |
801 | #ifdef _MSC_VER
802 | typedef unsigned __int64 ijha_h32_uint64;
803 | #else
804 | typedef unsigned long long ijha_h32_uint64;
805 | #endif
806 |
807 | #if defined(__ppc64__) || defined(__aarch64__) || defined(_M_X64) || defined(__x86_64__) || defined(__x86_64)
808 | typedef ijha_h32_uint64 ijha_h32_uintptr;
809 | #else
810 | typedef ijha_h32_uint32 ijha_h32_uintptr;
811 | #endif
812 |
813 | #define ijha_h32_test_offsetof(st, m) ((ijha_h32_uintptr)&(((st *)0)->m))
814 | #else
815 | #define ijha_h32_test_offsetof offsetof
816 | #endif
817 |
818 | static void ijha_h32_test_inline_noinline_handles(void)
819 | {
820 | #define IJHA_TEST_MAX_NUM_HANDLES 5
821 | #if defined(IJHA_H32_HAS_ATOMICS)
822 | unsigned LIFO_FIFO_FLAGS[] = {IJHA_H32_INIT_LIFO, IJHA_H32_INIT_FIFO, IJHA_H32_INIT_THREADSAFE | IJHA_H32_INIT_LIFO};
823 | #else
824 | unsigned LIFO_FIFO_FLAGS[] = {IJHA_H32_INIT_LIFO, IJHA_H32_INIT_FIFO};
825 | #endif
826 | struct ijha_h32 l, *self = &l;
827 | unsigned idx, num = sizeof LIFO_FIFO_FLAGS / sizeof *LIFO_FIFO_FLAGS;
828 | unsigned i, j, maxnhandles, dummy, handles[IJHA_TEST_MAX_NUM_HANDLES];
829 | int init_res;
830 |
831 | for (idx = 0; idx != num*2; ++idx) {
832 | unsigned LIFO_FIFO_FLAG = LIFO_FIFO_FLAGS[idx%num];
833 |
834 | unsigned num_userflag_bits = 0;
835 | unsigned userdata_size_in_bytes_per_item = sizeof(struct ijha_h32_test_userdata);
836 | struct ijha_h32_test_userdata userdata_inlinehandles[IJHA_TEST_MAX_NUM_HANDLES];
837 |
838 | if (idx >= num)
839 | LIFO_FIFO_FLAG |= IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT;
840 |
841 | IJHA_H32_assert(sizeof userdata_inlinehandles >= ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 1));
842 | init_res = ijha_h32_init_inlinehandles(self, IJHA_TEST_MAX_NUM_HANDLES, num_userflag_bits, sizeof(struct ijha_h32_test_userdata), ijha_h32_test_offsetof(struct ijha_h32_test_userdata, inline_handle), LIFO_FIFO_FLAG, userdata_inlinehandles);
843 | IJHA_H32_assert(init_res ==
IJHA_H32_INIT_NO_ERROR); 844 | IJHA_H32_assert(ijha_h32_memory_size_allocated(self) == ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 1)); 845 | maxnhandles = ijha_h32_capacity(self); 846 | 847 | for (i = 0; i != maxnhandles; ++i) { 848 | unsigned si = ijha_h32_acquire_userflags(self, 0, &handles[i]); 849 | for (j = 0; j != i + 1; ++j) 850 | IJHA_H32_assert(ijha_h32_valid(self, handles[j])); 851 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX); 852 | } 853 | IJHA_H32_assert(ijha_h32_acquire_userflags(self, 0, &dummy) == IJHA_H32_INVALID_INDEX); 854 | for (i = 0; i != maxnhandles; ++i) { 855 | unsigned handleidx = ijha_h32_index(self, handles[i]); 856 | struct ijha_h32_test_userdata *userdata = ijha_h32_userdata(struct ijha_h32_test_userdata*, self, handleidx); 857 | unsigned *handleinfo = ijha_h32_handle_info_at(self, handleidx); 858 | IJHA_H32_assert(userdata == &userdata_inlinehandles[handleidx]); 859 | IJHA_H32_assert(handleinfo == &userdata_inlinehandles[handleidx].inline_handle); 860 | IJHA_H32_assert(userdata == ijha_h32_userdata_checked(struct ijha_h32_test_userdata*, self, handles[i])); 861 | } 862 | } 863 | 864 | for (idx = 0; idx != num*2; ++idx) { 865 | unsigned LIFO_FIFO_FLAG = LIFO_FIFO_FLAGS[idx%num]; 866 | 867 | unsigned num_userflag_bits = 0; 868 | unsigned userdata_size_in_bytes_per_item = 0; 869 | unsigned all_handles_memory[IJHA_TEST_MAX_NUM_HANDLES]; 870 | 871 | if (idx >= num) 872 | LIFO_FIFO_FLAG |= IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT; 873 | 874 | IJHA_H32_assert(sizeof all_handles_memory >= ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 0)); 875 | init_res = ijha_h32_init_no_inlinehandles(self, IJHA_TEST_MAX_NUM_HANDLES, num_userflag_bits, userdata_size_in_bytes_per_item, LIFO_FIFO_FLAG, all_handles_memory); 876 | IJHA_H32_assert(init_res == IJHA_H32_INIT_NO_ERROR); 877 | IJHA_H32_assert(ijha_h32_memory_size_allocated(self) == ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 0)); 878 | maxnhandles = ijha_h32_capacity(self); 879 | 880 | for (i = 0; i != maxnhandles; ++i) { 881 | unsigned si = ijha_h32_acquire_userflags(self, 0, &handles[i]); 882 | for (j = 0; j != i + 1; ++j) 883 | IJHA_H32_assert(ijha_h32_valid(self, handles[j])); 884 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX); 885 | } 886 | IJHA_H32_assert(ijha_h32_acquire_userflags(self, 0, &dummy) == IJHA_H32_INVALID_INDEX); 887 | for (i = 0; i != maxnhandles; ++i) { 888 | unsigned handleidx = ijha_h32_index(self, handles[i]); 889 | IJHA_H32_assert(ijha_h32_handle_info_at(self, handleidx) == &all_handles_memory[handleidx]); 890 | } 891 | } 892 | 893 | for (idx = 0; idx != num*2; ++idx) { 894 | unsigned LIFO_FIFO_FLAG = LIFO_FIFO_FLAGS[idx%num]; 895 | unsigned num_userflag_bits = 0; 896 | unsigned userdata_size_in_bytes_per_item = sizeof(struct ijha_h32_test_userdata); 897 | unsigned stride = sizeof(struct ijha_h32_test_userdata) + sizeof(unsigned); 898 | unsigned char memory_for_noinline_handles[(sizeof(struct ijha_h32_test_userdata) + sizeof(unsigned))*IJHA_TEST_MAX_NUM_HANDLES]; 899 | 900 | if (idx >= num) 901 | LIFO_FIFO_FLAG |= IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT; 902 | 903 | IJHA_H32_assert(sizeof memory_for_noinline_handles >= ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 0)); 904 | init_res = ijha_h32_init_no_inlinehandles(self, IJHA_TEST_MAX_NUM_HANDLES, num_userflag_bits, userdata_size_in_bytes_per_item, LIFO_FIFO_FLAG, 
memory_for_noinline_handles); 905 | IJHA_H32_assert(init_res == IJHA_H32_INIT_NO_ERROR); 906 | IJHA_H32_assert(ijha_h32_memory_size_allocated(self) == ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 0)); 907 | maxnhandles = ijha_h32_capacity(self); 908 | 909 | for (i = 0; i != maxnhandles; ++i) { 910 | unsigned si = ijha_h32_acquire_userflags(self, 0, &handles[i]); 911 | for (j = 0; j != i + 1; ++j) 912 | IJHA_H32_assert(ijha_h32_valid(self, handles[j])); 913 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX); 914 | } 915 | 916 | IJHA_H32_assert(ijha_h32_acquire_userflags(self, 0, &dummy) == IJHA_H32_INVALID_INDEX); 917 | for (i = 0; i != maxnhandles; ++i) { 918 | unsigned handleidx = ijha_h32_index(self, handles[i]); 919 | unsigned *handleinfo = ijha_h32_handle_info_at(self, handleidx); 920 | IJHA_H32_assert(ijha_h32_pointer_add(char *, memory_for_noinline_handles, stride*handleidx) == (char*)handleinfo); 921 | } 922 | } 923 | #undef IJHA_TEST_MAX_NUM_HANDLES 924 | } 925 | 926 | enum IJHA_H32_TestColor { 927 | IJHA_H32_TestColor_RED=0, IJHA_H32_TestColor_GREEN=1, IJHA_H32_TestColor_BLUE=2, IJHA_H32_TestColor_YELLOW=3 928 | }; 929 | 930 | static void ijha_h32_test_basic_operations(void) 931 | { 932 | #define IJHA_TEST_MAX_NUM_HANDLES 5 933 | #if defined(IJHA_H32_HAS_ATOMICS) 934 | unsigned LIFO_FIFO_FLAGS[] = {IJHA_H32_INIT_LIFO, IJHA_H32_INIT_FIFO, IJHA_H32_INIT_THREADSAFE|IJHA_H32_INIT_LIFO}; 935 | #else 936 | unsigned LIFO_FIFO_FLAGS[] = {IJHA_H32_INIT_LIFO, IJHA_H32_INIT_FIFO}; 937 | #endif 938 | unsigned ijha_h32_memory_area[IJHA_TEST_MAX_NUM_HANDLES]; 939 | unsigned handles[IJHA_TEST_MAX_NUM_HANDLES]; 940 | struct ijha_h32 l, *self = &l; 941 | unsigned idx, num = sizeof LIFO_FIFO_FLAGS / sizeof *LIFO_FIFO_FLAGS; 942 | int init_res; 943 | 944 | for (idx = 0; idx != num*2; ++idx) { 945 | unsigned LIFO_FIFO_FLAG = LIFO_FIFO_FLAGS[idx%num]; 946 | unsigned i, j, user_nbits; 947 | 948 | if (idx >= num) 949 | LIFO_FIFO_FLAG |= IJHA_H32_INIT_DONT_USE_MSB_AS_IN_USE_BIT; 950 | 951 | for (user_nbits = 0; user_nbits != 29; ++user_nbits) { 952 | unsigned maxnhandles; 953 | unsigned dummy; 954 | unsigned userflags_test; 955 | unsigned userdata_size_in_bytes_per_item = 0; 956 | IJHA_H32_memset(ijha_h32_memory_area, 0, sizeof ijha_h32_memory_area); 957 | IJHA_H32_memset(handles, 0, sizeof handles); 958 | 959 | IJHA_H32_assert(sizeof ijha_h32_memory_area >= ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 0)); 960 | init_res = ijha_h32_init_no_inlinehandles(self, IJHA_TEST_MAX_NUM_HANDLES, user_nbits, userdata_size_in_bytes_per_item, LIFO_FIFO_FLAG, ijha_h32_memory_area); 961 | IJHA_H32_assert(init_res == IJHA_H32_INIT_NO_ERROR); 962 | IJHA_H32_assert(ijha_h32_memory_size_allocated(self) == ijha_h32_memory_size_needed(IJHA_TEST_MAX_NUM_HANDLES, userdata_size_in_bytes_per_item, 0)); 963 | maxnhandles = ijha_h32_capacity(self); 964 | 965 | for (i = 0; i != maxnhandles; ++i) { 966 | unsigned si, userflags = 0; 967 | enum IJHA_H32_TestColor testcolor = IJHA_H32_TestColor_RED; 968 | if (user_nbits > 1) { 969 | testcolor = (enum IJHA_H32_TestColor)(i%4); 970 | userflags = ijha_h32_userflags_to_handle(self, testcolor); 971 | userflags_test = ijha_h32_userflags_to_handle_bits(self, testcolor, user_nbits); 972 | IJHA_H32_assert(userflags == userflags_test); 973 | userflags_test = ijha_h32_userflags_from_handle(self, userflags); 974 | IJHA_H32_assert((unsigned)testcolor == userflags_test); 975 | } 976 | si = 
ijha_h32_acquire_userflags(self, userflags, &handles[i]); 977 | IJHA_H32_assert(ijha_h32_in_use(self, handles[i])); 978 | IJHA_H32_assert(ijha_h32_in_use_index(self, si)); 979 | for (j = 0; j != i + 1; ++j) 980 | IJHA_H32_assert(ijha_h32_valid(self, handles[j])); 981 | 982 | if (user_nbits > 1) { 983 | unsigned stored_userflags = ijha_h32_userflags(self, handles[i]); 984 | unsigned userflags_from_handle = ijha_h32_userflags_from_handle(self, handles[i]); 985 | 986 | IJHA_H32_assert(stored_userflags == userflags); 987 | IJHA_H32_assert(userflags_from_handle == (unsigned)testcolor); 988 | IJHA_H32_assert(ijha_h32_userflags_from_handle_bits(self, stored_userflags, user_nbits) == (unsigned)testcolor); 989 | IJHA_H32_assert(ijha_h32_userflags_set(self, handles[i], stored_userflags) == userflags); 990 | IJHA_H32_assert(ijha_h32_userflags_set(self, handles[i], stored_userflags) == userflags); 991 | } else { 992 | /* thread-safe LIFO starts the idx at 1, as the 0 is used as end-of-list/sentinel */ 993 | unsigned idx_add = (LIFO_FIFO_FLAG&IJHA_H32_INIT_THREADSAFE) ? 1 : 0; 994 | IJHA_H32_assert(handles[i] == (ijha_h32_in_use_bit(self) | (i + idx_add))); 995 | } 996 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX); 997 | } 998 | IJHA_H32_assert(ijha_h32_acquire_userflags(self, 0, &dummy) == IJHA_H32_INVALID_INDEX); 999 | 1000 | for (userflags_test = 1; userflags_test < user_nbits; ++userflags_test) { 1001 | for (i = 0; i != maxnhandles; ++i) { 1002 | unsigned ohandle = handles[i]; 1003 | 1004 | unsigned userflag = 1 << (32 - user_nbits + userflags_test - 1); 1005 | unsigned old_userflags = ijha_h32_userflags_set(self, ohandle, userflag); 1006 | IJHA_H32_assert(userflags_test == 1 || old_userflags == (1u << (32 - user_nbits + userflags_test - 1 - 1))); 1007 | 1008 | handles[i] = (ohandle & ~self->userflags_mask) | userflag; 1009 | } 1010 | } 1011 | 1012 | for (i = 0; i != maxnhandles; ++i) { 1013 | unsigned si = ijha_h32_release(self, handles[i]); 1014 | for (j = 0; j != i + 1; ++j) 1015 | IJHA_H32_assert(!ijha_h32_valid(self, handles[j])); 1016 | for (j = i + 1; j < maxnhandles; ++j) 1017 | IJHA_H32_assert(ijha_h32_valid(self, handles[j])); 1018 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX); 1019 | } 1020 | 1021 | for (i = 0; i != maxnhandles; ++i) { 1022 | unsigned si = ijha_h32_acquire_userflags(self, 0, &handles[i]); 1023 | for (j = 0; j != i + 1; ++j) 1024 | IJHA_H32_assert(ijha_h32_valid(self, handles[j])); 1025 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX); 1026 | } 1027 | IJHA_H32_assert(ijha_h32_acquire_userflags(self, 0, &dummy) == IJHA_H32_INVALID_INDEX); 1028 | 1029 | for (i = 0; i != maxnhandles; ++i) { 1030 | unsigned si = ijha_h32_release(self, handles[i]); 1031 | for (j = 0; j != i + 1; ++j) 1032 | IJHA_H32_assert(!ijha_h32_valid(self, handles[j])); 1033 | for (j = i + 1; j < maxnhandles; ++j) 1034 | IJHA_H32_assert(ijha_h32_valid(self, handles[j])); 1035 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX); 1036 | } 1037 | 1038 | for (i = 0; i != maxnhandles; ++i) { 1039 | unsigned sia, sir; 1040 | IJHA_H32_assert(!ijha_h32_valid(self, handles[0])); 1041 | sia = ijha_h32_acquire_userflags(self, 0, &handles[0]); 1042 | IJHA_H32_assert(ijha_h32_valid(self, handles[0])); 1043 | sir = ijha_h32_release(self, handles[0]); 1044 | IJHA_H32_assert(!ijha_h32_valid(self, handles[0])); 1045 | IJHA_H32_assert(sir == sia); 1046 | } 1047 | IJHA_H32_assert((IJHA_H32_INIT_THREADSAFE&LIFO_FIFO_FLAG)== 0 || (self->size == 0)); 1048 | } 1049 | } 1050 | #undef IJHA_TEST_MAX_NUM_HANDLES 1051 | } 1052 | 1053 
| static void ijha_h32_test_constant_handles(void)
1054 | {
1055 | #define IJHA_TEST_MAX_NUM_HANDLES (9)
1056 | #if defined(IJHA_H32_HAS_ATOMICS)
1057 | unsigned LIFO_FIFO_FLAGS[] = { IJHA_H32_INIT_LIFO, IJHA_H32_INIT_FIFO, IJHA_H32_INIT_THREADSAFE | IJHA_H32_INIT_LIFO };
1058 | #else
1059 | unsigned LIFO_FIFO_FLAGS[] = { IJHA_H32_INIT_LIFO, IJHA_H32_INIT_FIFO };
1060 | #endif
1061 |
1062 | /* some public API constants to refer to resources that are always created/valid
1063 | * if using userflags then these have to be present here also (no userflags in
1064 | * this example)
1065 | *
1066 | * NB: the constants have the in_use-bit set (ijha_h32_in_use_bit / 0x80000000)
1067 | * so they will pass the 'ijha_h32_valid' checks when used
1068 | */
1069 | #define PUBLIC_API_MAIN_WINDOW_HANDLE (0x80000000)
1070 | #define PUBLIC_API_SECONDARY_WINDOW_HANDLE (0x80000001)
1071 |
1072 | struct ijha_h32 l, *self = &l;
1073 | unsigned cap, idx, num = sizeof LIFO_FIFO_FLAGS / sizeof *LIFO_FIFO_FLAGS;
1074 | unsigned i, n, j, maxnhandles, dummy, handles[IJHA_TEST_MAX_NUM_HANDLES];
1075 | struct ijha_h32_test_userdata userdata_inlinehandles[IJHA_TEST_MAX_NUM_HANDLES];
1076 | int init_res;
1077 |
1078 | /* increase the capacity to verify that the first (two) handles we defined
1079 | * in our public API will not change when we increase the capacity */
1080 |
1081 | for (cap = 3; cap < IJHA_TEST_MAX_NUM_HANDLES; ++cap) {
1082 | for (idx = 0; idx != num; ++idx) {
1083 | /* first do initial setup of the handle allocator */
1084 | unsigned LIFO_FIFO_FLAG = LIFO_FIFO_FLAGS[idx];
1085 | unsigned num_userflag_bits = 0;
1086 | unsigned ijha_flags = LIFO_FIFO_FLAG;
1087 | unsigned userdata_size_in_bytes_per_item = sizeof(struct ijha_h32_test_userdata);
1088 | IJHA_H32_assert(sizeof userdata_inlinehandles >= ijha_h32_memory_size_needed(cap, userdata_size_in_bytes_per_item, 1));
1089 | init_res = ijha_h32_init_inlinehandles(self, cap, num_userflag_bits, sizeof(struct ijha_h32_test_userdata), ijha_h32_test_offsetof(struct ijha_h32_test_userdata, inline_handle), ijha_flags, userdata_inlinehandles);
1090 | IJHA_H32_assert(init_res == IJHA_H32_INIT_NO_ERROR);
1091 | /* setup finished */
1092 |
1093 | if (ijha_flags&IJHA_H32_INIT_THREADSAFE) {
1094 | /* as this is the setup phase we have most likely not finished setting
1095 | * up all other resources. we can therefore take for granted that the
1096 | * handle allocator is not accessed concurrently at this point. which is
1097 | * great because we can get "back" our resource at index 0, which is
1098 | * used as a sentinel node.
1099 | *
1100 | * NB: if (ab-)using it like this it is of utmost importance that this
1101 | * handle _IS NOT_ released back into the pool at any time. it
1102 | * should only be used for resources/data that have the same
1103 | * lifetime as the handle allocator itself.
1104 | */
1105 | unsigned *handleinfo = ijha_h32_handle_info_at(self, 0);
1106 | struct ijha_h32_test_userdata *userdata = ijha_h32_userdata(struct ijha_h32_test_userdata*, self, 0);
1107 | /* as it is a freelist it points to next node */
1108 | IJHA_H32_assert(ijha_h32_index(self, *handleinfo) == 1);
1109 | IJHA_H32_assert(ijha_h32_in_use(self, *handleinfo) == 0);
1110 | *handleinfo = PUBLIC_API_MAIN_WINDOW_HANDLE;
1111 | self->size++;
1112 | handles[0] = *handleinfo;
1113 | /* here you would initialize the userdata */
1114 | i = sizeof userdata; /* squash warnings of unused variable */
1115 | }
1116 | maxnhandles = ijha_h32_capacity(self);
1117 | IJHA_H32_assert(maxnhandles >= 2);
1118 | if (ijha_flags&IJHA_H32_INIT_THREADSAFE)
1119 | i = 1, n = maxnhandles + 1;
1120 | else
1121 | i = 0, n = maxnhandles;
1122 |
1123 | for (; i != n; ++i) {
1124 | unsigned si = ijha_h32_acquire_userflags(self, 0, &handles[i]);
1125 | /* here you would initialize the userdata */
1126 | for (j = 0; j != i + 1; ++j)
1127 | IJHA_H32_assert(ijha_h32_valid(self, handles[j]));
1128 | IJHA_H32_assert(si != IJHA_H32_INVALID_INDEX);
1129 | }
1130 | IJHA_H32_assert(ijha_h32_acquire_userflags(self, 0, &dummy) == IJHA_H32_INVALID_INDEX);
1131 | IJHA_H32_assert(handles[0] == PUBLIC_API_MAIN_WINDOW_HANDLE);
1132 | IJHA_H32_assert(handles[1] == PUBLIC_API_SECONDARY_WINDOW_HANDLE);
1133 |
1134 | if (ijha_flags&IJHA_H32_INIT_THREADSAFE)
1135 | n = maxnhandles + 1; /* as we (ab-)use the fact that we can 'steal' the node at zero */
1136 | else
1137 | n = maxnhandles;
1138 |
1139 | IJHA_H32_assert(n == self->size);
1140 | for (i = 0; i != n; ++i) {
1141 | unsigned handleidx = ijha_h32_index(self, handles[i]);
1142 | struct ijha_h32_test_userdata *userdata = ijha_h32_userdata(struct ijha_h32_test_userdata*, self, handleidx);
1143 | unsigned *handleinfo = ijha_h32_handle_info_at(self, handleidx);
1144 | IJHA_H32_assert(userdata == &userdata_inlinehandles[handleidx]);
1145 | IJHA_H32_assert(handleinfo == &userdata_inlinehandles[handleidx].inline_handle);
1146 | IJHA_H32_assert(ijha_h32_valid(self, handles[i]));
1147 | }
1148 | }
1149 | }
1150 | #undef IJHA_TEST_MAX_NUM_HANDLES
1151 | #undef PUBLIC_API_MAIN_WINDOW_HANDLE
1152 | #undef PUBLIC_API_SECONDARY_WINDOW_HANDLE
1153 | }
1154 |
1155 | static void ijha_h32_test_suite(void)
1156 | {
1157 | ijha_h32_test_basic_operations();
1158 | ijha_h32_test_inline_noinline_handles();
1159 | ijha_h32_test_constant_handles();
1160 | }
1161 |
1162 | #if defined(IJHA_H32_TEST_MAIN)
1163 |
1164 | #include <stdio.h>
1165 |
1166 | int main(int args, char **argc)
1167 | {
1168 | (void)args;
1169 | (void)argc;
1170 | ijha_h32_test_suite();
1171 | printf("ijha_h32: all tests done.\n");
1172 | return 0;
1173 | }
1174 | #endif
1175 |
1176 | #endif /* defined(IJHA_H32_TEST) || defined(IJHA_H32_TEST_MAIN) */
1177 | #endif /* defined(IJHA_H32_IMPLEMENTATION) */
1178 |
1179 | /*
1180 | LICENSE
1181 | ------------------------------------------------------------------------------
1182 | This software is available under 2 licenses -- choose whichever you prefer.
1183 | ------------------------------------------------------------------------------
1184 | ALTERNATIVE A - 3-Clause BSD License
1185 | Copyright (c) 2019-, Fredrik Engkvist
1186 | All rights reserved.
1187 | 1188 | Redistribution and use in source and binary forms, with or without 1189 | modification, are permitted provided that the following conditions are met: 1190 | * Redistributions of source code must retain the above copyright 1191 | notice, this list of conditions and the following disclaimer. 1192 | * Redistributions in binary form must reproduce the above copyright 1193 | notice, this list of conditions and the following disclaimer in the 1194 | documentation and/or other materials provided with the distribution. 1195 | * Neither the name of the copyright holder nor the 1196 | names of its contributors may be used to endorse or promote products 1197 | derived from this software without specific prior written permission. 1198 | 1199 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 1200 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 1201 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 1202 | DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDER BE LIABLE FOR ANY 1203 | DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 1204 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 1205 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 1206 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 1207 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 1208 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 1209 | ------------------------------------------------------------------------------ 1210 | ALTERNATIVE B - Public Domain (www.unlicense.org) 1211 | This is free and unencumbered software released into the public domain. 1212 | Anyone is free to copy, modify, publish, use, compile, sell, or distribute this 1213 | software, either in source code form or as a compiled binary, for any purpose, 1214 | commercial or non-commercial, and by any means. 1215 | In jurisdictions that recognize copyright laws, the author or authors of this 1216 | software dedicate any and all copyright interest in the software to the public 1217 | domain. We make this dedication for the benefit of the public at large and to 1218 | the detriment of our heirs and successors. We intend this dedication to be an 1219 | overt act of relinquishment in perpetuity of all present and future rights to 1220 | this software under copyright law. 1221 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 1222 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 1223 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 1224 | AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN 1225 | ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 1226 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 1227 | ------------------------------------------------------------------------------ 1228 | */ 1229 | /* clang-format on */ 1230 | -------------------------------------------------------------------------------- /ijss.h: -------------------------------------------------------------------------------- 1 | /* clang-format off */ 2 | 3 | /* 4 | ijss : IncredibleJunior SparseSet 5 | 6 | sparse set [1] for bookkeeping of dense<->sparse index mapping or a 7 | building-block for a simple LIFO index/handle allocator. 8 | 9 | This file provides both the interface and the implementation. 
10 | The sparse set is implemented as a stb-style header-file library[2]
11 | which means that in *ONE* source file, put:
12 |
13 | #define IJSS_IMPLEMENTATION
14 | // if a custom assert is wanted (and no dependency on assert.h)
15 | #define IJSS_assert custom_assert
16 | #include "ijss.h"
17 |
18 | Other source files should just include ijss.h
19 |
20 | EXAMPLES/UNIT TESTS
21 | Usage examples+tests are at the bottom of the file in the IJSS_TEST section.
22 | LICENSE
23 | See end of file for license information
24 |
25 | References:
26 | [1] https://research.swtch.com/sparse
27 | [2] https://github.com/nothings/stb
28 |
29 | */
30 |
31 | #ifndef IJSS_INCLUDED_H
32 | #define IJSS_INCLUDED_H
33 |
34 | #ifdef __cplusplus
35 | extern "C" {
36 | #endif
37 |
38 | #if defined(IJSS_STATIC)
39 | #define IJSS_API static
40 | #else
41 | #define IJSS_API extern
42 | #endif
43 |
44 | struct ijss_pair8 {
45 | unsigned char sparse_index;
46 | unsigned char dense_index;
47 | };
48 |
49 | struct ijss_pair16 {
50 | unsigned short sparse_index;
51 | unsigned short dense_index;
52 | };
53 |
54 | struct ijss_pair32 {
55 | unsigned sparse_index;
56 | unsigned dense_index;
57 | };
58 |
59 | struct ijss {
60 | void *dense;
61 | void *sparse;
62 | unsigned dense_stride;
63 | unsigned sparse_stride;
64 | unsigned size;
65 | unsigned capacity;
66 | unsigned elementsize; /* size in bytes for _one_ dense/sparse index */
67 | unsigned reserved32;
68 | };
69 |
70 |
71 | /* dense: pointer to storage of dense indices
72 | * dense_stride: how many bytes to advance from index A to A+1
73 | *
74 | * sparse+sparse_stride: same as dense above but for sparse indices
75 | *
76 | * elementsize:
77 | * the size in bytes of _one_ sparse/dense _index_
78 | * i.e. if using ijss_pairXX for bookkeeping use 'sizeof(struct ijss_pairXX)>>1'
79 | *
80 | * capacity: how many dense/sparse pairs to manage
81 | *
82 | * please refer to the 'ijss_init_from_pairtype_size' or 'ijss_init_from_pairtype'
83 | * helper macros
84 | */
85 | IJSS_API void ijss_init(struct ijss *self, void *dense, unsigned dense_stride, void *sparse, unsigned sparse_stride, unsigned elementsize, unsigned capacity);
86 |
87 | /* initialize sparse set from the size of a pair type (ex. struct ijss_pairXX); pairtype_size is the size of the _pair_, not of one index */
88 | #define ijss_init_from_pairtype_size(pairtype_size, self, pairs, stride, capacity) ijss_init((self), (unsigned char*)(pairs), (stride), (unsigned char*)(pairs)+((pairtype_size)>>1), (stride), (pairtype_size)>>1, capacity)
89 |
90 | /* initialize sparse set with a 'struct ijss_pairXX' pair type
91 | *
92 | * pairs is the memory location where the pairs start.
93 | *
94 | * ex: the pair is inlined in another structure
95 | *
96 | * struct Object {
97 | * unsigned char payload[24];
98 | * struct ijss_pair32 ss_bookkeeping;
99 | * };
100 | * struct Object all_objects[16];
101 | * unsigned capacity = sizeof all_objects / sizeof *all_objects;
102 | * struct ijss sparse_set;
103 | * ijss_init_from_pairtype(struct ijss_pair32,
104 | * &sparse_set,
105 | * (unsigned char*)all_objects + offsetof(struct Object, ss_bookkeeping),
106 | * sizeof(struct Object),
107 | * capacity);
108 | *
109 | * ex: pairs are just used 'as-is'
110 | * struct ijss_pair32 object_ss_pairs[16];
111 | * unsigned capacity = sizeof object_ss_pairs / sizeof *object_ss_pairs;
112 | * struct ijss sparse_set;
113 | * ijss_init_from_pairtype(struct ijss_pair32, &sparse_set, object_ss_pairs, sizeof(struct ijss_pair32), capacity);
114 | *
115 | */
116 | #define ijss_init_from_pairtype(pairtype, self, pairs, stride, capacity) ijss_init_from_pairtype_size(sizeof(pairtype), (self), (pairs), (stride), (capacity))
117 |
118 | IJSS_API void ijss_reset(struct ijss *self);
119 |
120 | /* reset and set D[x] = x for x in [0, capacity) */
121 | IJSS_API void ijss_reset_identity(struct ijss *self);
122 |
123 | /* returns the dense index */
124 | IJSS_API unsigned ijss_add(struct ijss *self, unsigned sparse_index);
125 |
126 | /* returns -1 on invalid sparse index, otherwise returns 1 if a move of the
127 | * (external) dense data is needed (0 if not) and stores the indices that
128 | * should move in move_to_index and move_from_index respectively.
129 | * ex:
130 | * unsigned move_from, move_to;
131 | * int do_move_data = ijss_remove(self, idx, &move_to, &move_from) > 0;
132 | * if (do_move_data)
133 | * my_external_data[move_to] = my_external_data[move_from];
134 | */
135 | IJSS_API int ijss_remove(struct ijss *self, unsigned sparse_index, unsigned *move_to_index, unsigned *move_from_index);
136 |
137 | IJSS_API unsigned ijss_dense_index(struct ijss *self, unsigned sparse_index);
138 | IJSS_API unsigned ijss_sparse_index(struct ijss *self, unsigned dense_index);
139 | IJSS_API int ijss_has(struct ijss *self, unsigned sparse_index);
140 |
141 | #ifdef __cplusplus
142 | }
143 | #endif
144 |
145 | #endif /* IJSS_INCLUDED_H */
146 |
147 | #if defined(IJSS_IMPLEMENTATION) && !defined(IJSS_IMPLEMENTATION_DEFINED)
148 |
149 | #define IJSS_IMPLEMENTATION_DEFINED (1)
150 |
151 | #ifndef IJSS_assert
152 | #include <assert.h>
153 | #define IJSS_assert assert
154 | #endif
155 |
156 | static unsigned ijss__load(const void * const p, unsigned len)
157 | {
158 | IJSS_assert(len >= 1 && len <= 4);
159 | switch (len) {
160 | case 1: return *(unsigned char*)p;
161 | case 2: return *(unsigned short*)p;
162 | case 4: return *(unsigned*)p;
163 | default: return 0;
164 | }
165 | }
166 |
167 | static void ijss__store(void *dst, unsigned len, unsigned value)
168 | {
169 | IJSS_assert(len >= 1 && len <= 4);
170 | IJSS_assert((0xffffffffu >> (8 * (4 - len))) >= value);
171 | switch (len) {
172 | case 1: *(unsigned char*)dst = (unsigned char)value; break;
173 | case 2: *(unsigned short*)dst = (unsigned short)value; break;
174 | case 4: *(unsigned*)dst = (unsigned)value; break;
175 | default:break;
176 | }
177 | }
178 |
179 | IJSS_API void ijss_init(struct ijss *self, void *dense, unsigned dense_stride, void *sparse, unsigned sparse_stride, unsigned elementsize, unsigned capacity)
180 | {
181 | IJSS_assert(elementsize >= 1 && elementsize <= 4);
182 | IJSS_assert((0xffffffffu >> (8 * (4 - elementsize))) >= capacity);
183 |
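/* the asserts above check that one index fits in 'elementsize' bytes (1, 2 or 4)
 * and that 'capacity' is representable in that width, e.g. elementsize == 1
 * limits the capacity to 255 pairs */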
184 | self->dense = dense; 185 | self->dense_stride = dense_stride; 186 | self->sparse = sparse; 187 | self->sparse_stride = sparse_stride; 188 | self->size = 0; 189 | self->capacity = capacity; 190 | self->elementsize = elementsize; 191 | self->reserved32 = 0; 192 | ijss_reset(self); 193 | } 194 | 195 | #define ijss__pointer_add(type, p, bytes) ((type)((unsigned char *)(p) + (bytes))) 196 | 197 | #define IJSS__STORE(p, stride, elementsize, idx, value) ijss__store(ijss__pointer_add(void*, (p), (stride)*(idx)), (elementsize), (value)) 198 | 199 | /* D[idx] = value */ 200 | #define IJSS__STORE_DENSE(idx, value) IJSS__STORE(self->dense, self->dense_stride, self->elementsize, idx, value) 201 | /* S[idx] = value */ 202 | #define IJSS__STORE_SPARSE(idx, value) IJSS__STORE(self->sparse, self->sparse_stride, self->elementsize, idx, value) 203 | 204 | #define IJSS__LOAD(p, stride, elementsize, idx) ijss__load(ijss__pointer_add(void*, (p), (stride)*(idx)), (elementsize)) 205 | /* idx = D[idx] */ 206 | #define IJSS__LOAD_DENSE(idx) IJSS__LOAD(self->dense, self->dense_stride, self->elementsize, idx) 207 | /* idx = S[idx] */ 208 | #define IJSS__LOAD_SPARSE(idx) IJSS__LOAD(self->sparse, self->sparse_stride, self->elementsize, idx) 209 | 210 | IJSS_API void ijss_reset(struct ijss *self) 211 | { 212 | self->size = 0; 213 | } 214 | 215 | IJSS_API void ijss_reset_identity(struct ijss *self) 216 | { 217 | unsigned i; 218 | self->size = 0; 219 | for (i = 0; i != self->capacity; ++i) 220 | IJSS__STORE_DENSE(i, i); 221 | } 222 | 223 | IJSS_API unsigned ijss_add(struct ijss *self, unsigned sparse_index) 224 | { 225 | unsigned dense_index = self->size++; 226 | IJSS_assert((0xffffffffu >> (8 * (4 - self->elementsize))) >= dense_index); 227 | IJSS_assert((0xffffffffu >> (8 * (4 - self->elementsize))) >= sparse_index); 228 | 229 | IJSS_assert(self->capacity > dense_index); 230 | IJSS_assert(self->capacity > sparse_index); 231 | 232 | IJSS__STORE_DENSE(dense_index, sparse_index); 233 | IJSS__STORE_SPARSE(sparse_index, dense_index); 234 | 235 | return dense_index; 236 | } 237 | 238 | IJSS_API int ijss_remove(struct ijss *self, unsigned sparse_index, unsigned *move_to_index, unsigned *move_from_index) 239 | { 240 | if (!ijss_has(self, sparse_index)) 241 | return -1; 242 | else { 243 | unsigned size_now = self->size-1; 244 | unsigned dense_index_of_removed, sparse_index_of_back; 245 | IJSS_assert(self->capacity > size_now); 246 | 247 | dense_index_of_removed = IJSS__LOAD_SPARSE(sparse_index); 248 | IJSS_assert(self->capacity > dense_index_of_removed); 249 | IJSS_assert(size_now >= dense_index_of_removed); 250 | sparse_index_of_back = IJSS__LOAD_DENSE(size_now); 251 | 252 | /* #1 is not strictly necessary, but together with 'ijss_reset_identity' 253 | * we can make a LIFO index/handle allocator */ 254 | IJSS__STORE_DENSE(size_now, sparse_index); /* #1 */ 255 | IJSS__STORE_DENSE(dense_index_of_removed, sparse_index_of_back); 256 | IJSS__STORE_SPARSE(sparse_index_of_back, dense_index_of_removed); 257 | 258 | *move_from_index = size_now; 259 | *move_to_index = dense_index_of_removed; 260 | --self->size; 261 | 262 | return dense_index_of_removed != size_now; 263 | } 264 | } 265 | 266 | IJSS_API int ijss_has(struct ijss *self, unsigned sparse_index) 267 | { 268 | if (sparse_index >= self->capacity) 269 | return 0; 270 | else { 271 | unsigned dense_index = IJSS__LOAD_SPARSE(sparse_index); 272 | return self->size > dense_index && IJSS__LOAD_DENSE(dense_index) == sparse_index; 273 | } 274 | } 275 | 276 | IJSS_API unsigned 
ijss_dense_index(struct ijss *self, unsigned sparse_index)
277 | {
278 | IJSS_assert(self->capacity > sparse_index);
279 | return IJSS__LOAD_SPARSE(sparse_index);
280 | }
281 |
282 | IJSS_API unsigned ijss_sparse_index(struct ijss *self, unsigned dense_index)
283 | {
284 | IJSS_assert(self->capacity > dense_index);
285 | return IJSS__LOAD_DENSE(dense_index);
286 | }
287 |
288 | #if defined(IJSS_TEST) || defined(IJSS_TEST_MAIN)
289 |
290 | typedef unsigned int ijss_uint32;
291 |
292 | #ifdef _MSC_VER
293 | typedef unsigned __int64 ijss_uint64;
294 | #else
295 | typedef unsigned long long ijss_uint64;
296 | #endif
297 |
298 | #if defined(__ppc64__) || defined(__aarch64__) || defined(_M_X64) || defined(__x86_64__) || defined(__x86_64)
299 | typedef ijss_uint64 ijss_uintptr;
300 | #else
301 | typedef ijss_uint32 ijss_uintptr;
302 | #endif
303 |
304 | #ifndef offsetof
305 | #define ijss_test_offsetof(st, m) ((ijss_uintptr)&(((st *)0)->m))
306 | #else
307 | #define ijss_test_offsetof offsetof
308 | #endif
309 |
310 | #define SSHA_INVALID_HANDLE (unsigned)-1
311 | static unsigned ijss_alloc_handle(struct ijss *self, unsigned *dense)
312 | {
313 | unsigned h;
314 | if (self->capacity == self->size)
315 | return SSHA_INVALID_HANDLE;
316 |
317 | /* the sparse indices do not move on adds or removes so we leverage this
318 | * fact to use them as handles */
319 | h = ijss_sparse_index(self, self->size);
320 | *dense = ijss_add(self, h);
321 | IJSS_assert(*dense == ijss_dense_index(self, h));
322 | IJSS_assert(*dense == self->size-1);
323 | return h;
324 | }
325 |
326 | static unsigned ijss_handle_valid(struct ijss *self, unsigned handle)
327 | {
328 | return ijss_has(self, handle);
329 | }
330 |
331 | static void ijss_as_handlealloc_test_suite(void)
332 | {
333 | #define SSHA_NUM_OBJECTS (4)
334 | int r, do_move_data;
335 | unsigned i, h, dense, move_from, move_to;
336 | struct ijss_pair32 ssdata[SSHA_NUM_OBJECTS];
337 | unsigned handles[SSHA_NUM_OBJECTS];
338 | struct ijss ss, *self = &ss;
339 |
340 | ijss_init_from_pairtype_size(sizeof *ssdata, self, ssdata, sizeof *ssdata, SSHA_NUM_OBJECTS);
341 | ijss_reset_identity(self);
342 |
343 | for (i = 0; i != SSHA_NUM_OBJECTS; ++i) {
344 | h = ijss_alloc_handle(self, &dense);
345 | handles[i] = h;
346 | IJSS_assert(ijss_handle_valid(self, h));
347 | }
348 |
349 | for (i = 0; i != SSHA_NUM_OBJECTS; ++i) {
350 | IJSS_assert(ijss_handle_valid(self, handles[i]));
351 | if (i % 2)
352 | continue;
353 |
354 | r = ijss_remove(self, i, &move_to, &move_from);
355 | IJSS_assert(r >= 0);
356 | do_move_data = r > 0;
357 | handles[i] = SSHA_INVALID_HANDLE;
358 | }
359 |
360 | for (i = 0; i != SSHA_NUM_OBJECTS; ++i) {
361 | if (handles[i] == SSHA_INVALID_HANDLE)
362 | IJSS_assert(!ijss_handle_valid(self, handles[i]));
363 | else
364 | IJSS_assert(ijss_handle_valid(self, handles[i]));
365 | }
366 |
367 | for (i = 0; i != 2; ++i) {
368 | h = ijss_alloc_handle(self, &dense);
369 | IJSS_assert(handles[h] == SSHA_INVALID_HANDLE);
370 | handles[h] = h;
371 | }
372 |
373 | r = sizeof do_move_data; /* squashing [-Wunused-but-set-variable] */
374 |
375 | #undef SSHA_NUM_OBJECTS
376 | }
377 |
378 | struct ijss_test_orientation {
379 | int a;
380 | unsigned sparse_owner;
381 | };
382 |
383 | struct ijss_test_position {
384 | int x, y;
385 | unsigned sparse_owner;
386 | };
387 |
388 | struct ijss_test_object {
389 | struct ijss_pair32 bookkeeping_position_array;
390 | unsigned char somepayload[20];
391 | struct ijss_pair8 bookkeeping_orientation_array;
392 | };
393 |
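/* the test below keeps the external component arrays densely packed: each
 * ijss_test_object carries one bookkeeping pair per component array inline,
 * and whenever ijss_remove reports a move the caller copies the element from
 * move_from to move_to so no holes are left behind */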
static void ijss_keep_active_external_data_linear(void) 395 | { 396 | #define SSHA_NUM_OBJECTS (16) 397 | unsigned i; 398 | int loop; 399 | struct ijss ss_positions; 400 | struct ijss ss_orientations; 401 | struct ijss_test_object all_test_objects[SSHA_NUM_OBJECTS] = {0}; 402 | struct ijss_test_orientation all_orientations_array[SSHA_NUM_OBJECTS]; 403 | struct ijss_test_position all_positions_array[SSHA_NUM_OBJECTS]; 404 | 405 | ijss_init_from_pairtype(struct ijss_pair32, 406 | &ss_positions, 407 | (unsigned char*)all_test_objects + ijss_test_offsetof(struct ijss_test_object, bookkeeping_position_array), 408 | sizeof(struct ijss_test_object), 409 | SSHA_NUM_OBJECTS); 410 | 411 | ijss_init_from_pairtype(struct ijss_pair8, 412 | &ss_orientations, 413 | (unsigned char*)all_test_objects + ijss_test_offsetof(struct ijss_test_object, bookkeeping_orientation_array), 414 | sizeof(struct ijss_test_object), 415 | SSHA_NUM_OBJECTS); 416 | 417 | for (i = 0; i != SSHA_NUM_OBJECTS; ++i) { 418 | IJSS_assert(!ijss_has(&ss_positions, i)); 419 | IJSS_assert(!ijss_has(&ss_orientations, i)); 420 | } 421 | 422 | for (i = 0; i != SSHA_NUM_OBJECTS; ++i) { 423 | unsigned dense; 424 | if (i & 1) { 425 | IJSS_assert(!ijss_has(&ss_positions, i)); 426 | dense = ijss_add(&ss_positions, i); 427 | all_positions_array[dense].sparse_owner = i; 428 | all_positions_array[dense].x = all_positions_array[dense].y = 0; 429 | IJSS_assert(ijss_has(&ss_positions, i)); 430 | } else { 431 | IJSS_assert(!ijss_has(&ss_orientations, i)); 432 | dense = ijss_add(&ss_orientations, i); 433 | all_orientations_array[dense].sparse_owner = i; 434 | all_orientations_array[dense].a = 0; 435 | IJSS_assert(ijss_has(&ss_orientations, i)); 436 | } 437 | } 438 | 439 | for (i = 0; i != SSHA_NUM_OBJECTS; ++i) { 440 | if (i & 1) { 441 | IJSS_assert(!ijss_has(&ss_orientations, i)); 442 | IJSS_assert(ijss_has(&ss_positions, i)); 443 | } else { 444 | IJSS_assert(!ijss_has(&ss_positions, i)); 445 | IJSS_assert(ijss_has(&ss_orientations, i)); 446 | } 447 | } 448 | 449 | /* now we have added the objects into either the position or orientation arrays */ 450 | loop = 0; 451 | 452 | while (ss_orientations.size) { 453 | 454 | for (i = 0; i != ss_orientations.size; ++i) { 455 | struct ijss_test_orientation *current = all_orientations_array + i; 456 | IJSS_assert(ijss_has(&ss_orientations, current->sparse_owner)); 457 | IJSS_assert(ijss_dense_index(&ss_orientations, current->sparse_owner) == i); 458 | IJSS_assert(ijss_sparse_index(&ss_orientations, i) == current->sparse_owner); 459 | IJSS_assert(current->a == loop); 460 | } 461 | 462 | for (i = 0; i != ss_positions.size; ++i) { 463 | struct ijss_test_position *current = all_positions_array + i; 464 | IJSS_assert(ijss_has(&ss_positions, current->sparse_owner)); 465 | IJSS_assert(ijss_dense_index(&ss_positions, current->sparse_owner) == i); 466 | IJSS_assert(ijss_sparse_index(&ss_positions, i) == current->sparse_owner); 467 | IJSS_assert(current->x == loop); 468 | IJSS_assert(current->y == loop); 469 | } 470 | 471 | /* now remove the first of each array */ 472 | { 473 | int r, do_move_data; 474 | unsigned move_from, move_to; 475 | unsigned sparse_indices[2]; 476 | sparse_indices[0] = all_orientations_array[0].sparse_owner; 477 | sparse_indices[1] = all_positions_array[0].sparse_owner; 478 | for (i = 0; i != 2; ++i) { 479 | unsigned sparse_index = sparse_indices[i]; 480 | if (i & 1) { 481 | IJSS_assert(!ijss_has(&ss_orientations, sparse_index)); 482 | IJSS_assert(ijss_has(&ss_positions, sparse_index)); 483 | r = 
ijss_remove(&ss_positions, sparse_index, &move_to, &move_from);
484 | IJSS_assert(r >= 0);
485 | do_move_data = r > 0;
486 | IJSS_assert(do_move_data || ss_positions.size == 0); /* otherwise the test is set up incorrectly */
487 | all_positions_array[move_to] = all_positions_array[move_from];
488 | } else {
489 | IJSS_assert(!ijss_has(&ss_positions, sparse_index));
490 | IJSS_assert(ijss_has(&ss_orientations, sparse_index));
491 | r = ijss_remove(&ss_orientations, sparse_index, &move_to, &move_from);
492 | IJSS_assert(r >= 0);
493 | do_move_data = r > 0;
494 | IJSS_assert(do_move_data || ss_orientations.size == 0); /* otherwise the test is set up incorrectly */
495 | all_orientations_array[move_to] = all_orientations_array[move_from];
496 | }
497 | }
498 | }
499 |
500 | /* now loop the linear data again and verify */
501 | for (i = 0; i != ss_orientations.size; ++i) {
502 | struct ijss_test_orientation *current = all_orientations_array + i;
503 | IJSS_assert(ijss_has(&ss_orientations, current->sparse_owner));
504 | IJSS_assert(ijss_dense_index(&ss_orientations, current->sparse_owner) == i);
505 | IJSS_assert(ijss_sparse_index(&ss_orientations, i) == current->sparse_owner);
506 | IJSS_assert(current->a == loop);
507 | current->a++;
508 | }
509 |
510 | for (i = 0; i != ss_positions.size; ++i) {
511 | struct ijss_test_position *current = all_positions_array + i;
512 | IJSS_assert(ijss_has(&ss_positions, current->sparse_owner));
513 | IJSS_assert(ijss_dense_index(&ss_positions, current->sparse_owner) == i);
514 | IJSS_assert(ijss_sparse_index(&ss_positions, i) == current->sparse_owner);
515 | IJSS_assert(current->x == loop);
516 | IJSS_assert(current->y == loop);
517 | current->x++;
518 | current->y++;
519 | }
520 | ++loop;
521 | }
522 | #undef SSHA_NUM_OBJECTS
523 |
524 | }
525 | static void ijss_test_suite(void)
526 | {
527 | ijss_as_handlealloc_test_suite();
528 | ijss_keep_active_external_data_linear();
529 | }
530 |
531 | #if defined(IJSS_TEST_MAIN)
532 |
533 | #include <stdio.h>
534 |
535 | int main(int args, char **argc)
536 | {
537 | (void)args;
538 | (void)argc;
539 | ijss_test_suite();
540 | printf("ijss: all tests done.\n");
541 | return 0;
542 | }
543 | #endif /* defined(IJSS_TEST_MAIN) */
544 | #endif /* defined(IJSS_TEST) || defined(IJSS_TEST_MAIN) */
545 |
546 | #endif /* IJSS_IMPLEMENTATION */
547 |
548 | /*
549 | LICENSE
550 | ------------------------------------------------------------------------------
551 | This software is available under 2 licenses -- choose whichever you prefer.
552 | ------------------------------------------------------------------------------
553 | ALTERNATIVE A - 3-Clause BSD License
554 | Copyright (c) 2019-, Fredrik Engkvist
555 | All rights reserved.
556 |
557 | Redistribution and use in source and binary forms, with or without
558 | modification, are permitted provided that the following conditions are met:
559 | * Redistributions of source code must retain the above copyright
560 | notice, this list of conditions and the following disclaimer.
561 | * Redistributions in binary form must reproduce the above copyright
562 | notice, this list of conditions and the following disclaimer in the
563 | documentation and/or other materials provided with the distribution.
564 | * Neither the name of the copyright holder nor the
565 | names of its contributors may be used to endorse or promote products
566 | derived from this software without specific prior written permission.
567 | 568 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 569 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 570 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 571 | DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDER BE LIABLE FOR ANY 572 | DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 573 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 574 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 575 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 576 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 577 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 578 | ------------------------------------------------------------------------------ 579 | ALTERNATIVE B - Public Domain (www.unlicense.org) 580 | This is free and unencumbered software released into the public domain. 581 | Anyone is free to copy, modify, publish, use, compile, sell, or distribute this 582 | software, either in source code form or as a compiled binary, for any purpose, 583 | commercial or non-commercial, and by any means. 584 | In jurisdictions that recognize copyright laws, the author or authors of this 585 | software dedicate any and all copyright interest in the software to the public 586 | domain. We make this dedication for the benefit of the public at large and to 587 | the detriment of our heirs and successors. We intend this dedication to be an 588 | overt act of relinquishment in perpetuity of all present and future rights to 589 | this software under copyright law. 590 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 591 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 592 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 593 | AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN 594 | ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 595 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 596 | ------------------------------------------------------------------------------ 597 | */ 598 | /* clang-format on */ 599 | --------------------------------------------------------------------------------