├── CHANGELOG.md
├── LICENSE
├── Makefile
├── README.md
├── include
│   └── skiparray.h
├── pc
│   └── libskiparray.pc.in
├── src
│   ├── bench.c
│   ├── skiparray.c
│   ├── skiparray_fold.c
│   ├── skiparray_fold_internal.h
│   ├── skiparray_hof.c
│   ├── skiparray_internal.h
│   ├── skiparray_internal_types.h
│   └── splitmix64_stateless.h
├── test
│   ├── test_skiparray.c
│   ├── test_skiparray.h
│   ├── test_skiparray_basic.c
│   ├── test_skiparray_builder.c
│   ├── test_skiparray_fold.c
│   ├── test_skiparray_hof.c
│   ├── test_skiparray_integration.c
│   ├── test_skiparray_invariants.c
│   ├── test_skiparray_prop.c
│   └── type_info_skiparray_operations.c
└── vendor
    └── greatest.h

/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # skiparray Changes By Release
2 |
3 | ## v0.2.0 - 2019-05-25
4 |
5 | ### API Changes
6 |
7 | Added `.ignore_values` flag to `struct skiparray_config` -- this
8 | eliminates unnecessary allocation when only the skiparray's keys
9 | are being used.
10 |
11 | `skiparray_new`'s `config` argument is now `const`.
12 |
13 | Added the `skiparray_builder` interface, which can be used to
14 | incrementally construct a skiparray by appending ascending keys. This is
15 | significantly more efficient than constructing by repeatedly calling
16 | `skiparray_set`, because it avoids redundant searches for where to put
17 | the new binding.
18 |
19 | Added an incremental fold interface, with left and right folds over one
20 | or more skiparrays' values. If there are multiple equal keys, a merge
21 | callback will be called to merge the options to a single (key, value)
22 | pair first. This is built on top of the iteration interface, so the
23 | skiparray(s) will be locked during the fold.
24 |
25 | Added `skiparray_filter`, which produces a filtered shallow copy of
26 | another skiparray using a predicate function.
27 |
28 | Moved the `free` callback (previously an argument to `skiparray_free`
29 | and `skiparray_builder_free`) into the `skiparray_config` struct,
30 | since (like `cmp` and the other callbacks) it shouldn't change over
31 | the lifetime of the skiparray.
32 |
33 |
34 | ### Bug Fixes
35 |
36 | `skiparray_new` could previously return `SKIPARRAY_NEW_ERROR_NULL` if
37 | memory allocation failed, rather than `SKIPARRAY_NEW_ERROR_MEMORY`.
38 |
39 | `skiparray_new` now returns `SKIPARRAY_NEW_ERROR_CONFIG` if the required
40 | comparison callback is `NULL`, rather than `SKIPARRAY_NEW_ERROR_NULL`.
41 |
42 | The `-s` (node size) option was missing from the benchmarking CLI's
43 | usage info.
44 |
45 | ### Other Improvements
46 |
47 | The benchmarking CLI can now take multiple, comma-separated limits (e.g.
48 | `-l 1000,10000,100000`), to benchmark behavior as input grows.
49 |
50 | The benchmarking CLI's `-n` flag now uses exact name matching.
51 |
52 | Added the benchmarking CLI's `-r` flag, to set the RNG seed.
53 |
54 |
55 | ## v0.1.1 - 2019-04-11
56 |
57 | ### API Changes
58 |
59 | None.
60 |
61 | ### Bug Fixes
62 |
63 | Fixed an overflow bug in the `get_nonexistent` benchmark that meant an
64 | assertion could fail in a 32-bit environment. (Reported by @acfoltzer.)
65 |
66 | Ensure that allocations are deterministically initialized, since the
67 | custom memory hook interface doesn't guarantee it.
68 |
69 | The SAN Makefile variable wasn't actually being used in build targets.
70 |
71 | Portability: Use `uintptr_t`, not `uint64_t`, for word-aligned
72 | allocation during memory benchmarking.
73 |
74 | ### Other Improvements
75 |
76 | Added `cppcheck` target to the Makefile.
77 | 78 | Added `scan-build` target to the Makefile. 79 | 80 | Fixed some static analysis warnings, related to format strings. Also, 81 | rename a variable to avoid harmless shadowing, and eliminate a redundant 82 | variable update. 83 | 84 | 85 | ## v0.1.0 - 2019-04-08 86 | 87 | Initial release. 88 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2019 Scott Vokes 2 | 3 | Permission to use, copy, modify, and/or distribute this software for any 4 | purpose with or without fee is hereby granted, provided that the above 5 | copyright notice and this permission notice appear in all copies. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 8 | WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 9 | MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 10 | ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 11 | WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 12 | ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 13 | OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 14 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | PROJECT = skiparray 2 | BUILD = build 3 | INCLUDE = include 4 | SRC = src 5 | TEST = test 6 | VENDOR = vendor 7 | 8 | INCDEPS = ${INCLUDE}/*.h ${SRC}/*.h 9 | STATIC_LIB= lib${PROJECT}.a 10 | 11 | COVERAGE = -fprofile-arcs -ftest-coverage 12 | PROFILE = -pg 13 | 14 | OPTIMIZE = -O3 15 | 16 | WARN = -Wall -pedantic -Wextra 17 | CDEFS += 18 | CINCS += -I${INCLUDE} 19 | CINCS += -I${VENDOR} 20 | CSTD += -std=c99 21 | CDEBUG = -ggdb3 22 | 23 | CFLAGS += ${CSTD} ${CDEBUG} ${OPTIMIZE} ${SAN} 24 | CFLAGS += ${WARN} ${CDEFS} ${CINCS} 25 | LDFLAGS += ${CDEBUG} ${SAN} 26 | 27 | TEST_CFLAGS_theft = $(shell pkg-config --cflags libtheft) 28 | TEST_LDFLAGS_theft = $(shell pkg-config --libs libtheft) 29 | 30 | TEST_CFLAGS = ${CFLAGS} -I${SRC} ${TEST_CFLAGS_theft} 31 | TEST_LDFLAGS = ${LDFLAGS} ${TEST_LDFLAGS_theft} 32 | 33 | all: library 34 | 35 | everything: library ${BUILD}/test_${PROJECT} ${BUILD}/benchmarks 36 | 37 | OBJS= ${BUILD}/skiparray.o \ 38 | ${BUILD}/skiparray_fold.o \ 39 | ${BUILD}/skiparray_hof.o \ 40 | 41 | TEST_OBJS= ${OBJS} \ 42 | ${BUILD}/test_${PROJECT}.o \ 43 | ${BUILD}/test_${PROJECT}_basic.o \ 44 | ${BUILD}/test_${PROJECT}_builder.o \ 45 | ${BUILD}/test_${PROJECT}_fold.o \ 46 | ${BUILD}/test_${PROJECT}_hof.o \ 47 | ${BUILD}/test_${PROJECT}_prop.o \ 48 | ${BUILD}/test_${PROJECT}_integration.o \ 49 | ${BUILD}/test_${PROJECT}_invariants.o \ 50 | ${BUILD}/type_info_${PROJECT}_operations.o \ 51 | 52 | BENCH_OBJS= ${BUILD}/bench.o \ 53 | ${BUILD}/test_${PROJECT}_invariants.o \ 54 | 55 | 56 | # Basic targets 57 | 58 | test: ${BUILD}/test_${PROJECT} 59 | ${BUILD}/test_${PROJECT} ${ARGS} 60 | 61 | clean: 62 | rm -rf ${BUILD} 63 | 64 | ${BUILD}/${PROJECT}: ${OBJS} 65 | ${CC} -o $@ $+ ${LDFLAGS} 66 | 67 | library: ${BUILD}/${STATIC_LIB} 68 | 69 | ${BUILD}/${STATIC_LIB}: ${OBJS} 70 | ar -rcs ${BUILD}/${STATIC_LIB} $+ 71 | 72 | ${BUILD}/test_${PROJECT}: ${TEST_OBJS} 73 | ${CC} -o $@ $+ ${TEST_CFLAGS} ${TEST_LDFLAGS} 74 | 75 | ${BUILD}/%.o: ${SRC}/%.c ${INCDEPS} | ${BUILD} 76 | ${CC} -c -o $@ ${CFLAGS} $< 77 | 78 | ${BUILD}/%.o: ${TEST}/%.c ${INCDEPS} | ${BUILD} 79 | ${CC} -c -o $@ 
${TEST_CFLAGS} $<
80 |
81 |
82 | # Other targets
83 |
84 | bench: ${BUILD}/benchmarks | ${BUILD}
85 | 	${BUILD}/benchmarks ${ARGS}
86 |
87 | tags: ${BUILD}/TAGS
88 |
89 | ${BUILD}/TAGS: ${SRC}/*.c ${INCDEPS} | ${BUILD}
90 | 	etags -o $@ ${SRC}/*.[ch] ${INCDEPS} ${TEST}/*.[ch]
91 |
92 | ${BUILD}/benchmarks: ${BUILD}/${STATIC_LIB} ${BENCH_OBJS} | ${BUILD}
93 | 	${CC} -o $@ ${BENCH_OBJS} ${OPTIMIZE} ${LDFLAGS} ${BUILD}/${STATIC_LIB}
94 |
95 | coverage: OPTIMIZE=-O0 ${COVERAGE}
96 | coverage: CC=gcc
97 |
98 | coverage: test | ${BUILD}/cover
99 | 	ls -1 src/*.c | sed -e "s#src/#build/#" | xargs -n1 gcov
100 | 	@echo moving coverage files to ${BUILD}/cover
101 | 	mv *.gcov ${BUILD}/cover
102 |
103 | ${BUILD}/cover: | ${BUILD}
104 | 	mkdir ${BUILD}/cover
105 |
106 | profile: profile_perf
107 |
108 | profile_perf: ${BUILD}/benchmarks
109 | 	perf record ${BUILD}/benchmarks
110 | 	perf report
111 |
112 | profile_gprof: CFLAGS+=${PROFILE}
113 | profile_gprof: LDFLAGS+=${PROFILE}
114 |
115 | profile_gprof: ${BUILD}/benchmarks
116 | 	${BUILD}/benchmarks ${ARGS}
117 | 	gprof ${BUILD}/benchmarks
118 |
119 | leak_check: CC=clang
120 | leak_check: SAN=-fsanitize=memory,undefined
121 | leak_check: test
122 |
123 | scan-build:
124 | 	scan-build ${MAKE} everything
125 |
126 | cppcheck:
127 | 	cppcheck --enable=all -I${INCLUDE} ${SRC}/*.c
128 |
129 | ${BUILD}:
130 | 	mkdir ${BUILD}
131 |
132 | ${BUILD}/*.o: ${INCLUDE}/*.h
133 | ${BUILD}/*.o: ${SRC}/*.h
134 | ${BUILD}/*.o: Makefile
135 |
136 |
137 | # Installation
138 |
139 | PREFIX ?= /usr/local
140 | LIBDIR ?= lib
141 | INSTALL ?= install
142 | RM ?= rm
143 |
144 | install: install_lib install_pc
145 |
146 | uninstall: uninstall_lib uninstall_pc
147 |
148 | install_lib: ${BUILD}/${STATIC_LIB} ${INCLUDE}/${PROJECT}.h
149 | 	${INSTALL} -d -m 755 ${DESTDIR}${PREFIX}/${LIBDIR}
150 | 	${INSTALL} -c -m 644 ${BUILD}/lib${PROJECT}.a ${DESTDIR}${PREFIX}/${LIBDIR}
151 | 	${INSTALL} -d -m 755 ${DESTDIR}${PREFIX}/include
152 | 	${INSTALL} -c -m 644 ${INCLUDE}/${PROJECT}.h ${DESTDIR}${PREFIX}/include
153 |
154 | ${BUILD}/%.pc: pc/%.pc.in | ${BUILD}
155 | 	sed -e 's,@prefix@,${PREFIX},g' -e 's,@libdir@,${LIBDIR},g' $< > $@
156 |
157 | install_pc: ${BUILD}/lib${PROJECT}.pc
158 | 	${INSTALL} -d -m 755 ${DESTDIR}${PREFIX}/${LIBDIR}/pkgconfig/
159 | 	${INSTALL} -c -m 644 ${BUILD}/lib${PROJECT}.pc ${DESTDIR}${PREFIX}/${LIBDIR}/pkgconfig/
160 |
161 | uninstall_lib:
162 | 	${RM} -f ${DESTDIR}${PREFIX}/${LIBDIR}/lib${PROJECT}.a
163 | 	${RM} -f ${DESTDIR}${PREFIX}/include/${PROJECT}.h
164 |
165 | uninstall_pc:
166 | 	${RM} -f ${DESTDIR}${PREFIX}/${LIBDIR}/lib${PROJECT}.pc
167 |
168 | .PHONY: test clean tags coverage profile leak_check cppcheck scan-build \
169 | 	everything library bench profile profile_perf profile_gprof \
170 | 	install install_lib install_pc uninstall uninstall_lib uninstall_pc
171 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # skiparray: an unrolled skip list library
2 |
3 | This C library provides a `void * -> void *` ordered collection based on
4 | a [Skip list][sl], but where a skip list links individual key/value
5 | pairs, this instead links together small arrays (it's ["unrolled"][u]).
6 | All arrays except the last are always at least half-full. It has roughly
7 | the same relation to a skip list as a [B-tree][bt] does to a more
8 | conventional [Binary search tree][bst].
9 |
10 | [sl]: https://en.wikipedia.org/wiki/Skiplist
11 | [u]: https://en.wikipedia.org/wiki/Unrolled_linked_list
12 | [bt]: https://en.wikipedia.org/wiki/B-tree
13 | [bst]: https://en.wikipedia.org/wiki/Binary_search_tree
14 |
15 |
16 | ## Key Features:
17 |
18 | - **Predictable memory usage, low overhead**
19 |
20 |   Bindings are stored in small arrays, which are at least half-full.
21 |   This leads to an overall memory usage of 2 - 4 words per entry (for
22 |   its key and value, and possibly an empty neighbor's), plus < 1% for
23 |   structural overhead. This grows gradually, by adding more small
24 |   arrays; there are no sudden memory spikes caused by structures
25 |   doubling in size.
26 |
27 |
28 | - **High locality**
29 |
30 |   Searches start by following the top layer's links, which are likely
31 |   to already be in cache. After that, most mutations occur within a
32 |   single small array. This reduces time lost to RAM cache misses, and
33 |   makes certain operations (such as popping off the first or last
34 |   pair) particularly efficient.
35 |
36 |
37 | - **Portable**
38 |
39 |   The library doesn't depend on anything beyond the C99 stdlib.
40 |   Tested on Linux (`x86_64`, `armv7l`), OpenBSD (`x86_64`).
41 |
42 |
43 | - **ISC license**
44 |
45 |   You can use it freely, even for commercial purposes.
46 |
47 |
48 | ## Building
49 |
50 | To build and install the library:
51 |
52 |     $ make
53 |     $ sudo make install
54 |
55 | To install the library into a build sandbox directory, for packaging:
56 |
57 |     $ mkdir destdir
58 |     $ env DESTDIR=destdir make install
59 |
60 | To build and run tests (which depend on
61 | [theft](https://github.com/silentbicycle/theft)):
62 |
63 |     $ make test
64 |
65 | To run benchmarks:
66 |
67 |     $ make bench
68 |
69 | Build arguments for `libskiparray` are provided via `pkg-config`:
70 |
71 |     $ pkg-config --libs --static libskiparray
72 |     -L/usr/local/lib -lskiparray
73 |
74 |
75 | ## General Use
76 |
77 | Use `skiparray_new` to allocate a skiparray collection instance. This
78 | must be called with a `struct skiparray_config`, in order to set the
79 | comparison callback (`.cmp`). The other fields are optional.
80 |
81 | Free the skiparray with `skiparray_free`. If the configuration's `free`
82 | callback is set, it is called for every stored binding, so they don't leak.
83 |
84 | Key/value pairs can be stored with `skiparray_set`, retrieved with
85 | `skiparray_get`, and removed with `skiparray_forget`. `set`, `get`,
86 | and `forget` have variants that return the actual stored key as well as
87 | the value (as a `struct skiparray_pair`), in case there are distinct key
88 | instances which compare equal. `skiparray_set_with_pair` also takes a
89 | flag, `replace_key`, to determine whether to replace or keep the current
90 | key when updating an existing binding.
91 |
92 | `skiparray_member` checks whether a key is present, and `skiparray_count`
93 | returns how many bindings are stored.
94 |
95 | `skiparray_first` and `skiparray_last` look up the first and last
96 | bindings, or report that the skiparray is empty. Both have `pop` variants,
97 | which also remove the first/last binding.
98 |
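As a quick illustration (a sketch, not taken from the library itself), the
following stores small integers cast directly to `void *`, in the same style
as `src/bench.c`:

    #include <stdint.h>
    #include <stdio.h>
    #include "skiparray.h"

    static int
    cmp_intptr(const void *ka, const void *kb, void *udata) {
        (void)udata;
        const intptr_t a = (intptr_t)ka, b = (intptr_t)kb;
        return a < b ? -1 : a > b ? 1 : 0;
    }

    int main(void) {
        struct skiparray_config cfg = { .cmp = cmp_intptr };
        struct skiparray *sa = NULL;
        if (skiparray_new(&cfg, &sa) != SKIPARRAY_NEW_OK) { return 1; }

        for (intptr_t i = 0; i < 10; i++) {   /* bind i -> 10*i */
            if (skiparray_set(sa, (void *)i, (void *)(10 * i)) < 0) { return 1; }
        }

        void *v = NULL;
        if (skiparray_get(sa, (void *)(intptr_t)7, &v)) {
            printf("7 -> %ld\n", (long)(intptr_t)v);
        }

        skiparray_free(sa);
        return 0;
    }

Since no `free` callback is configured and the keys and values are plain
integers, there is nothing else to clean up before `skiparray_free`.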
99 | Iterators can be allocated with `skiparray_iter_new` and freed with
100 | `skiparray_iter_free`. While there are iterators active, any functions
101 | that would modify the skiparray structure will return a `LOCKED` error.
102 | Seek to the first/last bindings with `skiparray_iter_seek_endpoint`, to
103 | the first binding `>=` a particular key with `skiparray_iter_seek`, and
104 | `skiparray_iter_next` and `skiparray_iter_prev` will step
105 | forward/backward through the collection. `skiparray_iter_get` will
106 | return the key and value for the iterator's current position. Allocating
107 | an iterator for an empty collection will return an error.
108 |
109 | For further details, see the comments in `include/skiparray.h`.
110 |
--------------------------------------------------------------------------------
/include/skiparray.h:
--------------------------------------------------------------------------------
1 | /*
2 |  * Copyright (c) 2019 Scott Vokes
3 |  *
4 |  * Permission to use, copy, modify, and/or distribute this software for any
5 |  * purpose with or without fee is hereby granted, provided that the above
6 |  * copyright notice and this permission notice appear in all copies.
7 |  *
8 |  * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
9 |  * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
10 |  * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
11 |  * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
12 |  * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
13 |  * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
14 |  * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
15 |  */
16 |
17 | #ifndef SKIPARRAY_H
18 | #define SKIPARRAY_H
19 |
20 | #include <stdbool.h>
21 | #include <stddef.h>
22 | #include <stdint.h>
23 |
24 | /* Version 0.2.0 */
25 | #define SKIPARRAY_VERSION_MAJOR 0
26 | #define SKIPARRAY_VERSION_MINOR 2
27 | #define SKIPARRAY_VERSION_PATCH 0
28 |
29 | /* Default level limit as the skiparray grows. */
30 | #define SKIPARRAY_DEF_MAX_LEVEL 16
31 |
32 | /* Max value allowed for max_level option. */
33 | #define SKIPARRAY_MAX_MAX_LEVEL 32
34 |
35 | /* By default, individual nodes have at most this many pairs. Nodes are
36 |  * always at least half-full, except for the very last node. */
37 | #define SKIPARRAY_DEF_NODE_SIZE 1024
38 |
39 | /* Opaque handle for a skiparray, an unrolled skiplist. */
40 | struct skiparray;
41 |
42 | /* Memory management function:
43 |  *
44 |  * - If P is NULL, allocate and return a word-aligned pointer with at
45 |  *   least NSIZE bytes available.
46 |  * - If P is non-NULL and nsize is 0, free it, and return NULL.
47 |  * - Never called with non-NULL P and nsize > 0 (the realloc case).
48 |  * */
49 | typedef void *skiparray_memory_fun(void *p, size_t nsize, void *udata);
50 |
51 | /* Compare keys KA and KB:
52 |  * | when KA is < KB -> < 0;
53 |  * | when KA is > KB -> > 0;
54 |  * | when KA is = KB -> 0.
55 |  *
56 |  * The result of comparing any particular two keys must not change over
57 |  * the lifetime of the skiparray (e.g. do not change a collation setting
58 |  * stored in udata). */
59 | typedef int skiparray_cmp_fun(const void *ka,
60 |     const void *kb, void *udata);
61 |
62 | /* Callback for freeing keys and/or values in a skiparray, as its
63 |  * structure is freed. */
64 | typedef void skiparray_free_fun(void *key,
65 |     void *value, void *udata);
66 |
67 | /* Return the level for a new skiparray node (0 <= X < max_level).
68 |  * This should be calculated based on PRNG_STATE_IN (or similar state
69 |  * in UDATA), and update *PRNG_STATE_OUT or UDATA to a new random
70 |  * number generator state.
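 *
 * For illustration only (not part of this header): a callback matching
 * the distribution described below might advance the PRNG state and
 * count trailing set bits, e.g.:
 *
 *     static int
 *     example_level(uint64_t in, uint64_t *out, void *udata) {
 *         (void)udata;
 *         uint64_t r = next_random(in);   // hypothetical PRNG step
 *         *out = r;
 *         int level = 0;
 *         // each additional level is ~half as likely as the one below;
 *         // the clamp assumes the default max_level
 *         while ((r & 1) && level < SKIPARRAY_DEF_MAX_LEVEL - 1) {
 *             level++;
 *             r >>= 1;
 *         }
 *         return level;
 *     }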
71 | * 72 | * There should be approximately half as many nodes on each level over 73 | * 0 as the level below it. */ 74 | typedef int skiparray_level_fun(uint64_t prng_state_in, 75 | uint64_t *prng_state_out, void *udata); 76 | 77 | /* Configuration for the skiparray. 78 | * All fields are optional except cmp. */ 79 | struct skiparray_config { 80 | /* How many key/value pairs should be stored in each node? 81 | * Must be >= 2, or 0 for the default. */ 82 | uint16_t node_size; 83 | /* At most how many express levels should the skiparray have? 84 | * A max_level of 0 will use the default. */ 85 | uint8_t max_level; 86 | uint64_t seed; 87 | /* If this flag is set, then no memory will be allocated for values, 88 | * and value parameters will be ignored. If only the keys are being 89 | * used (as an ordered set), then this will cut memory usage in 90 | * half, and make operations faster by reducing cache misses. */ 91 | bool ignore_values; 92 | 93 | skiparray_cmp_fun *cmp; /* required */ 94 | skiparray_memory_fun *memory; /* optional */ 95 | skiparray_free_fun *free; /* optional */ 96 | skiparray_level_fun *level; /* optional */ 97 | void *udata; /* callback data, opaque to library */ 98 | }; 99 | 100 | /* Allocate a new skiparray. */ 101 | enum skiparray_new_res { 102 | SKIPARRAY_NEW_OK, 103 | SKIPARRAY_NEW_ERROR_NULL = -1, 104 | SKIPARRAY_NEW_ERROR_CONFIG = -2, 105 | SKIPARRAY_NEW_ERROR_MEMORY = -3, 106 | }; 107 | enum skiparray_new_res 108 | skiparray_new(const struct skiparray_config *config, 109 | struct skiparray **sa); 110 | 111 | /* Free a skiparray. If the skiparray's configuration's free callback 112 | * was non-NULL, then it will be called with every key, value pair and 113 | * udata. Any iterators associated with this skiparray will be freed, 114 | * and pointers to them will become stale. */ 115 | void skiparray_free(struct skiparray *sa); 116 | 117 | /* Get the value associated with a key. 118 | * Returns whether the value was found. */ 119 | bool 120 | skiparray_get(const struct skiparray *sa, 121 | const void *key, void **value); 122 | 123 | struct skiparray_pair { 124 | void *key; 125 | void *value; 126 | }; 127 | 128 | /* Same as skiparray_get, but also get the key actually 129 | * used in the binding as well as the value. */ 130 | bool 131 | skiparray_get_pair(const struct skiparray *sa, 132 | const void *key, struct skiparray_pair *pair); 133 | 134 | /* Set/update a binding in the skiparray, possibly replacing 135 | * an existing binding. Note that once a key is in the skiparray, 136 | * it should not be modified in any way that influences comparison 137 | * order. The key is only not const so that it can be freed later. 138 | * 139 | * This function (and any others below that would modify the skiparray) 140 | * will return ERROR_LOCKED if any iterators are active. 141 | * 142 | * To get info about a binding being replaced, use 143 | * skiparray_set_with_pair. This function is just a wrapper for it, 144 | * with REPLACE_PREVIOUS_KEY of true and PAIR set to NULL. */ 145 | enum skiparray_set_res { 146 | SKIPARRAY_SET_BOUND, 147 | SKIPARRAY_SET_REPLACED, 148 | SKIPARRAY_SET_ERROR_NULL = -1, 149 | SKIPARRAY_SET_ERROR_MEMORY = -2, 150 | SKIPARRAY_SET_ERROR_LOCKED = -3, 151 | }; 152 | enum skiparray_set_res 153 | skiparray_set(struct skiparray *sa, void *key, void *value); 154 | 155 | /* Set/update a binding in the skiparray. If PREVIOUS_BINDING is 156 | * non-NULL, its fields will be set to the previous binding, if any. 
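 *
 * Sketch (not from this header): a caller that owns heap-allocated keys
 * and values, and has no free callback configured, might release a
 * replaced pair like this (new_key and new_value are the caller's own
 * allocations):
 *
 *     struct skiparray_pair prev = { NULL, NULL };
 *     if (skiparray_set_with_pair(sa, new_key, new_value,
 *             true, &prev) == SKIPARRAY_SET_REPLACED) {
 *         free(prev.key);     // old key instance, no longer referenced
 *         free(prev.value);
 *     }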
157 | * 158 | * If replacing an existing binding, REPLACE_KEY determines whether it 159 | * will continue using the current key (false) or change it to the new 160 | * key (true). When there are multiple instances of a key that are not 161 | * pointer-equal, but equal according to the comparison callback, it 162 | * will usually be necessary to free one of them to avoid memory 163 | * leaks. */ 164 | enum skiparray_set_res 165 | skiparray_set_with_pair(struct skiparray *sa, void *key, void *value, 166 | bool replace_key, struct skiparray_pair *previous_binding); 167 | 168 | /* Remove a binding from the skiparray. If PAIR is non-NULL, its key and 169 | * value fields will be set to the forgotten binding. */ 170 | enum skiparray_forget_res { 171 | SKIPARRAY_FORGET_OK, 172 | SKIPARRAY_FORGET_NOT_FOUND, 173 | SKIPARRAY_FORGET_ERROR_NULL = -1, 174 | SKIPARRAY_FORGET_ERROR_MEMORY = -2, 175 | SKIPARRAY_FORGET_ERROR_LOCKED = -3, 176 | }; 177 | enum skiparray_forget_res 178 | skiparray_forget(struct skiparray *sa, const void *key, 179 | struct skiparray_pair *forgotten); 180 | 181 | /* Does KEY have an associated binding? */ 182 | bool 183 | skiparray_member(const struct skiparray *sa, 184 | const void *key); 185 | 186 | /* How many bindings are there? */ 187 | size_t 188 | skiparray_count(const struct skiparray *sa); 189 | 190 | /* Get the first binding. */ 191 | enum skiparray_first_res { 192 | SKIPARRAY_FIRST_OK, 193 | SKIPARRAY_FIRST_EMPTY, 194 | }; 195 | enum skiparray_first_res 196 | skiparray_first(const struct skiparray *sa, 197 | void **key, void **value); 198 | 199 | /* Get the last binding. */ 200 | enum skiparray_last_res { 201 | SKIPARRAY_LAST_OK, 202 | SKIPARRAY_LAST_EMPTY, 203 | }; 204 | enum skiparray_last_res 205 | skiparray_last(const struct skiparray *sa, 206 | void **key, void **value); 207 | 208 | enum skiparray_pop_res { 209 | SKIPARRAY_POP_OK, 210 | SKIPARRAY_POP_EMPTY, 211 | SKIPARRAY_POP_ERROR_MEMORY = -1, 212 | SKIPARRAY_POP_ERROR_LOCKED = -2, 213 | }; 214 | 215 | /* Get and remove the first binding. */ 216 | enum skiparray_pop_res 217 | skiparray_pop_first(struct skiparray *sa, 218 | void **key, void **value); 219 | 220 | /* Get and remove the last binding. */ 221 | enum skiparray_pop_res 222 | skiparray_pop_last(struct skiparray *sa, 223 | void **key, void **value); 224 | 225 | /* Opaque handle to a skiparray iterator. */ 226 | struct skiparray_iter; 227 | 228 | /* Allocate a new iterator handle. This will store a pointer to 229 | * the skiparray, and the skiparray tracks its active iterator(s). 230 | * 231 | * The skiparray cannot be modified while there are any active 232 | * iterators. Operations such as set will just return ERROR_LOCKED. */ 233 | enum skiparray_iter_new_res { 234 | SKIPARRAY_ITER_NEW_OK, 235 | SKIPARRAY_ITER_NEW_EMPTY, 236 | SKIPARRAY_ITER_NEW_ERROR_MEMORY = -1, 237 | }; 238 | enum skiparray_iter_new_res 239 | skiparray_iter_new(struct skiparray *sa, 240 | struct skiparray_iter **res); 241 | 242 | /* Free an iterator. If there are no more iterators associated with a 243 | * skiparray, it will become unlocked and can again be modified. 244 | * Iterators do not need to be freed in any particular order. 
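 *
 * A typical iteration loop (sketch, mirroring sum() in src/bench.c;
 * error handling elided):
 *
 *     struct skiparray_iter *it = NULL;
 *     if (skiparray_iter_new(sa, &it) == SKIPARRAY_ITER_NEW_OK) {
 *         skiparray_iter_seek_endpoint(it, SKIPARRAY_ITER_SEEK_FIRST);
 *         do {
 *             void *k = NULL, *v = NULL;
 *             skiparray_iter_get(it, &k, &v);
 *             // ... use k and v ...
 *         } while (skiparray_iter_next(it) == SKIPARRAY_ITER_STEP_OK);
 *         skiparray_iter_free(it);
 *     }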
*/ 245 | void 246 | skiparray_iter_free(struct skiparray_iter *iter); 247 | 248 | enum skiparray_iter_seek_endpoint { 249 | SKIPARRAY_ITER_SEEK_FIRST, 250 | SKIPARRAY_ITER_SEEK_LAST, 251 | }; 252 | void 253 | skiparray_iter_seek_endpoint(struct skiparray_iter *iter, 254 | enum skiparray_iter_seek_endpoint end); 255 | 256 | /* Seek to the first binding >= the given key. 257 | * The iterator position is not updated on error. */ 258 | enum skiparray_iter_seek_res { 259 | SKIPARRAY_ITER_SEEK_FOUND, /* now at binding with key */ 260 | SKIPARRAY_ITER_SEEK_NOT_FOUND, /* now at first binding with > key */ 261 | SKIPARRAY_ITER_SEEK_ERROR_BEFORE_FIRST, /* position not updated */ 262 | SKIPARRAY_ITER_SEEK_ERROR_AFTER_LAST, /* position not updated */ 263 | }; 264 | enum skiparray_iter_seek_res 265 | skiparray_iter_seek(struct skiparray_iter *iter, 266 | const void *key); 267 | 268 | /* Seek to the next binding; returns END if at the last pair. */ 269 | enum skiparray_iter_step_res { 270 | SKIPARRAY_ITER_STEP_OK, 271 | SKIPARRAY_ITER_STEP_END, 272 | }; 273 | enum skiparray_iter_step_res 274 | skiparray_iter_next(struct skiparray_iter *iter); 275 | 276 | /* Seek to the previous binding; returns END if at the first pair. */ 277 | enum skiparray_iter_step_res 278 | skiparray_iter_prev(struct skiparray_iter *iter); 279 | 280 | /* Get the key and/or value at the current iterator position. */ 281 | void 282 | skiparray_iter_get(struct skiparray_iter *iter, 283 | void **key, void **value); 284 | 285 | /* Opaque handle for a skiparray builder. This can be used to 286 | * incrementally construct a skiparray more efficiently than by 287 | * repeatedly calling `skiparray_set`, because only the builder 288 | * is allowed to modify the skiparray until it's complete. 289 | * Key/value pairs must be appended in ascending order. */ 290 | struct skiparray_builder; 291 | 292 | /* Allocate a skiparray builder. 293 | * 294 | * If skip_ascending_key_check is true, then the builder will save on 295 | * overhead from a comparison per append, but appending a key that is 296 | * not > the previous may silently corrupt data, trigger assertions 297 | * later, etc. You have been warned. */ 298 | enum skiparray_builder_new_res { 299 | SKIPARRAY_BUILDER_NEW_OK, 300 | SKIPARRAY_BUILDER_NEW_ERROR_MISUSE = -1, 301 | SKIPARRAY_BUILDER_NEW_ERROR_MEMORY = -2, 302 | }; 303 | enum skiparray_builder_new_res 304 | skiparray_builder_new(const struct skiparray_config *cfg, 305 | bool skip_ascending_key_check, struct skiparray_builder **builder); 306 | 307 | /* Free (and abandon) a skiparray that is still being built. */ 308 | void 309 | skiparray_builder_free(struct skiparray_builder *b); 310 | 311 | /* Append a key/value pair with the builder. The key should be > the 312 | * previous key, according to the builder's comparison function. 313 | * 314 | * If doing an ascending key check, it will compare the new key against 315 | * the previously appended key (if any), and either append or return 316 | * ERROR_MISUSE and leave the builder unchanged. */ 317 | enum skiparray_builder_append_res { 318 | SKIPARRAY_BUILDER_APPEND_OK, 319 | SKIPARRAY_BUILDER_APPEND_ERROR_MISUSE = -1, 320 | SKIPARRAY_BUILDER_APPEND_ERROR_MEMORY = -2, 321 | }; 322 | enum skiparray_builder_append_res 323 | skiparray_builder_append(struct skiparray_builder *b, 324 | void *key, void *value); 325 | 326 | /* Finish a builder, converting it to a skiparray. 327 | * The builder will be freed, and *b will be set to NULL. 328 | * This operation cannot fail. 
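 *
 * Builder sketch (not from this header; cfg is a caller-defined
 * struct skiparray_config, error handling is elided, and keys must be
 * appended in ascending order):
 *
 *     struct skiparray_builder *b = NULL;
 *     skiparray_builder_new(&cfg, false, &b);
 *     for (intptr_t i = 0; i < 1000; i++) {
 *         skiparray_builder_append(b, (void *)i, (void *)i);
 *     }
 *     struct skiparray *sa = NULL;
 *     skiparray_builder_finish(&b, &sa);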
 */
329 | void
330 | skiparray_builder_finish(struct skiparray_builder **b,
331 |     struct skiparray **sa);
332 |
333 | /* Opaque type for a handle to a fold in progress. */
334 | struct skiparray_fold_state;
335 |
336 | /* Should the fold start from the left (i.e., ascending keys)
337 |  * or right (descending keys)? */
338 | enum skiparray_fold_type {
339 |     SKIPARRAY_FOLD_LEFT,  /* left-to-right / ascending */
340 |     SKIPARRAY_FOLD_RIGHT, /* right-to-left / descending */
341 | };
342 |
343 | /* A function applied to a (key, value) pair, and potentially
344 |  * updating passed-in user state (udata). The udata pointer
345 |  * is opaque to the skiparray library.
346 |  *
347 |  * Note: key and value are not const because they may be passed
348 |  * in to skiparray_builder_append, but they should not be
349 |  * mutated.
350 |  * todo: Is there a better way to encode/enforce this? */
351 | typedef void
352 | skiparray_fold_fun(void *key, void *value, void *udata);
353 |
354 | /* If multiple skiparrays have keys that compare equal, determine which
355 |  * key and value to use. The keys and values will appear in the same
356 |  * order as their skiparrays first appeared in the call to
357 |  * skiparray_fold_multi. The keys all compare equal, but may be
358 |  * distinct instances of that key.
359 |  *
360 |  * This function should return the offset for which key to use
361 |  * (unchanged), and set *merged_value to the value to use (if the
362 |  * skiparrays have values). This can point to a freshly allocated value
363 |  * or to one of the existing ones, but in the latter case, the free
364 |  * callback will need to avoid double frees.
365 |  *
366 |  * Returning a key choice >= count will lead to an assertion failure. */
367 | typedef uint8_t
368 | skiparray_fold_merge_fun(uint8_t count,
369 |     /* todo: make the input arrays const */
370 |     const void **keys, void **values, void **merged_value, void *udata);
371 |
372 | /* Start a fold over a single skiparray.
373 |  * The skiparray will be locked while the fold is active. */
374 | enum skiparray_fold_res {
375 |     SKIPARRAY_FOLD_OK,
376 |     SKIPARRAY_FOLD_ERROR_MISUSE = -1,
377 |     SKIPARRAY_FOLD_ERROR_MEMORY = -2,
378 | };
379 | enum skiparray_fold_res
380 | skiparray_fold_init(enum skiparray_fold_type direction,
381 |     struct skiparray *sa, skiparray_fold_fun *cb, void *udata,
382 |     struct skiparray_fold_state **fs);
383 |
384 | enum skiparray_fold_res
385 | skiparray_fold(enum skiparray_fold_type direction,
386 |     struct skiparray *sa, skiparray_fold_fun *cb, void *udata);
387 |
388 | /* Start a fold over multiple skiparrays.
389 |  * The callback will be called on each key in ascending or descending
390 |  * order, depending on DIRECTION. If multiple skiparrays' next available
391 |  * keys compare equal, then the merge callback will be called to merge
392 |  * the options to a single key, value pair first.
393 |  *
394 |  * As this is built on top of the iteration API, all the skiparrays
395 |  * will be locked while the fold is active.
396 |  *
397 |  * Calling this on skiparrays with non-matching cmp, free, or memory
398 |  * callbacks will return ERROR_MISUSE. Similarly, either all or none of
399 |  * them must use values. */
400 | enum skiparray_fold_res
401 | skiparray_fold_multi_init(enum skiparray_fold_type direction,
402 |     uint8_t skiparray_count, struct skiparray **skiparrays,
403 |     skiparray_fold_fun *cb, skiparray_fold_merge_fun *merge, void *udata,
404 |     struct skiparray_fold_state **fs);
405 |
406 | /* Halt a fold in progress and free fs.
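 *
 * For reference, a complete incremental fold looks like the following
 * sketch (sum_cb being a caller-defined skiparray_fold_fun and &total
 * its udata); calling skiparray_fold_halt instead of stepping to DONE
 * stops early and frees fs:
 *
 *     struct skiparray_fold_state *fs = NULL;
 *     if (skiparray_fold_init(SKIPARRAY_FOLD_LEFT, sa,
 *             sum_cb, &total, &fs) == SKIPARRAY_FOLD_OK) {
 *         while (skiparray_fold_next(fs) == SKIPARRAY_FOLD_NEXT_OK) {}
 *     }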
*/ 407 | void 408 | skiparray_fold_halt(struct skiparray_fold_state *fs); 409 | 410 | /* Step a fold in progress. This will call the appropriate callbacks and 411 | * return OK if there are more bindings to process, or free fs and 412 | * return DONE. */ 413 | enum skiparray_fold_next_res { 414 | SKIPARRAY_FOLD_NEXT_OK, 415 | SKIPARRAY_FOLD_NEXT_DONE, 416 | }; 417 | enum skiparray_fold_next_res 418 | skiparray_fold_next(struct skiparray_fold_state *fs); 419 | 420 | /* Filter function: given a key/value pair, indicate whether to add it 421 | * to the skiparray that skiparray_filter is buliding. */ 422 | typedef bool 423 | skiparray_filter_fun(const void *key, const void *value, void *udata); 424 | 425 | /* Allocate a new skiparray, containing a subset of another's 426 | * key/value pairs. The new skiparray will have the same comparison 427 | * and memory callbacks as the original. 428 | * 429 | * Returns NULL on allocation failure, or the new skiparray. */ 430 | struct skiparray * 431 | skiparray_filter(struct skiparray *sa, 432 | skiparray_filter_fun *fun, void *udata); 433 | 434 | #endif 435 | -------------------------------------------------------------------------------- /pc/libskiparray.pc.in: -------------------------------------------------------------------------------- 1 | prefix=@prefix@ 2 | libdir=${prefix}/@libdir@ 3 | includedir=${prefix}/include 4 | 5 | Name: skiparray 6 | Description: Unrolled skip list library for C 7 | Version: 0.2.0 8 | Requires: 9 | Libs: -L${libdir} -lskiparray 10 | Libs.private: 11 | Cflags: -I${includedir} 12 | -------------------------------------------------------------------------------- /src/bench.c: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | 10 | #include "skiparray.h" 11 | 12 | #include 13 | 14 | static const size_t usec_per_sec = 1000000L; 15 | static const size_t msec_per_sec = 1000L; 16 | 17 | static size_t 18 | get_usec_delta(struct timeval *pre, struct timeval *post) { 19 | return (usec_per_sec * (post->tv_sec - pre->tv_sec) 20 | + (post->tv_usec - pre->tv_usec)); 21 | } 22 | 23 | #define TIME(NAME) \ 24 | struct timeval timer_##NAME = { 0, 0 }; \ 25 | int timer_res_##NAME = gettimeofday(&timer_##NAME, NULL); \ 26 | (void)timer_res_##NAME; \ 27 | assert(0 == timer_res_##NAME); \ 28 | 29 | #define CMP_TIME(LABEL, LIMIT, N1, N2) \ 30 | do { \ 31 | size_t usec_delta = get_usec_delta(&timer_##N1, &timer_##N2); \ 32 | double usec_per = usec_delta / (double)LIMIT; \ 33 | double per_second = usec_per_sec / usec_per; \ 34 | printf("%-30s limit %9zu %9.3f msec, %6.3f usec per, " \ 35 | "%11.3f K ops/sec", \ 36 | LABEL, LIMIT, usec_delta / (double)msec_per_sec, \ 37 | usec_per, per_second / 1000); \ 38 | if (track_memory) { \ 39 | printf(", %g MB hwm, %g w/e", \ 40 | memory_hwm / (1024.0 * 1024), \ 41 | memory_hwm / (1.0 * sizeof(void *) * LIMIT)); \ 42 | } \ 43 | printf("\n"); \ 44 | } while(0) \ 45 | 46 | #define TDIFF() CMP_TIME(__func__, limit, pre, post) 47 | 48 | #define MAX_LIMITS 64 49 | #define DEF_LIMIT ((size_t)1000000) 50 | #define DEF_CYCLES ((size_t)1) 51 | 52 | static const int prime = 7919; 53 | static size_t cycles = DEF_CYCLES; 54 | static uint64_t rng_seed = 0; 55 | static uint8_t limit_count = 0; 56 | static size_t limits[MAX_LIMITS]; 57 | static size_t node_size = SKIPARRAY_DEF_NODE_SIZE; 58 | static const char *name; 59 | static bool track_memory; 60 | static size_t memory_used; 61 | 
static size_t memory_hwm; /* allocation high-water mark */ 62 | 63 | static void 64 | usage(void) { 65 | fprintf(stderr, "Usage: benchmarks [-c ] [-l ] [-m]\n"); 66 | fprintf(stderr, " [-n ] [-r ] [-s ]\n\n"); 67 | fprintf(stderr, " -c: run multiple cycles of benchmarks (def. 1)\n"); 68 | fprintf(stderr, " -l: set limit(s); comma-separated, default %zu.\n", DEF_LIMIT); 69 | fprintf(stderr, " -m: track the memory high-water mark, in MB and words/entry.\n"); 70 | fprintf(stderr, " -n: run one benchmark. 'help' prints available benchmarks.\n"); 71 | fprintf(stderr, " -r: set RNG seed.\n"); 72 | fprintf(stderr, " -s: node size, default %d.\n", SKIPARRAY_DEF_NODE_SIZE); 73 | exit(EXIT_FAILURE); 74 | } 75 | 76 | static int cmp_size_t(const void *pa, const void *pb) { 77 | const size_t a = *(size_t *)pa; 78 | const size_t b = *(size_t *)pb; 79 | return a < b ? -1 : a > b ? 1 : 0; 80 | } 81 | 82 | static bool 83 | parse_limits(char *optarg) { 84 | char *arg = strtok(optarg, ","); 85 | while (arg) { 86 | size_t nlimit = strtoul(arg, NULL, 0); 87 | if (nlimit <= 1) { return false; } 88 | limits[limit_count] = nlimit; 89 | if (limit_count == MAX_LIMITS) { 90 | fprintf(stderr, "Error: Too many limits (max %d)\n", (int)MAX_LIMITS); 91 | exit(EXIT_FAILURE); 92 | } 93 | limit_count++; 94 | arg = strtok(NULL, ","); 95 | } 96 | 97 | qsort(limits, limit_count, sizeof(limits[0]), cmp_size_t); 98 | return true; 99 | } 100 | 101 | static void 102 | handle_args(int argc, char **argv) { 103 | int fl; 104 | while ((fl = getopt(argc, argv, "hc:l:mn:r:s:")) != -1) { 105 | switch (fl) { 106 | case 'h': /* help */ 107 | usage(); 108 | break; 109 | case 'c': /* cycles */ 110 | cycles = strtoul(optarg, NULL, 0); 111 | if (cycles == 0) { 112 | fprintf(stderr, "Bad cycles: %zu\n", cycles); 113 | usage(); 114 | } 115 | break; 116 | case 'l': /* limit */ 117 | if (!parse_limits(optarg)) { 118 | fprintf(stderr, "Bad limit(s): %s\n", optarg); 119 | usage(); 120 | } 121 | break; 122 | case 'm': /* memory */ 123 | track_memory = true; 124 | break; 125 | case 'n': /* name */ 126 | name = optarg; 127 | break; 128 | case 'r': /* rng_seed */ 129 | rng_seed = strtoul(optarg, NULL, 0); 130 | break; 131 | case 's': /* node_size */ 132 | node_size = strtoul(optarg, NULL, 0); 133 | if (node_size < 2) { 134 | fprintf(stderr, "Bad node_size: %zu.\n", node_size); 135 | usage(); 136 | } 137 | break; 138 | case '?': 139 | default: 140 | usage(); 141 | } 142 | } 143 | } 144 | 145 | static int 146 | cmp_intptr_t(const void *ka, 147 | const void *kb, void *udata) { 148 | (void)udata; 149 | intptr_t a = (intptr_t)ka; 150 | intptr_t b = (intptr_t)kb; 151 | return (a < b ? -1 : a > b ? 1 : 0); 152 | } 153 | 154 | static struct skiparray_config sa_config = { 155 | .cmp = cmp_intptr_t, 156 | }; 157 | 158 | static struct skiparray_config sa_config_no_values; 159 | 160 | static struct skiparray * 161 | sequential_build(const struct skiparray_config *config, size_t limit) { 162 | struct skiparray_builder *b = NULL; 163 | 164 | enum skiparray_builder_new_res bnres = 165 | skiparray_builder_new(&sa_config, false, &b); 166 | (void)bnres; 167 | 168 | for (size_t i = 0; i < limit; i++) { 169 | intptr_t k = i; 170 | enum skiparray_builder_append_res bares = 171 | skiparray_builder_append(b, (void *) k, 172 | (config->ignore_values ? NULL : (void *) k)); 173 | (void)bares; 174 | } 175 | 176 | struct skiparray *sa = NULL; 177 | skiparray_builder_finish(&b, &sa); 178 | return sa; 179 | } 180 | 181 | /* Measure insertions. 
*/ 182 | /* Measure getting existing values (successful lookup). */ 183 | static void 184 | get_sequential(size_t limit) { 185 | struct skiparray *sa = sequential_build(&sa_config, limit); 186 | 187 | TIME(pre); 188 | for (size_t i = 0; i < limit; i++) { 189 | intptr_t k = i; 190 | intptr_t v = 0; 191 | skiparray_get(sa, (void *) k, (void **)&v); 192 | assert(v == k); 193 | } 194 | TIME(post); 195 | 196 | TDIFF(); 197 | skiparray_free(sa); 198 | } 199 | 200 | /* Measure getting existing values (successful lookup). */ 201 | static void 202 | get_random_access(size_t limit) { 203 | struct skiparray *sa = sequential_build(&sa_config, limit); 204 | 205 | TIME(pre); 206 | for (size_t i = 0; i < limit; i++) { 207 | intptr_t k = (i * prime) % limit; 208 | intptr_t v = 0; 209 | skiparray_get(sa, (void *) k, (void **)&v); 210 | assert(v == k); 211 | } 212 | TIME(post); 213 | 214 | TDIFF(); 215 | skiparray_free(sa); 216 | } 217 | 218 | /* Same, but only use keys. */ 219 | static void 220 | get_random_access_no_values(size_t limit) { 221 | struct skiparray *sa = sequential_build(&sa_config_no_values, limit); 222 | 223 | TIME(pre); 224 | for (size_t i = 0; i < limit; i++) { 225 | intptr_t k = (i * prime) % limit; 226 | skiparray_get(sa, (void *) k, NULL); 227 | } 228 | TIME(post); 229 | 230 | TDIFF(); 231 | skiparray_free(sa); 232 | } 233 | 234 | /* Measure getting _nonexistent_ values (lookup failure). */ 235 | static void 236 | get_nonexistent(size_t limit) { 237 | struct skiparray *sa = sequential_build(&sa_config, limit); 238 | 239 | TIME(pre); 240 | for (size_t i = 0; i < limit; i++) { 241 | intptr_t k = ((i * prime) % limit) + limit; 242 | intptr_t v = 0; 243 | skiparray_get(sa, (void *) k, (void **)&v); 244 | assert(v == 0); 245 | } 246 | TIME(post); 247 | 248 | TDIFF(); 249 | skiparray_free(sa); 250 | } 251 | 252 | static void 253 | count(size_t limit) { 254 | struct skiparray *sa = sequential_build(&sa_config, limit); 255 | 256 | TIME(pre); 257 | size_t count = skiparray_count(sa); 258 | assert(count == limit); 259 | TIME(post); 260 | 261 | TDIFF(); 262 | skiparray_free(sa); 263 | } 264 | 265 | static void 266 | set_sequential(size_t limit) { 267 | struct skiparray *sa = NULL; 268 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 269 | (void)nres; 270 | 271 | TIME(pre); 272 | for (size_t i = 0; i < limit; i++) { 273 | intptr_t k = i; 274 | skiparray_set(sa, (void *) k, (void *) k); 275 | } 276 | TIME(post); 277 | 278 | TDIFF(); 279 | skiparray_free(sa); 280 | } 281 | 282 | static void 283 | set_sequential_builder(size_t limit) { 284 | struct skiparray_builder *b = NULL; 285 | 286 | enum skiparray_builder_new_res bnres = 287 | skiparray_builder_new(&sa_config, false, &b); 288 | (void)bnres; 289 | 290 | TIME(pre); 291 | for (size_t i = 0; i < limit; i++) { 292 | intptr_t k = i; 293 | enum skiparray_builder_append_res bares = 294 | skiparray_builder_append(b, (void *) k, (void *) k); 295 | (void)bares; 296 | } 297 | 298 | struct skiparray *sa = NULL; 299 | skiparray_builder_finish(&b, &sa); 300 | TIME(post); 301 | 302 | TDIFF(); 303 | skiparray_free(sa); 304 | } 305 | 306 | static void 307 | set_sequential_builder_no_chk(size_t limit) { 308 | struct skiparray_builder *b = NULL; 309 | 310 | enum skiparray_builder_new_res bnres = 311 | skiparray_builder_new(&sa_config, true, &b); 312 | (void)bnres; 313 | 314 | TIME(pre); 315 | for (size_t i = 0; i < limit; i++) { 316 | intptr_t k = i; 317 | enum skiparray_builder_append_res bares = 318 | skiparray_builder_append(b, (void *) k, (void 
*) k); 319 | (void)bares; 320 | } 321 | 322 | struct skiparray *sa = NULL; 323 | skiparray_builder_finish(&b, &sa); 324 | TIME(post); 325 | 326 | TDIFF(); 327 | skiparray_free(sa); 328 | } 329 | 330 | static void 331 | set_random_access(size_t limit) { 332 | struct skiparray *sa = NULL; 333 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 334 | (void)nres; 335 | 336 | TIME(pre); 337 | for (size_t i = 0; i < limit; i++) { 338 | intptr_t k = (i * prime) % limit; 339 | skiparray_set(sa, (void *) k, (void *) k); 340 | } 341 | TIME(post); 342 | 343 | TDIFF(); 344 | skiparray_free(sa); 345 | } 346 | 347 | static void 348 | set_random_access_no_values(size_t limit) { 349 | struct skiparray *sa = NULL; 350 | enum skiparray_new_res nres = skiparray_new(&sa_config_no_values, &sa); 351 | (void)nres; 352 | 353 | TIME(pre); 354 | for (size_t i = 0; i < limit; i++) { 355 | intptr_t k = (i * prime) % limit; 356 | skiparray_set(sa, (void *) k, NULL); 357 | } 358 | TIME(post); 359 | 360 | TDIFF(); 361 | skiparray_free(sa); 362 | } 363 | 364 | static void 365 | set_replacing_sequential(size_t limit) { 366 | struct skiparray *sa = sequential_build(&sa_config, limit); 367 | 368 | TIME(pre); 369 | for (size_t i = 0; i < limit; i++) { 370 | intptr_t k = i; 371 | skiparray_set(sa, (void *) k, (void *) (k + 1)); 372 | } 373 | TIME(post); 374 | 375 | TDIFF(); 376 | skiparray_free(sa); 377 | } 378 | 379 | static void 380 | set_replacing_random_access(size_t limit) { 381 | struct skiparray *sa = sequential_build(&sa_config, limit); 382 | 383 | TIME(pre); 384 | for (size_t i = 0; i < limit; i++) { 385 | intptr_t k = (i * prime) % limit; 386 | skiparray_set(sa, (void *) k, (void *) (k + 1)); 387 | } 388 | TIME(post); 389 | 390 | TDIFF(); 391 | skiparray_free(sa); 392 | } 393 | 394 | static void 395 | forget_sequential(size_t limit) { 396 | struct skiparray *sa = sequential_build(&sa_config, limit); 397 | 398 | TIME(pre); 399 | for (size_t i = 0; i < limit; i++) { 400 | intptr_t k = i; 401 | (void)skiparray_forget(sa, (void *) k, NULL); 402 | } 403 | TIME(post); 404 | 405 | TDIFF(); 406 | skiparray_free(sa); 407 | } 408 | 409 | static void 410 | forget_random_access(size_t limit) { 411 | struct skiparray *sa = sequential_build(&sa_config, limit); 412 | 413 | TIME(pre); 414 | for (size_t i = 0; i < limit; i++) { 415 | intptr_t k = (i * prime) % limit; 416 | (void)skiparray_forget(sa, (void *) k, NULL); 417 | } 418 | TIME(post); 419 | 420 | TDIFF(); 421 | skiparray_free(sa); 422 | } 423 | 424 | static void 425 | forget_random_access_no_values(size_t limit) { 426 | struct skiparray *sa = sequential_build(&sa_config_no_values, limit); 427 | 428 | TIME(pre); 429 | for (size_t i = 0; i < limit; i++) { 430 | intptr_t k = (i * prime) % limit; 431 | (void)skiparray_forget(sa, (void *)k, NULL); 432 | } 433 | TIME(post); 434 | 435 | TDIFF(); 436 | skiparray_free(sa); 437 | } 438 | 439 | static void 440 | forget_nonexistent(size_t limit) { 441 | struct skiparray *sa = sequential_build(&sa_config, limit); 442 | 443 | TIME(pre); 444 | for (size_t i = 0; i < limit; i++) { 445 | intptr_t k = ((i * prime) % limit) + limit; 446 | (void)skiparray_forget(sa, (void *) k, NULL); 447 | } 448 | TIME(post); 449 | 450 | TDIFF(); 451 | skiparray_free(sa); 452 | } 453 | 454 | static void 455 | pop_first(size_t limit) { 456 | struct skiparray *sa = sequential_build(&sa_config, limit); 457 | 458 | TIME(pre); 459 | for (size_t i = 0; i < limit; i++) { 460 | intptr_t k = 0, v = 0; 461 | enum skiparray_pop_res res = 
skiparray_pop_first(sa, 462 | (void *) &k, (void *) &v); 463 | if (res == SKIPARRAY_POP_EMPTY) { assert(false); } 464 | assert(res >= 0); 465 | assert(v == k); 466 | (void) res; 467 | } 468 | TIME(post); 469 | 470 | TDIFF(); 471 | skiparray_free(sa); 472 | } 473 | 474 | static void 475 | pop_last(size_t limit) { 476 | struct skiparray *sa = sequential_build(&sa_config, limit); 477 | 478 | TIME(pre); 479 | for (size_t i = 0; i < limit; i++) { 480 | intptr_t k = 0, v = 0; 481 | int res = skiparray_pop_last(sa, (void *) &k, (void *) &v); 482 | assert(res >= 0); 483 | assert(v == k); 484 | (void) res; 485 | } 486 | TIME(post); 487 | 488 | TDIFF(); 489 | skiparray_free(sa); 490 | } 491 | 492 | static void 493 | member_sequential(size_t limit) { 494 | struct skiparray *sa = sequential_build(&sa_config, limit); 495 | 496 | TIME(pre); 497 | for (size_t i = 0; i < limit; i++) { 498 | assert(skiparray_member(sa, (void *)i)); 499 | } 500 | TIME(post); 501 | 502 | TDIFF(); 503 | skiparray_free(sa); 504 | } 505 | 506 | static void 507 | member_random_access(size_t limit) { 508 | struct skiparray *sa = sequential_build(&sa_config, limit); 509 | 510 | TIME(pre); 511 | for (size_t i = 0; i < limit; i++) { 512 | size_t k = (i * prime) % limit; 513 | assert(skiparray_member(sa, (void *)k)); 514 | } 515 | TIME(post); 516 | 517 | TDIFF(); 518 | skiparray_free(sa); 519 | } 520 | 521 | static void 522 | sum(size_t limit) { 523 | struct skiparray *sa = NULL; 524 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 525 | (void)nres; 526 | 527 | uintptr_t actual = 0; 528 | for (size_t i = 0; i < limit; i++) { 529 | skiparray_set(sa, (void *)i, (void *)i); 530 | actual += i; 531 | } 532 | 533 | TIME(pre); 534 | uintptr_t total = 0; 535 | 536 | struct skiparray_iter *iter = NULL; 537 | if (SKIPARRAY_ITER_NEW_OK != skiparray_iter_new(sa, &iter)) { 538 | assert(false); 539 | } 540 | 541 | skiparray_iter_seek_endpoint(iter, SKIPARRAY_ITER_SEEK_FIRST); 542 | 543 | do { 544 | void *k, *v; 545 | skiparray_iter_get(iter, &k, &v); 546 | total += (uintptr_t)v; 547 | } while (skiparray_iter_next(iter) == SKIPARRAY_ITER_STEP_OK); 548 | 549 | skiparray_iter_free(iter); 550 | 551 | TIME(post); 552 | TDIFF(); 553 | skiparray_free(sa); 554 | 555 | assert(total == actual); 556 | } 557 | 558 | static void 559 | sum_partway(size_t limit) { 560 | struct skiparray *sa = NULL; 561 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 562 | (void)nres; 563 | 564 | for (size_t i = 0; i < limit; i++) { 565 | skiparray_set(sa, (void *)i, (void *)i); 566 | } 567 | 568 | TIME(pre); 569 | 570 | struct skiparray_iter *iter = NULL; 571 | if (SKIPARRAY_ITER_NEW_OK != skiparray_iter_new(sa, &iter)) { 572 | assert(false); 573 | } 574 | 575 | const uintptr_t starting_point = limit / 2; 576 | 577 | if (SKIPARRAY_ITER_SEEK_FOUND != 578 | skiparray_iter_seek(iter, (const void *)starting_point)) { 579 | assert(false); 580 | } 581 | 582 | do { 583 | void *k, *v; 584 | skiparray_iter_get(iter, &k, &v); 585 | (void)v; 586 | } while (skiparray_iter_next(iter) == SKIPARRAY_ITER_STEP_OK); 587 | 588 | skiparray_iter_free(iter); 589 | 590 | TIME(post); 591 | TDIFF(); 592 | skiparray_free(sa); 593 | } 594 | 595 | typedef void 596 | benchmark_fun(size_t limit); 597 | 598 | struct benchmark { 599 | const char *name; 600 | benchmark_fun *fun; 601 | }; 602 | 603 | static struct benchmark benchmarks[] = { 604 | { "get_sequential", get_sequential }, 605 | { "get_random_access", get_random_access }, 606 | { "get_random_access_no_values", 
get_random_access_no_values }, 607 | { "get_nonexistent", get_nonexistent }, 608 | { "set_sequential", set_sequential }, 609 | { "set_sequential_builder", set_sequential_builder }, 610 | { "set_sequential_builder_no_chk", set_sequential_builder_no_chk }, 611 | { "set_random_access", set_random_access }, 612 | { "set_random_access_no_values", set_random_access_no_values }, 613 | { "set_replacing_sequential", set_replacing_sequential }, 614 | { "set_replacing_random_access", set_replacing_random_access }, 615 | { "forget_sequential", forget_sequential }, 616 | { "forget_random_access", forget_random_access }, 617 | { "forget_random_access_no_values", forget_random_access_no_values }, 618 | { "forget_nonexistent", forget_nonexistent }, 619 | { "count", count }, 620 | { "pop_first", pop_first }, 621 | { "pop_last", pop_last }, 622 | { "member_sequential", member_sequential }, 623 | { "member_random_access", member_random_access }, 624 | { "sum", sum }, 625 | { "sum_partway", sum_partway }, 626 | { NULL, NULL }, 627 | }; 628 | 629 | static void * 630 | memory_cb(void *p, size_t size, void *udata) { 631 | /* Do a word-aligned allocation, and save the size immediately 632 | * before the memory allocated for the caller. */ 633 | uintptr_t *word_aligned = NULL; 634 | (void)udata; 635 | if (p != NULL) { 636 | assert(size == 0); /* no realloc used */ 637 | word_aligned = p; 638 | word_aligned--; 639 | memory_used -= word_aligned[0]; 640 | free(word_aligned); 641 | return NULL; 642 | } else { 643 | memory_used += size; 644 | if (memory_used > memory_hwm) { memory_hwm = memory_used; } 645 | word_aligned = malloc(sizeof(*word_aligned) + size); 646 | if (word_aligned == NULL) { return NULL; } 647 | word_aligned[0] = size; 648 | return &word_aligned[1]; 649 | } 650 | } 651 | 652 | int 653 | main(int argc, char **argv) { 654 | handle_args(argc, argv); 655 | 656 | if (limit_count == 0) { 657 | limits[limit_count] = DEF_LIMIT; 658 | limit_count++; 659 | } 660 | 661 | sa_config.node_size = node_size; 662 | sa_config.seed = rng_seed; 663 | if (track_memory) { sa_config.memory = memory_cb; } 664 | 665 | memcpy(&sa_config_no_values, &sa_config, sizeof(sa_config)); 666 | sa_config_no_values.ignore_values = true; 667 | 668 | if (name != NULL && 0 == strcmp(name, "help")) { 669 | for (struct benchmark *b = &benchmarks[0]; b->name; b++) { 670 | printf(" -- %s\n", b->name); 671 | } 672 | exit(EXIT_SUCCESS); 673 | } 674 | 675 | TIME(pre); 676 | 677 | for (size_t l_i = 0; l_i < limit_count; l_i++) { 678 | for (size_t c_i = 0; c_i < cycles; c_i++) { 679 | for (struct benchmark *b = &benchmarks[0]; b->name; b++) { 680 | memory_used = 0; 681 | memory_hwm = 0; 682 | if (name == NULL || 0 == strcmp(name, b->name)) { 683 | b->fun(limits[l_i]); 684 | } 685 | } 686 | } 687 | } 688 | 689 | TIME(post); 690 | 691 | double usec_total = (double)get_usec_delta(&timer_pre, &timer_post); 692 | printf("----\n%-30s %.3f sec\n", "total", usec_total / usec_per_sec); 693 | return 0; 694 | } 695 | -------------------------------------------------------------------------------- /src/skiparray.c: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2019 Scott Vokes 3 | * 4 | * Permission to use, copy, modify, and/or distribute this software for any 5 | * purpose with or without fee is hereby granted, provided that the above 6 | * copyright notice and this permission notice appear in all copies. 
7 | * 8 | * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 9 | * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 10 | * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 11 | * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 12 | * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 13 | * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 14 | * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 15 | */ 16 | 17 | #include "skiparray_internal.h" 18 | 19 | enum skiparray_new_res 20 | skiparray_new(const struct skiparray_config *config, 21 | struct skiparray **sa) { 22 | if (sa == NULL || config == NULL) { 23 | return SKIPARRAY_NEW_ERROR_NULL; 24 | } 25 | 26 | if (config->node_size == 1 || config->cmp == NULL) { 27 | return SKIPARRAY_NEW_ERROR_CONFIG; 28 | } 29 | 30 | #define DEF(FIELD, DEF) (config->FIELD == 0 ? DEF : config->FIELD) 31 | uint16_t node_size = DEF(node_size, SKIPARRAY_DEF_NODE_SIZE); 32 | uint8_t max_level = DEF(max_level, SKIPARRAY_DEF_MAX_LEVEL); 33 | #undef DEF 34 | #define DEF(FIELD, DEF) (config->FIELD == NULL ? DEF : config->FIELD) 35 | skiparray_memory_fun *mem = DEF(memory, def_memory_fun); 36 | skiparray_level_fun *level = DEF(level, def_level_fun); 37 | #undef DEF 38 | 39 | const size_t alloc_size = sizeof(struct skiparray) + 40 | max_level * sizeof(struct node *); 41 | struct skiparray *res = mem(NULL, alloc_size, config->udata); 42 | if (res == NULL) { return SKIPARRAY_NEW_ERROR_MEMORY; } 43 | memset(res, 0x00, alloc_size); 44 | 45 | uint64_t prng_state = 0; 46 | uint8_t root_level = level(config->seed, &prng_state, config->udata) + 1; 47 | if (root_level >= max_level) { root_level = max_level; } 48 | 49 | struct skiparray fields = { 50 | .node_size = node_size, 51 | .max_level = max_level, 52 | .height = root_level, 53 | .use_values = !config->ignore_values, 54 | .prng_state = prng_state, 55 | .mem = mem, 56 | .cmp = config->cmp, 57 | .free = config->free, 58 | .level = level, 59 | .udata = config->udata, 60 | }; 61 | memcpy(res, &fields, sizeof(fields)); 62 | 63 | struct node *root = node_alloc(root_level, node_size, 64 | mem, config->udata, fields.use_values); 65 | if (root == NULL) { 66 | mem(res, 0, config->udata); 67 | return SKIPARRAY_NEW_ERROR_MEMORY; 68 | } 69 | 70 | for (size_t i = 0; i < root_level; i++) { 71 | res->nodes[i] = root; 72 | LOG(4, "%s: res->nodes[%zu]: %p\n", 73 | __func__, i, (void *)res->nodes[i]); 74 | } 75 | for (size_t i = root_level; i < max_level; i++) { 76 | res->nodes[i] = NULL; 77 | LOG(4, "%s: res->nodes[%zu]: %p\n", 78 | __func__, i, (void *)res->nodes[i]); 79 | } 80 | 81 | *sa = res; 82 | LOG(2, "%s: new SA %p with height %u, max_level %u\n", 83 | __func__, (void *)res, root_level, res->max_level); 84 | return SKIPARRAY_NEW_OK; 85 | } 86 | 87 | void 88 | skiparray_free(struct skiparray *sa) { 89 | assert(sa != NULL); 90 | struct node *n = sa->nodes[0]; 91 | while (n != NULL) { 92 | struct node *next = n->fwd[0]; 93 | if (sa->free != NULL) { 94 | for (size_t i = 0; i < n->count; i++) { 95 | sa->free(n->keys[n->offset + i], 96 | sa->use_values ? 
n->values[n->offset + i] : NULL, sa->udata); 97 | } 98 | } 99 | node_free(sa, n); 100 | n = next; 101 | } 102 | 103 | /* Free any remaining iterators */ 104 | struct skiparray_iter *iter = sa->iter; 105 | while (iter != NULL) { 106 | struct skiparray_iter *next = iter->next; 107 | sa->mem(iter, 0, sa->udata); 108 | iter = next; 109 | } 110 | 111 | sa->mem(sa, 0, sa->udata); 112 | } 113 | 114 | bool 115 | skiparray_get(const struct skiparray *sa, 116 | const void *key, void **value) { 117 | struct skiparray_pair p; 118 | if (skiparray_get_pair(sa, key, &p)) { 119 | if (value != NULL) { *value = p.value; } 120 | return true; 121 | } else { 122 | return false; 123 | } 124 | } 125 | 126 | bool 127 | skiparray_get_pair(const struct skiparray *sa, 128 | const void *key, struct skiparray_pair *pair) { 129 | LOG(2, "%s: key %p\n", __func__, (void *)key); 130 | assert(sa != NULL); 131 | assert(pair != NULL); 132 | 133 | struct search_env env = { 134 | .sa = sa, 135 | .key = key, 136 | }; 137 | enum search_res sres = search(&env); 138 | switch (sres) { 139 | default: 140 | assert(false); 141 | case SEARCH_NOT_FOUND: 142 | return false; 143 | 144 | case SEARCH_FOUND: 145 | { 146 | struct node *n = env.n; 147 | pair->key = n->keys[n->offset + env.index]; 148 | pair->value = sa->use_values ? n->values[n->offset + env.index] : NULL; 149 | return true; 150 | } 151 | } 152 | } 153 | 154 | static bool 155 | has_iterators(const struct skiparray *sa) { 156 | return sa->iter != NULL; 157 | } 158 | 159 | enum skiparray_set_res 160 | skiparray_set(struct skiparray *sa, 161 | void *key, void *value) { 162 | return skiparray_set_with_pair(sa, key, value, true, NULL); 163 | } 164 | 165 | enum skiparray_set_res 166 | skiparray_set_with_pair(struct skiparray *sa, void *key, void *value, 167 | bool replace_key, struct skiparray_pair *previous_binding) { 168 | LOG(2, "%s: key %p => value %p\n", 169 | __func__, (void *)key, (void *)value); 170 | assert(sa); 171 | 172 | if (has_iterators(sa)) { return SKIPARRAY_SET_ERROR_LOCKED; } 173 | 174 | struct search_env env = { 175 | .sa = sa, 176 | .key = key, 177 | }; 178 | 179 | enum search_res sres = search(&env); 180 | 181 | switch (sres) { 182 | case SEARCH_FOUND: 183 | { 184 | struct node *n = env.n; 185 | assert(n); 186 | void **k = &n->keys[n->offset + env.index]; 187 | static void *the_NULL = NULL; /* safe placeholder for *v */ 188 | void **v = sa->use_values 189 | ? &n->values[n->offset + env.index] : &the_NULL; 190 | if (previous_binding != NULL) { 191 | previous_binding->key = *k; 192 | previous_binding->value = *v; 193 | } 194 | if (sa->use_values) { *v = value; } 195 | 196 | if (replace_key) { *k = key; } 197 | 198 | return SKIPARRAY_SET_REPLACED; 199 | } 200 | 201 | case SEARCH_NOT_FOUND: 202 | case SEARCH_EMPTY: 203 | { 204 | struct node *n = env.n; 205 | assert(n); 206 | if (env.n->count == sa->node_size) { 207 | /* split, update node; index in env. 208 | * This is the only code path that changes the overall 209 | * skiplist structure, and can be fairly rare with large nodes. 
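 * Splitting moves part of this node's contents into a freshly
 * allocated neighbor; the code below then splices the new node into
 * the forward pointers of any levels it reaches.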
*/ 210 | struct node *new = NULL; 211 | if (!split_node(sa, n, &new)) { 212 | return SKIPARRAY_SET_ERROR_MEMORY; 213 | } 214 | assert(new->count > 0); 215 | 216 | /* Update back pointer */ 217 | if (n->fwd[0] != NULL) { n->fwd[0]->back = new; } 218 | 219 | if (LOG_LEVEL >= 3) { 220 | for (size_t i = 0; i <= new->height; i++) { 221 | LOG(3, "post-split: sa->nodes[%zu]: %p\n", 222 | i, (void *)sa->nodes[i]); 223 | } 224 | } 225 | 226 | /* If the new node is taller than the node it split from, 227 | * then find the preceding nodes on those levels and update 228 | * their forward pointers. */ 229 | struct node *prev = NULL; 230 | struct node *cur = NULL; 231 | for (size_t level = new->height - 1; level >= n->height; level--) { 232 | LOG(2, "%s: updating forward pointers on level %zu, cur %p\n", 233 | __func__, level, (void *)cur); 234 | if (level > sa->height) { continue; } 235 | if (cur == NULL) { 236 | if (sa->nodes[level] == NULL) { continue; } 237 | cur = sa->nodes[level]; 238 | } 239 | for (;;) { 240 | assert(cur); 241 | assert(cur->count > 0); 242 | const int res = sa->cmp(new->keys[new->offset], 243 | cur->keys[cur->offset + cur->count - 1], sa->udata); 244 | LOG(2, "%s: level %zu, cur %p, cmp %d, prev %p\n", 245 | __func__, level, (void *)cur, res, (void *)prev); 246 | if (res < 0) { /* overshot */ 247 | if (prev == NULL) { 248 | LOG(2, "%s: setting sa->nodes[%zu] to %p\n", 249 | __func__, level, (void *)new); 250 | new->fwd[level] = sa->nodes[level]; 251 | sa->nodes[level] = new; 252 | } 253 | cur = prev; 254 | break; 255 | } else if (res > 0) { 256 | prev = cur; 257 | if (cur->fwd[level] == NULL) { 258 | LOG(2, "%s: setting %p->fwd[%zu] to %p\n", 259 | __func__, (void *)cur, level, (void *)new); 260 | cur->fwd[level] = new; 261 | break; 262 | } else { 263 | LOG(2, "%s: advancing cur from %p to %p\n", 264 | __func__, (void *)cur, (void *)cur->fwd[level]); 265 | cur = cur->fwd[level]; 266 | } 267 | assert(cur); 268 | } else { 269 | assert(false); 270 | } 271 | } 272 | 273 | if (prev != NULL) { 274 | if (prev->fwd[level] != new) { 275 | LOG(2, "%s: setting new->fwd[%zu] to %p\n", 276 | __func__, level, (void *)prev->fwd[level]); 277 | new->fwd[level] = prev->fwd[level]; 278 | } 279 | LOG(2, "%s: setting prev->fwd[%zu] to %p\n", 280 | __func__, level, (void *)new); 281 | prev->fwd[level] = new; 282 | if (new->fwd[level]) { 283 | assert(new->fwd[level]->height > level); 284 | } 285 | } 286 | } 287 | 288 | /* If the new node is taller than the current SA height, 289 | * then increase it and update forward links. */ 290 | while (new->height > sa->height) { 291 | sa->nodes[sa->height] = new; 292 | sa->height++; 293 | } 294 | 295 | /* Update the forward pointers on the node that split. */ 296 | const uint8_t common_height = (n->height < new->height 297 | ? 
n->height : new->height); 298 | for (size_t i = 0; i < common_height; i++) { 299 | new->fwd[i] = n->fwd[i]; 300 | n->fwd[i] = new; 301 | } 302 | 303 | if (env.index > n->count) { /* now inserting on new node */ 304 | LOG(2, "split, was inserting at %" PRIu16 305 | ", now inserting at %" PRIu16 " on new\n", 306 | env.index, env.index - n->count); 307 | env.index -= n->count; 308 | n = new; 309 | } 310 | } 311 | 312 | prepare_node_for_insert(sa, n, env.index); 313 | 314 | assert(n->offset + env.index < sa->node_size); 315 | n->keys[n->offset + env.index] = key; 316 | 317 | if (sa->use_values) { 318 | n->values[n->offset + env.index] = value; 319 | } 320 | 321 | n->count++; 322 | LOG(2, "%s: now node %p has %" PRIu16 " pair(s)\n", 323 | __func__, (void *)n, n->count); 324 | return SKIPARRAY_SET_BOUND; 325 | } 326 | 327 | default: 328 | return SKIPARRAY_SET_ERROR_NULL; 329 | } 330 | } 331 | 332 | enum skiparray_forget_res 333 | skiparray_forget(struct skiparray *sa, const void *key, 334 | struct skiparray_pair *forgotten) { 335 | LOG(2, "%s: key %p\n", 336 | __func__, (void *)key); 337 | 338 | if (has_iterators(sa)) { return SKIPARRAY_FORGET_ERROR_LOCKED; } 339 | 340 | struct search_env env = { 341 | .sa = sa, 342 | .key = key, 343 | }; 344 | enum search_res sres = search(&env); 345 | switch (sres) { 346 | case SEARCH_NOT_FOUND: 347 | case SEARCH_EMPTY: 348 | return SKIPARRAY_FORGET_NOT_FOUND; 349 | 350 | case SEARCH_FOUND: 351 | { 352 | struct node *n = env.n; 353 | assert(n); 354 | 355 | LOG(2, "%s: found in node %p at index %" PRIu16 "\n", 356 | __func__, (void *)n, env.index); 357 | assert(env.index < n->count); 358 | 359 | if (forgotten != NULL) { 360 | forgotten->key = n->keys[n->offset + env.index]; 361 | forgotten->value = sa->use_values 362 | ? n->values[n->offset + env.index] : NULL; 363 | } 364 | 365 | if (LOG_LEVEL >= 4) { 366 | dump_raw_bindings("PRE-FORGET", sa, n); 367 | } 368 | 369 | if (env.index == 0) { /* first */ 370 | n->offset++; 371 | /* Deletion shouldn't gradually shift off the end. 
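             * Removing the first pair only advances the offset, so if the offset has reached the end of the buffer the node must now be empty; recenter it so later inserts have room on either side.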
*/ 372 | if (n->offset == sa->node_size) { 373 | n->offset = sa->node_size/2; 374 | } 375 | n->count--; 376 | } else if (env.index == n->count - 1) { /* last */ 377 | n->count--; 378 | } else { /* from middle */ 379 | const uint16_t to_move = n->count - env.index - 1; 380 | shift_pairs(n, n->offset + env.index, 381 | n->offset + env.index + 1, to_move); 382 | n->count--; 383 | } 384 | 385 | LOG(2, "%s: count after deletion for %p: %" PRIu16 " (offset %" PRIu16 ")\n", 386 | __func__, (void *)n, n->count, n->offset); 387 | 388 | if (LOG_LEVEL >= 4) { 389 | dump_raw_bindings("POST-FORGET", sa, n); 390 | } 391 | 392 | if (n->count < sa->node_size/2) { 393 | /* The node is too empty: either shift over entries 394 | * from the following L0 node (if any), or if it's 395 | * also too empty, merge with it.*/ 396 | shift_or_merge(sa, n); 397 | 398 | if (LOG_LEVEL >= 4) { 399 | dump_raw_bindings("POST-FORGET (post merge)", sa, n); 400 | } 401 | } 402 | 403 | return SKIPARRAY_FORGET_OK; 404 | } 405 | 406 | default: 407 | return SKIPARRAY_FORGET_ERROR_NULL; 408 | } 409 | } 410 | 411 | bool 412 | skiparray_member(const struct skiparray *sa, 413 | const void *key) { 414 | return skiparray_get(sa, key, NULL); 415 | } 416 | 417 | size_t 418 | skiparray_count(const struct skiparray *sa) { 419 | assert(sa != NULL); 420 | 421 | size_t res = 0; 422 | struct node *n = sa->nodes[0]; 423 | while (n != NULL) { 424 | res += n->count; 425 | n = n->fwd[0]; 426 | } 427 | 428 | return res; 429 | } 430 | 431 | enum skiparray_first_res 432 | skiparray_first(const struct skiparray *sa, 433 | void **key, void **value) { 434 | assert(sa != NULL); 435 | 436 | struct node *n = sa->nodes[0]; 437 | if (n->count == 0) { 438 | return SKIPARRAY_FIRST_EMPTY; 439 | } 440 | 441 | uint16_t index = n->offset; 442 | 443 | if (key != NULL) { 444 | *key = n->keys[index]; 445 | } 446 | 447 | if (value != NULL && sa->use_values) { 448 | *value = n->values[index]; 449 | } 450 | 451 | return SKIPARRAY_FIRST_OK; 452 | } 453 | 454 | static struct node * 455 | last_node(const struct skiparray *sa) { 456 | assert(sa->height > 0); 457 | int level = sa->height - 1; 458 | struct node *n = sa->nodes[level]; 459 | for (;;) { 460 | struct node *next = n->fwd[level]; 461 | if (next != NULL) { 462 | n = next; 463 | } else { 464 | if (level == 0) { 465 | return n; 466 | } else { 467 | level--; 468 | } 469 | } 470 | } 471 | } 472 | 473 | enum skiparray_last_res 474 | skiparray_last(const struct skiparray *sa, 475 | void **key, void **value) { 476 | assert(sa != NULL); 477 | 478 | struct node *n = last_node(sa); 479 | 480 | if (n->count == 0) { 481 | assert(n == sa->nodes[0]); 482 | return SKIPARRAY_LAST_EMPTY; 483 | } 484 | 485 | uint16_t index = n->offset + n->count - 1; 486 | 487 | if (key != NULL) { 488 | *key = n->keys[index]; 489 | } 490 | 491 | if (value != NULL && sa->use_values) { 492 | *value = n->values[index]; 493 | } 494 | 495 | return SKIPARRAY_LAST_OK; 496 | } 497 | 498 | enum skiparray_pop_res 499 | skiparray_pop_first(struct skiparray *sa, 500 | void **key, void **value) { 501 | /* if first node is only half full and not last, 502 | * then steal from and/or combine with the next node */ 503 | assert(sa != NULL); 504 | 505 | struct node *head = sa->nodes[0]; 506 | LOG(2, "%s: head %p, count %" PRIu16"\n", 507 | __func__, (void *)head, head->count); 508 | 509 | if (has_iterators(sa)) { return SKIPARRAY_POP_ERROR_LOCKED; } 510 | 511 | if (head->count == 0) { 512 | assert(head->fwd[0] == NULL); 513 | return SKIPARRAY_POP_EMPTY; 514 | } 515 | 
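    /* Pop the lowest-keyed pair, then keep the head node at least half
     * full by pulling pairs from (or merging with) its successor.
     *
     * Illustrative use of this entry point (an editor's sketch, not part
     * of the original source) -- draining a populated skiparray `sa` in
     * ascending key order, where consume() stands in for a hypothetical
     * caller-side function:
     *
     *     void *k = NULL, *v = NULL;
     *     while (skiparray_pop_first(sa, &k, &v) == SKIPARRAY_POP_OK) {
     *         consume(k, v);
     *     }
     */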
516 | if (key != NULL) { *key = head->keys[head->offset]; } 517 | if (value != NULL && sa->use_values) { 518 | *value = head->values[head->offset]; 519 | } 520 | head->offset++; 521 | if (head->offset == sa->node_size) { 522 | head->offset = sa->node_size/2; 523 | } 524 | head->count--; 525 | 526 | /* If the head node is less than half full (and not the only node), 527 | * either take some pairs from the next node or merge with it. */ 528 | struct node *next = head->fwd[0]; 529 | const uint16_t required = sa->node_size/2; 530 | if (head->count < sa->node_size/2 && next != NULL) { 531 | if (head->count + next->count <= sa->node_size) { 532 | LOG(2, "%s: combining head with next (%p), which has %" PRIu16 " pairs\n", 533 | __func__, (void *)next, next->count); 534 | const uint16_t to_move = next->count; 535 | if (head->offset > 0) { 536 | /* move to front, to make room */ 537 | shift_pairs(head, 0, head->offset, head->count); 538 | head->offset = 0; 539 | } 540 | move_pairs(head, next, head->count, next->offset, to_move); 541 | head->count += to_move; 542 | 543 | for (size_t i = 0; i < next->height; i++) { 544 | if (i < head->height) { 545 | LOG(2, "%s: head->fwd[%zu] = next->fwd[%zu] = %p\n", 546 | __func__, i, i, (void *)next->fwd[i]); 547 | head->fwd[i] = next->fwd[i]; 548 | } else { 549 | LOG(2, "%s: sa->nodes[%zu] = next->fwd[%zu] = %p\n", 550 | __func__, i, i, (void *)next->fwd[i]); 551 | assert(sa->nodes[i] == next); 552 | sa->nodes[i] = next->fwd[i]; 553 | } 554 | } 555 | 556 | if (next->fwd[0] != NULL) { 557 | next->fwd[0]->back = head; 558 | } 559 | 560 | LOG(2, "%s: freeing next node %p\n", __func__, (void *)next); 561 | node_free(sa, next); 562 | 563 | /* handle decrease in height */ 564 | while (sa->height > 1 && sa->nodes[sa->height - 1] == NULL) { sa->height--; } 565 | } else { 566 | const uint16_t to_move = next->count - required; 567 | LOG(2, "%s: moving %" PRIu16 " pairs from next (%p) to head\n", 568 | __func__, to_move, (void *)next); 569 | if (head->offset > 0) { 570 | /* move to front, to make room */ 571 | shift_pairs(head, 0, head->offset, head->count); 572 | head->offset = 0; 573 | } 574 | move_pairs(head, next, head->count, next->offset, to_move); 575 | next->count -= to_move; 576 | next->offset += to_move; 577 | head->count += to_move; 578 | } 579 | } 580 | 581 | return SKIPARRAY_POP_OK; 582 | } 583 | 584 | enum skiparray_pop_res 585 | skiparray_pop_last(struct skiparray *sa, 586 | void **key, void **value) { 587 | assert(sa != NULL); 588 | /* same as skiparray_last, but delete last node if empty */ 589 | struct node *head = sa->nodes[0]; 590 | LOG(2, "%s: head %p, count %" PRIu16"\n", 591 | __func__, (void *)head, head->count); 592 | 593 | if (has_iterators(sa)) { return SKIPARRAY_POP_ERROR_LOCKED; } 594 | 595 | if (head->count == 0) { 596 | assert(head->fwd[0] == NULL); 597 | return SKIPARRAY_POP_EMPTY; 598 | } 599 | 600 | int8_t level = sa->height - 1; 601 | struct node *cur = sa->nodes[level]; 602 | assert(cur); 603 | while (level >= 0) { 604 | if (cur->fwd[level] == NULL) { 605 | /* If it's the very last node, break */ 606 | if (cur->fwd[0] == NULL) { break; } 607 | level--; 608 | } else { 609 | cur = cur->fwd[level]; 610 | } 611 | } 612 | struct node *last = cur; 613 | assert(last); 614 | assert(last->fwd[0] == NULL); 615 | assert(last->count > 0); 616 | LOG(2, "%s: last node is %p, with %" PRIu16 " pair(s)\n", 617 | __func__, (void *)last, last->count); 618 | 619 | if (key != NULL) { *key = last->keys[last->offset + last->count - 1]; } 620 | if (value != NULL 
&& sa->use_values) { 621 | *value = last->values[last->offset + last->count - 1]; 622 | } 623 | last->count--; 624 | 625 | if (last->count == 0) { 626 | if (last == sa->nodes[0]) { 627 | LOG(2, "%s: retaining empty first/last node\n", __func__); 628 | } else { 629 | unlink_node(sa, last); 630 | } 631 | } 632 | 633 | return SKIPARRAY_POP_OK; 634 | } 635 | 636 | enum skiparray_iter_new_res 637 | skiparray_iter_new(struct skiparray *sa, 638 | struct skiparray_iter **res) { 639 | assert(sa != NULL); 640 | assert(res != NULL); 641 | 642 | if (sa->nodes[0]->fwd[0] == NULL && sa->nodes[0]->count == 0) { 643 | return SKIPARRAY_ITER_NEW_EMPTY; 644 | } 645 | 646 | struct skiparray_iter *si = sa->mem(NULL, 647 | sizeof(*si), sa->udata); 648 | if (si == NULL) { 649 | return SKIPARRAY_ITER_NEW_ERROR_MEMORY; 650 | } 651 | 652 | if (sa->iter != NULL) { 653 | sa->iter->prev = si; 654 | } 655 | 656 | *si = (struct skiparray_iter) { 657 | .sa = sa, 658 | .prev = NULL, 659 | .next = sa->iter, 660 | .n = sa->nodes[0], 661 | .index = 0, 662 | }; 663 | sa->iter = si; 664 | *res = si; 665 | return SKIPARRAY_ITER_NEW_OK; 666 | } 667 | 668 | void 669 | skiparray_iter_free(struct skiparray_iter *iter) { 670 | if (iter == NULL) { return; } 671 | 672 | struct skiparray *sa = iter->sa; 673 | assert(sa != NULL); 674 | 675 | LOG(4, "%s: freeing %p; iter->prev %p, sa->iter %p\n", 676 | __func__, (void *)iter, (void *)iter->prev, (void *)sa->iter); 677 | 678 | if (iter->prev == NULL) { 679 | assert(sa->iter == iter); 680 | sa->iter = iter->next; 681 | if (iter->next != NULL) { 682 | iter->next->prev = NULL; 683 | } 684 | } else { /* unlink */ 685 | iter->prev->next = iter->next; 686 | if (iter->next != NULL) { 687 | iter->next->prev = iter->prev; 688 | } 689 | } 690 | 691 | sa->mem(iter, 0, sa->udata); 692 | } 693 | 694 | void 695 | skiparray_iter_seek_endpoint(struct skiparray_iter *iter, 696 | enum skiparray_iter_seek_endpoint end) { 697 | assert(iter != NULL); 698 | switch (end) { 699 | case SKIPARRAY_ITER_SEEK_FIRST: 700 | iter->n = iter->sa->nodes[0]; 701 | iter->index = 0; 702 | break; 703 | case SKIPARRAY_ITER_SEEK_LAST: 704 | iter->n = last_node(iter->sa); 705 | iter->index = iter->n->count - 1; 706 | break; 707 | 708 | default: 709 | assert(false); 710 | } 711 | } 712 | 713 | enum skiparray_iter_seek_res 714 | skiparray_iter_seek(struct skiparray_iter *iter, 715 | const void *key) { 716 | assert(iter != NULL); 717 | 718 | struct search_env env = { 719 | .sa = iter->sa, 720 | .key = key, 721 | }; 722 | enum search_res sres = search(&env); 723 | assert(env.n != NULL); 724 | 725 | LOG(3, "%s: sres %d, got node %p, index %u\n", 726 | __func__, sres, (void *)env.n, env.index); 727 | 728 | switch (sres) { 729 | case SEARCH_FOUND: 730 | iter->n = env.n; 731 | iter->index = env.index; 732 | return SKIPARRAY_ITER_SEEK_FOUND; 733 | 734 | default: 735 | case SEARCH_EMPTY: 736 | assert(false); 737 | 738 | case SEARCH_NOT_FOUND: 739 | break; /* continue below */ 740 | } 741 | 742 | if (env.index == 0 && env.n->back == NULL) { 743 | return SKIPARRAY_ITER_SEEK_ERROR_BEFORE_FIRST; 744 | } 745 | 746 | if (env.index == env.n->count) { 747 | env.n = env.n->fwd[0]; 748 | if (env.n == NULL) { return SKIPARRAY_ITER_SEEK_ERROR_AFTER_LAST; } 749 | env.index = 0; 750 | } 751 | 752 | iter->n = env.n; 753 | iter->index = env.index; 754 | 755 | return SKIPARRAY_ITER_SEEK_NOT_FOUND; 756 | } 757 | 758 | enum skiparray_iter_step_res 759 | skiparray_iter_next(struct skiparray_iter *iter) { 760 | assert(iter != NULL); 761 | 762 | iter->index++; 
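    /* The cursor advances within the current node; once it passes the last
     * pair it moves to the next level-0 node, or reports the end.
     *
     * Illustrative iteration loop (an editor's sketch, not part of the
     * original source; use_pair() stands in for a hypothetical caller-side
     * function). Note that skiparray_set/forget/pop return their
     * *_ERROR_LOCKED codes while any iterator exists:
     *
     *     struct skiparray_iter *it = NULL;
     *     if (skiparray_iter_new(sa, &it) == SKIPARRAY_ITER_NEW_OK) {
     *         do {
     *             void *k = NULL, *v = NULL;
     *             skiparray_iter_get(it, &k, &v);
     *             use_pair(k, v);
     *         } while (skiparray_iter_next(it) == SKIPARRAY_ITER_STEP_OK);
     *         skiparray_iter_free(it);
     *     }
     */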
763 | LOG(4, "%s: index %"PRIu16", count %"PRIu16"\n", 764 | __func__, iter->index, iter->n->count); 765 | 766 | if (iter->index == iter->n->count) { 767 | if (iter->n->fwd[0] == NULL) { 768 | return SKIPARRAY_ITER_STEP_END; 769 | } else { 770 | iter->n = iter->n->fwd[0]; 771 | iter->index = 0; 772 | } 773 | } 774 | return SKIPARRAY_ITER_STEP_OK; 775 | } 776 | 777 | enum skiparray_iter_step_res 778 | skiparray_iter_prev(struct skiparray_iter *iter) { 779 | assert(iter != NULL); 780 | 781 | LOG(4, "%s: index %"PRIu16", count %"PRIu16"\n", 782 | __func__, iter->index, iter->n->count); 783 | 784 | if (iter->index == 0) { 785 | if (iter->n->back == NULL) { 786 | return SKIPARRAY_ITER_STEP_END; 787 | } else { 788 | iter->n = iter->n->back; 789 | iter->index = iter->n->count - 1; 790 | } 791 | } else { 792 | iter->index--; 793 | } 794 | return SKIPARRAY_ITER_STEP_OK; 795 | } 796 | 797 | void 798 | skiparray_iter_get(struct skiparray_iter *iter, 799 | void **key, void **value) { 800 | assert(iter != NULL); 801 | 802 | LOG(2, "%s: index %u, node %p, count %u\n", 803 | __func__, iter->index, (void *)iter->n, iter->n->count); 804 | 805 | assert(iter->index < iter->n->count); 806 | uint16_t n = iter->n->offset + iter->index; 807 | if (key != NULL) { 808 | *key = iter->n->keys[n]; 809 | } 810 | 811 | if (value != NULL && iter->sa->use_values) { 812 | *value = iter->n->values[n]; 813 | } 814 | } 815 | 816 | enum skiparray_builder_new_res 817 | skiparray_builder_new(const struct skiparray_config *cfg, 818 | bool skip_ascending_key_check, struct skiparray_builder **builder) { 819 | if (builder == NULL) { return SKIPARRAY_BUILDER_NEW_ERROR_MISUSE; } 820 | 821 | struct skiparray *sa = NULL; 822 | enum skiparray_new_res nres = skiparray_new(cfg, &sa); 823 | switch (nres) { 824 | default: 825 | assert(false); 826 | case SKIPARRAY_NEW_ERROR_NULL: 827 | case SKIPARRAY_NEW_ERROR_CONFIG: 828 | return SKIPARRAY_BUILDER_NEW_ERROR_MISUSE; 829 | case SKIPARRAY_NEW_ERROR_MEMORY: 830 | return SKIPARRAY_BUILDER_NEW_ERROR_MEMORY; 831 | case SKIPARRAY_NEW_OK: 832 | break; /* continue below */ 833 | } 834 | 835 | struct skiparray_builder *b = NULL; 836 | const size_t alloc_size = sizeof(*b) 837 | + sa->max_level * sizeof(b->trail[0]); 838 | b = sa->mem(NULL, alloc_size, sa->udata); 839 | if (b == NULL) { 840 | skiparray_free(sa); 841 | return SKIPARRAY_BUILDER_NEW_ERROR_MEMORY; 842 | } 843 | memset(b, 0x00, alloc_size); 844 | 845 | b->sa = sa; 846 | b->last = sa->nodes[0]; 847 | b->last->offset = 0; 848 | 849 | LOG(3, "%s: initializing builder with n %p (height %u)\n", 850 | __func__, (void *)b->last, b->last->height); 851 | 852 | for (size_t i = 0; i < b->last->height; i++) { 853 | b->trail[i] = b->last; 854 | LOG(3, " -- b->trail[%zu] <- %p\n", i, (void *)b->last); 855 | assert(b->trail[i] == sa->nodes[i]); 856 | } 857 | 858 | b->check_ascending = !skip_ascending_key_check; 859 | b->has_prev_key = false; 860 | *builder = b; 861 | return SKIPARRAY_BUILDER_NEW_OK; 862 | } 863 | 864 | void 865 | skiparray_builder_free(struct skiparray_builder *b) { 866 | if (b == NULL) { return; } 867 | assert(b->sa != NULL); 868 | struct skiparray *sa = b->sa; 869 | b->sa->mem(b, 0, sa->udata); 870 | skiparray_free(sa); 871 | } 872 | 873 | enum skiparray_builder_append_res 874 | skiparray_builder_append(struct skiparray_builder *b, 875 | void *key, void *value) { 876 | assert(b != NULL); 877 | assert(b->sa != NULL); 878 | assert(b->last != NULL); 879 | 880 | struct skiparray *sa = b->sa; 881 | struct node *last = b->last; 882 | 
assert(last->offset == 0); 883 | 884 | /* reject key if <= previous; must be ascending */ 885 | if (b->has_prev_key) { 886 | if (sa->cmp(key, b->prev_key, sa->udata) <= 0) { 887 | return SKIPARRAY_BUILDER_APPEND_ERROR_MISUSE; 888 | } 889 | } 890 | 891 | LOG(3, "%s: last is %p (%u height, %u count), sa height is %u\n", 892 | __func__, (void *)last, last->height, last->count, sa->height); 893 | 894 | /* If the current last node is full, then allocate a new last node 895 | * and connect back and forward pointers according to the trail. */ 896 | if (last->count == sa->node_size) { 897 | uint8_t level = sa->level(sa->prng_state, 898 | &sa->prng_state, sa->udata) + 1; 899 | if (level >= sa->max_level) { level = sa->max_level - 1; } 900 | 901 | struct node *new = node_alloc(level + 1, sa->node_size, 902 | sa->mem, sa->udata, sa->use_values); 903 | if (new == NULL) { 904 | return SKIPARRAY_BUILDER_APPEND_ERROR_MEMORY; 905 | } 906 | LOG(3, " -- new %p, height %u\n", (void *)new, new->height); 907 | 908 | for (size_t i = 0; i < last->height; i++) { 909 | if (i >= new->height) { break; } 910 | LOG(3, " -- last->fwd[%zu] -> %p\n", i, (void *)new); 911 | last->fwd[i] = new; 912 | } 913 | for (size_t i = 0; i < new->height; i++) { 914 | if (b->trail[i] == NULL) { 915 | LOG(3, " -- b->trail[%zu] -> %p\n", i, (void *)new); 916 | b->trail[i] = new; 917 | } else { 918 | LOG(3, " -- b->trail(%p)->fwd[%zu] -> %p\n", 919 | (void *)b->trail[i], i, (void *)new); 920 | assert(b->trail[i]->height > i); 921 | b->trail[i]->fwd[i] = new; 922 | LOG(3, " -- b->trail[%zu] -> %p\n", i, (void *)new); 923 | b->trail[i] = new; 924 | } 925 | } 926 | 927 | /* If the new node is taller than the current SA height, 928 | * then increase it and update forward links. */ 929 | while (new->height > sa->height) { 930 | sa->nodes[sa->height] = new; 931 | sa->height++; 932 | } 933 | new->back = last; 934 | assert(last->fwd[0] == new); 935 | new->offset = 0; 936 | last = new; 937 | b->last = last; 938 | } 939 | 940 | last->keys[last->count] = key; 941 | if (last->values != NULL) { last->values[last->count] = value; } 942 | last->count++; 943 | 944 | if (b->check_ascending) { 945 | b->has_prev_key = true; 946 | b->prev_key = key; 947 | } 948 | return SKIPARRAY_BUILDER_APPEND_OK; 949 | } 950 | 951 | void 952 | skiparray_builder_finish(struct skiparray_builder **b, 953 | struct skiparray **sa) { 954 | assert(b != NULL); 955 | assert(sa != NULL); 956 | 957 | struct skiparray_builder *builder = *b; 958 | *b = NULL; 959 | assert(builder != NULL); 960 | 961 | *sa = builder->sa; 962 | (*sa)->mem(builder, 0, (*sa)->udata); 963 | } 964 | 965 | static struct node * 966 | node_alloc(uint8_t height, uint16_t node_size, 967 | skiparray_memory_fun *mem, void *udata, bool use_values) { 968 | LOG(2, "%s: height %u, size %" PRIu16 "\n", __func__, height, node_size); 969 | assert(height >= 1); 970 | assert(node_size >= 2); 971 | 972 | struct node *res = NULL; 973 | void **keys = NULL; 974 | void **values = NULL; 975 | 976 | const size_t alloc_size = sizeof(struct node) + 977 | height * sizeof(struct node *); 978 | res = mem(NULL, alloc_size, udata); 979 | if (res == NULL) { goto cleanup; } 980 | memset(res, 0x00, alloc_size); 981 | 982 | keys = mem(NULL, node_size * sizeof(keys[0]), udata); 983 | if (keys == NULL) { goto cleanup; } 984 | memset(keys, 0x00, node_size * sizeof(keys[0])); 985 | 986 | if (use_values) { 987 | values = mem(NULL, node_size * sizeof(values[0]), udata); 988 | if (values == NULL) { goto cleanup; } 989 | memset(values, 0x00, 
node_size * sizeof(values[0])); 990 | } 991 | 992 | struct node fields = { 993 | .height = height, 994 | .offset = node_size / 2, 995 | .count = 0, 996 | .keys = keys, 997 | .values = values, 998 | }; 999 | memcpy(res, &fields, sizeof(fields)); 1000 | for (uint8_t i = 0; i < height; i++) { 1001 | res->fwd[i] = NULL; 1002 | } 1003 | return res; 1004 | 1005 | cleanup: 1006 | if (res != NULL) { mem(res, 0, udata); } 1007 | if (keys != NULL) { mem(keys, 0, udata); } 1008 | if (values != NULL) { mem(values, 0, udata); } 1009 | return NULL; 1010 | } 1011 | 1012 | static void 1013 | node_free(const struct skiparray *sa, struct node *n) { 1014 | if (n == NULL) { return; } 1015 | sa->mem(n->keys, 0, sa->udata); 1016 | if (n->values != NULL) { sa->mem(n->values, 0, sa->udata); } 1017 | sa->mem(n, 0, sa->udata); 1018 | } 1019 | 1020 | /* Search for the index <= KEY within KEYS[KEY_COUNT] (according to CMP), 1021 | * and write it in *INDEX. Return whether an exact match was found. */ 1022 | bool 1023 | skiparray_bsearch(const void *key, const void * const *keys, 1024 | size_t key_count, skiparray_cmp_fun *cmp, void *udata, 1025 | uint16_t *index) { 1026 | 1027 | #if LOG_LEVEL >= 4 1028 | LOG(4, "====== %s\n", __func__); 1029 | for (size_t i = 0; i < key_count; i++) { 1030 | LOG(4, "%zu: %p\n", i, (void *)keys[i]); 1031 | } 1032 | #endif 1033 | 1034 | assert(key_count > 0); 1035 | int low = 0; 1036 | int high = key_count; 1037 | 1038 | while (low < high) { 1039 | int cur = (low + high)/2; 1040 | 1041 | int res = cmp(key, keys[cur], udata); 1042 | LOG(3, "%s: low %d, high %d, cur %d: res %d\n", 1043 | __func__, low, high, cur, res); 1044 | if (res < 0) { 1045 | high = cur; 1046 | continue; 1047 | } else if (res > 0) { 1048 | low = cur + 1; 1049 | continue; 1050 | } else { 1051 | *index = cur; 1052 | return true; 1053 | } 1054 | } 1055 | 1056 | *index = low; 1057 | return false; 1058 | } 1059 | 1060 | static bool 1061 | search_within_node(const struct skiparray *sa, 1062 | const void *key, const struct node *n, uint16_t *index) { 1063 | return skiparray_bsearch(key, (const void * const *)&n->keys[n->offset], 1064 | n->count, sa->cmp, sa->udata, index); 1065 | } 1066 | 1067 | /* Search the chains of nodes, starting at the highest level, and 1068 | * find the node and position in which the key would fit. */ 1069 | static enum search_res 1070 | search(struct search_env *env) { 1071 | bool found = false; 1072 | const struct skiparray *sa = env->sa; 1073 | assert(sa->height >= 1); 1074 | int level = sa->height - 1; 1075 | struct node *prev = NULL; 1076 | 1077 | skiparray_cmp_fun *cmp = sa->cmp; 1078 | void *udata = sa->udata; 1079 | assert(cmp != NULL); 1080 | 1081 | struct node *cur = sa->nodes[level]; 1082 | LOG(2, "%s: level %d: cur %p\n", __func__, level, (void *)cur); 1083 | assert(cur != NULL); 1084 | if (cur->count == 0) { 1085 | LOG(2, "%s: empty head => NOT_FOUND\n", __func__); 1086 | env->n = cur; 1087 | return SEARCH_NOT_FOUND; 1088 | } 1089 | 1090 | if (LOG_LEVEL >= 3) { 1091 | assert(level >= 0); 1092 | for (size_t i = 0; i <= (size_t)level; i++) { 1093 | LOG(3, "%s: sa->nodes[%zu]: %p\n", 1094 | __func__, i, (void *)sa->nodes[i]); 1095 | } 1096 | } 1097 | 1098 | for (;;) { 1099 | assert(cur != NULL); 1100 | assert(cur->count > 0); 1101 | 1102 | /* Eliminating redundant comparisons after dropping a level 1103 | * doesn't appear to make a significant difference time-wise. 
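         * Each pass compares the key against the current node's last key: less-than means the key belongs in this node (or not at all), so descend, or binary-search within the node once on level 0; greater-than means advance along this level when possible, otherwise descend.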
*/ 1104 | const int cmp_res = cmp(env->key, 1105 | cur->keys[cur->offset + cur->count - 1], udata); 1106 | 1107 | LOG(2, "%s: level %d, cur %p, cmp_res %d\n", 1108 | __func__, level, (void *)cur, cmp_res); 1109 | 1110 | if (cmp_res < 0) { /* key < this node's last key */ 1111 | /* either in this node or not at all */ 1112 | if (level == 0) { /* find exact pos and return */ 1113 | found = search_within_node(sa, env->key, cur, &env->index); 1114 | LOG(2, "%s: < -- on level 0, found? %d\n", __func__, found); 1115 | /* If adding a binding to the beginning, put it in the end 1116 | * of the previous one if it's less full. */ 1117 | if (!found && env->index == 0) { 1118 | struct node *back = cur->back; 1119 | if (back != NULL && back->count < cur->count) { 1120 | LOG(2, "%s: choosing end of previous node %p rather than start of %p\n", 1121 | __func__, (void *)back, (void *)cur); 1122 | env->index = back->count; 1123 | cur = back; 1124 | } 1125 | } 1126 | break; 1127 | } else { /* descend */ 1128 | LOG(2, "%s: < -- descending\n", __func__); 1129 | level--; 1130 | cur = (prev ? prev->fwd[level] : sa->nodes[level]); 1131 | assert(cur != NULL); 1132 | } 1133 | } else if (cmp_res > 0) { /* key > this node's last key */ 1134 | struct node *next = cur->fwd[level]; 1135 | if (next != NULL) { /* advance */ 1136 | LOG(2, "%s: > advancing to %p\n", __func__, (void *)next); 1137 | prev = cur; 1138 | cur = next; 1139 | assert(cur != NULL); 1140 | } else { /* descend */ 1141 | LOG(2, "%s: > descending\n", __func__); 1142 | if (level == 0) { 1143 | LOG(2, "%s: > setting index: %" PRIu16 "\n", 1144 | __func__, cur->count); 1145 | env->index = cur->count; 1146 | break; 1147 | } 1148 | 1149 | if (prev == NULL) { 1150 | level--; 1151 | cur = sa->nodes[level]; 1152 | } else { 1153 | /* keep descending and looking for a forward pointer */ 1154 | struct node *ncur = NULL; 1155 | do { 1156 | level--; 1157 | ncur = prev->fwd[level]; 1158 | } while ((ncur == NULL || ncur == cur) && level > 0); 1159 | cur = ncur; 1160 | } 1161 | 1162 | assert(cur != NULL); 1163 | } 1164 | } else { /* exact match: last node key */ 1165 | found = true; 1166 | env->index = cur->count - 1; 1167 | LOG(2, "%s: == index = %" PRIu16 "\n", 1168 | __func__, env->index); 1169 | if (level == 0) { 1170 | break; 1171 | } else { 1172 | level--; 1173 | cur = (prev ? prev->fwd[level] : sa->nodes[level]); 1174 | assert(cur != NULL); 1175 | } 1176 | } 1177 | } 1178 | 1179 | if (found) { assert(cur != NULL); } 1180 | env->n = cur; 1181 | 1182 | LOG(2, "%s: exiting with found %d, env->n %p, env->index %" PRIu16 "\n", 1183 | __func__, found, (void *)env->n, env->index); 1184 | return (found ? 
SEARCH_FOUND : SEARCH_NOT_FOUND); 1185 | } 1186 | 1187 | static void 1188 | prepare_node_for_insert(struct skiparray *sa, 1189 | struct node *n, uint16_t index) { 1190 | assert(n->count < sa->node_size); /* must fit */ 1191 | 1192 | LOG(2, "%s: inserting @ %" PRIu16 " on %p, node offset %" PRIu16 1193 | ", count %" PRIu16 "\n", 1194 | __func__, index, (void *)n, n->offset, n->count); 1195 | 1196 | dump_raw_bindings("BEFORE insert", sa, n); 1197 | 1198 | if (index == 0) { /* shift forward or reduce offset */ 1199 | if (n->count > 0 && n->offset > 0) { 1200 | LOG(2, "%s: reducing offset by 1\n", __func__); 1201 | n->offset--; 1202 | } else { /* shift all forward */ 1203 | LOG(2, "%s: shifting all forward by 1\n", __func__); 1204 | shift_pairs(n, n->offset + 1, n->offset, n->count); 1205 | } 1206 | } else if (index < n->count) { /* shift middle */ 1207 | if (n->offset > 0) { /* prefer shifting backward */ 1208 | LOG(2, "%s: shifting pairs up to position back 1\n", __func__); 1209 | const uint16_t to_move = index + 1; 1210 | shift_pairs(n, n->offset - 1, n->offset, to_move); 1211 | n->offset--; 1212 | } else { /* shift forward */ 1213 | LOG(2, "%s: shifting pairs after position forward 1\n", __func__); 1214 | assert(n->offset == 0); 1215 | const uint16_t to_move = n->count - index;; 1216 | shift_pairs(n, index + 1, index, to_move); 1217 | } 1218 | } else { /* inserting at end */ 1219 | assert(index == n->count); 1220 | assert(n->offset + index <= sa->node_size); 1221 | if (n->offset + index == sa->node_size) { /* shift all back */ 1222 | LOG(2, "%s: shifting to front, changing offset to 0\n", __func__); 1223 | assert(n->offset > 0); 1224 | shift_pairs(n, 0, n->offset, n->count); 1225 | n->offset = 0; 1226 | } else { 1227 | LOG(2, "%s: no-op \n", __func__); 1228 | } 1229 | } 1230 | 1231 | LOG(2, "%s: adjusted node offset %" PRIu16 "\n", __func__, n->offset); 1232 | if (LOG_LEVEL >= 4) { 1233 | dump_raw_bindings("AFTER insert", sa, n); 1234 | } 1235 | } 1236 | 1237 | static bool 1238 | split_node(struct skiparray *sa, 1239 | struct node *n, struct node **res) { 1240 | uint8_t level = sa->level(sa->prng_state, 1241 | &sa->prng_state, sa->udata) + 1; 1242 | if (level >= sa->max_level) { level = sa->max_level - 1; } 1243 | 1244 | struct node *new = node_alloc(level + 1, sa->node_size, 1245 | sa->mem, sa->udata, sa->use_values); 1246 | if (new == NULL) { 1247 | return false; 1248 | } 1249 | 1250 | if (LOG_LEVEL >= 4) { 1251 | dump_raw_bindings("BEFORE split n", sa, n); 1252 | } 1253 | 1254 | /* Half the keys and values get moved to the new node. Round down 1255 | * and insert at the beginning, in case of sequential insertion. 
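     * For example, splitting a node holding 9 pairs moves the upper 4 into the new node (starting at offset 0) and keeps 5 in place, so purely ascending inserts keep landing in the new, emptier right-hand node.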
*/ 1256 | const uint16_t to_move = n->count / 2; 1257 | assert(to_move > 0); 1258 | new->offset = 0; 1259 | 1260 | memcpy(&new->keys[new->offset], 1261 | &n->keys[n->offset + n->count - to_move], 1262 | to_move * sizeof(n->keys[0])); 1263 | if (sa->use_values) { 1264 | memcpy(&new->values[new->offset], 1265 | &n->values[n->offset + n->count - to_move], 1266 | to_move * sizeof(n->values[0])); 1267 | } 1268 | n->count -= to_move; 1269 | new->count += to_move; 1270 | new->back = n; 1271 | 1272 | if (LOG_LEVEL >= 4) { 1273 | dump_raw_bindings("AFTER split n", sa, n); 1274 | dump_raw_bindings("AFTER split new", sa, new); 1275 | } 1276 | 1277 | *res = new; 1278 | LOG(2, "%s: split node %p (height %u) to %p (height %u), with %u pairs\n", 1279 | __func__, (void *)n, n->height, (void *)new, new->height, new->count); 1280 | return true; 1281 | } 1282 | 1283 | static void 1284 | shift_or_merge(struct skiparray *sa, struct node *n) { 1285 | LOG(2, "%s: checking %p (prev %p, next %p)\n", 1286 | __func__, (void *)n, (void *)n->back, (void *)n->fwd[0]); 1287 | 1288 | /* Special case: If this is the only node, do nothing -- the root 1289 | * node is allowed to be empty. */ 1290 | if (n == sa->nodes[0] && n->fwd[0] == NULL) { 1291 | LOG(2, "%s: special case, allowing head to be empty\n", __func__); 1292 | return; 1293 | } 1294 | const uint16_t required = sa->node_size/2; 1295 | assert(n->count < required); /* node too empty */ 1296 | 1297 | struct node *next = n->fwd[0]; 1298 | if (next == NULL) { 1299 | assert(n->back != NULL); 1300 | struct node *prev = n->back; 1301 | 1302 | /* under-filled last node: possibly combine with previous */ 1303 | if (prev->count + n->count <= sa->node_size) { /* contents will fit */ 1304 | LOG(2, "%s: contents will fit in prev, moving and deleting\n", 1305 | __func__); 1306 | /* move to front, to make room */ 1307 | shift_pairs(prev, 0, prev->offset, prev->count); 1308 | prev->offset = 0; 1309 | 1310 | /* move all pairs */ 1311 | move_pairs(prev, n, prev->count, n->offset, n->count); 1312 | prev->count += n->count; 1313 | 1314 | if (n->fwd[0] != NULL) { n->fwd[0]->back = prev; } 1315 | unlink_node(sa, n); 1316 | } else { 1317 | /* leave alone this time */ 1318 | LOG(2, "%s: contents (%" PRIu16 ") won't fit in prev (%" 1319 | PRIu16 "), leaving alone\n", __func__, n->count, prev->count); 1320 | } 1321 | } else if (next->count + n->count <= sa->node_size) { /* merge */ 1322 | LOG(2, "%s: merging %p with next node %p (%" PRIu16 " + %" PRIu16 ")\n", 1323 | __func__, (void *)n, (void *)next, n->count, next->count); 1324 | 1325 | if (LOG_LEVEL >= 4) { 1326 | dump_raw_bindings("PRE_MERGE n", sa, n); 1327 | dump_raw_bindings("PRE_MERGE next", sa, next); 1328 | } 1329 | 1330 | if (n->offset > 0) { 1331 | /* move to front, to make room */ 1332 | shift_pairs(n, 0, n->offset, n->count); 1333 | n->offset = 0; 1334 | } 1335 | 1336 | move_pairs(n, next, n->count, next->offset, next->count); 1337 | n->count += next->count; 1338 | 1339 | unlink_node(sa, next); 1340 | 1341 | dump_raw_bindings("MERGED", sa, n); 1342 | } else { /* shift pairs over */ 1343 | const uint16_t to_move = next->count - required; 1344 | LOG(2, "%s: moving %" PRIu16 " pairs from next node (%p) to %p\n", 1345 | __func__, to_move, (void *)next, (void *)n); 1346 | if (n->offset > 0) { 1347 | /* move to front, to make room */ 1348 | shift_pairs(n, 0, n->offset, n->count); 1349 | n->offset = 0; 1350 | } 1351 | 1352 | move_pairs(n, next, n->count, next->offset, to_move); 1353 | 1354 | next->count -= to_move; 1355 | 
next->offset += to_move; 1356 | n->count += to_move; 1357 | assert(next->count == required); 1358 | assert(n->count <= sa->node_size); 1359 | 1360 | } 1361 | } 1362 | 1363 | /* Search to find the next-to-last nodes and unlink the now-empty last 1364 | * node from them. */ 1365 | static void 1366 | unlink_node(struct skiparray *sa, struct node *n) { 1367 | LOG(2, "%s: unlinking empty node %p\n", __func__, (void *)n); 1368 | 1369 | if (n == sa->nodes[0]) { 1370 | assert(n->fwd[0] != NULL); /* never unlink empty first node */ 1371 | } 1372 | 1373 | for (int level = sa->height - 1; level >= 0; level--) { 1374 | if (sa->nodes[level] == n) { 1375 | sa->nodes[level] = n->fwd[level]; 1376 | LOG(2, "%s: sa->nodes[%d] <- %p\n", 1377 | __func__, level, (void *)sa->nodes[level]); 1378 | } 1379 | } 1380 | while (sa->height > 1 && sa->nodes[sa->height - 1] == NULL) { sa->height--; } 1381 | 1382 | /* Since the node is empty, compare against either the last key 1383 | * in the previous node or the first key in the next. One of 1384 | * them must be available. */ 1385 | const struct node *nearest = (n->back ? n->back : n->fwd[0]); 1386 | assert(nearest != NULL); 1387 | const uint16_t nearest_index = (n->back 1388 | ? nearest->offset + nearest->count - 1 1389 | : nearest->offset); 1390 | /* If using the key in the last node before, compare with <= instead of <. */ 1391 | const int cmp_condition = (n->back ? 1 /* res <= 0*/ : 0 /* res < 0 */); 1392 | assert(nearest); 1393 | 1394 | int level = sa->height - 1; 1395 | struct node *cur = NULL; 1396 | while (level >= 0) { 1397 | LOG(2, "%s: level %d, cur %p\n", __func__, level, (void *)cur); 1398 | /* Get the first node (if any) before the unlinked node. */ 1399 | if (cur == NULL) { 1400 | struct node *head = sa->nodes[level]; 1401 | if (head != NULL) { 1402 | int res = sa->cmp(head->keys[head->offset + head->count - 1], 1403 | nearest->keys[nearest_index], sa->udata); 1404 | if (res < cmp_condition) { 1405 | cur = head; 1406 | } else { 1407 | level--; 1408 | continue; 1409 | } 1410 | } 1411 | } 1412 | 1413 | assert(cur != NULL); 1414 | 1415 | /* Check for overshooting, advance, and unlink the node if found. 
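         * When the next pointer on this level is the node being removed, splice it out (updating the level-0 back pointer) and drop a level; otherwise keep advancing while the next node still sorts before the removed node's neighboring key.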
*/ 1416 | if (cur->fwd[level] == NULL) { 1417 | level--; 1418 | } else if (cur->fwd[level] == n) { 1419 | LOG(2, "%s: unlinking node %p on level %d\n", 1420 | __func__, (void *)n, level); 1421 | struct node *nfwd = n->fwd[level]; 1422 | cur->fwd[level] = nfwd; /* unlink */ 1423 | if (nfwd != NULL && level == 0) { 1424 | nfwd->back = cur; /* update back pointer */ 1425 | } 1426 | level--; 1427 | continue; 1428 | } else { 1429 | assert(cur); 1430 | struct node *next = cur->fwd[level]; 1431 | if (next == NULL) { 1432 | level--; 1433 | continue; 1434 | } 1435 | int res = sa->cmp(next->keys[next->offset + next->count - 1], 1436 | nearest->keys[nearest_index], sa->udata); 1437 | LOG(2, "%s: cmp_res %d\n", __func__, res); 1438 | if (res < cmp_condition) { 1439 | LOG(2, "%s: advancing on level %d, %p => %p\n", 1440 | __func__, level, (void *)cur, (void *)next); 1441 | cur = next; 1442 | } else { 1443 | LOG(2, "%s: overshot, descending\n", __func__); 1444 | level--; 1445 | } 1446 | } 1447 | } 1448 | node_free(sa, n); 1449 | } 1450 | 1451 | static void 1452 | shift_pairs(struct node *n, 1453 | uint16_t to_pos, uint16_t from_pos, uint16_t count) { 1454 | memmove(&n->keys[to_pos], 1455 | &n->keys[from_pos], 1456 | count * sizeof(n->keys[0])); 1457 | if (n->values != NULL) { 1458 | memmove(&n->values[to_pos], 1459 | &n->values[from_pos], 1460 | count * sizeof(n->values[0])); 1461 | } 1462 | } 1463 | 1464 | static void 1465 | move_pairs(struct node *to, struct node *from, 1466 | uint16_t to_pos, uint16_t from_pos, uint16_t count) { 1467 | memcpy(&to->keys[to_pos], 1468 | &from->keys[from_pos], 1469 | count * sizeof(to->keys[0])); 1470 | if (to->values != NULL) { 1471 | memcpy(&to->values[to_pos], 1472 | &from->values[from_pos], 1473 | count * sizeof(to->values[0])); 1474 | } 1475 | } 1476 | 1477 | static void 1478 | dump_raw_bindings(const char *tag, 1479 | const struct skiparray *sa, const struct node *n) { 1480 | if (LOG_LEVEL > 4) { 1481 | LOG(4, "====== %s\n", tag); 1482 | for (size_t i = 0; i < sa->node_size; i++) { 1483 | LOG(4, "%zu: %p => %p\n", i, (void *)n->keys[i], 1484 | n->values ? (void *)n->values[i] : NULL); 1485 | } 1486 | } 1487 | } 1488 | 1489 | static void * 1490 | def_memory_fun(void *p, size_t nsize, void *udata) { 1491 | (void)udata; 1492 | if (p != NULL) { 1493 | assert(nsize == 0); /* no realloc used */ 1494 | free(p); 1495 | return NULL; 1496 | } else { 1497 | return malloc(nsize); 1498 | } 1499 | } 1500 | 1501 | #include "splitmix64_stateless.h" 1502 | 1503 | static int 1504 | def_level_fun(uint64_t prng_state_in, 1505 | uint64_t *prng_state_out, void *udata) { 1506 | (void)udata; 1507 | 1508 | uint64_t next = splitmix64_stateless(prng_state_in); 1509 | *prng_state_out = next; 1510 | LOG(4, "%s: %"PRIx64" -> %"PRIx64"\n", __func__, prng_state_in, next); 1511 | for (uint8_t i = 0; i < SKIPARRAY_DEF_MAX_LEVEL; i++) { 1512 | if ((next & (1LLU << i)) == 0) { 1513 | return i; 1514 | } 1515 | } 1516 | return SKIPARRAY_DEF_MAX_LEVEL; 1517 | } 1518 | -------------------------------------------------------------------------------- /src/skiparray_fold.c: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2019 Scott Vokes 3 | * 4 | * Permission to use, copy, modify, and/or distribute this software for any 5 | * purpose with or without fee is hereby granted, provided that the above 6 | * copyright notice and this permission notice appear in all copies. 
7 | * 8 | * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 9 | * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 10 | * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 11 | * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 12 | * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 13 | * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 14 | * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 15 | */ 16 | 17 | #include "skiparray_fold_internal.h" 18 | 19 | #ifdef SKIPARRAY_LOG_FOLD 20 | #define LOG(...) fprintf(stdout, __VA_ARGS__) 21 | #else 22 | #define LOG(...) 23 | #endif 24 | 25 | enum skiparray_fold_res 26 | skiparray_fold_init(enum skiparray_fold_type type, 27 | struct skiparray *sa, skiparray_fold_fun *cb, void *udata, 28 | struct skiparray_fold_state **fs) { 29 | return skiparray_fold_multi_init(type, 1, &sa, 30 | cb, NULL, udata, fs); 31 | } 32 | 33 | enum skiparray_fold_res 34 | skiparray_fold(enum skiparray_fold_type direction, 35 | struct skiparray *sa, skiparray_fold_fun *cb, void *udata) { 36 | struct skiparray_fold_state *fs = NULL; 37 | enum skiparray_fold_res fres = skiparray_fold_init(direction, 38 | sa, cb, udata, &fs); 39 | if (fres != SKIPARRAY_FOLD_OK) { return fres; } 40 | 41 | while (skiparray_fold_next(fs) != SKIPARRAY_FOLD_NEXT_DONE) {} 42 | 43 | return SKIPARRAY_FOLD_OK; 44 | } 45 | 46 | enum skiparray_fold_res 47 | skiparray_fold_multi_init(enum skiparray_fold_type type, 48 | uint8_t skiparray_count, struct skiparray **skiparrays, 49 | skiparray_fold_fun *cb, skiparray_fold_merge_fun *merge, void *udata, 50 | struct skiparray_fold_state **fs) { 51 | 52 | if (skiparrays == NULL || skiparray_count < 1 || cb == NULL || fs == NULL) { 53 | return SKIPARRAY_FOLD_ERROR_MISUSE; 54 | } 55 | 56 | if (skiparray_count > 1 && merge == NULL) { 57 | return SKIPARRAY_FOLD_ERROR_MISUSE; 58 | } 59 | 60 | /* All skiparrays must have the same cmp, free, and mem callbacks, 61 | * and either all or none must use values. 
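     * Otherwise pairs drawn from different skiparrays could not be ordered against one another consistently, and the fold state would not know which allocator and free callback to use for its own bookkeeping.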
*/ 62 | for (size_t i = 0; i < skiparray_count; i++) { 63 | if (skiparrays[i] == NULL 64 | || skiparrays[i]->cmp != skiparrays[0]->cmp 65 | || skiparrays[i]->mem != skiparrays[0]->mem 66 | || skiparrays[i]->free != skiparrays[0]->free 67 | || skiparrays[i]->use_values != skiparrays[0]->use_values) { 68 | return SKIPARRAY_FOLD_ERROR_MISUSE; 69 | } 70 | } 71 | 72 | skiparray_memory_fun *mem = skiparrays[0]->mem; 73 | void *sa_udata = skiparrays[0]->udata; 74 | 75 | uint8_t *current_ids = NULL; 76 | struct skiparray_fold_state *res = NULL; 77 | const size_t res_alloc_size = sizeof(*res) 78 | + skiparray_count*sizeof(res->iters[0]); 79 | res = mem(NULL, res_alloc_size, sa_udata); 80 | if (res == NULL) { goto cleanup; } 81 | memset(res, 0x00, res_alloc_size); 82 | 83 | const size_t current_ids_alloc_size = skiparray_count * sizeof(uint8_t); 84 | current_ids = mem(NULL, current_ids_alloc_size, sa_udata); 85 | if (current_ids == NULL) { goto cleanup; } 86 | memset(current_ids, 0x00, current_ids_alloc_size); 87 | 88 | uint8_t live_count = 0; 89 | for (size_t i = 0; i < skiparray_count; i++) { 90 | struct skiparray_iter *iter = NULL; 91 | enum skiparray_iter_new_res ires; 92 | ires = skiparray_iter_new(skiparrays[i], &iter); 93 | switch (ires) { 94 | default: 95 | assert(false); 96 | case SKIPARRAY_ITER_NEW_ERROR_MEMORY: 97 | goto cleanup; 98 | case SKIPARRAY_ITER_NEW_EMPTY: 99 | break; 100 | case SKIPARRAY_ITER_NEW_OK: 101 | live_count++; 102 | break; /* continue below */ 103 | } 104 | 105 | if (iter && type == SKIPARRAY_FOLD_RIGHT) { 106 | skiparray_iter_seek_endpoint(iter, SKIPARRAY_ITER_SEEK_LAST); 107 | } 108 | 109 | res->iters[i].iter = iter; /* can be NULL -- immediately empty */ 110 | res->iter_count++; 111 | } 112 | 113 | res->type = type; 114 | res->use_values = skiparrays[0]->use_values; 115 | 116 | res->cbs.fold = cb; 117 | res->cbs.fold_udata = udata; 118 | res->cbs.mem = mem; 119 | res->cbs.sa_udata = sa_udata; 120 | res->cbs.cmp = skiparrays[0]->cmp; 121 | res->cbs.free = skiparrays[0]->free; 122 | res->cbs.merge = merge; 123 | 124 | assert(res->iter_count == skiparray_count); 125 | res->iter_live = live_count; 126 | 127 | res->ids.current = current_ids; 128 | 129 | *fs = res; 130 | return SKIPARRAY_FOLD_OK; 131 | 132 | cleanup: 133 | if (current_ids != NULL) { mem(current_ids, 0, sa_udata); } 134 | if (res != NULL) { 135 | for (size_t i = 0; i < res->iter_count; i++) { 136 | skiparray_iter_free(res->iters[i].iter); 137 | } 138 | mem(res, 0, sa_udata); 139 | } 140 | return SKIPARRAY_FOLD_ERROR_MEMORY; 141 | } 142 | 143 | void 144 | skiparray_fold_halt(struct skiparray_fold_state *fs) { 145 | if (fs == NULL) { return; } 146 | assert(fs->iter_count > 0); 147 | 148 | for (size_t i = 0; i < fs->iter_count; i++) { 149 | if (fs->iters[i].iter != NULL) { 150 | skiparray_iter_free(fs->iters[i].iter); 151 | } 152 | } 153 | 154 | if (fs->ids.current != NULL) { 155 | fs->cbs.mem(fs->ids.current, 0, fs->cbs.sa_udata); 156 | } 157 | 158 | fs->cbs.mem(fs, 0, fs->cbs.sa_udata); 159 | } 160 | 161 | enum skiparray_fold_next_res 162 | skiparray_fold_next(struct skiparray_fold_state *fs) { 163 | assert(fs != NULL); 164 | LOG("%s: ids.available %zu, live %zu, count %zu\n", 165 | __func__, 166 | (size_t)fs->ids.available, 167 | (size_t)fs->iter_live, 168 | (size_t)fs->iter_count); 169 | assert(fs->ids.available <= fs->iter_count); 170 | assert(fs->iter_live <= fs->iter_count); 171 | 172 | if (fs->iter_live == 0 && fs->ids.available == 0) { 173 | skiparray_fold_halt(fs); 174 | return 
SKIPARRAY_FOLD_NEXT_DONE; 175 | } 176 | 177 | if (fs->iter_live > 0) { step_active_iterators(fs); } 178 | call_with_next(fs, fs->iter_count); 179 | return SKIPARRAY_FOLD_NEXT_OK; 180 | } 181 | 182 | static void 183 | step_active_iterators(struct skiparray_fold_state *fs) { 184 | /* This could use a next chain, rather than walking the entire 185 | * array, but it's probably not worth the complexity since 186 | * the iterator count is likely to be small. */ 187 | assert(fs->iter_live > 0); 188 | 189 | for (size_t i_i = 0; i_i < fs->iter_count; i_i++) { 190 | struct iter_state *is = &fs->iters[i_i]; 191 | LOG("%s: %zu -- %p, %d\n", __func__, i_i, (void *)is->iter, is->state); 192 | if (is->iter == NULL) { continue; } /* done */ 193 | if (is->state != PS_NONE) { continue; } /* ids.available */ 194 | 195 | struct skiparray_pair *p = &is->pair; 196 | skiparray_iter_get(is->iter, &p->key, &p->value); 197 | 198 | insert_pair(fs, i_i); 199 | assert(is->state != PS_NONE); /* set during insertion */ 200 | LOG("%s: set %zu's state to %d, %p => %p\n", 201 | __func__, i_i, is->state, p->key, p->value); 202 | 203 | enum skiparray_iter_step_res sres; 204 | if (fs->type == SKIPARRAY_FOLD_RIGHT) { 205 | sres = skiparray_iter_prev(is->iter); 206 | } else if (fs->type == SKIPARRAY_FOLD_LEFT) { 207 | sres = skiparray_iter_next(is->iter); 208 | } else { 209 | assert(!"unreachable"); 210 | } 211 | 212 | if (sres == SKIPARRAY_ITER_STEP_END) { 213 | LOG("%s: done: %zu\n", __func__, i_i); 214 | skiparray_iter_free(is->iter); 215 | is->iter = NULL; 216 | assert(fs->iter_live > 0); 217 | fs->iter_live--; 218 | continue; 219 | } 220 | assert(sres == SKIPARRAY_ITER_STEP_OK); 221 | } 222 | } 223 | 224 | static void 225 | insert_pair(struct skiparray_fold_state *fs, size_t iter_i) { 226 | /* This could use binary search, but again, the iterator 227 | * count is likely to be small. */ 228 | uint8_t *ids = fs->ids.current; 229 | 230 | if (fs->ids.offset > 0) { 231 | memmove(&ids[0], &ids[fs->ids.offset], fs->ids.available); 232 | fs->ids.offset = 0; 233 | } 234 | 235 | struct iter_state *is = &fs->iters[iter_i]; 236 | const void *key = is->pair.key; 237 | LOG("%s: %p => %p\n", __func__, key, is->pair.value); 238 | 239 | for (size_t ci_i = 0; ci_i < fs->ids.available; ci_i++) { 240 | void *other = fs->iters[ids[ci_i]].pair.key; 241 | const int cmp_res = fs->cbs.cmp(key, other, fs->cbs.sa_udata); 242 | if (cmp_res <= 0) { /* shift forward */ 243 | memmove(&ids[ci_i + 1], &ids[ci_i], fs->ids.available - ci_i); 244 | ids[ci_i] = iter_i; 245 | is->state = (cmp_res == 0) ? PS_AVAILABLE_EQ : PS_AVAILABLE_LT; 246 | fs->ids.available++; 247 | return; 248 | } else { 249 | assert(cmp_res > 0); 250 | continue; 251 | } 252 | } 253 | 254 | /* > last */ 255 | assert(iter_i <= UINT8_MAX); 256 | ids[fs->ids.available] = (uint8_t)iter_i; 257 | is->state = PS_AVAILABLE_LT; 258 | fs->ids.available++; 259 | } 260 | 261 | static void 262 | call_with_next(struct skiparray_fold_state *fs, size_t count) { 263 | /* Get the next entries, starting at &ids.current[ids.offset]; if LT 264 | * then just call the fold callback on it; if EQ then collect all 265 | * the equal pairs and call merge first. 
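     * The merge callback sees every equal key and its value, chooses which key survives (by returning its index), and produces a single merged value, so the fold callback is still invoked exactly once per distinct key.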
*/ 266 | assert(fs->ids.available > 0); 267 | const uint8_t base = fs->ids.offset; 268 | 269 | struct iter_state *first = &fs->iters[fs->ids.current[base]]; 270 | if (first->state == PS_AVAILABLE_LT) { 271 | struct skiparray_pair *p = &first->pair; 272 | LOG("%s: p %p => %p\n", __func__, (void *)p->key, (void *)p->value); 273 | fs->cbs.fold(p->key, p->value, fs->cbs.fold_udata); 274 | fs->ids.available--; 275 | fs->ids.offset++; 276 | first->state = PS_NONE; 277 | return; 278 | } 279 | 280 | assert(first->state == PS_AVAILABLE_EQ); 281 | assert(fs->cbs.merge != NULL); 282 | 283 | /* Given N key/value pairs, choose a key (by ID) and a merged value 284 | * (which can point to an existing value or a new allocation). */ 285 | 286 | void *keys[count]; 287 | void *values[count]; 288 | uint8_t used = 0; 289 | 290 | for (size_t id_i = 0; id_i < fs->ids.available; id_i++) { 291 | uint8_t id = fs->ids.current[id_i + base]; 292 | struct iter_state *is = &fs->iters[id]; 293 | if (is->state != PS_AVAILABLE_EQ) { break; } 294 | keys[used] = is->pair.key; 295 | values[used] = (fs->use_values ? is->pair.value : NULL); 296 | used++; 297 | } 298 | assert(used > 0); 299 | 300 | void *merged_value = NULL; 301 | const uint8_t choice = fs->cbs.merge(used, 302 | (const void **)keys, values, &merged_value, fs->cbs.fold_udata); 303 | LOG("%s: choice %u\n", __func__, choice); 304 | assert(choice < used); 305 | 306 | fs->cbs.fold(keys[choice], merged_value, fs->cbs.fold_udata); 307 | 308 | fs->ids.available -= used; 309 | fs->ids.offset += used; 310 | 311 | for (size_t id_i = 0; id_i < used; id_i++) { 312 | uint8_t id = fs->ids.current[id_i + base]; 313 | struct iter_state *is = &fs->iters[id]; 314 | is->state = PS_NONE; 315 | } 316 | } 317 | -------------------------------------------------------------------------------- /src/skiparray_fold_internal.h: -------------------------------------------------------------------------------- 1 | #ifndef SKIPARRAY_FOLD_INTERNAL_H 2 | #define SKIPARRAY_FOLD_INTERNAL_H 3 | 4 | #include "skiparray_internal_types.h" 5 | 6 | /* #define SKIPARRAY_LOG_FOLD */ 7 | 8 | enum pair_state { 9 | PS_NONE, /* entry is not currently in use */ 10 | PS_AVAILABLE_LT, /* entry is < next, or last */ 11 | PS_AVAILABLE_EQ, /* entry is = next */ 12 | }; 13 | 14 | struct skiparray_fold_state { 15 | enum skiparray_fold_type type; 16 | bool use_values; 17 | 18 | struct { 19 | skiparray_fold_fun *fold; 20 | void *fold_udata; 21 | 22 | skiparray_cmp_fun *cmp; 23 | skiparray_free_fun *free; 24 | skiparray_fold_merge_fun *merge; 25 | skiparray_memory_fun *mem; 26 | void *sa_udata; 27 | } cbs; 28 | 29 | /* Array of iters[] IDs for available pairs, where their keys are in 30 | * >= order. iters[id].state indicates which keys are equal. 
*/ 31 | struct { 32 | uint8_t available; /* IDs available */ 33 | uint8_t offset; /* offset into IDs, as keys are used */ 34 | uint8_t *current; 35 | } ids; 36 | 37 | uint8_t iter_count; 38 | uint8_t iter_live; 39 | struct iter_state { 40 | enum pair_state state; /* state of pair's key */ 41 | struct skiparray_pair pair; 42 | struct skiparray_iter *iter; /* NULL -> done */ 43 | } iters[]; 44 | }; 45 | 46 | static void 47 | step_active_iterators(struct skiparray_fold_state *fs); 48 | 49 | static void 50 | insert_pair(struct skiparray_fold_state *fs, size_t iter_i); 51 | 52 | static void 53 | call_with_next(struct skiparray_fold_state *fs, size_t count); 54 | 55 | #endif 56 | -------------------------------------------------------------------------------- /src/skiparray_hof.c: -------------------------------------------------------------------------------- 1 | #include "skiparray_internal_types.h" 2 | 3 | #include "assert.h" 4 | 5 | /* Other misc. higher-order functions. */ 6 | 7 | struct filter_fold_env { 8 | char tag; 9 | struct skiparray_builder *b; 10 | skiparray_filter_fun *fun; 11 | void *udata; 12 | bool ok; 13 | }; 14 | 15 | static void 16 | filter_append(void *key, void *value, void *udata) { 17 | struct filter_fold_env *env = udata; 18 | assert(env->tag == 'F'); 19 | if (env->fun(key, value, env->udata)) { 20 | if (SKIPARRAY_BUILDER_APPEND_OK != 21 | skiparray_builder_append(env->b, key, value)) { 22 | env->ok = false; 23 | } 24 | } 25 | } 26 | 27 | struct skiparray * 28 | skiparray_filter(struct skiparray *sa, 29 | skiparray_filter_fun *fun, void *udata) { 30 | assert(sa != NULL); 31 | assert(fun != NULL); 32 | 33 | struct skiparray_builder *b = NULL; 34 | { 35 | struct skiparray_config cfg = { 36 | .node_size = sa->node_size, 37 | .max_level = sa->max_level, 38 | .ignore_values = !sa->use_values, 39 | .cmp = sa->cmp, 40 | .memory = sa->mem, 41 | .free = sa->free, 42 | .level = sa->level, 43 | .udata = sa->udata, 44 | }; 45 | if (SKIPARRAY_BUILDER_NEW_OK != skiparray_builder_new(&cfg, true, &b)) { 46 | return NULL; 47 | } 48 | } 49 | 50 | struct filter_fold_env env = { 51 | .tag = 'F', 52 | .b = b, 53 | .fun = fun, 54 | .udata = udata, 55 | .ok = true, 56 | }; 57 | 58 | if (SKIPARRAY_FOLD_OK != skiparray_fold(SKIPARRAY_FOLD_LEFT, 59 | sa, filter_append, &env)) { 60 | skiparray_builder_free(b); 61 | return NULL; 62 | } 63 | 64 | if (env.ok != true) { 65 | skiparray_builder_free(b); 66 | return NULL; 67 | } 68 | 69 | struct skiparray *res = NULL; 70 | skiparray_builder_finish(&b, &res); 71 | return res; 72 | } 73 | -------------------------------------------------------------------------------- /src/skiparray_internal.h: -------------------------------------------------------------------------------- 1 | #ifndef SKIPARRAY_INTERNAL_H 2 | #define SKIPARRAY_INTERNAL_H 3 | 4 | #include "skiparray_internal_types.h" 5 | 6 | #define LOG_LEVEL 0 7 | #define LOG_FILE stdout 8 | #define LOG(LVL, ...) 
\ 9 | do { \ 10 | if (LVL <= LOG_LEVEL) { \ 11 | fprintf(LOG_FILE, __VA_ARGS__); \ 12 | } \ 13 | } while(0) 14 | 15 | static struct node * 16 | node_alloc(uint8_t height, uint16_t node_size, 17 | skiparray_memory_fun *mem, void *udata, bool use_values); 18 | 19 | static void node_free(const struct skiparray *sa, struct node *n); 20 | 21 | enum search_res { 22 | SEARCH_FOUND, 23 | SEARCH_NOT_FOUND, 24 | SEARCH_EMPTY, 25 | }; 26 | static enum search_res 27 | search(struct search_env *env); 28 | 29 | static void 30 | prepare_node_for_insert(struct skiparray *sa, 31 | struct node *n, uint16_t index); 32 | 33 | static bool 34 | split_node(struct skiparray *sa, 35 | struct node *n, struct node **res); 36 | 37 | static void 38 | shift_or_merge(struct skiparray *sa, struct node *n); 39 | 40 | static void 41 | unlink_node(struct skiparray *sa, struct node *n); 42 | 43 | static bool 44 | search_within_node(const struct skiparray *sa, 45 | const void *key, const struct node *n, uint16_t *index); 46 | 47 | static void 48 | shift_pairs(struct node *n, 49 | uint16_t to_pos, uint16_t from_pos, uint16_t count); 50 | 51 | static void 52 | move_pairs(struct node *to, struct node *from, 53 | uint16_t to_pos, uint16_t from_pos, uint16_t count); 54 | 55 | static void 56 | *def_memory_fun(void *p, size_t nsize, void *udata); 57 | 58 | static int 59 | def_level_fun(uint64_t prng_state_in, 60 | uint64_t *prng_state_out, void *udata); 61 | 62 | static void 63 | dump_raw_bindings(const char *tag, 64 | const struct skiparray *sa, const struct node *n); 65 | 66 | #endif 67 | -------------------------------------------------------------------------------- /src/skiparray_internal_types.h: -------------------------------------------------------------------------------- 1 | #ifndef SKIPARRAY_INTERNAL_TYPES_H 2 | #define SKIPARRAY_INTERNAL_TYPES_H 3 | 4 | #include "skiparray.h" 5 | 6 | #include 7 | #include 8 | #include 9 | #include 10 | 11 | struct skiparray { 12 | const uint16_t node_size; 13 | const uint8_t max_level; 14 | uint8_t height; 15 | bool use_values; 16 | uint64_t prng_state; 17 | 18 | skiparray_memory_fun * const mem; 19 | skiparray_cmp_fun * const cmp; 20 | skiparray_free_fun * const free; 21 | skiparray_level_fun * const level; 22 | void *udata; 23 | 24 | struct skiparray_iter *iter; 25 | 26 | /* Node chains for each level, 0 to max_level - 1, inclusive. 27 | * Every node is on level 0; a level-1 node will also 28 | * be linked to nodes[1], etc. */ 29 | struct node *nodes[]; 30 | }; 31 | 32 | struct skiparray_builder { 33 | struct skiparray *sa; 34 | struct node *last; 35 | bool check_ascending; 36 | bool has_prev_key; 37 | void *prev_key; 38 | 39 | struct node *trail[]; 40 | }; 41 | 42 | struct node { 43 | /* How many levels is this node on? >= 1. */ 44 | const uint8_t height; 45 | uint16_t offset; 46 | uint16_t count; 47 | void **keys; 48 | void **values; 49 | 50 | struct node *back; /* back on level 0 */ 51 | 52 | /* Forward pointers. A level 0 node will have 1, 53 | * at level 0. A level 1 node will have 2, 54 | * at levels 0 (where all are linked) and 1. Etc. 
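     * A level 2 node, for example, has height 3: it is linked into the nodes[0], nodes[1], and nodes[2] chains and carries fwd[0] through fwd[2].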
*/ 54 | struct node *fwd[]; 55 | }; 56 | 57 | 58 | struct skiparray_iter { 59 | struct skiparray *sa; 60 | struct skiparray_iter *prev; 61 | struct skiparray_iter *next; 62 | struct node *n; 63 | uint16_t index; 64 | }; 65 | 66 | struct search_env { 67 | const struct skiparray *sa; 68 | const void *key; 69 | 70 | struct node *n; 71 | uint16_t index; 72 | }; 73 | 74 | #endif 75 | -------------------------------------------------------------------------------- /src/splitmix64_stateless.h: -------------------------------------------------------------------------------- 1 | #ifndef SPLITMIX64_STATELESS_H 2 | #define SPLITMIX64_STATELESS_H 3 | 4 | /* Modified to use an explicit input parameter and to 5 | * add __inline__. */ 6 | 7 | /* Written in 2015 by Sebastiano Vigna (vigna@acm.org) 8 | 9 | To the extent possible under law, the author has dedicated all copyright 10 | and related and neighboring rights to this software to the public domain 11 | worldwide. This software is distributed without any warranty. 12 | 13 | See <http://creativecommons.org/publicdomain/zero/1.0/>. */ 14 | 15 | #include <stdint.h> 16 | 17 | /* This is a fixed-increment version of Java 8's SplittableRandom generator 18 | See http://dx.doi.org/10.1145/2714064.2660195 and 19 | http://docs.oracle.com/javase/8/docs/api/java/util/SplittableRandom.html 20 | 21 | It is a very fast generator passing BigCrush, and it can be useful if 22 | for some reason you absolutely want 64 bits of state; otherwise, we 23 | rather suggest to use a xoroshiro128+ (for moderately parallel 24 | computations) or xorshift1024* (for massively parallel computations) 25 | generator. */ 26 | 27 | /* The state can be seeded with any value. */ 28 | 29 | static __inline__ uint64_t splitmix64_stateless(uint64_t x) { 30 | uint64_t z = (x += 0x9e3779b97f4a7c15); 31 | z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9; 32 | z = (z ^ (z >> 27)) * 0x94d049bb133111eb; 33 | return z ^ (z >> 31); 34 | } 35 | 36 | #endif 37 | -------------------------------------------------------------------------------- /test/test_skiparray.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | 3 | /* Add all the definitions that need to be in the test runner's main file. */ 4 | GREATEST_MAIN_DEFS(); 5 | 6 | int main(int argc, char **argv) { 7 | GREATEST_MAIN_BEGIN(); /* command-line arguments, initialization. 
*/ 8 | RUN_SUITE(basic); 9 | RUN_SUITE(builder); 10 | RUN_SUITE(fold); 11 | RUN_SUITE(hof); 12 | RUN_SUITE(integration); 13 | RUN_SUITE(prop); 14 | GREATEST_MAIN_END(); /* display results */ 15 | } 16 | -------------------------------------------------------------------------------- /test/test_skiparray.h: -------------------------------------------------------------------------------- 1 | #ifndef TEST_SKIPARRAY_H 2 | #define TEST_SKIPARRAY_H 3 | 4 | #define GREATEST_USE_LONGJMP 0 5 | #include "greatest.h" 6 | #include "theft.h" 7 | #include "skiparray.h" 8 | 9 | #include 10 | #include 11 | 12 | bool test_skiparray_invariants(struct skiparray *sa, int verbosity); 13 | 14 | SUITE_EXTERN(basic); 15 | SUITE_EXTERN(builder); 16 | SUITE_EXTERN(fold); 17 | SUITE_EXTERN(prop); 18 | SUITE_EXTERN(hof); 19 | SUITE_EXTERN(integration); 20 | 21 | struct test_env { 22 | char tag; 23 | size_t limit; 24 | size_t pair_count; 25 | uint8_t verbosity; 26 | struct theft_print_trial_result_env print_env; 27 | }; 28 | 29 | enum op_type { 30 | OP_GET, 31 | OP_SET, 32 | OP_FORGET, 33 | OP_POP_FIRST, 34 | OP_POP_LAST, 35 | OP_MEMBER, 36 | OP_COUNT, 37 | OP_FIRST, 38 | OP_LAST, 39 | 40 | /* No need to include the iterator stuff -- each operation 41 | * already calls test_skiparray_invariants on it, which 42 | * does a full iteration forwards and backwards. */ 43 | 44 | OP_TYPE_COUNT, 45 | }; 46 | 47 | struct op { 48 | enum op_type t; 49 | union { 50 | struct { 51 | intptr_t key; 52 | } get; 53 | struct { 54 | intptr_t key; 55 | intptr_t value; 56 | } set; 57 | struct { 58 | intptr_t key; 59 | } forget; 60 | struct { 61 | intptr_t key; 62 | } member; 63 | } u; 64 | }; 65 | 66 | struct scenario { 67 | uint32_t seed; 68 | uint16_t node_size; 69 | size_t count; 70 | struct op ops[]; 71 | }; 72 | 73 | struct pair { 74 | void *key; 75 | void *value; 76 | }; 77 | 78 | struct model { 79 | char tag; 80 | struct skiparray *sa; 81 | struct test_env *env; 82 | 83 | size_t pairs_used; 84 | struct pair pairs[]; 85 | }; 86 | 87 | extern const struct theft_type_info type_info_skiparray_operations; 88 | 89 | int test_skiparray_cmp_intptr_t(const void *ka, 90 | const void *kb, void *udata); 91 | 92 | struct skiparray * 93 | test_skiparray_sequential_build(size_t limit); 94 | 95 | #endif 96 | -------------------------------------------------------------------------------- /test/test_skiparray_basic.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | 3 | static struct skiparray *init_with_pairs(size_t limit) { 4 | const int verbosity = greatest_get_verbosity(); 5 | struct skiparray_config sa_config = { 6 | .cmp = test_skiparray_cmp_intptr_t, 7 | .node_size = 5, 8 | }; 9 | struct skiparray *sa = NULL; 10 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 11 | if (nres != SKIPARRAY_NEW_OK) { return NULL; } 12 | 13 | if (sa != NULL) { 14 | for (size_t i = 0; i < limit; i++) { 15 | void *x = (void *)i; 16 | if (verbosity > 2) { 17 | fprintf(GREATEST_STDOUT, "==== %s: set %p -> %p\n", 18 | __func__, (void *)x, (void *)x); 19 | } 20 | if (skiparray_set(sa, x, x) != SKIPARRAY_SET_BOUND) { 21 | skiparray_free(sa); 22 | return NULL; 23 | } 24 | 25 | if (!test_skiparray_invariants(sa, verbosity - 1)) { 26 | return NULL; 27 | } 28 | } 29 | } 30 | 31 | return sa; 32 | } 33 | 34 | TEST set_and_forget_lowest(size_t limit) { 35 | struct skiparray *sa = init_with_pairs(limit); 36 | ASSERT(sa); 37 | 38 | const int verbosity = greatest_get_verbosity(); 39 | if (verbosity > 
0) { 40 | fprintf(GREATEST_STDOUT, "==== %s(%zd)\n", __func__, limit); 41 | } 42 | ASSERT(test_skiparray_invariants(sa, verbosity - 1)); 43 | 44 | for (size_t i = 0; i < limit; i++) { 45 | if (verbosity > 1) { fprintf(GREATEST_STDOUT, "-- forgetting %zu\n", i); } 46 | 47 | struct skiparray_pair pair; 48 | enum skiparray_forget_res res = skiparray_forget(sa, 49 | (void *)i, &pair); 50 | ASSERT_EQ_FMT(SKIPARRAY_FORGET_OK, res, "%d"); 51 | ASSERT_EQ_FMT(i, (size_t)pair.value, "%zd"); 52 | ASSERT_EQ_FMT(i, (size_t)pair.key, "%zd"); 53 | 54 | ASSERT(test_skiparray_invariants(sa, verbosity - 1)); 55 | } 56 | 57 | skiparray_free(sa); 58 | PASS(); 59 | } 60 | 61 | TEST set_and_forget_highest(size_t limit) { 62 | struct skiparray *sa = init_with_pairs(limit); 63 | ASSERT(sa); 64 | 65 | const int verbosity = greatest_get_verbosity(); 66 | if (verbosity > 0) { 67 | fprintf(GREATEST_STDOUT, "==== %s(%zd)\n", __func__, limit); 68 | } 69 | ASSERT(test_skiparray_invariants(sa, verbosity > 1)); 70 | 71 | for (intptr_t i = limit - 1; i >= 0; i--) { 72 | struct skiparray_pair pair; 73 | enum skiparray_forget_res res = skiparray_forget(sa, 74 | (void *)i, &pair); 75 | ASSERT_EQ_FMT(SKIPARRAY_FORGET_OK, res, "%d"); 76 | ASSERT_EQ_FMT((uintptr_t)i, (uintptr_t)pair.value, "%"PRIuPTR); 77 | 78 | ASSERT(test_skiparray_invariants(sa, verbosity > 1)); 79 | if (i == 0) { break; } 80 | } 81 | 82 | skiparray_free(sa); 83 | PASS(); 84 | } 85 | 86 | TEST set_and_forget_interleaved(size_t limit) { 87 | const int verbosity = greatest_get_verbosity(); 88 | struct skiparray_config sa_config = { 89 | .cmp = test_skiparray_cmp_intptr_t, 90 | .node_size = 5, 91 | }; 92 | struct skiparray *sa = NULL; 93 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 94 | ASSERT_EQ_FMT(SKIPARRAY_NEW_OK, nres, "%d"); 95 | ASSERT(sa != NULL); 96 | 97 | for (size_t i = 0; i < limit; i++) { 98 | void *x = (void *)i; 99 | if (verbosity > 2) { 100 | fprintf(GREATEST_STDOUT, "==== %s: set %p -> %p\n", 101 | __func__, (void *)x, (void *)x); 102 | } 103 | if (skiparray_set(sa, x, x) != SKIPARRAY_SET_BOUND) { 104 | skiparray_free(sa); 105 | FAILm("set failure"); 106 | } 107 | 108 | struct skiparray_pair pair; 109 | enum skiparray_forget_res res = skiparray_forget(sa, 110 | (void *)i, &pair); 111 | ASSERT_EQ_FMT(SKIPARRAY_FORGET_OK, res, "%d"); 112 | ASSERT_EQ_FMT((uintptr_t)i, (uintptr_t)pair.value, "%"PRIuPTR); 113 | 114 | ASSERT(test_skiparray_invariants(sa, verbosity - 1)); 115 | } 116 | 117 | skiparray_free(sa); 118 | PASS(); 119 | } 120 | 121 | TEST set_and_pop_first(size_t limit) { 122 | struct skiparray *sa = init_with_pairs(limit); 123 | ASSERT(sa); 124 | 125 | const int verbosity = greatest_get_verbosity(); 126 | if (verbosity > 0) { 127 | fprintf(GREATEST_STDOUT, "==== %s(%zd)\n", __func__, limit); 128 | } 129 | ASSERT(test_skiparray_invariants(sa, verbosity > 1)); 130 | 131 | for (size_t i = 0; i < limit; i++) { 132 | intptr_t k = 0; 133 | intptr_t v = 0; 134 | 135 | enum skiparray_pop_res res = skiparray_pop_first(sa, 136 | (void *)&k, (void *)&v); 137 | ASSERT_EQ_FMT(SKIPARRAY_POP_OK, res, "%d"); 138 | ASSERT_EQ_FMT(i, (size_t)k, "%zd"); 139 | 140 | ASSERT(test_skiparray_invariants(sa, verbosity > 1)); 141 | } 142 | 143 | skiparray_free(sa); 144 | PASS(); 145 | } 146 | 147 | TEST set_and_pop_last(size_t limit) { 148 | struct skiparray *sa = init_with_pairs(limit); 149 | ASSERT(sa); 150 | 151 | const int verbosity = greatest_get_verbosity(); 152 | if (verbosity > 0) { 153 | fprintf(GREATEST_STDOUT, "==== %s(%zd)\n", __func__, 
limit); 154 | } 155 | ASSERT(test_skiparray_invariants(sa, verbosity > 1)); 156 | 157 | for (size_t i = 0; i < limit; i++) { 158 | intptr_t k = 0; 159 | intptr_t v = 0; 160 | 161 | enum skiparray_pop_res res = skiparray_pop_last(sa, 162 | (void *)&k, (void *)&v); 163 | ASSERT_EQ_FMT(SKIPARRAY_POP_OK, res, "%d"); 164 | ASSERT_EQ_FMT(limit - i - 1, (size_t)k, "%zd"); 165 | 166 | ASSERT(test_skiparray_invariants(sa, verbosity > 1)); 167 | } 168 | 169 | skiparray_free(sa); 170 | PASS(); 171 | } 172 | 173 | bool skiparray_bsearch(void *key, const void **keys, 174 | size_t key_count, skiparray_cmp_fun *cmp, void *udata, 175 | uint16_t *index); 176 | 177 | TEST binary_search(void) { 178 | #define MAX_SIZE 16 179 | intptr_t keys[MAX_SIZE + 1]; 180 | int verbosity = greatest_get_verbosity(); 181 | for (uint16_t size = 1; size <= MAX_SIZE; size++) { 182 | for (int present = 1; present >= 0; present--) { 183 | for (uintptr_t needle = 0; needle < size; needle++) { 184 | for (size_t i = 0; i < size; i++) { 185 | keys[i] = i; 186 | if (!present && i >= needle) { keys[i] += 1; } 187 | if (verbosity > 0) { 188 | printf(" == %zd: %"PRIdPTR "\n", i, keys[i]); 189 | } 190 | } 191 | 192 | uint16_t index = (uint16_t)-1; 193 | bool found = skiparray_bsearch((void *)needle, 194 | (void *)keys, size, 195 | test_skiparray_cmp_intptr_t, NULL, 196 | &index); 197 | if (verbosity > 0) { 198 | printf("size %u, needle %"PRIuPTR", present %d ==> found %d, index %u\n\n", 199 | size, needle, present, found, index); 200 | } 201 | ASSERT_EQ(present, found); 202 | ASSERT_EQ_FMT(needle, (uintptr_t)index, "%"PRIuPTR); 203 | } 204 | } 205 | } 206 | PASS(); 207 | } 208 | 209 | TEST iteration_locks_collection(bool free_newest_first) { 210 | struct skiparray_config sa_config = { 211 | .cmp = test_skiparray_cmp_intptr_t, 212 | }; 213 | struct skiparray *sa = NULL; 214 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 215 | ASSERT_EQ_FMT(SKIPARRAY_NEW_OK, nres, "%d"); 216 | ASSERT(sa != NULL); 217 | 218 | void *x = (void *)23; 219 | ASSERT_EQ_FMT(SKIPARRAY_SET_BOUND, 220 | skiparray_set(sa, x, x), "%d"); 221 | 222 | struct skiparray_iter *iter = NULL; 223 | ASSERT_EQ_FMT(SKIPARRAY_ITER_NEW_OK, 224 | skiparray_iter_new(sa, &iter), "%d"); 225 | 226 | /* Allocating an iterator locks the collection. */ 227 | 228 | void *k; 229 | void *v; 230 | 231 | ASSERT_EQ_FMT(SKIPARRAY_SET_ERROR_LOCKED, 232 | skiparray_set(sa, x, x), "%d"); 233 | ASSERT_EQ_FMT(SKIPARRAY_FORGET_ERROR_LOCKED, 234 | skiparray_forget(sa, x, NULL), "%d"); 235 | 236 | ASSERT_EQ_FMT(SKIPARRAY_POP_ERROR_LOCKED, 237 | skiparray_pop_first(sa, &k, &v), "%d"); 238 | ASSERT_EQ_FMT(SKIPARRAY_POP_ERROR_LOCKED, 239 | skiparray_pop_last(sa, &k, &v), "%d"); 240 | 241 | /* Allocate another iterator, then verify that it's still locked. 
*/ 242 | struct skiparray_iter *iter2 = NULL; 243 | ASSERT_EQ_FMT(SKIPARRAY_ITER_NEW_OK, 244 | skiparray_iter_new(sa, &iter2), "%d"); 245 | 246 | /* Free one, according to the arg, and verify that it's still locked */ 247 | if (free_newest_first) { 248 | skiparray_iter_free(iter2); 249 | } else { 250 | skiparray_iter_free(iter); 251 | } 252 | 253 | ASSERT_EQ_FMT(SKIPARRAY_SET_ERROR_LOCKED, 254 | skiparray_set(sa, x, x), "%d"); 255 | ASSERT_EQ_FMT(SKIPARRAY_FORGET_ERROR_LOCKED, 256 | skiparray_forget(sa, x, NULL), "%d"); 257 | 258 | ASSERT_EQ_FMT(SKIPARRAY_POP_ERROR_LOCKED, 259 | skiparray_pop_first(sa, &k, &v), "%d"); 260 | ASSERT_EQ_FMT(SKIPARRAY_POP_ERROR_LOCKED, 261 | skiparray_pop_last(sa, &k, &v), "%d"); 262 | 263 | /* After the last iterator is freed, the collection should unlock. */ 264 | if (free_newest_first) { 265 | skiparray_iter_free(iter); 266 | } else { 267 | skiparray_iter_free(iter2); 268 | } 269 | 270 | k = (void *)12345; 271 | enum skiparray_set_res sres = skiparray_set(sa, k, x); 272 | ASSERT_EQ_FMT(SKIPARRAY_SET_BOUND, sres, "%d"); 273 | 274 | ASSERT_EQ_FMT(SKIPARRAY_FORGET_OK, 275 | skiparray_forget(sa, k, NULL), "%d"); 276 | 277 | /* add it back, so it can then be popped off */ 278 | sres = skiparray_set(sa, k, x); 279 | ASSERT_EQ_FMT(SKIPARRAY_SET_BOUND, sres, "%d"); 280 | 281 | enum skiparray_pop_res pres = skiparray_pop_first(sa, &k, &v); 282 | ASSERT_EQ_FMT(SKIPARRAY_POP_OK, pres, "%d"); 283 | ASSERT_EQ_FMT((uintptr_t)23, (uintptr_t)k, "%zu"); 284 | ASSERT_EQ_FMT((uintptr_t)23, (uintptr_t)v, "%zu"); 285 | 286 | pres = skiparray_pop_last(sa, &k, &v); 287 | ASSERT_EQ_FMT(SKIPARRAY_POP_OK, pres, "%d"); 288 | ASSERT_EQ_FMT((uintptr_t)12345, (uintptr_t)k, "%zu"); 289 | ASSERT_EQ_FMT((uintptr_t)23, (uintptr_t)v, "%zu"); 290 | 291 | skiparray_free(sa); 292 | PASS(); 293 | } 294 | 295 | TEST iteration(void) { 296 | int verbosity = greatest_get_verbosity(); 297 | struct skiparray_config sa_config = { 298 | .cmp = test_skiparray_cmp_intptr_t, 299 | .node_size = 5, 300 | }; 301 | struct skiparray *sa = NULL; 302 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 303 | ASSERT_EQ_FMT(SKIPARRAY_NEW_OK, nres, "%d"); 304 | ASSERT(sa != NULL); 305 | 306 | /* set bindings from 100 to 9900 */ 307 | for (size_t i = 1; i < 100; i++) { 308 | if (verbosity > 0) { 309 | fprintf(GREATEST_STDOUT, "%s: binding %zu (0x%"PRIxPTR") -> %zu (0x%"PRIxPTR")\n", 310 | __func__, 100 * i, (uintptr_t)(100 * i), 311 | (100 * i) + 1, (uintptr_t)((100 * i) + 1)); 312 | } 313 | void *x = (void *)(100 * i); 314 | ASSERT_EQ_FMT(SKIPARRAY_SET_BOUND, 315 | skiparray_set(sa, x, (void *)((uintptr_t)x + 1)), "%d"); 316 | }; 317 | 318 | struct skiparray_iter *iter = NULL; 319 | ASSERT_EQ_FMT(SKIPARRAY_ITER_NEW_OK, 320 | skiparray_iter_new(sa, &iter), "%d"); 321 | 322 | /* allocate more iterators, to test that they get cleaned up properly */ 323 | for (size_t i = 0; i < 10; i++) { 324 | struct skiparray_iter *extra_iter = NULL; 325 | ASSERT_EQ_FMT(SKIPARRAY_ITER_NEW_OK, 326 | skiparray_iter_new(sa, &extra_iter), "%d"); 327 | ASSERT(extra_iter != NULL); 328 | } 329 | 330 | #define GET_AND_CHECK(EXP_KEY, EXP_VALUE) \ 331 | do { \ 332 | void *k; \ 333 | void *v; \ 334 | skiparray_iter_get(iter, &k, &v); \ 335 | ASSERT_EQ_FMT((uintptr_t)EXP_KEY, (uintptr_t)k, "%"PRIuPTR); \ 336 | ASSERT_EQ_FMT((uintptr_t)EXP_VALUE, (uintptr_t)v, "%"PRIuPTR); \ 337 | } while (0) 338 | 339 | skiparray_iter_seek_endpoint(iter, SKIPARRAY_ITER_SEEK_LAST); 340 | GET_AND_CHECK(9900, 9901); 341 | 342 | 
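/* Seek back to the first binding and check it. */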
skiparray_iter_seek_endpoint(iter, SKIPARRAY_ITER_SEEK_FIRST); 343 | GET_AND_CHECK(100, 101); 344 | 345 | /* seek to a present value */ 346 | enum skiparray_iter_seek_res sres = 347 | skiparray_iter_seek(iter, (void *)5000); 348 | ASSERT_EQ_FMT(SKIPARRAY_ITER_SEEK_FOUND, sres, "%d"); 349 | GET_AND_CHECK(5000, 5001); 350 | 351 | enum skiparray_iter_step_res step_res; 352 | step_res = skiparray_iter_next(iter); 353 | ASSERT_EQ_FMT(SKIPARRAY_ITER_STEP_OK, step_res, "%d"); 354 | GET_AND_CHECK(5100, 5101); 355 | 356 | step_res = skiparray_iter_next(iter); 357 | ASSERT_EQ_FMT(SKIPARRAY_ITER_STEP_OK, step_res, "%d"); 358 | GET_AND_CHECK(5200, 5201); 359 | 360 | step_res = skiparray_iter_prev(iter); 361 | ASSERT_EQ_FMT(SKIPARRAY_ITER_STEP_OK, step_res, "%d"); 362 | GET_AND_CHECK(5100, 5101); 363 | 364 | ASSERT(test_skiparray_invariants(sa, verbosity)); 365 | 366 | /* seek to a nonexistent value */ 367 | sres = skiparray_iter_seek(iter, (void *)1234); 368 | ASSERT_EQ_FMT(SKIPARRAY_ITER_SEEK_NOT_FOUND, sres, "%d"); 369 | GET_AND_CHECK(1300, 1301); 370 | 371 | /* try seeking to all entries and check the next */ 372 | for (size_t i = 0; i < 10000; i++) { 373 | sres = skiparray_iter_seek(iter, (void *)i); 374 | const bool present = (i % 100) == 0; 375 | if (i < 100) { 376 | ASSERT_EQ_FMT(SKIPARRAY_ITER_SEEK_ERROR_BEFORE_FIRST, sres, "%d"); 377 | } else if (i > 9900) { 378 | ASSERT_EQ_FMT(SKIPARRAY_ITER_SEEK_ERROR_AFTER_LAST, sres, "%d"); 379 | } else { 380 | ASSERT_EQ_FMT(present 381 | ? SKIPARRAY_ITER_SEEK_FOUND 382 | : SKIPARRAY_ITER_SEEK_NOT_FOUND, 383 | sres, "%d"); 384 | void *exp_k = (void *)(i - (i % 100) + (present ? 0 : 100)); 385 | GET_AND_CHECK(exp_k, ((uintptr_t)exp_k + 1)); 386 | } 387 | } 388 | 389 | /* freeing the skiparray should also free any pending iterators */ 390 | skiparray_free(sa); 391 | PASS(); 392 | } 393 | 394 | SUITE(basic) { 395 | RUN_TEST(binary_search); 396 | RUN_TESTp(iteration_locks_collection, false); 397 | RUN_TESTp(iteration_locks_collection, true); 398 | RUN_TEST(iteration); 399 | 400 | for (size_t i = 10; i <= 10000; i *= 10) { 401 | if (greatest_get_verbosity() > 0) { 402 | fprintf(GREATEST_STDOUT, "== %s: tests with i = %zu\n", __func__, i); 403 | } 404 | 405 | char buf[8]; 406 | if (sizeof(buf) < (size_t)snprintf(buf, sizeof(buf), "%zu", i)) { assert(false); } 407 | 408 | greatest_set_test_suffix(buf); 409 | RUN_TESTp(set_and_forget_lowest, i); 410 | 411 | greatest_set_test_suffix(buf); 412 | RUN_TESTp(set_and_forget_interleaved, i); 413 | 414 | greatest_set_test_suffix(buf); 415 | RUN_TESTp(set_and_forget_highest, i); 416 | 417 | greatest_set_test_suffix(buf); 418 | RUN_TESTp(set_and_pop_first, i); 419 | 420 | greatest_set_test_suffix(buf); 421 | RUN_TESTp(set_and_pop_last, i); 422 | } 423 | } 424 | -------------------------------------------------------------------------------- /test/test_skiparray_builder.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | 3 | static struct skiparray_config config = { 4 | .cmp = test_skiparray_cmp_intptr_t, 5 | .node_size = 3, 6 | }; 7 | 8 | TEST reject_missing_parameters(void) { 9 | struct skiparray_config bad = { 10 | .node_size = 2, 11 | }; 12 | 13 | struct skiparray_builder *b = NULL; 14 | 15 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_ERROR_MISUSE, 16 | skiparray_builder_new(NULL, false, &b), "%d"); 17 | 18 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_ERROR_MISUSE, 19 | skiparray_builder_new(&bad, false, NULL), "%d"); 20 | 21 | bad.node_size = 1; 22 | 
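/* A node_size below 2 should also be rejected as misuse. */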
ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_ERROR_MISUSE, 23 | skiparray_builder_new(&bad, false, &b), "%d"); 24 | 25 | PASS(); 26 | } 27 | 28 | TEST reject_descending_key(void) { 29 | struct skiparray_builder *b = NULL; 30 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_OK, 31 | skiparray_builder_new(&config, false, &b), "%d"); 32 | 33 | uintptr_t k1 = 1; 34 | uintptr_t k0 = 0; 35 | 36 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_APPEND_OK, 37 | skiparray_builder_append(b, (void *)k1, NULL), "%d"); 38 | 39 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_APPEND_ERROR_MISUSE, 40 | skiparray_builder_append(b, (void *)k0, NULL), "%d"); 41 | 42 | skiparray_builder_free(b); 43 | PASS(); 44 | } 45 | 46 | TEST reject_equal_key(void) { 47 | struct skiparray_builder *b = NULL; 48 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_OK, 49 | skiparray_builder_new(&config, false, &b), "%d"); 50 | 51 | const uintptr_t k1 = 1; 52 | 53 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_APPEND_OK, 54 | skiparray_builder_append(b, (void *)k1, NULL), "%d"); 55 | 56 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_APPEND_ERROR_MISUSE, 57 | skiparray_builder_append(b, (void *)k1, NULL), "%d"); 58 | 59 | skiparray_builder_free(b); 60 | PASS(); 61 | } 62 | 63 | TEST build_ascending(size_t limit) { 64 | const int verbosity = greatest_get_verbosity(); 65 | struct skiparray_builder *b = NULL; 66 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_OK, 67 | skiparray_builder_new(&config, false, &b), "%d"); 68 | 69 | for (size_t i = 0; i < limit; i++) { 70 | const uintptr_t k = (uintptr_t)i; 71 | const uintptr_t v = 2*k + 1; 72 | 73 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_APPEND_OK, 74 | skiparray_builder_append(b, (void *)k, (void *)v), "%d"); 75 | } 76 | 77 | struct skiparray *sa = NULL; 78 | skiparray_builder_finish(&b, &sa); 79 | ASSERT(sa != NULL); 80 | ASSERT(test_skiparray_invariants(sa, verbosity - 1)); 81 | 82 | for (size_t i = 0; i < limit; i++) { 83 | const uintptr_t k = (uintptr_t)i; 84 | const uintptr_t exp = 2*k + 1; 85 | uintptr_t v; 86 | 87 | ASSERT(skiparray_get(sa, (void *)k, (void **)&v)); 88 | ASSERT_EQ_FMT(exp, v, "%"PRIuPTR); 89 | } 90 | 91 | skiparray_free(sa); 92 | PASS(); 93 | } 94 | 95 | SUITE(builder) { 96 | RUN_TEST(reject_missing_parameters); 97 | RUN_TEST(reject_descending_key); 98 | RUN_TEST(reject_equal_key); 99 | 100 | for (size_t i = 10; i <= 100000; i *= 10) { 101 | if (greatest_get_verbosity() > 0) { 102 | fprintf(GREATEST_STDOUT, "== %s: tests with i = %zu\n", __func__, i); 103 | } 104 | 105 | char buf[8]; 106 | if (sizeof(buf) < (size_t)snprintf(buf, sizeof(buf), "%zu", i)) { assert(false); } 107 | 108 | greatest_set_test_suffix(buf); 109 | RUN_TESTp(build_ascending, i); 110 | } 111 | } 112 | -------------------------------------------------------------------------------- /test/test_skiparray_fold.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | 3 | static void 4 | sub_key_from_actual(void *key, void *value, void *udata) { 5 | uintptr_t *actual = udata; 6 | assert(actual != NULL); 7 | (void)value; 8 | (*actual) -= (uintptr_t)key; 9 | } 10 | 11 | TEST sub_forward_and_reverse(size_t limit) { 12 | struct skiparray *sa = test_skiparray_sequential_build(limit); 13 | 14 | /* This uses subtraction because the result will differ when 15 | * iterating left-to-right and right-to-left. 
*/ 16 | 17 | { 18 | uintptr_t expected = 0; 19 | for (uintptr_t i = 0; i < limit; i++) { 20 | expected -= i; /* note: rollover is fine here */ 21 | } 22 | uintptr_t acc = 0; 23 | 24 | struct skiparray_fold_state *fs = NULL; 25 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 26 | skiparray_fold_init(SKIPARRAY_FOLD_LEFT, sa, 27 | sub_key_from_actual, (void *)&acc, &fs), "%d"); 28 | 29 | while (skiparray_fold_next(fs) != SKIPARRAY_FOLD_NEXT_DONE) { 30 | ; 31 | } 32 | ASSERT_EQ_FMT(expected, acc, "%zu"); 33 | } 34 | 35 | { 36 | assert(limit > 0); 37 | uintptr_t expected = 0; 38 | for (uintptr_t i = limit - 1; true; i--) { 39 | expected -= i; 40 | if (i == 0) { break; } 41 | } 42 | uintptr_t acc = 0; 43 | 44 | struct skiparray_fold_state *fs = NULL; 45 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 46 | skiparray_fold_init(SKIPARRAY_FOLD_RIGHT, sa, 47 | sub_key_from_actual, (void *)&acc, &fs), "%d"); 48 | 49 | while (skiparray_fold_next(fs) != SKIPARRAY_FOLD_NEXT_DONE) { 50 | ; 51 | } 52 | ASSERT_EQ_FMT(expected, acc, "%zu"); 53 | } 54 | 55 | skiparray_free(sa); 56 | PASS(); 57 | } 58 | 59 | TEST sub_forward_and_reverse_halt_partway(size_t limit) { 60 | struct skiparray *sa = test_skiparray_sequential_build(limit); 61 | 62 | /* same as the last test, but stop over iterating over only 63 | * half of the skiparray: 64 | * - left: first half 65 | * - right: last half */ 66 | const size_t steps = limit/2; 67 | 68 | { 69 | uintptr_t expected = 0; 70 | for (uintptr_t i = 0; i < steps; i++) { 71 | expected -= i; 72 | } 73 | uintptr_t acc = 0; 74 | 75 | struct skiparray_fold_state *fs = NULL; 76 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 77 | skiparray_fold_init(SKIPARRAY_FOLD_LEFT, sa, 78 | sub_key_from_actual, (void *)&acc, &fs), "%d"); 79 | 80 | uintptr_t steps_i = 0; 81 | while (skiparray_fold_next(fs) != SKIPARRAY_FOLD_NEXT_DONE) { 82 | steps_i++; 83 | if (steps_i == steps) { break; } 84 | } 85 | skiparray_fold_halt(fs); 86 | ASSERT_EQ_FMT(expected, acc, "%zu"); 87 | } 88 | 89 | { 90 | assert(limit > 0); 91 | uintptr_t expected = 0; 92 | for (uintptr_t i = limit - 1, steps_i = 0; steps_i < steps; i--, steps_i++) { 93 | expected -= i; 94 | } 95 | uintptr_t acc = 0; 96 | 97 | struct skiparray_fold_state *fs = NULL; 98 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 99 | skiparray_fold_init(SKIPARRAY_FOLD_RIGHT, sa, 100 | sub_key_from_actual, (void *)&acc, &fs), "%d"); 101 | 102 | uintptr_t steps_i = 0; 103 | while (skiparray_fold_next(fs) != SKIPARRAY_FOLD_NEXT_DONE) { 104 | steps_i++; 105 | if (steps_i == steps) { break; } 106 | } 107 | skiparray_fold_halt(fs); 108 | ASSERT_EQ_FMT(expected, acc, "%zu"); 109 | } 110 | 111 | skiparray_free(sa); 112 | PASS(); 113 | } 114 | 115 | struct multi_env { 116 | bool ok; 117 | struct skiparray_builder *b; 118 | }; 119 | 120 | static void 121 | append_cb(void *key, void *value, void *udata) { 122 | struct multi_env *env = udata; 123 | if (env->ok) { 124 | if (SKIPARRAY_BUILDER_APPEND_OK != 125 | skiparray_builder_append(env->b, key, value)) { 126 | env->ok = false; 127 | } 128 | } 129 | } 130 | 131 | static uint8_t 132 | merge_cb(uint8_t count, const void **keys, void **values, 133 | void **merged_value, void *udata) { 134 | (void)keys; 135 | (void)values; 136 | (void)udata; 137 | assert(count > 0); 138 | /* always choose the largest value for which key % value is 0 */ 139 | uintptr_t key = (uintptr_t)keys[0]; 140 | uintptr_t out_value = 0; 141 | for (size_t i = 0; i < count; i++) { 142 | const uintptr_t v = (uintptr_t)values[i]; 143 | assert(key == (uintptr_t)keys[i]); 144 | if ((key % v) == 0) { 145 | if 
(v > out_value) { out_value = v; } 146 | } 147 | } 148 | 149 | *merged_value = (void *)out_value; 150 | return 0; /* all keys are equal */ 151 | } 152 | 153 | TEST fold_multi_and_check_merge(size_t limit) { 154 | /* Take N skiplists of 0..limit multiplied by muls[I] and 155 | * use a multi-fold to zip their values together. */ 156 | const uintptr_t muls[] = { 1, 3, 5 }; 157 | #define MUL_CT (sizeof(muls)/sizeof(muls[0])) 158 | /* this could overflow, but it's not likely with a realistic limit */ 159 | if (muls[MUL_CT - 1] * limit <= limit) { SKIPm("overflow"); } 160 | 161 | struct skiparray_config cfg = { 162 | .cmp = test_skiparray_cmp_intptr_t 163 | }; 164 | 165 | struct skiparray_builder *builders[MUL_CT] = { NULL }; 166 | for (size_t m_i = 0; m_i < MUL_CT; m_i++) { 167 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_OK, 168 | skiparray_builder_new(&cfg, false, &builders[m_i]), "%d"); 169 | } 170 | 171 | /* Append ascending keys and multiplied values */ 172 | for (size_t i = 0; i < limit; i++) { 173 | for (size_t m_i = 0; m_i < MUL_CT; m_i++) { 174 | uintptr_t key = muls[m_i] * i; 175 | uintptr_t value = muls[m_i]; 176 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_APPEND_OK, 177 | skiparray_builder_append(builders[m_i], 178 | (void *)key, (void *)value), "%d"); 179 | } 180 | } 181 | 182 | /* Finish the builders */ 183 | struct skiparray *sas[MUL_CT]; 184 | for (size_t m_i = 0; m_i < MUL_CT; m_i++) { 185 | skiparray_builder_finish(&builders[m_i], &sas[m_i]); 186 | ASSERT(sas[m_i] != NULL); 187 | } 188 | 189 | struct multi_env env = { .ok = true }; 190 | 191 | ASSERT_EQ_FMT(SKIPARRAY_BUILDER_NEW_OK, 192 | skiparray_builder_new(&cfg, false, &env.b), "%d"); 193 | 194 | /* Use a multi-fold to merge them, passing in a new builder */ 195 | struct skiparray_fold_state *fs = NULL; 196 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 197 | skiparray_fold_multi_init(SKIPARRAY_FOLD_LEFT, 198 | MUL_CT, sas, append_cb, merge_cb, (void *)&env, 199 | &fs), 200 | "%d"); 201 | 202 | ASSERT(env.ok); 203 | 204 | /* Step the fold until done */ 205 | do { 206 | } while (skiparray_fold_next(fs) != SKIPARRAY_FOLD_NEXT_DONE); 207 | 208 | /* Finish the result builder */ 209 | struct skiparray *res = NULL; 210 | skiparray_builder_finish(&env.b, &res); 211 | 212 | /* Free the merged skiparrays */ 213 | for (size_t m_i = 0; m_i < MUL_CT; m_i++) { 214 | skiparray_free(sas[m_i]); 215 | } 216 | 217 | { 218 | struct skiparray_iter *iter = NULL; 219 | ASSERT_EQ_FMT(SKIPARRAY_ITER_NEW_OK, 220 | skiparray_iter_new(res, &iter), "%d"); 221 | 222 | /* Iterate over the merged skiparray, checking that the value 223 | * is set to the highest value for which key % V is 0. 
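 * (For example, with limit >= 10, key 5 appears in the inputs built
 * with multipliers 1 and 5, carrying values 1 and 5; both divide 5
 * evenly, so its merged value should be 5.)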
*/ 224 | do { 225 | uintptr_t key; 226 | uintptr_t value; 227 | skiparray_iter_get(iter, (void *)&key, (void *)&value); 228 | 229 | for (int m_i = MUL_CT - 1; m_i >= 0; m_i--) { 230 | if ((key % muls[m_i]) == 0) { 231 | ASSERT_EQ_FMT(muls[m_i], value, "%"PRIuPTR); 232 | break; 233 | } 234 | } 235 | } while (skiparray_iter_next(iter) != SKIPARRAY_ITER_STEP_END); 236 | 237 | skiparray_iter_free(iter); 238 | } 239 | 240 | skiparray_free(res); 241 | PASS(); 242 | } 243 | 244 | static void 245 | sum_values(void *key, void *value, void *udata) { 246 | uintptr_t *actual = udata; 247 | assert(actual != NULL); 248 | (void)key; 249 | (*actual) += (uintptr_t)value; 250 | } 251 | 252 | TEST onepass_sum(size_t limit) { 253 | struct skiparray *sa = test_skiparray_sequential_build(limit); 254 | 255 | size_t exp = 0; 256 | for (size_t i = 0; i < limit; i++) { exp += i; } 257 | 258 | size_t actual = 0; 259 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 260 | skiparray_fold(SKIPARRAY_FOLD_LEFT, 261 | sa, sum_values, &actual), "%d"); 262 | ASSERT_EQ_FMT(exp, actual, "%zu"); 263 | 264 | skiparray_free(sa); 265 | PASS(); 266 | } 267 | 268 | TEST iter_empty(void) { 269 | struct skiparray *sa = test_skiparray_sequential_build(0); 270 | 271 | size_t exp = 0; 272 | size_t actual = 0; 273 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 274 | skiparray_fold(SKIPARRAY_FOLD_LEFT, 275 | sa, sum_values, &actual), "%d"); 276 | ASSERT_EQ_FMT(exp, actual, "%zu"); 277 | 278 | skiparray_free(sa); 279 | PASS(); 280 | } 281 | 282 | SUITE(fold) { 283 | for (size_t limit = 10; limit <= 1000000; limit *= 10) { 284 | char buf[64]; 285 | #define SET_SUFFIX() \ 286 | snprintf(buf, sizeof(buf), "%zu", limit); \ 287 | greatest_set_test_suffix(buf) 288 | 289 | SET_SUFFIX(); 290 | RUN_TESTp(sub_forward_and_reverse, limit); 291 | SET_SUFFIX(); 292 | RUN_TESTp(sub_forward_and_reverse_halt_partway, limit); 293 | SET_SUFFIX(); 294 | RUN_TESTp(fold_multi_and_check_merge, limit); 295 | SET_SUFFIX(); 296 | RUN_TESTp(onepass_sum, limit); 297 | } 298 | 299 | RUN_TEST(iter_empty); 300 | } 301 | -------------------------------------------------------------------------------- /test/test_skiparray_hof.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | 3 | struct parity_env { 4 | char tag; 5 | int parity; 6 | bool ok; 7 | }; 8 | 9 | static bool 10 | keep_cb(const void *key, const void *value, void *udata) { 11 | struct parity_env *env = udata; 12 | uintptr_t k = (uintptr_t)key; 13 | (void)value; 14 | assert(env->tag == 'E'); 15 | return (k & env->parity); 16 | } 17 | 18 | static void 19 | matches_parity(void *key, void *value, void *udata) { 20 | struct parity_env *env = udata; 21 | (void)value; 22 | assert(env->tag == 'E'); 23 | uintptr_t k = (uintptr_t)key; 24 | if ((k & 1) != env->parity) { env->ok = false; } 25 | } 26 | 27 | TEST filter_odds_or_evens(int parity) { 28 | struct skiparray *sa = test_skiparray_sequential_build(10); 29 | ASSERT(sa != NULL); 30 | 31 | struct parity_env env = { 32 | .tag = 'E', 33 | .parity = parity, 34 | .ok = true, 35 | }; 36 | 37 | struct skiparray *filtered = skiparray_filter(sa, 38 | keep_cb, (void *)&env); 39 | ASSERT(filtered != NULL); 40 | 41 | ASSERT_EQ_FMT(SKIPARRAY_FOLD_OK, 42 | skiparray_fold(SKIPARRAY_FOLD_LEFT, filtered, 43 | matches_parity, (void *)&env), "%d"); 44 | ASSERT(env.ok); 45 | 46 | skiparray_free(sa); 47 | skiparray_free(filtered); 48 | PASS(); 49 | } 50 | 51 | /* other misc higher-order functions */ 52 | SUITE(hof) { 53 | 
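/* A sketch of the skiparray_filter behavior exercised here: the filter
 * builds a brand-new skiparray (via the builder interface), appending
 * only the (key, value) pairs for which the predicate returns true;
 * the source skiparray is left untouched, and both must be freed
 * separately with skiparray_free(). */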
RUN_TESTp(filter_odds_or_evens, 0); 54 | RUN_TESTp(filter_odds_or_evens, 1); 55 | } 56 | -------------------------------------------------------------------------------- /test/test_skiparray_integration.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | 3 | struct symbol { 4 | uint8_t len; 5 | /* Note: this length is only hard-coded to make constructing stack-allocated 6 | * symbols for comparison easy, for testing purposes. */ 7 | char name[256]; 8 | }; 9 | 10 | static int cmp_symbol(const void *pa, const void *pb, void *unused) { 11 | (void)unused; 12 | 13 | const struct symbol *a = (const struct symbol *)pa; 14 | const struct symbol *b = (const struct symbol *)pb; 15 | 16 | if (a->len < b->len) { return -1; } 17 | if (a->len > b->len) { return 1; } 18 | return strncmp(a->name, b->name, a->len); 19 | } 20 | 21 | static void 22 | free_symbol(void *key, void *value, void *udata) { 23 | (void)udata; 24 | (void)value; 25 | free(key); 26 | } 27 | 28 | static struct symbol * 29 | mksymbol(const char *str) { 30 | size_t len = strlen(str); 31 | assert(len < UINT8_MAX); 32 | struct symbol *res = malloc(sizeof(*res)); 33 | if (res == NULL) { return NULL; } 34 | memcpy((void *)res->name, str, len); 35 | res->name[len] = '\0'; 36 | res->len = (uint8_t)len; 37 | return res; 38 | } 39 | 40 | TEST symbol_table(size_t limit) { 41 | struct skiparray *sa = NULL; 42 | struct skiparray_config cfg = { 43 | .cmp = cmp_symbol, 44 | .free = free_symbol, 45 | }; 46 | ASSERT_EQ_FMT(SKIPARRAY_NEW_OK, skiparray_new(&cfg, &sa), "%d"); 47 | 48 | enum skiparray_set_res sres; 49 | char buf[64]; 50 | 51 | for (size_t i = 0; i < limit; i++) { 52 | if (sizeof(buf) < (size_t)snprintf(buf, sizeof(buf), "key_%zu", i)) { 53 | FAILm("snprintf"); 54 | } 55 | 56 | struct symbol *sym = mksymbol(buf); 57 | 58 | sres = skiparray_set(sa, sym, (void *)1); 59 | ASSERT_EQ_FMT(SKIPARRAY_SET_BOUND, sres, "%d"); 60 | } 61 | 62 | for (size_t i = 0; i < limit; i++) { 63 | if (sizeof(buf) < (size_t)snprintf(buf, sizeof(buf), "key_%zu", i)) { 64 | FAILm("snprintf"); 65 | } 66 | 67 | struct symbol *sym = mksymbol(buf); 68 | 69 | const bool replace_previous_key = (i & 0x01); 70 | 71 | struct skiparray_pair pair; 72 | sres = skiparray_set_with_pair(sa, sym, (void *)2, 73 | replace_previous_key, &pair); 74 | ASSERT_EQ_FMT(SKIPARRAY_SET_REPLACED, sres, "%d"); 75 | 76 | ASSERT_EQ_FMT((size_t)1, (size_t)(uintptr_t)pair.value, "%zu"); 77 | if (replace_previous_key) { 78 | struct symbol *old_sym = pair.key; 79 | free(old_sym); 80 | } else { 81 | free(sym); 82 | } 83 | } 84 | 85 | for (size_t i = 0; i < limit; i++) { 86 | struct symbol sym; 87 | sym.len = snprintf(sym.name, sizeof(sym.name), "key_%zu", i); 88 | 89 | struct skiparray_pair p; 90 | ASSERT(skiparray_get_pair(sa, &sym, &p)); 91 | ASSERT(p.key != NULL); 92 | const struct symbol *used_symbol = (struct symbol *)p.key; 93 | 94 | ASSERT_EQ_FMT(sym.len, used_symbol->len, "%u"); 95 | GREATEST_ASSERT_STRN_EQ(sym.name, used_symbol->name, sym.len); 96 | ASSERT_EQ_FMT((size_t)2, (size_t)(uintptr_t)p.value, "%zu"); 97 | } 98 | 99 | skiparray_free(sa); 100 | PASS(); 101 | } 102 | 103 | SUITE(integration) { 104 | RUN_TESTp(symbol_table, 1000); 105 | RUN_TESTp(symbol_table, 100000); 106 | } 107 | -------------------------------------------------------------------------------- /test/test_skiparray_invariants.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | #include 
"skiparray_internal_types.h" 3 | 4 | #include 5 | #include 6 | 7 | #define LOG_FILE stdout 8 | #define LOG(LVL, ...) \ 9 | do { \ 10 | if (LVL <= verbosity) { \ 11 | fprintf(LOG_FILE, __VA_ARGS__); \ 12 | } \ 13 | } while(0) 14 | 15 | #define CHECK(ASSERT, ...) \ 16 | do { \ 17 | if (!(ASSERT)) { \ 18 | LOG(1, "FAILURE: " __VA_ARGS__); \ 19 | return false; \ 20 | } \ 21 | } while(0) 22 | 23 | int test_skiparray_cmp_intptr_t(const void *ka, 24 | const void *kb, void *udata) { 25 | (void)udata; 26 | intptr_t a = (intptr_t)ka; 27 | intptr_t b = (intptr_t)kb; 28 | return (a < b ? -1 : a > b ? 1 : 0); 29 | } 30 | 31 | struct skiparray * 32 | test_skiparray_sequential_build(size_t limit) { 33 | struct skiparray_builder *b = NULL; 34 | 35 | struct skiparray_config config = { 36 | .cmp = test_skiparray_cmp_intptr_t, 37 | }; 38 | 39 | enum skiparray_builder_new_res bnres = 40 | skiparray_builder_new(&config, false, &b); 41 | (void)bnres; 42 | 43 | for (uintptr_t i = 0; i < limit; i++) { 44 | uintptr_t k = i; 45 | enum skiparray_builder_append_res bares = 46 | skiparray_builder_append(b, (void *) k, 47 | (config.ignore_values ? NULL : (void *) k)); 48 | (void)bares; 49 | } 50 | 51 | struct skiparray *sa = NULL; 52 | skiparray_builder_finish(&b, &sa); 53 | return sa; 54 | } 55 | 56 | bool test_skiparray_invariants(struct skiparray *sa, int verbosity) { 57 | size_t counts[sa->max_level + 1]; 58 | memset(counts, 0, (1 + sa->max_level) * sizeof(size_t)); 59 | 60 | size_t counts_linked[sa->max_level + 1]; 61 | memset(counts_linked, 0, (1 + sa->max_level) * sizeof(size_t)); 62 | 63 | LOG(1, "==== Checking invariants\n"); 64 | 65 | /* There must always be at least one node on level 0. */ 66 | struct node *cur = sa->nodes[0]; 67 | CHECK(cur != NULL, "No node on level 0\n"); 68 | 69 | /* For each progressively taller node encountered, check that it is 70 | * the first node linked in sa->nodes[] up to its height. */ 71 | uint8_t checked_head_links_up_to = 0; 72 | 73 | size_t actual_pairs = 0; 74 | 75 | /* For every node linked on level 0: 76 | * 77 | * - All nodes except the last must have at least node_size/2 keys. 78 | * - No node can overflow its key buffer. 79 | * - The last key in a node must be less than the first key in the 80 | * next node, if there is one. 81 | * - Keys within the node are in ascending order. */ 82 | struct node *prev = NULL; 83 | while (cur) { 84 | assert(cur); 85 | struct node *next = cur->fwd[0]; 86 | LOG(2, "-- checking level 0: %p, height %u, %" PRIu16 " pairs, offset %" PRIu16 87 | " (prev %p, next: %p)\n", 88 | (void *)cur, cur->height, cur->count, cur->offset, 89 | (void *)prev, (void *)next); 90 | 91 | actual_pairs += cur->count; 92 | 93 | CHECK(cur->height <= sa->max_level, "node height exceeds max level: %u vs. 
%u", 94 | cur->height, sa->max_level); 95 | 96 | counts[cur->height - 1]++; 97 | 98 | for (size_t i = 1; i < cur->height; i++) { 99 | LOG(2, " -- fwd[%zu]: %p\n", i, (void *)cur->fwd[i]); 100 | } 101 | 102 | for (size_t i = 0; i < cur->count; i++) { 103 | LOG(3, "%zd: %p => %p\n", 104 | i, (void *)cur->keys[cur->offset + i], 105 | (void *)cur->values[cur->offset + i]); 106 | } 107 | 108 | if (cur->height > checked_head_links_up_to) { 109 | for (size_t i = checked_head_links_up_to + 1; i < cur->height; i++) { 110 | CHECK(sa->nodes[i] == cur, 111 | "Level %d node %p is not the first node linked on level %zd, instead %p is\n", 112 | cur->height, (void *)cur, i, (void *)sa->nodes[i]); 113 | } 114 | checked_head_links_up_to = cur->height; 115 | } 116 | 117 | if (prev) { 118 | CHECK(cur->back == prev, 119 | "Back pointer mismatch on %p: prev %p, cur->back %p\n", 120 | (void *)cur, (void *)prev, (void *)cur->back); 121 | if (cur->count > 0) { 122 | CHECK(sa->cmp(prev->keys[prev->offset + prev->count - 1], 123 | cur->keys[cur->offset], sa->udata) < 0, 124 | "Last key in prev node must be less than first key in cur node, prev %p, cur %p\n", 125 | (void *)prev, (void *)cur); 126 | } 127 | } else { 128 | CHECK(cur->back == NULL, "First node must have NULL backpointer\n"); 129 | } 130 | 131 | if (next == NULL) { /* last node */ 132 | if (cur != sa->nodes[0]) { 133 | CHECK(cur->count > 0, "Only root node can be empty\n"); 134 | } 135 | } else { /* not last node */ 136 | CHECK(cur->count >= sa->node_size / 2, 137 | "Node must be at least half full\n"); 138 | } 139 | 140 | CHECK(cur->count <= sa->node_size, "Cannot have excess keys\n"); 141 | CHECK(cur->offset + cur->count <= sa->node_size, 142 | "Must not overflow key buffer\n"); 143 | 144 | for (size_t i = 1; i < cur->count; i++) { 145 | CHECK(sa->cmp(cur->keys[cur->offset + i - 1], 146 | cur->keys[cur->offset + i], 147 | sa->udata) < 0, 148 | "Node keys must be in ascending order\n"); 149 | } 150 | 151 | prev = cur; 152 | cur = next; 153 | counts_linked[0]++; 154 | } 155 | 156 | /* For each level above zero: 157 | * 158 | * - The number of nodes linked must be <= the number of nodes on 159 | * the level immediately below it. 160 | * - The last key in a node must be less than the first key in 161 | * the next node on that level. 
*/ 162 | for (size_t li = 1; li < sa->height; li++) { 163 | cur = sa->nodes[li]; 164 | while (cur) { 165 | struct node *next = cur->fwd[li]; 166 | LOG(3, "-- counting level %zd: %p, level %u, %" PRIu16 167 | " pairs, offset %" PRIu16 " (next: %p)\n", 168 | li, (void *)cur, cur->height, cur->count, cur->offset, (void *)next); 169 | CHECK(next != cur, "Cycle detected\n"); 170 | 171 | if (next != NULL) { 172 | CHECK(next->height >= li, 173 | "Node with height %u should not be linked on level %zd\n", 174 | next->height, li); 175 | 176 | CHECK(sa->cmp(cur->keys[cur->offset + cur->count - 1], 177 | next->keys[next->offset], sa->udata) < 0, 178 | "Last key in node must be less than first key in next node\n"); 179 | } 180 | 181 | cur = next; 182 | counts_linked[li]++; 183 | } 184 | } 185 | 186 | for (size_t li = 1; li < sa->height; li++) { 187 | LOG(1, "-- level %zd: %zd nodes linked (level %zd: %zd nodes)\n", 188 | li, counts_linked[li], li, counts[li]); 189 | size_t level_gte = 0; 190 | for (size_t i = li; i <= sa->height; i++) { level_gte += counts[i]; } 191 | 192 | CHECK(counts_linked[li] == level_gte, 193 | "Count mismatch: %zd nodes with level >= %zd, but only %zd linked on level %zd\n", 194 | level_gte, li, counts_linked[li], li); 195 | } 196 | 197 | for (size_t li = 1; li <= sa->height; li++) { 198 | CHECK(counts_linked[li - 1] >= counts_linked[li], 199 | "Less nodes on level than level above it: %zd vs. %zd\n", 200 | counts_linked[li - 1], counts_linked[li]); 201 | } 202 | 203 | size_t count_pairs = skiparray_count(sa); 204 | CHECK(count_pairs == actual_pairs, 205 | "pairs don't match: expected %zu, got %zu\n", actual_pairs, count_pairs); 206 | 207 | struct skiparray_iter *iter = NULL; 208 | 209 | if (count_pairs == 0) { 210 | CHECK(SKIPARRAY_ITER_NEW_EMPTY == skiparray_iter_new(sa, &iter), "iter_new"); 211 | } else { 212 | CHECK(SKIPARRAY_ITER_NEW_OK == skiparray_iter_new(sa, &iter), "iter_new"); 213 | /* This should be optional -- a new iterator should start at the beginning. 
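 * (Seeking to the first binding explicitly keeps the checks below
 * independent of that assumption.)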
*/ 214 | skiparray_iter_seek_endpoint(iter, SKIPARRAY_ITER_SEEK_FIRST); 215 | 216 | size_t count_forward = 0; 217 | void *prev_key = NULL; 218 | for (;;) { 219 | void *key; 220 | void *value; 221 | skiparray_iter_get(iter, &key, &value); 222 | count_forward++; 223 | 224 | LOG(3, "%s: count_forward %zu, key %p, prev_key %p\n", 225 | __func__, count_forward, (void *)key, (void *)prev_key); 226 | if (count_forward > 1) { 227 | CHECK(test_skiparray_cmp_intptr_t(prev_key, key, NULL) < 0, 228 | "iteration order must be ascending, failed with keys %p and %p\n", 229 | (void *)prev_key, (void *)key); 230 | } 231 | 232 | enum skiparray_iter_step_res step_res = skiparray_iter_next(iter); 233 | if (step_res == SKIPARRAY_ITER_STEP_END) { break; } 234 | CHECK(step_res == SKIPARRAY_ITER_STEP_OK, "iteration stepped"); 235 | prev_key = key; 236 | } 237 | CHECK(count_forward == count_pairs, 238 | "forward iteration count mismatch, exp %zu, got %zu\n", 239 | count_pairs, count_forward); 240 | 241 | skiparray_iter_seek_endpoint(iter, SKIPARRAY_ITER_SEEK_LAST); 242 | 243 | size_t count_backward = 0; 244 | for (;;) { 245 | void *key; 246 | void *value; 247 | skiparray_iter_get(iter, &key, &value); 248 | count_backward++; 249 | 250 | if (count_backward > 1) { 251 | CHECK(test_skiparray_cmp_intptr_t(prev_key, key, NULL) > 0, 252 | "reverse iteration order must be descending, failed with keys %p and %p\n", 253 | (void *)prev_key, (void *)key); 254 | } 255 | 256 | enum skiparray_iter_step_res step_res = skiparray_iter_prev(iter); 257 | if (step_res == SKIPARRAY_ITER_STEP_END) { break; } 258 | CHECK(step_res == SKIPARRAY_ITER_STEP_OK, "iteration stepped"); 259 | prev_key = key; 260 | } 261 | CHECK(count_forward == count_pairs, 262 | "forward iteration count mismatch, exp %zu, got %zu\n", 263 | count_pairs, count_forward); 264 | 265 | skiparray_iter_free(iter); 266 | } 267 | 268 | LOG(1, "==== PASSED\n"); 269 | return true; 270 | } 271 | -------------------------------------------------------------------------------- /test/test_skiparray_prop.c: -------------------------------------------------------------------------------- 1 | #include "test_skiparray.h" 2 | 3 | #define LOG(...) \ 4 | do { \ 5 | if (m->env->verbosity > 0) { \ 6 | printf(__VA_ARGS__); \ 7 | } \ 8 | } while(0) 9 | 10 | #define LOG_FAIL(...) \ 11 | do { \ 12 | if (m->env->verbosity > 0) { \ 13 | printf(__VA_ARGS__); \ 14 | } \ 15 | return false; \ 16 | } while(0) 17 | 18 | static enum theft_trial_res 19 | prop_preserve_invariants(struct theft *t, void *arg1); 20 | static bool evaluate(struct op *op, struct model *m); 21 | static bool eval_get(struct op *op, struct model *m); 22 | static bool eval_set(struct op *op, struct model *m); 23 | static bool eval_pop_first(struct op *op, struct model *m); 24 | static bool eval_pop_last(struct op *op, struct model *m); 25 | static bool eval_forget(struct op *op, struct model *m); 26 | static bool eval_member(struct op *op, struct model *m); 27 | static bool eval_count(struct op *op, struct model *m); 28 | static bool eval_first(struct op *op, struct model *m); 29 | static bool eval_last(struct op *op, struct model *m); 30 | 31 | static bool validate(struct model *m); 32 | 33 | /* Bump up verbosity and re-run once on failure. 
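 * (On THEFT_TRIAL_FAIL the hook below bumps env->verbosity and returns
 * THEFT_HOOK_TRIAL_POST_REPEAT_ONCE, so theft repeats only the failing
 * trial with the extra logging enabled.)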
*/ 34 | static enum theft_hook_trial_post_res 35 | trial_post_cb(const struct theft_hook_trial_post_info *info, 36 | void *hook_env) { 37 | struct test_env *env = (struct test_env *)hook_env; 38 | assert(env->tag == 'T'); 39 | 40 | /* run failures once more with logging increased */ 41 | if (info->result == THEFT_TRIAL_FAIL) { 42 | env->verbosity = 1; 43 | return THEFT_HOOK_TRIAL_POST_REPEAT_ONCE; 44 | } 45 | 46 | theft_print_trial_result(&env->print_env, info); 47 | 48 | return THEFT_HOOK_TRIAL_POST_CONTINUE; 49 | } 50 | 51 | TEST preserve_invariants(size_t limit, theft_seed seed) { 52 | size_t trials = 1; 53 | if (seed == 0) { 54 | seed = theft_seed_of_time(); 55 | trials = 100; 56 | } 57 | 58 | struct test_env env = { 59 | .tag = 'T', 60 | .limit = limit, 61 | }; 62 | 63 | char name[64]; 64 | snprintf(name, sizeof(name), "%s(%zd)", __func__, limit); 65 | 66 | struct theft_run_config config = { 67 | .name = name, 68 | .prop1 = prop_preserve_invariants, 69 | .type_info = { &type_info_skiparray_operations }, 70 | .seed = seed, 71 | .trials = trials, 72 | 73 | .hooks = { 74 | .trial_pre = theft_hook_first_fail_halt, 75 | .trial_post = trial_post_cb, 76 | .env = &env, 77 | }, 78 | }; 79 | 80 | if (!getenv("NOFORK")) { 81 | config.fork.enable = true; 82 | } 83 | 84 | ASSERT_ENUM_EQ(THEFT_RUN_PASS, theft_run(&config), theft_run_res_str); 85 | PASS(); 86 | } 87 | 88 | static enum theft_trial_res 89 | prop_preserve_invariants(struct theft *t, void *arg1) { 90 | struct scenario *scen = arg1; 91 | struct test_env *env = theft_hook_get_env(t); 92 | 93 | const size_t m_alloc_size = sizeof(struct model) 94 | + scen->count * sizeof(struct pair); 95 | struct model *m = malloc(m_alloc_size); 96 | if (m == NULL) { return THEFT_TRIAL_ERROR; } 97 | memset(m, 0x00, m_alloc_size); 98 | m->tag = 'M'; 99 | m->env = env; 100 | 101 | struct skiparray_config sa_config = { 102 | .cmp = test_skiparray_cmp_intptr_t, 103 | .seed = scen->seed, 104 | .node_size = scen->node_size, 105 | }; 106 | struct skiparray *sa = NULL; 107 | enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); 108 | if (SKIPARRAY_NEW_OK != nres) { 109 | free(m); 110 | fprintf(stderr, "skiparray_new %d\n", nres); 111 | return THEFT_TRIAL_ERROR; 112 | } 113 | m->sa = sa; 114 | 115 | for (size_t i = 0; i < scen->count; i++) { 116 | struct op *op = &scen->ops[i]; 117 | LOG("== evaluate: %zd\n", i); 118 | if (!evaluate(op, m)) { goto fail; } 119 | 120 | if (!validate(m)) { goto fail; } 121 | } 122 | 123 | skiparray_free(sa); 124 | free(m); 125 | return THEFT_TRIAL_PASS; 126 | 127 | fail: 128 | free(m); 129 | return THEFT_TRIAL_FAIL; 130 | } 131 | 132 | static bool evaluate(struct op *op, struct model *m) { 133 | switch (op->t) { 134 | case OP_GET: 135 | if (!eval_get(op, m)) { return false; } 136 | break; 137 | case OP_SET: 138 | if (!eval_set(op, m)) { return false; } 139 | break; 140 | case OP_FORGET: 141 | if (!eval_forget(op, m)) { return false; } 142 | break; 143 | case OP_POP_FIRST: 144 | if (!eval_pop_first(op, m)) { return false; } 145 | break; 146 | case OP_POP_LAST: 147 | if (!eval_pop_last(op, m)) { return false; } 148 | break; 149 | case OP_MEMBER: 150 | if (!eval_member(op, m)) { return false; } 151 | break; 152 | case OP_COUNT: 153 | if (!eval_count(op, m)) { return false; } 154 | break; 155 | case OP_FIRST: 156 | if (!eval_first(op, m)) { return false; } 157 | break; 158 | case OP_LAST: 159 | if (!eval_last(op, m)) { return false; } 160 | break; 161 | 162 | default: 163 | case OP_TYPE_COUNT: 164 | assert(false); 165 | } 166 | 167 
| if (!test_skiparray_invariants(m->sa, m->env->verbosity)) { 168 | return false; 169 | } 170 | 171 | return true; 172 | } 173 | 174 | static bool check_if_known(struct model *m, intptr_t key, size_t *found_i) { 175 | skiparray_cmp_fun *cmp = test_skiparray_cmp_intptr_t; 176 | /* check if known in model */ 177 | for (size_t i = 0; i < m->pairs_used; i++) { 178 | if (0 == cmp(m->pairs[i].key, (void *)key, NULL)) { 179 | *found_i = i; 180 | return true; 181 | } 182 | } 183 | return false; 184 | } 185 | 186 | static bool eval_get(struct op *op, struct model *m) { 187 | bool found = false; 188 | size_t found_i = 0; 189 | 190 | found = check_if_known(m, op->u.get.key, &found_i); 191 | 192 | void *v = 0; 193 | bool res = skiparray_get(m->sa, (void *)op->u.get.key, &v); 194 | if (found) { 195 | if (!res) { 196 | LOG_FAIL("GET: lost binding\n"); 197 | } 198 | if (m->pairs[found_i].value != v) { 199 | LOG_FAIL("GET: wrong key -- exp %p, got %p\n", 200 | m->pairs[found_i].value, v); 201 | } 202 | } else { 203 | if (res) { 204 | LOG_FAIL("GET: found unexpected binding; %p -> %p\n", 205 | (void *)op->u.get.key, v); 206 | } 207 | } 208 | return true; 209 | } 210 | 211 | static bool eval_set(struct op *op, struct model *m) { 212 | size_t found_i = 0; 213 | bool found = check_if_known(m, op->u.get.key, &found_i); 214 | 215 | struct skiparray_pair pair; 216 | enum skiparray_set_res res = skiparray_set_with_pair(m->sa, 217 | (void *)op->u.set.key, (void *)op->u.set.value, 218 | true, &pair); 219 | 220 | if (found) { 221 | if (res != SKIPARRAY_SET_REPLACED) { 222 | LOG_FAIL("SET: expected res REPLACED (%d), got %d\n", 223 | SKIPARRAY_SET_REPLACED, res); 224 | } 225 | if (pair.value != m->pairs[found_i].value) { 226 | LOG_FAIL("SET: bad old value, expected %p, got %p\n", 227 | m->pairs[found_i].value, pair.value); 228 | } 229 | 230 | /* update model */ 231 | m->pairs[found_i].value = (void *)op->u.set.value; 232 | } else { 233 | if (res != SKIPARRAY_SET_BOUND) { 234 | LOG_FAIL("SET: expected res BOUND (%d), got %d\n", 235 | SKIPARRAY_SET_BOUND, res); 236 | } 237 | m->pairs[m->pairs_used].key = (void *)op->u.set.key; 238 | m->pairs[m->pairs_used].value = (void *)op->u.set.value; 239 | m->pairs_used++; /* update model */ 240 | } 241 | 242 | void *nvalue = 0; 243 | if (!skiparray_get(m->sa, (void *)op->u.set.key, &nvalue)) { 244 | LOG_FAIL("SET: bound value not found: %p\n", 245 | (void *)op->u.set.key); 246 | } 247 | if (nvalue != (void *)op->u.set.value) { 248 | LOG_FAIL("SET: get after read incorrect value: %p\n", nvalue); 249 | } 250 | 251 | return true; 252 | } 253 | 254 | static bool eval_forget(struct op *op, struct model *m) { 255 | size_t found_i = 0; 256 | bool found = check_if_known(m, op->u.get.key, &found_i); 257 | 258 | struct skiparray_pair pair; 259 | enum skiparray_forget_res res = skiparray_forget(m->sa, 260 | (void *)op->u.forget.key, &pair); 261 | 262 | if (found) { 263 | if (res != SKIPARRAY_FORGET_OK) { 264 | LOG_FAIL("FORGET: did not forget present value: %d\n", res); 265 | } 266 | 267 | if (pair.key != m->pairs[found_i].key) { 268 | LOG_FAIL("FORGET: removed unexpected key\n"); 269 | } 270 | if (pair.value != m->pairs[found_i].value) { 271 | LOG_FAIL("FORGET: removed unexpected value\n"); 272 | } 273 | 274 | /* remove from model */ 275 | if (m->pairs_used > 1 && (found_i != m->pairs_used - 1)) { 276 | m->pairs[found_i].key = m->pairs[m->pairs_used - 1].key; 277 | m->pairs[found_i].value = m->pairs[m->pairs_used - 1].value; 278 | } 279 | m->pairs_used--; 280 | } else { 281 | if (res != 
SKIPARRAY_FORGET_NOT_FOUND) { 282 | LOG_FAIL("FORGET: instead of NOT FOUND, got %d\n", res); 283 | } 284 | } 285 | 286 | void *nvalue = 0; 287 | if (skiparray_get(m->sa, (void *)op->u.set.key, &nvalue)) { 288 | return false; 289 | } 290 | 291 | return true; 292 | } 293 | 294 | static bool eval_pop_first(struct op *op, struct model *m) { 295 | (void)op; 296 | void *key = 0; 297 | void *value = 0; 298 | enum skiparray_pop_res res = skiparray_pop_first(m->sa, &key, &value); 299 | if (m->pairs_used == 0) { 300 | if (res != SKIPARRAY_POP_EMPTY) { 301 | LOG_FAIL("POP_FIRST: expected EMPTY\n"); 302 | } 303 | return true; 304 | } else { 305 | void *min_key = m->pairs[0].key; 306 | void *exp_value = m->pairs[0].value; 307 | size_t match_i = 0; 308 | for (size_t i = 1; i < m->pairs_used; i++) { 309 | if (m->pairs[i].key < min_key) { 310 | min_key = m->pairs[i].key; 311 | exp_value = m->pairs[i].value; 312 | match_i = i; 313 | } 314 | } 315 | if (key != min_key) { 316 | LOG_FAIL("POP_FIRST: not min key (exp %p, got %p)\n", 317 | min_key, key); 318 | } 319 | if (value != exp_value) { 320 | LOG_FAIL("POP_FIRST: not min key's value (%p), got %p\n", 321 | (void *)exp_value, (void *)value); 322 | } 323 | 324 | if (match_i < m->pairs_used - 1) { 325 | m->pairs[match_i].key = m->pairs[m->pairs_used - 1].key; 326 | m->pairs[match_i].value = m->pairs[m->pairs_used - 1].value; 327 | } 328 | m->pairs_used--; 329 | return true; 330 | } 331 | } 332 | 333 | static bool eval_pop_last(struct op *op, struct model *m) { 334 | (void)op; 335 | void *key = 0; 336 | void *value = 0; 337 | enum skiparray_pop_res res = skiparray_pop_last(m->sa, &key, &value); 338 | if (m->pairs_used == 0) { 339 | if (res != SKIPARRAY_POP_EMPTY) { 340 | LOG_FAIL("POP_LAST: expected EMPTY\n"); 341 | } 342 | return true; 343 | } else { 344 | void *max_key = m->pairs[0].key; 345 | void *exp_value = m->pairs[0].value; 346 | size_t match_i = 0; 347 | for (size_t i = 1; i < m->pairs_used; i++) { 348 | if (m->pairs[i].key > max_key) { 349 | max_key = m->pairs[i].key; 350 | exp_value = m->pairs[i].value; 351 | match_i = i; 352 | } 353 | } 354 | if (key != max_key) { 355 | LOG_FAIL("POP_LAST: not max key (exp %p, got %p)\n", 356 | max_key, key); 357 | } 358 | if (value != exp_value) { 359 | LOG_FAIL("POP_LAST: not max key's value (%p), got %p\n", 360 | (void *)exp_value, (void *)value); 361 | } 362 | 363 | if (match_i < m->pairs_used - 1) { 364 | m->pairs[match_i].key = m->pairs[m->pairs_used - 1].key; 365 | m->pairs[match_i].value = m->pairs[m->pairs_used - 1].value; 366 | } 367 | m->pairs_used--; 368 | return true; 369 | } 370 | } 371 | 372 | static bool eval_member(struct op *op, struct model *m) { 373 | size_t found_i = 0; 374 | bool found = check_if_known(m, op->u.member.key, &found_i); 375 | 376 | bool member = skiparray_member(m->sa, (void *)op->u.member.key); 377 | if (member != found) { 378 | LOG_FAIL("MEMBER: expected %d, got %d\n", found, member); 379 | } 380 | return true; 381 | } 382 | 383 | static bool eval_count(struct op *op, struct model *m) { 384 | (void)op; 385 | size_t count = skiparray_count(m->sa); 386 | if (count != m->pairs_used) { 387 | LOG_FAIL("COUNT: expected %zd, got %zd\n", m->pairs_used, count); 388 | return false; 389 | } 390 | return true; 391 | } 392 | 393 | static bool eval_first(struct op *op, struct model *m) { 394 | (void)op; 395 | void *key = 0; 396 | void *value = 0; 397 | enum skiparray_first_res res = skiparray_first(m->sa, &key, &value); 398 | if (m->pairs_used == 0) { 399 | if (res != SKIPARRAY_FIRST_EMPTY) 
400 |             LOG_FAIL("FIRST: expected EMPTY\n");
401 |         }
402 |         return true;
403 |     } else {
404 |         void *min_key = m->pairs[0].key;
405 |         void *exp_value = m->pairs[0].value;
406 |         for (size_t i = 1; i < m->pairs_used; i++) {
407 |             if (m->pairs[i].key < min_key) {
408 |                 min_key = m->pairs[i].key;
409 |                 exp_value = m->pairs[i].value;
410 |             }
411 |         }
412 |         if (key != min_key) {
413 |             LOG_FAIL("FIRST: not min key (exp %p, got %p)\n",
414 |                 min_key, key);
415 |         }
416 |         if (value != exp_value) {
417 |             LOG_FAIL("FIRST: not min key's value (%p), got %p\n",
418 |                 (void *)exp_value, (void *)value);
419 |         }
420 |         return true;
421 |     }
422 | }
423 | 
424 | static bool eval_last(struct op *op, struct model *m) {
425 |     (void)op;
426 |     void *key = 0;
427 |     void *value = 0;
428 |     enum skiparray_last_res res = skiparray_last(m->sa, &key, &value);
429 |     if (m->pairs_used == 0) {
430 |         if (res != SKIPARRAY_LAST_EMPTY) {
431 |             LOG_FAIL("LAST: expected EMPTY\n");
432 |         }
433 |         return true;
434 |     } else {
435 |         void *max_key = m->pairs[0].key;
436 |         void *exp_value = m->pairs[0].value;
437 |         for (size_t i = 1; i < m->pairs_used; i++) {
438 |             if (m->pairs[i].key > max_key) {
439 |                 max_key = m->pairs[i].key;
440 |                 exp_value = m->pairs[i].value;
441 |             }
442 |         }
443 |         if (key != max_key) {
444 |             LOG_FAIL("LAST: not max key (exp %p, got %p)\n",
445 |                 max_key, key);
446 |         }
447 |         if (value != exp_value) {
448 |             LOG_FAIL("LAST: not max key's value (%p), got %p\n",
449 |                 (void *)exp_value, (void *)value);
450 |         }
451 |         return true;
452 |     }
453 | }
454 | 
455 | static bool validate(struct model *m) {
456 |     for (size_t i = 0; i < m->pairs_used; i++) {
457 |         void *v = 0;
458 |         if (!skiparray_get(m->sa, m->pairs[i].key, &v)) {
459 |             if (m->env->verbosity > 0) {
460 |                 printf("VALIDATE: lost binding for %p\n", m->pairs[i].key);
461 |             }
462 |             return false;
463 |         }
464 | 
465 |         if (v != m->pairs[i].value) {
466 |             if (m->env->verbosity > 0) {
467 |                 printf("VALIDATE: wrong binding for %p -- "
468 |                     "expected %p, got %p\n",
469 |                     m->pairs[i].key, m->pairs[i].value, v);
470 |             }
471 |             return false;
472 |         }
473 |     }
474 | 
475 |     if (!test_skiparray_invariants(m->sa, m->env->verbosity)) {
476 |         return false;
477 |     }
478 | 
479 |     return true;
480 | }
481 | 
482 | TEST regression(void) {
483 | #define INIT(SEED, SIZE) \
484 |     struct skiparray_config sa_config = { \
485 |         .cmp = test_skiparray_cmp_intptr_t, \
486 |         .seed = SEED, \
487 |         .node_size = SIZE, \
488 |     }; \
489 |     struct skiparray *sa = NULL; \
490 |     enum skiparray_new_res nres = skiparray_new(&sa_config, &sa); \
491 |     ASSERT_EQ(SKIPARRAY_NEW_OK, nres)
492 | 
493 | #define GET(K, EXP) \
494 |     do { \
495 |         void *value = (void *)0; \
496 |         ASSERT(skiparray_get(sa, (void *)K, &value)); \
497 |         ASSERT_EQ_FMT((void *)EXP, value, "%p"); \
498 |     } while(0)
499 | 
500 | #define SET(K, V) \
501 |     do { \
502 |         enum skiparray_set_res res = skiparray_set(sa, \
503 |             (void *)K, (void *)V); \
504 |         if (res != SKIPARRAY_SET_REPLACED) { \
505 |             ASSERT_EQ_FMT(SKIPARRAY_SET_BOUND, res, "%d"); \
506 |         } \
507 |     } while(0)
508 | 
509 | #define FORGET(K) \
510 |     do { \
511 |         enum skiparray_forget_res res = skiparray_forget(sa, (void *)K, NULL); \
512 |         ASSERT_EQ_FMT(SKIPARRAY_FORGET_OK, res, "%d"); \
513 |     } while(0)
514 | 
515 | #define POP_FIRST(EXP_KEY, EXP_VALUE) \
516 |     do { \
517 |         void *key = 0; \
518 |         void *value = 0; \
519 |         enum skiparray_pop_res res = \
520 |             skiparray_pop_first(sa, &key, &value); \
521 |         ASSERT_EQ_FMT(SKIPARRAY_POP_OK, res, "%d"); \
522 |         ASSERT_EQ_FMT((void *)EXP_KEY, key, "%p"); \
523 |         ASSERT_EQ_FMT((void *)EXP_VALUE, value, "%p"); \
524 |     } while(0)
525 | 
526 | #define POP_LAST(EXP_KEY, EXP_VALUE) \
527 |     do { \
528 |         void *key = 0; \
529 |         void *value = 0; \
530 |         enum skiparray_pop_res res = \
531 |             skiparray_pop_last(sa, &key, &value); \
532 |         ASSERT_EQ_FMT(SKIPARRAY_POP_OK, res, "%d"); \
533 |         ASSERT_EQ_FMT((void *)EXP_KEY, key, "%p"); \
534 |         ASSERT_EQ_FMT((void *)EXP_VALUE, value, "%p"); \
535 |     } while(0)
536 | 
537 | #define CHECK() ASSERT(test_skiparray_invariants(sa, greatest_get_verbosity()))
538 | 
539 |     // 0x81358e447b66b10fLLU;
540 |     /* -- Counter-Example: preserve_invariants(100000) */
541 |     /* Trial 0, Seed 0x0484b9b6c567f9f0 */
542 |     /* Argument 0: */
543 |     /* #skiparray_operations{0x2137d20, count 7, seed 5, node_size 2} */
544 |     /* == 3: SET 5647 => 0 */
545 |     /* == 5: SET 18954 => 3 */
546 |     /* == 6: GET 14063 */
547 | 
548 |     // 0xc75a631f7c3da256
549 |     INIT(0, 3);
550 | 
551 |     SET(0, 0);
552 |     SET(7, 0);
553 |     SET(8, 0);
554 |     CHECK();
555 |     SET(3, 0);
556 |     CHECK();
557 |     GET(0, 0);
558 |     GET(7, 0);
559 |     GET(8, 0);
560 |     GET(3, 0);
561 | 
562 |     skiparray_free(sa);
563 |     PASS();
564 | }
565 | 
566 | SUITE(prop) {
567 |     RUN_TESTp(preserve_invariants, 10000000, 0x1de22a0cf5232d98LLU);
568 | 
569 |     for (size_t i = 10; i <= 10000000; i *= 100) {
570 |         if (greatest_get_verbosity() > 0) {
571 |             printf("## preserve_invariants %zd\n", i);
572 |         }
573 |         RUN_TESTp(preserve_invariants, i, 0);
574 |     }
575 | 
576 |     RUN_TEST(regression);
577 | }
578 | 
--------------------------------------------------------------------------------
/test/type_info_skiparray_operations.c:
--------------------------------------------------------------------------------
1 | #include "test_skiparray.h"
2 | 
3 | static enum theft_alloc_res
4 | op_alloc(struct theft *t, void *penv, void **output) {
5 |     (void)penv;
6 |     struct scenario *res = NULL;
7 | 
8 |     struct test_env *env = theft_hook_get_env(t);
9 |     assert(env->tag == 'T');
10 | 
11 |     const size_t max_count = theft_random_choice(t, (uint64_t)env->limit);
12 | 
13 |     size_t alloc_size = sizeof(*res) + max_count * sizeof(struct op);
14 |     res = malloc(alloc_size);
15 |     if (res == NULL) {
16 |         return THEFT_ALLOC_ERROR;
17 |     }
18 |     memset(res, 0x00, alloc_size);
19 | 
20 |     res->seed = theft_random_bits(t, 16);
21 |     res->node_size = 2 + theft_random_choice(t, 64);
22 | 
23 |     size_t count = 0;
24 |     for (size_t i = 0; i < max_count; i++) {
25 |         struct op *op = &res->ops[count];
26 |         if (0 == theft_random_bits(t, 1)) {
27 |             continue; /* shrink away */
28 |         }
29 | 
30 |         op->t = (enum op_type)theft_random_choice(t, OP_TYPE_COUNT);
31 |         assert(op->t < OP_TYPE_COUNT);
32 | 
33 |         switch (op->t) {
34 |         case OP_GET:
35 |             op->u.get.key = theft_random_choice(t, env->limit);
36 |             break;
37 |         case OP_SET:
38 |             op->u.set.key = theft_random_choice(t, env->limit);
39 |             op->u.set.value = theft_random_bits(t, 8);
40 |             break;
41 |         case OP_FORGET:
42 |             op->u.forget.key = theft_random_choice(t, env->limit);
43 |             break;
44 |         case OP_POP_FIRST:
45 |         case OP_POP_LAST:
46 |             break;
47 | 
48 |         case OP_MEMBER:
49 |             op->u.member.key = theft_random_choice(t, env->limit);
50 |             break;
51 |         case OP_COUNT:
52 |         case OP_FIRST:
53 |         case OP_LAST:
54 |             break;
55 | 
56 |         default:
57 |         case OP_TYPE_COUNT:
58 |             assert(false);
59 |         }
60 |         count++;
61 |     }
62 |     res->count = count;
63 | 
64 |     *output = res;
65 |     return THEFT_ALLOC_OK;
66 | }
67 | 
68 | static void
69 | op_print(FILE *f, const void *instance, void *env) {
70 |     (void)env;
71 |     const struct scenario *scen = (const struct scenario *)instance;
72 |     fprintf(f, "#skiparray_operations{%p, count %zd, seed %" PRIu32
73 |         ", node_size %" PRIu16 "}\n",
74 |         (void *)scen, scen->count, scen->seed, scen->node_size);
75 |     for (size_t i = 0; i < scen->count; i++) {
76 |         const struct op *op = &scen->ops[i];
77 |         switch (op->t) {
78 |         case OP_GET:
79 |             fprintf(f, "== %zd: GET %" PRIdPTR "\n",
80 |                 i, op->u.get.key);
81 |             break;
82 |         case OP_SET:
83 |             fprintf(f, "== %zd: SET %" PRIdPTR " => %" PRIdPTR "\n",
84 |                 i, op->u.set.key, op->u.set.value);
85 |             break;
86 |         case OP_FORGET:
87 |             fprintf(f, "== %zd: FORGET %" PRIdPTR "\n",
88 |                 i, op->u.forget.key);
89 |             break;
90 |         case OP_MEMBER:
91 |             fprintf(f, "== %zd: MEMBER %" PRIdPTR "\n",
92 |                 i, op->u.member.key);
93 |             break;
94 |         case OP_COUNT:
95 |             fprintf(f, "== %zd: COUNT\n", i);
96 |             break;
97 |         case OP_POP_FIRST:
98 |             fprintf(f, "== %zd: POP_FIRST\n", i);
99 |             break;
100 |         case OP_POP_LAST:
101 |             fprintf(f, "== %zd: POP_LAST\n", i);
102 |             break;
103 |         case OP_FIRST:
104 |             fprintf(f, "== %zd: FIRST\n", i);
105 |             break;
106 |         case OP_LAST:
107 |             fprintf(f, "== %zd: LAST\n", i);
108 |             break;
109 | 
110 |         default:
111 |             assert(false);
112 |         }
113 |     }
114 | }
115 | 
116 | const struct theft_type_info type_info_skiparray_operations = {
117 |     .alloc = op_alloc,
118 |     .free = theft_generic_free_cb,
119 |     .print = op_print,
120 | 
121 |     .autoshrink_config = {
122 |         .enable = true,
123 |     },
124 | };
125 | 
--------------------------------------------------------------------------------
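The scenario generator above is consumed by the `preserve_invariants` property in `test/test_skiparray_prop.c`, which replays each generated op against a skiparray and the simple model whose evaluators appear earlier. As a rough illustration of how `type_info_skiparray_operations` plugs into a theft run, here is a minimal standalone sketch (not a file in this repository). It assumes theft v0.4's `theft_run`/`struct theft_run_config` layout, that `test_skiparray.h` declares this type_info and a `struct test_env` with at least the `tag` and `limit` fields read by `op_alloc()`, and the property body is only a stub rather than the repository's real evaluator.

/* Hypothetical usage sketch -- not part of the repository. */
#include "test_skiparray.h"

/* Stub property: checks a fact guaranteed by op_alloc() above
 * (node_size is always at least 2). A real property would build a
 * skiparray, replay scen->ops[], and check the model and invariants. */
static enum theft_trial_res
prop_ops_stub(struct theft *t, void *arg1) {
    (void)t;
    const struct scenario *scen = arg1;
    return scen->node_size >= 2 ? THEFT_TRIAL_PASS : THEFT_TRIAL_FAIL;
}

static bool run_ops_property(size_t limit, theft_seed seed) {
    struct test_env env = { .tag = 'T', .limit = limit };
    struct theft_run_config cfg = {
        .name = "ops_stub",
        .prop1 = prop_ops_stub,
        .type_info = { &type_info_skiparray_operations },
        .trials = 1000,
        .seed = seed,
        .hooks = { .env = &env }, /* op_alloc() reads this via theft_hook_get_env() */
    };
    return theft_run(&cfg) == THEFT_RUN_PASS;
}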