├── LICENSE ├── README.md ├── test │   ├── Makefile │   ├── umm_malloc_cfg.h │   └── umm_malloc_test.c ├── umm_malloc.c ├── umm_malloc.h └── umm_malloc_cfg_example.h /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 Ralph Hempel 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | 23 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # umm_malloc - Memory Manager For Small(ish) Microprocessors 2 | 3 | This is a memory management library specifically designed to work with the 4 | ARM7 embedded processor, but it should work on many other 32 bit processors, 5 | as well as 16 and 8 bit devices. 
6 | 7 | You can even use it on a bigger project where a single process might want 8 | to manage a large number of smaller objects, and using the system heap 9 | might get expensive. 10 | 11 | ## Acknowledgements 12 | 13 | Joerg Wunsch and the avr-libc provided the first malloc() implementation 14 | that I examined in detail. 15 | 16 | http://www.nongnu.org/avr-libc 17 | 18 | Doug Lea's paper on malloc() was another excellent reference and provides 19 | a lot of detail on advanced memory management techniques such as binning. 20 | 21 | http://g.oswego.edu/dl/html/malloc.html 22 | 23 | Bill Dittman provided excellent suggestions, including macros to support 24 | using these functions in critical sections, and for optimizing realloc() 25 | further by checking to see if the previous block was free and could be 26 | used for the new block size. This can help to reduce heap fragmentation 27 | significantly. 28 | 29 | Yaniv Ankin suggested that a way to dump the current heap condition 30 | might be useful. I combined this with an idea from plarroy to also 31 | allow checking a free pointer to make sure it's valid. 32 | 33 | ## Background 34 | 35 | The memory manager assumes the following things: 36 | 37 | 1. The standard POSIX compliant malloc/realloc/free semantics are used 38 | 1. All memory used by the manager is allocated at link time, it is aligned 39 | on a 32 bit boundary, it is contiguous, and its extent (start and end 40 | address) is filled in by the linker. 41 | 1. All memory used by the manager is initialized to 0 as part of the 42 | runtime startup routine. No other initialization is required. 43 | 44 | The fastest linked list implementations use doubly linked lists so that 45 | it's possible to insert and delete blocks in constant time. This memory 46 | manager keeps track of both free and used blocks in a doubly linked list. 
47 | 48 | Most memory managers use some kind of list structure made up of pointers 49 | to keep track of used - and sometimes free - blocks of memory. In an 50 | embedded system, this can get pretty expensive as each pointer can use 51 | up to 32 bits. 52 | 53 | In most embedded systems there is no need for managing large blocks 54 | of memory dynamically, so a full 32 bit pointer based data structure 55 | for the free and used block lists is wasteful. A block of memory on 56 | the free list would use 16 bytes just for the pointers! 57 | 58 | This memory management library sees the malloc heap as an array of blocks, 59 | and uses block numbers to keep track of locations. The block numbers are 60 | 15 bits - which allows for up to 32767 blocks of memory. The high order 61 | bit marks a block as being either free or in use, which will be explained 62 | later. 63 | 64 | The result is that a block of memory on the free list uses just 8 bytes 65 | instead of 16. 66 | 67 | In fact, we go even one step further when we realize that the free block 68 | index values are available to store data when the block is allocated. 69 | 70 | The overhead of an allocated block is therefore just 4 bytes. 71 | 72 | Each memory block holds 8 bytes, and there are up to 32767 blocks 73 | available, for about 256K of heap space. If that's not enough, you 74 | can always add more data bytes to the body of the memory block 75 | at the expense of free block size overhead. 76 | 77 | There are a lot of little features and optimizations in this memory 78 | management system that make it especially suited to small embedded systems, but 79 | the best way to appreciate them is to review the data structures and 80 | algorithms used, so let's get started. 
81 | 82 | ## Detailed Description 83 | 84 | We have a general notation for a block that we'll use to describe the 85 | different scenarios that our memory allocation algorithm must deal with: 86 | 87 | ``` 88 | +----+----+----+----+ 89 | c |* n | p | nf | pf | 90 | +----+----+----+----+ 91 | ``` 92 | 93 | Where: 94 | 95 | - c is the index of this block 96 | - * is the indicator for a free block 97 | - n is the index of the next block in the heap 98 | - p is the index of the previous block in the heap 99 | - nf is the index of the next block in the free list 100 | - pf is the index of the previous block in the free list 101 | 102 | The fact that we have forward and backward links in the block descriptors 103 | means that malloc() and free() operations can be very fast. It's easy 104 | to either allocate the whole free item to a new block or to allocate part 105 | of the free item and leave the rest on the free list without traversing 106 | the list from front to back first. 107 | 108 | The entire block of memory used by the heap is assumed to be initialized 109 | to 0. The very first block in the heap is special - it's the head of the 110 | free block list. It is never assimilated with a free block (more on this 111 | later). 112 | 113 | Once a block has been allocated to the application, it looks like this: 114 | 115 | ``` 116 | +----+----+----+----+ 117 | c | n | p | ... | 118 | +----+----+----+----+ 119 | ``` 120 | 121 | Where: 122 | 123 | - c is the index of this block 124 | - n is the index of the next block in the heap 125 | - p is the index of the previous block in the heap 126 | 127 | Note that the free list information is gone, because it's now being used to 128 | store actual data for the application. It would have been nice to store 129 | the next and previous free list indexes as well, but that would be a waste 130 | of space. If we had even 500 items in use, that would be 2,000 bytes for 131 | free list information. 
We simply can't afford to waste that much. 132 | 133 | The address of the `...` area is what is returned to the application 134 | for data storage. 135 | 136 | The following sections describe the scenarios encountered during the 137 | operation of the library. There are two additional notation conventions: 138 | 139 | `??` inside a pointer block means that the data is irrelevant. We don't care 140 | about it because we don't read or modify it in the scenario being 141 | described. 142 | 143 | `...` between memory blocks indicates zero or more additional blocks are 144 | allocated for use by the upper block. 145 | 146 | And while we're talking about "upper" and "lower" blocks, we should make 147 | a comment about addresses. In the diagrams, a block higher up in the 148 | picture is at a lower address. And the blocks grow downwards: their 149 | block index increases, as does their physical address. 150 | 151 | Finally, there's one very important characteristic of the individual 152 | blocks that make up the heap - there can never be two consecutive free 153 | memory blocks, but there can be consecutive used memory blocks. 154 | 155 | The reason is that we always want to have a short free list of the 156 | largest possible block sizes. By always assimilating a newly freed block 157 | with adjacent free blocks, we maximize the size of each free memory area. 158 | 159 | ### Operation of malloc right after system startup 160 | 161 | As part of the system startup code, all of the heap has been cleared. 162 | 163 | During the very first malloc operation, we start traversing the free list 164 | starting at index 0. The index of the next free block is 0, which means 165 | we're at the end of the list! 166 | 167 | At this point, malloc has a special test that checks if the current 168 | block index is 0, which it is. This special case initializes the free 169 | list to point at block index 1. 
170 | 171 | ``` 172 | BEFORE AFTER 173 | 174 | +----+----+----+----+ +----+----+----+----+ 175 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 176 | +----+----+----+----+ +----+----+----+----+ 177 | +----+----+----+----+ 178 | 1 | 0 | 0 | 0 | 0 | 179 | +----+----+----+----+ 180 | ``` 181 | 182 | The heap is now ready to complete the first malloc operation. 183 | 184 | 185 | ### Operation of malloc when we have reached the end of the free list and there is no block large enough to accommodate the request 186 | 187 | 188 | This happens at the very first malloc operation, or any time the free 189 | list is traversed and no free block large enough for the request is 190 | found. 191 | 192 | The current block pointer will be at the end of the free list, and we 193 | know we're at the end of the list because the nf index is 0, like this: 194 | 195 | ``` 196 | BEFORE AFTER 197 | 198 | +----+----+----+----+ +----+----+----+----+ 199 | pf |*?? | ?? | cf | ?? | pf |*?? | ?? | lf | ?? | 200 | +----+----+----+----+ +----+----+----+----+ 201 | ... ... 202 | +----+----+----+----+ +----+----+----+----+ 203 | p | cf | ?? | ... | p | cf | ?? | ... | 204 | +----+----+----+----+ +----+----+----+----+ 205 | +----+----+----+----+ +----+----+----+----+ 206 | cf | 0 | p | 0 | pf | c | lf | p | ... | 207 | +----+----+----+----+ +----+----+----+----+ 208 | +----+----+----+----+ 209 | lf | 0 | cf | 0 | pf | 210 | +----+----+----+----+ 211 | ``` 212 | 213 | As we walk the free list looking for a block of size b or larger, we get 214 | to cf, which is the last item in the free list. We know this because the 215 | next index is 0. 216 | 217 | So we're going to turn cf into the new block of memory, and then create 218 | a new block that represents the last free entry (lf) and adjust the prev 219 | index of lf to point at the block we just created. We also need to adjust 220 | the next index of the new block (c) to point to the last free block. 
221 | 222 | Note that the next free index of the pf block must point to the new lf 223 | because cf is no longer a free block! 224 | 225 | ### Operation of malloc when we have found a block (cf) that will fit the current request of b units exactly 226 | 227 | This one is pretty easy, just clear the free list bit in the current 228 | block and unhook it from the free list. 229 | 230 | ``` 231 | BEFORE AFTER 232 | 233 | +----+----+----+----+ +----+----+----+----+ 234 | pf |*?? | ?? | cf | ?? | pf |*?? | ?? | nf | ?? | 235 | +----+----+----+----+ +----+----+----+----+ 236 | ... ... 237 | +----+----+----+----+ +----+----+----+----+ 238 | p | cf | ?? | ... | p | cf | ?? | ... | 239 | +----+----+----+----+ +----+----+----+----+ 240 | +----+----+----+----+ +----+----+----+----+ Clear the free 241 | cf |* n | p | nf | pf | cf | n | p | .. | list bit here 242 | +----+----+----+----+ +----+----+----+----+ 243 | +----+----+----+----+ +----+----+----+----+ 244 | n | ?? | cf | ... | n | ?? | cf | ... | 245 | +----+----+----+----+ +----+----+----+----+ 246 | ... ... 247 | +----+----+----+----+ +----+----+----+----+ 248 | nf |*?? | ?? | ?? | cf | nf | ?? | ?? | ?? | pf | 249 | +----+----+----+----+ +----+----+----+----+ 250 | ``` 251 | 252 | Unhooking from the free list is accomplished by adjusting the next and 253 | prev free list index values in the pf and nf blocks. 254 | 255 | ### Operation of malloc when we have found a block that will fit the current request of b units with some left over 256 | 257 | We'll allocate the new block at the END of the current free block so we 258 | don't have to change ANY free list pointers. 259 | 260 | ``` 261 | BEFORE AFTER 262 | 263 | +----+----+----+----+ +----+----+----+----+ 264 | pf |*?? | ?? | cf | ?? | pf |*?? | ?? | cf | ?? | 265 | +----+----+----+----+ +----+----+----+----+ 266 | ... ... 267 | +----+----+----+----+ +----+----+----+----+ 268 | p | cf | ?? | ... | p | cf | ?? | ... 
| 269 | +----+----+----+----+ +----+----+----+----+ 270 | +----+----+----+----+ +----+----+----+----+ 271 | cf |* n | p | nf | pf | cf |* c | p | nf | pf | 272 | +----+----+----+----+ +----+----+----+----+ 273 | +----+----+----+----+ This is the new 274 | c | n | cf | .. | block at cf+b 275 | +----+----+----+----+ 276 | +----+----+----+----+ +----+----+----+----+ 277 | n | ?? | cf | ... | n | ?? | c | ... | 278 | +----+----+----+----+ +----+----+----+----+ 279 | ... ... 280 | +----+----+----+----+ +----+----+----+----+ 281 | nf |*?? | ?? | ?? | cf | nf | ?? | ?? | ?? | pf | 282 | +----+----+----+----+ +----+----+----+----+ 283 | ``` 284 | 285 | This one is pretty easy too, except we don't need to mess with the 286 | free list indexes at all because we'll allocate the new block at the 287 | end of the current free block. We do, however, have to adjust the 288 | indexes in cf, c, and n. 289 | 290 | That covers the initialization and all possible malloc scenarios, so now 291 | we need to cover the free operation possibilities... 292 | 293 | ### Free Scenarios 294 | 295 | The operation of free depends on the position of the current block being 296 | freed relative to free list items immediately above or below it. The code 297 | works like this: 298 | 299 | ``` 300 | if next block is free 301 | assimilate with next block already on free list 302 | if prev block is free 303 | assimilate with prev block already on free list 304 | else 305 | put current block at head of free list 306 | ``` 307 | 308 | Step 1 of the free operation checks if the next block is free, and if it 309 | is then insert this block into the free list and assimilate the next block 310 | with this one. 311 | 312 | Note that c is the block we are freeing up, cf is the free block that 313 | follows it. 314 | 315 | ``` 316 | BEFORE AFTER 317 | 318 | +----+----+----+----+ +----+----+----+----+ 319 | pf |*?? | ?? | cf | ?? | pf |*?? | ?? | nf | ?? | 320 | +----+----+----+----+ +----+----+----+----+ 321 | ... 
... 322 | +----+----+----+----+ +----+----+----+----+ 323 | p | c | ?? | ... | p | c | ?? | ... | 324 | +----+----+----+----+ +----+----+----+----+ 325 | +----+----+----+----+ +----+----+----+----+ This block is 326 | c | cf | p | ... | c | nn | p | ... | disconnected 327 | +----+----+----+----+ +----+----+----+----+ from free list, 328 | +----+----+----+----+ assimilated with 329 | cf |*nn | c | nf | pf | the next, and 330 | +----+----+----+----+ ready for step 2 331 | +----+----+----+----+ +----+----+----+----+ 332 | nn | ?? | cf | ?? | ?? | nn | ?? | c | ... | 333 | +----+----+----+----+ +----+----+----+----+ 334 | ... ... 335 | +----+----+----+----+ +----+----+----+----+ 336 | nf |*?? | ?? | ?? | cf | nf |*?? | ?? | ?? | pf | 337 | +----+----+----+----+ +----+----+----+----+ 338 | ``` 339 | 340 | Take special note that the newly assimilated block (c) is completely 341 | disconnected from the free list, and it does not have its free list 342 | bit set. This is important as we move on to step 2 of the procedure... 343 | 344 | Step 2 of the free operation checks if the prev block is free, and if it 345 | is then assimilate it with this block. 346 | 347 | Note that c is the block we are freeing up, pf is the free block that 348 | precedes it. 349 | 350 | ``` 351 | BEFORE AFTER 352 | 353 | +----+----+----+----+ +----+----+----+----+ This block has 354 | pf |* c | ?? | nf | ?? | pf |* n | ?? | nf | ?? | assimilated the 355 | +----+----+----+----+ +----+----+----+----+ current block 356 | +----+----+----+----+ 357 | c | n | pf | ... | 358 | +----+----+----+----+ 359 | +----+----+----+----+ +----+----+----+----+ 360 | n | ?? | c | ... | n | ?? | pf | ?? | ?? | 361 | +----+----+----+----+ +----+----+----+----+ 362 | ... ... 363 | +----+----+----+----+ +----+----+----+----+ 364 | nf |*?? | ?? | ?? | pf | nf |*?? | ?? | ?? 
| pf | 365 | +----+----+----+----+ +----+----+----+----+ 366 | ``` 367 | 368 | Nothing magic here, except that when we're done, the current block (c) 369 | is gone since it's been absorbed into the previous free block. Note that 370 | the previous step guarantees that the next block (n) is not free. 371 | 372 | Step 3 of the free operation only runs if the previous block is not free. 373 | It just inserts the current block at the head of the free list. 374 | 375 | Remember, 0 is always the first block in the memory heap, and it's always 376 | the head of the free list! 377 | 378 | ``` 379 | BEFORE AFTER 380 | 381 | +----+----+----+----+ +----+----+----+----+ 382 | 0 | ?? | ?? | nf | 0 | 0 | ?? | ?? | c | 0 | 383 | +----+----+----+----+ +----+----+----+----+ 384 | ... ... 385 | +----+----+----+----+ +----+----+----+----+ 386 | p | c | ?? | ... | p | c | ?? | ... | 387 | +----+----+----+----+ +----+----+----+----+ 388 | +----+----+----+----+ +----+----+----+----+ 389 | c | n | p | .. | c |* n | p | nf | 0 | 390 | +----+----+----+----+ +----+----+----+----+ 391 | +----+----+----+----+ +----+----+----+----+ 392 | n | ?? | c | ... | n | ?? | c | ... | 393 | +----+----+----+----+ +----+----+----+----+ 394 | ... ... 395 | +----+----+----+----+ +----+----+----+----+ 396 | nf |*?? | ?? | ?? | 0 | nf |*?? | ?? | ?? | c | 397 | +----+----+----+----+ +----+----+----+----+ 398 | ``` 399 | 400 | Again, nothing spectacular here, we're simply adjusting a few pointers 401 | to make the most recently freed block the first item in the free list. 402 | 403 | That's because finding the previous free block would mean a reverse 404 | traversal of blocks until we found a free one, and it's just easier to 405 | put it at the head of the list. No traversal is needed. 406 | 407 | ### Realloc Scenarios 408 | 409 | Finally, we can cover realloc, which has the following basic operation. 410 | 411 | The first thing we do is assimilate up with the next free block of 412 | memory if possible. 
This step might help if we're resizing to a bigger 413 | block of memory. It also helps if we're downsizing and creating a new 414 | free block with the leftover memory. 415 | 416 | First we check to see if the next block is free, and we assimilate it 417 | to this block if it is. If the previous block is also free, and if 418 | combining it with the current block would satisfy the request, then we 419 | assimilate with that block and move the current data down to the new 420 | location. 421 | 422 | Assimilating with the previous free block and moving the data works 423 | like this: 424 | 425 | ``` 426 | BEFORE AFTER 427 | 428 | +----+----+----+----+ +----+----+----+----+ 429 | pf |*?? | ?? | cf | ?? | pf |*?? | ?? | nf | ?? | 430 | +----+----+----+----+ +----+----+----+----+ 431 | ... ... 432 | +----+----+----+----+ +----+----+----+----+ 433 | cf |* c | ?? | nf | pf | c | n | ?? | ... | The data gets 434 | +----+----+----+----+ +----+----+----+----+ moved from c to 435 | +----+----+----+----+ the new data area 436 | c | n | cf | ... | in cf, then c is 437 | +----+----+----+----+ adjusted to cf 438 | +----+----+----+----+ +----+----+----+----+ 439 | n | ?? | c | ... | n | ?? | c | ?? | ?? | 440 | +----+----+----+----+ +----+----+----+----+ 441 | ... ... 442 | +----+----+----+----+ +----+----+----+----+ 443 | nf |*?? | ?? | ?? | cf | nf |*?? | ?? | ?? | pf | 444 | +----+----+----+----+ +----+----+----+----+ 445 | ``` 446 | 447 | Once we've done that, there are three scenarios to consider: 448 | 449 | 1. The current block size is exactly the right size, so no more work is 450 | needed. 451 | 452 | 2. The current block is bigger than the new required size, so carve off 453 | the excess and add it to the free list. 454 | 455 | 3. The current block is still smaller than the required size, so malloc 456 | a new block of the correct size and copy the current data into the new 457 | block before freeing the current block. 
458 | 459 | The only one of these scenarios that involves an operation that has not 460 | yet been described is the second one, and it's shown below: 461 | 462 | ``` 463 | BEFORE AFTER 464 | 465 | +----+----+----+----+ +----+----+----+----+ 466 | p | c | ?? | ... | p | c | ?? | ... | 467 | +----+----+----+----+ +----+----+----+----+ 468 | +----+----+----+----+ +----+----+----+----+ 469 | c | n | p | ... | c | s | p | ... | 470 | +----+----+----+----+ +----+----+----+----+ 471 | +----+----+----+----+ This is the 472 | s | n | c | .. | new block at 473 | +----+----+----+----+ c+blocks 474 | +----+----+----+----+ +----+----+----+----+ 475 | n | ?? | c | ... | n | ?? | s | ... | 476 | +----+----+----+----+ +----+----+----+----+ 477 | ``` 478 | 479 | Then we call free() with the address of the data portion of the new 480 | block (s), which adds it to the free list. 481 | -------------------------------------------------------------------------------- /test/Makefile: -------------------------------------------------------------------------------- 1 | 2 | all: test test_poison test_integrity test_poison_integrity 3 | 4 | INCDIRS = -I.. -I. 
5 | 6 | test: 7 | @echo NORMAL 8 | gcc --std=c99 $(CFLAGS) $(INCDIRS) -g3 -m32 \ 9 | ../umm_malloc.c umm_malloc_test.c \ 10 | -o test_umm 11 | ./test_umm 12 | 13 | test_poison: 14 | @echo POISON 15 | gcc --std=c99 $(CFLAGS) $(INCDIRS) -DUMM_POISON -g3 -m32 \ 16 | ../umm_malloc.c umm_malloc_test.c \ 17 | -o test_umm 18 | ./test_umm 19 | 20 | test_integrity: 21 | @echo INTEGRITY 22 | gcc --std=c99 $(CFLAGS) $(INCDIRS) -DUMM_INTEGRITY_CHECK -g3 -m32 \ 23 | ../umm_malloc.c umm_malloc_test.c \ 24 | -o test_umm 25 | ./test_umm 26 | 27 | test_poison_integrity: 28 | @echo POISON + INTEGRITY 29 | gcc --std=c99 $(CFLAGS) $(INCDIRS) -DUMM_POISON -DUMM_INTEGRITY_CHECK -g3 -m32 \ 30 | ../umm_malloc.c umm_malloc_test.c \ 31 | -o test_umm 32 | ./test_umm 33 | 34 | -------------------------------------------------------------------------------- /test/umm_malloc_cfg.h: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2016 Cesanta Software Limited 3 | * All rights reserved 4 | */ 5 | 6 | /* 7 | * Smartjs-specific configuration for umm_malloc 8 | */ 9 | 10 | #ifndef _UMM_MALLOC_CFG_H 11 | #define _UMM_MALLOC_CFG_H 12 | 13 | /* 14 | * There are a number of defines you can set at compile time that affect how 15 | * the memory allocator will operate. 16 | * You can set them in your config file umm_malloc_cfg.h. 17 | * In GNU C, you can also set these compile time defines like this: 18 | * 19 | * -D UMM_TEST_MAIN 20 | * 21 | * Set this if you want to compile in the test suite at the end of this file. 
22 | * 23 | * If you leave this define unset, then you might want to set another one: 24 | * 25 | * -D UMM_REDEFINE_MEM_FUNCTIONS 26 | * 27 | * If you leave this define unset, then the function names are left alone as 28 | * umm_malloc() umm_free() and umm_realloc() so that they cannot be confused 29 | * with the C runtime functions malloc() free() and realloc() 30 | * 31 | * If you do set this define, then the function names become malloc() 32 | * free() and realloc() so that they can be used as the C runtime functions 33 | * in an embedded environment. 34 | * 35 | * -D UMM_BEST_FIT (default) 36 | * 37 | * Set this if you want to use a best-fit algorithm for allocating new 38 | * blocks 39 | * 40 | * -D UMM_FIRST_FIT 41 | * 42 | * Set this if you want to use a first-fit algorithm for allocating new 43 | * blocks 44 | * 45 | * -D UMM_DBG_LOG_LEVEL=n 46 | * 47 | * Set n to a value from 0 to 6 depending on how verbose you want the debug 48 | * log to be 49 | * 50 | * ---------------------------------------------------------------------------- 51 | * 52 | * Support for this library in a multitasking environment is provided when 53 | * you add bodies to the UMM_CRITICAL_ENTRY and UMM_CRITICAL_EXIT macros 54 | * (see below) 55 | * 56 | * ---------------------------------------------------------------------------- 57 | */ 58 | 59 | extern char test_umm_heap[]; 60 | extern void umm_corruption(void); 61 | 62 | /* Start and end addresses of the heap */ 63 | #define UMM_MALLOC_CFG__HEAP_ADDR (test_umm_heap) 64 | #define UMM_MALLOC_CFG__HEAP_SIZE 0x10000 65 | 66 | /* A couple of macros to make packing structures less compiler dependent */ 67 | 68 | #define UMM_H_ATTPACKPRE 69 | #define UMM_H_ATTPACKSUF __attribute__((__packed__)) 70 | 71 | /* 72 | * Callback that is called whenever a heap corruption is detected 73 | */ 74 | #define UMM_HEAP_CORRUPTION_CB() umm_corruption(); 75 | 76 | /* 77 | * A couple of macros to make it easier to protect the memory allocator 78 | * in a 
multitasking system. You should set these macros up to use whatever 79 | * your system uses for this purpose. You can disable interrupts entirely, or 80 | * just disable task switching - it's up to you. 81 | * 82 | * NOTE WELL that these macros MUST be allowed to nest, because umm_free() is 83 | * called from within umm_malloc() 84 | */ 85 | 86 | #define UMM_CRITICAL_ENTRY() 87 | #define UMM_CRITICAL_EXIT() 88 | 89 | /* 90 | * -D UMM_INTEGRITY_CHECK : 91 | * 92 | * Enables a heap integrity check before any heap operation. It affects 93 | * performance, but does NOT consume extra memory. 94 | * 95 | * If an integrity violation is detected, a message is printed and the 96 | * user-provided callback is called: `UMM_HEAP_CORRUPTION_CB()` 97 | * 98 | * Note that not all buffer overruns are detected: each buffer is aligned by 99 | * 4 bytes, so there might be some trailing "extra" bytes which are not checked 100 | * for corruption. 101 | */ 102 | /* 103 | #define UMM_INTEGRITY_CHECK 104 | */ 105 | 106 | /* 107 | * -D UMM_POISON : 108 | * 109 | * Enables heap poisoning: adds a predefined value (poison) before and after 110 | * each allocation, and checks before each heap operation that no poison is 111 | * corrupted. 112 | * 113 | * Other than the poison itself, we need to store the exact user-requested 114 | * length for each buffer, so that an overrun by just 1 byte will always be noticed. 115 | * 116 | * Customizations: 117 | * 118 | * UMM_POISON_SIZE_BEFORE: 119 | * Number of poison bytes before each block, e.g. 2 120 | * UMM_POISON_SIZE_AFTER: 121 | * Number of poison bytes after each block, e.g. 2 122 | * UMM_POISONED_BLOCK_LEN_TYPE: 123 | * Type of the exact buffer length, e.g. `short` 124 | * 125 | * NOTE: each allocated buffer is aligned by 4 bytes. But when poisoning is 126 | * enabled, the actual pointer returned to the user is shifted by 127 | * `(sizeof(UMM_POISONED_BLOCK_LEN_TYPE) + UMM_POISON_SIZE_BEFORE)`. 
128 | * It's your responsibility to make the resulting pointers aligned appropriately. 129 | * 130 | * If poison corruption is detected, a message is printed and the user-provided 131 | * callback is called: `UMM_HEAP_CORRUPTION_CB()` 132 | */ 133 | /* 134 | #define UMM_POISON 135 | */ 136 | #define UMM_POISON_SIZE_BEFORE 4 137 | #define UMM_POISON_SIZE_AFTER 4 138 | #define UMM_POISONED_BLOCK_LEN_TYPE short 139 | 140 | #endif /* _UMM_MALLOC_CFG_H */ 141 | -------------------------------------------------------------------------------- /test/umm_malloc_test.c: -------------------------------------------------------------------------------- 1 | 2 | #include <stdio.h> 3 | #include <stdbool.h> 4 | #include <stdlib.h> 5 | #include <string.h> 6 | 7 | #include "umm_malloc.h" 8 | 9 | #define TRY(v) do { \ 10 | bool res = v;\ 11 | if (!res) {\ 12 | printf("assert failed: " #v "\n");\ 13 | abort();\ 14 | }\ 15 | } while (0) 16 | 17 | char test_umm_heap[UMM_MALLOC_CFG__HEAP_SIZE]; 18 | static int corruption_cnt = 0; 19 | 20 | void umm_corruption(void) { 21 | corruption_cnt++; 22 | } 23 | 24 | #if defined(UMM_POISON) 25 | bool test_poison(void) { 26 | 27 | size_t size; 28 | for (size = 1; size <= 16; size++) { 29 | 30 | { 31 | umm_init(); 32 | corruption_cnt = 0; 33 | char *ptr = umm_malloc(size); 34 | ptr[size]++; 35 | 36 | umm_free(ptr); 37 | 38 | if (corruption_cnt == 0) { 39 | printf("corruption_cnt should not be 0, but it is\n"); 40 | return false; 41 | } 42 | } 43 | 44 | { 45 | umm_init(); 46 | corruption_cnt = 0; 47 | char *ptr = umm_calloc(1, size); 48 | ptr[-1]++; 49 | 50 | umm_free(ptr); 51 | 52 | if (corruption_cnt == 0) { 53 | printf("corruption_cnt should not be 0, but it is\n"); 54 | return false; 55 | } 56 | } 57 | } 58 | 59 | return true; 60 | } 61 | #endif 62 | 63 | #if defined(UMM_INTEGRITY_CHECK) 64 | bool test_integrity_check(void) { 65 | 66 | size_t size; 67 | for (size = 1; size <= 16; size++) { 68 | 69 | { 70 | umm_init(); 71 | corruption_cnt = 0; 72 | char *ptr = umm_malloc(size); 73 | memset(ptr, 
0xfe, size + 8/* size of umm_block*/); 74 | 75 | umm_free(ptr); 76 | 77 | if (corruption_cnt == 0) { 78 | printf("corruption_cnt should not be 0, but it is\n"); 79 | return false; 80 | } 81 | } 82 | 83 | { 84 | umm_init(); 85 | corruption_cnt = 0; 86 | char *ptr = umm_calloc(1, size); 87 | ptr[-1]++; 88 | 89 | umm_free(ptr); 90 | 91 | if (corruption_cnt == 0) { 92 | printf("corruption_cnt should not be 0, but it is\n"); 93 | return false; 94 | } 95 | } 96 | } 97 | 98 | return true; 99 | } 100 | #endif 101 | 102 | bool random_stress(void) { 103 | void * ptr_array[256]; 104 | size_t i; 105 | int idx; 106 | 107 | corruption_cnt = 0; 108 | 109 | printf( "Size of umm_heap is %u\n", (unsigned int)sizeof(test_umm_heap) ); 110 | 111 | umm_init(); 112 | 113 | umm_info( NULL, 1 ); 114 | 115 | for( idx=0; idx<256; ++idx ) 116 | ptr_array[idx] = (void *)NULL; 117 | 118 | for( idx=0; idx<100000; ++idx ) { 119 | i = rand()%256; 120 | 121 | switch( rand() % 16 ) { 122 | 123 | case 0: 124 | case 1: 125 | case 2: 126 | case 3: 127 | case 4: 128 | case 5: 129 | case 6: 130 | { 131 | ptr_array[i] = umm_realloc(ptr_array[i], 0); 132 | break; 133 | } 134 | case 7: 135 | case 8: 136 | { 137 | size_t size = rand()%40; 138 | ptr_array[i] = umm_realloc(ptr_array[i], size ); 139 | if (ptr_array[i] != NULL) memset(ptr_array[i], 0xfe, size); 140 | break; 141 | } 142 | 143 | case 9: 144 | case 10: 145 | case 11: 146 | case 12: 147 | { 148 | size_t size = rand()%100; 149 | ptr_array[i] = umm_realloc(ptr_array[i], size ); 150 | if (ptr_array[i] != NULL) memset(ptr_array[i], 0xfe, size); 151 | break; 152 | } 153 | 154 | case 13: 155 | case 14: 156 | { 157 | size_t size = rand()%200; 158 | umm_free(ptr_array[i]); 159 | ptr_array[i] = umm_calloc( 1, size ); 160 | if (ptr_array[i] != NULL){ 161 | int a; 162 | for (a = 0; a < size; a++) { 163 | if (((char *)ptr_array[i])[a] != 0x00) { 164 | printf("calloc returned non-zeroed memory\n"); 165 | return false; 166 | } 167 | } 168 | } 169 | if (ptr_array[i] != NULL) memset(ptr_array[i], 0xfe, size); 170 | break; 171 | } 172 | 173 | 
default: 174 | { 175 | size_t size = rand()%400; 176 | umm_free(ptr_array[i]); 177 | ptr_array[i] = umm_malloc( size ); 178 | if (ptr_array[i] != NULL) memset(ptr_array[i], 0xfe, size); 179 | break; 180 | } 181 | } 182 | 183 | } 184 | 185 | 186 | return (corruption_cnt == 0); 187 | } 188 | 189 | int main(void) { 190 | #if defined(UMM_INTEGRITY_CHECK) 191 | TRY(test_integrity_check()); 192 | #endif 193 | 194 | #if defined(UMM_POISON) 195 | TRY(test_poison()); 196 | #endif 197 | 198 | TRY(random_stress()); 199 | 200 | return 0; 201 | } 202 | 203 | -------------------------------------------------------------------------------- /umm_malloc.c: -------------------------------------------------------------------------------- 1 | /* ---------------------------------------------------------------------------- 2 | * umm_malloc.c - a memory allocator for embedded systems (microcontrollers) 3 | * 4 | * See copyright notice in LICENSE.TXT 5 | * ---------------------------------------------------------------------------- 6 | * 7 | * R.Hempel 2007-09-22 - Original 8 | * R.Hempel 2008-12-11 - Added MIT License boilerplate 9 | * - realloc() now looks to see if previous block is free 10 | * - made common operations functions 11 | * R.Hempel 2009-03-02 - Added macros to disable tasking 12 | * - Added function to dump heap and check for valid free 13 | * pointer 14 | * R.Hempel 2009-03-09 - Changed name to umm_malloc to avoid conflicts with 15 | * the mm_malloc() library functions 16 | * - Added some test code to assimilate a free block 17 | * with the very next block if possible. Complicated and 18 | * not worth the grief. 
19 | * D.Frank 2014-04-02 - Fixed heap configuration when UMM_TEST_MAIN is NOT set, 20 | * added user-dependent configuration file umm_malloc_cfg.h 21 | * ---------------------------------------------------------------------------- 22 | * 23 | * This is a memory management library specifically designed to work with the 24 | * ARM7 embedded processor, but it should work on many other 32 bit processors, 25 | * as well as 16 and 8 bit devices. 26 | * 27 | * ACKNOWLEDGEMENTS 28 | * 29 | * Joerg Wunsch and the avr-libc provided the first malloc() implementation 30 | * that I examined in detail. 31 | * 32 | * http://www.nongnu.org/avr-libc 33 | * 34 | * Doug Lea's paper on malloc() was another excellent reference and provides 35 | * a lot of detail on advanced memory management techniques such as binning. 36 | * 37 | * http://g.oswego.edu/dl/html/malloc.html 38 | * 39 | * Bill Dittman provided excellent suggestions, including macros to support 40 | * using these functions in critical sections, and for optimizing realloc() 41 | * further by checking to see if the previous block was free and could be 42 | * used for the new block size. This can help to reduce heap fragmentation 43 | * significantly. 44 | * 45 | * Yaniv Ankin suggested that a way to dump the current heap condition 46 | * might be useful. I combined this with an idea from plarroy to also 47 | * allow checking a free pointer to make sure it's valid. 48 | * 49 | * ---------------------------------------------------------------------------- 50 | * 51 | * The memory manager assumes the following things: 52 | * 53 | * 1. The standard POSIX compliant malloc/realloc/free semantics are used 54 | * 2. All memory used by the manager is allocated at link time, it is aligned 55 | * on a 32 bit boundary, it is contiguous, and its extent (start and end 56 | * address) is filled in by the linker. 57 | * 3. All memory used by the manager is initialized to 0 as part of the 58 | * runtime startup routine.
No other initialization is required. 59 | * 60 | * The fastest linked list implementations use doubly linked lists so that 61 | * it's possible to insert and delete blocks in constant time. This memory 62 | * manager keeps track of both free and used blocks in a doubly linked list. 63 | * 64 | * Most memory managers use some kind of list structure made up of pointers 65 | * to keep track of used - and sometimes free - blocks of memory. In an 66 | * embedded system, this can get pretty expensive as each pointer can use 67 | * up to 32 bits. 68 | * 69 | * In most embedded systems there is no need for managing large blocks 70 | * of memory dynamically, so a full 32 bit pointer based data structure 71 | * for the free and used block lists is wasteful. A block of memory on 72 | * the free list would use 16 bytes just for the pointers! 73 | * 74 | * This memory management library sees the malloc heap as an array of blocks, 75 | * and uses block numbers to keep track of locations. The block numbers are 76 | * 15 bits - which allows for up to 32767 blocks of memory. The high order 77 | * bit marks a block as being either free or in use, which will be explained 78 | * later. 79 | * 80 | * The result is that a block of memory on the free list uses just 8 bytes 81 | * instead of 16. 82 | * 83 | * In fact, we go even one step further when we realize that the free block 84 | * index values are available to store data when the block is allocated. 85 | * 86 | * The overhead of an allocated block is therefore just 4 bytes. 87 | * 88 | * Each memory block holds 8 bytes, and there are up to 32767 blocks 89 | * available, for about 256K of heap space. If that's not enough, you 90 | * can always add more data bytes to the body of the memory block 91 | * at the expense of free block size overhead.
92 | * 93 | * There are a lot of little features and optimizations in this memory 94 | * management system that make it especially suited to small embedded systems, 95 | * but the best way to appreciate them is to review the data structures and 96 | * algorithms used, so let's get started. 97 | * 98 | * ---------------------------------------------------------------------------- 99 | * 100 | * We have a general notation for a block that we'll use to describe the 101 | * different scenarios that our memory allocation algorithm must deal with: 102 | * 103 | * +----+----+----+----+ 104 | * c |* n | p | nf | pf | 105 | * +----+----+----+----+ 106 | * 107 | * Where - c is the index of this block 108 | * * is the indicator for a free block 109 | * n is the index of the next block in the heap 110 | * p is the index of the previous block in the heap 111 | * nf is the index of the next block in the free list 112 | * pf is the index of the previous block in the free list 113 | * 114 | * The fact that we have forward and backward links in the block descriptors 115 | * means that malloc() and free() operations can be very fast. It's easy 116 | * to either allocate the whole free item to a new block or to allocate part 117 | * of the free item and leave the rest on the free list without traversing 118 | * the list from front to back first. 119 | * 120 | * The entire block of memory used by the heap is assumed to be initialized 121 | * to 0. The very first block in the heap is special - it's the head of the 122 | * free block list. It is never assimilated with a free block (more on this 123 | * later). 124 | * 125 | * Once a block has been allocated to the application, it looks like this: 126 | * 127 | * +----+----+----+----+ 128 | * c | n | p | ...
| 129 | * +----+----+----+----+ 130 | * 131 | * Where - c is the index of this block 132 | * n is the index of the next block in the heap 133 | * p is the index of the previous block in the heap 134 | * 135 | * Note that the free list information is gone, because it's now being used to 136 | * store actual data for the application. It would have been nice to store 137 | * the next and previous free list indexes as well, but that would be a waste 138 | * of space. If we had even 500 items in use, that would be 2,000 bytes for 139 | * free list information. We simply can't afford to waste that much. 140 | * 141 | * The address of the ... area is what is returned to the application 142 | * for data storage. 143 | * 144 | * The following sections describe the scenarios encountered during the 145 | * operation of the library. There are two additional notation conventions: 146 | * 147 | * ?? inside a pointer block means that the data is irrelevant. We don't care 148 | * about it because we don't read or modify it in the scenario being 149 | * described. 150 | * 151 | * ... between memory blocks indicates zero or more additional blocks are 152 | * allocated for use by the upper block. 153 | * 154 | * And while we're talking about "upper" and "lower" blocks, we should make 155 | * a comment about addresses. In the diagrams, a block higher up in the 156 | * picture is at a lower address. As the blocks grow downwards, their 157 | * block index increases, as does their physical address. 158 | * 159 | * Finally, there's one very important characteristic of the individual 160 | * blocks that make up the heap - there can never be two consecutive free 161 | * memory blocks, but there can be consecutive used memory blocks. 162 | * 163 | * The reason is that we always want to have a short free list of the 164 | * largest possible block sizes. By always assimilating a newly freed block 165 | * with adjacent free blocks, we maximize the size of each free memory area.
166 | * 167 | *--------------------------------------------------------------------------- 168 | * 169 | * Operation of malloc right after system startup 170 | * 171 | * As part of the system startup code, all of the heap has been cleared. 172 | * 173 | * During the very first malloc operation, we start traversing the free list 174 | * starting at index 0. The index of the next free block is 0, which means 175 | * we're at the end of the list! 176 | * 177 | * At this point, the malloc has a special test that checks if the current 178 | * block index is 0, which it is. This special case initializes the free 179 | * list to point at block index 1. 180 | * 181 | * BEFORE AFTER 182 | * 183 | * +----+----+----+----+ +----+----+----+----+ 184 | * 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 185 | * +----+----+----+----+ +----+----+----+----+ 186 | * +----+----+----+----+ 187 | * 1 | 0 | 0 | 0 | 0 | 188 | * +----+----+----+----+ 189 | * 190 | * The heap is now ready to complete the first malloc operation. 191 | * 192 | * ---------------------------------------------------------------------------- 193 | * 194 | * Operation of malloc when we have reached the end of the free list and 195 | * there is no block large enough to accommodate the request. 196 | * 197 | * This happens at the very first malloc operation, or any time the free 198 | * list is traversed and no free block large enough for the request is 199 | * found. 200 | * 201 | * The current block pointer will be at the end of the free list, and we 202 | * know we're at the end of the list because the nf index is 0, like this: 203 | * 204 | * BEFORE AFTER 205 | * 206 | * +----+----+----+----+ +----+----+----+----+ 207 | * pf |*?? | ?? | cf | ?? | pf |*?? | ?? | lf | ?? | 208 | * +----+----+----+----+ +----+----+----+----+ 209 | * ... ... 210 | * +----+----+----+----+ +----+----+----+----+ 211 | * p | cf | ?? | ... | p | cf | ?? | ... 
| 212 | * +----+----+----+----+ +----+----+----+----+ 213 | * +----+----+----+----+ +----+----+----+----+ 214 | * cf | 0 | p | 0 | pf | c | lf | p | ... | 215 | * +----+----+----+----+ +----+----+----+----+ 216 | * +----+----+----+----+ 217 | * lf | 0 | cf | 0 | pf | 218 | * +----+----+----+----+ 219 | * 220 | * As we walk the free list looking for a block of size b or larger, we get 221 | * to cf, which is the last item in the free list. We know this because the 222 | * next index is 0. 223 | * 224 | * So we're going to turn cf into the new block of memory, and then create 225 | * a new block that represents the last free entry (lf) and adjust the prev 226 | * index of lf to point at the block we just created. We also need to adjust 227 | * the next index of the new block (c) to point to the last free block. 228 | * 229 | * Note that the next free index of the pf block must point to the new lf 230 | * because cf is no longer a free block! 231 | * 232 | * ---------------------------------------------------------------------------- 233 | * 234 | * Operation of malloc when we have found a block (cf) that will fit the 235 | * current request of b units exactly. 236 | * 237 | * This one is pretty easy, just clear the free list bit in the current 238 | * block and unhook it from the free list. 239 | * 240 | * BEFORE AFTER 241 | * 242 | * +----+----+----+----+ +----+----+----+----+ 243 | * pf |*?? | ?? | cf | ?? | pf |*?? | ?? | nf | ?? | 244 | * +----+----+----+----+ +----+----+----+----+ 245 | * ... ... 246 | * +----+----+----+----+ +----+----+----+----+ 247 | * p | cf | ?? | ... | p | cf | ?? | ... | 248 | * +----+----+----+----+ +----+----+----+----+ 249 | * +----+----+----+----+ +----+----+----+----+ Clear the free 250 | * cf |* n | p | nf | pf | cf | n | p | .. | list bit here 251 | * +----+----+----+----+ +----+----+----+----+ 252 | * +----+----+----+----+ +----+----+----+----+ 253 | * n | ?? | cf | ... | n | ?? | cf | ... 
| 254 | * +----+----+----+----+ +----+----+----+----+ 255 | * ... ... 256 | * +----+----+----+----+ +----+----+----+----+ 257 | * nf |*?? | ?? | ?? | cf | nf | ?? | ?? | ?? | pf | 258 | * +----+----+----+----+ +----+----+----+----+ 259 | * 260 | * Unhooking from the free list is accomplished by adjusting the next and 261 | * prev free list index values in the pf and nf blocks. 262 | * 263 | * ---------------------------------------------------------------------------- 264 | * 265 | * Operation of malloc when we have found a block that will fit the current 266 | * request of b units with some left over. 267 | * 268 | * We'll allocate the new block at the END of the current free block so we 269 | * don't have to change ANY free list pointers. 270 | * 271 | * BEFORE AFTER 272 | * 273 | * +----+----+----+----+ +----+----+----+----+ 274 | * pf |*?? | ?? | cf | ?? | pf |*?? | ?? | cf | ?? | 275 | * +----+----+----+----+ +----+----+----+----+ 276 | * ... ... 277 | * +----+----+----+----+ +----+----+----+----+ 278 | * p | cf | ?? | ... | p | cf | ?? | ... | 279 | * +----+----+----+----+ +----+----+----+----+ 280 | * +----+----+----+----+ +----+----+----+----+ 281 | * cf |* n | p | nf | pf | cf |* c | p | nf | pf | 282 | * +----+----+----+----+ +----+----+----+----+ 283 | * +----+----+----+----+ This is the new 284 | * c | n | cf | .. | block at cf+b 285 | * +----+----+----+----+ 286 | * +----+----+----+----+ +----+----+----+----+ 287 | * n | ?? | cf | ... | n | ?? | c | ... | 288 | * +----+----+----+----+ +----+----+----+----+ 289 | * ... ... 290 | * +----+----+----+----+ +----+----+----+----+ 291 | * nf |*?? | ?? | ?? | cf | nf | ?? | ?? | ?? | pf | 292 | * +----+----+----+----+ +----+----+----+----+ 293 | * 294 | * This one is pretty easy too, except we don't need to mess with the 295 | * free list indexes at all because we'll allocate the new block at the 296 | * end of the current free block. We do, however, have to adjust the 297 | * indexes in cf, c, and n.
298 | * 299 | * ---------------------------------------------------------------------------- 300 | * 301 | * That covers the initialization and all possible malloc scenarios, so now 302 | * we need to cover the free operation possibilities... 303 | * 304 | * The operation of free depends on the position of the current block being 305 | * freed relative to free list items immediately above or below it. The code 306 | * works like this: 307 | * 308 | * if next block is free 309 | * assimilate with next block already on free list 310 | * if prev block is free 311 | * assimilate with prev block already on free list 312 | * else 313 | * put current block at head of free list 314 | * 315 | * ---------------------------------------------------------------------------- 316 | * 317 | * Step 1 of the free operation checks if the next block is free, and if it 318 | * is then insert this block into the free list and assimilate the next block 319 | * with this one. 320 | * 321 | * Note that c is the block we are freeing up, cf is the free block that 322 | * follows it. 323 | * 324 | * BEFORE AFTER 325 | * 326 | * +----+----+----+----+ +----+----+----+----+ 327 | * pf |*?? | ?? | cf | ?? | pf |*?? | ?? | nf | ?? | 328 | * +----+----+----+----+ +----+----+----+----+ 329 | * ... ... 330 | * +----+----+----+----+ +----+----+----+----+ 331 | * p | c | ?? | ... | p | c | ?? | ... | 332 | * +----+----+----+----+ +----+----+----+----+ 333 | * +----+----+----+----+ +----+----+----+----+ This block is 334 | * c | cf | p | ... | c | nn | p | ... | disconnected 335 | * +----+----+----+----+ +----+----+----+----+ from free list, 336 | * +----+----+----+----+ assimilated with 337 | * cf |*nn | c | nf | pf | the next, and 338 | * +----+----+----+----+ ready for step 2 339 | * +----+----+----+----+ +----+----+----+----+ 340 | * nn | ?? | cf | ?? | ?? | nn | ?? | c | ... | 341 | * +----+----+----+----+ +----+----+----+----+ 342 | * ... ... 
343 | * +----+----+----+----+ +----+----+----+----+ 344 | * nf |*?? | ?? | ?? | cf | nf |*?? | ?? | ?? | pf | 345 | * +----+----+----+----+ +----+----+----+----+ 346 | * 347 | * Take special note that the newly assimilated block (c) is completely 348 | * disconnected from the free list, and it does not have its free list 349 | * bit set. This is important as we move on to step 2 of the procedure... 350 | * 351 | * ---------------------------------------------------------------------------- 352 | * 353 | * Step 2 of the free operation checks if the prev block is free, and if it 354 | * is then assimilate it with this block. 355 | * 356 | * Note that c is the block we are freeing up, pf is the free block that 357 | * precedes it. 358 | * 359 | * BEFORE AFTER 360 | * 361 | * +----+----+----+----+ +----+----+----+----+ This block has 362 | * pf |* c | ?? | nf | ?? | pf |* n | ?? | nf | ?? | assimilated the 363 | * +----+----+----+----+ +----+----+----+----+ current block 364 | * +----+----+----+----+ 365 | * c | n | pf | ... | 366 | * +----+----+----+----+ 367 | * +----+----+----+----+ +----+----+----+----+ 368 | * n | ?? | c | ... | n | ?? | pf | ?? | ?? | 369 | * +----+----+----+----+ +----+----+----+----+ 370 | * ... ... 371 | * +----+----+----+----+ +----+----+----+----+ 372 | * nf |*?? | ?? | ?? | pf | nf |*?? | ?? | ?? | pf | 373 | * +----+----+----+----+ +----+----+----+----+ 374 | * 375 | * Nothing magic here, except that when we're done, the current block (c) 376 | * is gone since it's been absorbed into the previous free block. Note that 377 | * the previous step guarantees that the next block (n) is not free. 378 | * 379 | * ---------------------------------------------------------------------------- 380 | * 381 | * Step 3 of the free operation only runs if the previous block is not free. 382 | * It just inserts the current block at the head of the free list.
383 | * 384 | * Remember, 0 is always the first block in the memory heap, and it's always 385 | * head of the free list! 386 | * 387 | * BEFORE AFTER 388 | * 389 | * +----+----+----+----+ +----+----+----+----+ 390 | * 0 | ?? | ?? | nf | 0 | 0 | ?? | ?? | c | 0 | 391 | * +----+----+----+----+ +----+----+----+----+ 392 | * ... ... 393 | * +----+----+----+----+ +----+----+----+----+ 394 | * p | c | ?? | ... | p | c | ?? | ... | 395 | * +----+----+----+----+ +----+----+----+----+ 396 | * +----+----+----+----+ +----+----+----+----+ 397 | * c | n | p | .. | c |* n | p | nf | 0 | 398 | * +----+----+----+----+ +----+----+----+----+ 399 | * +----+----+----+----+ +----+----+----+----+ 400 | * n | ?? | c | ... | n | ?? | c | ... | 401 | * +----+----+----+----+ +----+----+----+----+ 402 | * ... ... 403 | * +----+----+----+----+ +----+----+----+----+ 404 | * nf |*?? | ?? | ?? | 0 | nf |*?? | ?? | ?? | c | 405 | * +----+----+----+----+ +----+----+----+----+ 406 | * 407 | * Again, nothing spectacular here, we're simply adjusting a few pointers 408 | * to make the most recently freed block the first item in the free list. 409 | * 410 | * That's because finding the previous free block would mean a reverse 411 | * traversal of blocks until we found a free one, and it's just easier to 412 | * put it at the head of the list. No traversal is needed. 413 | * 414 | * ---------------------------------------------------------------------------- 415 | * 416 | * Finally, we can cover realloc, which has the following basic operation. 417 | * 418 | * The first thing we do is assimilate up with the next free block of 419 | * memory if possible. This step might help if we're resizing to a bigger 420 | * block of memory. It also helps if we're downsizing and creating a new 421 | * free block with the leftover memory. 422 | * 423 | * First we check to see if the next block is free, and we assimilate it 424 | * to this block if it is. 
If the previous block is also free, and if 425 | * combining it with the current block would satisfy the request, then we 426 | * assimilate with that block and move the current data down to the new 427 | * location. 428 | * 429 | * Assimilating with the previous free block and moving the data works 430 | * like this: 431 | * 432 | * BEFORE AFTER 433 | * 434 | * +----+----+----+----+ +----+----+----+----+ 435 | * pf |*?? | ?? | cf | ?? | pf |*?? | ?? | nf | ?? | 436 | * +----+----+----+----+ +----+----+----+----+ 437 | * ... ... 438 | * +----+----+----+----+ +----+----+----+----+ 439 | * cf |* c | ?? | nf | pf | c | n | ?? | ... | The data gets 440 | * +----+----+----+----+ +----+----+----+----+ moved from c to 441 | * +----+----+----+----+ the new data area 442 | * c | n | cf | ... | in cf, then c is 443 | * +----+----+----+----+ adjusted to cf 444 | * +----+----+----+----+ +----+----+----+----+ 445 | * n | ?? | c | ... | n | ?? | c | ?? | ?? | 446 | * +----+----+----+----+ +----+----+----+----+ 447 | * ... ... 448 | * +----+----+----+----+ +----+----+----+----+ 449 | * nf |*?? | ?? | ?? | cf | nf |*?? | ?? | ?? | pf | 450 | * +----+----+----+----+ +----+----+----+----+ 451 | * 452 | * 453 | * Once we've done that, there are three scenarios to consider: 454 | * 455 | * 1. The current block size is exactly the right size, so no more work is 456 | * needed. 457 | * 458 | * 2. The current block is bigger than the new required size, so carve off 459 | * the excess and add it to the free list. 460 | * 461 | * 3. The current block is still smaller than the required size, so malloc 462 | * a new block of the correct size and copy the current data into the new 463 | * block before freeing the current block. 464 | * 465 | * The only one of these scenarios that involves an operation that has not 466 | * yet been described is the second one, and it's shown below: 467 | * 468 | * BEFORE AFTER 469 | * 470 | * +----+----+----+----+ +----+----+----+----+ 471 | * p | c | ?? | ...
| p | c | ?? | ... | 472 | * +----+----+----+----+ +----+----+----+----+ 473 | * +----+----+----+----+ +----+----+----+----+ 474 | * c | n | p | ... | c | s | p | ... | 475 | * +----+----+----+----+ +----+----+----+----+ 476 | * +----+----+----+----+ This is the 477 | * s | n | c | .. | new block at 478 | * +----+----+----+----+ c+blocks 479 | * +----+----+----+----+ +----+----+----+----+ 480 | * n | ?? | c | ... | n | ?? | s | ... | 481 | * +----+----+----+----+ +----+----+----+----+ 482 | * 483 | * Then we call free() with the address of the data portion of the new 484 | * block (s) which adds it to the free list. 485 | * 486 | * ---------------------------------------------------------------------------- 487 | */ 488 | 489 | #include <stdio.h> 490 | #include <string.h> 491 | 492 | #include "umm_malloc.h" 493 | 494 | #include "umm_malloc_cfg.h" /* user-dependent */ 495 | 496 | #ifndef UMM_FIRST_FIT 497 | # ifndef UMM_BEST_FIT 498 | # define UMM_BEST_FIT 499 | # endif 500 | #endif 501 | 502 | #ifndef DBG_LOG_LEVEL 503 | # undef DBG_LOG_LEVEL 504 | # define DBG_LOG_LEVEL 0 505 | #else 506 | # undef DBG_LOG_LEVEL 507 | # define DBG_LOG_LEVEL DBG_LOG_LEVEL 508 | #endif 509 | 510 | /* -- dbglog {{{ */ 511 | 512 | /* ---------------------------------------------------------------------------- 513 | * A set of macros that cleans up code that needs to produce debug 514 | * or log information.
515 | * 516 | * See copyright notice in LICENSE.TXT 517 | * ---------------------------------------------------------------------------- 518 | * 519 | * There are macros to handle the following decreasing levels of detail: 520 | * 521 | * 6 = TRACE 522 | * 5 = DEBUG 523 | * 4 = CRITICAL 524 | * 3 = ERROR 525 | * 2 = WARNING 526 | * 1 = INFO 527 | * 0 = FORCE - The printf is always compiled in and is called only when 528 | * the first parameter to the macro is non-0 529 | * 530 | * ---------------------------------------------------------------------------- 531 | * 532 | * The following #define should be set up before this file is included so 533 | * that we can be sure that the correct macros are defined. 534 | * 535 | * #define DBG_LOG_LEVEL x 536 | * ---------------------------------------------------------------------------- 537 | */ 538 | 539 | #undef DBG_LOG_TRACE 540 | #undef DBG_LOG_DEBUG 541 | #undef DBG_LOG_CRITICAL 542 | #undef DBG_LOG_ERROR 543 | #undef DBG_LOG_WARNING 544 | #undef DBG_LOG_INFO 545 | #undef DBG_LOG_FORCE 546 | 547 | /* ------------------------------------------------------------------------- */ 548 | 549 | #if DBG_LOG_LEVEL >= 6 550 | # define DBG_LOG_TRACE( format, ... ) printf( format, ## __VA_ARGS__ ) 551 | #else 552 | # define DBG_LOG_TRACE( format, ... ) 553 | #endif 554 | 555 | #if DBG_LOG_LEVEL >= 5 556 | # define DBG_LOG_DEBUG( format, ... ) printf( format, ## __VA_ARGS__ ) 557 | #else 558 | # define DBG_LOG_DEBUG( format, ... ) 559 | #endif 560 | 561 | #if DBG_LOG_LEVEL >= 4 562 | # define DBG_LOG_CRITICAL( format, ... ) printf( format, ## __VA_ARGS__ ) 563 | #else 564 | # define DBG_LOG_CRITICAL( format, ... ) 565 | #endif 566 | 567 | #if DBG_LOG_LEVEL >= 3 568 | # define DBG_LOG_ERROR( format, ... ) printf( format, ## __VA_ARGS__ ) 569 | #else 570 | # define DBG_LOG_ERROR( format, ... ) 571 | #endif 572 | 573 | #if DBG_LOG_LEVEL >= 2 574 | # define DBG_LOG_WARNING( format, ... 
) printf( format, ## __VA_ARGS__ ) 575 | #else 576 | # define DBG_LOG_WARNING( format, ... ) 577 | #endif 578 | 579 | #if DBG_LOG_LEVEL >= 1 580 | # define DBG_LOG_INFO( format, ... ) printf( format, ## __VA_ARGS__ ) 581 | #else 582 | # define DBG_LOG_INFO( format, ... ) 583 | #endif 584 | 585 | #define DBG_LOG_FORCE( force, format, ... ) {if(force) {printf( format, ## __VA_ARGS__ );}} 586 | 587 | /* }}} */ 588 | 589 | /* ------------------------------------------------------------------------- */ 590 | 591 | UMM_H_ATTPACKPRE typedef struct umm_ptr_t { 592 | unsigned short int next; 593 | unsigned short int prev; 594 | } UMM_H_ATTPACKSUF umm_ptr; 595 | 596 | 597 | UMM_H_ATTPACKPRE typedef struct umm_block_t { 598 | union { 599 | umm_ptr used; 600 | } header; 601 | union { 602 | umm_ptr free; 603 | unsigned char data[4]; 604 | } body; 605 | } UMM_H_ATTPACKSUF umm_block; 606 | 607 | #define UMM_FREELIST_MASK (0x8000) 608 | #define UMM_BLOCKNO_MASK (0x7FFF) 609 | 610 | /* ------------------------------------------------------------------------- */ 611 | 612 | #ifdef UMM_REDEFINE_MEM_FUNCTIONS 613 | # define umm_free free 614 | # define umm_malloc malloc 615 | # define umm_calloc calloc 616 | # define umm_realloc realloc 617 | #endif 618 | 619 | umm_block *umm_heap = NULL; 620 | unsigned short int umm_numblocks = 0; 621 | 622 | #define UMM_NUMBLOCKS (umm_numblocks) 623 | 624 | /* ------------------------------------------------------------------------ */ 625 | 626 | #define UMM_BLOCK(b) (umm_heap[b]) 627 | 628 | #define UMM_NBLOCK(b) (UMM_BLOCK(b).header.used.next) 629 | #define UMM_PBLOCK(b) (UMM_BLOCK(b).header.used.prev) 630 | #define UMM_NFREE(b) (UMM_BLOCK(b).body.free.next) 631 | #define UMM_PFREE(b) (UMM_BLOCK(b).body.free.prev) 632 | #define UMM_DATA(b) (UMM_BLOCK(b).body.data) 633 | 634 | /* integrity check (UMM_INTEGRITY_CHECK) {{{ */ 635 | #if defined(UMM_INTEGRITY_CHECK) 636 | /* 637 | * Perform integrity check of the whole heap data. 
Returns 1 in case of 638 | * success, 0 otherwise. 639 | * 640 | * First of all, iterate through all free blocks, and check that all backlinks 641 | * match (i.e. if block X has next free block Y, then the block Y should have 642 | * previous free block set to X). 643 | * 644 | * Additionally, we check that each free block is correctly marked with 645 | * `UMM_FREELIST_MASK` on the `next` pointer: during iteration through free 646 | * list, we mark each free block by the same flag `UMM_FREELIST_MASK`, but 647 | * on `prev` pointer. We'll check and unmark it later. 648 | * 649 | * Then, we iterate through all blocks in the heap, and similarly check that 650 | * all backlinks match (i.e. if block X has next block Y, then the block Y 651 | * should have previous block set to X). 652 | * 653 | * But before checking each backlink, we check that the `next` and `prev` 654 | * pointers are both marked with `UMM_FREELIST_MASK`, or both unmarked. 655 | * This way, we ensure that the free flag is in sync with the free pointers 656 | * chain. 
657 | */ 658 | static int integrity_check(void) { 659 | int ok = 1; 660 | unsigned short int prev; 661 | unsigned short int cur; 662 | 663 | if (umm_heap == NULL) { 664 | umm_init(); 665 | } 666 | 667 | /* Iterate through all free blocks */ 668 | prev = 0; 669 | while(1) { 670 | cur = UMM_NFREE(prev); 671 | 672 | /* Check that next free block number is valid */ 673 | if (cur >= UMM_NUMBLOCKS) { 674 | printf("heap integrity broken: too large next free num: %d " 675 | "(in block %d, addr 0x%lx)\n", cur, prev, 676 | (unsigned long)&UMM_NBLOCK(prev)); 677 | ok = 0; 678 | goto clean; 679 | } 680 | if (cur == 0) { 681 | /* No more free blocks */ 682 | break; 683 | } 684 | 685 | /* Check if prev free block number matches */ 686 | if (UMM_PFREE(cur) != prev) { 687 | printf("heap integrity broken: free links don't match: " 688 | "%d -> %d, but %d -> %d\n", 689 | prev, cur, cur, UMM_PFREE(cur)); 690 | ok = 0; 691 | goto clean; 692 | } 693 | 694 | UMM_PBLOCK(cur) |= UMM_FREELIST_MASK; 695 | 696 | prev = cur; 697 | } 698 | 699 | /* Iterate through all blocks */ 700 | prev = 0; 701 | while(1) { 702 | cur = UMM_NBLOCK(prev) & UMM_BLOCKNO_MASK; 703 | 704 | /* Check that next block number is valid */ 705 | if (cur >= UMM_NUMBLOCKS) { 706 | printf("heap integrity broken: too large next block num: %d " 707 | "(in block %d, addr 0x%lx)\n", cur, prev, 708 | (unsigned long)&UMM_NBLOCK(prev)); 709 | ok = 0; 710 | goto clean; 711 | } 712 | if (cur == 0) { 713 | /* No more blocks */ 714 | break; 715 | } 716 | 717 | /* make sure the free mark is appropriate, and unmark it */ 718 | if ((UMM_NBLOCK(cur) & UMM_FREELIST_MASK) 719 | != (UMM_PBLOCK(cur) & UMM_FREELIST_MASK)) 720 | { 721 | printf("heap integrity broken: mask wrong at addr 0x%lx: n=0x%x, p=0x%x\n", 722 | (unsigned long)&UMM_NBLOCK(cur), 723 | (UMM_NBLOCK(cur) & UMM_FREELIST_MASK), 724 | (UMM_PBLOCK(cur) & UMM_FREELIST_MASK) 725 | ); 726 | ok = 0; 727 | goto clean; 728 | } 729 | 730 | /* unmark */ 731 | UMM_PBLOCK(cur) &= 
UMM_BLOCKNO_MASK; 732 | 733 | /* Check if prev block number matches */ 734 | if (UMM_PBLOCK(cur) != prev) { 735 | printf("heap integrity broken: block links don't match: " 736 | "%d -> %d, but %d -> %d\n", 737 | prev, cur, cur, UMM_PBLOCK(cur)); 738 | ok = 0; 739 | goto clean; 740 | } 741 | 742 | prev = cur; 743 | } 744 | 745 | clean: 746 | if (!ok){ 747 | UMM_HEAP_CORRUPTION_CB(); 748 | } 749 | return ok; 750 | } 751 | 752 | #define INTEGRITY_CHECK() integrity_check() 753 | #else 754 | /* 755 | * Integrity check is disabled, so just define stub macro 756 | */ 757 | #define INTEGRITY_CHECK() 1 758 | #endif 759 | /* }}} */ 760 | 761 | /* poisoning (UMM_POISON) {{{ */ 762 | #if defined(UMM_POISON) 763 | #define POISON_BYTE (0xa5) 764 | 765 | /* 766 | * Yields a size of the poison for the block of size `s`. 767 | * If `s` is 0, returns 0. 768 | */ 769 | #define POISON_SIZE(s) ( \ 770 | (s) ? \ 771 | (UMM_POISON_SIZE_BEFORE + UMM_POISON_SIZE_AFTER + \ 772 | sizeof(UMM_POISONED_BLOCK_LEN_TYPE) \ 773 | ) : 0 \ 774 | ) 775 | 776 | /* 777 | * Print memory contents starting from given `ptr` 778 | */ 779 | static void dump_mem ( const unsigned char *ptr, size_t len ) { 780 | while (len--) { 781 | printf(" 0x%.2x", (unsigned int)(*ptr++)); 782 | } 783 | } 784 | 785 | /* 786 | * Put poison data at given `ptr` and `poison_size` 787 | */ 788 | static void put_poison( unsigned char *ptr, size_t poison_size ) { 789 | memset(ptr, POISON_BYTE, poison_size); 790 | } 791 | 792 | /* 793 | * Check poison data at given `ptr` and `poison_size`. `where` is a pointer to 794 | * a string, either "before" or "after", meaning, before or after the block. 795 | * 796 | * If poison is there, returns 1. 797 | * Otherwise, prints the appropriate message, and returns 0. 
798 | */ 799 | static int check_poison( const unsigned char *ptr, size_t poison_size, 800 | const char *where) { 801 | size_t i; 802 | int ok = 1; 803 | 804 | for (i = 0; i < poison_size; i++) { 805 | if (ptr[i] != POISON_BYTE) { 806 | ok = 0; 807 | break; 808 | } 809 | } 810 | 811 | if (!ok) { 812 | printf("there is no poison %s the block. " 813 | "Expected poison address: 0x%lx, actual data:", 814 | where, (unsigned long)ptr); 815 | dump_mem(ptr, poison_size); 816 | printf("\n"); 817 | } 818 | 819 | return ok; 820 | } 821 | 822 | /* 823 | * Check if a block is properly poisoned. Must be called only for non-free 824 | * blocks. 825 | */ 826 | static int check_poison_block( umm_block *pblock ) { 827 | int ok = 1; 828 | 829 | if (pblock->header.used.next & UMM_FREELIST_MASK) { 830 | printf("check_poison_block is called for free block 0x%lx\n", 831 | (unsigned long)pblock); 832 | } else { 833 | /* the block is used; let's check poison */ 834 | unsigned char *pc = (unsigned char *)pblock->body.data; 835 | unsigned char *pc_cur; 836 | 837 | pc_cur = pc + sizeof(UMM_POISONED_BLOCK_LEN_TYPE); 838 | if (!check_poison(pc_cur, UMM_POISON_SIZE_BEFORE, "before")) { 839 | UMM_HEAP_CORRUPTION_CB(); 840 | ok = 0; 841 | goto clean; 842 | } 843 | 844 | pc_cur = pc + *((UMM_POISONED_BLOCK_LEN_TYPE *)pc) - UMM_POISON_SIZE_AFTER; 845 | if (!check_poison(pc_cur, UMM_POISON_SIZE_AFTER, "after")) { 846 | UMM_HEAP_CORRUPTION_CB(); 847 | ok = 0; 848 | goto clean; 849 | } 850 | } 851 | 852 | clean: 853 | return ok; 854 | } 855 | 856 | /* 857 | * Iterates through all blocks in the heap, and checks poison for all used 858 | * blocks. 
859 | */ 860 | static int check_poison_all_blocks(void) { 861 | int ok = 1; 862 | unsigned short int blockNo = 0; 863 | 864 | if (umm_heap == NULL) { 865 | umm_init(); 866 | } 867 | 868 | /* Now iterate through the blocks list */ 869 | blockNo = UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK; 870 | 871 | while( UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK ) { 872 | if ( !(UMM_NBLOCK(blockNo) & UMM_FREELIST_MASK) ) { 873 | /* This is a used block (not free), so, check its poison */ 874 | ok = check_poison_block(&UMM_BLOCK(blockNo)); 875 | if (!ok){ 876 | break; 877 | } 878 | } 879 | 880 | blockNo = UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK; 881 | } 882 | 883 | return ok; 884 | } 885 | 886 | /* 887 | * Takes a pointer returned by actual allocator function (`_umm_malloc` or 888 | * `_umm_realloc`), puts appropriate poison, and returns adjusted pointer that 889 | * should be returned to the user. 890 | * 891 | * `size_w_poison` is a size of the whole block, including a poison. 892 | */ 893 | static void *get_poisoned( unsigned char *ptr, size_t size_w_poison ) { 894 | if (size_w_poison != 0 && ptr != NULL) { 895 | 896 | /* Put exact length of the user's chunk of memory */ 897 | memcpy(ptr, &size_w_poison, sizeof(UMM_POISONED_BLOCK_LEN_TYPE)); 898 | 899 | /* Poison beginning and the end of the allocated chunk */ 900 | put_poison(ptr + sizeof(UMM_POISONED_BLOCK_LEN_TYPE), 901 | UMM_POISON_SIZE_BEFORE); 902 | put_poison(ptr + size_w_poison - UMM_POISON_SIZE_AFTER, 903 | UMM_POISON_SIZE_AFTER); 904 | 905 | /* Return pointer at the first non-poisoned byte */ 906 | return ptr + sizeof(UMM_POISONED_BLOCK_LEN_TYPE) + UMM_POISON_SIZE_BEFORE; 907 | } else { 908 | return ptr; 909 | } 910 | } 911 | 912 | /* 913 | * Takes "poisoned" pointer (i.e. pointer returned from `get_poisoned()`), 914 | * and checks that the poison of this particular block is still there. 915 | * 916 | * Returns unpoisoned pointer, i.e. actual pointer to the allocated memory. 
917 | */ 918 | static void *get_unpoisoned( unsigned char *ptr ) { 919 | if (ptr != NULL) { 920 | unsigned short int c; 921 | 922 | ptr -= (sizeof(UMM_POISONED_BLOCK_LEN_TYPE) + UMM_POISON_SIZE_BEFORE); 923 | 924 | /* Figure out which block we're in. Note the use of truncated division... */ 925 | c = (((char *)ptr)-(char *)(&(umm_heap[0])))/sizeof(umm_block); 926 | 927 | check_poison_block(&UMM_BLOCK(c)); 928 | } 929 | 930 | return ptr; 931 | } 932 | 933 | #define CHECK_POISON_ALL_BLOCKS() check_poison_all_blocks() 934 | #define GET_POISONED(ptr, size) get_poisoned(ptr, size) 935 | #define GET_UNPOISONED(ptr) get_unpoisoned(ptr) 936 | 937 | #else 938 | /* 939 | * Poisoning is disabled, so just define stub macros 940 | */ 941 | #define POISON_SIZE(s) 0 942 | #define CHECK_POISON_ALL_BLOCKS() 1 943 | #define GET_POISONED(ptr, size) (ptr) 944 | #define GET_UNPOISONED(ptr) (ptr) 945 | #endif 946 | /* }}} */ 947 | 948 | /* ---------------------------------------------------------------------------- 949 | * One of the coolest things about this little library is that it's VERY 950 | * easy to get debug information about the memory heap by simply iterating 951 | * through all of the memory blocks. 952 | * 953 | * As you go through all the blocks, you can check to see if it's a free 954 | * block by looking at the high order bit of the next block index. You can 955 | * also see how big the block is by subtracting the next block index from 956 | * the current block number. 957 | * 958 | * The umm_info function does all of that and makes the results available 959 | * in the ummHeapInfo structure. 960 | * ---------------------------------------------------------------------------- 961 | */ 962 | 963 | UMM_HEAP_INFO ummHeapInfo; 964 | 965 | void *umm_info( void *ptr, int force ) { 966 | 967 | unsigned short int blockNo = 0; 968 | 969 | /* Protect the critical section...
*/ 970 | UMM_CRITICAL_ENTRY(); 971 | 972 | /* 973 | * Clear out all of the entries in the ummHeapInfo structure before doing 974 | * any calculations.. 975 | */ 976 | memset( &ummHeapInfo, 0, sizeof( ummHeapInfo ) ); 977 | 978 | DBG_LOG_FORCE( force, "\n\nDumping the umm_heap...\n" ); 979 | 980 | DBG_LOG_FORCE( force, "|0x%08lx|B %5i|NB %5i|PB %5i|Z %5i|NF %5i|PF %5i|\n", 981 | (unsigned long)(&UMM_BLOCK(blockNo)), 982 | blockNo, 983 | UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK, 984 | UMM_PBLOCK(blockNo), 985 | (UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK )-blockNo, 986 | UMM_NFREE(blockNo), 987 | UMM_PFREE(blockNo) ); 988 | 989 | /* 990 | * Now loop through the block lists, and keep track of the number and size 991 | * of used and free blocks. The terminating condition is an nb pointer with 992 | * a value of zero... 993 | */ 994 | 995 | blockNo = UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK; 996 | 997 | while( UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK ) { 998 | size_t curBlocks = (UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK )-blockNo; 999 | 1000 | ++ummHeapInfo.totalEntries; 1001 | ummHeapInfo.totalBlocks += curBlocks; 1002 | 1003 | /* Is this a free block? */ 1004 | 1005 | if( UMM_NBLOCK(blockNo) & UMM_FREELIST_MASK ) { 1006 | ++ummHeapInfo.freeEntries; 1007 | ummHeapInfo.freeBlocks += curBlocks; 1008 | 1009 | if (ummHeapInfo.maxFreeContiguousBlocks < curBlocks) { 1010 | ummHeapInfo.maxFreeContiguousBlocks = curBlocks; 1011 | } 1012 | 1013 | DBG_LOG_FORCE( force, "|0x%08lx|B %5i|NB %5i|PB %5i|Z %5u|NF %5i|PF %5i|\n", 1014 | (unsigned long)(&UMM_BLOCK(blockNo)), 1015 | blockNo, 1016 | UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK, 1017 | UMM_PBLOCK(blockNo), 1018 | (unsigned int)curBlocks, 1019 | UMM_NFREE(blockNo), 1020 | UMM_PFREE(blockNo) ); 1021 | 1022 | /* Does this block address match the ptr we may be trying to free? */ 1023 | 1024 | if( ptr == &UMM_BLOCK(blockNo) ) { 1025 | 1026 | /* Release the critical section... 
*/ 1027 | UMM_CRITICAL_EXIT(); 1028 | 1029 | return( ptr ); 1030 | } 1031 | } else { 1032 | ++ummHeapInfo.usedEntries; 1033 | ummHeapInfo.usedBlocks += curBlocks; 1034 | 1035 | DBG_LOG_FORCE( force, "|0x%08lx|B %5i|NB %5i|PB %5i|Z %5u|\n", 1036 | (unsigned long)(&UMM_BLOCK(blockNo)), 1037 | blockNo, 1038 | UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK, 1039 | UMM_PBLOCK(blockNo), 1040 | (unsigned int)curBlocks ); 1041 | } 1042 | 1043 | blockNo = UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK; 1044 | } 1045 | 1046 | /* 1047 | * Update the accounting totals with information from the last block, the 1048 | * rest must be free! 1049 | */ 1050 | 1051 | { 1052 | size_t curBlocks = UMM_NUMBLOCKS-blockNo; 1053 | ummHeapInfo.freeBlocks += curBlocks; 1054 | ummHeapInfo.totalBlocks += curBlocks; 1055 | 1056 | if (ummHeapInfo.maxFreeContiguousBlocks < curBlocks) { 1057 | ummHeapInfo.maxFreeContiguousBlocks = curBlocks; 1058 | } 1059 | } 1060 | 1061 | DBG_LOG_FORCE( force, "|0x%08lx|B %5i|NB %5i|PB %5i|Z %5i|NF %5i|PF %5i|\n", 1062 | (unsigned long)(&UMM_BLOCK(blockNo)), 1063 | blockNo, 1064 | UMM_NBLOCK(blockNo) & UMM_BLOCKNO_MASK, 1065 | UMM_PBLOCK(blockNo), 1066 | UMM_NUMBLOCKS-blockNo, 1067 | UMM_NFREE(blockNo), 1068 | UMM_PFREE(blockNo) ); 1069 | 1070 | DBG_LOG_FORCE( force, "Total Entries %5i Used Entries %5i Free Entries %5i\n", 1071 | ummHeapInfo.totalEntries, 1072 | ummHeapInfo.usedEntries, 1073 | ummHeapInfo.freeEntries ); 1074 | 1075 | DBG_LOG_FORCE( force, "Total Blocks %5i Used Blocks %5i Free Blocks %5i\n", 1076 | ummHeapInfo.totalBlocks, 1077 | ummHeapInfo.usedBlocks, 1078 | ummHeapInfo.freeBlocks ); 1079 | 1080 | /* Release the critical section... 
*/ 1081 | UMM_CRITICAL_EXIT(); 1082 | 1083 | return( NULL ); 1084 | } 1085 | 1086 | /* ------------------------------------------------------------------------ */ 1087 | 1088 | static unsigned short int umm_blocks( size_t size ) { 1089 | 1090 | /* 1091 | * The calculation of the block size is not too difficult, but there are 1092 | * a few little things that we need to be mindful of. 1093 | * 1094 | * When a block is removed from the free list, the space used by the free 1095 | * pointers is available for data. That's what the first calculation 1096 | * of size is doing. 1097 | */ 1098 | 1099 | if( size <= (sizeof(((umm_block *)0)->body)) ) 1100 | return( 1 ); 1101 | 1102 | /* 1103 | * If it's for more than that, then we need to figure out how many 1104 | * additional whole umm_block-sized blocks are required. 1105 | */ 1106 | 1107 | size -= ( 1 + (sizeof(((umm_block *)0)->body)) ); 1108 | 1109 | return( 2 + size/(sizeof(umm_block)) ); 1110 | } 1111 | 1112 | /* ------------------------------------------------------------------------ */ 1113 | 1114 | /* 1115 | * Split the block `c` into two blocks: `c` and `c + blocks`. 1116 | * 1117 | * - `cur_freemask` should be `0` if `c` is used, or `UMM_FREELIST_MASK` 1118 | * otherwise. 1119 | * - `new_freemask` should be `0` if `c + blocks` is used, or `UMM_FREELIST_MASK` 1120 | * otherwise. 1121 | * 1122 | * Note that free pointers are NOT modified by this function.
1123 | */ 1124 | static void umm_make_new_block( unsigned short int c, 1125 | unsigned short int blocks, 1126 | unsigned short int cur_freemask, unsigned short int new_freemask ) { 1127 | 1128 | UMM_NBLOCK(c+blocks) = (UMM_NBLOCK(c) & UMM_BLOCKNO_MASK) | new_freemask; 1129 | UMM_PBLOCK(c+blocks) = c; 1130 | 1131 | UMM_PBLOCK(UMM_NBLOCK(c) & UMM_BLOCKNO_MASK) = (c+blocks); 1132 | UMM_NBLOCK(c) = (c+blocks) | cur_freemask; 1133 | } 1134 | 1135 | /* ------------------------------------------------------------------------ */ 1136 | 1137 | static void umm_disconnect_from_free_list( unsigned short int c ) { 1138 | /* Disconnect this block from the FREE list */ 1139 | 1140 | UMM_NFREE(UMM_PFREE(c)) = UMM_NFREE(c); 1141 | UMM_PFREE(UMM_NFREE(c)) = UMM_PFREE(c); 1142 | 1143 | /* And clear the free block indicator */ 1144 | 1145 | UMM_NBLOCK(c) &= (~UMM_FREELIST_MASK); 1146 | } 1147 | 1148 | /* ------------------------------------------------------------------------ */ 1149 | 1150 | static void umm_assimilate_up( unsigned short int c ) { 1151 | 1152 | if( UMM_NBLOCK(UMM_NBLOCK(c)) & UMM_FREELIST_MASK ) { 1153 | /* 1154 | * The next block is a free block, so assimilate up and remove it from 1155 | * the free list 1156 | */ 1157 | 1158 | DBG_LOG_DEBUG( "Assimilate up to next block, which is FREE\n" ); 1159 | 1160 | /* Disconnect the next block from the FREE list */ 1161 | 1162 | umm_disconnect_from_free_list( UMM_NBLOCK(c) ); 1163 | 1164 | /* Assimilate the next block with this one */ 1165 | 1166 | UMM_PBLOCK(UMM_NBLOCK(UMM_NBLOCK(c)) & UMM_BLOCKNO_MASK) = c; 1167 | UMM_NBLOCK(c) = UMM_NBLOCK(UMM_NBLOCK(c)) & UMM_BLOCKNO_MASK; 1168 | } 1169 | } 1170 | 1171 | /* ------------------------------------------------------------------------ */ 1172 | 1173 | static unsigned short int umm_assimilate_down( unsigned short int c, unsigned short int freemask ) { 1174 | 1175 | UMM_NBLOCK(UMM_PBLOCK(c)) = UMM_NBLOCK(c) | freemask; 1176 | UMM_PBLOCK(UMM_NBLOCK(c)) = UMM_PBLOCK(c); 1177 | 1178 
| return( UMM_PBLOCK(c) ); 1179 | } 1180 | 1181 | /* ------------------------------------------------------------------------- */ 1182 | 1183 | void umm_init( void ) { 1184 | /* init heap pointer and size, and memset it to 0 */ 1185 | umm_heap = (umm_block *)UMM_MALLOC_CFG__HEAP_ADDR; 1186 | umm_numblocks = (UMM_MALLOC_CFG__HEAP_SIZE / sizeof(umm_block)); 1187 | memset(umm_heap, 0x00, UMM_MALLOC_CFG__HEAP_SIZE); 1188 | 1189 | /* setup initial blank heap structure */ 1190 | { 1191 | /* index of the 0th `umm_block` */ 1192 | const unsigned short int block_0th = 0; 1193 | /* index of the 1st `umm_block` */ 1194 | const unsigned short int block_1th = 1; 1195 | /* index of the last `umm_block` */ 1196 | const unsigned short int block_last = UMM_NUMBLOCKS - 1; 1197 | 1198 | /* setup the 0th `umm_block`, which just points to the 1st */ 1199 | UMM_NBLOCK(block_0th) = block_1th; 1200 | UMM_NFREE(block_0th) = block_1th; 1201 | 1202 | /* 1203 | * Now, we need to set the whole heap space as a huge free block. We should 1204 | * not touch the 0th `umm_block`, since it's special: the 0th `umm_block` 1205 | * is the head of the free block list. It's a part of the heap invariant. 1206 | * 1207 | * See the detailed explanation at the beginning of the file. 1208 | */ 1209 | 1210 | /* 1211 | * The 1st `umm_block` has pointers: 1212 | * 1213 | * - next `umm_block`: the last one 1214 | * - prev `umm_block`: the 0th 1215 | * 1216 | * Plus, it's a free `umm_block`, so we need to apply `UMM_FREELIST_MASK` 1217 | * 1218 | * And it's the last free block, so the next free block is 0.
1219 | */ 1220 | UMM_NBLOCK(block_1th) = block_last | UMM_FREELIST_MASK; 1221 | UMM_NFREE(block_1th) = 0; 1222 | UMM_PBLOCK(block_1th) = block_0th; 1223 | UMM_PFREE(block_1th) = block_0th; 1224 | 1225 | /* 1226 | * The last `umm_block` has pointers: 1227 | * 1228 | * - next `umm_block`: 0 (meaning, there are no more `umm_blocks`) 1229 | * - prev `umm_block`: the 1st 1230 | * 1231 | * It's not a free block, so we don't touch NFREE / PFREE at all. 1232 | */ 1233 | UMM_NBLOCK(block_last) = 0; 1234 | UMM_PBLOCK(block_last) = block_1th; 1235 | } 1236 | } 1237 | 1238 | /* ------------------------------------------------------------------------ */ 1239 | 1240 | static void _umm_free( void *ptr ) { 1241 | 1242 | unsigned short int c; 1243 | 1244 | /* If we're being asked to free a NULL pointer, well that's just silly! */ 1245 | 1246 | if( (void *)0 == ptr ) { 1247 | DBG_LOG_DEBUG( "free a null pointer -> do nothing\n" ); 1248 | 1249 | return; 1250 | } 1251 | 1252 | /* 1253 | * FIXME: At some point it might be a good idea to add a check to make sure 1254 | * that the pointer we're being asked to free up is actually within 1255 | * the umm_heap! 1256 | * 1257 | * NOTE: See the new umm_info() function that you can use to see if a ptr is 1258 | * on the free list! 1259 | */ 1260 | 1261 | /* Protect the critical section... */ 1262 | UMM_CRITICAL_ENTRY(); 1263 | 1264 | /* Figure out which block we're in. Note the use of truncated division... */ 1265 | 1266 | c = (((char *)ptr)-(char *)(&(umm_heap[0])))/sizeof(umm_block); 1267 | 1268 | DBG_LOG_DEBUG( "Freeing block %6i\n", c ); 1269 | 1270 | /* Now let's assimilate this block with the next one if possible.
*/ 1271 | 1272 | umm_assimilate_up( c ); 1273 | 1274 | /* Then assimilate with the previous block if possible */ 1275 | 1276 | if( UMM_NBLOCK(UMM_PBLOCK(c)) & UMM_FREELIST_MASK ) { 1277 | 1278 | DBG_LOG_DEBUG( "Assimilate down to next block, which is FREE\n" ); 1279 | 1280 | c = umm_assimilate_down(c, UMM_FREELIST_MASK); 1281 | } else { 1282 | /* 1283 | * The previous block is not a free block, so add this one to the head 1284 | * of the free list 1285 | */ 1286 | 1287 | DBG_LOG_DEBUG( "Just add to head of free list\n" ); 1288 | 1289 | UMM_PFREE(UMM_NFREE(0)) = c; 1290 | UMM_NFREE(c) = UMM_NFREE(0); 1291 | UMM_PFREE(c) = 0; 1292 | UMM_NFREE(0) = c; 1293 | 1294 | UMM_NBLOCK(c) |= UMM_FREELIST_MASK; 1295 | } 1296 | 1297 | #if 0 1298 | /* 1299 | * The following is experimental code that checks to see if the block we just 1300 | * freed can be assimilated with the very last block - it's pretty convoluted in 1301 | * terms of block index manipulation, and has absolutely no effect on heap 1302 | * fragmentation. I'm not sure that it's worth including but I've left it 1303 | * here for posterity. 1304 | */ 1305 | 1306 | if( 0 == UMM_NBLOCK(UMM_NBLOCK(c) & UMM_BLOCKNO_MASK ) ) { 1307 | 1308 | if( UMM_PBLOCK(UMM_NBLOCK(c) & UMM_BLOCKNO_MASK) != UMM_PFREE(UMM_NBLOCK(c) & UMM_BLOCKNO_MASK) ) { 1309 | UMM_NFREE(UMM_PFREE(UMM_NBLOCK(c) & UMM_BLOCKNO_MASK)) = c; 1310 | UMM_NFREE(UMM_PFREE(c)) = UMM_NFREE(c); 1311 | UMM_PFREE(UMM_NFREE(c)) = UMM_PFREE(c); 1312 | UMM_PFREE(c) = UMM_PFREE(UMM_NBLOCK(c) & UMM_BLOCKNO_MASK); 1313 | } 1314 | 1315 | UMM_NFREE(c) = 0; 1316 | UMM_NBLOCK(c) = 0; 1317 | } 1318 | #endif 1319 | 1320 | /* Release the critical section... 
*/ 1321 | UMM_CRITICAL_EXIT(); 1322 | } 1323 | 1324 | /* ------------------------------------------------------------------------ */ 1325 | 1326 | static void *_umm_malloc( size_t size ) { 1327 | unsigned short int blocks; 1328 | unsigned short int blockSize = 0; 1329 | 1330 | unsigned short int bestSize; 1331 | unsigned short int bestBlock; 1332 | 1333 | unsigned short int cf; 1334 | 1335 | if (umm_heap == NULL) { 1336 | umm_init(); 1337 | } 1338 | 1339 | /* 1340 | * the very first thing we do is figure out if we're being asked to allocate 1341 | * a size of 0 - and if we are we'll simply return a null pointer. if not 1342 | * then reduce the size by 1 byte so that the subsequent calculations on 1343 | * the number of blocks to allocate are easier... 1344 | */ 1345 | 1346 | if( 0 == size ) { 1347 | DBG_LOG_DEBUG( "malloc a block of 0 bytes -> do nothing\n" ); 1348 | 1349 | return( (void *)NULL ); 1350 | } 1351 | 1352 | /* Protect the critical section... */ 1353 | UMM_CRITICAL_ENTRY(); 1354 | 1355 | blocks = umm_blocks( size ); 1356 | 1357 | /* 1358 | * Now we can scan through the free list until we find a space that's big 1359 | * enough to hold the number of blocks we need. 1360 | * 1361 | * This part may be customized to be a best-fit, worst-fit, or first-fit 1362 | * algorithm 1363 | */ 1364 | 1365 | cf = UMM_NFREE(0); 1366 | 1367 | bestBlock = UMM_NFREE(0); 1368 | bestSize = 0x7FFF; 1369 | 1370 | while( cf ) { 1371 | blockSize = (UMM_NBLOCK(cf) & UMM_BLOCKNO_MASK) - cf; 1372 | 1373 | DBG_LOG_TRACE( "Looking at block %6i size %6i\n", cf, blockSize ); 1374 | 1375 | #if defined UMM_FIRST_FIT 1376 | /* This is the first block that fits! 
*/ 1377 | if( (blockSize >= blocks) ) 1378 | break; 1379 | #elif defined UMM_BEST_FIT 1380 | if( (blockSize >= blocks) && (blockSize < bestSize) ) { 1381 | bestBlock = cf; 1382 | bestSize = blockSize; 1383 | } 1384 | #endif 1385 | 1386 | cf = UMM_NFREE(cf); 1387 | } 1388 | 1389 | if( 0x7FFF != bestSize ) { 1390 | cf = bestBlock; 1391 | blockSize = bestSize; 1392 | } 1393 | 1394 | if( UMM_NBLOCK(cf) & UMM_BLOCKNO_MASK && blockSize >= blocks ) { 1395 | /* 1396 | * This is an existing block in the memory heap; we just need to split off 1397 | * what we need, unlink it from the free list and mark it as in use, and 1398 | * link the rest of the block back into the freelist as if it was a new 1399 | * block on the free list... 1400 | */ 1401 | 1402 | if( blockSize == blocks ) { 1403 | /* It's an exact fit and we don't need to split off a block. */ 1404 | DBG_LOG_DEBUG( "Allocating %6i blocks starting at %6i - exact\n", blocks, cf ); 1405 | 1406 | /* Disconnect this block from the FREE list */ 1407 | 1408 | umm_disconnect_from_free_list( cf ); 1409 | 1410 | } else { 1411 | /* It's not an exact fit and we need to split off a block. */ 1412 | DBG_LOG_DEBUG( "Allocating %6i blocks starting at %6i - existing\n", blocks, cf ); 1413 | 1414 | /* 1415 | * Split the current free block `cf` into two blocks. The first one will be 1416 | * returned to the user, so it's not free, and the second one will be free. 1417 | */ 1418 | umm_make_new_block( cf, blocks, 1419 | 0/*`cf` is not free*/, 1420 | UMM_FREELIST_MASK/*new block is free*/); 1421 | 1422 | /* 1423 | * `umm_make_new_block()` does not update the free pointers (it affects 1424 | * only free flags), but effectively we've just moved the beginning of the 1425 | * free block from `cf` to `cf + blocks`. So we have to adjust pointers 1426 | * to and from adjacent free blocks.
1427 | */ 1428 | 1429 | /* previous free block */ 1430 | UMM_NFREE( UMM_PFREE(cf) ) = cf + blocks; 1431 | UMM_PFREE( cf + blocks ) = UMM_PFREE(cf); 1432 | 1433 | /* next free block */ 1434 | UMM_PFREE( UMM_NFREE(cf) ) = cf + blocks; 1435 | UMM_NFREE( cf + blocks ) = UMM_NFREE(cf); 1436 | } 1437 | } else { 1438 | /* Out of memory */ 1439 | 1440 | DBG_LOG_DEBUG( "Can't allocate %5i blocks\n", blocks ); 1441 | 1442 | /* Release the critical section... */ 1443 | UMM_CRITICAL_EXIT(); 1444 | 1445 | return( (void *)NULL ); 1446 | } 1447 | 1448 | /* Release the critical section... */ 1449 | UMM_CRITICAL_EXIT(); 1450 | 1451 | return( (void *)&UMM_DATA(cf) ); 1452 | } 1453 | 1454 | /* ------------------------------------------------------------------------ */ 1455 | 1456 | static void *_umm_realloc( void *ptr, size_t size ) { 1457 | 1458 | unsigned short int blocks; 1459 | unsigned short int blockSize; 1460 | 1461 | unsigned short int c; 1462 | 1463 | size_t curSize; 1464 | 1465 | if (umm_heap == NULL) { 1466 | umm_init(); 1467 | } 1468 | 1469 | /* 1470 | * This code looks after the case of a NULL value for ptr. The ANSI C 1471 | * standard says that if ptr is NULL and size is non-zero, then we've 1472 | * got to work the same as malloc(). If size is also 0, then our version 1473 | * of malloc() returns a NULL pointer, which is OK as far as the ANSI C 1474 | * standard is concerned. 1475 | */ 1476 | 1477 | if( ((void *)NULL == ptr) ) { 1478 | DBG_LOG_DEBUG( "realloc the NULL pointer - call malloc()\n" ); 1479 | 1480 | return( _umm_malloc(size) ); 1481 | } 1482 | 1483 | /* 1484 | * Now we're sure that we have a non-NULL ptr, but we're not sure what 1485 | * we should do with it. If the size is 0, then the ANSI C standard says that 1486 | * we should operate the same as free.
1487 | */ 1488 | 1489 | if( 0 == size ) { 1490 | DBG_LOG_DEBUG( "realloc to 0 size, just free the block\n" ); 1491 | 1492 | _umm_free( ptr ); 1493 | 1494 | return( (void *)NULL ); 1495 | } 1496 | 1497 | /* Protect the critical section... */ 1498 | UMM_CRITICAL_ENTRY(); 1499 | 1500 | /* 1501 | * Otherwise we need to actually do a reallocation. A naive approach 1502 | * would be to malloc() a new block of the correct size, copy the old data 1503 | * to the new block, and then free the old block. 1504 | * 1505 | * While this will work, we end up doing a lot of possibly unnecessary 1506 | * copying. So first, let's figure out how many blocks we'll need. 1507 | */ 1508 | 1509 | blocks = umm_blocks( size ); 1510 | 1511 | /* Figure out which block we're in. Note the use of truncated division... */ 1512 | 1513 | c = (((char *)ptr)-(char *)(&(umm_heap[0])))/sizeof(umm_block); 1514 | 1515 | /* Figure out how big this block is... */ 1516 | 1517 | blockSize = (UMM_NBLOCK(c) - c); 1518 | 1519 | /* Figure out how many bytes are in this block */ 1520 | 1521 | curSize = (blockSize*sizeof(umm_block))-(sizeof(((umm_block *)0)->header)); 1522 | 1523 | /* 1524 | * Ok, now that we're here, we know the block number of the original chunk 1525 | * of memory, and we know how much new memory we want, and we know the original 1526 | * block size... 1527 | */ 1528 | 1529 | if( blockSize == blocks ) { 1530 | /* This space intentionally left blank - return the original pointer! */ 1531 | 1532 | DBG_LOG_DEBUG( "realloc the same size block - %i, do nothing\n", blocks ); 1533 | 1534 | /* Release the critical section... */ 1535 | UMM_CRITICAL_EXIT(); 1536 | 1537 | return( ptr ); 1538 | } 1539 | 1540 | /* 1541 | * Now we have a block size that could be bigger or smaller. Either 1542 | * way, try to assimilate up to the next block before doing anything...
1543 | * 1544 | * If it's still too small, we have to free it anyways and it will save the 1545 | * assimilation step later in free :-) 1546 | */ 1547 | 1548 | umm_assimilate_up( c ); 1549 | 1550 | /* 1551 | * Now check if it might help to assimilate down, but don't actually 1552 | * do the downward assimilation unless the resulting block will hold the 1553 | * new request! If this block of code runs, then the new block will 1554 | * either fit the request exactly, or be larger than the request. 1555 | */ 1556 | 1557 | if( (UMM_NBLOCK(UMM_PBLOCK(c)) & UMM_FREELIST_MASK) && 1558 | (blocks <= (UMM_NBLOCK(c)-UMM_PBLOCK(c))) ) { 1559 | 1560 | /* Check if the resulting block would be big enough... */ 1561 | 1562 | DBG_LOG_DEBUG( "realloc() could assimilate down %i blocks - fits!\n\r", c-UMM_PBLOCK(c) ); 1563 | 1564 | /* Disconnect the previous block from the FREE list */ 1565 | 1566 | umm_disconnect_from_free_list( UMM_PBLOCK(c) ); 1567 | 1568 | /* 1569 | * Connect the previous block to the next block ... and then 1570 | * realign the current block pointer 1571 | */ 1572 | 1573 | c = umm_assimilate_down(c, 0); 1574 | 1575 | /* 1576 | * Move the bytes down to the new block we just created, but be sure to move 1577 | * only the original bytes. 1578 | */ 1579 | 1580 | memmove( (void *)&UMM_DATA(c), ptr, curSize ); 1581 | 1582 | /* And don't forget to adjust the pointer to the new block location! */ 1583 | 1584 | ptr = (void *)&UMM_DATA(c); 1585 | } 1586 | 1587 | /* Now calculate the block size again...and we'll have three cases */ 1588 | 1589 | blockSize = (UMM_NBLOCK(c) - c); 1590 | 1591 | if( blockSize == blocks ) { 1592 | /* This space intentionally left blank - return the original pointer! 
*/ 1593 | 1594 | DBG_LOG_DEBUG( "realloc the same size block - %i, do nothing\n", blocks ); 1595 | 1596 | } else if (blockSize > blocks ) { 1597 | /* 1598 | * New block is smaller than the old block, so just make a new block 1599 | * at the end of this one and put it up on the free list... 1600 | */ 1601 | 1602 | DBG_LOG_DEBUG( "realloc %i to a smaller block %i, shrink and free the leftover bits\n", blockSize, blocks ); 1603 | 1604 | umm_make_new_block( c, blocks, 0, 0 ); 1605 | _umm_free( (void *)&UMM_DATA(c+blocks) ); 1606 | } else { 1607 | /* New block is bigger than the old block... */ 1608 | 1609 | void *oldptr = ptr; 1610 | 1611 | DBG_LOG_DEBUG( "realloc %i to a bigger block %i, make new, copy, and free the old\n", blockSize, blocks ); 1612 | 1613 | /* 1614 | * Now _umm_malloc() a new one, copy the old data to the new block, and 1615 | * free up the old block, but only if the malloc was successful! 1616 | */ 1617 | 1618 | if( (ptr = _umm_malloc( size )) ) { 1619 | memcpy( ptr, oldptr, curSize ); 1620 | } 1621 | 1622 | _umm_free( oldptr ); 1623 | } 1624 | 1625 | /* Release the critical section...
*/ 1626 | UMM_CRITICAL_EXIT(); 1627 | 1628 | return( ptr ); 1629 | } 1630 | 1631 | /* ------------------------------------------------------------------------ */ 1632 | 1633 | void *umm_malloc( size_t size ) { 1634 | void *ret; 1635 | 1636 | /* check the poison of each block, if poisoning is enabled */ 1637 | if (!CHECK_POISON_ALL_BLOCKS()) { 1638 | return NULL; 1639 | } 1640 | 1641 | /* check full integrity of the heap, if this check is enabled */ 1642 | if (!INTEGRITY_CHECK()) { 1643 | return NULL; 1644 | } 1645 | 1646 | size += POISON_SIZE(size); 1647 | 1648 | ret = _umm_malloc( size ); 1649 | 1650 | ret = GET_POISONED(ret, size); 1651 | 1652 | return ret; 1653 | } 1654 | 1655 | /* ------------------------------------------------------------------------ */ 1656 | 1657 | void *umm_calloc( size_t num, size_t item_size ) { 1658 | void *ret; 1659 | size_t size = item_size * num; 1660 | 1661 | /* check the poison of each block, if poisoning is enabled */ 1662 | if (!CHECK_POISON_ALL_BLOCKS()) { 1663 | return NULL; 1664 | } 1665 | 1666 | /* check full integrity of the heap, if this check is enabled */ 1667 | if (!INTEGRITY_CHECK()) { 1668 | return NULL; 1669 | } 1670 | 1671 | size += POISON_SIZE(size); 1672 | ret = _umm_malloc(size); 1673 | if (ret != NULL) { memset(ret, 0x00, size); } /* don't memset a failed allocation */ 1674 | 1675 | ret = GET_POISONED(ret, size); 1676 | 1677 | return ret; 1678 | } 1679 | 1680 | /* ------------------------------------------------------------------------ */ 1681 | 1682 | void *umm_realloc( void *ptr, size_t size ) { 1683 | void *ret; 1684 | 1685 | ptr = GET_UNPOISONED(ptr); 1686 | 1687 | /* check the poison of each block, if poisoning is enabled */ 1688 | if (!CHECK_POISON_ALL_BLOCKS()) { 1689 | return NULL; 1690 | } 1691 | 1692 | /* check full integrity of the heap, if this check is enabled */ 1693 | if (!INTEGRITY_CHECK()) { 1694 | return NULL; 1695 | } 1696 | 1697 | size += POISON_SIZE(size); 1698 | ret = _umm_realloc( ptr, size ); 1699 | 1700 | ret = GET_POISONED(ret, size); 1701 | 1702 | return
ret; 1703 | } 1704 | 1705 | /* ------------------------------------------------------------------------ */ 1706 | 1707 | void umm_free( void *ptr ) { 1708 | 1709 | ptr = GET_UNPOISONED(ptr); 1710 | 1711 | /* check the poison of each block, if poisoning is enabled */ 1712 | if (!CHECK_POISON_ALL_BLOCKS()) { 1713 | return; 1714 | } 1715 | 1716 | /* check full integrity of the heap, if this check is enabled */ 1717 | if (!INTEGRITY_CHECK()) { 1718 | return; 1719 | } 1720 | 1721 | _umm_free( ptr ); 1722 | } 1723 | 1724 | /* ------------------------------------------------------------------------ */ 1725 | 1726 | size_t umm_free_heap_size( void ) { 1727 | umm_info(NULL, 0); 1728 | return (size_t)ummHeapInfo.freeBlocks * sizeof(umm_block); 1729 | } 1730 | 1731 | /* ------------------------------------------------------------------------ */ 1732 | -------------------------------------------------------------------------------- /umm_malloc.h: -------------------------------------------------------------------------------- 1 | /* ---------------------------------------------------------------------------- 2 | * umm_malloc.h - a memory allocator for embedded systems (microcontrollers) 3 | * 4 | * See copyright notice in LICENSE.TXT 5 | * ---------------------------------------------------------------------------- 6 | */ 7 | 8 | #ifndef UMM_MALLOC_H 9 | #define UMM_MALLOC_H 10 | 11 | /* ------------------------------------------------------------------------ */ 12 | 13 | #include "umm_malloc_cfg.h" /* user-dependent */ 14 | 15 | typedef struct UMM_HEAP_INFO_t { 16 | unsigned short int totalEntries; 17 | unsigned short int usedEntries; 18 | unsigned short int freeEntries; 19 | 20 | unsigned short int totalBlocks; 21 | unsigned short int usedBlocks; 22 | unsigned short int freeBlocks; 23 | 24 | unsigned short int maxFreeContiguousBlocks; 25 | } 26 | UMM_HEAP_INFO; 27 | 28 | extern UMM_HEAP_INFO ummHeapInfo; 29 | 30 | void umm_init( void ); 31 | 32 | void *umm_info( void *ptr, int
force ); 33 | 34 | void *umm_malloc( size_t size ); 35 | void *umm_calloc( size_t num, size_t size ); 36 | void *umm_realloc( void *ptr, size_t size ); 37 | void umm_free( void *ptr ); 38 | 39 | size_t umm_free_heap_size( void ); 40 | 41 | 42 | /* ------------------------------------------------------------------------ */ 43 | 44 | #endif /* UMM_MALLOC_H */ 45 | -------------------------------------------------------------------------------- /umm_malloc_cfg_example.h: -------------------------------------------------------------------------------- 1 | /* 2 | * Configuration for umm_malloc 3 | */ 4 | 5 | #ifndef _UMM_MALLOC_CFG_H 6 | #define _UMM_MALLOC_CFG_H 7 | 8 | /* 9 | * There are a number of defines you can set at compile time that affect how 10 | * the memory allocator will operate. 11 | * You can set them in your config file umm_malloc_cfg.h. 12 | * In GNU C, you also can set these compile time defines like this: 13 | * 14 | * -D UMM_TEST_MAIN 15 | * 16 | * Set this if you want to compile in the test suite at the end of this file. 17 | * 18 | * If you leave this define unset, then you might want to set another one: 19 | * 20 | * -D UMM_REDEFINE_MEM_FUNCTIONS 21 | * 22 | * If you leave this define unset, then the function names are left alone as 23 | * umm_malloc() umm_free() and umm_realloc() so that they cannot be confused 24 | * with the C runtime functions malloc() free() and realloc() 25 | * 26 | * If you do set this define, then the function names become malloc() 27 | * free() and realloc() so that they can be used as the C runtime functions 28 | * in an embedded environment. 
29 | * 30 | * -D UMM_BEST_FIT (default) 31 | * 32 | * Set this if you want to use a best-fit algorithm for allocating new 33 | * blocks 34 | * 35 | * -D UMM_FIRST_FIT 36 | * 37 | * Set this if you want to use a first-fit algorithm for allocating new 38 | * blocks 39 | * 40 | * -D UMM_DBG_LOG_LEVEL=n 41 | * 42 | * Set n to a value from 0 to 6 depending on how verbose you want the debug 43 | * log to be 44 | * 45 | * ---------------------------------------------------------------------------- 46 | * 47 | * Support for this library in a multitasking environment is provided when 48 | * you add bodies to the UMM_CRITICAL_ENTRY and UMM_CRITICAL_EXIT macros 49 | * (see below) 50 | * 51 | * ---------------------------------------------------------------------------- 52 | */ 53 | 54 | /* Start address and the size of the heap */ 55 | #define UMM_MALLOC_CFG__HEAP_ADDR /* TODO */ 56 | #define UMM_MALLOC_CFG__HEAP_SIZE /* TODO */ 57 | 58 | /* A couple of macros to make packing structures less compiler dependent */ 59 | 60 | #define UMM_H_ATTPACKPRE 61 | #define UMM_H_ATTPACKSUF __attribute__((__packed__)) 62 | 63 | /* 64 | * A couple of macros to make it easier to protect the memory allocator 65 | * in a multitasking system. You should set these macros up to use whatever 66 | * your system uses for this purpose. You can disable interrupts entirely, or 67 | * just disable task switching - it's up to you 68 | * 69 | * NOTE WELL that these macros MUST be allowed to nest, because umm_free() is 70 | * called from within umm_malloc() 71 | */ 72 | 73 | #define UMM_CRITICAL_ENTRY() 74 | #define UMM_CRITICAL_EXIT() 75 | 76 | /* 77 | * -D UMM_INTEGRITY_CHECK : 78 | * 79 | * Enables heap integrity check before any heap operation. It affects 80 | * performance, but does NOT consume extra memory.
81 | * 82 | * If an integrity violation is detected, a message is printed and the 83 | * user-provided callback is called: `UMM_HEAP_CORRUPTION_CB()` 84 | * 85 | * Note that not all buffer overruns are detected: each buffer is aligned to 86 | * 4 bytes, so there might be some trailing "extra" bytes which are not checked 87 | * for corruption. 88 | */ 89 | /* 90 | #define UMM_INTEGRITY_CHECK 91 | */ 92 | 93 | /* 94 | * -D UMM_POISON : 95 | * 96 | * Enables heap poisoning: add a predefined value (poison) before and after each 97 | * allocation, and check before each heap operation that no poison is 98 | * corrupted. 99 | * 100 | * Other than the poison itself, we need to store the exact user-requested length 101 | * for each buffer, so that an overrun by just 1 byte will always be noticed. 102 | * 103 | * Customizations: 104 | * 105 | * UMM_POISON_SIZE_BEFORE: 106 | * Number of poison bytes before each block, e.g. 2 107 | * UMM_POISON_SIZE_AFTER: 108 | * Number of poison bytes after each block, e.g. 2 109 | * UMM_POISONED_BLOCK_LEN_TYPE: 110 | * Type of the exact buffer length, e.g. `short` 111 | * 112 | * NOTE: each allocated buffer is aligned to 4 bytes. But when poisoning is 113 | * enabled, the actual pointer returned to the user is shifted by 114 | * `(sizeof(UMM_POISONED_BLOCK_LEN_TYPE) + UMM_POISON_SIZE_BEFORE)`. 115 | * It's your responsibility to keep the resulting pointers aligned appropriately. 116 | * 117 | * If poison corruption is detected, a message is printed and the 118 | * user-provided callback is called: `UMM_HEAP_CORRUPTION_CB()` 119 | */ 120 | /* 121 | #define UMM_POISON 122 | */ 123 | #define UMM_POISON_SIZE_BEFORE 2 124 | #define UMM_POISON_SIZE_AFTER 2 125 | #define UMM_POISONED_BLOCK_LEN_TYPE short 126 | 127 | #endif /* _UMM_MALLOC_CFG_H */ 128 | --------------------------------------------------------------------------------
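Since `UMM_MALLOC_CFG__HEAP_ADDR` and `UMM_MALLOC_CFG__HEAP_SIZE` are left as TODOs in the example config, here is one hypothetical way to fill them in: back the heap with a static, 4-byte-aligned array instead of a linker-provided region. The array name and the 4 KB size are illustrative, not part of the library.

```c
#include <stdint.h>

/* Hypothetical heap backing store: a static array keeps the heap out of
 * the linker script entirely. Size and name are illustrative. Note that
 * umm_init() memsets the whole area itself. */
static uint8_t umm_heap_area[4096] __attribute__((aligned(4)));

#define UMM_MALLOC_CFG__HEAP_ADDR (umm_heap_area)
#define UMM_MALLOC_CFG__HEAP_SIZE (sizeof(umm_heap_area))

/* Single-threaded build: the critical-section macros stay empty. In an
 * RTOS or interrupt context they must disable/restore task switching or
 * interrupts, and they MUST be allowed to nest (see the note above). */
#define UMM_CRITICAL_ENTRY()
#define UMM_CRITICAL_EXIT()
```

On a real target you would more likely point `HEAP_ADDR`/`HEAP_SIZE` at linker-script symbols; the static-array form is convenient for host-side testing, as in the bundled test suite.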