├── .gitignore
├── CasTocFormats.md
├── LICENSE
├── README.md
├── compression.go
├── cpp
│   ├── Pack.bat
│   ├── UnpackAllplusManifest.bat
│   ├── castoc_x64.dll
│   ├── castoc_x64.h
│   ├── fix_manifest_and_pack.py
│   ├── main.cpp
│   └── main.exe
├── dllAPI.go
├── fIoStoreStructs.go
├── go.mod
├── go.sum
├── helpers.go
├── manifest.go
├── pack.go
├── req
│   └── Packed_P.pak
├── ucas.go
└── utoc.go

/.gitignore:
--------------------------------------------------------------------------------
1 | /files/
2 | */unpacked*/
3 | /test/
4 | test*
5 | *.bin
6 | /.idea/
7 | Runtime/*
8 | packProgram/Runtime/
9 | unpacking*/
--------------------------------------------------------------------------------
/CasTocFormats.md:
--------------------------------------------------------------------------------
1 | # utoc File Format
2 | 
3 | This document describes the file format of .utoc files and their relation to the .ucas files of the Unreal Engine.
4 | This has been reverse engineered based on the game files of Grounded, and the source code of the Unreal Engine.
5 | That being said, let's dive right into it.
6 | 
7 | The UTOC struct is made up of the following properties:
8 | 
9 | ```
10 | FIoStoreTocHeader - Header
11 | int64 - TocFileSize
12 | TArray ChunkIds
13 | TMap ChunkIdToIndex
14 | TArray CompressionBlocks
15 | TArray ChunkMetas
16 | TArray Compression
17 | ```
18 | 
19 | First the header:
20 | For any extra data structure used in the header, an indent is made.
21 | Everything in the file is Little Endian.
22 | ```
23 | FILE HEADER (FIoStoreTocHeader), total bytes: 144
24 | uint8 {1}[16] - static array of 16 bytes for the MAGIC word (string: "-==--==--==--==-")
25 | uint8 {1} - Version - Current options are Initial(1), DirectoryIndex(2), PartitionSize(3)
26 | uint8 + uint16 {3} - Reserved/Padding
27 | uint32 {4} - Header Size (144)
28 | uint32 {4} - Entry Count
29 | uint32 {4} - Count of Compressed Blocks
30 | uint32 {4} - Entry size of Compressed Block (12)
31 | uint32 {4} - Count Compression method names
32 | uint32 {4} - Compression method name length (32)
33 | uint32 {4} - Compression block size (0x10000)
34 | uint32 {4} - Directory index Size
35 | uint32 {4} - Partition Count (0) // default value is 0
36 | uint64 {8} - Container ID (FIoContainerID)
37 | FGUID {16} - Guid Encryption Key
38 |     uint32 {4} - A
39 |     uint32 {4} - B
40 |     uint32 {4} - C
41 |     uint32 {4} - D
42 | uint8 {1} - Container flags - uint8 bitmask enum - see https://docs.unrealengine.com/5.1/en-US/API/Runtime/Core/IO/EIoContainerFlags/
43 | uint8 + uint16 - Reserved/Padding caused by bitmask
44 | uint8 [60] - static 60 bytes of padding - "partition size" is among them as well, but seems unused...
45 | ```
46 | Following the header, the total TOC file size is serialized:
47 | ```
48 | int64 {8} - TOC File Size
49 | ```
50 | 
51 | Based on the header, several data structures are parsed.
52 | The way in which these data structures are parsed and how often they occur is listed here, in the form: "(data structure): {name of variable}"
53 | The individual data structures can be found below.
54 | ```
55 | (CHUNK_ID) : {Entry Count}
56 | (OFFSET_AND_LENGTHS) : {Entry Count}
57 | (COMPRESSION_BLOCK) : {Count of Compressed Blocks}
58 | (byte {32}) : {Count Compression method names} // the 32 is derived from the compression method name length
59 | (DIRECTORY_INDEX) : {1} // number of bytes in data structure: Directory index Size
60 | (CHUNK_META) : {Entry Count}
61 | ```
62 | Note: although the array of compression method names has "Count Compression method names" entries, indexing into it should start at 1, as index 0 means "no compression method".
63 | 
64 | After the chunk meta has been read, there should not be any bytes left to parse from the .utoc file.
65 | 
66 | ________
67 | Data structures:
68 | 
69 | ```
70 | CHUNK_ID, total bytes: 12 - Hash made up of 12 uint8 bytes
71 | uint64 {8} - ID
72 | uint16 {2} - Index // seems to be always 0
73 | uint8 {1} - Padding // making it 0x10 aligned
74 | uint8 {1} - Type - See: https://docs.unrealengine.com/4.26/en-US/API/Runtime/Core/IO/EIoChunkType/
75 | ```
76 | This hash identifies one chunk.
77 | As far as I know, the ID is invalid if the value is 0.
78 | Otherwise it seems to be random.
79 | I am not sure what the Index means, as this value is always 0.
80 | Maybe this is relevant if the files are too large to be described by one chunk.
81 | In that case you can have multiple chunks describing a file, possibly with multiple indices, but this is all speculation.
82 | The Type field corresponds to the EIoChunkType of the Unreal Engine.
83 | The values I've seen are 2, 3 and 10.
84 | 
85 | Value 2 means "BulkData" and is used to represent .uasset files.
86 | Value 3 means "OptionalBulkData" and is used to represent .ubulk files.
87 | The .uasset files all have unique chunk IDs, whereas the .ubulk files have an ID that was already taken by a .uasset file.
88 | The very last chunk ID always has value 10, which means "PackageStoreEntry", but I have no clue what this represents.
89 | The Unreal Engine code isn't too clear about it.
90 | I suspect that this has something to do with the FPackageStoreEntry struct.
91 | 
92 | To investigate what it could mean, I set all of those sections to 0 and checked with the UnrealPak Viewer tool (https://github.com/jashking/UnrealPakViewer).
93 | In the one with everything set to zero, it has "dependency counts" at 0, and the class entries are set to "uexp" or "ubulk".
94 | For the other version, the one with data, it has various "dependency counts", and the class entries are set to different things, such as "Texture2D", "StaticMesh", "Skeleton", "SoundWave" etc.
95 | Furthermore, the "ExportObjects" field in the empty one has counts 0, while the one with data does not.
96 | 
97 | 
98 | Update: I have found a few more things about this.
99 | If the UTOC header version is 2, then the first entry in the list of chunk IDs has type 10, which is not "mapped" to a file.
100 | If the UTOC header version is 3, then this is the last entry.
101 | The compression blocks that the corresponding offset-and-length entry points to may or may not be compressed; when decompressed, they form a file with a special file format.
102 | I have dubbed this file format "depsFile", and its specification can be found below.
103 | 
104 | 
105 | 
106 | ```
107 | OFFSET_AND_LENGTHS, total bytes: 10
108 | byte {5} - Offset
109 | byte {5} - Length
110 | ```
111 | Read each as a uint64 with only the lower 5 bytes set; the others should be 0.
112 | There are just as many of these entries as there are Chunk IDs and file entries.
113 | The offsets and lengths locate the data of the individual files that are compressed in the .ucas file.
114 | Note that all data must be decompressed first.
115 | 
116 | **NOTE:** The length in these offset-and-length entries is the length of the *decompressed* file!
117 | Each offset follows directly after the previous entry's data, but it is aligned to 0x10000.
118 | Therefore, there may be large gaps between them.
119 | This is done for indexing in the array of chunks.
120 | 
121 | ```
122 | COMPRESSION_BLOCK, total bytes: 12
123 | byte {5} - Offset
124 | byte {3} - Compressed block size
125 | byte {3} - Uncompressed block size
126 | uint8 {1} - Compression method
127 | ```
128 | Read the offset as a uint64, with only the lower 5 bytes set.
129 | The others should be read as uint32, with only the lower 3 bytes set.
130 | The compression method is interpreted as an index into the list of compression method names.
131 | 
132 | ```Compression Method Names
133 | NAME_Zlib
134 | NAME_Gzip
135 | NAME_LZ4
136 | "Bogus"
137 | "Oodle"
138 | ```
139 | 
140 | The offset is an offset into the .ucas file, and it is always 16-aligned.
141 | The compression block size is the length of the data in the .ucas file.
142 | If the compression block size equals the uncompressed block size, it is obviously not compressed.
143 | If it is compressed, the compression method is an index into the list of compression method names.
144 | Note that index 0 always means "None".
145 | 
146 | As the offset values are always 16-aligned, there are bogus values in between, in the parts that are unallocated.
147 | I don't know the significance of these values, but I think they are just random.
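Decoding the 5- and 3-byte fields above can be sketched in Go as follows (the struct and function names are my own; only the byte layout comes from the spec above):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// CompressionBlock mirrors the 12-byte COMPRESSION_BLOCK entry above.
type CompressionBlock struct {
	Offset           uint64 // stored in 5 bytes
	CompressedSize   uint32 // stored in 3 bytes
	UncompressedSize uint32 // stored in 3 bytes
	Method           uint8  // index into the compression method names; 0 = None
}

// parseCompressionBlock decodes one little-endian 12-byte entry.
func parseCompressionBlock(b [12]byte) CompressionBlock {
	// Widen the 5-byte offset into a full uint64; the upper 3 bytes stay zero.
	var full [8]byte
	copy(full[:], b[0:5])
	var cb CompressionBlock
	cb.Offset = binary.LittleEndian.Uint64(full[:])
	// The two 3-byte sizes are assembled by hand.
	cb.CompressedSize = uint32(b[5]) | uint32(b[6])<<8 | uint32(b[7])<<16
	cb.UncompressedSize = uint32(b[8]) | uint32(b[9])<<8 | uint32(b[10])<<16
	cb.Method = b[11]
	return cb
}

func main() {
	// Synthetic entry: offset 0x10, compressed 0x100, uncompressed 0x200, method 1.
	entry := [12]byte{0x10, 0, 0, 0, 0, 0x00, 0x01, 0, 0x00, 0x02, 0, 1}
	cb := parseCompressionBlock(entry)
	fmt.Printf("offset=%#x compressed=%#x uncompressed=%#x method=%d\n",
		cb.Offset, cb.CompressedSize, cb.UncompressedSize, cb.Method)
	// prints "offset=0x10 compressed=0x100 uncompressed=0x200 method=1"
}
```

A block is then compressed if and only if `cb.CompressedSize != cb.UncompressedSize` (equivalently, if `cb.Method` is nonzero).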
148 | 
149 | 
150 | ```
151 | DIRECTORY_INDEX (total bytes is variable)
152 | uint32 {4} - Mount point string length (includes nullbyte)
153 | byte {X} - Mount point string
154 | uint32 {4} - Directory Index array Entry Count (DIEC)
155 | DIR_INDEX_ENTRY {DIEC * 16} // an array of DIR_INDEX_ENTRY
156 |     uint32 {4} - Directory Name index
157 |     uint32 {4} - First child entry index
158 |     uint32 {4} - First sibling entry index
159 |     uint32 {4} - First file entry index
160 | uint32 {4} - File Index array Entry Count (FIEC)
161 | FILE_INDEX_ENTRY {FIEC * 12} // an array of FILE_INDEX_ENTRY
162 |     uint32 {4} - File name index
163 |     uint32 {4} - Next file entry index
164 |     uint32 {4} - UserData (index, I guess)
165 | uint32 {4} - number of strings of the string array
166 | FSTRING {number_of_strings * (4 + X)} // an array of strings, referred to as StringTable
167 |     uint32 {4} - string length, nullbyte included
168 |     byte {X} - actual string
169 | ```
170 | The directory index entries and the file index entries have a "name" index, which is used to refer to a name in the StringTable.
171 | The child and sibling indices for the Directory entries point to a different Directory entry in the same array.
172 | The file entry indices point to the file entry in the FILE_INDEX_ENTRY array.
173 | 
174 | ```
175 | CHUNK_META, total bytes: 33
176 | byte {32} - Hash of chunk (This is a standard SHA1 hash of 20 bytes, with 12 bytes of padding)
177 | uint8 {1} - Flags
178 | ```
179 | Possible flags are: NoneMetaFlag (0), CompressedMetaFlag (1), MemoryMappedMetaFlag (2)
180 | However, in the Grounded utoc file, the flag value is always 1.
181 | 
182 | 
183 | ____
184 | ### depsFile Format
185 | This still belongs to the .utoc file format, but it's encoded in the .ucas file.
186 | In the .ucas file, either the first large chunk of data (in header version 2), or the last chunk of data (in version 3), is specified as follows:
187 | 
188 | ```
189 | depsFile HEADER, total bytes: 32
190 | uint64 {8} - Chunk ID of this "file"
191 | uint64 {8} - Number of IDs (nID)
192 | uint32 {4} - sizeof(chunkID) (always 8)
193 | uint32 {4} - as of now, an unknown value
194 | byte {4} - zero bytes PADDING
195 | uint32 {4} - Number of IDs (again, but now uint32?)
196 | ```
197 | The header is followed by an ORDERED list of Chunk IDs, of size _nID_.
198 | The list is unique, and it contains all of the IDs that are present in the .utoc file.
199 | I suspect that this is done for performance, for quick searching.
200 | There's just one Chunk ID missing, and that's the entry with chunk type 10 (the one that contains this depsFile data).
201 | 
202 | The list is immediately followed by a uint32 value describing the size in bytes of the rest of the file.
203 | It excludes the trailing (8) zero bytes.
204 | 
205 | The "rest of the file" consists of the following very interesting information.
206 | First, there are blocks of 32 bytes, with data that links .uasset files to other files.
207 | These are the connections that are made between dependencies, and they show how many exports each entry has.
208 | For every single ID that was provided in the previous part with the chunk IDs, there is a corresponding entry with this structure:
209 | 
210 | ```
211 | depsFile DEPENDENCIES, total bytes: 32
212 | uint64 {8} - File size (uncompressed)
213 | uint32 {4} - Number of Export Objects
214 | uint32 {4} - A number either 1, 2 or 3. Not sure what this means
215 | 
216 | uint64 {8} - Index of some sort; unique and starting at 0, but skipping some numbers
217 | uint32 {4} - Number of Dependency Packages
218 | uint32 {4} - Offset to dependency packages
219 | ```
220 | The list of these dependencies is followed by a long list of chunk IDs, which are in fact the dependencies.
221 | The offset in a dependency entry points to the start of such a chunk ID, but these offsets are a bit tricky.
222 | The offset is relative to the position of the offset entry itself, before reading the entry, so all offsets jump over the whole list.
223 | The number of dependency packages tells how far one must read!
224 | I think this could have been done a bit more efficiently, but unfortunately I didn't design this file format.
225 | 
226 | The number of IDs in the dependency area can be calculated.
227 | The length of the "rest of the file" is known, and it consists of a number of dependency entries and the IDs.
228 | We know how many dependency entries there are, so we can calculate how many IDs there are.
229 | The list of IDs is followed by 8 nullbytes, which marks the end of the file.
230 | 
231 | ____
232 | # UAsset File Format
233 | After unpacking the .ucas file using the .utoc file, I found out that the .uasset files that were created differ quite a bit from the original .uasset files.
234 | They also differ from the .uasset files you used to get when unpacking .pak files.
235 | Therefore, I decided to figure out the (new) .uasset file format after unpacking.
236 | 
237 | A notable change is that when unpacking, the Unreal signature (C1 83 2A 9E) is omitted.
238 | The version number is omitted as well.
239 | I think this is done to preserve space, as these are probably constant among all .uasset files in the .ucas file.
240 | 
241 | In addition to this, there is no split between .uasset and .uexp anymore; everything is concatenated within one .uasset file.
242 | This is also the intended way in which .uasset files are stored, I believe.
243 | It's just useful to split the two in some cases.
244 | 
245 | The uasset header is divided into two parts: a constant file header (of size 64) and a variable-sized part.
246 | After the complete .uasset header, the contents of the would-be .uexp file start.
247 | 
248 | Note: most of this information is reverse-engineered by looking at the information in the .ucas viewer linked before.
249 | ```
250 | UASSET HEADER, total bytes: 64
251 | uint64 {8} - Name in FMappedName format. See https://docs.unrealengine.com/5.0/en-US/API/Runtime/Core/Serialization/FMappedName/
252 | uint64 {8} - SourceName in FMappedName format
253 | uint32 {4} - "Package Flags" Generally constant at (00 00 00 80). See https://docs.unrealengine.com/5.1/en-US/API/Runtime/CoreUObject/UObject/EPackageFlags/
254 | uint32 {4} - CookedHeaderSize - This is the total header size, if it were an old .uasset file, so it deviates
255 | int32 {4} - Name Map Offset
256 | int32 {4} - Name Map Size (in bytes)
257 | int32 {4} - Name Map Hashes offset
258 | int32 {4} - Name Map Hashes Size (in bytes)
259 | int32 {4} - Import Map Offset
260 | int32 {4} - Export Map offset
261 | int32 {4} - Export Bundles Offset
262 | int32 {4} - Graph Data Offset / Dependency Packages offset (also duplicated in dependency file in .ucas)
263 | int32 {4} - Graph Data Size / Dependency Package Size (in bytes)
264 | int32 {4} - Padding
265 | ```
266 | The header indicates a lot of offsets of all the parts in the .uasset file.
267 | The remainder of this section highlights each of these parts in order.
268 | Sometimes, the header lists the length of a few parts, but I'm not sure why it does this, as all lengths can be inferred from the other offsets, except maybe the list of strings.
269 | 
270 | Directly after the file header, there is one nullbyte, after which the name directory index is shown.
271 | ```
272 | NAMES DIRECTORY
273 | for each name
274 |     uint8 {1} - Name length (EXCLUDING null terminator)
275 |     byte {X} - Name
276 |     byte {1} - null terminator
277 | ```
278 | Due to the strings in the Names Directory being of different sizes, it is possible that the byte count is not aligned anymore.
279 | After the Names Directory, there are nullbytes to make it aligned to 8 bytes.
280 | This is probably done for performance reasons.
281 | 
282 | 
283 | ### String Hashes
284 | The names directory is followed by a list of 64-bit hashes, representing the hashes of the strings.
285 | The very first value is the "AlgorithmID" of the hash.
286 | This value seems to always be `00 00 64 C1 00 00 00 00`, which corresponds to this line in the [Unreal Engine source code](https://github.com/EpicGames/UnrealEngine/blob/d8c1ecd52eb4728637881c550b39794afed78db3/Engine/Source/Runtime/Core/Private/UObject/UnrealNames.cpp#L748).
287 | The hashing algorithm is CityHash64, and it operates on lowercased strings.
288 | So to get the value, make the string lowercase and hash it with the CityHash64 algorithm.
289 | 
290 | ### Import Objects
291 | The list of string hashes is followed by a list of Import Object IDs.
292 | This is not the same as the file IDs, and I'm not sure what these values represent.
293 | Each value is 8 bytes, and if the import name is "None", the value is `0xFFFFFFFFFFFFFFFF`, or -1.
294 | Not sure what to do with this, but the number of entries corresponds to the number of import objects in the ucas file viewer.
295 | 
296 | In Helios' tool, the Import Objects correspond to the Linked Classes.
297 | 
298 | 
299 | ### Export Objects
300 | The next area of the file contains the Export Objects.
301 | These are stored in a specific file structure, shown below.
302 | 
303 | ```
304 | EXPORT OBJECTS
305 | uint64 {8} - Cooked Serial Offset
306 | uint64 {8} - Cooked Serial Size
307 | FMappedName {4} - Object Name
308 | FPackageObjectIndex {8} - Outer Index
309 | FPackageObjectIndex {8} - Class Index
310 | FPackageObjectIndex {8} - Super Index
311 | FPackageObjectIndex {8} - Template Index
312 | FPackageObjectIndex {8} - Global Import Index
313 | uint64 {8} - Public Export Hash
314 | EObjectFlags {1} - Object Flags
315 | EExportFilterFlags {1} - Client/Server Flags
316 | byte {2} - Padding
317 | ```
318 | The fields that were used are mainly copied from the .ucas file viewer.
319 | In any case, each name field is an integer index into the names directory, i.e. the list of strings.
320 | 
321 | The part after the .uasset file header is built up from the different ExportObjects.
322 | When there are multiple ExportObjects, they are consecutive, as the serial offsets and sizes show.
323 | For the very first ExportObject, the Serial Offset field is the exact same value as the "total header size" in the header of the file.
324 | Now in this new file format, this doesn't really hold anymore, as it differs from the regular file.
325 | However, the serial offset states where the .uexp portion of the file starts.
326 | The Serial Size also has great significance, as this is the length of the .uexp portion of the file.
327 | 
328 | The object name offset and the class name offset probably have some use as well, but these probably refer to the old file format.
329 | Therefore, the offset is off and I'm not sure how to correct it right now.
330 | Changing files probably mostly impacts the serial size and the serial offset, so I'll keep this a mystery for now.
331 | 
332 | ### ExportObject metadata?
333 | The export objects are followed by a strange data structure.
334 | It first starts with a 64-bit number, which seems to have no relation to the rest.
335 | Afterwards, there is an enumeration of some kind of metadata for the export objects.
336 | The metadata consists of four 32-bit numbers, and there are exactly as many metadata entries as export objects.
337 | This is still a mystery, but I don't think I'll touch ExportObjects anyway.
338 | 
339 | 
340 | ### Dependency packages
341 | After all export object information, there is a list of dependency packages.
342 | It starts with a uint32 stating how many dependency entries there are.
343 | After this, the dependencies follow in this form:
344 | ```
345 | DEPENDENCY PACKAGE
346 | uint64 {8} - ID
347 | uint32 {4} - Number of Entries (?) // Not quite sure, it's often just 1, sometimes 2 and 3.
348 | int32 {4} - is present*
349 | uint32 {4} - mostly one // not sure what it is, but almost always 1.
350 | 
351 | if (number of entries > 1){
352 |     uint32 {4 * (2 * (numberOfEntries - 1))}**
353 | }
354 | ```
355 | `*` If the value of "is present" is 0, then the ID listed is included in the current .ucas file.
356 | If this value is -1, it's not included in the current .ucas file.
357 | 
358 | `**` If the number of entries is 2 or 3, there are more numbers defined.
359 | For every entry beyond the first, there are two additional uint32 numbers in the dependency entry.
360 | 
361 | It's quite clear that I am still not sure what all of the fields mean and how they relate to each other.
362 | However, after the list of the dependencies, the entire former .uexp file starts!
363 | 
364 | ### End of the header
365 | This concludes the .uasset header part of the file.
366 | There are quite some values that are still unknown to me.
367 | However, most of these values will probably not change based on the created mods.
368 | I can only hope that I know enough to do the most interesting stuff with it!
369 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 
33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. 
To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 
102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 
133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. 
You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 
196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 
229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 
256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 
287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 
317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. 
If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 
386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 
421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. 
If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 
486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 
512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. 
If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 
578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 
613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | <one line to give the program's name and a brief idea of what it does.> 635 | Copyright (C) <year> <name of author> 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see <https://www.gnu.org/licenses/>. 649 | 650 | Also add information on how to contact you by electronic and paper mail. 
651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | <program> Copyright (C) <year> <name of author> 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | <https://www.gnu.org/licenses/>. 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | <https://www.gnu.org/licenses/why-not-lgpl.html>. 675 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## UECastoc Design for DLL 2 | 3 | I started this project over a year ago, without knowing whether it would ever work. 4 | Just jumping in and reverse engineering the file format proved to be effective. 5 | However, the resulting code is currently only really usable from within Go. 6 | For most modding tools, it would be nice if you could create a user interface to go with it. 7 | Go isn't meant for creating user interfaces, and not many people in the modding scene do anything with Go. 
8 | Therefore, I've decided to make this a DLL for Windows, so that you can still use the code that I wrote, but independently of Go. 9 | 10 | The focus of this DLL is to provide packing/unpacking support for Unreal Engine games with a usable API. 11 | This API is documented below. 12 | 13 | ## Would you like to support me? 14 | You can donate using this [link](https://www.paypal.com/donate/?hosted_button_id=VLFVGJ749MQCC). 15 | 16 | ## Core Features of DLL 17 | 18 | This section lists the core features that this DLL will support. 19 | I do this by showing what the API functions should look like. 20 | 21 | ### Error Handling 22 | Error handling is unique in Go. 23 | There's even a specific type for it! 24 | In C, error handling is often done by returning -1 or NULL if something went wrong. 25 | If you need more information, it is possible to display a message using the _strerror_ function. 26 | 27 | The displayed message originates from the _errno_ value. 28 | From a DLL written in Go, however, _errno_ is not easy to set. 29 | Therefore, I provided a function that simply returns the most recent error as a string. 30 | 31 | ```c 32 | char *getError(); 33 | ``` 34 | 35 | This can always be called from your program. 36 | Whenever this function is called, do free the pointer that is returned! 37 | 38 | ### Listing Game Files 39 | The .utoc game file indicates which files are included in the .ucas file. 40 | This is a long list of game files which is structured in directories. 41 | The DLL provides the following function for this. 42 | 43 | ```c 44 | char **listGameFiles(char *utocFileName, int *n, char *AESKey); 45 | ``` 46 | 47 | The function requires the name of the .utoc file, as this file contains the list of game files. 48 | You must also pass in a pointer to an integer _n_, which will contain the number of strings in the list when it returns. 49 | Optionally, you can provide an AES key for .utoc files that are encrypted. 
50 | While this DLL is quite nice, it cannot break AES encryption (yet). 51 | Provide the key as a hex string; if the file is not encrypted, simply pass NULL. 52 | 53 | If the value of _n_ is -1 after calling, no strings were returned, and the char ** should be NULL. 54 | Otherwise, the char ** contains _n_ strings, where each string is the entire path to a file. 55 | 56 | The file is read and closed by the DLL, but the list must be freed by the caller. 57 | This can also be done with the freeStringList function that comes with the DLL. 58 | Just provide the pointer to the string list and the length of the list. 59 | Do provide an accurate length, as otherwise memory leaks or double frees may occur. 60 | ```c 61 | void freeStringList(char **stringlist, int n); 62 | ``` 63 | It goes without saying, but the pointer should not be used afterwards. 64 | 65 | 66 | ### Create Manifest File 67 | A Manifest file is required to build game files into a mod file. 68 | The Manifest file contains a list of filenames with their Chunk IDs, along with some other data that is needed to build a mod for the game later. 69 | ```c 70 | int createManifestFile(char *utocFileName, char *ucasFileName, char *outputFileName); 71 | ``` 72 | This function returns 0 upon success and -1 upon error. 73 | The error message can be retrieved using the getError function. 74 | 75 | 76 | ### Unpack all Game Files 77 | Unpacking the game files requires both the .utoc file and the .ucas file. 78 | All files are unpacked into the provided output directory. 79 | The directory name should end with a slash, and the directory should exist. 80 | An AES key is required for encrypted .utoc/.ucas files. 81 | Pass NULL if it is not encrypted. 82 | **WARNING: Depending on the .utoc and .ucas files, this can take a long time and it could fill your disk.** 83 | 84 | ```c 85 | int unpackAllGameFiles(char *utocFileName, char *ucasFileName, char *outputDirectory, char *AESKey); 86 | ``` 87 | This function returns -1 upon error. 
88 | Any other value indicates the number of files that were extracted. 89 | The error message can be retrieved using the getError function. 90 | 91 | 92 | ### Unpack a Selection of Game Files 93 | Unpacking can take a LOT of time. 94 | This function allows one to unpack only the files that match the provided regular expression. 95 | The regular expression syntax is the default one of the Go language. 96 | https://regex101.com/ has a Golang option for regexes, so I assume it differs from other flavors. 97 | It is otherwise the same as the unpackAllGameFiles function. 98 | 99 | ```c 100 | int unpackGameFiles(char *utocFileName, char *ucasFileName, char *outputDirectory, char *regex, char *AESKey); 101 | ``` 102 | 103 | ### Packing Game Files 104 | Packing the game files requires the manifest file that you built using the createManifestFile function. 105 | This function takes the game directory that you are packing, which should follow the same file structure as when it was unpacked. 106 | The outFile is simply a path to a (new) filename, without any extension. 107 | The filename is used to create the .utoc, .ucas and .pak files in the path that you specify. 108 | 109 | Besides "None", three compression methods are currently supported: "Zlib", "Oodle" and "LZ4". 110 | None is the default, so when NULL is passed, the data will not be compressed. 111 | Any other compression method will return an error; the names are not case-sensitive. 112 | If you wish to encrypt the created files, you could provide an AES key, but I am pretty sure Unreal Engine won't be able to decrypt your files. 113 | I just added this encryption "feature" for experimentation. 114 | 115 | ```c 116 | int packGameFiles(char *dirPath, char *manifestPath, char *outFile, char *compressionMethod, char *AESKey); 117 | ``` 118 | The function returns -1 in case of error. 119 | Otherwise, it returns the number of files that were packed into the .utoc/.ucas files that were created. 120 | 121 | ## Building the DLL yourself! 
122 | 123 | Building a DLL from Go on Windows is done as follows: 124 | ```sh 125 | go build -o castoc_x64.dll -buildmode=c-shared -ldflags "-s -w" . 126 | ``` 127 | This creates a DLL file and a header file. 128 | Both must be placed in the directory of the C/C++ program you would like to build. 129 | The resulting program can then be used without the header file, but it does require the DLL file! 130 | 131 | Compiling a C++ program while making use of the DLL file is easy: 132 | ```cpp 133 | g++ main.cpp castoc_x64.dll -o main.exe 134 | ``` -------------------------------------------------------------------------------- /compression.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "bytes" 5 | "compress/zlib" 6 | "errors" 7 | "io" 8 | "io/ioutil" 9 | "strings" 10 | 11 | "github.com/new-world-tools/go-oodle" 12 | "github.com/pierrec/lz4/v4" 13 | ) 14 | 15 | // implemented (de)compression methods (lowercased) 16 | 17 | var ( 18 | DecompressionMethods = map[string](func(*[]byte, uint32) (*[]byte, error)){ 19 | "none": decompressNone, 20 | "zlib": decompressZLIB, 21 | "oodle": decompressOodle, 22 | "lz4": decompressLZ4, 23 | } 24 | CompressionMethods = map[string](func(*[]byte) (*[]byte, error)){ 25 | "none": compressNone, 26 | "zlib": compressZLIB, 27 | "oodle": compressOodle, // settings: level 3 Kraken compression 28 | "lz4": compressLZ4, 29 | } 30 | ) 31 | 32 | /* Decompression functions */ 33 | func decompressNone(inData *[]byte, expectedOutputSize uint32) (*[]byte, error) { 34 | return inData, nil // can't go wrong :D 35 | } 36 | 37 | func decompressZLIB(inData *[]byte, expectedOutputSize uint32) (*[]byte, error) { 38 | // decompress with zlib 39 | r, err := zlib.NewReader(bytes.NewBuffer(*inData)) 40 | if err != nil { 41 | return nil, err 42 | } 43 | defer r.Close() // only defer the Close once the error has been checked; r is nil on error 44 | uncompressed, err := ioutil.ReadAll(r) 45 | if err != nil { 46 | return nil, err 47 | } 48 | if len(uncompressed) !=
int(expectedOutputSize) { 49 | return nil, errors.New("zlib did not decompress correctly") 50 | } 51 | return &uncompressed, nil 52 | } 53 | 54 | func decompressOodle(inData *[]byte, expectedOutputSize uint32) (*[]byte, error) { 55 | if !oodle.IsDllExist() { 56 | err := oodle.Download() 57 | if err != nil { 58 | return nil, errors.New("oo2core_9_win64.dll was not found and could not be downloaded (oodle decompression)") 59 | } 60 | } 61 | output, err := oodle.Decompress(*inData, int64(expectedOutputSize)) 62 | // if err is not nil, it's handled by the caller 63 | return &output, err 64 | } 65 | func decompressLZ4(inData *[]byte, expectedOutputSize uint32) (*[]byte, error) { 66 | reader := bytes.NewReader(*inData) 67 | decompressed := &bytes.Buffer{} 68 | zr := lz4.NewReader(reader) 69 | _, err := io.Copy(decompressed, zr) 70 | if err != nil { 71 | return nil, err 72 | } 73 | decomp := decompressed.Bytes() 74 | return &decomp, nil 75 | } 76 | 77 | /* Compression functions */ 78 | 79 | func compressNone(inData *[]byte) (*[]byte, error) { 80 | return inData, nil 81 | } 82 | func compressZLIB(inData *[]byte) (*[]byte, error) { 83 | var b bytes.Buffer 84 | w := zlib.NewWriter(&b) 85 | _, err := w.Write(*inData) 86 | if err != nil { 87 | return nil, err 88 | } 89 | err = w.Close() 90 | if err != nil { 91 | return nil, err 92 | } 93 | compressedData := b.Bytes() 94 | return &compressedData, nil 95 | } 96 | func compressOodle(inData *[]byte) (*[]byte, error) { 97 | // The settings for Oodle _could_ be modified, but this is what Unreal Engine states as an example 98 | // https://docs.unrealengine.com/4.27/en-US/TestingAndOptimization/Oodle/Data/ 99 | compressedData, err := oodle.Compress(*inData, oodle.AlgoKraken, oodle.CompressionLevelOptimal3) 100 | return &compressedData, err 101 | } 102 | 103 | func compressLZ4(inData *[]byte) (*[]byte, error) { 104 | reader := bytes.NewReader(*inData) 105 | compressed := &bytes.Buffer{} 106 | lzwriter := lz4.NewWriter(compressed) 107 | _, err := io.Copy(lzwriter,
reader) 108 | if err != nil { 109 | return nil, err 110 | } 111 | // Closing is *very* important 112 | if err := lzwriter.Close(); err != nil { 113 | return nil, err 114 | } 115 | comp := compressed.Bytes() 116 | return &comp, nil 117 | } 118 | 119 | /* Wrapper for getting the functions */ 120 | // depending on the method, return the associated decompression function 121 | func getDecompressionFunction(method string) func(inData *[]byte, outputSize uint32) (*[]byte, error) { 122 | if val, ok := DecompressionMethods[strings.ToLower(method)]; ok { 123 | return val 124 | } 125 | return nil 126 | } 127 | func getCompressionFunction(method string) func(inData *[]byte) (*[]byte, error) { 128 | if val, ok := CompressionMethods[strings.ToLower(method)]; ok { 129 | return val 130 | } 131 | return nil 132 | } 133 | -------------------------------------------------------------------------------- /cpp/Pack.bat: -------------------------------------------------------------------------------- 1 | @pushd %~dp0 2 | @if exist "packed" goto 1 3 | mkdir "packed" 4 | timeout -t 1 -nobreak 5 | :1 6 | @echo off 7 | @if exist "manifest\manifest.json" goto 2 8 | color 40 9 | cls 10 | echo manifest.json missing! 
11 | pause 12 | :2 13 | @echo on 14 | @if "%~1"=="" goto skip 15 | @pushd %~dp0 16 | .\main.exe pack "%~dpn1" "manifest\manifest.json" "packed\%~n1" none 17 | :skip -------------------------------------------------------------------------------- /cpp/UnpackAllplusManifest.bat: -------------------------------------------------------------------------------- 1 | @pushd %~dp0 2 | @if exist "output" goto 1 3 | mkdir "output" 4 | :1 5 | @if exist "manifest" goto 2 6 | mkdir "manifest" 7 | :2 8 | timeout -t 1 -nobreak 9 | @if "%~1"=="" goto skip 10 | .\main.exe unpackAll "%~dpn1.utoc" "%~dpn1.ucas" output\ 11 | .\main.exe manifest "%~dpn1.utoc" "%~dpn1.ucas" "manifest\%~n1.json" 12 | :skip -------------------------------------------------------------------------------- /cpp/castoc_x64.dll: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gitMenv/UEcastoc/7e8112e4d7617522d60bda812dca11949d6a77cc/cpp/castoc_x64.dll -------------------------------------------------------------------------------- /cpp/castoc_x64.h: -------------------------------------------------------------------------------- 1 | /* Code generated by cmd/cgo; DO NOT EDIT. */ 2 | 3 | /* package github.com/gitMenv/UEcastoc */ 4 | 5 | 6 | #line 1 "cgo-builtin-export-prolog" 7 | 8 | #include <stddef.h> /* for ptrdiff_t below */ 9 | 10 | #ifndef GO_CGO_EXPORT_PROLOGUE_H 11 | #define GO_CGO_EXPORT_PROLOGUE_H 12 | 13 | #ifndef GO_CGO_GOSTRING_TYPEDEF 14 | typedef struct { const char *p; ptrdiff_t n; } _GoString_; 15 | #endif 16 | 17 | #endif 18 | 19 | /* Start of preamble from import "C" comments. */ 20 | 21 | 22 | #line 3 "dllAPI.go" 23 | #include 24 | #include 25 | 26 | #line 1 "cgo-generated-wrapper" 27 | 28 | 29 | /* End of preamble from import "C" comments. */ 30 | 31 | 32 | /* Start of boilerplate cgo prologue.
*/ 33 | #line 1 "cgo-gcc-export-header-prolog" 34 | 35 | #ifndef GO_CGO_PROLOGUE_H 36 | #define GO_CGO_PROLOGUE_H 37 | 38 | typedef signed char GoInt8; 39 | typedef unsigned char GoUint8; 40 | typedef short GoInt16; 41 | typedef unsigned short GoUint16; 42 | typedef int GoInt32; 43 | typedef unsigned int GoUint32; 44 | typedef long long GoInt64; 45 | typedef unsigned long long GoUint64; 46 | typedef GoInt64 GoInt; 47 | typedef GoUint64 GoUint; 48 | typedef __SIZE_TYPE__ GoUintptr; 49 | typedef float GoFloat32; 50 | typedef double GoFloat64; 51 | typedef float _Complex GoComplex64; 52 | typedef double _Complex GoComplex128; 53 | 54 | /* 55 | static assertion to make sure the file is being used on architecture 56 | at least with matching size of GoInt. 57 | */ 58 | typedef char _check_for_64_bit_pointer_matching_GoInt[sizeof(void*)==64/8 ? 1:-1]; 59 | 60 | #ifndef GO_CGO_GOSTRING_TYPEDEF 61 | typedef _GoString_ GoString; 62 | #endif 63 | typedef void *GoMap; 64 | typedef void *GoChan; 65 | typedef struct { void *t; void *v; } GoInterface; 66 | typedef struct { void *data; GoInt len; GoInt cap; } GoSlice; 67 | 68 | #endif 69 | 70 | /* End of boilerplate cgo prologue. 
*/ 71 | 72 | #ifdef __cplusplus 73 | extern "C" { 74 | #endif 75 | 76 | extern __declspec(dllexport) int packGameFiles(char* dirPath, char* manifestPath, char* outFile, char* compressionMethod, char* AESKey); 77 | extern __declspec(dllexport) void freeStringList(char** stringlist, int n); 78 | extern __declspec(dllexport) char** listGameFiles(char* utocFile, int* n, char* AESKey); 79 | extern __declspec(dllexport) char* getError(); 80 | extern __declspec(dllexport) int createManifestFile(char* utocFile, char* ucasFile, char* outputFile, char* AESKey); 81 | extern __declspec(dllexport) int unpackAllGameFiles(char* utocFile, char* ucasFile, char* outputDirectory, char* AESKey); 82 | extern __declspec(dllexport) int unpackGameFiles(char* utocFile, char* ucasFile, char* outputDirectory, char* regex, char* AESKey); 83 | 84 | #ifdef __cplusplus 85 | } 86 | #endif 87 | -------------------------------------------------------------------------------- /cpp/fix_manifest_and_pack.py: -------------------------------------------------------------------------------- 1 | # Python script to fix the paths in the manifest file generated by UECASTOC so that UECASTOC can pack the files. 2 | # This script should be installed and run from the "cpp" directory of a UECASTOC ( https://github.com/gitMenv/UEcastoc/tags ) installation 3 | 4 | # Caveats: 5 | # - The filenames in each mod must be unique. In other words, a specific mod should not have files with the same name in different folders. 6 | # - Mod names should not have spaces or other strange characters (like /, \, or . 
) 7 | 8 | # Usage: 9 | # - Download UECASTOC from https://github.com/gitMenv/UEcastoc/tags 10 | # - Copy this script into the cpp directory 11 | # - Point the manifest_file variable below to the manifest file generated by UECASTOC when you unpacked 12 | # - Point directory_to_pak to the directory you want to pack 13 | # - The directory structure should be the usual structure we are used to from before IoStore (e.g., as used by FluffyQuack's UnrealPak scripts). 14 | # For example, for Hogwarts Legacy, it should be z_your_mod_name_P\phoenix\Content\... and you would then set directory_to_pak = 'z_your_mod_name_P' 15 | # - Run the script 16 | # - The script will attempt to find the files in the directory you want to pack, fix the manifest accordingly, and pack the files into packed/... 17 | 18 | import os 19 | import json 20 | import pathlib 21 | import shutil 22 | 23 | ##### SET THESE TO POINT TO YOUR MANIFEST FILE AND TO THE DIRECTORY YOU WANT TO PACK ########### 24 | manifest_file = 'test_manifest.json' 25 | directory_to_pak = 'z_your_mod_name_P' 26 | ######################################################################################### 27 | 28 | with open(manifest_file, 'r') as fp: 29 | manifest_dict = json.load(fp) 30 | 31 | with open("fixed_"+manifest_file,'w') as fout: 32 | print("Fixing the paths in the manifest file so that the packer can find them...") 33 | file_idx = 0 34 | for file_entry in manifest_dict['Files']: 35 | if(not(file_entry['Path']=='dependencies')): 36 | filename_to_find = file_entry['Path'][1:] # get rid of leading / 37 | my_file = [] 38 | for path in pathlib.Path(directory_to_pak).rglob(filename_to_find): 39 | my_file.append(path) 40 | if (len(my_file)==0): 41 | print("WARNING: could not find "+filename_to_find) 42 | if(len(my_file)>1): 43 | print("WARNING: found multiple instances of "+filename_to_find+":") 44 | for path in my_file: 45 | print(path) 46 | for path in my_file: 47 | python_path = str(path) 48 | fixed_path ='/' + 
('/').join(python_path.split('\\')[1:]) 49 | print(fixed_path) 50 | manifest_dict['Files'][file_idx]['Path'] = fixed_path 51 | file_idx = file_idx+1 52 | 53 | print("Writing new manifest file "+"fixed_"+manifest_file) 54 | json.dump(manifest_dict, fout, indent=2) 55 | 56 | 57 | print("Packing "+directory_to_pak) 58 | json_path = pathlib.Path("fixed_"+manifest_file) 59 | 60 | 61 | my_cmd = '.\main.exe pack \"' + str(directory_to_pak)+'\" \"' + str(json_path) + '\" \"packed\\' +directory_to_pak+ '\" None' 62 | print(my_cmd) 63 | os.system(my_cmd) 64 | 65 | #my_cmd = "main.exe pack " + directory_to_pak + " " + str(json_path.with_suffix('')) + ".json packed\\" +directory_to_pak+ " none" 66 | #print(my_cmd) 67 | #os.system(my_cmd) 68 | 69 | -------------------------------------------------------------------------------- /cpp/main.cpp: -------------------------------------------------------------------------------- 1 | #include <iostream> 2 | #include "castoc_x64.h" 3 | 4 | #include <string> 5 | #include <vector> 6 | 7 | using namespace std; 8 | 9 | 10 | // Print help text on usage 11 | void printHelp() { 12 | cout << "Usage: castoc.exe <feature> [args]" << endl; 13 | cout << "All args that are prepended with an asterisk are optional" << endl; 14 | cout << "Features:" << endl; 15 | cout << " help: Print this message" << endl; 16 | cout << " list [utocPath, *AES key]: lists all files that are packed in the .utoc/.ucas file" << endl; 17 | cout << " unpackAll [utocPath, ucasPath, outputDir, *AES key]: unpack entire .utoc/.ucas files" << endl; 18 | cout << " unpack [utocPath, ucasPath, outputDir, regex, *AES key]: unpack .utoc/.ucas files based on regex" << endl; 19 | cout << " manifest [utocPath, ucasPath, outputManifest, *AES key]: creates Manifest file of this .utoc/.ucas file" << endl; 20 | cout << " pack [packDir, manifestPath, outputFile, compressionMethod, *AES key]: pack directory into .utoc/.ucas file" << endl; 21 | cout << endl; 22 | cout << "the pack command requires the manifest file, and it packs the
input dir to the outputFile{.utoc, .ucas, .pak}; three files are created!" << endl; 23 | cout << "the following compression methods for packing are supported; {None, Zlib, Oodle, LZ4}" << endl; 24 | } 25 | 26 | void help(vector<string> args) { 27 | printHelp(); 28 | } 29 | 30 | void list(vector<string> args) { 31 | if (args.size() == 0){ 32 | cout << "expecting at least one arg for list" << endl; 33 | printHelp(); 34 | return; 35 | } 36 | int n; 37 | char** list; 38 | char* aeskey = NULL; 39 | const char *utocPath = args[0].c_str(); 40 | 41 | if (args.size() > 1){ 42 | const char *aes = args[1].c_str(); 43 | aeskey = const_cast<char *>(aes); 44 | } 45 | list = listGameFiles(const_cast<char *>(utocPath), &n, aeskey); 46 | if(list == NULL){ 47 | cout << getError() << endl; 48 | return; 49 | } 50 | for(int i = 0; i < n; i++){ 51 | cout << list[i] << endl; 52 | } 53 | freeStringList(list, n); 54 | } 55 | 56 | void unpackAll(vector<string> args){ 57 | //[utocPath, ucasPath, outputDir, *AES key] 58 | if (args.size() < 3){ 59 | cout << "expecting at least three args for unpackAll" << endl; 60 | printHelp(); 61 | return; 62 | } 63 | const char* utocPath = args[0].c_str(); 64 | const char* ucasPath = args[1].c_str(); 65 | const char* outputDir = args[2].c_str(); 66 | 67 | char* aeskey = NULL; 68 | if (args.size() == 4) { 69 | const char* aes = args[3].c_str(); 70 | aeskey = const_cast<char *>(aes); 71 | } 72 | int n = unpackAllGameFiles(const_cast<char *>(utocPath), 73 | const_cast<char *>(ucasPath), 74 | const_cast<char *>(outputDir), 75 | aeskey); 76 | if(n < 0){ 77 | cout << getError() << endl; 78 | } else { 79 | cout << "number of unpacked files:" << n << endl; 80 | } 81 | } 82 | 83 | void unpack(vector<string> args){ 84 | //[utocPath, ucasPath, outputDir, regex, *AES key] 85 | if (args.size() < 4){ 86 | cout << "expecting at least four args for unpack" << endl; 87 | printHelp(); 88 | return; 89 | } 90 | const char* utocPath = args[0].c_str(); 91 | const char* ucasPath = args[1].c_str(); 92 | const char* outputDir = args[2].c_str(); 93 | const char*
regex = args[3].c_str(); 94 | 95 | char* aeskey = NULL; 96 | if (args.size() == 5) { 97 | const char* aes = args[4].c_str(); 98 | aeskey = const_cast<char *>(aes); 99 | } 100 | int n = unpackGameFiles(const_cast<char *>(utocPath), 101 | const_cast<char *>(ucasPath), 102 | const_cast<char *>(outputDir), 103 | const_cast<char *>(regex), 104 | aeskey); 105 | if(n < 0){ 106 | cout << getError() << endl; 107 | } else { 108 | cout << "number of unpacked files:" << n << endl; 109 | } 110 | } 111 | 112 | void manifest(vector<string> args){ 113 | //[utocPath, ucasPath, outputManifest, aeskey] 114 | if(args.size() < 3) { 115 | cout << "expecting at least 3 arguments for creating a manifest file" << endl; 116 | printHelp(); 117 | return; 118 | } 119 | const char* utocPath = args[0].c_str(); 120 | const char* ucasPath = args[1].c_str(); 121 | const char* outputManifest = args[2].c_str(); 122 | char* aeskey = NULL; 123 | if (args.size() == 4) { 124 | const char* aes = args[3].c_str(); 125 | aeskey = const_cast<char *>(aes); 126 | } 127 | int n = createManifestFile(const_cast<char *>(utocPath), 128 | const_cast<char *>(ucasPath), 129 | const_cast<char *>(outputManifest), 130 | aeskey); 131 | if (n < 0){ 132 | cout << getError() << endl; 133 | } 134 | 135 | } 136 | 137 | void pack(vector<string> args){ 138 | // [packDir, manifestPath, outputFile, compressionMethod, *AES key] 139 | if(args.size() < 4){ 140 | cout << "expecting at least 4 arguments for packing" << endl; 141 | printHelp(); 142 | return; 143 | } 144 | const char* packdir = args[0].c_str(); 145 | const char* manifestPath = args[1].c_str(); 146 | const char* outputfile = args[2].c_str(); 147 | const char* compression = args[3].c_str(); 148 | char* aeskey = NULL; // initialise, or a garbage pointer is passed when no key is given 149 | if(args.size() == 5){ 150 | const char* aes = args[4].c_str(); 151 | aeskey = const_cast<char *>(aes); 152 | } 153 | int n = packGameFiles(const_cast<char *>(packdir), 154 | const_cast<char *>(manifestPath), 155 | const_cast<char *>(outputfile), 156 | const_cast<char *>(compression), 157 | aeskey); 158 | if(n < 0){ 159 | cout << getError() << endl; 160 | }else{ 161 | cout << "number of
files packed:" << n << endl; 162 | } 163 | 164 | } 165 | 166 | int main(int argc, char** argv) { 167 | if (argc < 2) { 168 | cout << "Error: No feature specified" << endl; 169 | printHelp(); 170 | return 1; 171 | } 172 | 173 | string feature = argv[1]; 174 | vector<string> args; 175 | for (int i = 2; i < argc; i++) { 176 | args.push_back(argv[i]); 177 | } 178 | 179 | if (feature == "help") { 180 | help(args); 181 | } else if (feature == "list") { 182 | list(args); 183 | } else if(feature == "unpackAll"){ 184 | unpackAll(args); 185 | } else if(feature == "unpack"){ 186 | unpack(args); 187 | } else if(feature == "manifest"){ 188 | manifest(args); 189 | } else if(feature == "pack") { 190 | pack(args); 191 | }else{ 192 | cout << "Error: Invalid feature specified" << endl; 193 | printHelp(); 194 | return 1; 195 | } 196 | 197 | return 0; 198 | } 199 | 200 | -------------------------------------------------------------------------------- /cpp/main.exe: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gitMenv/UEcastoc/7e8112e4d7617522d60bda812dca11949d6a77cc/cpp/main.exe -------------------------------------------------------------------------------- /dllAPI.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // #include 4 | // #include 5 | import "C" 6 | import ( 7 | "embed" // for the .pak file 8 | "encoding/json" 9 | "fmt" 10 | "io/fs" 11 | "io/ioutil" 12 | "os" 13 | "path/filepath" 14 | "strings" 15 | "unsafe" 16 | ) 17 | 18 | // The functions that use cgo stuff from C libraries must be used in the file 19 | // that contains the main function.
20 | var staticErr string 21 | 22 | //go:embed req/Packed_P.pak 23 | var embeddedFiles embed.FS 24 | 25 | //export packGameFiles 26 | func packGameFiles(dirPath *C.char, manifestPath *C.char, outFile *C.char, compressionMethod *C.char, AESKey *C.char) C.int { 27 | dir := C.GoString(dirPath) 28 | dir, err := filepath.Abs(dir) 29 | if err != nil { 30 | staticErr = err.Error() 31 | return C.int(-1) 32 | } 33 | manifestFile := C.GoString(manifestPath) 34 | outPath := C.GoString(outFile) 35 | outPath = strings.TrimSuffix(outPath, filepath.Ext(outPath)) // remove any extension 36 | compression := "None" 37 | if compressionMethod != nil { 38 | compression = C.GoString(compressionMethod) 39 | } 40 | aes := convertAES(AESKey) 41 | if len(aes) != 0 && len(aes) != 32 { 42 | staticErr = "AES key length should be 32, or none at all"; return C.int(-1) // abort instead of continuing with an invalid key 43 | } 44 | manifest, err := readManifest(manifestFile) 45 | if err != nil { 46 | staticErr = err.Error() 47 | return C.int(-1) 48 | } 49 | n, err := packToCasToc(dir, manifest, outPath, compression, aes) 50 | if err != nil { 51 | staticErr = err.Error() 52 | return C.int(-1) 53 | } 54 | // write the embedded .pak file 55 | embedded, _ := embeddedFiles.ReadFile("req/Packed_P.pak") 56 | os.WriteFile(outPath+".pak", embedded, os.ModePerm) 57 | return C.int(n - 1) // correction for dependencies file 58 | } 59 | 60 | //export freeStringList 61 | func freeStringList(stringlist **C.char, n C.int) { 62 | for i := 0; i < int(n); i++ { 63 | toFreeString := *(**C.char)(unsafe.Pointer(uintptr(unsafe.Pointer(stringlist)) + uintptr(i)*unsafe.Sizeof(*stringlist))) 64 | C.free(unsafe.Pointer(toFreeString)) 65 | } 66 | C.free(unsafe.Pointer(stringlist)) 67 | } 68 | 69 | //export listGameFiles 70 | func listGameFiles(utocFile *C.char, n *C.int, AESKey *C.char) (strlist **C.char) { 71 | utocFname := C.GoString(utocFile) 72 | aes := convertAES(AESKey) 73 | 74 | d, err := parseUtocFile(utocFname, aes) 75 | if err != nil { 76 | staticErr = err.Error() 77 | *n =
C.int(-1) 78 | return nil 79 | } 80 | 81 | filepaths := []string{} 82 | for _, v := range d.files { 83 | if v.filepath == DepFileName { 84 | continue 85 | } 86 | filepaths = append(filepaths, v.filepath) 87 | } 88 | // each line a new string 89 | *n = C.int(len(filepaths)) 90 | return strSliceToC(&filepaths) 91 | } 92 | 93 | //export getError 94 | func getError() (err *C.char) { 95 | return C.CString(staticErr) 96 | } 97 | 98 | //export createManifestFile 99 | func createManifestFile(utocFile *C.char, ucasFile *C.char, outputFile *C.char, AESKey *C.char) C.int { 100 | //TODO: check if the "dependencies" part works for more games, and if it's even required. 101 | utocFname := C.GoString(utocFile) 102 | ucasFname := C.GoString(ucasFile) 103 | outputFname := C.GoString(outputFile) 104 | aes := convertAES(AESKey) 105 | 106 | d, err := parseUtocFile(utocFname, aes) 107 | if err != nil { 108 | staticErr = err.Error() 109 | return C.int(-1) 110 | } 111 | 112 | if d.hdr.isEncrypted(){ 113 | tmpFile, err := os.CreateTemp("", "tmp") 114 | if err != nil { 115 | staticErr = err.Error() 116 | return C.int(-1) 117 | } 118 | ucasBytes, err := ioutil.ReadFile(ucasFname) 119 | if err != nil { 120 | staticErr = err.Error() 121 | return C.int(-1) 122 | } 123 | decryptedBytes, err := decryptAES(&ucasBytes, aes) 124 | if err != nil { 125 | staticErr = err.Error() 126 | return C.int(-1) 127 | } 128 | tmpFile.Write(*decryptedBytes) 129 | ucasFname = tmpFile.Name() 130 | err = tmpFile.Close() 131 | if err != nil { 132 | fmt.Println("err:", err) 133 | return C.int(-1) 134 | } 135 | defer os.Remove(tmpFile.Name()) 136 | } 137 | manifest, err := d.constructManifest(ucasFname) 138 | if err != nil { 139 | staticErr = err.Error() 140 | return C.int(-1) 141 | } 142 | b, err := json.MarshalIndent(manifest, "", " ") // indent for readability 143 | if err != nil { 144 | staticErr = err.Error() 145 | return C.int(-1) 146 | } 147 | err = ioutil.WriteFile(outputFname, b, fs.ModePerm) 148 | if err != 
nil { 149 | staticErr = err.Error() 150 | return C.int(-1) 151 | } 152 | 153 | return C.int(0) 154 | } 155 | 156 | //export unpackAllGameFiles 157 | func unpackAllGameFiles(utocFile *C.char, ucasFile *C.char, outputDirectory *C.char, AESKey *C.char) C.int { 158 | reg := C.CString("/*") 159 | x := unpackGameFiles(utocFile, ucasFile, outputDirectory, reg, AESKey) 160 | C.free(unsafe.Pointer(reg)) // free the string that I made myself 161 | return x 162 | } 163 | 164 | //export unpackGameFiles 165 | func unpackGameFiles(utocFile *C.char, ucasFile *C.char, outputDirectory *C.char, regex *C.char, AESKey *C.char) C.int { 166 | utocFname := C.GoString(utocFile) 167 | ucasFname := C.GoString(ucasFile) 168 | outDir := C.GoString(outputDirectory) 169 | reg := C.GoString(regex) 170 | aes := convertAES(AESKey) 171 | 172 | d, err := parseUtocFile(utocFname, aes) 173 | if err != nil { 174 | staticErr = err.Error() 175 | return C.int(-1) 176 | } 177 | // ucas may also be encrypted; create temporary file and place decrypted version there 178 | // let the ucasreader read from the temporary file 179 | if d.hdr.isEncrypted() { 180 | tmpFile, err := os.CreateTemp("", "tmp") 181 | if err != nil { 182 | staticErr = err.Error() 183 | return C.int(-1) 184 | } 185 | ucasBytes, err := ioutil.ReadFile(ucasFname) 186 | if err != nil { 187 | staticErr = err.Error() 188 | return C.int(-1) 189 | } 190 | decryptedBytes, err := decryptAES(&ucasBytes, aes) 191 | if err != nil { 192 | staticErr = err.Error() 193 | return C.int(-1) 194 | } 195 | tmpFile.Write(*decryptedBytes) 196 | ucasFname = tmpFile.Name() 197 | err = tmpFile.Close() 198 | if err != nil { 199 | fmt.Println("err:", err) 200 | return C.int(-1) 201 | } 202 | defer os.Remove(tmpFile.Name()) 203 | } 204 | 205 | // we need the parsed .utoc file to unpack the files that are included in the .ucas file. 
206 | numberOfFiles, err := d.unpackUcasFiles(ucasFname, outDir, reg) 207 | if err != nil { 208 | staticErr = err.Error() 209 | return C.int(-1) 210 | } 211 | return C.int(numberOfFiles) 212 | } 213 | 214 | // main function is required for creating a DLL. 215 | func main() {} 216 | -------------------------------------------------------------------------------- /fIoStoreStructs.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "encoding/binary" 5 | "fmt" 6 | "strconv" 7 | ) 8 | 9 | type FIoContainerID uint64 10 | type FIoStoreTocEntryMetaFlags uint8 11 | type FString string // just a string I guess? But fancier... 12 | type FName string 13 | 14 | const ( 15 | NoneMetaFlag FIoStoreTocEntryMetaFlags = iota 16 | CompressedMetaFlag 17 | MemoryMappedMetaFlag 18 | ) 19 | 20 | type FIoDirectoryIndexEntry struct { 21 | Name uint32 22 | FirstChildEntry uint32 23 | NextSiblingEntry uint32 24 | FirstFileEntry uint32 25 | } 26 | 27 | type FIoFileIndexEntry struct { 28 | Name uint32 29 | NextFileEntry uint32 30 | UserData uint32 31 | } 32 | 33 | type FIoChunkID struct { 34 | ID uint64 // 8 bytes 35 | Index uint16 // 2 bytes 36 | Padding uint8 // 1 byte 37 | Type uint8 // 1 byte 38 | } 39 | 40 | type DirIndexWrapper struct { 41 | dirs *[]*FIoDirectoryIndexEntry 42 | files *[]*FIoFileIndexEntry 43 | strTable *map[string]int 44 | strSlice *[]string 45 | } 46 | 47 | func (r *FIoDirectoryIndexEntry) AddFile(fpathSections []string, fIndex uint32, structure *DirIndexWrapper) { 48 | if len(fpathSections) == 0 { 49 | return 50 | } 51 | if len(fpathSections) == 1 { 52 | // only one item, add file and return; base case 53 | fname := fpathSections[0] 54 | newFile := FIoFileIndexEntry{ 55 | Name: uint32((*(*structure).strTable)[fname]), 56 | NextFileEntry: NoneEntry, 57 | UserData: fIndex, 58 | } 59 | newEntryIndex := uint32(len(*(*structure).files)) 60 | *(*structure).files = append(*(*structure).files, &newFile) 61 | 
62 | if (*r).FirstFileEntry == NoneEntry { 63 | (*r).FirstFileEntry = newEntryIndex 64 | } else { 65 | fentry := &(*(*structure).files)[(*r).FirstFileEntry] 66 | // filenames (with their path) are unique and will always be added 67 | for (*fentry).NextFileEntry != NoneEntry { 68 | fentry = &(*(*structure).files)[(*fentry).NextFileEntry] 69 | } 70 | (*fentry).NextFileEntry = newEntryIndex 71 | } 72 | return 73 | } 74 | // recursive case; find directory if present, otherwise add. 75 | var currDir *FIoDirectoryIndexEntry 76 | currDirName := fpathSections[0] 77 | currDirNameIndex := uint32((*(*structure).strTable)[currDirName]) 78 | 79 | possibleNewEntryIndex := uint32(len(*(*structure).dirs)) 80 | if (*r).FirstChildEntry == NoneEntry { 81 | newDirEntry := FIoDirectoryIndexEntry{ 82 | Name: currDirNameIndex, 83 | FirstChildEntry: NoneEntry, 84 | NextSiblingEntry: NoneEntry, 85 | FirstFileEntry: NoneEntry, 86 | } 87 | (*r).FirstChildEntry = possibleNewEntryIndex 88 | *(*structure).dirs = append(*(*structure).dirs, &newDirEntry) 89 | currDir = &newDirEntry 90 | } else { 91 | dentry := &(*(*structure).dirs)[(*r).FirstChildEntry] 92 | 93 | for (*dentry).Name != currDirNameIndex { 94 | if (*dentry).NextSiblingEntry != NoneEntry { 95 | dentry = &(*(*structure).dirs)[(*dentry).NextSiblingEntry] 96 | } else { 97 | break 98 | } 99 | } 100 | if (*dentry).Name == currDirNameIndex { 101 | // directory found 102 | currDir = (*dentry) 103 | } else { 104 | // add new directory 105 | newDirEntry := FIoDirectoryIndexEntry{ 106 | Name: currDirNameIndex, 107 | FirstChildEntry: NoneEntry, 108 | NextSiblingEntry: NoneEntry, 109 | FirstFileEntry: NoneEntry, 110 | } 111 | (*dentry).NextSiblingEntry = possibleNewEntryIndex 112 | *(*structure).dirs = append(*(*structure).dirs, &newDirEntry) 113 | currDir = &newDirEntry 114 | } 115 | } 116 | (*currDir).AddFile(fpathSections[1:], fIndex, structure) 117 | } 118 | 119 | func (c *FIoChunkID) ToHexString() string { 120 | return 
fmt.Sprintf("%016x%04x%02x%02x", c.ID, c.Index, c.Padding, c.Type) 121 | } 122 | 123 | func FromHexString(s string) FIoChunkID { 124 | var c FIoChunkID 125 | v, _ := strconv.ParseUint(s[:16], 16, 64) 126 | c.ID = uint64(v) 127 | idx, _ := strconv.ParseUint(s[16:20], 16, 16) 128 | c.Index = uint16(idx) 129 | b, _ := strconv.ParseUint(s[20:22], 16, 8) 130 | c.Padding = uint8(b) 131 | b, _ = strconv.ParseUint(s[22:], 16, 8) 132 | c.Type = uint8(b) 133 | return c 134 | } 135 | 136 | type FIoStoreTocCompressedBlockEntry struct { 137 | Offset [5]uint8 138 | CompressedSize [3]uint8 139 | UncompressedSize [3]uint8 140 | CompressionMethod uint8 141 | } 142 | type FIoOffsetAndLength struct { 143 | Offset [5]uint8 144 | Length [5]uint8 145 | } 146 | 147 | type FIoStoreTocEntryMeta struct { 148 | ChunkHash FIoChunkHash 149 | Flags FIoStoreTocEntryMetaFlags 150 | } 151 | 152 | type FIoChunkHash struct { 153 | Hash [20]uint8 //SHA1 154 | Padding [12]uint8 155 | } 156 | 157 | func normalize(s []byte) []byte { 158 | return append(s, make([]byte, 8-len(s))...) 
159 | } 160 | func (f *FIoOffsetAndLength) GetOffset() uint64 { 161 | return uint64(f.Offset[4]) | 162 | (uint64(f.Offset[3]) << 8) | 163 | (uint64(f.Offset[2]) << 16) | 164 | (uint64(f.Offset[1]) << 24) | 165 | (uint64(f.Offset[0]) << 32) 166 | } 167 | func (f *FIoOffsetAndLength) GetLength() uint64 { 168 | return uint64(f.Length[4]) | 169 | (uint64(f.Length[3]) << 8) | 170 | (uint64(f.Length[2]) << 16) | 171 | (uint64(f.Length[1]) << 24) | 172 | (uint64(f.Length[0]) << 32) 173 | } 174 | func (f *FIoOffsetAndLength) SetOffset(offset uint64) { 175 | f.Offset[0] = uint8(offset >> 32) 176 | f.Offset[1] = uint8(offset >> 24) 177 | f.Offset[2] = uint8(offset >> 16) 178 | f.Offset[3] = uint8(offset >> 8) 179 | f.Offset[4] = uint8(offset >> 0) 180 | } 181 | func (f *FIoOffsetAndLength) SetLength(length uint64) { 182 | f.Length[0] = uint8(length >> 32) 183 | f.Length[1] = uint8(length >> 24) 184 | f.Length[2] = uint8(length >> 16) 185 | f.Length[3] = uint8(length >> 8) 186 | f.Length[4] = uint8(length >> 0) 187 | } 188 | 189 | func (f *FIoStoreTocCompressedBlockEntry) GetOffset() uint64 { 190 | return binary.LittleEndian.Uint64(normalize(f.Offset[:])) 191 | } 192 | 193 | func (f *FIoStoreTocCompressedBlockEntry) GetCompressedSize() uint32 { 194 | return binary.LittleEndian.Uint32(normalize(f.CompressedSize[:])) 195 | } 196 | func (f *FIoStoreTocCompressedBlockEntry) GetUncompressedSize() uint32 { 197 | return binary.LittleEndian.Uint32(normalize(f.UncompressedSize[:])) 198 | } 199 | 200 | func (f *FIoStoreTocCompressedBlockEntry) SetOffset(offset uint64) { 201 | r := make([]byte, 5) 202 | for i := uint64(0); i < 5; i++ { 203 | r[i] = byte((offset >> (i * 8)) & 0xff) 204 | } 205 | copy(f.Offset[:], r) 206 | } 207 | func (f *FIoStoreTocCompressedBlockEntry) SetUncompressedSize(size uint32) { 208 | f.UncompressedSize[0] = uint8(size >> 0) 209 | f.UncompressedSize[1] = uint8(size >> 8) 210 | f.UncompressedSize[2] = uint8(size >> 16) 211 | } 212 | func (f 
*FIoStoreTocCompressedBlockEntry) SetCompressedSize(size uint32) { 213 | f.CompressedSize[0] = uint8(size >> 0) 214 | f.CompressedSize[1] = uint8(size >> 8) 215 | f.CompressedSize[2] = uint8(size >> 16) 216 | } 217 | -------------------------------------------------------------------------------- /go.mod: -------------------------------------------------------------------------------- 1 | module github.com/gitMenv/UEcastoc 2 | 3 | go 1.18 4 | 5 | require ( 6 | github.com/new-world-tools/go-oodle v0.1.2 7 | github.com/pierrec/lz4 v2.6.1+incompatible 8 | github.com/pierrec/lz4/v4 v4.1.17 9 | ) 10 | 11 | require ( 12 | github.com/frankban/quicktest v1.14.4 // indirect 13 | github.com/itchio/lzma v0.0.0-20190703113020-d3e24e3e3d49 // indirect 14 | ) 15 | -------------------------------------------------------------------------------- /go.sum: -------------------------------------------------------------------------------- 1 | github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= 2 | github.com/frankban/quicktest v1.14.4 h1:g2rn0vABPOOXmZUj+vbmUp0lPoXEMuhTpIluN0XL9UY= 3 | github.com/frankban/quicktest v1.14.4/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= 4 | github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38= 5 | github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= 6 | github.com/itchio/lzma v0.0.0-20190703113020-d3e24e3e3d49 h1:+YrBMf3rkLjkT10zIHyVE4S7ma4hqvfjl6XgnzZwS6o= 7 | github.com/itchio/lzma v0.0.0-20190703113020-d3e24e3e3d49/go.mod h1:avNrevQMli1pYPsz1+HIHMvx95pk6O+6otbWqCZPeZI= 8 | github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= 9 | github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= 10 | github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= 11 | github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= 12 | github.com/new-world-tools/go-oodle v0.1.2 
h1:KDZNiiN0TnkOcuncXQtlnteZQYD2samuv6H3phq2DCA= 13 | github.com/new-world-tools/go-oodle v0.1.2/go.mod h1:G8hCx5mlsQBT8xCYARaiIIyHgt0FI32LLpSN5/DryQM= 14 | github.com/pierrec/lz4 v2.6.1+incompatible h1:9UY3+iC23yxF0UfGaYrGplQ+79Rg+h/q9FV9ix19jjM= 15 | github.com/pierrec/lz4 v2.6.1+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY= 16 | github.com/pierrec/lz4/v4 v4.1.17 h1:kV4Ip+/hUBC+8T6+2EgburRtkE9ef4nbY3f4dFhGjMc= 17 | github.com/pierrec/lz4/v4 v4.1.17/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4= 18 | github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= 19 | github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8= 20 | github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= 21 | -------------------------------------------------------------------------------- /helpers.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "crypto/aes" 5 | "crypto/rand" 6 | "crypto/sha1" 7 | "encoding/binary" 8 | "encoding/hex" 9 | "strings" 10 | "unsafe" 11 | ) 12 | import "C" 13 | 14 | func strSliceToC(list *[]string) **C.char { 15 | strlist := *list 16 | n := len(strlist) 17 | // create an unsafe pointer with sufficient space on the stack 18 | strs := C.malloc(C.size_t(n) * C.size_t(unsafe.Sizeof(uintptr(0)))) 19 | // fill the array with strings by offsetting the unsafe pointers with the size of the char pointers 20 | for i := 0; i < int(n); i++ { 21 | *(**C.char)(unsafe.Pointer(uintptr(strs) + uintptr(i)*unsafe.Sizeof(*(**C.char)(strs)))) = C.CString(strlist[i]) 22 | } 23 | return (**C.char)(strs) 24 | } 25 | 26 | func convertAES(AES *C.char) []byte { 27 | s := "" 28 | if AES != nil { 29 | s = C.GoString(AES) 30 | } 31 | // go string to []byte 32 | s = strings.TrimPrefix(s, "0x") 33 | s = strings.TrimPrefix(s, "0X") 34 | 35 | b, _ := hex.DecodeString(s) 36 | 
return b 37 | } 38 | 39 | func decryptAES(ciphertext *[]byte, AES []byte) (*[]byte, error) { 40 | block, err := aes.NewCipher(AES) 41 | if err != nil { 42 | return nil, err 43 | } 44 | dst := make([]byte, len(*ciphertext)) 45 | for i := 0; i < len(dst); i += block.BlockSize() { // ECB mode: each 16-byte block is decrypted independently 46 | block.Decrypt(dst[i:], (*ciphertext)[i:]) 47 | } 48 | return &dst, nil 49 | } 50 | func encryptAES(plaintext *[]byte, AES []byte) (*[]byte, error) { 51 | block, err := aes.NewCipher(AES) 52 | if err != nil { 53 | return nil, err 54 | } 55 | dst := make([]byte, len(*plaintext)) 56 | for i := 0; i < len(dst); i += block.BlockSize() { // ECB mode: each 16-byte block is encrypted independently 57 | block.Encrypt(dst[i:], (*plaintext)[i:]) 58 | } 59 | return &dst, nil 60 | } 61 | 62 | func sha1Hash(fdata *[]byte) *FIoChunkHash { 63 | hasher := sha1.New() 64 | hasher.Write(*fdata) 65 | fileHash := hasher.Sum(nil) 66 | 67 | var hash FIoChunkHash 68 | copy(hash.Hash[:], fileHash[:20]) 69 | hash.Padding = [12]byte{} // explicitly set to 0 70 | return &hash 71 | } 72 | 73 | func getRandomBytes(n int) []byte { 74 | ret := make([]byte, n) 75 | rand.Read(ret) 76 | return ret 77 | } 78 | 79 | // An FString is serialized as a uint32 length preamble (the length includes the null terminator), 80 | // followed by the characters and a trailing nullbyte. 
81 | func stringToFString(str string) []byte { 82 | strlen := uint32(len(str) + 1) // include nullbyte 83 | fstring := make([]byte, int(strlen)+binary.Size(strlen)) 84 | binary.LittleEndian.PutUint32(fstring, strlen) 85 | for i := 0; i < len(str); i++ { 86 | fstring[4+i] = str[i] 87 | } 88 | fstring[len(fstring)-1] = 0 89 | return fstring 90 | } 91 | 92 | func uint32ToBytes(a *uint32) *[]byte { 93 | t := make([]byte, 4) 94 | binary.LittleEndian.PutUint32(t, *a) 95 | return &t 96 | } 97 | -------------------------------------------------------------------------------- /manifest.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "bytes" 5 | "encoding/binary" 6 | "encoding/json" 7 | "io" 8 | "os" 9 | "sort" 10 | ) 11 | 12 | // This file is used to extract dependency data from the .ucas file. 13 | // Not all fields are completely clear to me, so some of this must be saved somewhere to reconstruct the file later. 14 | // I can imagine that if the dependencies are missing, something will go wrong. 15 | // In short, this is "MAGIC" and I have no idea how or why it even works. 16 | 17 | const DepFileName = "dependencies" 18 | 19 | // Dependencies contains data extracted from the dependencies section in the .ucas file. 20 | // An instance of this will be used to convert back to this section. 
21 | type Dependencies struct { 22 | ThisPackageID uint64 `json:"packageID"` 23 | ChunkIDToDependencies map[uint64]FileDependency 24 | } 25 | 26 | type FileDependency struct { 27 | FileSize uint64 `json:"uncompressedSize"` 28 | ExportObjects uint32 `json:"exportObjects"` 29 | MostlyOne uint32 `json:"requiredValueSomehow"` 30 | SomeIndex uint64 `json:"uniqueIndex"` 31 | Dependencies []uint64 `json:"dependencies"` // lists the ID of each dependency 32 | } 33 | 34 | type DepsHeader struct { 35 | ThisPackageID uint64 36 | NumberOfIDs uint64 37 | IDSize uint32 // always 8 38 | Padding [4]byte // value is in hex, LE, 00 00 64 C1 39 | ZeroBytes [4]byte 40 | NumberOfIDsAgain uint32 // whether this means an offset or? 41 | } 42 | type DepLinks struct { 43 | FileSize uint64 // uncompressed file size 44 | ExportObjects uint32 // number of Export Objects 45 | MostlyOne uint32 // this is either 1, 2 or 3, but most often 1. 46 | 47 | // All entries have a unique value for this field, starting at 0 48 | // It does skip some numbers, so I'm not sure what this means. 49 | // Looks like some kind of index though 50 | SomeIndex uint64 51 | DependencyPackages uint32 // number of dependency packages that this connection has 52 | Offset uint32 // an offset to point to the dependency packages 53 | } 54 | 55 | type parseDependencies struct { 56 | Hdr DepsHeader 57 | IDs []uint64 58 | FileLength uint32 // file length from this offset 59 | Conns []DepLinks 60 | OffsetAfterConss int64 61 | Deps []uint64 62 | IDToConn map[uint64]DepLinks 63 | } 64 | 65 | type Manifest struct { 66 | Files []ManifestFile `json:"Files,omitempty"` // in the .utoc file 67 | Deps Dependencies `json:"Dependencies,omitempty"` 68 | // Packages []UcasPackages `json:"Packages,omitempty"` // the "dependencies" in .ucas file??? 
69 | } 70 | type UcasPackages struct { 71 | PathName string `json:"Name"` 72 | ExportBundleChunkIds []string `json:"ExportBundleChunkIds,omitempty"` 73 | BulkDataChunkIds []string `json:"BulkDataChunkIds,omitempty"` 74 | } 75 | type ManifestFile struct { 76 | Filepath string `json:"Path"` 77 | ChunkID string `json:"ChunkId"` 78 | } 79 | 80 | func (u *UTocData) constructManifest(ucasPath string) (m Manifest, err error) { 81 | for _, v := range u.files { 82 | mf := ManifestFile{Filepath: v.filepath, ChunkID: v.chunkID.ToHexString()} 83 | m.Files = append(m.Files, mf) 84 | } 85 | // files part has been added, now decode the dependencies 86 | data, err := u.unpackDependencies(ucasPath) 87 | if err != nil { 88 | return m, err 89 | } 90 | x, err := ParseDependencies(*data) 91 | m.Deps = *x 92 | return m, err 93 | } 94 | 95 | func readManifest(manifestPath string) (*Manifest, error) { 96 | b, err := os.ReadFile(manifestPath) 97 | if err != nil { 98 | return nil, err 99 | } 100 | var manifest Manifest 101 | err = json.Unmarshal(b, &manifest) 102 | return &manifest, err 103 | } 104 | 105 | func (s *parseDependencies) extractDependencies() *Dependencies { 106 | d := Dependencies{} 107 | d.ThisPackageID = s.Hdr.ThisPackageID 108 | d.ChunkIDToDependencies = make(map[uint64]FileDependency) 109 | for i, id := range s.IDs { 110 | fd := FileDependency{ 111 | FileSize: s.Conns[i].FileSize, 112 | ExportObjects: s.Conns[i].ExportObjects, 113 | MostlyOne: s.Conns[i].MostlyOne, 114 | SomeIndex: s.Conns[i].SomeIndex, 115 | } 116 | idx := s.Conns[i].Offset / 8 117 | for j := 0; j < int(s.Conns[i].DependencyPackages); j++ { 118 | fd.Dependencies = append(fd.Dependencies, s.Deps[idx+uint32(j)]) 119 | } 120 | d.ChunkIDToDependencies[id] = fd 121 | } 122 | return &d 123 | } 124 | 125 | // Deparses the Dependencies struct exactly as how it was parsed 126 | // This was checked using a simple diff tool. 
127 | func (d *Dependencies) Deparse() *[]byte { 128 | // deparse by writing all file portions to this buffer 129 | buf := bytes.NewBuffer([]byte{}) 130 | 131 | // write hdr 132 | hdr := DepsHeader{ 133 | ThisPackageID: d.ThisPackageID, 134 | NumberOfIDs: uint64(len(d.ChunkIDToDependencies)), 135 | IDSize: 8, 136 | Padding: [4]byte{0x00, 0x00, 0x64, 0xC1}, 137 | ZeroBytes: [4]byte{}, 138 | NumberOfIDsAgain: uint32(len(d.ChunkIDToDependencies)), 139 | } 140 | binary.Write(buf, binary.LittleEndian, hdr) 141 | 142 | // write list of IDs 143 | ids := []uint64{} 144 | totalNumberOfDependencies := 0 145 | for k := range d.ChunkIDToDependencies { 146 | ids = append(ids, k) 147 | totalNumberOfDependencies += len(d.ChunkIDToDependencies[k].Dependencies) 148 | } 149 | // these IDs are stored in order in this file, so sort here and use this ordering 150 | sort.Slice(ids, func(i, j int) bool { 151 | return ids[i] < ids[j] 152 | }) 153 | for _, id := range ids { 154 | binary.Write(buf, binary.LittleEndian, id) 155 | } 156 | // write file length from this point onwards 157 | var x DepLinks 158 | var flength uint32 = uint32(len(d.ChunkIDToDependencies)*binary.Size(x) + totalNumberOfDependencies*8) 159 | binary.Write(buf, binary.LittleEndian, flength) 160 | 161 | // write list of DepLinks entries 162 | endOfDeps := flength - uint32(totalNumberOfDependencies*8) 163 | depsToWrite := []uint64{} 164 | 165 | for i, id := range ids { 166 | entry := d.ChunkIDToDependencies[id] 167 | link := DepLinks{ 168 | FileSize: entry.FileSize, 169 | ExportObjects: entry.ExportObjects, 170 | MostlyOne: entry.MostlyOne, 171 | SomeIndex: entry.SomeIndex, 172 | DependencyPackages: uint32(len(entry.Dependencies)), 173 | Offset: 0, // 0 by default, must be calculated otherwise 174 | } 175 | 176 | if link.DependencyPackages != 0 { 177 | // some math to write the correct offsets 178 | offsetFieldOffset := i*binary.Size(link) + 16 + 8 179 | target := int(endOfDeps) + len(depsToWrite)*8 180 | depsToWrite = 
append(depsToWrite, entry.Dependencies...) 181 | link.Offset = uint32(target - offsetFieldOffset) 182 | } 183 | binary.Write(buf, binary.LittleEndian, link) 184 | } 185 | 186 | // write Dependencies to be written 187 | for _, depLink := range depsToWrite { 188 | binary.Write(buf, binary.LittleEndian, depLink) 189 | } 190 | 191 | // write 8 nullbytes 192 | var nulls uint64 = 0 193 | binary.Write(buf, binary.LittleEndian, nulls) 194 | 195 | b := buf.Bytes() 196 | return &b 197 | } 198 | 199 | func ParseDependencies(b []byte) (*Dependencies, error) { 200 | s := parseDependencies{} 201 | s.IDToConn = make(map[uint64]DepLinks) 202 | reader := bytes.NewReader(b) 203 | err := binary.Read(reader, binary.LittleEndian, &s.Hdr) 204 | if err != nil { 205 | return nil, err 206 | } 207 | for i := 0; i < int(s.Hdr.NumberOfIDs); i++ { 208 | var newID uint64 209 | binary.Read(reader, binary.LittleEndian, &newID) 210 | s.IDs = append(s.IDs, newID) 211 | } 212 | binary.Read(reader, binary.LittleEndian, &s.FileLength) 213 | curr, _ := reader.Seek(0, io.SeekCurrent) 214 | var x DepLinks 215 | s.OffsetAfterConss = curr + int64(s.Hdr.NumberOfIDs)*int64(binary.Size(x)) 216 | for i := 0; i < int(s.Hdr.NumberOfIDs); i++ { 217 | var conn DepLinks 218 | binary.Read(reader, binary.LittleEndian, &conn) 219 | // adjust so that this offset can be used to index the resulting array of connections... 220 | if conn.Offset != 0 { 221 | curr, _ := reader.Seek(0, io.SeekCurrent) 222 | conn.Offset = uint32(curr) + conn.Offset - uint32(s.OffsetAfterConss) - 8 223 | } 224 | s.IDToConn[s.IDs[i]] = conn 225 | s.Conns = append(s.Conns, conn) 226 | } 227 | s.OffsetAfterConss, err = reader.Seek(0, io.SeekCurrent) 228 | if err != nil { 229 | return nil, err 230 | } 231 | 232 | // parse the remainder of the file as IDs I guess? 
233 | toParse := s.FileLength - uint32(s.Hdr.NumberOfIDs)*uint32(binary.Size(s.Conns[0])) 234 | for i := 0; i < int(toParse); i += 8 { 235 | var id uint64 236 | binary.Read(reader, binary.LittleEndian, &id) 237 | s.Deps = append(s.Deps, id) 238 | } 239 | return s.extractDependencies(), nil 240 | } 241 | -------------------------------------------------------------------------------- /pack.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "bytes" 5 | "encoding/binary" 6 | "errors" 7 | "fmt" 8 | "os" 9 | "path/filepath" 10 | "strings" 11 | ) 12 | 13 | const ( 14 | CompSize = 0x10000 // default size of a compression block. Haven't seen any others 15 | PackUtocVersion = 3 // 3 is PartitionSize, 2 is DirectoryIndex according to https://github.com/FabianFG/CUE4Parse/blob/master/CUE4Parse/UE4/IO/Objects/FIoStoreTocHeader.cs 16 | CompressionNameLength = 32 17 | ) 18 | 19 | func listFilesInDir(dir string, pathToChunkID *map[string]FIoChunkID) (*[]GameFileMetaData, error) { 20 | var files []GameFileMetaData 21 | err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error { 22 | if err != nil { 23 | return err 24 | } 25 | // only add if it's a file 26 | if info.IsDir() { 27 | return nil 28 | } 29 | mountedPath, err := filepath.Abs(path) 30 | if err != nil { 31 | return err 32 | } 33 | fmt.Println("mountedpath: ", mountedPath) 34 | mountedPath = strings.TrimPrefix(mountedPath, dir) 35 | mountedPath = strings.ReplaceAll(mountedPath, "\\", "/") // ensure path separators are forward slashes, not Windows backslashes 36 | var offlen FIoOffsetAndLength 37 | offlen.SetLength(uint64(info.Size())) 38 | 39 | chidData, ok := (*pathToChunkID)[mountedPath] 40 | if !ok { 41 | return errors.New("a problem occurred while constructing the file. 
Did you use the correct manifest file?") 42 | } 43 | newEntry := GameFileMetaData{ 44 | filepath: mountedPath, 45 | chunkID: chidData, 46 | offlen: offlen, 47 | } 48 | fmt.Println("trimmed: ", mountedPath) 49 | files = append(files, newEntry) 50 | return nil 51 | }) 52 | 53 | return &files, err 54 | } 55 | 56 | // This function does way too much. It does the following; 57 | // - reads all files that must be packed 58 | // - compresses all the files as specified 59 | // - records all metadata of packing, required for the program. 60 | // - writes the compressed files to the .ucas file - not yet encrypted! 61 | func packFilesToUcas(files *[]GameFileMetaData, m *Manifest, dir string, outFilename string, compression string) error { 62 | 63 | /* manually add the "dependencies" section here */ 64 | // only include the dependencies that are present 65 | subsetDependencies := make(map[uint64]FileDependency) 66 | for _, v := range *files { 67 | // all values are ChunkIDs 68 | subsetDependencies[v.chunkID.ID] = (*m).Deps.ChunkIDToDependencies[v.chunkID.ID] 69 | } 70 | (*m).Deps.ChunkIDToDependencies = subsetDependencies 71 | 72 | // find uint64 of depfile 73 | depHexString := "" 74 | for _, v := range (*m).Files { 75 | if v.Filepath == DepFileName { 76 | depHexString = v.ChunkID 77 | } 78 | } 79 | 80 | compMethodNumber := uint8(0) 81 | if strings.ToLower(compression) != "none" { 82 | compMethodNumber = 1 83 | } 84 | compFun := getCompressionFunction(compression) 85 | if compFun == nil { 86 | return errors.New("could not find compression method. 
Please use none, oodle or zlib") 87 | } 88 | 89 | // create the new file in a new directory 90 | directory := filepath.Dir(outFilename) 91 | os.MkdirAll(directory, 0700) 92 | f, err := os.OpenFile(outFilename+".ucas", os.O_CREATE|os.O_WRONLY, 0644) 93 | if err != nil { 94 | return err 95 | } 96 | defer f.Close() // all file data is written in this function 97 | 98 | for i := 0; i < len(*files); i++ { 99 | b, err := os.ReadFile(dir + (*files)[i].filepath) 100 | 101 | // sorry, this is a little cursed 102 | if err != nil && (*files)[i].filepath != DepFileName { 103 | return err 104 | } 105 | // if the file doesn't exist, but the filepath indicates it's the dependency file... 106 | if (*files)[i].filepath == DepFileName { 107 | // attempt to deparse, fix filepath, set chunkid 108 | b = *(*m).Deps.Deparse() 109 | (*files)[i].filepath = "" 110 | (*files)[i].chunkID = FromHexString(depHexString) 111 | } 112 | (*files)[i].offlen.SetLength(uint64(len(b))) 113 | if i == 0 { 114 | (*files)[i].offlen.SetOffset(0) 115 | } else { 116 | offset := (*files)[i-1].offlen.GetOffset() + (*files)[i-1].offlen.GetLength() 117 | offset = ((offset + CompSize - 1) / CompSize) * CompSize // align to the compression block size (0x10000) 118 | (*files)[i].offlen.SetOffset(offset) 119 | } 120 | (*files)[i].metadata.ChunkHash = *sha1Hash(&b) 121 | (*files)[i].metadata.Flags = 1 // not sure what this should be? 
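The offset bump in the loop above, `((offset + CompSize - 1) / CompSize) * CompSize`, is ordinary round-up-to-multiple alignment: each file's start offset in the .ucas payload lands on a compression-block boundary. A minimal standalone sketch (only `CompSize` comes from pack.go; the helper name is illustrative):

```go
package main

import "fmt"

// CompSize mirrors the constant in pack.go: the size of one compression block.
const CompSize = 0x10000

// alignUp rounds n up to the next multiple of blockSize (blockSize > 0),
// using the same integer-division trick as packFilesToUcas.
func alignUp(n, blockSize uint64) uint64 {
	return ((n + blockSize - 1) / blockSize) * blockSize
}

func main() {
	fmt.Println(alignUp(0, CompSize))       // 0
	fmt.Println(alignUp(1, CompSize))       // 65536
	fmt.Println(alignUp(0x10000, CompSize)) // 65536
	fmt.Println(alignUp(0x10001, CompSize)) // 131072
}
```

A value that is already a multiple stays put; anything else is pushed to the next boundary.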
122 | 123 | // now perform compression, writing each compressed block to the ucas file 124 | for len(b) != 0 { 125 | var chunk []byte 126 | var block FIoStoreTocCompressedBlockEntry 127 | chunkLen := len(b) 128 | if chunkLen > CompSize { 129 | chunkLen = CompSize 130 | } 131 | chunk = b[:chunkLen] 132 | cChunkPtr, err := compFun(&chunk) 133 | if err != nil { 134 | return err 135 | } 136 | compressedChunk := *cChunkPtr 137 | 138 | block.CompressionMethod = compMethodNumber 139 | currOffset, _ := f.Seek(0, os.SEEK_CUR) 140 | block.SetOffset(uint64(currOffset)) 141 | block.SetUncompressedSize(uint32(chunkLen)) 142 | block.SetCompressedSize(uint32(len(compressedChunk))) 143 | // align this compressedChunk to a 0x10 boundary with random bytes as padding 144 | compressedChunk = append(compressedChunk, getRandomBytes((0x10-(len(compressedChunk)%0x10))&(0x10-1))...) 145 | b = b[chunkLen:] 146 | 147 | (*files)[i].compressionBlocks = append((*files)[i].compressionBlocks, block) 148 | 149 | // write chunk to the new .ucas file 150 | f.Write(compressedChunk) 151 | } 152 | fmt.Println("Packed: ", (*files)[i].filepath) 153 | } 154 | return nil 155 | } 156 | 157 | func (w *DirIndexWrapper) ToBytes() *[]byte { 158 | buf := bytes.NewBuffer([]byte{}) 159 | 160 | dirCount := uint32(len(*w.dirs)) 161 | fileCount := uint32(len(*w.files)) 162 | strCount := uint32(len(*w.strSlice)) 163 | // mount point string 164 | mountPointStr := stringToFString(MountPoint) 165 | buf.Write(mountPointStr) 166 | 167 | // directory index entries 168 | buf.Write(*uint32ToBytes(&dirCount)) 169 | for _, directoryEntry := range *w.dirs { 170 | binary.Write(buf, binary.LittleEndian, *directoryEntry) 171 | } 172 | 173 | // file index entries 174 | buf.Write(*uint32ToBytes(&fileCount)) 175 | for _, fileEntry := range *w.files { 176 | binary.Write(buf, binary.LittleEndian, *fileEntry) 177 | } 178 | 179 | // string table 180 | buf.Write(*uint32ToBytes(&strCount)) 181 | for _, str := range *w.strSlice { 182 | 
buf.Write(stringToFString(str)) 183 | } 184 | output := buf.Bytes() 185 | return &output 186 | } 187 | 188 | func deparseDirectoryIndex(files *[]GameFileMetaData) *[]byte { 189 | var wrapper DirIndexWrapper 190 | var dirIndexEntries []*FIoDirectoryIndexEntry 191 | var fileIndexEntries []*FIoFileIndexEntry 192 | 193 | // first, create unique slice of strings 194 | strmap := make(map[string]bool) 195 | for _, v := range *files { 196 | dirfiles := strings.Split(v.filepath, "/") 197 | if dirfiles[0] == "" { 198 | dirfiles = dirfiles[1:] 199 | } 200 | for _, str := range dirfiles { 201 | strmap[str] = true 202 | } 203 | } 204 | strSlice := []string{} 205 | for k, _ := range strmap { 206 | strSlice = append(strSlice, k) 207 | } 208 | // of this, create a map for quick lookup 209 | strIdx := make(map[string]int) 210 | for i, v := range strSlice { 211 | strIdx[v] = i 212 | } 213 | root := FIoDirectoryIndexEntry{ 214 | Name: NoneEntry, 215 | FirstChildEntry: NoneEntry, 216 | NextSiblingEntry: NoneEntry, 217 | FirstFileEntry: NoneEntry, 218 | } 219 | dirIndexEntries = append(dirIndexEntries, &root) 220 | wrapper.dirs = &dirIndexEntries 221 | wrapper.files = &fileIndexEntries 222 | wrapper.strTable = &strIdx 223 | wrapper.strSlice = &strSlice 224 | 225 | for i, v := range *files { 226 | fpathSections := strings.Split(v.filepath, "/") 227 | if fpathSections[0] == "" { 228 | fpathSections = fpathSections[1:] 229 | } 230 | root.AddFile(fpathSections, uint32(i), &wrapper) 231 | } 232 | 233 | return wrapper.ToBytes() 234 | } 235 | 236 | func constructUtocFile(files *[]GameFileMetaData, compression string, AESKey []byte) (*[]byte, error) { 237 | var udata UTocData 238 | newContainerFlags := uint8(IndexedContainerFlag) 239 | 240 | compressionMethods := []string{"None"} 241 | if strings.ToLower(compression) != "none" { 242 | compressionMethods = append(compressionMethods, compression) 243 | newContainerFlags |= uint8(CompressedContainerFlag) 244 | } 245 | 246 | if len(AESKey) != 0 { 
247 | newContainerFlags |= uint8(EncryptedContainerFlag) 248 | } 249 | compressedBlocksCount := uint32(0) 250 | var containerIndex int 251 | // fmt.Printf("%+v\n", files) 252 | for i, v := range *files { 253 | compressedBlocksCount += uint32(len(v.compressionBlocks)) 254 | // fmt.Println("containerIndex:", v.chunkID.Type) 255 | // should be the right type for this container/dependencies/whatever-its-called-chunk 256 | if v.chunkID.Type == 10 { 257 | containerIndex = i 258 | // fmt.Println("containerIndex:", containerIndex) 259 | } 260 | } 261 | 262 | dirIndexBytes := deparseDirectoryIndex(files) 263 | // the container uint64 must be unique and new from any other ID from within the file. 264 | // There is a low probability that there is a collision with any other uint64 that is already in the file. 265 | // When this happens, the mod won't work without any apparent reason, so this would be the first place to start investigating. 266 | var magic [16]byte 267 | for i := 0; i < len(MagicUtoc); i++ { 268 | magic[i] = MagicUtoc[i] 269 | } 270 | // setting the required header fields 271 | udata.hdr = UTocHeader{ 272 | Magic: magic, 273 | Version: 3, // like the Grounded files 274 | HeaderSize: uint32(binary.Size(udata.hdr)), 275 | EntryCount: uint32(len(*files)), 276 | CompressedBlockEntryCount: compressedBlocksCount, 277 | CompressedBlockEntrySize: 12, 278 | CompressionMethodNameCount: uint32(len(compressionMethods) - 1), // "extra" methods, other than "none" 279 | CompressionMethodNameLength: CompressionNameLength, 280 | CompressionBlockSize: CompSize, 281 | DirectoryIndexSize: uint32(len(*dirIndexBytes)), // number of bytes in the dirIndex 282 | ContainerID: FIoContainerID((*files)[containerIndex].chunkID.ID), 283 | ContainerFlags: EIoContainerFlags(newContainerFlags), 284 | PartitionSize: 0xffffffffffffffff, 285 | PartitionCount: 1, 286 | } 287 | 288 | buf := bytes.NewBuffer([]byte{}) 289 | // write header 290 | binary.Write(buf, binary.LittleEndian, udata.hdr) 291 | 
292 | // write chunk IDs 293 | for _, v := range *files { 294 | binary.Write(buf, binary.LittleEndian, v.chunkID) 295 | } 296 | 297 | // write Offset and lengths 298 | for _, v := range *files { 299 | binary.Write(buf, binary.LittleEndian, v.offlen) 300 | } 301 | 302 | // write compression blocks 303 | for _, v := range *files { 304 | for _, b := range v.compressionBlocks { 305 | binary.Write(buf, binary.LittleEndian, b) 306 | } 307 | } 308 | 309 | // write compression methods, but skip "none" 310 | for _, compMethod := range compressionMethods { 311 | if strings.ToLower(compMethod) == "none" { 312 | continue 313 | } 314 | capitalized := strings.Title(compMethod) 315 | bname := make([]byte, 32) 316 | for i := 0; i < len(capitalized); i++ { 317 | bname[i] = capitalized[i] 318 | } 319 | binary.Write(buf, binary.LittleEndian, bname) 320 | } 321 | 322 | // write directory index 323 | binary.Write(buf, binary.LittleEndian, dirIndexBytes) 324 | 325 | // write chunk metas 326 | for _, v := range *files { 327 | binary.Write(buf, binary.LittleEndian, v.metadata) 328 | } 329 | output := buf.Bytes() 330 | 331 | return &output, nil 332 | } 333 | 334 | // returns the GameFileMetaData of the dependencies file 335 | func packToCasToc(dir string, m *Manifest, outFilename string, compression string, aes []byte) (int, error) { 336 | 337 | var offlen FIoOffsetAndLength 338 | var fdata []GameFileMetaData 339 | var newEntry GameFileMetaData 340 | for _, v := range (*m).Files { 341 | var p string = filepath.Join(dir, v.Filepath) 342 | if info, err := os.Stat(p); err == nil { 343 | // fmt.Println("exists", v.Filepath) 344 | offlen.SetLength(uint64(info.Size())) 345 | } else if errors.Is(err, os.ErrNotExist) && v.Filepath == DepFileName { 346 | //dependencies file doesnt exist, but still needs to be parsed so add it here anyways! 
347 | // fmt.Println("exin't", v.Filepath) 348 | offlen.SetLength(0) //will be fixed in a later function 349 | } 350 | newEntry = GameFileMetaData{ 351 | filepath: v.Filepath, 352 | chunkID: FromHexString(v.ChunkID), 353 | offlen: offlen, 354 | } 355 | fdata = append(fdata, newEntry) 356 | } 357 | 358 | // fmt.Printf("%+v\n", pathToChunkID) 359 | // first aggregate flat list of all the files that are in the dir 360 | // the resulting slice must keep its indices structure. 361 | // fdata, err := listFilesInDir(dir, &pathToChunkID) 362 | // if err != nil { 363 | // return 0, err 364 | // } 365 | 366 | // read each file and place them in a newly created .ucas file with the desired compression method 367 | // get the required data such as compression sizes and hashes; 368 | err := packFilesToUcas(&fdata, m, dir, outFilename, compression) 369 | if err != nil { 370 | return 0, err 371 | } 372 | 373 | // .ucas file has been written now; encrypt with aes if desired (why would you?) 374 | if len(aes) != 0 { 375 | b, err := os.ReadFile(outFilename + ".ucas") 376 | if err != nil { 377 | return 0, err 378 | } 379 | encrypted, err := encryptAES(&b, aes) 380 | if err != nil { 381 | return 0, err 382 | } 383 | err = os.WriteFile(outFilename+".ucas", *encrypted, os.ModePerm) 384 | if err != nil { 385 | return 0, err 386 | } 387 | } 388 | 389 | // .utoc file must be generated, especially the directory index, which is the hardest part. 
390 | utocBytes, err := constructUtocFile(&fdata, compression, aes) 391 | if err != nil { 392 | return 0, err 393 | } 394 | err = os.WriteFile(outFilename+".utoc", *utocBytes, os.ModePerm) 395 | return len(fdata), err 396 | } 397 | -------------------------------------------------------------------------------- /ucas.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "errors" 5 | "fmt" 6 | "os" 7 | "path/filepath" 8 | "regexp" 9 | ) 10 | 11 | func (d *UTocData) unpackFile(fdata *GameFileMetaData, blockData *[][]byte, outDir string) error { 12 | os.MkdirAll(outDir, 0700) 13 | bdata := *blockData 14 | outputData := []byte{} 15 | for i := 0; i < len(bdata); i++ { 16 | method := d.compressionMethods[fdata.compressionBlocks[i].CompressionMethod] 17 | decomp := getDecompressionFunction(method) 18 | if decomp == nil { 19 | return fmt.Errorf("decompression method %s not known", method) 20 | } 21 | newData, err := decomp(&(bdata[i]), fdata.compressionBlocks[i].GetUncompressedSize()) 22 | if err != nil { 23 | return err 24 | } 25 | outputData = append(outputData, (*newData)...) 
26 | } 27 | // ensure path exists to the file 28 | fpath := filepath.Clean(outDir + fdata.filepath) 29 | directory := filepath.Dir(fpath) 30 | 31 | os.MkdirAll(directory, 0700) 32 | // write the actual data to the new file 33 | err := os.WriteFile(fpath, outputData, 0644) 34 | 35 | return err 36 | } 37 | 38 | func (d *UTocData) matchRegex(regex string) *[]GameFileMetaData { 39 | filesToUnpack := []GameFileMetaData{} 40 | for _, v := range d.files { 41 | match, err := regexp.MatchString(regex, v.filepath) 42 | if err != nil { 43 | return &filesToUnpack 44 | } 45 | // exclude special "dependencies" file, as it's not meant to be directly unpacked 46 | // for unpacking that file, have a look at the function to construct the manifest! 47 | if match && v.filepath != DepFileName { 48 | filesToUnpack = append(filesToUnpack, v) 49 | } 50 | } 51 | return &filesToUnpack 52 | } 53 | 54 | func (d *UTocData) unpackUcasFiles(ucasPath string, outDir string, regex string) (filesUnpacked int, err error) { 55 | outDir += d.mountPoint // adjust for mountpoint 56 | filesUnpacked = 0 57 | // read the file 58 | openUcas, err := os.Open(ucasPath) 59 | if err != nil { 60 | return filesUnpacked, err 61 | } 62 | defer openUcas.Close() 63 | 64 | filesToUnpack := *(d.matchRegex(regex)) 65 | // each "file" is built from compression blocks 66 | // extract those compression blocks from the .ucas file and use those for unpacking 67 | // Since there's one place where the .ucas file is actually read, it can act as a work divider. 68 | // that may make it possible to make it run multithreaded in the future! 
	for _, v := range filesToUnpack {
		var compressionblockData [][]byte
		for _, b := range v.compressionBlocks {
			_, err = openUcas.Seek(int64(b.GetOffset()), 0)
			if err != nil {
				return filesUnpacked, err
			}
			buf := make([]byte, b.GetCompressedSize())
			readBytes, err := openUcas.Read(buf)
			if err != nil {
				return filesUnpacked, err
			}
			if uint32(readBytes) != b.GetCompressedSize() {
				return filesUnpacked, errors.New("could not read the correct size")
			}
			compressionblockData = append(compressionblockData, buf)
		}
		// all separate blocks collected for file unpacking
		err = d.unpackFile(&v, &compressionblockData, outDir)
		if err != nil {
			return filesUnpacked, err
		}
		filesUnpacked++
	}
	return filesUnpacked, nil
}
--------------------------------------------------------------------------------
/utoc.go:
--------------------------------------------------------------------------------
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io/ioutil"
	"os"
	"strings"
)

const (
	MagicUtoc       string = "-==--==--==--==-"
	UnrealSignature string = "\xC1\x83\x2A\x9E"
	MountPoint      string = "../../../"
	NoneEntry       uint32 = 0xffffffff
)

const (
	VersionInvalid uint8 = iota
	VersionInitial
	VersionDirectoryIndex
	VersionPartitionSize
	VersionPerfectHash
	VersionPerfectHashWithOverflow
	VersionLatest
)

type EIoContainerFlags uint8

const (
	NoneContainerFlag       EIoContainerFlags = 0
	CompressedContainerFlag                   = 1 << 0
	EncryptedContainerFlag                    = 1 << 1
	SignedContainerFlag                       = 1 << 2
	IndexedContainerFlag                      = 1 << 3
)

type FGuid struct {
	A, B, C, D uint32
}

type UTocHeader struct {
	Magic                            [16]byte
	Version                          uint8    // Current options are Initial(1), DirectoryIndex(2), PartitionSize(3)
	Reserved0                        [3]uint8 // actually a uint8 and a uint16
	HeaderSize                       uint32   // value is 144
	EntryCount                       uint32
	CompressedBlockEntryCount        uint32
	CompressedBlockEntrySize         uint32 // they say "For sanity checking"
	CompressionMethodNameCount       uint32
	CompressionMethodNameLength      uint32
	CompressionBlockSize             uint32
	DirectoryIndexSize               uint32
	PartitionCount                   uint32 // should be 0
	ContainerID                      FIoContainerID
	EncryptionKeyGuid                FGuid
	ContainerFlags                   EIoContainerFlags
	Reserved1                        [3]byte
	TocChunkPerfectHashSeedsCount    uint32
	PartitionSize                    uint64
	TocChunksWithoutPerfectHashCount uint32
	Reserved2                        [44]byte
}

func (h *UTocHeader) isEncrypted() bool {
	return h.ContainerFlags&EncryptedContainerFlag != 0
}

// A ucas file consists of files. For each file, there is an entry with this data.
// It states where you can find which file in the ucas file.
type GameFileMetaData struct {
	filepath          string
	chunkID           FIoChunkID
	offlen            FIoOffsetAndLength
	compressionBlocks []FIoStoreTocCompressedBlockEntry
	metadata          FIoStoreTocEntryMeta
}

type UTocData struct {
	hdr                UTocHeader
	mountPoint         string
	files              []GameFileMetaData
	compressionMethods []string
}

type GameFilePathData struct {
	fpath    string
	userData uint32
}

func (u *UTocData) unpackDependencies(ucasPath string) (*[]byte, error) {
	// find the dependency file independent of its index
	var depfile GameFileMetaData
	// depfile := u.files[len(u.files)-1]
	for _, f := range u.files {
		if f.filepath == DepFileName {
			depfile = f
		}
	}
	if depfile.filepath != DepFileName {
		return nil, errors.New("could not derive dependencies")
	}
	// open ucas file
	openUcas, err := os.Open(ucasPath)
	if err != nil {
		return nil, err
	}
	defer openUcas.Close()

	var compressionblockData [][]byte
	for _, b := range depfile.compressionBlocks {
		_, err = openUcas.Seek(int64(b.GetOffset()), 0)
		if err != nil {
			return nil, err
		}
		buf := make([]byte, b.GetCompressedSize())
		readBytes, err := openUcas.Read(buf)
		if err != nil {
			return nil, err
		}
		if uint32(readBytes) != b.GetCompressedSize() {
			return nil, errors.New("could not read the correct size")
		}
		compressionblockData = append(compressionblockData, buf)
	}
	// all separate blocks collected for file unpacking
	outputData := []byte{}
	for i := 0; i < len(compressionblockData); i++ {
		method := u.compressionMethods[depfile.compressionBlocks[i].CompressionMethod]
		decomp := getDecompressionFunction(method)
		if decomp == nil {
			return nil, fmt.Errorf("decompression method %s not known", method)
		}
		newData, err := decomp(&(compressionblockData[i]), depfile.compressionBlocks[i].GetUncompressedSize())
		if err != nil {
			return nil, err
		}
		outputData = append(outputData, (*newData)...)
	}
	return &outputData, nil
}

func recursiveDirExplorer(parentPath string, pDir uint32, outputList *[]GameFilePathData,
	strTable *[]string, dirs *[]FIoDirectoryIndexEntry, files *[]FIoFileIndexEntry) {

	dirIdx := (*dirs)[pDir].FirstChildEntry
	fileIdx := (*dirs)[pDir].FirstFileEntry
	if dirIdx == NoneEntry && fileIdx == NoneEntry { // base case
		return
	}

	for dirIdx != NoneEntry {
		dirEntry := (*dirs)[dirIdx]
		newDirName := parentPath + "/" + (*strTable)[dirEntry.Name]
		recursiveDirExplorer(newDirName, dirIdx, outputList, strTable, dirs, files)
		dirIdx = dirEntry.NextSiblingEntry
	}
	for fileIdx != NoneEntry {
		fileEntry := (*files)[fileIdx]
		filepath := parentPath + "/" + (*strTable)[fileEntry.Name]
		(*outputList) = append((*outputList), GameFilePathData{fpath: filepath, userData: fileEntry.UserData})
		fileIdx = fileEntry.NextFileEntry
	}
}

// returned is a slice of filepaths in the correct order,
// meaning that the indices correspond to their file-index userData field.
func parseDirectoryIndex(r *bytes.Reader, numberOfChunks int) (mountpoint string, filepaths *[]string) {
	var size, dirCount, fileCount, stringCount uint32
	var dirs []FIoDirectoryIndexEntry
	var files []FIoFileIndexEntry
	var strTable []string

	// mount point string (serialized with a trailing null byte)
	binary.Read(r, binary.LittleEndian, &size)
	mntPt := make([]byte, size)
	binary.Read(r, binary.LittleEndian, &mntPt)
	mountPointName := string(mntPt[:len(mntPt)-1])
	if !strings.HasPrefix(mountPointName, MountPoint) {
		return "", nil
	}
	mountpoint = strings.TrimPrefix(mountPointName, MountPoint)

	var dirEntry FIoDirectoryIndexEntry
	binary.Read(r, binary.LittleEndian, &dirCount)
	for i := 0; uint32(i) < dirCount; i++ {
		binary.Read(r, binary.LittleEndian, &dirEntry)
		dirs = append(dirs, dirEntry)
	}

	var fileEntry FIoFileIndexEntry
	binary.Read(r, binary.LittleEndian, &fileCount)
	for i := 0; uint32(i) < fileCount; i++ {
		binary.Read(r, binary.LittleEndian, &fileEntry)
		files = append(files, fileEntry)
	}

	binary.Read(r, binary.LittleEndian, &stringCount)
	for i := 0; uint32(i) < stringCount; i++ {
		binary.Read(r, binary.LittleEndian, &size)
		newString := make([]byte, size)
		binary.Read(r, binary.LittleEndian, &newString)
		strTable = append(strTable, string(newString[:len(newString)-1]))
	}
	if len(dirs) == 0 || dirs[0].Name != NoneEntry {
		return "", nil
	}

	var gamefilePaths []GameFilePathData
	recursiveDirExplorer("", 0, &gamefilePaths, &strTable, &dirs, &files)

	// order the filepaths according to their userData
	orderedPaths := make([]string, numberOfChunks)
	for _, v := range gamefilePaths {
		orderedPaths[v.userData] = v.fpath
	}

	return mountpoint, &orderedPaths
}

func parseUtocHeader(r *bytes.Reader) (hdr UTocHeader, err error) {
	// read the header of the .utoc file
	err = binary.Read(r, binary.LittleEndian, &hdr)
	if err != nil {
		return hdr, err
	}
	if string(hdr.Magic[:]) != MagicUtoc {
		return hdr, errors.New("magic word of .utoc file was not found")
	}
	if hdr.Version < VersionDirectoryIndex {
		return hdr, errors.New("utoc version is outdated")
	}
	if hdr.Version > VersionLatest {
		return hdr, errors.New("utoc version is too new")
	}

	if hdr.Version < VersionPartitionSize {
		hdr.PartitionCount = 1
		hdr.PartitionSize = 0xffffffffffffffff // limit of uint64
		fmt.Println("Warning: this is a version of the utoc file format that may not be supported yet")
	}
	if hdr.CompressedBlockEntrySize != 12 { // must be the size of FIoStoreTocCompressedBlockEntry
		return hdr, errors.New("compressed block entry size was incorrect")
	}

	if hdr.ContainerFlags&SignedContainerFlag != 0 {
		// the reference project may contain flags here, but no idea what it should do...
		return hdr, errors.New("the unreal engine dictates that this is an error. No idea why (yet)... Sorry!")
	}

	return hdr, nil
}

// the UTocData can be used to extract all information from the ucas files
func parseUtocFile(utocFile string, aesKey []byte) (*UTocData, error) {
	var udata UTocData
	b, err := ioutil.ReadFile(utocFile)
	if err != nil {
		return nil, err
	}
	r := bytes.NewReader(b)

	udata.hdr, err = parseUtocHeader(r)
	if err != nil {
		return nil, err
	}
	if udata.hdr.isEncrypted() && len(aesKey) == 0 {
		return &udata, errors.New("encrypted file, but no AES key was provided; please pass the AES key as a string in hexadecimal format")
	}

	// parse the remaining sections of the file
	var chunkIDs []FIoChunkID
	var offlengths []FIoOffsetAndLength
	var perfectHashSeeds []uint32
	var withoutPerfectHashes []uint32
	var compressionBlocks []FIoStoreTocCompressedBlockEntry
	var filepaths []string
	var metas []FIoStoreTocEntryMeta

	// following the header is a list of chunk IDs
	var chunkID FIoChunkID
	for i := 0; i < int(udata.hdr.EntryCount); i++ {
		binary.Read(r, binary.LittleEndian, &chunkID)
		chunkIDs = append(chunkIDs, chunkID)
	}

	var offlen FIoOffsetAndLength
	for i := 0; i < int(udata.hdr.EntryCount); i++ {
		binary.Read(r, binary.LittleEndian, &offlen)
		offlengths = append(offlengths, offlen)
	}
	// if there are perfect hash seeds, these must be parsed before the compression blocks
	var hashSeed uint32
	for i := 0; i < int(udata.hdr.TocChunkPerfectHashSeedsCount); i++ {
		binary.Read(r, binary.LittleEndian, &hashSeed)
		perfectHashSeeds = append(perfectHashSeeds, hashSeed)
	}
	// same goes for the chunks without perfect hashes
	for i := 0; i < int(udata.hdr.TocChunksWithoutPerfectHashCount); i++ {
		binary.Read(r, binary.LittleEndian, &hashSeed)
		withoutPerfectHashes = append(withoutPerfectHashes, hashSeed)
	}

	// read compression blocks
	var cBlock FIoStoreTocCompressedBlockEntry
	for i := 0; i < int(udata.hdr.CompressedBlockEntryCount); i++ {
		binary.Read(r, binary.LittleEndian, &cBlock)
		compressionBlocks = append(compressionBlocks, cBlock)
	}
	// read compression methods; index 0 implicitly means "None"
	udata.compressionMethods = append(udata.compressionMethods, "None")

	method := make([]byte, udata.hdr.CompressionMethodNameLength)
	for i := 0; i < int(udata.hdr.CompressionMethodNameCount); i++ {
		binary.Read(r, binary.LittleEndian, &method)
		udata.compressionMethods = append(udata.compressionMethods, string(bytes.Trim(method, "\x00")))
	}

	// read the directory index, but only if the containerFlags state that it is present. TODO?
	dirIndexBuffer := make([]byte, udata.hdr.DirectoryIndexSize)
	binary.Read(r, binary.LittleEndian, &dirIndexBuffer) // the main reader is advanced past the index here as well

	if udata.hdr.isEncrypted() {
		plaintext, err := decryptAES(&dirIndexBuffer, aesKey)
		if err != nil {
			return &udata, err
		}
		dirIndexBuffer = *plaintext
	}
	dirReader := bytes.NewReader(dirIndexBuffer)

	mntPt, fpaths := parseDirectoryIndex(dirReader, len(chunkIDs))
	udata.mountPoint = mntPt
	if fpaths == nil {
		return &udata, errors.New("something went wrong parsing the directory index")
	}
	filepaths = *fpaths

	// read file chunk metas
	var meta FIoStoreTocEntryMeta
	for i := 0; i < int(udata.hdr.EntryCount); i++ {
		binary.Read(r, binary.LittleEndian, &meta)
		metas = append(metas, meta)
	}

	// the dependency "file" isn't always the last chunk, so it is searched for explicitly
	var foundDeps bool
	var path string
	// aggregate file data
	for i, v := range filepaths {
		startBlock := offlengths[i].GetOffset() / uint64(udata.hdr.CompressionBlockSize)
		// round the length up to the next multiple of the compression block size
		endBlock := startBlock + (offlengths[i].GetLength()+(uint64(udata.hdr.CompressionBlockSize)-1))/uint64(udata.hdr.CompressionBlockSize)
		blocks := compressionBlocks[startBlock:endBlock]
		if v == "" {
			// check for the "dependencies" chunk via its type instead of assuming it's last;
			// in the sample I'm running this on, the chunkID matches the one in the header.
			if chunkIDs[i].Type != 10 && uint64(udata.hdr.ContainerID) != chunkIDs[i].ID {
				// an empty name whose chunk type is not 10 and whose ID does not match the container ID is not the dependencies chunk
				continue
			}
			foundDeps = true
			path = DepFileName
		} else {
			path = v
		}
		udata.files = append(udata.files, GameFileMetaData{
			filepath:          path,
			chunkID:           chunkIDs[i],
			offlen:            offlengths[i],
			compressionBlocks: blocks,
			metadata:          metas[i],
		})
	}
	if !foundDeps {
		return &udata, errors.New("couldn't find dependencies")
	}
	// the final file in the list will have filepath "dependencies"
	// manually stick this on at the end for compatibility?
	// udata.files = append(udata.files, deps)
	return &udata, nil
}
--------------------------------------------------------------------------------
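The `startBlock`/`endBlock` computation in `parseUtocFile` maps a chunk's byte range onto whole compression blocks using integer ceiling division: the length is rounded up so that a chunk spilling even one byte into the next block still claims that block. A minimal standalone sketch of that arithmetic (the `ceilDiv` helper and the example numbers are illustrative, not part of this repository):

```go
package main

import "fmt"

// ceilDiv rounds length up to a whole number of blocks,
// mirroring the endBlock computation in parseUtocFile:
// ceil(a/b) == (a + b - 1) / b for positive integers.
func ceilDiv(length, blockSize uint64) uint64 {
	return (length + blockSize - 1) / blockSize
}

func main() {
	const blockSize uint64 = 0x10000 // CompressionBlockSize from the header (64 KiB)
	// hypothetical chunk: starts at byte 0x20000 and is 0x10001 bytes long
	offset, length := uint64(0x20000), uint64(0x10001)
	startBlock := offset / blockSize
	endBlock := startBlock + ceilDiv(length, blockSize)
	// one byte past a block boundary still claims a whole extra block
	fmt.Println(startBlock, endBlock) // prints: 2 4
}
```

Slicing `compressionBlocks[startBlock:endBlock]` with these bounds yields every block entry that overlaps the chunk, which is why the length is rounded up rather than truncated.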