├── .gitignore ├── .ss1.png ├── Finish-new.dat └── null ├── README.md ├── Xtrakt ├── convert-dat └── null ├── file_context_zone └── nul └── tools ├── README.md ├── blockimgdiff.py ├── blockimgdiff.pyc ├── common.py ├── common.pyc ├── img2sdat.py ├── lib64 ├── libc++.so ├── sefcontext └── sefcontext_compile ├── make_ext4fs ├── nul ├── rangelib.py ├── rangelib.pyc ├── sdat2img.py ├── simg2img ├── sparse_img.py └── sparse_img.pyc /.gitignore: -------------------------------------------------------------------------------- 1 | Finish-new.dat* 2 | convert-dat* 3 | file_context_zone* 4 | *.pyc 5 | -------------------------------------------------------------------------------- /.ss1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/.ss1.png -------------------------------------------------------------------------------- /Finish-new.dat/null: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/Finish-new.dat/null -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Android System Extraction and Repack Tool 2 | ========================================== 3 | 4 | by : iykeDROID™ 5 | 6 | _Nana Iyke Quame_ 7 | 8 | [WEBSITE](http://www.droidpeepz.xyz/) 9 | 10 | ![SS](https://github.com/iykequame/Android_System_Extraction_and_Repack_Tool/blob/master/.ss1.png) 11 | 12 | > Current Version : [v1.8](https://github.com/iykequame/android_system_extraction_and_repack_tool/releases/tag/v1.8) 13 | 14 | **USAGE:** 15 | ---------- 16 | **NOTE** 17 | Make sure that **android_system_extraction_and_repack_tool** is located at **Desktop** 18 | 19 | 1. Run **"Xtrakt"** from it's location in terminal 20 | 2. Copy **"file_contexts.bin"** from your Rom to **"file_context_zone"** folder 21 | 2. Use "f" from menu to convert **"file_contexts.bin"** to text readable **"file_contexts"** 22 | 3. Copy : **system.new.dat, system.transfer.list & file_contexts** to **"convert-dat"** folder. 23 | 4. Use "i" from menu to unpack, which the output will be name as **"rom_system"** for modifications of apks & files. 24 | 5. Use "y" from menu to repack, which the complete new **"system.new.dat", "system.patch.dat" & "system.transfer.list"** will be located at **"Finish-new.dat"** folder 25 | 6. Done ! 26 | 27 | 28 | **EXAMPLE:** 29 | 30 | In your terminal, type the following to start the script: 31 | ``` 32 | git clone https://github.com/iykequame/android_system_extraction_and_repack_tool.git 33 | 34 | mv android_system_extraction_and_repack_tool ~/Desktop/ 35 | 36 | cd ~/Desktop/android_system_extraction_and_repack_tool/ 37 | 38 | ./Xtrakt 39 | ``` 40 | 41 | **OR** 42 | 43 | Double-click the Xtrakt file and choose "Run in Terminal" if your OS supports it. 44 | 45 | **ALERT!!!** 46 | ------------ 47 | sudo is requested in the script. 48 | 49 | [WEBSITE](http://www.droidpeepz.xyz/) 50 | 51 | **Sources :** 52 | 53 | [GITHUB](https://github.com/iykequame/android_system_extractrion_and_repack_tool) 54 | 55 | [BITBUCKET](https://bitbucket.org/zac6ix/android_system_extraction_and_repack_tool) 56 | 57 | **Threads** 58 | 59 | [XDA](https://forum.xda-developers.com/android/software-hacking/dev-android-extractrion-repack-tool-t3588311) 60 | 61 | [sdat2img 1.0 - img2sdat 1.2](https://forum.xda-developers.com/android/software-hacking/how-to-conver-lollipop-dat-files-to-t2978952) 62 | 63 | [For file_context.bin conversion](https://www.youtube.com/watch?v=Tw5f4iLUYhc) by: Pom Kritsada @ MTK THAI Developers. 64 | 65 | Credit to : 66 | 67 | [@xpirt {xda}](https://forum.xda-developers.com/member.php?u=5132229) 68 | 69 | [@SuperR. {xda}](https://forum.xda-developers.com/member.php?u=5787964) 70 | 71 | -all xda threads which helped 72 | 73 | -[Android Matrix Development](https://web.facebook.com/groups/1024872487548231/) 74 | 75 | -Nana Yaa {Jennie} for her time. 76 | 77 | ## THANK YOU 78 | -------------------------------------------------------------------------------- /Xtrakt: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | printf '\e[8;33;80t]' 3 | # 4 | # Custom build script 5 | # Copyright © 2017, Nana Iyke Quame "iyke" 6 | # 7 | # 8 | #Android_Matrix_Development 9 | #DroidPeepz™ Inc 10 | # 11 | # NOTE : This Is Meant For The Good & Ease Of Kernel Development, 12 | # - Please You Are Free To Edit And Make It Better, But DO NOT STEAL ! 13 | # - 14 | # 15 | #-------------------------- 16 | # Updated On : 2017/09/23 17 | # by : @artificerpi 18 | # reason : Change to use `relative path` 19 | # commit : https://github.com/iykequame/android_system_extraction_and_repack_tool/pull/3/commits/4c8691b17c407a6d5eae9d408ba69c473e0f06c6 20 | # commit-ID : 4c8691b17c407a6d5eae9d408ba69c473e0f06c6 21 | #-------------------------- 22 | user=$(echo "$(whoami | gawk '{ print $1 }')") 23 | PROJECT_DIR=`realpath "$0" | sed 's|\(.*\)/.*|\1|'` 24 | flicx="$PROJECT_DIR/.flicx/flicx.sh"; 25 | permsda=chmod 755 $PROJECT_DIR/tools/sdat2img.py 26 | permimg=chmod 755 $PROJECT_DIR/tools/img2simg.py 27 | permext=chmod 755 $PROJECT_DIR/tools/make_ext4fs 28 | permext=chmod 755 $PROJECT_DIR/tools/img2simg.py 29 | #------------------- 30 | 31 | #START - HOME 32 | START(){ 33 | clear 34 | echo -e " \033[0;34m " 35 | cecho P"" " \e[4m*||_http://www.droidpeepz.xyz_||*" 36 | echo 37 | cecho P "" " \e[7m=================================" 38 | cecho C "" " \e[7m===!! Android System Extraction/Repack Tool. !!===" 39 | cecho G "" " \e[7mFor" 40 | cecho C "" " \e[7m*..MediaTek & Qualcomm Devices..*" 41 | echo 42 | echo " ========*****************========" 43 | cecho G "" " ---> DroidPeepz™ Inc " 44 | cecho P "" " ---> by : Nana Iyke Quame " 45 | echo " ========*****************========" 46 | echo " Credit Goes To:" 47 | cecho Y "" " **ANDROID MATRIX DEVELOPMENT**" 48 | cecho P "" " & @xpirt, @Mesaj, @matrix, @artificerpi " 49 | echo 50 | echo 51 | CT="$(date +"%r")" 52 | DT="$(date +"%b-%d-%Y")" 53 | cecho B "" " \e[7m\e[3mMenu" " \033[0;36m\e[3m|| Time : $CT" 54 | cecho C "" " \e[3mHost System : $HOSTNAME" " \033[0;36m\e[3m|| Date : $DT" 55 | echo 56 | cecho Y " \033[0mf - " "Convert file_contexts " 57 | echo 58 | cecho C " i - " "Unpack "system.new.dat" " 59 | cecho R "" " NOTE : This process will request for your "sudo" password! " 60 | echo 61 | cecho G " y - " "Repack To "system.new.dat" " 62 | echo 63 | cecho C " v - " "View Credits + info" 64 | echo 65 | cecho R " x - " "Exit" 66 | echo 67 | } 68 | 69 | 70 | #MENU 71 | EXT_MAIN(){ 72 | while : 73 | do 74 | 75 | clear 76 | START 77 | read -p " Enter option: " CHOICE 78 | case "$CHOICE" in 79 | Y|y ) RePck;; 80 | F|f ) FILCX;; 81 | I|i ) UnPck;; 82 | V|v) V_CREDIT;; 83 | X|x) EXIT;; 84 | *) cecho R "" " Invalid option"; sleep 0.3; continue;; 85 | esac 86 | 87 | done 88 | } 89 | 90 | PAGE(){ 91 | clear 92 | echo -e " \033[0;34m " 93 | cecho P"" " \e[4m*||_http://www.droidpeepz.xyz_||*" 94 | echo 95 | echo 96 | cecho P "" " \e[7m=================================" 97 | cecho C "" " \e[7m===!! Android System Extraction/Repack Tool. !!===" 98 | cecho G "" " \e[7mFor" 99 | cecho C "" " \e[7m*..MediaTek & Qualcomm Devices..*" 100 | echo 101 | echo 102 | echo 103 | } 104 | 105 | TIME(){ 106 | CT="$(date +"%r")" 107 | DT="$(date +"%b-%d-%Y")" 108 | cecho B "" " \e[7m\e[3mMenu" " \033[0;36m\e[3m|| Time : $CT" 109 | cecho C "" " \e[3mHost System : $HOSTNAME" " \033[0;36m\e[3m|| Date : $DT" 110 | } 111 | 112 | # File_context_zone 113 | COTEXTDIR=./file_context_zone 114 | CONFOLDER=./file_context_zone/convert 115 | REVFOLDER=./file_context_zone/revert 116 | BINNAME=file_contexts.bin 117 | NEWBINFILE=new-$(date +%Y%m%d)-$BINNAME 118 | 119 | #ST_DEB 120 | FILCX(){ 121 | clear 122 | PAGE 123 | echo 124 | cecho P "" " \e[7m*||_File_contexts.bin Convert Zone_||*" 125 | echo 126 | echo " ========*****************========" 127 | echo " Credit Goes To:" 128 | cecho C "" " cofface@cofface.com - source script " 129 | cecho B "" " Pom Kritsada @MTK THAI Developers " 130 | echo " ========*****************========" 131 | echo 132 | TIME 133 | echo 134 | cecho Y " \033[0mv - " "View Credits & info" 135 | echo 136 | cecho C " c - " "Convert" 137 | echo 138 | cecho G " r - " "Revert " 139 | echo 140 | cecho R " h - " "go to home" 141 | echo 142 | echo 143 | read -p " Enter option: " x 144 | case "$x" in 145 | V|v) V_CREDIT;; 146 | C|c) CONVERT_FILCX;; 147 | R|r) REVERT_FILCX;; 148 | H|h ) START;; 149 | *) cecho R "" " Invalid option"; sleep 0.3; continue;; 150 | esac 151 | FILCX 152 | } 153 | 154 | CONVERT_FILCX(){ 155 | clear 156 | PAGE 157 | echo -e " \033[0;34m " 158 | echo " Please wait" 159 | mkdir -p $CONFOLDER 160 | sleep 1.0; 161 | if [ -f $CONFOLDER/file_contexts* ] 162 | then 163 | echo -e " \033[0;31m " 164 | echo " File." "file_context already exist in folder; " 165 | echo " continuing will be overwrite it !!! " 166 | echo 167 | read -p " to continue, press Enter ... " 168 | fi 169 | ./tools/lib64/sefcontext -o $CONFOLDER/file_contexts $COTEXTDIR/$BINNAME > /dev/null 170 | if [ $? == 0 ] 171 | then 172 | echo -e " \033[0;34m " 173 | echo " !! DONE !!" 174 | read -p "Press enter key to continue . . ." 175 | echo 176 | else 177 | echo " !! Faild !!" 178 | read -p "Press enter key to continue . . ." 179 | fi 180 | } 181 | 182 | REVERT_FILCX() { 183 | clear 184 | PAGE 185 | echo -e " \033[0;34m " 186 | echo " Please wait" 187 | mkdir -p $REVFOLDER 188 | sleep 1.0; 189 | if [ -f $REVFOLDER/file_contexts.bin* ] 190 | then 191 | echo -e " \033[0;31m " 192 | echo " File." "file_context.bin already exist in folder; " 193 | echo " continuing will be overwrite it !!! " 194 | echo 195 | read -p " to continue, press Enter ... " 196 | fi 197 | ./tools/lib64/sefcontext_compile -o $REVFOLDER/$NEWBINFILE $CONFOLDER/file_contexts > /dev/null 198 | if [ $? == 0 ] 199 | then 200 | echo -e " \033[0;34m " 201 | echo " !! DONE !!" 202 | read -p "Press enter key to continue . . ." 203 | echo 204 | else 205 | echo " !! Faild !!" 206 | read -p "Press enter key to continue . . ." 207 | fi 208 | } 209 | 210 | 211 | #-------------------------- 212 | #converting to system image 213 | #-------------------------- 214 | UnPck(){ 215 | sudo mkdir $PROJECT_DIR/convert-dat/out 216 | printf '\e[8;33;80t]' 217 | $permsda 218 | clear 219 | PAGE 220 | 221 | echo " -----> Unpacking Image." 222 | echo 223 | if [ -f $PROJECT_DIR/convert-dat/system.transfer.list ] 224 | then 225 | echo 226 | echo " system.transfer.list [FOUND]" 227 | fi 228 | if [ -f $PROJECT_DIR/convert-dat/system.new.dat ] 229 | then 230 | echo 231 | echo " system.new.dat [FOUND]" 232 | echo 233 | sleep 0.6; 234 | PAGE 235 | echo " Few seconds to go... :)" 236 | ( tools/./sdat2img.py convert-dat/system.transfer.list convert-dat/system.new.dat convert-dat/system.img ) >> Unpack.log 237 | mkdir -p /$hm/$user/$dsk/$tul/convert-dat/out 238 | mkdir -p /$hm/$user/$dsk/$tul/convert-dat/tmpsparse 239 | mkdir -p /$hm/$user/$dsk/$tul/convert-dat/rom_system 240 | mkdir -p Finish-new.dat 241 | echo 242 | echo " -----> mounting system image..." 243 | echo " Please Wait..." 244 | echo 245 | echo -e " \033[0;31m " 246 | sudo mount -t ext4 -o loop convert-dat/system.img convert-dat/out/ 247 | echo 248 | echo -e " \033[0;36m " 249 | echo " -----> setting permission for modifications..." 250 | echo " Please Wait..." 251 | sudo chown -R $user:$user $PROJECT_DIR/convert-dat/out 252 | ( sudo cp -avr $PROJECT_DIR/convert-dat/out $PROJECT_DIR/convert-dat/rom_system ) >> x-log 253 | sudo umount $PROJECT_DIR/convert-dat/out/ 254 | rm -rf $PROJECT_DIR/convert-dat/out 255 | rm convert-dat/system.new.dat 256 | rm convert-dat/system.transfer.list 257 | echo 258 | echo -e " \033[0;36m " 259 | echo " -----> Please go to convert-dat/rom_system to edit your Rom " 260 | echo 261 | echo -e " \033[0;34m " 262 | echo "=============================" 263 | echo " !! PROCESS TIME !! " 264 | echo $[$SECONDS / 60]' minutes '$[$SECONDS % 60]' seconds' 265 | echo "=============================" 266 | echo 267 | echo 268 | echo " !! DONE !!" 269 | echo 270 | echo 271 | else 272 | PAGE 273 | echo -e " \033[0;31m " 274 | echo "WARNING! WARNING!! WARNING!!!" 275 | echo "Please Check & Trace Where Errors." 276 | echo " There Is NO rom_system found" 277 | echo 278 | cecho R "" "system.new.dat -->> Missing !" 279 | echo 280 | cecho R "" "system.transfer.list -->> Missing !" 281 | echo 282 | echo 283 | fi 284 | read -p "Press enter key to continue . . ." 285 | START " " 286 | } 287 | 288 | 289 | RePck(){ 290 | printf '\e[8;33;80t]' 291 | PAGE 292 | echo 293 | #-------------------------- 294 | echo " -----> Repacking Image." 295 | echo 296 | echo " Few seconds to go... :)" 297 | if [ -f $PROJECT_DIR/convert-dat/rom_system/build.prop ] 298 | then 299 | api=$(grep "ro.build.version.sdk" convert-dat/rom_system/build.prop | cut -d"=" -f2); 300 | echo -e " \033[0;34m " 301 | echo " Android SDK = $api " 302 | echo 303 | fi 304 | if [ -f $PROJECT_DIR/convert-dat/file_contexts ] 305 | then 306 | echo " file_contexts [FOUND]" 307 | echo 308 | echo -e " \033[0;36m " 309 | tools/./make_ext4fs -s -T -1 -S convert-dat/file_contexts -L system -l 2692743168 -a system convert-dat/system_new.img convert-dat/rom_system/ 310 | echo 311 | clear 312 | PAGE 313 | echo " -----> Converting To Sparse Image." 314 | if [[ $api = "21" ]]; 315 | then 316 | argv="1" 317 | elif [[ $api = "22" ]]; 318 | then 319 | argv="2" 320 | elif [[ $api = "23" ]]; 321 | then 322 | argv="3" 323 | elif [[ $api = "24" ]]; 324 | then 325 | argv="4" 326 | elif [[ $api = "25" ]]; 327 | then 328 | argv="4" 329 | fi 330 | echo " Please Wait..." 331 | ./tools/img2sdat.py convert-dat/system_new.img Finish-new.dat $argv; 332 | echo 333 | echo " Few seconds to go... :)" 334 | echo 335 | echo " Thank You !!! :0" 336 | rm $PROJECT_DIR/convert-dat/tmpsparse/system.* 337 | rm $PROJECT_DIR/convert-dat/system.img 338 | rm $PROJECT_DIR/convert-dat/system_new.img 339 | echo 340 | echo 341 | if [ -f $PROJECT_DIR/Finish-new.dat/system.new.dat ] 342 | then 343 | PAGE 344 | echo -e " \033[0;36m " 345 | echo " PLEASE CHECK Finish-new.dat -Folder to see the following new files" 346 | echo 347 | echo " ************************************" 348 | echo 349 | ls Finish-new.dat 350 | echo "" 351 | echo -e " \033[0;34m " 352 | echo "=============================" 353 | echo " !! PROCESS TIME !! " 354 | echo $[$SECONDS / 60]' minutes '$[$SECONDS % 60]' seconds' 355 | echo "=============================" 356 | echo 357 | echo 358 | echo " !! DONE !!" 359 | echo 360 | echo 361 | echo 362 | fi 363 | else 364 | PAGE 365 | echo -e " \033[0;31m " 366 | echo "WARNING! WARNING!! WARNING!!!" 367 | echo "Please Check & Trace Where Errors." 368 | echo " There Is NO rom_system found" 369 | echo 370 | cecho R "" "file_contexts -->> Missing !" 371 | echo 372 | cecho R "" "Android SDK -->> not detected !" 373 | echo 374 | read -p "Press enter key to continue . . ." 375 | echo 376 | PAGE 377 | echo "WARNING! WARNING!! WARNING!!!" 378 | echo "Please Check & Trace Where Errors." 379 | echo 380 | echo 381 | cecho R "" "Repack ERRORS!" 382 | echo 383 | cecho R "" "Repack ERRORS!" 384 | echo 385 | cecho R "" "Repack ERRORS!" 386 | echo 387 | echo 388 | fi 389 | read -p "Press enter key to continue . . ." 390 | START " " 391 | } 392 | #-------------------------- 393 | 394 | ###----------------------------------------- 395 | # ./sdat2img.py system.transfer.list system.new.dat system.img 396 | # sudo mount -t ext4 -o loop system.img system/ 397 | # sudo chown -R iyke:iyke /home/iyke/Desktop/xe/tools/system 398 | # ./make_ext4fs -T 0 -S file_contexts -l 1073741824 -a system system_new.img system/ 399 | 400 | 401 | #credit & info 402 | V_CREDIT(){ 403 | clear 404 | echo 405 | PAGE 406 | echo -e " \033[0;34m " 407 | cecho P " "" \e[4m\e[7m@Xpirt [XDA]" 408 | echo " for : sdat2img 1.0 - img2sdat 1.2" 409 | echo " check thread link below >>" 410 | cecho P"" " \033[0;33m\e[3m\e[4m*https://forum.xda-developers.com/android/software-hacking/how-to-conver-lollipop-dat-files-to-t2978952" 411 | echo 412 | echo 413 | echo 414 | cecho P " " "\e[3m " 415 | read -p " Press enter key for next . . ." 416 | clear 417 | echo 418 | PAGE 419 | echo -e " \033[0;34m " 420 | cecho P " "" \e[4m\e[7m@SuperR. [XDA]" 421 | echo " for : Some binaries" 422 | echo " check profile link below >>" 423 | cecho P"" " \033[0;33m\e[3m\e[4m*https://forum.xda-developers.com/member.php?u=5787964" 424 | echo 425 | echo 426 | echo 427 | cecho P " " "\e[3m " 428 | read -p " Press enter key for next . . ." 429 | clear 430 | echo 431 | PAGE 432 | echo -e " \033[0;34m " 433 | cecho P " "" \e[4m\e[7m@Pom Kritsada [MTK THAI Developers]" 434 | echo " for : file_context.bin conversion" 435 | echo " check video link below >>" 436 | cecho P"" " \033[0;33m\e[3m\e[4m*https://www.youtube.com/watch?v=Tw5f4iLUYhc" 437 | echo 438 | echo 439 | echo 440 | cecho P " " "\e[3m " 441 | read -p " Press enter key to continue . . ." 442 | clear 443 | echo 444 | PAGE 445 | echo -e " \033[0;34m " 446 | cecho P " "" \e[4m\e[7m#AMD [FACEBOOK]" 447 | echo " Android Matrix Development" 448 | echo " check group link below >>" 449 | cecho P"" " \033[0;33m\e[3m\e[4m*https://web.facebook.com/groups/1024872487548231/" 450 | echo 451 | echo 452 | echo 453 | cecho P " " "\e[3m " 454 | read -p " Press enter key for next . . ." 455 | clear 456 | echo 457 | PAGE 458 | echo -e " \033[0;34m " 459 | cecho P " "" \e[4m\e[7mNana Yaa [Jennie]" 460 | echo " for her time & motivation" 461 | echo 462 | echo 463 | echo 464 | cecho P " " "\e[3m " 465 | read -p " Press enter key for next . . ." 466 | echo 467 | clear 468 | echo 469 | PAGE 470 | echo -e " \033[0;33m " 471 | echo -e " \e[3m[ THANKS FOR VIEWING ]" 472 | echo -e " \e[3m[ OUR CREDIT !!! ]" 473 | echo 474 | sleep 2.0; 475 | START 476 | } 477 | 478 | #WAIT 479 | RELAX(){ 480 | PAGE 481 | echo "" 482 | echo -e " \033[0;34m " 483 | echo "=============================" 484 | echo " !! PROCESS TIME !! " 485 | echo $[$SECONDS / 60]' minutes '$[$SECONDS % 60]' seconds' 486 | echo "=============================" 487 | echo 488 | echo 489 | echo " !! DONE !!" 490 | echo 491 | echo 492 | read -p "Press enter key to continue . . ." 493 | echo 494 | HOME " " 495 | } 496 | 497 | #EXIT 498 | EXIT(){ 499 | printf '\e[8;33;80t]' 500 | clear 501 | cecho C "" "Talent Is Nothing WIthout Ethics!!!" 502 | sleep 1.0; 503 | clear 504 | exit 505 | } 506 | 507 | #COLOR 508 | #Credit goes to @Matrix for his color codes 509 | #USAGE: cecho TYPE=R|G|Y|B|P|C|W "msg1" "color_msg2" "msg3" 510 | cecho () 511 | { 512 | #Case didn't work out for me in cygwin 513 | if [ "$1" == "R" ] 514 | then 515 | echo -e "$2""\033[0;91m$3\033[0m""$4" # Red 516 | elif [ "$1" == "G" ] 517 | then 518 | echo -e "$2""\033[0;92m$3\033[0m""$4" # Green 519 | elif [ "$1" == "Y" ] 520 | then 521 | echo -e "$2""\033[0;93m$3\033[0m""$4" # Yellow 522 | elif [ "$1" == "B" ] 523 | then 524 | echo -e "$2""\033[0;94m$3\033[0m""$4" # Blue 525 | elif [ "$1" == "P" ] 526 | then 527 | echo -e "$2""\033[0;95m$3\033[0m""$4" # Purple 528 | elif [ "$1" == "C" ] 529 | then 530 | echo -e "$2""\033[0;96m$3\033[0m""$4" # Cyan 531 | elif [ "$1" == "W" ] 532 | then 533 | echo -e "$2""\033[0;97m$3\033[0m""$4" # White 534 | fi 535 | } 536 | 537 | #EXTRA_COLOR_OPTIONS 538 | blue='\033[0;34m' 539 | cyan='\033[0;36m' 540 | yellow='\033[0;33m' 541 | red='\033[0;31m' 542 | nocol='\033[0m' 543 | 544 | 545 | #DEPLOYING function 546 | EXT_MAIN 547 | -------------------------------------------------------------------------------- /convert-dat/null: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/convert-dat/null -------------------------------------------------------------------------------- /file_context_zone/nul: -------------------------------------------------------------------------------- 1 | support:cofface@cofface.com 2 | converted success,outfile: /home/zac6ix/Desktop/android_system_extraction_and_repack_tool/file_context_zone/convert/file_contexts. 3 | -------------------------------------------------------------------------------- /tools/README.md: -------------------------------------------------------------------------------- 1 | # sdat2img 2 | Convert sparse Android data image (.dat) into filesystem ext4 image (.img) 3 | 4 | 5 | 6 | ## Requirements 7 | This binary requires Python 2.7 or newer installed on your system. 8 | 9 | It currently supports Windows, Linux, MacOS & ARM architectures. 10 | 11 | 12 | 13 | ## Usage 14 | ``` 15 | sdat2img.py [system_img] 16 | ``` 17 | - `` = input, system.transfer.list from rom zip 18 | - `` = input, system.new.dat from rom zip 19 | - `[system_img]` = output ext4 raw image file (optional) 20 | 21 | 22 | 23 | ## Example 24 | This is a simple example on a Linux system: 25 | ``` 26 | ~$ ./sdat2img.py system.transfer.list system.new.dat system.img 27 | ``` 28 | 29 | 30 | 31 | ## OTAs 32 | If you are looking on decompressing `system.patch.dat` file or `.p` files, therefore reproduce the patching system on your PC, check [imgpatchtools](https://github.com/erfanoabdi/imgpatchtools) out by @erfanoabdi. 33 | 34 | 35 | 36 | ## Info 37 | For more information about this binary, visit http://forum.xda-developers.com/android/software-hacking/how-to-conver-lollipop-dat-files-to-t2978952. 38 | -------------------------------------------------------------------------------- /tools/blockimgdiff.py: -------------------------------------------------------------------------------- 1 | # Copyright (C) 2014 The Android Open Source Project 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | from __future__ import print_function 16 | 17 | from collections import deque, OrderedDict 18 | from hashlib import sha1 19 | import array 20 | import common 21 | import functools 22 | import heapq 23 | import itertools 24 | import multiprocessing 25 | import os 26 | import re 27 | import subprocess 28 | import threading 29 | import time 30 | import tempfile 31 | 32 | from rangelib import RangeSet 33 | 34 | 35 | __all__ = ["EmptyImage", "DataImage", "BlockImageDiff"] 36 | 37 | 38 | def compute_patch(src, tgt, imgdiff=False): 39 | srcfd, srcfile = tempfile.mkstemp(prefix="src-") 40 | tgtfd, tgtfile = tempfile.mkstemp(prefix="tgt-") 41 | patchfd, patchfile = tempfile.mkstemp(prefix="patch-") 42 | os.close(patchfd) 43 | 44 | try: 45 | with os.fdopen(srcfd, "wb") as f_src: 46 | for p in src: 47 | f_src.write(p) 48 | 49 | with os.fdopen(tgtfd, "wb") as f_tgt: 50 | for p in tgt: 51 | f_tgt.write(p) 52 | try: 53 | os.unlink(patchfile) 54 | except OSError: 55 | pass 56 | if imgdiff: 57 | p = subprocess.call(["imgdiff", "-z", srcfile, tgtfile, patchfile], 58 | stdout=open("/dev/null", "a"), 59 | stderr=subprocess.STDOUT) 60 | else: 61 | p = subprocess.call(["bsdiff", srcfile, tgtfile, patchfile]) 62 | 63 | if p: 64 | raise ValueError("diff failed: " + str(p)) 65 | 66 | with open(patchfile, "rb") as f: 67 | return f.read() 68 | finally: 69 | try: 70 | os.unlink(srcfile) 71 | os.unlink(tgtfile) 72 | os.unlink(patchfile) 73 | except OSError: 74 | pass 75 | 76 | 77 | class Image(object): 78 | def ReadRangeSet(self, ranges): 79 | raise NotImplementedError 80 | 81 | def TotalSha1(self, include_clobbered_blocks=False): 82 | raise NotImplementedError 83 | 84 | 85 | class EmptyImage(Image): 86 | """A zero-length image.""" 87 | blocksize = 4096 88 | care_map = RangeSet() 89 | clobbered_blocks = RangeSet() 90 | extended = RangeSet() 91 | total_blocks = 0 92 | file_map = {} 93 | def ReadRangeSet(self, ranges): 94 | return () 95 | def TotalSha1(self, include_clobbered_blocks=False): 96 | # EmptyImage always carries empty clobbered_blocks, so 97 | # include_clobbered_blocks can be ignored. 98 | assert self.clobbered_blocks.size() == 0 99 | return sha1().hexdigest() 100 | 101 | 102 | class DataImage(Image): 103 | """An image wrapped around a single string of data.""" 104 | 105 | def __init__(self, data, trim=False, pad=False): 106 | self.data = data 107 | self.blocksize = 4096 108 | 109 | assert not (trim and pad) 110 | 111 | partial = len(self.data) % self.blocksize 112 | padded = False 113 | if partial > 0: 114 | if trim: 115 | self.data = self.data[:-partial] 116 | elif pad: 117 | self.data += '\0' * (self.blocksize - partial) 118 | padded = True 119 | else: 120 | raise ValueError(("data for DataImage must be multiple of %d bytes " 121 | "unless trim or pad is specified") % 122 | (self.blocksize,)) 123 | 124 | assert len(self.data) % self.blocksize == 0 125 | 126 | self.total_blocks = len(self.data) / self.blocksize 127 | self.care_map = RangeSet(data=(0, self.total_blocks)) 128 | # When the last block is padded, we always write the whole block even for 129 | # incremental OTAs. Because otherwise the last block may get skipped if 130 | # unchanged for an incremental, but would fail the post-install 131 | # verification if it has non-zero contents in the padding bytes. 132 | # Bug: 23828506 133 | if padded: 134 | clobbered_blocks = [self.total_blocks-1, self.total_blocks] 135 | else: 136 | clobbered_blocks = [] 137 | self.clobbered_blocks = clobbered_blocks 138 | self.extended = RangeSet() 139 | 140 | zero_blocks = [] 141 | nonzero_blocks = [] 142 | reference = '\0' * self.blocksize 143 | 144 | for i in range(self.total_blocks-1 if padded else self.total_blocks): 145 | d = self.data[i*self.blocksize : (i+1)*self.blocksize] 146 | if d == reference: 147 | zero_blocks.append(i) 148 | zero_blocks.append(i+1) 149 | else: 150 | nonzero_blocks.append(i) 151 | nonzero_blocks.append(i+1) 152 | 153 | assert zero_blocks or nonzero_blocks or clobbered_blocks 154 | 155 | self.file_map = dict() 156 | if zero_blocks: 157 | self.file_map["__ZERO"] = RangeSet(data=zero_blocks) 158 | if nonzero_blocks: 159 | self.file_map["__NONZERO"] = RangeSet(data=nonzero_blocks) 160 | if clobbered_blocks: 161 | self.file_map["__COPY"] = RangeSet(data=clobbered_blocks) 162 | 163 | def ReadRangeSet(self, ranges): 164 | return [self.data[s*self.blocksize:e*self.blocksize] for (s, e) in ranges] 165 | 166 | def TotalSha1(self, include_clobbered_blocks=False): 167 | if not include_clobbered_blocks: 168 | ranges = self.care_map.subtract(self.clobbered_blocks) 169 | return sha1(self.ReadRangeSet(ranges)).hexdigest() 170 | else: 171 | return sha1(self.data).hexdigest() 172 | 173 | 174 | class Transfer(object): 175 | def __init__(self, tgt_name, src_name, tgt_ranges, src_ranges, style, by_id): 176 | self.tgt_name = tgt_name 177 | self.src_name = src_name 178 | self.tgt_ranges = tgt_ranges 179 | self.src_ranges = src_ranges 180 | self.style = style 181 | self.intact = (getattr(tgt_ranges, "monotonic", False) and 182 | getattr(src_ranges, "monotonic", False)) 183 | 184 | # We use OrderedDict rather than dict so that the output is repeatable; 185 | # otherwise it would depend on the hash values of the Transfer objects. 186 | self.goes_before = OrderedDict() 187 | self.goes_after = OrderedDict() 188 | 189 | self.stash_before = [] 190 | self.use_stash = [] 191 | 192 | self.id = len(by_id) 193 | by_id.append(self) 194 | 195 | def NetStashChange(self): 196 | return (sum(sr.size() for (_, sr) in self.stash_before) - 197 | sum(sr.size() for (_, sr) in self.use_stash)) 198 | 199 | def ConvertToNew(self): 200 | assert self.style != "new" 201 | self.use_stash = [] 202 | self.style = "new" 203 | self.src_ranges = RangeSet() 204 | 205 | def __str__(self): 206 | return (str(self.id) + ": <" + str(self.src_ranges) + " " + self.style + 207 | " to " + str(self.tgt_ranges) + ">") 208 | 209 | 210 | @functools.total_ordering 211 | class HeapItem(object): 212 | def __init__(self, item): 213 | self.item = item 214 | # Negate the score since python's heap is a min-heap and we want 215 | # the maximum score. 216 | self.score = -item.score 217 | def clear(self): 218 | self.item = None 219 | def __bool__(self): 220 | return self.item is None 221 | def __eq__(self, other): 222 | return self.score == other.score 223 | def __le__(self, other): 224 | return self.score <= other.score 225 | 226 | 227 | # BlockImageDiff works on two image objects. An image object is 228 | # anything that provides the following attributes: 229 | # 230 | # blocksize: the size in bytes of a block, currently must be 4096. 231 | # 232 | # total_blocks: the total size of the partition/image, in blocks. 233 | # 234 | # care_map: a RangeSet containing which blocks (in the range [0, 235 | # total_blocks) we actually care about; i.e. which blocks contain 236 | # data. 237 | # 238 | # file_map: a dict that partitions the blocks contained in care_map 239 | # into smaller domains that are useful for doing diffs on. 240 | # (Typically a domain is a file, and the key in file_map is the 241 | # pathname.) 242 | # 243 | # clobbered_blocks: a RangeSet containing which blocks contain data 244 | # but may be altered by the FS. They need to be excluded when 245 | # verifying the partition integrity. 246 | # 247 | # ReadRangeSet(): a function that takes a RangeSet and returns the 248 | # data contained in the image blocks of that RangeSet. The data 249 | # is returned as a list or tuple of strings; concatenating the 250 | # elements together should produce the requested data. 251 | # Implementations are free to break up the data into list/tuple 252 | # elements in any way that is convenient. 253 | # 254 | # TotalSha1(): a function that returns (as a hex string) the SHA-1 255 | # hash of all the data in the image (ie, all the blocks in the 256 | # care_map minus clobbered_blocks, or including the clobbered 257 | # blocks if include_clobbered_blocks is True). 258 | # 259 | # When creating a BlockImageDiff, the src image may be None, in which 260 | # case the list of transfers produced will never read from the 261 | # original image. 262 | 263 | class BlockImageDiff(object): 264 | def __init__(self, tgt, src=None, version=4, threads=None, 265 | disable_imgdiff=False): 266 | if threads is None: 267 | threads = multiprocessing.cpu_count() // 2 268 | if threads == 0: 269 | threads = 1 270 | self.threads = threads 271 | self.version = version 272 | self.transfers = [] 273 | self.src_basenames = {} 274 | self.src_numpatterns = {} 275 | self._max_stashed_size = 0 276 | self.touched_src_ranges = RangeSet() 277 | self.touched_src_sha1 = None 278 | self.disable_imgdiff = disable_imgdiff 279 | 280 | assert version in (1, 2, 3, 4) 281 | 282 | self.tgt = tgt 283 | if src is None: 284 | src = EmptyImage() 285 | self.src = src 286 | 287 | # The updater code that installs the patch always uses 4k blocks. 288 | assert tgt.blocksize == 4096 289 | assert src.blocksize == 4096 290 | 291 | # The range sets in each filemap should comprise a partition of 292 | # the care map. 293 | self.AssertPartition(src.care_map, src.file_map.values()) 294 | self.AssertPartition(tgt.care_map, tgt.file_map.values()) 295 | 296 | @property 297 | def max_stashed_size(self): 298 | return self._max_stashed_size 299 | 300 | def Compute(self, prefix): 301 | # When looking for a source file to use as the diff input for a 302 | # target file, we try: 303 | # 1) an exact path match if available, otherwise 304 | # 2) a exact basename match if available, otherwise 305 | # 3) a basename match after all runs of digits are replaced by 306 | # "#" if available, otherwise 307 | # 4) we have no source for this target. 308 | self.AbbreviateSourceNames() 309 | self.FindTransfers() 310 | 311 | # Find the ordering dependencies among transfers (this is O(n^2) 312 | # in the number of transfers). 313 | self.GenerateDigraph() 314 | # Find a sequence of transfers that satisfies as many ordering 315 | # dependencies as possible (heuristically). 316 | self.FindVertexSequence() 317 | # Fix up the ordering dependencies that the sequence didn't 318 | # satisfy. 319 | if self.version == 1: 320 | self.RemoveBackwardEdges() 321 | else: 322 | self.ReverseBackwardEdges() 323 | self.ImproveVertexSequence() 324 | 325 | # Ensure the runtime stash size is under the limit. 326 | if self.version >= 2 and common.OPTIONS.cache_size is not None: 327 | self.ReviseStashSize() 328 | 329 | # Double-check our work. 330 | self.AssertSequenceGood() 331 | 332 | self.ComputePatches(prefix) 333 | self.WriteTransfers(prefix) 334 | 335 | def HashBlocks(self, source, ranges): # pylint: disable=no-self-use 336 | data = source.ReadRangeSet(ranges) 337 | ctx = sha1() 338 | 339 | for p in data: 340 | ctx.update(p) 341 | 342 | return ctx.hexdigest() 343 | 344 | def WriteTransfers(self, prefix): 345 | def WriteTransfersZero(out, to_zero): 346 | """Limit the number of blocks in command zero to 1024 blocks. 347 | 348 | This prevents the target size of one command from being too large; and 349 | might help to avoid fsync errors on some devices.""" 350 | 351 | zero_blocks_limit = 1024 352 | total = 0 353 | while to_zero: 354 | zero_blocks = to_zero.first(zero_blocks_limit) 355 | out.append("zero %s\n" % (zero_blocks.to_string_raw(),)) 356 | total += zero_blocks.size() 357 | to_zero = to_zero.subtract(zero_blocks) 358 | return total 359 | 360 | out = [] 361 | 362 | total = 0 363 | 364 | stashes = {} 365 | stashed_blocks = 0 366 | max_stashed_blocks = 0 367 | 368 | free_stash_ids = [] 369 | next_stash_id = 0 370 | 371 | for xf in self.transfers: 372 | 373 | if self.version < 2: 374 | assert not xf.stash_before 375 | assert not xf.use_stash 376 | 377 | for s, sr in xf.stash_before: 378 | assert s not in stashes 379 | if free_stash_ids: 380 | sid = heapq.heappop(free_stash_ids) 381 | else: 382 | sid = next_stash_id 383 | next_stash_id += 1 384 | stashes[s] = sid 385 | if self.version == 2: 386 | stashed_blocks += sr.size() 387 | out.append("stash %d %s\n" % (sid, sr.to_string_raw())) 388 | else: 389 | sh = self.HashBlocks(self.src, sr) 390 | if sh in stashes: 391 | stashes[sh] += 1 392 | else: 393 | stashes[sh] = 1 394 | stashed_blocks += sr.size() 395 | self.touched_src_ranges = self.touched_src_ranges.union(sr) 396 | out.append("stash %s %s\n" % (sh, sr.to_string_raw())) 397 | 398 | if stashed_blocks > max_stashed_blocks: 399 | max_stashed_blocks = stashed_blocks 400 | 401 | free_string = [] 402 | free_size = 0 403 | 404 | if self.version == 1: 405 | src_str = xf.src_ranges.to_string_raw() if xf.src_ranges else "" 406 | elif self.version >= 2: 407 | 408 | # <# blocks> 409 | # OR 410 | # <# blocks> 411 | # OR 412 | # <# blocks> - 413 | 414 | size = xf.src_ranges.size() 415 | src_str = [str(size)] 416 | 417 | unstashed_src_ranges = xf.src_ranges 418 | mapped_stashes = [] 419 | for s, sr in xf.use_stash: 420 | sid = stashes.pop(s) 421 | unstashed_src_ranges = unstashed_src_ranges.subtract(sr) 422 | sh = self.HashBlocks(self.src, sr) 423 | sr = xf.src_ranges.map_within(sr) 424 | mapped_stashes.append(sr) 425 | if self.version == 2: 426 | src_str.append("%d:%s" % (sid, sr.to_string_raw())) 427 | # A stash will be used only once. We need to free the stash 428 | # immediately after the use, instead of waiting for the automatic 429 | # clean-up at the end. Because otherwise it may take up extra space 430 | # and lead to OTA failures. 431 | # Bug: 23119955 432 | free_string.append("free %d\n" % (sid,)) 433 | free_size += sr.size() 434 | else: 435 | assert sh in stashes 436 | src_str.append("%s:%s" % (sh, sr.to_string_raw())) 437 | stashes[sh] -= 1 438 | if stashes[sh] == 0: 439 | free_size += sr.size() 440 | free_string.append("free %s\n" % (sh)) 441 | stashes.pop(sh) 442 | heapq.heappush(free_stash_ids, sid) 443 | 444 | if unstashed_src_ranges: 445 | src_str.insert(1, unstashed_src_ranges.to_string_raw()) 446 | if xf.use_stash: 447 | mapped_unstashed = xf.src_ranges.map_within(unstashed_src_ranges) 448 | src_str.insert(2, mapped_unstashed.to_string_raw()) 449 | mapped_stashes.append(mapped_unstashed) 450 | self.AssertPartition(RangeSet(data=(0, size)), mapped_stashes) 451 | else: 452 | src_str.insert(1, "-") 453 | self.AssertPartition(RangeSet(data=(0, size)), mapped_stashes) 454 | 455 | src_str = " ".join(src_str) 456 | 457 | # all versions: 458 | # zero 459 | # new 460 | # erase 461 | # 462 | # version 1: 463 | # bsdiff patchstart patchlen 464 | # imgdiff patchstart patchlen 465 | # move 466 | # 467 | # version 2: 468 | # bsdiff patchstart patchlen 469 | # imgdiff patchstart patchlen 470 | # move 471 | # 472 | # version 3: 473 | # bsdiff patchstart patchlen srchash tgthash 474 | # imgdiff patchstart patchlen srchash tgthash 475 | # move hash 476 | 477 | tgt_size = xf.tgt_ranges.size() 478 | 479 | if xf.style == "new": 480 | assert xf.tgt_ranges 481 | out.append("%s %s\n" % (xf.style, xf.tgt_ranges.to_string_raw())) 482 | total += tgt_size 483 | elif xf.style == "move": 484 | assert xf.tgt_ranges 485 | assert xf.src_ranges.size() == tgt_size 486 | if xf.src_ranges != xf.tgt_ranges: 487 | if self.version == 1: 488 | out.append("%s %s %s\n" % ( 489 | xf.style, 490 | xf.src_ranges.to_string_raw(), xf.tgt_ranges.to_string_raw())) 491 | elif self.version == 2: 492 | out.append("%s %s %s\n" % ( 493 | xf.style, 494 | xf.tgt_ranges.to_string_raw(), src_str)) 495 | elif self.version >= 3: 496 | # take into account automatic stashing of overlapping blocks 497 | if xf.src_ranges.overlaps(xf.tgt_ranges): 498 | temp_stash_usage = stashed_blocks + xf.src_ranges.size() 499 | if temp_stash_usage > max_stashed_blocks: 500 | max_stashed_blocks = temp_stash_usage 501 | 502 | self.touched_src_ranges = self.touched_src_ranges.union( 503 | xf.src_ranges) 504 | 505 | out.append("%s %s %s %s\n" % ( 506 | xf.style, 507 | self.HashBlocks(self.tgt, xf.tgt_ranges), 508 | xf.tgt_ranges.to_string_raw(), src_str)) 509 | total += tgt_size 510 | elif xf.style in ("bsdiff", "imgdiff"): 511 | assert xf.tgt_ranges 512 | assert xf.src_ranges 513 | if self.version == 1: 514 | out.append("%s %d %d %s %s\n" % ( 515 | xf.style, xf.patch_start, xf.patch_len, 516 | xf.src_ranges.to_string_raw(), xf.tgt_ranges.to_string_raw())) 517 | elif self.version == 2: 518 | out.append("%s %d %d %s %s\n" % ( 519 | xf.style, xf.patch_start, xf.patch_len, 520 | xf.tgt_ranges.to_string_raw(), src_str)) 521 | elif self.version >= 3: 522 | # take into account automatic stashing of overlapping blocks 523 | if xf.src_ranges.overlaps(xf.tgt_ranges): 524 | temp_stash_usage = stashed_blocks + xf.src_ranges.size() 525 | if temp_stash_usage > max_stashed_blocks: 526 | max_stashed_blocks = temp_stash_usage 527 | 528 | self.touched_src_ranges = self.touched_src_ranges.union( 529 | xf.src_ranges) 530 | 531 | out.append("%s %d %d %s %s %s %s\n" % ( 532 | xf.style, 533 | xf.patch_start, xf.patch_len, 534 | self.HashBlocks(self.src, xf.src_ranges), 535 | self.HashBlocks(self.tgt, xf.tgt_ranges), 536 | xf.tgt_ranges.to_string_raw(), src_str)) 537 | total += tgt_size 538 | elif xf.style == "zero": 539 | assert xf.tgt_ranges 540 | to_zero = xf.tgt_ranges.subtract(xf.src_ranges) 541 | assert WriteTransfersZero(out, to_zero) == to_zero.size() 542 | total += to_zero.size() 543 | else: 544 | raise ValueError("unknown transfer style '%s'\n" % xf.style) 545 | 546 | if free_string: 547 | out.append("".join(free_string)) 548 | stashed_blocks -= free_size 549 | 550 | if self.version >= 2 and common.OPTIONS.cache_size is not None: 551 | # Sanity check: abort if we're going to need more stash space than 552 | # the allowed size (cache_size * threshold). There are two purposes 553 | # of having a threshold here. a) Part of the cache may have been 554 | # occupied by some recovery logs. b) It will buy us some time to deal 555 | # with the oversize issue. 556 | cache_size = common.OPTIONS.cache_size 557 | stash_threshold = common.OPTIONS.stash_threshold 558 | max_allowed = cache_size * stash_threshold 559 | assert max_stashed_blocks * self.tgt.blocksize < max_allowed, \ 560 | 'Stash size %d (%d * %d) exceeds the limit %d (%d * %.2f)' % ( 561 | max_stashed_blocks * self.tgt.blocksize, max_stashed_blocks, 562 | self.tgt.blocksize, max_allowed, cache_size, 563 | stash_threshold) 564 | 565 | if self.version >= 3: 566 | self.touched_src_sha1 = self.HashBlocks( 567 | self.src, self.touched_src_ranges) 568 | 569 | # Zero out extended blocks as a workaround for bug 20881595. 570 | if self.tgt.extended: 571 | assert (WriteTransfersZero(out, self.tgt.extended) == 572 | self.tgt.extended.size()) 573 | total += self.tgt.extended.size() 574 | 575 | # We erase all the blocks on the partition that a) don't contain useful 576 | # data in the new image; b) will not be touched by dm-verity. Out of those 577 | # blocks, we erase the ones that won't be used in this update at the 578 | # beginning of an update. The rest would be erased at the end. This is to 579 | # work around the eMMC issue observed on some devices, which may otherwise 580 | # get starving for clean blocks and thus fail the update. (b/28347095) 581 | all_tgt = RangeSet(data=(0, self.tgt.total_blocks)) 582 | all_tgt_minus_extended = all_tgt.subtract(self.tgt.extended) 583 | new_dontcare = all_tgt_minus_extended.subtract(self.tgt.care_map) 584 | 585 | erase_first = new_dontcare.subtract(self.touched_src_ranges) 586 | if erase_first: 587 | out.insert(0, "erase %s\n" % (erase_first.to_string_raw(),)) 588 | 589 | erase_last = new_dontcare.subtract(erase_first) 590 | if erase_last: 591 | out.append("erase %s\n" % (erase_last.to_string_raw(),)) 592 | 593 | out.insert(0, "%d\n" % (self.version,)) # format version number 594 | out.insert(1, "%d\n" % (total,)) 595 | if self.version >= 2: 596 | # version 2 only: after the total block count, we give the number 597 | # of stash slots needed, and the maximum size needed (in blocks) 598 | out.insert(2, str(next_stash_id) + "\n") 599 | out.insert(3, str(max_stashed_blocks) + "\n") 600 | 601 | with open(prefix + ".transfer.list", "wb") as f: 602 | for i in out: 603 | f.write(i) 604 | 605 | if self.version >= 2: 606 | self._max_stashed_size = max_stashed_blocks * self.tgt.blocksize 607 | OPTIONS = common.OPTIONS 608 | if OPTIONS.cache_size is not None: 609 | max_allowed = OPTIONS.cache_size * OPTIONS.stash_threshold 610 | print("max stashed blocks: %d (%d bytes), " 611 | "limit: %d bytes (%.2f%%)\n" % ( 612 | max_stashed_blocks, self._max_stashed_size, max_allowed, 613 | self._max_stashed_size * 100.0 / max_allowed)) 614 | else: 615 | print("max stashed blocks: %d (%d bytes), limit: \n" % ( 616 | max_stashed_blocks, self._max_stashed_size)) 617 | 618 | def ReviseStashSize(self): 619 | print("Revising stash size...") 620 | stashes = {} 621 | 622 | # Create the map between a stash and its def/use points. For example, for a 623 | # given stash of (idx, sr), stashes[idx] = (sr, def_cmd, use_cmd). 624 | for xf in self.transfers: 625 | # Command xf defines (stores) all the stashes in stash_before. 626 | for idx, sr in xf.stash_before: 627 | stashes[idx] = (sr, xf) 628 | 629 | # Record all the stashes command xf uses. 630 | for idx, _ in xf.use_stash: 631 | stashes[idx] += (xf,) 632 | 633 | # Compute the maximum blocks available for stash based on /cache size and 634 | # the threshold. 635 | cache_size = common.OPTIONS.cache_size 636 | stash_threshold = common.OPTIONS.stash_threshold 637 | max_allowed = cache_size * stash_threshold / self.tgt.blocksize 638 | 639 | stashed_blocks = 0 640 | new_blocks = 0 641 | 642 | # Now go through all the commands. Compute the required stash size on the 643 | # fly. If a command requires excess stash than available, it deletes the 644 | # stash by replacing the command that uses the stash with a "new" command 645 | # instead. 646 | for xf in self.transfers: 647 | replaced_cmds = [] 648 | 649 | # xf.stash_before generates explicit stash commands. 650 | for idx, sr in xf.stash_before: 651 | if stashed_blocks + sr.size() > max_allowed: 652 | # We cannot stash this one for a later command. Find out the command 653 | # that will use this stash and replace the command with "new". 654 | use_cmd = stashes[idx][2] 655 | replaced_cmds.append(use_cmd) 656 | print("%10d %9s %s" % (sr.size(), "explicit", use_cmd)) 657 | else: 658 | stashed_blocks += sr.size() 659 | 660 | # xf.use_stash generates free commands. 661 | for _, sr in xf.use_stash: 662 | stashed_blocks -= sr.size() 663 | 664 | # "move" and "diff" may introduce implicit stashes in BBOTA v3. Prior to 665 | # ComputePatches(), they both have the style of "diff". 666 | if xf.style == "diff" and self.version >= 3: 667 | assert xf.tgt_ranges and xf.src_ranges 668 | if xf.src_ranges.overlaps(xf.tgt_ranges): 669 | if stashed_blocks + xf.src_ranges.size() > max_allowed: 670 | replaced_cmds.append(xf) 671 | print("%10d %9s %s" % (xf.src_ranges.size(), "implicit", xf)) 672 | 673 | # Replace the commands in replaced_cmds with "new"s. 674 | for cmd in replaced_cmds: 675 | # It no longer uses any commands in "use_stash". Remove the def points 676 | # for all those stashes. 677 | for idx, sr in cmd.use_stash: 678 | def_cmd = stashes[idx][1] 679 | assert (idx, sr) in def_cmd.stash_before 680 | def_cmd.stash_before.remove((idx, sr)) 681 | 682 | # Add up blocks that violates space limit and print total number to 683 | # screen later. 684 | new_blocks += cmd.tgt_ranges.size() 685 | cmd.ConvertToNew() 686 | 687 | num_of_bytes = new_blocks * self.tgt.blocksize 688 | print(" Total %d blocks (%d bytes) are packed as new blocks due to " 689 | "insufficient cache size." % (new_blocks, num_of_bytes)) 690 | 691 | def ComputePatches(self, prefix): 692 | print("Reticulating splines...") 693 | diff_q = [] 694 | patch_num = 0 695 | with open(prefix + ".new.dat", "wb") as new_f: 696 | for xf in self.transfers: 697 | if xf.style == "zero": 698 | pass 699 | elif xf.style == "new": 700 | for piece in self.tgt.ReadRangeSet(xf.tgt_ranges): 701 | new_f.write(piece) 702 | elif xf.style == "diff": 703 | src = self.src.ReadRangeSet(xf.src_ranges) 704 | tgt = self.tgt.ReadRangeSet(xf.tgt_ranges) 705 | 706 | # We can't compare src and tgt directly because they may have 707 | # the same content but be broken up into blocks differently, eg: 708 | # 709 | # ["he", "llo"] vs ["h", "ello"] 710 | # 711 | # We want those to compare equal, ideally without having to 712 | # actually concatenate the strings (these may be tens of 713 | # megabytes). 714 | 715 | src_sha1 = sha1() 716 | for p in src: 717 | src_sha1.update(p) 718 | tgt_sha1 = sha1() 719 | tgt_size = 0 720 | for p in tgt: 721 | tgt_sha1.update(p) 722 | tgt_size += len(p) 723 | 724 | if src_sha1.digest() == tgt_sha1.digest(): 725 | # These are identical; we don't need to generate a patch, 726 | # just issue copy commands on the device. 727 | xf.style = "move" 728 | else: 729 | # For files in zip format (eg, APKs, JARs, etc.) we would 730 | # like to use imgdiff -z if possible (because it usually 731 | # produces significantly smaller patches than bsdiff). 732 | # This is permissible if: 733 | # 734 | # - imgdiff is not disabled, and 735 | # - the source and target files are monotonic (ie, the 736 | # data is stored with blocks in increasing order), and 737 | # - we haven't removed any blocks from the source set. 738 | # 739 | # If these conditions are satisfied then appending all the 740 | # blocks in the set together in order will produce a valid 741 | # zip file (plus possibly extra zeros in the last block), 742 | # which is what imgdiff needs to operate. (imgdiff is 743 | # fine with extra zeros at the end of the file.) 744 | imgdiff = (not self.disable_imgdiff and xf.intact and 745 | xf.tgt_name.split(".")[-1].lower() 746 | in ("apk", "jar", "zip")) 747 | xf.style = "imgdiff" if imgdiff else "bsdiff" 748 | diff_q.append((tgt_size, src, tgt, xf, patch_num)) 749 | patch_num += 1 750 | 751 | else: 752 | assert False, "unknown style " + xf.style 753 | 754 | if diff_q: 755 | if self.threads > 1: 756 | print("Computing patches (using %d threads)..." % (self.threads,)) 757 | else: 758 | print("Computing patches...") 759 | diff_q.sort() 760 | 761 | patches = [None] * patch_num 762 | 763 | # TODO: Rewrite with multiprocessing.ThreadPool? 764 | lock = threading.Lock() 765 | def diff_worker(): 766 | while True: 767 | with lock: 768 | if not diff_q: 769 | return 770 | tgt_size, src, tgt, xf, patchnum = diff_q.pop() 771 | patch = compute_patch(src, tgt, imgdiff=(xf.style == "imgdiff")) 772 | size = len(patch) 773 | with lock: 774 | patches[patchnum] = (patch, xf) 775 | print("%10d %10d (%6.2f%%) %7s %s" % ( 776 | size, tgt_size, size * 100.0 / tgt_size, xf.style, 777 | xf.tgt_name if xf.tgt_name == xf.src_name else ( 778 | xf.tgt_name + " (from " + xf.src_name + ")"))) 779 | 780 | threads = [threading.Thread(target=diff_worker) 781 | for _ in range(self.threads)] 782 | for th in threads: 783 | th.start() 784 | while threads: 785 | threads.pop().join() 786 | else: 787 | patches = [] 788 | 789 | p = 0 790 | with open(prefix + ".patch.dat", "wb") as patch_f: 791 | for patch, xf in patches: 792 | xf.patch_start = p 793 | xf.patch_len = len(patch) 794 | patch_f.write(patch) 795 | p += len(patch) 796 | 797 | def AssertSequenceGood(self): 798 | # Simulate the sequences of transfers we will output, and check that: 799 | # - we never read a block after writing it, and 800 | # - we write every block we care about exactly once. 801 | 802 | # Start with no blocks having been touched yet. 803 | touched = array.array("B", "\0" * self.tgt.total_blocks) 804 | 805 | # Imagine processing the transfers in order. 806 | for xf in self.transfers: 807 | # Check that the input blocks for this transfer haven't yet been touched. 808 | 809 | x = xf.src_ranges 810 | if self.version >= 2: 811 | for _, sr in xf.use_stash: 812 | x = x.subtract(sr) 813 | 814 | for s, e in x: 815 | # Source image could be larger. Don't check the blocks that are in the 816 | # source image only. Since they are not in 'touched', and won't ever 817 | # be touched. 818 | for i in range(s, min(e, self.tgt.total_blocks)): 819 | assert touched[i] == 0 820 | 821 | # Check that the output blocks for this transfer haven't yet 822 | # been touched, and touch all the blocks written by this 823 | # transfer. 824 | for s, e in xf.tgt_ranges: 825 | for i in range(s, e): 826 | assert touched[i] == 0 827 | touched[i] = 1 828 | 829 | # Check that we've written every target block. 830 | for s, e in self.tgt.care_map: 831 | for i in range(s, e): 832 | assert touched[i] == 1 833 | 834 | def ImproveVertexSequence(self): 835 | print("Improving vertex order...") 836 | 837 | # At this point our digraph is acyclic; we reversed any edges that 838 | # were backwards in the heuristically-generated sequence. The 839 | # previously-generated order is still acceptable, but we hope to 840 | # find a better order that needs less memory for stashed data. 841 | # Now we do a topological sort to generate a new vertex order, 842 | # using a greedy algorithm to choose which vertex goes next 843 | # whenever we have a choice. 844 | 845 | # Make a copy of the edge set; this copy will get destroyed by the 846 | # algorithm. 847 | for xf in self.transfers: 848 | xf.incoming = xf.goes_after.copy() 849 | xf.outgoing = xf.goes_before.copy() 850 | 851 | L = [] # the new vertex order 852 | 853 | # S is the set of sources in the remaining graph; we always choose 854 | # the one that leaves the least amount of stashed data after it's 855 | # executed. 856 | S = [(u.NetStashChange(), u.order, u) for u in self.transfers 857 | if not u.incoming] 858 | heapq.heapify(S) 859 | 860 | while S: 861 | _, _, xf = heapq.heappop(S) 862 | L.append(xf) 863 | for u in xf.outgoing: 864 | del u.incoming[xf] 865 | if not u.incoming: 866 | heapq.heappush(S, (u.NetStashChange(), u.order, u)) 867 | 868 | # if this fails then our graph had a cycle. 869 | assert len(L) == len(self.transfers) 870 | 871 | self.transfers = L 872 | for i, xf in enumerate(L): 873 | xf.order = i 874 | 875 | def RemoveBackwardEdges(self): 876 | print("Removing backward edges...") 877 | in_order = 0 878 | out_of_order = 0 879 | lost_source = 0 880 | 881 | for xf in self.transfers: 882 | lost = 0 883 | size = xf.src_ranges.size() 884 | for u in xf.goes_before: 885 | # xf should go before u 886 | if xf.order < u.order: 887 | # it does, hurray! 888 | in_order += 1 889 | else: 890 | # it doesn't, boo. trim the blocks that u writes from xf's 891 | # source, so that xf can go after u. 892 | out_of_order += 1 893 | assert xf.src_ranges.overlaps(u.tgt_ranges) 894 | xf.src_ranges = xf.src_ranges.subtract(u.tgt_ranges) 895 | xf.intact = False 896 | 897 | if xf.style == "diff" and not xf.src_ranges: 898 | # nothing left to diff from; treat as new data 899 | xf.style = "new" 900 | 901 | lost = size - xf.src_ranges.size() 902 | lost_source += lost 903 | 904 | print((" %d/%d dependencies (%.2f%%) were violated; " 905 | "%d source blocks removed.") % 906 | (out_of_order, in_order + out_of_order, 907 | (out_of_order * 100.0 / (in_order + out_of_order)) 908 | if (in_order + out_of_order) else 0.0, 909 | lost_source)) 910 | 911 | def ReverseBackwardEdges(self): 912 | print("Reversing backward edges...") 913 | in_order = 0 914 | out_of_order = 0 915 | stashes = 0 916 | stash_size = 0 917 | 918 | for xf in self.transfers: 919 | for u in xf.goes_before.copy(): 920 | # xf should go before u 921 | if xf.order < u.order: 922 | # it does, hurray! 923 | in_order += 1 924 | else: 925 | # it doesn't, boo. modify u to stash the blocks that it 926 | # writes that xf wants to read, and then require u to go 927 | # before xf. 928 | out_of_order += 1 929 | 930 | overlap = xf.src_ranges.intersect(u.tgt_ranges) 931 | assert overlap 932 | 933 | u.stash_before.append((stashes, overlap)) 934 | xf.use_stash.append((stashes, overlap)) 935 | stashes += 1 936 | stash_size += overlap.size() 937 | 938 | # reverse the edge direction; now xf must go after u 939 | del xf.goes_before[u] 940 | del u.goes_after[xf] 941 | xf.goes_after[u] = None # value doesn't matter 942 | u.goes_before[xf] = None 943 | 944 | print((" %d/%d dependencies (%.2f%%) were violated; " 945 | "%d source blocks stashed.") % 946 | (out_of_order, in_order + out_of_order, 947 | (out_of_order * 100.0 / (in_order + out_of_order)) 948 | if (in_order + out_of_order) else 0.0, 949 | stash_size)) 950 | 951 | def FindVertexSequence(self): 952 | print("Finding vertex sequence...") 953 | 954 | # This is based on "A Fast & Effective Heuristic for the Feedback 955 | # Arc Set Problem" by P. Eades, X. Lin, and W.F. Smyth. Think of 956 | # it as starting with the digraph G and moving all the vertices to 957 | # be on a horizontal line in some order, trying to minimize the 958 | # number of edges that end up pointing to the left. Left-pointing 959 | # edges will get removed to turn the digraph into a DAG. In this 960 | # case each edge has a weight which is the number of source blocks 961 | # we'll lose if that edge is removed; we try to minimize the total 962 | # weight rather than just the number of edges. 963 | 964 | # Make a copy of the edge set; this copy will get destroyed by the 965 | # algorithm. 966 | for xf in self.transfers: 967 | xf.incoming = xf.goes_after.copy() 968 | xf.outgoing = xf.goes_before.copy() 969 | xf.score = sum(xf.outgoing.values()) - sum(xf.incoming.values()) 970 | 971 | # We use an OrderedDict instead of just a set so that the output 972 | # is repeatable; otherwise it would depend on the hash values of 973 | # the transfer objects. 974 | G = OrderedDict() 975 | for xf in self.transfers: 976 | G[xf] = None 977 | s1 = deque() # the left side of the sequence, built from left to right 978 | s2 = deque() # the right side of the sequence, built from right to left 979 | 980 | heap = [] 981 | for xf in self.transfers: 982 | xf.heap_item = HeapItem(xf) 983 | heap.append(xf.heap_item) 984 | heapq.heapify(heap) 985 | 986 | sinks = set(u for u in G if not u.outgoing) 987 | sources = set(u for u in G if not u.incoming) 988 | 989 | def adjust_score(iu, delta): 990 | iu.score += delta 991 | iu.heap_item.clear() 992 | iu.heap_item = HeapItem(iu) 993 | heapq.heappush(heap, iu.heap_item) 994 | 995 | while G: 996 | # Put all sinks at the end of the sequence. 997 | while sinks: 998 | new_sinks = set() 999 | for u in sinks: 1000 | if u not in G: continue 1001 | s2.appendleft(u) 1002 | del G[u] 1003 | for iu in u.incoming: 1004 | adjust_score(iu, -iu.outgoing.pop(u)) 1005 | if not iu.outgoing: new_sinks.add(iu) 1006 | sinks = new_sinks 1007 | 1008 | # Put all the sources at the beginning of the sequence. 1009 | while sources: 1010 | new_sources = set() 1011 | for u in sources: 1012 | if u not in G: continue 1013 | s1.append(u) 1014 | del G[u] 1015 | for iu in u.outgoing: 1016 | adjust_score(iu, +iu.incoming.pop(u)) 1017 | if not iu.incoming: new_sources.add(iu) 1018 | sources = new_sources 1019 | 1020 | if not G: break 1021 | 1022 | # Find the "best" vertex to put next. "Best" is the one that 1023 | # maximizes the net difference in source blocks saved we get by 1024 | # pretending it's a source rather than a sink. 1025 | 1026 | while True: 1027 | u = heapq.heappop(heap) 1028 | if u and u.item in G: 1029 | u = u.item 1030 | break 1031 | 1032 | s1.append(u) 1033 | del G[u] 1034 | for iu in u.outgoing: 1035 | adjust_score(iu, +iu.incoming.pop(u)) 1036 | if not iu.incoming: sources.add(iu) 1037 | 1038 | for iu in u.incoming: 1039 | adjust_score(iu, -iu.outgoing.pop(u)) 1040 | if not iu.outgoing: sinks.add(iu) 1041 | 1042 | # Now record the sequence in the 'order' field of each transfer, 1043 | # and by rearranging self.transfers to be in the chosen sequence. 1044 | 1045 | new_transfers = [] 1046 | for x in itertools.chain(s1, s2): 1047 | x.order = len(new_transfers) 1048 | new_transfers.append(x) 1049 | del x.incoming 1050 | del x.outgoing 1051 | 1052 | self.transfers = new_transfers 1053 | 1054 | def GenerateDigraph(self): 1055 | print("Generating digraph...") 1056 | 1057 | # Each item of source_ranges will be: 1058 | # - None, if that block is not used as a source, 1059 | # - a transfer, if one transfer uses it as a source, or 1060 | # - a set of transfers. 1061 | source_ranges = [] 1062 | for b in self.transfers: 1063 | for s, e in b.src_ranges: 1064 | if e > len(source_ranges): 1065 | source_ranges.extend([None] * (e-len(source_ranges))) 1066 | for i in range(s, e): 1067 | if source_ranges[i] is None: 1068 | source_ranges[i] = b 1069 | else: 1070 | if not isinstance(source_ranges[i], set): 1071 | source_ranges[i] = set([source_ranges[i]]) 1072 | source_ranges[i].add(b) 1073 | 1074 | for a in self.transfers: 1075 | intersections = set() 1076 | for s, e in a.tgt_ranges: 1077 | for i in range(s, e): 1078 | if i >= len(source_ranges): break 1079 | b = source_ranges[i] 1080 | if b is not None: 1081 | if isinstance(b, set): 1082 | intersections.update(b) 1083 | else: 1084 | intersections.add(b) 1085 | 1086 | for b in intersections: 1087 | if a is b: continue 1088 | 1089 | # If the blocks written by A are read by B, then B needs to go before A. 1090 | i = a.tgt_ranges.intersect(b.src_ranges) 1091 | if i: 1092 | if b.src_name == "__ZERO": 1093 | # the cost of removing source blocks for the __ZERO domain 1094 | # is (nearly) zero. 1095 | size = 0 1096 | else: 1097 | size = i.size() 1098 | b.goes_before[a] = size 1099 | a.goes_after[b] = size 1100 | 1101 | def FindTransfers(self): 1102 | """Parse the file_map to generate all the transfers.""" 1103 | 1104 | def AddTransfer(tgt_name, src_name, tgt_ranges, src_ranges, style, by_id, 1105 | split=False): 1106 | """Wrapper function for adding a Transfer(). 1107 | 1108 | For BBOTA v3, we need to stash source blocks for resumable feature. 1109 | However, with the growth of file size and the shrink of the cache 1110 | partition source blocks are too large to be stashed. If a file occupies 1111 | too many blocks (greater than MAX_BLOCKS_PER_DIFF_TRANSFER), we split it 1112 | into smaller pieces by getting multiple Transfer()s. 1113 | 1114 | The downside is that after splitting, we may increase the package size 1115 | since the split pieces don't align well. According to our experiments, 1116 | 1/8 of the cache size as the per-piece limit appears to be optimal. 1117 | Compared to the fixed 1024-block limit, it reduces the overall package 1118 | size by 30% volantis, and 20% for angler and bullhead.""" 1119 | 1120 | # We care about diff transfers only. 1121 | if style != "diff" or not split: 1122 | Transfer(tgt_name, src_name, tgt_ranges, src_ranges, style, by_id) 1123 | return 1124 | 1125 | pieces = 0 1126 | cache_size = common.OPTIONS.cache_size 1127 | split_threshold = 0.125 1128 | max_blocks_per_transfer = int(cache_size * split_threshold / 1129 | self.tgt.blocksize) 1130 | 1131 | # Change nothing for small files. 1132 | if (tgt_ranges.size() <= max_blocks_per_transfer and 1133 | src_ranges.size() <= max_blocks_per_transfer): 1134 | Transfer(tgt_name, src_name, tgt_ranges, src_ranges, style, by_id) 1135 | return 1136 | 1137 | while (tgt_ranges.size() > max_blocks_per_transfer and 1138 | src_ranges.size() > max_blocks_per_transfer): 1139 | tgt_split_name = "%s-%d" % (tgt_name, pieces) 1140 | src_split_name = "%s-%d" % (src_name, pieces) 1141 | tgt_first = tgt_ranges.first(max_blocks_per_transfer) 1142 | src_first = src_ranges.first(max_blocks_per_transfer) 1143 | 1144 | Transfer(tgt_split_name, src_split_name, tgt_first, src_first, style, 1145 | by_id) 1146 | 1147 | tgt_ranges = tgt_ranges.subtract(tgt_first) 1148 | src_ranges = src_ranges.subtract(src_first) 1149 | pieces += 1 1150 | 1151 | # Handle remaining blocks. 1152 | if tgt_ranges.size() or src_ranges.size(): 1153 | # Must be both non-empty. 1154 | assert tgt_ranges.size() and src_ranges.size() 1155 | tgt_split_name = "%s-%d" % (tgt_name, pieces) 1156 | src_split_name = "%s-%d" % (src_name, pieces) 1157 | Transfer(tgt_split_name, src_split_name, tgt_ranges, src_ranges, style, 1158 | by_id) 1159 | 1160 | empty = RangeSet() 1161 | for tgt_fn, tgt_ranges in self.tgt.file_map.items(): 1162 | if tgt_fn == "__ZERO": 1163 | # the special "__ZERO" domain is all the blocks not contained 1164 | # in any file and that are filled with zeros. We have a 1165 | # special transfer style for zero blocks. 1166 | src_ranges = self.src.file_map.get("__ZERO", empty) 1167 | AddTransfer(tgt_fn, "__ZERO", tgt_ranges, src_ranges, 1168 | "zero", self.transfers) 1169 | continue 1170 | 1171 | elif tgt_fn == "__COPY": 1172 | # "__COPY" domain includes all the blocks not contained in any 1173 | # file and that need to be copied unconditionally to the target. 1174 | AddTransfer(tgt_fn, None, tgt_ranges, empty, "new", self.transfers) 1175 | continue 1176 | 1177 | elif tgt_fn in self.src.file_map: 1178 | # Look for an exact pathname match in the source. 1179 | AddTransfer(tgt_fn, tgt_fn, tgt_ranges, self.src.file_map[tgt_fn], 1180 | "diff", self.transfers, self.version >= 3) 1181 | continue 1182 | 1183 | b = os.path.basename(tgt_fn) 1184 | if b in self.src_basenames: 1185 | # Look for an exact basename match in the source. 1186 | src_fn = self.src_basenames[b] 1187 | AddTransfer(tgt_fn, src_fn, tgt_ranges, self.src.file_map[src_fn], 1188 | "diff", self.transfers, self.version >= 3) 1189 | continue 1190 | 1191 | b = re.sub("[0-9]+", "#", b) 1192 | if b in self.src_numpatterns: 1193 | # Look for a 'number pattern' match (a basename match after 1194 | # all runs of digits are replaced by "#"). (This is useful 1195 | # for .so files that contain version numbers in the filename 1196 | # that get bumped.) 1197 | src_fn = self.src_numpatterns[b] 1198 | AddTransfer(tgt_fn, src_fn, tgt_ranges, self.src.file_map[src_fn], 1199 | "diff", self.transfers, self.version >= 3) 1200 | continue 1201 | 1202 | AddTransfer(tgt_fn, None, tgt_ranges, empty, "new", self.transfers) 1203 | 1204 | def AbbreviateSourceNames(self): 1205 | for k in self.src.file_map.keys(): 1206 | b = os.path.basename(k) 1207 | self.src_basenames[b] = k 1208 | b = re.sub("[0-9]+", "#", b) 1209 | self.src_numpatterns[b] = k 1210 | 1211 | @staticmethod 1212 | def AssertPartition(total, seq): 1213 | """Assert that all the RangeSets in 'seq' form a partition of the 1214 | 'total' RangeSet (ie, they are nonintersecting and their union 1215 | equals 'total').""" 1216 | 1217 | so_far = RangeSet() 1218 | for i in seq: 1219 | assert not so_far.overlaps(i) 1220 | so_far = so_far.union(i) 1221 | assert so_far == total 1222 | -------------------------------------------------------------------------------- /tools/blockimgdiff.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/blockimgdiff.pyc -------------------------------------------------------------------------------- /tools/common.py: -------------------------------------------------------------------------------- 1 | # Copyright (C) 2008 The Android Open Source Project 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import copy 16 | import errno 17 | import getopt 18 | import getpass 19 | import imp 20 | import os 21 | import platform 22 | import re 23 | import shlex 24 | import shutil 25 | import subprocess 26 | import sys 27 | import tempfile 28 | import threading 29 | import time 30 | import zipfile 31 | 32 | import blockimgdiff 33 | 34 | from hashlib import sha1 as sha1 35 | 36 | 37 | class Options(object): 38 | def __init__(self): 39 | platform_search_path = { 40 | "linux2": "out/host/linux-x86", 41 | "darwin": "out/host/darwin-x86", 42 | } 43 | 44 | self.search_path = platform_search_path.get(sys.platform, None) 45 | self.signapk_path = "framework/signapk.jar" # Relative to search_path 46 | self.signapk_shared_library_path = "lib64" # Relative to search_path 47 | self.extra_signapk_args = [] 48 | self.java_path = "java" # Use the one on the path by default. 49 | self.java_args = "-Xmx2048m" # JVM Args 50 | self.public_key_suffix = ".x509.pem" 51 | self.private_key_suffix = ".pk8" 52 | # use otatools built boot_signer by default 53 | self.boot_signer_path = "boot_signer" 54 | self.boot_signer_args = [] 55 | self.verity_signer_path = None 56 | self.verity_signer_args = [] 57 | self.verbose = False 58 | self.tempfiles = [] 59 | self.device_specific = None 60 | self.extras = {} 61 | self.info_dict = None 62 | self.source_info_dict = None 63 | self.target_info_dict = None 64 | self.worker_threads = None 65 | # Stash size cannot exceed cache_size * threshold. 66 | self.cache_size = None 67 | self.stash_threshold = 0.8 68 | 69 | 70 | OPTIONS = Options() 71 | 72 | 73 | # Values for "certificate" in apkcerts that mean special things. 74 | SPECIAL_CERT_STRINGS = ("PRESIGNED", "EXTERNAL") 75 | 76 | class ErrorCode(object): 77 | """Define error_codes for failures that happen during the actual 78 | update package installation. 79 | 80 | Error codes 0-999 are reserved for failures before the package 81 | installation (i.e. low battery, package verification failure). 82 | Detailed code in 'bootable/recovery/error_code.h' """ 83 | 84 | SYSTEM_VERIFICATION_FAILURE = 1000 85 | SYSTEM_UPDATE_FAILURE = 1001 86 | SYSTEM_UNEXPECTED_CONTENTS = 1002 87 | SYSTEM_NONZERO_CONTENTS = 1003 88 | SYSTEM_RECOVER_FAILURE = 1004 89 | VENDOR_VERIFICATION_FAILURE = 2000 90 | VENDOR_UPDATE_FAILURE = 2001 91 | VENDOR_UNEXPECTED_CONTENTS = 2002 92 | VENDOR_NONZERO_CONTENTS = 2003 93 | VENDOR_RECOVER_FAILURE = 2004 94 | OEM_PROP_MISMATCH = 3000 95 | FINGERPRINT_MISMATCH = 3001 96 | THUMBPRINT_MISMATCH = 3002 97 | OLDER_BUILD = 3003 98 | DEVICE_MISMATCH = 3004 99 | BAD_PATCH_FILE = 3005 100 | INSUFFICIENT_CACHE_SPACE = 3006 101 | TUNE_PARTITION_FAILURE = 3007 102 | APPLY_PATCH_FAILURE = 3008 103 | 104 | class ExternalError(RuntimeError): 105 | pass 106 | 107 | 108 | def Run(args, **kwargs): 109 | """Create and return a subprocess.Popen object, printing the command 110 | line on the terminal if -v was specified.""" 111 | if OPTIONS.verbose: 112 | print " running: ", " ".join(args) 113 | return subprocess.Popen(args, **kwargs) 114 | 115 | 116 | def CloseInheritedPipes(): 117 | """ Gmake in MAC OS has file descriptor (PIPE) leak. We close those fds 118 | before doing other work.""" 119 | if platform.system() != "Darwin": 120 | return 121 | for d in range(3, 1025): 122 | try: 123 | stat = os.fstat(d) 124 | if stat is not None: 125 | pipebit = stat[0] & 0x1000 126 | if pipebit != 0: 127 | os.close(d) 128 | except OSError: 129 | pass 130 | 131 | 132 | def LoadInfoDict(input_file, input_dir=None): 133 | """Read and parse the META/misc_info.txt key/value pairs from the 134 | input target files and return a dict.""" 135 | 136 | def read_helper(fn): 137 | if isinstance(input_file, zipfile.ZipFile): 138 | return input_file.read(fn) 139 | else: 140 | path = os.path.join(input_file, *fn.split("/")) 141 | try: 142 | with open(path) as f: 143 | return f.read() 144 | except IOError as e: 145 | if e.errno == errno.ENOENT: 146 | raise KeyError(fn) 147 | d = {} 148 | try: 149 | d = LoadDictionaryFromLines(read_helper("META/misc_info.txt").split("\n")) 150 | except KeyError: 151 | # ok if misc_info.txt doesn't exist 152 | pass 153 | 154 | # backwards compatibility: These values used to be in their own 155 | # files. Look for them, in case we're processing an old 156 | # target_files zip. 157 | 158 | if "mkyaffs2_extra_flags" not in d: 159 | try: 160 | d["mkyaffs2_extra_flags"] = read_helper( 161 | "META/mkyaffs2-extra-flags.txt").strip() 162 | except KeyError: 163 | # ok if flags don't exist 164 | pass 165 | 166 | if "recovery_api_version" not in d: 167 | try: 168 | d["recovery_api_version"] = read_helper( 169 | "META/recovery-api-version.txt").strip() 170 | except KeyError: 171 | raise ValueError("can't find recovery API version in input target-files") 172 | 173 | if "tool_extensions" not in d: 174 | try: 175 | d["tool_extensions"] = read_helper("META/tool-extensions.txt").strip() 176 | except KeyError: 177 | # ok if extensions don't exist 178 | pass 179 | 180 | if "fstab_version" not in d: 181 | d["fstab_version"] = "1" 182 | 183 | # A few properties are stored as links to the files in the out/ directory. 184 | # It works fine with the build system. However, they are no longer available 185 | # when (re)generating from target_files zip. If input_dir is not None, we 186 | # are doing repacking. Redirect those properties to the actual files in the 187 | # unzipped directory. 188 | if input_dir is not None: 189 | # We carry a copy of file_contexts.bin under META/. If not available, 190 | # search BOOT/RAMDISK/. Note that sometimes we may need a different file 191 | # to build images than the one running on device, such as when enabling 192 | # system_root_image. In that case, we must have the one for image 193 | # generation copied to META/. 194 | fc_basename = os.path.basename(d.get("selinux_fc", "file_contexts")) 195 | fc_config = os.path.join(input_dir, "META", fc_basename) 196 | if d.get("system_root_image") == "true": 197 | assert os.path.exists(fc_config) 198 | if not os.path.exists(fc_config): 199 | fc_config = os.path.join(input_dir, "BOOT", "RAMDISK", fc_basename) 200 | if not os.path.exists(fc_config): 201 | fc_config = None 202 | 203 | if fc_config: 204 | d["selinux_fc"] = fc_config 205 | 206 | # Similarly we need to redirect "ramdisk_dir" and "ramdisk_fs_config". 207 | if d.get("system_root_image") == "true": 208 | d["ramdisk_dir"] = os.path.join(input_dir, "ROOT") 209 | d["ramdisk_fs_config"] = os.path.join( 210 | input_dir, "META", "root_filesystem_config.txt") 211 | 212 | # Redirect {system,vendor}_base_fs_file. 213 | if "system_base_fs_file" in d: 214 | basename = os.path.basename(d["system_base_fs_file"]) 215 | system_base_fs_file = os.path.join(input_dir, "META", basename) 216 | if os.path.exists(system_base_fs_file): 217 | d["system_base_fs_file"] = system_base_fs_file 218 | else: 219 | print "Warning: failed to find system base fs file: %s" % ( 220 | system_base_fs_file,) 221 | del d["system_base_fs_file"] 222 | 223 | if "vendor_base_fs_file" in d: 224 | basename = os.path.basename(d["vendor_base_fs_file"]) 225 | vendor_base_fs_file = os.path.join(input_dir, "META", basename) 226 | if os.path.exists(vendor_base_fs_file): 227 | d["vendor_base_fs_file"] = vendor_base_fs_file 228 | else: 229 | print "Warning: failed to find vendor base fs file: %s" % ( 230 | vendor_base_fs_file,) 231 | del d["vendor_base_fs_file"] 232 | 233 | try: 234 | data = read_helper("META/imagesizes.txt") 235 | for line in data.split("\n"): 236 | if not line: 237 | continue 238 | name, value = line.split(" ", 1) 239 | if not value: 240 | continue 241 | if name == "blocksize": 242 | d[name] = value 243 | else: 244 | d[name + "_size"] = value 245 | except KeyError: 246 | pass 247 | 248 | def makeint(key): 249 | if key in d: 250 | d[key] = int(d[key], 0) 251 | 252 | makeint("recovery_api_version") 253 | makeint("blocksize") 254 | makeint("system_size") 255 | makeint("vendor_size") 256 | makeint("userdata_size") 257 | makeint("cache_size") 258 | makeint("recovery_size") 259 | makeint("boot_size") 260 | makeint("fstab_version") 261 | 262 | if d.get("no_recovery", False) == "true": 263 | d["fstab"] = None 264 | else: 265 | d["fstab"] = LoadRecoveryFSTab(read_helper, d["fstab_version"], 266 | d.get("system_root_image", False)) 267 | d["build.prop"] = LoadBuildProp(read_helper) 268 | return d 269 | 270 | def LoadBuildProp(read_helper): 271 | try: 272 | data = read_helper("SYSTEM/build.prop") 273 | except KeyError: 274 | print "Warning: could not find SYSTEM/build.prop in %s" % zip 275 | data = "" 276 | return LoadDictionaryFromLines(data.split("\n")) 277 | 278 | def LoadDictionaryFromLines(lines): 279 | d = {} 280 | for line in lines: 281 | line = line.strip() 282 | if not line or line.startswith("#"): 283 | continue 284 | if "=" in line: 285 | name, value = line.split("=", 1) 286 | d[name] = value 287 | return d 288 | 289 | def LoadRecoveryFSTab(read_helper, fstab_version, system_root_image=False): 290 | class Partition(object): 291 | def __init__(self, mount_point, fs_type, device, length, device2, context): 292 | self.mount_point = mount_point 293 | self.fs_type = fs_type 294 | self.device = device 295 | self.length = length 296 | self.device2 = device2 297 | self.context = context 298 | 299 | try: 300 | data = read_helper("RECOVERY/RAMDISK/etc/recovery.fstab") 301 | except KeyError: 302 | print "Warning: could not find RECOVERY/RAMDISK/etc/recovery.fstab" 303 | data = "" 304 | 305 | if fstab_version == 1: 306 | d = {} 307 | for line in data.split("\n"): 308 | line = line.strip() 309 | if not line or line.startswith("#"): 310 | continue 311 | pieces = line.split() 312 | if not 3 <= len(pieces) <= 4: 313 | raise ValueError("malformed recovery.fstab line: \"%s\"" % (line,)) 314 | options = None 315 | if len(pieces) >= 4: 316 | if pieces[3].startswith("/"): 317 | device2 = pieces[3] 318 | if len(pieces) >= 5: 319 | options = pieces[4] 320 | else: 321 | device2 = None 322 | options = pieces[3] 323 | else: 324 | device2 = None 325 | 326 | mount_point = pieces[0] 327 | length = 0 328 | if options: 329 | options = options.split(",") 330 | for i in options: 331 | if i.startswith("length="): 332 | length = int(i[7:]) 333 | else: 334 | print "%s: unknown option \"%s\"" % (mount_point, i) 335 | 336 | d[mount_point] = Partition(mount_point=mount_point, fs_type=pieces[1], 337 | device=pieces[2], length=length, 338 | device2=device2) 339 | 340 | elif fstab_version == 2: 341 | d = {} 342 | for line in data.split("\n"): 343 | line = line.strip() 344 | if not line or line.startswith("#"): 345 | continue 346 | # 347 | pieces = line.split() 348 | if len(pieces) != 5: 349 | raise ValueError("malformed recovery.fstab line: \"%s\"" % (line,)) 350 | 351 | # Ignore entries that are managed by vold 352 | options = pieces[4] 353 | if "voldmanaged=" in options: 354 | continue 355 | 356 | # It's a good line, parse it 357 | length = 0 358 | options = options.split(",") 359 | for i in options: 360 | if i.startswith("length="): 361 | length = int(i[7:]) 362 | else: 363 | # Ignore all unknown options in the unified fstab 364 | continue 365 | 366 | mount_flags = pieces[3] 367 | # Honor the SELinux context if present. 368 | context = None 369 | for i in mount_flags.split(","): 370 | if i.startswith("context="): 371 | context = i 372 | 373 | mount_point = pieces[1] 374 | d[mount_point] = Partition(mount_point=mount_point, fs_type=pieces[2], 375 | device=pieces[0], length=length, 376 | device2=None, context=context) 377 | 378 | else: 379 | raise ValueError("Unknown fstab_version: \"%d\"" % (fstab_version,)) 380 | 381 | # / is used for the system mount point when the root directory is included in 382 | # system. Other areas assume system is always at "/system" so point /system 383 | # at /. 384 | if system_root_image: 385 | assert not d.has_key("/system") and d.has_key("/") 386 | d["/system"] = d["/"] 387 | return d 388 | 389 | 390 | def DumpInfoDict(d): 391 | for k, v in sorted(d.items()): 392 | print "%-25s = (%s) %s" % (k, type(v).__name__, v) 393 | 394 | 395 | def _BuildBootableImage(sourcedir, fs_config_file, info_dict=None, 396 | has_ramdisk=False): 397 | """Build a bootable image from the specified sourcedir. 398 | 399 | Take a kernel, cmdline, and optionally a ramdisk directory from the input (in 400 | 'sourcedir'), and turn them into a boot image. Return the image data, or 401 | None if sourcedir does not appear to contains files for building the 402 | requested image.""" 403 | 404 | def make_ramdisk(): 405 | ramdisk_img = tempfile.NamedTemporaryFile() 406 | 407 | if os.access(fs_config_file, os.F_OK): 408 | cmd = ["mkbootfs", "-f", fs_config_file, 409 | os.path.join(sourcedir, "RAMDISK")] 410 | else: 411 | cmd = ["mkbootfs", os.path.join(sourcedir, "RAMDISK")] 412 | p1 = Run(cmd, stdout=subprocess.PIPE) 413 | p2 = Run(["minigzip"], stdin=p1.stdout, stdout=ramdisk_img.file.fileno()) 414 | 415 | p2.wait() 416 | p1.wait() 417 | assert p1.returncode == 0, "mkbootfs of %s ramdisk failed" % (sourcedir,) 418 | assert p2.returncode == 0, "minigzip of %s ramdisk failed" % (sourcedir,) 419 | 420 | return ramdisk_img 421 | 422 | if not os.access(os.path.join(sourcedir, "kernel"), os.F_OK): 423 | return None 424 | 425 | if has_ramdisk and not os.access(os.path.join(sourcedir, "RAMDISK"), os.F_OK): 426 | return None 427 | 428 | if info_dict is None: 429 | info_dict = OPTIONS.info_dict 430 | 431 | img = tempfile.NamedTemporaryFile() 432 | 433 | if has_ramdisk: 434 | ramdisk_img = make_ramdisk() 435 | 436 | # use MKBOOTIMG from environ, or "mkbootimg" if empty or not set 437 | mkbootimg = os.getenv('MKBOOTIMG') or "mkbootimg" 438 | 439 | cmd = [mkbootimg, "--kernel", os.path.join(sourcedir, "kernel")] 440 | 441 | fn = os.path.join(sourcedir, "second") 442 | if os.access(fn, os.F_OK): 443 | cmd.append("--second") 444 | cmd.append(fn) 445 | 446 | fn = os.path.join(sourcedir, "cmdline") 447 | if os.access(fn, os.F_OK): 448 | cmd.append("--cmdline") 449 | cmd.append(open(fn).read().rstrip("\n")) 450 | 451 | fn = os.path.join(sourcedir, "base") 452 | if os.access(fn, os.F_OK): 453 | cmd.append("--base") 454 | cmd.append(open(fn).read().rstrip("\n")) 455 | 456 | fn = os.path.join(sourcedir, "pagesize") 457 | if os.access(fn, os.F_OK): 458 | cmd.append("--pagesize") 459 | cmd.append(open(fn).read().rstrip("\n")) 460 | 461 | args = info_dict.get("mkbootimg_args", None) 462 | if args and args.strip(): 463 | cmd.extend(shlex.split(args)) 464 | 465 | args = info_dict.get("mkbootimg_version_args", None) 466 | if args and args.strip(): 467 | cmd.extend(shlex.split(args)) 468 | 469 | if has_ramdisk: 470 | cmd.extend(["--ramdisk", ramdisk_img.name]) 471 | 472 | img_unsigned = None 473 | if info_dict.get("vboot", None): 474 | img_unsigned = tempfile.NamedTemporaryFile() 475 | cmd.extend(["--output", img_unsigned.name]) 476 | else: 477 | cmd.extend(["--output", img.name]) 478 | 479 | p = Run(cmd, stdout=subprocess.PIPE) 480 | p.communicate() 481 | assert p.returncode == 0, "mkbootimg of %s image failed" % ( 482 | os.path.basename(sourcedir),) 483 | 484 | if (info_dict.get("boot_signer", None) == "true" and 485 | info_dict.get("verity_key", None)): 486 | path = "/" + os.path.basename(sourcedir).lower() 487 | cmd = [OPTIONS.boot_signer_path] 488 | cmd.extend(OPTIONS.boot_signer_args) 489 | cmd.extend([path, img.name, 490 | info_dict["verity_key"] + ".pk8", 491 | info_dict["verity_key"] + ".x509.pem", img.name]) 492 | p = Run(cmd, stdout=subprocess.PIPE) 493 | p.communicate() 494 | assert p.returncode == 0, "boot_signer of %s image failed" % path 495 | 496 | # Sign the image if vboot is non-empty. 497 | elif info_dict.get("vboot", None): 498 | path = "/" + os.path.basename(sourcedir).lower() 499 | img_keyblock = tempfile.NamedTemporaryFile() 500 | cmd = [info_dict["vboot_signer_cmd"], info_dict["futility"], 501 | img_unsigned.name, info_dict["vboot_key"] + ".vbpubk", 502 | info_dict["vboot_key"] + ".vbprivk", 503 | info_dict["vboot_subkey"] + ".vbprivk", 504 | img_keyblock.name, 505 | img.name] 506 | p = Run(cmd, stdout=subprocess.PIPE) 507 | p.communicate() 508 | assert p.returncode == 0, "vboot_signer of %s image failed" % path 509 | 510 | # Clean up the temp files. 511 | img_unsigned.close() 512 | img_keyblock.close() 513 | 514 | img.seek(os.SEEK_SET, 0) 515 | data = img.read() 516 | 517 | if has_ramdisk: 518 | ramdisk_img.close() 519 | img.close() 520 | 521 | return data 522 | 523 | 524 | def GetBootableImage(name, prebuilt_name, unpack_dir, tree_subdir, 525 | info_dict=None): 526 | """Return a File object with the desired bootable image. 527 | 528 | Look for it in 'unpack_dir'/BOOTABLE_IMAGES under the name 'prebuilt_name', 529 | otherwise look for it under 'unpack_dir'/IMAGES, otherwise construct it from 530 | the source files in 'unpack_dir'/'tree_subdir'.""" 531 | 532 | prebuilt_path = os.path.join(unpack_dir, "BOOTABLE_IMAGES", prebuilt_name) 533 | if os.path.exists(prebuilt_path): 534 | print "using prebuilt %s from BOOTABLE_IMAGES..." % (prebuilt_name,) 535 | return File.FromLocalFile(name, prebuilt_path) 536 | 537 | prebuilt_path = os.path.join(unpack_dir, "IMAGES", prebuilt_name) 538 | if os.path.exists(prebuilt_path): 539 | print "using prebuilt %s from IMAGES..." % (prebuilt_name,) 540 | return File.FromLocalFile(name, prebuilt_path) 541 | 542 | print "building image from target_files %s..." % (tree_subdir,) 543 | 544 | if info_dict is None: 545 | info_dict = OPTIONS.info_dict 546 | 547 | # With system_root_image == "true", we don't pack ramdisk into the boot image. 548 | # Unless "recovery_as_boot" is specified, in which case we carry the ramdisk 549 | # for recovery. 550 | has_ramdisk = (info_dict.get("system_root_image") != "true" or 551 | prebuilt_name != "boot.img" or 552 | info_dict.get("recovery_as_boot") == "true") 553 | 554 | fs_config = "META/" + tree_subdir.lower() + "_filesystem_config.txt" 555 | data = _BuildBootableImage(os.path.join(unpack_dir, tree_subdir), 556 | os.path.join(unpack_dir, fs_config), 557 | info_dict, has_ramdisk) 558 | if data: 559 | return File(name, data) 560 | return None 561 | 562 | 563 | def UnzipTemp(filename, pattern=None): 564 | """Unzip the given archive into a temporary directory and return the name. 565 | 566 | If filename is of the form "foo.zip+bar.zip", unzip foo.zip into a 567 | temp dir, then unzip bar.zip into that_dir/BOOTABLE_IMAGES. 568 | 569 | Returns (tempdir, zipobj) where zipobj is a zipfile.ZipFile (of the 570 | main file), open for reading. 571 | """ 572 | 573 | tmp = tempfile.mkdtemp(prefix="targetfiles-") 574 | OPTIONS.tempfiles.append(tmp) 575 | 576 | def unzip_to_dir(filename, dirname): 577 | cmd = ["unzip", "-o", "-q", filename, "-d", dirname] 578 | if pattern is not None: 579 | cmd.append(pattern) 580 | p = Run(cmd, stdout=subprocess.PIPE) 581 | p.communicate() 582 | if p.returncode != 0: 583 | raise ExternalError("failed to unzip input target-files \"%s\"" % 584 | (filename,)) 585 | 586 | m = re.match(r"^(.*[.]zip)\+(.*[.]zip)$", filename, re.IGNORECASE) 587 | if m: 588 | unzip_to_dir(m.group(1), tmp) 589 | unzip_to_dir(m.group(2), os.path.join(tmp, "BOOTABLE_IMAGES")) 590 | filename = m.group(1) 591 | else: 592 | unzip_to_dir(filename, tmp) 593 | 594 | return tmp, zipfile.ZipFile(filename, "r") 595 | 596 | 597 | def GetKeyPasswords(keylist): 598 | """Given a list of keys, prompt the user to enter passwords for 599 | those which require them. Return a {key: password} dict. password 600 | will be None if the key has no password.""" 601 | 602 | no_passwords = [] 603 | need_passwords = [] 604 | key_passwords = {} 605 | devnull = open("/dev/null", "w+b") 606 | for k in sorted(keylist): 607 | # We don't need a password for things that aren't really keys. 608 | if k in SPECIAL_CERT_STRINGS: 609 | no_passwords.append(k) 610 | continue 611 | 612 | p = Run(["openssl", "pkcs8", "-in", k+OPTIONS.private_key_suffix, 613 | "-inform", "DER", "-nocrypt"], 614 | stdin=devnull.fileno(), 615 | stdout=devnull.fileno(), 616 | stderr=subprocess.STDOUT) 617 | p.communicate() 618 | if p.returncode == 0: 619 | # Definitely an unencrypted key. 620 | no_passwords.append(k) 621 | else: 622 | p = Run(["openssl", "pkcs8", "-in", k+OPTIONS.private_key_suffix, 623 | "-inform", "DER", "-passin", "pass:"], 624 | stdin=devnull.fileno(), 625 | stdout=devnull.fileno(), 626 | stderr=subprocess.PIPE) 627 | _, stderr = p.communicate() 628 | if p.returncode == 0: 629 | # Encrypted key with empty string as password. 630 | key_passwords[k] = '' 631 | elif stderr.startswith('Error decrypting key'): 632 | # Definitely encrypted key. 633 | # It would have said "Error reading key" if it didn't parse correctly. 634 | need_passwords.append(k) 635 | else: 636 | # Potentially, a type of key that openssl doesn't understand. 637 | # We'll let the routines in signapk.jar handle it. 638 | no_passwords.append(k) 639 | devnull.close() 640 | 641 | key_passwords.update(PasswordManager().GetPasswords(need_passwords)) 642 | key_passwords.update(dict.fromkeys(no_passwords, None)) 643 | return key_passwords 644 | 645 | 646 | def GetMinSdkVersion(apk_name): 647 | """Get the minSdkVersion delared in the APK. This can be both a decimal number 648 | (API Level) or a codename. 649 | """ 650 | 651 | p = Run(["aapt", "dump", "badging", apk_name], stdout=subprocess.PIPE) 652 | output, err = p.communicate() 653 | if err: 654 | raise ExternalError("Failed to obtain minSdkVersion: aapt return code %s" 655 | % (p.returncode,)) 656 | 657 | for line in output.split("\n"): 658 | # Looking for lines such as sdkVersion:'23' or sdkVersion:'M' 659 | m = re.match(r'sdkVersion:\'([^\']*)\'', line) 660 | if m: 661 | return m.group(1) 662 | raise ExternalError("No minSdkVersion returned by aapt") 663 | 664 | 665 | def GetMinSdkVersionInt(apk_name, codename_to_api_level_map): 666 | """Get the minSdkVersion declared in the APK as a number (API Level). If 667 | minSdkVersion is set to a codename, it is translated to a number using the 668 | provided map. 669 | """ 670 | 671 | version = GetMinSdkVersion(apk_name) 672 | try: 673 | return int(version) 674 | except ValueError: 675 | # Not a decimal number. Codename? 676 | if version in codename_to_api_level_map: 677 | return codename_to_api_level_map[version] 678 | else: 679 | raise ExternalError("Unknown minSdkVersion: '%s'. Known codenames: %s" 680 | % (version, codename_to_api_level_map)) 681 | 682 | 683 | def SignFile(input_name, output_name, key, password, min_api_level=None, 684 | codename_to_api_level_map=dict(), 685 | whole_file=False): 686 | """Sign the input_name zip/jar/apk, producing output_name. Use the 687 | given key and password (the latter may be None if the key does not 688 | have a password. 689 | 690 | If whole_file is true, use the "-w" option to SignApk to embed a 691 | signature that covers the whole file in the archive comment of the 692 | zip file. 693 | 694 | min_api_level is the API Level (int) of the oldest platform this file may end 695 | up on. If not specified for an APK, the API Level is obtained by interpreting 696 | the minSdkVersion attribute of the APK's AndroidManifest.xml. 697 | 698 | codename_to_api_level_map is needed to translate the codename which may be 699 | encountered as the APK's minSdkVersion. 700 | """ 701 | 702 | java_library_path = os.path.join( 703 | OPTIONS.search_path, OPTIONS.signapk_shared_library_path) 704 | 705 | cmd = [OPTIONS.java_path, OPTIONS.java_args, 706 | "-Djava.library.path=" + java_library_path, 707 | "-jar", 708 | os.path.join(OPTIONS.search_path, OPTIONS.signapk_path)] 709 | cmd.extend(OPTIONS.extra_signapk_args) 710 | if whole_file: 711 | cmd.append("-w") 712 | 713 | min_sdk_version = min_api_level 714 | if min_sdk_version is None: 715 | if not whole_file: 716 | min_sdk_version = GetMinSdkVersionInt( 717 | input_name, codename_to_api_level_map) 718 | if min_sdk_version is not None: 719 | cmd.extend(["--min-sdk-version", str(min_sdk_version)]) 720 | 721 | cmd.extend([key + OPTIONS.public_key_suffix, 722 | key + OPTIONS.private_key_suffix, 723 | input_name, output_name]) 724 | 725 | p = Run(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE) 726 | if password is not None: 727 | password += "\n" 728 | p.communicate(password) 729 | if p.returncode != 0: 730 | raise ExternalError("signapk.jar failed: return code %s" % (p.returncode,)) 731 | 732 | 733 | def CheckSize(data, target, info_dict): 734 | """Check the data string passed against the max size limit, if 735 | any, for the given target. Raise exception if the data is too big. 736 | Print a warning if the data is nearing the maximum size.""" 737 | 738 | if target.endswith(".img"): 739 | target = target[:-4] 740 | mount_point = "/" + target 741 | 742 | fs_type = None 743 | limit = None 744 | if info_dict["fstab"]: 745 | if mount_point == "/userdata": 746 | mount_point = "/data" 747 | p = info_dict["fstab"][mount_point] 748 | fs_type = p.fs_type 749 | device = p.device 750 | if "/" in device: 751 | device = device[device.rfind("/")+1:] 752 | limit = info_dict.get(device + "_size", None) 753 | if not fs_type or not limit: 754 | return 755 | 756 | if fs_type == "yaffs2": 757 | # image size should be increased by 1/64th to account for the 758 | # spare area (64 bytes per 2k page) 759 | limit = limit / 2048 * (2048+64) 760 | size = len(data) 761 | pct = float(size) * 100.0 / limit 762 | msg = "%s size (%d) is %.2f%% of limit (%d)" % (target, size, pct, limit) 763 | if pct >= 99.0: 764 | raise ExternalError(msg) 765 | elif pct >= 95.0: 766 | print 767 | print " WARNING: ", msg 768 | print 769 | elif OPTIONS.verbose: 770 | print " ", msg 771 | 772 | 773 | def ReadApkCerts(tf_zip): 774 | """Given a target_files ZipFile, parse the META/apkcerts.txt file 775 | and return a {package: cert} dict.""" 776 | certmap = {} 777 | for line in tf_zip.read("META/apkcerts.txt").split("\n"): 778 | line = line.strip() 779 | if not line: 780 | continue 781 | m = re.match(r'^name="(.*)"\s+certificate="(.*)"\s+' 782 | r'private_key="(.*)"$', line) 783 | if m: 784 | name, cert, privkey = m.groups() 785 | public_key_suffix_len = len(OPTIONS.public_key_suffix) 786 | private_key_suffix_len = len(OPTIONS.private_key_suffix) 787 | if cert in SPECIAL_CERT_STRINGS and not privkey: 788 | certmap[name] = cert 789 | elif (cert.endswith(OPTIONS.public_key_suffix) and 790 | privkey.endswith(OPTIONS.private_key_suffix) and 791 | cert[:-public_key_suffix_len] == privkey[:-private_key_suffix_len]): 792 | certmap[name] = cert[:-public_key_suffix_len] 793 | else: 794 | raise ValueError("failed to parse line from apkcerts.txt:\n" + line) 795 | return certmap 796 | 797 | 798 | COMMON_DOCSTRING = """ 799 | -p (--path) 800 | Prepend /bin to the list of places to search for binaries 801 | run by this script, and expect to find jars in /framework. 802 | 803 | -s (--device_specific) 804 | Path to the python module containing device-specific 805 | releasetools code. 806 | 807 | -x (--extra) 808 | Add a key/value pair to the 'extras' dict, which device-specific 809 | extension code may look at. 810 | 811 | -v (--verbose) 812 | Show command lines being executed. 813 | 814 | -h (--help) 815 | Display this usage message and exit. 816 | """ 817 | 818 | def Usage(docstring): 819 | print docstring.rstrip("\n") 820 | print COMMON_DOCSTRING 821 | 822 | 823 | def ParseOptions(argv, 824 | docstring, 825 | extra_opts="", extra_long_opts=(), 826 | extra_option_handler=None): 827 | """Parse the options in argv and return any arguments that aren't 828 | flags. docstring is the calling module's docstring, to be displayed 829 | for errors and -h. extra_opts and extra_long_opts are for flags 830 | defined by the caller, which are processed by passing them to 831 | extra_option_handler.""" 832 | 833 | try: 834 | opts, args = getopt.getopt( 835 | argv, "hvp:s:x:" + extra_opts, 836 | ["help", "verbose", "path=", "signapk_path=", 837 | "signapk_shared_library_path=", "extra_signapk_args=", 838 | "java_path=", "java_args=", "public_key_suffix=", 839 | "private_key_suffix=", "boot_signer_path=", "boot_signer_args=", 840 | "verity_signer_path=", "verity_signer_args=", "device_specific=", 841 | "extra="] + 842 | list(extra_long_opts)) 843 | except getopt.GetoptError as err: 844 | Usage(docstring) 845 | print "**", str(err), "**" 846 | sys.exit(2) 847 | 848 | for o, a in opts: 849 | if o in ("-h", "--help"): 850 | Usage(docstring) 851 | sys.exit() 852 | elif o in ("-v", "--verbose"): 853 | OPTIONS.verbose = True 854 | elif o in ("-p", "--path"): 855 | OPTIONS.search_path = a 856 | elif o in ("--signapk_path",): 857 | OPTIONS.signapk_path = a 858 | elif o in ("--signapk_shared_library_path",): 859 | OPTIONS.signapk_shared_library_path = a 860 | elif o in ("--extra_signapk_args",): 861 | OPTIONS.extra_signapk_args = shlex.split(a) 862 | elif o in ("--java_path",): 863 | OPTIONS.java_path = a 864 | elif o in ("--java_args",): 865 | OPTIONS.java_args = a 866 | elif o in ("--public_key_suffix",): 867 | OPTIONS.public_key_suffix = a 868 | elif o in ("--private_key_suffix",): 869 | OPTIONS.private_key_suffix = a 870 | elif o in ("--boot_signer_path",): 871 | OPTIONS.boot_signer_path = a 872 | elif o in ("--boot_signer_args",): 873 | OPTIONS.boot_signer_args = shlex.split(a) 874 | elif o in ("--verity_signer_path",): 875 | OPTIONS.verity_signer_path = a 876 | elif o in ("--verity_signer_args",): 877 | OPTIONS.verity_signer_args = shlex.split(a) 878 | elif o in ("-s", "--device_specific"): 879 | OPTIONS.device_specific = a 880 | elif o in ("-x", "--extra"): 881 | key, value = a.split("=", 1) 882 | OPTIONS.extras[key] = value 883 | else: 884 | if extra_option_handler is None or not extra_option_handler(o, a): 885 | assert False, "unknown option \"%s\"" % (o,) 886 | 887 | if OPTIONS.search_path: 888 | os.environ["PATH"] = (os.path.join(OPTIONS.search_path, "bin") + 889 | os.pathsep + os.environ["PATH"]) 890 | 891 | return args 892 | 893 | 894 | def MakeTempFile(prefix=None, suffix=None): 895 | """Make a temp file and add it to the list of things to be deleted 896 | when Cleanup() is called. Return the filename.""" 897 | fd, fn = tempfile.mkstemp(prefix=prefix, suffix=suffix) 898 | os.close(fd) 899 | OPTIONS.tempfiles.append(fn) 900 | return fn 901 | 902 | 903 | def Cleanup(): 904 | for i in OPTIONS.tempfiles: 905 | if os.path.isdir(i): 906 | shutil.rmtree(i) 907 | else: 908 | os.remove(i) 909 | 910 | 911 | class PasswordManager(object): 912 | def __init__(self): 913 | self.editor = os.getenv("EDITOR", None) 914 | self.pwfile = os.getenv("ANDROID_PW_FILE", None) 915 | 916 | def GetPasswords(self, items): 917 | """Get passwords corresponding to each string in 'items', 918 | returning a dict. (The dict may have keys in addition to the 919 | values in 'items'.) 920 | 921 | Uses the passwords in $ANDROID_PW_FILE if available, letting the 922 | user edit that file to add more needed passwords. If no editor is 923 | available, or $ANDROID_PW_FILE isn't define, prompts the user 924 | interactively in the ordinary way. 925 | """ 926 | 927 | current = self.ReadFile() 928 | 929 | first = True 930 | while True: 931 | missing = [] 932 | for i in items: 933 | if i not in current or not current[i]: 934 | missing.append(i) 935 | # Are all the passwords already in the file? 936 | if not missing: 937 | return current 938 | 939 | for i in missing: 940 | current[i] = "" 941 | 942 | if not first: 943 | print "key file %s still missing some passwords." % (self.pwfile,) 944 | answer = raw_input("try to edit again? [y]> ").strip() 945 | if answer and answer[0] not in 'yY': 946 | raise RuntimeError("key passwords unavailable") 947 | first = False 948 | 949 | current = self.UpdateAndReadFile(current) 950 | 951 | def PromptResult(self, current): # pylint: disable=no-self-use 952 | """Prompt the user to enter a value (password) for each key in 953 | 'current' whose value is fales. Returns a new dict with all the 954 | values. 955 | """ 956 | result = {} 957 | for k, v in sorted(current.iteritems()): 958 | if v: 959 | result[k] = v 960 | else: 961 | while True: 962 | result[k] = getpass.getpass( 963 | "Enter password for %s key> " % k).strip() 964 | if result[k]: 965 | break 966 | return result 967 | 968 | def UpdateAndReadFile(self, current): 969 | if not self.editor or not self.pwfile: 970 | return self.PromptResult(current) 971 | 972 | f = open(self.pwfile, "w") 973 | os.chmod(self.pwfile, 0o600) 974 | f.write("# Enter key passwords between the [[[ ]]] brackets.\n") 975 | f.write("# (Additional spaces are harmless.)\n\n") 976 | 977 | first_line = None 978 | sorted_list = sorted([(not v, k, v) for (k, v) in current.iteritems()]) 979 | for i, (_, k, v) in enumerate(sorted_list): 980 | f.write("[[[ %s ]]] %s\n" % (v, k)) 981 | if not v and first_line is None: 982 | # position cursor on first line with no password. 983 | first_line = i + 4 984 | f.close() 985 | 986 | p = Run([self.editor, "+%d" % (first_line,), self.pwfile]) 987 | _, _ = p.communicate() 988 | 989 | return self.ReadFile() 990 | 991 | def ReadFile(self): 992 | result = {} 993 | if self.pwfile is None: 994 | return result 995 | try: 996 | f = open(self.pwfile, "r") 997 | for line in f: 998 | line = line.strip() 999 | if not line or line[0] == '#': 1000 | continue 1001 | m = re.match(r"^\[\[\[\s*(.*?)\s*\]\]\]\s*(\S+)$", line) 1002 | if not m: 1003 | print "failed to parse password file: ", line 1004 | else: 1005 | result[m.group(2)] = m.group(1) 1006 | f.close() 1007 | except IOError as e: 1008 | if e.errno != errno.ENOENT: 1009 | print "error reading password file: ", str(e) 1010 | return result 1011 | 1012 | 1013 | def ZipWrite(zip_file, filename, arcname=None, perms=0o644, 1014 | compress_type=None): 1015 | import datetime 1016 | 1017 | # http://b/18015246 1018 | # Python 2.7's zipfile implementation wrongly thinks that zip64 is required 1019 | # for files larger than 2GiB. We can work around this by adjusting their 1020 | # limit. Note that `zipfile.writestr()` will not work for strings larger than 1021 | # 2GiB. The Python interpreter sometimes rejects strings that large (though 1022 | # it isn't clear to me exactly what circumstances cause this). 1023 | # `zipfile.write()` must be used directly to work around this. 1024 | # 1025 | # This mess can be avoided if we port to python3. 1026 | saved_zip64_limit = zipfile.ZIP64_LIMIT 1027 | zipfile.ZIP64_LIMIT = (1 << 32) - 1 1028 | 1029 | if compress_type is None: 1030 | compress_type = zip_file.compression 1031 | if arcname is None: 1032 | arcname = filename 1033 | 1034 | saved_stat = os.stat(filename) 1035 | 1036 | try: 1037 | # `zipfile.write()` doesn't allow us to pass ZipInfo, so just modify the 1038 | # file to be zipped and reset it when we're done. 1039 | os.chmod(filename, perms) 1040 | 1041 | # Use a fixed timestamp so the output is repeatable. 1042 | epoch = datetime.datetime.fromtimestamp(0) 1043 | timestamp = (datetime.datetime(2009, 1, 1) - epoch).total_seconds() 1044 | os.utime(filename, (timestamp, timestamp)) 1045 | 1046 | zip_file.write(filename, arcname=arcname, compress_type=compress_type) 1047 | finally: 1048 | os.chmod(filename, saved_stat.st_mode) 1049 | os.utime(filename, (saved_stat.st_atime, saved_stat.st_mtime)) 1050 | zipfile.ZIP64_LIMIT = saved_zip64_limit 1051 | 1052 | 1053 | def ZipWriteStr(zip_file, zinfo_or_arcname, data, perms=None, 1054 | compress_type=None): 1055 | """Wrap zipfile.writestr() function to work around the zip64 limit. 1056 | 1057 | Even with the ZIP64_LIMIT workaround, it won't allow writing a string 1058 | longer than 2GiB. It gives 'OverflowError: size does not fit in an int' 1059 | when calling crc32(bytes). 1060 | 1061 | But it still works fine to write a shorter string into a large zip file. 1062 | We should use ZipWrite() whenever possible, and only use ZipWriteStr() 1063 | when we know the string won't be too long. 1064 | """ 1065 | 1066 | saved_zip64_limit = zipfile.ZIP64_LIMIT 1067 | zipfile.ZIP64_LIMIT = (1 << 32) - 1 1068 | 1069 | if not isinstance(zinfo_or_arcname, zipfile.ZipInfo): 1070 | zinfo = zipfile.ZipInfo(filename=zinfo_or_arcname) 1071 | zinfo.compress_type = zip_file.compression 1072 | if perms is None: 1073 | perms = 0o100644 1074 | else: 1075 | zinfo = zinfo_or_arcname 1076 | 1077 | # If compress_type is given, it overrides the value in zinfo. 1078 | if compress_type is not None: 1079 | zinfo.compress_type = compress_type 1080 | 1081 | # If perms is given, it has a priority. 1082 | if perms is not None: 1083 | # If perms doesn't set the file type, mark it as a regular file. 1084 | if perms & 0o770000 == 0: 1085 | perms |= 0o100000 1086 | zinfo.external_attr = perms << 16 1087 | 1088 | # Use a fixed timestamp so the output is repeatable. 1089 | zinfo.date_time = (2009, 1, 1, 0, 0, 0) 1090 | 1091 | zip_file.writestr(zinfo, data) 1092 | zipfile.ZIP64_LIMIT = saved_zip64_limit 1093 | 1094 | 1095 | def ZipClose(zip_file): 1096 | # http://b/18015246 1097 | # zipfile also refers to ZIP64_LIMIT during close() when it writes out the 1098 | # central directory. 1099 | saved_zip64_limit = zipfile.ZIP64_LIMIT 1100 | zipfile.ZIP64_LIMIT = (1 << 32) - 1 1101 | 1102 | zip_file.close() 1103 | 1104 | zipfile.ZIP64_LIMIT = saved_zip64_limit 1105 | 1106 | 1107 | class DeviceSpecificParams(object): 1108 | module = None 1109 | def __init__(self, **kwargs): 1110 | """Keyword arguments to the constructor become attributes of this 1111 | object, which is passed to all functions in the device-specific 1112 | module.""" 1113 | for k, v in kwargs.iteritems(): 1114 | setattr(self, k, v) 1115 | self.extras = OPTIONS.extras 1116 | 1117 | if self.module is None: 1118 | path = OPTIONS.device_specific 1119 | if not path: 1120 | return 1121 | try: 1122 | if os.path.isdir(path): 1123 | info = imp.find_module("releasetools", [path]) 1124 | else: 1125 | d, f = os.path.split(path) 1126 | b, x = os.path.splitext(f) 1127 | if x == ".py": 1128 | f = b 1129 | info = imp.find_module(f, [d]) 1130 | print "loaded device-specific extensions from", path 1131 | self.module = imp.load_module("device_specific", *info) 1132 | except ImportError: 1133 | print "unable to load device-specific module; assuming none" 1134 | 1135 | def _DoCall(self, function_name, *args, **kwargs): 1136 | """Call the named function in the device-specific module, passing 1137 | the given args and kwargs. The first argument to the call will be 1138 | the DeviceSpecific object itself. If there is no module, or the 1139 | module does not define the function, return the value of the 1140 | 'default' kwarg (which itself defaults to None).""" 1141 | if self.module is None or not hasattr(self.module, function_name): 1142 | return kwargs.get("default", None) 1143 | return getattr(self.module, function_name)(*((self,) + args), **kwargs) 1144 | 1145 | def FullOTA_Assertions(self): 1146 | """Called after emitting the block of assertions at the top of a 1147 | full OTA package. Implementations can add whatever additional 1148 | assertions they like.""" 1149 | return self._DoCall("FullOTA_Assertions") 1150 | 1151 | def FullOTA_InstallBegin(self): 1152 | """Called at the start of full OTA installation.""" 1153 | return self._DoCall("FullOTA_InstallBegin") 1154 | 1155 | def FullOTA_InstallEnd(self): 1156 | """Called at the end of full OTA installation; typically this is 1157 | used to install the image for the device's baseband processor.""" 1158 | return self._DoCall("FullOTA_InstallEnd") 1159 | 1160 | def IncrementalOTA_Assertions(self): 1161 | """Called after emitting the block of assertions at the top of an 1162 | incremental OTA package. Implementations can add whatever 1163 | additional assertions they like.""" 1164 | return self._DoCall("IncrementalOTA_Assertions") 1165 | 1166 | def IncrementalOTA_VerifyBegin(self): 1167 | """Called at the start of the verification phase of incremental 1168 | OTA installation; additional checks can be placed here to abort 1169 | the script before any changes are made.""" 1170 | return self._DoCall("IncrementalOTA_VerifyBegin") 1171 | 1172 | def IncrementalOTA_VerifyEnd(self): 1173 | """Called at the end of the verification phase of incremental OTA 1174 | installation; additional checks can be placed here to abort the 1175 | script before any changes are made.""" 1176 | return self._DoCall("IncrementalOTA_VerifyEnd") 1177 | 1178 | def IncrementalOTA_InstallBegin(self): 1179 | """Called at the start of incremental OTA installation (after 1180 | verification is complete).""" 1181 | return self._DoCall("IncrementalOTA_InstallBegin") 1182 | 1183 | def IncrementalOTA_InstallEnd(self): 1184 | """Called at the end of incremental OTA installation; typically 1185 | this is used to install the image for the device's baseband 1186 | processor.""" 1187 | return self._DoCall("IncrementalOTA_InstallEnd") 1188 | 1189 | def VerifyOTA_Assertions(self): 1190 | return self._DoCall("VerifyOTA_Assertions") 1191 | 1192 | class File(object): 1193 | def __init__(self, name, data): 1194 | self.name = name 1195 | self.data = data 1196 | self.size = len(data) 1197 | self.sha1 = sha1(data).hexdigest() 1198 | 1199 | @classmethod 1200 | def FromLocalFile(cls, name, diskname): 1201 | f = open(diskname, "rb") 1202 | data = f.read() 1203 | f.close() 1204 | return File(name, data) 1205 | 1206 | def WriteToTemp(self): 1207 | t = tempfile.NamedTemporaryFile() 1208 | t.write(self.data) 1209 | t.flush() 1210 | return t 1211 | 1212 | def AddToZip(self, z, compression=None): 1213 | ZipWriteStr(z, self.name, self.data, compress_type=compression) 1214 | 1215 | DIFF_PROGRAM_BY_EXT = { 1216 | ".gz" : "imgdiff", 1217 | ".zip" : ["imgdiff", "-z"], 1218 | ".jar" : ["imgdiff", "-z"], 1219 | ".apk" : ["imgdiff", "-z"], 1220 | ".img" : "imgdiff", 1221 | } 1222 | 1223 | class Difference(object): 1224 | def __init__(self, tf, sf, diff_program=None): 1225 | self.tf = tf 1226 | self.sf = sf 1227 | self.patch = None 1228 | self.diff_program = diff_program 1229 | 1230 | def ComputePatch(self): 1231 | """Compute the patch (as a string of data) needed to turn sf into 1232 | tf. Returns the same tuple as GetPatch().""" 1233 | 1234 | tf = self.tf 1235 | sf = self.sf 1236 | 1237 | if self.diff_program: 1238 | diff_program = self.diff_program 1239 | else: 1240 | ext = os.path.splitext(tf.name)[1] 1241 | diff_program = DIFF_PROGRAM_BY_EXT.get(ext, "bsdiff") 1242 | 1243 | ttemp = tf.WriteToTemp() 1244 | stemp = sf.WriteToTemp() 1245 | 1246 | ext = os.path.splitext(tf.name)[1] 1247 | 1248 | try: 1249 | ptemp = tempfile.NamedTemporaryFile() 1250 | if isinstance(diff_program, list): 1251 | cmd = copy.copy(diff_program) 1252 | else: 1253 | cmd = [diff_program] 1254 | cmd.append(stemp.name) 1255 | cmd.append(ttemp.name) 1256 | cmd.append(ptemp.name) 1257 | p = Run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 1258 | err = [] 1259 | def run(): 1260 | _, e = p.communicate() 1261 | if e: 1262 | err.append(e) 1263 | th = threading.Thread(target=run) 1264 | th.start() 1265 | th.join(timeout=300) # 5 mins 1266 | if th.is_alive(): 1267 | print "WARNING: diff command timed out" 1268 | p.terminate() 1269 | th.join(5) 1270 | if th.is_alive(): 1271 | p.kill() 1272 | th.join() 1273 | 1274 | if err or p.returncode != 0: 1275 | print "WARNING: failure running %s:\n%s\n" % ( 1276 | diff_program, "".join(err)) 1277 | self.patch = None 1278 | return None, None, None 1279 | diff = ptemp.read() 1280 | finally: 1281 | ptemp.close() 1282 | stemp.close() 1283 | ttemp.close() 1284 | 1285 | self.patch = diff 1286 | return self.tf, self.sf, self.patch 1287 | 1288 | 1289 | def GetPatch(self): 1290 | """Return a tuple (target_file, source_file, patch_data). 1291 | patch_data may be None if ComputePatch hasn't been called, or if 1292 | computing the patch failed.""" 1293 | return self.tf, self.sf, self.patch 1294 | 1295 | 1296 | def ComputeDifferences(diffs): 1297 | """Call ComputePatch on all the Difference objects in 'diffs'.""" 1298 | print len(diffs), "diffs to compute" 1299 | 1300 | # Do the largest files first, to try and reduce the long-pole effect. 1301 | by_size = [(i.tf.size, i) for i in diffs] 1302 | by_size.sort(reverse=True) 1303 | by_size = [i[1] for i in by_size] 1304 | 1305 | lock = threading.Lock() 1306 | diff_iter = iter(by_size) # accessed under lock 1307 | 1308 | def worker(): 1309 | try: 1310 | lock.acquire() 1311 | for d in diff_iter: 1312 | lock.release() 1313 | start = time.time() 1314 | d.ComputePatch() 1315 | dur = time.time() - start 1316 | lock.acquire() 1317 | 1318 | tf, sf, patch = d.GetPatch() 1319 | if sf.name == tf.name: 1320 | name = tf.name 1321 | else: 1322 | name = "%s (%s)" % (tf.name, sf.name) 1323 | if patch is None: 1324 | print "patching failed! %s" % (name,) 1325 | else: 1326 | print "%8.2f sec %8d / %8d bytes (%6.2f%%) %s" % ( 1327 | dur, len(patch), tf.size, 100.0 * len(patch) / tf.size, name) 1328 | lock.release() 1329 | except Exception as e: 1330 | print e 1331 | raise 1332 | 1333 | # start worker threads; wait for them all to finish. 1334 | threads = [threading.Thread(target=worker) 1335 | for i in range(OPTIONS.worker_threads)] 1336 | for th in threads: 1337 | th.start() 1338 | while threads: 1339 | threads.pop().join() 1340 | 1341 | 1342 | class BlockDifference(object): 1343 | def __init__(self, partition, tgt, src=None, check_first_block=False, 1344 | version=None, disable_imgdiff=False): 1345 | self.tgt = tgt 1346 | self.src = src 1347 | self.partition = partition 1348 | self.check_first_block = check_first_block 1349 | self.disable_imgdiff = disable_imgdiff 1350 | 1351 | if version is None: 1352 | version = 1 1353 | if OPTIONS.info_dict: 1354 | version = max( 1355 | int(i) for i in 1356 | OPTIONS.info_dict.get("blockimgdiff_versions", "1").split(",")) 1357 | self.version = version 1358 | 1359 | b = blockimgdiff.BlockImageDiff(tgt, src, threads=OPTIONS.worker_threads, 1360 | version=self.version, 1361 | disable_imgdiff=self.disable_imgdiff) 1362 | tmpdir = tempfile.mkdtemp() 1363 | OPTIONS.tempfiles.append(tmpdir) 1364 | self.path = os.path.join(tmpdir, partition) 1365 | b.Compute(self.path) 1366 | self._required_cache = b.max_stashed_size 1367 | self.touched_src_ranges = b.touched_src_ranges 1368 | self.touched_src_sha1 = b.touched_src_sha1 1369 | 1370 | 1371 | @property 1372 | def required_cache(self): 1373 | return self._required_cache 1374 | 1375 | def WriteScript(self, script, output_zip, progress=None): 1376 | if not self.src: 1377 | # write the output unconditionally 1378 | script.Print("Patching %s image unconditionally..." % (self.partition,)) 1379 | else: 1380 | script.Print("Patching %s image after verification." % (self.partition,)) 1381 | 1382 | if progress: 1383 | script.ShowProgress(progress, 0) 1384 | self._WriteUpdate(script, output_zip) 1385 | if OPTIONS.verify: 1386 | self._WritePostInstallVerifyScript(script) 1387 | 1388 | def WriteStrictVerifyScript(self, script): 1389 | """Verify all the blocks in the care_map, including clobbered blocks. 1390 | 1391 | This differs from the WriteVerifyScript() function: a) it prints different 1392 | error messages; b) it doesn't allow half-way updated images to pass the 1393 | verification.""" 1394 | 1395 | partition = self.partition 1396 | script.Print("Verifying %s..." % (partition,)) 1397 | ranges = self.tgt.care_map 1398 | ranges_str = ranges.to_string_raw() 1399 | script.AppendExtra('range_sha1("%s", "%s") == "%s" && ' 1400 | 'ui_print(" Verified.") || ' 1401 | 'ui_print("\\"%s\\" has unexpected contents.");' % ( 1402 | self.device, ranges_str, 1403 | self.tgt.TotalSha1(include_clobbered_blocks=True), 1404 | self.device)) 1405 | script.AppendExtra("") 1406 | 1407 | def WriteVerifyScript(self, script, touched_blocks_only=False): 1408 | partition = self.partition 1409 | 1410 | # full OTA 1411 | if not self.src: 1412 | script.Print("Image %s will be patched unconditionally." % (partition,)) 1413 | 1414 | # incremental OTA 1415 | else: 1416 | if touched_blocks_only and self.version >= 3: 1417 | ranges = self.touched_src_ranges 1418 | expected_sha1 = self.touched_src_sha1 1419 | else: 1420 | ranges = self.src.care_map.subtract(self.src.clobbered_blocks) 1421 | expected_sha1 = self.src.TotalSha1() 1422 | 1423 | # No blocks to be checked, skipping. 1424 | if not ranges: 1425 | return 1426 | 1427 | ranges_str = ranges.to_string_raw() 1428 | if self.version >= 4: 1429 | script.AppendExtra(('if (range_sha1("%s", "%s") == "%s" || ' 1430 | 'block_image_verify("%s", ' 1431 | 'package_extract_file("%s.transfer.list"), ' 1432 | '"%s.new.dat", "%s.patch.dat")) then') % ( 1433 | self.device, ranges_str, expected_sha1, 1434 | self.device, partition, partition, partition)) 1435 | elif self.version == 3: 1436 | script.AppendExtra(('if (range_sha1("%s", "%s") == "%s" || ' 1437 | 'block_image_verify("%s", ' 1438 | 'package_extract_file("%s.transfer.list"), ' 1439 | '"%s.new.dat", "%s.patch.dat")) then') % ( 1440 | self.device, ranges_str, expected_sha1, 1441 | self.device, partition, partition, partition)) 1442 | else: 1443 | script.AppendExtra('if range_sha1("%s", "%s") == "%s" then' % ( 1444 | self.device, ranges_str, self.src.TotalSha1())) 1445 | script.Print('Verified %s image...' % (partition,)) 1446 | script.AppendExtra('else') 1447 | 1448 | if self.version >= 4: 1449 | 1450 | # Bug: 21124327 1451 | # When generating incrementals for the system and vendor partitions in 1452 | # version 4 or newer, explicitly check the first block (which contains 1453 | # the superblock) of the partition to see if it's what we expect. If 1454 | # this check fails, give an explicit log message about the partition 1455 | # having been remounted R/W (the most likely explanation). 1456 | if self.check_first_block: 1457 | script.AppendExtra('check_first_block("%s");' % (self.device,)) 1458 | 1459 | # If version >= 4, try block recovery before abort update 1460 | if partition == "system": 1461 | code = ErrorCode.SYSTEM_RECOVER_FAILURE 1462 | else: 1463 | code = ErrorCode.VENDOR_RECOVER_FAILURE 1464 | script.AppendExtra(( 1465 | 'ifelse (block_image_recover("{device}", "{ranges}") && ' 1466 | 'block_image_verify("{device}", ' 1467 | 'package_extract_file("{partition}.transfer.list"), ' 1468 | '"{partition}.new.dat", "{partition}.patch.dat"), ' 1469 | 'ui_print("{partition} recovered successfully."), ' 1470 | 'abort("E{code}: {partition} partition fails to recover"));\n' 1471 | 'endif;').format(device=self.device, ranges=ranges_str, 1472 | partition=partition, code=code)) 1473 | 1474 | # Abort the OTA update. Note that the incremental OTA cannot be applied 1475 | # even if it may match the checksum of the target partition. 1476 | # a) If version < 3, operations like move and erase will make changes 1477 | # unconditionally and damage the partition. 1478 | # b) If version >= 3, it won't even reach here. 1479 | else: 1480 | if partition == "system": 1481 | code = ErrorCode.SYSTEM_VERIFICATION_FAILURE 1482 | else: 1483 | code = ErrorCode.VENDOR_VERIFICATION_FAILURE 1484 | script.AppendExtra(( 1485 | 'abort("E%d: %s partition has unexpected contents");\n' 1486 | 'endif;') % (code, partition)) 1487 | 1488 | def _WritePostInstallVerifyScript(self, script): 1489 | partition = self.partition 1490 | script.Print('Verifying the updated %s image...' % (partition,)) 1491 | # Unlike pre-install verification, clobbered_blocks should not be ignored. 1492 | ranges = self.tgt.care_map 1493 | ranges_str = ranges.to_string_raw() 1494 | script.AppendExtra('if range_sha1("%s", "%s") == "%s" then' % ( 1495 | self.device, ranges_str, 1496 | self.tgt.TotalSha1(include_clobbered_blocks=True))) 1497 | 1498 | # Bug: 20881595 1499 | # Verify that extended blocks are really zeroed out. 1500 | if self.tgt.extended: 1501 | ranges_str = self.tgt.extended.to_string_raw() 1502 | script.AppendExtra('if range_sha1("%s", "%s") == "%s" then' % ( 1503 | self.device, ranges_str, 1504 | self._HashZeroBlocks(self.tgt.extended.size()))) 1505 | script.Print('Verified the updated %s image.' % (partition,)) 1506 | if partition == "system": 1507 | code = ErrorCode.SYSTEM_NONZERO_CONTENTS 1508 | else: 1509 | code = ErrorCode.VENDOR_NONZERO_CONTENTS 1510 | script.AppendExtra( 1511 | 'else\n' 1512 | ' abort("E%d: %s partition has unexpected non-zero contents after ' 1513 | 'OTA update");\n' 1514 | 'endif;' % (code, partition)) 1515 | else: 1516 | script.Print('Verified the updated %s image.' % (partition,)) 1517 | 1518 | if partition == "system": 1519 | code = ErrorCode.SYSTEM_UNEXPECTED_CONTENTS 1520 | else: 1521 | code = ErrorCode.VENDOR_UNEXPECTED_CONTENTS 1522 | 1523 | script.AppendExtra( 1524 | 'else\n' 1525 | ' abort("E%d: %s partition has unexpected contents after OTA ' 1526 | 'update");\n' 1527 | 'endif;' % (code, partition)) 1528 | 1529 | def _WriteUpdate(self, script, output_zip): 1530 | ZipWrite(output_zip, 1531 | '{}.transfer.list'.format(self.path), 1532 | '{}.transfer.list'.format(self.partition)) 1533 | ZipWrite(output_zip, 1534 | '{}.new.dat'.format(self.path), 1535 | '{}.new.dat'.format(self.partition)) 1536 | ZipWrite(output_zip, 1537 | '{}.patch.dat'.format(self.path), 1538 | '{}.patch.dat'.format(self.partition), 1539 | compress_type=zipfile.ZIP_STORED) 1540 | 1541 | if self.partition == "system": 1542 | code = ErrorCode.SYSTEM_UPDATE_FAILURE 1543 | else: 1544 | code = ErrorCode.VENDOR_UPDATE_FAILURE 1545 | 1546 | call = ('block_image_update("{device}", ' 1547 | 'package_extract_file("{partition}.transfer.list"), ' 1548 | '"{partition}.new.dat", "{partition}.patch.dat") ||\n' 1549 | ' abort("E{code}: Failed to update {partition} image.");'.format( 1550 | device=self.device, partition=self.partition, code=code)) 1551 | script.AppendExtra(script.WordWrap(call)) 1552 | 1553 | def _HashBlocks(self, source, ranges): # pylint: disable=no-self-use 1554 | data = source.ReadRangeSet(ranges) 1555 | ctx = sha1() 1556 | 1557 | for p in data: 1558 | ctx.update(p) 1559 | 1560 | return ctx.hexdigest() 1561 | 1562 | def _HashZeroBlocks(self, num_blocks): # pylint: disable=no-self-use 1563 | """Return the hash value for all zero blocks.""" 1564 | zero_block = '\x00' * 4096 1565 | ctx = sha1() 1566 | for _ in range(num_blocks): 1567 | ctx.update(zero_block) 1568 | 1569 | return ctx.hexdigest() 1570 | 1571 | 1572 | DataImage = blockimgdiff.DataImage 1573 | 1574 | # map recovery.fstab's fs_types to mount/format "partition types" 1575 | PARTITION_TYPES = { 1576 | "yaffs2": "MTD", 1577 | "mtd": "MTD", 1578 | "ext4": "EMMC", 1579 | "emmc": "EMMC", 1580 | "f2fs": "EMMC", 1581 | "squashfs": "EMMC" 1582 | } 1583 | 1584 | def GetTypeAndDevice(mount_point, info): 1585 | fstab = info["fstab"] 1586 | if fstab: 1587 | return (PARTITION_TYPES[fstab[mount_point].fs_type], 1588 | fstab[mount_point].device) 1589 | else: 1590 | raise KeyError 1591 | 1592 | 1593 | def ParseCertificate(data): 1594 | """Parse a PEM-format certificate.""" 1595 | cert = [] 1596 | save = False 1597 | for line in data.split("\n"): 1598 | if "--END CERTIFICATE--" in line: 1599 | break 1600 | if save: 1601 | cert.append(line) 1602 | if "--BEGIN CERTIFICATE--" in line: 1603 | save = True 1604 | cert = "".join(cert).decode('base64') 1605 | return cert 1606 | 1607 | def MakeRecoveryPatch(input_dir, output_sink, recovery_img, boot_img, 1608 | info_dict=None): 1609 | """Generate a binary patch that creates the recovery image starting 1610 | with the boot image. (Most of the space in these images is just the 1611 | kernel, which is identical for the two, so the resulting patch 1612 | should be efficient.) Add it to the output zip, along with a shell 1613 | script that is run from init.rc on first boot to actually do the 1614 | patching and install the new recovery image. 1615 | 1616 | recovery_img and boot_img should be File objects for the 1617 | corresponding images. info should be the dictionary returned by 1618 | common.LoadInfoDict() on the input target_files. 1619 | """ 1620 | 1621 | if info_dict is None: 1622 | info_dict = OPTIONS.info_dict 1623 | 1624 | full_recovery_image = info_dict.get("full_recovery_image", None) == "true" 1625 | system_root_image = info_dict.get("system_root_image", None) == "true" 1626 | 1627 | if full_recovery_image: 1628 | output_sink("etc/recovery.img", recovery_img.data) 1629 | 1630 | else: 1631 | diff_program = ["imgdiff"] 1632 | path = os.path.join(input_dir, "SYSTEM", "etc", "recovery-resource.dat") 1633 | if os.path.exists(path): 1634 | diff_program.append("-b") 1635 | diff_program.append(path) 1636 | bonus_args = "-b /system/etc/recovery-resource.dat" 1637 | else: 1638 | bonus_args = "" 1639 | 1640 | d = Difference(recovery_img, boot_img, diff_program=diff_program) 1641 | _, _, patch = d.ComputePatch() 1642 | output_sink("recovery-from-boot.p", patch) 1643 | 1644 | try: 1645 | # The following GetTypeAndDevice()s need to use the path in the target 1646 | # info_dict instead of source_info_dict. 1647 | boot_type, boot_device = GetTypeAndDevice("/boot", info_dict) 1648 | recovery_type, recovery_device = GetTypeAndDevice("/recovery", info_dict) 1649 | except KeyError: 1650 | return 1651 | 1652 | if full_recovery_image: 1653 | sh = """#!/system/bin/sh 1654 | if ! applypatch -c %(type)s:%(device)s:%(size)d:%(sha1)s; then 1655 | applypatch /system/etc/recovery.img %(type)s:%(device)s %(sha1)s %(size)d && log -t recovery "Installing new recovery image: succeeded" || log -t recovery "Installing new recovery image: failed" 1656 | else 1657 | log -t recovery "Recovery image already installed" 1658 | fi 1659 | """ % {'type': recovery_type, 1660 | 'device': recovery_device, 1661 | 'sha1': recovery_img.sha1, 1662 | 'size': recovery_img.size} 1663 | else: 1664 | sh = """#!/system/bin/sh 1665 | if ! applypatch -c %(recovery_type)s:%(recovery_device)s:%(recovery_size)d:%(recovery_sha1)s; then 1666 | applypatch %(bonus_args)s %(boot_type)s:%(boot_device)s:%(boot_size)d:%(boot_sha1)s %(recovery_type)s:%(recovery_device)s %(recovery_sha1)s %(recovery_size)d %(boot_sha1)s:/system/recovery-from-boot.p && log -t recovery "Installing new recovery image: succeeded" || log -t recovery "Installing new recovery image: failed" 1667 | else 1668 | log -t recovery "Recovery image already installed" 1669 | fi 1670 | """ % {'boot_size': boot_img.size, 1671 | 'boot_sha1': boot_img.sha1, 1672 | 'recovery_size': recovery_img.size, 1673 | 'recovery_sha1': recovery_img.sha1, 1674 | 'boot_type': boot_type, 1675 | 'boot_device': boot_device, 1676 | 'recovery_type': recovery_type, 1677 | 'recovery_device': recovery_device, 1678 | 'bonus_args': bonus_args} 1679 | 1680 | # The install script location moved from /system/etc to /system/bin 1681 | # in the L release. Parse init.*.rc files to find out where the 1682 | # target-files expects it to be, and put it there. 1683 | sh_location = "etc/install-recovery.sh" 1684 | found = False 1685 | if system_root_image: 1686 | init_rc_dir = os.path.join(input_dir, "ROOT") 1687 | else: 1688 | init_rc_dir = os.path.join(input_dir, "BOOT", "RAMDISK") 1689 | init_rc_files = os.listdir(init_rc_dir) 1690 | for init_rc_file in init_rc_files: 1691 | if (not init_rc_file.startswith('init.') or 1692 | not init_rc_file.endswith('.rc')): 1693 | continue 1694 | 1695 | with open(os.path.join(init_rc_dir, init_rc_file)) as f: 1696 | for line in f: 1697 | m = re.match(r"^service flash_recovery /system/(\S+)\s*$", line) 1698 | if m: 1699 | sh_location = m.group(1) 1700 | found = True 1701 | break 1702 | 1703 | if found: 1704 | break 1705 | 1706 | print "putting script in", sh_location 1707 | 1708 | output_sink(sh_location, sh) 1709 | -------------------------------------------------------------------------------- /tools/common.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/common.pyc -------------------------------------------------------------------------------- /tools/img2sdat.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | #==================================================== 4 | # FILE: img2sdat.py 5 | # AUTHORS: xpirt - luxi78 - howellzhu 6 | # DATE: 2016-12-23 16:47:24 CST 7 | #==================================================== 8 | 9 | import sys, os, errno, tempfile 10 | import common, blockimgdiff, sparse_img 11 | 12 | __version__ = '1.2' 13 | 14 | if sys.hexversion < 0x02070000: 15 | print >> sys.stderr, "Python 2.7 or newer is required." 16 | try: 17 | input = raw_input 18 | except NameError: pass 19 | input('Press ENTER to exit...') 20 | sys.exit(1) 21 | else: 22 | print('img2sdat binary - version: %s\n' % __version__) 23 | 24 | try: 25 | INPUT_IMAGE = str(sys.argv[1]) 26 | except IndexError: 27 | print('Usage: img2sdat.py [outdir] [version]\n') 28 | print(' : input system image\n') 29 | print(' [outdir]: output directory (current directory by default)\n') 30 | print(' [version]: transfer list version number (1 - 5.0, 2 - 5.1, 3 - 6.0, 4 - 7.0, will be asked by default, more info on xda thread)\n') 31 | print('Visit xda thread for more information.\n') 32 | try: 33 | input = raw_input 34 | except NameError: pass 35 | input('Press ENTER to exit...') 36 | sys.exit() 37 | 38 | def main(argv): 39 | if len(sys.argv) < 3: 40 | outdir = './system' 41 | else: 42 | outdir = sys.argv[2] + '/system' 43 | 44 | if len(sys.argv) < 4: 45 | version = 4 46 | item = True 47 | while item: 48 | print(''' 1. Android Lollipop 5.0 49 | 2. Android Lollipop 5.1 50 | 3. Android Marshmallow 6.0 51 | 4. Android Nougat 7.0 52 | ''') 53 | item = raw_input('Choose system version: ') 54 | if item == '1': 55 | version = 1 56 | break 57 | elif item == '2': 58 | version = 2 59 | break 60 | elif item == '3': 61 | version = 3 62 | break 63 | elif item == '4': 64 | version = 4 65 | break 66 | else: 67 | return 68 | else: 69 | version = int(sys.argv[3]) 70 | 71 | # Get sparse image 72 | image = sparse_img.SparseImage(INPUT_IMAGE, tempfile.mkstemp()[1], '0') 73 | 74 | # Generate output files 75 | b = blockimgdiff.BlockImageDiff(image, None, version) 76 | b.Compute(outdir) 77 | 78 | print('Done! Output files: %s' % os.path.dirname(outdir)) 79 | return 80 | 81 | if __name__ == '__main__': 82 | main(sys.argv) 83 | -------------------------------------------------------------------------------- /tools/lib64/libc++.so: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/lib64/libc++.so -------------------------------------------------------------------------------- /tools/lib64/sefcontext: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/lib64/sefcontext -------------------------------------------------------------------------------- /tools/lib64/sefcontext_compile: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/lib64/sefcontext_compile -------------------------------------------------------------------------------- /tools/make_ext4fs: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/make_ext4fs -------------------------------------------------------------------------------- /tools/nul: -------------------------------------------------------------------------------- 1 | support:cofface@cofface.com 2 | converted success,outfile: convert/file_contexts. 3 | -------------------------------------------------------------------------------- /tools/rangelib.py: -------------------------------------------------------------------------------- 1 | # Copyright (C) 2014 The Android Open Source Project 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | from __future__ import print_function 16 | import heapq 17 | import itertools 18 | 19 | __all__ = ["RangeSet"] 20 | 21 | class RangeSet(object): 22 | """A RangeSet represents a set of nonoverlapping ranges on the 23 | integers (ie, a set of integers, but efficient when the set contains 24 | lots of runs.""" 25 | 26 | def __init__(self, data=None): 27 | self.monotonic = False 28 | if isinstance(data, str): 29 | self._parse_internal(data) 30 | elif data: 31 | assert len(data) % 2 == 0 32 | self.data = tuple(self._remove_pairs(data)) 33 | self.monotonic = all(x < y for x, y in zip(self.data, self.data[1:])) 34 | else: 35 | self.data = () 36 | 37 | def __iter__(self): 38 | for i in range(0, len(self.data), 2): 39 | yield self.data[i:i+2] 40 | 41 | def __eq__(self, other): 42 | return self.data == other.data 43 | 44 | def __ne__(self, other): 45 | return self.data != other.data 46 | 47 | def __nonzero__(self): 48 | return bool(self.data) 49 | 50 | def __str__(self): 51 | if not self.data: 52 | return "empty" 53 | else: 54 | return self.to_string() 55 | 56 | def __repr__(self): 57 | return '' 58 | 59 | @classmethod 60 | def parse(cls, text): 61 | """Parse a text string consisting of a space-separated list of 62 | blocks and ranges, eg "10-20 30 35-40". Ranges are interpreted to 63 | include both their ends (so the above example represents 18 64 | individual blocks. Returns a RangeSet object. 65 | 66 | If the input has all its blocks in increasing order, then returned 67 | RangeSet will have an extra attribute 'monotonic' that is set to 68 | True. For example the input "10-20 30" is monotonic, but the input 69 | "15-20 30 10-14" is not, even though they represent the same set 70 | of blocks (and the two RangeSets will compare equal with ==). 71 | """ 72 | return cls(text) 73 | 74 | def _parse_internal(self, text): 75 | data = [] 76 | last = -1 77 | monotonic = True 78 | for p in text.split(): 79 | if "-" in p: 80 | s, e = (int(x) for x in p.split("-")) 81 | data.append(s) 82 | data.append(e+1) 83 | if last <= s <= e: 84 | last = e 85 | else: 86 | monotonic = False 87 | else: 88 | s = int(p) 89 | data.append(s) 90 | data.append(s+1) 91 | if last <= s: 92 | last = s+1 93 | else: 94 | monotonic = False 95 | data.sort() 96 | self.data = tuple(self._remove_pairs(data)) 97 | self.monotonic = monotonic 98 | 99 | @staticmethod 100 | def _remove_pairs(source): 101 | """Remove consecutive duplicate items to simplify the result. 102 | 103 | [1, 2, 2, 5, 5, 10] will become [1, 10].""" 104 | last = None 105 | for i in source: 106 | if i == last: 107 | last = None 108 | else: 109 | if last is not None: 110 | yield last 111 | last = i 112 | if last is not None: 113 | yield last 114 | 115 | def to_string(self): 116 | out = [] 117 | for i in range(0, len(self.data), 2): 118 | s, e = self.data[i:i+2] 119 | if e == s+1: 120 | out.append(str(s)) 121 | else: 122 | out.append(str(s) + "-" + str(e-1)) 123 | return " ".join(out) 124 | 125 | def to_string_raw(self): 126 | assert self.data 127 | return str(len(self.data)) + "," + ",".join(str(i) for i in self.data) 128 | 129 | def union(self, other): 130 | """Return a new RangeSet representing the union of this RangeSet 131 | with the argument. 132 | 133 | >>> RangeSet("10-19 30-34").union(RangeSet("18-29")) 134 | 135 | >>> RangeSet("10-19 30-34").union(RangeSet("22 32")) 136 | 137 | """ 138 | out = [] 139 | z = 0 140 | for p, d in heapq.merge(zip(self.data, itertools.cycle((+1, -1))), 141 | zip(other.data, itertools.cycle((+1, -1)))): 142 | if (z == 0 and d == 1) or (z == 1 and d == -1): 143 | out.append(p) 144 | z += d 145 | return RangeSet(data=out) 146 | 147 | def intersect(self, other): 148 | """Return a new RangeSet representing the intersection of this 149 | RangeSet with the argument. 150 | 151 | >>> RangeSet("10-19 30-34").intersect(RangeSet("18-32")) 152 | 153 | >>> RangeSet("10-19 30-34").intersect(RangeSet("22-28")) 154 | 155 | """ 156 | out = [] 157 | z = 0 158 | for p, d in heapq.merge(zip(self.data, itertools.cycle((+1, -1))), 159 | zip(other.data, itertools.cycle((+1, -1)))): 160 | if (z == 1 and d == 1) or (z == 2 and d == -1): 161 | out.append(p) 162 | z += d 163 | return RangeSet(data=out) 164 | 165 | def subtract(self, other): 166 | """Return a new RangeSet representing subtracting the argument 167 | from this RangeSet. 168 | 169 | >>> RangeSet("10-19 30-34").subtract(RangeSet("18-32")) 170 | 171 | >>> RangeSet("10-19 30-34").subtract(RangeSet("22-28")) 172 | 173 | """ 174 | 175 | out = [] 176 | z = 0 177 | for p, d in heapq.merge(zip(self.data, itertools.cycle((+1, -1))), 178 | zip(other.data, itertools.cycle((-1, +1)))): 179 | if (z == 0 and d == 1) or (z == 1 and d == -1): 180 | out.append(p) 181 | z += d 182 | return RangeSet(data=out) 183 | 184 | def overlaps(self, other): 185 | """Returns true if the argument has a nonempty overlap with this 186 | RangeSet. 187 | 188 | >>> RangeSet("10-19 30-34").overlaps(RangeSet("18-32")) 189 | True 190 | >>> RangeSet("10-19 30-34").overlaps(RangeSet("22-28")) 191 | False 192 | """ 193 | 194 | # This is like intersect, but we can stop as soon as we discover the 195 | # output is going to be nonempty. 196 | z = 0 197 | for _, d in heapq.merge(zip(self.data, itertools.cycle((+1, -1))), 198 | zip(other.data, itertools.cycle((+1, -1)))): 199 | if (z == 1 and d == 1) or (z == 2 and d == -1): 200 | return True 201 | z += d 202 | return False 203 | 204 | def size(self): 205 | """Returns the total size of the RangeSet (ie, how many integers 206 | are in the set). 207 | 208 | >>> RangeSet("10-19 30-34").size() 209 | 15 210 | """ 211 | 212 | total = 0 213 | for i, p in enumerate(self.data): 214 | if i % 2: 215 | total += p 216 | else: 217 | total -= p 218 | return total 219 | 220 | def map_within(self, other): 221 | """'other' should be a subset of 'self'. Returns a RangeSet 222 | representing what 'other' would get translated to if the integers 223 | of 'self' were translated down to be contiguous starting at zero. 224 | 225 | >>> RangeSet("0-9").map_within(RangeSet("3-4")) 226 | 227 | >>> RangeSet("10-19").map_within(RangeSet("13-14")) 228 | 229 | >>> RangeSet("10-19 30-39").map_within(RangeSet("17-19 30-32")) 230 | 231 | >>> RangeSet("10-19 30-39").map_within(RangeSet("12-13 17-19 30-32")) 232 | 233 | """ 234 | 235 | out = [] 236 | offset = 0 237 | start = None 238 | for p, d in heapq.merge(zip(self.data, itertools.cycle((-5, +5))), 239 | zip(other.data, itertools.cycle((-1, +1)))): 240 | if d == -5: 241 | start = p 242 | elif d == +5: 243 | offset += p-start 244 | start = None 245 | else: 246 | out.append(offset + p - start) 247 | return RangeSet(data=out) 248 | 249 | def extend(self, n): 250 | """Extend the RangeSet by 'n' blocks. 251 | 252 | The lower bound is guaranteed to be non-negative. 253 | 254 | >>> RangeSet("0-9").extend(1) 255 | 256 | >>> RangeSet("10-19").extend(15) 257 | 258 | >>> RangeSet("10-19 30-39").extend(4) 259 | 260 | >>> RangeSet("10-19 30-39").extend(10) 261 | 262 | """ 263 | out = self 264 | for i in range(0, len(self.data), 2): 265 | s, e = self.data[i:i+2] 266 | s1 = max(0, s - n) 267 | e1 = e + n 268 | out = out.union(RangeSet(str(s1) + "-" + str(e1-1))) 269 | return out 270 | 271 | def first(self, n): 272 | """Return the RangeSet that contains at most the first 'n' integers. 273 | 274 | >>> RangeSet("0-9").first(1) 275 | 276 | >>> RangeSet("10-19").first(5) 277 | 278 | >>> RangeSet("10-19").first(15) 279 | 280 | >>> RangeSet("10-19 30-39").first(3) 281 | 282 | >>> RangeSet("10-19 30-39").first(15) 283 | 284 | >>> RangeSet("10-19 30-39").first(30) 285 | 286 | >>> RangeSet("0-9").first(0) 287 | 288 | """ 289 | 290 | if self.size() <= n: 291 | return self 292 | 293 | out = [] 294 | for s, e in self: 295 | if e - s >= n: 296 | out += (s, s+n) 297 | break 298 | else: 299 | out += (s, e) 300 | n -= e - s 301 | return RangeSet(data=out) 302 | 303 | 304 | if __name__ == "__main__": 305 | import doctest 306 | doctest.testmod() 307 | -------------------------------------------------------------------------------- /tools/rangelib.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/rangelib.pyc -------------------------------------------------------------------------------- /tools/sdat2img.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | #==================================================== 4 | # FILE: sdat2img.py 5 | # AUTHORS: xpirt - luxi78 - howellzhu 6 | # DATE: 2017-01-04 2:01:45 CEST 7 | #==================================================== 8 | 9 | import sys, os, errno 10 | 11 | __version__ = '1.0' 12 | 13 | if sys.hexversion < 0x02070000: 14 | print >> sys.stderr, "Python 2.7 or newer is required." 15 | try: 16 | input = raw_input 17 | except NameError: pass 18 | input('Press ENTER to exit...') 19 | sys.exit(1) 20 | else: 21 | print('sdat2img binary - version: %s\n' % __version__) 22 | 23 | try: 24 | TRANSFER_LIST_FILE = str(sys.argv[1]) 25 | NEW_DATA_FILE = str(sys.argv[2]) 26 | except IndexError: 27 | print('\nUsage: sdat2img.py [system_img]\n') 28 | print(' : transfer list file') 29 | print(' : system new dat file') 30 | print(' [system_img]: output system image\n\n') 31 | print('Visit xda thread for more information.\n') 32 | try: 33 | input = raw_input 34 | except NameError: pass 35 | input('Press ENTER to exit...') 36 | sys.exit() 37 | 38 | try: 39 | OUTPUT_IMAGE_FILE = str(sys.argv[3]) 40 | except IndexError: 41 | OUTPUT_IMAGE_FILE = 'system.img' 42 | 43 | BLOCK_SIZE = 4096 44 | 45 | def rangeset(src): 46 | src_set = src.split(',') 47 | num_set = [int(item) for item in src_set] 48 | if len(num_set) != num_set[0]+1: 49 | print('Error on parsing following data to rangeset:\n%s' % src) 50 | sys.exit(1) 51 | 52 | return tuple ([ (num_set[i], num_set[i+1]) for i in range(1, len(num_set), 2) ]) 53 | 54 | def parse_transfer_list_file(path): 55 | trans_list = open(TRANSFER_LIST_FILE, 'r') 56 | 57 | # First line in transfer list is the version number 58 | version = int(trans_list.readline()) 59 | 60 | # Second line in transfer list is the total number of blocks we expect to write 61 | new_blocks = int(trans_list.readline()) 62 | 63 | if version >= 2: 64 | # Third line is how many stash entries are needed simultaneously 65 | trans_list.readline() 66 | # Fourth line is the maximum number of blocks that will be stashed simultaneously 67 | trans_list.readline() 68 | 69 | # Subsequent lines are all individual transfer commands 70 | commands = [] 71 | for line in trans_list: 72 | line = line.split(' ') 73 | cmd = line[0] 74 | if cmd in ['erase', 'new', 'zero']: 75 | commands.append([cmd, rangeset(line[1])]) 76 | else: 77 | # Skip lines starting with numbers, they are not commands anyway 78 | if not cmd[0].isdigit(): 79 | print('Command "%s" is not valid.' % cmd) 80 | trans_list.close() 81 | sys.exit(1) 82 | 83 | trans_list.close() 84 | return version, new_blocks, commands 85 | 86 | def main(argv): 87 | version, new_blocks, commands = parse_transfer_list_file(TRANSFER_LIST_FILE) 88 | 89 | if version == 1: 90 | print('Android Lollipop 5.0 detected!\n') 91 | elif version == 2: 92 | print('Android Lollipop 5.1 detected!\n') 93 | elif version == 3: 94 | print('Android Marshmallow 6.x detected!\n') 95 | elif version == 4: 96 | print('Android Nougat 7.x / Oreo 8.x detected!\n') 97 | else: 98 | print('Unknown Android version!\n') 99 | 100 | # Don't clobber existing files to avoid accidental data loss 101 | try: 102 | output_img = open(OUTPUT_IMAGE_FILE, 'wb') 103 | except IOError as e: 104 | if e.errno == errno.EEXIST: 105 | print('Error: the output file "{}" already exists'.format(e.filename)) 106 | print('Remove it, rename it, or choose a different file name.') 107 | sys.exit(e.errno) 108 | else: 109 | raise 110 | 111 | new_data_file = open(NEW_DATA_FILE, 'rb') 112 | all_block_sets = [i for command in commands for i in command[1]] 113 | max_file_size = max(pair[1] for pair in all_block_sets)*BLOCK_SIZE 114 | 115 | for command in commands: 116 | if command[0] == 'new': 117 | for block in command[1]: 118 | begin = block[0] 119 | end = block[1] 120 | block_count = end - begin 121 | print('Copying {} blocks into position {}...'.format(block_count, begin)) 122 | 123 | # Position output file 124 | output_img.seek(begin*BLOCK_SIZE) 125 | 126 | # Copy one block at a time 127 | while(block_count > 0): 128 | output_img.write(new_data_file.read(BLOCK_SIZE)) 129 | block_count -= 1 130 | else: 131 | print('Skipping command %s...' % command[0]) 132 | 133 | # Make file larger if necessary 134 | if(output_img.tell() < max_file_size): 135 | output_img.truncate(max_file_size) 136 | 137 | output_img.close() 138 | new_data_file.close() 139 | print('Done! Output image: %s' % os.path.realpath(output_img.name)) 140 | 141 | if __name__ == '__main__': 142 | main(sys.argv) 143 | -------------------------------------------------------------------------------- /tools/simg2img: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/simg2img -------------------------------------------------------------------------------- /tools/sparse_img.py: -------------------------------------------------------------------------------- 1 | # Copyright (C) 2014 The Android Open Source Project 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import bisect 16 | import os 17 | import struct 18 | from hashlib import sha1 19 | 20 | import rangelib 21 | 22 | 23 | class SparseImage(object): 24 | """Wraps a sparse image file into an image object. 25 | 26 | Wraps a sparse image file (and optional file map and clobbered_blocks) into 27 | an image object suitable for passing to BlockImageDiff. file_map contains 28 | the mapping between files and their blocks. clobbered_blocks contains the set 29 | of blocks that should be always written to the target regardless of the old 30 | contents (i.e. copying instead of patching). clobbered_blocks should be in 31 | the form of a string like "0" or "0 1-5 8". 32 | """ 33 | 34 | def __init__(self, simg_fn, file_map_fn=None, clobbered_blocks=None, 35 | mode="rb", build_map=True): 36 | self.simg_f = f = open(simg_fn, mode) 37 | 38 | header_bin = f.read(28) 39 | header = struct.unpack("> 2)) 188 | to_read -= this_read 189 | 190 | while to_read > 0: 191 | # continue with following chunks if this range spans multiple chunks. 192 | idx += 1 193 | chunk_start, chunk_len, filepos, fill_data = self.offset_map[idx] 194 | this_read = min(chunk_len, to_read) 195 | if filepos is not None: 196 | f.seek(filepos, os.SEEK_SET) 197 | yield f.read(this_read * self.blocksize) 198 | else: 199 | yield fill_data * (this_read * (self.blocksize >> 2)) 200 | to_read -= this_read 201 | 202 | def LoadFileBlockMap(self, fn, clobbered_blocks): 203 | remaining = self.care_map 204 | self.file_map = out = {} 205 | 206 | with open(fn) as f: 207 | for line in f: 208 | fn, ranges = line.split(None, 1) 209 | ranges = rangelib.RangeSet.parse(ranges) 210 | out[fn] = ranges 211 | assert ranges.size() == ranges.intersect(remaining).size() 212 | 213 | # Currently we assume that blocks in clobbered_blocks are not part of 214 | # any file. 215 | assert not clobbered_blocks.overlaps(ranges) 216 | remaining = remaining.subtract(ranges) 217 | 218 | remaining = remaining.subtract(clobbered_blocks) 219 | 220 | # For all the remaining blocks in the care_map (ie, those that 221 | # aren't part of the data for any file nor part of the clobbered_blocks), 222 | # divide them into blocks that are all zero and blocks that aren't. 223 | # (Zero blocks are handled specially because (1) there are usually 224 | # a lot of them and (2) bsdiff handles files with long sequences of 225 | # repeated bytes especially poorly.) 226 | 227 | zero_blocks = [] 228 | nonzero_blocks = [] 229 | reference = '\0' * self.blocksize 230 | 231 | # Workaround for bug 23227672. For squashfs, we don't have a system.map. So 232 | # the whole system image will be treated as a single file. But for some 233 | # unknown bug, the updater will be killed due to OOM when writing back the 234 | # patched image to flash (observed on lenok-userdebug MEA49). Prior to 235 | # getting a real fix, we evenly divide the non-zero blocks into smaller 236 | # groups (currently 1024 blocks or 4MB per group). 237 | # Bug: 23227672 238 | MAX_BLOCKS_PER_GROUP = 1024 239 | nonzero_groups = [] 240 | 241 | f = self.simg_f 242 | for s, e in remaining: 243 | for b in range(s, e): 244 | idx = bisect.bisect_right(self.offset_index, b) - 1 245 | chunk_start, _, filepos, fill_data = self.offset_map[idx] 246 | if filepos is not None: 247 | filepos += (b-chunk_start) * self.blocksize 248 | f.seek(filepos, os.SEEK_SET) 249 | data = f.read(self.blocksize) 250 | else: 251 | if fill_data == reference[:4]: # fill with all zeros 252 | data = reference 253 | else: 254 | data = None 255 | 256 | if data == reference: 257 | zero_blocks.append(b) 258 | zero_blocks.append(b+1) 259 | else: 260 | nonzero_blocks.append(b) 261 | nonzero_blocks.append(b+1) 262 | 263 | if len(nonzero_blocks) >= MAX_BLOCKS_PER_GROUP: 264 | nonzero_groups.append(nonzero_blocks) 265 | # Clear the list. 266 | nonzero_blocks = [] 267 | 268 | if nonzero_blocks: 269 | nonzero_groups.append(nonzero_blocks) 270 | nonzero_blocks = [] 271 | 272 | assert zero_blocks or nonzero_groups or clobbered_blocks 273 | 274 | if zero_blocks: 275 | out["__ZERO"] = rangelib.RangeSet(data=zero_blocks) 276 | if nonzero_groups: 277 | for i, blocks in enumerate(nonzero_groups): 278 | out["__NONZERO-%d" % i] = rangelib.RangeSet(data=blocks) 279 | if clobbered_blocks: 280 | out["__COPY"] = clobbered_blocks 281 | 282 | def ResetFileMap(self): 283 | """Throw away the file map and treat the entire image as 284 | undifferentiated data.""" 285 | self.file_map = {"__DATA": self.care_map} 286 | -------------------------------------------------------------------------------- /tools/sparse_img.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/iykex/android_system_extraction_and_repack_tool/af3860974040b1e04c2279d4297aadd2e7f3c8eb/tools/sparse_img.pyc --------------------------------------------------------------------------------