├── .gitignore ├── LICENSE ├── README.md ├── add_keys.sh ├── alert_login.sh ├── archived ├── README.md ├── StartApps.applescript ├── chk_fio.sh ├── chk_raid.sh ├── diceware │ ├── README.md │ ├── beale.wordlist.asc │ └── diceware.wordlist.asc ├── downr │ ├── README.md │ ├── doc │ │ ├── api │ │ │ ├── rs-api.txt │ │ │ ├── rsapi.pl │ │ │ └── rsapiresume.pl │ │ └── rs-check-files.txt │ └── downr.sh ├── google_code_main.css ├── jumpbox_checker │ ├── README.md │ ├── jumpbox.sh │ ├── jumpbox_check.log │ └── stored_jumpboxes.txt ├── killie.sh ├── passgen.sh ├── study │ └── array_test.sh └── sysinfo_node.sh ├── bench_disk.sh ├── bench_net.sh ├── checksum.sh ├── checksum_cdrom.sh ├── chk_badblocks.sh ├── chkrootkit.sh ├── copy_keys.sh ├── entropy_ck.py ├── experiments ├── README.md ├── bash_manipulation.sh ├── curl_grab_headers.sh ├── json_parse_with_python.sh ├── processes.sh ├── progress_example.sh ├── randomize_hostname.sh ├── redirect_logging.sh ├── uptime.pl └── userinfo.youtah.php ├── fixssh.py ├── fixssh.sh ├── getaddrbyhost.pl ├── gethostbyaddr.pl ├── github_short_url.sh ├── launch_tmux.sh ├── macgen.py ├── measure_latency.sh ├── mount_iso.sh ├── myrepos_status.sh ├── myrepos_update.sh ├── polarhome.pl ├── powerbank.sh ├── randomize_mac.sh ├── remove_spaces.sh ├── reverse_ssh_tunnel.sh ├── rkhunter.sh ├── screenshots └── alert_login.png ├── update_other_repos.sh └── vagrant_update_boxes.sh
/.gitignore: -------------------------------------------------------------------------------- 1 | # vim temp history 2 | # http://stackoverflow.com/a/9850662 3 | .netrwhist 4 | 5 | # linux temp files 6 | *~ 7 | 8 | # macos temp files 9 | ._* 10 | Icon? 11 | .Trash* 12 | .DS_Store 13 | .DS_Store? 14 | */.DS_Store 15 | .Spotlight-V100 16 | 17 | # windows temp files 18 | Thumbs.db 19 | ehthumbs.db 20 | --------------------------------------------------------------------------------
/README.md: -------------------------------------------------------------------------------- 1 | # scriptlets 2 | 3 | A bunch of scriptlets to automate tasks that I do often. 4 | 5 | Directories: 6 | * **diceware/** - mirror of the famous diceware lists that can be easily downloaded for use with other scripts 7 | * **experiments/** - contains test scripts and experiments that were written while writing other scripts; they may or may not work. Not meant to be used, just kept around as reference or for fun. 8 | * **archived/** - old, broken, or otherwise deprecated scripts/apps that are EOL. 9 | 10 | ### alert_login.sh 11 | a simple script to alert an email address when someone logs into a Linux machine. Place the script in /etc/profile.d/alert_login.sh. 12 | 13 | Example email (blurred for privacy): 14 | 15 | ![example login alert email](screenshots/alert_login.png) 16 | 17 | ### bench_disk.sh 18 | a rough disk benchmarking utility using dd (use tee to add to a logfile and keep historical data) 19 | 20 | ``` 21 | chad@myhost:~$ sudo ./bench_disk.sh /tmp/ | tee -a bench_disk_20150709_2204.log 22 | [sudo] password for chad: 23 | beginning dd tests: 24 | writing...done 25 | flushing cache...done 26 | reading...done 27 | reading (cached)...done 28 | dd results: 29 | path /tmp/ 30 | write 280 MB/s (1.1 GB in 3.83191 s) 31 | read 311 MB/s (1.1 GB in 3.45511 s) 32 | cached 293 MB/s (1.1 GB in 3.6592 s) 33 | ``` 34 | 35 | ### bench_net.sh 36 | a rough bandwidth benchmarking utility using wget (use tee to add to a logfile and keep historical data) 37 | 38 | ``` 39 | chad@myhost:~$ ./bench_net.sh | tee -a bench_net_20150709_2205.log 40 | beginning speed/latency tests...
41 | Speed from SoftLayer, DC USA : 7.74MB/s (77.392 ms latency) 42 | Speed from Edis, Frankfurt DE : 2.31MB/s (154.037 ms latency) 43 | Speed from Bahnhof, Sundsvall SE : 7.40MB/s (180.912 ms latency) 44 | Speed from Linode, Atlanta GA USA : 26.4MB/s (59.380 ms latency) 45 | Speed from Leaseweb, Haarlem NL : 198MB/s (9.869 ms latency) 46 | Speed from DigitalOCean, NY USA : 4.89MB/s (73.899 ms latency) 47 | Speed from CacheFly CDN Network : 55.9MB/s (0.512 ms latency) 48 | Speed from Linode, Singapore : 11.8MB/s (184.932 ms latency) 49 | Speed from Linode, Dallas TX USA : 24.1MB/s (45.719 ms latency) 50 | Speed from Linode, Tokyo JP : 18.3MB/s (101.913 ms latency) 51 | Speed from SoftLayer, SJ CA USA : 48.0MB/s (8.217 ms latency) 52 | done 53 | ``` 54 | 55 | ### checksum.sh 56 | checksum (md5/sha1) all regular files under a directory tree 57 | ``` 58 | chad@myhost:~$ ./checksum.sh sha1 /Users/chad/Books/ 59 | chad@myhost:~$ head -n5 ~/checksums.3075.txt 60 | 63902c99e287b05463f46be3551aa37260cd5665 /Users/chad/Books//Docker_Cookbook.pdf 61 | dff0c59900275673c29fde9fc97de390c3edd2c3 /Users/chad/Books//Docker_in_Practice.pdf 62 | 52332d0a159305d3c55deaacfda9f02fd48b80c2 /Users/chad/Books//Unix_Power_Tools_Third_Edition.pdf 63 | 8b09f063a6db3e73424c9af678f5256bc5b1f562 /Users/chad/Books//Using_Docker.pdf 64 | 8fc938c3e5b73daad3cbcdb75c06653e957db854 /Users/chad/Books//Introducing_Go.pdf 65 | ``` 66 | 67 | ### checksum_cdrom.sh 68 | checksums a burnt CD/DVD against an ISO file used to master it. 69 | 70 | ``` 71 | chad@myhost:~ $ ./checksum_cdrom.sh 72 | ERROR: You must supply an iso file to compare! 73 | e.g. ./checksum_cdrom.sh /path/to/discimage.iso 74 | chad@myhost:~ $ ./checksum_cdrom.sh /home/chad/ubuntu_full_16.04.iso 75 | Found disc in /dev/sr0: Ubuntu 16.04.1 LTS amd64 76 | Beginning checksum...done! 77 | ERROR: Checksums do not match! 78 | Checksum of ubuntu_full_16.04.iso: de82147297858a862b59b07ae3f111ca9fa2f8c0 79 | Checksum of disc in /dev/sr0: da39a3ee5e6b4b0d3255bfef95601890afd80709 80 | chad@myhost:~ $ ./checksum_cdrom.sh /home/chad/ubuntu_mini_16.04.iso 81 | Found disc in /dev/sr0: Ubuntu 16.04.1 LTS amd64 82 | Beginning checksum...done! 83 | Checksums match! 902731a64bf54a057ba266a32de5fbcc4c494fcf 84 | ``` 85 | 86 | ### chk_fio.sh 87 | gather info about a Fusion-io PCIe Flash Drive and display the information, or with the option 'health' it will email errors. 88 | 89 | ``` 90 | [root@myhost ~]# /usr/local/bin/chk_fio.sh 91 | ERROR: Unknown option! Please change the option and try again. 92 | e.g. /usr/local/bin/chk_fio.sh 93 | [root@myhost ~]# /usr/local/bin/chk_fio.sh info 94 | fct0: Product Number:FS0-000-000-AB, SN:0000000 95 | Internal temperature: 51.68 degC, max 58.08 degC 96 | Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00% 97 | Physical bytes written: 82,103,493,122,656 (76464.83 GB) 98 | Physical bytes read : 78,618,223,881,224 (73218.92 GB) 99 | RAM Current: 49,912,320 bytes 100 | RAM Peak : 49,912,320 bytes 101 | ``` 102 | 103 | Example emails when errors are detected; 104 | 105 | ``` 106 | Subject 107 | ------- 108 | WARNING: Problems with Fusion-io drive on file.chadmayfield.com! 
109 | 110 | Body 111 | ---- 112 | fct0: Product Number:FS0-000-000-AB, SN:0000000 113 | Internal temperature: 51.68 degC, max 58.08 degC 114 | Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00% 115 | Physical bytes written: 82,103,493,122,656 (76464.83 GB) 116 | Physical bytes read : 78,618,223,881,224 (73218.92 GB) 117 | RAM Current: 49,912,320 bytes 118 | RAM Peak : 49,912,320 bytes 119 | ``` 120 | 121 | ### chk_raid.sh 122 | Gather info about PERC (specifically the 6/i and other LSI based cards that use MegaCli) and display in a pretty format. Also use the monitor function, designed to be called from cron, to monitor the array health and alert an email on errors. 123 | 124 | ``` 125 | [root@myhost ~]# ./chk_raid.sh 126 | ERROR: Unknown option! Please change the option and try again. 127 | e.g. ./chk_raid.sh 128 | [root@myhost ~]# ./chk_raid.sh info 129 | Product Name PERC 6/i Adapter 130 | Serial No 1122334455667788 131 | FW Package Build 6.3.1-0003 132 | FW Version 1.22.32-1371 133 | BIOS Version 2.04.00 134 | Host Interface PCIE 135 | Memory Size 256MB 136 | Supported Drives SAS, SATA 137 | Virtual Drives 1 138 | Degraded 0 139 | Offline 0 140 | Physical Devices 4 141 | Disks 4 142 | Critical Disks 0 143 | Failed Disks 0 144 | Virtual Drive Info 145 | RAID Level Primary-5, Secondary-0, RAID Level Qualifier-3 146 | Size 4.091 TB 147 | Sector Size 512 148 | Strip Size 64 KB 149 | Number Of Drives 4 150 | Span Depth 1 151 | Drive Status OPTIMAL 152 | Slot Number 0 Online, Spun Up 9VS12A34ST1500DM003-9YN16G 153 | Slot Number 1 Online, Spun Up 9VS12B34ST1500DM003-9YN16G 154 | Slot Number 2 Online, Spun Up 9VS12C34ST1500DM003-9YN16G 155 | Slot Number 3 Online, Spun Up 9VS12D34ST1500DM003-9YN16G 156 | [root@myhost ~]# ./chk_raid.sh monitor 157 | STATE: Degraded 158 | ERROR: 1 Disks Degraded 159 | ERROR: 1 Disks Offline 160 | ERROR: 0 Critical Disks 161 | ERROR: 0 Failed Disks 162 | ``` 163 | 164 | Example emails when errors are detected; 165 | 166 | ``` 167 | Subject 168 | ------- 169 | WARNING: Problems with RAID array on file.lomiz.com! 170 | 171 | Body 172 | ---- 173 | STATE: Degraded 174 | ERROR: 1 Disks Degraded 175 | ERROR: 1 Disks Offline 176 | ERROR: 0 Critical Disks 177 | ERROR: 0 Failed Disks 178 | -------------------- 179 | State Degraded 180 | Degraded 1 181 | Offline 1 182 | Disks 4 183 | Critical Disks 0 184 | Failed Disks 0 185 | ``` 186 | 187 | ### entropy_ck.sh 188 | calculate the Shannon entropy of a string (if using with a password use a space before the command execution to override storing it in the history buffer" 189 | 190 | ``` 191 | chad@myhost:~$ ./entropy_ck.sh "Tr0ub4dor&3" 192 | passwd length: 11 193 | entropy/char: 3.27761343682 194 | actual entropy: 36.053747805 bits 195 | chad@myhost:~$ ./entropy_ck.sh "correcthorsebatterystaple" 196 | passwd length: 25 197 | entropy/char: 3.36385618977 198 | actual entropy: 84.0964047444 bits 199 | ``` 200 | 201 | ### chkrootkit.sh 202 | run chkrootkit then log & email results (chkrootkit is required) 203 | 204 | ### measure_latency.sh 205 | a quick and dirty latency measurement tool 206 | 207 | NOTE: This is just a quick tool to use so you don't have to bust out of the terminal, if you want historic views, use smokeping. 208 | 209 | ``` 210 | chad@macbookpro:~$ ./measure_latency.sh yahoo.com 211 | ERROR: You must supply a hostname/IP to measure & a packet count! 212 | e.g. 
./measure_latency.sh 213 | chad@macbookpro:~$ ./measure_latency.sh 8.8.4.4 10 214 | latency to 8.8.4.4 with 10 packets is: 74.735 ms 215 | chad@macbookpro:~$ ./measure_latency.sh msn.com 10 216 | latency measurement failed: 100.0% packet loss 217 | 218 | -- or on linux -- 219 | 220 | ubuntu@ubuntu-xenial:~$ ./measure_latency.sh 8.8.8.8 10 221 | latency to 8.8.8.8 with 10 packets is: 203.783 ms 222 | ubuntu@ubuntu-xenial:~$ ./measure_latency.sh msn.com 10 223 | latency measurement failed: 100% packet loss 224 | ``` 225 | 226 | ### myrepos_status.sh 227 | quick script to show me all of my repos and what needs to be checked-in and where. 228 | 229 | ``` 230 | ./myrepos_status.sh 231 | Found repo: git@github.com:chadmayfield/Dockerfiles.git 232 | Found repo: git@github.com:chadmayfield/compliance_checks.git 233 | ?? get_www_ciphers.sh 234 | Found repo: git@github.com:chadmayfield/configs.git 235 | ?? etc/ 236 | Found repo: git@github.com:chadmayfield/dot_files.git 237 | Found repo: git@github.com:chadmayfield/scriptlets.git 238 | M polarhome.pl 239 | ?? experiments/processes.sh 240 | Found repo: git@github.com:chadmayfield/seeker.git 241 | Found repo: git@github.com:chadmayfield/zoneedit-updater.git 242 | ``` 243 | 244 | 245 | ### myrepos_update.sh 246 | an automated update script that iterates through all subdirectories (only one deep) under the current tree and pull any changes to the git repos there. assumes ssh is used (not https) to pull/push repos. 247 | 248 | ``` 249 | chad@myhost:~$ ./update_myrepos.sh 250 | Found repo: git@github.com:chadmayfield/chadmayfield.github.io.git 251 | Pulling latest changes... 252 | Already up-to-date. 253 | Found repo: git@github.com:chadmayfield/compliance_checks.git 254 | Pulling latest changes... 255 | Already up-to-date. 256 | Found repo: git@github.com:chadmayfield/scriptlets.git 257 | Pulling latest changes... 258 | remote: Counting objects: 3, done. 259 | remote: Compressing objects: 100% (3/3), done. 260 | remote: Total 3 (delta 2), reused 0 (delta 0), pack-reused 0 261 | Unpacking objects: 100% (3/3), done. 262 | From github.com:chadmayfield/scriptlets 263 | fa418b1..66db9c9 master -> origin/master 264 | Updating fa418b1..66db9c9 265 | Fast-forward 266 | README.md | 2 +- 267 | 1 file changed, 1 insertion(+), 1 deletion(-) 268 | ``` 269 | 270 | ### randomize_mac.sh 271 | randomize mac addresses on macOS and Linux. This will help circumvent free wifi time limits in coffee shops and such. (This was actually an experiment until I begain using it more and more. I know about and have used machanger and spoofMAC, but I wanted to use something I wrote!) 272 | ``` 273 | macbookpro:~ $ ifconfig en0 | grep ether 274 | ether 87:41:13:1e:e3:ab 275 | macbookpro:~ $ sudo ./randomize_mac.sh 276 | Default Interface: en0 277 | Default MAC Address: 87:41:13:1e:e3:ab 278 | Random MAC Address: 00:50:56:04:a3:f1 279 | Succeessfully changed MAC! 280 | macbookpro:~ $ ifconfig en0 | grep ether 281 | ether 00:50:56:04:a3:f1 282 | macbookpro:~ $ sudo ./randomize_mac.sh --revert 283 | Original MAC address found: 87:41:13:1e:e3:ab 284 | Reverting it back... 285 | Succeessfully changed MAC! 
286 | macbookpro:~ $ ifconfig en0 | grep ether 287 | ether 87:41:13:1e:e3:ab 288 | ``` 289 | 290 | 291 | ### remove_spaces.sh 292 | removes spaces in file names under a path 293 | 294 | ``` 295 | chad@myhost:~$ ./remove_spaces.sh test 296 | test/Docker Cookbook.pdf -> test/Docker.Cookbook.pdf 297 | test/Unix Power Tools Third Edition.pdf -> test/Unix.Power.Tools.Third.Edition.pdf 298 | test/Using Docker.pdf -> test/Using.Docker.pdf 299 | test/Version Control with Git Second Edition.pdf -> test/Version.Control.with.Git.Second.Edition.pdf 300 | ``` 301 | 302 | ### vagrant_update_boxes.sh 303 | a quick update script for all of my vagrant boxes 304 | 305 | ``` 306 | chad@myhost:~$ ./vagrant_update_boxes.sh 307 | Found Vagrantfile at: ./centos_7/Vagrantfile 308 | Updating box: "centos/7" 309 | ==> default: Checking for updates to 'centos/7' 310 | default: Latest installed version: 1702.01 311 | default: Version constraints: 312 | default: Provider: virtualbox 313 | ==> default: Box 'centos/7' (v1702.01) is running the latest version. 314 | Found Vagrantfile at: ./rancher/os-vagrant/Vagrantfile 315 | Updating box: "rancherio/rancheros" 316 | ==> rancher-01: Checking for updates to 'rancherio/rancheros' 317 | rancher-01: Latest installed version: 0.4.3 318 | rancher-01: Version constraints: >=0.4.1 319 | rancher-01: Provider: virtualbox 320 | ==> rancher-01: Box 'rancherio/rancheros' (v0.4.3) is running the latest version. 321 | ``` 322 | --------------------------------------------------------------------------------
/add_keys.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # add-keys.sh: add my ssh keys to agent 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | fail=0 9 | keys=( "$HOME/.ssh/github.com/id_ed25519" 10 | "$HOME/.ssh/gogs/id_ed25519" 11 | "$HOME/.ssh/helios/id_ed25519" 12 | "$HOME/.ssh/selene/id_ed25519" ) 13 | 14 | for i in "${keys[@]}"; do 15 | if ! [ -f "$i" ]; then 16 | echo "Key doesn't exist: $i" 17 | let fail+=1 18 | fi 19 | done 20 | 21 | if [ "$fail" -ne 0 ]; then 22 | echo "ERROR: Unable to find key(s)!" 23 | exit 1 24 | fi 25 | 26 | # use the correct grammar for fun! 27 | if [ "${#keys[@]}" -eq 1 ]; then 28 | echo "Checking for key..." 29 | else 30 | echo "Checking for keys..." 31 | fi 32 | 33 | for i in "${keys[@]}"; do 34 | # grab key fingerprint 35 | cmp_key=$(ssh-keygen -lf "$i") 36 | 37 | # if key fingerprint not found in fingerprint list, add it 38 | if [ $(ssh-add -l | grep -c "$cmp_key") -eq 0 ]; then 39 | echo "Key not found! Adding it..." 40 | ssh-add "$i" 41 | add_rv=$? 42 | 43 | if [ $add_rv -eq 0 ]; then 44 | echo "Key added." 45 | fi 46 | else 47 | echo "Key already added: $(echo $cmp_key | awk '{print $2}')" 48 | fi 49 | done 50 | 51 | #EOF 52 | --------------------------------------------------------------------------------
/alert_login.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # alert_login.sh - notify when anyone logs into system 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | if ! [[ $OSTYPE =~ "linux" ]]; then 9 | echo "ERROR: This may only be run on Linux!"
10 | exit 1 11 | fi 12 | 13 | command -v mail >/dev/null 2>&1 || { \ 14 | echo >&2 "ERROR: You must install mail to continue!"; exit 1; } 15 | 16 | command -v route >/dev/null 2>&1 || { \ 17 | echo >&2 "ERROR: You must install route to continue!"; exit 1; } 18 | 19 | date=$(date) 20 | who_am_i=$(whoami) 21 | users=$(w) 22 | our_host=$(hostname -f) 23 | 24 | dflt_iface=$(route | grep '^default' | grep -o '[^ ]*$') 25 | local_ip=$(ip addr show "$dflt_iface" | grep "inet " | awk '{print $2}' | \ 26 | sed -e 's/\/.*$//g') 27 | 28 | # check if local or remote connection 29 | if [[ $(tty) =~ "pts" ]]; then 30 | # pseudo terminal 31 | local_ip=$(echo "$SSH_CONNECTION" | awk '{print $3}') 32 | local_wan=$(curl -s http://ipinfo.io/ip) 33 | local_rdns=$(curl -s http://ipinfo.io/hostname) 34 | 35 | remote_ip=$(echo "$SSH_CONNECTION" | awk '{print $1}') 36 | if [[ $remote_ip =~ (::1|127.0.0.1) ]]; then 37 | remote_ip="NONE (reverse tunnel?)" 38 | fi 39 | remote_rdns=$(host "$remote_ip") 40 | 41 | # check to see if we can resolve the connecting ip 42 | regexp="not found|PTR|NXDO" 43 | if [[ $remote_rdns =~ $regexp ]]; then 44 | remote_rdns="N/A" 45 | fi 46 | elif [[ $(tty) =~ "tty" ]]; then 47 | # console 48 | remote_ip="tty$(tty | awk -F "tty" '{print $2}')" 49 | remote_rdns="None, logged in locally!" 50 | else 51 | # who knows how anyone got here 52 | remote_ip="UNKNOWN" 53 | remote_rdns="UNKNOWN" 54 | fi 55 | 56 | tmpfile="/tmp/alert_login.txt.$$" 57 | mail_to='user@domain.tld' # change this to alert user 58 | subject="ALERT: Login to $our_host from $remote_ip" 59 | 60 | # I know, normally I hate HTML email, but for this, I wanted it and this 61 | # is a pretty cool way to do it. 62 | cat > $tmpfile << EOF 63 | <html> 64 | <head> 65 | <title>ALERT: Login to $our_host from $remote_ip</title> 66 | </head> 67 | <body> 68 | Dear $mail_to, 69 | <p> 70 |
For security reasons we inform you about each login to your server. If you 71 | received this notification but you did not login, please log 72 | into your server as soon as possible and check your logs! Note: if you 73 | don't want to receive such notifications please remove the 74 | /etc/profile.d/${0##*/} file. 75 | </p> 76 | <table> 77 | <tr><td>Date:</td><td>$date</td></tr> 78 | <tr><td>Hostname:</td><td>$our_host ($local_ip)</td></tr> 79 | <tr><td>User:</td><td>$who_am_i (id=${UID})</td></tr> 80 | <tr><td>Location:</td><td>$(tty)</td></tr> 81 | <tr><td>WAN IP:</td><td>$local_wan ($local_rdns)</td></tr> 82 | <tr><td>Remote IP:</td><td>$remote_ip</td></tr> 83 | <tr><td>Remote rDNS:</td><td>$remote_rdns</td></tr> 84 | </table> 85 | <p>Uptime/Users:</p> 86 | <pre>${users}</pre> 87 | <p>Current Connections:</p> 88 | <pre>$(netstat -n -A inet)</pre> 89 | <p>---- 90 | <br>Regards, 91 | <br>admin@$our_host 92 | </p>
93 | 94 | 95 | EOF 96 | 97 | # send out the email 98 | mail -s "$(echo -e "$subject \nContent-type: text/html")" "$mail_to" < $tmpfile 99 | 100 | # cleanup 101 | rm -f $tmpfile 102 | 103 | #EOF 104 | -------------------------------------------------------------------------------- /archived/README.md: -------------------------------------------------------------------------------- 1 | # scriptlets: old & deprecated -------------------------------------------------------------------------------- /archived/StartApps.applescript: -------------------------------------------------------------------------------- 1 | -- start_apps.applescript - prepare machine for work after reboot 2 | 3 | -- author : Chad Mayfield (chad@chd.my) 4 | -- license : gplv3 5 | 6 | -- change to desktop 2 [DOESN'T WORK] 7 | --tell application "System Events" 8 | -- key code 19 using {control down} -- control+2 is switch to Display Space 2 9 | --end tell 10 | --delay 2.0 11 | 12 | -- open a couple of terminal windows with two tabs each 13 | -- and run some commands to prepare workstation for work! 14 | tell application "Terminal" 15 | -- open first window on right side of screen -- 16 | set win to do script 17 | if not (exists window 1) then reopen 18 | set win's current settings to settings set "Dracula" 19 | activate 20 | 21 | try 22 | set position of window 1 to {350, 22} 23 | set size of window 1 to {600, 975} -- 84x57 24 | --set size of window 1 to {dtw / 2, dth - 800} 25 | end try 26 | 27 | -- do work here (on the right) -- 28 | tell application "System Events" to keystroke "t" using {command down} 29 | delay 0.2 30 | do script "myip" in tab 1 of front window 31 | do script "cd ~/Code/myrepos/ && ./myrepos_status.sh" in tab 2 of front window 32 | --do script "some cmd" in tab 3 of front window 33 | end tell 34 | 35 | tell application "Terminal" 36 | -- open second window on left side of screen -- 37 | set win to do script 38 | if not (exists window 2) then reopen 39 | set win's current settings to settings set "Dracula" 40 | activate 41 | 42 | try 43 | set position of window 2 to {950, 22} 44 | set size of window 1 to {600, 975} -- 84x57 45 | --set size of window 2 to {dtw / 2, dth - 22} 46 | end try 47 | 48 | -- do work here (on the left) -- 49 | tell application "System Events" to keystroke "t" using {command down} 50 | delay 0.2 51 | do script "uptime && w" in tab 1 of front window 52 | do script "cd ~/Code/ && ls" in tab 2 of front window 53 | --do script "some cmd" in tab 3 of front window 54 | end tell 55 | 56 | -- change to desktop 1 [DOESN'T WORK] 57 | --tell application "System Events" 58 | -- key code 18 using {control down} -- control+1 is switch to Display Space 1 59 | --end tell 60 | --delay 1.0 61 | 62 | -- open Finder and then the Code directory 63 | tell application "Finder" 64 | -- classic mac os syntax 65 | open alias "Macintosh HD:Users:chad:Code" 66 | -- or unix syntax 67 | --open POSIX file "~/Code" 68 | end tell 69 | 70 | -- create a new window (or reopen) with default settings 71 | set p to "/Users/chad/Code" 72 | tell application "Finder" 73 | reopen # if there are no open windows, open one 74 | activate 75 | set target of window 1 to (POSIX file p as text) 76 | --set bounds of front window to {0, 22, 450, 248} 77 | --or-- 78 | --set position of window 1 to {0, 400} 79 | --set size of window 1 to {800, 500} 80 | end tell 81 | 82 | -- now open google chrome, if needed open tabs 83 | tell application "Google Chrome" 84 | if it is running then 85 | quit 86 | else 87 | activate 88 | open location 
"https://github.com/chadmayfield" 89 | delay 1 90 | activate 91 | open location "https://startpage.com/" 92 | delay 1 93 | activate 94 | end if 95 | end tell 96 | 97 | -- finally open messages 98 | tell application "Messages" 99 | activate 100 | end tell 101 | 102 | 103 | -- TODO 104 | -- http://stackoverflow.com/a/2305588 105 | -- tell application "System Events" 106 | -- set x to application bindings of spaces preferences of expose preferences 107 | -- set x to {|com.apple.messages|:2} & x -- Have TextEdit appear in space 4 108 | -- set application bindings of spaces preferences of expose preferences to x 109 | -- end tell 110 | 111 | -- http://stackoverflow.com/a/37305289 112 | --tell application "System Events" 113 | -- tell application "System Events" to key code 19 using control down 114 | --key code 19 using {control down} -- control+2 is switch to Display Space 2 115 | --end tell 116 | --delay 1.0 -------------------------------------------------------------------------------- /archived/chk_fio.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # chk_fio.sh - check status of fio 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | errors=0 9 | rthreshold=80 # percentage 10 | alertlog="/tmp/alert.log.$$" 11 | email="user@domain.tld" 12 | subject="WARNING: Problems with Fusion-io drive on $(hostname)!" 13 | 14 | regexp="(Board|Internal) t|Media|status:|fct?.*Product|Flashback|bytes (w|r)|Current|Peak" 15 | 16 | if [ $UID -ne 0 ]; then 17 | echo ERROR: You must be root to run this utility!"" 18 | fi 19 | 20 | command -v fio-status >/dev/null 2>&1 || { \ 21 | echo >&2 "ERROR: You must install fio-status to continue!"; exit 1; } 22 | 23 | command -v bc >/dev/null 2>&1 || { \ 24 | echo >&2 "ERROR: You must install bc to continue!"; exit 1; } 25 | 26 | # we don't have a fio character device 27 | if [ ! -c /dev/fct* ]; then 28 | echo "ERROR: No character device found! Are you sure you have FIO card?" 29 | exit 1 30 | fi 31 | 32 | # we've got a device, do we have a block device 33 | if [ ! -b /dev/fio* ]; then 34 | echo "ERROR: No FIO block device found! Is the drive installed & working?" 35 | exit 1 36 | fi 37 | 38 | # check if we have an ext4 filesystem 39 | if [ ! 
-d "$(mount | grep /dev/fio* | awk '{print $3}')/lost+found" ]; then 40 | echo "ERROR: No file system found on $(mount | grep /dev/fio* | awk '{print $1}')" 41 | exit 1 42 | fi 43 | 44 | info() { 45 | # just print out what's interesting, w/o formatting 46 | while read -r line 47 | do 48 | if [[ $line =~ (Current|Peak) ]]; then 49 | echo "RAM $line" 50 | else 51 | if [[ $line =~ "bytes" ]]; then 52 | size=$(echo $line | awk -F ": " '{print $2}' | sed -e 's/,//g') 53 | c=$(bc <<< "scale=2; $size / 1024 / 1024 / 1024") 54 | 55 | echo "$line ($c GB)" 56 | else 57 | echo $line 58 | fi 59 | fi 60 | done< <(fio-status -a | grep -E "$regexp") 61 | } 62 | 63 | check_health() { 64 | while read -r line 65 | do 66 | # add every line to $alertlog to email if there are errors 67 | if [[ $line =~ (Current|Peak) ]]; then 68 | echo "RAM $line" >> $alertlog 69 | elif [[ $line =~ "bytes" ]]; then 70 | size=$(echo $line | awk -F ": " '{print $2}' | sed -e 's/,//g') 71 | c=$(bc <<< "scale=2; $size / 1024 / 1024 / 1024") 72 | echo "$line ($c GB)" >> $alertlog 73 | else 74 | echo $line >> $alertlog 75 | fi 76 | 77 | # if health is anything other than heathly...alert 78 | if [[ $line =~ "status:" ]]; then 79 | state=$(echo $line |awk -F": " '{print $2}' |awk -F";" '{print $1}') 80 | 81 | if [ $state != "Healthy" ]; then 82 | let errors+=1 83 | fi 84 | fi 85 | 86 | # if flashback is more than 50% used...alert 87 | if [[ $line =~ "Flashback" ]]; then 88 | # how much flashback is used 89 | fback=$(echo $line | awk -F ": " '{print $2}' | \ 90 | awk -F "/" '{print $1}') 91 | # what is the flashback available 92 | fb_ttl=$(echo $line | awk -F ": " '{print $2}' | \ 93 | awk -F "/" '{print $2}') 94 | # set our threshold to 50% flashback usage 95 | threshold=$(bc <<< "scale=0; $fb_ttl / 2") 96 | 97 | if [ $fback -ge $threshold ]; then 98 | let errors+=1 99 | fi 100 | fi 101 | 102 | # if block reserves dip below 80%...alert 103 | if [[ $line =~ "Reserves" ]]; then 104 | # little dirty, but it works 105 | reserves=$(echo $line | awk -F "s: " '{print $3}' | \ 106 | awk -F "%" '{print $1}' | awk -F. '{print $1}') 107 | 108 | if [ $reserves -lt $rthreshold ]; then 109 | let errors+=1 110 | fi 111 | fi 112 | done< <(fio-status -a | grep -E "$regexp") 113 | 114 | # if errors are detects, send alert email 115 | if [ $errors -ge 1 ]; then 116 | #echo "send email" 117 | mail -s "$subject" $email < $alertlog 118 | fi 119 | 120 | # debug email 121 | #cat $alertlog 122 | 123 | # we're done, cleanup 124 | rm -f $alertlog 125 | } 126 | 127 | case $1 in 128 | info) 129 | info 130 | ;; 131 | health) 132 | check_health 133 | ;; 134 | *) 135 | echo "ERROR: Unknown option! Please change the option and try again." 136 | echo " e.g. 
$0 " 137 | exit 1 138 | esac 139 | 140 | #EOF 141 | -------------------------------------------------------------------------------- /archived/chk_raid.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # chk_raid.sh - check status of raid using MegaCli64 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | # run from root's cron hourly 9 | #0 * * * * /usr/local/bin/chk_raid.sh 10 | 11 | # example output for drive/adapter status ($megacli -LDInfo -LALL -aAll) 12 | #Virtual Drives : 1 13 | # Degraded : 0 14 | # Offline : 0 15 | #Physical Devices : 4 16 | # Disks : 4 17 | # Critical Disks : 0 18 | # Failed Disks : 0 19 | 20 | debug=0 21 | megacli="/opt/MegaRAID/MegaCli/MegaCli64" 22 | tmpfile="/tmp/megacli.out.$$" 23 | alertlog="/tmp/alert.log.$$" 24 | email="user@domain.tld" 25 | subject="WARNING: Problems with RAID array on $(hostname)!" 26 | 27 | if [ $UID -ne 0 ]; then 28 | echo "ERROR: You must be root to run this utility!" 29 | exit 1 30 | fi 31 | 32 | if [ ! -f $megacli ]; then 33 | echo "ERROR: Cannot find MegaCli64! You must install it to continue." 34 | exit 1 35 | fi 36 | 37 | info() { 38 | # grab adapter info 39 | $megacli -AdpAllInfo -aAll > $tmpfile 40 | regexp="^(Product|FW (P|V)|BIOS V|Supported|Failed| Offline)|(Virtual|Physical) D(evice|rive)s|Serial No|Memory S|Host Int|Degraded|Disks|Critical" 41 | 42 | while read line 43 | do 44 | name=$(echo $line | awk -F ":" '{print $1}') 45 | ver=$(echo $line | awk -F ":" '{print $2}') 46 | 47 | if [[ $name =~ (Degraded|Offline|Disks) ]]; then 48 | printf " %-19s %s\n" "$name" "$ver" 49 | else 50 | printf "%-21s %s\n" "$name" "$ver" 51 | fi 52 | done < <(grep -E "$regexp" $tmpfile) 53 | printf "Virtual Drive Info\n" 54 | unset regexp 55 | 56 | # grab logical drive info 57 | $megacli -LDInfo -LALL -aAll > $tmpfile 58 | regexp="^(RAID|S(ize|ector|trip|pan)|Number)" 59 | 60 | while read line 61 | do 62 | name=$(echo $line | awk -F: '{print $1}') 63 | ver=$(echo $line | awk -F: '{print $2}') 64 | printf " %-19s %s\n" "$name" "$ver" 65 | done < <(grep -E "$regexp" $tmpfile) 66 | unset regexp 67 | 68 | state=$(grep -E '^State' $tmpfile | awk -F: '{print $2}') 69 | printf "%-21s %s\n" "Drive Status" "${state^^}" 70 | 71 | # grab physical information 72 | $megacli -PDList -aAll > $tmpfile 73 | regexp="Slot|Firmware state|Inquiry" 74 | 75 | while read line 76 | do 77 | name=$(echo $line | awk -F ":" '{print $1 $2}') 78 | ver=$(echo $line | awk -F ":" '{print $2}' | sed 's/^[[:space:]]*//g') 79 | 80 | # get slot/state 81 | if [[ $line =~ "Slot" ]]; then 82 | printf " %-21s" "$name" 83 | elif [[ $line =~ "state" ]]; then 84 | printf "%-21s" "$ver" 85 | fi 86 | 87 | # get serial numebr 88 | if [[ $line =~ "Inquiry Data" ]]; then 89 | drive=$(echo $line | grep 'Inquiry'| awk -F: '{print $2}'| \ 90 | awk '{print $1}') 91 | printf "$drive\n" 92 | fi 93 | done < <(grep -E "$regexp" $tmpfile) 94 | unset regexp 95 | } 96 | 97 | monitor() { 98 | do_exit=0 99 | $megacli -LDInfo -LALL -aAll > $tmpfile 100 | $megacli -AdpAllInfo -aAll >> $tmpfile 101 | 102 | # remove a couple of things to make regexp easier to write 103 | sed -i '/Coercion Mode/d' $tmpfile 104 | sed -i '/Offline VD/d' $tmpfile 105 | sed -i '/Force Offline/d' $tmpfile 106 | 107 | regexp="^State|Degraded|Offline|Disks" 108 | 109 | while read line 110 | do 111 | name=$(echo $line | awk -F ":" '{print $1}') 112 | ver=$(echo $line | awk -F ":" '{print $2}') 113 | 114 | # check each line for concerning items... 
if debug=1 tee so see output 115 | if [[ $line =~ "State" ]]; then 116 | if [ $ver != "Optimal" ]; then 117 | if [ $debug -eq 1 ]; then 118 | echo "STATE: $ver" | tee -a $alertlog 119 | else 120 | echo "STATE: $ver" >> $alertlog 121 | fi 122 | let do_exit+=1 123 | fi 124 | elif [[ $line =~ "Degraded" ]]; then 125 | if [ $ver -ne 0 ]; then 126 | if [ $debug -eq 1 ]; then 127 | echo "ERROR: $ver Disks Degraded" | tee -a $alertlog 128 | else 129 | echo "ERROR: $ver Disks Degraded" >> $alertlog 130 | fi 131 | let do_exit+=1 132 | fi 133 | elif [[ $line =~ "Offline" ]]; then 134 | if [ $ver -ne 0 ]; then 135 | if [ $debug -eq 1 ]; then 136 | echo "ERROR: $ver Disks Offline" | tee -a $alertlog 137 | else 138 | echo "ERROR: $ver Disks Offline" >> $alertlog 139 | fi 140 | let do_exit+=1 141 | fi 142 | elif [[ $line =~ "Critical" ]]; then 143 | if [ $ver -ne 0 ]; then 144 | if [ $debug -eq 1 ]; then 145 | echo "ERROR: $ver Critical Disks" | tee -a $alertlog 146 | else 147 | echo "ERROR: $ver Critical Disks" >> $alertlog 148 | fi 149 | let do_exit+=1 150 | fi 151 | elif [[ $line =~ "Failed" ]]; then 152 | if [ $ver -ne 0 ]; then 153 | if [ $debug -eq 1 ]; then 154 | echo "ERROR: $ver Failed Disks" | tee -a $alertlog 155 | else 156 | echo "ERROR: $ver Failed Disks" >> $alertlog 157 | fi 158 | let do_exit+=1 159 | fi 160 | fi 161 | 162 | printf "%-15s %s\n" "$name" "$ver" >> /tmp/stats.log.$$ 163 | 164 | done < <(grep -E "$regexp" $tmpfile) 165 | } 166 | 167 | case $1 in 168 | info) 169 | info 170 | ;; 171 | monitor|status) 172 | monitor 173 | 174 | echo "--------------------" >> $alertlog 175 | cat /tmp/stats.log.$$ >> $alertlog 176 | 177 | if [ $do_exit -ge 1 ]; then 178 | mail -s "$subject" $email < $alertlog 179 | rm -f $alertlog /tmp/stats.log.$$ 180 | exit 99 181 | fi 182 | ;; 183 | *) 184 | echo "ERROR: Unknown option! Please change the option and try again." 185 | echo " e.g. $0 " 186 | exit 1 187 | esac 188 | 189 | #EOF 190 | -------------------------------------------------------------------------------- /archived/diceware/README.md: -------------------------------------------------------------------------------- 1 | # scriptlets: diceware -------------------------------------------------------------------------------- /archived/downr/README.md: -------------------------------------------------------------------------------- 1 | ### downr.sh 2 | 3 | This was a project from years ago that was only used for a short time (and is really sloppy). It's a shell script to automate the download of files from raspishare.com, hotfile.com, or megashares.com. It was inspired by scripts like `rapid.pl`, `rapidsucker.pl`, and `rsdown`. Hasn't even been used in almost 10 years, probably broken. 4 | -------------------------------------------------------------------------------- /archived/downr/doc/api/rs-api.txt: -------------------------------------------------------------------------------- 1 | RapidShare API - Last updates and revision history on bottom of this document 2 | 3 | This is the final technical documentation to the RapidShare API for coders. 4 | 5 | If you are no programmer: Technical means, this documentation's purpose is solely to give coders (people creating cool tools) a documentation on how to 6 | implement the back-end API in their tools, so their programs get even cooler and easier to be created, so you can use RapidShare even more comfortably. 7 | If you know a good coder, do not hesitate to advise him or her of this documentation. 
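For orientation, the request format described further below boils down to an HTTP GET against rsapi.cgi with a "sub" parameter naming the routine plus that routine's parameters, and error replies always start with "ERROR: ". A minimal shell sketch of such a call, using the parameterless getapicpu_v1 routine documented below (the service itself is long gone, so this is purely illustrative; the curl invocation and the reply parsing are not part of the official documentation):

    #!/bin/bash
    # call a documented routine; getapicpu_v1 replies "current,max" as two integers
    reply=$(curl -s "http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=getapicpu_v1")
    if [[ $reply == ERROR:* ]]; then
        echo "API call failed: $reply" >&2
        exit 1
    fi
    current=${reply%%,*}
    max=${reply##*,}
    echo "API CPU points used: $current (requests are blocked at $max)"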
8 | 9 | Since RapidShare is always extending its functionality, new routines will be added from time to time and existing routines will be adjusted slightly. 10 | We will take care that existing routines will not be changed too much, so that existing programs keep running. Tell us if you are missing an API call 11 | and we will see what we can do. 12 | 13 | Routines giving back many values are subject of many changes. For example the routine getaccountdetails_v1 returns many key-value pairs. Those pairs 14 | might be sorted differently in the future. Some values might disappear, new values may appear. Make sure your program can handle this changes without 15 | a need for an update. If a value disappears, you should assume a "0" value. If a new value appears, your program should just ignore it. Your program 16 | must not rely on the sort-order of the list. 17 | 18 | In case we plan major adjustments, we will create a different function-call for it. This is reflected by the appendix "_v1" "_v2" etc. However, if there 19 | are security issues by design in existing functions, we reserve the right to disable or change existing functions without prior notice. So check back 20 | here from time to time. 21 | 22 | WARNING: Since RapidShare serves a very large community, programming errors in popular tools might cause an unwanted DDOS attack to the RapidShare servers. 23 | When programming tools, always keep in mind that your tool might be used by millions of people at the same time. So make sure you do not kill our API servers, 24 | which might cause some big financial issues on your side. Always make sure your program STOPS retrying a failed request after 3 tries. Always make sure you 25 | do not make more API calls than necessary. Our servers use a IP-based credit system, which will ban a IP making very many small requests or just a few 26 | unnecessary big requests. Everything you do will add POINTS to your IP address. If you exceed a certain point limit, API calls are denied for 30 minutes. 27 | If you exceed this limit multiple times, your account is banned as well. This especially happens with listfiles_v1 abuses on accounts having many files. 28 | How many points your calls will add to your balance depend heavily on the routine you call. For example calling nextuploadserver_v1 adds nearly no points 29 | to your balance, while listfiles_v1 is certainly the most expensive routine if you have many files in your account and you request a very detailed list. 30 | 31 | http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=subroutine (finalpoints=points) 32 | or https://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=subroutine (finalpoints=points*2 (this means using SSL doubles points!)) 33 | 34 | Additional parameters can be added via http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=subroutine¶m1=value1¶m2=value2 35 | 36 | In case you get an error, the reply always starts with "ERROR: " followed by a plain string telling you what is wrong. You should always check this. 37 | The best is if you just output the string to the user. The string is self-explanatory and every user should know what went wrong. 38 | 39 | Be careful when giving too many wrong passwords. The password brute-force protection will block your IP when you enter too many wrong passwords and/or killcodes. 40 | This protection has nothing to do with the point system mentioned above and works independently. 41 | 42 | Every routine taking "login,password,type" parameters also accepts the parameter "cookie". 
If you give "cookie=63AC34AA98443H....", RapidShare will decrypt 43 | the cookie and overwrite the parameters "login,password,type" with the parameters stored in the encrypted cookie. this can be understood as a login override. 44 | 45 | 46 | 47 | subroutine=nextuploadserver_v1 48 | Description: Gives the next available upload server to use without fearing rejection. 49 | Parameters: None 50 | Reply fields: 1:The upload server. Complete it with rs$uploadserver.rapidshare.com 51 | Reply format: integer 52 | 53 | 54 | 55 | subroutine=getapicpu_v1 56 | Description: Gives the CURRENT and MAX api cpu value for your IP address. If you reach MAX points, all further API requests from your IP will be blocked. 57 | Every minute the server will subtract 1000 points from your balance. 58 | Parameters: None 59 | Reply fields: 1:How many points you already have. (CURRENT) 60 | 2:How many points you may have before getting blocked. (MAX) 61 | Reply format: integer,integer 62 | 63 | 64 | 65 | subroutine=checkincomplete_v1 66 | Description: You need this to resume an incomplete file. You can try without it, but you might get rejected if the file is invalid. 67 | This routine needs the file ID and the kill code for authentication. 68 | Parameters: fileid=The file ID in question 69 | killcode=The killcode of the file ID 70 | Reply fields: 1:The size already saved on the server. You should use this to resume the upload, so you know where to start. 71 | Reply format: integer 72 | 73 | 74 | 75 | subroutine=renamefile_v1 76 | Description: Renames a file to something else. Be aware that your users will not be able to download the file anymore by using the old link! 77 | Parameters: type=col or prem (Collector's account or Premium account) 78 | login=ID or username 79 | password=password of the login 80 | fileid=The file ID in question 81 | newname=A new name for the file. Invalid characters will automatically be converted to "_" 82 | Reply fields: 1:OK 83 | Reply format: string 84 | 85 | 86 | 87 | subroutine=movefilestorealfolder_v1 88 | Description: Moves one or more files to a RealFolder. (files parameter limited to 10000 bytes) 89 | Parameters: type=col or prem (Collector's account or Premium account) 90 | login=ID or username 91 | password=password of the login 92 | files=comma separated list of file ids 93 | realfolder=ID of the RealFolder 94 | Reply fields: 1:OK 95 | Reply format: string 96 | 97 | 98 | 99 | subroutine=renamerealfolder_v1 100 | Description: Renames an existing RealFolder. 101 | Parameters: type=col or prem (Collector's account or Premium account) 102 | login=ID or username 103 | password=password of the login 104 | realfolder=ID of the RealFolder 105 | newname=New name of the RealFolder (limited to 100 chars.) 106 | Reply fields: 1:OK 107 | Reply format: string 108 | 109 | 110 | 111 | subroutine=deletefiles_v1 112 | Description: Deletes one or more files forever. (files parameter limited to 10000 bytes) 113 | Parameters: type=col or prem (Collector's account or Premium account) 114 | login=ID or username 115 | password=password of the login 116 | files=comma separated list of file ids, or ONE RealFolder ID. This deletes all files in the RealFolder ID 117 | Reply fields: 1:OK 118 | Reply format: string 119 | 120 | 121 | 122 | subroutine=addrealfolder_v1 123 | Description: Adds a new RealFolder 124 | Parameters: type=col or prem (Collector's account or Premium account) 125 | login=ID or username 126 | password=password of the login 127 | name=Name of the folder (Max. 
100 byte) 128 | parent=ID of parent folder. 0=root 129 | Reply fields: 1:RealFolder ID 130 | OR 131 | 1:-1 (if no space left, it returns -1) 132 | Reply format: integer 133 | 134 | 135 | 136 | subroutine=delrealfolder_v1 137 | Description: Deletes an existing RealFolder (without the files, so just the RealFolder entry) 138 | Parameters: type=col or prem (Collector's account or Premium account) 139 | login=ID or username 140 | password=password of the login 141 | realfolder=ID of the RealFolder 142 | Reply fields: 1:OK 143 | Reply format: string 144 | 145 | 146 | 147 | subroutine=moverealfolder_v1 148 | Description: Changes the parent ID of an existing RealFolder 149 | Parameters: type=col or prem (Collector's account or Premium account) 150 | login=ID or username 151 | password=password of the login 152 | realfolder=ID of the RealFolder 153 | newparent=New parent ID 154 | Reply fields: 1:OK 155 | Reply format: string 156 | 157 | 158 | 159 | subroutine=listfiles_v1 160 | Description: Lists all files in a given format in a given RealFolder or in all RealFolders. Warning: Flooding the server with this routine will block your IP! 161 | Parameters: type=col or prem (Collector's account or Premium account) 162 | login=ID or username 163 | password=password of the login 164 | realfolder=ID of the real folder to list files from. 0=root all=All folders 165 | filename=Optional. Give a filename to get only results where filename=$filename. Good for finding dupes. 166 | fileids=Optional. Give a comma-separated list of file IDs to get only results with the corresponding file IDs. fileids=1545615,1345154,215143 167 | fields=A comma separated list of database columns you want to receive. You will always receive the fileid. 168 | Example: fields=downloads,size will reply many lines in the format "$fileid,$downloads,$size\n" 169 | The following file columns are available: downloads,lastdownload,filename,size,killcode,serverid,type,x,y,realfolder,bodtype,killdeadline 170 | The following history columns are available: uploadtime,ip,md5hex 171 | Warning: History columns will always be appended after the file columns. Do NOT use them if you don't need to, since this will boost your points! 172 | Example: fields=size,filename,md5hex,killcode,ip,x,y will result in "$size,$filename,$killcode,$x,$y,$md5hex,$ip 173 | Format: Everything is human readable except timestamps, which are unix timestamps (integers). 174 | order=Reply will be ordered by this column. All file columns are valid. (optional. Avoid this parameter to pay less penalty points!) 175 | desc=0 or 1. 1 means, the result will be ordered descending. (optional) 176 | Reply fields: 1:fileid 177 | 2:dynamically adjusted 178 | OR 179 | "NONE" (if no results, it returns "NONE") 180 | Reply format: integer,fields (fields depending on fields you request) 181 | 182 | 183 | 184 | subroutine=listrealfolders_v1 185 | Description: Returns all available RealFolders and their topology. 186 | Parameters: type=col or prem (Collector's account or Premium account) 187 | login=ID or username 188 | password=password of the login 189 | Reply fields: 1:RealFolder ID 190 | 2:Parent RealFolder ID 191 | 3:Name of the folder 192 | OR 193 | "NONE" (if no results, it returns NONE) 194 | Reply format: integer,integer,string 195 | 196 | 197 | 198 | subroutine=getaccountdetails_v1 199 | Description: Returns key-pair values for the specific account. Warning: The order may change, and we will probably add or remove values in the future. 
200 | You should make sure that your program does not stop working if new values appear or existing values disappear. 201 | Parameters: type=col or prem (Collector's account or Premium account.) 202 | login=ID or username 203 | password=password of the login 204 | withrefstring=1 (Optional. If given, the reply also contains refstring=STRING. You need this string to earn money. See FAQ for further information.) 205 | withcookie=1 (Optional. If given, the reply also contains cookie=STRING. You need this string only if you need to set a valid encryped cookie.) 206 | Reply fields: 1:key 2:value 207 | Reply format: string=string or integer\n... 208 | Reply example: TYPE=PREM: 209 | accountid=$accountid (integer) 210 | type=$type (prem or col) 211 | servertime=$time (integer) 212 | addtime=$addtime (integer) 213 | validuntil=$validuntil (integer) 214 | username=$username (string) 215 | directstart=$directstart (integer) 216 | protectfiles=$protectfiles (integer) 217 | rsantihack=$rsantihack (integer) 218 | plustrafficmode=$plustrafficmode (integer) 219 | mirrors=$mirrors (string) 220 | jsconfig=$jsconfig (string) 221 | email=$email (string) 222 | lots=$lots (integer) 223 | fpoints=$fpoints (integer) 224 | ppoints=$ppoints (integer) 225 | curfiles=$curfiles (integer) 226 | curspace=$curspace (integer) 227 | bodkb=$bodkb (integer) 228 | premkbleft=$premkbleft (integer) 229 | ppointrate=$ppointrate (integer in cents) 230 | refstring=$refstring (string, optional. See 'withrefstring' above.) 231 | cookie=$cookie (string. optional. See 'withcookie' above.) 232 | TYPE=COL: 233 | accountid=$accountid (integer) 234 | type=$type (prem or col) 235 | servertime=$time (integer) 236 | addtime=$addtime (integer) 237 | username=$username (string) 238 | email=$email (string) 239 | jsconfig=$jsconfig (string) 240 | rsantihack=$rsantihack (integer) 241 | lots=$lots (integer) 242 | fpoints=$fpoints (integer) 243 | ppoints=$ppoints (integer) 244 | curfiles=$curfiles (integer) 245 | curspace=$curspace (integer) 246 | ppointrate=$ppointrate (integer in cents) 247 | refstring=$refstring (string, optional. See 'withrefstring' above.) 248 | cookie=$cookie (string. optional. See 'withcookie' above.) 249 | 250 | 251 | 252 | subroutine=setaccountdetails_v1 253 | Description: Changes the settings of an account. Every parameter is mandatory except "newpassword". Thus, not transmitting a parameter means setting it to "". 254 | Enabled RSAntiHAck causes a block if you try to change email, username, password or plustrafficmode. 255 | Parameters: type=col or prem (Collector's account or Premium account.) 256 | login=ID or username 257 | password=Password of the login 258 | newpassword=Sets a new password. Optional. Skipping this will not change the password at all. 259 | email=Email address to use. Mandatory. Skipping results in an error. 260 | username=Optional username to use. If skipped, the username alias will be deleted. 261 | mirror=2 character mirror in segment 1. Skipping results in random mirror selection. Ignored on type=col 262 | mirror2=2 character mirror in segment 2. Skipping results in random mirror selection. Ignored on type=col 263 | mirror3=2 character mirror in segment 3. Skipping results in random mirror selection. Ignored on type=col 264 | directstart=1 or 0. Downloads will start instantly. Skipping this means setting it to 0. Ignored on type=col 265 | jsconfig=A custom value, which can be set as you like. Max. 64 alphanumeric characters. 266 | plustrafficmode=Modes valid are 0=No auto conversion. 
1=Only TrafficShare conversion. 2=Only RapidPoints conversion. 3=Both conversions available. Ignored on type=col 267 | Reply fields: 1:OK 268 | Reply format: string 269 | 270 | 271 | 272 | subroutine=enablersantihack_v1 273 | Description: Enabled the RS AntiHack mode. This mode is highly recommended for every account, as it makes account manipulations impossible without unlocking it first. 274 | Calling this routine gives an error if no valid e-mail has been saved by the user. 275 | Parameters: type=col or prem (Collector's account or Premium account.) 276 | login=ID or username 277 | password=Password of the login 278 | Reply fields: 1:OK 279 | Reply format: string 280 | 281 | 282 | 283 | subroutine=disablersantihack_v1 284 | Description: Disables the RS AntiHack mode, so the user can change the account settings again. 285 | Parameters: type=col or prem (Collector's account or Premium account.) 286 | login=ID or username 287 | password=Password of the login 288 | unlockcode=The unlock code as seen in the e-mail sent by enablersantihack. 289 | Reply fields: 1:OK 290 | Reply format: string 291 | 292 | 293 | 294 | subroutine=sendrsantihackmail_v1 295 | Description: Sends the e-mail again containing the unlock code. It is the same e-mail as you called enablersantihack. 296 | Parameters: type=col or prem (Collector's account or Premium account.) 297 | login=ID or username 298 | password=Password of the login 299 | Reply fields: 1:OK 300 | Reply format: string 301 | 302 | 303 | 304 | subroutine=filemigrator_v1 305 | Description: Access to the powerful file migrator to move files between different accounts and account types. LinkLists also supported. 306 | Please notice that every transfer is logged. If you use this function to break the general user agreement, your account will be closed. 307 | Parameters: type=col or prem (Collector's account or Premium account.) 308 | login=ID or username 309 | password=Password of the login 310 | srcaccount=Login of the source account 311 | srcpassword=Password of the source account 312 | fileids=What files to move. Either a three digit ID for all files in the respective RealFolder, or a comma separated list of file IDs. 313 | If movetype is freecol or freeprem, fileids has to be like: fileids=fFILEIDkKILLCODEfFILEIDkKILLCODE and (only then) it is limited to 100 files per run. 314 | targetaccount=Login of the target account 315 | targetpassword=Password of the target account 316 | targetrealfolder=The RealFolder ID in the target account. All files will be moved in this RealFolder in the target account. 317 | movetype=freecol OR freeprem OR colcol OR colprem OR premcol OR premprem OR llpremprem (premcol for example moves files from a premium account to a collector's account) 318 | Reply fields: IF MOVETYPE=LLPREMPREM 319 | 1:Number of moved link lists 320 | ELSE 321 | 1:Number of moved files 322 | 2:Files in source account before action 323 | 3:Space in source account before action 324 | 4:Files in target account before action 325 | 5:Space in target account before action 326 | 6:Files in source account after action 327 | 7:Space in source account after action 328 | 8:Files in target account after action 329 | 9:Space in target account after action 330 | Reply format: IF MOVETYPE=LLPREMPREM 331 | integer 332 | ELSE 333 | integer,integer,integer,integer,integer,integer,integer,integer,integer 334 | 335 | 336 | 337 | subroutine=newlinklist_v1 338 | Description: Creates a new LinkList. 339 | Parameters: type=col or prem (Collector's account or Premium account.) 
340 | login=ID or username 341 | password=Password of the login 342 | foldername=The name of the new LinkList 343 | folderheadline=A headline for the new LinkList 344 | nickname=Your nick name to display in the LinkList view mode 345 | folderpassword=An optional folder password visitors have to enter before being able to browse your LinkList 346 | Reply fields: 1:LinkList ID 347 | Reply format: string 348 | 349 | 350 | 351 | subroutine=editlinklist_v1 352 | Description: Edits an existing LinkList. Keeping any value empty means deleting the value. 353 | Parameters: type=col or prem (Collector's account or Premium account.) 354 | login=ID or username 355 | password=Password of the login 356 | folderid=The ID of the existing LinkList 357 | foldername=The new name of the LinkList 358 | folderheadline=A new headline for the LinkList 359 | nickname=A new nick name to display in the LinkList view mode 360 | folderpassword=An optional folder password visitors have to enter before being able to browse your LinkList 361 | Reply fields: 1:OK 362 | Reply format: string 363 | 364 | 365 | 366 | subroutine=getlinklist_v1 367 | Description: Receives a full list of all available link lists OR details about a specific link list. WARNING: Reply separator is a " instead of a comma! 368 | Parameters: type=col or prem (Collector's account or Premium account.) 369 | login=ID or username 370 | password=Password of the login 371 | folderid=LinkList ID. Set this to receive the first reply field group. Do not set this to receive the second reply field group. 372 | withsubfolders=1 gives also all sub-folders if folderid is empty. 373 | Reply fields: 1:Subfolder ID (string) 374 | 2:File ID (integer) 375 | 3:Filename (string) 376 | 4:Size (integer in bytes) 377 | 5:Description (string) 378 | 6:Addtime (unix timestamp) 379 | OR (if folderid is empty): 380 | 1:Folder ID (string) 381 | 2:Name (string) 382 | 3:Headline (string) 383 | 4:Views (integer) 384 | 5:Last view (unix timestamp) 385 | 6:Folder password (string) 386 | 7:Nick (string) 387 | Reply format: string"integer"string"integer"string"integer\n... 388 | OR (if folderid is empty) 389 | string"string"string"integer"integer"string"string\n... 390 | 391 | 392 | 393 | subroutine=copyfilestolinklist_v1 394 | Description: Copys several files to the given LinkList. Please notice that the files are not copied, but a link entry is generated in the LinkList pointing to the respective file. 395 | It takes the size and the filename and saves it in the link-list for every file ID you provide. Please notice that the files have to be in your respective zone 396 | in order to be copied to your LinkList. 397 | Parameters: type=col or prem 398 | login=ID or username 399 | password=Password of the login 400 | folderid=The folder ID to copy the files to 401 | subfolderid=The sub-folder ID to copy the files to (0=root, default is 0) 402 | files=A comma separated list of file IDs 403 | Reply fields: 1:OK 404 | Reply format: string 405 | 406 | 407 | 408 | subroutine=newlinklistsubfolder_v1 409 | Description: Creates a new LinkList sub-folder. 410 | Parameters: type=col or prem 411 | login=ID or username 412 | password=Password of the login 413 | folderid=The folder ID to create the entry in 414 | subfolderid=The sub-folder ID to create the entry in 415 | newsubfoldername=A reasonable sub-folder name 416 | newsubfolderpassword=An optional numeric access password for that sub-folder. 
417 | newsubfolderdescription=An optional description 418 | Reply fields: 1:New sub-folder ID 419 | Reply format: string 420 | 421 | 422 | 423 | subroutine=deletelinklist_v1 424 | Description: Deletes an existing LinkList. Please notice that it will not delete the files itself, just the LinkList alone. 425 | Parameters: type=col or prem 426 | login=ID or username 427 | password=Password of the login 428 | folderid=The folder ID to delete 429 | Reply fields: 1:OK 430 | Reply format: string 431 | 432 | 433 | 434 | subroutine=deletelinklistentries_v1 435 | Description: Deletes LinkList entries. Also supports deleting sub folders. Be careful that it is possible to delete sub folders without deleting the links itself! 436 | A messed up LinkList can always be completely deleted with deletelinklist_v1. 437 | Parameters: type=col or prem 438 | login=ID or username 439 | password=Password of the login 440 | folderid=The folder ID to delete entries from 441 | subfolderid=The sub-folder ID to delete entries from. Defaults to 0 (root). 442 | files=The comma-separated file IDs to delete. Notice that sub-folders are file IDs less than 1000. 443 | Reply fields: 1:OK 444 | Reply format: string 445 | 446 | 447 | 448 | subroutine=editlinklistentry_v1 449 | Description: Edits a LinkList entry. If length of file ID is <= 3, it is a folder and you may edit description and password. If it is >= 4, it is a file and you may only change the description. 450 | Parameters: type=col or prem 451 | login=ID or username 452 | password=Password of the login 453 | folderid=The folder ID containing the file-id 454 | subfolderid=The sub-folder ID containing the file-id. Defaults to 0 (root). 455 | fileid=The file ID to modify. 456 | newdescription=The new description of the file or sub LinkList. 457 | newpassword=The new access password of the sub LinkList. Only valid if you edit a sub LinkList. 458 | Reply fields: 1:OK 459 | Reply format: string 460 | 461 | 462 | 463 | subroutine=trafficsharetype_v1 464 | Description: Sets a new TrafficShare type for a list of files. (files parameter limited to 10000 bytes) 465 | Parameters: type=col or prem (Collector's account or Premium account) 466 | login=ID or username 467 | password=password of the login 468 | files=comma separated list of file ids 469 | trafficsharetype=0,1,2,101,102 (0=off 1=on 2=on with encryption 101=on with logging 102=on with logging and encryption (101 and 102 require a verified premium account)) 470 | Reply fields: 1:OK 471 | Reply format: string 472 | 473 | 474 | 475 | subroutine=masspoll_v1 476 | Description: Saves your vote on a running mass poll. 477 | Parameters: type=col or prem (Collector's account or Premium account) 478 | login=ID or username 479 | password=password of the login 480 | pollid=ID of the poll 481 | a1=Your vote for question 1 (number between 1 and 99) 482 | a2=Your vote for question 2 (number between 1 and 99) 483 | a3=Your vote for question 3 (number between 1 and 99) 484 | a4=Your vote for question 4 (number between 1 and 99) 485 | a5=Your vote for question 5 (number between 1 and 99) 486 | Reply fields: 1:OK 487 | Reply format: string 488 | 489 | 490 | 491 | subroutine=checkfiles_v1 492 | Description: Gets status details about a list of given files. (files parameter limited to 3000 bytes. filenames parameter limited to 30000 bytes.) 493 | Parameters: files=comma separated list of file ids 494 | filenames=comma separated list of the respective filename. 
Example: files=50444381,50444382 filenames=test1.rar,test2.rar 495 | incmd5=if set to 1, field 7 is the hex-md5 of the file. This will double your points! If not given, all md5 values will be 0 496 | Reply fields: 1:File ID 497 | 2:Filename 498 | 3:Size (in bytes. If size is 0, this file does not exist.) 499 | 4:Server ID 500 | 5:Status integer, which can have the following numeric values: 501 | 0=File not found 502 | 1=File OK (Anonymous downloading) 503 | 2=File OK (TrafficShare direct download without any logging) 504 | 3=Server down 505 | 4=File marked as illegal 506 | 5=Anonymous file locked, because it has more than 10 downloads already 507 | 6=File OK (TrafficShare direct download with enabled logging. Read our privacy policy to see what is logged.) 508 | 6:Short host (Use the short host to get the best download mirror: http://rs$serverid$shorthost.rapidshare.com/files/$fileid/$filename) 509 | 7:md5 (See parameter incmd5 in parameter description above.) 510 | Reply format: integer,string,integer,integer,integer,string,string 511 | 512 | 513 | 514 | subroutine=trafficsharelogs_v1 515 | Description: Gets detailed download logs for your offered TrafficShare files. To make this work, you first have to enable logging for the respective TrafficShare files. 516 | No logs are generated by default. 517 | Parameters: type=col or prem (Collector's account senseless here) 518 | login=ID or username 519 | password=password of the login 520 | fileid=ID of the file 521 | Reply fields: 1: Start time, unix timestamp 522 | 2: Stop time, unix timestamp (You can easily calculate the download speed. If this is 0, then the client is still downloading.) 523 | 3: Size of the whole file in bytes 524 | 4: Starting position of the download 525 | 5: How many bytes the client has really downloaded 526 | 6: Range parameter. Download-accelerators might give those parameters. 527 | 7: Custom parameter. You can include information in the download link, like the customer ID or billing informations. Those can be tracked here as well. 528 | Reply format: integer"integer"integer"integer"integer"string"string (the separator here is ", because the last two values may contain commas!) 529 | 530 | 531 | 532 | subroutine=trafficsharebandwidth_v1 533 | Description: You can see how much bandwidth all your offered TrafficShare files have been used. This means, in case you want to host your files on your own servers, you need this bandwidth. 534 | Holes in the table will exist if you use less than 1 MBit (128 KB/sec) or if we experience server problems. Logging of TrafficShare bandwidth also stops as soon as your 535 | TrafficShare remaining traffic drops below 100 GB. If you rely on the graph, make sure you have always more than 100 GB of TrafficShare traffic. This means that this feature 536 | is reserved for heavy business TrafficShare users. 537 | Parameters: type=col or prem (Collector's account senseless here) 538 | login=ID or username 539 | password=password of the login 540 | starttime=Start time to get logs for, unix timestamp 541 | endtime=End time to get logs for, unix timestamp (you will never get more than 1000 records. Reply is "ORDER BY starttime LIMIT 1000") 542 | Reply fields: 1: Unix timestamp (You will get timestamp intervals every 10 minutes.) 543 | 2: KB/sec (How many KB/sec you have used in the past 10 minutes.) 544 | Reply format: integer,integer 545 | 546 | 547 | 548 | subroutine=buylots_v1 549 | Description: Exchanges RapidPoints to lots. You will get one lot for 50 RapidPoints. 
You can not own more than 50.000 lots. 550 | Parameters: type=col or prem 551 | login=ID or username 552 | password=password of the login 553 | newlots=How many new lots to buy 554 | Reply fields: 1: Number of lots you have now. (old+new) 555 | Reply format: integer 556 | 557 | 558 | 559 | subroutine=sendmail_v1 560 | Description: You may send an e-mail to someone to inform him/her about a file you have just uploaded. E-mail sending is restricted and has several anti-spam methods included. 561 | Parameters: name=YOUR name 562 | comment=A comment you want to attach to your e-mail. HTML will be filtered. 563 | email1=First e-mail address to send to 564 | email2=Second e-mail address (optional) 565 | email3=Third e-mail address (optional) 566 | withkillcode1=1 means that e-mail #1 will also receive the delete links for the files. Optional. 567 | withkillcode2=1 optional 568 | withkillcode3=1 optional 569 | fileid1=File ID 1 to inform the receiver about 570 | killcode1=The killcode for fileid1 (required) 571 | fileidX=Same as above. Supported up to fileid10. 572 | killcodeX=Supported up to killcode10. 573 | Reply fields: 1: OK 574 | Reply format: string 575 | 576 | 577 | 578 | subroutine=premiumzonelogs_v1 579 | Description: Downloads the log files from your premium zone. Thus which IP network has downloaded how much data on what day from your premium account. Ordered by date descending. 580 | Parameters: login=ID or username 581 | password=password of the login 582 | Reply fields: 1: date (YYYY-MM-DD) 583 | 2: ipnet (10.10.0.XXX) 584 | 3: dlkb (How many Kilobytes you have downloaded) 585 | Reply format: string,string,integer\nstring,string,integer\nstring,string,integer\nstring,string,integer\n... 586 | 587 | 588 | 589 | subroutine=getreward_v1 590 | Description: Gets details about your ordered RapidShare reward. You can only have one pending reward active at the same time. 591 | Parameters: type=col or prem 592 | login=ID or username 593 | password=password of the login 594 | Reply fields: 1: Reward-ID 595 | 2: AddTime (Unix timestamp) 596 | 3: E-Mail address saved when ordered this reward. 597 | 4: Active PPointRate (PPointRate: How many CENT you get for 1000 Premium RapidPoints.) 598 | 5: Parameters. This is a text-block of data needed to deliver the reward, which has been saved as well via setreward_v1. 599 | Reply format: integer,integer,string,integer\nseveral lines of text data 600 | 601 | 602 | 603 | subroutine=setreward_v1 604 | Description: Saves details about your ordered RapidShare reward. You can only have one pending reward active at the same time. 605 | Parameters: type=col or prem 606 | login=ID or username 607 | password=password of the login 608 | reward=integer (1-255) of the reward ID. 609 | parameters=A multi-line textblock with max. 3000 characters, which can be read by getreward_v1. Suspicious characters will be filtered. 610 | Reply fields: 1: OK 611 | Reply format: string 612 | 613 | 614 | 615 | subroutine=getpointlogs_v1 616 | Description: Gets details about your earned RapidPoints. You can see how many RapidPoints you have earned on which day. Max. 90 days in the past. 617 | Due to the complexity of this process, you can't see the points of the current day. 618 | Parameters: type=col or prem 619 | login=ID or username 620 | password=password of the login 621 | Reply fields: 1: Date (ordered by date descending "YYYY-MM-DD") 622 | 2: fpoints (Free RapidPoints earned through free users) 623 | 3: ppoints (Premium RapidPoints earned through premium users. 
Example: 24.06) 624 | Reply format: string,integer,float 625 | 626 | 627 | 628 | subroutine=getreferrerlogs_v1 629 | Description: You can see how many Premium RapidPoints you have earned on which day. Max. 1000 entries will be displayed. 630 | Parameters: type=col or prem 631 | login=ID or username 632 | password=password of the login 633 | Reply fields: 1: addtime (unix timestamp. entries ordered by addtime descending) 634 | 2: ppoints (how many Premium-RapidPoints you got by this new customer) 635 | 3: byfileid (the referrer File-ID. This is 0 if the customer used a REFLINK instead.) 636 | 4: confirmed (it takes 21 days until the ppoints will be credited to your balance preventing fraud. 0 or 1) 637 | Reply format: integer,integer,integer 638 | 639 | 640 | 641 | subroutine=ppointstofpoints_v1 642 | Description: Exchanges your Premium RapidPoints to Free RapidPoints. Exchange rate may vary. Right now you get 1250 fpoints for 1000 ppoint. 643 | Parameters: type=col or prem 644 | login=ID or username 645 | password=password of the login 646 | takeppoints=how many ppoints you wish to exchange to fpoints. Minimum is 1000. The really exchanged points may be lower than specified. 647 | Reply fields: 1: ppoints (how many ppoints you lost) 648 | 2: fpoints (how many fpoints you got) 649 | Reply format: integer,integer 650 | 651 | 652 | 653 | Revision history (entries refering to functions being removed after just a few days will be removed here as well to keep the history clean) 654 | =========================================================================================================================================== 655 | 25.05.2009 656 | - Introduction of the revision history 657 | - trafficsharelogs_v1,trafficsharebandwidth_v1: actually make them work as they should.... 658 | - trafficsharelogs_v1: range and custom parameter added 659 | - checkfiles_v1: status values re-formatted and value 6 added 660 | 661 | 30.05.2009 662 | - getaccountdetails_v1: Possibility to earn money added. Thus, referers added, new parameter withrefstring added. 663 | 664 | 01.06.2009 665 | - getaccountdetails_v1: referers changed to refpoints. Possibility for collector's users to earn money added. refpoints and refstring added there as well. 666 | 667 | 04.06.2009 668 | - premiumzonelogs_v1 added 669 | - getapicpu_v1: Now permanent. MAX value added. 670 | 671 | 05.06.2009 672 | - filemigrator_v1: acceptfee added. File migrator not free anymore due to massive abuse. 673 | 674 | 08.06.2009 675 | - getaccountdetails_v1: withcookie added. If you need to set the new encrypted cookie, this string will include cookie=STRING in the reply. 676 | 677 | 14.06.2009 678 | - getreward_v1 and setreward_v1 added. 679 | 680 | 22.06.2009 681 | - global: Support for encrypted API login added via global override parameter cookie=HEXSTRING 682 | 683 | 25.06.2009 684 | - getaccountdetails_v1: reply field "type" added. 685 | 686 | 01.07.2009 687 | - getaccountdetails_v1: prempoints always returns 0. It will be removed shortly! 688 | 689 | 06.07.2009 690 | - getaccountdetails_v1: prempoints now removed. refrate added. 691 | - getpointlogs_v1: added function 692 | - getreferrerlogs_v1: added function 693 | 694 | 15.07.2009 695 | - listfiles_v1: killdeadline added 696 | 697 | 16.07.2009 698 | - listfiles_v1: now takes new optional parameter "fileids" 699 | 700 | 06.08.2009 701 | - getlinklist_v1: Internal bug-fixes and changed reply separator from , to " and reply is no longer HTML encoded. 
702 | 703 | 11.08.2009 704 | - getreward_v1: Reply changed. Points removed. 705 | 706 | 17.08.2009 707 | - filemigrator_v1: acceptfee removed. Fee removed completely. 708 | - masspoll_v1: vote changed to a1...a5 to enable possibility of surveys. 709 | 710 | 19.08.2009 711 | - getaccountdetails_v1: points renamed to fpoints. ppoints added. "points=0" added to avoid broken applications. 712 | - getpointlogs_v1: ppoints changed from integer to float. 713 | 714 | 21.08.2009 715 | - ppointstorefpoints_v1: added function 716 | 717 | 22.08.2009 718 | - ppointstorefpoints_v1: changed reply 719 | - ppointstofpoints_v1: added function 720 | 721 | 24.08.2009 722 | - getaccountdetails_v1: refrate changed to ppointrate, refpoints removed. 723 | - getreferrerlogs_v1: refpoints changed to ppoints 724 | - ppointstorefpoints_v1: function removed 725 | - getreward_v1: refrate changed to ppointrate 726 | 727 | 05.10.2009 728 | - getaccountdetails_v1: updated documentation to reality: points replaced by fpoints and ppoints, prempoints removed. 729 | 730 | 06.10.2009 731 | - Inserted missing RSAntiHack check routines in several api functions. 732 | - renamefile_v1: changed function so that you need a login now to use it. No longer accepts killcode identification 733 | 734 | 08.10.2009 735 | - getreferrerlogs_v1: confirmed flag added 736 | 737 | 18.10.2009 738 | - filemigrator_v1: Some design flaws fixed with RealFolders. Syntax has changed slightly. srcrealfolder removed. 739 | fileids now alternatively takes a RealFolder ID and no longer an "*". 740 | 741 | 26.10.2009 742 | - getaccountdetails_v1: mirror, mirror2, mirror3 and mirror4 replaced by "mirrors", a comma separated list of mirrors to use. 743 | 744 | 05.11.2009 745 | - checkfiles_v1: limit of files to check at once lowered from 10000 bytes to 3000 bytes. -------------------------------------------------------------------------------- /archived/downr/doc/api/rsapi.pl: -------------------------------------------------------------------------------- 1 | #!/usr/bin/perl 2 | 3 | # RapidShare AG OpenSource Perl Uploader V1.0. For non-commercial use only. All rights reserved. 4 | # Included: Uploading to free, collector's and premium-zone. The MD5-check after uploads checks if the upload worked. 5 | # NOT included in this version: Upload-resume via new RS API. 6 | # This is a PERL script written for experts and for coders wanting to know how to write own upload programs. 7 | # Tested under Linux and Linux only. 8 | # If you write your own upload-tools, please look at our rsapi.cgi calls. You need them to have fun. 9 | # 10 | # To upload a file, put this script on a machine with perl installed and use the following syntax: 11 | # perl rsapi.pl mytestfile.rar (this uploads mytestfile.rar as a free user) 12 | # perl rsapi.pl archive.rar prem 334 test (this uploads archive.rar to the premium-zone of login 334 with password test) 13 | # perl rsapi.pl a.rar col testuser mypw (this uploads a.rar to the collector's-zone of login testuser with password mypw) 14 | # 15 | # We will publish another version with upload resume enabled soon, but this script actually works and we actually 16 | # want you to understand how it works and upload resume would make this script even more complex. 
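# In short, the upload below boils down to two HTTP calls (the server ID 42 used here is a placeholder):
#   GET  http://rapidshare.com/cgi-bin/rsapi.cgi?sub=nextuploadserver_v1  -> returns a plain server ID, e.g. 42
#   POST http://rs42l3.rapidshare.com/cgi-bin/upload.cgi                  -> multipart/form-data carrying rsapi_v1=1,
#        the optional login/password (or freeaccountid/password) fields and the "filecontent" part
# The reply is a key=value list; File1.4 holds the server-side MD5 that is compared against the local one at the end.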
17 | 18 | use strict; 19 | use warnings; 20 | use Digest::MD5("md5_hex"); 21 | use Fcntl; 22 | use IO::Socket; 23 | 24 | my ($file, $filename, $uploadpath, $size, $socket, $uploadserver, $cursize, $fh, $bufferlen, $buffer, $boundary, $header, $contentheader, 25 | $contenttail, $contentlength, $result, $maxbufsize, $md5hex, $filecontent, $size2, %key_val, $login, $password, $zone); 26 | 27 | 28 | 29 | # This chapter sets some vars and parses some vars. 30 | $/ = undef; 31 | $file = $ARGV[0] || die "Syntax: $0 [login] [password]\n"; 32 | $zone = $ARGV[1] || ""; 33 | $login = $ARGV[2] || ""; 34 | $password = $ARGV[3] || ""; 35 | $maxbufsize = 64000; 36 | $uploadpath = "l3"; 37 | $cursize = 0; 38 | $size = -s $file || die "File $file is empty or does not exist!\n"; 39 | $filename = $file =~ /[\/\\]([^\/\\]+)$/ ? $1 : $file; 40 | 41 | 42 | 43 | # This chapter checks the file and calculates the MD5HEX of the existing local file. 44 | print "File $file has $size bytes. Calculating MD5HEX...\n"; 45 | open(FH, $file) || die "Unable to open file: $!\n"; 46 | $filecontent = ; 47 | close(FH); 48 | $md5hex = uc(md5_hex($filecontent)); 49 | $size2 = length($filecontent); 50 | print "MD5HEX is $md5hex ($size2 bytes analyzed.)\n"; 51 | unless ($size == $size2) { die "Strange error: $size bytes found, but only $size2 bytes analyzed?\n" } 52 | 53 | 54 | 55 | # This chapter finds out which upload server is free for uploading our file by fetching http://rapidshare.com/cgi-bin/rsapi.cgi?sub=nextuploadserver_v1 56 | if ($login and $password) { print "Trying to upload to your premium account.\n" } else { print "Uploading as a free user.\n" } 57 | print "Uploading as filename '$filename'. Getting upload server infos.\n"; 58 | $socket = IO::Socket::INET->new(PeerAddr => "rapidshare.com:80") || die "Unable to open port: $!\n"; 59 | print $socket qq|GET /cgi-bin/rsapi.cgi?sub=nextuploadserver_v1 HTTP/1.0\r\n\r\n|; 60 | ($uploadserver) = <$socket> =~ /\r\n\r\n(\d+)/; 61 | unless ($uploadserver) { die "Uploadserver invalid? Internal error!\n" } 62 | print "Uploading to rs$uploadserver$uploadpath.rapidshare.com\n"; 63 | 64 | 65 | 66 | # This chapter opens our file and the TCP socket to the upload server. 67 | sysopen($fh, $file, O_RDONLY) || die "Unable to open file: $!\n"; 68 | $socket = IO::Socket::INET->new(PeerAddr => "rs$uploadserver$uploadpath.rapidshare.com:80") || die "Unable to open port: $!\n"; 69 | 70 | 71 | 72 | # This chapter constructs a (somewhat RFC valid) HTTP header. See how we pass rsapi_v1=1 to the server to get a program-friendly output. 
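# The assembled body looks roughly like this (free upload, filename taken from the usage example above):
#   ---------------------632865735RS4EVER5675865
#   Content-Disposition: form-data; name="rsapi_v1"
#
#   1
#   ---------------------632865735RS4EVER5675865
#   Content-Disposition: form-data; name="filecontent"; filename="mytestfile.rar"
#
#   <raw file bytes>
#   ---------------------632865735RS4EVER5675865--
# Content-Length is calculated up front so the file itself can be streamed in 64000-byte chunks.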
73 | $boundary = "---------------------632865735RS4EVER5675865"; 74 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="rsapi_v1"\r\n\r\n1\r\n|; 75 | 76 | if ($zone eq "prem" and $login and $password) { 77 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="login"\r\n\r\n$login\r\n|; 78 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="password"\r\n\r\n$password\r\n|; 79 | } 80 | 81 | if ($zone eq "col" and $login and $password) { 82 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="freeaccountid"\r\n\r\n$login\r\n|; 83 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="password"\r\n\r\n$password\r\n|; 84 | } 85 | 86 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="filecontent"; filename="$filename"\r\n\r\n|; 87 | $contenttail = "\r\n$boundary--\r\n"; 88 | $contentlength = length($contentheader) + $size + length($contenttail); 89 | $header = qq|POST /cgi-bin/upload.cgi HTTP/1.0\r\nContent-Type: multipart/form-data; boundary=$boundary\r\nContent-Length: $contentlength\r\n\r\n|; 90 | 91 | 92 | 93 | #This chapter actually sends all the data, header first, to the upload server. 94 | print $socket "$header$contentheader"; 95 | 96 | while ($cursize < $size) { 97 | $bufferlen = sysread($fh, $buffer, $maxbufsize, 0) || 0; 98 | unless ($bufferlen) { die "Error while sending data: $!\n" } 99 | print "$cursize of $size bytes sent.\n"; 100 | $cursize += $bufferlen; 101 | print $socket $buffer; 102 | } 103 | 104 | print $socket $contenttail; 105 | 106 | 107 | 108 | # OK, all is sent. Now lets fetch the server's reponse and analyze it. 109 | print "All $size bytes sent to server. Fetching result:\n"; 110 | ($result) = <$socket> =~ /\r\n\r\n(.+)/s; 111 | unless ($result) { die "Ooops! Did not receive any valid server results?\n" } 112 | print "$result >>> Verifying MD5...\n"; 113 | 114 | foreach (split(/\n/, $result)) { 115 | if ($_ =~ /([^=]+)=(.+)/) { $key_val{$1} = $2 } 116 | } 117 | 118 | 119 | 120 | # Now lets check if the result contains (and it should contain) the MD5HEX of the uploaded file and check if its identical to our MD5HEX. 121 | unless ($key_val{"File1.4"}) { die "Ooops! Result did not contain MD5? Maybe you entered invalid login data.\n" } 122 | if ($md5hex ne $key_val{"File1.4"}) { die qq|Upload FAILED! Your MD5HEX is $md5hex, while the uploaded file has MD5HEX $key_val{"File1.4"}!\n| } 123 | print "MD5HEX value correct. Upload completed without errors. Saving links to rsulres.txt\n\n\n"; 124 | 125 | 126 | 127 | # Maybe you want the links saved to a logfile? Here we go. 128 | open(O, ">>rsulres.txt"); 129 | print O $result . "\n"; 130 | close(O); 131 | 132 | 133 | 134 | # Thats it. Have fun experimenting with this script. Now lets say... 135 | exit; 136 | -------------------------------------------------------------------------------- /archived/downr/doc/api/rsapiresume.pl: -------------------------------------------------------------------------------- 1 | #!/usr/bin/perl 2 | 3 | # Version 2.2.3 (21. Sep. 2009) 4 | # RapidShare AG OpenSource Perl Uploader. For non-commercial use only. All rights reserved. USE AT YOUR OWN RISK! 5 | 6 | # Features: 7 | # - Uploading to free, collector's zone and premium zone. 8 | # - Supports MD5 check after uploads to check if the upload worked. 9 | # - Supports upload resume to continue aborted uploads. Upload needs to be completed within 24 hours. 
10 | # - Supports new RealFolders for premium and collector's accounts. 11 | # - Supports uploading whole directories in RealFolders. 12 | # - Supports update modes and trash modes to update remote files without changing file IDs. 13 | 14 | # Syntax: rsapiresume.pl [login] [password] [updatemode] [trashmode] 15 | 16 | # Update=0: Traditional uploading. No duplicate checking. 17 | # Update=1: The lowest file ID duplicate will be overwritten if MD5 differs. Other duplicates will be handled using the trash flag. 18 | 19 | # Trash=0: No trashing. 20 | # Trash=1: Files will be moved to trash RealFolder (255) 21 | # Trash=2: Files will be DELETED! (not undoable) 22 | 23 | # To upload a file, put this script on a Linux machine with perl installed and use the following syntax: 24 | # perl rsapiresume.pl mytestfile.rar free (this uploads mytestfile.rar as a free user) 25 | # perl rsapiresume.pl archive.rar prem 334 test (this uploads archive.rar to the premium zone of login 334 with password test) 26 | # perl rsapiresume.pl a.rar col testuser mypw (this uploads a.rar to the collector's zone of login testuser with password mypw) 27 | 28 | # perl rsapiresume.pl prem myfolder 334 mypw 1 1 29 | # This uploads the folder myfolder and all subfolders to the premium zone of login 334 with password mypw. 30 | # Update=1 will not upload files already existing with same md5. Existing but different files will be overwritten without 31 | # changing the download link. Multiple duplicates will be moved to the RealFolder 255 (Trash), because we set the trash value to 1. 32 | 33 | use strict; 34 | use warnings; 35 | use Digest::MD5("md5_hex"); 36 | use Fcntl; 37 | use IO::Socket; 38 | use LWP::Simple; 39 | 40 | my ($FILE, $TYPE, $LOGIN, $PASSWORD, $UPDATEMODE, %ESCAPES, $TRASHMODE, %PARENTANDNAME_REALFOLDER); 41 | 42 | $/ = undef; 43 | $SIG{PIPE} = $SIG{HUP} = 'IGNORE'; 44 | $FILE = $ARGV[0] || ""; 45 | $TYPE = $ARGV[1] || ""; 46 | $LOGIN = $ARGV[2] || ""; 47 | $PASSWORD = $ARGV[3] || ""; 48 | $UPDATEMODE = $ARGV[4] || 0; 49 | $TRASHMODE = $ARGV[5] || 0; 50 | 51 | unless ($TYPE) { die "Syntax: $0 [login] [password] [updatemode] [trashmode]\n" } 52 | unless (-e $FILE) { die "File not found.\n" } 53 | 54 | if (-d $FILE) { 55 | unless ($LOGIN and $PASSWORD) { die "Folder upload not supported for anonymous uploads.\n" } 56 | 57 | print "Counting all folders and files in $FILE...\n"; 58 | my ($numfiles, $numfolders, $numbytes) = &countfiles($FILE); 59 | printf("You want to upload $numfiles files in $numfolders folders having $numbytes bytes (%.2f MB)\n", $numbytes / 1000000); 60 | if ($numfiles > 1000) { die "More than 1000 files? You should not do that...\n" } 61 | if ($numfolders > 100) { die "More than 100 folders? You should not do that...\n" } 62 | if ($numbytes > 100_000_000_000) { die "More than 100 Gigabytes? 
You should not do that...\n" } 63 | 64 | print "Uploading folder $FILE...\n"; 65 | &listrealfolders($TYPE, $LOGIN, $PASSWORD); 66 | &uploadfolder($FILE, $TYPE, $LOGIN, $PASSWORD); 67 | } else { 68 | &uploadfile($FILE, $TYPE, $LOGIN, $PASSWORD); 69 | } 70 | 71 | print "All done.\n"; 72 | exit; 73 | 74 | 75 | 76 | 77 | 78 | sub countfiles { 79 | my $dir = shift || die; 80 | 81 | my ($filename, $numfiles, $numfolders, $numbytes, $subnumfiles, $subnumfolders, $subnumbytes); 82 | 83 | foreach $filename (glob("$dir/*")) { 84 | if ($filename =~ /\.uploaddata$/) { next } 85 | 86 | if (-d $filename) { 87 | ($subnumfiles, $subnumfolders, $subnumbytes) = &countfiles($filename); 88 | $numfiles += $subnumfiles; 89 | $numfolders++; 90 | $numbytes += $subnumbytes; 91 | } else { 92 | $numfiles++; 93 | $numbytes += -s $filename || 0; 94 | } 95 | } 96 | 97 | return ($numfiles || 0, $numfolders || 0, $numbytes || 0); 98 | } 99 | 100 | 101 | 102 | 103 | 104 | sub uploadfolder { 105 | my $file = shift || die; 106 | my $type = shift || die; 107 | my $login = shift || ""; 108 | my $password = shift || ""; 109 | my $parent = shift || 0; 110 | 111 | my ($realfolder, $filename, $htmllogin, $htmlpassword, $htmlname, $mode); 112 | 113 | $realfolder = $PARENTANDNAME_REALFOLDER{"$parent,$file"} || 0; 114 | $mode = "existed"; 115 | 116 | unless ($realfolder) { 117 | $htmllogin = &htmlencode($login); 118 | $htmlpassword = &htmlencode($password); 119 | $htmlname = &htmlencode($file); 120 | $realfolder = get("http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=addrealfolder_v1&type=$type&login=$htmllogin&password=$htmlpassword&name=$htmlname&parent=$parent") || ""; 121 | if (not $realfolder or $realfolder =~ /^ERROR: /) { die "API Error occured: $realfolder\n" } 122 | $mode = "created"; 123 | unless ($realfolder =~ /^\d+$/) { die "Error adding RealFolder: $realfolder\n" } 124 | } 125 | 126 | print "Folder $file resolved to ID $realfolder ($mode)\n"; 127 | 128 | foreach $filename (glob("$file/*")) { 129 | if ($filename =~ /\.uploaddata$/) { next } 130 | if (-d $filename) { &uploadfolder($filename, $type, $login, $password, $realfolder) } else { &uploadfile($filename, $type, $login, $password, $realfolder) } 131 | } 132 | 133 | return ""; 134 | } 135 | 136 | 137 | 138 | 139 | 140 | sub listrealfolders { 141 | my $type = shift || die; 142 | my $login = shift || die; 143 | my $password = shift || die; 144 | 145 | my ($htmllogin, $htmlpassword, $result, $realfolder, $parent, $name); 146 | 147 | $htmllogin = &htmlencode($login); 148 | $htmlpassword = &htmlencode($password); 149 | 150 | $result = get("http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=listrealfolders_v1&login=$htmllogin&password=$htmlpassword&type=$type") || ""; 151 | if (not $result or $result =~ /^ERROR: /) { die "API Error occured: $result\n" } 152 | 153 | foreach (split(/\n/, $result)) { 154 | ($realfolder, $parent, $name) = split(/,/, $_, 3); 155 | $PARENTANDNAME_REALFOLDER{"$parent,$name"} = $realfolder; 156 | } 157 | 158 | return ""; 159 | } 160 | 161 | 162 | 163 | 164 | 165 | sub finddupes { 166 | my $type = shift || die; 167 | my $login = shift || die; 168 | my $password = shift || die; 169 | my $realfolder = shift || 0; 170 | my $filename = shift || ""; 171 | 172 | my ($header, $result, $htmllogin, $htmlpassword, $htmlfilename, $fileid, $size, $killcode, $md5hex, $serverid); 173 | 174 | $htmllogin = &htmlencode($login); 175 | $htmlpassword = &htmlencode($password); 176 | $htmlfilename = &htmlencode($filename); 177 | $result = 
get("http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=listfiles_v1&login=$htmllogin&password=$htmlpassword&type=$type&realfolder=$realfolder&filename=$htmlfilename&fields=size,killcode,serverid,md5hex&order=fileid") || ""; 178 | 179 | if (not $result or $result =~ /^ERROR: /) { die "API Error occured: $result\n" } 180 | if ($result eq "NONE") { print "FINDDUPES: No dupe detected.\n"; return (0,0,0,0,0) } 181 | 182 | foreach (split(/\n/, $result)) { 183 | unless ($_ =~ /^(\d+),(\d+),(\d+),(\d+),(\w+)/) { die "FINDDUPES: Unexpected result: $result\n" } 184 | unless ($fileid) { $fileid = $1; $size = $2; $killcode = $3; $serverid = $4; $md5hex = lc($5); next } 185 | print "FINDDUPES: Deleting dupe $1\n"; 186 | &deletefile($1, $3); 187 | } 188 | 189 | return ($fileid, $size, $killcode, $serverid, $md5hex); 190 | } 191 | 192 | 193 | 194 | 195 | 196 | sub deletefile { 197 | my $fileid = shift || die; 198 | my $killcode = shift || die; 199 | 200 | if ($TRASHMODE == 1) { 201 | print "DELETEFILE: Moving file $fileid to trash RealFolder 255.\n"; 202 | my $result = get("http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=movefilestorealfolder_v1&files=f$fileid"."k$killcode&realfolder=255") || ""; 203 | if ($result ne "OK") { die "DELETEFILE: Unexpected server reply: $result\n" } 204 | } 205 | 206 | elsif ($TRASHMODE == 2) { 207 | print "DELETEFILE: DELETING file $fileid.\n"; 208 | my $result = get("http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=deletefiles_v1&files=f$fileid"."k$killcode") || ""; 209 | if ($result ne "OK") { die "DELETEFILE: Unexpected server reply: $result\n" } 210 | } 211 | 212 | else { 213 | print "DELETEFILE: Doing nothing with file $fileid, because trash mode is 0.\n"; 214 | } 215 | 216 | return ""; 217 | } 218 | 219 | 220 | 221 | 222 | 223 | sub uploadfile { 224 | my $file = shift || die; 225 | my $type = shift || die; 226 | my $login = shift || ""; 227 | my $password = shift || ""; 228 | my $realfolder = shift || 0; 229 | 230 | my ($size, $filecontent, $md5hex, $size2, $uploadserver, $cursize, $dupefileid, $dupesize, $dupekillcode, $dupemd5hex); 231 | 232 | # This chapter checks the file and calculates the MD5HEX of the existing local file. 233 | $size = -s $file || die "File $file is empty or does not exist!\n"; 234 | print "File $file\n$size has byte. Full file MD5 is... "; 235 | open(FH, $file) || die "Unable to open file: $!\n"; 236 | binmode(FH); 237 | $filecontent = ; 238 | close(FH); 239 | $md5hex = md5_hex($filecontent); 240 | $size2 = length($filecontent); 241 | print "$md5hex\n"; 242 | unless ($size == $size2) { die "Strange error: $size byte found, but only $size2 byte analyzed?\n" } 243 | 244 | if ($UPDATEMODE and $login and $password) { 245 | ($dupefileid, $dupesize, $dupekillcode, $uploadserver, $dupemd5hex) = &finddupes($type, $login, $password, $realfolder, $file); 246 | if ($md5hex eq $dupemd5hex) { print "FILE ALREADY UP TO DATE! 
Server rs$uploadserver.rapidshare.com in file ID $dupefileid.\n\n"; return "" } 247 | if ($dupefileid) { print "UPDATING FILE $dupefileid on server rs$uploadserver.rapidshare.com ($type)\n" } 248 | } 249 | 250 | unless ($uploadserver) { 251 | print "Getting a free upload server...\n"; 252 | $uploadserver = get("http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=nextuploadserver_v1") || ""; 253 | if (not $uploadserver or $uploadserver =~ /^ERROR: /) { die "API Error occured: $uploadserver\n" } 254 | print "Uploading to rs$uploadserver.rapidshare.com ($type)\n"; 255 | } 256 | 257 | $cursize = 0; 258 | while ($cursize < $size) { $cursize = &uploadchunk($file, $type, $login, $password, $realfolder, $md5hex, $size, $cursize, "rs$uploadserver.rapidshare.com:80", $dupefileid, $dupekillcode) } 259 | 260 | return ""; 261 | } 262 | 263 | 264 | 265 | 266 | 267 | sub uploadchunk { 268 | my $file = shift || die; 269 | my $type = shift || die; 270 | my $login = shift || ""; 271 | my $password = shift || ""; 272 | my $realfolder = shift || 0; 273 | my $md5hex = shift || die; 274 | my $size = shift || die; 275 | my $cursize = shift || 0; 276 | my $fulluploadserver = shift || die; 277 | my $replacefileid = shift || 0; 278 | my $replacekillcode = shift || 0; 279 | 280 | my ($uploaddata, $wantchunksize, $fh, $socket, $boundary, $contentheader, $contenttail, $contentlength, $header, $chunks, $chunksize, 281 | $bufferlen, $buffer, $result, $fileid, $complete, $resumed, $filename, $killcode, $remotemd5hex, $chunkmd5hex); 282 | 283 | if (-e "$file.uploaddata") { 284 | open(I, "$file.uploaddata") or die "Unable to open file: $!\n"; 285 | ($fulluploadserver, $fileid, $killcode) = split(/\n/, ); 286 | print "RESUMING UPLOAD! Uploadserver=$fulluploadserver\nFile-ID=$fileid\nKillcode=$killcode\n"; 287 | close(I); 288 | print "Requesting authorization for upload resume...\n"; 289 | $cursize = get("http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=checkincomplete_v1&fileid=$fileid&killcode=$killcode") || ""; 290 | unless ($cursize =~ /^\d+$/) { die "Unable to resume! Please delete $file.uploaddata or try again.\n" } 291 | print "The upload stopped at $cursize on server $fulluploadserver.\n"; 292 | $resumed = 1; 293 | } 294 | 295 | $wantchunksize = 1000000; 296 | 297 | if ($size > $wantchunksize) { 298 | $chunks = 1; 299 | $chunksize = $size - $cursize; 300 | if ($chunksize > $wantchunksize) { $chunksize = $wantchunksize } else { $complete = 1 } 301 | } else { 302 | $chunks = 0; 303 | $chunksize = $size; 304 | } 305 | 306 | print "Upload chunk is $chunksize byte starting at $cursize.\n"; 307 | 308 | sysopen($fh, $file, O_RDONLY) || die "Unable to open file: $!\n"; 309 | $filename = $file =~ /[\/\\]([^\/\\]+)$/ ? 
$1 : $file; 310 | $socket = IO::Socket::INET->new(PeerAddr => $fulluploadserver) || die "Unable to open socket: $!\n"; 311 | $boundary = "---------------------632865735RS4EVER5675865"; 312 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="rsapi_v1"\r\n\r\n1\r\n|; 313 | 314 | if ($resumed) { 315 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="fileid"\r\n\r\n$fileid\r\n|; 316 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="killcode"\r\n\r\n$killcode\r\n|; 317 | if ($complete) { $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="complete"\r\n\r\n1\r\n| } 318 | } else { 319 | if ($type eq "prem" and $login and $password) { $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="login"\r\n\r\n$login\r\n| } 320 | if ($type eq "col" and $login and $password) { $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="freeaccountid"\r\n\r\n$login\r\n| } 321 | 322 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="password"\r\n\r\n$password\r\n|; 323 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="realfolder"\r\n\r\n$realfolder\r\n|; 324 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="replacefileid"\r\n\r\n$replacefileid\r\n|; 325 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="replacekillcode"\r\n\r\n$replacekillcode\r\n|; 326 | 327 | if ($chunks) { $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="incomplete"\r\n\r\n1\r\n| } 328 | } 329 | 330 | $contentheader .= qq|$boundary\r\nContent-Disposition: form-data; name="filecontent"; filename="$filename"\r\n\r\n|; 331 | $contenttail = "\r\n$boundary--\r\n"; 332 | $contentlength = length($contentheader) + $chunksize + length($contenttail); 333 | 334 | if ($resumed) { 335 | $header = qq|POST /cgi-bin/uploadresume.cgi HTTP/1.0\r\nContent-Type: multipart/form-data; boundary=$boundary\r\nContent-Length: $contentlength\r\n\r\n|; 336 | } else { 337 | $header = qq|POST /cgi-bin/upload.cgi HTTP/1.0\r\nContent-Type: multipart/form-data; boundary=$boundary\r\nContent-Length: $contentlength\r\n\r\n|; 338 | } 339 | 340 | print $socket "$header$contentheader"; 341 | 342 | sysseek($fh, $cursize, 0); 343 | $bufferlen = sysread($fh, $buffer, $wantchunksize) || 0; 344 | unless ($bufferlen) { die "Error while reading file: $!\n" } 345 | $chunkmd5hex = md5_hex($buffer); 346 | print "Sending $bufferlen byte...\n"; 347 | $cursize += $bufferlen; 348 | print $socket $buffer; 349 | print $socket $contenttail; 350 | print "Reading server response...\n"; 351 | ($result) = <$socket> =~ /\r\n\r\n(.+)/s; 352 | unless ($result) { die "Ooops! Did not receive any valid server results?\n" } 353 | 354 | if ($resumed) { 355 | if ($complete) { 356 | if ($result =~ /^COMPLETE,(\w+)/) { 357 | print "Upload completed! Remote MD5=$1 Local MD5=$md5hex\n"; 358 | if ($md5hex ne $1) { die "MD5 CHECK NOT PASSED!\n" } 359 | print "MD5 check passed. Upload OK! Saving status to rsapiuploads.txt\n\n"; 360 | unlink("$file.uploaddata"); 361 | } else { 362 | die "Unexpected server response!\n"; 363 | } 364 | } else { 365 | if ($result =~ /^CHUNK,(\d+),(\w+)/) { 366 | print "Chunk upload completed! 
$1 byte uploaded.\nRemote MD5=$2 Local MD5=$chunkmd5hex\n\n"; 367 | if ($2 ne $chunkmd5hex) { die "CHUNK MD5 CHECK NOT PASSED!\n" } 368 | } else { 369 | die "Unexpected server response!\n\n$result\n"; 370 | } 371 | } 372 | } else { 373 | if ($result =~ /files\/(\d+)/) { $fileid = $1 } else { die "Server result did not contain a file ID.\n" } 374 | unless ($result =~ /File1\.3=(\d+)/ and $1 == $cursize) { die "Server did not save all data we sent.\n" } 375 | unless ($result =~ /File1\.2=.+?killcode=(\d+)/) { die "Server did not send our killcode.\n" } 376 | $killcode = $1; 377 | unless ($result =~ /File1\.4=(\w+)/) { die "Server did not send the remote MD5 sum.\n" } 378 | $remotemd5hex = lc($1); 379 | 380 | if ($chunks) { 381 | if ($result !~ /File1\.5=Incomplete/) { die "Server did not acknowledge the incomplete upload request.\n" } 382 | print "Chunk upload completed! $cursize byte uploaded.\nRemote MD5=$remotemd5hex Local MD5=$chunkmd5hex\n"; 383 | if ($remotemd5hex ne $chunkmd5hex) { die "CHUNK MD5 CHECK NOT PASSED!\n" } 384 | print "Upload OK! Saving to rsapiuploads.txt and resuming upload...\n\n"; 385 | open(O, ">$file.uploaddata") or die "Unable to save upload server: $!\n"; 386 | print O "$fulluploadserver\n$fileid\n$killcode\n"; 387 | close(O); 388 | } else { 389 | if ($result !~ /File1\.5=Completed/) { die "Server did not acknowledge the completed upload request.\n" } 390 | if ($md5hex ne $remotemd5hex) { die "FINAL MD5 CHECK NOT PASSED! LOCAL=$md5hex REMOTE=$remotemd5hex\n" } 391 | print "FINAL MD5 check passed. Upload OK! Saving status to rsapiuploads.txt\n$result"; 392 | } 393 | 394 | open(O,">>rsapiuploads.txt") or die "Unable to save to rsapiuploads.txt: $!\n"; 395 | print O $chunks ? "Initialized chunk upload for file $file.\n$result" : "Uploaded file $file.\n$result"; 396 | close(O); 397 | } 398 | 399 | return $cursize; 400 | } 401 | 402 | 403 | 404 | 405 | 406 | sub htmlencode { 407 | my $text = shift || ""; 408 | 409 | unless (%ESCAPES) { 410 | for (0 .. 255) { $ESCAPES{chr($_)} = sprintf("%%%02X", $_) } 411 | } 412 | 413 | $text =~ s/(.)/$ESCAPES{$1}/g; 414 | 415 | return $text; 416 | } 417 | -------------------------------------------------------------------------------- /archived/downr/doc/rs-check-files.txt: -------------------------------------------------------------------------------- 1 | 2 | 3 | SAMPLE URL: http://rapidshare.com/files/301254665/4C.-.Aftms.part1.rar 4 | INPUT: http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=checkfiles_v1&files=301254665&filenames=4C.-.Aftms.part1.rar 5 | OUTPUT: 301254665,4C.-.Aftms.part1.rar,104857600,535,1,tg,0 6 | 7 | WITH MD5SUM 8 | SAMPLE URL: http://rapidshare.com/files/301254665/4C.-.Aftms.part1.rar 9 | INPUT: http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=checkfiles_v1&files=301254665&filenames=4C.-.Aftms.part1.rar&incmd5=1 10 | OUTPUT: 301254665,4C.-.Aftms.part1.rar,104857600,535,1,l32,3ADBBD3EB5D4A9C9C92CD594C4968534 11 | 12 | ---------------------------- 13 | http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=subroutine (finalpoints=points) 14 | https://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=subroutine (finalpoints=points*2 (this means using SSL doubles points!)) 15 | 16 | Additional parameters can be added via http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=subroutine¶m1=value1¶m2=value2 17 | 18 | 19 | ---------------------------- 20 | subroutine=checkfiles_v1 21 | 22 | Description: 23 | Gets status details about a list of given files. (files parameter 24 | limited to 3000 bytes. 
filenames parameter limited to 30000 bytes.) 25 | 26 | Parameters: files=comma separated list of file ids 27 | filenames=comma separated list of the respective filename. Example: 28 | files=50444381,50444382 filenames=test1.rar,test2.rar incmd5=if set 29 | to 1, field 7 is the hex-md5 of the file. This will double your 30 | points! If not given, all md5 values will be 0 31 | 32 | Reply fields: 33 | 1:File ID 34 | 2:Filename 35 | 3:Size (in bytes. If size is 0, this file does not exist.) 36 | 4:Server ID 37 | 5:Status integer, which can have the following numeric values: 38 | 0=File not found 39 | 1=File OK (Anonymous downloading) 40 | 2=File OK (TrafficShare direct download without any logging) 41 | 3=Server down 42 | 4=File marked as illegal 43 | 5=Anonymous file locked, because it has more than 10 downloads already 44 | 6=File OK (TrafficShare direct download with enabled logging. Read our privacy policy to see what is logged.) 45 | 46 | 6:Short host (Use the short host to get the best download mirror: 47 | http://rs$serverid$shorthost.rapidshare.com/files/$fileid/$filename) 48 | 7:md5 (See parameter incmd5 in parameter description above.) 49 | 50 | Reply format: integer,string,integer,integer,integer,string,string 51 | -------------------------------------------------------------------------------- /archived/downr/downr.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # downr, a bash download "manager" for rapidshare/hotfile 4 | # Copyright (C) 2007-2010 Chad Mayfield 5 | # 6 | # This program is free software: you can redistribute it and/or modify 7 | # it under the terms of the GNU General Public License as published by 8 | # the Free Software Foundation, either version 3 of the License, or 9 | # (at your option) any later version. 10 | # 11 | # This program is distributed in the hope that it will be useful, 12 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 13 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 | # GNU General Public License for more details. 15 | # 16 | # You should have received a copy of the GNU General Public License 17 | # along with this program. If not, see . 18 | 19 | #+-- RAPIDSHARE ACCOUNT 20 | RSUSER="xxxxx" 21 | RSPASS="xxxxxxx" 22 | #+-- HOTFILE ACCOUNT 23 | HFUSER="xxxxxxx" 24 | HFPASS="xxxxxx" 25 | #+-- MEGASHARES ACCOUNT 26 | MSUSER="xxxxxxxxxxxxxxx" 27 | MSPASS="xxxxxx" 28 | 29 | # http://www.dslreports.com/calculator 30 | DOWNRATE=150K #+-- in bytes per second 31 | 32 | #+------------- 33 | RSREGEX="http://(rapidshare.com|www.rapidshare.com)" 34 | RSAPIDOC="http://images.rapidshare.com/apidoc.txt" 35 | RSAPI="https://api.rapidshare.com/cgi-bin/rsapi.cgi" 36 | RSCHECK="sub=checkfiles_v1&files=${FILEID}&filenames=${FILENAME}&incmd5=1" 37 | RSDATA="sub=getaccountdetails_v1&withcookie=1&type=prem&login=${RSUSER}&password=${RSPASS}" 38 | RSCOOKIE=".rscookie" 39 | #+------------- 40 | HFREGEX="http://(hotfile.com|www.hotfile.com)" 41 | HFCHECK="http://hotfile.com/checkfiles.html?files=" 42 | #HFLOGIN="http://www.hotfile.com/login.php" 43 | #HFPOST="returnto=%2F&user=${HFUSER}&pass=${HFPASS}&=Login" 44 | #+-------------- 45 | DOWNLOADLOG= 46 | USERAGENT="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" 47 | 48 | 49 | if [ $# -ne 1 ]; then 50 | echo "Usage: $0 LINKSFILE.txt" 51 | else 52 | echo "Beginning to get `cat $1 | wc -l` files at $DOWNRATE KB/s." 
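    # Each link in $1 is classified by host regex and its status looked up first
    # (RapidShare: checkfiles_v1 reply field 5; HotFile: checkfiles.html), and
    # only status 1 ("File OK") links are downloaded, rate-limited to $DOWNRATE.
    # (The HotFile curl call itself is still stubbed out further down.)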
53 | while read line; 54 | do 55 | echo "===============================================================================" 56 | if [ `echo $line | egrep -c "${RSREGEX}"` -gt 0 ]; then 57 | if [ ! -s $RSCOOKIE ]; then 58 | echo "No rapidshare cookie found. Creating it..." 59 | RESPONSE=$(curl -s --data "$RSDATA" "$RSAPI") 60 | COOKIE_VAL=$(echo "$RESPONSE" | sed -n "/^cookie=/ s/^.*cookie=\(.*\).*$/\1/p") 61 | COOKIE=".rapidshare.com TRUE / FALSE $(($(date +%s)+24*60*60)) enc $COOKIE_VAL" 62 | echo "$COOKIE" | tr ' ' '\t' > $RSCOOKIE 63 | exit 1 64 | else 65 | FILEID=`echo $line | awk -F "/" '{print $5}'` 66 | FILENAME=`echo $line | awk -F "/" '{print $6}'` 67 | FILESTATS=`curl -Gs "${RSAPI}?sub=checkfiles_v1&files=${FILEID}&filenames=${FILENAME}&incmd5=1"` 68 | FILESIZE=`echo $FILESTATS | awk -F "," '{print $3}'` 69 | STATUS=`echo $FILESTATS | awk -F "," '{print $5}'` 70 | echo "FILEHOST: RAPIDSHARE.COM" 71 | echo "FILEID: $FILEID" 72 | echo "FILENAME: $FILENAME" 73 | echo "FILESIZE: `echo $FILESIZE/1024/1204|bc`Mb ($FILESIZE bytes)" 74 | SKIP= 75 | case $STATUS in 76 | 0) echo "STATUS: $STATUS (File Not Found)"; SKIP=1; ;; 77 | 1) echo "STATUS: $STATUS (File OK)"; 78 | echo "-------------------------------------------------------------------------------"; ;; 79 | 3) echo "STATUS: $STATUS (Server Down)"; SKIP=1; ;; 80 | 4) echo "STATUS: $STATUS (Illegal File!)"; SKIP=1; ;; 81 | 5) echo "STATUS: $STATUS (Locked, < 10 downloads)"; SKIP=1; ;; 82 | *) echo "STATUS: $STATUS (Unknown status: $status)"; SKIP=1; ;; 83 | esac 84 | if [ ! $SKIP ]; then 85 | # curl -L -O --cookie $RSCOOKIE $line 86 | curl -L -O --cookie $RSCOOKIE --limit-rate $DOWNRATE $line 87 | fi 88 | fi 89 | elif [ `echo $line | egrep -c "${HFREGEX}"` -gt 0 ]; then 90 | FILEID=`echo $line | awk -F "/" '{print $5}'` 91 | FILENAME=`echo $line | awk -F "/" '{print $7}' | sed s/.html//` 92 | #FILESTATS=`curl -Gs $HFCHECK$line | grep -A10 Results` 93 | FILESIZE=`curl -Gs ${HFCHECK}${line} | egrep '[MK]b|N/A' | awk 'BEGIN {FS="(<|>)"} {print $3}'` 94 | STATUS=`curl -Gs ${HFCHECK}${line} | egrep -c "Existent"` 95 | echo "FILEHOST: HOTFILE.COM" 96 | echo "FILEID: $FILEID" 97 | echo "FILENAME: $FILENAME" 98 | echo "FILESIZE: $FILESIZE" 99 | case $STATUS in 100 | 0) echo "STATUS: $STATUS (File Not Found)"; SKIP=1; ;; 101 | 1) echo "STATUS: $STATUS (File OK)"; 102 | echo "-------------------------------------------------------------------------------"; ;; 103 | *) echo "STATUS: $STATUS (Unknown status: $status)"; SKIP=1; ;; 104 | esac 105 | if [ ! $SKIP ]; then 106 | echo "curl ......" 107 | #curl -L -O --cookie $RSCOOKIE $line 108 | fi 109 | #wget -c $TRIES --auth-no-challenge --user=$HFUSER --password=$HFPASS \ 110 | # --referer="" --limit-rate=${DOWNRATE} -o $WGETLOG $url 111 | else 112 | echo "Skipping unknown link." 
113 | #echo $line 114 | fi 115 | sleep 1 116 | done<$1 117 | fi 118 | -------------------------------------------------------------------------------- /archived/google_code_main.css: -------------------------------------------------------------------------------- 1 | /* 2 | * Green: 80c65a ddf8cc Yellow: ffcc33 fff4c2 3 | * Blue: 3366cc c3d9ff Orange: ff9900 ffeac0 4 | * Purple: 49188f c2bddd Gray: 676767 e8e8e8 5 | */ 6 | 7 | #ssb {border-top:1px solid #80c65a; background: #ddf8cc; width: 70%; } 8 | #ssb div{float:left;padding:4px 0 0;padding-left:4px;padding-right:.5em; } 9 | #ssb p{text-align:right;white-space:nowrap;margin:.1em 0;padding:.2em; zoom:1;} 10 | #ssb h1 h2 h3 { font: 12px; color: 'Lucinda Grande', Geneva, Verdana, Arial, Helvetica, sans-serif; color: #80c65a; } 11 | 12 | #bsf,#ssb{background:#ddf8cc} 13 | #bsf,#ssb{margin:10px 0} 14 | #bsf{border-bottom:1px solid #6b90da;padding:1.8em 0} 15 | -------------------------------------------------------------------------------- /archived/jumpbox_checker/README.md: -------------------------------------------------------------------------------- 1 | # jumpbox_checker -------------------------------------------------------------------------------- /archived/jumpbox_checker/jumpbox.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # View all JumpBox apps: http://www.jumpbox.com/go/virtualization?page=0 4 | 5 | LOGDIR="." # Root logfile directory 6 | LOGFILE="${LOGDIR}/jumpbox_check.log" # Logfile location 7 | STORED="${LOGDIR}/stored_jumpboxes.txt" # Location of stored versions file 8 | SAVETO="." # No trailing slash required 9 | UA="Mozilla/4.0 (compatible; MSIE 7; Windows NT 5.1" # User Agent 10 | APPS=( deki knowledgetree lampd mysqld jasperbi sugarcrm5 joomla15 joomla cacti wordpress mediawiki openfire \ 11 | drupal6 drupal dokuwiki moinmoin mantis orangehrm redmine magento nagios3 punbb rubyonrails alfresco liferay \ 12 | lappd zenoss tikiwiki bugzilla glpi postgresqld moodle tracks pmwiki otrs trac vtigercrm silverstripe nagios \ 13 | gallery sugarcrm phpbb openldap movabletype projectpier dimdim snaplogic ) 14 | 15 | log() { 16 | echo "`date` $*" >> $LOGFILE 17 | } 18 | 19 | error() { 20 | log "ERROR: $*" 21 | } 22 | 23 | fail() { 24 | log "ERROR: $*" 25 | exit 1 26 | } 27 | 28 | if [ "$1" = "create" ]; then 29 | rm -rf $STORED 30 | touch $STORED 31 | for i in "${APPS[@]}" 32 | do 33 | CURRENT=`curl -s --user-agent "$UA" http://www.jumpbox.com/app/${i} | grep "JumpBox Version" | awk '{print $3}'` 34 | if [ `echo $i | wc -m` -ge 8 ]; then 35 | echo -e "$i \t $CURRENT" >> $STORED 36 | else 37 | echo -e "$i \t \t $CURRENT" >> $STORED 38 | fi 39 | done 40 | exit 0 41 | fi 42 | 43 | log "======== Beginning JumpBox Update Check ========" 44 | 45 | if [ ! -d $LOGDIR ]; then 46 | error "The \$LOGDIR directory does not exist!" 47 | fail "\$LOGDIR = $LOGDIR" 48 | fi 49 | 50 | if [ ! -d $SAVETO ]; then 51 | error "The \$SAVETO directory does not exist!" 52 | fail "\$SAVETO = $SAVETO" 53 | fi 54 | 55 | if [ ! -f $STORED ]; then 56 | error "Your list of stored JumpBoxes does not exist." 57 | error "Please create it and then rerun the script." 
58 | fail "\$STORED = $STORED" 59 | fi 60 | 61 | for i in "${APPS[@]}" 62 | do 63 | CURRENT=`curl -s --user-agent "$UA" http://www.jumpbox.com/app/${i} | grep "JumpBox Version" | awk '{print $3}'` 64 | URL="http://downloads2.jumpbox.com/${i}-${CURRENT}.zip" 65 | log "Checking for new JumpBox: $i" 66 | if [ `echo $i | wc -m` -ge 8 ]; then 67 | BIG=`grep -w $i $STORED | awk -F "\t" '{print $2}' | sed 's/ //g'` 68 | if [ "$BIG" == "$CURRENT" ]; then 69 | #log "JumpBox found! $i v${CURRENT}" 70 | log "JumpBox already stored. Skipping download." 71 | else 72 | log "New JumpBox found! $i v${CURRENT}" 73 | log "JumpBox updated! Starting download..." 74 | log "Getting: $URL" 75 | #curl -s -G -o $SAVETO/${i}-${CURRENT}.zip --user-agent "${UA}" $URL 76 | if [ $? -eq 0 ]; then 77 | log "Download complete!" 78 | else 79 | error "Download was unsuccessful! Please check" 80 | error "the URL and try the download again." 81 | fi 82 | # TODO: add sed command to replace old version with this one in $STORED 83 | fi 84 | else 85 | SMALL=`grep -w $i $STORED | awk -F "\t" '{print $3}' | sed 's/ //g'` 86 | if [ "$SMALL" == "$CURRENT" ]; then 87 | #log "JumpBox found! $i v${CURRENT}" 88 | log "JumpBox already stored. Skipping download." 89 | else 90 | log "New JumpBox found! $i v${CURRENT}" 91 | log "JumpBox updated! Starting download..." 92 | log "Getting: $URL" 93 | #curl -s -G -o $SAVETO/${i}-${CURRENT}.zip --user-agent "${UA}" $URL 94 | if [ $? -eq 0 ]; then 95 | log "Download complete!" 96 | else 97 | error "Download was unsuccessful! Please check" 98 | error "the URL and try the download again." 99 | fi 100 | # TODO: add sed command to replace old version with this one in $STORED 101 | fi 102 | fi 103 | #sleep 15 #+-- Just to be nice to the server, sleep for 15s till next check 104 | done 105 | 106 | log "JumpBox update check completed!" 107 | 108 | #EOF 109 | -------------------------------------------------------------------------------- /archived/jumpbox_checker/jumpbox_check.log: -------------------------------------------------------------------------------- 1 | Thu May 14 17:43:52 MDT 2009 ======== Beginning JumpBox Update Check ======== 2 | Thu May 14 17:43:52 MDT 2009 ERROR: Your list of stored JumpBoxes does 3 | Thu May 14 17:43:52 MDT 2009 not exist. Please create it and continue. 4 | Thu May 14 17:43:52 MDT 2009 FAILURE: $STORED = ./stored_jumpboxes.txt 5 | Thu May 14 17:44:23 MDT 2009 ======== Beginning JumpBox Update Check ======== 6 | Thu May 14 17:44:23 MDT 2009 ERROR: The $SAVETO directory does not exist! 7 | Thu May 14 17:44:23 MDT 2009 FAILURE: $SAVETO = /home/chasdafd 8 | Thu May 14 17:44:47 MDT 2009 ======== Beginning JumpBox Update Check ======== 9 | Thu May 14 17:44:48 MDT 2009 Checking for new JumpBox: deki 10 | Thu May 14 17:44:48 MDT 2009 JumpBox found! deki v1.1.1 11 | Thu May 14 17:44:48 MDT 2009 JumpBox updated! Starting download... 12 | Thu May 14 17:44:48 MDT 2009 GET: http://downloads2.jumpbox.com/deki-1.1.1.zip 13 | Thu May 14 17:44:48 MDT 2009 Download complete! 14 | Thu May 14 17:44:48 MDT 2009 Checking for new JumpBox: knowledgetree 15 | Thu May 14 17:44:48 MDT 2009 JumpBox found! knowledgetree v1.1.4 16 | Thu May 14 17:44:48 MDT 2009 JumpBox updated! Starting download... 17 | Thu May 14 17:44:48 MDT 2009 GET: http://downloads2.jumpbox.com/knowledgetree-1.1.4.zip 18 | Thu May 14 17:44:48 MDT 2009 Download complete! 19 | Thu May 14 17:44:48 MDT 2009 Checking for new JumpBox: lampd 20 | Thu May 14 17:44:48 MDT 2009 JumpBox found! 
lampd v1.1.6 21 | Thu May 14 17:44:48 MDT 2009 JumpBox already stored. Skipping download. 22 | Thu May 14 17:44:48 MDT 2009 Checking for new JumpBox: mysqld 23 | Thu May 14 17:44:48 MDT 2009 JumpBox found! mysqld v1.1.5 24 | Thu May 14 17:44:48 MDT 2009 JumpBox already stored. Skipping download. 25 | Thu May 14 17:44:49 MDT 2009 Checking for new JumpBox: jasperbi 26 | Thu May 14 17:44:49 MDT 2009 JumpBox found! jasperbi v1.1.0 27 | Thu May 14 17:44:49 MDT 2009 JumpBox already stored. Skipping download. 28 | Thu May 14 17:44:49 MDT 2009 Checking for new JumpBox: sugarcrm5 29 | Thu May 14 17:44:49 MDT 2009 JumpBox found! sugarcrm5 v1.1.12 30 | Thu May 14 17:44:49 MDT 2009 JumpBox already stored. Skipping download. 31 | Thu May 14 17:44:49 MDT 2009 Checking for new JumpBox: joomla15 32 | Thu May 14 17:44:49 MDT 2009 JumpBox found! joomla15 v1.1.9 33 | Thu May 14 17:44:49 MDT 2009 JumpBox already stored. Skipping download. 34 | Thu May 14 17:44:49 MDT 2009 Checking for new JumpBox: joomla 35 | Thu May 14 17:44:49 MDT 2009 JumpBox found! joomla v1.1.4 36 | Thu May 14 17:44:49 MDT 2009 JumpBox already stored. Skipping download. 37 | Thu May 14 17:44:50 MDT 2009 Checking for new JumpBox: cacti 38 | Thu May 14 17:44:50 MDT 2009 JumpBox found! cacti v1.1.5 39 | Thu May 14 17:44:50 MDT 2009 JumpBox already stored. Skipping download. 40 | Thu May 14 17:44:50 MDT 2009 Checking for new JumpBox: wordpress 41 | Thu May 14 17:44:50 MDT 2009 JumpBox found! wordpress v1.1.11 42 | Thu May 14 17:44:50 MDT 2009 JumpBox already stored. Skipping download. 43 | Thu May 14 17:44:50 MDT 2009 Checking for new JumpBox: mediawiki 44 | Thu May 14 17:44:50 MDT 2009 JumpBox found! mediawiki v1.1.10 45 | Thu May 14 17:44:50 MDT 2009 JumpBox already stored. Skipping download. 46 | Thu May 14 17:44:50 MDT 2009 Checking for new JumpBox: openfire 47 | Thu May 14 17:44:50 MDT 2009 JumpBox found! openfire v1.1.4 48 | Thu May 14 17:44:50 MDT 2009 JumpBox already stored. Skipping download. 49 | Thu May 14 17:44:51 MDT 2009 Checking for new JumpBox: drupal6 50 | Thu May 14 17:44:51 MDT 2009 JumpBox found! drupal6 v1.1.10 51 | Thu May 14 17:44:51 MDT 2009 JumpBox already stored. Skipping download. 52 | Thu May 14 17:44:51 MDT 2009 Checking for new JumpBox: drupal 53 | Thu May 14 17:44:51 MDT 2009 JumpBox found! drupal v1.1.12 54 | Thu May 14 17:44:51 MDT 2009 JumpBox already stored. Skipping download. 55 | Thu May 14 17:44:51 MDT 2009 Checking for new JumpBox: dokuwiki 56 | Thu May 14 17:44:51 MDT 2009 JumpBox found! dokuwiki v1.1.4 57 | Thu May 14 17:44:51 MDT 2009 JumpBox already stored. Skipping download. 58 | Thu May 14 17:44:51 MDT 2009 Checking for new JumpBox: moinmoin 59 | Thu May 14 17:44:51 MDT 2009 JumpBox found! moinmoin v1.1.9 60 | Thu May 14 17:44:51 MDT 2009 JumpBox already stored. Skipping download. 61 | Thu May 14 17:44:54 MDT 2009 Checking for new JumpBox: mantis 62 | Thu May 14 17:44:54 MDT 2009 JumpBox found! mantis v1.1.6 63 | Thu May 14 17:44:54 MDT 2009 JumpBox already stored. Skipping download. 64 | Thu May 14 17:44:55 MDT 2009 Checking for new JumpBox: orangehrm 65 | Thu May 14 17:44:55 MDT 2009 JumpBox found! orangehrm v1.1.2 66 | Thu May 14 17:44:55 MDT 2009 JumpBox already stored. Skipping download. 67 | Thu May 14 17:44:55 MDT 2009 Checking for new JumpBox: redmine 68 | Thu May 14 17:44:55 MDT 2009 JumpBox found! redmine v1.1.9 69 | Thu May 14 17:44:55 MDT 2009 JumpBox already stored. Skipping download. 
70 | Thu May 14 17:44:55 MDT 2009 Checking for new JumpBox: magento 71 | Thu May 14 17:44:55 MDT 2009 JumpBox found! magento v1.1.7 72 | Thu May 14 17:44:55 MDT 2009 JumpBox already stored. Skipping download. 73 | Thu May 14 17:44:55 MDT 2009 Checking for new JumpBox: nagios3 74 | Thu May 14 17:44:55 MDT 2009 JumpBox found! nagios3 v1.1.2 75 | Thu May 14 17:44:55 MDT 2009 JumpBox already stored. Skipping download. 76 | Thu May 14 17:44:55 MDT 2009 Checking for new JumpBox: punbb 77 | Thu May 14 17:44:56 MDT 2009 JumpBox found! punbb v1.1.8 78 | Thu May 14 17:44:56 MDT 2009 JumpBox already stored. Skipping download. 79 | Thu May 14 17:44:56 MDT 2009 Checking for new JumpBox: rubyonrails 80 | Thu May 14 17:44:56 MDT 2009 JumpBox found! rubyonrails v1.1.2 81 | Thu May 14 17:44:56 MDT 2009 JumpBox already stored. Skipping download. 82 | Thu May 14 17:44:56 MDT 2009 Checking for new JumpBox: alfresco 83 | Thu May 14 17:44:56 MDT 2009 JumpBox found! alfresco v1.1.2 84 | Thu May 14 17:44:56 MDT 2009 JumpBox already stored. Skipping download. 85 | Thu May 14 17:44:56 MDT 2009 Checking for new JumpBox: liferay 86 | Thu May 14 17:44:56 MDT 2009 JumpBox found! liferay v1.1.1 87 | Thu May 14 17:44:56 MDT 2009 JumpBox already stored. Skipping download. 88 | Thu May 14 17:44:56 MDT 2009 Checking for new JumpBox: lappd 89 | Thu May 14 17:44:56 MDT 2009 JumpBox found! lappd v1.1.1 90 | Thu May 14 17:44:56 MDT 2009 JumpBox already stored. Skipping download. 91 | Thu May 14 17:44:57 MDT 2009 Checking for new JumpBox: zenoss 92 | Thu May 14 17:44:57 MDT 2009 JumpBox found! zenoss v1.1.2 93 | Thu May 14 17:44:57 MDT 2009 JumpBox already stored. Skipping download. 94 | Thu May 14 17:44:57 MDT 2009 Checking for new JumpBox: tikiwiki 95 | Thu May 14 17:44:57 MDT 2009 JumpBox found! tikiwiki v1.1.7 96 | Thu May 14 17:44:57 MDT 2009 JumpBox already stored. Skipping download. 97 | Thu May 14 17:44:57 MDT 2009 Checking for new JumpBox: bugzilla 98 | Thu May 14 17:44:57 MDT 2009 JumpBox found! bugzilla v1.1.5 99 | Thu May 14 17:44:57 MDT 2009 JumpBox already stored. Skipping download. 100 | Thu May 14 17:44:57 MDT 2009 Checking for new JumpBox: glpi 101 | Thu May 14 17:44:57 MDT 2009 JumpBox found! glpi v1.1.3 102 | Thu May 14 17:44:57 MDT 2009 JumpBox already stored. Skipping download. 103 | Thu May 14 17:44:58 MDT 2009 Checking for new JumpBox: postgresqld 104 | Thu May 14 17:44:58 MDT 2009 JumpBox found! postgresqld v1.1.1 105 | Thu May 14 17:44:58 MDT 2009 JumpBox already stored. Skipping download. 106 | Thu May 14 17:44:58 MDT 2009 Checking for new JumpBox: moodle 107 | Thu May 14 17:44:58 MDT 2009 JumpBox found! moodle v1.1.7 108 | Thu May 14 17:44:58 MDT 2009 JumpBox already stored. Skipping download. 109 | Thu May 14 17:44:58 MDT 2009 Checking for new JumpBox: tracks 110 | Thu May 14 17:44:58 MDT 2009 JumpBox found! tracks v1.1.0 111 | Thu May 14 17:44:58 MDT 2009 JumpBox already stored. Skipping download. 112 | Thu May 14 17:44:58 MDT 2009 Checking for new JumpBox: pmwiki 113 | Thu May 14 17:44:58 MDT 2009 JumpBox found! pmwiki v1.1.5 114 | Thu May 14 17:44:58 MDT 2009 JumpBox already stored. Skipping download. 115 | Thu May 14 17:44:58 MDT 2009 Checking for new JumpBox: otrs 116 | Thu May 14 17:44:58 MDT 2009 JumpBox found! otrs v1.1.7 117 | Thu May 14 17:44:58 MDT 2009 JumpBox already stored. Skipping download. 118 | Thu May 14 17:44:59 MDT 2009 Checking for new JumpBox: trac 119 | Thu May 14 17:44:59 MDT 2009 JumpBox found! trac v1.1.10 120 | Thu May 14 17:44:59 MDT 2009 JumpBox already stored. 
Skipping download. 121 | Thu May 14 17:44:59 MDT 2009 Checking for new JumpBox: vtigercrm 122 | Thu May 14 17:44:59 MDT 2009 JumpBox found! vtigercrm v1.1.4 123 | Thu May 14 17:44:59 MDT 2009 JumpBox already stored. Skipping download. 124 | Thu May 14 17:44:59 MDT 2009 Checking for new JumpBox: silverstripe 125 | Thu May 14 17:44:59 MDT 2009 JumpBox found! silverstripe v1.1.5 126 | Thu May 14 17:44:59 MDT 2009 JumpBox already stored. Skipping download. 127 | Thu May 14 17:44:59 MDT 2009 Checking for new JumpBox: nagios 128 | Thu May 14 17:44:59 MDT 2009 JumpBox found! nagios v1.1.5 129 | Thu May 14 17:44:59 MDT 2009 JumpBox already stored. Skipping download. 130 | Thu May 14 17:45:00 MDT 2009 Checking for new JumpBox: gallery 131 | Thu May 14 17:45:00 MDT 2009 JumpBox found! gallery v1.1.1 132 | Thu May 14 17:45:00 MDT 2009 JumpBox already stored. Skipping download. 133 | Thu May 14 17:45:00 MDT 2009 Checking for new JumpBox: sugarcrm 134 | Thu May 14 17:45:00 MDT 2009 JumpBox found! sugarcrm v1.1.4 135 | Thu May 14 17:45:00 MDT 2009 JumpBox already stored. Skipping download. 136 | Thu May 14 17:45:00 MDT 2009 Checking for new JumpBox: phpbb 137 | Thu May 14 17:45:00 MDT 2009 JumpBox found! phpbb v1.1.6 138 | Thu May 14 17:45:00 MDT 2009 JumpBox already stored. Skipping download. 139 | Thu May 14 17:45:00 MDT 2009 Checking for new JumpBox: openldap 140 | Thu May 14 17:45:00 MDT 2009 JumpBox found! openldap v1.1.1 141 | Thu May 14 17:45:00 MDT 2009 JumpBox already stored. Skipping download. 142 | Thu May 14 17:45:01 MDT 2009 Checking for new JumpBox: movabletype 143 | Thu May 14 17:45:01 MDT 2009 JumpBox found! movabletype v1.1.8 144 | Thu May 14 17:45:01 MDT 2009 JumpBox already stored. Skipping download. 145 | Thu May 14 17:45:01 MDT 2009 Checking for new JumpBox: projectpier 146 | Thu May 14 17:45:01 MDT 2009 JumpBox found! projectpier v1.1.4 147 | Thu May 14 17:45:01 MDT 2009 JumpBox already stored. Skipping download. 148 | Thu May 14 17:45:01 MDT 2009 Checking for new JumpBox: dimdim 149 | Thu May 14 17:45:01 MDT 2009 JumpBox found! dimdim v1.1.0 150 | Thu May 14 17:45:01 MDT 2009 JumpBox already stored. Skipping download. 151 | Thu May 14 17:45:01 MDT 2009 Checking for new JumpBox: snaplogic 152 | Thu May 14 17:45:01 MDT 2009 JumpBox found! snaplogic v1.1.1 153 | Thu May 14 17:45:01 MDT 2009 JumpBox already stored. Skipping download. 154 | Thu May 14 17:45:01 MDT 2009 JumpBox update check completed! 
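(For context, a minimal hypothetical sketch of the check/skip loop this log implies, assuming the "name version" pairs of the stored_jumpboxes.txt shown below; jumpbox.sh itself is not reproduced in this export, and fetch_latest_version and jumpbox_names.txt are made-up names, not part of the original tool:)

while read -r name; do
    echo "$(date) Checking for new JumpBox: $name"
    latest=$(fetch_latest_version "$name")      # hypothetical helper that returns e.g. "1.1.6"
    echo "$(date) JumpBox found! $name v$latest"
    if grep -qx "$name $latest" stored_jumpboxes.txt; then
        echo "$(date) JumpBox already stored. Skipping download."
    else
        # download step would go here, then record the new version
        echo "$name $latest" >> stored_jumpboxes.txt
    fi
done < jumpbox_names.txt                        # hypothetical input list of JumpBox names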
155 | -------------------------------------------------------------------------------- /archived/jumpbox_checker/stored_jumpboxes.txt: -------------------------------------------------------------------------------- 1 | deki 1.1.1 2 | knowledgetree 1.1.4 3 | lampd 1.1.6 4 | mysqld 1.1.5 5 | jasperbi 1.1.0 6 | sugarcrm5 1.1.12 7 | joomla15 1.1.9 8 | joomla 1.1.4 9 | cacti 1.1.5 10 | wordpress 1.1.11 11 | mediawiki 1.1.10 12 | openfire 1.1.4 13 | drupal6 1.1.10 14 | drupal 1.1.12 15 | dokuwiki 1.1.4 16 | moinmoin 1.1.9 17 | mantis 1.1.6 18 | orangehrm 1.1.2 19 | redmine 1.1.9 20 | magento 1.1.7 21 | nagios3 1.1.2 22 | punbb 1.1.8 23 | rubyonrails 1.1.2 24 | alfresco 1.1.2 25 | liferay 1.1.1 26 | lappd 1.1.1 27 | zenoss 1.1.2 28 | tikiwiki 1.1.7 29 | bugzilla 1.1.5 30 | glpi 1.1.3 31 | postgresqld 1.1.1 32 | moodle 1.1.7 33 | tracks 1.1.0 34 | pmwiki 1.1.5 35 | otrs 1.1.7 36 | trac 1.1.10 37 | vtigercrm 1.1.4 38 | silverstripe 1.1.5 39 | nagios 1.1.5 40 | gallery 1.1.1 41 | sugarcrm 1.1.4 42 | phpbb 1.1.6 43 | openldap 1.1.1 44 | movabletype 1.1.8 45 | projectpier 1.1.4 46 | dimdim 1.1.0 47 | snaplogic 1.1.1 48 | -------------------------------------------------------------------------------- /archived/killie.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # killie.sh - kill IE/wineserver so memory doesn't ballon too much 4 | 5 | # author : Chad Mayfield (code@chadmayfield.com) 6 | # license : gplv2 7 | 8 | # NOTE: this is so old, at least six years, needed it to kill IE fast 9 | # as memory would ballon quickly. Not used anymore, needed for an 10 | # IE only webapp 11 | 12 | # Use ps to find PID's of IEXPLORE or wineserver 13 | wine=`ps aux | grep -i [w]ineserver | awk '$11 {print $2}'` 14 | ie=`ps aux | grep -i [I]EXPLORE | awk '$11 {print $2}'` 15 | 16 | # Check if any instances of IEXPLORE or wineserver return 0 (not running) 17 | if [ "$ie" == "0" ]; then 18 | pids2kill=( $wine $ie) 19 | for item in ${pids2kill[@]}; do 20 | # need processname otherwise will show blank since it is after we kill PID 21 | processname=`ps --pid $item | grep -v CMD | awk '{print $4}'` 22 | kill -9 $item 23 | 24 | if [ $? -eq 0 ]; then 25 | echo "Killed PID ${item}, (${processname})" 26 | else 27 | echo "Unable to kill PID ${item}, (${processname})" 28 | fi 29 | done 30 | else 31 | echo "No IE process(es) found, nothing to do. Exiting." 32 | fi 33 | 34 | #EOF -------------------------------------------------------------------------------- /archived/passgen.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # date: 04/19/2009 4 | # author: Chad Mayfield (http://www.chadmayfield.com/) 5 | # license: gpl v3 (http://www.gnu.org/licenses/gpl-3.0.txt) 6 | 7 | #+-- Check to see if there are any arguments passed to the function 8 | if [ ! $1 ]; then 9 | echo "You must supply a password length, or use --help for usage." 10 | exit 1 11 | fi 12 | 13 | #+-- Check to see there was one argument passed to the function 14 | if [ "$1" = "--help" ]; then 15 | echo "Usage:\t " 16 | elif [ $# -gt 2 ]; then 17 | echo "Error! Too many arguments entered. Please try again or use --help for usage." 18 | elif [ $# -eq 1 ]; then 19 | #+-- Check to see if that argument was numeric 20 | echo $1 | egrep "[^0-9]+" > /dev/null 2>&1 21 | if [ "$?" -eq "0" ]; then 22 | echo "Error! You number was not entered. Please try again or use --help for usage." 
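# (aside, not part of the original passgen.sh: the egrep pipeline above is a
#  portable way to reject non-numeric input; on bash 3+ an equivalent numeric
#  test without the extra processes would be:  if [[ $1 =~ ^[0-9]+$ ]]; then ... fi )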
23 | else 24 | if [ $1 -gt 8 ]; then 25 | #+-- Now check to see that the number is between 8 and 50 26 | if [ $1 -lt 56 ]; then 27 | passwd=`< /dev/urandom tr -dc A-Za-z0-9 | head -c${1}` 28 | echo "Your random password is: ${passwd}" 29 | else 30 | echo "Error! Password length must be less than 55! Please try again or use --help for usage." 31 | fi 32 | else 33 | #+-- If it was not numeric notify the user. 34 | echo "Error! Password length must be less than 8! Please try again or use --help for usage." 35 | fi 36 | fi 37 | #+-- Check to see if there was two arguments passed to the function 38 | elif [ $# -eq 2 ]; then 39 | #+-- Check to see if that argument was numeric 40 | echo $1 | egrep "[^0-9]+" > /dev/null 2>&1 41 | if [ "$?" -eq "0" ]; then 42 | echo "Error! You did not enter a number try again or use --help for usage." 43 | else 44 | if [ "$2" = "simple" ]; then 45 | passwd=`< /dev/urandom tr -dc A-Za-z0-9 | head -c${1}` 46 | echo "Your password is: ${passwd}" 47 | elif [ "$2" = "complex" ]; then 48 | passwd=`< /dev/urandom tr -dc A-Za-z0-9_\$\%\?\!\"\/\(\)- | head -c${1}` 49 | echo "Your password is: ${passwd}" 50 | else 51 | echo "Error! Second arguement is invalid, use --help for usage." 52 | fi 53 | fi 54 | else 55 | echo "Error! Invalid arguements entered, please try again, or use --help for usage." 56 | fi 57 | -------------------------------------------------------------------------------- /archived/study/array_test.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # array_test.sh - NOT MY CODE, can't remember where I got this, but it's a 4 | # good reminder of bash arrays 5 | 6 | array=( $( find . -name *.txt ) ) 7 | 8 | echo "Array size: ${#array[*]}" 9 | 10 | echo "Array items:" 11 | for item in ${array[*]} 12 | do 13 | printf " %s\n" $item 14 | done 15 | 16 | echo "Array indexes:" 17 | for index in ${!array[*]} 18 | do 19 | printf " %d\n" $index 20 | done 21 | 22 | echo "Array items and indexes:" 23 | for index in ${!array[*]} 24 | do 25 | printf "%4d: %s\n" $index ${array[$index]} 26 | done 27 | -------------------------------------------------------------------------------- /archived/sysinfo_node.sh: -------------------------------------------------------------------------------- 1 | #~/bin/bash 2 | 3 | # date: 04/19/2009 4 | # author: chadfu (http://www.chadmayfield.com/) 5 | # license: gpl v3 (http://www.gnu.org/licenses/gpl-3.0.txt) 6 | 7 | # 8 | # 9 | 10 | local_hostname=`hostname` 11 | proc_type=`grep vendor_id /proc/cpuinfo | awk 'BEGIN { FS=": | " } { print $2 }' | head -n1` 12 | proc_model=`grep model\ name /proc/cpuinfo | awk 'BEGIN { FS=": | " } { print $2 }' | head -n1` 13 | proc_speed=`grep model\ name /proc/cpuinfo | awk 'BEGIN { FS="@ " } { print $2 }' | head -n1` 14 | proc_phy_cores=`grep 'physical id' /proc/cpuinfo | sort | uniq | wc -l` 15 | proc_virt_cores=`grep ^processor /proc/cpuinfo | wc -l` 16 | proc_ht=`grep flags /proc/cpuinfo | grep -c ht` 17 | mem_ttl=`free -m | grep Mem: | awk '$7 {print $2}'` 18 | disk_size_ttl=`` 19 | disk_config=`` 20 | 21 | echo "=========================" 22 | echo "Hostname: $HOSTNAME" 23 | echo "---" 24 | echo "Processor Type: $proc_type $proc_model $proc_speed" 25 | echo "Physical Processors: $proc_phy_cores" 26 | echo "Virtual Cores: $proc_virt_cores" 27 | if [ $proc_ht -ge 1 ]; then 28 | echo "Hyper-Threading *is* supported." 29 | else 30 | echo "Hyper-Threading *is not* supported." 
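# (aside, not part of the original script: the empty disk_size_ttl/disk_config
#  assignments above and the unfinished disk echo lines below could plausibly
#  be filled in with something like
#    disk_count=$(lsblk -dn -o TYPE | grep -c disk)
#    disk_sizes=$(lsblk -dn -o NAME,SIZE)
#  though lsblk may not be present on older distros, so treat this only as a sketch.)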
31 | fi 32 | echo "---" 33 | echo "Total Memory Installed, $mem_ttl MB" 34 | echo "---" 35 | echo "Number of Hard Disks: $()" 36 | echo "Total Hard Disk Size: " 37 | echo "Hard Drive Configuration: " 38 | echo "=========================" 39 | -------------------------------------------------------------------------------- /bench_disk.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # bench_disk.sh - a rough disk benchmark tool using dd 4 | 5 | # author : Chad Mayfield (code@chadmayfield.com) 6 | # license : gplv3 7 | 8 | # TODO: 9 | # - fix sudo issue... doesn't work 10 | # - add user defined bs and count 11 | # - add bonnie++ and ioping functions 12 | 13 | path=$1 14 | #log_file=/var/log/diskbench.log 15 | tmp_file=tempfile 16 | dev_in=/dev/zero 17 | dev_out=/dev/null # where to write it 18 | block_size=1M # block size to write 19 | count=1024 # number of blocks to write 20 | 21 | # verify we have correct args 22 | if [ $# -ne 1 ]; then 23 | echo "Please specify a path to test!" 24 | exit 1 25 | fi 26 | 27 | # verify that our path that exists 28 | if [ ! -d "$path" ]; then 29 | echo "Path does not exist: $path" 30 | exit 1 31 | fi 32 | 33 | # don't mess with permissions, just switch to root 34 | if [ $UID -ne 0 ]; then 35 | # echo "Must be run as root, please enter your password:" 36 | # sudo -ik $0 $@ 37 | echo "Must be run as root!" 38 | exit 1 39 | fi 40 | 41 | # TODO: added arg for size of tempfile and iterations 42 | # TODO: check for disk space available so we kill anything 43 | 44 | ##### functions start ##### 45 | 46 | # TODO: add logging to file and console for historical data 47 | #log() { 48 | # echo "$(date): $@" >> $logfile 49 | #} 50 | 51 | # dd benchmarking commands (in part) adapted from the archlinux wiki 52 | # https://wiki.archlinux.org/index.php/SSD_Benchmarking#Using_dd 53 | 54 | do_seq_write() { 55 | # sequential write 56 | echo -n " writing..." 57 | dd if="$dev_in" of="$path/$tmp_file" bs=1M count=1024 conv=fdatasync,notrunc &>/tmp/$$ 58 | # seq_write=$(grep -v records /tmp/$$) 59 | w_speed=$(grep -v records /tmp/$$ | awk '{print $8" "$9 }') 60 | w_size=$(grep -v records /tmp/$$ | awk '{print $3" "$4}' | tr -d '()') 61 | w_time=$(grep -v records /tmp/$$ | awk '{print $6" "$7}' | tr -d ',' ) 62 | } 63 | 64 | do_flush() { 65 | # flush cache 66 | echo -n " flushing cache..." 67 | echo 3 > /proc/sys/vm/drop_caches 68 | } 69 | 70 | do_seq_read() { 71 | # sequential read 72 | echo -n " reading..." 73 | dd if="$path/$tmp_file" of=$dev_out bs=$block_size count=$count &> /tmp/$$ 74 | # seq_read=$(grep -v records /tmp/$$) 75 | r_speed=$(grep -v records /tmp/$$ | awk '{print $8" "$9 }') 76 | r_size=$(grep -v records /tmp/$$ | awk '{print $3" "$4}' | tr -d '()') 77 | r_time=$(grep -v records /tmp/$$ | awk '{print $6" "$7}' | tr -d ',' ) 78 | } 79 | 80 | do_cached_read() { 81 | # cached sequential read 82 | echo -n " reading (cached)..." 
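    # (descriptive note, not in the original: unlike do_seq_read, this pass
    #  runs without dropping the page cache first, so the file written above
    #  is still cached and the dd below measures cached read throughput
    #  rather than raw disk speed.)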
83 | dd if="$path/$tmp_file" of=$dev_out bs=$block_size count=$count &> /tmp/$$ 84 | # cached_read=$(grep -v records /tmp/$$) 85 | rc_speed=$(grep -v records /tmp/$$ | awk '{print $8" "$9 }') 86 | rc_size=$(grep -v records /tmp/$$ | awk '{print $3" "$4}' | tr -d '()') 87 | rc_time=$(grep -v records /tmp/$$ | awk '{print $6" "$7}' | tr -d ',' ) 88 | } 89 | 90 | cleanup() { 91 | # remove test file 92 | rm -f $tmp_file /tmp/$$ 93 | sleep 2 94 | } 95 | 96 | echo "beginning dd tests:" 97 | for i in do_seq_write do_flush do_seq_read do_cached_read 98 | do 99 | $i 100 | sleep 1 101 | echo "done" 102 | done 103 | 104 | # TODO: add ioping and bonnie++ tests 105 | #echo "beginning bonnie++ tests:" 106 | #echo " coming soon" 107 | 108 | cleanup 109 | 110 | echo "dd results:" 111 | 112 | # raw dd output 113 | #echo " " 114 | #echo "Sequential write: $seq_write" 115 | #echo "Sequential read : $seq_read" 116 | #echo "Cached read : $cached_read" 117 | #echo " " 118 | 119 | # prettier output 120 | printf "%s\n" " path $path" 121 | printf "%s\n" " write $w_speed \t($w_size in $w_time)" 122 | printf "%s\n" " read $r_speed \t($r_size in $r_time)" 123 | printf "%s\n" " cached $rc_speed \t($rc_size in $rc_time)" 124 | 125 | #EOF -------------------------------------------------------------------------------- /bench_net.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # bench_net.sh - a quick and dirty script to test a server's bandwidth speed 4 | 5 | # author : Chad Mayfield (code@chadmayfield.com) 6 | # license : gplv3 7 | 8 | # limitations: 9 | # - unable to test ipv6 10 | # - does not work well with old version of wget 11 | # - only uses 100mb files (bad at testing gigabit speed) 12 | # TODO: 13 | # - parameterize to test by region 14 | # - add ipv6 support 15 | 16 | tmpfile="/tmp/$0.$$" 17 | options="-O /dev/null" 18 | 19 | declare -A array 20 | 21 | array["CacheFly CDN Network "]="http://cachefly.cachefly.net/100mb.test" 22 | 23 | #### NORTH AMERICA 24 | array["Linode, Fremont CA USA"]="http://speedtest.fremont.linode.com/100MB-fremont.bin" 25 | array["SoftLayer, SJ CA USA "]="http://speedtest.sjc01.softlayer.com/downloads/test100.zip" 26 | array["SoftLayer, SEA WA USA "]="http://speedtest.sea01.softlayer.com/downloads/test100.zip" 27 | array["Linode, Dallas TX USA "]="http://speedtest.dallas.linode.com/100MB-dallas.bin" 28 | array["Linode, Atlanta GA USA"]="http://speedtest.atlanta.linode.com/100MB-atlanta.bin" 29 | array["DigitalOcean, NY USA "]="http://speedtest-nyc1.digitalocean.com/100mb.test" 30 | array["Linode, Newark NJ USA "]="http://speedtest.newark.linode.com/100MB-newark.bin" 31 | array["SoftLayer, DC USA "]="http://speedtest.wdc01.softlayer.com/downloads/test100.zip" 32 | array["OVH, Beauharnois QC CA"]="http://bhs.proof.ovh.net/files/100Mb.dat" 33 | 34 | #### EUROPE 35 | #array["Edis, Frankfurt DE "]="http://de.edis.at/100MB.test" 36 | #array["Edis, Warsaw PL "]="http://pl.edis.at/100MB.test" 37 | #array["DigitalOcean, AMS NL "]="http://speedtest-ams1.digitalocean.com/100mb.test" 38 | #array["Leaseweb, Haarlem NL "]="http://mirror.leaseweb.com/speedtest/100mb.bin" 39 | #array["Bahnhof, Sundsvall SE "]="http://speedtest.bahnhof.se/100M.zip" 40 | #array["DigitalOcean, LON EN "]="http://ipv4.speedtest-lon1.digitalocean.com/100mb.test" 41 | #array["DigitalOcean, PAR FR "]="http://speedtest-fra1.digitalocean.com/100mb.test" 42 | #array["Edis, Bucharest RO "]="http://ro.edis.at/100MB.test" 43 | #array["Edis, Hafnarfjordur 
IS"]="http://is.edis.at/100MB.test" 44 | #array["Edis, Moscow RU "]="http://ru.edis.at/100MB.test" 45 | #array["Edis, Tel Aviv IL "]="http://il.edis.at/100MB.test" 46 | 47 | #### SOUTH AMERICA 48 | #array["Edis, Vina del Mar CL "]="http://cl.edis.at/100MB.test" 49 | 50 | #### ASIA/PACIFICA 51 | #array["Linode, Tokyo JP "]="http://speedtest.tokyo.linode.com/100MB-tokyo.bin" 52 | #array["Linode, Singapore "]="http://speedtest.singapore.linode.com/100MB-singapore.bin" 53 | #array["SoftLayer, Singapore "]="http://speedtest.sng01.softlayer.com/downloads/test100.zip" 54 | #array["Edis, NwTerritories HK"]="http://hk.edis.at/100MB.test" 55 | 56 | echo "beginning speed/latency tests..." 57 | echo " start time: $(date)" 58 | 59 | for i in "${!array[@]}" 60 | do 61 | # run wget as out speedtest & save the output 62 | wget $options ${array[$i]} &> $tmpfile 63 | speed=$(awk '/\/dev\/null/ {s=$3 $4} END {gsub(/\(|\)/,"",s); print s}' $tmpfile) 64 | 65 | # find a quick avg latency (does work with wget >1.13) 66 | ip=$(awk '/connected/ {print $4}' $tmpfile | awk -F"|" '{print $2}') 67 | cmd=$(ping -q -c 20 -i 0.2 -w 5 $ip | \ 68 | awk -F "/" '/rtt/ {print $5 " ms latency"}') 69 | 70 | echo -ne " Speed from $i : ${speed}\t(${cmd})\n" 71 | done 72 | 73 | rm -f $tmpfile 74 | echo " end time: $(date)" 75 | echo "done" 76 | 77 | #EOF 78 | -------------------------------------------------------------------------------- /checksum.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # checksum.sh - checksum all files under a certain path 4 | 5 | if [ $# -ne 2 ]; then 6 | echo "You must specifiy a hash (md5 or sha) type and a path to checksum!" 7 | echo " e.g. $0 [ md5 | sha ] /path/to/directory" 8 | exit 1 9 | fi 10 | 11 | # check to make sure a valid hash is provided 12 | if ! [[ $1 =~ (md5|sha1) ]]; then 13 | echo "ERROR: You must specify a hash, either md5 or sha1!" 14 | exit 1 15 | else 16 | if [[ $1 =~ "md5" ]]; then 17 | algo=md5 18 | elif [[ $1 =~ "sha1" ]]; then 19 | algo=sha1 20 | else 21 | echo "ERROR: Unknown hashing algorithm! Please use md5 or sha1." 22 | fi 23 | fi 24 | 25 | path="$2" 26 | chksumfile="~/checksums.txt" 27 | 28 | # checksum based on $OSTYPE 29 | if [[ $OSTYPE =~ "linux" ]]; then 30 | if [[ $algo =~ "md5" ]]; then 31 | algo=md5sum 32 | else 33 | algo=shasum 34 | fi 35 | elif [[ $OSTYPE =~ "darwin" ]]; then 36 | if [[ $algo =~ "md5" ]]; then 37 | algo=md5 38 | else 39 | algo=shasum 40 | fi 41 | else 42 | echo "ERROR: Unknown \$OSTYPE!" 43 | fi 44 | 45 | # find all regular files under dir tree 46 | command find "$path" -type f -print0 | xargs -0 "$algo" &> ~/checksums.$$.$1 47 | 48 | echo "Checksum complete. Checksum file located at: ~/checksums.$$.$1" 49 | 50 | #EOF 51 | -------------------------------------------------------------------------------- /checksum_cdrom.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # checksum_cdrom.sh - get checksum of burnt cd/dvd rom 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | # TODO: 9 | # + make work when more than one drive exists 10 | # + make work on macos 11 | 12 | if [ $# -ne 1 ]; then 13 | echo "ERROR: You must supply an iso file to compare!" 14 | echo " e.g. 
$0 /path/to/discimage.iso" 15 | exit 1 16 | fi 17 | 18 | bin=(volname wodim md5sum sha1sum) 19 | for i in ${bin[@]} 20 | do 21 | command -v $i >/dev/null 2>&1 || { \ 22 | echo >&2 "ERROR: You must install $i to continue!"; exit 1; } 23 | done 24 | 25 | iso=$1 26 | algo=sha1sum 27 | #algo=md5sum 28 | # get the device name, unless grep fails exit 29 | dev="/dev/$(grep "drive name" /proc/sys/dev/cdrom/info | awk '{print $NF}'; exit ${PIPESTATUS[0]})" 30 | # is there a disc in the tray? 31 | exists=$(volname $dev &>/dev/null; echo $?) 32 | # get drive name, but requires lock 33 | #info=$(wodim -prcap 2> /dev/null | grep -E 'Vendor_|Ident|Revi' | awk -F "'" '{print$2}' | tr '\n' ' ') 34 | 35 | if [ ${PIPESTATUS[0]} -ne 0 ]; then 36 | echo "ERROR: No CD/DVD device not found!" 37 | exit 1 38 | elif [ $exists -ne 0 ]; then 39 | echo "ERROR: No disc mounted! Please insert a disc and try again." 40 | exit 1 41 | fi 42 | 43 | #echo "Found drive: $info" 44 | echo "Found disc in $dev: $(volname $dev)" 45 | echo -n "Beginning checksum..." 46 | cksum_iso=$($algo $iso | awk '{print $1}') 47 | cksum_cdr=$(dd if=${dev} &> /dev/null | head -c $(stat --format=%s $iso) | $algo | awk '{print $1}') 48 | 49 | if [ "$cksum_iso" != "$cksum_cdr" ]; then 50 | echo "done!" 51 | echo "ERROR: Checksums do not match!" 52 | echo " Checksum of ${iso##*/}: $cksum_iso" 53 | echo " Checksum of disc in $dev: $cksum_cdr" 54 | else 55 | echo "done!" 56 | echo "Checksums match! $cksum_iso" 57 | fi 58 | 59 | #EOF 60 | -------------------------------------------------------------------------------- /chk_badblocks.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # chk_badblocks.sh - check hard drives for bad blocks to be called with a 4 | # single argument, the device as shown in /dev. 5 | # for example; ./chk_badblocks.sh /dev/sda 6 | # 7 | 8 | # should be root 9 | if [ $UID -ne 0 ]; then 10 | echo "$0 must be run as root" 11 | exit 1 12 | fi 13 | 14 | # print usage if no args are passed 15 | if [ $# -ne 1 ]; then 16 | echo "usage: $0 " 17 | exit 1 18 | fi 19 | 20 | # set some variables 21 | is_destructive=0 # (0=no, 1=yes) 22 | ldevice=${1%/} # long device name 23 | sdevice=`echo ${1%/} | sed 's/\/dev\///'` 24 | logfile=chk_badblocks_${sdevice}.log 25 | passes=20 26 | mail="chad@planetmayfield.com" 27 | 28 | # let's begin 29 | echo "Beginning run of chk_badblocks." 
>> $logfile 30 | echo "Run information" >> $logfile 31 | echo " Device: $ldevice" >> $logfile 32 | echo " Passes: $passes" >> $logfile 33 | 34 | if [ $is_destructive -eq 0 ]; then 35 | echo " Destructive: NO" >> $logfile 36 | else 37 | echo " Destructive: YES" >> $logfile 38 | fi 39 | 40 | # actually run badblocks 41 | count=0 42 | while [ $count -lt $passes ]; do 43 | echo "---" >> $logfile 44 | echo "Pass: $count" >> $logfile 45 | echo "Start time: $(date)" >> $logfile 46 | stime=$(date +%s) 47 | 48 | # run either a destructive or non-destructive badblocks run 49 | if [ $is_destructive -eq 0 ]; then 50 | # non-destructive 51 | badblocks -nsv -o bb_${sdevice}.txt $1 >> $logfile 52 | else 53 | # destructive 54 | badblocks -wsv -o bb_${sdevice}.txt $1 >> $logfile 55 | fi 56 | 57 | etime=$(date +%s) 58 | echo "End time: $(date)" >> $logfile 59 | elapsed=`expr $etime - $stime` 60 | echo "Elapsed time: `expr $elapsed \/ 60` minutes" >> $logfile 61 | echo "---" >> $logfile 62 | let count=count+1 63 | done 64 | 65 | # mail results 66 | mailx -s "chk_badblocks.sh finished on $passes on $1" $mail < ./chk_badblocks_${sdevice}.log 67 | -------------------------------------------------------------------------------- /chkrootkit.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # chkrootkit.sh - run chkrootkit then log & email results 4 | 5 | # author : Chad Mayfield (code@chadmayfield.com) 6 | # license : gplv3 7 | 8 | #set -e # immediately exit if non-zero exit 9 | 10 | logrotate_enabled=0 # if logrotate enabled change to 1 11 | email_to='root@localhost' 12 | 13 | bin_path=/usr/sbin/chkrootkit 14 | logfile=/var/log/chkrootkit.log 15 | required=( chkrootkit egrep mail ) # required binaries 16 | start_time=$(date) 17 | host=$(hostname -f) 18 | exclude='no suspect files|not (found|infected|promisc)|nothing (deleted|detected|found)' 19 | email_sub="[chkrootkit] $host $start_time" 20 | fail=0 21 | 22 | # functions start 23 | log() { 24 | echo "$(date): $@" >> $logfile 25 | } 26 | 27 | bail() { 28 | echo "$(date): $@" >> $logfile 29 | exit 2 30 | } 31 | 32 | log_rotate() { 33 | if [ $logrotate_enabled -eq 0 ]; then 34 | if [ ! -f $logfile]; then 35 | bail "error: no logfile exists" # bail, log rotation won't work 36 | fi 37 | 38 | # modified from http://stackoverflow.com/a/3690996 39 | for suffix in {8..1}; do 40 | if [ -f "$logfile.${suffix}" ]; then 41 | ((next = suffix + 1)) 42 | mv -f "$logfile.${suffix}" "$logfile.${next}" 43 | fi 44 | done 45 | mv -f "$logfile" "$logfile.1" 46 | else 47 | log "internal log rotation disabled" 48 | fi 49 | } 50 | 51 | check_logs() { 52 | if [ -e $logfile ]; then 53 | if [ -s $logfile ]; then 54 | log "using current empty logfile" # it's empty 55 | else 56 | log_rotate # need to rotate 57 | fi 58 | else 59 | touch $logfile &> /dev/null || echo "unable to create log file" 60 | fi 61 | } 62 | 63 | check_depends() { 64 | # check for required programs 65 | for i in ${required[@]}; 66 | do 67 | hash $i &> /dev/null || log "required but not found: $i" 68 | if [ $? 
-ne 0 ]; then 69 | fail=1; 70 | fi 71 | done 72 | 73 | # TODO: send email that we are bailing 74 | 75 | if [ $fail -ne 0 ]; then 76 | bail "error: un-met dependencies" 77 | fi 78 | } 79 | 80 | send_mail() { 81 | 82 | } 83 | 84 | # start main program 85 | log "start time: $start_time" 86 | log "hostname: $host" 87 | 88 | check_depends # check for dependencies 89 | check_logs # check for clean logfile/rotation 90 | 91 | # finally execute chkrootkit 92 | $bin_path | egrep -v $exclude &> $logfile 93 | 94 | # grab logfile email 95 | cat $logfile | mail -s $email_sub email_to 96 | 97 | #/usr/sbin/chkrootkit | egrep -v 'no suspect files|not (found|infected|promisc)|nothing (deleted|detected|found)' | mail -s "[chkrootkit] `hostname -f` `date`" root@localhost 98 | 99 | #EOF -------------------------------------------------------------------------------- /copy_keys.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # copy_keys.sh - copy ssh keys to a temp server 4 | 5 | if [ $# -ne 1 ]; then 6 | echo "ERROR: Must supply the last IP octet!" 7 | exit 1 8 | fi 9 | 10 | IP="$1" 11 | HOST="host${IP}.us.local" 12 | USERS="root admin nimda toor" 13 | PASS="Password1" 14 | KEY="${HOME}/.ssh/temp/id_rsa" 15 | 16 | command -v sshpass >/dev/null 2>&1 || \ 17 | { echo >&2 "ERROR: 'sshpass' is required but it's not installed!"; exit 1; } 18 | 19 | # remove any keys for this host from known hosts 20 | if [ "$(ssh-keygen -F "$HOST")" ] ; then 21 | ssh-keygen -f "${HOME}/.ssh/known_hosts" -R $HOST 22 | fi 23 | 24 | # ssh-copy-id key to server 25 | for i in ${USERS[@]} 26 | do 27 | sshpass -p "$PASS" ssh-copy-id -f -i "${KEY}.pub" -o StrictHostKeyChecking=no ${i}@${HOST} 28 | # disable the TMOUT for ssh to we can stay connected 29 | ssh -i "$KEY" ${i}@${HOST} 'echo "# CHAD WAS HERE" >> ~/.bashrc; echo "unset TMOUT" >> ~/.bashrc' 30 | done 31 | 32 | #EOF 33 | -------------------------------------------------------------------------------- /entropy_ck.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | # entropy_ck.py - calculates the Shannon entropy of a string 4 | 5 | # author : Chad Mayfield (code@chadmayfield.com) 6 | # license : gplv3 7 | 8 | import os, sys, math 9 | 10 | sname = os.path.basename(sys.argv[0]) 11 | if len(sys.argv) == 1: 12 | print "Usage: " + sname + " " 13 | sys.exit() 14 | 15 | # modified from http://stackoverflow.com/a/2979208 16 | def entropy(string): 17 | # get probability of chars in string 18 | prob = [ float(string.count(c)) / len(string) for c in dict.fromkeys(list(string)) ] 19 | 20 | # calculate the entropy 21 | entropy = - sum([ p * math.log(p) / math.log(2.0) for p in prob ]) 22 | 23 | return entropy 24 | 25 | c_entropy = entropy(sys.argv[1]) 26 | p_length = len(sys.argv[1]) 27 | ttl_entropy = entropy(sys.argv[1]) * len(sys.argv[1]) 28 | 29 | print 'passwd length: ', p_length 30 | print 'entropy/char: ', c_entropy 31 | print 'actual entropy: ', ttl_entropy, 'bits' 32 | -------------------------------------------------------------------------------- /experiments/README.md: -------------------------------------------------------------------------------- 1 | # scriptlets: experiments -------------------------------------------------------------------------------- /experiments/bash_manipulation.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # bash_manipulation.sh - learning bash expansion/manipulation works 
(bash 4+) 4 | 5 | # http://stackoverflow.com/a/5163260 6 | # https://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html 7 | # $1, $2, $3, ... are the positional parameters. 8 | # "$@" is an array-like construct of all positional parameters, {$1,$2,$3,...} 9 | # "$*" is the IFS expansion of all positional parameters, $1 $2 $3 .... 10 | # $# is the number of positional parameters. 11 | # $- current options set for the shell. 12 | # $$ pid of the current shell (not subshell). 13 | # $_ most recent parameter (or abs path of the command to start curr shell) 14 | # $IFS is the (input) field separator. 15 | # $? is the most recent foreground pipeline exit status. 16 | # $! is the PID of the most recent background command. 17 | # $0 is the name of the shell or shell script. 18 | 19 | echo "STRING MANIPULATION" 20 | echo "--------------------------------------------------------" 21 | echo "Example 1: Convert entire string to uppercase:" 22 | oldvariable=mississippi 23 | newvariable=${oldvariable^^} 24 | echo "newvariable=\${oldvariable^^} = ${newvariable^^}" 25 | echo "--------------------------------------------------------" 26 | 27 | echo "Example 2: Convert only first character to uppercase:" 28 | oldvariable=mississippi 29 | newvariable=${oldvariable^} 30 | echo "newvariable=\${oldvariable^} = ${newvariable^}" 31 | echo "--------------------------------------------------------" 32 | 33 | echo "Example 3: Convert entire string to lowercase:" 34 | oldvariable=MISSISSIPPI 35 | newvariable=${oldvariable,,} 36 | echo "newvariable=\${oldvariable,,} = ${newvariable,,}" 37 | echo "--------------------------------------------------------" 38 | 39 | echo "Example 4: Convert only first character to lowercase:" 40 | oldvariable=MISSISSIPPI 41 | newvariable=${oldvariable,} 42 | echo "newvariable=\${oldvariable,} = ${newvariable,}" 43 | echo "--------------------------------------------------------" 44 | 45 | echo "Example 5: Convert specific characters to uppercase:" 46 | oldvariable=mississippi 47 | newvariable=${oldvariable^^[mi]} 48 | echo "newvariable=\${oldvariable^^[mi]} = ${newvariable}" 49 | echo "--------------------------------------------------------" 50 | 51 | echo "Example 6: Convert specific characters to lowercase:" 52 | oldvariable=MISSISSIPPI 53 | newvariable=${oldvariable,,[MI]} 54 | echo "newvariable=\${oldvariable,,[MI]} = ${newvariable}" 55 | echo "--------------------------------------------------------" 56 | 57 | 58 | echo "FILENAME/PATH MANIPULATION & PARAMETER EXPANSION" 59 | FILE=/usr/share/lib/secrets.tar.gz 60 | 61 | echo "Example 1: Path without first directory" 62 | echo "\${FILE#*/} = ${FILE#*/}" 63 | echo "--------------------------------------------------------" 64 | 65 | echo "Example 2: Filename with all directories removed" 66 | echo "\${FILE##*/} = ${FILE##*/}" 67 | echo "--------------------------------------------------------" 68 | 69 | echo "Example 3: Filename with all directories removed using basename" 70 | echo "\$(basename FILE) = $(basename $FILE)" 71 | echo "--------------------------------------------------------" 72 | 73 | echo "Example 4: Only filename extensions" 74 | echo "\${FILE#*.} = ${FILE#*.}" 75 | echo "--------------------------------------------------------" 76 | 77 | echo "Example 5: Only the *last* filename extension" 78 | echo "\${FILE##*.} = ${FILE##*.}" 79 | echo "--------------------------------------------------------" 80 | 81 | echo "Example 6: Path as directories, no filename" 82 | echo "\${FILE%/*} = ${FILE%/*}" 83 | echo 
"--------------------------------------------------------" 84 | 85 | echo "Example 7: Path as directories, no filename using dirname" 86 | echo "\$(dirname $FILE = $(dirname $FILE)" 87 | echo "--------------------------------------------------------" 88 | 89 | echo "Example 8: List only the first directory in the path" 90 | echo "\${FILE%%/*} = ${FILE%%/*}" 91 | echo "--------------------------------------------------------" 92 | 93 | echo "Example 9: Path with the last extension removed" 94 | echo "\${FILE%.*} = ${FILE%.*}" 95 | echo "--------------------------------------------------------" 96 | 97 | echo "Example 10: Path with all extensions removed" 98 | echo "\${FILE%%.*} = ${FILE%%.*}" 99 | echo "--------------------------------------------------------" 100 | 101 | #EOF 102 | -------------------------------------------------------------------------------- /experiments/curl_grab_headers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # curl_grab_headers.sh - grab specific headers, was to experiment to get http 4 | # status codes in bash for a script at one timee 5 | 6 | var=( content_type filename_effective ftp_entry_path http_code http_connect local_ip local_port num_connects num_redirects redirect_url remote_ip remote_port size_download size_header time_appconnect time_connect time_namelookup time_pretransfer time_redirect time_starttransfer time_total url_effective size_request size_upload speed_download speed_upload ssl_verify_result time_appconnect time_connect time_namelookup time_pretransfer time_redirect time_starttransfer time_total url_effective ) 7 | 8 | # according to the man page (https://curl.haxx.se/docs/manpage.html) only one 9 | # variable can be used at a time; "If this option is used several times, the 10 | # last one will be used." 
which is inefficent, but this is just a test 11 | for i in "${var[@]}" 12 | do 13 | resp=$(curl -Iso /dev/null -w "%{$i}" https://www.google.com/robots.txt) 14 | printf "%-20s\t%s\n" "$i" "$resp" 15 | #sleep 1 16 | done 17 | 18 | #EOF 19 | -------------------------------------------------------------------------------- /experiments/json_parse_with_python.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # json_parse_with_python - parse json output with python in a shell script 4 | 5 | # example output 6 | #{ 7 | # "ip": "166.70.163.62", 8 | # "hostname": "166-70-163-62.xmission.com", 9 | # "city": "Midvale", 10 | # "region": "Utah", 11 | # "country": "US", 12 | # "loc": "40.6092,-111.8819", 13 | # "org": "AS6315 Xmission, L.C.", 14 | # "postal": "84047" 15 | #} 16 | 17 | # grab ip info in json format 18 | headers=$(curl -s http://ipinfo.io) 19 | 20 | # fields also contain 'postal', but don't care about it 21 | fields=(ip hostname city region country loc org) 22 | 23 | for i in ${fields[@]} 24 | do 25 | printf "%-20s %s\n" "$i" "$(echo $headers | \ 26 | python -c 'import json,sys;obj=json.load(sys.stdin);print obj["'$i'"]';)" 27 | done 28 | 29 | #EOF 30 | -------------------------------------------------------------------------------- /experiments/processes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # processes.sh - manipulation of linux processes 4 | 5 | echo "PID = $$" 6 | 7 | pid=$$ 8 | if [ -z $pid ]; then 9 | read -p "PID: " pid 10 | fi 11 | 12 | ps -p ${pid:-$$} -o ppid= 13 | 14 | #EOF 15 | -------------------------------------------------------------------------------- /experiments/progress_example.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # progress_example.sh - feedback of progress while waiting for work to complete 4 | 5 | pidile=/var/run/appname/appname.pid 6 | if [ ! -e $pidfile ]; then 7 | touch $pidfile 8 | fi 9 | 10 | CNT=0 11 | 12 | # watch and wait for the process to terminate 13 | echo -n "Waiting for appname to close cleanly" 14 | while [ -e $pidfile ]; 15 | do 16 | echo -n "." 17 | sleep 1 18 | 19 | let CNT=CNT+1 20 | 21 | # process has not terminated in 3 minutes, kill it 22 | if [ $CNT -gt 180 ]; then 23 | rm -f $pidfile 24 | #kill -9 appname 25 | echo "termination timeout exceeded, killing appname" 26 | exit 1 27 | fi 28 | done 29 | echo "the appname has been terminated" 30 | 31 | #EOF 32 | -------------------------------------------------------------------------------- /experiments/randomize_hostname.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # randomize_hostname.sh - randomize hostname 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | # NOTE: This is ***totally untested***, don't use this yet! 9 | 10 | if [ $UID -ne 0 ]; then 11 | echo "ERROR: You must run this as root to change the hostname!" 
12 | exit 1 13 | fi 14 | 15 | if [ "$1" == "permanent"]; then 16 | perm=1 17 | fi 18 | 19 | # store our original hostname 20 | h=/etc/.original_hostname 21 | svc_file=/etc/systemd/system/hostname.service 22 | 23 | # our list of possible hostnames 24 | names=(lust gluttony greed sloth wrath envy pride prudence justice 25 | temperance courage faith hope charity) 26 | 27 | create_service() { 28 | cat << EOF > /etc/systemd/system/hostname.service 29 | [Unit] 30 | Description=Randomize hostname on boot 31 | 32 | [Service] 33 | Type=oneshot 34 | ExecStart=/bin/bash /usr/local/bin/randomize_hostname.sh permanent 35 | 36 | [Install] 37 | WantedBy=multi-user.target 38 | EOF 39 | } 40 | 41 | if [[ $OSTYPE =~ "darwin" ]]; then 42 | # get random element 43 | idx=$( jot -r 1 0 $((${#names[@]} - 1)) ) 44 | sel=${names[idx]} 45 | 46 | # find old hostname (or hostname/uname -a) 47 | orig_hostname=$(sysctl kern.hostname | awk '{print $2}') 48 | if [ -e $h ]; then 49 | if [ $(cat $h) != "$orig_hostname" ]; then 50 | echo "Storing original hostname..." 51 | echo "$orig_hostname" > $h 52 | fi 53 | fi 54 | 55 | if [ $perm -eq 1 ]; then 56 | # for permanent hostname change like this: 57 | hostname -s $sel 58 | else 59 | # temporarily change hostname (or hostname $sel) 60 | echo "scutil –-set HostName $sel" 61 | fi 62 | 63 | if [ $(hostname) == "$sel" ]; then 64 | echo "Successfully changed hostname to: $sel" 65 | 66 | if [ $perm -eq 1 ]; then 67 | echo "Please reboot this machine to complete the change." 68 | fi 69 | else 70 | echo "ERROR: Unable to change hostname! Please try again." 71 | exit 2 72 | fi 73 | 74 | elif [[ $OSTYPE =~ "linux" ]]; then 75 | # get random element 76 | sel=${names[$RANDOM % ${#names[@]} ]} 77 | 78 | # old hostname (or hostname/uname -n/ cat /proc/sys/kernel/hostname) 79 | orig_hostname=$(sysctl kernel.hostname | awk '{print $3}') 80 | if [ -e $h ]; then 81 | if [ $(cat $h) != "$orig_hostname" ]; then 82 | echo "Storing original hostname..." 83 | echo "$orig_hostname" > $h 84 | fi 85 | fi 86 | 87 | if [ $perm -eq 1 ]; then 88 | # create systemd.service file /lib/systemd/system/hostname.service 89 | # should put system-wide custom services in /etc/systemd/system 90 | # or /etc/systemd/user or ~/.config/systemd/user for user mode. 91 | if [ ! -f $svc_file ]; then 92 | create_service_file 93 | #chown /etc/systemd/system/hostname.service 94 | #chmod /etc/systemd/system/hostname.service 95 | fi 96 | 97 | if [ -f /etc/redhat-release ]; then 98 | # Red Hat variant 99 | sed -i "s/^HOSTNAME=.*$/HOSTNAME=$sel/g" /etc/sysconfig/network 100 | elif [[ $(lsb_release -d) =~ [Debian|Ubuntu] ]]; then 101 | # Debian/Ubuntu variants 102 | echo $sel > /etc/hostname 103 | else 104 | echo "ERROR: Unknown distro!" 105 | fi 106 | 107 | # modify /etc/hosts 108 | #sed -i "s/$oldhostname/$sel/g" 109 | else 110 | # temporarily change hostname 111 | echo "hostname $sel" 112 | fi 113 | 114 | if [ $(hostname) == "$sel" ]; then 115 | echo "Successfully changed hostname to: $sel" 116 | 117 | if [ $perm -eq 1 ]; then 118 | : 119 | #echo "Please reboot this machine to complete the change." 120 | fi 121 | else 122 | echo "ERROR: Unable to change hostname! Please try again." 123 | exit 2 124 | fi 125 | 126 | else 127 | echo "ERROR: Unknown OS!" 
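# (aside, not part of the original script: on systemd-based distros the whole
#  "permanent" Linux branch above could likely be reduced to a single call,
#  e.g. `hostnamectl set-hostname "$sel"`, instead of editing /etc/hostname or
#  /etc/sysconfig/network directly -- offered only as a hedged suggestion.)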
128 | fi 129 | 130 | #EOF 131 | -------------------------------------------------------------------------------- /experiments/redirect_logging.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # redirect_logging.sh - redirect logging example 4 | 5 | #xhost + &> /dev/null 6 | 7 | # logging routine. 8 | >/var/log/mylog.log 9 | chmod 666 /var/log/mylog.log 10 | 11 | # assign descriptor 3 as a duplicate STDOUT link, then out to logfile 12 | # good explanation: http://tldp.org/LDP/abs/html/io-redirection.html 13 | exec 3>&1 14 | exec 1>/var/log/my.log 2>&1 15 | 16 | eko () { 17 | # echo args to stdout and log file 18 | echo "$@" >&3 19 | echo "$@" 20 | } 21 | 22 | eko "Hello World!" 23 | 24 | # disable writing to logfile by group/other 25 | chmod 644 /var/log/mylog.log 26 | 27 | #EOF 28 | -------------------------------------------------------------------------------- /experiments/uptime.pl: -------------------------------------------------------------------------------- 1 | #!/usr/bin/perl 2 | 3 | # uptime.pl - an experiment in re-writing uptime for different os's 4 | 5 | use Data::Dumper qw(Dumper); 6 | #use Config; 7 | #print "$Config{'osname'}\n"; 8 | # perl oses: http://alma.ch/perl/perloses.htm 9 | 10 | if ( $^O =~ "linux" ) { 11 | # example uptime from debian linux command line (`uptime`) 12 | # 15:35:50 up 20:01, 1 user, load average: 0.04, 0.02, 0.09 13 | 14 | open(P, '/proc/uptime'); 15 | my $uptime =

<P>; 16 | close(P); 17 | $uptime = (split(m/\s+/, $uptime))[0]; 18 | 19 | } elsif ( $^O =~ "darwin") { 20 | # example uptime from macOS Sierra (`sysctl -n kern.boottime`) 21 | # { sec = 1491070408, usec = 756788 } Sat Apr 1 12:13:28 2017 22 | # or using the actual (`uptime`) utility 23 | # 15:54 up 23:16, 3 users, load averages: 1.24 1.41 1.47 24 | 25 | # get current time 26 | my @datetime = (split / /, `date`); 27 | chomp(@datetime); 28 | my $currtime = $datetime[4]; 29 | 30 | # get up (time) 31 | my @uptime = (split / /, `sysctl -n kern.boottime`, 9); 32 | chomp(@uptime); 33 | my $hours = $uptime[6]; 34 | my $boottime = $uptime[3]; 35 | 36 | # TODO: get users 37 | # TODO: get load average 38 | 39 | # print it out together 40 | print "$currtime\n"; 41 | printf "Last boot date: %s\n", $boottime; 42 | 43 | # printf "Last boot date: %s (%.2f hours up)\n", $boottime, $hours; 44 | # print Dumper \@datetime; 45 | print Dumper \@uptime; 46 | 47 | } else { 48 | print "ERROR: Unknown OS!\n"; 49 | } 50 | -------------------------------------------------------------------------------- /experiments/userinfo.youtah.php: --------------------------------------------------------------------------------

[ The body of userinfo.youtah.php does not survive this plain-text export intact:
  the original file was a small HTML page titled "User Information" with a
  "Collected via Perl" section (a link labelled "Environment Variables in Perl"),
  a "Collected via PHP" section, a "Collected via JavaScript" section, and
  "Go Home" links. The PHP section echoed a DATE/TIME STAMP, the visitor's IP
  address via a getip() helper, a reverse lookup via
  gethostbyaddr($_SERVER['REMOTE_ADDR']), and these $_SERVER fields:
  HTTP_ACCEPT, HTTP_ACCEPT_CHARSET, HTTP_ACCEPT_ENCODING, HTTP_ACCEPT_LANGUAGE,
  HTTP_CONNECTION, HTTP_COOKIE, HTTP_HOST, HTTP_KEEP_ALIVE, HTTP_REFERER,
  HTTP_USER_AGENT, REMOTE_ADDR, REMOTE_HOST, REMOTE_PORT, SERVER_NAME,
  SERVER_PORT, SERVER_PROTOCOL, SERVER_SOFTWARE, GATEWAY_INTERFACE,
  REQUEST_METHOD, REQUEST_TIME, REQUEST_URI, QUERY_STRING, SCRIPT_NAME,
  PATH_TRANSLATED, PHP_SELF, argv, argc, PHP_AUTH_DIGEST, PHP_AUTH_USER,
  PHP_AUTH_PW, and AUTH_TYPE (several others were present but commented out).
  The surrounding HTML markup and the JavaScript section are not recoverable
  from this export. ]
158 | 159 | 160 | 161 | -------------------------------------------------------------------------------- /fixssh.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | # fixssh.py - fix bad ssh keys from server OVER-deployment, a quick hack and 4 | # useless re-engineering of 'ssh-keygen -R ' 5 | 6 | # NOTE: Reinvent the wheel, yup I did it. I fequently deploy a lot of servers 7 | # which are DHCP/DNS'd so when the SSH keys regenerate I always used to get key 8 | # errors, so I have this gem.... before I discovered 'ssh-keygen -R ' 9 | 10 | import sys, os 11 | 12 | def main(): 13 | # print usage if no args are given 14 | sname = os.path.basename(sys.argv[0]) 15 | if len(sys.argv) == 1: 16 | print "Usage: " + sname + " " 17 | sys.exit() 18 | 19 | # assign first arg as hostname 20 | for arg in sys.argv[1:]: 21 | h = arg 22 | 23 | # define files we'll work on 24 | knownhosts = "~/.ssh/known_hosts" 25 | removedhosts = "~/.ssh/removed_hosts" 26 | 27 | # open and remove h from knownhost 28 | f = open(knownhosts, 'r') 29 | lines = f.readlines() 30 | f.close() 31 | f = open(knownhosts, 'w') 32 | for line in lines: 33 | if line != h + "\n": 34 | f.write(line) 35 | f.close() 36 | 37 | # check if removed was successful 38 | if h in open(knownhosts).read(): 39 | print "ERROR: Failed to removed "+ h + " from " + knownhosts 40 | sys.exit() 41 | else: 42 | print "SUCCESS: Removed " + h + " from " + knownhosts 43 | 44 | # write h to a backup file removedhosts 45 | with open(removedhosts, 'a+') as myfile: 46 | myfile.write(h + "\n") 47 | 48 | # check if addition was successful 49 | if h in open(removedhosts).read(): 50 | print "SUCCESS: Added " + h + " to " + removedhosts 51 | else: 52 | print "ERROR: Failed to add " + h + " to " + removedhosts 53 | 54 | if __name__ == "__main__": 55 | main() 56 | -------------------------------------------------------------------------------- /fixssh.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # fixssh.sh - fix bad ssh keys from server OVER-deployment, a quick hack and 4 | # useless re-engineering of 'ssh-keygen -R ' 5 | 6 | # NOTE: Reinvent the wheel, yup I did it. I fequently deploy a lot of servers 7 | # which are DHCP/DNS'd so when the SSH keys regenerate I always used to get key 8 | # errors, so I have this gem.... 
before I discovered 'ssh-keygen -R ' 9 | 10 | if [ $* -ne 1 ]; then 11 | echo "Usage: $0 " 12 | exit 1 13 | fi 14 | 15 | remove="$*" 16 | knownhosts="~/.ssh/known_hosts" 17 | removedhosts="~/.ssh/removed_hosts" 18 | 19 | if [ $(grep -c $remove $knownhosts) -ge 1 ]; then 20 | for server in $remove; do 21 | # copy the key to remove to $removedhosts 22 | cat $hostfile | grep $remove > $removedhosts 23 | echo "SUCCESS: Added $remove to $removedhosts" 24 | 25 | # remove the key ($remove) 26 | cat $hostfile | grep -v $server > ${knownhosts}.$$ 27 | mv -fp ${knownhosts}.$$ $knownhosts 28 | #sed -i '/$remove/'d $knownhosts 29 | echo "SUCCESS: Removed $remove from $knownhosts" 30 | done 31 | else 32 | echo "ERROR: $remove not found in $knownhosts" 33 | fi 34 | 35 | #EOF 36 | -------------------------------------------------------------------------------- /getaddrbyhost.pl: -------------------------------------------------------------------------------- 1 | #!/usr/bin/perl -w 2 | 3 | # getaddrbyhost.pl - lookup a hosts ip by hostname 4 | 5 | use strict; 6 | use Socket; 7 | use Data::Dumper; 8 | 9 | if ( @ARGV != 1 ) { 10 | print "ERROR: You must supply a hostname!\n"; 11 | exit; 12 | } 13 | 14 | my @address = inet_ntoa(scalar(gethostbyname($ARGV[0]))); 15 | 16 | #print Dumper(\@address); 17 | 18 | my $count = 1; 19 | foreach my $n (@address) { 20 | printf("%-15s %s\n", "IPAddress".$count.":", $n); 21 | shift(@address); 22 | $count++; 23 | } 24 | 25 | #EOF 26 | -------------------------------------------------------------------------------- /gethostbyaddr.pl: -------------------------------------------------------------------------------- 1 | #!/usr/bin/perl -w 2 | 3 | # gethostnamebyaddr.pl - show hostname/alias based on ip address 4 | 5 | use strict; 6 | use Socket; 7 | use Data::Dumper; 8 | 9 | if ( @ARGV != 1 ) { 10 | print "ERROR: You must supply an ip address!\n"; 11 | exit; 12 | } 13 | 14 | my $ip = $ARGV[0]; 15 | my @i = gethostbyaddr(inet_aton($ip), AF_INET); 16 | 17 | # define var based on array and shift off stack 18 | my $hn = $i[0]; 19 | shift(@i); 20 | my $alias = $i[0]; 21 | shift(@i); 22 | my $addrtype = $i[0]; 23 | shift(@i); 24 | my $len = $i[0]; 25 | shift(@i); 26 | 27 | # print it all out 28 | printf("%-15s %s\n", "Hostname:", $hn); 29 | printf("%-15s %s\n", "Alias:", $alias); 30 | #printf("%-15s %s\n", "Type:", $addrtype); 31 | #printf("%-15s %s\n", "Length:", $len); 32 | 33 | #foreach (@i) { 34 | # print " = ", inet_ntoa($_), "\n"; 35 | #} 36 | 37 | #EOF 38 | -------------------------------------------------------------------------------- /github_short_url.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # github_short_url.sh - create a short github url 4 | 5 | # HOWTO: https://github.blog/2011-11-10-git-io-github-url-shortener/ 6 | 7 | if [ $# -lt 1 ] || [ $# -gt 2 ]; then 8 | echo "ERROR: You must supply a github url to shorten" 9 | exit 1 10 | fi 11 | 12 | short=() 13 | url="$1" 14 | vanity="$2" 15 | shortener="https://git.io" 16 | 17 | if ! [[ "$url" =~ (github.(com|blog)|githubusercontent.com) ]]; then 18 | echo "ERROR: Only github.com URLs are allowed!" 19 | exit 1 20 | else 21 | if [ ! 
-z ${vanity+x} ]; then 22 | # vanity url was requested 23 | if [[ $OSTYPE =~ "darwin" ]]; then 24 | # work around for ancient version of bash on macOS 25 | while IFS= read -r line; do 26 | short+=( "$line" ) 27 | done < <( curl -si "$shortener" -H "User-Agent: curl/7.58.0" -F "url=${url}" -F "code=${vanity}" | grep -E "(Status|Location): " ) 28 | else 29 | # mapfile, only for bash v4.0+ 30 | mapfile -t short < <( curl -i "$shortener" -H "User-Agent: curl/7.58.0" -F "url=${url}" -F "code=${vanity}" | grep -E "(Status|Location): " ) 31 | fi 32 | else 33 | if [[ $OSTYPE =~ "darwin" ]]; then 34 | # work around for ancient version of bash on macOS 35 | while IFS= read -r line; do 36 | short+=( "$line" ) 37 | done < <( curl -si "$shortener" -H "User-Agent= curl/7.58.0" -F "url=${url}" | grep -E "(Status|Location): " ) 38 | else 39 | # mapfile, only for bash v4.0+ 40 | mapfile -t short < <( curl -H "User-Agent= curl/7.58.0" -i "$shortener" -F "url=${url}" | grep -E "(Status|Location): " ) 41 | fi 42 | fi 43 | 44 | if [[ ${short[0]} =~ "201" ]]; then 45 | echo "Link created: $(echo "${short[1]}" | awk '{print $2}')" 46 | else 47 | # echo "ERROR: Link creation failed! Code $(echo ${short[0]} | sed 's|Status: ||g')" 48 | echo "ERROR: Link creation failed! Code ${short[0]//Status: /}" 49 | fi 50 | fi 51 | 52 | #EOF -------------------------------------------------------------------------------- /launch_tmux.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # launch_tmux.sh - launch my tmux dev env on different machines 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | # date : 02/12/2019 8 | 9 | # ADDITIONAL INFO: 10 | 11 | # INFO: to identify panes easily; 12 | # Ctrl+B + Q (within tmux) 13 | # --- or --- 14 | # tmux display -pt "${TMUX_PANE:?}" '#{pane_index}' 15 | # view panes: #{session_name}:#{window_index}.#{pane_index} 16 | 17 | if [[ $OSTYPE =~ "linux" ]]; then 18 | # get current IP address, to make sure we're on the correct network 19 | IP="$(ip -br -4 addr | grep UP | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b")" 20 | 21 | # check hostnames and apply create env for each machine 22 | if [[ "$(hostname)" =~ "phobos" ]] && [[ "$IP" =~ "73.196" ]]; then 23 | # start a new 'main' session detached 24 | tmux new-session -s "main" -d -n "dev1" 25 | 26 | # set the status bar for this session 27 | #tmux set status-bg default 28 | #tmux set status-fg white 29 | #tmux set status-left '#[fg=green]#S' 30 | tmux set status-right '#[fg=red,bold]w#{window_index}.p#{pane_index} #[fg=default,nobold](#(whoami)@#h) %H:%M:%S %d-%b-%y' 31 | #tmux set status-right '#{session_name}:#{window_index}.#{pane_index} (#(ehoami)@#h) %H:%M:%S %d-%b-%y' 32 | #tmux set status-left-length 20 33 | tmux set status-right-length 80 34 | 35 | ######## create a new window: dev1 (dev1 environment) 36 | #tmux new-window -t main:0 -n "dev1" 37 | #tmux rename-window dev 38 | 39 | # split windows into panes 40 | tmux split-window -h 41 | tmux split-window -h 42 | #tmux select-layout even-horizontal 43 | tmux select-pane -t 1 44 | tmux split-window -v -p 50 45 | tmux select-pane -t 3 46 | tmux split-window -v -p 50 47 | tmux select-pane -t 3 48 | tmux split-window -v -p 20 49 | 50 | # run these commands in created panes 51 | tmux send-keys -t 0 C-z 'cd ~/Code/' Enter 52 | tmux send-keys -t 1 C-z 'ssh file' Enter 53 | tmux send-keys -t 2 C-z 'ssh file' Enter 54 | tmux send-keys -t 3 C-z 'top' Enter 55 | tmux clock-mode -t 4 56 | tmux send-keys -t 5 C-z 'ssh 
file' Enter 57 | tmux select-pane -t main:0 58 | 59 | ######## create a new window: dev2 (dev2 environment) 60 | tmux new-window -t main:1 -n "dev2" 61 | 62 | # split windows into panes 63 | tmux split-window -h 64 | tmux split-window -h 65 | #tmux select-layout even-horizontal 66 | tmux select-pane -t 1 67 | tmux split-window -v -p 50 68 | tmux select-pane -t 3 69 | tmux split-window -v -p 50 70 | 71 | # run these commands in created panes 72 | tmux send-keys -t 0 C-z 'cd ~/Code/ && ls' Enter 73 | tmux select-pane -t main:1 74 | 75 | ######## create a new window: k8s1 (work with k8s cluster) 76 | tmux new-window -t main:2 -n "k8s" 77 | 78 | # split windows into panes 79 | tmux split-window -h 80 | tmux select-layout even-horizontal 81 | tmux select-pane -t 0 82 | tmux split-window -v -p 50 83 | tmux select-pane -t 2 84 | tmux split-window -v -p 50 85 | 86 | ######## create a new window: admin (for sysadmin-y things) 87 | tmux new-window -t main:3 -n "admin" 88 | 89 | # split windows into panes 90 | tmux split-window -h 91 | tmux split-window -h 92 | tmux select-layout even-horizontal 93 | tmux select-pane -t 0 94 | tmux split-window -v -p 50 95 | tmux select-pane -t 2 96 | tmux split-window -v -p 50 97 | tmux select-pane -t 4 98 | tmux split-window -v -p 50 99 | 100 | # run these commands in created panes 101 | tmux send-keys -t 0 C-z 'top' Enter 102 | tmux send-keys -t 1 C-z 'last' Enter 103 | tmux send-keys -t 2 C-z 'who' Enter 104 | tmux send-keys -t 3 C-z 'ssh file' Enter 105 | tmux select-pane -t 0 106 | 107 | ######## create a new window: misc 108 | tmux new-window -t main:4 -n "misc" 109 | 110 | # split windows into panes 111 | tmux split-window -h 112 | tmux split-window -h 113 | tmux select-layout even-horizontal 114 | tmux select-pane -t 0 115 | tmux split-window -v -p 50 116 | tmux select-pane -t 2 117 | tmux split-window -v -p 50 118 | tmux select-pane -t 4 119 | tmux split-window -v -p 50 120 | 121 | # run these commands in created panes 122 | tmux send-keys -t 0 C-z 'echo command goes here' Enter 123 | tmux send-keys -t 1 C-z 'ssh file' Enter 124 | tmux select-pane -t 0 125 | 126 | # switch focus back to main window, pane 0 127 | tmux select-window -t 0 128 | tmux select-pane -t 0 129 | 130 | # attach to main session 131 | tmux -2 attach-session -t main 132 | 133 | # check hostnames and apply create env for each machine 134 | elif [[ "$(hostname)" =~ (deimos|ISFL) ]] && [[ "$IP" =~ "76.133" ]]; then 135 | # start a new 'main' session detached 136 | tmux new-session -s "main" -d -n "dev" 137 | 138 | # set the status bar for this session 139 | tmux set status-right '#[fg=red,bold]w#{window_index}.p#{pane_index} #[fg=default,nobold](#(whoami)@#h) %H:%M:%S %d-%b-%y' 140 | tmux set status-right-length 80 141 | 142 | ######## create a new window: dev (dev environment) 143 | # split windows into panes 144 | tmux split-window -h 145 | #tmux select-layout even-horizontal 146 | tmux select-pane -t 1 147 | tmux split-window -v -p 50 148 | 149 | # run these commands in created panes 150 | tmux send-keys -t 0 C-z 'cd ~/Code/' Enter 151 | tmux send-keys -t 1 C-z 'ssh file' Enter 152 | # dont forget to add user to sudoers like this; 153 | # username ALL=(ALL) NOPASSWD: /usr/local/bin/sysinfo.sh 154 | tmux send-keys -t 2 'if ! 
/usr/local/bin/sysinfo.sh; then curl -sSL https://git.io/fhQAQ | sudo bash; fi' Enter 155 | tmux select-pane -t main:0 156 | 157 | ######## create a new window: admin (for sysadmin-y things) 158 | tmux new-window -t main:1 -n "admin" 159 | 160 | # split windows into panes 161 | tmux split-window -h 162 | tmux select-layout even-horizontal 163 | tmux select-pane -t 0 164 | tmux split-window -v -p 50 165 | tmux select-pane -t 2 166 | tmux split-window -v -p 50 167 | 168 | # run these commands in created panes 169 | tmux send-keys -t 0 C-z 'ssh file' Enter 170 | tmux send-keys -t 1 C-z 'who' Enter 171 | tmux send-keys -t 2 C-z 'last' Enter 172 | tmux send-keys -t 3 'if ! /usr/local/bin/sysinfo.sh; then curl -sSL https://git.io/fhQAQ | sudo bash; fi' Enter 173 | tmux select-pane -t 0 174 | 175 | ######## create a new window: k8s1 (work with k8s cluster 1) 176 | tmux new-window -t main:2 -n "k8s1" 177 | 178 | # split windows into panes 179 | tmux split-window -h 180 | tmux select-layout even-horizontal 181 | tmux select-pane -t 0 182 | tmux split-window -v -p 50 183 | tmux select-pane -t 2 184 | tmux split-window -v -p 50 185 | 186 | # run these commands in created panes 187 | #tmux send-keys -t 0 C-z 'ssh k8s-node1' Enter 188 | #tmux send-keys -t 1 C-z 'ssh k8s-node2' Enter 189 | #tmux send-keys -t 2 C-z 'ssh k8s-node3' Enter 190 | #tmux send-keys -t 3 C-z 'ssh k8s-node4' Enter 191 | #tmux select-pane -t 0 192 | 193 | ######## create a new window: k8s2 (work with k8s cluster 2) 194 | tmux new-window -t main:3 -n "k8s2" 195 | 196 | # split windows into panes 197 | tmux split-window -h 198 | tmux select-layout even-horizontal 199 | tmux select-pane -t 0 200 | tmux split-window -v -p 50 201 | tmux select-pane -t 2 202 | tmux split-window -v -p 50 203 | 204 | # run these commands in created panes 205 | #tmux send-keys -t 0 C-z 'ssh n1' Enter 206 | #tmux send-keys -t 1 C-z 'ssh n2' Enter 207 | #tmux send-keys -t 2 C-z 'ssh n3' Enter 208 | #tmux send-keys -t 3 C-z 'ssh n4' Enter 209 | #tmux select-pane -t 0 210 | 211 | ######## create a new window: misc 212 | tmux new-window -t main:4 -n "misc" 213 | 214 | # split windows into panes 215 | tmux split-window -h 216 | tmux select-layout even-horizontal 217 | tmux select-pane -t 0 218 | tmux split-window -v -p 50 219 | tmux select-pane -t 2 220 | tmux split-window -v -p 50 221 | 222 | # run these commands in created panes 223 | tmux send-keys -t 0 C-z 'ssh file' Enter 224 | tmux select-pane -t 0 225 | 226 | # switch focus back to main window, pane 0 227 | tmux select-window -t 0 228 | tmux select-pane -t 0 229 | 230 | # attach to main session 231 | tmux -2 attach-session -t main 232 | else 233 | echo "ERROR: Not implemented (UNKNOWN HOST/NETWORK)!" 
234 | exit 1 235 | fi 236 | elif [[ $OSTYPE =~ "darwin" ]]; then 237 | 238 | # grab internal IPv4 address and check it 239 | IP="$(ifconfig en0 | grep 'inet ' | awk '{print $2}')" 240 | 241 | if [[ "$(hostname -s)" =~ "MBP" ]] && [[ "$IP" =~ "7.10" ]]; then 242 | # start a new 'main' session detached 243 | tmux new-session -s "main" -d -n "dev" 244 | 245 | # set the status bar for this session 246 | #pmset -g batt | egrep "([0-9]+\%).*" -o | cut -f1 -d';' 247 | tmux set status-right '#[fg=red,bold]w#{window_index}/p#{pane_index} #[fg=default,nobold](#(whoami)@#h) %H:%M:%S %d-%b-%y' 248 | tmux set status-right-length 80 249 | 250 | ######## create a new window: dev1 (dev1 environment) 251 | # split windows into panes 252 | tmux split-window -h 253 | #tmux select-layout even-horizontal 254 | tmux select-pane -t 1 255 | tmux split-window -v -p 50 256 | 257 | # run these commands in created panes 258 | tmux send-keys -t 0 C-z 'cd ~/Code/' Enter 259 | tmux send-keys -t 0 'if ! /usr/local/bin/sysinfo.sh; then curl -sSL https://git.io/fhQAQ | bash; fi; ls' Enter 260 | tmux send-keys -t 1 C-z 'ssh file' Enter 261 | tmux send-keys -t 1 "ls" C-m 262 | tmux send-keys -t 2 C-z 'ssh file' Enter 263 | tmux send-keys -t 2 "uptime" C-m 264 | tmux select-pane -t main:0 265 | 266 | ######## create a new window: plex (plex environment) 267 | tmux new-window -t main:1 -n "plex" 268 | 269 | # split windows into panes 270 | tmux split-window -h 271 | #tmux select-layout even-horizontal 272 | tmux select-pane -t 1 273 | tmux split-window -v -p 50 274 | 275 | # run these commands in created panes 276 | tmux send-keys -t 0 C-z 'ssh plex' Enter 277 | tmux send-keys -t 0 "cd /mnt/plex/ && ls -l" C-m 278 | tmux send-keys -t 1 C-z 'ssh plex' Enter 279 | tmux send-keys -t 1 "dfh" C-m 280 | tmux send-keys -t 2 C-z 'ssh plex' Enter 281 | tmux send-keys -t 2 "docker logs plex" C-m 282 | tmux select-pane -t main:1 283 | 284 | ######## create a new window: transmission (transmission environment) 285 | tmux new-window -t main:2 -n "transmission" 286 | 287 | # split windows into panes 288 | tmux split-window -h 289 | #tmux select-layout even-horizontal 290 | tmux select-pane -t 1 291 | tmux split-window -v -p 50 292 | 293 | # run these commands in created panes 294 | tmux send-keys -t 0 C-z 'ssh transmission' Enter 295 | tmux send-keys -t 0 "cd ~/completed/ && ls" C-m 296 | tmux send-keys -t 1 C-z 'ssh transmission' Enter 297 | tmux send-keys -t 1 "cd ~/completed/" C-m 298 | tmux send-keys -t 2 C-z 'ssh transmission' Enter 299 | tmux send-keys -t 2 "dfh" C-m 300 | tmux select-pane -t main:2 301 | 302 | ######## create a new window: k8s-prod (work with production k8s cluster) 303 | tmux new-window -t main:3 -n "k8s-prod" 304 | 305 | # split windows into panes 306 | tmux split-window -h 307 | tmux select-layout even-horizontal 308 | tmux select-pane -t 0 309 | tmux split-window -v -p 50 310 | tmux select-pane -t 2 311 | tmux split-window -v -p 50 312 | tmux select-pane -t main:3 313 | 314 | # run these commands in created panes 315 | tmux send-keys -t 0 C-z 'ssh k8s-node1' Enter 316 | tmux send-keys -t 0 "kubectl get nodes && kubectl get po --all-namespaces" C-m 317 | tmux send-keys -t 1 C-z 'ssh k8s-node2' Enter 318 | tmux send-keys -t 2 C-z 'ssh k8s-node3' Enter 319 | tmux send-keys -t 3 C-z 'ssh k8s-node4' Enter 320 | 321 | ######## create a new window: k8s-prod (work with production k8s cluster) 322 | tmux new-window -t main:4 -n "k8s-test" 323 | 324 | # split windows into panes 325 | tmux split-window -h 326 | tmux 
select-layout even-horizontal 327 | tmux select-pane -t 0 328 | tmux split-window -v -p 50 329 | tmux select-pane -t 2 330 | tmux split-window -v -p 50 331 | tmux select-pane -t main:4 332 | 333 | # run these commands in created panes 334 | tmux send-keys -t 0 C-z 'ssh k8s-test-node1' Enter 335 | tmux send-keys -t 0 "kubectl get nodes && kubectl get po --all-namespaces" C-m 336 | tmux send-keys -t 1 C-z 'ssh k8s-test-node2' Enter 337 | tmux send-keys -t 2 C-z 'ssh k8s-test-node3' Enter 338 | tmux send-keys -t 3 C-z 'ssh k8s-test-node4' Enter 339 | 340 | ######## create a new window: admin (server/network administration) 341 | tmux new-window -t main:5 -n "admin" 342 | 343 | # split windows into panes 344 | tmux split-window -h 345 | tmux select-layout even-horizontal 346 | tmux select-pane -t 0 347 | tmux split-window -v -p 50 348 | tmux select-pane -t 2 349 | tmux split-window -v -p 50 350 | tmux select-pane -t main:5 351 | 352 | # run these commands in created panes 353 | tmux send-keys -t 0 C-z 'ssh file' Enter 354 | # tmux send-keys -t 0 "awk '/^md/ {printf \"%s: \", $1}; /blocks/ {print $NF}' /proc/mdstat;" C-m 355 | tmux send-keys -t 1 C-z 'ssh nuc1' Enter 356 | tmux send-keys -t 1 "df -h && uptime && ls" C-m 357 | tmux send-keys -t 2 C-z 'ssh ns1' Enter 358 | tmux send-keys -t 2 "tail -n 30 /var/log/dnsmasq/dnsmasq.log" C-m 359 | tmux send-keys -t 3 C-z 'ssh ns2' Enter 360 | tmux send-keys -t 3 "pihole -c -e" C-m "curl -s http://${PIHOLE}/admin/api.php?summaryRaw | jq '.' | grep -E 'dns_queries_today|ads_blocked|percent'" C-m 361 | tmux select-pane -t main:5 362 | 363 | ######## create a new window: misc 364 | tmux new-window -t main:6 -n "misc" 365 | 366 | # split windows into panes 367 | tmux split-window -h 368 | tmux select-layout even-horizontal 369 | tmux select-pane -t 1 370 | tmux split-window -v -p 50 371 | tmux select-pane -t main:6 372 | 373 | # run these commands in created panes 374 | tmux send-keys -t 0 C-z 'cd ~' Enter 375 | tmux send-keys -t 1 C-z 'cd ~' Enter 376 | tmux send-keys -t 2 C-z 'cd ~' Enter 377 | tmux send-keys -t 3 C-z 'cd ~' Enter 378 | 379 | # switch focus back to main window, pane 0 380 | tmux select-window -t 0 381 | tmux select-pane -t 0 382 | 383 | # attach to main session 384 | tmux -2 attach-session -t main 385 | else 386 | echo "ERROR: Not implemented (UNKNOWN HOST/NETWORK)!" 387 | exit 1 388 | fi 389 | else 390 | # TODO: Add cygwin? 391 | echo "ERROR: Unknown \$OSTYPE, bailing out now!" 392 | exit 1 393 | fi 394 | 395 | #EOF 396 | -------------------------------------------------------------------------------- /macgen.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | # macgen.py script to generate a MAC address for vm http://red.ht/2pnu0Xy 4 | 5 | # TODO: extend functionality 6 | 7 | import random 8 | 9 | def randomMAC(): 10 | mac = [ 0x00, 0x16, 0x3e, 11 | random.randint(0x00, 0x7f), 12 | random.randint(0x00, 0xff), 13 | random.randint(0x00, 0xff) ] 14 | return ':'.join(map(lambda x: "%02x" % x, mac)) 15 | # 16 | print randomMAC() 17 | -------------------------------------------------------------------------------- /measure_latency.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # measure_latency.sh - a dirty measure of latency via ping 4 | 5 | # NOTE: This was only written so I could test latency quickly if I needed 6 | # to while in the terminal (which is where I spend all my time). 
If 7 | # you really want to view latency over time, use smokeping 8 | 9 | usage() { 10 | echo " e.g. $0 <hostname or IP> <packet count>" 11 | } 12 | 13 | if [ $# -ne 2 ]; then 14 | echo "ERROR: You must supply a hostname/IP to measure & a packet count!" 15 | usage 16 | exit 1 17 | fi 18 | 19 | server=$1 20 | count=$2 21 | success=0 22 | 23 | if [[ $OSTYPE =~ "darwin" ]]; then 24 | # mac ping has different options than on linux 25 | while read line 26 | do 27 | # check if we successfully ran 28 | if [[ $line =~ "round-trip" ]]; then 29 | latency=$(echo $line | awk -F "/" '/round-trip/ {print $7}') 30 | echo "latency to $server with $count packets is: $latency" 31 | success=1 && exit 0 32 | fi 33 | 34 | # check packet loss if we didn't run correctly 35 | if [[ $line =~ "transmitted" ]] && [[ $success -ne 1 ]]; then 36 | pct=$(echo $line | awk '/transmitted/ {print $(NF-2)}' | \ 37 | awk -F. '{print $1}') 38 | 39 | # check packet loss 40 | if [ $pct -eq 100 ]; then 41 | loss=$(echo $line |awk '/tted/ {print $(NF-2),$(NF-1),$NF}') 42 | echo "latency measurement failed: $loss" 43 | fi 44 | fi 45 | done < <(ping -q -c $count -i 0.2 -t 3 $server) 46 | 47 | elif [[ $OSTYPE =~ "linux" ]]; then 48 | # http://homepage.smc.edu/morgan_david/cs70/assignments/ping-latency.htm 49 | while read line 50 | do 51 | # check if we successfully ran 52 | if [[ $line =~ "rtt" ]]; then 53 | latency=$(echo $line | awk -F "/" '/rtt/ {print $5 " ms"}') 54 | echo "latency to $server with $count packets is: $latency" 55 | success=1 && exit 0 56 | fi 57 | 58 | # check packet loss if we didn't run correctly 59 | if [[ $line =~ "transmitted" ]] && [[ $success -ne 1 ]]; then 60 | pct=$(echo $line |awk '/transmitted/ {print $6}' | sed 's/%//g') 61 | 62 | # check packet loss 63 | if [ $pct -eq 100 ]; then 64 | loss=$(echo $line |awk '/mitted/ {print $6" "$7" "$8}' | \ 65 | sed 's/,//g') 66 | echo "latency measurement failed: $loss" 67 | fi 68 | fi 69 | done < <(ping -q -c $count -i 0.2 -w 3 $server) 70 | 71 | else 72 | echo "ERROR: Unknown OS ($OSTYPE)" 73 | fi 74 | 75 | #EOF 76 | -------------------------------------------------------------------------------- /mount_iso.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # date: 04/19/2009 4 | # author: Chad Mayfield (http://www.chadmayfield.com/) 5 | # license: gpl v3 (http://www.gnu.org/licenses/gpl-3.0.txt) 6 | 7 | # I have to mount ISO files at work all the time and quite often 8 | # at home as well. I created this utility to automate the process 9 | # of creating a mount point and mounting the ISO. 10 | 11 | #+-- Check to make sure that user is root 12 | if [ "$(id -u)" != "0" ]; then 13 | echo "ERROR: You must be root to run this utility!" 14 | exit 1; 15 | fi 16 | 17 | usage() { 18 | echo "Usage: $0 <mount|unmount> </path/to/image.iso>" 19 | exit 1; 20 | } 21 | 22 | #+-- Check arguments, if none print usage 23 | if [ $# -eq 0 ]; then 24 | usage 25 | fi 26 | 27 | isonamewithpath=$2 #+-- Save the full path of the iso 28 | isonameonly=${2##*/} #+-- Strip off the path from the iso 29 | mntdirname=`echo $isonameonly | sed -e 's/\.[a-zA-Z0-9_-]\+$//'` 30 | ismounted=`cat /proc/mounts | grep -c $mntdirname` 31 | 32 | mountiso() { 33 | #+-- Check if /mnt/$dirname is already there 34 | if [ ! -d /mnt/$mntdirname ]; then 35 | mkdir -p /mnt/$mntdirname && echo "SUCCESS: Created directory: /mnt/$mntdirname" 36 | fi 37 | 38 | #+-- Check if ISO is already mounted, if not mount it 39 | if [ $ismounted -ge 1 ]; then 40 | echo "ERROR: File $isonamewithpath already mounted!"
41 | else 42 | mount -o loop $isonamewithpath /mnt/$mntdirname && echo "SUCCESS: File $isonameonly mounted" 43 | #cd /mnt/$dirname 44 | fi 45 | } 46 | 47 | unmountiso() { 48 | if [ $ismounted -ge 1 ]; then 49 | umount /mnt/$mntdirname 50 | echo "Unmounted $isonamewithpath" 51 | rm -rf /mnt/$mntdirname 52 | echo "Removed /mnt/$mntdirname" 53 | else 54 | echo "ERROR: File $isonamewithpath is not mounted" 55 | exit 1; 56 | fi 57 | } 58 | 59 | if [ $# -eq 2 ]; then 60 | if [ "$1" = "mount" ]; then 61 | mountiso 62 | elif [ "$1" = "unmount" ]; then 63 | unmountiso 64 | else 65 | echo "ERROR: Invalid option specified: $1" 66 | fi 67 | else 68 | usage 69 | fi 70 | 71 | -------------------------------------------------------------------------------- /myrepos_status.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # myrepos_status.sh - get status of repos that have changes to commit under current tree 4 | 5 | for i in $(find . -name .git | sed 's/\/.git//g' | sort) 6 | do 7 | cd $i 8 | 9 | if [ -e .git ]; then 10 | repo=$(git config --local -l | grep "remote.origin.url" | awk -F "=" '{print $2}') 11 | 12 | # only show repos that have changes 13 | if [ $(git status -s | wc -l | awk '{print $1}') -gt 0 ]; then 14 | echo -e "Repo : \033[1m$repo\033[0m" 15 | echo -e "Path : \033[0;34m$(pwd)\033[0m" 16 | git status -s 17 | fi 18 | fi 19 | 20 | cd - > /dev/null 2>&1 21 | done 22 | 23 | #EOF 24 | -------------------------------------------------------------------------------- /myrepos_update.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # myrepos_update.sh - update all my repos under the current tree 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | fail=0 9 | keys=( "$HOME/.ssh/hosting/src/id_ed25519" ) 10 | 11 | bold=$(tput bold) 12 | normal=$(tput sgr0) 13 | black=$(tput setaf 0) # COLOR_BLACK/RGB:0,0,0 14 | red=$(tput setaf 1) # COLOR_RED/RGB:255,0,0 15 | green=$(tput setaf 2) # COLOR_GREEN/RGB:0,255,0 16 | yellow=$(tput setaf 3) # COLOR_YELLOW/RGB:255,255,0 17 | blue=$(tput setaf 4) # COLOR_BLUE/RGB:0,0,255 18 | magenta=$(tput setaf 5) # COLOR_MAGENTA/RGB:255,0,255 19 | cyan=$(tput setaf 6) # COLOR_CYAN/RGB:0,255,255 20 | white=$(tput setaf 7) # COLOR_WHITE/RGB:255,255,255 21 | 22 | for i in ${keys[@]}; do 23 | if ! [ -f "$i" ]; then 24 | echo "Key doesn't exist: $i" 25 | let fail+=1 26 | fi 27 | done 28 | 29 | if [ "$fail" -ne 0 ]; then 30 | echo "ERROR: Unable to find key(s)!" 31 | exit 1 32 | fi 33 | 34 | echo "Checking for ssh-agent..." 35 | 36 | # find all ssh-agent sockets 37 | #$find / -uid $(id -u) -type s -name *agent.\* 2>/dev/null() 38 | 39 | # set var if not set 40 | oursock="$HOME/.ssh/.ssh-agent.$HOSTNAME.sock" 41 | 42 | # is $SSH_AUTH_SOCK set 43 | if [ -z "$SSH_AUTH_SOCK" ]; then 44 | export SSH_AUTH_SOCK=$oursock 45 | else 46 | # it is, but check to make sure it's not keyring 47 | if ! [ "$SSH_AUTH_SOCK" = $oursock ]; then 48 | export SSH_AUTH_SOCK=$oursock 49 | fi 50 | fi 51 | 52 | # if we don't have a socket, start ssh-agent 53 | if [ ! -S "$SSH_AUTH_SOCK" ]; then 54 | echo "Not found! Starting ssh-agent..." 55 | eval "$(ssh-agent -a "$SSH_AUTH_SOCK")" >/dev/null 56 | start_rv=$? 57 | echo $SSH_AGENT_PID > $HOME/.ssh/.ssh-agent.$HOSTNAME.sock.pid 58 | 59 | if [ "$start_rv" -eq 0 ]; then 60 | echo "Started: $SSH_AUTH_SOCK (PID: $SSH_AGENT_PID)" 61 | else 62 | echo "ERROR: Failed to start ssh-agent!
(EXIT: $start_rv)" 63 | exit 1 64 | fi 65 | else 66 | echo "Found: $SSH_AUTH_SOCK" 67 | fi 68 | 69 | ## recreate pid 70 | #if [ -z $SSH_AGENT_PID ]; then 71 | # export SSH_AGENT_PID=$(cat $HOME/.ssh/.ssh-agent.$HOSTNAME.sock.pid) 72 | #fi 73 | 74 | # use the correct grammar for fun! 75 | if [ "${#keys[@]}" -eq 1 ]; then 76 | echo "Checking for key..." 77 | else 78 | echo "Checking for keys..." 79 | fi 80 | 81 | for i in "${keys[@]}"; do 82 | # grab key fingerprint 83 | cmp_key=$(ssh-keygen -lf $i) 84 | 85 | # if key fingerprint not found in fingerprint list, add it 86 | if [ $(ssh-add -l | grep -c "$cmp_key") -eq 0 ]; then 87 | echo "Key not found! Adding it..." 88 | ssh-add $i 89 | add_rv=$? 90 | 91 | if [ $add_rv -eq 0 ]; then 92 | echo "Key added." 93 | fi 94 | else 95 | echo "Key already added: $(echo $cmp_key | awk '{print $2}')" 96 | fi 97 | done 98 | 99 | # iterate through all child dirs to find git repos 100 | DIR="$(pwd)/" 101 | echo "Pulling updates... (${DIR})" 102 | for i in $(find "$DIR" -name .git | grep -vE "go/src/|.old_repos" | sed 's/\/.git//g' | sort) 103 | do 104 | ( 105 | cd "$i" 106 | 107 | if [ -e .git ]; then 108 | repo=$(git config --local -l | grep "remote.origin.url" | awk -F "=" '{print $2}') 109 | echo " " 110 | 111 | if [[ $repo =~ "@" ]]; then 112 | repotype="SSH" 113 | else 114 | repotype="HTTPS" 115 | fi 116 | 117 | echo "=====================================================" 118 | echo "${bold}Found repo ($repotype): ${yellow}$repo${normal}" 119 | echo "Pulling latest changes..." 120 | git pull 121 | #git pull --tags 122 | fi 123 | cd - > /dev/null 2>&1 124 | ) 125 | done 126 | 127 | #EOF 128 | -------------------------------------------------------------------------------- /polarhome.pl: -------------------------------------------------------------------------------- 1 | #!/usr/bin/perl 2 | 3 | # polarhome.pl - simplify ssh connection to polarhome.com servers without 4 | # having to remember usernames and ports (port info at bottom) 5 | 6 | # author : Chad Mayfield (chad@chd.my) 7 | # license : gplv3 8 | 9 | use warnings; 10 | use strict; 11 | 12 | if ( @ARGV != 1 ) { 13 | print "ERROR: You must supply a host to connect!\n"; 14 | print " e.g. $0 \n"; 15 | exit 1; 16 | } 17 | 18 | our $user; 19 | my $host = shift @ARGV; 20 | my $hostck = "-oStrictHostKeyChecking=no"; 21 | my $ident = "-i ~/.ssh/id_rsa_polarhome"; 22 | 23 | # our hash with host and port 24 | my %hosts = ( 25 | vax => "705", 26 | freebsd => "715", 27 | solaris => "725", 28 | openbsd => "735", 29 | netbsd => "745", 30 | debian => "755", 31 | alpha => "765", 32 | aix => "775", 33 | hpux => "785", 34 | redhat => "795", 35 | ultrix => "805", 36 | qnx => "815", 37 | irix => "825", 38 | tru64 => "835", 39 | openindiana => "845", 40 | suse => "855", 41 | openstep => "865", 42 | mandriva => "875", 43 | ubuntu => "885", 44 | scosysv => "895", 45 | unixware => "905", 46 | dragonfly => "915", 47 | centos => "925", 48 | miros => "935", 49 | hurd => "945", 50 | minix => "955", 51 | raspberrypi => "975", 52 | # plan9 => "", 53 | ); 54 | 55 | # debug 56 | #print "size of hash: " . keys( %hosts ) . 
" hosts.\n--------\n"; 57 | 58 | # ik there's a better way to do this, but i'm too lazy to fight it right now 59 | my $count =0; 60 | # iterate through hash and check if @ARGV[0] exists 61 | while (my ($key, $value) = each(%hosts)) { 62 | if ( $key eq "$host" ) { 63 | # dynamically set username based on host 64 | if ( $key =~ m/(hpux|minix|openindiana|qnx|solaris|tru64)/ ) { 65 | $user = "user1"; 66 | } elsif ( $key =~ m/(aix|alpha|solaris|ultrix|vax)/) { 67 | $user = "user2"; 68 | } else { 69 | print "ERROR: You don't have an account on $host!\n"; 70 | exit 1; 71 | } 72 | # ssh keys must be setup or this will not work 73 | exec("ssh -tt $ident $hostck -l $user -p $value $key.polarhome.com"); 74 | } else { 75 | $count++ 76 | } 77 | 78 | # debug 79 | #print "$key => $value\n"; 80 | } 81 | 82 | if ( $count >= keys(%hosts) ) { 83 | print "ERROR: Host ($host) not found! Please check it and try again.\n"; 84 | } 85 | 86 | # ----------------------------------------------------------------------------- 87 | # POLARHOME HOSTNAME | PORT | DIRECT PORTS SERVICE PORTS 88 | # ========================================= ============= 89 | # vax.polarhome.com 70x 2000-2999 ftp xx1 90 | # freebsd.polarhome.com 71x 10000-14999 telnet xx2 91 | # solaris.polarhome.com 72x 25000-29999 http xx3 92 | # openbsd.polarhome.com 73x 15000-19999 https xx4 93 | # netbsd.polarhome.com 74x 20000-24999 ssh xx5 94 | # debian.polarhome.com 75x 30000-34999 pop3 xx6 95 | # alpha.polarhome.com 76x 3000-3999 imap xx7 96 | # aix.polarhome.com 77x 35000-39999 usermin xx8 97 | # hpux.polarhome.com 78x 40000-44999 imaps xx9 98 | # redhat.polarhome.com 79x 5000-9999 99 | # ultrix.polarhome.com 80x 1025-1999 100 | # qnx.polarhome.com 81x 4000-4999 101 | # irix.polarhome.com 82x 45000-46999 102 | # tru64.polarhome.com 83x 47000-49999 103 | #openindiana.polarhome.com 84x 104 | # suse.polarhome.com 85x 59000-59999 105 | # openstep.polarhome.com 86x 52000-52999 106 | # mandriva.polarhome.com 87x 54000-55999 107 | # ubuntu.polarhome.com 88x 56000-58999 108 | # scosysv.polarhome.com 89x 61000-61999 109 | # unixware.polarhome.com 90x 60000-60999 110 | # dragonfly.polarhome.com 91x 62000-62999 111 | # centos.polarhome.com 92x 63000-63999 112 | # miros.polarhome.com 93x 64000-64100 113 | # hurd.polarhome.com 94x 114 | # minix.polarhome.com 95x 115 | #raspberrypi.polarhome.com 97x 116 | # plan9.polarhome.com 50000-51999 117 | # ----------------------------------------------------------------------------- 118 | 119 | #EOF 120 | -------------------------------------------------------------------------------- /powerbank.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # powerbank.sh - calculate number of charges of a powerbank 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | # TODO 9 | # + write the script! 
10 | # algo: (labeled powerbank capacity x 3.7 / output voltage) * 0.85 / capacity of device = # recharges 11 | # example: (16000 x 3.7 / 5) x 0.85 / 1821 = 5.5 charges 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | #EOF -------------------------------------------------------------------------------- /randomize_mac.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # randomize_mac.sh - randomize mac address 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | # NOTE: This was meant to be a quick and easy way to get around things like 9 | # free wifi time limits, nothing more. It was a quick experiment to build a 10 | # tool myself like macchanger or spoofMAC. Even though it was an experiment 11 | # I began using it all the time. 12 | 13 | # if not already root, become root 14 | if [ $UID -ne 0 ]; then 15 | echo "ERROR: You must be root!" 16 | exit 1 17 | fi 18 | 19 | revert=0 20 | if [ $# -eq 1 ]; then 21 | if ! [[ $1 =~ "revert" ]]; then 22 | echo "ERROR: Invalid option, it must be --revert to revert back" 23 | echo "to the default MAC address!" 24 | echo " e.g. $0 --revert" 25 | exit 1 26 | else 27 | revert=1 28 | fi 29 | fi 30 | 31 | # store original hw mac here 32 | sentinel=~/.default_hw_mac 33 | 34 | # we'll default to openssl for random hex generation 35 | command -v openssl >/dev/null 2>&1 && i_can_haz_ssl=1 || i_can_haz_ssl=0 36 | 37 | # let's define a few default prefixes so our mac addresses look legitimate 38 | # entire list is here: http://standards.ieee.org/regauth/oui/oui.txt 39 | # Unicast: x2:, x6:, xA:, xE: 40 | # Multicast: x3:, x7:, xF: 41 | # NSA J125: 00:20:91 42 | # VMware: 00:50:56:00 to 00:50:56:3F 43 | # Xen: 00:16:3E 44 | # VirtualBox: 0A:00:27 (v5) 45 | # 08:00:27 (v4) 46 | 47 | # define prefix from above 48 | prefix=('08:00:27:' '0A:00:27:' '00:16:3E:' '00:50:56:' '00:20:91:') 49 | 50 | # the meat 51 | if [[ $OSTYPE =~ "darwin" ]]; then 52 | idx=$( jot -r 1 0 $((${#prefix[@]} - 1)) ) 53 | sel=${prefix[idx]} 54 | 55 | default_iface=$(route -n get default | grep interface | awk '{print $2}') 56 | default_mac=$(ifconfig $default_iface | grep ether | awk '{print $2}') 57 | 58 | if [ $revert -eq 1 ]; then 59 | if [ ! -f $sentinel ]; then 60 | echo "ERROR: Unable to revert, can't find $sentinel! Try rebooting." 61 | exit 1 62 | else 63 | original_mac=$(cat $sentinel) 64 | echo "Original MAC address found: $original_mac" 65 | fi 66 | 67 | # change it back to default 68 | echo "Reverting it back..." 69 | ifconfig $default_iface ether $original_mac 70 | 71 | # verify that it was changed 72 | test_mac=$(ifconfig $default_iface | grep ether | awk '{print $2}') 73 | if [ "$test_mac" == "$original_mac" ]; then 74 | echo "Successfully changed MAC!" 75 | rm -f $sentinel 76 | else 77 | echo "MAC Address change unsuccessful."
78 | exit 1 79 | fi 80 | exit 1 81 | fi 82 | 83 | # set a sentinel file to keep track of real mac 84 | if [ -f $sentinel ]; then 85 | if [ $(cat $sentinel) != "$default_mac" ]; then 86 | # keep the mac in sentinel as default 87 | default_mac=$(cat $sentinel) 88 | fi 89 | else 90 | # first run, have to touch or it dies 91 | touch $sentinel 92 | echo $default_mac > $sentinel 93 | fi 94 | 95 | printf "%-20s %s\n" "Default Interface:" $default_iface 96 | printf "%-20s %s\n" "Default MAC Address:" $default_mac 97 | 98 | # generate part of a new mac address 99 | if [ "$i_can_haz_ssl" -eq "1" ]; then 100 | random_mac=$(openssl rand -hex 3 | sed 's/\(..\)/\1:/g; s/.$//') 101 | new_mac="${sel}${random_mac}" 102 | printf "%-20s %s\n" "Random MAC Address:" ${new_mac} 103 | else 104 | #random_mac=$() 105 | echo "Coming Soon!" 106 | fi 107 | 108 | # make the change 109 | ifconfig $default_iface ether $new_mac 110 | 111 | # verify that it was changed 112 | test_mac=$(ifconfig $default_iface | grep ether | awk '{print $2}') 113 | if [ "$test_mac" == "$new_mac" ]; then 114 | echo "Successfully changed MAC!" 115 | else 116 | echo "MAC Address change unsuccessful." 117 | fi 118 | 119 | elif [[ $OSTYPE =~ "linux" ]]; then 120 | sel=${prefix[$RANDOM % ${#prefix[@]} ]} 121 | 122 | default_iface=$(route | grep '^default' | grep -o '[^ ]*$') 123 | default_mac=$(ifconfig $default_iface | awk '/HWaddr/ {print $5}') 124 | 125 | if [ $revert -eq 1 ]; then 126 | if [ ! -f $sentinel ]; then 127 | echo "ERROR: Unable to revert, can't find $sentinel! Try rebooting." 128 | exit 1 129 | else 130 | original_mac=$(cat $sentinel) 131 | echo "Original MAC address found: $original_mac" 132 | fi 133 | 134 | # change it back to default 135 | echo "Reverting it back..." 136 | # bring the interface down (alt/old: ifconfig $default_iface down) 137 | ip link set dev $default_iface down 138 | 139 | # change the mac address back (alt/old: ifconfig eth0 hw ether $original_mac) 140 | ip link set dev $default_iface address $original_mac 141 | 142 | # bring the interface up (alt/old: ifconfig eth0 up) 143 | ip link set dev $default_iface up 144 | 145 | # verify that it was changed 146 | test_mac=$(ip link show $default_iface | awk '/ether/ {print $2}') 147 | #test_mac=$(ifconfig $default_iface | awk '/HWaddr/ {print $5}') 148 | if [ "$test_mac" == "$original_mac" ]; then 149 | echo "Successfully changed MAC!" 150 | rm -f $sentinel 151 | else 152 | echo "MAC Address change unsuccessful."
153 | exit 1 154 | fi 155 | exit 1 156 | fi 157 | 158 | # set a sentinel file to keep track of real mac 159 | if [ -f $sentinel ]; then 160 | if [ $(cat $sentinel) != "$default_mac" ]; then 161 | # keep the mac in sentinel as default 162 | default_mac=$(cat $sentinel) 163 | fi 164 | else 165 | # first run 166 | touch $sentinel 167 | echo $default_mac > $sentinel 168 | fi 169 | 170 | printf "%-20s %s\n" "Default Interface:" $default_iface 171 | printf "%-20s %s\n" "Default MAC Address:" $default_mac 172 | 173 | # generate part of a new mac address 174 | if [ "$i_can_haz_ssl" -eq "1" ]; then 175 | random_mac=$(openssl rand -hex 3 | sed 's/\(..\)/\1:/g; s/.$//') 176 | new_mac="${sel}${random_mac}" 177 | printf "%-20s %s\n" "Random MAC Address:" ${new_mac} 178 | else 179 | random_mac=$(head -n 10 /dev/urandom | tr -dc 'a-fA-F0-9' | \ 180 | fold -w 12 | head -n 1 | fold -w2 | paste -sd: - | \ 181 | tr '[:upper:]' '[:lower:]') 182 | new_mac="${sel}${random_mac}" 183 | printf "%-20s %s\n" "Random MAC Address:" ${new_mac} 184 | fi 185 | 186 | # make the change 187 | 188 | # bring the interface down (alt/old: ifconfig $default_iface down) 189 | ip link set dev $default_iface down || \ 190 | { echo "ERROR: Unable to bring $default_iface down!"; exit 1; } 191 | 192 | # change the mac address (alt/old: ifconfig eth0 hw ether $new_mac) 193 | ip link set dev $default_iface address $new_mac || \ 194 | { echo "ERROR: Unable to change $default_iface MAC!"; exit 1; } 195 | 196 | # bring the interface up (alt/old: ifconfig eth0 up) 197 | ip link set dev $default_iface up || \ 198 | { echo "ERROR: Unable to bring $default_iface up!"; exit 1; } 199 | 200 | # verify that it was changed (alt: ifconfig $default_iface | grep HWaddr) 201 | test_mac=$(ip link show $default_iface | awk '/ether/ {print $2}') 202 | if [ "$test_mac" == "$new_mac" ]; then 203 | echo "Successfully changed MAC!" 204 | else 205 | echo "MAC Address change unsuccessful." 206 | fi 207 | 208 | else 209 | echo "ERROR: Unknown OSTYPE!" 210 | fi 211 | 212 | #EOF 213 | -------------------------------------------------------------------------------- /remove_spaces.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # remove_spaces.sh - remove spaces from the names of all files in a path 4 | 5 | unset a i 6 | 7 | if [ $# -ne 1 ]; then 8 | echo "ERROR: You must supply a path!" 9 | echo " e.g. $0 /home/user/Downloads/" 10 | exit 1 11 | fi 12 | 13 | path=$1 14 | 15 | # OLD METHOD 16 | #find $path -type f -print0 | while read -d $'\0' f; do mv -v "$f" "${f// /.}"; done 17 | 18 | # iterate through all regular files in $path and rename them 19 | while IFS= read -r -d $'\0' file; do 20 | mv -v "$file" "${file// /.}" 21 | done < <(find "$path" -type f -print0) 22 | 23 | #EOF 24 | -------------------------------------------------------------------------------- /reverse_ssh_tunnel.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # reverse_ssh_tunnel.sh - create reverse ssh tunnel from current host to jumpbox 4 | 5 | # author : Chad Mayfield (chad@chd.my) 6 | # license : gplv3 7 | 8 | # NOTE: If keys are not setup you'll need to supply your password twice, once 9 | # for the test and once for the connection. I created this because on 10 | # certain servers I create reverse shells to bypass firewalls and leave 11 | # them running for a long time, so I always have to look up the syntax 12 | # on a cheat sheet; with this script I don't have to remember the syntax!
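# Usage sketch (the username, jumpbox name, and SSH port below are only
# example values, not defaults shipped with the script):
#   ./reverse_ssh_tunnel.sh chad jumpbox.example.com 22
# then, from the jumpbox, connect back to this machine with:
#   ssh -p 7000 <user>@localhost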
13 | 14 | if [ $# -ne 3 ]; then 15 | echo "ERROR: You must supply the args: username, hostname, and port!" 16 | echo " e.g. $0 <username> <hostname> <port>" 17 | exit 1 18 | fi 19 | 20 | user=$1 21 | host=$2 22 | port=$3 23 | 24 | # must be a valid port 25 | if [[ $port -lt 1 ]] || [[ $port -gt 65535 ]]; then 26 | echo "ERROR: Invalid port, ($port), please use a port between 1-65535!" 27 | exit 1 28 | fi 29 | 30 | # crude check to see if we have connectivity 31 | if [ $(ping -c 1 $host | awk '/transmitted/ {print $(NF-2)}' | awk -F. '{print $1}') -eq 100 ]; then 32 | echo "ERROR: Unable to ping $host! Are you sure it's up?" 33 | exit 1 34 | fi 35 | 36 | ck_if_tunnel_exists() { 37 | # check to see if the tunnel exists on the remote host 38 | netstat_test="netstat -tunla | grep -c 127.0.0.1:7000" 39 | tunnel_test=$(ssh -l $user -p $port $host "$netstat_test") 40 | 41 | if [ $tunnel_test -ne 0 ]; then 42 | echo "ERROR: There's already a tunnel running on $host! Please use it." 43 | exit 2 44 | fi 45 | } 46 | 47 | create_reverse_tunnel() { 48 | echo "No port is running, proceeding to create one..." 49 | 50 | # create tunnel in background without executing commands 51 | ssh -fN -p $port -R 7000:localhost:22 ${user}@${host} 52 | 53 | tunnel_rv=$? 54 | 55 | if [ $tunnel_rv -eq 0 ]; then 56 | echo "Tunnel created successfully! You can now connect to $host and run" 57 | echo "ssh -p 7000 user@localhost to connect back to this machine." 58 | else 59 | echo "ERROR: There was a problem creating the reverse tunnel!" 60 | exit 3 61 | fi 62 | } 63 | 64 | ck_if_tunnel_exists 65 | create_reverse_tunnel 66 | 67 | #EOF 68 | -------------------------------------------------------------------------------- /rkhunter.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # rkhunter.sh - run rkhunter then log & email results 4 | 5 | # author : Chad Mayfield (code@chadmayfield.com) 6 | # license : gplv3 7 | 8 | command -v logrotate >/dev/null 2>&1 && logrotate=1 || logrotate=0 9 | command -v rkhunter >/dev/null 2>&1 || \ 10 | { echo >&2 "ERROR: rkhunter isn't installed!"; exit 1; } 11 | 12 | if [ $UID -ne 0 ]; then 13 | echo "ERROR: You must be root to run this utility!" 14 | exit 1 15 | fi 16 | 17 | # set which package manager we should use 18 | if [ -f /etc/redhat-release ]; then 19 | pkgmgr=RPM 20 | elif [[ $(lsb_release -a 2> /dev/null | grep Desc) =~ (Ubuntu|Debian) ]]; then 21 | pkgmgr=DPKG 22 | elif [[ $OSTYPE =~ "darwin" ]]; then 23 | pkgmgr=BSD 24 | else 25 | pkgmgr=NONE 26 | fi 27 | 28 | # we want logrotate to rotate the logs weekly 29 | if [ $logrotate -eq 1 ]; then 30 | echo "checking if logrotate has been configured..." 31 | 32 | if ! grep -q rkhunter /etc/logrotate.d/* 2>/dev/null; then 33 | #/var/log/rkhunter/rkhunter.log { 34 | # weekly 35 | # notifempty 36 | # create 640 root root 37 | #} 38 | echo "skipping logrotate autoconf, not implemented yet" 39 | else 40 | echo "rkhunter is already configured in logrotate" 41 | fi 42 | fi 43 | 44 | # where are our logs? 45 | logfile="/var/log/rkhunter/rkhunter.log" 46 | 47 | # runtime options 48 | rkhunter="command rkhunter" 49 | ver_opts="--rwo --nocolors --versioncheck" 50 | upt_opts="--rwo --nocolors --update" 51 | run_opts="-c --nomow --nocolors --syslog --pkgmgr $pkgmgr --cronjob --summary" 52 | 53 | # mail config 54 | mail_to='user@domain.tld' 55 | mail_from="root@$(hostname)" 56 | subject="RKHUNTER: Scan results for $(hostname)."
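# A possible sketch for the logrotate autoconf that is stubbed out in the
# check above ("skipping logrotate autoconf, not implemented yet"); it is
# left commented out here because it is only an illustration, not the
# author's implementation. The config body mirrors the commented example
# in that check.
#if [ $logrotate -eq 1 ] && [ ! -f /etc/logrotate.d/rkhunter ]; then
#    cat > /etc/logrotate.d/rkhunter << 'LOGROTATE'
#/var/log/rkhunter/rkhunter.log {
#    weekly
#    notifempty
#    create 640 root root
#}
#LOGROTATE
#fi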
57 | 58 | # version check 59 | $rkhunter $ver_opts 60 | 61 | # run an update 62 | $rkhunter $upt_opts 63 | 64 | # finally run 65 | $rkhunter $run_opts 66 | 67 | # send an email 68 | mail -s "$subject" $mail_to < $logfile 69 | 70 | #EOF 71 | -------------------------------------------------------------------------------- /screenshots/alert_login.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chadmayfield/scriptlets/eccdacee6bf4063472de1a6b9b90c3bb6e08cf71/screenshots/alert_login.png -------------------------------------------------------------------------------- /update_other_repos.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # update_other_repos.sh - update the 'other' repos cloned under the current tree 4 | 5 | # git pull all repositories in tree 6 | for i in $(find . -name .git | sed 's/\/.git//g') 7 | do 8 | cd "$i" 9 | 10 | if [ -e .git ]; then 11 | repo=$(git config --local -l | grep "remote.origin.url" | awk -F "=" '{print $2}') 12 | echo "Found repo: $repo" 13 | echo "Pulling latest changes..." 14 | git pull 15 | else 16 | : 17 | fi 18 | 19 | cd - > /dev/null 2>&1 20 | done 21 | 22 | #EOF 23 | -------------------------------------------------------------------------------- /vagrant_update_boxes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # vagrant_update_boxes.sh - update all vagrant boxes under current directory 4 | 5 | for i in $(find . -name Vagrantfile) 6 | do 7 | if ! [[ $i =~ (do|vultr)- ]]; then 8 | echo "Found Vagrantfile at: $i" 9 | cd $(echo $i | sed 's/Vagrantfile//') 10 | 11 | boxname=$(grep "^ config.vm.box " Vagrantfile | awk -F "= " '{print $2}') 12 | echo "Updating box: $boxname" 13 | 14 | vagrant box update 15 | 16 | cd - &> /dev/null 17 | echo "============================================================" 18 | fi 19 | done 20 | 21 | #EOF 22 | --------------------------------------------------------------------------------