├── README.md ├── backup ├── README.md ├── backup.conf ├── backup.sh ├── ftpbackup.list └── ftpbackup.sh ├── ddns ├── README.md ├── cloudflare.conf ├── cloudflare.sh ├── cloudxns.conf ├── cloudxns.sh ├── dnspod.conf └── dnspod.sh ├── ip ├── 17monipdb.dat ├── IP.class.php ├── README-CN.md ├── README.md ├── dbip-client.class.php ├── example.php ├── index.php └── index2.php ├── le-dns ├── README.md ├── cloudflare-hook.sh ├── cloudflare.conf ├── cloudflare.sh ├── cloudxns-hook.sh ├── cloudxns.conf ├── cloudxns.sh ├── dnspod-hook.sh ├── dnspod.conf ├── dnspod.sh ├── le-cloudflare.sh ├── le-cloudxns.sh └── le-dnspod.sh ├── lets-encrypt ├── README-CN.md ├── README.md ├── letsencrypt.conf └── letsencrypt.sh ├── localrt.sh ├── media └── generate.sh ├── net ├── DownloadHelper │ ├── .gitignore │ ├── README.md │ ├── config.json.example │ ├── index2link.py │ └── mirror.py ├── cm.device.list ├── cm.sh ├── cm │ ├── README-CN.md │ ├── README.md │ ├── cm.py │ ├── cm.sh │ ├── cm_monitor.sh │ └── cmfile.py ├── cm2cron.py ├── cm2list.py ├── cmurl.sh ├── cyanogenmod.crontab.list ├── http │ ├── http.php │ └── http.sh ├── lurl.sh ├── mail │ └── mail.php ├── ngnix_autoindex_2_download_link.sh ├── rm │ └── rm.php ├── transmission │ ├── aria2-rpc.sh │ ├── complete.sh │ ├── d2z.sh │ ├── transmission.settings.json.example │ └── x2t.sh ├── url.sh ├── wireless │ ├── README.md │ ├── wireless.conf │ └── wireless.sh └── youtube │ ├── aria2-complete.sh │ ├── aria2.conf.example │ ├── aria2.init.example │ ├── nginx.config.cdn.example │ ├── nginx.config.home.example │ ├── nginx.config.remote.example │ ├── youtube.php │ └── youtube.sh ├── newuser.sh ├── nginx ├── upgrade-nginx-deb.sh └── upgrade-nginx.sh ├── opensips ├── README.md ├── autobuild.cron ├── autobuild.sh ├── build.sh ├── gdb.sh ├── install.sh ├── mail.cfg ├── mail.sh ├── mail2.sh ├── make.sh ├── makeself-header.sh ├── makeself.sh └── rpm.opensips.init ├── py ├── build.sh ├── clean.sh └── header.sh ├── speedtest ├── README-CN.md ├── README.md ├── speedtest.php ├── speedtest.py └── speedtest.sh ├── ssh ├── README.md ├── authorized_keys ├── pam.sh ├── sshrc └── sshrc.sh ├── sticker ├── .gitignore ├── README.md ├── requirements.txt └── sticker.py ├── trcp.sh ├── type └── input.py ├── u2helper ├── .gitignore ├── README-CN.md ├── README.md ├── config.json.example ├── transmission.py ├── u2.py └── u2torrent.py ├── uploader ├── .gitignore ├── README.md ├── config.py.example ├── cron.sh ├── requirements.txt ├── upload.py └── uploader.py └── zip ├── d2t.sh ├── d2z.sh ├── r2t.sh └── z2t.sh /README.md: -------------------------------------------------------------------------------- 1 | scripts 2 | ======= 3 | -------------------------------------------------------------------------------- /backup/README.md: -------------------------------------------------------------------------------- 1 | 2 | - [1. 服务器备份脚本](#服务器备份脚本) 3 | - [2. 
online.net ftp 备份脚本](#onlinenet-ftp-备份脚本) 4 | 5 | # 服务器备份脚本 6 | 7 | 指定要备份的文件和目录列表, 使用 `zip` 加密备份文件到指定目录,如 `Dropbox` 同步目录。压缩文件密码为随机密码,会通过 `gmail` 发送到指定邮箱。 8 | 9 | ## 下载 10 | 11 | ``` 12 | mkdir /root/bin 13 | cd /root/bin 14 | 15 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/backup/backup.sh 16 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/backup/backup.conf 17 | 18 | chmod +x backup.sh 19 | ``` 20 | 21 | ## 依赖 22 | 23 | 需要安装 `zip` 用来压缩文件, `sendemail` 用来发送密码通知邮件。 24 | 25 | ``` 26 | apt-get install zip libnet-ssleay-perl libio-socket-ssl-perl sendemail 27 | ``` 28 | 29 | ## 配置 30 | 31 | 修改 `backup.conf` 文件,主要修改 32 | 33 | **邮件配置** 34 | 35 | ``` 36 | EMAIL="receiver@gmail.com" 37 | SENDER="sender@gmail.com" 38 | SENDER_PASSWD="sender_password" 39 | ``` 40 | 41 | 注意如果 `sender@gmail.com` 启用了两部验证,则应该使用 [应用专用密码](https://security.google.com/settings/security/apppasswords) 42 | 43 | **文件列表配置** 44 | 45 | ``` 46 | FILES=( 47 | "/root/.vimrc" 48 | "/root/.bashrc" 49 | "/root/.my.cnf" 50 | "/root/.screenrc" 51 | "/root/.ssh/config" 52 | ) 53 | ``` 54 | 55 | **目录列表配置** 56 | 57 | ``` 58 | DIRS=( 59 | "/etc" 60 | "/root/bin" 61 | "/root/bashfiles" 62 | "/var/www" 63 | "/home/git" 64 | ) 65 | ``` 66 | 67 | **忽略目录中的文件或文件夹** 68 | 69 | 注意如果备份时要跳过目录中的文件或子目录,可以在目标目录中添加一个 `exclude.lst` 文件,如 `/var/www/exclude.lst` 文件内容参考如下 70 | 71 | ``` 72 | */10meg.test 73 | */cache/* 74 | /var/www/zips/* 75 | /var/www/downloads/* 76 | /var/www/share/* 77 | /var/www/wordpress/dl/* 78 | /var/www/wordpress/mp3/* 79 | /var/www/wordpress/d/* 80 | /var/www/wordpress/wp-content/languages/* 81 | /var/www/wordpress/wp-content/plugins/* 82 | /var/www/wordpress/wp-content/themes/* 83 | /var/www/wordpress/wp-content/uploads/2011/* 84 | ``` 85 | 86 | **备份所有文件** 87 | 88 | 如果某次备份要备份所有的文件,即忽略 `exclude.lst` 文件,可以添加 `all` 参数运行 89 | 90 | ``` 91 | /root/bin/backup.sh all 92 | ``` 93 | 94 | **备份 mysql 配置** 95 | 96 | ``` 97 | BACKUP_MYSQL=true 98 | ``` 99 | 100 | 如果启用 `mysql` 备份, 则需要添加 `/root/.my.cnf` 文件,内容示例如下 101 | 102 | ``` 103 | [mysqldump] 104 | user=root 105 | password=123456 106 | ``` 107 | 108 | **备份压缩配置** 109 | 110 | ``` 111 | ZIP_COMPRESS=true 112 | ``` 113 | 114 | 如果不启用压缩,则会以存储模式压缩文件和文件夹。 115 | 116 | **备份保存路径** 117 | 118 | ``` 119 | TARGET_DIR="/root/Dropbox" 120 | ``` 121 | 122 | 备份完成后会移动到 `TARGET_DIR`, 示例中为 `dropbox` 的默认同步路径,可以将文件同步到 `dropbox` 服务器。安装 `dropbox` 请参考 [https://www.dropbox.com/install-linux](https://www.dropbox.com/install-linux) 123 | 124 | **日志路径** 125 | 126 | ``` 127 | LOG_FILE="/var/log/backup.log" 128 | ``` 129 | 130 | 会将备份过程中的主要操作输出到日志中。 131 | 132 | 133 | ## cron 定时任务 134 | 135 | ``` 136 | 10 */4 * * * bash /root/bin/backup.sh >/dev/null 2>&1 137 | 30 02 * * 0 bash /root/bin/backup.sh all >>/dev/null 2>&1 138 | ``` 139 | 140 | # online.net ftp 备份脚本 141 | 142 | ## 下载 143 | 144 | ``` 145 | mkdir /root/bin 146 | cd /root/bin 147 | 148 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/backup/ftpbackup.sh 149 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/backup/ftpbackup.list 150 | 151 | chmod +x ftpbackup.sh 152 | ``` 153 | 154 | ## 配置 155 | 156 | 修改 `/root/bin/ftpbackup.sh` 中的 `SERVER` 为账户下的 `FTP server`,`BACKUP_DIR` 为备份文件临时目录。 157 | 158 | 修改 `/root/bin/ftpbackup.list` 文件,`files` 为要上传的文件,`dirs` 为要备份的目录。 159 | 160 | ## 运行 161 | 162 | `cron` 定时任务 163 | 164 | ``` 165 | 5 1 * * * /root/bin/ftpbackup.sh >> /var/log/ftpbackup.log 2>&1 166 | ``` 167 | 168 | 169 | -------------------------------------------------------------------------------- 
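For completeness, restoring one of the encrypted archives only needs `unzip` and the random password from the notification mail. A minimal sketch (the archive name is only an example of the `backup-<timestamp>-<name>.zip` pattern that `backup.sh` produces, and the password placeholder is hypothetical):

```
# decrypt and extract one backup archive into /tmp/restore,
# using the password received in the notification mail
mkdir -p /tmp/restore
unzip -P 'PASSWORD_FROM_MAIL' /root/Dropbox/backup-2016-01-01-00-00-00-etc.zip -d /tmp/restore
```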
/backup/backup.conf: -------------------------------------------------------------------------------- 1 | ## Email configuration 2 | 3 | EVENT="Server schedule backup" 4 | EMAIL="receiver@gmail.com" 5 | SENDER="sender@gmail.com" 6 | SENDER_PASSWD="sender_password" 7 | SMTP_SERVER="smtp.gmail.com:587" 8 | 9 | 10 | ## backup settings 11 | 12 | BACKUP_MYSQL=true 13 | ZIP_COMPRESS=true 14 | TARGET_DIR="/root/Dropbox" 15 | LOG_FILE="/var/log/backup.log" 16 | 17 | 18 | ## backup files and dirs 19 | 20 | FILES=( 21 | "/root/.vimrc" 22 | "/root/.bashrc" 23 | "/root/.my.cnf" 24 | "/root/.screenrc" 25 | "/root/.ssh/config" 26 | ) 27 | 28 | DIRS=( 29 | "/etc" 30 | "/root/bin" 31 | "/root/bashfiles" 32 | "/var/www" 33 | "/home/git" 34 | ) 35 | 36 | -------------------------------------------------------------------------------- /backup/backup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ALL=$1 4 | 5 | TIME=$(date +%F-%H-%M-%S) 6 | PASSWD=$(tr -cd '[:alnum:]' < /dev/urandom | fold -w30 | head -n1) 7 | 8 | CONF_FILE="$(dirname "$0")/backup.conf" 9 | 10 | if [ -f "$CONF_FILE" ]; then 11 | # shellcheck source=/dev/null 12 | source "$CONF_FILE" 13 | else 14 | echo "$CONF_FILE not exist." 15 | exit -1 16 | fi 17 | 18 | if [ ! -d "$TARGET_DIR" ]; then 19 | echo "Error, directory $TARGET_DIR not found." | tee -a "$LOG_FILE" 20 | exit -1 21 | fi 22 | 23 | ZIP="zip -P $PASSWD" 24 | 25 | if [ "$ZIP_COMPRESS" != true ]; then 26 | ZIP="zip -0 -P $PASSWD" 27 | fi 28 | 29 | # create tmp dir for archive files and dirs 30 | 31 | cd /opt || exit -1 32 | 33 | if [ -d 'tmp' ]; then 34 | rm -r tmp 35 | fi 36 | 37 | mkdir tmp 38 | cd tmp || exit -1 39 | 40 | # remove old backup files 41 | 42 | if [ ! "$(find "$TARGET_DIR" -name \*.zip |wc -l)" == 0 ]; then 43 | rm "$TARGET_DIR"/*.zip 44 | fi 45 | 46 | # backup files 47 | 48 | if [ -f "$TARGET_DIR/backup_files.zip" ]; then 49 | rm "$TARGET_DIR/backup_files.zip" 50 | fi 51 | 52 | echo "$(date) --> backup files: ${FILES[*]}" | tee -a "$LOG_FILE" 53 | $ZIP "backup_files.zip" "${FILES[@]}" 54 | mv "backup_files.zip" "$TARGET_DIR/backup-$TIME-backup_files.zip" 55 | 56 | # backup dirs 57 | 58 | for dir in "${DIRS[@]}" 59 | do 60 | target="${dir//\//_}" 61 | target="${target:1}" 62 | 63 | if [ -f "$TARGET_DIR/$target.zip" ]; then 64 | rm "$TARGET_DIR/$target.zip" 65 | fi 66 | 67 | if [ -d "$dir" ]; then 68 | echo "$(date) --> backup: $dir" | tee -a "$LOG_FILE" 69 | if [ "$ALL" = "all" ] || [ ! -f "$dir/exclude.lst" ]; then 70 | $ZIP -r --symlinks "$target.zip" "$dir" 71 | else 72 | $ZIP -r --symlinks "$target.zip" "$dir" -x@"$dir/exclude.lst" 73 | fi 74 | mv "$target.zip" "$TARGET_DIR/backup-$TIME-$target.zip" 75 | else 76 | echo "$(date) --> $dir not exist." 
| tee -a "$LOG_FILE" 77 | fi 78 | done 79 | 80 | # backup mysql 81 | 82 | if [ "$BACKUP_MYSQL" == "true" ]; then 83 | 84 | echo "$(date) --> backup: mysql" | tee -a "$LOG_FILE" 85 | 86 | mysqldump --all-databases > mysql.sql 87 | $ZIP mysql.zip mysql.sql 88 | rm mysql.sql 89 | 90 | mv "mysql.zip" "$TARGET_DIR/backup-$TIME-mysql.zip" 91 | fi 92 | 93 | # clean tmp dir 94 | 95 | cd /opt || exit -1 96 | rm -r tmp 97 | 98 | #cp /root/Dropbox/*.zip /home/box/backup 99 | 100 | # curl -s https://www.xdty.org/mail.php -X POST -d "event=$EVENT ($TIME|$PASSWD)&name=$NAME&email=$EMAIL" & 101 | 102 | # echo $PASSWD 103 | 104 | # send email 105 | 106 | if [ "$ALL" = "all" ]; then 107 | EVENT="$EVENT(all)" 108 | fi 109 | 110 | sendemail -f "${SENDER%@*} <$SENDER>" \ 111 | -u "$EVENT event notify" \ 112 | -t "$EMAIL" \ 113 | -s "$SMTP_SERVER" \ 114 | -o tls=yes \ 115 | -xu "$SENDER" \ 116 | -xp "$SENDER_PASSWD" \ 117 | -m "$EVENT completed ($TIME|$PASSWD) at $(date +'%Y-%m-%d %H:%M:%S')" 118 | -------------------------------------------------------------------------------- /backup/ftpbackup.list: -------------------------------------------------------------------------------- 1 | files=( 2 | "/root/bin/ftpbackup.list" 3 | "/root/bin/ftpbackup.sh" 4 | "/home/libvirt/images/www.img" 5 | "/home/libvirt/images/builder.img" 6 | "/home/libvirt/images/redhat7.img" 7 | "/home/libvirt/images/ubuntu.img" 8 | ) 9 | 10 | dirs=( 11 | "root" 12 | "etc" 13 | "var" 14 | "usr" 15 | ) 16 | -------------------------------------------------------------------------------- /backup/ftpbackup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | SERVER="dedibackup-dc2.online.net" 4 | 5 | LIST_FILE="/root/bin/ftpbackup.list" 6 | LOG_FILE="/var/log/ftpbackup.log" 7 | BACKUP_DIR="/home/backups" 8 | 9 | USER="auto" 10 | PASS="" 11 | 12 | if [ -f "$LIST_FILE" ];then 13 | source "$LIST_FILE" 14 | else 15 | echo "file list not exist." 16 | exit -1 17 | fi 18 | 19 | if [ ! 
-d "$BACKUP_DIR" ];then 20 | mkdir "$BACKUP_DIR" 21 | fi 22 | 23 | #for file in "${files[@]}" 24 | #do 25 | # if [[ "$file" == *.img ]];then 26 | # echo "$(date) --> shrink: $file" >> "$LOG_FILE" 27 | # qemu-img convert -O qcow2 "$file" "$file.qcow2" 28 | # fi 29 | #done 30 | 31 | for dir in "${dirs[@]}" 32 | do 33 | if [ -f "$BACKUP_DIR/$dir.tar.gz" ];then 34 | rm "$BACKUP_DIR/$dir.tar.gz" 35 | fi 36 | 37 | tar czf "$BACKUP_DIR/$dir.tar.gz" -C / "$dir" 38 | done 39 | 40 | ftp_mkdir() { 41 | local r 42 | local a 43 | r="$@" 44 | while [[ "$r" != "$a" ]] ; do 45 | a=${r%%/*} 46 | if [ -n "$a" ];then 47 | echo "mkdir $a" 48 | echo "cd $a" 49 | fi 50 | r=${r#*/} 51 | done 52 | } 53 | 54 | ftp_put() { 55 | echo "cd /" 56 | ftp_mkdir "$BACKUP_DIR" 57 | echo "lcd $BACKUP_DIR" 58 | for dir in "${dirs[@]}" 59 | do 60 | echo "$(date) --> backup: $dir" >> "$LOG_FILE" 61 | echo "put $dir.tar.gz" 62 | done 63 | 64 | for file in "${files[@]}" 65 | do 66 | echo "$(date) --> backup: $file" >> "$LOG_FILE" 67 | LCD="$(dirname $file)" 68 | FILE="$(basename $file)" 69 | echo "cd /" 70 | ftp_mkdir "$LCD" 71 | echo "lcd $LCD" 72 | # if [[ "$file" == *.img ]];then 73 | # echo "put $FILE.qcow2" 74 | # else 75 | echo "put $FILE" 76 | # fi 77 | done 78 | } 79 | 80 | ftp -p -n $SERVER < done" 89 | 90 | -------------------------------------------------------------------------------- /ddns/README.md: -------------------------------------------------------------------------------- 1 | # ddns 更新脚本 2 | 3 | 这个脚本适用于 openwrt 和 ddwrt 路由动态更新域名。理论上支持所有可以运行 `shell (/bin/sh)` 脚本的 `linux` 环境。 4 | 5 | ## openwrt 6 | 7 | **1\. 下载脚本** 8 | 9 | cloudxns: 10 | 11 | ``` 12 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/cloudxns.sh 13 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/cloudxns.conf 14 | chmod +x cloudxns.sh 15 | ``` 16 | 17 | dnspod: 18 | 19 | ``` 20 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/dnspod.sh 21 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/dnspod.conf 22 | chmod +x dnspod.sh 23 | ``` 24 | 25 | **2\. 配置** 26 | 27 | cloudxns: 28 | 29 | ``` 30 | API_KEY="YOUR_API_KEY" 31 | SECRET_KEY="YOUR_SECRET_KEY" 32 | DOMAIN="example.com" 33 | HOST="ddns" 34 | LAST_IP_FILE="/tmp/.LAST_IP" 35 | ``` 36 | 37 | dnspod: 38 | 39 | ``` 40 | ACCOUNT="xxxxxx@gmail.com" 41 | PASSWORD="xxxxxxxxxx" 42 | DOMAIN="xxxx.xxx.org" 43 | RECORD_LINE="默认" 44 | ``` 45 | 46 | **3\. 运行测试** 47 | 48 | dnspod: `./dnspod.sh dnspod.conf` 49 | 50 | cloudxns: `./cloudxns.sh cloudxns.conf` 51 | 52 | **4\. 添加到 cron 定时任务** 53 | 54 | ``` 55 | /etc/init.d/cron enable 56 | crontab -e 57 | ``` 58 | 59 | 添加如下内容,注意修改路径 60 | 61 | cloudxns: 62 | 63 | ``` 64 | */3 * * * * /root/ddns/cloudxns.sh /root/ddns/cloudxns.conf >> /root/ddns/cloudxns.log 65 | ``` 66 | 67 | dnspod: 68 | 69 | ``` 70 | */3 * * * * /root/ddns/dnspod.sh /root/ddns/dnspod.conf >> /root/ddns/dnspod.log 71 | ``` 72 | 73 | ## ddwrt 74 | 75 | **1\. 
下载脚本** 76 | 77 | 下载脚本及配置文件,保存文件到 `/jffs` 及 `/opt` 等不会重启丢失的目录,注意可能需要在网页管理页面先启用 `jffs` 78 | 79 | dnspod: 80 | 81 | ``` 82 | curl -k -s https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/dnspod.sh 83 | curl -k -s https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/dnspod.conf 84 | chmod +x dnspod.sh 85 | ``` 86 | 87 | cloudxns: 88 | 89 | ``` 90 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/cloudxns.sh 91 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/ddns/cloudxns.conf 92 | chmod +x cloudxns.sh 93 | ``` 94 | 95 | 96 | **2\. 修改配置文件** 97 | 98 | dnspod: 99 | 100 | ``` 101 | ACCOUNT="xxxxxx@gmail.com" 102 | PASSWORD="xxxxxxxxxx" 103 | DOMAIN="xxxx.xxx.org" 104 | RECORD_LINE="默认" 105 | ``` 106 | 107 | cloudxns: 108 | 109 | ``` 110 | API_KEY="YOUR_API_KEY" 111 | SECRET_KEY="YOUR_SECRET_KEY" 112 | DOMAIN="example.com" 113 | HOST="ddns" 114 | LAST_IP_FILE="/tmp/.LAST_IP" 115 | ``` 116 | 117 | **3\. 运行测试** 118 | 119 | dnspod: `./dnspod.sh dnspod.conf` 120 | 121 | cloudxns: `./cloudxns.sh cloudxns.conf` 122 | 123 | **4\. 添加 cron 定时任务** 124 | 125 | ddwrt 在网页管理页面添加 cron 定时任务,注意修改命令的路径: 126 | 127 | dnspod: 128 | 129 | ``` 130 | */3 * * * * root /jffs/ddns/dnspod.sh /jffs/ddns/dnspod.conf >> /jffs/ddns/cloudxns.log 131 | ``` 132 | 133 | cloudxns: 134 | 135 | ``` 136 | */3 * * * * root /jffs/ddns/cloudxns.sh /jffs/ddns/cloudxns.conf >> /jffs/ddns/cloudxns.log 137 | ``` 138 | 139 | ## 更多细节内容可以参考我之前写的博客 140 | 141 | [ddwrt路由/linux动态解析ip(ddns)到dnspod配置](https://www.xdty.org/1841) 142 | 143 | [用于ddwrt或openwrt的cloudxns动态域名更新shell脚本](https://www.xdty.org/1907) 144 | -------------------------------------------------------------------------------- /ddns/cloudflare.conf: -------------------------------------------------------------------------------- 1 | CF_EMAIL="YOUR_EMAIL@gmail.com" 2 | CF_TOKEN="YOUR_API_TOKEN" 3 | CF_ZONE_NAME="example.com" 4 | CF_DOMAIN_NAME="ddns.example.com" 5 | LAST_IP_FILE="/tmp/.CF_LAST_IP" 6 | -------------------------------------------------------------------------------- /ddns/cloudflare.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env sh 2 | 3 | CONFIG=$1 4 | 5 | if [ ! -f "$CONFIG" ];then 6 | echo "ERROR, CONFIG NOT EXIST." 7 | exit 1 8 | fi 9 | 10 | # shellcheck source=/dev/null 11 | . "$CONFIG" 12 | 13 | if [ -f "$LAST_IP_FILE" ];then 14 | # shellcheck source=/dev/null 15 | . "$LAST_IP_FILE" 16 | fi 17 | 18 | IP="" 19 | RETRY="0" 20 | while [ $RETRY -lt 5 ]; do 21 | IP=$(curl -s ip.xdty.org) 22 | RETRY=$((RETRY+1)) 23 | if [ -z "$IP" ];then 24 | sleep 3 25 | else 26 | break 27 | fi 28 | done 29 | 30 | if [ "$IP" = "$LAST_IP" ];then 31 | echo "$(date) -- Already updated." 
32 | exit 0 33 | fi 34 | 35 | # Optional Parameter 36 | #EXTERNAL_IP=$(curl -s https://api.ipify.org) 37 | EXTERNAL_IP="$IP" 38 | 39 | # we get them automatically for you 40 | CF_ZONE_ID="" 41 | CF_DOMAIN_ID="" 42 | 43 | jsonValue() { 44 | KEY=$1 45 | num=$2 46 | awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/'"$KEY"'\042/){print $(i+1)}}}' | tr -d '"' | sed -n "${num}"p 47 | } 48 | 49 | 50 | getZoneID() { 51 | CF_ZONE_ID=$(curl -s \ 52 | -X GET "https://api.cloudflare.com/client/v4/zones?name=${CF_ZONE_NAME}" \ 53 | -H "X-Auth-Email: ${CF_EMAIL}" \ 54 | -H "X-Auth-Key: ${CF_TOKEN}" \ 55 | -H "Content-Type: application/json"| \ 56 | jsonValue id 1) 57 | } 58 | 59 | getDomainID() { 60 | CF_DOMAIN_ID=$(curl -s \ 61 | -X GET "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records?name=${CF_DOMAIN_NAME}" \ 62 | -H "X-Auth-Email: ${CF_EMAIL}" \ 63 | -H "X-Auth-Key: ${CF_TOKEN}" \ 64 | -H "Content-Type: application/json" | \ 65 | jsonValue id 1) 66 | } 67 | 68 | updateDomain() { 69 | RESULT=$(curl -s \ 70 | -X PUT "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records/${CF_DOMAIN_ID}" \ 71 | -H "X-Auth-Email: ${CF_EMAIL}" \ 72 | -H "X-Auth-Key: ${CF_TOKEN}" \ 73 | -H "Content-Type: application/json" \ 74 | --data '{"type":"A","name":"'"${CF_DOMAIN_NAME}"'","content":"'"${EXTERNAL_IP}"'","ttl":1,"proxied":false}' | \ 75 | jsonValue success 1) 76 | 77 | if [ "$RESULT" = "true" ];then 78 | echo "$(date) -- Update success" 79 | echo "LAST_IP=\"$IP\"" > "$LAST_IP_FILE" 80 | else 81 | echo "$(date) -- Update failed" 82 | fi 83 | 84 | } 85 | 86 | getZoneID 87 | getDomainID 88 | updateDomain 89 | -------------------------------------------------------------------------------- /ddns/cloudxns.conf: -------------------------------------------------------------------------------- 1 | API_KEY="YOUR_API_KEY" 2 | SECRET_KEY="YOUR_SECRET_KEY" 3 | DOMAIN="example.com" 4 | HOST="ddns" 5 | LAST_IP_FILE="/tmp/.LAST_IP" 6 | -------------------------------------------------------------------------------- /ddns/cloudxns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | CONFIG=$1 4 | 5 | if [ ! -f "$CONFIG" ];then 6 | echo "ERROR, CONFIG NOT EXIST." 7 | exit 1 8 | fi 9 | 10 | # shellcheck source=/dev/null 11 | . "$CONFIG" 12 | 13 | if [ -f "$LAST_IP_FILE" ];then 14 | # shellcheck source=/dev/null 15 | . "$LAST_IP_FILE" 16 | fi 17 | 18 | IP="" 19 | RETRY="0" 20 | while [ $RETRY -lt 5 ]; do 21 | IP=$(curl -s ip.xdty.org) 22 | RETRY=$((RETRY+1)) 23 | if [ -z "$IP" ];then 24 | sleep 3 25 | else 26 | break 27 | fi 28 | done 29 | 30 | if [ "$IP" = "$LAST_IP" ];then 31 | echo "$(date) -- Already updated." 
32 | exit 0 33 | fi 34 | 35 | URL_D="https://www.cloudxns.net/api2/domain" 36 | DATE=$(date) 37 | HMAC_D=$(printf "%s" "$API_KEY$URL_D$DATE$SECRET_KEY"|md5sum|cut -d" " -f1) 38 | DOMAIN_ID=$(curl -k -s $URL_D -H "API-KEY: $API_KEY" -H "API-REQUEST-DATE: $DATE" -H "API-HMAC: $HMAC_D"|grep -o "id\":\"[0-9]*\",\"domain\":\"$DOMAIN"|grep -o "[0-9]*"|head -n1) 39 | 40 | echo "DOMAIN ID: $DOMAIN_ID" 41 | 42 | URL_R="https://www.cloudxns.net/api2/record/$DOMAIN_ID?host_id=0&row_num=500" 43 | HMAC_R=$(printf "%s" "$API_KEY$URL_R$DATE$SECRET_KEY"|md5sum|cut -d" " -f1) 44 | RECORD_ID=$(curl -k -s "$URL_R" -H "API-KEY: $API_KEY" -H "API-REQUEST-DATE: $DATE" -H "API-HMAC: $HMAC_R"|grep -o "record_id\":\"[0-9]*\",\"host_id\":\"[0-9]*\",\"host\":\"$HOST\""|grep -o "record_id\":\"[0-9]*"|grep -o "[0-9]*") 45 | 46 | echo "RECORD ID: $RECORD_ID" 47 | 48 | URL_U="https://www.cloudxns.net/api2/record/$RECORD_ID" 49 | 50 | PARAM_BODY="{\"domain_id\":\"$DOMAIN_ID\",\"host\":\"$HOST\",\"value\":\"$IP\"}" 51 | HMAC_U=$(printf "%s" "$API_KEY$URL_U$PARAM_BODY$DATE$SECRET_KEY"|md5sum|cut -d" " -f1) 52 | 53 | RESULT=$(curl -k -s "$URL_U" -X PUT -d "$PARAM_BODY" -H "API-KEY: $API_KEY" -H "API-REQUEST-DATE: $DATE" -H "API-HMAC: $HMAC_U" -H 'Content-Type: application/json') 54 | 55 | echo "$RESULT" 56 | 57 | if [ "$(printf "%s" "$RESULT"|grep -c -o "message\":\"success\"")" = 1 ];then 58 | echo "$(date) -- Update success" 59 | echo "LAST_IP=\"$IP\"" > "$LAST_IP_FILE" 60 | else 61 | echo "$(date) -- Update failed" 62 | fi 63 | -------------------------------------------------------------------------------- /ddns/dnspod.conf: -------------------------------------------------------------------------------- 1 | ACCOUNT="xxxxxx@gmail.com" 2 | PASSWORD="xxxxxxxxxx" 3 | DOMAIN="xxxx.xxx.org" 4 | RECORD_LINE="默认" 5 | -------------------------------------------------------------------------------- /ddns/dnspod.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # usage: ./dnspod.sh ddns.conf 3 | 4 | if [ "$#" != 1 ];then 5 | echo "param error." 6 | exit 0 7 | fi 8 | ACCOUNT="" 9 | PASSWORD="" 10 | DOMAIN="" 11 | SUBDOMAIN="" 12 | RECORD_LINE="" 13 | 14 | DOMAIN_ID="" 15 | RECORD_LIST="" 16 | 17 | i=0; 18 | 19 | dnspod_load_config(){ 20 | cfg=$1; 21 | content=`cat ${cfg}`; 22 | ACCOUNT=`echo "${content}" |grep 'ACCOUNT'| sed 's/^ACCOUNT=[\"]\(.*\)[\"]/\1/'`; 23 | PASSWORD=`echo "${content}" |grep 'PASSWORD'| sed 's/^PASSWORD=[\"]\(.*\)[\"]/\1/'`; 24 | DOMAIN=`echo "${content}" |grep 'DOMAIN='| sed 's/^DOMAIN=[\"]\(.*\)[\"]/\1/'`; 25 | RECORD_LINE=`echo "${content}" |grep 'RECORD_LINE'| sed 's/^RECORD_LINE=[\"]\(.*\)[\"]/\1/'`; 26 | SUBDOMAIN=${DOMAIN%%.*} 27 | DOMAIN=${DOMAIN#*.} 28 | } 29 | 30 | dnspod_is_record_updated(){ 31 | resolve_ip=$(curl -s -k https://www.xdty.org/resolve.php -X POST -d "domain=$SUBDOMAIN.$DOMAIN") 32 | #current_ip=$(curl -s icanhazip.com) 33 | current_ip=$(curl -s ip.xdty.org) 34 | echo $resolve_ip 35 | echo $current_ip 36 | if [ "$resolve_ip" = "$current_ip" ]; then 37 | echo "Record updated." 
38 | exit 0; 39 | fi 40 | } 41 | 42 | dnspod_is_record_updated2(){ 43 | options="login_email=${ACCOUNT}&login_password=${PASSWORD}&format=json"; 44 | out=$(curl -s -k https://dnsapi.cn/Record.List -d "${options}&domain=${DOMAIN}&sub_domain=${SUBDOMAIN}") 45 | #echo $out 46 | #resolve_ip=$(echo $out | sed 's/.*"value":"\([0-9.]*\)",.*/\1/g') 47 | resolve_ip=${out#*value\":\"}; 48 | resolve_ip=${resolve_ip%%\"*} 49 | #current_ip=$(curl -s icanhazip.com) 50 | current_ip=$(curl -s ip.xdty.org) 51 | echo $resolve_ip 52 | echo $current_ip 53 | if [ "$resolve_ip" = "$current_ip" ]; then 54 | echo "Record updated." 55 | exit 0; 56 | fi 57 | } 58 | dnspod_domain_get_id(){ 59 | options="login_email=${ACCOUNT}&login_password=${PASSWORD}"; 60 | out=$(curl -s -k https://dnsapi.cn/Domain.List -d ${options}); 61 | for line in $out;do 62 | if [ $(echo $line|grep '' |wc -l) != 0 ];then 63 | DOMAIN_ID=${line%<*}; 64 | DOMAIN_ID=${DOMAIN_ID#*>}; 65 | #echo "domain id: $DOMAIN_ID"; 66 | fi 67 | if [ $(echo $line|grep '' |wc -l) != 0 ];then 68 | DOMAIN_NAME=${line%<*}; 69 | DOMAIN_NAME=${DOMAIN_NAME#*>}; 70 | #echo "domain name: $DOMAIN_NAME"; 71 | if [ "$DOMAIN_NAME" = "$DOMAIN" ];then 72 | break; 73 | fi 74 | fi 75 | done 76 | out=$(curl -s -k https://dnsapi.cn/Record.List -d "${options}&domain_id=${DOMAIN_ID}") 77 | for line in $out;do 78 | if [ $(echo $line|grep '' |wc -l) != 0 ];then 79 | RECORD_ID=${line%<*}; 80 | RECORD_ID=${RECORD_ID#*>}; 81 | #echo "record id: $RECORD_ID"; 82 | fi 83 | if [ $(echo $line|grep '' |wc -l) != 0 ];then 84 | RECORD_NAME=${line%<*}; 85 | RECORD_NAME=${RECORD_NAME#*>}; 86 | #echo "record name: $RECORD_NAME"; 87 | if [ "$RECORD_NAME" = "$SUBDOMAIN" ];then 88 | break; 89 | fi 90 | fi 91 | done 92 | echo "$RECORD_NAME:$RECORD_ID" 93 | } 94 | 95 | dnspod_update_record_ip(){ 96 | curl -k https://dnsapi.cn/Record.Ddns -d "login_email=${ACCOUNT}&login_password=${PASSWORD}&domain_id=${DOMAIN_ID}&record_id=${RECORD_ID}&sub_domain=${RECORD_NAME}&record_line=${RECORD_LINE}" 97 | curl -k https://www.xdty.org/mail.php -X POST -d "event=ip($current_ip) changed&name=$SUBDOMAIN&email=$ACCOUNT" 98 | } 99 | 100 | main(){ 101 | 102 | dnspod_load_config $1 103 | dnspod_is_record_updated2 104 | dnspod_domain_get_id 105 | dnspod_update_record_ip 106 | } 107 | 108 | main $1 109 | -------------------------------------------------------------------------------- /ip/17monipdb.dat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/xdtianyu/scripts/4d04ee21aaae5f7ae72d2486d2b8fd1c59915867/ip/17monipdb.dat -------------------------------------------------------------------------------- /ip/IP.class.php: -------------------------------------------------------------------------------- 1 | 6 | Build 20141009 版权所有 17MON.CN 7 | (C) 2006 - 2014 保留所有权利 8 | 请注意及时更新 IP 数据库版本 9 | 数据问题请加 QQ 群: 346280296 10 | Code for PHP 5.3+ only 11 | */ 12 | 13 | class IP 14 | { 15 | private static $ip = NULL; 16 | 17 | private static $fp = NULL; 18 | private static $offset = NULL; 19 | private static $index = NULL; 20 | 21 | private static $cached = array(); 22 | 23 | public static function find($ip) 24 | { 25 | if (empty($ip) === TRUE) 26 | { 27 | return 'N/A'; 28 | } 29 | 30 | $nip = gethostbyname($ip); 31 | $ipdot = explode('.', $nip); 32 | 33 | if ($ipdot[0] < 0 || $ipdot[0] > 255 || count($ipdot) !== 4) 34 | { 35 | return 'N/A'; 36 | } 37 | 38 | if (isset(self::$cached[$nip]) === TRUE) 39 | { 40 | return self::$cached[$nip]; 41 | } 42 | 43 | if (self::$fp === 
NULL) 44 | { 45 | self::init(); 46 | } 47 | 48 | $nip2 = pack('N', ip2long($nip)); 49 | 50 | $tmp_offset = (int)$ipdot[0] * 4; 51 | $start = unpack('Vlen', self::$index[$tmp_offset] . self::$index[$tmp_offset + 1] . self::$index[$tmp_offset + 2] . self::$index[$tmp_offset + 3]); 52 | 53 | $index_offset = $index_length = NULL; 54 | $max_comp_len = self::$offset['len'] - 1024 - 4; 55 | for ($start = $start['len'] * 8 + 1024; $start < $max_comp_len; $start += 8) 56 | { 57 | if (self::$index{$start} . self::$index{$start + 1} . self::$index{$start + 2} . self::$index{$start + 3} >= $nip2) 58 | { 59 | $index_offset = unpack('Vlen', self::$index{$start + 4} . self::$index{$start + 5} . self::$index{$start + 6} . "\x0"); 60 | $index_length = unpack('Clen', self::$index{$start + 7}); 61 | 62 | break; 63 | } 64 | } 65 | 66 | if ($index_offset === NULL) 67 | { 68 | return 'N/A'; 69 | } 70 | 71 | fseek(self::$fp, self::$offset['len'] + $index_offset['len'] - 1024); 72 | 73 | self::$cached[$nip] = explode("\t", fread(self::$fp, $index_length['len'])); 74 | 75 | return self::$cached[$nip]; 76 | } 77 | 78 | private static function init() 79 | { 80 | if (self::$fp === NULL) 81 | { 82 | self::$ip = new self(); 83 | 84 | self::$fp = fopen(__DIR__ . '/17monipdb.dat', 'rb'); 85 | if (self::$fp === FALSE) 86 | { 87 | throw new Exception('Invalid 17monipdb.dat file!'); 88 | } 89 | 90 | self::$offset = unpack('Nlen', fread(self::$fp, 4)); 91 | if (self::$offset['len'] < 4) 92 | { 93 | throw new Exception('Invalid 17monipdb.dat file!'); 94 | } 95 | 96 | self::$index = fread(self::$fp, self::$offset['len'] - 4); 97 | } 98 | } 99 | 100 | public function __destruct() 101 | { 102 | if (self::$fp !== NULL) 103 | { 104 | fclose(self::$fp); 105 | } 106 | } 107 | } 108 | 109 | ?> -------------------------------------------------------------------------------- /ip/README-CN.md: -------------------------------------------------------------------------------- 1 | ##示例 2 | 3 | 1\. 修改 `index.php`, 替换 `$api_key = "YOUR_DB_IP_KEY(https://db-ip.com/api/)";` 为你的 api key. 4 | 5 | 2\. `GET` 请求 如 `curl http://YOUR_DOMAIN_NAME` 会返回你的公网ip地址. 6 | 7 | 3\. `POST` 请求 如 `curl http://YOUR_DOMAIN_NAME -X POST -d "geo=IP_ADDRESS_YOU_ARE_QUERY"` 会返回 ip 地理信息. 8 | 位于中国的ip会使用 `17monip`, 否则使用 `DB-IP API`. 9 | -------------------------------------------------------------------------------- /ip/README.md: -------------------------------------------------------------------------------- 1 | ##Useage 2 | 3 | 1\. In `index.php`, replace `$api_key = "YOUR_DB_IP_KEY(https://db-ip.com/api/)";` with your api key. 4 | 5 | 2\. `GET` request e.g. `curl http://YOUR_DOMAIN_NAME` will return your current public ip address. 6 | 7 | 3\. `POST` request e.g. `curl http://YOUR_DOMAIN_NAME -X POST -d "geo=IP_ADDRESS_YOU_ARE_QUERY"` will return ip geo info. 8 | IP address located in china will use `17monip`, otherwise will use `DB-IP API`. 9 | -------------------------------------------------------------------------------- /ip/dbip-client.class.php: -------------------------------------------------------------------------------- 1 | api_key = $api_key; 37 | 38 | if (isset($base_url)) { 39 | $this->base_url = $base_url; 40 | } 41 | 42 | } 43 | 44 | protected function Do_API_Call($method, $params = array()) { 45 | 46 | $qp = array("api_key=" . $this->api_key); 47 | foreach ($params as $k => $v) { 48 | $qp[] = $k . "=" . urlencode($v); 49 | } 50 | 51 | $url = $this->base_url . $method . "?" . 
implode("&", $qp); 52 | 53 | if (!$jdata = @file_get_contents($url)) { 54 | throw new DBIP_Client_Exception("{$method}: unable to fetch URL: {$url}"); 55 | } 56 | 57 | if (!$data = @json_decode($jdata)) { 58 | throw new DBIP_Client_Exception("{$method}: error decoding server response"); 59 | } 60 | 61 | if (property_exists($data, 'error') && $data->error) { 62 | throw new DBIP_Client_Exception("{$method}: server reported an error: {$data->error}"); 63 | } 64 | 65 | return $data; 66 | 67 | } 68 | 69 | public function Get_Address_Info($addr) { 70 | 71 | return $this->Do_API_Call("addrinfo", array("addr" => $addr)); 72 | 73 | } 74 | 75 | public function Get_Key_Info() { 76 | 77 | return $this->Do_API_Call("keyinfo"); 78 | 79 | } 80 | 81 | } 82 | 83 | ?> 84 | -------------------------------------------------------------------------------- /ip/example.php: -------------------------------------------------------------------------------- 1 | #!/usr/local/bin/php 2 | \n"); 10 | 11 | try { 12 | 13 | $dbip = new DBIP_Client($api_key); 14 | 15 | echo "keyinfo:\n"; 16 | foreach ($dbip->Get_Key_Info() as $k => $v) { 17 | echo "{$k}: {$v}\n"; 18 | } 19 | 20 | echo "\naddrinfo:\n"; 21 | foreach ($dbip->Get_Address_Info($ip_addr) as $k => $v) { 22 | echo "{$k}: {$v}\n"; 23 | } 24 | 25 | } catch (Exception $e) { 26 | 27 | die("error: {$e->getMessage()}\n"); 28 | 29 | } 30 | 31 | ?> 32 | -------------------------------------------------------------------------------- /ip/index.php: -------------------------------------------------------------------------------- 1 | Get_Address_Info($ip_addr) as $k => $v) { 21 | # echo "$k:$v,"; 22 | if ($k=="country" && $v=="CN") { 23 | $finds = IP::find($ip_addr); 24 | $result = ""; 25 | foreach ($finds as $res) { 26 | if (!empty($res)) { 27 | if ($result!=$res) 28 | $result = empty($result)?$res:"$result,"."$res"; 29 | } 30 | } 31 | echo $result; 32 | break; 33 | } 34 | if ($k=="address") { 35 | echo ""; 36 | } else if ($k=="city") { 37 | echo "$v"; 38 | } else { 39 | echo "$v,"; 40 | } 41 | } 42 | } 43 | ?> 44 | -------------------------------------------------------------------------------- /ip/index2.php: -------------------------------------------------------------------------------- 1 | 8 | -------------------------------------------------------------------------------- /le-dns/README.md: -------------------------------------------------------------------------------- 1 | 通过 DNS 验证方式获取 lets-encrypt 证书的快速脚本 2 | ---------------- 3 | 4 | 脚本基于 [letsencrypt.sh](https://github.com/lukas2511/letsencrypt.sh),通过调用 dns 服务商接口更新 TXT 记录用于认证,实现快速获取 lets-encrypt 证书。无需root权限,无需指定网站目录及DNS解析 5 | 6 | ## cloudflare 7 | 8 | **下载** 9 | 10 | ``` 11 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/le-cloudflare.sh 12 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/cloudflare.conf 13 | chmod +x le-cloudflare.sh 14 | ``` 15 | 16 | **配置** 17 | 18 | `cloudflare.conf` 文件内容 19 | 20 | ``` 21 | CF_EMAIL="YOUR_API_KEY" 22 | CF_EMAIL="YOUR_SECRET_KEY" 23 | DOMAIN="example.com" 24 | CERT_DOMAINS="example.com www.example.com im.example.com" 25 | #ECC=TRUE 26 | ``` 27 | 28 | 修改其中的 `CF_EMAIL` 及 `CF_EMAIL` 为您的邮箱和 [cloudflare api key](https://www.cloudflare.com/a/profile) ,修改 `DOMAIN` 为你的根域名,修改 `CERT_DOMAINS` 为您要签的域名列表,需要 `ECC` 证书时请取消 `#ECC=TRUE` 的注释。 29 | 30 | **野卡证书** 31 | 32 | 脚本支持野卡证书,请修改 `CERT_DOMAINS` 为 "example.com *.example.com sub.example.com *.sub.example.com" 。注意如果之前使用过脚本,需要更新脚本内容,删除所有 `*.sh` 文件再下载运行脚本。 33 | 34 | **运行** 35 | 36 | `./le-cloudflare.sh ./cloudflare.conf` 37 
| 38 | 最后生成的文件在当前目录的 certs 目录下 39 | 40 | **cron 定时任务** 41 | 42 | 如果证书过期时间不少于30天, [letsencrypt.sh](https://github.com/lukas2511/letsencrypt.sh) 脚本会自动忽略更新,所以至少需要29天运行一次更新。 43 | 44 | 每隔20天(每个月的2号和22号)自动更新一次证书,可以在 `le-cloudflare.sh` 脚本最后加入 service nginx reload等重新加载服务。 45 | 46 | `0 0 2/20 * * /etc/nginx/le-cloudflare.sh /etc/nginx/le-cloudflare.conf >> /var/log/le-cloudflare.log 2>&1` 47 | 48 | **注意** `ubuntu 16.04` 不能定义 `day of month` 含有开始天数的 `step values`,可以替换命令中的 `2/20` 为 `2,22`。 49 | 50 | 更详细的 crontab 参数请参考 [crontab.guru](http://crontab.guru/) 进行自定义 51 | 52 | 53 | ## cloudxns 54 | 55 | **下载** 56 | 57 | ``` 58 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/le-cloudxns.sh 59 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/cloudxns.conf 60 | chmod +x le-cloudxns.sh 61 | ``` 62 | 63 | **配置** 64 | 65 | `cloudxns.conf` 文件内容 66 | 67 | ``` 68 | API_KEY="YOUR_API_KEY" 69 | SECRET_KEY="YOUR_SECRET_KEY" 70 | DOMAIN="example.com" 71 | CERT_DOMAINS="example.com www.example.com im.example.com" 72 | #ECC=TRUE 73 | ``` 74 | 75 | 修改其中的 `API_KEY` 及 `SECRET_KEY` 为您的 [cloudxns api key](https://www.cloudxns.net/AccountManage/apimanage.html) ,修改 `DOMAIN` 为你的根域名,修改 `CERT_DOMAINS` 为您要签的域名列表,需要 `ECC` 证书时请取消 `#ECC=TRUE` 的注释。 76 | 77 | **运行** 78 | 79 | `./le-cloudxns.sh cloudxns.conf` 80 | 81 | 最后生成的文件在当前目录的 certs 目录下 82 | 83 | **cron 定时任务** 84 | 85 | 如果证书过期时间不少于30天, [letsencrypt.sh](https://github.com/lukas2511/letsencrypt.sh) 脚本会自动忽略更新,所以至少需要29天运行一次更新。 86 | 87 | 每隔20天(每个月的2号和22号)自动更新一次证书,可以在 `le-cloudxns.sh` 脚本最后加入 service nginx reload等重新加载服务。 88 | 89 | `0 0 2/20 * * /etc/nginx/le-cloudxns.sh /etc/nginx/le-cloudxns.conf >> /var/log/le-cloudxns.log 2>&1` 90 | 91 | **注意** `ubuntu 16.04` 不能定义 `day of month` 含有开始天数的 `step values`,可以替换命令中的 `2/20` 为 `2,22`。 92 | 93 | 更详细的 crontab 参数请参考 [crontab.guru](http://crontab.guru/) 进行自定义 94 | 95 | ## dnspod 96 | 97 | **下载** 98 | 99 | ``` 100 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/le-dnspod.sh 101 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/dnspod.conf 102 | chmod +x le-dnspod.sh 103 | ``` 104 | 105 | **配置** 106 | 107 | `dnspod.conf` 文件内容 108 | 109 | ``` 110 | TOKEN="YOUR_TOKEN_ID,YOUR_API_TOKEN" 111 | RECORD_LINE="默认" 112 | DOMAIN="example.com" 113 | CERT_DOMAINS="example.com www.example.com im.example.com" 114 | #ECC=TRUE 115 | ``` 116 | 117 | 修改其中的 `TOKEN` 为您的 [dnspod api token](https://www.dnspod.cn/console/user/security) ,注意格式为`123456,556cxxxx`。 118 | 修改 `DOMAIN` 为你的根域名,修改 `CERT_DOMAINS` 为您要签的域名列表,需要 `ECC` 证书时请取消 `#ECC=TRUE` 的注释。 119 | 120 | **运行** 121 | 122 | `./le-dnspod.sh dnspod.conf` 123 | 124 | 最后生成的文件在当前目录的 certs 目录下 125 | 126 | **cron 定时任务** 127 | 128 | 如果证书过期时间不少于30天, [letsencrypt.sh](https://github.com/lukas2511/letsencrypt.sh) 脚本会自动忽略更新,所以至少需要29天运行一次更新。 129 | 130 | 每隔20天(每个月的5号和25号)自动更新一次证书,可以在 `le-dnspod.sh` 脚本最后加入 service nginx reload等重新加载服务。 131 | 132 | `0 0 5/20 * * /etc/nginx/le-dnspod.sh /etc/nginx/le-dnspod.conf >> /var/log/le-dnspod.log 2>&1` 133 | 134 | **注意** `ubuntu 16.04` 不能定义 `day of month` 含有开始天数的 `step values`,可以替换命令中的 `5/20` 为 `5,25`。 135 | 136 | 更详细的 crontab 参数请参考 [crontab.guru](http://crontab.guru/) 进行自定义 137 | -------------------------------------------------------------------------------- /le-dns/cloudflare-hook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | function deploy_challenge { 4 | local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}" 5 | echo "$DOMAIN" "$TOKEN_FILENAME" "$TOKEN_VALUE" 6 | ./cloudflare.sh 
"$CONFIG" "$DOMAIN" "$TOKEN_VALUE" 7 | sleep 15 8 | } 9 | 10 | function clean_challenge { 11 | local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}" 12 | } 13 | 14 | function deploy_cert { 15 | local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" CHAINFILE="${4}" 16 | } 17 | 18 | function unchanged_cert { 19 | local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" 20 | } 21 | 22 | function invalid_challenge { 23 | local DOMAIN="${1}" RESPONSE="${2}" 24 | } 25 | 26 | function request_failure { 27 | local STATUSCODE="${1}" REASON="${2}" REQTYPE="${3}" HEADERS="${4}" 28 | } 29 | 30 | function generate_csr { 31 | local DOMAIN="${1}" CERTDIR="${2}" ALTNAMES="${3}" 32 | } 33 | 34 | function startup_hook { 35 | : 36 | } 37 | 38 | function exit_hook { 39 | : 40 | } 41 | 42 | HANDLER="$1"; shift 43 | if [[ "${HANDLER}" =~ ^(deploy_challenge|clean_challenge|deploy_cert|unchanged_cert|invalid_challenge|request_failure|generate_csr|startup_hook|exit_hook)$ ]]; then 44 | "$HANDLER" "$@" 45 | fi 46 | 47 | -------------------------------------------------------------------------------- /le-dns/cloudflare.conf: -------------------------------------------------------------------------------- 1 | CF_EMAIL="YOUR_EMAIL@gmail.com" 2 | CF_TOKEN="YOUR_API_TOKEN" 3 | DOMAIN="example.com" 4 | CERT_DOMAINS="example.com www.example.com" 5 | #ECC=TRUE 6 | 7 | -------------------------------------------------------------------------------- /le-dns/cloudflare.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env sh 2 | 3 | CONFIG=$1 4 | DOMAIN_FULL=$2 5 | TXT_TOKEN=$3 6 | 7 | if [ ! -f "$CONFIG" ];then 8 | echo "ERROR, CONFIG NOT EXIST." 9 | exit 1 10 | fi 11 | 12 | # shellcheck source=/dev/null 13 | . 
"$CONFIG" 14 | 15 | SUB_DOMAIN=${DOMAIN_FULL%$DOMAIN} 16 | 17 | HOST="_acme-challenge.${DOMAIN_FULL}" 18 | 19 | # we get them automatically for you 20 | CF_ZONE_ID="" 21 | CF_DOMAIN_ID="" 22 | 23 | jsonValue() { 24 | KEY=$1 25 | num=$2 26 | awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/'"$KEY"'\042/){print $(i+1)}}}' | tr -d '"' | sed -n "${num}"p 27 | } 28 | 29 | 30 | getZoneID() { 31 | CF_ZONE_ID=$(curl -s \ 32 | -X GET "https://api.cloudflare.com/client/v4/zones?name=${DOMAIN}" \ 33 | -H "X-Auth-Email: ${CF_EMAIL}" \ 34 | -H "X-Auth-Key: ${CF_TOKEN}" \ 35 | -H "Content-Type: application/json"| \ 36 | jsonValue id 1) 37 | } 38 | 39 | getDomainID() { 40 | CF_DOMAIN_ID=$(curl -s \ 41 | -X GET "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records?name=${HOST}" \ 42 | -H "X-Auth-Email: ${CF_EMAIL}" \ 43 | -H "X-Auth-Key: ${CF_TOKEN}" \ 44 | -H "Content-Type: application/json" | \ 45 | jsonValue id 1) 46 | } 47 | 48 | createDomain() { 49 | RESULT=$(curl -s \ 50 | -X POST "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \ 51 | -H "X-Auth-Email: ${CF_EMAIL}" \ 52 | -H "X-Auth-Key: ${CF_TOKEN}" \ 53 | -H "Content-Type: application/json" \ 54 | --data '{"type":"TXT","name":"'"${HOST}"'","content":"'"${TXT_TOKEN}"'","ttl":1,"proxied":false}' | \ 55 | jsonValue success 1) 56 | 57 | if [ "$RESULT" = "true" ];then 58 | echo "$(date) -- Update success" 59 | else 60 | echo "$(date) -- Update failed" 61 | fi 62 | 63 | } 64 | 65 | updateDomain() { 66 | RESULT=$(curl -s \ 67 | -X PUT "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records/${CF_DOMAIN_ID}" \ 68 | -H "X-Auth-Email: ${CF_EMAIL}" \ 69 | -H "X-Auth-Key: ${CF_TOKEN}" \ 70 | -H "Content-Type: application/json" \ 71 | --data '{"type":"TXT","name":"'"${HOST}"'","content":"'"${TXT_TOKEN}"'","ttl":1,"proxied":false}' | \ 72 | jsonValue success 1) 73 | 74 | if [ "$RESULT" = "true" ];then 75 | echo "$(date) -- Update success" 76 | else 77 | echo "$(date) -- Update failed" 78 | fi 79 | 80 | } 81 | 82 | getZoneID 83 | 84 | if [ -z "$ALWAYS_CREATE_DOMAIN" ]; then 85 | getDomainID 86 | fi 87 | 88 | if [ -z "$CF_DOMAIN_ID" ];then 89 | createDomain 90 | else 91 | updateDomain 92 | fi 93 | 94 | -------------------------------------------------------------------------------- /le-dns/cloudxns-hook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | deploy_challenge() { 4 | local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}" 5 | echo "$DOMAIN" "$TOKEN_FILENAME" "$TOKEN_VALUE" 6 | ./cloudxns.sh "$CONFIG" "$DOMAIN" "$TOKEN_VALUE" 7 | sleep 5 8 | } 9 | 10 | clean_challenge() { 11 | local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}" 12 | } 13 | 14 | deploy_cert() { 15 | local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" TIMESTAMP="${6}" 16 | } 17 | 18 | unchanged_cert() { 19 | local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" 20 | } 21 | 22 | invalid_challenge() { 23 | local DOMAIN="${1}" RESPONSE="${2}" 24 | } 25 | 26 | request_failure() { 27 | local STATUSCODE="${1}" REASON="${2}" REQTYPE="${3}" 28 | } 29 | 30 | function generate_csr { 31 | local DOMAIN="${1}" CERTDIR="${2}" ALTNAMES="${3}" 32 | } 33 | 34 | function startup_hook { 35 | : 36 | } 37 | 38 | function exit_hook { 39 | : 40 | } 41 | 42 | HANDLER="$1"; shift 43 | if [[ "${HANDLER}" =~ 
^(deploy_challenge|clean_challenge|deploy_cert|unchanged_cert|invalid_challenge|request_failure|generate_csr|startup_hook|exit_hook)$ ]]; then 44 | "$HANDLER" "$@" 45 | fi 46 | -------------------------------------------------------------------------------- /le-dns/cloudxns.conf: -------------------------------------------------------------------------------- 1 | API_KEY="YOUR_API_KEY" 2 | SECRET_KEY="YOUR_SECRET_KEY" 3 | DOMAIN="example.com" 4 | CERT_DOMAINS="example.com www.example.com im.example.com" 5 | #ECC=TRUE 6 | -------------------------------------------------------------------------------- /le-dns/cloudxns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | CONFIG=$1 4 | DOMAIN_FULL=$2 5 | TXT_TOKEN=$3 6 | 7 | if [ ! -f "$CONFIG" ];then 8 | echo "ERROR, CONFIG NOT EXIST." 9 | exit 1 10 | fi 11 | 12 | . "$CONFIG" 13 | 14 | SUB_DOMAIN=${DOMAIN_FULL%$DOMAIN} 15 | 16 | if [ -z "$SUB_DOMAIN" ];then 17 | HOST="_acme-challenge" 18 | else 19 | HOST="_acme-challenge.${SUB_DOMAIN%.}" 20 | fi 21 | 22 | URL_D="https://www.cloudxns.net/api2/domain" 23 | DATE=$(date) 24 | HMAC_D=$(echo -n "$API_KEY$URL_D$DATE$SECRET_KEY"|md5sum|cut -d" " -f1) 25 | DOMAIN_ID=$(curl -k -s $URL_D -H "API-KEY: $API_KEY" -H "API-REQUEST-DATE: $DATE" -H "API-HMAC: $HMAC_D"|grep -o "id\":\"[0-9]*\",\"domain\":\"$DOMAIN"|grep -o "[0-9]*"|head -n1) 26 | 27 | echo "DOMAIN ID: $DOMAIN_ID" 28 | 29 | URL_R="https://www.cloudxns.net/api2/record/$DOMAIN_ID?host_id=0&row_num=500" 30 | HMAC_R=$(echo -n "$API_KEY$URL_R$DATE$SECRET_KEY"|md5sum|cut -d" " -f1) 31 | RECORD_ID=$(curl -k -s "$URL_R" -H "API-KEY: $API_KEY" -H "API-REQUEST-DATE: $DATE" -H "API-HMAC: $HMAC_R"|grep -o "record_id\":\"[0-9]*\",\"host_id\":\"[0-9]*\",\"host\":\"$HOST\""|grep -o "record_id\":\"[0-9]*"|grep -o "[0-9]*"|head -n1) 32 | 33 | echo "RECORD ID: $RECORD_ID" 34 | 35 | if [ -z "$RECORD_ID" ];then 36 | URL_U="https://www.cloudxns.net/api2/record" 37 | CURLX="POST" 38 | else 39 | URL_U="https://www.cloudxns.net/api2/record/$RECORD_ID" 40 | CURLX="PUT" 41 | fi 42 | 43 | PARAM_BODY="{\"domain_id\":\"$DOMAIN_ID\",\"host\":\"$HOST\",\"value\":\"$TXT_TOKEN\",\"type\":\"TXT\",\"line_id\":1,\"ttl\":60}" 44 | HMAC_U=$(echo -n "$API_KEY$URL_U$PARAM_BODY$DATE$SECRET_KEY"|md5sum|cut -d" " -f1) 45 | 46 | RESULT=$(curl -k -s "$URL_U" -X "$CURLX" -d "$PARAM_BODY" -H "API-KEY: $API_KEY" -H "API-REQUEST-DATE: $DATE" -H "API-HMAC: $HMAC_U" -H 'Content-Type: application/json') 47 | 48 | echo "$RESULT" 49 | 50 | RES=$(echo -n "$RESULT"|grep -o "message\":\"success\"" -c) 51 | 52 | if [ "$RES" = 1 ];then 53 | echo "$(date) -- Update success" 54 | else 55 | echo "$(date) -- Update failed" 56 | fi 57 | -------------------------------------------------------------------------------- /le-dns/dnspod-hook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | deploy_challenge() { 4 | local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}" 5 | echo "$DOMAIN" "$TOKEN_FILENAME" "$TOKEN_VALUE" 6 | ./dnspod.sh "$CONFIG" "$DOMAIN" "$TOKEN_VALUE" 7 | sleep 5 8 | } 9 | 10 | clean_challenge() { 11 | local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}" 12 | } 13 | 14 | deploy_cert() { 15 | local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" TIMESTAMP="${6}" 16 | } 17 | 18 | unchanged_cert() { 19 | local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" 20 | } 21 | 22 | invalid_challenge() { 23 | 
local DOMAIN="${1}" RESPONSE="${2}" 24 | } 25 | 26 | request_failure() { 27 | local STATUSCODE="${1}" REASON="${2}" REQTYPE="${3}" 28 | } 29 | 30 | function generate_csr { 31 | local DOMAIN="${1}" CERTDIR="${2}" ALTNAMES="${3}" 32 | } 33 | 34 | function startup_hook { 35 | : 36 | } 37 | 38 | function exit_hook { 39 | : 40 | } 41 | 42 | HANDLER="$1"; shift 43 | if [[ "${HANDLER}" =~ ^(deploy_challenge|clean_challenge|deploy_cert|unchanged_cert|invalid_challenge|request_failure|generate_csr|startup_hook|exit_hook)$ ]]; then 44 | "$HANDLER" "$@" 45 | fi 46 | -------------------------------------------------------------------------------- /le-dns/dnspod.conf: -------------------------------------------------------------------------------- 1 | TOKEN="YOUR_TOKEN_ID,YOUR_API_TOKEN" 2 | RECORD_LINE="默认" 3 | DOMAIN="example.com" 4 | CERT_DOMAINS="example.com www.example.com im.example.com" 5 | #ECC=TRUE 6 | -------------------------------------------------------------------------------- /le-dns/dnspod.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | CONFIG=$1 4 | DOMAIN_FULL=$2 5 | TXT_TOKEN=$3 6 | 7 | if [ ! -f "$CONFIG" ];then 8 | echo "ERROR, CONFIG NOT EXIST." 9 | exit 1 10 | fi 11 | 12 | . "$CONFIG" 13 | 14 | SUB_DOMAIN=${DOMAIN_FULL%$DOMAIN} 15 | 16 | if [ -z "$SUB_DOMAIN" ];then 17 | HOST="_acme-challenge" 18 | else 19 | HOST="_acme-challenge.${SUB_DOMAIN%.}" 20 | fi 21 | 22 | echo "$HOST.$DOMAIN" 23 | 24 | OPTIONS="login_token=${TOKEN}"; 25 | OUT=$(curl -s -k "https://dnsapi.cn/Domain.List" -d "${OPTIONS}"); 26 | for line in $OUT;do 27 | if [ "$(echo "$line"|grep '' -c)" != 0 ];then 28 | DOMAIN_ID=${line%<*}; 29 | DOMAIN_ID=${DOMAIN_ID#*>}; 30 | # echo "domain id: $DOMAIN_ID"; 31 | fi 32 | if [ "$(echo "$line"|grep '' -c)" != 0 ];then 33 | DOMAIN_NAME=${line%<*}; 34 | DOMAIN_NAME=${DOMAIN_NAME#*>}; 35 | # echo "domain name: $DOMAIN_NAME"; 36 | if [ "$DOMAIN_NAME" = "$DOMAIN" ];then 37 | break; 38 | fi 39 | fi 40 | done 41 | 42 | echo "$DOMAIN_NAME $DOMAIN_ID" 43 | 44 | OUT=$(curl -s -k "https://dnsapi.cn/Record.List" -d "${OPTIONS}&domain_id=${DOMAIN_ID}") 45 | for line in $OUT;do 46 | if [ "$(echo "$line"|grep '' -c)" != 0 ];then 47 | RECORD_ID=${line%<*}; 48 | RECORD_ID=${RECORD_ID#*>}; 49 | # echo "record id: $RECORD_ID"; 50 | fi 51 | if [ "$(echo "$line"|grep '' -c)" != 0 ];then 52 | RECORD_NAME=${line%<*}; 53 | RECORD_NAME=${RECORD_NAME#*>}; 54 | # echo "record name: $RECORD_NAME"; 55 | if [ "$RECORD_NAME" = "$HOST" ];then 56 | break; 57 | fi 58 | fi 59 | done 60 | echo "$RECORD_NAME:$RECORD_ID" 61 | 62 | if [ "$RECORD_NAME" = "$HOST" ];then 63 | echo "UPDATE RECORD" 64 | OUT=$(curl -k -s "https://dnsapi.cn/Record.Modify" -d "${OPTIONS}&domain_id=${DOMAIN_ID}&record_id=${RECORD_ID}&sub_domain=${HOST}&record_line=${RECORD_LINE}&record_type=TXT&value=${TXT_TOKEN}") 65 | else 66 | echo "NEW RECORD" 67 | OUT=$(curl -k -s "https://dnsapi.cn/Record.Create" -d "${OPTIONS}&domain_id=${DOMAIN_ID}&sub_domain=${HOST}&record_line=${RECORD_LINE}&record_type=TXT&value=${TXT_TOKEN}") 68 | fi 69 | 70 | if [ "$(echo "$OUT"|grep 'successful' -c)" != 0 ];then 71 | echo "DNS UPDATE SUCCESS" 72 | else 73 | echo "DNS UPDATE FAILED" 74 | fi 75 | -------------------------------------------------------------------------------- /le-dns/le-cloudflare.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export CONFIG=$1 4 | 5 | if [ -f "$CONFIG" ];then 6 | . 
"$CONFIG" 7 | DIRNAME=$(dirname "$CONFIG") 8 | cd "$DIRNAME" || exit 1 9 | else 10 | echo "ERROR CONFIG." 11 | exit 1 12 | fi 13 | 14 | echo "$CERT_DOMAINS" > domains.txt 15 | 16 | if [ ! -f "cloudflare.sh" ];then 17 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/cloudflare.sh -O cloudflare.sh -o /dev/null 18 | chmod +x cloudflare.sh 19 | fi 20 | 21 | if [ ! -f "cloudflare-hook.sh" ];then 22 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/cloudflare-hook.sh -O cloudflare-hook.sh -o /dev/null 23 | chmod +x cloudflare-hook.sh 24 | fi 25 | 26 | if [ ! -f "letsencrypt.sh" ];then 27 | wget https://raw.githubusercontent.com/lukas2511/dehydrated/master/dehydrated -O letsencrypt.sh -o /dev/null 28 | chmod +x letsencrypt.sh 29 | fi 30 | 31 | ./letsencrypt.sh --register --accept-terms 32 | 33 | if [ "$ECC" = "TRUE" ];then 34 | ./letsencrypt.sh -c -k ./cloudflare-hook.sh -t dns-01 -a secp384r1 35 | else 36 | ./letsencrypt.sh -c -k ./cloudflare-hook.sh -t dns-01 37 | fi 38 | -------------------------------------------------------------------------------- /le-dns/le-cloudxns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export CONFIG=$1 4 | 5 | if [ -f "$CONFIG" ];then 6 | . "$CONFIG" 7 | DIRNAME=$(dirname "$CONFIG") 8 | cd "$DIRNAME" || exit 1 9 | else 10 | echo "ERROR CONFIG." 11 | exit 1 12 | fi 13 | 14 | echo "$CERT_DOMAINS" > domains.txt 15 | 16 | if [ ! -f "cloudxns.sh" ];then 17 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/cloudxns.sh -O cloudxns.sh -o /dev/null 18 | chmod +x cloudxns.sh 19 | fi 20 | 21 | if [ ! -f "cloudxns-hook.sh" ];then 22 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/cloudxns-hook.sh -O cloudxns-hook.sh -o /dev/null 23 | chmod +x cloudxns-hook.sh 24 | fi 25 | 26 | if [ ! -f "letsencrypt.sh" ];then 27 | wget https://raw.githubusercontent.com/lukas2511/dehydrated/master/dehydrated -O letsencrypt.sh -o /dev/null 28 | chmod +x letsencrypt.sh 29 | fi 30 | 31 | ./letsencrypt.sh --register --accept-terms 32 | 33 | if [ "$ECC" = "TRUE" ];then 34 | ./letsencrypt.sh -c -k ./cloudxns-hook.sh -t dns-01 -a secp384r1 35 | else 36 | ./letsencrypt.sh -c -k ./cloudxns-hook.sh -t dns-01 37 | fi 38 | -------------------------------------------------------------------------------- /le-dns/le-dnspod.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export CONFIG=$1 4 | 5 | if [ -f "$CONFIG" ];then 6 | . "$CONFIG" 7 | DIRNAME=$(dirname "$CONFIG") 8 | cd "$DIRNAME" || exit 1 9 | else 10 | echo "ERROR CONFIG." 11 | exit 1 12 | fi 13 | 14 | echo "$CERT_DOMAINS" > domains.txt 15 | 16 | if [ ! -f "dnspod.sh" ];then 17 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/dnspod.sh -O dnspod.sh -o /dev/null 18 | chmod +x dnspod.sh 19 | fi 20 | 21 | if [ ! -f "dnspod-hook.sh" ];then 22 | wget https://github.com/xdtianyu/scripts/raw/master/le-dns/dnspod-hook.sh -O dnspod-hook.sh -o /dev/null 23 | chmod +x dnspod-hook.sh 24 | fi 25 | 26 | if [ ! 
-f "letsencrypt.sh" ];then 27 | wget https://raw.githubusercontent.com/lukas2511/dehydrated/master/dehydrated -O letsencrypt.sh -o /dev/null 28 | chmod +x letsencrypt.sh 29 | fi 30 | 31 | ./letsencrypt.sh --register --accept-terms 32 | 33 | if [ "$ECC" = "TRUE" ];then 34 | ./letsencrypt.sh -c -k ./dnspod-hook.sh -t dns-01 -a secp384r1 35 | else 36 | ./letsencrypt.sh -c -k ./dnspod-hook.sh -t dns-01 37 | fi 38 | -------------------------------------------------------------------------------- /lets-encrypt/README-CN.md: -------------------------------------------------------------------------------- 1 | 一个快速获取/更新 Let's encrypt 证书的 shell script 2 | ------------ 3 | 4 | 调用 acme_tiny.py 认证、获取、更新证书,不需要额外的依赖。 5 | 6 | **下载到本地** 7 | 8 | ``` 9 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/lets-encrypt/letsencrypt.conf 10 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/lets-encrypt/letsencrypt.sh 11 | chmod +x letsencrypt.sh 12 | ``` 13 | 14 | **配置文件** 15 | 16 | 只需要修改 DOMAIN_KEY DOMAIN_DIR DOMAINS 为你自己的信息 17 | 18 | ``` 19 | ACCOUNT_KEY="letsencrypt-account.key" 20 | DOMAIN_KEY="example.com.key" 21 | DOMAIN_DIR="/var/www/example.com" 22 | DOMAINS="DNS:example.com,DNS:whatever.example.com" 23 | #ECC=TRUE 24 | #LIGHTTPD=TRUE 25 | ``` 26 | 27 | 执行过程中会自动生成需要的 key 文件。其中 `ACCOUNT_KEY` 为账户密钥, `DOMAIN_KEY` 为域名私钥, `DOMAIN_DIR` 为域名指向的目录,`DOMAINS` 为要签的域名列表, 需要 `ECC` 证书时取消 `#ECC=TRUE` 的注释,需要为 `lighttpd` 生成 `pem` 文件时,取消 `#LIGHTTPD=TRUE` 的注释。 28 | 29 | **运行** 30 | 31 | ``` 32 | ./letsencrypt.sh letsencrypt.conf 33 | ``` 34 | 35 | **注意** 36 | 37 | 需要已经绑定域名到 `/var/www/example.com` 目录,即通过 `http://example.com` `http://whatever.example.com` 可以访问到 `/var/www/example.com` 目录,用于域名的验证 38 | 39 | **将会生成如下几个文件** 40 | 41 | lets-encrypt-x1-cross-signed.pem 42 | example.chained.crt # 即网上搜索教程里常见的 fullchain.pem 43 | example.com.key # 即网上搜索教程里常见的 privkey.pem 44 | example.crt 45 | example.csr 46 | 47 | **在 nginx 里添加 ssl 相关的配置** 48 | 49 | ssl_certificate /path/to/cert/example.chained.crt; 50 | ssl_certificate_key /path/to/cert/example.com.key; 51 | 52 | **cron 定时任务** 53 | 54 | 每个月自动更新一次证书,可以在脚本最后加入 service nginx reload等重新加载服务。 55 | 56 | ``` 57 | 0 0 1 * * /etc/nginx/certs/letsencrypt.sh /etc/nginx/certs/letsencrypt.conf >> /var/log/lets-encrypt.log 2>&1 58 | ``` 59 | -------------------------------------------------------------------------------- /lets-encrypt/README.md: -------------------------------------------------------------------------------- 1 | A shell script to get/update Let's encrypt certs quickly. [中文](https://github.com/xdtianyu/scripts/blob/master/lets-encrypt/README-CN.md) 2 | ------------ 3 | 4 | This script uses acme_tiny.py to auth, fetch and update cert,no need for other dependency. 5 | 6 | **Download** 7 | 8 | ``` 9 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/lets-encrypt/letsencrypt.conf 10 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/lets-encrypt/letsencrypt.sh 11 | chmod +x letsencrypt.sh 12 | ``` 13 | 14 | **Configuration** 15 | 16 | Only modify DOMAIN_KEY DOMAIN_DIR DOMAINS to yours. 17 | 18 | ``` 19 | ACCOUNT_KEY="letsencrypt-account.key" 20 | DOMAIN_KEY="example.com.key" 21 | DOMAIN_DIR="/var/www/example.com" 22 | DOMAINS="DNS:example.com,DNS:whatever.example.com" 23 | ``` 24 | 25 | key files will be gererated automatically. 26 | 27 | **Run** 28 | 29 | ``` 30 | ./letsencrypt.sh letsencrypt.conf 31 | ``` 32 | 33 | **Attention** 34 | 35 | Domain name need bind to `DOMAIN_DIR` e.g. 
`/var/www/example.com`,that is to say visit `http://example.com` `http://whatever.example.com` can get into `/var/www/example.com` directory,this is used to verify your domain. 36 | 37 | **cron task** 38 | 39 | Update the certs every month,you can add your command to the end of script to reload your service, e.g. `service nginx reload` 40 | 41 | ``` 42 | 0 0 1 * * /etc/nginx/certs/letsencrypt.sh /etc/nginx/certs/letsencrypt.conf >> /var/log/lets-encrypt.log 2>&1 43 | ``` 44 | -------------------------------------------------------------------------------- /lets-encrypt/letsencrypt.conf: -------------------------------------------------------------------------------- 1 | # only modify the values, key files will be generated automaticly. 2 | ACCOUNT_KEY="letsencrypt-account.key" 3 | DOMAIN_KEY="example.com.key" 4 | DOMAIN_DIR="/var/www/example.com" 5 | DOMAINS="DNS:example.com,DNS:www.example.com" 6 | #ECC=TRUE 7 | #LIGHTTPD=TRUE 8 | -------------------------------------------------------------------------------- /lets-encrypt/letsencrypt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Usage: /etc/nginx/certs/letsencrypt.sh /etc/nginx/certs/letsencrypt.conf 4 | 5 | CONFIG=$1 6 | ACME_TINY="/tmp/acme_tiny.py" 7 | DOMAIN_KEY="" 8 | 9 | if [ -f "$CONFIG" ];then 10 | . "$CONFIG" 11 | DIRNAME=$(dirname "$CONFIG") 12 | cd "$DIRNAME" || exit 1 13 | else 14 | echo "ERROR CONFIG." 15 | exit 1 16 | fi 17 | 18 | KEY_PREFIX="${DOMAIN_KEY%%.*}" 19 | DOMAIN_CRT="$KEY_PREFIX.crt" 20 | DOMAIN_PEM="$KEY_PREFIX.pem" 21 | DOMAIN_CSR="$KEY_PREFIX.csr" 22 | DOMAIN_CHAINED_CRT="$KEY_PREFIX.chained.crt" 23 | 24 | if [ ! -f "$ACCOUNT_KEY" ];then 25 | echo "Generate account key..." 26 | openssl genrsa 4096 > "$ACCOUNT_KEY" 27 | fi 28 | 29 | if [ ! -f "$DOMAIN_KEY" ];then 30 | echo "Generate domain key..." 31 | if [ "$ECC" = "TRUE" ];then 32 | openssl ecparam -genkey -name secp256r1 | openssl ec -out "$DOMAIN_KEY" 33 | else 34 | openssl genrsa 2048 > "$DOMAIN_KEY" 35 | fi 36 | fi 37 | 38 | echo "Generate CSR...$DOMAIN_CSR" 39 | 40 | OPENSSL_CONF="/etc/ssl/openssl.cnf" 41 | 42 | if [ ! -f "$OPENSSL_CONF" ];then 43 | OPENSSL_CONF="/etc/pki/tls/openssl.cnf" 44 | if [ ! -f "$OPENSSL_CONF" ];then 45 | echo "Error, file openssl.cnf not found." 46 | exit 1 47 | fi 48 | fi 49 | 50 | openssl req -new -sha256 -key "$DOMAIN_KEY" -subj "/" -reqexts SAN -config <(cat $OPENSSL_CONF <(printf "[SAN]\nsubjectAltName=%s" "$DOMAINS")) > "$DOMAIN_CSR" 51 | 52 | wget https://raw.githubusercontent.com/diafygi/acme-tiny/master/acme_tiny.py --no-check-certificate -O $ACME_TINY -o /dev/null 53 | 54 | if [ -f "$DOMAIN_CRT" ];then 55 | mv "$DOMAIN_CRT" "$DOMAIN_CRT-OLD-$(date +%y%m%d-%H%M%S)" 56 | fi 57 | 58 | DOMAIN_DIR="$DOMAIN_DIR/.well-known/acme-challenge/" 59 | mkdir -p "$DOMAIN_DIR" 60 | 61 | python $ACME_TINY --account-key "$ACCOUNT_KEY" --csr "$DOMAIN_CSR" --acme-dir "$DOMAIN_DIR" > "$DOMAIN_CRT" 62 | 63 | if [ "$?" != 0 ];then 64 | exit 1 65 | fi 66 | 67 | if [ ! 
-f "lets-encrypt-x3-cross-signed.pem" ];then 68 | wget https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem --no-check-certificate -o /dev/null 69 | fi 70 | 71 | cat "$DOMAIN_CRT" lets-encrypt-x3-cross-signed.pem > "$DOMAIN_CHAINED_CRT" 72 | 73 | if [ "$LIGHTTPD" = "TRUE" ];then 74 | cat "$DOMAIN_KEY" "$DOMAIN_CRT" > "$DOMAIN_PEM" 75 | echo -e "\e[01;32mNew pem: $DOMAIN_PEM has been generated\e[0m" 76 | fi 77 | 78 | echo -e "\e[01;32mNew cert: $DOMAIN_CHAINED_CRT has been generated\e[0m" 79 | 80 | #service nginx reload 81 | -------------------------------------------------------------------------------- /localrt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :localrt.sh 3 | #description :add local route after vpn or usb wifi. 4 | #author :xdtianyu@gmail.com 5 | #date :20141029 6 | #version :1.0 final 7 | #usage :bash localrt.sh 8 | #bash_version :4.3.11(1)-release 9 | #============================================================================== 10 | 11 | route add -net 192.168.160.0 netmask 255.255.255.0 gw 192.168.163.1 12 | route add -net 192.168.161.0 netmask 255.255.255.0 gw 192.168.163.1 13 | route add -net 192.168.162.0 netmask 255.255.255.0 gw 192.168.163.1 14 | route add -net 192.168.163.0 netmask 255.255.255.0 gw 192.168.163.1 15 | -------------------------------------------------------------------------------- /media/generate.sh: -------------------------------------------------------------------------------- 1 | #/bin/bash 2 | 3 | filename=$1 4 | 5 | if [ -f "${filename%%.mp4*}.live" ]; then 6 | rm "${filename%%.mp4*}.live" 7 | fi 8 | 9 | if [ -f "$filename" ]; then 10 | dur=$(avconv -i $filename 2>&1 | grep "Duration"| cut -d ' ' -f 4 | sed s/,// | sed 's@\..*@@g' | awk '{ split($1, A, ":"); split(A[3], B, "."); print 3600*A[1] + 60*A[2] + B[1] }') 11 | 12 | #echo $filename $dur 13 | 14 | time=$(( $dur / 3 )) 15 | 16 | avconv -y -i $filename -f mjpeg -vframes 1 -ss $time ${filename%%.mp4*}.jpg >/dev/null 2>&1 17 | 18 | echo $dur > ${filename%%.mp4*}.duration 19 | else 20 | touch ${filename%%.mp4*}.fail 21 | fi 22 | -------------------------------------------------------------------------------- /net/DownloadHelper/.gitignore: -------------------------------------------------------------------------------- 1 | config.json 2 | *.pyc 3 | -------------------------------------------------------------------------------- /net/DownloadHelper/README.md: -------------------------------------------------------------------------------- 1 | ###DownloadHelper### 2 | 3 | This can convert nginx auto index page to download links for aria2c, can add mirrors(nginx proxy) to the link too. 4 | 5 | 6 | ``` 7 | files servered by nginx (https with basic auth) => mirrors (nginx proxy) => 8 | index2list.py(copy generated links) => webui-aria2(local aria2c) => HDD/SSD 9 | ``` 10 | 11 | **python requests SNI support:** 12 | 13 | ``` 14 | pip install pyOpenSSL 15 | pip install ndg-httpsclient 16 | pip install pyasn1 17 | ``` 18 | 19 | **nginx proxy config exmaple** 20 | 21 | I suggest you set `proxy_buffering` `off`, otherwise mirrors will use huge bandwidth to cache files. 
22 | 23 | ``` 24 | location /downloads/ { 25 | proxy_buffering off; 26 | proxy_pass https://www.xxx.com/downloads/; 27 | } 28 | ``` 29 | -------------------------------------------------------------------------------- /net/DownloadHelper/config.json.example: -------------------------------------------------------------------------------- 1 | { 2 | "url": "https://your.domain.name/your_downloads_dir_name/", 3 | "http_user": "YOUR_USERNAME", 4 | "http_password": "YOUR_PASSWORD", 5 | "file_type": [ 6 | ".mp4", 7 | ".wmv", 8 | ".mkv", 9 | ".avi", 10 | ".zip", 11 | ".rar", 12 | ".7z", 13 | ".tar", 14 | ".ANY_OTHER" 15 | ], 16 | "mirror": [ 17 | "https://mirror1.aaa.com/any/", 18 | "https://mirror2.bbb.org/proxy/", 19 | "https://mirror3.ccc.net/directory/", 20 | "https://mirror4.ddd.me/name/" 21 | ] 22 | } -------------------------------------------------------------------------------- /net/DownloadHelper/index2link.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import os 4 | import requests 5 | import json 6 | import base64 7 | 8 | from bs4 import BeautifulSoup 9 | 10 | __author__ = 'ty' 11 | 12 | config = json.load(open('config.json')) 13 | 14 | authorization = base64.b64encode(config["http_user"] + ":" + 15 | config["http_password"]) 16 | 17 | url = config["url"] 18 | 19 | file_type = config["file_type"] 20 | mirror = config["mirror"] 21 | 22 | headers = {"Authorization": "Basic " + authorization} 23 | 24 | link_list = [] 25 | 26 | 27 | def list_dir(target_url): 28 | r = requests.get(target_url, headers=headers, verify=True) 29 | 30 | soup = BeautifulSoup(r.text) 31 | 32 | links = soup.find_all("a") 33 | 34 | for link in links: 35 | href = link.get('href') 36 | 37 | if href == "../": 38 | continue 39 | 40 | if href.endswith('/'): 41 | list_dir(target_url + href) 42 | else: 43 | link_list.append(target_url + href) 44 | 45 | 46 | list_dir(url) 47 | 48 | link_list.sort(key=os.path.splitext) 49 | link_list.sort(key=lambda f: os.path.splitext(f)[1]) 50 | 51 | for l in link_list: 52 | 53 | if any(os.path.splitext(l)[1].lower() in s for s in file_type): 54 | 55 | download_link = l 56 | 57 | for m in mirror: 58 | download_link += " " + l.replace(url, m) 59 | print download_link + "\n" 60 | -------------------------------------------------------------------------------- /net/DownloadHelper/mirror.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import json 4 | import sys 5 | 6 | __author__ = 'ty' 7 | 8 | config = json.load(open('config.json')) 9 | mirror = config["mirror"] 10 | url = config["url"] 11 | 12 | l = sys.argv[1] 13 | 14 | download_link = l 15 | 16 | for m in mirror: 17 | download_link += " " + l.replace(url, m) 18 | print download_link + "\n" 19 | -------------------------------------------------------------------------------- /net/cm.device.list: -------------------------------------------------------------------------------- 1 | a5 2 | a700 3 | acclaim 4 | ace 5 | amami 6 | anzu 7 | aoba 8 | apexqtmo 9 | aries 10 | bacon 11 | blade 12 | bravo 13 | bravoc 14 | buzz 15 | c660 16 | c800 17 | captivatemtd 18 | castor 19 | castor_windy 20 | cdma_droid2we 21 | click 22 | coconut 23 | condor 24 | cooper 25 | crespo 26 | crespo4g 27 | d2att 28 | d2cri 29 | d2lte 30 | d2mtr 31 | d2spr 32 | d2tmo 33 | d2usc 34 | d2vzw 35 | d710 36 | d800 37 | d801 38 | d802 39 | d850 40 | d851 41 | d852 42 | d855 43 | deb 44 | desirec 45 | dlx 46 | dogo 47 | doubleshot 48 | 
dream_sapphire 49 | droid2 50 | droid2we 51 | e400 52 | e510 53 | e610 54 | e720 55 | e730 56 | e739 57 | e970 58 | e973 59 | e975 60 | e980 61 | e986 62 | encore 63 | endeavoru 64 | enrc2b 65 | epicmtd 66 | espresso 67 | everest 68 | evita 69 | exhilarate 70 | expressatt 71 | falcon 72 | fascinatemtd 73 | find5 74 | find7 75 | find7s 76 | fireball 77 | flo 78 | flounder 79 | fugu 80 | galaxys2 81 | galaxys2att 82 | galaxysbmtd 83 | galaxysmtd 84 | ghost 85 | glacier 86 | grouper 87 | haida 88 | hallon 89 | hammerhead 90 | hammerheadcaf 91 | harmony 92 | hayabusa 93 | hercules 94 | hero 95 | heroc 96 | hikari 97 | hlte 98 | hltespr 99 | hltetmo 100 | hlteusc 101 | hltevzw 102 | hltexx 103 | holiday 104 | honami 105 | huashan 106 | hummingbird 107 | i605 108 | i777 109 | i9100 110 | i9100g 111 | i9103 112 | i925 113 | i9300 114 | i9305 115 | i9500 116 | inc 117 | iyokan 118 | jactivelte 119 | jem 120 | jewel 121 | jflte 122 | jflteatt 123 | jfltecan 124 | jfltecri 125 | jfltecsp 126 | jfltespr 127 | jfltetmo 128 | jflteusc 129 | jfltevzw 130 | jfltexx 131 | jordan 132 | jordan_plus 133 | klimtwifi 134 | klte 135 | kltechn 136 | kltechnduo 137 | kltespr 138 | klteusc 139 | kltevzw 140 | ks01lte 141 | l01f 142 | l900 143 | legend 144 | liberty 145 | ls970 146 | ls980 147 | m4 148 | m7 149 | m7att 150 | m7spr 151 | m7tmo 152 | m7ul 153 | m7vzw 154 | m8 155 | maguro 156 | mako 157 | mango 158 | manta 159 | maserati 160 | mb886 161 | memul 162 | mesmerizemtd 163 | mimmi 164 | mint 165 | mondrianwifi 166 | morrison 167 | moto_msm8960 168 | moto_msm8960_jbbl 169 | moto_msm8960dt 170 | motus 171 | n1 172 | n3 173 | n5100 174 | n5110 175 | n5120 176 | n7000 177 | n7100 178 | n8000 179 | n8013 180 | nicki 181 | nozomi 182 | obake 183 | odin 184 | odroidu2 185 | olympus 186 | one 187 | otter 188 | otter2 189 | otterx 190 | ovation 191 | p1 192 | p1c 193 | p1l 194 | p1n 195 | p3 196 | p3100 197 | p3110 198 | p3113 199 | p350 200 | p4 201 | p4tmo 202 | p4vzw 203 | p4wifi 204 | p5 205 | p500 206 | p5100 207 | p5110 208 | p5113 209 | p5wifi 210 | p700 211 | p720 212 | p760 213 | p880 214 | p920 215 | p925 216 | p930 217 | p970 218 | p990 219 | p999 220 | passion 221 | peregrine 222 | picassowifi 223 | pollux 224 | pollux_windy 225 | pyramid 226 | quark 227 | quincyatt 228 | quincytmo 229 | r950 230 | robyn 231 | ruby 232 | s5670 233 | saga 234 | satsuma 235 | scorpion 236 | scorpion_windy 237 | serrano3gxx 238 | serranoltexx 239 | shadow 240 | shakira 241 | shamu 242 | sholes 243 | shooteru 244 | showcasemtd 245 | sirius 246 | skate 247 | skyrocket 248 | smb_a1002 249 | smultron 250 | solana 251 | speedy 252 | sprout 253 | spyder 254 | steelhead 255 | stingray 256 | su640 257 | sunfire 258 | superior 259 | supersonic 260 | t0lte 261 | t0lteatt 262 | t0ltetmo 263 | t6 264 | t6spr 265 | t6vzw 266 | t769 267 | taoshan 268 | targa 269 | tass 270 | tate 271 | tenderloin 272 | tf101 273 | tf201 274 | tf300t 275 | tf700t 276 | tf701t 277 | thea 278 | tianchi 279 | tilapia 280 | titan 281 | togari 282 | togari_gpe 283 | toro 284 | toroplus 285 | trltespr 286 | trltetmo 287 | trlteusc 288 | trltexx 289 | tsubasa 290 | u8150 291 | u8160 292 | u8220 293 | umts_spyder 294 | urushi 295 | v410 296 | v500 297 | v9 298 | vega 299 | vibrantmtd 300 | victara 301 | ville 302 | vision 303 | vivo 304 | vivow 305 | vs920 306 | vs980 307 | vs985 308 | w7 309 | wingray 310 | xt1053 311 | xt1058 312 | xt1060 313 | xt897 314 | xt897c 315 | xt907 316 | xt925 317 | xt925_jbbl 318 | xt926 319 | ypg1 320 | yuga 321 | z3 322 | z3c 323 
| z71 324 | zeppelin 325 | zero 326 | zeus 327 | zeusc 328 | -------------------------------------------------------------------------------- /net/cm.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | DEVICE=$1 3 | /root/bin/cmurl http://download.cyanogenmod.org $DEVICE /var/cyanogenmod/$DEVICE >>/var/cyanogenmod-log/$DEVICE.crontab.log 2>&1 4 | -------------------------------------------------------------------------------- /net/cm/README-CN.md: -------------------------------------------------------------------------------- 1 | 这个脚本会从 [http://download.cyanogenmod.org/](http://download.cyanogenmod.org/) 抓取 `*.zip` 及 `*.img` 文件并上传到百度云。 2 | 3 | ## 依赖 4 | 5 | 这工具需要 `python3` `bypy` `python3-bs4`, `trickle` 被用来限制上传速度 6 | 7 | ``` 8 | apt-get install python3 python3-bs4 python3-pip trickle 9 | git clone https://github.com/houtianze/bypy 10 | ``` 11 | 12 | ## 使用 13 | 14 | 请确保路径如下 `/root/bypy/bypy.py` `/root/cm/cm.py`, 否你需要自己修改脚本中的路径 15 | 16 | **上传** 17 | ``` 18 | cd ~/cm/ 19 | chmod +x cm.sh cm.py 20 | screen 21 | ./cm.py 22 | ``` 23 | 24 | 这个命令会抓取 `http://download.cyanogenmod.org/` 下载 `*.zip` `*.img` 文件并检查 `sha1`. 然后上传文件到百度云。 25 | 26 | 默认的下载文件夹是 `/var/cyanogenmod`, 默认的上传文件夹是 `/YOUR_APP_DIR/cm/DEVICE`. 日志文件保存在 `/var/cyanogenmod/DEVICE/log/DEVICE.log`. 已上传的文件列表保存在 `/root/cm/cm.json`, `cm.json` 中存在的文件将不会再下载. 这个脚本只检查最新的50个编译版本, 当所有上传完成后, 将等待60秒 然后再重新检查. 27 | 28 | **生成设备列表** 29 | 30 | ``` 31 | ./cm.py --list 32 | ``` 33 | -------------------------------------------------------------------------------- /net/cm/README.md: -------------------------------------------------------------------------------- 1 | This script will download `*.zip` and `*.img` files from [http://download.cyanogenmod.org/](http://download.cyanogenmod.org/) and then upload it to baiduyun. 2 | 3 | ## Dependency 4 | 5 | This tool requires `python3` `bypy` `python3-bs4`, `trickle` is used for limiting upload speed. 6 | 7 | ``` 8 | apt-get install python3 python3-bs4 python3-pip trickle 9 | git clone https://github.com/houtianze/bypy 10 | ``` 11 | 12 | ## Usage 13 | 14 | Make sure the paths is `/root/bypy/bypy.py` `/root/cm/cm.py`, otherwise you have to modify the script for your need. 15 | 16 | **Upload** 17 | ``` 18 | cd ~/cm/ 19 | chmod +x cm.sh cm.py 20 | screen 21 | ./cm.py 22 | ``` 23 | 24 | This will fetch `http://download.cyanogenmod.org/` and download `*.zip` `*.img` and check their `sha1`. Then upload it to baiduyun. 25 | 26 | The default download directory is `/var/cyanogenmod`, and the default upload directory is `/YOUR_APP_DIR/cm/DEVICE`. Log file is saved in `/var/cyanogenmod/DEVICE/log/DEVICE.log`. Files have been uploaded is recorded in `/root/cm/cm.json`, file exists in `cm.json` will not download again. The script checks only the last 50 builds. After all uploads are done, it will sleep 60s and then check again. 
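For reference, the `cm.json` written by `cm.py` is a plain JSON object with a `count` field and a `cmFiles` array of `{sha1, url}` records (see `cmfile.py`); a minimal sketch with placeholder hash, build path and file name:

```
{
    "cmFiles": [
        {
            "sha1": "0123456789abcdef0123456789abcdef01234567",
            "url": "http://download.cyanogenmod.org/get/jenkins/XXXXX/cm-12.1-YYYYMMDD-NIGHTLY-hammerhead.zip"
        }
    ],
    "count": 1
}
```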
27 | 28 | **Generate device list** 29 | 30 | ``` 31 | ./cm.py --list 32 | ``` 33 | -------------------------------------------------------------------------------- /net/cm/cm.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import os 3 | import re 4 | import sys 5 | import io 6 | from subprocess import call, Popen, check_output 7 | import time 8 | 9 | __author__ = 'ty' 10 | 11 | from bs4 import BeautifulSoup 12 | from bs4 import Tag 13 | 14 | from cmfile import CMFile 15 | 16 | import requests 17 | import json 18 | 19 | devices = ["mako", "hammerhead", "shamu", "grouper", "tilapia", "flo", "deb", "flounder", "manta", "a5", "dlx", "jewei", 20 | "memul", "m7", "m8", "m8d", "t6", "cherry", "g620_a2", "mt2", "d802", "d855", "e975", "falcon", "peregrine", 21 | "titan", "thea", "osprey", "ghost", "victara", "bacon", "find5", "find7", "find7s", "n3", "r7plus", "hlte", 22 | "trltexx", "i9100", "jfltexx", "i9500", "kltechn", "kltechnduo", "honami", "amami", "sirius", "z3", "z3c", 23 | "cancro"] 24 | 25 | 26 | def sha1_file(file_path): 27 | import hashlib 28 | 29 | sha = hashlib.sha1() 30 | with open(file_path, 'rb') as s_f: 31 | while True: 32 | block = s_f.read(2 ** 10) # Magic number: one-megabyte blocks. 33 | if not block: 34 | break 35 | sha.update(block) 36 | return sha.hexdigest() 37 | 38 | 39 | try: 40 | mode = sys.argv[1] 41 | except IndexError: 42 | mode = "--download" 43 | 44 | url = "http://download.cyanogenmod.org" 45 | 46 | while True: 47 | 48 | cmFiles_dict = {} 49 | cmFiles = [] 50 | count = 0 51 | 52 | download_dir = "/var/cyanogenmod/" 53 | 54 | if os.path.exists("cm.json"): 55 | with open('cm.json') as data_file: 56 | data = json.load(data_file) 57 | for d in data["cmFiles"]: 58 | cmFile = CMFile() 59 | cmFile.url = d["url"] 60 | cmFile.sha1 = d["sha1"] 61 | cmFiles.append(json.JSONDecoder().decode(cmFile.json())) 62 | count += 1 63 | 64 | r = requests.get(url, verify=False) 65 | 66 | # print(r.text) 67 | 68 | soup = BeautifulSoup(r.text) 69 | 70 | if mode and (mode == "--list" or mode == "-l"): 71 | td_list = soup.find_all('li', id=re.compile('device_.*')) 72 | 73 | print("codename".ljust(20) + "fullname") 74 | 75 | for td in td_list: 76 | for content in td.contents[0].contents: 77 | s = "" 78 | if isinstance(content, Tag): 79 | device = content.text.rsplit(' ', 1) 80 | s = device[1].replace('(', '').replace(')', '') 81 | s = s.ljust(20) + device[0] 82 | print(s) 83 | sys.exit(0) 84 | else: 85 | link_list = soup.find_all('a', href=re.compile('/get/jenkins/.*')) 86 | for link in link_list: 87 | if isinstance(link, Tag): 88 | 89 | cmFile = CMFile() 90 | cmFile.url = url + link.get('href') 91 | 92 | parent = link.parent 93 | if isinstance(parent, Tag): 94 | sha1 = parent.find('small', class_='md5') 95 | sha1 = sha1.text.replace('\n', '').strip().replace('sha1: ', '') 96 | cmFile.sha1 = sha1 97 | s = json.JSONDecoder().decode(cmFile.json()) 98 | if s in cmFiles: 99 | print("downloaded, skip") 100 | else: 101 | 102 | while int( 103 | check_output('ps -ef | grep /root/bypy/bypy.py |grep -v grep |wc -l', shell=True)) >= 3: 104 | print("wait other jobs done...") 105 | time.sleep(3) 106 | 107 | # download zip and recovery.img and check hash 108 | print(cmFile.url) 109 | 110 | index = cmFile.url.rfind('/') 111 | filename = cmFile.url[index + 1:] 112 | print(filename) 113 | 114 | device = "" 115 | if filename.endswith('.img'): 116 | device = filename.replace('-recovery.img', '') 117 | else: 118 | device = 
filename.replace('.zip', '') 119 | device = device[device.rfind('-') + 1:] 120 | print(device) 121 | 122 | if device not in devices: 123 | print(device + " is not in support list, ignore.") 124 | continue 125 | 126 | out_file = "{}{}/{}".format(download_dir, device, filename) 127 | 128 | call("mkdir -p {}{}".format(download_dir, device), shell=True) 129 | 130 | call("wget --limit-rate=3000k --progress=dot:binary \"{}\" -O \"{}\" -o \"{}.wget.log\"". 131 | format(cmFile.url, out_file, out_file), shell=True) 132 | if sha1_file(out_file) == cmFile.sha1: 133 | 134 | # upload file via bypy 135 | 136 | call("sha1sum {} > {}.sha1".format(out_file, out_file), shell=True) 137 | 138 | cmFiles.append(s) 139 | count += 1 140 | 141 | Popen("./cm.sh {}{} {} {}".format(download_dir, device, filename, device), shell=True) 142 | Popen("./cm.sh {}{} {}.sha1 {}".format(download_dir, device, filename, device), shell=True) 143 | Popen("./cm.sh {}{} {}.wget.log {}".format(download_dir, device, filename, device), 144 | shell=True) 145 | else: 146 | print("sha1 not match, continue") 147 | continue 148 | 149 | cmFiles_dict["count"] = count 150 | cmFiles_dict["cmFiles"] = cmFiles 151 | 152 | with io.open('cm.json', 'w') as f: 153 | json.dump(cmFiles_dict, f, ensure_ascii=False, sort_keys=True, indent=4, separators=(',', ': ')) 154 | 155 | time.sleep(60) 156 | print("job finished, restart now") 157 | -------------------------------------------------------------------------------- /net/cm/cm.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | DIR=$1 4 | FILE=$2 5 | DEVICE=$3 6 | 7 | mkdir -p $DIR/log 8 | /root/bypy/bypy.py mkdir cm/$DEVICE >> $DIR/log/$DEVICE.log 2>&1 9 | trickle -s -u 2000 /root/bypy/bypy.py -v --disable-ssl-check -s 10MB upload $DIR/$FILE cm/$DEVICE/$FILE >> $DIR/log/$DEVICE.log 2>&1 10 | 11 | rm $DIR/$FILE >> $DIR/log/$DEVICE.log 2>&1 12 | -------------------------------------------------------------------------------- /net/cm/cm_monitor.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # crontab: */1 * * * * /root/bin/cm_monitor.sh >> /tmp/cm_monitor.log 2>&1 3 | 4 | if [ $(ps -ef|grep cm.py|grep -v grep|wc -l) -eq 0 ]; then 5 | echo "$(date) -- cm.py is down, start again." 6 | cd /root/cm/; screen -dmS cm python3 /root/cm/cm.py 7 | else 8 | echo "$(date) -- cm.py is running." 
9 | fi 10 | -------------------------------------------------------------------------------- /net/cm/cmfile.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | __author__ = 'ty' 4 | 5 | import json 6 | 7 | class CMFile: 8 | url = "" 9 | sha1 = "" 10 | 11 | def __init__(self): 12 | pass 13 | 14 | def json(self): 15 | return json.dumps(self, default=lambda o: o.__dict__, ensure_ascii=False, sort_keys=True) 16 | -------------------------------------------------------------------------------- /net/cm2cron.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from bs4 import BeautifulSoup 4 | 5 | import urllib2 6 | 7 | url = 'https://download.cyanogenmod.org/' 8 | soup = BeautifulSoup(urllib2.urlopen(url)) 9 | 10 | tag = soup.find_all('span') 11 | 12 | count = 0 13 | 14 | for span in tag: 15 | if span.get('class')[0] == "codename": 16 | print str(count) + " * * * * bash /root/bin/cm " + span.string 17 | count += 2 18 | if count == 60: 19 | count = 0 20 | -------------------------------------------------------------------------------- /net/cm2list.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from bs4 import BeautifulSoup 4 | 5 | import urllib2 6 | 7 | url = 'https://download.cyanogenmod.org/' 8 | soup = BeautifulSoup(urllib2.urlopen(url)) 9 | 10 | a_list = soup.find_all('a') 11 | 12 | print "codename".ljust(20) + "fullname" 13 | 14 | for a in a_list: 15 | if a.get('class') and a.get('class')[0] == "device": 16 | 17 | spans = a.find_all('span') 18 | 19 | s = "" 20 | for span in spans: 21 | if span.get('class')[0] == "codename": 22 | s = span.string 23 | if span.get('class')[0] == "fullname": 24 | s = s.ljust(20) + span.string 25 | print s 26 | -------------------------------------------------------------------------------- /net/cmurl.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Please read cm.sh and cyanogenmod.crontab.list too. 3 | 4 | URL=$1 5 | DEVICE=$2 6 | TARGET=$DEVICE.zip 7 | DIRECTORY=$3 8 | BYPY='/root/bypy/bypy.py' 9 | 10 | URL=$(curl -k -s $URL |grep $TARGET |grep -v html| sed -n "s|.*href=\"\([^\"]*\).*|$URL\1|p") 11 | 12 | if [ -z "$URL" ];then 13 | echo "ERROR" 14 | exit 0 15 | fi 16 | 17 | echo $URL 18 | ZIP="${URL##*/}" 19 | echo $ZIP 20 | 21 | if [ -f "$DIRECTORY/$ZIP" ];then 22 | echo "DOWNLOADED" 23 | exit 0 24 | fi 25 | 26 | if [ ! -d "$DIRECTORY" ];then 27 | echo "ERROR DIR" 28 | mkdir -p "$DIRECTORY" 29 | fi 30 | 31 | if [ -f "$DIRECTORY/$ZIP.done" ];then 32 | echo "UPLOADED" 33 | exit 0 34 | fi 35 | 36 | cd $DIRECTORY 37 | 38 | touch none.done 39 | 40 | for file in *.done;do 41 | echo "REMOVE $file" 42 | rm $file 43 | done 44 | 45 | wget $URL > $ZIP.wget.log 2>&1 46 | 47 | md5sum $ZIP > $ZIP.md5 48 | 49 | $BYPY -v --disable-ssl-check -s 10MB syncup . 
cm/$DEVICE 50 | 51 | if [ -f "$DIRECTORY/$ZIP" ];then 52 | rm "$DIRECTORY/$ZIP" 53 | fi 54 | 55 | if [ -f "$DIRECTORY/$ZIP.md5" ];then 56 | rm "$DIRECTORY/$ZIP.md5" 57 | fi 58 | 59 | if [ -f "$DIRECTORY/$ZIP.wget.log" ];then 60 | rm "$DIRECTORY/$ZIP.wget.log" 61 | fi 62 | 63 | touch "$DIRECTORY/$ZIP.done" 64 | 65 | 66 | cd - 67 | -------------------------------------------------------------------------------- /net/cyanogenmod.crontab.list: -------------------------------------------------------------------------------- 1 | 0 * * * * bash /root/bin/cm a5 2 | 2 * * * * bash /root/bin/cm a700 3 | 4 * * * * bash /root/bin/cm acclaim 4 | 6 * * * * bash /root/bin/cm ace 5 | 8 * * * * bash /root/bin/cm amami 6 | 10 * * * * bash /root/bin/cm anzu 7 | 12 * * * * bash /root/bin/cm aoba 8 | 14 * * * * bash /root/bin/cm apexqtmo 9 | 16 * * * * bash /root/bin/cm aries 10 | 18 * * * * bash /root/bin/cm bacon 11 | 20 * * * * bash /root/bin/cm blade 12 | 22 * * * * bash /root/bin/cm bravo 13 | 24 * * * * bash /root/bin/cm bravoc 14 | 26 * * * * bash /root/bin/cm buzz 15 | 28 * * * * bash /root/bin/cm c660 16 | 30 * * * * bash /root/bin/cm c800 17 | 32 * * * * bash /root/bin/cm captivatemtd 18 | 34 * * * * bash /root/bin/cm castor 19 | 36 * * * * bash /root/bin/cm castor_windy 20 | 38 * * * * bash /root/bin/cm cdma_droid2we 21 | 40 * * * * bash /root/bin/cm click 22 | 42 * * * * bash /root/bin/cm coconut 23 | 44 * * * * bash /root/bin/cm condor 24 | 46 * * * * bash /root/bin/cm cooper 25 | 48 * * * * bash /root/bin/cm crespo 26 | 50 * * * * bash /root/bin/cm crespo4g 27 | 52 * * * * bash /root/bin/cm d2att 28 | 54 * * * * bash /root/bin/cm d2cri 29 | 56 * * * * bash /root/bin/cm d2lte 30 | 58 * * * * bash /root/bin/cm d2mtr 31 | 0 * * * * bash /root/bin/cm d2spr 32 | 2 * * * * bash /root/bin/cm d2tmo 33 | 4 * * * * bash /root/bin/cm d2usc 34 | 6 * * * * bash /root/bin/cm d2vzw 35 | 8 * * * * bash /root/bin/cm d710 36 | 10 * * * * bash /root/bin/cm d800 37 | 12 * * * * bash /root/bin/cm d801 38 | 14 * * * * bash /root/bin/cm d802 39 | 16 * * * * bash /root/bin/cm d850 40 | 18 * * * * bash /root/bin/cm d851 41 | 20 * * * * bash /root/bin/cm d852 42 | 22 * * * * bash /root/bin/cm d855 43 | 24 * * * * bash /root/bin/cm deb 44 | 26 * * * * bash /root/bin/cm desirec 45 | 28 * * * * bash /root/bin/cm dlx 46 | 30 * * * * bash /root/bin/cm dogo 47 | 32 * * * * bash /root/bin/cm doubleshot 48 | 34 * * * * bash /root/bin/cm dream_sapphire 49 | 36 * * * * bash /root/bin/cm droid2 50 | 38 * * * * bash /root/bin/cm droid2we 51 | 40 * * * * bash /root/bin/cm e400 52 | 42 * * * * bash /root/bin/cm e510 53 | 44 * * * * bash /root/bin/cm e610 54 | 46 * * * * bash /root/bin/cm e720 55 | 48 * * * * bash /root/bin/cm e730 56 | 50 * * * * bash /root/bin/cm e739 57 | 52 * * * * bash /root/bin/cm e970 58 | 54 * * * * bash /root/bin/cm e973 59 | 56 * * * * bash /root/bin/cm e975 60 | 58 * * * * bash /root/bin/cm e980 61 | 0 * * * * bash /root/bin/cm e986 62 | 2 * * * * bash /root/bin/cm encore 63 | 4 * * * * bash /root/bin/cm endeavoru 64 | 6 * * * * bash /root/bin/cm enrc2b 65 | 8 * * * * bash /root/bin/cm epicmtd 66 | 10 * * * * bash /root/bin/cm espresso 67 | 12 * * * * bash /root/bin/cm everest 68 | 14 * * * * bash /root/bin/cm evita 69 | 16 * * * * bash /root/bin/cm exhilarate 70 | 18 * * * * bash /root/bin/cm expressatt 71 | 20 * * * * bash /root/bin/cm falcon 72 | 22 * * * * bash /root/bin/cm fascinatemtd 73 | 24 * * * * bash /root/bin/cm find5 74 | 26 * * * * bash /root/bin/cm find7 75 | 28 * * * * bash /root/bin/cm find7s 
76 | 30 * * * * bash /root/bin/cm fireball 77 | 32 * * * * bash /root/bin/cm flo 78 | 34 * * * * bash /root/bin/cm flounder 79 | 36 * * * * bash /root/bin/cm fugu 80 | 38 * * * * bash /root/bin/cm galaxys2 81 | 40 * * * * bash /root/bin/cm galaxys2att 82 | 42 * * * * bash /root/bin/cm galaxysbmtd 83 | 44 * * * * bash /root/bin/cm galaxysmtd 84 | 46 * * * * bash /root/bin/cm ghost 85 | 48 * * * * bash /root/bin/cm glacier 86 | 50 * * * * bash /root/bin/cm grouper 87 | 52 * * * * bash /root/bin/cm haida 88 | 54 * * * * bash /root/bin/cm hallon 89 | 56 * * * * bash /root/bin/cm hammerhead 90 | 58 * * * * bash /root/bin/cm hammerheadcaf 91 | 0 * * * * bash /root/bin/cm harmony 92 | 2 * * * * bash /root/bin/cm hayabusa 93 | 4 * * * * bash /root/bin/cm hercules 94 | 6 * * * * bash /root/bin/cm hero 95 | 8 * * * * bash /root/bin/cm heroc 96 | 10 * * * * bash /root/bin/cm hikari 97 | 12 * * * * bash /root/bin/cm hlte 98 | 14 * * * * bash /root/bin/cm hltespr 99 | 16 * * * * bash /root/bin/cm hltetmo 100 | 18 * * * * bash /root/bin/cm hlteusc 101 | 20 * * * * bash /root/bin/cm hltevzw 102 | 22 * * * * bash /root/bin/cm hltexx 103 | 24 * * * * bash /root/bin/cm holiday 104 | 26 * * * * bash /root/bin/cm honami 105 | 28 * * * * bash /root/bin/cm huashan 106 | 30 * * * * bash /root/bin/cm hummingbird 107 | 32 * * * * bash /root/bin/cm i605 108 | 34 * * * * bash /root/bin/cm i777 109 | 36 * * * * bash /root/bin/cm i9100 110 | 38 * * * * bash /root/bin/cm i9100g 111 | 40 * * * * bash /root/bin/cm i9103 112 | 42 * * * * bash /root/bin/cm i925 113 | 44 * * * * bash /root/bin/cm i9300 114 | 46 * * * * bash /root/bin/cm i9305 115 | 48 * * * * bash /root/bin/cm i9500 116 | 50 * * * * bash /root/bin/cm inc 117 | 52 * * * * bash /root/bin/cm iyokan 118 | 54 * * * * bash /root/bin/cm jactivelte 119 | 56 * * * * bash /root/bin/cm jem 120 | 58 * * * * bash /root/bin/cm jewel 121 | 0 * * * * bash /root/bin/cm jflte 122 | 2 * * * * bash /root/bin/cm jflteatt 123 | 4 * * * * bash /root/bin/cm jfltecan 124 | 6 * * * * bash /root/bin/cm jfltecri 125 | 8 * * * * bash /root/bin/cm jfltecsp 126 | 10 * * * * bash /root/bin/cm jfltespr 127 | 12 * * * * bash /root/bin/cm jfltetmo 128 | 14 * * * * bash /root/bin/cm jflteusc 129 | 16 * * * * bash /root/bin/cm jfltevzw 130 | 18 * * * * bash /root/bin/cm jfltexx 131 | 20 * * * * bash /root/bin/cm jordan 132 | 22 * * * * bash /root/bin/cm jordan_plus 133 | 24 * * * * bash /root/bin/cm klimtwifi 134 | 26 * * * * bash /root/bin/cm klte 135 | 28 * * * * bash /root/bin/cm kltechn 136 | 30 * * * * bash /root/bin/cm kltechnduo 137 | 32 * * * * bash /root/bin/cm kltespr 138 | 34 * * * * bash /root/bin/cm klteusc 139 | 36 * * * * bash /root/bin/cm kltevzw 140 | 38 * * * * bash /root/bin/cm ks01lte 141 | 40 * * * * bash /root/bin/cm l01f 142 | 42 * * * * bash /root/bin/cm l900 143 | 44 * * * * bash /root/bin/cm legend 144 | 46 * * * * bash /root/bin/cm liberty 145 | 48 * * * * bash /root/bin/cm ls970 146 | 50 * * * * bash /root/bin/cm ls980 147 | 52 * * * * bash /root/bin/cm m4 148 | 54 * * * * bash /root/bin/cm m7 149 | 56 * * * * bash /root/bin/cm m7att 150 | 58 * * * * bash /root/bin/cm m7spr 151 | 0 * * * * bash /root/bin/cm m7tmo 152 | 2 * * * * bash /root/bin/cm m7ul 153 | 4 * * * * bash /root/bin/cm m7vzw 154 | 6 * * * * bash /root/bin/cm m8 155 | 8 * * * * bash /root/bin/cm maguro 156 | 10 * * * * bash /root/bin/cm mako 157 | 12 * * * * bash /root/bin/cm mango 158 | 14 * * * * bash /root/bin/cm manta 159 | 16 * * * * bash /root/bin/cm maserati 160 | 18 * * * * bash /root/bin/cm 
mb886 161 | 20 * * * * bash /root/bin/cm memul 162 | 22 * * * * bash /root/bin/cm mesmerizemtd 163 | 24 * * * * bash /root/bin/cm mimmi 164 | 26 * * * * bash /root/bin/cm mint 165 | 28 * * * * bash /root/bin/cm mondrianwifi 166 | 30 * * * * bash /root/bin/cm morrison 167 | 32 * * * * bash /root/bin/cm moto_msm8960 168 | 34 * * * * bash /root/bin/cm moto_msm8960_jbbl 169 | 36 * * * * bash /root/bin/cm moto_msm8960dt 170 | 38 * * * * bash /root/bin/cm motus 171 | 40 * * * * bash /root/bin/cm n1 172 | 42 * * * * bash /root/bin/cm n3 173 | 44 * * * * bash /root/bin/cm n5100 174 | 46 * * * * bash /root/bin/cm n5110 175 | 48 * * * * bash /root/bin/cm n5120 176 | 50 * * * * bash /root/bin/cm n7000 177 | 52 * * * * bash /root/bin/cm n7100 178 | 54 * * * * bash /root/bin/cm n8000 179 | 56 * * * * bash /root/bin/cm n8013 180 | 58 * * * * bash /root/bin/cm nicki 181 | 0 * * * * bash /root/bin/cm nozomi 182 | 2 * * * * bash /root/bin/cm obake 183 | 4 * * * * bash /root/bin/cm odin 184 | 6 * * * * bash /root/bin/cm odroidu2 185 | 8 * * * * bash /root/bin/cm olympus 186 | 10 * * * * bash /root/bin/cm one 187 | 12 * * * * bash /root/bin/cm otter 188 | 14 * * * * bash /root/bin/cm otter2 189 | 16 * * * * bash /root/bin/cm otterx 190 | 18 * * * * bash /root/bin/cm ovation 191 | 20 * * * * bash /root/bin/cm p1 192 | 22 * * * * bash /root/bin/cm p1c 193 | 24 * * * * bash /root/bin/cm p1l 194 | 26 * * * * bash /root/bin/cm p1n 195 | 28 * * * * bash /root/bin/cm p3 196 | 30 * * * * bash /root/bin/cm p3100 197 | 32 * * * * bash /root/bin/cm p3110 198 | 34 * * * * bash /root/bin/cm p3113 199 | 36 * * * * bash /root/bin/cm p350 200 | 38 * * * * bash /root/bin/cm p4 201 | 40 * * * * bash /root/bin/cm p4tmo 202 | 42 * * * * bash /root/bin/cm p4vzw 203 | 44 * * * * bash /root/bin/cm p4wifi 204 | 46 * * * * bash /root/bin/cm p5 205 | 48 * * * * bash /root/bin/cm p500 206 | 50 * * * * bash /root/bin/cm p5100 207 | 52 * * * * bash /root/bin/cm p5110 208 | 54 * * * * bash /root/bin/cm p5113 209 | 56 * * * * bash /root/bin/cm p5wifi 210 | 58 * * * * bash /root/bin/cm p700 211 | 0 * * * * bash /root/bin/cm p720 212 | 2 * * * * bash /root/bin/cm p760 213 | 4 * * * * bash /root/bin/cm p880 214 | 6 * * * * bash /root/bin/cm p920 215 | 8 * * * * bash /root/bin/cm p925 216 | 10 * * * * bash /root/bin/cm p930 217 | 12 * * * * bash /root/bin/cm p970 218 | 14 * * * * bash /root/bin/cm p990 219 | 16 * * * * bash /root/bin/cm p999 220 | 18 * * * * bash /root/bin/cm passion 221 | 20 * * * * bash /root/bin/cm peregrine 222 | 22 * * * * bash /root/bin/cm picassowifi 223 | 24 * * * * bash /root/bin/cm pollux 224 | 26 * * * * bash /root/bin/cm pollux_windy 225 | 28 * * * * bash /root/bin/cm pyramid 226 | 30 * * * * bash /root/bin/cm quark 227 | 32 * * * * bash /root/bin/cm quincyatt 228 | 34 * * * * bash /root/bin/cm quincytmo 229 | 36 * * * * bash /root/bin/cm r950 230 | 38 * * * * bash /root/bin/cm robyn 231 | 40 * * * * bash /root/bin/cm ruby 232 | 42 * * * * bash /root/bin/cm s5670 233 | 44 * * * * bash /root/bin/cm saga 234 | 46 * * * * bash /root/bin/cm satsuma 235 | 48 * * * * bash /root/bin/cm scorpion 236 | 50 * * * * bash /root/bin/cm scorpion_windy 237 | 52 * * * * bash /root/bin/cm serrano3gxx 238 | 54 * * * * bash /root/bin/cm serranoltexx 239 | 56 * * * * bash /root/bin/cm shadow 240 | 58 * * * * bash /root/bin/cm shakira 241 | 0 * * * * bash /root/bin/cm shamu 242 | 2 * * * * bash /root/bin/cm sholes 243 | 4 * * * * bash /root/bin/cm shooteru 244 | 6 * * * * bash /root/bin/cm showcasemtd 245 | 8 * * * * bash /root/bin/cm 
sirius 246 | 10 * * * * bash /root/bin/cm skate 247 | 12 * * * * bash /root/bin/cm skyrocket 248 | 14 * * * * bash /root/bin/cm smb_a1002 249 | 16 * * * * bash /root/bin/cm smultron 250 | 18 * * * * bash /root/bin/cm solana 251 | 20 * * * * bash /root/bin/cm speedy 252 | 22 * * * * bash /root/bin/cm sprout 253 | 24 * * * * bash /root/bin/cm spyder 254 | 26 * * * * bash /root/bin/cm steelhead 255 | 28 * * * * bash /root/bin/cm stingray 256 | 30 * * * * bash /root/bin/cm su640 257 | 32 * * * * bash /root/bin/cm sunfire 258 | 34 * * * * bash /root/bin/cm superior 259 | 36 * * * * bash /root/bin/cm supersonic 260 | 38 * * * * bash /root/bin/cm t0lte 261 | 40 * * * * bash /root/bin/cm t0lteatt 262 | 42 * * * * bash /root/bin/cm t0ltetmo 263 | 44 * * * * bash /root/bin/cm t6 264 | 46 * * * * bash /root/bin/cm t6spr 265 | 48 * * * * bash /root/bin/cm t6vzw 266 | 50 * * * * bash /root/bin/cm t769 267 | 52 * * * * bash /root/bin/cm taoshan 268 | 54 * * * * bash /root/bin/cm targa 269 | 56 * * * * bash /root/bin/cm tass 270 | 58 * * * * bash /root/bin/cm tate 271 | 0 * * * * bash /root/bin/cm tenderloin 272 | 2 * * * * bash /root/bin/cm tf101 273 | 4 * * * * bash /root/bin/cm tf201 274 | 6 * * * * bash /root/bin/cm tf300t 275 | 8 * * * * bash /root/bin/cm tf700t 276 | 10 * * * * bash /root/bin/cm tf701t 277 | 12 * * * * bash /root/bin/cm thea 278 | 14 * * * * bash /root/bin/cm tianchi 279 | 16 * * * * bash /root/bin/cm tilapia 280 | 18 * * * * bash /root/bin/cm titan 281 | 20 * * * * bash /root/bin/cm togari 282 | 22 * * * * bash /root/bin/cm togari_gpe 283 | 24 * * * * bash /root/bin/cm toro 284 | 26 * * * * bash /root/bin/cm toroplus 285 | 28 * * * * bash /root/bin/cm trltespr 286 | 30 * * * * bash /root/bin/cm trltetmo 287 | 32 * * * * bash /root/bin/cm trlteusc 288 | 34 * * * * bash /root/bin/cm trltexx 289 | 36 * * * * bash /root/bin/cm tsubasa 290 | 38 * * * * bash /root/bin/cm u8150 291 | 40 * * * * bash /root/bin/cm u8160 292 | 42 * * * * bash /root/bin/cm u8220 293 | 44 * * * * bash /root/bin/cm umts_spyder 294 | 46 * * * * bash /root/bin/cm urushi 295 | 48 * * * * bash /root/bin/cm v410 296 | 50 * * * * bash /root/bin/cm v500 297 | 52 * * * * bash /root/bin/cm v9 298 | 54 * * * * bash /root/bin/cm vega 299 | 56 * * * * bash /root/bin/cm vibrantmtd 300 | 58 * * * * bash /root/bin/cm victara 301 | 0 * * * * bash /root/bin/cm ville 302 | 2 * * * * bash /root/bin/cm vision 303 | 4 * * * * bash /root/bin/cm vivo 304 | 6 * * * * bash /root/bin/cm vivow 305 | 8 * * * * bash /root/bin/cm vs920 306 | 10 * * * * bash /root/bin/cm vs980 307 | 12 * * * * bash /root/bin/cm vs985 308 | 14 * * * * bash /root/bin/cm w7 309 | 16 * * * * bash /root/bin/cm wingray 310 | 18 * * * * bash /root/bin/cm xt1053 311 | 20 * * * * bash /root/bin/cm xt1058 312 | 22 * * * * bash /root/bin/cm xt1060 313 | 24 * * * * bash /root/bin/cm xt897 314 | 26 * * * * bash /root/bin/cm xt897c 315 | 28 * * * * bash /root/bin/cm xt907 316 | 30 * * * * bash /root/bin/cm xt925 317 | 32 * * * * bash /root/bin/cm xt925_jbbl 318 | 34 * * * * bash /root/bin/cm xt926 319 | 36 * * * * bash /root/bin/cm ypg1 320 | 38 * * * * bash /root/bin/cm yuga 321 | 40 * * * * bash /root/bin/cm z3 322 | 42 * * * * bash /root/bin/cm z3c 323 | 44 * * * * bash /root/bin/cm z71 324 | 46 * * * * bash /root/bin/cm zeppelin 325 | 48 * * * * bash /root/bin/cm zero 326 | 50 * * * * bash /root/bin/cm zeus 327 | 52 * * * * bash /root/bin/cm zeusc 328 | -------------------------------------------------------------------------------- /net/http/http.php: 
-------------------------------------------------------------------------------- 1 | >/tmp/out.txt 2>&1 &"); 14 | $i = 0; 15 | foreach ($files as $file) { 16 | shell_exec("./http.sh '".$file."' '".$names[$i]."' >>/tmp/http.txt 2>&1 &"); 17 | $i = $i+1; 18 | } 19 | exit; 20 | } else { 21 | echo json_encode(array("error" => "empty file")); 22 | } 23 | break; 24 | case 'GET': 25 | echo '
26 | LINKS:
27 |
28 | NAMES:
29 |
30 | 31 |
'; 32 | break; 33 | } 34 | ?> 35 | -------------------------------------------------------------------------------- /net/http/http.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | LINK="$1" 4 | NAME="$2" 5 | 6 | echo "$LINK" "$NAME" 7 | 8 | ARIA2_RPC=/etc/transmission-daemon/aria2-rpc.sh 9 | X2T=/etc/transmission-daemon/x2t.sh 10 | 11 | export LC_ALL=en_US.UTF-8 12 | 13 | cd /home/downloads 14 | 15 | wget "$LINK" -O "$NAME" 16 | 17 | if [ -f "$NAME" ]; then 18 | # handle file 19 | EXT="${NAME##*.}" 20 | if [ "$EXT" = "zip" ]; then 21 | echo "$X2T -z $NAME" 22 | $X2T -z "$NAME" 23 | rm "$NAME" 24 | elif [ "$EXT" = "rar" ]; then 25 | echo "$X2T -r $NAME" 26 | $X2T -r "$NAME" 27 | rm "$NAME" 28 | else 29 | echo "$ARIA2_RPC $NAME" 30 | $ARIA2_RPC "$NAME" 31 | fi 32 | else 33 | echo "File not found." 34 | fi 35 | -------------------------------------------------------------------------------- /net/lurl.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Convert lighttpd autoindex url to download link. 3 | if [ -z "$1" ];then 4 | echo "Error param." 5 | exit 0 6 | fi 7 | 8 | URL=$1 9 | 10 | if [ -z "$2" ];then 11 | curl -k -s $URL | sed -n "s|.*href=\"\([^\"]*\).*|$URL\1|p" 12 | else 13 | if [ "$2"=="--auth" ];then 14 | if [ -z "$3" ];then 15 | echo "Error user." 16 | exit 0 17 | fi 18 | if [ -z "$4" ];then 19 | echo "Error password." 20 | exit 0 21 | fi 22 | curl -k -s -u $3:$4 $URL | sed -n "s|.*href=\"\([^\"]*\).*|$URL\1|p" 23 | exit 0 24 | fi 25 | fi 26 | -------------------------------------------------------------------------------- /net/mail/mail.php: -------------------------------------------------------------------------------- 1 | '; 10 | $to = $_POST['email']; 11 | $subject = "$name"." event notify"; 12 | $body = "Hi, \n\n$name $event at $time\n\n------------\n\nThis mail is auto generated by mail api."; 13 | 14 | $headers = array( 15 | 'From' => "$from", 16 | 'To' => "$to", 17 | 'Subject' => "$subject" 18 | ); 19 | 20 | $smtp = Mail::factory('smtp', array( 21 | 'host' => 'ssl://smtp.gmail.com', 22 | 'port' => '465', 23 | 'auth' => true, 24 | 'username' => 'noreply.watcher@gmail.com', 25 | 'password' => 'YOUR_PASSWORD' 26 | )); 27 | $mail = $smtp->send($to, $headers, $body); 28 | 29 | if (PEAR::isError($mail)) { 30 | echo('

' . $mail->getMessage() . '

'); 31 | } else { 32 | #echo('Email successfully sent!'); 33 | } 34 | ?> 35 | -------------------------------------------------------------------------------- /net/ngnix_autoindex_2_download_link.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Convert nginx autoindex url to download link. 3 | if [ -z "$1" ];then 4 | echo "Error param." 5 | exit 0 6 | fi 7 | 8 | URL=$1 9 | 10 | for uri in $(curl -k -s $URL |grep '>/tmp/out.txt 2>&1 &"); 15 | foreach ($files as $file) { 16 | shell_exec("cd /home/downloads;rm -- ".escapeshellarg($file)." >>/tmp/rm.txt 2>&1 &"); 17 | } 18 | exit; 19 | } else { 20 | echo json_encode(array("error" => "empty file")); 21 | } 22 | break; 23 | case 'GET': 24 | echo '
25 | FILES:
26 |
27 | 28 |
'; 29 | break; 30 | } 31 | ?> 32 | -------------------------------------------------------------------------------- /net/transmission/aria2-rpc.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | USER="HTTP_USER" 4 | PASSWORD="HTTP_PASSOWRD" 5 | RPC="https://home.example.com/jsonrpc" 6 | TOKEN="ARIA2_TOKEN" 7 | 8 | DOWNLOAD_URLS=( 9 | https://remote.example.com/downloads/ 10 | https://cdn1.example.com/downloads/ 11 | https://cdn2.example.com/downloads/ 12 | https://cdn3.example.com/downloads/ 13 | https://cdn4.example.com/downloads/ 14 | ) 15 | 16 | export LC_ALL=en_US.UTF-8 17 | 18 | FILE="$1" 19 | FILE=$(python -c "import sys, urllib as ul; \ 20 | print ul.quote(\"$FILE\")") 21 | 22 | LINK="" 23 | for URL in "${DOWNLOAD_URLS[@]}"; do 24 | if [ -z "$LINK" ]; then 25 | LINK="\"$URL$FILE\"" 26 | else 27 | LINK="$LINK, \"$URL$FILE\"" 28 | fi 29 | done 30 | curl -s -v --user $USER:$PASSWORD $RPC -X POST -d "[{\"jsonrpc\":\"2.0\",\"method\":\"aria2.addUri\",\"id\":1,\"params\":[\"token:$TOKEN\",[$LINK],{\"split\":\"10\",\"max-connection-per-server\":\"10\",\"seed-ratio\":\"1.0\"}]}]" 31 | -------------------------------------------------------------------------------- /net/transmission/complete.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | USER=RPC_USER 4 | PASSWORD=RPC_PASSWORD 5 | OUTPUT=/tmp/tr.complete.txt 6 | UNCOMPRESS_OUTPUT=/tmp/uncompress.txt 7 | X2T=/etc/transmission-daemon/x2t.sh 8 | D2Z=/etc/transmission-daemon/d2z.sh 9 | ARIA2_RPC=/etc/transmission-daemon/aria2-rpc.sh 10 | TARGET_DIR=/home/downloads 11 | REMOVE="true" 12 | 13 | export LC_ALL=en_US.UTF-8 14 | 15 | umask 000 16 | 17 | echo "$TR_APP_VERSION $TR_TIME_LOCALTIME $TR_TORRENT_DIR $TR_TORRENT_HASH $TR_TORRENT_ID $TR_TORRENT_NAME" >>$OUTPUT 18 | 19 | if [ "$TR_TORRENT_DIR" != "$TARGET_DIR" ] && [ "$TR_TORRENT_DIR" != "$TARGET_DIR/" ]; then 20 | #echo "Not in $TARGET_DIR, exit" >>$OUTPUT 21 | #exit 0 22 | echo "Not in $TARGET_DIR, copy file now." 
>>$OUTPUT 23 | cp -r "$TR_TORRENT_DIR/$TR_TORRENT_NAME" "$TARGET_DIR" 24 | TR_TORRENT_DIR="$TARGET_DIR" 25 | REMOVE="false" 26 | fi 27 | 28 | cd "$TR_TORRENT_DIR" 29 | 30 | if [ -f "$TR_TORRENT_DIR/$TR_TORRENT_NAME" ]; then 31 | # handle file 32 | REMOVE_PARAM="--remove-and-delete" 33 | EXT="${TR_TORRENT_NAME##*.}" 34 | if [ "$EXT" = "zip" ]; then 35 | echo "$X2T -z $TR_TORRENT_NAME" >>$OUTPUT 36 | $X2T -z "$TR_TORRENT_NAME" >>$UNCOMPRESS_OUTPUT 37 | elif [ "$EXT" = "rar" ]; then 38 | echo "$X2T -r $TR_TORRENT_NAME" >>$OUTPUT 39 | $X2T -r "$TR_TORRENT_NAME" >>$UNCOMPRESS_OUTPUT 40 | else 41 | echo "$ARIA2_RPC $TR_TORRENT_NAME" >>$OUTPUT 42 | REMOVE_PARAM="--remove" 43 | $ARIA2_RPC "$TR_TORRENT_NAME" >>$UNCOMPRESS_OUTPUT 44 | fi 45 | 46 | echo "remove $TR_TORRENT_NAME" >>$OUTPUT 47 | if [ "$REMOVE" == "true" ]; then 48 | transmission-remote --auth $USER:$PASSWORD -t $TR_TORRENT_ID $REMOVE_PARAM >>$OUTPUT 2>&1 49 | fi 50 | elif [ -d "$TR_TORRENT_DIR/$TR_TORRENT_NAME" ]; then 51 | # handle dir 52 | IMAGE_COUNT=$(find "$TR_TORRENT_NAME" -name '*' -exec file {} \; | grep -o -P '^.+: \w+ image'|wc -l) 53 | if [ $IMAGE_COUNT -gt 10 ];then 54 | # d2z and compress 55 | echo "$D2Z -z $TR_TORRENT_NAME" >>$OUTPUT 56 | $D2Z -z "$TR_TORRENT_NAME" 57 | else 58 | # d2z only 59 | echo "$D2Z "$TR_TORRENT_NAME"" >>$OUTPUT 60 | $D2Z "$TR_TORRENT_NAME" 61 | fi 62 | if [ "$REMOVE" == "true" ]; then 63 | transmission-remote --auth $USER:$PASSWORD -t $TR_TORRENT_ID --remove-and-delete >>$OUTPUT 2>&1 64 | fi 65 | else 66 | echo "Unknow file format: $TR_TORRENT_NAME" >>$OUTPUT 67 | fi 68 | 69 | if [ "$REMOVE" == "false" ]; then 70 | rm -r "$TARGET_DIR/$TR_TORRENT_NAME" 71 | fi 72 | -------------------------------------------------------------------------------- /net/transmission/d2z.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | X2T=/etc/transmission-daemon/x2t.sh 4 | ARIA2_RPC=/etc/transmission-daemon/aria2-rpc.sh 5 | 6 | export LC_ALL=en_US.UTF-8 7 | 8 | while [ -f /tmp/.d2t ] 9 | do 10 | echo "wait other job exit" 11 | sleep 2 12 | done 13 | 14 | touch /tmp/.d2t 15 | 16 | if [ "$#" -eq 2 ]; then 17 | DIR=$(basename "$2") 18 | OPT="$1" 19 | else 20 | DIR=$(basename "$1") 21 | OPT="-o" 22 | fi 23 | 24 | chmod -R a+rwX "$DIR" 25 | zip -r -0 "$DIR.zip" "$DIR" 26 | 27 | if [ "$OPT" = "-z" ]; then 28 | mv "$DIR" "$DIR-tmp" 29 | $X2T -z "$DIR.zip" 30 | rm "$DIR.zip" 31 | mv "$DIR-tmp" "$DIR" 32 | else 33 | $ARIA2_RPC "$DIR.zip" 34 | fi 35 | 36 | sync 37 | rm /tmp/.d2t 38 | -------------------------------------------------------------------------------- /net/transmission/transmission.settings.json.example: -------------------------------------------------------------------------------- 1 | { 2 | "alt-speed-down": 50, 3 | "alt-speed-enabled": false, 4 | "alt-speed-time-begin": 540, 5 | "alt-speed-time-day": 127, 6 | "alt-speed-time-enabled": false, 7 | "alt-speed-time-end": 1020, 8 | "alt-speed-up": 50, 9 | "bind-address-ipv4": "0.0.0.0", 10 | "bind-address-ipv6": "::", 11 | "blocklist-enabled": false, 12 | "blocklist-url": "http://www.example.com/blocklist", 13 | "cache-size-mb": 4, 14 | "dht-enabled": false, 15 | "download-dir": "/home/downloads", 16 | "download-limit": 100, 17 | "download-limit-enabled": 0, 18 | "download-queue-enabled": true, 19 | "download-queue-size": 5, 20 | "encryption": 1, 21 | "idle-seeding-limit": 30, 22 | "idle-seeding-limit-enabled": false, 23 | "incomplete-dir": "/home/downloads", 24 | "incomplete-dir-enabled": false, 25 | 
"lpd-enabled": false, 26 | "max-peers-global": 200, 27 | "message-level": 2, 28 | "peer-congestion-algorithm": "", 29 | "peer-id-ttl-hours": 6, 30 | "peer-limit-global": 20000, 31 | "peer-limit-per-torrent": 5000, 32 | "peer-port": 51413, 33 | "peer-port-random-high": 65535, 34 | "peer-port-random-low": 49152, 35 | "peer-port-random-on-start": false, 36 | "peer-socket-tos": "default", 37 | "pex-enabled": true, 38 | "port-forwarding-enabled": false, 39 | "preallocation": 1, 40 | "prefetch-enabled": 1, 41 | "queue-stalled-enabled": true, 42 | "queue-stalled-minutes": 30, 43 | "ratio-limit": 2, 44 | "ratio-limit-enabled": true, 45 | "rename-partial-files": true, 46 | "rpc-authentication-required": true, 47 | "rpc-bind-address": "127.0.0.1", 48 | "rpc-enabled": true, 49 | "rpc-password": "RPC_PASSWORD", 50 | "rpc-port": 9091, 51 | "rpc-url": "/transmission/", 52 | "rpc-username": "PRC_USER", 53 | "rpc-whitelist": "127.0.0.1", 54 | "rpc-whitelist-enabled": true, 55 | "scrape-paused-torrents-enabled": true, 56 | "script-torrent-done-enabled": true, 57 | "script-torrent-done-filename": "/etc/transmission-daemon/complete.sh", 58 | "seed-queue-enabled": false, 59 | "seed-queue-size": 10, 60 | "speed-limit-down": 100, 61 | "speed-limit-down-enabled": false, 62 | "speed-limit-up": 20, 63 | "speed-limit-up-enabled": false, 64 | "start-added-torrents": true, 65 | "trash-original-torrent-files": false, 66 | "umask": 18, 67 | "upload-limit": 100, 68 | "upload-limit-enabled": 0, 69 | "upload-slots-per-torrent": 14, 70 | "utp-enabled": true 71 | } 72 | -------------------------------------------------------------------------------- /net/transmission/x2t.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | TYPE=$1 4 | file=$2 5 | 6 | ARIA2_RPC=/etc/transmission-daemon/aria2-rpc.sh 7 | 8 | export LC_ALL=en_US.UTF-8 9 | 10 | check_sub(){ 11 | echo "check sub." 12 | SAVEIFS=$IFS # setup this case the space char in file name. 13 | IFS=$(echo -en "\n\b") 14 | for subdir in $(find -maxdepth 1 -type d |grep ./ |cut -c 3-); 15 | do 16 | echo $subdir 17 | cd "$subdir" 18 | convert_to_jpg 19 | cd .. 20 | done 21 | IFS=$SAVEIFS 22 | } 23 | 24 | convert_to_jpg(){ 25 | 26 | for ext in jpg JPG bmp BMP png PNG; do 27 | echo "ext is $ext" 28 | if [ ! $(find . -maxdepth 1 -name \*.$ext | wc -l) = 0 ]; 29 | then 30 | x2jpg $ext 31 | fi 32 | done 33 | 34 | check_sub # check if has sub directory. 35 | } 36 | 37 | x2jpg(){ 38 | if [ ! -d origin ];then 39 | mkdir origin 40 | fi 41 | if [ ! -d /tmp/jpg ]; then 42 | mkdir /tmp/jpg 43 | chmod -R 777 /tmp/jpg 44 | fi 45 | 46 | tmp_fifofile="/tmp/$$.fifo" 47 | mkfifo $tmp_fifofile # create a fifo type file. 48 | exec 6<>$tmp_fifofile # point fd6 to fifo file. 49 | rm $tmp_fifofile 50 | 51 | 52 | thread=10 # define numbers of threads. 53 | for ((i=0;i<$thread;i++));do 54 | echo 55 | done >&6 # actually only put $thread RETURNs to fd6. 56 | 57 | for file in ./*.$1;do 58 | read -u6 59 | { 60 | echo 'convert -quality 80' "$file" /tmp/jpg/"${file%.*}"'.jpg' 61 | convert -limit memory 64 -limit map 128 -quality 80 "$file" /tmp/jpg/"${file%.*}".jpg 62 | mv "$file" origin 63 | echo >&6 64 | } & 65 | done 66 | 67 | wait # wait for all child thread end. 68 | exec 6>&- # close fd6 69 | 70 | mv /tmp/jpg/* . 71 | pwd 72 | rm -r origin 73 | 74 | echo 'DONE!' 75 | } 76 | 77 | # wait for other x2t jobs done. 
78 | 79 | while [ -f /tmp/.x2t ] 80 | do 81 | echo "wait other job exit" 82 | #sleep 2 83 | sleep $[ ( $RANDOM % 10 ) + 1 ] 84 | done 85 | 86 | touch /tmp/.x2t 87 | 88 | tmpdir=$(mktemp -d) 89 | 90 | if [ "$TYPE" = "-z" ]; then 91 | DIR="${file%%.zip*}" 92 | echo "unzip $file -d $tmpdir"; 93 | unzip "$file" -d $tmpdir # unzip to a tmp directory. 94 | elif [ "$TYPE" = "-r" ]; then 95 | DIR="${file%%.rar*}" 96 | echo "unrar x $file $tmpdir"; 97 | mv "$file" tmp.rar 98 | unrar x tmp.rar $tmpdir # unrar to a tmp directory. 99 | mv tmp.rar "$file" 100 | fi 101 | 102 | if [ $(ls $tmpdir | wc -l) = 1 ]; then # check if has folders, and mv the unziped directory as same name with the zip file. 103 | DIR2=$(ls $tmpdir) 104 | mv "$tmpdir/$DIR2" "$DIR" 105 | rmdir $tmpdir 106 | else 107 | mv $tmpdir "$DIR" 108 | fi 109 | 110 | echo $DIR 111 | if [ -d "$DIR" ]; then 112 | cd "$DIR" 113 | convert_to_jpg # convert process. 114 | cd .. 115 | echo "tar cvf $DIR.tar $DIR" 116 | tar cvf "$DIR.tar" "$DIR" --force-local --mode=a+rwX # tar the directory. 117 | rm -r "$DIR" 118 | # send to home aria2 119 | $ARIA2_RPC "$DIR.tar" 120 | else 121 | echo "$DIR not exist." 122 | fi 123 | 124 | sync 125 | rm /tmp/.x2t 126 | -------------------------------------------------------------------------------- /net/url.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Convert nginx autoindex url to download link. 3 | if [ -z "$1" ];then 4 | echo "Error param." 5 | exit 0 6 | fi 7 | 8 | URL=$1 9 | 10 | for uri in $(curl -k -s $URL |grep '
> /var/log/wireless.log 2>&1 32 | ``` 33 | -------------------------------------------------------------------------------- /net/wireless/wireless.conf: -------------------------------------------------------------------------------- 1 | essids=( 2 | "innernet:12345678" 3 | "HUAZHU-Hanting:" 4 | "test:test1234" 5 | ) 6 | 7 | -------------------------------------------------------------------------------- /net/wireless/wireless.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # dependencies: wireless-tools 4 | 5 | DIR=$(dirname $0) 6 | 7 | source "$DIR/wireless.conf" 8 | 9 | WIFI=$(uci get wireless.@wifi-iface[0].ssid) 10 | 11 | echo "$(date) current wifi: $WIFI" 12 | 13 | if [ $(iw dev wlan0 scan|grep "$WIFI"|wc -l) -eq 0 ];then 14 | uci set wireless.@wifi-iface[0].disabled=1 15 | uci commit 16 | /etc/init.d/network restart 17 | sleep 15 18 | fi 19 | 20 | 21 | for wifi in "${essids[@]}" ; do 22 | ESSID="${wifi%%:*}" 23 | PASS="${wifi##*:}" 24 | echo "$(date) checking: $ESSID ..." 25 | #echo "$PASS" 26 | 27 | if [ $(iw dev wlan0 scan|grep "$ESSID"|wc -l) -ne 0 ];then 28 | echo "$(date) wifi: $ESSID is detected." 29 | 30 | if [ "$WIFI" != "$ESSID" ]; then 31 | uci set wireless.@wifi-device[0].channel="auto" 32 | uci set wireless.@wifi-iface[0].ssid="$ESSID" 33 | uci set wireless.@wifi-iface[0].key="$PASS" 34 | uci delete wireless.@wifi-iface[0].bssid 35 | uci set wireless.@wifi-iface[0].disabled=0 36 | if [ -z "$PASS" ]; then 37 | uci set wireless.@wifi-iface[0].encryption="none" 38 | uci delete wireless.@wifi-iface[0].key 39 | else 40 | uci set wireless.@wifi-iface[0].encryption="psk-mixed" 41 | fi 42 | uci commit 43 | /etc/init.d/network restart 44 | echo "$(date) wifi: $ESSID is updated." 45 | fi 46 | 47 | break 48 | fi 49 | done 50 | -------------------------------------------------------------------------------- /net/youtube/aria2-complete.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | USER=HTTP_USER 4 | PASSWORD=HTTP_PASSWORD 5 | API_URI=https://remote.example.com/rm/ 6 | 7 | DOWNLOAD_DIR="YOUR_DOWNLOAD_DIR" # e.g. /mnt/usb/download 8 | SORT_DIR="YOUR_SORT_DIR" #e.g. /mnt/usb/complete 9 | 10 | OUTOUT="/tmp/aria2-complete.txt" 11 | 12 | echo "$3" >>$OUTOUT 13 | 14 | # remove file from server 15 | FILENAME=$(basename "$3") 16 | curl -s -v --user $USER:$PASSWORD $API_URI -X POST --data-urlencode "files=$FILENAME" --capath /etc/ssl/certs/ >>/tmp/aria2.txt 2>&1 17 | 18 | # sort out files (mp4/m4a/tar/zip) 19 | 20 | if [ ! -f "$DOWNLOAD_DIR/$FILENAME" ];then 21 | echo "$DOWNLOAD_DIR/$FILENAME not exists, exit." >>$OUTOUT 22 | exit 0 23 | fi 24 | 25 | # wait for other sort jobs done. 
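# Note: /tmp/.sort below serves as a simple lock file so that only one sort
# job runs at a time. The script waits with a random 1-10s backoff while the
# file exists, creates it before sorting, and removes it once the file has
# been moved into place.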
26 | 27 | while [ -f /tmp/.sort ] 28 | do 29 | echo "wait other job exit" >>$OUTOUT 30 | #sleep 2 31 | sleep $[ ( $RANDOM % 10 ) + 1 ] 32 | done 33 | 34 | umask 000 35 | 36 | touch /tmp/.sort 37 | 38 | EXT="${FILENAME##*.}" 39 | 40 | MON="$(date +%m)" 41 | DAY="$(date +%m%d)" 42 | 43 | if [ "$EXT" = "zip" ];then 44 | 45 | echo "unzip" >>$OUTOUT 46 | mkdir -p "$SORT_DIR/archives/$DAY" 47 | mv "$DOWNLOAD_DIR/$FILENAME" "$SORT_DIR/archives/$DAY" 48 | TARGET_DIR="$SORT_DIR/$MON/$DAY" 49 | if [ $(unzip -l "$SORT_DIR/archives/$DAY/$FILENAME"|grep -E '.mp4$|.wmv$|.avi$'|wc -l) -gt 0 ];then 50 | TARGET_DIR="$SORT_DIR/$MON/$DAY-2" 51 | fi 52 | mkdir -p "$TARGET_DIR" 53 | unzip "$SORT_DIR/archives/$DAY/$FILENAME" -d "$TARGET_DIR" >>$OUTOUT 2>&1 54 | 55 | elif [ "$EXT" = "tar" ];then 56 | 57 | echo "tar xvf \"$FILENAME\" -C \"$SORT_DIR/$MON/$DAY\"" >>$OUTOUT 58 | mkdir -p "$SORT_DIR/archives/$DAY" 59 | mv "$DOWNLOAD_DIR/$FILENAME" "$SORT_DIR/archives/$DAY" 60 | TARGET_DIR="$SORT_DIR/$MON/$DAY" 61 | if [ $(tar tvf "$SORT_DIR/archives/$DAY/$FILENAME"|grep -E '.mp4$|.wmv$|.avi$'|wc -l) -gt 0 ];then 62 | TARGET_DIR="$SORT_DIR/$MON/$DAY-2" 63 | fi 64 | mkdir -p "$TARGET_DIR" 65 | tar xvf "$SORT_DIR/archives/$DAY/$FILENAME" -C "$TARGET_DIR" >>$OUTOUT 2>&1 66 | 67 | elif [ "$EXT" = "mp4" -o "$EXT" = "m4a" ];then 68 | 69 | SUBDIR="" 70 | 71 | if [ $(echo "$FILENAME"|grep "\.f[0-9]...mp4"|wc -l) -eq 1 ];then 72 | SUBDIR="video" 73 | fi 74 | 75 | if [ $(echo "$FILENAME"|grep "\.f[0-9]...m4a"|wc -l) -eq 1 ];then 76 | SUBDIR="audio" 77 | fi 78 | 79 | if [ $(echo "$FILENAME"|grep "MMD"|wc -l) -eq 1 ];then 80 | echo "mv \"$FILENAME\" \"$SORT_DIR/MMD/$DAY/$SUBDIR\"" >>$OUTOUT 81 | mkdir -p "$SORT_DIR/MMD/$DAY/$SUBDIR" 82 | mv "$DOWNLOAD_DIR/$FILENAME" "$SORT_DIR/MMD/$DAY/$SUBDIR" 83 | elif [ $(echo "$FILENAME"|grep "AMV"|wc -l) -eq 1 ];then 84 | echo "mv \"$FILENAME\" \"$SORT_DIR/AMV/$DAY/$SUBDIR\"" >>$OUTOUT 85 | mkdir -p "$SORT_DIR/AMV/$DAY/$SUBDIR" 86 | mv "$DOWNLOAD_DIR/$FILENAME" "$SORT_DIR/AMV/$DAY/$SUBDIR" 87 | else 88 | echo "mv \"$FILENAME\" \"$SORT_DIR/TV/$DAY/$SUBDIR\"" >>$OUTOUT 89 | mkdir -p "$SORT_DIR/TV/$DAY/$SUBDIR" 90 | mv "$DOWNLOAD_DIR/$FILENAME" "$SORT_DIR/TV/$DAY/$SUBDIR" 91 | fi 92 | else 93 | echo "unknown file type: $EXT" >>$OUTOUT 94 | fi 95 | 96 | sync 97 | rm /tmp/.sort 98 | 99 | sync 100 | rm /tmp/.sort 101 | -------------------------------------------------------------------------------- /net/youtube/aria2.conf.example: -------------------------------------------------------------------------------- 1 | dir=/mnt/usb/aria2 2 | disable-ipv6=true 3 | enable-rpc=true 4 | rpc-allow-origin-all=true 5 | rpc-listen-all=false 6 | rpc-listen-port=6800 7 | rpc-secret=YOUR_TOKEN 8 | continue=true 9 | input-file=/var/aria2.session 10 | 11 | save-session=/var/aria2.session 12 | max-concurrent-downloads=5 13 | ca-certificate=/etc/nginx/certs/ca-bundle.pem 14 | check-certificate=true 15 | 16 | http-auth-challenge=true 17 | http-user=HTTP_USER 18 | http-passwd=HTTP_PASSWORD 19 | file-allocation=trunc 20 | max-connection-per-server=10 21 | split=10 22 | max-concurrent-downloads=10 23 | min-split-size=2097152 24 | 25 | on-download-complete=/etc/aria2-complete.sh 26 | -------------------------------------------------------------------------------- /net/youtube/aria2.init.example: -------------------------------------------------------------------------------- 1 | #!/bin/sh /etc/rc.common 2 | # Copyright (C) 2006-2011 OpenWrt.org 3 | 4 | SERVICE_USE_PID=1 5 | 6 | START=50 7 | 8 | start() { 9 | umask 0000 10 
| aria2c --daemon=true --enable-rpc --rpc-listen-all=false -D --conf-path=/etc/aria2.conf 11 | } 12 | 13 | 14 | stop() { 15 | /usr/bin/killall aria2c 16 | } 17 | -------------------------------------------------------------------------------- /net/youtube/nginx.config.cdn.example: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | listen 443 ssl; 4 | server_name cdn.example.com; 5 | ssl_certificate certs/cdn.example.com.chained.crt; 6 | ssl_certificate_key certs/cdn.example.com.key; 7 | 8 | ssl_protocols TLSv1 TLSv1.1 TLSv1.2; 9 | ssl_ciphers HIGH:!aNULL:!MD5; 10 | 11 | charset utf-8; 12 | 13 | access_log/var/log/nginx/$host.access.log; 14 | 15 | client_max_body_size 20M; 16 | 17 | root /var/www/; 18 | indexindex.html index.htm index.php; 19 | 20 | if ($ssl_protocol = "") { 21 | return 301 https://$http_host$request_uri; 22 | } 23 | 24 | location / { 25 | try_files $uri $uri/ /index.php?q=$uri&$args; 26 | } 27 | 28 | #error_page404/404.html; 29 | 30 | # redirect server error pages to the static page /50x.html 31 | # 32 | error_page 500 502 503 504/50x.html; 33 | location = /50x.html { 34 | root /usr/share/nginx/html; 35 | } 36 | 37 | location /downloads/ { 38 | proxy_buffering off; 39 | proxy_pass https://remote.example.com/downloads/; 40 | } 41 | } 42 | -------------------------------------------------------------------------------- /net/youtube/nginx.config.home.example: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | listen 443 ssl; 4 | server_name home.example.com; 5 | ssl_certificate certs/home.example.com.chained.crt; 6 | ssl_certificate_key certs/home.example.com.key; 7 | 8 | ssl_protocols TLSv1 TLSv1.1 TLSv1.2; 9 | ssl_ciphers HIGH:!aNULL:!MD5; 10 | 11 | charset utf-8; 12 | 13 | access_log /var/log/nginx/$host.access.log; 14 | 15 | fastcgi_connect_timeout 300; 16 | fastcgi_send_timeout 300; 17 | fastcgi_read_timeout 300; 18 | fastcgi_buffer_size 32k; 19 | fastcgi_buffers 4 32k; 20 | fastcgi_busy_buffers_size 32k; 21 | fastcgi_temp_file_write_size 32k; 22 | client_body_timeout 10; 23 | client_header_timeout 10; 24 | send_timeout 60; 25 | output_buffers 1 32k; 26 | postpone_output 1460; 27 | 28 | root /www; 29 | index index.html index.htm /_h5ai/server/php/index.php; 30 | 31 | if ($ssl_protocol = "") { 32 | return 302 https://$http_host$request_uri; 33 | } 34 | 35 | location / { 36 | try_files $uri $uri/ =404; 37 | } 38 | 39 | # openwrt luci (proxy of uhttpd) 40 | location /cgi-bin/luci { 41 | proxy_pass http://127.0.0.1:81; 42 | } 43 | 44 | # transmission/web 45 | location /transmission { 46 | auth_basic "Authentication required"; 47 | auth_basic_user_file /etc/nginx/.dlpasswd; 48 | proxy_pass http://127.0.0.1:9092; 49 | proxy_pass_header X-Transmission-Session-Id; 50 | } 51 | 52 | # aria2-webui 53 | location /aria2 { 54 | auth_basic "Authentication required"; 55 | auth_basic_user_file /etc/nginx/.dlpasswd; 56 | } 57 | 58 | # aria2 jsonrpc 59 | location /jsonrpc { 60 | auth_basic "Authentication required"; 61 | auth_basic_user_file /etc/nginx/.dlpasswd; 62 | proxy_pass http://127.0.0.1:6800; 63 | 64 | proxy_http_version 1.1; 65 | proxy_set_header Upgrade $http_upgrade; 66 | proxy_set_header Connection "upgrade"; 67 | proxy_set_header Host $host; 68 | } 69 | 70 | # aria2 xmlrpc 71 | location /rpc { 72 | auth_basic "Authentication required"; 73 | auth_basic_user_file /etc/nginx/.dlpasswd; 74 | proxy_pass http://127.0.0.1:6800; 75 | 76 | proxy_set_header Host 
$host; 77 | } 78 | } 79 | -------------------------------------------------------------------------------- /net/youtube/nginx.config.remote.example: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | listen 443 ssl; 4 | server_name remote.example.com; 5 | ssl_certificate certs/remote.example.com.chained.crt; 6 | ssl_certificate_key certs/remote.example.com.key; 7 | 8 | ssl_protocols TLSv1 TLSv1.1 TLSv1.2; 9 | ssl_ciphers HIGH:!aNULL:!MD5; 10 | 11 | charset utf-8; 12 | 13 | access_log/var/log/nginx/$host.access.log; 14 | 15 | client_max_body_size 20M; 16 | 17 | root /var/www/; 18 | indexindex.html index.htm index.php; 19 | 20 | if ($ssl_protocol = "") { 21 | return 302 https://$http_host$request_uri; 22 | } 23 | 24 | location / { 25 | try_files $uri $uri/ /index.php?q=$uri&$args; 26 | } 27 | 28 | #error_page404/404.html; 29 | 30 | # redirect server error pages to the static page /50x.html 31 | # 32 | error_page 500 502 503 504/50x.html; 33 | location = /50x.html { 34 | root /usr/share/nginx/html; 35 | } 36 | 37 | location /downloads { 38 | alias /home/downloads; 39 | auth_basic "Authentication required"; 40 | auth_basic_user_file /etc/nginx/.dlpasswd; 41 | autoindex on; 42 | autoindex_exact_size off; 43 | } 44 | 45 | location /path/to/youtube.php { 46 | auth_basic "Authentication required"; 47 | auth_basic_user_file /etc/nginx/.dlpasswd; 48 | } 49 | 50 | location /path/to/rm.php { 51 | auth_basic "Authentication required"; 52 | auth_basic_user_file /etc/nginx/.dlpasswd; 53 | } 54 | } 55 | -------------------------------------------------------------------------------- /net/youtube/youtube.php: -------------------------------------------------------------------------------- 1 | >/tmp/out.txt 2>&1 &"); 15 | foreach ($urls as $url) { 16 | shell_exec("./youtube.sh ".escapeshellarg($url)." >>/tmp/youtube.txt 2>&1 &"); 17 | } 18 | exit; 19 | } else { 20 | echo json_encode(array("error" => "empty url")); 21 | } 22 | break; 23 | case 'GET': 24 | echo '
25 | URL:
26 |
27 | 28 |
'; 29 | break; 30 | } 31 | ?> 32 | -------------------------------------------------------------------------------- /net/youtube/youtube.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | DOWNLOAD_DIR="YOUR_DOWNLOAD_DIR" 4 | USER="HTTP_USER" 5 | PASSWORD="HTTP_PASSWORD" 6 | RPC="https://home.example.com/jsonrpc" 7 | TOKEN="ARIA2_TOKEN" 8 | 9 | DOWNLOAD_URLS=( 10 | https://cdn1.example.com/downloads/ 11 | https://cdn2.example.com/downloads/ 12 | https://cdn3.example.com/downloads/ 13 | https://cdn4.example.com/downloads/ 14 | ) 15 | 16 | URI=$1 17 | 18 | export LC_ALL=en_US.UTF-8 19 | 20 | cd "$DOWNLOAD_DIR" 21 | 22 | AUDIO=$(youtube-dl -F $URI | grep "DASH audio"|grep "aac\|webm"|tail -1|cut -d ' ' -f 1) 23 | VIDEO=$(youtube-dl -F $URI | grep "DASH video\|video only"|grep "mp4"|tail -1|cut -d ' ' -f 1) 24 | 25 | youtube-dl -v -f $VIDEO+$AUDIO -k $URI 26 | 27 | NAME=$(echo "$URI" |cut -d '=' -f 2) 28 | 29 | SAVEIFS=$IFS 30 | IFS=$(echo -en "\n\b") 31 | 32 | FILES=($(ls -- *$NAME*)) 33 | 34 | for FILE in "${FILES[@]}"; do 35 | LINK="" 36 | for URL in "${DOWNLOAD_URLS[@]}"; do 37 | if [ -z "$LINK" ]; then 38 | LINK="\"$URL$FILE\"" 39 | else 40 | LINK="$LINK, \"$URL$FILE\"" 41 | fi 42 | done 43 | curl -v --user $USER:$PASSWORD $RPC -X POST -d "[{\"jsonrpc\":\"2.0\",\"method\":\"aria2.addUri\",\"id\":1,\"params\":[\"token:$TOKEN\",[$LINK],{\"split\":\"10\",\"max-connection-per-server\":\"10\",\"seed-ratio\":\"1.0\"}]}]" 44 | done 45 | 46 | IFS=$SAVEIFS 47 | -------------------------------------------------------------------------------- /newuser.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :newuser.sh 3 | #description :create users for freeswitch. 4 | #author :xdtianyu@gmail.com 5 | #date :20141029 6 | #version :1.0 final 7 | #usage :bash newuser.sh user_number_start user_number_end (bash newuser.sh 10000 20000) 8 | #bash_version :4.3.11(1)-release 9 | #============================================================================== 10 | XML='\n 11 | \n 12 | \n 13 | \n 14 | \n 15 | \n 16 | \n 17 | \n 18 | \n 19 | \n 20 | \n 21 | \n 22 | \n 23 | \n 24 | \n 25 | \n 26 | \n 27 | ' 28 | 29 | mkdir xml 30 | 31 | for (( i = $1; i <= $2 ; i ++ )) 32 | do 33 | echo -e ${XML//1000/$i} >xml/$i.xml 34 | done 35 | 36 | 37 | -------------------------------------------------------------------------------- /nginx/upgrade-nginx-deb.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | VERSION="1.10.0" 4 | 5 | if [ ! -z "$1" ]; then 6 | VERSION="$1" 7 | fi 8 | 9 | OLD_VERSION=$(nginx -v 2>&1|cut -d '/' -f 2) 10 | 11 | DEB_FILE="nginx_$VERSION-$(lsb_release -sc)-1_amd64.deb" 12 | 13 | wget "https://www.xdty.org/dl/vps/$DEB_FILE" -O "$DEB_FILE" 14 | 15 | if [ "$?" -ne 0 ]; then 16 | echo "$DEB_FILE download failed!" 17 | exit 1 18 | fi 19 | 20 | echo "Stop service ..." 21 | service nginx stop 22 | cd /etc || exit 1 23 | 24 | if [ -d "nginx-$OLD_VERSION" ];then 25 | mv "nginx-$OLD_VERSION" "nginx-$OLD_VERSION-$(date +%m%d%M%S)" 26 | fi 27 | 28 | mv nginx "nginx-$OLD_VERSION" 29 | cd - || exit 1 30 | 31 | dpkg --force-overwrite -i "$DEB_FILE" 32 | 33 | cd /etc || exit 1 34 | mv nginx "nginx-$VERSION" 35 | mv "nginx-$OLD_VERSION" nginx 36 | 37 | echo "Start service ..." 38 | 39 | service nginx start 40 | 41 | echo "Done." 
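# Usage sketch (assumes the packaging scheme above): run as root and optionally
# pass the nginx version to install, e.g. ./upgrade-nginx-deb.sh 1.10.1; with no
# argument the VERSION default above is used. The package name is derived from
# $(lsb_release -sc), so a matching nginx_<version>-<codename>-1_amd64.deb must
# already exist on the download server.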
42 | -------------------------------------------------------------------------------- /nginx/upgrade-nginx.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | VERSION="1.10.0" 4 | 5 | if [ ! -z "$1" ];then 6 | VERSION="$1" 7 | fi 8 | 9 | echo "Start building nginx-$VERSION ..." 10 | 11 | OLD_VERSION=$(nginx -v 2>&1|cut -d '/' -f 2) 12 | 13 | cd || exit 1 14 | 15 | if [ -d "nginx" ]; then 16 | echo "mv nginx nginx-$OLD_VERSION" 17 | mv nginx "nginx-$OLD_VERSION" 18 | fi 19 | 20 | mkdir nginx 21 | cd nginx || exit 1 22 | 23 | # Download and extract nginx 24 | wget "http://nginx.org/download/nginx-$VERSION.tar.gz" 25 | tar xf "nginx-$VERSION.tar.gz" 26 | 27 | # Delete downloads 28 | rm -- *.tar.gz 29 | 30 | # Download ngx_http_auth_pam_module 31 | git clone https://github.com/stogh/ngx_http_auth_pam_module \ 32 | "nginx-$VERSION/modules/nginx-auth-pam" 33 | 34 | # Download nginx-dav-ext-module 35 | git clone https://github.com/arut/nginx-dav-ext-module \ 36 | "nginx-$VERSION/modules/nginx-dav-ext-module" 37 | 38 | # Download echo-nginx-module 39 | git clone https://github.com/openresty/echo-nginx-module \ 40 | "nginx-$VERSION/modules/nginx-echo" 41 | 42 | # Download nginx-upstream-fair 43 | git clone https://github.com/gnosek/nginx-upstream-fair \ 44 | "nginx-$VERSION/modules/nginx-upstream-fair" 45 | 46 | # Download ngx_http_substitutions_filter_module 47 | git clone https://github.com/yaoweibin/ngx_http_substitutions_filter_module \ 48 | "nginx-$VERSION/modules/ngx_http_substitutions_filter_module" 49 | 50 | # Download nginx-module-vts 51 | git clone https://github.com/vozlt/nginx-module-vts \ 52 | "nginx-$VERSION/modules/nginx-module-vts" 53 | 54 | cd "nginx-$VERSION" || exit 1 55 | 56 | U_VERSION=$(lsb_release -rs) 57 | 58 | CC_OPT="-g -O2 -fPIE -fstack-protector -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2" 59 | LD_OPT="-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now" 60 | 61 | if [ "$U_VERSION" = "16.04" ];then 62 | CC_OPT="-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2" 63 | fi 64 | 65 | ./configure \ 66 | --with-cc-opt="$CC_OPT" \ 67 | --with-ld-opt="$LD_OPT" \ 68 | --sbin-path=/usr/sbin/nginx \ 69 | --prefix=/usr/share/nginx \ 70 | --conf-path=/etc/nginx/nginx.conf \ 71 | --http-log-path=/var/log/nginx/access.log \ 72 | --error-log-path=/var/log/nginx/error.log \ 73 | --lock-path=/var/lock/nginx.lock \ 74 | --pid-path=/run/nginx.pid \ 75 | --http-client-body-temp-path=/var/lib/nginx/body \ 76 | --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ 77 | --http-proxy-temp-path=/var/lib/nginx/proxy \ 78 | --http-scgi-temp-path=/var/lib/nginx/scgi \ 79 | --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ 80 | --with-debug \ 81 | --with-pcre-jit \ 82 | --with-ipv6 \ 83 | --with-http_ssl_module \ 84 | --with-http_stub_status_module \ 85 | --with-http_realip_module \ 86 | --with-http_auth_request_module \ 87 | --with-http_addition_module \ 88 | --with-http_dav_module \ 89 | --with-http_geoip_module \ 90 | --with-http_gunzip_module \ 91 | --with-http_gzip_static_module \ 92 | --with-http_image_filter_module \ 93 | --with-http_v2_module \ 94 | --with-http_sub_module \ 95 | --with-http_xslt_module \ 96 | --with-stream \ 97 | --with-stream_ssl_module \ 98 | --with-mail \ 99 | --with-mail_ssl_module \ 100 | --with-threads \ 101 | --add-module=modules/nginx-auth-pam \ 102 | --add-module=modules/nginx-dav-ext-module \ 103 | --add-module=modules/nginx-echo \ 104 | 
--add-module=modules/nginx-upstream-fair \ 105 | --add-module=modules/ngx_http_substitutions_filter_module \ 106 | --add-module=modules/nginx-module-vts 107 | 108 | make 109 | 110 | if [ "$?" -ne 0 ]; then 111 | exit 1; 112 | fi 113 | 114 | echo "Stop service ..." 115 | service nginx stop 116 | cd /etc || exit 1 117 | 118 | if [ -d "nginx-$OLD_VERSION" ]; then 119 | mv "nginx-$OLD_VERSION" "nginx-$OLD_VERSION-$(date +%m%d%M%S)" 120 | fi 121 | 122 | mv nginx "nginx-$OLD_VERSION" 123 | cd - || exit 1 124 | 125 | make install 126 | 127 | cd /etc || exit 1 128 | mv nginx "nginx-$VERSION" 129 | mv "nginx-$OLD_VERSION" nginx 130 | 131 | echo "Start service ..." 132 | 133 | service nginx start 134 | 135 | echo "Done." 136 | -------------------------------------------------------------------------------- /opensips/README.md: -------------------------------------------------------------------------------- 1 | ## General 2 | 3 | autobuild.sh will generate a file 'opensips.run' and send emails to the receivers. Add cron will auto build at every day's 0:00. This script also push the release file to remote server. Then you can call ./opensips.run on remote server to install opensips. 4 | 5 | ## Useage 6 | 7 | 1. First, you need a user maybe named builder or any else to build opensips, and sudo permission too. 8 | 2. Add this script to PATH, maybe ~/bin. Make sure the *.sh files have execute access. 9 | 3. Add a cron tab with something like this 10 | '0 0 * * * /home/builder/bin/autobuild.sh > /dev/null 2>&1 &' 11 | 4. You need modify email.conf for the receivers. 12 | 5. You need to modify the push server where scp pushs the file 'opensips.run'. Maybe need setup auth key of remote server. 13 | 6. You need to modify script's location if necessary, for example my file tree is: 14 | . 15 | ```bash 16 | ├── bin 17 | │ ├── autobuild.sh 18 | │ ├── mail.cfg 19 | │ └── make.sh 20 | └── opensips 21 | └── install 22 | ├── build.sh 23 | ├── makeself-header.sh 24 | └── makeself.sh 25 | ``` 26 | 27 | 28 | ## Related files 29 | autobuild.sh, mail.cfg, make.sh, build.sh, autobuild.cron 30 | 31 | ##More information please read *.sh files. 32 | 33 | -------------------------------------------------------------------------------- /opensips/autobuild.cron: -------------------------------------------------------------------------------- 1 | 0 0 * * * /home/builder/bin/autobuild.sh > /dev/null 2>&1 & 2 | -------------------------------------------------------------------------------- /opensips/autobuild.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :autobuild.sh 3 | #description :This script will auto build opensips and send email to receivers. 4 | #author :xdtianyu@gmail.com 5 | #date :20141205 6 | #version :1.0 final 7 | #usage :bash autobuild.sh 8 | #bash_version :4.3.11(1)-release 9 | 10 | SUDO_PASSWORD="123456" # PLEASE MODIFY THIS 11 | OUTPUT="/tmp/make.put" 12 | ERROR_OUTPUT="/tmp/error.out" 13 | OUTPUT_DATA="/tmp/output.data" 14 | MAKE_SCRIPT="/home/builder/bin/make.sh" 15 | 16 | EVENT="event" 17 | NAME="auto build" 18 | CONF="/home/builder/bin/mail.cfg" 19 | PUSH_SERVER="sip.example.com" # PLEASE MODIFY THIS 20 | 21 | SUCCESS_MESSAGE="Please check file: $PUSH_SERVER:/root/opensips.run ." 
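# Overall flow: clear any stale output files, run make.sh (which builds and
# pushes opensips.run), then mail the build log or the error output to the
# addresses configured in mail.cfg.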
22 | 23 | if [ -f "$OUTPUT" ];then 24 | echo $SUDO_PASSWORD|sudo -S rm -rf $"$OUTPUT" 25 | fi 26 | 27 | if [ -f "$ERROR_OUTPUT" ];then 28 | echo $SUDO_PASSWORD|sudo -S rm -rf $"$OUTPUT" 29 | fi 30 | 31 | if [ -f "$MAKE_SCRIPT" ];then 32 | start=$(date +%s) 33 | bash $MAKE_SCRIPT > $OUTPUT 2>&1 34 | stop=$(date +%s) 35 | else 36 | echo "no $MAKE_SCRIPT find." >$ERROR_OUTPUT 37 | fi 38 | 39 | if [ -f "$ERROR_OUTPUT" ];then 40 | echo $SUDO_PASSWORD|sudo -S cat $ERROR_OUTPUT > $OUTPUT_DATA 41 | else 42 | echo $SUDO_PASSWORD|sudo -S echo -e "build success (takes $[ stop - start ] seconds). \n$SUCCESS_MESSAGE" > $OUTPUT_DATA 43 | fi 44 | 45 | if [ -f "$OUTPUT" ];then 46 | echo $SUDO_PASSWORD|sudo -S echo -e "\n--- build log ------\n" >> $OUTPUT_DATA 47 | echo $SUDO_PASSWORD|sudo -S cat $OUTPUT >> $OUTPUT_DATA 48 | fi 49 | 50 | # MAY BE NEED AUTH KEY TO SEND EMAIL LATER 51 | curl -s --http1.0 https://www.xdty.org/mail_extra.php -X POST -d "event=$EVENT&name=$NAME&email=$(cat $CONF|grep email|cut -c7-)" --data-urlencode extra@$OUTPUT_DATA 52 | -------------------------------------------------------------------------------- /opensips/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :build.sh 3 | #description :This script will build a release(opensips.run) file of opensips 4 | #author :xdtianyu@gmail.com 5 | #date :20141105 6 | #version :1.0 final 7 | #usage :sudo ./build.sh to build the opensips.run, sudo ./build.sh -c to clean release. 8 | #bash_version :4.3.11(1)-release 9 | #============================================================================== 10 | 11 | BINDIR="archive/sbin" 12 | ETCDIR="archive/etc" 13 | LIBDIR="archive/lib64" 14 | SBINS="opensipsctl opensipsunix osipsconfig opensips opensipsdbctl osipsconsole" 15 | BINNAME="opensips" 16 | RELEASEBIN="opensips.run" 17 | RUNSCRIPT="install.sh" 18 | BACKUP="no" 19 | RELEASEDIR="release" 20 | 21 | if [ $UID -ne 0 ]; then 22 | echo "Superuser privileges are required to run this script." 23 | echo "e.g. \"sudo $0\"" 24 | exit 0 25 | fi 26 | 27 | if [ ! -z $1 ]; then 28 | if [ $1 = "-c" ]; then 29 | echo "cleaning build..." 30 | if [ -d "archive" ]; then 31 | rm -rf archive 32 | fi 33 | if [ -n "$2" ] && [ "$2" = "--backup" ]; then 34 | echo "cleaning backup..." 35 | RELEASEDIR="backup" 36 | fi 37 | if [ -d "$RELEASEDIR" ]; then 38 | rm -rf $RELEASEDIR 39 | fi 40 | exit 0 41 | elif [ "$1" = "--backup" ]; then 42 | echo "create backup..." 43 | RELEASEBIN="opensips-backup-"$(date +%F-%H-%M-%S)".run" 44 | RELEASEDIR="backup" 45 | BACKUP="yes" 46 | if [ -d "archive" ]; then 47 | rm -rf archive 48 | fi 49 | if [ -d "$RELEASEDIR" ]; then 50 | rm -rf $RELEASEDIR 51 | fi 52 | else 53 | echo "unknow param: $1" 54 | exit 0 55 | fi 56 | fi 57 | 58 | echo "Build opensips installer..." 59 | if [ ! -d "archive" ]; 60 | then 61 | echo "create directory archive..." 62 | else 63 | echo "recreate directory archive..." 64 | rm -rf archive 65 | fi 66 | 67 | mkdir -p $BINDIR $ETCDIR $LIBDIR 68 | 69 | for bins in $SBINS; 70 | do 71 | if [ ! -f "/sbin/$bins" ]; 72 | then 73 | echo "/sbin/$bins not exist, stop now." 74 | exit 0 75 | else 76 | echo "copying /sbin/$bins..." 77 | cp /sbin/$bins $BINDIR 78 | fi 79 | done 80 | 81 | if [ ! -d "/etc/$BINNAME" ];then 82 | echo "/etc/$BINNAME not exist, stop now." 83 | exit 0 84 | else 85 | echo "copying /etc/$BINNAME..." 
86 | cp -r /etc/$BINNAME $ETCDIR 87 | 88 | mkdir $ETCDIR/default 89 | mkdir $ETCDIR/init.d 90 | if [ -n "$(command -v apt-get)" ]; then 91 | if [ "$BACKUP" = "no" ]; then 92 | echo "copying ../packaging/debian/$BINNAME.default ..." 93 | cp ../packaging/debian/$BINNAME.default $ETCDIR/default 94 | echo "copying ../packaging/debian/$BINNAME.init ..." 95 | cp ../packaging/debian/$BINNAME.init $ETCDIR/init.d 96 | cp ../packaging/debian/$BINNAME.postinst $ETCDIR/ 97 | else 98 | echo "backup /etc/default/$BINNAME ..." 99 | cp /etc/default/$BINNAME $ETCDIR/default/$BINNAME.default 100 | echo "backup /etc/init.d/$BINNAME ..." 101 | cp /etc/init.d/$BINNAME $ETCDIR/init.d/$BINNAME.init 102 | fi 103 | elif [ -n "$(command -v yum)" ]; then 104 | if [ "$BACKUP" = "no" ]; then 105 | echo "copying ../packaging/debian/$BINNAME.default ..." 106 | cp ../packaging/rpm/$BINNAME.default $ETCDIR/default 107 | echo "copying ../packaging/debian/$BINNAME.init ..." 108 | cp ../packaging/rpm/$BINNAME.init $ETCDIR/init.d 109 | cp ../packaging/rpm/$BINNAME.postinst $ETCDIR/ 110 | else 111 | echo "backup /etc/default/$BINNAME ..." 112 | cp /etc/default/$BINNAME $ETCDIR/default/$BINNAME.default 113 | echo "backup /etc/init.d/$BINNAME ..." 114 | cp /etc/init.d/$BINNAME $ETCDIR/init.d/$BINNAME.init 115 | fi 116 | fi 117 | fi 118 | 119 | if [ ! -d "/lib64/$BINNAME" ];then 120 | echo "/lib64/$BINNAME not exist, stop now." 121 | exit 0 122 | else 123 | echo "copying /lib64/$BINNAME..." 124 | cp -r /lib64/$BINNAME $LIBDIR 125 | fi 126 | 127 | echo -e $(cat /etc/issue |head -n1)|head -n1 >archive/os 128 | if [ -f "/etc/redhat-release" ];then 129 | echo -e $(cat /etc/redhat-release |head -n1)|head -n1 >archive/os 130 | fi 131 | 132 | uname -i > archive/hardware 133 | 134 | #cp $RUNSCRIPT archive 135 | 136 | echo -e '#!/bin/bash 137 | #title :install.sh 138 | #description :This script will be build in the release file 139 | #author :xdtianyu@gmail.com 140 | #date :20141105 141 | #version :1.0 final 142 | #usage :DO NOT RUN THIS FILE BY HAND. 143 | #bash_version :4.3.11(1)-release 144 | #============================================================================== 145 | BINDIR="sbin" 146 | ETCDIR="etc" 147 | LIBDIR="lib64" 148 | SBINS="opensipsctl opensipsunix osipsconfig opensips opensipsdbctl osipsconsole" 149 | BINNAME="opensips" 150 | RELEASEBIN="opensips.run" 151 | 152 | if [ $UID -ne 0 ]; then 153 | echo "Superuser privileges are required to run this script." 154 | echo "e.g. \"sudo ./$RELEASEBIN\"" 155 | exit 0 156 | fi 157 | 158 | if [ ! -z "$1" ];then 159 | if [ $1 = "install" ];then 160 | echo "Install opensips..." 161 | elif [ $1 = "uninstall" ];then 162 | echo "Uninstall opensips..." 163 | for file in $SBINS;do 164 | if [ -f "/$BINDIR/$file" ];then 165 | echo "uninstall /$BINDIR/$file" 166 | rm -f /$BINDIR/$file 167 | fi 168 | done 169 | if [ -d "/$LIBDIR/$BINNAME" ];then 170 | echo "removing /$LIBDIR/$BINNAME" 171 | rm -rf /$LIBDIR/$BINNAME 172 | fi 173 | echo "Uninstall completed." 174 | exit 0 175 | elif [ $1 = "purge" ];then 176 | echo "Purge opensips..." 177 | for file in $SBINS;do 178 | if [ -f "/$BINDIR/$file" ];then 179 | echo "purge /$BINDIR/$file" 180 | rm -f /$BINDIR/$file 181 | fi 182 | done 183 | if [ -d "/$LIBDIR/$BINNAME" ];then 184 | echo "purge /$LIBDIR/$BINNAME" 185 | rm -rf /$LIBDIR/$BINNAME 186 | fi 187 | if [ -d "/$ETCDIR/$BINNAME" ];then 188 | echo "purge /$ETCDIR/$BINNAME" 189 | rm -rf /$ETCDIR/$BINNAME 190 | fi 191 | echo "Purge completed." 
192 | exit 0 193 | else 194 | echo "Unknow param, exit." 195 | exit 0 196 | fi 197 | else 198 | echo "Install opensips..." 199 | fi 200 | 201 | echo -e "Checking for a supported OS... \\c" 202 | 203 | FAIL="true" 204 | if [ "$(echo -e $(cat /etc/issue |head -n1)|head -n1)" = "$(cat os)" ];then 205 | echo "OK" 206 | FAIL="false" 207 | else 208 | if [ -f "/etc/redhat-release" ];then 209 | if [ "$(echo -e $(cat /etc/redhat-release |head -n1)|head -n1)" = "$(cat os)" ];then 210 | echo "OK" 211 | FAIL="false" 212 | fi 213 | fi 214 | 215 | # disable os check. 216 | if [ ! "$FAIL" = "false" ];then 217 | echo "OK" 218 | FAIL="false" 219 | fi 220 | 221 | 222 | if [ ! "$FAIL" = "false" ];then 223 | echo "This file can only installed on $(cat os), abort." 224 | FAIL="true" 225 | fi 226 | fi 227 | 228 | if [ ! $FAIL = "true" ]; then 229 | 230 | echo -e "Checking for a 64-bit OS... \\c" 231 | 232 | if [ $(uname -i) = $(cat hardware) ];then 233 | echo "OK" 234 | FAIL="false" 235 | else 236 | echo "This file can only installed on $(cat hardware) hardware platform, abort." 237 | FAIL="true" 238 | fi 239 | 240 | fi 241 | 242 | if [ $FAIL = "true" ];then 243 | echo "Aborting installation due to unsatisfied requirements." 244 | echo "Installation failed." 245 | exit 0 246 | fi 247 | 248 | for file in $SBINS;do 249 | echo "install $file to /$BINDIR/ ..." 250 | if [ -f "/$BINDIR/$file" ];then 251 | rm -f /$BINDIR/$file 252 | fi 253 | cp $BINDIR/$file /$BINDIR/ 254 | done 255 | 256 | if [ -d "/$ETCDIR/$BINNAME" ];then 257 | echo "/$ETCDIR/$BINNAME already exist, make backup." 258 | mv /$ETCDIR/$BINNAME /$ETCDIR/$BINNAME.backup-$(date +%F-%H-%M-%S) 259 | echo "install $BINNAME.default to /$ETCDIR/default/$BINNAME..." 260 | cp $ETCDIR/default/$BINNAME.default /$ETCDIR/default/$BINNAME 261 | echo "install $BINNAME.init to /$ETCDIR/init.d/$BINNAME..." 262 | cp $ETCDIR/init.d/$BINNAME.init /$ETCDIR/init.d/$BINNAME 263 | chmod +x /$ETCDIR/init.d/$BINNAME 264 | if [ -n "$(command -v apt-get)" ]; then 265 | if [ -f "$ETCDIR/$BINNAME.postinst" ]; then 266 | bash $ETCDIR/$BINNAME.postinst configure 267 | fi 268 | elif [ -n "$(command -v yum)" ]; then 269 | if [ -f "$ETCDIR/$BINNAME.postinst" ]; then 270 | bash $ETCDIR/$BINNAME.postinst configure 271 | passwd -l opensips 272 | fi 273 | fi 274 | fi 275 | cp -r $ETCDIR/$BINNAME /$ETCDIR 276 | 277 | if [ -d "/$LIBDIR/$BINNAME" ];then 278 | echo "/$LIBDIR/$BINNAME already exist, make backup." 279 | mv /$LIBDIR/$BINNAME /$LIBDIR/$BINNAME.backup-$(date +%F-%H-%M-%S) 280 | fi 281 | 282 | cp -r $LIBDIR/$BINNAME /$LIBDIR/$BINNAME 283 | 284 | echo "Done. Please edit necessary configures to run opensips." 285 | ' > archive/$RUNSCRIPT 286 | chmod +x archive/$RUNSCRIPT 287 | ./makeself.sh archive/ $RELEASEBIN "$RELEASEBIN" ./$RUNSCRIPT 288 | rm -rf archive 289 | 290 | if [ -f "$RELEASEBIN" ]; then 291 | if [ -d "$RELEASEDIR" ]; then 292 | rm -rf $RELEASEDIR 293 | fi 294 | mkdir $RELEASEDIR 295 | mv $RELEASEBIN $RELEASEDIR 296 | echo "Done. Please check file \"$RELEASEDIR/$RELEASEBIN\"" 297 | else 298 | echo "Error detected." 
299 | fi 300 | -------------------------------------------------------------------------------- /opensips/gdb.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | OUTPUT=/tmp/gdb.output 3 | 4 | for file in $(find /tmp -maxdepth 1 -name core.*);do 5 | #echo $file; 6 | gdb -batch -ex "set logging file $file.trace" -ex "set logging on" -ex "set pagination off" -ex "bt full" -ex quit "opensips" "$file" > /dev/null 2>&1 7 | if [ ! -d "/tmp/opensips_coredump" ];then 8 | mkdir /tmp/opensips_coredump 9 | fi 10 | mv $file /tmp/opensips_coredump 11 | done 12 | 13 | for file in $(find /tmp -maxdepth 1 -name *.trace);do 14 | echo -e "########## "$file" ##########\n\n" >> $OUTPUT 15 | cat $file >> $OUTPUT 16 | echo -e "\n\n" >> $OUTPUT 17 | rm -f $file 18 | done 19 | -------------------------------------------------------------------------------- /opensips/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :install.sh 3 | #description :This script will be build in the release file 4 | #author :xdtianyu@gmail.com 5 | #date :20141105 6 | #version :1.0 final 7 | #usage :DO NOT RUN THIS FILE BY HAND. 8 | #bash_version :4.3.11(1)-release 9 | #============================================================================== 10 | BINDIR="sbin" 11 | ETCDIR="etc" 12 | LIBDIR="lib64" 13 | SBINS="opensipsctl opensipsunix osipsconfig opensips opensipsdbctl osipsconsole" 14 | BINNAME="opensips" 15 | RELEASEBIN="opensips.run" 16 | 17 | if [ $UID -ne 0 ]; then 18 | echo "Superuser privileges are required to run this script." 19 | echo "e.g. \"sudo ./$RELEASEBIN\"" 20 | exit 0 21 | fi 22 | 23 | if [ ! -z "$1" ];then 24 | if [ $1 = "install" ];then 25 | echo "Install opensips..." 26 | elif [ $1 = "uninstall" ];then 27 | echo "Uninstall opensips..." 28 | for file in $SBINS;do 29 | if [ -f "/$BINDIR/$file" ];then 30 | echo "uninstall /$BINDIR/$file" 31 | rm -f /$BINDIR/$file 32 | fi 33 | done 34 | if [ -d "/$LIBDIR/$BINNAME" ];then 35 | echo "removing /$LIBDIR/$BINNAME" 36 | rm -rf /$LIBDIR/$BINNAME 37 | fi 38 | echo "Uninstall completed." 39 | exit 0 40 | elif [ $1 = "purge" ];then 41 | echo "Purge opensips..." 42 | for file in $SBINS;do 43 | if [ -f "/$BINDIR/$file" ];then 44 | echo "purge /$BINDIR/$file" 45 | rm -f /$BINDIR/$file 46 | fi 47 | done 48 | if [ -d "/$LIBDIR/$BINNAME" ];then 49 | echo "purge /$LIBDIR/$BINNAME" 50 | rm -rf /$LIBDIR/$BINNAME 51 | fi 52 | if [ -d "/$ETCDIR/$BINNAME" ];then 53 | echo "purge /$ETCDIR/$BINNAME" 54 | rm -rf /$ETCDIR/$BINNAME 55 | fi 56 | echo "Purge completed." 57 | exit 0 58 | else 59 | echo "Unknow param, exit." 60 | exit 0 61 | fi 62 | else 63 | echo "Install opensips..." 64 | fi 65 | 66 | echo -e "Checking for a supported OS... \c" 67 | 68 | FAIL="true" 69 | if [ "$(echo -e $(cat /etc/issue |head -n1)|head -n1)" = "$(cat os)" ];then 70 | echo "OK" 71 | FAIL="false" 72 | else 73 | if [ -f "/etc/redhat-release" ];then 74 | if [ "$(echo -e $(cat /etc/redhat-release |head -n1)|head -n1)" = "$(cat os)" ];then 75 | echo "OK" 76 | FAIL="false" 77 | fi 78 | fi 79 | 80 | # disable os check. 81 | if [ ! "$FAIL" = "false" ];then 82 | echo "OK" 83 | FAIL="false" 84 | fi 85 | 86 | 87 | if [ ! "$FAIL" = "false" ];then 88 | echo "This file can only installed on $(cat os), abort." 89 | FAIL="true" 90 | fi 91 | fi 92 | 93 | if [ ! $FAIL = "true" ]; then 94 | 95 | echo -e "Checking for a 64-bit OS... 
\c" 96 | 97 | if [ $(uname -i) = $(cat hardware) ];then 98 | echo "OK" 99 | FAIL="false" 100 | else 101 | echo "This file can only installed on $(cat hardware) hardware platform, abort." 102 | FAIL="true" 103 | fi 104 | 105 | fi 106 | 107 | if [ $FAIL = "true" ];then 108 | echo "Aborting installation due to unsatisfied requirements." 109 | echo "Installation failed." 110 | exit 0 111 | fi 112 | 113 | for file in $SBINS;do 114 | echo "install $file to /$BINDIR/ ..." 115 | if [ -f "/$BINDIR/$file" ];then 116 | rm -f /$BINDIR/$file 117 | fi 118 | cp $BINDIR/$file /$BINDIR/ 119 | done 120 | 121 | if [ -d "/$ETCDIR/$BINNAME" ];then 122 | echo "/$ETCDIR/$BINNAME already exist, make backup." 123 | mv /$ETCDIR/$BINNAME /$ETCDIR/$BINNAME.backup-$(date +%F-%H-%M-%S) 124 | echo "install $BINNAME.default to /$ETCDIR/default/$BINNAME..." 125 | cp $ETCDIR/default/$BINNAME.default /$ETCDIR/default/$BINNAME 126 | echo "install $BINNAME.init to /$ETCDIR/init.d/$BINNAME..." 127 | cp $ETCDIR/init.d/$BINNAME.init /$ETCDIR/init.d/$BINNAME 128 | chmod +x /$ETCDIR/init.d/$BINNAME 129 | if [ -n "$(command -v apt-get)" ]; then 130 | if [ -f "$ETCDIR/$BINNAME.postinst" ]; then 131 | bash $ETCDIR/$BINNAME.postinst configure 132 | fi 133 | elif [ -n "$(command -v yum)" ]; then 134 | if [ -f "$ETCDIR/$BINNAME.postinst" ]; then 135 | bash $ETCDIR/$BINNAME.postinst configure 136 | passwd -l opensips 137 | fi 138 | fi 139 | fi 140 | cp -r $ETCDIR/$BINNAME /$ETCDIR 141 | 142 | if [ -d "/$LIBDIR/$BINNAME" ];then 143 | echo "/$LIBDIR/$BINNAME already exist, make backup." 144 | mv /$LIBDIR/$BINNAME /$LIBDIR/$BINNAME.backup-$(date +%F-%H-%M-%S) 145 | fi 146 | 147 | cp -r $LIBDIR/$BINNAME /$LIBDIR/$BINNAME 148 | 149 | echo "Done. Please edit necessary configures to run opensips." 150 | -------------------------------------------------------------------------------- /opensips/mail.cfg: -------------------------------------------------------------------------------- 1 | email=xxx@xxx.com,xxxxx@xxxxx.com 2 | -------------------------------------------------------------------------------- /opensips/mail.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :mail.sh [event] [service name] [config file] 3 | #description :This script will call mail api to send mail via gmail. 4 | #author :xdtianyu@gmail.com 5 | #date :20141120 6 | #version :1.0 final 7 | #usage :bash mail.sh 8 | #bash_version :4.3.11(1)-release 9 | 10 | if [ $# -ne 3 ];then 11 | echo "Error param."; 12 | echo "Usage: $0 [event] [service name] [config file]" 13 | exit 0; 14 | fi 15 | 16 | EVENT=$1 17 | NAME=$2 18 | CONF=$(cat $3) 19 | 20 | if [ "$(echo $CONF|grep email|wc -l)" == "1" ];then 21 | curl -s https://www.xdty.org/mail.php -X POST -d "event=$EVENT&name=$NAME&email=$(echo $CONF|grep email|cut -c7-)" 22 | else 23 | echo "Config file error." 24 | fi 25 | -------------------------------------------------------------------------------- /opensips/mail2.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :mail.sh [event] [service name] [config file] 3 | #description :This script will call mail api to send mail via gmail. 
4 | #author :xdtianyu@gmail.com 5 | #date :20141201 6 | #version :2.0 final 7 | #usage :bash mail.sh 8 | #bash_version :4.3.11(1)-release 9 | 10 | OUTPUT=/tmp/gdb.output 11 | 12 | if [ $# -ne 3 ];then 13 | echo "Error param."; 14 | echo "Usage: $0 [event] [service name] [config file]" 15 | exit 0; 16 | fi 17 | 18 | EVENT=$1 19 | NAME=$2 20 | CONF=$(cat $3) 21 | 22 | #bash /etc/opensips/gdb.sh 23 | for file in $(find /tmp -maxdepth 1 -name core.*);do 24 | #echo $file; 25 | gdb -batch -ex "set logging file $file.trace" -ex "set logging on" -ex "set pagination off" -ex "bt full" -ex quit "opensips" "$file" > /dev/null 2>&1 26 | if [ ! -d "/tmp/opensips_coredump" ];then 27 | mkdir /tmp/opensips_coredump 28 | fi 29 | mv $file /tmp/opensips_coredump 30 | done 31 | 32 | for file in $(find /tmp -maxdepth 1 -name *.trace);do 33 | echo -e "########## "$file" ##########\n\n" >> $OUTPUT 34 | cat $file >> $OUTPUT 35 | echo -e "\n\n" >> $OUTPUT 36 | rm -f $file 37 | done 38 | 39 | if [ "$(echo $CONF|grep email|wc -l)" == "1" ];then 40 | curl -s --http1.0 https://www.xdty.org/mail_extra.php -X POST -d "event=$EVENT&name=$NAME&email=$(echo $CONF|grep email|cut -c7-)" --data-urlencode extra@$OUTPUT 41 | else 42 | echo "Config file error." 43 | fi 44 | 45 | if [ -f "$OUTPUT" ];then 46 | rm -f $OUTPUT 47 | fi 48 | -------------------------------------------------------------------------------- /opensips/make.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :make.sh 3 | #description :This script is called by autobuild.sh. 4 | #author :xdtianyu@gmail.com 5 | #date :20141205 6 | #version :1.0 final 7 | #usage :bash make.sh 8 | #bash_version :4.3.11(1)-release 9 | 10 | GIT_DIR=/home/builder/opensips-git 11 | BUILD_DIR=/home/builder/opensips-autobuild 12 | SUDO_PASSWORD="123456" # PLEASE MODIFY THIS 13 | 14 | PUSH_SERVER="sip.example.com" # PLEASE MODIFY THIS 15 | PUSH_PORT="12345" # PLEASE MODIFY THIS 16 | 17 | ERROR_MSG="" 18 | ERROR_OUTPUT="/tmp/error.out" 19 | 20 | if [ -f "$ERROR_OUTPUT" ];then 21 | echo $SUDO_PASSWORD | sudo -S rm -rf $ERROR_OUTPUT 22 | fi 23 | 24 | function finish() 25 | { 26 | if [ ! -z "$ERROR_MSG" ];then 27 | echo "$ERROR_MSG" >> $ERROR_OUTPUT 28 | fi 29 | exit 0; 30 | } 31 | 32 | if [ ! -d "$GIT_DIR" ];then 33 | echo "git source directory not exist, exit."; 34 | ERROR_MSG="git directory not exist."; 35 | finish; 36 | fi 37 | 38 | if [ -d "$BUILD_DIR" ];then 39 | echo "build directory exist, remove now."; 40 | echo $SUDO_PASSWORD|sudo -S rm -r $BUILD_DIR; 41 | fi 42 | 43 | cd $GIT_DIR 44 | 45 | git clean -df 46 | 47 | echo "pull git source to the latest." 48 | git pull 49 | 50 | mkdir -p $BUILD_DIR 51 | cd $BUILD_DIR 52 | 53 | echo "copy source to $BUILD_DIR" 54 | cp -r $GIT_DIR/* . 55 | 56 | echo -e "\nmake clean\n" 57 | make clean 58 | 59 | echo -e "\nmake all\n" 60 | make all|| ERROR_MSG="build failed." 61 | 62 | if [ ! -z "$ERROR_MSG" ];then 63 | finish; 64 | fi 65 | 66 | echo -e "\nmake backup\n" 67 | cd install 68 | echo $SUDO_PASSWORD|sudo -S ./build.sh --backup 69 | cd .. 70 | echo -e "\nmake install\n" 71 | echo $SUDO_PASSWORD|sudo -S make install 72 | 73 | echo -e "\nmake opensips.run\n" 74 | cd install 75 | echo $SUDO_PASSWORD|sudo -S ./build.sh 76 | 77 | echo -e "\nrestory backup\n" 78 | cd backup 79 | echo $SUDO_PASSWORD|sudo -S ./opensips* 80 | 81 | cd .. 82 | echo -e "\nclean backup\n" 83 | echo $SUDO_PASSWORD|sudo -S ./build.sh -c --backup 84 | 85 | cd .. 
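# If the build produced install/release/opensips.run, push it to the deploy
# host below; otherwise record an error so autobuild.sh mails a failure notice.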
86 | 87 | if [ -f "install/release/opensips.run" ];then 88 | echo -e "\npush opensips.run to $PUSH_SERVER\n" 89 | scp -P $PUSH_PORT install/release/opensips.run root@$PUSH_SERVER:/root 90 | else 91 | echo -e "\nError, no release find.\n" 92 | ERROR_MSG="Build failed, please check log for detail." 93 | finish; 94 | fi 95 | -------------------------------------------------------------------------------- /opensips/makeself-header.sh: -------------------------------------------------------------------------------- 1 | cat << EOF > "$archname" 2 | #!/bin/sh 3 | # This script was generated using Makeself $MS_VERSION 4 | 5 | CRCsum="$CRCsum" 6 | MD5="$MD5sum" 7 | TMPROOT=\${TMPDIR:=/tmp} 8 | 9 | label="$LABEL" 10 | script="$SCRIPT" 11 | scriptargs="$SCRIPTARGS" 12 | targetdir="$archdirname" 13 | filesizes="$filesizes" 14 | keep=$KEEP 15 | 16 | print_cmd_arg="" 17 | if type printf > /dev/null; then 18 | print_cmd="printf" 19 | elif test -x /usr/ucb/echo; then 20 | print_cmd="/usr/ucb/echo" 21 | else 22 | print_cmd="echo" 23 | fi 24 | 25 | unset CDPATH 26 | 27 | MS_Printf() 28 | { 29 | \$print_cmd \$print_cmd_arg "\$1" 30 | } 31 | 32 | MS_Progress() 33 | { 34 | while read a; do 35 | MS_Printf . 36 | done 37 | } 38 | 39 | MS_diskspace() 40 | { 41 | ( 42 | if test -d /usr/xpg4/bin; then 43 | PATH=/usr/xpg4/bin:\$PATH 44 | fi 45 | df -kP "\$1" | tail -1 | awk '{print \$4}' 46 | ) 47 | } 48 | 49 | MS_dd() 50 | { 51 | blocks=\`expr \$3 / 1024\` 52 | bytes=\`expr \$3 % 1024\` 53 | dd if="\$1" ibs=\$2 skip=1 obs=1024 conv=sync 2> /dev/null | \\ 54 | { test \$blocks -gt 0 && dd ibs=1024 obs=1024 count=\$blocks ; \\ 55 | test \$bytes -gt 0 && dd ibs=1 obs=1024 count=\$bytes ; } 2> /dev/null 56 | } 57 | 58 | MS_Help() 59 | { 60 | cat << EOH >&2 61 | Makeself version $MS_VERSION 62 | 1) Getting help or info about \$0 : 63 | \$0 --help Print this message 64 | \$0 --info Print embedded info : title, default target directory, embedded script ... 65 | \$0 --lsm Print embedded lsm entry (or no LSM) 66 | \$0 --list Print the list of files in the archive 67 | \$0 --check Checks integrity of the archive 68 | 69 | 2) Running \$0 : 70 | \$0 [options] [--] [additional arguments to embedded script] 71 | with following options (in that order) 72 | --confirm Ask before running embedded script 73 | --noexec Do not run embedded script 74 | --keep Do not erase target directory after running 75 | the embedded script 76 | --nox11 Do not spawn an xterm 77 | --nochown Do not give the extracted files to the current user 78 | --target NewDirectory Extract in NewDirectory 79 | --tar arg1 [arg2 ...] Access the contents of the archive through the tar command 80 | -- Following arguments will be passed to the embedded script 81 | EOH 82 | } 83 | 84 | MS_Check() 85 | { 86 | OLD_PATH="\$PATH" 87 | PATH=\${GUESS_MD5_PATH:-"\$OLD_PATH:/bin:/usr/bin:/sbin:/usr/local/ssl/bin:/usr/local/bin:/opt/openssl/bin"} 88 | MD5_ARG="" 89 | MD5_PATH=\`exec <&- 2>&-; which md5sum || type md5sum\` 90 | test -x "\$MD5_PATH" || MD5_PATH=\`exec <&- 2>&-; which md5 || type md5\` 91 | test -x "\$MD5_PATH" || MD5_PATH=\`exec <&- 2>&-; which digest || type digest\` 92 | PATH="\$OLD_PATH" 93 | 94 | MS_Printf "Verifying archive integrity..." 
95 | offset=\`head -n $SKIP "\$1" | wc -c | tr -d " "\` 96 | verb=\$2 97 | i=1 98 | for s in \$filesizes 99 | do 100 | crc=\`echo \$CRCsum | cut -d" " -f\$i\` 101 | if test -x "\$MD5_PATH"; then 102 | if test \`basename \$MD5_PATH\` = digest; then 103 | MD5_ARG="-a md5" 104 | fi 105 | md5=\`echo \$MD5 | cut -d" " -f\$i\` 106 | if test \$md5 = "00000000000000000000000000000000"; then 107 | test x\$verb = xy && echo " \$1 does not contain an embedded MD5 checksum." >&2 108 | else 109 | md5sum=\`MS_dd "\$1" \$offset \$s | eval "\$MD5_PATH \$MD5_ARG" | cut -b-32\`; 110 | if test "\$md5sum" != "\$md5"; then 111 | echo "Error in MD5 checksums: \$md5sum is different from \$md5" >&2 112 | exit 2 113 | else 114 | test x\$verb = xy && MS_Printf " MD5 checksums are OK." >&2 115 | fi 116 | crc="0000000000"; verb=n 117 | fi 118 | fi 119 | if test \$crc = "0000000000"; then 120 | test x\$verb = xy && echo " \$1 does not contain a CRC checksum." >&2 121 | else 122 | sum1=\`MS_dd "\$1" \$offset \$s | CMD_ENV=xpg4 cksum | awk '{print \$1}'\` 123 | if test "\$sum1" = "\$crc"; then 124 | test x\$verb = xy && MS_Printf " CRC checksums are OK." >&2 125 | else 126 | echo "Error in checksums: \$sum1 is different from \$crc" 127 | exit 2; 128 | fi 129 | fi 130 | i=\`expr \$i + 1\` 131 | offset=\`expr \$offset + \$s\` 132 | done 133 | echo " All good." 134 | } 135 | 136 | UnTAR() 137 | { 138 | tar \$1vf - 2>&1 || { echo Extraction failed. > /dev/tty; kill -15 \$$; } 139 | } 140 | 141 | finish=true 142 | xterm_loop= 143 | nox11=$NOX11 144 | copy=$COPY 145 | ownership=y 146 | verbose=n 147 | 148 | initargs="\$@" 149 | 150 | while true 151 | do 152 | case "\$1" in 153 | -h | --help) 154 | MS_Help 155 | exit 0 156 | ;; 157 | --info) 158 | echo Identification: "\$label" 159 | echo Target directory: "\$targetdir" 160 | echo Uncompressed size: $USIZE KB 161 | echo Compression: $COMPRESS 162 | echo Date of packaging: $DATE 163 | echo Built with Makeself version $MS_VERSION on $OSTYPE 164 | echo Build command was: "$MS_COMMAND" 165 | if test x\$script != x; then 166 | echo Script run after extraction: 167 | echo " " \$script \$scriptargs 168 | fi 169 | if test x"$copy" = xcopy; then 170 | echo "Archive will copy itself to a temporary location" 171 | fi 172 | if test x"$KEEP" = xy; then 173 | echo "directory \$targetdir is permanent" 174 | else 175 | echo "\$targetdir will be removed after extraction" 176 | fi 177 | exit 0 178 | ;; 179 | --dumpconf) 180 | echo LABEL=\"\$label\" 181 | echo SCRIPT=\"\$script\" 182 | echo SCRIPTARGS=\"\$scriptargs\" 183 | echo archdirname=\"$archdirname\" 184 | echo KEEP=$KEEP 185 | echo COMPRESS=$COMPRESS 186 | echo filesizes=\"\$filesizes\" 187 | echo CRCsum=\"\$CRCsum\" 188 | echo MD5sum=\"\$MD5\" 189 | echo OLDUSIZE=$USIZE 190 | echo OLDSKIP=`expr $SKIP + 1` 191 | exit 0 192 | ;; 193 | --lsm) 194 | cat << EOLSM 195 | EOF 196 | eval "$LSM_CMD" 197 | cat << EOF >> "$archname" 198 | EOLSM 199 | exit 0 200 | ;; 201 | --list) 202 | echo Target directory: \$targetdir 203 | offset=\`head -n $SKIP "\$0" | wc -c | tr -d " "\` 204 | for s in \$filesizes 205 | do 206 | MS_dd "\$0" \$offset \$s | eval "$GUNZIP_CMD" | UnTAR t 207 | offset=\`expr \$offset + \$s\` 208 | done 209 | exit 0 210 | ;; 211 | --tar) 212 | offset=\`head -n $SKIP "\$0" | wc -c | tr -d " "\` 213 | arg1="\$2" 214 | shift 2 215 | for s in \$filesizes 216 | do 217 | MS_dd "\$0" \$offset \$s | eval "$GUNZIP_CMD" | tar "\$arg1" - \$* 218 | offset=\`expr \$offset + \$s\` 219 | done 220 | exit 0 221 | ;; 222 | --check) 223 | MS_Check 
"\$0" y 224 | exit 0 225 | ;; 226 | --confirm) 227 | verbose=y 228 | shift 229 | ;; 230 | --noexec) 231 | script="" 232 | shift 233 | ;; 234 | --keep) 235 | keep=y 236 | shift 237 | ;; 238 | --target) 239 | keep=y 240 | targetdir=\${2:-.} 241 | shift 2 242 | ;; 243 | --nox11) 244 | nox11=y 245 | shift 246 | ;; 247 | --nochown) 248 | ownership=n 249 | shift 250 | ;; 251 | --xwin) 252 | finish="echo Press Return to close this window...; read junk" 253 | xterm_loop=1 254 | shift 255 | ;; 256 | --phase2) 257 | copy=phase2 258 | shift 259 | ;; 260 | --) 261 | shift 262 | break ;; 263 | -*) 264 | echo Unrecognized flag : "\$1" >&2 265 | MS_Help 266 | exit 1 267 | ;; 268 | *) 269 | break ;; 270 | esac 271 | done 272 | 273 | case "\$copy" in 274 | copy) 275 | tmpdir=\$TMPROOT/makeself.\$RANDOM.\`date +"%y%m%d%H%M%S"\`.\$\$ 276 | mkdir "\$tmpdir" || { 277 | echo "Could not create temporary directory \$tmpdir" >&2 278 | exit 1 279 | } 280 | SCRIPT_COPY="\$tmpdir/makeself" 281 | echo "Copying to a temporary location..." >&2 282 | cp "\$0" "\$SCRIPT_COPY" 283 | chmod +x "\$SCRIPT_COPY" 284 | cd "\$TMPROOT" 285 | exec "\$SCRIPT_COPY" --phase2 -- \$initargs 286 | ;; 287 | phase2) 288 | finish="\$finish ; rm -rf \`dirname \$0\`" 289 | ;; 290 | esac 291 | 292 | if test "\$nox11" = "n"; then 293 | if tty -s; then # Do we have a terminal? 294 | : 295 | else 296 | if test x"\$DISPLAY" != x -a x"\$xterm_loop" = x; then # No, but do we have X? 297 | if xset q > /dev/null 2>&1; then # Check for valid DISPLAY variable 298 | GUESS_XTERMS="xterm rxvt dtterm eterm Eterm kvt konsole aterm" 299 | for a in \$GUESS_XTERMS; do 300 | if type \$a >/dev/null 2>&1; then 301 | XTERM=\$a 302 | break 303 | fi 304 | done 305 | chmod a+x \$0 || echo Please add execution rights on \$0 306 | if test \`echo "\$0" | cut -c1\` = "/"; then # Spawn a terminal! 307 | exec \$XTERM -title "\$label" -e "\$0" --xwin "\$initargs" 308 | else 309 | exec \$XTERM -title "\$label" -e "./\$0" --xwin "\$initargs" 310 | fi 311 | fi 312 | fi 313 | fi 314 | fi 315 | 316 | if test "\$targetdir" = "."; then 317 | tmpdir="." 318 | else 319 | if test "\$keep" = y; then 320 | echo "Creating directory \$targetdir" >&2 321 | tmpdir="\$targetdir" 322 | dashp="-p" 323 | else 324 | tmpdir="\$TMPROOT/selfgz\$\$\$RANDOM" 325 | dashp="" 326 | fi 327 | mkdir \$dashp \$tmpdir || { 328 | echo 'Cannot create target directory' \$tmpdir >&2 329 | echo 'You should try option --target OtherDirectory' >&2 330 | eval \$finish 331 | exit 1 332 | } 333 | fi 334 | 335 | location="\`pwd\`" 336 | if test x\$SETUP_NOCHECK != x1; then 337 | MS_Check "\$0" 338 | fi 339 | offset=\`head -n $SKIP "\$0" | wc -c | tr -d " "\` 340 | 341 | if test x"\$verbose" = xy; then 342 | MS_Printf "About to extract $USIZE KB in \$tmpdir ... Proceed ? [Y/n] " 343 | read yn 344 | if test x"\$yn" = xn; then 345 | eval \$finish; exit 1 346 | fi 347 | fi 348 | 349 | MS_Printf "Uncompressing \$label" 350 | res=3 351 | if test "\$keep" = n; then 352 | trap 'echo Signal caught, cleaning up >&2; cd \$TMPROOT; /bin/rm -rf \$tmpdir; eval \$finish; exit 15' 1 2 3 15 353 | fi 354 | 355 | leftspace=\`MS_diskspace \$tmpdir\` 356 | if test \$leftspace -lt $USIZE; then 357 | echo 358 | echo "Not enough space left in "\`dirname \$tmpdir\`" (\$leftspace KB) to decompress \$0 ($USIZE KB)" >&2 359 | if test "\$keep" = n; then 360 | echo "Consider setting TMPDIR to a directory with more free space." 
361 | fi 362 | eval \$finish; exit 1 363 | fi 364 | 365 | for s in \$filesizes 366 | do 367 | if MS_dd "\$0" \$offset \$s | eval "$GUNZIP_CMD" | ( cd "\$tmpdir"; UnTAR x ) | MS_Progress; then 368 | if test x"\$ownership" = xy; then 369 | (PATH=/usr/xpg4/bin:\$PATH; cd "\$tmpdir"; chown -R \`id -u\` .; chgrp -R \`id -g\` .) 370 | fi 371 | else 372 | echo 373 | echo "Unable to decompress \$0" >&2 374 | eval \$finish; exit 1 375 | fi 376 | offset=\`expr \$offset + \$s\` 377 | done 378 | echo 379 | 380 | cd "\$tmpdir" 381 | res=0 382 | if test x"\$script" != x; then 383 | if test x"\$verbose" = xy; then 384 | MS_Printf "OK to execute: \$script \$scriptargs \$* ? [Y/n] " 385 | read yn 386 | if test x"\$yn" = x -o x"\$yn" = xy -o x"\$yn" = xY; then 387 | eval \$script \$scriptargs \$*; res=\$?; 388 | fi 389 | else 390 | eval \$script \$scriptargs \$*; res=\$? 391 | fi 392 | if test \$res -ne 0; then 393 | test x"\$verbose" = xy && echo "The program '\$script' returned an error code (\$res)" >&2 394 | fi 395 | fi 396 | if test "\$keep" = n; then 397 | cd \$TMPROOT 398 | /bin/rm -rf \$tmpdir 399 | fi 400 | eval \$finish; exit \$res 401 | EOF 402 | -------------------------------------------------------------------------------- /opensips/rpm.opensips.init: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Startup script for OpenSIPS 4 | # 5 | # chkconfig: 345 85 15 6 | # description: OpenSIPS is a fast SIP Server. 7 | # 8 | # processname: opensips 9 | # pidfile: /var/run/opensips.pid 10 | # config: /etc/opensips/opensips.cfg 11 | 12 | # Source function library. 13 | . /etc/rc.d/init.d/functions 14 | 15 | OSER=/sbin/opensips 16 | PROG=opensips 17 | PID_FILE=/var/run/opensips.pid 18 | LOCK_FILE=/var/lock/subsys/opensips 19 | RETVAL=0 20 | DEFAULTS=/etc/default/opensips 21 | RUN_OPENSIPS=no 22 | 23 | EMAIL_SCRIPT=/etc/opensips/mail2.sh 24 | EMAIL_CONF=/etc/opensips/mail.cfg 25 | CORE_DUMP=/tmp 26 | 27 | # Do not start opensips if fork=no is set in the config file 28 | # otherwise the boot process will just stop 29 | check_fork () 30 | { 31 | if grep -q "^[[:space:]]*fork[[:space:]]*=[[:space:]]*no.*" /etc/opensips/opensips.cfg; then 32 | echo "Not starting $DESC: fork=no specified in config file; run /etc/init.d/opensips debug instead" 33 | exit 1 34 | fi 35 | } 36 | 37 | check_opensips_config () 38 | { 39 | # Check if opensips configuration is valid before starting the server 40 | out=$($OSER -c 2>&1 > /dev/null) 41 | retcode=$? 42 | if [ "$retcode" != '0' ]; then 43 | echo "Not starting $DESC: invalid configuration file!" 44 | echo -e "\n$out\n" 45 | exit 1 46 | fi 47 | } 48 | 49 | 50 | start() { 51 | check_opensips_config 52 | if [ "$1" != "debug" ]; then 53 | check_fork 54 | fi 55 | echo -n $"Starting $PROG: " 56 | daemon $OSER $OPTIONS >/dev/null 2>/dev/null 57 | RETVAL=$? 58 | echo 59 | [ $RETVAL = 0 ] && touch $LOCK_FILE 60 | return $RETVAL 61 | } 62 | 63 | stop() { 64 | echo -n $"Stopping $PROG: " 65 | killproc $OSER 66 | RETVAL=$? 67 | echo 68 | [ $RETVAL = 0 ] && rm -f $LOCK_FILE $PID_FILE 69 | } 70 | 71 | # Load startup options if available 72 | if [ -f $DEFAULTS ]; then 73 | . $DEFAULTS || true 74 | fi 75 | 76 | if [ "$RUN_OPENSIPS" != "yes" ]; then 77 | echo "OpenSIPS not yet configured. Edit /etc/default/opensips first." 
78 | exit 0 79 | fi 80 | 81 | 82 | S_MEMORY=$((`echo $S_MEMORY | sed -e 's/[^0-9]//g'`)) 83 | P_MEMORY=$((`echo $P_MEMORY | sed -e 's/[^0-9]//g'`)) 84 | [ -z "$USER" ] && USER=opensips 85 | [ -z "$GROUP" ] && GROUP=opensips 86 | [ $S_MEMORY -le 0 ] && S_MEMORY=32 87 | [ $P_MEMORY -le 0 ] && P_MEMORY=4 88 | 89 | if test "$DUMP_CORE" = "yes" ; then 90 | # set proper ulimit 91 | ulimit -c unlimited 92 | 93 | # directory for the core dump files 94 | # COREDIR=/home/corefiles 95 | # [ -d $COREDIR ] || mkdir $COREDIR 96 | # chmod 777 $COREDIR 97 | # echo "$COREDIR/core.%e.sig%s.%p" > /proc/sys/kernel/core_pattern 98 | fi 99 | 100 | OPTIONS="-P $PID_FILE -m $S_MEMORY -M $P_MEMORY -u $USER -g $GROUP -w $CORE_DUMP" 101 | 102 | 103 | # See how we were called. 104 | case "$1" in 105 | start|debug) 106 | start 107 | $EMAIL_SCRIPT "started" "service opensips" $EMAIL_CONF & 108 | ;; 109 | stop) 110 | stop 111 | $EMAIL_SCRIPT "stoped" "service opensips" $EMAIL_CONF & 112 | ;; 113 | status) 114 | status $OSER 115 | RETVAL=$? 116 | ;; 117 | restart) 118 | stop 119 | start 120 | $EMAIL_SCRIPT "restarted" "service opensips" $EMAIL_CONF & 121 | ;; 122 | condrestart) 123 | if [ -f $PID_FILE ] ; then 124 | stop 125 | start 126 | fi 127 | ;; 128 | *) 129 | echo $"Usage: $PROG {start|stop|restart|condrestart|status|debug|help}" 130 | exit 1 131 | esac 132 | 133 | exit $RETVAL 134 | -------------------------------------------------------------------------------- /py/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | USER=$(cat /etc/passwd |grep bash| grep home|head -n1 |cut -d ':' -f 1) 4 | 5 | if [ -z '$USER' ];then 6 | echo 'Error, must have a normal user!!' 7 | exit 0 8 | else 9 | echo "Run as $USER now." 10 | fi 11 | 12 | if [ -d '/tmp/rtmpweb' ];then 13 | rm -r /tmp/rtmpweb 14 | fi 15 | 16 | sudo -u $USER mkdir /tmp/rtmpweb 17 | sudo -u $USER cp -a . /tmp/rtmpweb 18 | 19 | cd /tmp/rtmpweb 20 | 21 | echo -e "cleaning...\n" 22 | sudo -u $USER ./clean.sh 23 | 24 | sudo -u $USER python server.py -c server.cfg -g 25 | sudo -u $USER pyinstaller server.py -F 26 | 27 | if [ -f '.auto' ];then 28 | sudo -u $USER cp .auto dist 29 | else 30 | echo "Error, .auto file not find" 31 | fi 32 | 33 | if [ ! -d 'dist' ];then 34 | echo "Error, pyinstall failed!" 35 | exit 0 36 | fi 37 | 38 | if [ -d 'records' ];then 39 | sudo -u $USER cp -r records dist 40 | fi 41 | 42 | if [ -d 'static' ];then 43 | sudo -u $USER cp -r static dist 44 | else 45 | echo "Error, no static." 46 | fi 47 | 48 | if [ -f 'server.cfg' ];then 49 | sudo -u $USER cp server.cfg dist 50 | fi 51 | 52 | cd - 53 | 54 | echo "DONE!" 55 | -------------------------------------------------------------------------------- /py/clean.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ $(find -name \*.pyc|wc -l) != 0 ];then 4 | rm *.pyc 5 | fi 6 | 7 | if [ -f ".auto" ];then 8 | rm .auto 9 | fi 10 | 11 | if [ -d "dist" ];then 12 | rm -r dist 13 | fi 14 | 15 | if [ -d "build" ];then 16 | rm -r build 17 | fi 18 | 19 | if [ -f "server.spec" ];then 20 | rm server.spec 21 | fi 22 | 23 | echo "DONE!" 
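# clean.sh removes the pyinstaller artifacts (*.pyc, .auto, dist/, build/,
# server.spec); build.sh above invokes it before repackaging server.py.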
24 | -------------------------------------------------------------------------------- /py/header.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #''' 3 | # File name: header.sh 4 | # Author: xdtianyu@gmail.com 5 | # Date created: 2015-01-07 14:24:16 6 | # Date last modified: 2015-01-09 15:19:31 7 | # Bash Version: 4.3.11(1)-release 8 | #''' 9 | 10 | AUTHOR="xdtianyu@gmail.com" 11 | PYTHON_VERSION=$(python -c 'import sys; print(sys.version[:5])') 12 | 13 | for file in $(ls *.py); do 14 | echo "check $file ..." 15 | MODIFIED=$(stat -c %y $file| cut -d'.' -f1) 16 | #echo "Last modified: $MODIFIED" 17 | CURRENT_DATE=$(date "+%Y-%m-%d %H:%M:%S") 18 | AUTHOR_COUNT=$(cat "$file" |grep " Author:" |wc -l) 19 | if [ $AUTHOR_COUNT -gt 1 ];then 20 | echo "More than one author, skip." 21 | continue 22 | elif [ $AUTHOR_COUNT -eq 1 ];then 23 | echo "Have author, check modified date." 24 | ORI=$(cat $file |grep " Date last modified: ") 25 | if [ "$ORI" == "" ];then 26 | echo "no line" 27 | continue 28 | fi 29 | TARGET=" Date last modified: ${MODIFIED}" 30 | #echo $ORI 31 | #echo $TARGET 32 | if [ "$ORI" == "$TARGET" ];then 33 | echo "No change detected." 34 | else 35 | if [ $(cat $file |grep " Date last modified: "|wc -l) -gt 1 ];then 36 | echo "More than one \"Date last modified\" detected, skip" 37 | continue 38 | fi 39 | LINE=$(cat $file |grep " Date last modified: " -n | cut -d':' -f1) 40 | sed -i "${LINE}s/.*/ Date last modified: ${CURRENT_DATE}/" $file 41 | fi 42 | # Check file name 43 | FILE_COUNT=$(cat $file |grep " File name:"|wc -l) 44 | if [ $FILE_COUNT -gt 1 ];then 45 | echo "More than one \"File name\" detecetd, skip" 46 | continue 47 | elif [ $FILE_COUNT -eq 1 ];then 48 | echo "Check file name..." 49 | ORI_FILE_NAME=$(cat $file |grep " File name:") 50 | TARGET_FILE_NAME=" File name: $file" 51 | if [ ! "$ORI_FILE_NAME" == "$TARGET_FILE_NAME" ];then 52 | echo "File name changed, update now" 53 | FILE_LINE=$(cat $file |grep " File name: " -n | cut -d':' -f1) 54 | sed -i "${FILE_LINE}s/.*/${TARGET_FILE_NAME}/" $file 55 | else 56 | echo "No change detected." 57 | fi 58 | fi 59 | else 60 | echo "Have no author, add header." 61 | sed -i "1s/^/\'\'\'\n File name: $file\n Author: $AUTHOR\n Date created: $MODIFIED\n Date last modified: $CURRENT_DATE\n Python Version: $PYTHON_VERSION\n\'\'\'\n\n/" $file 62 | fi 63 | done 64 | -------------------------------------------------------------------------------- /speedtest/README-CN.md: -------------------------------------------------------------------------------- 1 | 这个脚本可以做一个本地到服务器的下载速度测试,测试完成后会发送邮件到你的邮箱 2 | 3 | ###openwrt 依赖 4 | 5 | 这个脚本使用了 `curl` `wget` `timeout` and `https`. 如果你使用的是 openwrt 路由器, 你需要安装以下依赖 6 | 7 | ``` 8 | opkg update 9 | opkg install coreutils-timeout 10 | opkg install ca-certificates 11 | opkg install curl 12 | opkg install wget 13 | ``` 14 | 15 | ###如何使用 16 | 17 | **1\. 从 github 获取脚本** 18 | ``` 19 | cd ~/bin 20 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/speedtest/speedtest.sh -O speedtest 21 | chmod +x speedtest 22 | ``` 23 | 你可以添加 `~/bin` 到你的环境变量,修改 `/etc/profile` 文件的 `PATH` 24 | 25 | **2\. 脚本配置** 26 | 27 | 修改 `EMAIL` 为的邮箱地址. 28 | 29 | 修改 `TEST_FILES` 数组为你的测试文件地址. 每一个测试会在 `TIMEOUT` 秒(默认20)后强制打断 即 `timeout`. 30 | 31 | 你可以修改 `NAME` 为 `speedtest(home)` 来区分其他测试. 32 | 33 | **3\. 运行测试** 34 | 35 | ``` 36 | ~/bin/speedtest 37 | ``` 38 | 运行完成后检查邮查看结果 39 | 40 | **4\. 
添加到 `crontab`** 41 | 42 | 43 | ``` 44 | crontab -e 45 | 46 | 25 * * * * /root/bin/speedtest 47 | ``` 48 | 49 | 之后会在每小时25分运行一次测试,并将测试结果发送到你的邮箱 50 | -------------------------------------------------------------------------------- /speedtest/README.md: -------------------------------------------------------------------------------- 1 | This script can do a speedtest to servers and then send the result to your email, you can add this to crontab. 2 | 3 | ###openwrt dependency 4 | 5 | This script are using `curl` `wget` `timeout` and `https`. If you're using this on openwrt router, you need to do the following commands 6 | 7 | ``` 8 | opkg update 9 | opkg install coreutils-timeout 10 | opkg install ca-certificates 11 | opkg install curl 12 | opkg install wget 13 | ``` 14 | 15 | ###How to use 16 | 17 | **1\. Get the script from github** 18 | ``` 19 | cd ~/bin 20 | wget https://raw.githubusercontent.com/xdtianyu/scripts/master/speedtest/speedtest.sh -O speedtest 21 | chmod +x speedtest 22 | ``` 23 | You can add `~/bin` to your `PATH` in `/etc/profile` 24 | 25 | **2\. Script configuration** 26 | 27 | Replace `EMAIL` with your email address. 28 | 29 | Replace `TEST_FILES` arrays with your own test files. Each test will only has `TIMEOUT` seconds(default 20) and then `timeout`. 30 | 31 | You can replace `NAME` with `speedtest(home)` to distinguish other tests. 32 | 33 | **3\. Do a speedtest** 34 | 35 | ``` 36 | ~/bin/speedtest 37 | ``` 38 | Then check your email for result. 39 | 40 | **4\. Add to `crontab`** 41 | 42 | 43 | ``` 44 | crontab -e 45 | 46 | 25 * * * * /root/bin/speedtest 47 | ``` 48 | 49 | This means at every hour's 25, a speedtest runs and you get a result email. 50 | -------------------------------------------------------------------------------- /speedtest/speedtest.php: -------------------------------------------------------------------------------- 1 | >/tmp/speedtest.txt 2>&1 &"); 21 | } 22 | break; 23 | case 'GET': 24 | echo ''; 25 | break; 26 | } 27 | ?> 28 | -------------------------------------------------------------------------------- /speedtest/speedtest.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | 3 | import hashlib 4 | import os 5 | import sys 6 | 7 | __author__ = 'ty' 8 | 9 | import re 10 | import time 11 | from datetime import datetime 12 | from datetime import timedelta 13 | from nvd3 import lineWithFocusChart 14 | 15 | MAX_SPEED = 5000 16 | 17 | 18 | class SpeedObject: 19 | def __init__(self, _date, _uri, _speed): 20 | self.date = _date 21 | self.uri = _uri 22 | self.speed = _speed 23 | 24 | directory = sys.argv[1] 25 | name = sys.argv[2] 26 | test_duration = sys.argv[3] 27 | 28 | with open(directory+"/"+name+".txt", 'r') as content_file: 29 | content = content_file.read() 30 | 31 | tests = content.split('#######################') 32 | 33 | date = datetime 34 | min_date = datetime.now() + timedelta(0, -3600*36) 35 | 36 | uri = "" 37 | 38 | testDict = {} 39 | 40 | for test in tests: 41 | try: 42 | item = re.findall("--.*", test, re.MULTILINE)[0] 43 | time_s = item.split(' ')[0] 44 | uri = item.split(' ')[1] 45 | date = datetime.strptime(time_s, '--%Y-%m-%d %H:%M:%S--') 46 | 47 | print date 48 | print uri 49 | 50 | if date < min_date: 51 | print "more than 36 hour, ignore." 
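# older than the 36-hour window (min_date above), so skip this sample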
52 | continue 53 | 54 | if uri in testDict: 55 | speedList = testDict[uri] 56 | else: 57 | speedList = [] 58 | testDict[uri] = speedList 59 | 60 | speeds = re.findall(".*%.*", test, re.MULTILINE) 61 | 62 | if len(speeds) == 0: 63 | continue 64 | else: 65 | delta = float(test_duration) / len(speeds) 66 | 67 | speedList.append(SpeedObject(date + timedelta(0, -3), uri, "10K")) 68 | 69 | for speed in speeds: 70 | speed = speed.split('% ')[1].strip().split(' ')[0] 71 | 72 | if "M" in speed: 73 | speed_i = float(speed.split('M')[0]) * 1024 74 | if speed_i > MAX_SPEED: 75 | continue 76 | speed = str(speed_i) + 'K' 77 | 78 | speedList.append(SpeedObject(date, uri, speed)) 79 | date = date + timedelta(0, delta) 80 | print speed 81 | speedList.append(SpeedObject(date + timedelta(0, 3), uri, "10K")) 82 | 83 | except IndexError as e: 84 | pass 85 | 86 | 87 | output_file = open(directory+'/'+name+'.htm', 'w') 88 | 89 | chart_name = "Speed Test" 90 | chart = lineWithFocusChart(name=chart_name, width="1280", height="720", color_category='category20b', x_is_date=True, 91 | x_axis_format="%m-%d %H:%M") 92 | 93 | title = "\n\n

" + chart_name + "

" 94 | 95 | href = "
"+name+"" 96 | 97 | for filename in os.listdir(directory): 98 | if filename.endswith('.htm') and filename != name+".htm": 99 | href = href + " "+filename[:-4]+"" 100 | 101 | href = href + " RAW" 102 | 103 | chart.set_containerheader(title + href + "\n\n") 104 | 105 | extra_series = {"tooltip": {"y_start": "", "y_end": " K/s"}, 106 | "date_format": "%d %b %Y %H:%M:%S %p"} 107 | 108 | for uri in testDict: 109 | speeds = [] 110 | dates = [] 111 | 112 | for speed in testDict[uri]: 113 | dates.append(time.mktime(speed.date.timetuple()) * 1e3 + speed.date.microsecond / 1e3) 114 | speeds.append(float(speed.speed.split('K')[0])) 115 | md5 = hashlib.md5(uri).hexdigest() 116 | color = '#%s' % (md5[-6:]) 117 | chart.add_serie(name=uri, y=speeds, x=dates, extra=extra_series, color=color) 118 | 119 | chart.buildhtml() 120 | 121 | output_file.write(chart.htmlcontent) 122 | 123 | output_file.close() 124 | -------------------------------------------------------------------------------- /speedtest/speedtest.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | EMAIL="YOUR_EMAIL@example.com" 4 | TIMEOUT=15 5 | NAME="speedtest" 6 | EVENT="finished" 7 | OUTPUT=/tmp/wget.speedtest 8 | SPEED_URL=https://example.com/speed.php 9 | MAIL_URL=https://example.com/mail_extra.php 10 | 11 | TEST_FILES=( 12 | https://node1.example.com/10meg.test 13 | https://node2.example.com/10meg.test 14 | https://node3.example.com/10meg.test 15 | https://node4.example.com/10meg.test 16 | https://node5.example.com/10meg.test 17 | ) 18 | 19 | # speed test 20 | if [ -f $OUTPUT ]; then 21 | rm $OUTPUT 22 | fi 23 | 24 | for file in ${TEST_FILES[@]}; do 25 | timeout $TIMEOUT wget -4 -O /dev/null $file -a $OUTPUT --progress=dot:binary 26 | echo -e "\n\n#######################\n" >>$OUTPUT 27 | done 28 | 29 | # send email 30 | 31 | echo -e "Generate chart... \n" 32 | curl -s --http1.0 $SPEED_URL -X POST -d "time=$TIMEOUT&client=$CLIENT" --data-urlencode extra@$OUTPUT --capath /etc/ssl/certs/ 33 | 34 | echo -e "Send email... 
\n" 35 | curl -s --http1.0 $MAIL_URL -X POST -d "event=$EVENT&name=$NAME&email=$EMAIL" --data-urlencode extra@$OUTPUT --capath /etc/ssl/certs/ 36 | 37 | -------------------------------------------------------------------------------- /ssh/README.md: -------------------------------------------------------------------------------- 1 | sshrc.sh: Send mail while ssh login 2 | 3 | pam.sh: Send mail while ssh login or logout, need add the follow line to /etc/pam.d/sshd 4 | 5 | >session optional pam_exec.so log=/var/log/login.log /etc/ssh/pam.sh 6 | 7 | -------------------------------------------------------------------------------- /ssh/authorized_keys: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW3T/zTunS3+AWbDJEdieE6gK4NtEHkhFQ8MKNuQ5Isj4V8W3zo318r8LbBbM38E4iwmMkpaVkJXVXyp4g7LP9kWkxAdAIDfX8pbxc09N+gb4CwSZP4qX0chDJOT9Am2/X3S+3sHNOky/EWhWG7UpJiwUM1D110H/7/ZlH3++5Lyy//QzashKkQ/4htmL09u+wIbMXuJ+1uZ9AZRu9i7h1ud2vtQhhuAhV3cdBTMxt44/aAUkvZrs9YoVfuLdN6nm0EKDMCbZy9L3zYubQ4WaocCoguoCtZ3gCtMUkNKDXW2XdvcFgOkpsDRwa5NYUh3+fbhV37OaNXufPoYGBuvzD 2 | -------------------------------------------------------------------------------- /ssh/pam.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | NAME="xxx" 4 | EMAIL="xxx@gmail.com" 5 | 6 | read -d " " ip <<< $PAM_RHOST 7 | geo=$(curl -s http://ip.xdty.org -X POST -d "geo=$ip") 8 | if [ -z "$geo" ];then 9 | geo="unknown" 10 | fi 11 | ip="$ip($geo)" 12 | 13 | if [ $PAM_TYPE = "close_session" ];then 14 | EVENT="logout" 15 | elif [ $PAM_TYPE = "open_session" ];then 16 | EVENT="login" 17 | fi 18 | 19 | curl -s https://www.xdty.org/mail.php -X POST -d "event=($PAM_USER) $EVENT from $ip&name=$NAME&email=$EMAIL" & 20 | -------------------------------------------------------------------------------- /ssh/sshrc: -------------------------------------------------------------------------------- 1 | /etc/ssh/sshrc.sh & 2 | -------------------------------------------------------------------------------- /ssh/sshrc.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | read -d " " ip <<< $SSH_CONNECTION 3 | #date=$(date "+%d.%m.%Y %Hh%M") 4 | #reverse=$(dig -x $ip +short) 5 | geo=$(curl -s http://ip.xdty.org -X POST -d "geo=$ip") 6 | if [ -z "$geo" ];then 7 | geo="unknown" 8 | fi 9 | ip="$ip($geo)" 10 | curl -s https://www.xdty.org/mail.php -X POST -d "event=($USER) login from $ip&name=vps&email=xxx@gmail.com" & 11 | -------------------------------------------------------------------------------- /sticker/.gitignore: -------------------------------------------------------------------------------- 1 | cache/ 2 | .idea/ 3 | -------------------------------------------------------------------------------- /sticker/README.md: -------------------------------------------------------------------------------- 1 | #Sticker 2 | 3 | ```shell 4 | pip3 install -r requirements.txt 5 | ``` 6 | 7 | ## Usage 8 | 9 | ```shell 10 | ./sticker.py https://store.line.me/stickershop/product/1271679/en 11 | ``` -------------------------------------------------------------------------------- /sticker/requirements.txt: -------------------------------------------------------------------------------- 1 | lxml == 3.5.0 2 | requests == 2.9.1 3 | urllib3 == 1.13.1 4 | Pillow == 3.3.1 5 | beautifulsoup4 == 4.5.1 6 | -------------------------------------------------------------------------------- /sticker/sticker.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import os 4 | import urllib.request 5 | from PIL import Image 6 | 7 | import requests 8 | from urllib3.exceptions import HTTPError 9 | 10 | 11 | user_agent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko)Chrome/50.0.2661.75 Safari/537.36' 12 | 13 | header = {'User-Agent': user_agent} 14 | 15 | cache_dir = 'cache/' 16 | 17 | 18 | def fetch(url, headers, cookies): 19 | print('fetch: ' + url) 20 | 21 | r = requests.get(url, headers=headers, cookies=cookies) 22 | from bs4 import BeautifulSoup 23 | import lxml 24 | 25 | bs = BeautifulSoup(r.text, lxml.__name__) 26 | 27 | name = bs.find('title').text.split(' -')[0] 28 | 29 | print('name: ' + name) 30 | 31 | import re 32 | 33 | spans = bs.find_all('span', style=re.compile('129px;')) 34 | 35 | for span in spans: 36 | link = re.match('.*(https.*png)', span.get('style')).group(1) 37 | download(name, link) 38 | 39 | if spans: 40 | pass 41 | else: 42 | print('Error html content!') 43 | print(r.text) 44 | 45 | 46 | def download(dir_name, url): 47 | print('download: ' + url) 48 | 49 | if not os.path.exists(cache_dir + dir_name): 50 | os.makedirs(cache_dir + dir_name) 51 | 52 | res = None 53 | try: 54 | res = urllib.request.urlopen(url) 55 | except HTTPError: 56 | print('download error.') 57 | 58 | file_name = url.split('/')[-1] 59 | 60 | path = cache_dir + dir_name + '/' + file_name 61 | 62 | with open(path, 'b+w') as f: 63 | f.write(res.read()) 64 | 65 | scale(path) 66 | 67 | 68 | def scale(file): 69 | print('scale: ' + file) 70 | img = Image.open(file) 71 | width = 512 72 | p = (width / float(img.size[0])) 73 | height = int((float(img.size[1]) * float(p))) 74 | img.resize((width, height), Image.CUBIC).save(file.split('.')[0]+"_big.png") 75 | 76 | 77 | if __name__ == '__main__': 78 | import sys 79 | 80 | if len(sys.argv) > 1 and sys.argv[1]: 81 | fetch(sys.argv[1], header, {}) 82 | else: 83 | print('Error parameter.') 84 | 85 | 86 | 87 | -------------------------------------------------------------------------------- /trcp.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # copy complete torrent's *.resume *.torrent file 3 | # Usage: trcp /media/smb/complete/ /media/wndr4300/transmission/ 4 | 5 | COMPLETE_DIR=$1 6 | TR_DIR=$2 7 | 8 | if [ ! -d "$COMPLETE_DIR" ];then 9 | echo "Param error." 10 | exit 0 11 | fi 12 | if [ ! -d "$TR_DIR" ];then 13 | echo "Param error." 14 | exit 0 15 | fi 16 | 17 | if [ ! -d "$COMPLETE_DIR/../resume" ];then 18 | mkdir $COMPLETE_DIR/../resume 19 | fi 20 | if [ ! -d "$COMPLETE_DIR/../torrents" ];then 21 | mkdir $COMPLETE_DIR/../torrents 22 | fi 23 | 24 | SAVEIFS=$IFS # setup this case the space char in file name. 25 | IFS=$(echo -en "\n\b") 26 | 27 | cd $COMPLETE_DIR 28 | for file in *;do 29 | echo $file 30 | if [ $(find $TR_DIR/torrents |grep -F "$file"|wc -l) -gt 1 ];then 31 | echo -e "\033[0;31mMore than one torrent detected, you may have to check manually.\033[0m" 32 | fi 33 | TORRENT=$(find $TR_DIR/torrents |grep -F "$file"|head -n1) 34 | if [ ! -z "$TORRENT" ];then 35 | #echo "copy $TORRENT" 36 | if [ -f "$COMPLETE_DIR/../torrents/$TORRENT" ];then 37 | echo "file already exist." 38 | else 39 | cp "$TORRENT" $COMPLETE_DIR/../torrents 40 | fi 41 | else 42 | echo "error, no torrent find" 43 | fi 44 | RESUME=$(find $TR_DIR/resume |grep -F "$file"|head -n1) 45 | if [ ! 
-z "$RESUME" ];then 46 | #echo "copy $RESUME" 47 | if [ -f "$COMPLETE_DIR/../resume/$RESUME" ];then 48 | echo "file already exist." 49 | else 50 | cp "$RESUME" $COMPLETE_DIR/../resume 51 | fi 52 | else 53 | echo "error, no resume find" 54 | fi 55 | done 56 | 57 | IFS=$SAVEIFS 58 | -------------------------------------------------------------------------------- /type/input.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import time 3 | import itertools 4 | 5 | __author__ = 'ty' 6 | 7 | 8 | INPUT = "The quick brown fox jumps over the lazy dog" 9 | 10 | print "Please enter '"+INPUT+"': \n" 11 | 12 | var = "" 13 | 14 | 15 | def compare(string1, string2, no_match_c=' ', match_c='|'): 16 | 17 | result_c = '' 18 | diff_count = 0 19 | 20 | if len(string2) < len(string1): 21 | string1, string2 = string2, string1 22 | 23 | for c1, c2 in itertools.izip(string1, string2): 24 | if c1 == c2: 25 | result_c += match_c 26 | else: 27 | result_c += no_match_c 28 | diff_count += 1 29 | delta = len(string2) - len(string1) 30 | result_c += delta * no_match_c 31 | diff_count += delta 32 | return result_c, diff_count 33 | 34 | while var != "exit": 35 | try: 36 | before = time.time() 37 | var = raw_input() 38 | after = time.time() 39 | time_user_input = after-before 40 | if var == "exit": 41 | print "bye" 42 | else: 43 | result, n_diff = compare(INPUT, var, no_match_c='*') 44 | print result 45 | print INPUT 46 | print "used", time_user_input, "seconds,", "%d difference(s)." % n_diff, "\n" 47 | except EOFError: 48 | print "bye" 49 | var = "exit" 50 | -------------------------------------------------------------------------------- /u2helper/.gitignore: -------------------------------------------------------------------------------- 1 | config.json 2 | seeding.json 3 | *.pyc 4 | .idea 5 | -------------------------------------------------------------------------------- /u2helper/README-CN.md: -------------------------------------------------------------------------------- 1 | ###u2helper### 2 | 3 | u2helper 是一个整理transmission下载的脚本 4 | 5 | ###配置### 6 | 7 | 重命名 `config.json.example` 为 `config.json` 8 | 9 | 1. 设置 `download_dir` 为你的 transmission 下载位置,与 `download-dir` 一致 10 | 2. 设置 `target_dir_parent` 到你要移动到的位置 , 比如: `/mnt/usb`. 然后这个脚本会根据分类自动生成 `/mnt/usb/动漫`, `/mnt/usb/音乐`, `/mnt/usb/字幕` 目录. 你可以在 `transmission.py` `target_dir` 中修改这些名称. 11 | 3. 设置 `transmission_url` 到 transmission web rpc 地址, 如 `https://your.doamin.name/transmission/rpc` 12 | 4. 设置 `transmission_user` 和 `transmission_password` 为 transmission 网页管理用户名和密码 13 | 5. 设置 `uid` 为你的 u2 uid 14 | 6. 设置 `nexusphp_u2` 和 `__cfduid` 为你 u2 cookies. 15 | 16 | 17 | ###用法### 18 | 19 | 这个脚本依赖 `BeautifulSoup4` and `requests`, 先确保它们已被安装. 20 | 21 | 1\. 首先获得所你当前做种的信息 22 | 23 | `python u2.py` 24 | 25 | 这一步会生成一个 `seeding.json` 文件, 它包含你的所有做种信息 `标题`, `类`, `副标题`, `文件夹(或文件名)`, `id`, `名称`. 此文件被 `transmission.py` 调用, 每一个种子的抓取过程限制在3秒. 所以抓取60个种子信息会耗时3分钟. 26 | 27 | 2\. 设置下载位置 28 | 29 | `python transmission.py` 30 | 31 | 这一步会调用 transmission rpc 设置下载位置接口,会自动将文件移动到 `目标父目录/类别名/资源名`。每一个请求会有1秒延时,所以移动60个种子会耗时1分钟 32 | -------------------------------------------------------------------------------- /u2helper/README.md: -------------------------------------------------------------------------------- 1 | ###u2helper### 2 | 3 | u2helper is a script for arranging torrents downloaded by transmission. 4 | 5 | ###config### 6 | 7 | rename `config.json.example` to `config.json` 8 | 9 | 1. 
set `download_dir` to your transmission `download-dir` 10 | 2. set `target_dir_parent` to the directory which you want move the files in, for example: `/mnt/usb`. Then the script will generate `/mnt/usb/动漫`, `/mnt/usb/音乐`, `/mnt/usb/字幕` forders for different catalog. You can modify the names in `transmission.py` `target_dir` by yourself. 11 | 3. set `transmission_url` to your transmission web rpc url, e.g. `https://your.doamin.name/transmission/rpc` 12 | 4. set `transmission_user` and `transmission_password` to your transmission web rpc username and password 13 | 5. set `uid` to your u2 uid 14 | 6. set `nexusphp_u2` and `__cfduid` the same as your u2 cookies. 15 | 16 | 17 | ###usage### 18 | 19 | This script requires `BeautifulSoup4` and `requests`, make sure your have install them first. 20 | 21 | 1\. First get the torrent info you are seeding 22 | 23 | `python u2.py` 24 | 25 | This will generate a file named `seeding.json`, it contains an array of your torrents's `title`, `catalog`, `description`, `folder(or filename)`, `id`, `name`. It's used by `transmission.py`, and each torrent fetch shall be limited in 3 seconds. So it will take 3 minutes to get 60 torrents's info. 26 | 27 | 2\. Set the location of your files 28 | 29 | `python transmission.py` 30 | 31 | This will call transmission's rpc set location method, it will move the files to `target_dir_parent/catalog/name`. Each request has a delay for 1 second, so it will take 1 minutes to set all 60 torrents. 32 | -------------------------------------------------------------------------------- /u2helper/config.json.example: -------------------------------------------------------------------------------- 1 | { 2 | "download_dir": "/mnt/usb/download", 3 | "target_dir_parent": "/mnt/usb", 4 | "transmission_url": "https://your.doamin.name/transmission/rpc", 5 | "transmission_user": "YOUR USERNAME", 6 | "transmission_password": "YOUR PASSWORD", 7 | "uid": "123456", 8 | "nexusphp_u2": "nexusphp_u2 FROM U2 COOKIES", 9 | "__cfduid": "__cfduid FROM U2 COOKIES" 10 | } -------------------------------------------------------------------------------- /u2helper/transmission.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding=utf-8 3 | 4 | import time 5 | import requests 6 | import json 7 | import base64 8 | 9 | from bs4 import BeautifulSoup 10 | 11 | __author__ = 'ty' 12 | 13 | config = json.load(open('config.json')) 14 | 15 | download_dir = config["download_dir"] 16 | url = config["transmission_url"] 17 | 18 | authorization = base64.b64encode(config["transmission_user"] + ":" + 19 | config["transmission_password"]) 20 | 21 | target_dir_parent = config["target_dir_parent"] 22 | 23 | target_dir = {"Lossless Music": target_dir_parent + u"/音乐/", 24 | "BDISO": target_dir_parent + u"/动漫/", 25 | "BDrip": target_dir_parent + u"/动漫/", 26 | "U2-RBD": target_dir_parent + u"/动漫/", 27 | "U2-Rip": target_dir_parent + u"/动漫/", 28 | u"加流重灌": target_dir_parent + u"/动漫/", 29 | u"外挂结构": target_dir_parent + u"/字幕/", 30 | "Others": target_dir_parent + u"/其他/", 31 | "DVDrip": target_dir_parent + u"/动漫/", 32 | "HDTVrip": target_dir_parent + u"/动漫/", 33 | "DVDISO": target_dir_parent + u"/动漫/"} 34 | 35 | headers = {'X-Transmission-Session-Id': '', 36 | "Authorization": "Basic " + authorization} 37 | 38 | list_payload = '''{"method": "torrent-get", "arguments": { 39 | "fields": ["id", "name", "percentDone","status","downloadDir"]}}''' 40 | 41 | r = requests.post(url, headers=headers, data=list_payload, 
verify=False) 42 | 43 | soup = BeautifulSoup(r.text, "html.parser") 44 | code = soup.find("code") 45 | headers['X-Transmission-Session-Id'] = code.text.split(': ')[1] 46 | 47 | r = requests.post(url, headers=headers, data=list_payload, verify=False) 48 | 49 | result = json.JSONDecoder().decode(r.text) 50 | 51 | # print json.dumps(result, indent=2, ensure_ascii=False) 52 | 53 | with open("seeding.json") as data_file: 54 | seedings = json.load(data_file) 55 | 56 | for torrent in result["arguments"]["torrents"]: 57 | if torrent["downloadDir"] == download_dir and torrent["percentDone"] == 1: 58 | print torrent["name"] 59 | 60 | for seeding in seedings["torrents"]: 61 | if seeding["folder"] == torrent["name"]: 62 | if seeding["catalog"] == "Lossless Music" or seeding["catalog"] == u"外挂结构": 63 | location_payload = '''{"method": "torrent-set-location", "arguments": {"move": true, "location": "''' + \ 64 | target_dir[seeding["catalog"]].encode('utf8') + '''", "ids": [''' + \ 65 | str(torrent["id"]) + ''']}}''' 66 | else: 67 | location_payload = '''{"method": "torrent-set-location", "arguments": {"move": true, "location": "''' + \ 68 | target_dir[seeding["catalog"]].encode('utf8') + \ 69 | seeding["name"].encode('utf8').replace('/', '/').replace(':', ':') \ 70 | + '''", "ids": [''' + \ 71 | str(torrent["id"]) + ''']}}''' 72 | print location_payload 73 | r = requests.post(url, headers=headers, data=location_payload, verify=False) 74 | print r.text 75 | # time.sleep(1) 76 | break 77 | -------------------------------------------------------------------------------- /u2helper/u2.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import time 3 | import io 4 | 5 | from bs4 import BeautifulSoup 6 | from bs4 import Tag 7 | 8 | from u2torrent import U2Torrent 9 | 10 | import requests 11 | import itertools 12 | import json 13 | 14 | __author__ = 'ty' 15 | 16 | config = json.load(open('config.json')) 17 | 18 | url = 'https://u2.dmhy.org/getusertorrentlistajax.php?userid='+config["uid"]+'&type=seeding' 19 | 20 | info_url = 'https://u2.dmhy.org/torrent_info.php?id=' 21 | 22 | cookies = dict( 23 | nexusphp_u2=config["nexusphp_u2"], 24 | __cfduid=config["__cfduid"]) 25 | 26 | r = requests.get(url, cookies=cookies) 27 | 28 | soup = BeautifulSoup(r.text, "html.parser") 29 | 30 | td_list = soup.find_all('td', {'class': 'rowfollow nowrap'}) 31 | 32 | table_list = soup.find_all('table', {'class': 'torrentname'}) 33 | 34 | torrents_dict = {} 35 | torrents = [] 36 | 37 | count = 0 38 | 39 | for td, table in itertools.izip(td_list, table_list): 40 | 41 | catalog = "" 42 | for s in td.contents[0].contents: 43 | if isinstance(s, Tag): 44 | catalog += " " 45 | else: 46 | catalog = catalog + s 47 | u2torrent = U2Torrent() 48 | 49 | u2torrent.catalog = catalog 50 | u2torrent.title = table.find('b').string 51 | u2torrent.description = table.find('span').string 52 | 53 | u2torrent.id = int(table.find('a').get('href').split('&')[0].split('=')[1]) 54 | 55 | print info_url + str(u2torrent.id) 56 | 57 | try: 58 | info_r = requests.get(info_url + str(u2torrent.id), cookies=cookies) 59 | info_soup = BeautifulSoup(info_r.text, "html.parser") 60 | 61 | info_name = info_soup.find("span", {'class': 'title'}, text="[name]").parent.find("span", {'class': 'value'}) 62 | u2torrent.folder = info_name.text 63 | 64 | u2torrent.name = u2torrent.title.split('][')[0].replace('[', '') 65 | 66 | print u2torrent.name + " : " + u2torrent.folder 67 | 68 | 
torrents.append(json.JSONDecoder().decode(u2torrent.json())) 69 | count += 1 70 | except Exception as e: 71 | print str(e) 72 | print "Fetch folder name failed: " + u2torrent.title 73 | # time.sleep(3) 74 | 75 | torrents_dict["count"] = count 76 | torrents_dict["torrents"] = torrents 77 | 78 | with io.open('seeding.json', 'w', encoding='utf-8') as f: 79 | f.write(unicode(json.dumps(torrents_dict, indent=2, ensure_ascii=False))) 80 | -------------------------------------------------------------------------------- /u2helper/u2torrent.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | __author__ = 'ty' 4 | 5 | import json 6 | 7 | class U2Torrent: 8 | title = "" 9 | id = 0 10 | catalog = "" 11 | description = "" 12 | name = "" 13 | folder = "" 14 | 15 | def __init__(self): 16 | pass 17 | 18 | def json(self): 19 | return json.dumps(self, default=lambda o: o.__dict__, ensure_ascii=False).encode('utf8') 20 | -------------------------------------------------------------------------------- /uploader/.gitignore: -------------------------------------------------------------------------------- 1 | 2 | # Created by https://www.gitignore.io/api/pycharm 3 | 4 | ### PyCharm ### 5 | # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm 6 | # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839 7 | 8 | .idea/ 9 | # User-specific stuff: 10 | .idea/workspace.xml 11 | .idea/tasks.xml 12 | .idea/dictionaries 13 | .idea/vcs.xml 14 | .idea/jsLibraryMappings.xml 15 | 16 | # Sensitive or high-churn files: 17 | .idea/dataSources.ids 18 | .idea/dataSources.xml 19 | .idea/dataSources.local.xml 20 | .idea/sqlDataSources.xml 21 | .idea/dynamic.xml 22 | .idea/uiDesigner.xml 23 | 24 | # Gradle: 25 | .idea/gradle.xml 26 | .idea/libraries 27 | 28 | # Mongo Explorer plugin: 29 | .idea/mongoSettings.xml 30 | 31 | ## File-based project format: 32 | *.iws 33 | 34 | ## Plugin-specific files: 35 | 36 | # IntelliJ 37 | /out/ 38 | 39 | # mpeltonen/sbt-idea plugin 40 | .idea_modules/ 41 | 42 | # JIRA plugin 43 | atlassian-ide-plugin.xml 44 | 45 | # Crashlytics plugin (for Android Studio and IntelliJ) 46 | com_crashlytics_export_strings.xml 47 | crashlytics.properties 48 | crashlytics-build.properties 49 | fabric.properties 50 | 51 | ### PyCharm Patch ### 52 | # Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721 53 | 54 | # *.iml 55 | # modules.xml 56 | __pycache__ 57 | 58 | *.db 59 | *.json 60 | config.py 61 | cache/ 62 | venv/ 63 | -------------------------------------------------------------------------------- /uploader/README.md: -------------------------------------------------------------------------------- 1 | # Uploader 2 | 3 | 上传文件到七牛和腾讯云存储 4 | 5 | ## 配置 6 | 7 | 参考 `config.py.example` 文件,修改 `API key`,修改 `targets` 为要上传的文件或目录 8 | 9 | ## 运行 10 | 11 | ```shell 12 | virtualenv -p python3 venv 13 | source venv/bin/activate 14 | pip install -r requirements.txt -I 15 | python upload.py 16 | ``` 17 | 18 | **crontab** 19 | 20 | ```shell 21 | 10 * * * * /path/to/uploader/cron.sh >> /var/log/uploader.log 2>&1 22 | ``` 23 | 24 | -------------------------------------------------------------------------------- /uploader/config.py.example: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | qn_access_key = "ACCESS_KEY" 4 | qn_secret_key = "SECRET_KEY" 5 | qn_bucket_name = "BUCKET_NAME" 6 | 7 | 
cos_app_id = "" 8 | cos_bucket_name = "" 9 | cos_secret_id = "" 10 | cos_key = "" 11 | 12 | targets = ['/var/www/test.json', '/var/www/test_dir'] 13 | -------------------------------------------------------------------------------- /uploader/cron.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo $(date) 4 | 5 | PWD="$(dirname $0)" 6 | 7 | echo "$PWD" 8 | 9 | cd "$PWD" || exit 1 10 | 11 | venv/bin/python upload.py 12 | -------------------------------------------------------------------------------- /uploader/requirements.txt: -------------------------------------------------------------------------------- 1 | qiniu == 7.0.7 2 | requests == 2.9.1 3 | -------------------------------------------------------------------------------- /uploader/upload.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import os 3 | 4 | import config 5 | import uploader 6 | 7 | 8 | def upload(targets): 9 | for target in targets: 10 | 11 | if os.path.isdir(target): 12 | upload([os.path.join(target, f) for f in os.listdir(target)]) 13 | else: 14 | if os.path.exists(target): 15 | uploader.upload(target) 16 | 17 | upload(config.targets) 18 | -------------------------------------------------------------------------------- /uploader/uploader.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import os 3 | 4 | import qiniu 5 | 6 | import config 7 | 8 | access_key = config.qn_access_key 9 | secret_key = config.qn_secret_key 10 | bucket_name = config.qn_bucket_name 11 | 12 | cos_app_id = config.cos_app_id 13 | cos_bucket_name = config.cos_bucket_name 14 | cos_secret_id = config.cos_secret_id 15 | cos_key = config.cos_key 16 | 17 | 18 | def upload(name): 19 | upload_file(name) 20 | upload_cos(name) 21 | 22 | 23 | # upload to qiniu 24 | def upload_file(file_name): 25 | q = qiniu.Auth(access_key, secret_key) 26 | 27 | key = os.path.basename(file_name) 28 | 29 | token = q.upload_token(bucket_name, key) 30 | ret, info = qiniu.put_file(token, key, file_name) 31 | if ret is not None: 32 | print(file_name + ' uploaded.') 33 | else: 34 | print(info) 35 | 36 | 37 | # upload to q-cloud cos 38 | def upload_cos(file): 39 | headers = { 40 | 'Authorization': sign() 41 | } 42 | url = 'https://web.file.myqcloud.com/files/v1/' + cos_app_id + '/' + cos_bucket_name + '/' + os.path.basename(file) 43 | data = {'op': 'upload', 'insertOnly': '0'} 44 | files = {'filecontent': open(file, 'rb')} 45 | import requests 46 | r = requests.post(url, data=data, files=files, headers=headers) 47 | print(r.text) 48 | 49 | 50 | def sign(): 51 | import hmac 52 | import hashlib 53 | 54 | # a=[appid]&b=[bucket]&k=[SecretID]&e=[expiredTime]&t=[currentTime]&r=[rand]&f= 55 | import time 56 | current_time = int(time.time()) 57 | sign_text = 'a=' + cos_app_id + '&b=' + cos_bucket_name + '&k=' + cos_secret_id + '&e=' + str( 58 | current_time + 3600) + '&t=' + str(current_time) + '&r=123&f=' 59 | sign_tmp = hmac.new(cos_key.encode(), sign_text.encode(), hashlib.sha1).digest() + sign_text.encode() 60 | import base64 61 | 62 | return base64.b64encode(sign_tmp).decode() 63 | 64 | -------------------------------------------------------------------------------- /zip/d2t.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #============================================================================== 3 | 4 | check_sub(){ 5 | 
SAVEIFS=$IFS # setup this case the space char in file name. 6 | IFS=$(echo -en "\n\b") 7 | for subdir in $(find -maxdepth 1 -type d |grep ./ |cut -c 3-); 8 | do 9 | echo $subdir 10 | tar cf "$subdir.tar" "$subdir" 11 | done 12 | IFS=$SAVEIFS 13 | } 14 | check_sub 15 | -------------------------------------------------------------------------------- /zip/d2z.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #============================================================================== 3 | 4 | check_sub(){ 5 | SAVEIFS=$IFS # setup this case the space char in file name. 6 | IFS=$(echo -en "\n\b") 7 | for subdir in $(find -maxdepth 1 -type d |grep ./ |cut -c 3-); 8 | do 9 | echo $subdir 10 | zip -r -0 "$subdir.zip" "$subdir" 11 | done 12 | IFS=$SAVEIFS 13 | } 14 | check_sub 15 | -------------------------------------------------------------------------------- /zip/r2t.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :r2t 3 | #description :This script will convert all rars(of jpg/bmp/png) with any content struct to tars(of jpg). Use "convert" command to convert images to jpg to depress disk space. the origin file's content struct will not change. 4 | #author :xdtianyu@gmail.com 5 | #date :20141029 6 | #version :1.0 final 7 | #usage :bash r2t 8 | #bash_version :4.3.11(1)-release 9 | #============================================================================== 10 | 11 | if [ $1 = "-c" ]; then 12 | echo "cleaning..." 13 | cd tars 14 | for file in *.tar; do 15 | RAR="${file%%.tar*}.rar" 16 | echo "check \"$RAR\" ..." 17 | if [ -f "../$RAR" ];then 18 | echo "delete $RAR ..." 19 | rm -i "../$RAR" 20 | else 21 | echo "$RAR not exist." 22 | fi 23 | done 24 | cd .. 25 | rm -ri tars 26 | exit 0 27 | fi 28 | 29 | check_sub(){ 30 | echo "check sub." 31 | SAVEIFS=$IFS # setup this case the space char in file name. 32 | IFS=$(echo -en "\n\b") 33 | for subdir in $(find -maxdepth 1 -type d |grep ./ |cut -c 3-); 34 | do 35 | echo $subdir 36 | cd "$subdir" 37 | convert_to_jpg 38 | cd .. 39 | done 40 | IFS=$SAVEIFS 41 | } 42 | 43 | convert_to_jpg(){ 44 | 45 | for ext in jpg JPG bmp BMP png PNG; do 46 | echo "ext is $ext" 47 | if [ ! $(find . -maxdepth 1 -name \*.$ext | wc -l) = 0 ]; 48 | then 49 | x2jpg $ext 50 | fi 51 | done 52 | 53 | check_sub # check if has sub directory. 54 | } 55 | 56 | x2jpg(){ 57 | if [ ! -d origin ];then 58 | mkdir origin 59 | fi 60 | if [ ! -d /tmp/jpg ]; then 61 | 62 | mkdir /tmp/jpg 63 | fi 64 | 65 | tmp_fifofile="/tmp/$$.fifo" 66 | mkfifo $tmp_fifofile # create a fifo type file. 67 | exec 6<>$tmp_fifofile # point fd6 to fifo file. 68 | rm $tmp_fifofile 69 | 70 | 71 | thread=10 # define numbers of threads. 72 | for ((i=0;i<$thread;i++));do 73 | echo 74 | done >&6 # actually only put $thread RETURNs to fd6. 75 | 76 | for file in ./*.$1;do 77 | read -u6 78 | { 79 | echo 'convert -quality 80' "$file" /tmp/jpg/"${file%.*}"'.jpg' 80 | convert -limit memory 64 -limit map 128 -quality 80 "$file" /tmp/jpg/"${file%.*}".jpg 81 | mv "$file" origin 82 | echo >&6 83 | } & 84 | done 85 | 86 | wait # wait for all child thread end. 87 | exec 6>&- # close fd6 88 | 89 | mv /tmp/jpg/* . 90 | rm -r origin 91 | 92 | echo 'DONE!' 93 | } 94 | 95 | 96 | for file in *.rar ; do 97 | tmpdir=$(mktemp -d) 98 | DIR="${file%%.rar*}" 99 | echo "unrar x $file $tmpdir"; 100 | mv "$file" tmp.rar 101 | unrar x tmp.rar $tmpdir # unrar to a tmp directory. 
102 | mv tmp.rar "$file" 103 | 104 | if [ $(ls $tmpdir | wc -l) = 1 ]; then # check if has folders, and mv the unrared directory as same name with the rar file. 105 | DIR2=$(ls $tmpdir) 106 | mv "$tmpdir/$DIR2" "$DIR" 107 | rmdir $tmpdir 108 | else 109 | mv $tmpdir "$DIR" 110 | fi 111 | 112 | echo $DIR 113 | if [ -d "$DIR" ]; 114 | then 115 | cd "$DIR" 116 | convert_to_jpg # convert process. 117 | cd .. 118 | echo "tar cvf $DIR.tar $DIR" 119 | tar cvf "$DIR.tar" "$DIR" # tar the directory. 120 | rm -r "$DIR" 121 | else 122 | echo "$DIR not exist." 123 | fi 124 | done 125 | 126 | if [ ! -d "tars" ]; then 127 | mkdir tars 128 | fi 129 | 130 | if [ ! $(find . -maxdepth 1 -name \*.tar | wc -l) = 0 ]; 131 | then 132 | mv *.tar tars 133 | fi 134 | 135 | # check status. 136 | for file in *.rar; do 137 | TAR="tars/${file%%.rar*}.tar" 138 | echo "check \"$TAR\" ..." 139 | if [ -f "$TAR" ];then 140 | echo "$file convert OK." 141 | cd tars 142 | md5sum "${file%%.rar*}.tar" > "${file%%.rar*}.md5" # generate md5 file. 143 | cd .. 144 | else 145 | echo "$file convert FAILED." 146 | fi 147 | done 148 | -------------------------------------------------------------------------------- /zip/z2t.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #title :z2t 3 | #description :This script will convert all zips(of jpg/bmp/png) with any content struct to tars(of jpg). Use "convert" command to convert images to jpg to depress disk space. the origin file's content struct will not change. 4 | #author :xdtianyu@gmail.com 5 | #date :20141029 6 | #version :1.0 final 7 | #usage :bash z2t 8 | #bash_version :4.3.11(1)-release 9 | #============================================================================== 10 | 11 | if [ $1 = "-c" ]; then 12 | echo "cleaning..." 13 | if [ $2 = "-i" ]; then 14 | PARAM = "-i" 15 | fi 16 | cd tars 17 | for file in *.tar; do 18 | ZIP="${file%%.tar*}.zip" 19 | echo "check \"$ZIP\" ..." 20 | if [ -f "../$ZIP" ];then 21 | echo "delete $ZIP ..." 22 | rm $PARAM "../$ZIP" 23 | else 24 | echo "$ZIP not exist." 25 | fi 26 | done 27 | cd .. 28 | rm -r $PARAM tars 29 | exit 0 30 | fi 31 | 32 | check_sub(){ 33 | echo "check sub." 34 | SAVEIFS=$IFS # setup this case the space char in file name. 35 | IFS=$(echo -en "\n\b") 36 | for subdir in $(find -maxdepth 1 -type d |grep ./ |cut -c 3-); 37 | do 38 | echo $subdir 39 | cd "$subdir" 40 | convert_to_jpg 41 | cd .. 42 | done 43 | IFS=$SAVEIFS 44 | } 45 | 46 | convert_to_jpg(){ 47 | 48 | for ext in jpg JPG bmp BMP png PNG; do 49 | echo "ext is $ext" 50 | if [ ! $(find . -maxdepth 1 -name \*.$ext | wc -l) = 0 ]; 51 | then 52 | x2jpg $ext 53 | fi 54 | done 55 | 56 | check_sub # check if has sub directory. 57 | } 58 | 59 | x2jpg(){ 60 | if [ ! -d origin ];then 61 | mkdir origin 62 | fi 63 | if [ ! -d /tmp/jpg ]; then 64 | 65 | mkdir /tmp/jpg 66 | fi 67 | 68 | tmp_fifofile="/tmp/$$.fifo" 69 | mkfifo $tmp_fifofile # create a fifo type file. 70 | exec 6<>$tmp_fifofile # point fd6 to fifo file. 71 | rm $tmp_fifofile 72 | 73 | 74 | thread=10 # define numbers of threads. 75 | for ((i=0;i<$thread;i++));do 76 | echo 77 | done >&6 # actually only put $thread RETURNs to fd6. 78 | 79 | for file in ./*.$1;do 80 | read -u6 81 | { 82 | echo 'convert -quality 80' "$file" /tmp/jpg/"${file%.*}"'.jpg' 83 | convert -limit memory 64 -limit map 128 -quality 80 "$file" /tmp/jpg/"${file%.*}".jpg 84 | mv "$file" origin 85 | echo >&6 86 | } & 87 | done 88 | 89 | wait # wait for all child thread end. 
90 | exec 6>&- # close fd6 91 | 92 | mv /tmp/jpg/* . 93 | rm -r origin 94 | 95 | echo 'DONE!' 96 | } 97 | 98 | 99 | for file in *.zip ; do 100 | tmpdir=$(mktemp -d) 101 | DIR="${file%%.zip*}" 102 | echo "unzip $file -d $tmpdir"; 103 | unzip "$file" -d $tmpdir # unzip to a tmp directory. 104 | 105 | if [ $(ls $tmpdir | wc -l) = 1 ]; then # check if has folders, and mv the unziped directory as same name with the zip file. 106 | DIR2=$(ls $tmpdir) 107 | mv "$tmpdir/$DIR2" "$DIR" 108 | rmdir $tmpdir 109 | else 110 | mv $tmpdir "$DIR" 111 | fi 112 | 113 | echo $DIR 114 | if [ -d "$DIR" ]; 115 | then 116 | cd "$DIR" 117 | convert_to_jpg # convert process. 118 | cd .. 119 | echo "tar cvf $DIR.tar $DIR" 120 | tar cvf "$DIR.tar" "$DIR" # tar the directory. 121 | rm -r "$DIR" 122 | else 123 | echo "$DIR not exist." 124 | fi 125 | done 126 | 127 | if [ ! -d "tars" ]; then 128 | mkdir tars 129 | fi 130 | 131 | if [ ! $(find . -maxdepth 1 -name \*.tar | wc -l) = 0 ]; 132 | then 133 | mv *.tar tars 134 | fi 135 | 136 | # check status. 137 | for file in *.zip; do 138 | TAR="tars/${file%%.zip*}.tar" 139 | echo "check \"$TAR\" ..." 140 | if [ -f "$TAR" ];then 141 | echo "$file convert OK." 142 | cd tars 143 | md5sum "${file%%.zip*}.tar" > "${file%%.zip*}.md5" # generate md5 file. 144 | cd .. 145 | else 146 | echo "$file convert FAILED." 147 | fi 148 | done 149 | --------------------------------------------------------------------------------