├── .emacs.el
├── .gitconfig
├── .gitignore
├── README.md
├── all_scripts
│   ├── branch_checkout.sh
│   ├── branch_status.sh
│   ├── deploy.sh
│   ├── error_log.sh
│   ├── healthcheck.sh
│   ├── hosts.txt
│   ├── kataribe_log.sh
│   ├── log.sh
│   └── restart.sh
├── kataribe.toml
├── lua
│   ├── img.lua
│   └── redis.lua
├── netdata
│   ├── apps_groups.conf
│   └── netdata.conf
├── redis
│   ├── dump.sh
│   └── restore.sh
├── scripts
│   ├── check_mysql.sh
│   ├── deploy.sh
│   ├── deploy_app.sh
│   ├── deploy_nginx.sh
│   ├── deploy_redis.sh
│   ├── deploy_service.sh
│   ├── error_log.sh
│   ├── log_app.sh
│   ├── log_nginx.sh
│   ├── rotate.sh
│   ├── rotate_and_cp.sh
│   └── var.txt
├── setup.sh
├── setup_netdata.sh
└── tmpl
    ├── README.md
    └── authorized_keys
/.emacs.el: -------------------------------------------------------------------------------- 1 | ;;delete 2 | (global-set-key "\C-h" 'delete-backward-char) 3 | -------------------------------------------------------------------------------- /.gitconfig: -------------------------------------------------------------------------------- 1 | [user] 2 | name = Nao Minami 3 | email = south37777@gmail.com 4 | [push] 5 | default = current 6 | [color] 7 | ui = auto 8 | [alias] 9 | co = checkout 10 | st = status 11 | sync = !git checkout master && git pull origin master && git fetch -p origin && git branch -d $(git branch --merged | grep -v master | grep -v '*') && git push origin $(git branch -r --merged | grep origin/ | grep -v master | sed s~origin/~:~) 12 | [core] 13 | editor = vim 14 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Added by south37 2 | .mysql_history 3 | .viminfo 4 | .profile 5 | .rediscli_history 6 | .ssh/* 7 | .sudo_as_admin_successful 8 | .bash_logout 9 | .bash_history 10 | .cache/* 11 | *.swp 12 | dump.sql 13 | 14 | isucon-settings/* 15 | data/* 16 | 17 | 18 | # Created by https://www.gitignore.io/api/vim,ruby,linux 19 | 20 | ### Linux ### 21 | *~ 22 | 23 | # temporary files which can be created if a process still has a handle open of a deleted file 24 | .fuse_hidden* 25 | 26 | # KDE directory preferences 27 | .directory 28 | 29 | # Linux trash folder which might appear on any partition or disk 30 | .Trash-* 31 | 32 | # .nfs files are created when an open file is removed but is still being accessed 33 | .nfs* 34 | 35 | ### Ruby ### 36 | *.gem 37 | *.rbc 38 | /.config 39 | /coverage/ 40 | /InstalledFiles 41 | /pkg/ 42 | /spec/reports/ 43 | /spec/examples.txt 44 | /test/tmp/ 45 | /test/version_tmp/ 46 | /tmp/ 47 | 48 | # Used by dotenv library to load environment variables. 49 | # .env 50 | 51 | ## Specific to RubyMotion: 52 | .dat* 53 | .repl_history 54 | build/ 55 | *.bridgesupport 56 | build-iPhoneOS/ 57 | build-iPhoneSimulator/ 58 | 59 | ## Specific to RubyMotion (use of CocoaPods): 60 | # 61 | # We recommend against adding the Pods directory to your .gitignore.
However 62 | # you should judge for yourself, the pros and cons are mentioned at: 63 | # https://guides.cocoapods.org/using/using-cocoapods.html#should-i-check-the-pods-directory-into-source-control 64 | # 65 | # vendor/Pods/ 66 | 67 | ## Documentation cache and generated files: 68 | /.yardoc/ 69 | /_yardoc/ 70 | /doc/ 71 | /rdoc/ 72 | 73 | ## Environment normalization: 74 | /.bundle/ 75 | /vendor/bundle 76 | /lib/bundler/man/ 77 | 78 | # for a library or gem, you might want to ignore these files since the code is 79 | # intended to run in multiple environments; otherwise, check them in: 80 | # Gemfile.lock 81 | # .ruby-version 82 | # .ruby-gemset 83 | 84 | # unless supporting rvm < 1.11.0 or doing something fancy, ignore this: 85 | .rvmrc 86 | 87 | ### Vim ### 88 | # swap 89 | [._]*.s[a-v][a-z] 90 | [._]*.sw[a-p] 91 | [._]s[a-v][a-z] 92 | [._]sw[a-p] 93 | # session 94 | Session.vim 95 | # temporary 96 | .netrwhist 97 | # auto-generated tag files 98 | tags 99 | 100 | # End of https://www.gitignore.io/api/vim,ruby,linux 101 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Initial checklist 2 | - [ ] 0. Check the server's CPU and memory (run `cat /proc/meminfo | grep MemTotal` and `cat /proc/cpuinfo | grep processor`) 3 | - [ ] 1. Switch to the Go implementation (easy to forget; for systemd, see [Systemd #18](https://github.com/ngtk/orenoie/issues/18)) 4 | 5 | - [ ] 2-1. Configure the access log in nginx.conf and start a bench run ( https://gist.github.com/south37/d4a5a8158f49e067237c17d13ecab12a#file-04_nginx-md ) 6 | - [ ] 2-2. Adjust `netdata`'s /etc/netdata/apps_groups.conf appropriately (so that the Go process is measured properly) 7 | - [ ] 3. Run `git clone https://github.com/south37/isucon-settings && cd isucon-settings && ./setup.sh` to put the various files in place. 8 | - [ ] 4. Rewrite `deploy.sh` for the environment. Once this is done and the other two members can also deploy, share the repository. 9 | - [ ] 5. Configure nginx for static file serving, etc. 10 | 11 | ## Things needed for development 12 | - [ ] Install vim with `sudo apt-get install vim`. 13 | - [ ] Rewrite `deploy.sh`, `log_app.sh`, and the like to fit the environment (they depend on what the systemd service name is). Fix the other shell scripts if they do not work when the other members use them. Also rewrite `rotate_and_cp.sh` if the user name is something other than `isucon`. 14 | - [ ] scripts/deploy_app.sh 15 | - [ ] scripts/deploy_nginx.sh 16 | - [ ] all_scripts/hosts.txt 17 | - [ ] all_scripts/deploy.sh 18 | - [ ] Fix later 19 | - [ ] scripts/log_app.sh 20 | - [ ] scripts/log_nginx.sh 21 | - [ ] all_scripts/log.sh 22 | - [ ] all_scripts/healthcheck.sh 23 | 24 | ## When using Redis 25 | - [ ] In `redis.conf`, comment out the `bind` setting (by default it only accepts connections from localhost) 26 | 27 | ## MySQL connection 28 | - [ ] In `my.cnf`, comment out the `bind` setting (by default it only accepts connections from localhost) 29 | -------------------------------------------------------------------------------- /all_scripts/branch_checkout.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/all_scripts/hosts.txt" 4 | 5 | if [ "$#" -ne 1 ]; then 6 | echo "You must specify branch!" 7 | exit 1 8 | fi 9 | 10 | COMMAND="git checkout $1" 11 | echo "Checkout branch..."
12 | echo "COMMAND: ${COMMAND}" 13 | i=1 14 | for host in ${HOSTS[@]}; do 15 | echo "" 16 | echo "HOST${i}: ${host}" 17 | ssh "${ISUCONUSER}@${host}" "${COMMAND}" 18 | i=$((i+1)) 19 | done 20 | -------------------------------------------------------------------------------- /all_scripts/branch_status.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/all_scripts/hosts.txt" 4 | 5 | COMMAND="git status" 6 | echo "Check branch status..." 7 | echo "COMMAND: ${COMMAND}" 8 | i=1 9 | for host in ${HOSTS[@]}; do 10 | echo "" 11 | echo "HOST${i}: ${host}" 12 | ssh "${ISUCONUSER}@${host}" "${COMMAND}" 13 | i=$((i+1)) 14 | done 15 | -------------------------------------------------------------------------------- /all_scripts/deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/all_scripts/hosts.txt" 4 | 5 | TARGET="master" 6 | 7 | while [ "$1" != "" ]; do 8 | case "$1" in 9 | '--bundle' ) 10 | FLAG_BUNDLE=1 11 | shift 12 | ;; 13 | * ) 14 | TARGET="$1" 15 | shift 16 | ;; 17 | esac 18 | done 19 | 20 | LOAD_COMMAND="git checkout master && git pull origin master && git fetch origin ${TARGET} && git checkout ${TARGET} && git pull origin ${TARGET} && git merge master" 21 | 22 | NGINX_COMMAND="hostname && ${LOAD_COMMAND} && \$(pwd)/scripts/deploy_nginx.sh" 23 | echo "Deploy nginx..." 24 | echo "COMMAND: ${NGINX_COMMAND}" 25 | for i in ${NGINX_HOSTS[@]}; do 26 | echo "" 27 | echo $i 28 | ssh "${ISUCONUSER}@${i}" "${NGINX_COMMAND}" 29 | done 30 | echo "Deployed nginx!" 31 | echo "" 32 | 33 | if [ $FLAG_BUNDLE ]; then 34 | BUNDLE_OPTION=" --bundle" 35 | else 36 | BUNDLE_OPTION="" 37 | fi 38 | WEB_COMMAND="hostname && ${LOAD_COMMAND} && \$(pwd)/scripts/deploy_app.sh${BUNDLE_OPTION}" 39 | echo "Deploy app..." 40 | echo "COMMAND: ${WEB_COMMAND}" 41 | for i in ${WEB_HOSTS[@]}; do 42 | echo "" 43 | echo $i 44 | ssh "${ISUCONUSER}@${i}" "${WEB_COMMAND}" 45 | done 46 | echo "Deployed app!" 47 | 48 | echo "" 49 | `pwd`/all_scripts/branch_status.sh 50 | # TODO(south37) Start healthcheck 51 | # echo "" 52 | # `pwd`/all_scripts/healthcheck.sh 53 | echo "" 54 | 55 | echo ' ________ ________ _____ ______ ________ ___ _______ _________ _______ ' 56 | echo ' |\ ____\ |\ __ \ |\ _ \ _ \ |\ __ \ |\ \ |\ ___ \ |\___ ___\|\ ___ \ ' 57 | echo ' \ \ \___| \ \ \|\ \\ \ \\\__\ \ \\ \ \|\ \\ \ \ \ \ __/|\|___ \ \_|\ \ __/| ' 58 | echo ' \ \ \ \ \ \\\ \\ \ \\|__| \ \\ \ ____\\ \ \ \ \ \_|/__ \ \ \ \ \ \_|/__ ' 59 | echo ' \ \ \____ \ \ \\\ \\ \ \ \ \ \\ \ \___| \ \ \____ \ \ \_|\ \ \ \ \ \ \ \_|\ \ ' 60 | echo ' \ \_______\\ \_______\\ \__\ \ \__\\ \__\ \ \_______\\ \_______\ \ \__\ \ \_______\' 61 | echo ' \|_______| \|_______| \|__| \|__| \|__| \|_______| \|_______| \|__| \|_______|' 62 | -------------------------------------------------------------------------------- /all_scripts/error_log.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . 
"$(pwd)/all_scripts/hosts.txt" 4 | 5 | TAIL_LENGTH=10 6 | while [ "$1" != "" ]; do 7 | case "$1" in 8 | '-n' ) 9 | TAIL_LENGTH=$2 10 | shift 2 11 | ;; 12 | * ) 13 | shift 1 14 | ;; 15 | esac 16 | done 17 | 18 | NGINX_COMMAND="hostname && echo '' && ( \$(pwd)/scripts/error_log.sh | tail -n ${TAIL_LENGTH} )" 19 | for i in ${NGINX_HOSTS[@]}; do 20 | echo "" 21 | echo $i 22 | ssh "${ISUCONUSER}@${i}" "${NGINX_COMMAND}" 23 | done 24 | -------------------------------------------------------------------------------- /all_scripts/healthcheck.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/all_scripts/hosts.txt" 4 | 5 | echo "Start Healthcheck..." 6 | for i in ${WEB_HOSTS[@]}; do 7 | echo "" 8 | echo $i 9 | 10 | echo "http://${i}:8080/api/rooms" 11 | curl "http://${i}:8080/api/rooms" -LI -o /dev/null -w '%{http_code}\n' -s 12 | 13 | echo "http://${i}:3000/" 14 | curl "http://${i}:3000/" -LI -o /dev/null -w '%{http_code}\n' -s 15 | done 16 | echo "Complete Healthcheck!" 17 | -------------------------------------------------------------------------------- /all_scripts/hosts.txt: -------------------------------------------------------------------------------- 1 | # Hosts are defined here 2 | 3 | HOST1="" 4 | HOST2="" 5 | HOST3="" 6 | HOST4="" 7 | HOST5="" 8 | 9 | HOSTS="${HOST1} ${HOST2} ${HOST3} ${HOST4} ${HOST5}" 10 | NGINX_HOSTS="${HOST2}" 11 | WEB_HOSTS="${HOST1} ${HOST2} ${HOST4} ${HOST5}" 12 | 13 | ISUCONUSER="isucon" 14 | -------------------------------------------------------------------------------- /all_scripts/kataribe_log.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/all_scripts/hosts.txt" 4 | 5 | BRANCH_COMMAND="git branch | grep '*' | awk '{ print \$2 }'" 6 | COMMAND="hostname && git pull origin \$(${BRANCH_COMMAND}) && \$(pwd)/scripts/rotate_and_cp.sh && git add . && git commit -m 'Add log' && git push origin \$(${BRANCH_COMMAND})" 7 | 8 | echo "${COMMAND}" 9 | for i in ${NGINX_HOSTS[@]}; do 10 | echo "" 11 | echo $i 12 | ssh "${ISUCONUSER}@${i}" "${COMMAND}" 13 | done 14 | 15 | git pull origin `git branch | grep '\*' | awk '{ print \$2 }'` 16 | -------------------------------------------------------------------------------- /all_scripts/log.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/all_scripts/hosts.txt" 4 | 5 | TAIL_LENGTH=10 6 | TARGETS="nginx app" 7 | 8 | while [ "$1" != "" ]; do 9 | case "$1" in 10 | '-n' ) 11 | TAIL_LENGTH=$2 12 | shift 2 13 | ;; 14 | '-t' ) 15 | TARGETS=$2 16 | shift 2 17 | ;; 18 | * ) 19 | shift 20 | ;; 21 | esac 22 | done 23 | 24 | NGINX_COMMAND="hostname && echo '' && ( \$(pwd)/scripts/log_nginx.sh | tail -n ${TAIL_LENGTH} )" 25 | # nginx (use [[ ]] so the * glob actually pattern-matches against ${TARGETS}) 26 | if [[ "${TARGETS}" == *"nginx"* ]]; then 27 | for i in ${NGINX_HOSTS[@]}; do 28 | echo "" 29 | echo $i 30 | ssh "${ISUCONUSER}@${i}" "${NGINX_COMMAND}" 31 | done 32 | fi 33 | 34 | APP_COMMAND="hostname && echo '' && ( \$(pwd)/scripts/log_app.sh | tail -n ${TAIL_LENGTH} )" 35 | # app 36 | if [[ "${TARGETS}" == *"app"* ]]; then 37 | for i in ${WEB_HOSTS[@]}; do 38 | echo "" 39 | echo $i 40 | ssh "${ISUCONUSER}@${i}" "${APP_COMMAND}" 41 | done 42 | fi 43 | -------------------------------------------------------------------------------- /all_scripts/restart.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | .
"$(pwd)/all_scripts/hosts.txt" 4 | 5 | NGINX_COMMAND="hostname && \$(pwd)/scripts/deploy_nginx.sh" 6 | echo "Deploy nginx..." 7 | echo "COMMAND: ${NGINX_COMMAND}" 8 | for i in ${NGINX_HOSTS[@]}; do echo "" && echo $i; ssh "${ISUCONUSER}@${i}" "${NGINX_COMMAND}"; done 9 | echo "Deployed nginx!" 10 | echo "" 11 | 12 | WEB_COMMAND="hostname && \$(pwd)/scripts/deploy_app.sh" 13 | echo "Deploy app..." 14 | echo "COMMAND: ${WEB_COMMAND}" 15 | for i in ${WEB_HOSTS[@]}; do echo "" && echo $i; ssh "${ISUCONUSER}@${i}" "${WEB_COMMAND}"; done 16 | echo "Deployed app!" 17 | -------------------------------------------------------------------------------- /kataribe.toml: -------------------------------------------------------------------------------- 1 | # Top Ranking Group By Request 2 | ranking_count = 40 3 | 4 | # Top Slow Requests 5 | slow_count = 37 6 | 7 | # Show Standard Deviation column 8 | show_stddev = true 9 | 10 | # Show HTTP Status Code columns 11 | show_status_code = true 12 | 13 | # Percentiles 14 | percentiles = [ 50.0, 90.0, 95.0, 99.0 ] 15 | 16 | # for Nginx($request_time) 17 | scale = 0 18 | effective_digit = 3 19 | 20 | # for Apache(%D) and Varnishncsa(%D) 21 | #scale = -6 22 | #effective_digit = 6 23 | 24 | # for Rack(Rack::CommonLogger) 25 | #scale = 0 26 | #effective_digit = 4 27 | 28 | 29 | # combined + duration 30 | # Nginx example: '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time' 31 | # Apache example: "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" 32 | # Varnishncsa example: '%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i" %D' 33 | log_format = '^([^ ]+) ([^ ]+) ([^ ]+) \[([^\]]+)\] "((?:\\"|[^"])*)" (\d+) (\d+|-) "((?:\\"|[^"])*)" "((?:\\"|[^"])*)" ([0-9.]+)$' 34 | 35 | request_index = 5 36 | status_index = 6 37 | duration_index = 10 38 | 39 | # Rack example: use Rack::CommonLogger, Logger.new("/tmp/app.log") 40 | #log_format = '^([^ ]+) ([^ ]+) ([^ ]+) \[([^\]]+)\] "((?:\\"|[^"])*)" (\d+) (\d+|-) ([0-9.]+)$' 41 | #request_index = 5 42 | #status_index = 6 43 | #duration_index = 8 44 | 45 | # You can aggregate requests by regular expression 46 | # For overview of regexp syntax: https://golang.org/pkg/regexp/syntax/ 47 | -------------------------------------------------------------------------------- /lua/img.lua: -------------------------------------------------------------------------------- 1 | local redis = require "resty.redis" 2 | local red = redis:new() 3 | 4 | red:set_timeout(1000) 5 | 6 | local uri = ngx.var.request_uri 7 | local ok, err = red:connect("13.113.191.11", 6379) 8 | if not ok then 9 | ngx.say("failed to connect: ", err) 10 | return 11 | end 12 | local res, err = red:get(uri) 13 | if not res then 14 | ngx.say("failed to get: ", err) 15 | return 16 | end 17 | if res == ngx.null then 18 | local res = ngx.location.capture("/proxy"..uri) 19 | ngx.header["Content-Type"] = res.header["Content-Type"] 20 | ngx.status = res.status 21 | ngx.say(res.body) 22 | return 23 | end 24 | ngx.say(res) 25 | return 26 | -------------------------------------------------------------------------------- /lua/redis.lua: -------------------------------------------------------------------------------- 1 | -- Copyright (C) Yichun Zhang (agentzh) 2 | 3 | 4 | local sub = string.sub 5 | local byte = string.byte 6 | local tcp = ngx.socket.tcp 7 | local null = ngx.null 8 | local type = type 9 | local pairs = pairs 10 | local unpack = unpack 11 | local setmetatable = setmetatable 12 |
local tonumber = tonumber 13 | local tostring = tostring 14 | local rawget = rawget 15 | --local error = error 16 | 17 | 18 | local ok, new_tab = pcall(require, "table.new") 19 | if not ok or type(new_tab) ~= "function" then 20 | new_tab = function (narr, nrec) return {} end 21 | end 22 | 23 | 24 | local _M = new_tab(0, 54) 25 | 26 | _M._VERSION = '0.26' 27 | 28 | 29 | local common_cmds = { 30 | "get", "set", "mget", "mset", 31 | "del", "incr", "decr", -- Strings 32 | "llen", "lindex", "lpop", "lpush", 33 | "lrange", "linsert", -- Lists 34 | "hexists", "hget", "hset", "hmget", 35 | --[[ "hmset", ]] "hdel", -- Hashes 36 | "smembers", "sismember", "sadd", "srem", 37 | "sdiff", "sinter", "sunion", -- Sets 38 | "zrange", "zrangebyscore", "zrank", "zadd", 39 | "zrem", "zincrby", -- Sorted Sets 40 | "auth", "eval", "expire", "script", 41 | "sort" -- Others 42 | } 43 | 44 | 45 | local sub_commands = { 46 | "subscribe", "psubscribe" 47 | } 48 | 49 | 50 | local unsub_commands = { 51 | "unsubscribe", "punsubscribe" 52 | } 53 | 54 | 55 | local mt = { __index = _M } 56 | 57 | 58 | function _M.new(self) 59 | local sock, err = tcp() 60 | if not sock then 61 | return nil, err 62 | end 63 | return setmetatable({ _sock = sock, _subscribed = false }, mt) 64 | end 65 | 66 | 67 | function _M.set_timeout(self, timeout) 68 | local sock = rawget(self, "_sock") 69 | if not sock then 70 | return nil, "not initialized" 71 | end 72 | 73 | return sock:settimeout(timeout) 74 | end 75 | 76 | 77 | function _M.connect(self, ...) 78 | local sock = rawget(self, "_sock") 79 | if not sock then 80 | return nil, "not initialized" 81 | end 82 | 83 | self._subscribed = false 84 | 85 | return sock:connect(...) 86 | end 87 | 88 | 89 | function _M.set_keepalive(self, ...) 90 | local sock = rawget(self, "_sock") 91 | if not sock then 92 | return nil, "not initialized" 93 | end 94 | 95 | if rawget(self, "_subscribed") then 96 | return nil, "subscribed state" 97 | end 98 | 99 | return sock:setkeepalive(...) 
100 | end 101 | 102 | 103 | function _M.get_reused_times(self) 104 | local sock = rawget(self, "_sock") 105 | if not sock then 106 | return nil, "not initialized" 107 | end 108 | 109 | return sock:getreusedtimes() 110 | end 111 | 112 | 113 | local function close(self) 114 | local sock = rawget(self, "_sock") 115 | if not sock then 116 | return nil, "not initialized" 117 | end 118 | 119 | return sock:close() 120 | end 121 | _M.close = close 122 | 123 | 124 | local function _read_reply(self, sock) 125 | local line, err = sock:receive() 126 | if not line then 127 | if err == "timeout" and not rawget(self, "_subscribed") then 128 | sock:close() 129 | end 130 | return nil, err 131 | end 132 | 133 | local prefix = byte(line) 134 | 135 | if prefix == 36 then -- char '$' 136 | -- print("bulk reply") 137 | 138 | local size = tonumber(sub(line, 2)) 139 | if size < 0 then 140 | return null 141 | end 142 | 143 | local data, err = sock:receive(size) 144 | if not data then 145 | if err == "timeout" then 146 | sock:close() 147 | end 148 | return nil, err 149 | end 150 | 151 | local dummy, err = sock:receive(2) -- ignore CRLF 152 | if not dummy then 153 | return nil, err 154 | end 155 | 156 | return data 157 | 158 | elseif prefix == 43 then -- char '+' 159 | -- print("status reply") 160 | 161 | return sub(line, 2) 162 | 163 | elseif prefix == 42 then -- char '*' 164 | local n = tonumber(sub(line, 2)) 165 | 166 | -- print("multi-bulk reply: ", n) 167 | if n < 0 then 168 | return null 169 | end 170 | 171 | local vals = new_tab(n, 0) 172 | local nvals = 0 173 | for i = 1, n do 174 | local res, err = _read_reply(self, sock) 175 | if res then 176 | nvals = nvals + 1 177 | vals[nvals] = res 178 | 179 | elseif res == nil then 180 | return nil, err 181 | 182 | else 183 | -- be a valid redis error value 184 | nvals = nvals + 1 185 | vals[nvals] = {false, err} 186 | end 187 | end 188 | 189 | return vals 190 | 191 | elseif prefix == 58 then -- char ':' 192 | -- print("integer reply") 193 | return tonumber(sub(line, 2)) 194 | 195 | elseif prefix == 45 then -- char '-' 196 | -- print("error reply: ", n) 197 | 198 | return false, sub(line, 2) 199 | 200 | else 201 | -- when `line` is an empty string, `prefix` will be equal to nil. 202 | return nil, "unknown prefix: \"" .. tostring(prefix) .. "\"" 203 | end 204 | end 205 | 206 | 207 | local function _gen_req(args) 208 | local nargs = #args 209 | 210 | local req = new_tab(nargs * 5 + 1, 0) 211 | req[1] = "*" .. nargs .. "\r\n" 212 | local nbits = 2 213 | 214 | for i = 1, nargs do 215 | local arg = args[i] 216 | if type(arg) ~= "string" then 217 | arg = tostring(arg) 218 | end 219 | 220 | req[nbits] = "$" 221 | req[nbits + 1] = #arg 222 | req[nbits + 2] = "\r\n" 223 | req[nbits + 3] = arg 224 | req[nbits + 4] = "\r\n" 225 | 226 | nbits = nbits + 5 227 | end 228 | 229 | -- it is much faster to do string concatenation on the C land 230 | -- in real world (large number of strings in the Lua VM) 231 | return req 232 | end 233 | 234 | 235 | local function _do_cmd(self, ...) 
236 | local args = {...} 237 | 238 | local sock = rawget(self, "_sock") 239 | if not sock then 240 | return nil, "not initialized" 241 | end 242 | 243 | local req = _gen_req(args) 244 | 245 | local reqs = rawget(self, "_reqs") 246 | if reqs then 247 | reqs[#reqs + 1] = req 248 | return 249 | end 250 | 251 | -- print("request: ", table.concat(req)) 252 | 253 | local bytes, err = sock:send(req) 254 | if not bytes then 255 | return nil, err 256 | end 257 | 258 | return _read_reply(self, sock) 259 | end 260 | 261 | 262 | local function _check_subscribed(self, res) 263 | if type(res) == "table" 264 | and (res[1] == "unsubscribe" or res[1] == "punsubscribe") 265 | and res[3] == 0 266 | then 267 | self._subscribed = false 268 | end 269 | end 270 | 271 | 272 | function _M.read_reply(self) 273 | local sock = rawget(self, "_sock") 274 | if not sock then 275 | return nil, "not initialized" 276 | end 277 | 278 | if not rawget(self, "_subscribed") then 279 | return nil, "not subscribed" 280 | end 281 | 282 | local res, err = _read_reply(self, sock) 283 | _check_subscribed(self, res) 284 | 285 | return res, err 286 | end 287 | 288 | 289 | for i = 1, #common_cmds do 290 | local cmd = common_cmds[i] 291 | 292 | _M[cmd] = 293 | function (self, ...) 294 | return _do_cmd(self, cmd, ...) 295 | end 296 | end 297 | 298 | 299 | for i = 1, #sub_commands do 300 | local cmd = sub_commands[i] 301 | 302 | _M[cmd] = 303 | function (self, ...) 304 | self._subscribed = true 305 | return _do_cmd(self, cmd, ...) 306 | end 307 | end 308 | 309 | 310 | for i = 1, #unsub_commands do 311 | local cmd = unsub_commands[i] 312 | 313 | _M[cmd] = 314 | function (self, ...) 315 | local res, err = _do_cmd(self, cmd, ...) 316 | _check_subscribed(self, res) 317 | return res, err 318 | end 319 | end 320 | 321 | 322 | function _M.hmset(self, hashname, ...) 323 | if select('#', ...) == 1 then 324 | local t = select(1, ...) 325 | 326 | local n = 0 327 | for k, v in pairs(t) do 328 | n = n + 2 329 | end 330 | 331 | local array = new_tab(n, 0) 332 | 333 | local i = 0 334 | for k, v in pairs(t) do 335 | array[i + 1] = k 336 | array[i + 2] = v 337 | i = i + 2 338 | end 339 | -- print("key", hashname) 340 | return _do_cmd(self, "hmset", hashname, unpack(array)) 341 | end 342 | 343 | -- backwards compatibility 344 | return _do_cmd(self, "hmset", hashname, ...) 
345 | end 346 | 347 | 348 | function _M.init_pipeline(self, n) 349 | self._reqs = new_tab(n or 4, 0) 350 | end 351 | 352 | 353 | function _M.cancel_pipeline(self) 354 | self._reqs = nil 355 | end 356 | 357 | 358 | function _M.commit_pipeline(self) 359 | local reqs = rawget(self, "_reqs") 360 | if not reqs then 361 | return nil, "no pipeline" 362 | end 363 | 364 | self._reqs = nil 365 | 366 | local sock = rawget(self, "_sock") 367 | if not sock then 368 | return nil, "not initialized" 369 | end 370 | 371 | local bytes, err = sock:send(reqs) 372 | if not bytes then 373 | return nil, err 374 | end 375 | 376 | local nvals = 0 377 | local nreqs = #reqs 378 | local vals = new_tab(nreqs, 0) 379 | for i = 1, nreqs do 380 | local res, err = _read_reply(self, sock) 381 | if res then 382 | nvals = nvals + 1 383 | vals[nvals] = res 384 | 385 | elseif res == nil then 386 | if err == "timeout" then 387 | close(self) 388 | end 389 | return nil, err 390 | 391 | else 392 | -- be a valid redis error value 393 | nvals = nvals + 1 394 | vals[nvals] = {false, err} 395 | end 396 | end 397 | 398 | return vals 399 | end 400 | 401 | 402 | function _M.array_to_hash(self, t) 403 | local n = #t 404 | -- print("n = ", n) 405 | local h = new_tab(0, n / 2) 406 | for i = 1, n, 2 do 407 | h[t[i]] = t[i + 1] 408 | end 409 | return h 410 | end 411 | 412 | 413 | -- this method is deperate since we already do lazy method generation. 414 | function _M.add_commands(...) 415 | local cmds = {...} 416 | for i = 1, #cmds do 417 | local cmd = cmds[i] 418 | _M[cmd] = 419 | function (self, ...) 420 | return _do_cmd(self, cmd, ...) 421 | end 422 | end 423 | end 424 | 425 | 426 | setmetatable(_M, {__index = function(self, cmd) 427 | local method = 428 | function (self, ...) 429 | return _do_cmd(self, cmd, ...) 430 | end 431 | 432 | -- cache the lazily generated method in our 433 | -- module table 434 | _M[cmd] = method 435 | return method 436 | end}) 437 | 438 | 439 | return _M 440 | -------------------------------------------------------------------------------- /netdata/apps_groups.conf: -------------------------------------------------------------------------------- 1 | # 2 | # apps.plugin process grouping 3 | # 4 | # The apps.plugin displays charts with information about the processes running. 5 | # This config allows grouping processes together, so that several processes 6 | # will be reported as one. 7 | # 8 | # Only groups in this file are reported. All other processes will be reported 9 | # as 'other'. 10 | # 11 | # For each process given, its whole process tree will be grouped, not just 12 | # the process matched. The plugin will include both parents and childs. 13 | # 14 | # The format is: 15 | # 16 | # group: process1 process2 process3 ... 17 | # 18 | # Each group can be given multiple times, to add more processes to it. 19 | # 20 | # The process names are the ones returned by: 21 | # 22 | # - ps -e or /proc/PID/stat 23 | # - in case of substring mode (see below): /proc/PID/cmdline 24 | # 25 | # To add process names with spaces, enclose them in quotes (single or double) 26 | # example: 'Plex Media Serv' "my other process". 
27 | # 28 | # Wildcard support: 29 | # You can add an asterisk (*) at the beginning and/or the end of a process: 30 | # 31 | # *name suffix mode: will search for processes ending with 'name' 32 | # (/proc/PID/stat) 33 | # 34 | # name* prefix mode: will search for processes beginning with 'name' 35 | # (/proc/PID/stat) 36 | # 37 | # *name* substring mode: will search for 'name' in the whole command line 38 | # (/proc/PID/cmdline) 39 | # 40 | # If you enter even just one *name* (substring), apps.plugin will process 41 | # /proc/PID/cmdline for all processes, just once (when they are first seen). 42 | # 43 | # To add processes with single quotes, enclose them in double quotes 44 | # example: "process with this ' single quote" 45 | # 46 | # To add processes with double quotes, enclose them in single quotes: 47 | # example: 'process with this " double quote' 48 | # 49 | # If a group or process name starts with a -, the dimension will be hidden 50 | # (cpu chart only). 51 | # 52 | # If a process starts with a +, debugging will be enabled for it 53 | # (debugging produces a lot of output - do not enable it in production systems) 54 | # 55 | # You can add any number of groups you like. Only the ones found running will 56 | # affect the charts generated. However, producing charts with hundreds of 57 | # dimensions may slow down your web browser. 58 | # 59 | # The order of the entries in this list is important: the first that matches 60 | # a process is used, so put important ones at the top. Processes not matched 61 | # by any row, will inherit it from their parents or children. 62 | # 63 | # The order also controls the order of the dimensions on the generated charts 64 | # (although applications started after apps.plugin is started, will be appended 65 | # to the existing list of dimensions the netdata daemon maintains). 
66 | 67 | # ----------------------------------------------------------------------------- 68 | # NETDATA processes accounting 69 | 70 | # netdata main process 71 | netdata: netdata 72 | 73 | # netdata known plugins 74 | # plugins not defined here will be accumulated in netdata, above 75 | apps.plugin: apps.plugin 76 | freeipmi.plugin: freeipmi.plugin 77 | charts.d.plugin: *charts.d.plugin* 78 | node.d.plugin: *node.d.plugin* 79 | python.d.plugin: *python.d.plugin* 80 | tc-qos-helper: *tc-qos-helper.sh* 81 | fping: fping 82 | 83 | # ----------------------------------------------------------------------------- 84 | # authentication/authorization related servers 85 | 86 | auth: radius* openldap* ldap* 87 | fail2ban: fail2ban* 88 | 89 | # ----------------------------------------------------------------------------- 90 | # web/ftp servers 91 | 92 | apache: apache* 93 | httpd: httpd 94 | lighttpd: lighttpd 95 | nginx: nginx* 96 | proxy: squid* c-icap squidGuard varnish* 97 | php: php* 98 | ftpd: proftpd in.tftpd vsftpd 99 | uwsgi: uwsgi 100 | unicorn: *unicorn* 101 | puma: *puma* 102 | thin: thin 103 | 104 | # ----------------------------------------------------------------------------- 105 | # go 106 | 107 | go: go* 108 | 109 | # ----------------------------------------------------------------------------- 110 | # node 111 | 112 | node: node 113 | 114 | # ----------------------------------------------------------------------------- 115 | # database servers 116 | 117 | sql: mariad* postgres* postmaster* oracle_* ora_* 118 | mysqld: mysqld* 119 | nosql: mongod redis* memcached *couchdb* 120 | timedb: prometheus *carbon-cache.py* *carbon-aggregator.py* *graphite/manage.py* *net.opentsdb.tools.TSDMain* 121 | 122 | # ----------------------------------------------------------------------------- 123 | # email servers 124 | 125 | email: dovecot imapd pop3d amavis* master zmstat* zmmailboxdmgr qmgr oqmgr 126 | 127 | # ----------------------------------------------------------------------------- 128 | # network, routing, VPN 129 | 130 | ppp: ppp* 131 | vpn: openvpn pptp* cjdroute gvpe tincd 132 | wifi: hostapd wpa_supplicant 133 | routing: ospfd* ospf6d* bgpd isisd ripd ripngd pimd ldpd zebra vtysh bird* 134 | 135 | # ----------------------------------------------------------------------------- 136 | # high availability and balancers 137 | 138 | camo: *camo* 139 | balancer: ipvs_* haproxy 140 | ha: corosync hs_logd ha_logd stonithd pacemakerd lrmd crmd 141 | 142 | # ----------------------------------------------------------------------------- 143 | # telephony 144 | 145 | pbx: asterisk safe_asterisk *vicidial* 146 | sip: opensips* stund 147 | 148 | # ----------------------------------------------------------------------------- 149 | # chat 150 | 151 | chat: irssi *vines* *prosody* murmurd 152 | 153 | # ----------------------------------------------------------------------------- 154 | # monitoring 155 | 156 | logs: ulogd* syslog* rsyslog* logrotate systemd-journald 157 | nms: snmpd vnstatd smokeping zabbix* monit munin* mon openhpid watchdog tailon nrpe 158 | splunk: splunkd 159 | azure: mdsd *waagent* *omiserver* *omiagent* hv_kvp_daemon hv_vss_daemon 160 | 161 | # ----------------------------------------------------------------------------- 162 | # file systems and file servers 163 | 164 | samba: smbd nmbd winbindd 165 | nfs: rpcbind rpc.* nfs* 166 | zfs: spl_* z_* txg_* zil_* arc_* l2arc* 167 | btrfs: btrfs* 168 | iscsi: iscsid iscsi_eh 169 | 170 | # 
----------------------------------------------------------------------------- 171 | # containers & virtual machines 172 | 173 | containers: lxc* docker* 174 | VMs: vbox* VBox* qemu* 175 | 176 | # ----------------------------------------------------------------------------- 177 | # ssh servers and clients 178 | 179 | ssh: ssh* scp 180 | 181 | # ----------------------------------------------------------------------------- 182 | # print servers and clients 183 | 184 | print: cups* lpd lpq 185 | 186 | # ----------------------------------------------------------------------------- 187 | # time servers and clients 188 | 189 | time: ntp* systemd-timesyncd 190 | 191 | # ----------------------------------------------------------------------------- 192 | # dhcp servers and clients 193 | 194 | dhcp: *dhcp* 195 | 196 | # ----------------------------------------------------------------------------- 197 | # name servers and clients 198 | 199 | named: named rncd dig 200 | 201 | # ----------------------------------------------------------------------------- 202 | # installation / compilation / debugging 203 | 204 | build: cc1 cc1plus as gcc* cppcheck ld make cmake automake autoconf autoreconf 205 | build: git gdb valgrind* 206 | 207 | # ----------------------------------------------------------------------------- 208 | # antivirus 209 | 210 | antivirus: clam* *clam 211 | 212 | # ----------------------------------------------------------------------------- 213 | # torrent clients 214 | 215 | torrents: *deluge* transmission* *SickBeard* *CouchPotato* *rtorrent* 216 | 217 | # ----------------------------------------------------------------------------- 218 | # backup servers and clients 219 | 220 | backup: rsync bacula* 221 | 222 | # ----------------------------------------------------------------------------- 223 | # cron 224 | 225 | cron: cron* atd anacron systemd-cron* 226 | 227 | # ----------------------------------------------------------------------------- 228 | # UPS 229 | 230 | ups: upsmon upsd */nut/* 231 | 232 | # ----------------------------------------------------------------------------- 233 | # media players, servers, clients 234 | 235 | media: mplayer vlc xine mediatomb omxplayer* kodi* xbmc* mediacenter eventlircd 236 | media: mpd minidlnad mt-daapd avahi* Plex* 237 | 238 | # ----------------------------------------------------------------------------- 239 | # java applications 240 | 241 | hdfsdatanode: *org.apache.hadoop.hdfs.server.datanode.DataNode* 242 | hdfsnamenode: *org.apache.hadoop.hdfs.server.namenode.NameNode* 243 | hdfsjournalnode: *org.apache.hadoop.hdfs.qjournal.server.JournalNode* 244 | hdfszkfc: *org.apache.hadoop.hdfs.tools.DFSZKFailoverController* 245 | 246 | yarnnode: *org.apache.hadoop.yarn.server.nodemanager.NodeManager* 247 | yarnmgr: *org.apache.hadoop.yarn.server.resourcemanager.ResourceManager* 248 | yarnproxy: *org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer* 249 | 250 | sparkworker: *org.apache.spark.deploy.worker.Worker* 251 | sparkmaster: *org.apache.spark.deploy.master.Master* 252 | 253 | hbaseregion: *org.apache.hadoop.hbase.regionserver.HRegionServer* 254 | hbaserest: *org.apache.hadoop.hbase.rest.RESTServer* 255 | hbasethrift: *org.apache.hadoop.hbase.thrift.ThriftServer* 256 | hbasemaster: *org.apache.hadoop.hbase.master.HMaster* 257 | 258 | zookeeper: *org.apache.zookeeper.server.quorum.QuorumPeerMain* 259 | 260 | hive2: *org.apache.hive.service.server.HiveServer2* 261 | hivemetastore: *org.apache.hadoop.hive.metastore.HiveMetaStore* 262 | 263 | 
solr: *solr.install.dir* 264 | 265 | airflow: *airflow* 266 | 267 | # ----------------------------------------------------------------------------- 268 | # X 269 | 270 | X: X Xorg xinit lightdm xdm pulseaudio gkrellm xfwm4 xfdesktop xfce* Thunar 271 | X: xfsettingsd xfconfd gnome-* gdm gconf* dconf* xfconf* *gvfs gvfs* kdm slim 272 | X: evolution-* firefox chromium opera vivaldi-bin epiphany WebKit* 273 | 274 | # ----------------------------------------------------------------------------- 275 | # Kernel / System 276 | 277 | ksmd: ksmd 278 | 279 | system: systemd* udisks* udevd* *udevd connmand ipv6_addrconf dbus-* rtkit* 280 | system: inetd xinetd mdadm polkitd acpid uuidd packagekitd upowerd colord 281 | system: accounts-daemon rngd haveged 282 | 283 | kernel: kthreadd kauditd lockd khelper kdevtmpfs khungtaskd rpciod 284 | kernel: fsnotify_mark kthrotld deferwq scsi_* 285 | 286 | # ----------------------------------------------------------------------------- 287 | # other application servers 288 | 289 | kafka: *kafka.Kafka* 290 | 291 | rabbitmq: *rabbitmq* 292 | 293 | sidekiq: *sidekiq* 294 | java: java 295 | ipfs: ipfs 296 | -------------------------------------------------------------------------------- /redis/dump.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # dir in redis.conf. `sudo systemctl restart redis` may be necessary 4 | # -rw-r--r-- 1 redis redis 26 Oct 21 11:09 dump.rdb 5 | # echo password | sudo -S cp /var/lib/redis/dump.rdb "$HOME/data/`date +"%Y%m%d%H%M%S"`_redis_dump.rdb" 6 | sudo cp /var/lib/redis/dump.rdb "$HOME/redis/`date +"%Y%m%d%H%M%S"`_redis_dump.rdb" 7 | -------------------------------------------------------------------------------- /redis/restore.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # dir in redis.conf. `sudo systemctl restart redis` may be necessary 4 | # -rw-r--r-- 1 redis redis 26 Oct 21 11:09 dump.rdb 5 | if [ $# -ne 1 ]; then 6 | echo "please specify redis dump file" 7 | echo "example: ./redis_restore redis/20171021114654_redis_dump.rdb" 8 | exit 1 9 | fi 10 | 11 | # echo password | sudo -S systemctl stop redis 12 | # echo password | sudo -S cp $1 /var/lib/redis/dump.rdb 13 | # echo password | sudo -S systemctl start redis 14 | sudo systemctl stop redis 15 | sudo cp $1 /var/lib/redis/dump.rdb 16 | sudo systemctl start redis 17 | -------------------------------------------------------------------------------- /scripts/check_mysql.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # DB_USER=root 4 | # DB_PW=root 5 | # DB_NAME= 6 | 7 | if [ "$DB_PW" = "" ] || [ "$DB_USER" = "" ] || [ "$DB_NAME" = "" ] ; then 8 | echo "Fatal: DB_PW or DB_USER or DB_NAME is not set." 
9 | echo "" 10 | echo "Usage: DB_USER=root DB_PW=root DB_NAME=isuketch check_mysql.sh" 11 | exit 1 12 | fi 13 | 14 | echo -e "\n\n## Databases" 15 | mysql -u$DB_USER -p$DB_PW -e "SHOW DATABASES"; 16 | 17 | 18 | echo -e "\n\n## Tables" 19 | for i in $(mysql -u$DB_USER -p$DB_PW -D$DB_NAME -e 'SHOW TABLES' | grep -v "Tables_in" | awk '{print $1}') 20 | do 21 | echo -e "\n\nTable: $i" 22 | mysql -u$DB_USER -p$DB_PW -D$DB_NAME -e "DESC $i" 23 | done 24 | 25 | 26 | echo -e "\n\n## Size of each table" 27 | mysql -u $DB_USER -p$DB_PW $DB_NAME -e "SELECT table_name, engine, table_rows, avg_row_length, floor((data_length+index_length)/1024/1024) as allMB, floor((data_length)/1024/1024) as dMB, floor((index_length)/1024/1024) as iMB FROM information_schema.tables WHERE table_schema=database() ORDER BY (data_length+index_length) DESC;" 28 | 29 | 30 | echo -e "\n\n## Indexes" 31 | for i in $(mysql -u$DB_USER -p$DB_PW -D$DB_NAME -e 'SHOW TABLES' | grep -v "Tables_in" | awk '{print $1}') 32 | do 33 | echo -e "\n\nTable: $i" 34 | mysql -u$DB_USER -p$DB_PW -D$DB_NAME -e "SHOW INDEX FROM $i" 35 | done 36 | -------------------------------------------------------------------------------- /scripts/deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | "$HOME/scripts/deploy_nginx.sh" 4 | "$HOME/scripts/deploy_app.sh" 5 | -------------------------------------------------------------------------------- /scripts/deploy_app.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/scripts/var.txt" 4 | 5 | # NOTE: Comment out because we use go and dep! 6 | # if [ "$1" = "--bundle" ]; then 7 | # echo 'Start bundle install...' 8 | # cd "$HOME/webapp/ruby" 9 | # bundle install 10 | # cd "$HOME" 11 | # echo 'bundle install finished!' 12 | # fi 13 | 14 | echo "Building ${ISUSERVICE}..." 15 | cd "$HOME/isubata/webapp/go" 16 | make 17 | echo "Finished building ${ISUSERVICE}..." 18 | 19 | echo "Restart ${ISUSERVICE}..." 20 | echo "${SUDOPASS}" | sudo -S systemctl restart "${ISUSERVICE}" 21 | echo "Restarted ${ISUSERVICE}!" 22 | -------------------------------------------------------------------------------- /scripts/deploy_nginx.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/scripts/var.txt" 4 | 5 | echo 'Rotate log file...' 6 | "$HOME/scripts/rotate.sh" 7 | echo 'Rotated log file!' 8 | 9 | echo 'Update config file...' 10 | echo "${SUDOPASS}" | sudo -S cp "$HOME/nginx.conf" /etc/nginx/nginx.conf 11 | echo 'Updated config file!' 12 | 13 | echo 'Restart nginx...' 14 | echo "${SUDOPASS}" | sudo -S systemctl restart nginx.service 15 | echo 'Restarted!' 16 | -------------------------------------------------------------------------------- /scripts/deploy_redis.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/scripts/var.txt" 4 | 5 | echo 'Update config file...' 6 | echo "${SUDOPASS}" | sudo -S cp "$HOME/redis.conf" /etc/redis/redis.conf 7 | echo 'Updated config file!' 8 | 9 | echo 'Restart redis...' 10 | echo "${SUDOPASS}" | sudo -S systemctl restart redis 11 | echo 'Restarted!' 12 | -------------------------------------------------------------------------------- /scripts/deploy_service.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | .
"$(pwd)/scripts/var.txt" 4 | 5 | echo "${SUDOPASS}" | sudo -S cp "$HOME/${ISUSERVICE}.service" "/etc/systemd/system/${ISUSERVICE}.service" 6 | echo "${SUDOPASS}" | sudo -S systemctl daemon-reload 7 | -------------------------------------------------------------------------------- /scripts/error_log.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/scripts/var.txt" 4 | 5 | echo "Error Log of nginx..." 6 | echo "${SUDOPASS}" | sudo -S cat /var/log/nginx/error.log 7 | -------------------------------------------------------------------------------- /scripts/log_app.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/scripts/var.txt" 4 | 5 | echo "Log go app..." 6 | echo "${SUDOPASS}" | sudo -S journalctl -u "${ISUSERVICE}" 7 | -------------------------------------------------------------------------------- /scripts/log_nginx.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/scripts/var.txt" 4 | 5 | echo "Log nginx..." 6 | echo "${SUDOPASS}" | sudo -S journalctl -u nginx.service 7 | -------------------------------------------------------------------------------- /scripts/rotate.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . "$(pwd)/scripts/var.txt" 4 | 5 | echo "${SUDOPASS}" | sudo -S mv "/var/log/nginx/access.log" "/var/log/nginx/`date +"%Y%m%d%H%M%S"`_access.log" 6 | echo "${SUDOPASS}" | sudo -S systemctl restart nginx 7 | -------------------------------------------------------------------------------- /scripts/rotate_and_cp.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | . 
"$(pwd)/scripts/var.txt" 4 | 5 | LOGFILE="`date +"%Y%m%d%H%M%S"`_access.log" 6 | echo "${SUDOPASS}" | sudo -S mv "/var/log/nginx/access.log" "/var/log/nginx/$LOGFILE" 7 | echo "${SUDOPASS}" | sudo -S cp "/var/log/nginx/$LOGFILE" "log/$LOGFILE" 8 | echo "${SUDOPASS}" | sudo -S chown ${ISUUSER}:${ISUUSER} "log/$LOGFILE" 9 | echo "${SUDOPASS}" | sudo -S systemctl restart nginx 10 | -------------------------------------------------------------------------------- /scripts/var.txt: -------------------------------------------------------------------------------- 1 | ISUUSER=isucon 2 | SUDOPASS=isucon 3 | ISUSERVICE=isu.service 4 | -------------------------------------------------------------------------------- /setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # For dev 4 | cp ./.emacs.el "$HOME/.emacs.el" 5 | cp ./.gitconfig "$HOME/.gitconfig" 6 | cp ./.gitignore "$HOME/.gitignore" 7 | cp ./tmpl/authorized_keys "$HOME/.ssh/authorized_keys" 8 | chmod 600 "$HOME/.ssh/authorized_keys" 9 | 10 | # For dev 11 | cp ./tmpl/README.md "$HOME/README.md" 12 | cp ./kataribe.toml "$HOME/kataribe.toml" 13 | 14 | # Shell Script 15 | cp -rf `pwd`/scripts "$HOME" 16 | 17 | # Shell Script for all hosts 18 | cp -rf `pwd`/all_scripts "$HOME" 19 | 20 | # For lua 21 | cp -rf `pwd`/lua "$HOME" 22 | 23 | mkdir "$HOME/redis" 24 | cp ./redis/dump.sh "$HOME/redis/dump.sh" 25 | cp ./redis/restore.sh "$HOME/redis/restore.sh" 26 | 27 | mkdir "$HOME/log" # Used in rotate_and_cp.sh 28 | 29 | ./setup_netdata.sh 30 | -------------------------------------------------------------------------------- /setup_netdata.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Install netdata 4 | bash <(curl -Ss https://my-netdata.io/kickstart.sh) all 5 | sudo cp ./netdata/netdata.conf /etc/netdata/netdata.conf 6 | sudo cp ./netdata/apps_groups.conf /etc/netdata/apps_groups.conf 7 | sudo systemctl restart netdata 8 | -------------------------------------------------------------------------------- /tmpl/README.md: -------------------------------------------------------------------------------- 1 | ## Deploy 2 | Running `./all_scripts/deploy.sh` restarts the go app, restarts nginx, and so on, so the application runs with the latest state of the branch. 3 | 4 | ``` 5 | ./all_scripts/deploy.sh 6 | ``` 7 | 8 | ## Logs for debugging 9 | Running `./all_scripts/log.sh` shows the go app and nginx logs. It is useful for checking whether a typo or the like is causing errors. 10 | 11 | ``` 12 | ./all_scripts/log.sh 13 | ``` 14 | 15 | ## Checking branch status 16 | 17 | ``` 18 | ./all_scripts/branch_status.sh 19 | ``` 20 | 21 | ## Debug flow for a branch 22 | 23 | 1. Deploy with `./all_scripts/deploy.sh [target branch]` 24 | 2. Check the behavior in a browser and confirm with `./all_scripts/log.sh` that no errors appear 25 | 26 | ## Flow for running the benchmark 27 | 28 | 1. Deploy with `./all_scripts/deploy.sh [target branch]` 29 | 2. Check the behavior in a browser and confirm with `./all_scripts/log.sh` that no errors appear 30 | 3. Start the benchmark run 31 |
4. Run `./all_scripts/kataribe_log.sh` to put the log files under git control and pull them to your local machine 32 | 33 | -------------------------------------------------------------------------------- /tmpl/authorized_keys: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDWi+LQO/d953c+sn9ZjCDOSBomP+lzP+p0H3kupOt72H7d++TVHAIWGHKB4UBq/+fcT+Bd8k++ttNDOCLnqJpyfjejBULTkFZMdZJGc6XQ5/l9zGia/pqu4vgWxn7PmFfutE/WXMiNP2hCm7kmAVJEFB93/7+EtnGD82RwLYOBld2NyLaElaYzyl2M+aSuzspW3VSjtNRZ7Px71Wt/C/xkDO/LUENEI+O2kyKkAgPsO7HDr/69+hcPODPcET3L7sxD1m2gdHdHUAXN4Hzkzi588i0896as3vzZ46tE+F1yE1+VR/laQoP4nvzrx+zTD7lSFfhElPlSYeJn/rctkyav minami@minami-no-MacBook-Air.local 2 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDOpg6941orvvnSB5KOXbCHwz+UkxWKAkuJsVOH83B+UvP47W/7IMC+AG1VFs3FY5skSG4X/amEfhQBHTVxx2K6uGtBfJA6YdOJ4A7Ny/u16/kMQHP0jUR8t/5n9WUWSLmcN4DwQqz1NHOQzIHjFBRFB7syLTV7HxA3cdei9IxOopYh5qD/ovdCQaeykXxPljIiMg/oLb2yaGUMK1jm5kboAiHf0+c0oh9N/4vu3ndX8/8+V6dSJIMGBPg8HEWSp7SQEjPI8Tro+3JxUe9QLlnkhYiwFFQJX+fXeOpriMNzUHWD5CR9qTL3RNcp9Fpe6tM9N/6FI2MKAWVwhspszRit kubonagarei@gmail.com 3 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMDGI5gfDkwr7SXosKAdAuyaYDKvg6CU4WtDuprZfothXY9SMgdy5rBs87Z0kO0UBOn8mvGv88hUmqMWmVGi9Y/l21eT1TcG0gXmAloYc3/vya3azzJyUnnaXSVekF2fG96YMs5B/xzzT9W6RM85l9uNcz0V5PxylXNBiS7f+Pqtr8M/4kAgiCp11fw3SRdgIVm65bHhAZAt4wYv8TIXersHhxFoXP7WiiYldr5g+UZxqEo0Bc67otSiBFkkzvB9mLi+yhpXf8cNAkDkJ9HadR5UmpnZY9W3u7S0U2Vh2VAEvswH9ETKEsld04qExSx8V1TDFyZT45kvZCplJN8Va3 kubochourei@kubochoureinoMacBook-Pro.local 4 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtTayiMkCdE7sUhlfQepM7biHTEfa+D/YVGaI6I2P8W3CPom3hp+oo5xUDlh988kDemRgf1QMDVuPjZ9l1VkWSRvV6dh8LZYyGOzWYQP+XV47lqs65b7F2Qnh2AqmUF/S+0GkAzIPXM8resVI2rHqxSDerOaL6UsXiwzSKw1bP3rQtW7xvYtm+G9VP3Ku4I98vArlZzpxZ6XqN50L8kgKjH5s3ekR0lcGgAHlw6VnZYVEFbRY5Dmkw43GJ92XcNE1HSwlB3xfbC9kgh91J8j1tCo7fJ4jCWu4bryioiIzksY5hSFm703iZyAdxh+YGneQpHfHDZBUDIkL0YTIhu00/ ngtknt@me.com 5 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3gua2y0sCT6Ra88C1O9c1boQPHZd6VuXIZJqu0pIGvktsjvurRn0sp39Wcsd9FZsxSPcP7z/V8iJGFg+8gmlYvd0agMmTqKGs1Vi3PYCINtMkiBqMoWq8EnArbP+VdRKPdxaY6IxvlNpegqYugVxOP6UZ7/mjzBUODkBDV5zNqFwwcR1NWbqsT1XtG2CfK9TvwJo2NM8B/DbRz2fHgS3N2GmHM2XTh6m/Fl67nBhF93bfPi78bKhnc8oyS/QlNGSZTm4Qwb/Mb9GeYLceqvp1ZEGoojrcd9ah+CyXeXVEyagbCW+oSec1k/AYG0MtKoqGF9NTuoOK6k/9bF9fvGqV ngtknt@me.com 6 | --------------------------------------------------------------------------------
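For reference, a minimal sketch of how the collected access logs might be analyzed with the shipped kataribe.toml, assuming the kataribe binary (https://github.com/matsuu/kataribe, not part of this repository) is installed locally and the rotated logs have been pulled into `log/` by `all_scripts/kataribe_log.sh`:

```
# Hypothetical invocation: concatenate the rotated nginx access logs and
# aggregate them with kataribe.toml (which expects the "combined + $request_time"
# format described in its comments).
cat log/*_access.log | kataribe -f kataribe.toml
```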